{ "noun": { "question": [ "plateau", "Clark value", "Asthenosphere", "lithosphere", "syncline ", "anticline", "Normal fault", "Reverse fault", "Translational fault", "plate tectonics ", "Joint", "cleavage ", "Cut-in-fill terrace ", "Traceable erosion", "Integrated contact", "Parallel unconformity", "Angular unconformity", "Sedimentary contact", "Intrusive contact", "groundwater", "diving", "Geothermal heating rate", "Weathering crust", "Weathering", "Physical weathering", "Metamorphism", "Karst", "magnitude", "Isoseismal line", "earthquake intensity", "glacier", "swamp", "ocean current", "Turbid current", "coastline", "rock", "Magmatic rock", "sedimentary rock", "Metamorphic rock", "sedimentary rock", "Weathering", "Physical weathering and chemical weathering", "Sedimentary differentiation", "Compaction", "Cementation", "Metasomatism", "Recrystallization", "bedding", "geology", "continental margin", "lithosphere", "Biofossil", "Sedimentation", "Magma", "anticline", "earthquake", "Seafloor spreading", "petroleum", "Geoscience", "lithosphere", "Cambrian", "Intrusion", "Stratum occurrence", "Normal fault", "earthquake", "plate tectonics ", "petroleum", "landslide", "rock", "Fossil", "Earth science", "continental margin", "lithosphere", "Biofossil", "Sedimentation", "marble", "Thrust fault", "earthquake", "Seafloor spreading", "petroleum", "Earth science", "cleavage ", "mantle", "Archaean", "Oxbow lake", "Delta", "basalt", "Yanshan Movement", "syncline ", "natural gas", "continental margin", "Guteng (Teng) Fort Noodles", "Karstification", "Aeolian loess", "Andesite", "United paleocontinent", "Earth resources", "Reverse fault", "Ediacara fauna", "Small shell fauna", "Fingerfacies fossil", "sedimentary facies", "phase transition", "Walther's Law ", "Vertical accumulation", "Original levelness principle", "Stratum overlap principle", "Lateral accumulation", "Overlap", "Retrogradation", "Retrogression", "Progradation", "High-water system tract", "Cyclic sedimentation", "Universal principle of chronology", "Stratigraphic structure", "rock stratum", "stratum", "Stratigraphic division", "Stratigraphic correlation", "Lithostratigraphic unit", "group", "group", "paragraph", "layer", "Chronostratigraphic unit", "Yu", "circles", "system", "System", "rank", "Time band", "Biostratigraphic unit", "Extended zone", "Pinnacle zone", "Composite band", "Septum", "Spectral frenum", "Dumb layer", "Layer type", "Unit layer type", "Boundary layer type", "Diachronic", "Sedimentary assemblage", "Compensation basin", "Uncompensated basin", "Hypercompensated basin", "Foreland basin", "Polar shift curve", "Biota", "Wallace line", "Wilson cycle", "Caozhuang rock series", "Fuping Movement", "Luliang Movement", "Jinning movement ", "Jixian Group", "Nantuo Formation", "Doushantuo Formation", "Lantern shadow group", "Caledonian movement", "Mantou Group", "Longmaxi Formation", "Angaran floral province ", "Gondwana flora province ", "Cathaysian floral province ", "European and American flora", "Hercynian movement ", "Devonian Xiangzhou type", "Devonian Nandan type", "Shanxi Formation", "Soochow Movement", "Indosinian Movement", "Yanshan Movement", "Himalayan movement", "Extended group", "Yan'an Formation", "Songhua River Group", "Shanwang Formation", "Red soil of three-toed horse", "Gondwana", "Lauria", "Sandstone with tabular cross-bedding", "Shale rich in terrestrial biological assemblage", "Black shale containing plant fossils", "Bamboo leaf limestone", "Purplish red siltstone or silty mudstone with halite pseudolite", 
"Oolitic hematite ", "Oolitic limestone containing trilobite fragments", "Reef limestone", "Sandstone with impression or groove pattern on the bottom", "Siliceous and argillaceous rocks containing swimming ammonites", "Graptolite shale facies", "Shell phase", "Neotectonic movement", "Weathering crust", "Moho surface", "Standard fossil", "lithosphere", "mineral", "syncline ", "Conversion fault", "Graben", "Horst", "Ground temperature level", "Ground temperature rate", "Laminated structure", "Metamorphism", "Rock-forming mineral", "cleavage ", "sedimentary facies", "Phase marker", "geology", "Discuss the present and the ancient", "lithosphere", "mineral", "Geological process", "Double metamorphic zone", "Weathering crust", "Rock porosity", "groundwater", "glacier", "crystal", "Clark value", "Isomorphism", "sedimentary rock", "Occurrence of magmatic rock", "Metamorphism", "Mechanical deposition differentiation", "Ripple mark", "Pyroclastic rock", "sedimentary facies", "Delta", "Turbid current", "Clear water sedimentation of carbonate rocks", "Stratigraphic sequence law", "Standard fossil", "geological structure ", "Angular unconformity contact relationship", "Synsedimentary fold", "Fault structure", "Petroliferous basin", "Integrated contact", "Moho surface", "Standard fossil", "lithosphere", "mineral", "syncline ", "Stratum strike", "geology", "Discuss the present and the ancient", "lithosphere", "mineral", "Geological process", "Double metamorphic zone", "Weathering crust", "Rock porosity", "groundwater", "glacier", "crystal", "Clark value", "Isomorphism", "sedimentary rock", "Occurrence of magmatic rock", "Metamorphism", "Mechanical deposition differentiation", "Ripple mark", "Pyroclastic rock", "sedimentary facies", "Delta", "Turbid current", "Clear water sedimentation of carbonate rocks", "Stratigraphic sequence law", "Standard fossil", "geological structure ", "Angular unconformity contact relationship", "Synsedimentary fold", "Fault structure", "Petroliferous basin", "geology", "Discuss the present and the ancient", "Geoid", "Island arc and trench", "lithosphere", "mineral", "Geological process", "Endodynamic geological process", "Weathering crust", "Weathering", "Denudation", "Transport action", "Sedimentation", "Mechanical deposition differentiation", "Diagenesis", "crystal", "Clark value", "Isomorphism", "Crystal habit", "rock", "Magmatic rock", "sedimentary rock", "Metamorphic rock", "Magma", "Structure of magmatic rock", "Structure of magmatic rock", "Occurrence of magmatic rock", "primary magma ", "Magmatic differentiation", "Magmatic assimilation and contamination", "Metamorphism", "bedding Structures ", "Ripple mark", "Roundness", "Sphericity", "Component maturity", "Structural maturity", "Pyroclastic rock", "Internal detritus", "Angular unconformity", "fault", "Fingerfacies fossil", "anticline", "sedimentary facies", "mineral", "Inverted fold", "Stratigraphic dip", "Angular unconformity", "fault", "Molasse construction", "anticline", "sedimentary facies", "mineral", "Inverted fold", "Stratigraphic dip", "geology", "Mountain area: the area with an altitude of more than 500 meters and a relative height difference of more than 200 meters is called mountain area, which is further divided into", "Continental margin: the marginal zone connecting the continent and the ocean, which can be divided into the following secondary units", "continental shelf", "Seamount", "Aseismic ridge", "Mid-Ocean Ridge", "biosphere", "Geothermal energy", "Source of geothermal energy", "Geothermal 
gradient", "Geothermal depth", "Moho surface", "Gutenberg discontinuity ", "Crustal", "mantle", "earth 's core", "Silicon aluminum layer", "lithosphere", "Asthenosphere", "rock", "Endodynamic geological process", "Exodynamic geological process", "Geological process", "gravity anomaly", "Geomagnetic anomaly", "Magnetic declination", "Clark value", "abundance", "mineral", "Elemental element", "chemical compound", "crystal", "Amorphous", "Crystal mineral", "Amorphous mineral", "Spatial lattice", "Isomorphism", "colour", "Self-color", "Otherness", "False color", "Streaks", "gloss", "mohs hardness scale ", "cleavage ", "fracture", "Definition of Gemstone", "Rock structure", "Rock structure", "Intrusion", "Molten lava", "batholith", "Magma", "Lava", "Magmatism", "Extrusive action", "volcano", "surrounding rock", "Massive structure", "Weathering", "Residuals", "Wind transport", "soil", "Sheet flow", "Flood current", "Denudation", "Lateral erosion", "Scour action", "Diluvium", "Diluvial fan", "Slope deposit", "Alluvium", "river flat", "groundwater", "Karst", "glacier", "Snowline", "Glacial valley", "Fin ridge", "Angular peak", "wave", "tide", "Erosive flow", "lagoon", "Marine erosion", "lake", "swamp", "Wind erosion", "Magmatic differentiation", "Metamorphism", "Recrystallization", "Metasomatism", "Contact metamorphism", "Pneumatolytic hydrothermal metamorphism ", "Dynamic metamorphism", "Regional metamorphism", "Migmatization", "tectonic movement", "Fault structure", "fold", "syncline ", "anticline", "fault", "Reverse fault", "Parallel unconformity", "attitude of stratum ", "Horst", "Graben", "Stratigraphic unconformity", "Magnitude of earthquake", "earthquake intensity", "Occurrence", "mineral", "mineral products", "deposit", "ore deposit", "grade", "Metallic minerals", "Non-metallic minerals" ], "answer": [ "It refers to the highland with an altitude of more than 500m, flat top, small fluctuation and relatively vast area.", "The mass fraction of elements in the crust.", "That is, the partial melting layer of upper mantle material.", "It is the hard rock part above the asthenosphere.", "The middle is a new stratum, and the older strata appear symmetrically on both sides.", "It is the upward bending of the rock stratum, forming the central part of the older rock stratum, and the rock strata on both sides become new symmetrically in turn.", "The hanging wall is relatively lower and the footwall is relatively higher.", "The hanging wall is relatively rising and the footwall is relatively falling.", "The two walls are relatively staggered along the strike direction of the fault plane.", "The lithospheric plate is floating on the asthenosphere and growing, moving and disappearing. 
"A fracture in rock strata or rock masses along which the rock blocks on both sides of the fracture surface show no significant displacement.", "The property of a mineral to split along certain directions in the crystal (the planes where some chemical bonds in the crystal lattice are weak) when subjected to mechanical force.", "Floods can no longer reach the terrace surface, and bedrock is exposed in the lower part of the terrace scarp.", "Also known as source-ward erosion, it is erosion that lengthens a river toward its source.", "The attitudes of the upper and lower strata are consistent and their ages are continuous.", "If the attitudes of the two sets of strata above and below the unconformity surface are consistent, that is, parallel to each other, the contact is called ~.", "If the two sets of strata above and below the unconformity intersect at an angle, the contact is ~.", "Also known as cold contact, it is the contact relationship formed when magma solidifies into a rock mass, the crust rises and the rock mass is exposed to weathering and denudation, and new strata are then deposited on it as the crust subsides.", "Also known as hot contact, it is the contact relationship formed after hot magma intrudes into the surrounding rock and solidifies into a magmatic rock mass.", "Water occurring in the surface soil layer and in underground rock voids.", "The groundwater of the saturated zone above the first impermeable layer is called ~.", "Below the constant-temperature layer, the Earth's temperature rises with depth, chiefly because of heat generated by the decay of radioactive elements; the temperature increase for every 100 m of depth is called the geothermal gradient.", "Residual deposits, and the soils formed from them by biological weathering, form a discontinuous thin shell on the land.", "The whole process by which hard rocks and minerals at or near the surface, in contact with the atmosphere, water and organisms, undergo physical and chemical changes in place to form loose deposits.", "The process by which rocks and minerals at or near the surface are mechanically broken up in place without change in their chemical composition.", "A geological process in which solid rock deep underground changes in texture, structure and chemical composition under high temperature, high pressure and chemically active fluids, forming new rock.", "The corrosion, erosion-dissolution and associated accumulation produced by surface water and groundwater on soluble rocks (carbonate rocks, gypsum, halide salts, etc.) at and below the surface, dominated by chemical dissolution and supplemented by mechanical erosion.", "A quantity used to measure the strength (energy release) of an earthquake.", "The line connecting points of equal seismic intensity is ~.", "A measure of the degree to which the ground is affected and damaged by an earthquake.", "A huge natural ice body in the cold regions of the Earth, formed from perennial snow, that moves and persists over long periods.", "A place where the land surface is excessively wet, hygrophilous plants grow in abundance and organic matter accumulates.", "A water mass in the ocean flowing regularly in a certain direction is called ~.", "A high-density fluid charged with suspended sediment that moves through clearer sea water.", "The boundary between sea and land, generally the intertidal zone between the high-tide and low-tide levels.", "A solid aggregate consisting of naturally occurring minerals or similar substances.", "The product of the cooling and consolidation of magma formed by partial melting of rocks in the deep crust or upper mantle.", "Rock formed from the weathering products of surface rocks, pyroclastic material and biological debris through exogenic geological processes, chiefly transportation, deposition and consolidation.", "A rock formed when pre-existing rock, under metamorphic conditions (changes in temperature, pressure or fluids), changes in mineral composition, chemical composition or texture while remaining essentially solid.", "In the field it occurs in layers and shows sorting; bed surfaces may carry ripple marks, cross-bedding, mud cracks and other structures; the strata extend widely in the lateral direction; the shape of a sedimentary rock body may mimic the extent of a river, delta or sandbar; and the degree of consolidation varies, some sedimentary rocks being still unconsolidated sediments.", "The destruction of rocks at or near the surface under the action and influence of temperature, water, air and organisms.", "When rock is only broken up mechanically, with no change in its chemical composition, the process is physical weathering. Chemical weathering is the weathering process in which rocks are chemically decomposed by oxygen, water, the various acids dissolved in water, and organisms.",
"The phenomenon in which the weathering products of parent rocks, together with sediments from other sources, are deposited in succession according to differences in grain size, shape, density, mineral composition and chemical composition during transportation and deposition is called sedimentary differentiation.", "Under the load of overlying sediment, a deposit expels water, loses porosity and increases in density.", "The process by which minerals precipitate from pore solutions and bind loose sediments into hard rock.", "During diagenesis, one mineral in a sediment (rock) is replaced by another mineral of different chemical composition.", "Mineral components are dissolved and re-precipitated so that fine grains aggregate into coarser grains.", "The layering shown by sedimentary rocks as their composition, texture, color, thickness and shape change in the vertical direction.", "Its main object of study is the solid Earth; at present it mainly studies the outer part of the solid Earth, the crust or lithosphere.", "The transitional zone between continent and ocean basin, comprising the continental shelf, the continental slope and the continental rise.", "It is composed of rocks and includes the crust and the upper layer of the upper mantle (the part of the Earth above the asthenosphere).", "Ancient biological remains and traces preserved in the strata, generally filled or replaced by calcareous or siliceous material.", "The process in which material transported by various agents accumulates in new places under the influence of decreasing kinetic energy of the medium, changes in physical and chemical conditions, or biological action.", "A hot, viscous, volatile-bearing melt, mainly of silicate composition, formed in the deep crust or upper mantle.", "A fold arched upward in shape, whose two limbs dip away from each other, with older strata in the core and progressively younger strata on the limbs, symmetrically repeated.", "A rapid trembling of the Earth or its crust.", "The hypothesis that the mid-ocean ridge is where the sea floor splits apart above rising heat flow: molten magma wells up there and pushes the rocks on both sides apart to form new sea floor. The mid-ocean ridge is the outlet of rising mantle material, which solidifies to form new oceanic crust and pushes the earlier-formed ocean floor symmetrically outward to both sides. Spreading at the mid-ocean ridge may cause the continents on either side of the new ocean to move gradually apart, and may also cause old oceanic crust to subduct along the Benioff zone (subduction zone) at continental-margin trenches and return to the mantle, completing the renewal of the old oceanic crust.", "A combustible organic mineral resource occurring in liquid form in underground rock voids; a compositionally complex mixture of hydrocarbons.", "Its objects of study are the gaseous envelope of the Earth (the atmosphere), the water at the Earth's surface (the hydrosphere), the Earth's surface morphology and the solid Earth itself.", "It is composed of rocks and includes the crust and the upper layer of the upper mantle (the part of the Earth above the asthenosphere).", "A geological period beginning about 570 million years ago at the start of the Paleozoic Era and lasting about 55 million years. It is divided into three epochs (Early, Middle and Late); it is the opening stage of the modern biosphere and the first period of the Paleozoic Era.", "The process by which deep magma moves upward into the Earth's crust and solidifies there without erupting.", "The spatial orientation of rock strata, expressed by strike, dip direction and dip angle, collectively called the three elements of attitude.", "A fault in which the footwall rises relatively and the hanging wall falls relatively.", "A rapid trembling of the Earth or its crust.", "The Earth's lithosphere is composed of plates; the world is divided into six major plates, and the positions of sea and land are constantly changing.", "A combustible organic mineral resource occurring in liquid form in underground rock voids; a compositionally complex mixture of hydrocarbons.", "Under gravity and other factors, soil or rock masses on a slope slide downward along a certain weak surface or weak zone.", "A naturally formed aggregate consisting of solid minerals or rock debris.", "Ancient biological remains and traces preserved in the strata, generally filled or replaced by calcareous or siliceous material.", "One of the six basic natural sciences: mathematics, physics, chemistry, astronomy, Earth science and biology. Its objects of study are the gaseous envelope of the Earth (the atmosphere), the water at the Earth's surface (the hydrosphere), the Earth's surface morphology and the solid Earth itself.", "The transitional zone between continent and ocean basin, comprising the continental shelf, the continental slope and the continental rise.", "It is composed of rocks and includes the crust and the upper layer of the upper mantle (the part of the Earth above the asthenosphere).", "Ancient biological remains and traces preserved in the strata, generally filled or replaced by calcareous or siliceous material.", "The process in which material transported by various agents accumulates in new places under the influence of decreasing kinetic energy of the medium, changes in physical and chemical conditions, or biological action.", "A metamorphic rock, also known as marble, formed by regional or contact metamorphism of carbonate rocks. It consists mainly of calcite and dolomite, with wollastonite, talc, tremolite, diopside, plagioclase, quartz, periclase and others. It has a granoblastic texture and a massive (sometimes banded) structure; white and gray marbles are the most common.", "A reverse fault with a dip angle of less than 25 degrees. Low-angle reverse faults are the main structural style of orogenic belts.", "A rapid trembling of the Earth or its crust.", "The hypothesis that the mid-ocean ridge is where the sea floor splits apart above rising heat flow: molten magma wells up there and pushes the rocks on both sides apart to form new sea floor. The mid-ocean ridge is the outlet of rising mantle material, which solidifies to form new oceanic crust and pushes the earlier-formed ocean floor symmetrically outward to both sides. Spreading at the mid-ocean ridge may cause the continents on either side of the new ocean to move gradually apart, and may also cause old oceanic crust to subduct along the Benioff zone (subduction zone) at continental-margin trenches and return to the mantle, completing the renewal of the old oceanic crust.",
"A combustible organic mineral resource occurring in liquid form in underground rock voids; a compositionally complex mixture of hydrocarbons.", "Its objects of study are the gaseous envelope of the Earth (the atmosphere), the water at the Earth's surface (the hydrosphere), the Earth's surface morphology and the solid Earth itself.", "The plane formed when a mineral breaks along a fixed direction under an external force.", "The mantle is the middle part of the Earth, below the Moho surface and above the Gutenberg surface.", "The Archaean is remote in time; it is the oldest period in the history of geological development, lasting about 1.5 billion years, and is the earliest stage of Earth's evolution with a clear geological record.", "A lake formed when a river is diverted and a meander bend is cut off and abandoned, or when a river reaches old age.", "Where a river enters its mouth the water area suddenly widens; with the additional blocking effect of sea water or lake water on the river flow, the velocity drops and a large amount of mechanically transported sediment accumulates. In plan view the resulting sedimentary body resembles a triangle.", "Basalt is composed mainly of sodium or calcium aluminosilicates, with a silica content of about 45-52% and relatively high iron oxide and magnesium oxide contents. It is a fine-grained, dense, dark rock and belongs to the basic volcanic rocks.", "The widespread crustal movement in China during the Jurassic and Cretaceous.", "A fold in which the strata bend downward, the two limbs dip toward each other, younger strata form the core and older strata form the limbs.", "A combustible gas, chiefly hydrocarbons, stored in underground rock voids.", "The transitional zone between continent and ocean basin, comprising the continental shelf, the continental slope and the continental rise.", "The interface between the mantle and the liquid outer core, at which the P-wave velocity of seismic waves drops sharply.", "In regions of soluble rock, a series of geological processes carried out mainly by groundwater on the soluble rocks, dominated by chemical dissolution with mechanical erosion and collapse.", "A grayish-yellow or brownish-yellow loose, soil-like sediment formed mainly by wind deposition, composed chiefly of silt and clay, with well-developed pores and vertical joints.", "A neutral calc-alkaline extrusive rock, equivalent in composition to diorite; it is transitional between acidic and basic rocks, an intermediate igneous rock between rhyolite and basalt. It commonly has a porphyritic texture; the phenocrysts are usually plagioclase together with one or more dark minerals such as amphibole, pyroxene or mica, feldspar remaining the dominant constituent. Andesite occurs in many colors and is usually darker than rhyolite, although light-colored minerals still predominate in it.", "Lighter sialic continental blocks float like massive icebergs on the heavier sima layer and drift upon it. The world's continents were joined together in the late Paleozoic, forming what is known as the united paleocontinent, or Pangaea.", "All the materials, except solar energy, that humans obtain directly from nature for life and production; the most important include mineral resources, energy, land resources, water resources and biological resources.", "A fault in which the hanging wall rises relatively and the footwall falls relatively.", "The earliest group of soft-bodied metazoans, appearing after the end of the Neoproterozoic ice age and consisting mainly of coelenterates, annelids, arthropods and some forms of uncertain affinity. It was first discovered at Ediacara in southern Australia.", "Many phyla of shelled marine invertebrates that appeared at the end of the Ediacaran and flourished in the early Cambrian, including mollusks, gastropods, brachiopods, monoplacophorans, sponges and some forms of uncertain affinity; they are the earliest shelled animals.", "Fossils that can reflect certain specific environmental conditions.", "The material expression of a specific sedimentary environment: the synthesis of the rock characteristics and biological characteristics formed in that environment.", "The lateral (spatial) change of sedimentary facies.", "Only those facies and facies areas that can be observed side by side today can come to overlie one another in vertical succession. (The vertical succession of adjacent sedimentary facies is consistent with their lateral arrangement, so the change in the horizontal or vertical direction can be predicted from the change of adjacent facies in the other direction.)", "The deposition of sediment settling from top to bottom through the water body and accumulating in succession on the floor of the sedimentary basin.", "Strata formed by sedimentation are nearly horizontal at the time of deposition, all parallel to this horizontal surface; non-horizontal strata seen today have been modified by later tectonic action.", "In their original state sedimentary strata run from older below to younger above; if this order is changed, tectonic disturbance is indicated.", "The horizontal displacement of sediment particles during transportation by the medium, with deposition as the energy of the medium decays.", "During transgression the strata lap progressively landward, the sediment grains fine upward, and each new stratum covers a larger area than the older one beneath.", "The landward migration of lithofacies belts in the coastal zone during transgression (the clastic deposits step back toward the source area), especially in delta areas.",
"During regression the strata step back or lap out toward the ocean, the sediment grains coarsen upward, and each new stratum covers a smaller area than the older one beneath.", "The seaward migration of lithofacies belts during regression (the land-derived clastics build outward into the depositional area).", "The systems tract deposited during the highstand of global sea level: during the late rise, stillstand and early fall of sea level.", "The sedimentary process in which changes of environmental units within a sedimentary environment, or changes in the mode of action of a sedimentary process, produce a regular vertical repetition of stratigraphic sedimentary units.", "All strata formed by lateral accretion are necessarily diachronous.", "The spatial and temporal fabric of the strata that make up a stratigraphic succession.", "A layered rock body of uniform or similar lithology bounded by two parallel or nearly parallel interfaces.", "The totality of layered rocks preserved at the Earth's surface, including sedimentary, volcanic and metamorphic strata.", "Organizing strata into distinct stratigraphic units by grouping similar and related strata according to their different material attributes.", "Establishing the spatial equivalence and extension of strata between different areas.", "A stratigraphic unit defined by layering the strata according to vertical differences in their lithologic character and by establishing a stratigraphic system and sequence.", "The basic unit of the lithostratigraphic hierarchy: a stratigraphic body with relatively consistent lithology and a certain structural type.", "A stratigraphic unit above the formation, formed by combining formations on the principles of similar lithology, related genesis and similar structural type.", "The stratigraphic unit one rank below the formation, a subdivision of the formation; generally a unit composed of strata of the same or similar lithology, one structural type and related genesis.", "The smallest lithostratigraphic unit, of two kinds: combinations of strata of the same or similar lithology, or of basic sequences of the same structure, used for layering in field profile study; and rock or ore beds of special lithology with conspicuous markers, used as marker beds or special beds in regional geological mapping.", "The strata formed within a specific interval of geological time. Such a unit represents all the strata formed within a certain time range of geological history, and only the strata formed within that interval.", "The largest chronostratigraphic unit, corresponding to the geochronologic \"eon\"; it is divided according to the largest stages of biological evolution, that is, the existence and mode of life.", "The second-rank chronostratigraphic unit, corresponding to the \"era\"; it is divided according to the overall character of the biological world and the stages of crustal evolution.", "The chronostratigraphic unit one rank below the erathem, corresponding to the \"period\"; it is divided mainly according to the stages of evolution of the biosphere.", "The chronostratigraphic unit within the system, corresponding to the \"epoch\"; a system can generally be divided into two or three series on biostratigraphic grounds.", "The most basic working unit of chronostratigraphy, corresponding to the \"age\"; it is divided mainly according to the evolutionary features of families and genera.", "The lowest chronostratigraphic unit, corresponding to the geochronologic unit \"chron\"; it comprises all the stratigraphic record within one chron and is divided according to the evolution of genera and species.", "Stratigraphic units divided according to the fossils preserved in the strata; each is a three-dimensional stratigraphic body characterized by its fossil content and distribution and distinguished from adjacent units by its fossils.", "The stratigraphic body representing the whole continuous range of any biological taxonomic unit: the strata occupied by the taxon from its appearance to its extinction.", "The strata of the most flourishing interval of certain fossil genera and species, excluding the strata in which such fossils first appeared and finally disappeared.", "The strata occupied by a unique fossil assemblage; the fossils contained in these strata, or a certain group of them, form a natural combination as a whole that differs markedly from the fossil assemblages of adjacent strata.", "The strata between two specific biohorizons; such a zone is not necessarily the range of any one or several taxa, but is defined by the bounding biohorizons.", "Strata containing a specific fossil representing a segment of an evolutionary lineage; it may be either the range of a fossil taxon within the lineage or its range before the appearance of its descendant taxon.", "Strata in which biostratigraphic units cannot be established for lack of fossils.", "In the process of stratigraphic division and the establishment of stratigraphic units, a stratigraphic model designated as the standard of the new unit, called the stratotype.", "The model profile on which a stratigraphic unit is based; its upper and lower limits are defined by boundary stratotypes.", "A specific point in a particular stratigraphic sequence that defines the stratigraphic boundary between two stratigraphic units.", "Also called time-transgression: the phenomenon that the boundaries of a lithostratigraphic unit do not coincide with, but cut obliquely across, chronostratigraphic boundaries.", "Also known as a sedimentary formation, it is a sedimentary rock association formed in a certain geological period that reflects the main tectonic setting of its deposition.", "A sedimentary basin in which the subsidence rate of the basement roughly matches the deposition rate, so that the paleo-water depth remains unchanged and the lithofacies types show no major change.", "A sedimentary basin in which the basement subsides faster than sediment is deposited, so that the water deepens and sediment fails to compensate the subsidence; although the geological time represented is very long, the sediment is very thin.", "A sedimentary basin in which sediment is deposited faster than the basement subsides, so that the water shallows and the sediment thickness exceeds the subsidence.", "A sedimentary basin between a craton and the front of an orogenic belt; also called a piedmont depression or foredeep.", "The curve traced by the paleomagnetic pole positions obtained from rocks of different geological ages on the same plate is called the apparent polar wander curve.", "A geographical division, formed over a long period under the control of temperature and geographical isolation, with important differences in biological taxonomy and evolutionary lineages.", "The dividing line between the Oriental and Australian biogeographic realms, running between Asia and Australia.", "The model of the development cycle of continental plate separation and ocean basin evolution.", "Early Archean in age, it is exposed mainly as supracrustal xenoliths within the Neoarchean granitoids at Xingshan, Huangbaiyu, Naoyumen Dongshan and other places in Qian'an, Hebei Province, and is a layered but disordered rock body. The main rocks are various gneisses, schists, plagioclase amphibolites and felsic rocks; the metamorphic grade reaches upper amphibolite to granulite facies.", "The tectonic movement that occurred in North China at the end of the Archean (about 2.6-2.5 billion years ago), accompanied by extensive magmatic activity and metamorphism and by strong folding and denudation of the crust. It consolidated the Archean mobile deposits of the area, thickened the silica-alumina crust, and formed the ancient continental nucleus of North China.", "The tectonic movement that occurred in North China at the end of the Paleoproterozoic, accompanied by extensive magmatism and metamorphism and by strong folding and denudation of the crust. It further consolidated and welded the scattered Archean continental nuclei into a larger continental block: the proto-platform of North China, the embryonic form of the North China plate.",
"The tectonic movement that occurred in South China at the end of the Mesoproterozoic and in the late Neoproterozoic, which welded the areas on both sides of the Yangtze block, its southeastern margin and the lower Yangtze region to the ancient Yangtze plate, forming a stable Yangtze continental plate.", "It belongs to the Mesoproterozoic and is distributed in North China, representing deposition in shore to shallow-sea, lagoonal and intertidal environments.", "It belongs to the upper Nanhua System and is distributed on both sides of the Kangdian old land in China. It consists of grayish-purple to purplish-red sandy argillaceous conglomerate with gravels of complex composition, unsorted grains, massive or cross bedding, and glacial striations on the gravels, representing continental glacial and nearshore glaciomarine deposits.", "It belongs to the middle part of the upper Sinian of southern China and is distributed in eastern Yunnan, northern Guangxi, eastern Guizhou, western Sichuan, northern Hunan, western Hubei and the Daba Mountains. It is dominated by carbonates; the rocks are gray to grayish-black and medium-bedded, with minor intraclasts, generally containing pyrite and chert, and containing triaxon calcareous sponge spicules, reflecting relatively deep-water, stagnant sedimentation.", "It belongs to the upper part of the upper Sinian and is distributed mainly in western Hubei, central Guizhou, eastern Yunnan, western Sichuan, southern Shaanxi and other places. It is dominated by carbonates. The lower part represents shallow-shoal deposits at the margin of a high-energy, oxygen-rich carbonate platform; the middle part contains dark bituminous limestone and siliceous limestone; and the upper part represents carbonate tidal-flat and lagoonal environments, deposited under an arid climate.", "The Caledonian Movement is the general term for early Paleozoic crustal movements; it usually refers to the crustal movement between the Silurian and the Devonian and is the main orogenic episode of the early Paleozoic.", "It belongs to the lower Cambrian and is distributed in North China. It consists of purplish-red calcareous shale with argillaceous limestone. Sedimentary structures such as mud cracks, rain prints, halite pseudomorphs, ripple marks and herringbone cross-bedding are developed. It represents shore to shallow-sea deposits under a hot climate.", "It belongs to the lower part of the Lower Silurian and is distributed in Hubei, Hunan, Yunnan, Guizhou, Sichuan and Shaanxi. The lower part of the formation is black graptolite shale, representing a stagnant, uncompensated marine basin; the upper part is blue-gray or yellow-green argillaceous or silty shale with a few graptolites, reflecting accelerated sedimentation and a gradual change into a compensated basin.", "In the Carboniferous, the flora distributed across northern Asia, the Junggar Basin of Xinjiang and the northern part of Northeast China. It was dominated by herbaceous true ferns and seed ferns; the woody plants had distinct growth rings, indicating a northern temperate climate. Its representatives include the spoon-leaved plants.", "In the Carboniferous, the flora of Gondwana, represented by the Glossopteris (tongue-fern) flora; it is characterized by monotonous plant species, reflecting the colder climate of the middle and high latitudes of the southern hemisphere.", "During the Permian, the flora distributed mainly in East and Southeast Asia, divisible into northern and southern subprovinces and characterized by abundant large-feather ferns (Gigantopteris) and single-net ferns.", "During the Permian, the flora distributed mainly in Europe and eastern North America, with no trace of the large-feather fern flora.", "The late Paleozoic orogeny. The Hercynian Movement folded and uplifted the Hercynian geosyncline of western Europe, the Appalachian geosyncline of eastern North America, the Ural geosyncline on the Eurasian border, the Kazakh geosyncline of Central Asia, and the Tianshan, southern Qinling and Greater Khingan geosynclines of China, forming huge mountain systems. The geosynclinal belts between the ancient platforms of the northern hemisphere thus became denuded mountains, and the completion of the Hercynian tectonic stage marked the end of the Paleozoic Era.", "The shallow-marine sedimentary type of the marine Devonian of southern China deposited in nearshore, oxygen-rich environments; it is distributed in the platform areas and represented by the Middle Devonian along the Yujiang River and the Upper Devonian of central Hunan.", "The offshore, anoxic, quiet marine-basin sedimentary type of the marine Devonian of southern China, developed within the platform areas and represented by the Middle and Upper Devonian of Nandan and Luofu, Guangxi.", "It belongs to the Middle Permian and is distributed in North China. The lower part is gravelly quartz sandstone with cross-bedding; the upper part is sandy shale with minable coal seams. It contains abundant plant fossils and a marine intercalation, only 2.6 m thick, with fossils such as Lingula. It represents a delta-plain peat swamp environment under a tropical humid climate against a background of regression.", "A tectonic movement in South China in the early Late Permian, which triggered transgression and regression of the sea, sedimentary cycles, lithofacies changes, biotic changes and volcanic activity in South China.", "A tectonic movement in China and adjacent areas between the Middle-Late Permian and the Triassic. It ended the pre-Middle Triassic pattern of \"sea in the south, land in the north\": the folds of western Sichuan, Gansu and southern Qinghai all rose; the sea retreated to southern Xinjiang, Tibet and western Yunnan; and most of the middle and lower Yangtze reaches and South China changed from shallow sea to land. From then on the northern and southern lands of China were joined, and most of the country lay in a continental environment.", "The tectonic movement that occurred widely in China from the Late Triassic to the Cretaceous, also known as the Old Alps stage. It is manifested mainly by folding and faulting, magma eruption and intrusion, and metamorphism in some areas.",
"The tectonic movement in China during the Cenozoic, which caused the docking and collision of the Indian and Asian plates and the uplift of the Qinghai-Tibet Plateau; a trench-arc-basin system formed along the eastern margin of the ancient Asian continent, and active back-arc or intracontinental rifting occurred within the continent.", "It spans the late Middle Triassic to the Late Triassic and is distributed mainly in the Ordos Basin. It consists mainly of gray-green and yellow-green sandstone and shale, with black oil shale at the bottom and coal seams at the top, with a total thickness of 2000 m. It records a large depressional basin in a temperate semi-humid climate.", "It belongs to the Middle Jurassic and is distributed mainly in the Ordos Basin. The lower part is the Baotashan sandstone member, composed mainly of grayish-white and flesh-red massive cross-bedded coarse and fine sandstone, with fine-gravelly coarse sandstone at the bottom; the upper part is mainly fine sandstone with gray-black argillaceous siltstone and shale. It is rich in coal.", "Its age is late Early Cretaceous to late Cretaceous; it is widely distributed in the Songliao Plain and consists of freshwater lacustrine deposits.", "It belongs to the middle Miocene and is seen only east of Linqu, Shandong Province. It is a crater-lake deposit: the upper part is yellow coarse sandstone with conglomerate, and the lower part is black, white and brown paper shale with diatomite.", "The brownish-red and bright red Pliocene clay widely accumulated in the north, especially in Shanxi and Shaanxi, containing fossils such as Hipparion and rhinoceros; formerly called the Hipparion red clay. Because it is most typical at Baode, Shanxi, it is also known as the Baode red clay.", "Also called the southern continent: the Carboniferous-Permian united old land of the southern hemisphere, comprising modern South America, Africa, the Arabian Peninsula, India, China's Tibet, Australia and Antarctica.", "Also called the northern continent: the Carboniferous-Permian united old land of the northern hemisphere, comprising modern North America, Europe and most of Asia.", "The sediments are mainly coarse and medium sand, well sorted and well rounded, with bedding inclined in one direction; the direction of inclination indicates the direction of flow. This is a sedimentary feature peculiar to the river environment.", "The rock is clayey (sometimes diatomaceous), with well-developed horizontal laminae, and is rich in fossils of freshwater bivalves, fish, conchostracans, insects and frogs, as well as well-preserved plant stems and leaves. The freshwater assemblage indicates a continental water body; the excellent preservation of the fossils, even of fine structures, indicates still water; and the fine grain and horizontal lamination likewise indicate a calm water body far from the source area. In general it represents deposits of the deeper parts of a shallow lake through to the deep-lake area (the lake center) under a humid climate.", "The rock is black, fine grained and clayey, and rich in plant fossils. The abundant preservation of plant fossils shows a warm, humid climate with luxuriant plant growth; after burial and dehydration the carbonaceous matter is preserved, making the rock black. The fine-grained sediment reflects flat terrain and long transport distances. The black shale containing plant fossils therefore represents plain marsh deposits under warm and humid climatic conditions.", "The rock contains long, flat carbonate gravels that look like bamboo leaves in longitudinal section and are well rounded; the surfaces of some \"bamboo leaves\" are oxidized yellow or brown. The gravels are unoriented or weakly oriented and calcareously cemented. Bamboo-leaf limestone is generally believed to form when newly deposited calcium carbonate, unconsolidated or only just consolidated, is broken up by storm waves and rounded by their impact (being soft, it rounds easily), and is then cemented by freshly deposited calcium carbonate; it thus has the nature of a syngenetic conglomerate. The brownish-yellow halo on the gravel surfaces usually shows that these gravels were once exposed above the water surface and oxidized, Fe2+ at the surface being oxidized to Fe3+ to give the brownish-yellow color. It reflects a shallow, high-energy coastal environment.", "The rock is purplish red or red, composed of silt or clay, with cubic halite pseudomorphs visible on bedding surfaces. The formation of halite is closely tied to arid climate: under dry conditions, massive evaporation raises the salinity of the water body, and halite crystallizes when the salinity reaches saturation. Most halite pseudomorphs seen in rocks are isolated, scattered crystals, suggesting that the halite did not form by desiccation of the whole basin but only in certain shallow-water sections. The halite crystals were covered by sediment after growing and were dissolved when the salinity of the water fell; the remaining voids were filled with clay, preserving the crystal form of the halite, hence the name halite pseudomorph. The mainly fine-grained sediments indicate flat terrain at the time, representing coastal or lakeshore deposition under a dry climate. The environment is usually judged from the fossils in this set of rocks and in the strata above and below: marine fossils suggest coastal deposits, while terrestrial lacustrine assemblages suggest lakeshore deposits.", "The rock is iron-red, composed essentially of hematite (Fe2O3), with an oolitic texture; the ooids are about 0.5-2 mm across and sometimes contain fossil fragments. Oolitic hematite indicates that under warm and humid climatic conditions iron can travel as a colloid in acidic water (water rich in humic acid) and be carried by rivers to beaches and shallow seas, where, in agitated water, it condenses and precipitates concentrically around sand grains or skeletal fragments as nuclei. It is inferred to represent a turbulent, high-energy shallow sea under a hot and humid climate.", "The limestone contains varying amounts of ooids about 1 mm in grain size, together with many trilobite fragments. When the calcium carbonate content of a warm sea reaches supersaturation, any stirring of sand grains and skeletal debris on the sea floor by waves causes calcium carbonate to precipitate concentrically around them, forming an oolitic texture; the trilobite debris is likewise the result of wave impact. It represents a warm, agitated, high-energy shallow sea.",
When the calcium carbonate content in the warm sea basin reaches supersaturation, once the waves stir up the sand particles and biological debris on the sea floor, the calcium carbonate will condense and deposit around them in a concentric manner and form an oolitic structure. The trilobite debris is also the result of wave impact. It represents the warm and turbulent shallow sea high-energy environment.", "The rock body is composed of reef building organisms. The biological content generally accounts for more than 50%. Reef-forming organisms include corals, stromatoporoids, algae and calcareous sponges, and some reef-loving and reef-attached organisms fill in the gap of the reef-forming organisms together with plaster. Reef-forming organisms generally live in the tropical clear and normal shallow water with a water temperature of about 20 degrees. The water depth is not more than 50~70m, and the highest is 30m. Therefore, the reef limestone reflects the tropical warm and clear shallow water environment.", "It is rhythmically interbedded with mudstone, and the thickness of each rhythmic layer is not large, ranging from tens of centimeters to tens of centimeters. The content of sandstone matrix is high, with progressive bedding. On the bottom of the sandstone, impression and deep-water trace fossils, such as trough mold, trench mold, tool mold, etc., are often developed, and obvious or less obvious Baoma sequence can be seen, and plankton fossils (such as graptolites, etc.) can be seen in the mudstone. It represents typical turbidity current (gravity current) deposition.", "Black-brown, maroon, gray-black thin-layer to medium-layer ferromanganese siliceous rock, siliceous mudstone (shale), gray-black thin-layer marlstone and carbonaceous calcareous shale, horizontal bedding, ammonites and other plankton fossils, no benthic fossils. It represents deeper water and lower energy environment.", "Mainly black shale and siliceous shale, rich in graptolites and other plankton fossils, but no or rare benthic fossils. It represents the water depth, stagnant current and non-compensated sea environment.", "The biofacies formed by the dense accumulation of benthos with thick shells, such as brachiopods, bivalves, gastropods, trilobites, etc., are called shell facies. It reflects the coastal and shallow sea environment with warm climate, low water depth and turbulence.", "Generally, it refers to the crustal tectonic changes (2 points) that occurred in the Neogene and since, which are represented by vertical uplift (1 point) and horizontal movement (1 point).", "The sign of unconformity (2 points), due to long-term weathering and denudation, remains of refractory substances, generally iron and siliceous substances (2 points).", "It is a first level discontinuous interface. 
At 33 km underground (1 point), it is the interface between the crust and mantle 2 (3 points).", "The fossils with the fastest evolution speed and the widest distribution (1 point) can identify the age of the stratum (3 points).", "The crust and the top of the upper mantle (above the asthenosphere) are composed of solid rocks, collectively known as the lithosphere.", "It is a homogeneous object with relatively fixed chemical composition and physical properties formed under various geological processes (1 point), and is the basic unit of rock composition (2 points).", "The two wings are basically symmetrical, the core stratum is younger (2 points), and the two wings stratum is older (2 points).", "It refers to a special translation fault (2 points) occurring at the mid-ocean ridge (2 points).", "It is mainly composed of two normal faults with basically the same strike and opposite dip, and there is a common falling wall between the two normal faults. P271", "It is mainly composed of normal faults with basically the same strike and opposite dip direction, and there is a common early rising wall between the two normal faults. P272", "It is expressed in \u2103, and the depth increased when the temperature increases by 1 \u2103. P22", "We increase the temperature for every 100 meters of depth. P22", "That is, stromatolite, which is formed by the slime secreted by blue-green algae that binds and hardens the fine material. Its growth forms two basic laminae due to seasonal changes: the first is algae-rich laminae, also known as dark layer or dark zone, which is dark because of the high content of algae components and rich in organic matter; The other is rich in carbonate laminae, with less algae content and less organic matter, so the color is light. Two basic laminations appear alternately, forming a laminated structure. P2072", "It refers to the geological process in which the composition, structure and structure of the original rock change basically in a fixed state due to the change of physical and chemical conditions in the specific geological environment below. The new rocks formed by metamorphism are called metamorphic rocks. The original rocks of metamorphism can be sedimentary rocks, magmatic rocks and metamorphic rocks. P185", "Silicate minerals (plagioclase, K-feldspar, pyroxene, hornblende, mica, olivine, clay minerals, etc.) and quartz in oxygen-bearing salt minerals account for the most of the chemical composition of minerals, accounting for about 91% of the total amount of minerals. These minerals are the main common minerals that make up rocks and are called authigenic minerals. P35", "The phenomenon of regular cracking of minerals along the crystal lattice after external force. The smooth plane of cleavage is called cleavage surface. It can be divided into one, two and three groups. If the fracture surface of the grain is a flash plane on the specimen, it is the cleavage surface. P335", "It refers to the combination of a certain sedimentary environment and the characteristics of sedimentary rocks deposited in the environment. P209", "Sedimentary rock characteristics (such as rock type, color, material composition, structure, structure, lithologic combination, etc.), paleontological characteristics (such as species, ecology, biological traces, etc.) and geochemical characteristics. 
"A natural science with the earth as its research object. At present, geology mainly studies the surface layer of the solid earth - the lithosphere: its material composition, formation, distribution and evolution; it also studies the internal structure of the earth, its surface morphology, and the regularity of their development and evolution.", "Through the geological phenomena and results left by various geological events, the conditions, processes and characteristics of ancient geological events are deduced inversely by using the laws of present-day geological processes.", "The part of the upper mantle composed of solid rocks above the asthenosphere, together with the crust, is collectively referred to as the lithosphere. It is the rigid shell of the earth, \"floating\" on the plastic asthenosphere.", "Minerals are simple substances or compounds formed by geological processes.", "The various natural processes that cause the continuous movement, change and development of the material composition, internal structure and surface morphology of the crust are called geological processes.", "The paired metamorphic belts caused by the subduction of the oceanic plate along the Benioff zone between the island arc and the continental margin: one is the high-pressure, low-temperature metamorphic belt distributed on the ocean side, and the other is the high-temperature, low-pressure metamorphic belt parallel to it.", "It refers to the discontinuous thin shell (layer) on the land surface composed of residual deposits and the soil layer formed by biological weathering.", "It refers to the ratio of the total pore volume in a rock to the rock's volume (see the worked equation after this group of definitions).", "It refers to the water buried underground, that is, the water in loose deposits and rock voids below the surface.", "A large body of slowly flowing ice on the continent.", "A solid with a regular arrangement of its internal particles in three-dimensional space is called a crystal.", "The average content percentage of the various elements in the crust is internationally called the Clark value.", "It refers to the phenomenon that other ions or atoms with similar properties occupy the positions of the original ions or atoms in a mineral's crystal structure without causing a qualitative change in the chemical bonding or crystal structure type. However, it can cause changes in the chemical composition and other related properties.", "Also known as \"water-formed rock\", it is a kind of rock formed under surface or near-surface conditions from the weathering products of previously formed rocks (parent rocks), produced by a series of exogenic geological processes such as weathering and denudation and then transported, deposited and consolidated.", "It refers to the shape and scale of a magmatic rock mass in space, its relationship with the surrounding rock, and the depth and geological tectonic environment at the time of formation.", "Metamorphism is the process of changing the mineral composition, texture and structure of rocks through endogenic geological processes.", "In the process of deposition, the originally mixed coarse, fine, light and heavy materials are deposited in a certain order, which is called mechanical sedimentary differentiation.", "Ripple marks are sand ripples or sand waves formed when sandy sediments move under the action of water (or wind).", "It refers to rock formed by the accumulation of various clastic materials produced by volcanism.", "It refers to the sum of a sedimentary environment and the characteristics of the sediments (rocks) formed in that environment (including lithological, biological and geochemical characteristics).", "When a river carrying mud and sand enters a receiving basin, the sediment accumulates in the estuarine area owing to the decrease in flow velocity, causing the shoreline to prograde irregularly toward the basin.", "It refers to a gravity flow of sediment particles supported by eddies (turbulence) and transported within the fluid in a suspended state.", "It refers to carbonate deposition in an epicontinental sea environment with no or little inflow of terrigenous material.", "For layered strata, the old strata are formed first, and the new strata are stacked on them layer by layer; the higher the stratum, the newer it is.", "In a stratigraphic unit, a few unique fossils are selected which have short time ranges, wide geographical distribution and large numbers, and which are well preserved and easy to identify. They are called standard fossils.", "It refers to the deformation products of rocks formed by various endogenic and exogenic geological processes, specifically manifested as bending deformation (plastic deformation products) and fracture deformation (brittle deformation products) of rocks.", "It refers to the unconformable contact relationship in which the strata above and below the unconformity surface have different occurrences and intersect at an angle.", "It refers to folds formed by gradual deformation during the deposition of the strata, that is, folds formed contemporaneously with sedimentation.", "It refers to the structures formed by fracture deformation when the stress borne by a rock reaches or exceeds its fracture strength.", "It refers to a sedimentary basin in which industrial oil and gas flows have been discovered.", "The occurrences of the new and old strata are consistent, the lithological changes and paleontological evolution are gradual and continuous, the ages of the new and old strata are continuous, and no strata are missing. During the formation of the strata, a stable sedimentary environment is basically maintained, and the tectonic movement is mainly slow subsidence of the crust; even if it rises, the sedimentary surface does not rise above the water surface to suffer denudation.",
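As a worked equation for the porosity definition above (the sample volumes are assumed for illustration only):

$$\phi \;=\; \frac{V_{\text{pore}}}{V_{\text{rock}}} \times 100\% ,\qquad \text{e.g.}\quad \phi \;=\; \frac{30\ \text{cm}^3}{200\ \text{cm}^3} \times 100\% \;=\; 15\%.$$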
"It is a first-order discontinuity, 33 km underground (1 point), and the interface between the crust and the mantle (3 points).", "The fossils with the fastest evolution rate (2 points) and the most widespread distribution (2 points).", "The crust (2 points) and the top of the upper mantle (above the asthenosphere) (2 points) are both composed of solid rocks and are collectively known as the lithosphere.", "It is a homogeneous object with relatively fixed chemical composition and physical properties formed by various geological processes (1 point), and is the basic unit of rock composition (2 points).", "The two wings are basically symmetrical, the core strata are younger (2 points), and the strata of the two wings are older (2 points).", "The strike line is the intersection line of the rock layer with any horizontal plane (any horizontal line on the rock layer). Strike refers to the directions indicated by the two ends of this line, expressed in azimuth; it indicates the horizontal extension direction of the rock stratum.", "A natural science with the earth as its research object. At present, geology mainly studies the surface layer of the solid earth - the lithosphere: its material composition, formation, distribution and evolution; it also studies the internal structure of the earth, its surface morphology, and the regularity of their development and evolution.", "Through the geological phenomena and results left by various geological events, the conditions, processes and characteristics of ancient geological events are deduced inversely by using the laws of present-day geological processes.", "The part of the upper mantle composed of solid rocks above the asthenosphere, together with the crust, is collectively referred to as the lithosphere. It is the rigid shell of the earth, \"floating\" on the plastic asthenosphere.", "Minerals are simple substances or compounds formed by geological processes.", "The various natural processes that cause the continuous movement, change and development of the material composition, internal structure and surface morphology of the crust are called geological processes.", "The paired metamorphic belts caused by the subduction of the oceanic plate along the Benioff zone between the island arc and the continental margin: one is the high-pressure, low-temperature metamorphic belt distributed on the ocean side, and the other is the high-temperature, low-pressure metamorphic belt parallel to it.", "It refers to the discontinuous thin shell (layer) on the land surface composed of residual deposits and the soil layer formed by biological weathering.", "It refers to the ratio of the total pore volume in a rock to the rock's volume.", "It refers to the water buried underground, that is, the water in loose deposits and rock voids below the surface.", "A large body of slowly flowing ice on the continent.", "A solid with a regular arrangement of its internal particles in three-dimensional space is called a crystal.", "The average content percentage of the various elements in the crust is internationally called the Clark value.", "It refers to the phenomenon that other ions or atoms with similar properties occupy the positions of the original ions or atoms in a mineral's crystal structure without causing a qualitative change in the chemical bonding or crystal structure type. However, it can cause changes in the chemical composition and other related properties.", "Also known as \"water-formed rock\", it is a kind of rock formed under surface or near-surface conditions from the weathering products of previously formed rocks (parent rocks), produced by a series of exogenic geological processes such as weathering and denudation and then transported, deposited and consolidated.", "It refers to the shape and scale of a magmatic rock mass in space, its relationship with the surrounding rock, and the depth and geological tectonic environment at the time of formation.", "Metamorphism is the process of changing the mineral composition, texture and structure of rocks through endogenic geological processes.", "In the process of deposition, the originally mixed coarse, fine, light and heavy materials are deposited in a certain order, which is called mechanical sedimentary differentiation.", "Ripple marks are sand ripples or sand waves formed when sandy sediments move under the action of water (or wind).", "It refers to rock formed by the accumulation of various clastic materials produced by volcanism.", "It refers to the sum of a sedimentary environment and the characteristics of the sediments (rocks) formed in that environment (including lithological, biological and geochemical characteristics).", "When a river carrying mud and sand enters a receiving basin, the sediment accumulates in the estuarine area owing to the decrease in flow velocity, causing the shoreline to prograde irregularly toward the basin.", "It refers to a gravity flow of sediment particles supported by eddies (turbulence) and transported within the fluid in a suspended state.", "It refers to carbonate deposition in an epicontinental sea environment with no or little inflow of terrigenous material.", "For layered strata, the old strata are formed first, and the new strata are stacked on them layer by layer; the higher the stratum, the newer it is.", "In a stratigraphic unit, a few unique fossils are selected which have short time ranges, wide geographical distribution and large numbers, and which are well preserved and easy to identify. They are called standard fossils.", "It refers to the deformation products of rocks formed by various endogenic and exogenic geological processes, specifically manifested as bending deformation (plastic deformation products) and fracture deformation (brittle deformation products) of rocks.", "It refers to the unconformable contact relationship in which the strata above and below the unconformity surface have different occurrences and intersect at an angle.", "It refers to folds formed by gradual deformation during the deposition of the strata, that is, folds formed contemporaneously with sedimentation.", "It refers to the structures formed by fracture deformation when the stress borne by a rock reaches or exceeds its fracture strength.", "It refers to a sedimentary basin in which industrial oil and gas flows have been discovered.", "A natural science with the earth as its research object. At present, geology mainly studies the surface layer of the solid earth - the lithosphere: its material composition, formation, distribution and evolution; it also studies the internal structure of the earth, its surface morphology, and the regularity of their development and evolution.", "Through the geological phenomena and results left by various geological events, the conditions, processes and characteristics of ancient geological events are deduced inversely by using the laws of present-day geological processes.", "The closed surface formed by extending the mean sea level through the continents.", "The island arc is an arc-shaped chain of volcanic islands, extending for hundreds to thousands of kilometers and often developed at the edge of the continental shelf; narrow, elongated depressions more than 6 km deep, called trenches, are often developed on the ocean side of the island arc.", "The part of the upper mantle composed of solid rocks above the asthenosphere, together with the crust, is collectively referred to as the lithosphere.
It is the rigid shell of the earth, \"floating\" on the plastic asthenosphere.", "Minerals are simple substances or compounds formed by geological processes.", "The various natural processes that cause the continuous movement, change and development of the material composition, internal structure and surface morphology of the crust are called geological processes.", "The geological processes caused by the internal energy of the earth, affecting the whole crust and even the lithosphere.", "It refers to the discontinuous thin shell (layer) on the land surface composed of residual deposits and the soil layer formed by biological weathering.", "Weathering refers to the destruction of surface rocks under the action of various geological agents.", "The destruction of ground rocks and weathering products by various exogenic geological agents is called denudation.", "The products of weathering and denudation are moved from their original position by water, ice, the sea, wind, gravity, etc.; this is called transportation.", "During the transportation of the weathering and denudation products of the parent rock by external agents, factors such as the decrease of flow velocity or wind speed and the melting of glaciers cause the transported material to be deposited gradually; this is called sedimentation.", "In the process of deposition, the originally mixed coarse, fine, light and heavy materials are deposited in a certain order, which is called mechanical sedimentary differentiation.", "Diagenesis refers to the process of turning loose sediment into consolidated rock after deposition.", "A solid with a regular arrangement of its internal particles in three-dimensional space is called a crystal.", "The average content percentage of the various elements in the crust is internationally called the Clark value.", "It refers to the phenomenon that other ions or atoms with similar properties occupy the positions of the original ions or atoms in a mineral's crystal structure without causing a qualitative change in the chemical bonding or crystal structure type. However, it may cause changes in the chemical composition and other related properties.", "When the growth conditions are fixed, the same kind of crystal always tends to develop into a certain shape. This property is called crystal habit.", "Rock is a naturally occurring mineral aggregate with a certain texture, structure and stable shape, and is the product of geological processes.", "Also known as \"igneous rock\", it is formed when magma from the depths of the crust intrudes into the crust or is erupted onto the surface and then condenses and crystallizes.", "Also known as \"water-formed rock\", it is a kind of rock formed under surface or near-surface conditions from the weathering products of previously formed rocks (parent rocks), produced by a series of exogenic geological processes such as weathering and denudation and then transported, deposited and consolidated.", "It is formed by the metamorphism of earlier magmatic and sedimentary rocks in the earth's crust under the influence of a series of endogenic geological processes such as magmatic activity and tectonic movement, subjected to higher temperature and pressure.", "Magma is a hot, viscous, volatile-bearing melt with silicates as its main component, formed deep underground.", "It refers to the rock characteristics shown by the crystallinity, grain size and shape of the mineral particles in the rock and their interrelations (the texture).", "It refers to the rock characteristics shown by the arrangement and filling mode of the rock's components (minerals) (the structure).", "It refers to the shape and scale of a magmatic rock mass in space, its relationship with the surrounding rock, and the depth and geological tectonic environment at the time of formation.", "It refers to the initial magma formed by partial melting of upper-mantle material or by partial or total melting of crustal material.", "It refers to the whole process by which an originally homogeneous magma, without the addition of foreign material, finally produces magmas of different compositions through its own evolution.", "The magma melts the surrounding rocks or xenoliths and changes its own composition; this is called assimilation. Incomplete assimilation is called contamination.", "Metamorphism is the process of changing the mineral composition, texture and structure of rocks through endogenic geological processes.", "Bedding is a layered structure formed by changes in mineral composition, color, texture and other characteristics along the direction perpendicular to the original sedimentary plane.", "Ripple marks are sand ripples or sand waves formed when sandy sediments move under the action of water (or wind).", "It refers to the degree to which the edges and corners of particles are abraded and rounded.", "It refers to the degree to which the particles approach a sphere.", "It refers to the degree to which the clastic components approach the most stable final products under the transformation of weathering, transportation and sedimentation.", "The degree to which the sorting and rounding of the clastic particles approach their ultimate state.", "It refers to rock formed by the accumulation of various clastic materials produced by volcanism.", "It is formed when weakly consolidated carbonate sediments deposited in the basin are washed and agitated by running water or waves and then piled up in situ or transported and deposited over a short distance.", "A type of unconformity.
(1 point) Because of tectonic movement, the strata above and below the unconformity surface are not parallel (1 point), and there is an angular contact relationship between them (2 points).", "A fracture along whose surface (1 point) there is obvious displacement and dislocation (3 points).", "All fossils or fossil groups representing a special geographical environment (2 points) and indicating a special lithofacies are called facies fossils or facies fossil groups (2 points).", "It is a fold structure in which the rock strata bend convexly upward; the core strata are older (2 points) and the strata of the two wings are younger (2 points).", "According to the color of the sedimentary strata, the mineral composition of the sediments, the grain size, the structures and the types of biological fossils, lithofacies analysis is carried out to restore the paleoenvironment (2 points). According to the sedimentary environment, the strata can be divided into three major categories: marine facies, marine-continental transitional facies, and continental facies (2 points).", "It is a homogeneous object with relatively fixed chemical composition and physical properties formed by various geological processes (1 point), and is the basic unit of rock composition (2 points).", "It refers to a fold in which one wing is overturned (2 points) while the other wing remains normal (2 points).", "The line drawn down the stratum slope perpendicular to the strike (2 points) is called the dip line, and its projection on the horizontal plane indicates the direction of the stratum's dip (2 points).", "A type of unconformity. (1 point) Because of tectonic movement, the strata above and below the unconformity surface are not parallel (1 point), and there is an angular contact relationship between them (2 points).", "A fracture along whose surface (1 point) there is obvious displacement and dislocation (3 points).", "It is a set of continental conglomerate and sandstone deposits (2 points) formed in intermountain basins, piedmont basins and similar places after the region converts from marine to continental conditions (2 points).", "It is a fold structure in which the rock strata bend convexly upward; the core strata are older (2 points) and the strata of the two wings are younger (2 points).", "According to the color of the sedimentary strata, the mineral composition of the sediments, the grain size, the structures and the types of biological fossils, lithofacies analysis is carried out to restore the paleoenvironment (2 points). According to the sedimentary environment, the strata can be divided into three major categories: marine facies, marine-continental transitional facies, and continental facies (2 points).", "It is a homogeneous object with relatively fixed chemical composition and physical properties formed by various geological processes (1 point), and is the basic unit of rock composition (2 points).", "It refers to a fold in which one wing is overturned (2 points) while the other wing remains normal (2 points).", "The line drawn down the stratum slope perpendicular to the strike (2 points) is called the dip line, and its projection on the horizontal plane indicates the direction of the stratum's dip (2 points).", "The study of the earth. It is a knowledge system about the material composition, internal structure, external characteristics, the interactions among the earth's spheres and the evolution history of the earth. At this stage, owing to the limitations of observation and research conditions, the lithosphere is the main research object; it also involves the hydrosphere, the biosphere, the deeper parts of the lithosphere, and some extraterrestrial materials.", "1) Low mountains, with an altitude of 500-1000 meters. 2) Middle mountains, with an altitude of 1000-3500 meters. 3) High mountains, with an altitude of more than 3500 meters. A mountain stretching in a linear shape is called a mountain range. A number of parallel or roughly parallel mountain ranges with genetic links are called a mountain system, such as the Alps-Himalaya mountain system. Hill: an undulating area with an altitude of less than 500 meters and a relative elevation difference of less than 200 meters. Plain: an area with relatively flat terrain, a large area and a relative elevation difference of only tens of meters. Plateau: a flat and broad area with an altitude of more than 600 meters. Basin: a basin-like area that is high around the edges and low in the middle.", "Continental slope: the zone with an obviously steep slope on the outer side of the continental shelf. The water depth range is about 130-2000 meters, and the average slope is about 4°17′. The width varies from place to place, and submarine canyons and slope terraces are often developed on it. Continental rise: the gently inclined zone between the continental slope and the ocean basin, with a slope of 5′-35′, mostly distributed on the seabed at water depths of 2000-5000 meters. The continental rise is mainly formed by the accumulation of turbidites and slump material derived from the continental slope.", "It refers to the shallow submarine platform surrounding the continent, with flat terrain and an average slope of only about 0°07′, an average width of about 75 kilometers, an average depth of about 60 meters, and a lower boundary at about 130 meters depth. Island arc: arc-shaped islands that extend far and are distributed in belts. The island arc protrudes toward the ocean, with the continent on its inner side. Trench: a narrow, elongated depression more than 6000 meters deep, often developed on the outer side of the island arc, about a few to dozens of kilometers wide. The island arc and trench form an arc-trench system, often developed at the boundary between land and ocean. Ocean basin: the main body of the seabed, a relatively flat zone between the continental margin and the mid-ocean ridge, with a general water depth of 4000-6000 meters. Deep-sea plain: a gentle zone near the continental margin with an average depth of about 4877 meters and a very small slope (<1/1000).", "Rising more than 1000 meters above the ocean basin floor, seamounts are isolated conical highlands, mostly formed by submarine volcanic eruptions.", "It is mainly distributed on both sides of the mid-ocean ridge and is a ridge-like uplifted highland, characterized by no or few earthquakes.", "The global ocean-floor mountain system crossing the oceans is called the mid-ocean ridge. It is the zone where oceanic crust is generated, and its seismic and volcanic activities are very strong. There is a huge rift in the central part of the mid-ocean ridge called the central rift.", "The sphere of organisms and their living activities.", "The huge heat energy inside the earth.", "1) The decay energy of radioactive elements (the main source of geothermal energy). 2) Chemical reaction energy. 3) Gravitational energy. 4) The rotational energy of the earth. 5) Crystallization energy, etc.", "The increase in temperature for every 100 meters of depth.", "The temperature from the surface down to the different depths of the earth's interior, that is, the temperature in the variable temperature layer, the constant temperature layer and the warming layer.", "The discontinuity, at a depth of several tens of kilometers underground, where the propagation velocity of seismic waves changes abruptly; it is the interface between the lid (cover layer) of the upper mantle and the low-velocity layer.", "At a depth of about 2891 km (roughly 3000 km underground), the discontinuity where the propagation of seismic waves changes abruptly; it is the interface between the lower mantle and the outer core.", "The earth's crust, composed of solid rocks, is the outermost layer of the earth.", "The shell between the Moho surface and the Gutenberg surface.", "The part from the Gutenberg surface to the center of the earth.", "The lower crust has an average thickness of about 15 km and an average density of about 2.98 g/cm³; its material composition is similar to basalt.", "From below the Moho surface down to about 80 km, the cover (lid) of the upper mantle is solid; together with the crust above the Moho surface, it forms the rigid lithosphere.", "At depths of 80-220 km, the seismic wave velocity decreases markedly, and shear waves locally cannot pass; it is inferred that part of the material is in a molten state and is plastic.", "It is an aggregate of one or several minerals formed by various geological processes and stable under certain geological and physicochemical conditions.", "Geological processes generated by internal (endogenic) geological agents.", "Geological processes generated by external (exogenic) geological agents.", "The change and development of the material composition, internal structure and surface morphology of the crust or lithosphere caused by various geological agents.", "Because the surface of the earth is uneven and the density of its interior is non-uniform, the actually measured gravity value differs from the theoretical value.", "The measured geomagnetic values are inconsistent with the normal (theoretical) values.", "The magnetic north-south direction indicated by the geomagnetic needle (magnetic lines of force) is the magnetic meridian direction, and there is an included angle between it and the geographic meridian.", "The weight percentage of chemical elements in the earth's crust.", "The average content of elements in the crust. Chapter II", "It refers to relatively stable natural elements and compounds formed under certain geological and physicochemical conditions.", "Minerals composed of a single element, which are rare in nature.", "Minerals formed by the combination of two or more elements; most minerals in nature are compounds.", "A solid in geometric polyhedral form with a regular arrangement of its internal particles (atoms, ions or molecules).", "Its internal particles are neither arranged regularly nor does it have a geometric polyhedral shape.", "Minerals with a regular, repeated arrangement of internal particles (atoms, ions or molecules) are called crystalline minerals.", "Minerals with an irregular arrangement of internal particles (atoms, ions or molecules) are called amorphous (non-crystalline) minerals.", "The equivalent points in the crystal structure are abstracted, and these equivalent points form a geometric figure that extends infinitely in three-dimensional space.
This geometric figure is called the spatial lattice.", "In the process of mineral crystallization, a particle (ion, atom) in the crystal is replaced by other particles with similar chemical properties, without changing the original crystal structure, only changing the physical and chemical properties of the mineral.", "The color of a mineral is the result of its absorption of visible light of different wavelengths. It is divided into self color, other color and false color.", "The inherent color of a mineral; for example, rock crystal (SiO2) is colorless and transparent, and malachite is emerald green.", "The color a mineral takes on owing to the mechanical admixture of foreign colored impurities, such as amethyst and smoky quartz.", "Light interference caused by an oxide film or cleavage surfaces on the mineral surface.", "It refers to the color of the mineral powder, generally the color of the trace left after the mineral is rubbed on a white porcelain plate. Because the streak of a mineral is more stable than its color, identifying minerals by streak is more reliable than by color.", "The ability of a mineral surface to reflect visible light is called luster. According to the intensity of the reflected light, luster can be divided into metallic, semi-metallic and non-metallic luster.", "It refers to the degree of resistance of minerals to scratching, pressing and grinding by external forces. Based on the principle that a mineral of higher hardness scratches a mineral of lower hardness, F. Mohs of Germany selected 10 minerals as standards and divided hardness into 10 grades. The 10 minerals are 1. talc, 2. gypsum, 3. calcite, 4. fluorite, 5. apatite, 6. orthoclase, 7. quartz, 8. topaz, 9. corundum, and 10. diamond. (The hardness of a mineral refers to its degree of resistance to external mechanical forces, which can be divided into scratch hardness, indentation hardness and grinding hardness according to the nature of the mechanical force.)", "The property of a mineral breaking into smooth planes in certain directions under the impact or compression of an external force is called cleavage, and the smooth surfaces are called cleavage surfaces.", "It refers to a fracture surface in an arbitrary direction after stress. Fractures can form in crystalline minerals as well as in cryptocrystalline or amorphous mineral aggregates.", "In a broad sense, jade refers to minerals or mineral aggregates with bright colors, attractive luster and tough texture. What people commonly call jewelry refers to precious jade. Chapter III", "The degree of crystallinity, the particle size, the degree of automorphism of its constituent substances (minerals and glass) and their interrelationships.", "The arrangement and filling mode of the different mineral aggregates in a rock.", "The process in which magma rises and migrates along fractures to a certain part of the crust, where it condenses and solidifies.", "Magma erupted onto the surface.", "A kind of plutonic intrusive rock mass, large in scale and oblong in plan.", "A hot, viscous melt formed at high temperature deep in the crust or upper mantle, with silicates as the main component and rich in volatiles.",
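The Mohs scale in the definitions above is used by bracketing: an unknown mineral scratches the highest standard mineral it can and is in turn scratched by the next one. A minimal lookup sketch; the function and variable names are illustrative assumptions, not from the source:

```python
# Sketch: the Mohs scale as a lookup table, with a helper that brackets the
# hardness of an unknown mineral between two standard minerals.
MOHS = {1: "talc", 2: "gypsum", 3: "calcite", 4: "fluorite", 5: "apatite",
        6: "orthoclase", 7: "quartz", 8: "topaz", 9: "corundum", 10: "diamond"}

def bracket_hardness(hardest_scratched: int) -> str:
    """The unknown mineral scratches standard N but is scratched by N+1."""
    lo = MOHS[hardest_scratched]
    hi = MOHS.get(hardest_scratched + 1)
    return (f"between {hardest_scratched} ({lo}) and {hardest_scratched + 1} ({hi})"
            if hi else f"at least 10 ({lo})")

print(bracket_hardness(6))   # -> between 6 (orthoclase) and 7 (quartz)
```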
"A rock formed by the condensation of lava.", "The whole geological process of magma formation, migration and condensation, including the changes of the magma itself and its influence on the lithosphere.", "The process of magma erupting onto the surface and condensing there.", "Underground magma reaches the surface along a certain channel in the crust.", "The rock surrounding an intrusion.", "The minerals that make up the rock are evenly distributed and have no directional arrangement. Chapter IV", "The process in which rocks near the earth's surface are destroyed in situ under the action of the atmosphere, temperature, water and organisms. [It refers to the process of decomposition and destruction of rocks and minerals in situ due to the effects of temperature, the atmosphere, water and organisms under supergene conditions (normal temperature, atmospheric pressure, oxygen-rich and water-bearing).]", "Weathering products formed by long-term weathering on the surface of bedrock.", "The process by which the wind carries debris to other places.", "It is a loose material composed of organic matter, humus, minerals, water and air. Weathering crust: on the surface of the continental crust, a discontinuous thin shell composed of residual material and soil.", "Rainwater or snowmelt flowing downward along the slope of the ground.", "Slope flow converging into gullies or mountain streams, forming fast-running streams of water.", "The destructive effect of various exogenic agents, in a state of movement, on the ground rocks and weathering products.", "The river continuously erodes and damages the riverbed in the horizontal direction, making the valley slopes recede and widening the valley.", "The destructive effect of a flood flow on the rocks in a valley, exerted by its own power and by the sediment it carries.", "When a flood flows out of a mountain portal, a large amount of debris is deposited, forming diluvium.", "Diluvium is mostly fan-shaped in plan.", "Slope flow carries the debris washed down from the upper part of a slope to its lower part for accumulation.", "Sediments formed by the mechanical deposition of rivers.", "Owing to the lateral erosion of the river, the original riverbed gradually becomes a floodplain.", "Water buried in soil layers or rock voids below the ground (the water body existing in rocks and loose deposits).", "The dissolution and destruction of soluble rock by groundwater. [The destructive effect of groundwater on the surrounding rocks in the course of its movement. It can be divided into mechanical subsurface erosion (seen only in underground rivers) and chemical subsurface erosion (also known as karstification). The landforms developed in areas of soluble rock under the action of groundwater are called karst landforms (karst topography).] The wave movement of sea water: the wave movement of sea water caused by wind and other factors is called wave action.", "A moving ice body on land formed from snow.", "An area covered with snow all year round is called a snowfield, and its lower limit is called the snow line.", "A valley formed by glacial erosion. Cirque: a hollow with three steep walls formed by glacial erosion.", "A jagged ridge between two adjacent cirques.", "The residual peak shared by several adjacent cirques.", "The regular wavy movement of sea water.", "The periodic rise and fall of sea water caused by the tidal force generated by the moon-earth system.", "It is a high-density underwater gravity flow containing a large amount of suspended material in the ocean or a lake.", "A bay that has been cut off from the open sea by the growth of a sand bar or spit.", "The dissolution by seawater is most obvious on carbonate coasts. Seawater contains abundant carbon dioxide, so it has a strong corrosive effect.", "A water-filled depression on the continent.", "An area on land where extremely moisture-loving, hygrophilous plants grow in large numbers and peat accumulates.", "The process of rock destruction caused by the impact and abrasion of the wind on the ground, exerted by its own kinetic energy and by the sand it carries. Chapter 5 Fossils: paleontological remains and traces preserved in the strata by natural processes.", "Without the addition of foreign material, an originally homogeneous magma, relying on its own evolution, produces magmas of different compositions; this is the whole process of magmatic differentiation.", "Endogenic geological processes such as crustal movement, magmatic activity and changes in geothermal flow alter the physical and chemical conditions, so that the composition, texture and structure of the original rock are transformed while the rock in the crust remains essentially solid.", "It is the process in which minerals in the original rock are dissolved, their components migrate, and they then precipitate and crystallize during rock metamorphism.", "In the process of metamorphism, when material components are brought in and out by the migration of the fluid phase, interactions between the components are caused.", "It is caused by magmatic activity and is a small-scale, local metamorphism that occurs near the contact zone between magmatic rocks and the surrounding rock.", "It is a kind of metamorphism caused by the metasomatism of the surrounding rock by chemically mobile gaseous-aqueous solutions, which changes the mineral composition, texture, etc. of the rock.", "Under the stress generated by tectonic movement, rocks or their constituent minerals are deformed, broken and even recrystallized.", "It refers to a kind of metamorphism with a large areal distribution and complex causative factors.", "On the basis of regional metamorphism, the internal heat flow in the crust continues to rise, resulting in the infiltration, metasomatism and penetration of deep hydrothermal fluids and locally remelted magma, which metamorphose the rocks.
Chapter VII", "The internal forces cause deformation and displacement of the crust and even the lithosphere.", "The rocks (strata or rock masses) in the crust, especially the more brittle rocks close to the surface, are prone to fracture and dislocation under stress; this is generally called fault structure.", "The bending phenomenon of rock strata.", "It refers to the downward protruding bend of the rock strata, with the strata on both wings inclining from the two sides toward the center.", "It refers to the upward protruding bend of the rock strata, with the strata on both wings inclining outward from the center.", "The rock blocks have obvious displacement along the fracture surface.", "A fault with a relatively rising hanging wall and a relatively falling footwall.", "The occurrences of the rock strata are roughly parallel, but there is a significant stratigraphic gap.", "The spatial distribution and occurrence status of geological bodies (rock strata, rock masses, ore bodies, etc.) in the crust.", "Two faults, or two groups of roughly parallel faults, in which the central rock block is a common rising wall and the blocks on its two sides are falling walls; such a fault combination is a horst.", "Two faults, or two groups of roughly parallel faults, in which the central rock block is the common falling wall and the blocks on its two sides are rising walls; such a fault combination is a graben. Stratum: a layer or group of rock layers with a definite horizon; that is, strata carry the meaning of age and sequence.", "Part of the strata is missing between two sets of successively deposited strata, resulting in a discontinuity between the ages of the upper and lower strata.", "A measure of the magnitude of the energy released by an earthquake.", "The degree of damage to the ground surface and buildings caused by an earthquake.", "It refers to the shape and size of a rock mass, its contact relationship with the surrounding rock, and the geological tectonic environment in which it was formed (i.e. the output state of the geological body in three-dimensional space). Chapter VIII", "It is merely a rock containing a certain amount of useful minerals.", "Natural mineral resources that exist in the lithosphere and can be used by the national economy.", "An enrichment of useful minerals formed by certain geological processes that can meet current mining and utilization requirements in both quality and quantity.", "A block in a deposit that can be exploited.", "The unit content of the useful component (element, compound or mineral) contained in the ore, generally expressed as a percentage.", "Useful mineral resources from which metallic elements can be extracted.", "Minerals from which non-metallic products can be extracted or whose properties can be directly used." ] }, "choice": { "question": [ "The following structures that can coexist in the same type of rock are:\nChoose from:\n\nA. Vesicular, amygdaloidal, phyllitic\nB. Slaty, gneissic, pillow\nC. Ripple marks, mud cracks, parallel bedding", "The most soluble of the following rocks is:\nChoose from:\n\nA. Limestone\nB. Dolomite\nC. Mudstone", "The following materials are mainly formed by groundwater metasomatism:\nChoose from:\n\nA. Nodule\nB. Sinter\nC. Silicified wood\nD. Halite pseudomorph", "Among the following structures, the one with the smallest displacement of the geological body is:\nChoose from:\n\nA. Normal fault\nB. Reverse fault\nC. Translational fault\nD. Joint", "The following geological ages are arranged from old to new as:\nChoose from:\n\nA. S-O-Z-T-P\nB. Z-O-S-P-T\nC. Z-S-O-T-P\nD. S-Z-O-P-T",
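The age-ordering question above can be checked mechanically once each symbol is mapped to its position in the standard geological succession. A small sketch; the helper names are assumptions, and the succession list covers only the symbols used here:

```python
# Sketch: verify which option lists the period symbols from old to new.
ORDER = ["Z", "O", "S", "D", "C", "P", "T", "J", "K", "E", "N", "Q"]
AGE = {sym: i for i, sym in enumerate(ORDER)}      # symbol -> position in time

def is_old_to_new(symbols):
    """True if every symbol is older than the one that follows it."""
    idx = [AGE[s] for s in symbols]
    return all(a < b for a, b in zip(idx, idx[1:]))

options = {"A": "S-O-Z-T-P", "B": "Z-O-S-P-T", "C": "Z-S-O-T-P", "D": "S-Z-O-P-T"}
for label, seq in options.items():
    print(label, is_old_to_new(seq.split("-")))    # only B prints True
```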
"The following minerals with perfect cleavage are:\nChoose from:\n\nA. Quartz\nB. Muscovite\nC. Pyroxene", "The sequence of the various processes in the formation of sedimentary rocks is:\nChoose from:\n\nA. Weathering-transportation-denudation-sedimentation-diagenesis\nB. Weathering-denudation-transportation-sedimentation-diagenesis\nC. Denudation-weathering-transportation-sedimentation-diagenesis", "Chemical weathering proceeds most readily in:\nChoose from:\n\nA. Cold and dry areas\nB. Warm and humid areas\nC. Arid areas", "The plate-like intrusive rock mass formed by magma intruding along a fault plane (zone) in the surrounding rock is called a:\nChoose from:\n\nA. Stock\nB. Sill\nC. Dike", "Most volcanoes on the earth are located in:\nChoose from:\n\nA. Continental rift zones\nB. The Circum-Pacific belt\nC. The Alpine-Mediterranean-Himalayan belt", "The process in which the original rock, remaining essentially in the solid state, is changed into a new rock mainly by endogenic geological processes is called:\nChoose from:\n\nA. Magmatism\nB. Weathering\nC. Metamorphism", "The main change when limestone is metamorphosed into marble is in:\nChoose from:\n\nA. Mineral composition\nB. Rock color\nC. Rock texture", "River downcutting (vertical erosion) mainly occurs in the:\nChoose from:\n\nA. Downstream\nB. Midstream\nC. Upstream", "The batholith is formed by magma:\nChoose from:\n\nA. Ejection\nB. Intrusion\nC. Metasomatism", "Root wedging by plants belongs to:\nChoose from:\n\nA. Chemical weathering\nB. Physical weathering\nC. Biological weathering", "The interface between the crust and the mantle is called the:\nChoose from:\n\nA. Gutenberg surface\nB. Conrad surface\nC. Moho surface", "Gabbro is:\nChoose from:\n\nA. An extrusive rock\nB. An intrusive rock\nC. A metamorphic rock", "Among the following rocks, the one with the weakest resistance to chemical weathering is:\nChoose from:\n\nA. Granite\nB. Limestone\nC. Quartz sandstone", "Gymnosperms flourished in the:\nChoose from:\n\nA. Paleozoic\nB. Cenozoic\nC. Mesozoic", "Streak refers to:\nChoose from:\n\nA. The color of the mineral powder\nB. The color of the mineral surface", "The morphology of amphibole is:\nChoose from:\n\nA. Flaky\nB. Granular\nC. Long columnar", "Reptiles flourished in the:\nChoose from:\n\nA. Paleozoic\nB. Cenozoic\nC. Mesozoic", "In the Mohs hardness scale, the mineral with hardness grade 7 is:\nChoose from:\n\nA. Gypsum\nB. Quartz\nC. Topaz", "Quiescent volcanic eruptions emit mainly:\nChoose from:\n\nA. Acid lava\nB. Basic lava\nC. Neutral lava", "The main mineral components of gabbro are:\nChoose from:\n\nA. Hornblende and plagioclase\nB. Pyroxene and plagioclase\nC. Plagioclase and quartz", "In the process of transporting detritus, glaciers:\nChoose from:\n\nA. Have a sorting effect\nB. Have a rounding effect\nC. Have no sorting (or rounding) effect", "The order of chemical deposition in lakes of arid climate regions is:\nChoose from:\n\nA. Sulfate-chloride-carbonate\nB. Chloride-sulfate-carbonate\nC. Carbonate-sulfate-chloride", "In which period did humans, true elephants and true horses appear:\nChoose from:\n\nA. J\nB. K\nC. T\nD. Q", "The Indosinian movement occurred in the:\nChoose from:\n\nA. Carboniferous\nB. Permian\nC. Triassic\nD. Cretaceous", "Skarn-type deposits were formed by which of the following kinds of metamorphism:\nChoose from:\n\nA. Contact metasomatic metamorphism\nB. Regional metamorphism\nC. Burial metamorphism\nD. Dynamic metamorphism", "The Caledonian movement occurred in the:\nChoose from:\n\nA. Mesozoic\nB. Late Paleozoic\nC. Early Paleozoic\nD. Cenozoic", "Which of the following fold structures must have undergone stratigraphic inversion:\nChoose from:\n\nA. Inclined fold\nB. Upright fold\nC. Plunging fold\nD. Recumbent fold", "In a nappe structure, if the younger rock block is exposed within the older rock block owing to strong erosion, the structure is called a:\nChoose from:\n\nA. Klippe\nB. Structural window\nC. Overthrust fault\nD. Thrust fault", "Foliated structures are common in regional metamorphic rocks. Which of the following indicates the highest grade of metamorphism:\nChoose from:\n\nA. Slaty structure\nB. Phyllitic structure\nC. Schistose structure\nD. Gneissic structure", "According to isotopic dating, the longest geological era is the:\nChoose from:\n\nA. Proterozoic\nB. Paleozoic\nC. Mesozoic\nD. Cenozoic", "If trilobites are found in a stratum, the stratigraphic age may be:\nChoose from:\n\nA. Early Ordovician\nB. Tertiary\nC. Cretaceous\nD. Early Cambrian", "Which sedimentary formation reflects the transformation from marine to continental facies:\nChoose from:\n\nA. Flysch deposits\nB. Turbidity current deposits\nC. Molasse deposits\nD. Pyroclastic deposits", "Which part of the earth lies between the Gutenberg surface and the Moho surface:\nChoose from:\n\nA. Upper crust\nB. Lower crust\nC. Mantle\nD. Core", "Gymnosperms underwent unprecedented development in the Mesozoic. Which of the following plant fossils does not belong to the gymnosperms:\nChoose from:\n\nA. Gigantopteris\nB. Cycads\nC. Pines and cypresses\nD. Ginkgo", "Erathem, system, series and stage are:\nChoose from:\n\nA. Lithostratigraphic unit\nB. Time unit\nC. Biological taxonomic unit\nD. Chronostratigraphic unit", "What grade does quartz occupy in the Mohs hardness scale:\nChoose from:\n\nA. 5\nB. 6\nC. 7", "Which of the following rocks belongs to the dynamic metamorphic rocks:\nChoose from:\n\nA. Schist\nB. Gneiss\nC. Mylonite\nD. Marble", "Which of the following is a single-element (native element) mineral:\nChoose from:\n\nA. Quartz\nB. Plagioclase\nC. Biotite\nD. Diamond", "Fine-grained dikes belong to:\nChoose from:\n\nA. Plutonic rocks\nB. Hypabyssal rocks\nC. Sedimentary rocks\nD. Subvolcanic rocks", "The mineral group uniquely formed by metamorphism is:\nChoose from:\n\nA. Olivine, pyroxene, hornblende, biotite\nB. Andalusite, hornblende, kaolinite, magnetite\nC. Sericite, andalusite, wollastonite, garnet\nD. Pyroxene, kyanite, graphite, garnet", "Flow (streamline) structure generally occurs in which of the following rock types:\nChoose from:\n\nA. Basalt\nB. Andesite\nC. Rhyolite\nD. Trachyte", "Basalt belongs to:\nChoose from:\n\nA. Ultrabasic rocks\nB. Basic rocks\nC. Acid rocks\nD. Neutral rocks", "The shape of the earth is:\nChoose from:\n\nA. Spherical\nB. An ideal ellipsoid of revolution\nC. Apple-shaped\nD. An approximately pear-shaped ellipsoid of revolution", "The following units that do not belong to the continental surface forms are:\nChoose from:\n\nA. Island arc\nB. Hills\nC. Rift valley\nD. Basin", "Among the following places, the earth's gravity is greatest at:\nChoose from:\n\nA. The equator\nB. Antarctica\nC. The Tropic of Cancer\nD. Beijing", "Among the following areas, the area with the highest heat flow value is the:\nChoose from:\n\nA. Continental area\nB. Pacific\nC. Atlantic\nD. Indian Ocean", "The layer of the atmosphere closely related to human activities and geological processes is the:\nChoose from:\n\nA. Troposphere\nB. Stratosphere\nC. Mesosphere\nD. Thermosphere", "Among the following areas, the area with the smallest earthquake probability is:\nChoose from:\n\nA. Japan\nB. Taiwan\nC. Alaska\nD. Guangzhou", "Among the following silicate minerals, the most easily weathered is:\nChoose from:\n\nA. Quartz\nB. Biotite\nC. Olivine\nD. Hornblende", "Earthquakes distributed along the mid-ocean ridges are characterized by:\nChoose from:\n\nA. Shallow focus and large magnitude\nB. Deep focus and large magnitude\nC. Shallow focus and small magnitude\nD. Deep focus and small magnitude", "The following phenomena that do not belong to groundwater deposition are:\nChoose from:\n\nA. Dripstones in karst caves\nB. Petrified wood\nC. Sinter\nD. Geopetal structure", "The following are not minerals:\nChoose from:\n\nA. Ice\nB. Quartz\nC. Coal\nD. Native gold", "Among the following minerals, the one whose hardness is greater than that of quartz is:\nChoose from:\n\nA. Topaz\nB. Fluorite\nC. Orthoclase\nD. Calcite", "Granite belongs to:\nChoose from:\n\nA. Acid plutonic intrusive rocks\nB. Neutral hypabyssal intrusive rocks\nC. Basic plutonic intrusive rocks\nD. Basic hypabyssal intrusive rocks", "A clastic rock containing 8% medium gravel, 10% fine gravel, 17% coarse sand, 16% medium sand, 18% fine sand, 14% coarse silt and 17% fine silt should be named:\nChoose from:\n\nA. Silty sandstone containing gravel\nB. Gravelly sandstone containing silt\nC. Gravelly siltstone\nD. Silty conglomerate", "Among the wave zones of a barrier-free coast, the zone with the highest energy is the:\nChoose from:\n\nA. Rising wave zone\nB. Breaker zone\nC. Broken wave zone\nD. Swash zone", "The sand flat in the tidal-flat subfacies of a barrier coast belongs to the:\nChoose from:\n\nA. High-tide flat\nB. Mid-tide flat\nC. Low-tide flat\nD. Supratidal flat", "The Yanshan tectonic stage belongs to the:\nChoose from:\n\nA. Cenozoic\nB. Mesozoic\nC. Paleozoic\nD. Proterozoic", "The most important standard fossils of the Cambrian are:\nChoose from:\n\nA. Graptolites\nB. Corals\nC. Semi-freshwater fish\nD. Trilobites", "According to the distribution of temperature below the continental surface and the source of geothermal energy, the interior of the earth can be divided into the following temperature layers:\nChoose from:\n\nA. High-temperature layer\nB. Outer thermal (variable temperature) layer\nC. Constant temperature layer\nD. Inner thermal (warming) layer", "The continental crust is composed of:\nChoose from:\n\nA. Silicon-aluminum layer\nB. Silicon-magnesium layer\nC. Magnesium-iron layer\nD. Ferrosilicon layer", "Among the following actions, those producing mechanical weathering of rocks are:\nChoose from:\n\nA. Temperature change\nB. Frost wedging\nC. Unloading of rock\nD. Root wedging", "When a river flows, its energy is related to:\nChoose from:\n\nA. Riverbed width\nB. River discharge\nC. Velocity of the river water\nD. Height difference of the riverbed topography", "The main modes of seawater movement are:\nChoose from:\n\nA. Waves\nB. Tides\nC. Ocean currents\nD. Turbidity currents", "The common grains in carbonate rock textures are:\nChoose from:\n\nA. Intraclasts\nB. Bioclasts\nC. Ooids\nD. Pellets", "A typical modern barrier-free coastal sedimentary environment can be divided into:\nChoose from:\n\nA. Lagoon\nB. Coastal dunes\nC. Backshore\nD. Foreshore", "The Mesozoic includes the:\nChoose from:\n\nA. Permian\nB. Triassic\nC. Jurassic\nD. Cretaceous", "Most coal is found in the following strata:\nChoose from:\n\nA. Carboniferous\nB. Jurassic\nC. Permian\nD. Triassic", "The occurrence elements of rock strata include:\nChoose from:\n\nA. Strike\nB. Thickness\nC. Dip direction\nD. Dip angle",
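The clastic-rock naming question above follows the common three-level rule: the grain-size class making up 50% or more gives the main rock name, a class at 25-50% becomes a qualifying adjective (e.g. "silty"), and a class at 10-25% adds a "containing ..." prefix. A minimal sketch applying that rule to the percentages in the question; the threshold values follow the usual textbook convention and the names are illustrative:

```python
# Sketch: three-level naming of a clastic rock from grain-size fractions.
# Percentages are taken from the question above; thresholds are the common rule.
fractions = {"gravel": 8 + 10,            # medium + fine gravel   = 18%
             "sand":   17 + 16 + 18,      # coarse + medium + fine = 51%
             "silt":   14 + 17}           # coarse + fine silt     = 31%

main = max(fractions, key=fractions.get)  # 'sand' (>= 50% -> sandstone)
qualifier = [g for g, p in fractions.items() if g != main and 25 <= p < 50]
minor     = [g for g, p in fractions.items() if g != main and 10 <= p < 25]

print(main, qualifier, minor)
# -> sand ['silt'] ['gravel']  =>  silty sandstone containing gravel (option A)
```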
"The geometric elements of faults include:\nChoose from:\n\nA. Fault plane\nB. Fault walls\nC. Displacement\nD. Dip angle", "The first-order structural units in petroliferous basins are:\nChoose from:\n\nA. Anticline zone\nB. Uplift\nC. Depression\nD. Slope", "The formation conditions of gravity flows include:\nChoose from:\n\nA. A certain water depth\nB. Provenance conditions\nC. A trigger mechanism\nD. A stagnant water environment", "In which period did humans, true elephants and true horses appear:\nChoose from:\n\nA. J\nB. K\nC. T\nD. Q", "The Indosinian movement occurred in the:\nChoose from:\n\nA. Carboniferous\nB. Permian\nC. Triassic\nD. Cretaceous", "Skarn-type deposits were formed by which of the following kinds of metamorphism:\nChoose from:\n\nA. Contact metasomatic metamorphism\nB. Regional metamorphism\nC. Burial metamorphism\nD. Dynamic metamorphism", "The Caledonian movement occurred in the:\nChoose from:\n\nA. Mesozoic\nB. Late Paleozoic\nC. Early Paleozoic\nD. Cenozoic", "Which of the following fold structures must have undergone stratigraphic inversion:\nChoose from:\n\nA. Inclined fold\nB. Upright fold\nC. Plunging fold\nD. Recumbent fold", "In a nappe structure, if the younger rock block is exposed within the older rock block owing to strong erosion, the structure is called a:\nChoose from:\n\nA. Klippe\nB. Structural window\nC. Overthrust fault\nD. Thrust fault", "Foliated structures are common in regional metamorphic rocks. Which of the following indicates the highest grade of metamorphism:\nChoose from:\n\nA. Slaty structure\nB. Phyllitic structure\nC. Schistose structure\nD. Gneissic structure", "According to isotopic dating, the longest geological era is the:\nChoose from:\n\nA. Proterozoic\nB. Paleozoic\nC. Mesozoic\nD. Cenozoic", "If trilobites are found in a stratum, the stratigraphic age may be:\nChoose from:\n\nA. Early Ordovician\nB. Tertiary\nC. Cretaceous\nD. Early Cambrian", "Which part of the earth lies between the Gutenberg surface and the Moho surface:\nChoose from:\n\nA. Upper crust\nB. Lower crust\nC. Mantle\nD. Core", "Gymnosperms underwent unprecedented development in the Mesozoic. Which of the following plant fossils does not belong to the gymnosperms:\nChoose from:\n\nA. Gigantopteris\nB. Cycads\nC. Pines and cypresses\nD. Ginkgo", "Erathem, system, series and stage are:\nChoose from:\n\nA. Lithostratigraphic unit\nB. Time unit\nC. Biological taxonomic unit\nD. Chronostratigraphic unit", "What grade does quartz occupy in the Mohs hardness scale:\nChoose from:\n\nA. 5\nB. 6\nC. 7", "The shape of the earth is:\nChoose from:\n\nA. Spherical\nB. An ideal ellipsoid of revolution\nC. Apple-shaped\nD. An approximately pear-shaped ellipsoid of revolution", "The following units that do not belong to the continental surface forms are:\nChoose from:\n\nA. Island arc\nB. Hills\nC. Rift valley\nD. Basin", "Among the following places, the earth's gravity is greatest at:\nChoose from:\n\nA. The equator\nB. Antarctica\nC. The Tropic of Cancer\nD. Beijing", "Among the following areas, the area with the highest heat flow value is the:\nChoose from:\n\nA. Continental area\nB. Pacific\nC. Atlantic\nD. Indian Ocean", "The layer of the atmosphere closely related to human activities and geological processes is the:\nChoose from:\n\nA. Troposphere\nB. Stratosphere\nC. Mesosphere\nD. Thermosphere", "Among the following areas, the area with the smallest earthquake probability is:\nChoose from:\n\nA. Japan\nB. Taiwan\nC. Alaska\nD. Guangzhou", "Among the following silicate minerals, the most easily weathered is:\nChoose from:\n\nA. Quartz\nB. Biotite\nC. Olivine\nD. Hornblende", "Earthquakes distributed along the mid-ocean ridges are characterized by:\nChoose from:\n\nA. Shallow focus and large magnitude\nB. Deep focus and large magnitude\nC. Shallow focus and small magnitude\nD. Deep focus and small magnitude", "The following phenomena that do not belong to groundwater deposition are:\nChoose from:\n\nA. Dripstones in karst caves\nB. Petrified wood\nC. Sinter\nD. Geopetal structure", "The following are not minerals:\nChoose from:\n\nA. Ice\nB. Quartz\nC. Coal\nD. Native gold", "Among the following minerals, the one whose hardness is greater than that of quartz is:\nChoose from:\n\nA. Topaz\nB. Fluorite\nC. Orthoclase\nD. Calcite", "Granite belongs to:\nChoose from:\n\nA. Acid plutonic intrusive rocks\nB. Neutral hypabyssal intrusive rocks\nC. Basic plutonic intrusive rocks\nD. Basic hypabyssal intrusive rocks", "A clastic rock containing 8% medium gravel, 10% fine gravel, 17% coarse sand, 16% medium sand, 18% fine sand, 14% coarse silt and 17% fine silt should be named:\nChoose from:\n\nA. Silty sandstone containing gravel\nB. Gravelly sandstone containing silt\nC. Gravelly siltstone\nD. Silty conglomerate", "Among the wave zones of a barrier-free coast, the zone with the highest energy is the:\nChoose from:\n\nA. Rising wave zone\nB. Breaker zone\nC. Broken wave zone\nD. Swash zone", "The sand flat in the tidal-flat subfacies of a barrier coast belongs to the:\nChoose from:\n\nA. High-tide flat\nB. Mid-tide flat\nC. Low-tide flat\nD. Supratidal flat", "The Yanshan tectonic stage belongs to the:\nChoose from:\n\nA. Cenozoic\nB. Mesozoic\nC. Paleozoic\nD. Proterozoic", "The most important standard fossils of the Cambrian are:\nChoose from:\n\nA. Graptolites\nB. Corals\nC. Semi-freshwater fish\nD. Trilobites", "According to the distribution of temperature below the continental surface and the source of geothermal energy, the interior of the earth can be divided into the following temperature layers:\nChoose from:\n\nA. High-temperature layer\nB. Outer thermal (variable temperature) layer\nC. Constant temperature layer\nD. Inner thermal (warming) layer", "The continental crust is composed of:\nChoose from:\n\nA. Silicon-aluminum layer\nB. Silicon-magnesium layer\nC. Magnesium-iron layer\nD. Ferrosilicon layer", "Among the following actions, those producing mechanical weathering of rocks are:\nChoose from:\n\nA. Temperature change\nB. Frost wedging\nC. Unloading of rock\nD. Root wedging", "When a river flows, its energy is related to:\nChoose from:\n\nA. Riverbed width\nB. River discharge\nC. Velocity of the river water\nD. Height difference of the riverbed topography", "The main modes of seawater movement are:\nChoose from:\n\nA. Waves\nB. Tides\nC. Ocean currents\nD. Turbidity currents", "The common grains in carbonate rock textures are:\nChoose from:\n\nA. Intraclasts\nB. Bioclasts\nC. Ooids\nD. Pellets", "A typical modern barrier-free coastal sedimentary environment can be divided into:\nChoose from:\n\nA. Lagoon\nB. Coastal dunes\nC. Backshore\nD. Foreshore", "The Mesozoic includes the:\nChoose from:\n\nA. Permian\nB. Triassic\nC. Jurassic\nD. Cretaceous", "Most coal is found in the following strata:\nChoose from:\n\nA. Carboniferous\nB. Jurassic\nC. Permian\nD. Triassic", "The occurrence elements of rock strata include:\nChoose from:\n\nA. Strike\nB. Thickness\nC. Dip direction\nD. Dip angle",
Inclination", "Geometric elements of faults include:\nChoose from:\n\nA. Fault plane\nB. Break the disk\nC. Displacement\nD. Inclination", "The first-order structural units in petroliferous basins are:\nChoose from:\n\nA. Anticline zone\nB. Rise\nC. Depression\nD. The ramp", "The formation conditions of gravity flow include:\nChoose from:\n\nA. A certain depth\nB. Provenance conditions\nC. Trigger mechanism\nD. Stagnant water environment", "The shape of the earth is:\nChoose from:\n\nA. Spherical\nB. Ideal rotating ellipsoid\nC. Apple shape\nD. An approximately pear-shaped ellipsoid of revolution", "The following units that do not belong to the terrestrial surface form are:\nChoose from:\n\nA. Island arc\nB. The hills\nC. Split\nD. Basin", "Closely related to activities and geological activities in the circle is:\nChoose from:\n\nA. Troposphere\nB. Stratosphere\nC. Middle layer\nD. The warm layer", "According to the change of temperature and density, the circle can be divided into:\nChoose from:\n\nA. Troposphere\nB. Stratosphere\nC. Middle layer\nD. Warm layer", "The terrestrial crust is composed of:\nChoose from:\n\nA. Silicon-aluminum layer\nB. Silicon magnesium layer\nC. Mg-Fe layer\nD. Ferrosilicon layer", "Among the following silicate minerals, the most easily weathered is:\nChoose from:\n\nA. British\nB. Cucumite\nC. Olives\nD. Flash", "The main energy sources for external geological work are:\nChoose from:\n\nA. Radioactive energy\nB. Solar energy\nC. Earth leads energy\nD. Kodeolly", "Among the following rocks, the most easily weathered is:\nChoose from:\n\nA. peridotite; B granite\nB. British sandstone\nC. Coriolis force", "Among the energy sources of geological work, the energy inside the earth includes:\nChoose from:\n\nA. Solar radiation energy\nB. Rotational energy\nC. Heavy energy\nD. Crystallization energy and chemical energy\nE. Gravitational energy of the sun and the moon\nE. Radioactive energy", "Among the following actions, those belonging to rock mechanical weathering are:\nChoose from:\n\nA. Temperature change\nB. Ice splitting\nC. Release weight of rock\nD. Root cleavage", "The main factors affecting weathering are:\nChoose from:\n\nA. Geological camp\nB. Rock structure and structure\nC. Mineral composition of the rock;\nD. Natural geographical conditions ", "The type of ice transport by glaciers is:\nChoose from:\n\nA. Suspension\nB. Jumping\nC. Pushing\nD. Carrying", "Diagenesis mainly includes:\nChoose from:\n\nA. Compaction\nB. Cementation\nC. Recrystallization\nD. Colloidal deposition", "The following are not minerals:\nChoose from:\n\nA. Ice\nB. Quartz\nC. Coal\nD. Native gold", "The water that does not participate in crystal composition and has nothing to do with mineral crystals is:\nChoose from:\n\nA. Zeolite water\nB. Combined water\nC. Interlayer water\nD. Adsorbed water", "Among the following minerals, the hardness is greater than that of Quartz:\nChoose from:\n\nA. Topaz\nB. Fluorite\nC. Orthoclase\nD. Calcite", "Vammica and Miscellaneous mica belong to:\nChoose from:\n\nA. Complete cleavage\nB. Incomplete cleavage\nC. Extremely complete cleavage\nD. Moderate cleavage", "Olivine belongs to:\nChoose from:\n\nA. Cyclic silicate\nB. Silicate with island structure\nC. Silicate with frame structure\nD. Chain structure silicate", "Granite belongs to:\nChoose from:\n\nA. Acidic deep intrusive lumbite\nB. Neutral shallow intrusive intrusive rocks\nC. Basic plutonic intrusive rocks\nD. 
Basic hypabyssal intrusive rocks", "If Fe3+:Fe2+ is more than 3, the color of the sedimentary rock is:\nChoose from:\n\nA. Brown\nB. Hyper colour\nC. Light green-gray\nD. Red or brownish red", "A clastic rock containing 8% medium gravel, 10% fine gravel, 17% coarse sand, 16% medium sand, 18% fine sand, 14% coarse silt and 17% fine silt should be named:\nChoose from:\n\nA. Gravel-bearing silty sandstone\nB. Silt-bearing gravelly sandstone\nC. Gravelly siltstone\nD. Silty conglomerate", "Leafy limestone has:\nChoose from:\n\nA. Biological framework structure\nB. Granular structure\nC. Grain structure\nD. Muddy structure", "The common grains in carbonate rock textures are:\nChoose from:\n\nA. Intraclasts\nB. Bioclasts\nC. Ooids\nD. Pellets", "In what kind of rocks can diamonds be found:\nChoose from:\n\nA. Peridotite\nB. Kimberlite\nC. Gabbro\nD. Andesite", "The Caledonian movement occurred in the:\nChoose from:\n\nA. Early Paleozoic\nB. Late Paleozoic\nC. Cenozoic\nD. Mesozoic", "In which of the following mineral assemblages are all four minerals silicates:\nChoose from:\n\nA. Quartz, olivine, gypsum, potash feldspar\nB. Fluorite, talc, muscovite, plagioclase\nC. Potash feldspar, talc, andalusite, garnet\nD. Pyroxene, calcite, hornblende, pyrite", "Which of the following fold structures must have undergone stratigraphic inversion:\nChoose from:\n\nA. Inclined fold\nB. Upright fold\nC. Overturned fold\nD. Recumbent fold", "In a nappe structure, if older rocks are exposed above the younger rocks due to strong erosion, the structure is called:\nChoose from:\n\nA. Klippe\nB. Structural window\nC. Overthrust fault\nD. Thrust fault", "Foliation is a common structure in regional metamorphic rocks. Which of the following reflects the deepest metamorphism:\nChoose from:\n\nA. Slaty structure\nB. Phyllitic structure\nC. Schistose structure\nD. Gneissic structure", "Flow structure generally occurs in which of the following rock types:\nChoose from:\n\nA. Basalt\nB. Andesite\nC. Rhyolite\nD. Trachyte", "If trilobites are found in the strata, then the stratigraphic age is:\nChoose from:\n\nA. Early Ordovician\nB. Tertiary\nC. Cretaceous\nD. Early Cambrian", "Which of the following minerals is a clay mineral:\nChoose from:\n\nA. Kaolinite\nB. Calcite\nC. Garnet\nD. Pyrite", "At the end of which period did the dinosaurs become totally extinct:\nChoose from:\n\nA. Ordovician\nB. Tertiary\nC. Cretaceous\nD. Cambrian", "The part above the Moho is called:\nChoose from:\n\nA. Upper crust\nB. Lower crust\nC. Core\nD. Mantle", "The era when human beings appeared was:\nChoose from:\n\nA. Tertiary\nB. Quaternary\nC. Sinian\nD. Permian", "The mineral group uniquely formed by metamorphism is:\nChoose from:\n\nA. Sericite, andalusite, wollastonite, garnet\nB. Andalusite, hornblende, kaolinite, magnetite\nC. Olivine, pyroxene, hornblende, biotite\nD. Pyroxene, kyanite, graphite, garnet", "What grade does diamond have on the Mohs hardness scale:\nChoose from:\n\nA. 5\nB. 7\nC. 8", "Which of the following rocks does not belong to dynamic metamorphic rocks:\nChoose from:\n\nA. Schist\nB. Fault breccia\nC. Mylonite\nD. Cataclasite", "Peridotite belongs to:\nChoose from:\n\nA. Ultrabasic rocks\nB. Basic rocks\nC. Neutral rocks\nD. Acid rocks", "The erathem, system, series and stage are:\nChoose from:\n\nA. Lithostratigraphic units\nB. Time units\nC. Biological taxonomic units\nD. 
Chronostratigraphic units", "According to isotopic dating, the longest geological era is:\nChoose from:\n\nA. Proterozoic\nB. Paleozoic\nC. Mesozoic\nD. Cenozoic", "Flow structure generally occurs in which of the following rock types:\nChoose from:\n\nA. Basalt\nB. Andesite\nC. Rhyolite\nD. Trachyte", "At the end of which period did the dinosaurs become totally extinct:\nChoose from:\n\nA. Ordovician\nB. Tertiary\nC. Cretaceous\nD. Cambrian", "In what kind of rocks can diamonds be found:\nChoose from:\n\nA. Peridotite\nB. Kimberlite\nC. Gabbro\nD. Andesite", "The Caledonian movement occurred in the:\nChoose from:\n\nA. Early Paleozoic\nB. Late Paleozoic\nC. Mesozoic\nD. Cenozoic", "In which of the following mineral assemblages are all four minerals silicates:\nChoose from:\n\nA. Quartz, olivine, gypsum, potash feldspar\nB. Fluorite, talc, muscovite, plagioclase\nC. Potash feldspar, talc, andalusite, garnet\nD. Pyroxene, calcite, hornblende, pyrite", "Which of the following fold structures must have undergone stratigraphic inversion:\nChoose from:\n\nA. Inclined fold\nB. Upright fold\nC. Overturned fold\nD. Recumbent fold", "In a nappe structure, if older rocks are exposed above the younger rocks due to strong erosion, the structure is called:\nChoose from:\n\nA. Klippe\nB. Structural window\nC. Overthrust fault\nD. Thrust fault", "Foliation is a common structure in regional metamorphic rocks. Which of the following is a foliated metamorphic structure:\nChoose from:\n\nA. Slaty structure\nB. Phyllitic structure\nC. Schistose structure\nD. Gneissic structure", "A skarn type deposit is which of the following types of deposit:\nChoose from:\n\nA. Endogenetic deposit\nB. Metamorphic deposit\nC. Contact metasomatic deposit\nD. Eluvial deposit", "If trilobites are found in the strata, then the stratigraphic age is:\nChoose from:\n\nA. Early Ordovician\nB. Tertiary\nC. Cretaceous\nD. Early Cambrian", "Which of the following minerals is a clay mineral:\nChoose from:\n\nA. Kaolinite\nB. Calcite\nC. Garnet\nD. Pyrite", "Which sedimentary formation reflects the transformation from marine facies to continental facies:\nChoose from:\n\nA. Flysch deposits\nB. Turbidity current deposits\nC. Molasse deposits\nD. Pyroclastic deposits", "The part above the Moho is called:\nChoose from:\n\nA. Upper crust\nB. Lower crust\nC. Mantle\nD. Core", "The era when human beings appeared was:\nChoose from:\n\nA. Tertiary\nB. Quaternary\nC. Sinian\nD. Permian", "The mineral group uniquely formed by metamorphism is:\nChoose from:\n\nA. Sericite, andalusite, wollastonite, garnet\nB. Andalusite, hornblende, kaolinite, magnetite\nC. Olivine, pyroxene, hornblende, biotite\nD. Pyroxene, kyanite, graphite, garnet", "What grade does diamond have on the Mohs hardness scale:\nChoose from:\n\nA. 5\nB. 7\nC. 8", "Which of the following rocks does not belong to dynamic metamorphic rocks:\nChoose from:\n\nA. Schist\nB. Fault breccia\nC. Mylonite\nD. Cataclasite", "Peridotite belongs to:\nChoose from:\n\nA. Ultrabasic rocks\nB. Basic rocks\nC. Neutral rocks\nD. Acid rocks", "The erathem, system, series and stage are:\nChoose from:\n\nA. Lithostratigraphic units\nB. Time units\nC. Biological taxonomic units\nD. Chronostratigraphic units", "According to isotopic dating, the longest geological era is:\nChoose from:\n\nA. Proterozoic\nB. Paleozoic\nC. Mesozoic\nD. 
Cenozoic", "Streamline structure generally occurs in which of the following rock types:\nChoose from:\n\nA. Basalt\nB. Andesite\nC. Rhyolite\nD. Trachyte", "At which time did the total extinction of dinosaurs occur:\nChoose from:\n\nA. Ordovician\nB. Tertiary\nC. Cretaceous\nD. Cambrian" ], "answer": [ "C", "C", "B", "D", "B", "C", "B", "B", "C", "B", "C", "C", "C", "B", "C", "C", "B", "B", "C", "C", "C", "B", "B", "B", "C", "C", "D", "C", "A", "C", "D", "B", "D", "A", "D", "C", "C", "A", "A", "C", "C", "D", "B", "C", "C", "B", "D", "A", "B", "B", "A", "D", "C", "C", "D", "C", "A", "A", "A", "B", "C", "B", "D", "B", "A", "A", "B", "A", "A", "B", "B", "A", "A", "A", "B", "B", "A", "D", "C", "C", "A", "A", "C", "C", "D", "B", "C", "C", "B", "D", "A", "B", "B", "A", "D", "C", "C", "D", "C", "A", "A", "A", "B", "C", "B", "D", "B", "A", "A", "B", "A", "A", "B", "B", "A", "A", "A", "B", "B", "D", "A", "A", "A", "A", "C", "B", "A", "B", "A", "B", "C", "A", "C", "D", "A", "C", "B", "A", "D", "A", "B", "A", "B", "A", "C", "D", "A", "A", "C", "D", "A", "C", "B", "B", "A", "D", "A", "A", "B", "A", "C", "C", "B", "A", "C", "D", "A", "A", "C", "D", "A", "C", "B", "B", "A", "D", "A", "A", "B", "A", "C", "C" ] }, "completion": { "question": [ "Endodynamic geological processes include ().", "The intensity of river erosion mainly depends on (), etc.", "Glacier denudation can form () and other special landforms", "There are four main forms of sea water movement.", "Limestone is composed of () minerals.", "D is the code of ().", "The way to lengthen the river is", "The most significant zone of sea erosion is", "The deposition of () marks the final stage of chemical deposition of salt lakes.", "Granite is ().", "The zone with the strongest wave erosion is ().", "The zone with the most frequent development of turbidity is ().", "The main site of marine sedimentation is ()", "Due to the continuous expansion of the sea floor, the age of the ocean crust gradually increases with the increase of the distance from the ocean ridge ().", "Most of the glacial valleys shaped by glaciation have a cross section of () shape.", "The reason for the strong folding of the rock stratum is", "Angular unconformity is usually associated with ().", "The lithology that can become a good water-resisting layer is ()", "The boundary of plate structure is ()", "The main geophysical methods used to divide the structure of the earth's inner sphere are", "The chronostratigraphic units are (); The geological age unit is ().", "The earth can be divided into three circles from the surface to the center. These three circles are", "The Mesozoic can be divided into three periods from old to new, namely", "The occurrence factors of rock stratum include", "According to the depth of sea water, the marine environment can be divided into", "According to the direction of river erosion, it can be divided into", "The three elements of geomagnetism are", "The Paleozoic can be divided into six periods from old to new", "The main types of glaciers are", "According to the content of silica in magma, it can be divided into", "The Cenozoic can be divided into three periods from old to new. 
These three periods are", "The active volcanoes in the world are concentrated in three zones, which are", "In the inclined fold, the axial plane is inclined, the rock strata on both wings are inclined (), and the dip angle is ().", "The main ways of physical weathering are", "According to the route of volcanic eruption, it can be divided into", "According to the factors and properties of weathering, weathering is divided into", "The tectonic movement is divided into", "According to the occurrence of axial plane, folds can be divided into", "The contact relationship between intrusive rock and surrounding rock can be divided into", "The process of seismic geological process can be divided into", "The main mode of metamorphism is", "The parent rock of the sediment includes", "Three substances dissociated from the parent rock of sediment during weathering", "Chemical weathering destroys the original rock and forms new minerals", "Main methods of sediment transport by running water", "Main methods of sediment transport by wind", "Determinants of transport and deposition of substances in real solution", "According to the nature of the transported materials, the sedimentation differentiation is divided into", "In the process of mechanical deposition differentiation, according to the particle size, the precipitated particles are", "The diagenesis process mainly includes", "According to the difference of material sources, sedimentary rocks are divided into", "Minerals in sedimentary rocks can be divided into", "The structure of clastic particles can be divided into", "The structure of clastic particles can be divided into", "The types of cement mainly include", "In bedding structure, what is the main composition of bedding", "Common biogenic structures", "Common clastic components in terrigenous clastic rocks", "Terrestrial clastic rocks are divided into", "Sandstone is divided into", "Common clay minerals", "During compaction, the contact relationship between rigid particles in sandy sediment is shown as", "Main types of grain debris in endogenous sedimentary rocks", "The types of endogenous sedimentary rocks mainly include", "Main mineral composition in carbonate rocks", "Gutenberg noodles are__________ And___________ Interface.", "The types of volcanic eruption are_________ And_________ Two types.", "Igneous rocks can be divided into ultrabasic, basic, neutral, acidic, vein rocks and other categories. 
Please list the name of one rock type in this order: __________, __________, ____________, ____________, __________.", "The periods of the Mesozoic, from oldest to youngest, and their codes are respectively __________.", "Metamorphism includes three categories: ____________, ____________ and ___________.", "Pyroclastic rocks can be divided into three categories: __________, _________ and _________.", "The three stages of rock deformation development are ___________, ____________ and ____________.", "The geomagnetic elements include ___________, ___________ and ___________.", "According to the change of temperature and density, the atmosphere can be further divided into ___________, ___________, ___________, ___________ and ___________.", "The two most important seismic wave velocity interfaces in the interior of the earth are ___________ and ___________; according to them, the interior of the earth can be divided into ___________, ___________ and ___________.", "According to the geological environment and physicochemical factors of metamorphism, metamorphism can be divided into __________, __________, ___________ and ___________.", "In paired metamorphic belts, the ___________ lies on the ocean side, with the ___________ parallel to it on the continental side.", "Chemical weathering includes several important chemical reactions, which are ___________, ___________, ___________, ___________ and ___________.", "Sea water movement mainly includes __________, __________, ___________ and __________.", "Glaciers are divided into __________ and __________.", "According to the content of silica, magmatic rocks can be divided into __________, __________, ___________ and __________. (Indicate the content of silicon dioxide, otherwise the answer is judged wrong)", "The diagenesis of sediments mainly includes __________, ___________ and __________.", "The colors of sedimentary rocks can be divided into __________, ___________ and __________.", "The textural types of carbonate rocks mainly include __________, ___________ and __________.", "The common grains in carbonate rocks include __________, __________, ___________ and __________, etc.", "The sedimentary types of alluvial fans include __________, __________, ___________ and __________.", "According to their plane geometry, rivers can be divided into __________, __________, ___________ and __________, four types.", "The river facies can be further divided into __________, __________, ___________ and __________, four subfacies.", "According to the depth of the lake and its geographical location, the clastic lake facies can be divided into __________, __________, __________, ___________ and __________, etc., several subfacies.", "Galloway's genetic types of deltas include __________, ___________ and __________.", "A typical modern non-barrier coastal sedimentary environment can be divided into __________, __________, ___________ and __________, etc., several secondary environments.", "The subfacies types of barrier coastal facies are __________, ___________ and __________.", "The carbonate sedimentary facies zones are divided into __________, ___________ and __________.", "The basic forms of folds are __________ and __________.", "According to the mechanical properties at the time of formation, joints can be divided into __________ and __________, two types.", "According to the relative displacement relationship between the two walls of the fault, faults can be divided into __________, __________, translational fault and hinge fault, four types.", "The first-order structural units within a petroliferous basin are __________, ___________ and 
__________.", "The boundaries between plates are __________, ____________________ And__________ Four types.", "Gutenberg noodles are__________ And___________ Interface.", "The types of volcanic eruption are_________ And_________ Two types.", "Igneous rocks can be divided into ultrabasic, basic, neutral, acidic, vein rocks and other categories. Please list the names of one type of rocks in this order: __________, __________, ____________, ____________, __________.", "The Mesozoic era has existed from morning to night, and their codes are respectively.", "Metamorphism includes ____________________ And___________ Three categories.", "Pyroclastic rocks can be divided into __________________ And_________ Three categories.", "The three stages of rock deformation development are ___________, ____________ and ____________.", "The geomagnetic elements include ______________________ And ___________;", "According to the change of temperature and density, the atmosphere can be further divided into ___________, ___________, ______________________ And ___________.", "There are two most important seismic wave velocity change interfaces in the interior of the earth___________ And ___________, According to this, the interior of the earth can be divided into ______________________ And ___________.", "According to the geological environment and physicochemical factors of metamorphism, metamorphism can be divided into __________, ______________________ And ___________.", "The double metamorphic belts are located on the ocean side___________ ___________ parallel to it.", "Chemical weathering includes several important chemical reactions, which are ___________, ___________, ______________________ And ___________.", "The sea water movement mainly includes __________, ____________________ And __________.", "Glaciers are divided into__________ And __________.", "According to the content of silica, magmatic rocks can be divided into __________, ____________________ And __________. 
(Indicate the content of silicon dioxide, otherwise the answer is judged wrong)", "The diagenesis of sediments mainly includes __________, ___________ and __________.", "The colors of sedimentary rocks can be divided into __________, ___________ and __________.", "The textural types of carbonate rocks mainly include __________, ___________ and __________.", "The common grains in carbonate rocks include __________, __________, ___________ and __________, etc.", "The sedimentary types of alluvial fans include __________, __________, ___________ and __________.", "According to their plane geometry, rivers can be divided into __________, __________, ___________ and __________, four types.", "The river facies can be further divided into __________, __________, ___________ and __________, four subfacies.", "According to the depth of the lake and its geographical location, the clastic lake facies can be divided into __________, __________, __________, ___________ and __________, etc., several subfacies.", "Galloway's genetic types of deltas include __________, ___________ and __________.", "A typical modern non-barrier coastal sedimentary environment can be divided into __________, __________, ___________ and __________, etc., several secondary environments.", "The subfacies types of barrier coastal facies are __________, ___________ and __________.", "The two most important seismic wave velocity interfaces in the interior of the earth are ___________ and ___________; according to them, the interior of the earth can be divided into ___________, ___________ and ___________.", "According to the energy sources and their characteristics, the energy that drives geological processes can be divided into internal energy and external energy, which include ___________, ___________, ___________, ___________, ___________ and ___________.", "According to the geological environment and physicochemical factors of metamorphism, metamorphism can be divided into __________, __________, ___________ and ___________.", "Chemical weathering includes several important chemical reactions, which are ___________, ___________, ___________, ___________ and ___________.", "The main factors affecting weathering are ___________ and ___________.", "The loose deposits on the surface are transported to other places by the wind by ___________, ___________ and ___________.", "Aeolian deposits include __________ and ___________.", "According to the existing form of water in minerals and its role in the crystal structure of minerals, water can be divided into __________, __________, __________, ___________ and __________.", "According to the content of silica, magmatic rocks can be divided into __________, __________, ___________ and __________.", "The diagenesis of sediments mainly includes __________, ___________ and __________.", "The colors of sedimentary rocks can be divided into __________, ___________ and __________.", "The textural types of carbonate rocks mainly include __________, ___________ and __________.", "The common grains in carbonate rocks include __________, __________, ___________ and __________, etc.", "The periods of the Mesozoic, from oldest to youngest, and their codes are respectively __________.", "The periods of the Mesozoic, from oldest to youngest, are __________.", "Metamorphism includes three categories: ____________, ____________ and ___________.", "When dividing the strata of a region, a ___________ section is generally established. Any section with complete stratigraphic exposure, normal sequence, clear contact relationships and well-preserved fossils can be used as a ___________ section. 
In marine strata, repeated changes of lithofacies from coarse to fine and from fine to coarse often appear. Such a change is called a sedimentary ___________; that is, each set of transgressive and regressive horizons constitutes a complete sedimentary ___________. Strata can also be divided according to lithology. Lithologic change reflects, to a certain extent, changes of the sedimentary ___________, while changes of the sedimentary ___________ are often closely related to crustal movement. Therefore, the units into which the strata are divided according to lithology can basically represent the development stages of local geological history: ________ and ________.", "Weathering includes three categories: ____________, ____________ and ___________.", "Pyroclastic rocks can be divided into three categories: _________, _________ and _________.", "The periods of the Mesozoic, from oldest to youngest, and their codes are respectively __________.", "According to the mechanical properties of plate activity, the contact types of plate boundaries can be divided into three: _________, _________ and _________.", "In 1968, Le Pichon divided the global lithosphere into six plates: __________, ___________, __________, ___________, ___________ and ___________.", "Weathering includes three categories: ____________, ____________ and ___________.", "Pyroclastic rocks can be divided into three categories: _________, _________ and _________.", "Most earthquakes are generated by ________ of brittle rocks in the earth's crust; the seismic energy is transmitted as ________; ________ is commonly used to measure the intensity of the energy released by an earthquake." ], "answer": [ "Tectonism, earthquake, magmatism, metamorphism", "Flow velocity, the rock properties of the riverbed, and the sediment concentration in the flowing water", "Glacial troughs (U-shaped valleys), cirques, horn peaks, hanging valleys, arêtes.", "Waves, tides, ocean currents, turbidity currents", "calcite", "Devonian", "Headward erosion, meandering, delta formation", "Coastal zone", "chloride", "Intrusive rock", "Coastal zone", "Continental slope", "Neritic zone.", "Grows older", "\u201cU\u201d", "Strong horizontal compression.", "The occurrences of the underlying and overlying strata are inconsistent", "Mudstone (shale).", "Divergent boundaries, convergent boundaries and transform (shear) boundaries.", "The seismic wave method.", "Eonothem, erathem, system, series and stage; eon, era, period, epoch and age", "Crust, mantle and core.", "Triassic (T), Jurassic (J) and Cretaceous (K).", "Strike, dip direction and dip angle.", "Littoral zone, neritic zone, bathyal zone and abyssal zone.", "Mechanical erosion and chemical erosion.", "Magnetic declination, magnetic inclination, magnetic field strength", "Cambrian \u2208, Ordovician O, Silurian S, Devonian D, Carboniferous C, Permian P.", "Continental glaciers and mountain glaciers.", "Ultrabasic magma, basic magma, neutral magma and acidic magma.", "Paleogene E, Neogene N, Quaternary Q", "The Circum-Pacific belt, the Tethys belt, the mid-ocean ridge belt.", "Opposite; unequal", "Load release (unloading), thermal expansion and contraction of rocks and minerals, freezing and thawing of water (ice splitting), crystallization and deliquescence of salts.", "Penetration eruption, fissure eruption and central eruption.", "Chemical weathering, physical weathering and biological weathering.", "Horizontal movement, vertical movement.", "Upright folds, inclined folds, overturned folds, recumbent folds.", "Sedimentary contact, intrusive contact and fault contact.", "There are four stages: preparation, imminent earthquake, main shock and aftershock.", 
"Recrystallization, metamorphic crystallization, metasomatism, metamorphic differentiation, tectonic deformation.", "Magmatic rock, metamorphic rock and sedimentary rock", "Debris, dissolved matter and insoluble residue", "clay mineral", "Rolling handling, jumping handling, suspended handling", "Jump, overhang, creep", "solubility", "Mechanical deposition differentiation and chemical deposition differentiation", "Gravel, sand, silt, clay", "Compaction, cementation, dissolution, alteration, metasomatism and recrystallization.", "There are four major types of terrigenous clastic rocks, endogenous sedimentary rocks, pyroclastic rocks and sedimentary rock associations.", "Terrigenous clastic minerals, authigenic minerals and secondary minerals.", "Excellent, good, medium", "Extremely angular, angular, sub-circular, circular, extremely circular", "Basement cementation, pore cementation, contact cementation, filling cementation", "Fine layer, layer system and layer system group", "Laminated structure, wormhole, wormhole", "Quartz, feldspar, rock cuttings", "Conglomerate, sandstone, siltstone, mudstone", "Very coarse sandstone, coarse sandstone, medium sandstone, fine sandstone, very fine sandstone", "Kaolinite, illite, montmorillonite", "Point contact, line contact, bump contact, stitch contact.", "Endoclast, bioclastic, oolitic, pellet, agglomerate", "Aluminum rock, iron rock, manganese rock, phosphorous rock, evaporite, combustible organic rock, siliceous rock, carbonate rock", "Calcite, dolomite", "Mantle and core", "Crack type, central type", "Peridotite, gabbro, andesite, rhyolite, pegmatite", "Triassic, Jurassic, Cretaceous, T, J, K", "Regional metamorphism, dynamic metamorphism, contact metamorphism", "Agglomerate, volcanic breccia, tuff", "Plastic deformation, elastic deformation, brittle deformation", "Magnetic declination, magnetic inclination and magnetic field strength", "Troposphere, stratosphere, mesosphere, warm layer, escape layer", "Moho surface, Gutenberg surface, crust, mantle, core", "Dynamic metamorphism, contact metamorphism, regional metamorphism, migmatization", "High pressure and low temperature metamorphic zone", "Oxidation, dissolution, hydrolysis, hydration, biochemical weathering", "Wave, tide, current, turbidity current", "Continental glacier", "Ultrabasic rock SiO2<45%, basic rock SiO245-52%, neutral rock SiO252-65%, acidic rock SiO2>65%", "Compaction", "Inherited color, primary color, secondary color", "Grain structure", "Endoclast, bioclastic, oolitic, agglomerate", "Debris flow deposit, braided channel deposit, overflow deposit, sieve deposit", "Pingzhi River, meandering river, braided river, reticulated river", "River bed, embankment, river apron, oxbow lake", "Lake delta, shore lake, shallow lake, semi-deep lake, deep lake", "River controlled delta, wave controlled delta, tidal controlled delta", "Coastal dunes, backshore, foreshore, nearshore", "Lagoon subfacies, tidal flat subfacies, barrier island subfacies", "Supratidal zone", "Anticline", "Shear joint", "Normal fault", "Uplift, depression, slope", "Hailing, transform fault, subduction zone and deep trench, ground suture", "Mantle and core", "Crack type, central type", "Peridotite, gabbro, andesite, rhyolite, pegmatite", "The Mesozoic era has existed from morning to night, and their codes are respectively.", "Regional metamorphism, dynamic metamorphism, contact metamorphism", "Agglomerate, volcanic breccia, tuff", "Plastic deformation, elastic deformation, brittle deformation", "Magnetic declination, 
magnetic inclination and magnetic field strength", "Troposphere, stratosphere, mesosphere, thermosphere, exosphere (escape layer)", "Moho surface, Gutenberg surface, crust, mantle, core", "Dynamic metamorphism, contact metamorphism, regional metamorphism, migmatization", "High-pressure low-temperature metamorphic belt; low-pressure high-temperature metamorphic belt", "Oxidation, dissolution, hydrolysis, hydration, biochemical weathering", "Waves, tides, currents, turbidity currents", "Continental glaciers and mountain glaciers", "Ultrabasic rock SiO2 < 45%, basic rock SiO2 45-52%, neutral rock SiO2 52-65%, acidic rock SiO2 > 65%", "Compaction, cementation and recrystallization", "Inherited color, primary color, secondary color", "Particle (grain) structure, biological framework structure and crystalline grain structure", "Intraclasts, bioclasts, ooids and lumps", "Debris flow deposits, braided channel deposits, overflow deposits, sieve deposits", "Straight river, meandering river, braided river, anastomosing river", "River channel, natural levee, floodplain, oxbow lake", "Lake delta, lakeshore, shallow lake, semi-deep lake, deep lake", "River-dominated delta, wave-dominated delta, tide-dominated delta", "Coastal dunes, backshore, foreshore, nearshore", "Lagoon subfacies, tidal flat subfacies, barrier island subfacies", "(Moho surface, Gutenberg surface; crust, mantle, core)", "(Gravitational energy, radioactive energy, rotational energy, crystallization energy and chemical energy, solar radiation energy, gravitational energy of the sun and the moon)", "(dynamic metamorphism, contact metamorphism, regional metamorphism, migmatization)", "(oxidation, dissolution, hydrolysis, hydration, biochemical weathering)", "(Physical and geographical conditions, lithology)", "(suspension, saltation, creep)", "(aeolian sand, aeolian loess)", "(adsorbed water, crystallization water, structural water, zeolite water, interlayer water)", "(Ultrabasic rock SiO2 < 45%, basic rock SiO2 45-52%, neutral rock SiO2 52-65%, acidic rock SiO2 > 65%)", "(compaction, cementation, recrystallization)", "(Inherited color, primary color, secondary color)", "(particle structure, biological framework structure, crystalline grain structure)", "(intraclasts, bioclasts, ooids, lumps)", "Triassic, Jurassic, Cretaceous, T, J, K", "Regional metamorphism, dynamic metamorphism, contact metamorphism", "Standard, standard, cycle, cycle, environment, environment", "Physical weathering, chemical weathering, biological weathering", "Volcanic agglomerate, volcanic breccia, tuff", "Faulting, seismic waves, magnitude", "Triassic, Jurassic, Cretaceous; T, J, K", "Tension type, convergence type, shear type", "Eurasian plate, Indian Ocean plate, Antarctic plate, African plate, Pacific plate, American plate", "Physical weathering, chemical weathering, biological weathering", "Volcanic agglomerate, volcanic breccia, tuff", "Faulting, seismic waves, magnitude" ] }, "tf": { "question": [ "Minerals all have cleavage", "According to the content of quartz, magmatic rocks can be further divided into four categories: ultrabasic, basic, neutral and acidic", "The hanging wall of a fault refers to the uplifted wall of the fault", "The first period of the Mesozoic is called the Triassic", "The Permian is the last period of the Paleozoic", "Physical weathering does not change the chemical composition of rocks", "In alluvium, the largest flat surfaces of gravels dip toward the upper reaches of the river", "There is no groundwater in magmatic rock areas", "Most glacial deposits have obvious bedding", "Sediments in the ocean are carried by rivers", "Deserts are mainly formed by the geological action of wind", "In an anticline structure, the rock strata in the central part are relatively old", "Weathering is the geological action of wind", "Bedding is an important structure of sedimentary rocks", "If the strike of rock
strata is known, the dip direction of the strata is also known", "Submarine earthquakes often trigger turbidity currents on the seabed", "Rock folds are mainly formed by external dynamic geological processes", "The contact relationship between two sets of strata with the same occurrence is called integrated (conformable) contact", "The gravity experienced by matter on the earth is the universal gravitation of the earth", "The hardness of quartz is greater than that of calcite", "Vesicular (stomatal) structures are common in volcanic rocks", "Differential weathering is caused by different climates", "Bedrock is a rock foundation", "In the original horizontal state, the overlying strata are younger and the underlying strata are older", "The degree of earthquake damage to the ground and buildings is called earthquake intensity", "The coastline is the boundary between continental crust and oceanic crust", "The higher the position of a valley terrace, the earlier the terrace formed", "Slaty, phyllitic and schistose structures are common structures in metamorphic rocks", "The surface gravity value increases with increasing latitude and decreases with increasing altitude", "The lithosphere refers to the crust", "Quartz has well-developed cleavage", "Marine geological processes are mainly sedimentation", "When lateral erosion occurs, the convex banks of rivers are scoured and collapse", "A joint plane refers to the plane formed when a mineral breaks along a certain direction inside the crystal under mechanical force", "The glaciers in China are mountain glaciers", "Diluvium is the product of flood periods of rivers", "The geological process of swamps is mainly biological sedimentation", "Differential weathering is caused by different climates", "Bottom (downward) erosion is the erosion that widens the riverbed and valley floor", "The hardness of quartz is greater than that of calcite", "Shear waves cannot pass through liquids", "Moraine deposits have good bedding, sorting and roundness", "Where a river turns, sedimentation mainly occurs on the convex bank side", "The water level in ordinary civil wells can represent the level of the phreatic surface in the area", "The crystal faces of quartz have greasy luster, and its fracture surfaces have glassy luster", "The chemical formula of calcite is “CaCO3”", "Karst caves are mainly formed near the phreatic surface", "Amygdaloidal structure mainly occurs in volcanic rocks", "Calcite has two sets of cleavage", "Most earthquakes in the world are tectonic earthquakes", "Calcite is the main mineral component of limestone and marble", "Field investigation is the most basic and important link in earth science work, through which first-hand information on the studied objects can be obtained.", "The shape and size of the earth refer to the shape and size of the geoid.", "Each ocean floor has a ridge or rise, of which the Pacific Ocean floor has a rise and the other three ocean floors have ridges.", "Flat-topped seamounts are seamounts near sea level, formed by regional subsidence and submergence after their tops were leveled by weathering, denudation and seawater erosion.", "The earth's crust consists of two layers: the silicon-aluminum layer and the silicon-magnesium layer.", "The boundary between continental crust and oceanic crust is at the coastline.", "The gravity of the surface decreases with increasing latitude.", "The electrical properties of the earth's interior are mainly related to the magnetic permeability and conductivity of the materials in
the earth.", "Radioactive elements are generally concentrated in the surface of the solid earth, and mainly concentrated in metamorphic rocks.", "Crustal movement can be divided into horizontal movement and vertical movement according to the direction of movement.", "The crustal movement after Quaternary is generally called neotectonic movement.", "The root cause of earthquakes lies in plate movement.", "Active volcanoes can only be distributed at the edge of plates.", "The whole interior of the earth is molten, and magma exists everywhere.", "In metamorphism, the main function of static pressure is to raise the temperature of metamorphic reaction.", "When clastic material is transported in flowing water, clay-grade particles once deposited are eroded again, which requires greater flowing water speed.", "The retreat of the waterfall is caused by the erosion of the river.", "The stone forest in Lunan, Yunnan Province was formed by river erosion.", "Finland, a country of thousands of lakes, has many lakes that are the causes of rivers.", "The amygdala in volcanic rocks belongs to the form of crystalline aggregates.", "According to Rick's Law, deep in the earth's crust, minerals precipitate in the direction of maximum stress and dissolve in the direction of minimum stress.", "According to the classification standard of debris particle size in petroleum industry, the debris particles of 0.25 ~ 0.5 mm are medium sand.", "Generally speaking, clastic particles with good roundness have higher sphericity.", "Alluvial fan facies sediments are generally poorly sorted and rounded.", "Salty lakes refer to lakes with salinity greater than 35%.", "According to PH value, seawater belongs to weakly alkaline medium.", "The change of material composition and bedding structure of turbidite in vertical direction follows Bowma sequence, and turbidite profiles in nature should show complete Bowma sequence.", "The code name of Cretaceous is T.", "The original occurrence of volcanic eruptions is mostly inclined.", "On the plane, the dome structure is characterized by old rock strata at the center and young rock strata around it.", "Field investigation is the most basic and important link in earth science work, which can obtain the first-hand information of the studied objects.", "The shape and size of the earth refers to the shape and size of the geoid.", "Each ocean floor has a ridge or uplift, of which the Pacific Ocean floor is an uplift and the other three ocean floors are ridges.", "Flat-topped seamounts are seamounts near sea level, which are formed by regional subsidence and submergence after their tops are leveled by weathering, denudation and seawater erosion.", "The earth's crust consists of two layers: silicon-aluminum layer and silicon-magnesium layer.", "The boundary between continental crust and oceanic crust is at the coastline.", "The gravity of the surface decreases with the increase of latitude.", "The electrical properties of the earth's interior are mainly related to the magnetic permeability and conductivity of the materials in the earth.", "Radioactive elements are generally concentrated in the surface of the solid earth, and mainly concentrated in metamorphic rocks.", "Crustal movement can be divided into horizontal movement and vertical movement according to the direction of movement.", "The crustal movement after Quaternary is generally called neotectonic movement.", "The root cause of earthquakes lies in plate movement.", "Active volcanoes can only be distributed at the edge of plates.", 
"The whole interior of the earth is molten, and magma exists everywhere.", "In metamorphism, the main function of static pressure is to raise the temperature of metamorphic reaction.", "When clastic material is transported in flowing water, clay-grade particles once deposited are eroded again, which requires greater flowing water speed.", "The retreat of the waterfall is caused by the erosion of the river.", "The stone forest in Lunan, Yunnan Province was formed by river erosion.", "Finland, a country of thousands of lakes, has many lakes that are the causes of rivers.", "The amygdala in volcanic rocks belongs to the form of crystalline aggregates.", "Gutenberg surface is the interface between crust and mantle", "The Earth's crust consists of silicon-aluminum layer and silicon-magnesium layer", "The whole interior of the earth is molten and magma exists everywhere", "In metamorphism, the main function of static pressure is to raise the temperature of metamorphic reaction", "Mylonite is a rock formed by dynamic metamorphism", "Hydration does not belong to chemical weathering", "Under the same climatic conditions, rocks with amorphous, fine grained, isogranular structure or large porosity are less likely to be chemically weathered than rocks with the same composition, such as crystalline, unisogranular and coarse grains", "In weathering, quartz is the most difficult mineral component of rock to be weathered", "Honeycomb stone is the product of wind erosion", "Aeolian deposits include Aeolian sand and Aeolian loess", "Structural water does not participate in crystal composition and has nothing to do with mineral crystals", "The amygdala in volcanic rocks belongs to the morphology of crystalline aggregates", "The color of minerals can be described by analogy, such as the pig liver color of hematite", "The luster of mineral actually refers to the luminescence of mineral surface", "The Mohs hardness of diamond is 10", "The cleavage of muscovite is incomplete", "Both augite and hornblende are chain structure silicate minerals", "Kaolinite and mica are silicate minerals with layered structure", "Sedimentary rocks and metamorphic rocks can be transformed into each other", "Quartz and amphibole in magmatic rocks have special significance and can reflect SiO2 saturation in magmatic rocks, so they can be called acidity indicator minerals", "Bowen reaction series explains the general law of mineral symbiosis and association in magmatic rocks", "Granite is an acid plutonic intrusive rock", "Andesite is a neutral extrusive rock", "Metamorphic rocks formed by metamorphism of sedimentary rocks are called orthomorphic rocks", "Gneiss is a regional metamorphic rock", "As a result of mechanical sedimentary differentiation, sediments are distributed regularly along the transport direction in the order of gravel-sand-silt-clay", "The white color of pure quartz sandstone inherits the color of quartz, which is a kind of inherited color", "Rock strata with a thickness of 0.5 ~ 0.lm are called thin strata", "A clastic rock containing 8% medium gravel, 10% fine gravel, 17% coarse sand, 16% medium sand, 18% fine sand, 14% coarse silty sand and 17% fine silty sand should be named as \"gravel-bearing silty sandstone\"", "Detrital particles with good roundness generally have higher sphericity", "Basement cementation represents the simultaneous deposition of debris and cementation, which is the product of rapid accumulation", "Clay rock is mainly composed of fine particles with grain size < 0.05 mm, and contains a large 
amount of clay minerals (kaolinite, montmorillonite, hydromica, etc.); it may be loose or consolidated rock", "The \"bamboo leaf\" bodies in the bamboo-leaf limestone widely distributed in North China are typical intraclasts" ], "answer": [ "False", "False", "False", "True", "True", "True", "True", "False", "False", "False", "False", "True", "False", "True", "True", "True", "False", "False", "False", "True", "True", "False", "False", "True", "True", "False", "True", "True", "False", "False", "False", "False", "False", "False", "True", "True", "True", "False", "False", "True", "True", "False", "True", "True", "False", "True", "True", "True", "False", "True", "True", "True", "True", "True", "True", "False", "False", "False", "True", "False", "True", "True", "False", "False", "False", "True", "True", "True", "False", "False", "False", "False", "True", "False", "True", "True", "True", "False", "False", "True", "True", "True", "True", "True", "True", "False", "False", "False", "True", "False", "True", "True", "False", "False", "False", "True", "True", "True", "False", "False", "False", "False", "False", "False", "True", "True", "False", "False", "True", "True", "True", "False", "False", "True", "False", "True", "False", "True", "True", "True", "False", "True", "True", "True", "False", "True", "True", "True", "False", "True", "False", "True", "False", "False" ] }, "qa": { "question": [ "How do the three types of rocks transform into one another", "Material sources of the sediments forming sedimentary rocks", "The mixing phenomenon of clastic particles with different grain sizes", "Under what conditions will a colloidal solution coagulate", "Characteristics of colloidal sediments", "Differences in chemical composition between sedimentary rocks and magmatic rocks", "Differences between horizontal bedding and parallel bedding", "Structural characteristics of scour marks", "Structural characteristics of mud cracks", "Geological significance of sandstone", "Identification marks for the transformation from mechanical compaction to chemical compaction", "Main objects and purposes of geoscience research", "What are the three outer spheres of the earth? What effect will their interaction have on the solid earth?", "Which periods (from old to new) does the Paleozoic of the geological time scale include? In which period did China's coal resources mainly form?", "By what geological processes were the mountain glacier landforms (Tianshan), desert landforms (Xinjiang, Gansu and Shaanxi) and loess landforms of northwest China formed, and what are the main influencing factors?", "What is tectonic movement? After which mountain system is the tectonic movement of the Tertiary and later named, and how does it affect the evolution of China's regional environment?", "Research significance of earth science", "What is a geological process? What genetic types does it include?", "Taking the shallow sea environment as an example, explain its main sedimentary types", "What is the difference between relative geological age and isotopic dating, and what are the main criteria for determining relative geological age?", "What kind of tectonic event does the contact relationship of angular unconformity reflect", "The main purpose and significance of studying earth science", "What are the three outer spheres of the earth? 
What effect will their interaction have on the solid earth?", "In the geological time scale, which five eras are included from old to new, and how many billion years ago did the oldest one begin?", "What geological processes are responsible for the mountain glacier landforms, desert landforms, loess landforms and their sediments developed in northwest China? What are the main influencing factors?", "What is tectonic movement? After which mountain system in China is the tectonic movement of the Tertiary and later named? How does this movement affect the evolution of China's regional environment?", "What are the three outer spheres of the solid earth from top to bottom? According to the environment of natural water, what are the types of water?", "What is the main basis for the division of the earth's inner spheres? By what interfaces is the interior divided into the three major spheres of crust, mantle and core?", "The downward erosion of surface water and its main geomorphic types", "Magmatism and its main types, giving examples of magmatic rocks formed by different types of magmatism", "Types of plate boundaries and the names of the six global lithospheric plates", "What is the significance of field practice in geoscience research? Why?", "What are the corresponding relations and essential differences between the two geological time systems of \"eon, era, period and epoch\" and \"eonothem, erathem, system and series\"? Give an example.", "What kind of tectonic movement do the conformity, parallel unconformity and angular unconformity in sedimentary strata represent?", "What geological processes form granite, sandstone and gneiss? Try to explain their distinctive features.", "The Content and Task of Geohistory Research", "Main indicators for discriminating sedimentary environments", "Common sedimentary environments and sedimentary facies types", "Sedimentary characteristics of fluvial facies", "Main characteristics of delta deposits", "Lacustrine sedimentary characteristics", "Principles and methods of stratigraphic division and correlation", "Contact relationships of strata", "Classification of the main stratigraphic units", "The classification system of lithostratigraphic units and its basis", "The hierarchical system of chronostratigraphic units and its division basis", "Analytical methods of historical geotectonics", "Into what types can basins be divided according to the relationship among the subsidence of the basement of the sedimentary basin, the depth of seawater and the sedimentary thickness?", "Methods of reconstructing ancient plates in geological history", "Wilson cycle", "Division of the main tectonic stages in geological history", "Major Archaean geological events in North China", "Discuss the influence of the Luliang Movement on China", "The Influence of the Caledonian Movement (Guangxi Movement) on China", "The main influence of the Soochow Movement on China", "The Influence of the Indosinian Movement on China", "The Influence of the Yanshan Movement on China", "The Impact of the Himalayan Movement on China", "How to identify sedimentary rocks and igneous rocks and their main rock types?", "Principles of rock naming P198", "Five main structural components of carbonate rock P204", "Three deformation stages of rock P233", "Modes of object deformation P229", "Geological time units P41", "Chronostratigraphic units P42", "Submarine terrain units P15", "Continental terrain units P15", "Fold classification and combination P245", "Fault classification and combination P269", "Internal force geological 
process P68-71", "External geological process (classification) is the same as above", "Plate boundary type P297", "Mechanical classification of joints. P264", "Characteristics and formation process of unconformity P172-173", "Subtypes and main sedimentary characteristics of river and delta facies PP218", "Wilson's carbonate sedimentary facies belt P222-223", "Formation process of meandering river P85", "Formation process of river terrace and its geological significance P92", "Zhang joint and its characteristics", "Shear joint and its characteristics P264", "\"V\" pattern rule P167-169", "Concept and basic characteristics", "Main features", "Relationship with oil and gas", "Characteristics of synsedimentary anticline and its relationship with oil and gas accumulation P25", "2. Geosphere structure", "3. Moho surface or Moho surface", "4. The crust (layer A) can be divided into upper and lower layers", "5. Geological process", "6. Minerals", "7. Rock", "8. Homogeneous polymorphism of minerals", "9. Stripes", "10. Hardness", "11. Morse hardness tester", "12. cleavage", "13. Fracture", "14. Magma", "15. Magmatism", "16. Igneous rock", "17. Intrusion", "18. Eruption or volcanic activity", "19. Volcanic eruption type", "20. Distribution law of modern volcanoes", "21. Occurrence", "22. According to the amount of SiO2 in igneous rocks, it is the same as the classification of magma", "23. Sedimentary rock", "23. Formation process of sedimentary rocks", "24. Weathering crust", "25. Characteristics of sedimentary rocks", "26. Classification of sedimentary rocks", "27. Metamorphism", "28. Factors of metamorphism", "29. Characteristics of metamorphic rocks", "30. Type of metamorphism", "31. Tectonic movement", "32. Meaning of neotectonic movement and old tectonic movement", "33. Evidence of neotectonic movement", "34. Evidence of old tectonic movement", "35. Rock stratum", "36. Occurrence of rock stratum", "38. I. Fold", "2\u3001 Basic form of fold", "39. Fault structure", "40. Joints", "41. Fault", "42. Several elements of fault", "43. Classification according to the relationship between the relative displacement of the two walls of the fault", "44. Earthquake", "45. Magnitude", "46. Seismic intensity", "47. Spatial distribution of earthquakes~world seismic zone", "48. The global lithosphere is divided into six major plates", "49. Boundary and type of plate", "50. Research methods of crustal history", "51. The oldest Great Ice Age in the world", "53. The last big ice age", "54 Main points and defects of the theory of continental drift (in 1912, German scholar Weigner put forward the \"theory of continental drift\")", "The influence factors and results of metamorphism are discussed.", "Discuss the formation process of marine erosion landform", "Discuss the existing types of water in minerals", "Discuss Bowen reaction sequence and its use", "This paper discusses the basis for the classification of sandstone composition and explains why this is the basis for classification.", "How to identify sedimentary rocks, igneous rocks and their main rock types?", "1. The influence factors and results of metamorphism are discussed.", "2. Discuss the formation process of marine erosion landform", "3. Discuss the existing types of water in minerals", "4. Discuss Bowen reaction sequence and its use", "5. The main types of bedding structures of sedimentary rocks are listed and explained.", "6. 
Discuss the basis for the compositional classification of sandstone and explain why it is used as the basis for classification.", "Give the geological time scale. (Either tabulated or written form is acceptable) (8 points)", "Briefly describe the classification of sedimentary rocks and their main rock types (10 points)", "Briefly describe the Bowen reaction series and the law of mineral association. (10 points)" ], "answer": [ "After existing magmatic rocks, sedimentary rocks and metamorphic rocks are lifted to the surface, sedimentary rocks can be formed from them by weathering, mechanical crushing, transportation and sedimentation; existing sedimentary rocks, magmatic rocks and metamorphic rocks can be transformed into metamorphic rocks by temperature, pressure or fluid action; further changes in temperature and pressure conditions can make the original sedimentary rocks, magmatic rocks or metamorphic rocks melt to form magma, and the magma can consolidate to form new magmatic rocks.", "(1) The weathered terrigenous clastic material of the parent rock; (2) deep pyroclastic material and deep hot brines; (3) the biological debris formed in the process of biological life activities; (4) organic matter and cosmic material", "When the flow strength is P, the maximum particle size of the gravel that can be rolled is 8 cm, and the maximum particle size that can be suspended is 2.2 mm. When the flow strength is less than P, particles with particle sizes of 8 cm and 2.2 mm can be deposited at the same time, thus forming bimodal gravel. When the flow intensity changes repeatedly around P, interbedded sandy and gravelly deposits can be formed. If the flow intensity decreases sharply, a poorly sorted, polymodal mixed sediment of gravel, sand, silt and mud can be formed.", "When a colloid loses its stability in the process of transportation, the colloidal material will coagulate and deposit; when colloids with charges of different signs meet, the colloidal substances will coagulate and deposit; the addition of different types of electrolytes also neutralizes the charge of the colloidal particles, causing them to coagulate and deposit.", "The sediment is gelatinous; after diagenesis, conchoidal fractures are common; the rock particles are fine and have strong water absorption; the chemical composition is unstable", "The total amount of iron in sedimentary rocks and magmatic rocks is similar, but Fe2O3>FeO in sedimentary rocks and FeO>Fe2O3 in magmatic rocks; the content of alkali metals and alkaline earth metals in sedimentary rocks is lower than that in magmatic rocks, but the content of CaO in sedimentary rocks is higher than that in magmatic rocks.", "Horizontal bedding: the fine-layer interfaces are straight, parallel to each other and parallel to the layer-system interface. Horizontal bedding reflects a sedimentary environment of slow flow or hydrostatic hydrodynamic conditions. Parallel bedding: similar to horizontal bedding, except that the fine layers of parallel bedding are thicker, the sediment grain size is coarser, and the interfaces between the fine layers are unclear. 
It reflects a shallow-water, high-velocity hydrodynamic environment.", "The scoured sediment below is finer and the overlying sediment coarser, and the base of the overlying bed contains debris of the scoured bed beneath; in plan view the scour trough is tongue-shaped or irregular; the long axis of the scour channel parallels the flow direction; the direction from the convex end to the low, flat end of an impression indicates the flow direction of the scouring current.", "In profile mud cracks are V-shaped; in plan they form interconnected polygons and are usually filled by the overlying sediment; mud cracks generally develop in the mudstones, argillaceous siltstones or fine-grained limestones of swamps, lakes, floodplains, lagoons, tidal flats and shoals; they are an indicator of shallow water, and their V-shaped profile can be used to distinguish the top and bottom of a bed.", "Palaeogeography, palaeoclimate and palaeostructure can be reconstructed from the composition, texture and other characteristics of sandstone; sandstone is a good reservoir rock; in engineering geology it has high compressive strength and can serve as a foundation; as a mineral resource it provides raw material for building materials and glass, and hosts placers containing Au, Pt, Cu, U, diamond, etc.", "When particles slide and rearrange into close packing with point contacts, the load pressure is transmitted through the contacts between particles; as the load increases, stress concentrates at the contacts and they become line contacts; if the stress continues to increase, the contact areas dissolve and mechanical compaction passes into pressure solution; as pressure solution continues, line contacts develop into concavo-convex contacts and sutured contacts.", "The research objects of earth science are the gaseous envelope of the earth, the surface waters, the surface morphology and the solid earth itself. (1) Theoretical significance: to reveal the laws of the formation and evolution of the earth, so that humans can correctly understand nature and establish a dialectical-materialist world view. (2) Practical applications: it plays an important role in the search for, development and utilization of natural resources, and in guiding humans to adapt to, protect, utilize and transform the natural environment and to resist natural disasters: 1) guiding prospecting; 2) providing site selection and site-stability evaluation for large projects, helping to avoid hidden engineering hazards; 3) solving the problem of coordinated development of population, resources and environment; 4) preventing and mitigating geological disasters (such as collapses, landslides, debris flows, earthquakes, volcanoes, land subsidence, etc.).", "The outer part of the earth consists of the atmosphere, hydrosphere and biosphere. Atmosphere: the outermost gaseous sphere held around the earth's surface by the earth's gravity. Hydrosphere: the continuous sphere formed by the earth's surface water in gaseous, solid and liquid forms; the oceans account for 97%, glaciers 2.15% and groundwater 0.6%.
Biosphere: the continuous sphere at the earth's surface composed of living things and their living areas, the general term for all organisms on the earth together with their environment. Weathering, denudation, transportation and sedimentation are all influenced by these three spheres. Although each sphere is a separate system, they interconnect, interpenetrate and interact, jointly driving changes in the earth's external environment by cutting down the highs and filling in the lows.", "The Paleozoic includes the following periods: Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian. China's coal resources were formed mainly in the Permian.", "It is caused by exogenic (surface) geological processes, i.e. processes driven mainly by energy from outside the earth and occurring at the earth's surface, also called external dynamic or exogenic geological processes. The main energy sources, solar radiation and the gravitational pull of the sun and moon, drive the movement and circulation of the atmosphere, hydrosphere and biosphere, making them the direct agents that transform the earth's surface. Glacial erosion: the destruction of the ice-bed rocks by the glacier's own weight and the rock debris it carries during flow, divided into plucking (excavation) and abrasion. Wind denudation includes two modes, deflation and abrasion. Wind deposition: eolian loess.", "Tectonic movement refers mainly to the deformation and displacement of the lithosphere caused by the earth's internal forces, as well as the accretion and destruction of the ocean floor. The Himalayan movement represents the Cenozoic orogeny in China and has had a profound influence on the geographical environment of Asia. During this movement the elevation difference between western and eastern China increased, the monsoon circulation strengthened, and the natural geographical environment underwent marked regional differentiation: the Qinghai-Tibet region was uplifted into the highest plateau in the world, and its Tertiary tropical and subtropical environment was replaced by alpine desert; Northwest China became increasingly arid as its inland character intensified; the east became a humid monsoon region. The Himalayan movement is generally divided into three episodes: the first occurred at the end of the Eocene and the beginning of the Oligocene, when the Qinghai-Tibet region emerged as land and became an area of denudation; the second occurred in the Miocene, with large crustal uplift accompanied by large-scale faulting and magmatism; the third occurred at the end of the Pliocene and the beginning of the Pleistocene, when the Qinghai-Tibet Plateau as a whole rose strongly, forming the modern landscape pattern. The present altitudes of all the high mountains and plateaus of China are mainly the result of uplift since the third episode of the Himalayan movement. Deserts normally occur in the belts on both sides of the Tropic of Cancer, the so-called tropic desert belt, and since southern China straddles the Tropic of Cancer, deserts would be expected there as well.
However, the high Qinghai-Tibet Plateau blocks the westerly circulation and strengthens the East Asian monsoon, bringing abundant rain to southern China and keeping the land south of the Yangtze green, the so-called oasis of the Tropic of Cancer. In northern China, the northwest is far from the sea and blocked by mountains, so marine moisture can hardly reach it and large deserts occur; North China is also relatively dry, and only the northeast, far from the Himalayan orogen and relatively close to the ocean, is comparatively wet. In this way the Himalayan orogeny determined the geographical and climatic pattern of China.", "(1) Theoretical significance: to reveal the laws of the formation and evolution of the earth, so that humans can correctly understand nature and establish a dialectical-materialist world view. (2) Practical applications: it plays an important role in the search for, development and utilization of natural resources, and in guiding humans to adapt to, protect, utilize and transform the natural environment and to resist natural disasters: 1) guiding prospecting; 2) providing site selection and site-stability evaluation for large projects, helping to avoid hidden engineering hazards; 3) solving the problem of coordinated development of population, resources and environment; 4) preventing and mitigating geological disasters (such as collapses, landslides, debris flows, earthquakes, volcanoes, land subsidence, etc.).", "Geological processes are the various natural processes that constantly change the material composition, structure, texture and surface morphology of the crust or lithosphere. They can be classified by different criteria. (1) By the type of energy that drives them: exogenic (surface) geological processes, driven mainly by energy from outside the earth and occurring at the earth's surface, also called external dynamic geological processes; and endogenic (internal) geological processes, driven mainly by the earth's internal energy, also called internal dynamic geological processes. (2) By the medium that transmits the geological agent: the geological action of surface water, the transformation of the surface by running water; of groundwater, the transformation of the crust's surface by groundwater; of the ocean, the transformation of the surface by the sea; of lakes; of glaciers; and of wind. (3) By the specific mode of action: weathering, the in-place decomposition and destruction of the rocks and minerals of the crust or lithosphere in the surface or near-surface environment under the action of temperature, atmosphere, water and organisms. Denudation: the process by which the various geological agents (wind, water, glaciers, etc.) destroy surface rocks and strip away the products in the course of their own movement.
Transport: the process by which the products of weathering and denudation are moved from one place to another by a moving medium. Sedimentation: the process by which transported material accumulates in a new place when the kinetic energy of the medium decreases, the physical and chemical conditions change, or organisms act. Diagenesis: the process by which loose sediments are consolidated into sedimentary rocks.", "The shallow sea is the relatively flat sea area beyond the coast, extending from the low-tide line down to about 200 m. Currents operate there, and strong waves can still affect the seabed, so the water is comparatively turbulent and rich in oxygen, and sunlight generally reaches its lower layers; salinity is relatively normal; life is abundant. Algae and most benthic animals live in its upper part, while the number and variety of organisms decrease in its lower part. The shallow-sea zone carries various types of sediment: clastic sediments, Al, Fe and Mn oxides, calcium carbonate, calcium-magnesium carbonate and calcium phosphate are all common. The main types of deposition are mechanical (clastic), chemical and biological. (1) Clastic deposition in the shallow sea: clastic sediments are arranged from nearshore to offshore in the order gravel, coarse sand, fine sand, silt and clay. Sand and silt are mostly distributed in the upper part of the shallow-sea zone, with cross-bedding and ripple marks; argillaceous material is mostly distributed in the deeper part, often with horizontal bedding. The characteristics of the sediments are: \u2460 the nearshore zone is coarse-grained, mainly sand and gravel, with cross-bedding and asymmetric ripple marks; \u2461 it contains abundant fossils, and the grains show good roundness and sorting and relatively simple composition; \u2462 the offshore zone is fine-grained, mainly silt and mud, with horizontal bedding, poorly developed ripple marks (occasionally symmetric ripple marks), good sorting and complex composition. (2) Chemical deposition in the shallow sea: the shallow sea is a favorable site for chemical deposition; the sediments mainly include carbonate rocks, siliceous rocks, oxides and hydroxides of aluminum, iron and manganese, collophanite and glauconite. (3) Biological deposition in the shallow sea: \u2460 shell limestone and bioclastic rock: after the benthos of the shallow-sea zone die, their shells mix with lime mud and accumulate to form shell limestone, while fragments of shells or skeletons mix with other sediments to form bioclastic rocks; \u2461 reefs: a reef is a mound-like accumulation built on the seafloor by organisms that grow in place (such as corals, bryozoans and stromatoporoids), together with their skeletons, shells and some sediment, and includes fringing reefs, barrier reefs and atolls.", "Geological age refers to the timing of the various geological events on the earth, and has two meanings: the first is the sequence of geological events, called the relative geological age; the second is the age in years of each geological event, which, because it is determined mainly by isotopic methods, is called the isotopic age, also known as the absolute geological age.
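(Illustrative aside, not from the original notes: the isotopic age just mentioned rests on the radioactive decay law; the function below is a hypothetical sketch, and the 238U half-life is a standard textbook value.)

```python
import math

def isotopic_age(daughter, parent, half_life_years):
    """Age from the decay law N = N0*exp(-lambda*t), rearranged as
    t = ln(1 + D/P) / lambda, where lambda = ln(2) / half-life."""
    decay_const = math.log(2) / half_life_years
    return math.log(1 + daughter / parent) / decay_const

# Example: a radiogenic-daughter/parent ratio of 1.0 dated with 238U
# (half-life ~4.468e9 yr) yields exactly one half-life:
print(isotopic_age(1.0, 1.0, 4.468e9))  # ~4.468e9 years
```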
The relative geological age is determined mainly from three aspects: the law of superposition, the law of biotic evolution, and the cross-cutting relationships between geological bodies (strata, rock masses, veins, etc.). (1) Law of superposition: strata are originally deposited horizontally or nearly horizontally, and the older strata formed first always lie below, with the younger strata formed later covering them; this normal relationship of superposition is the law of stratigraphic succession. (2) Law of fossil succession: the general trend of biological evolution is from simple to complex and from lower to higher forms, and biological types that have once appeared never reappear in later evolution; the first clause reflects the staged character of evolution, the second its irreversibility. Applied to relative dating: strata of different ages contain different fossil assemblages, while strata of the same age contain the same or similar assemblages; the simpler the form and structure of a fossil assemblage, the older the stratum, and vice versa. (3) Law of cross-cutting relationships: a younger geological body always cuts or intrudes an older one, i.e. the body that cuts is younger than the body that is cut; the relative ages of geological bodies are determined from their mutual intrusion and cutting relationships. Relative geological age arranges the rocks formed in each period of geological history, together with the fossil assemblages they contain, in order, showing which rocks are older and which younger; it therefore indicates only the sequence of geological events and carries no absolute quantitative measure.", "Angular unconformity means that the attitudes of the upper and lower sets of strata are discordant and the two sets meet at an angle; the ages of the strata are discontinuous, with strata missing (a break in deposition) and a denudation surface between them. This shows that before the formation of the upper set of strata there was horizontal compression and uplift, which set the two sets of strata at an angle. The angular unconformity thus records significant horizontal compression with accompanying uplift and subsidence; in contrast to the parallel unconformity, it is generally held to reflect orogenic movement.", "(1) Theoretical significance: to reveal the laws of the formation and evolution of the earth, so that humans can correctly understand nature and establish a dialectical-materialist world view. (2) Practical applications: it plays an important role in the search for, development and utilization of natural resources, and in guiding humans to adapt to, protect, utilize and transform the natural environment and to resist natural disasters:
1) guiding prospecting; 2) providing site selection and site-stability evaluation for large projects, helping to avoid hidden engineering hazards; 3) solving the problem of coordinated development of population, resources and environment; 4) preventing and mitigating geological disasters (such as collapses, landslides, debris flows, earthquakes, volcanoes, land subsidence, etc.).", "Atmosphere: the outermost gaseous sphere held around the earth's surface by the earth's gravity. Hydrosphere: the continuous sphere formed by the earth's surface water in gaseous, solid and liquid forms; the oceans account for 97%, glaciers 2.15% and groundwater 0.6%. Biosphere: the continuous sphere at the earth's surface composed of living things and their living areas, the general term for all organisms on the earth together with their environment. Weathering, denudation, transportation and sedimentation are all influenced by these three spheres. Although each sphere is a separate system, they interconnect, interpenetrate and interact, jointly driving changes in the earth's external environment by cutting down the highs and filling in the lows.", "Archean, Proterozoic, Paleozoic, Mesozoic and Cenozoic. The oldest dates back 3.8 billion years.", "It is caused by exogenic (surface) geological processes, i.e. processes driven mainly by energy from outside the earth and occurring at the earth's surface, also called external dynamic or exogenic geological processes; the energy sources are solar radiation and the gravitational pull of the sun and moon. The main agents include glacial erosion, the destruction of the ice-bed rocks by the glacier's own weight and the rock debris it carries during flow, divided into plucking (excavation) and abrasion; wind denudation, comprising deflation and abrasion; and wind deposition, as in eolian loess.", "Tectonic movement refers mainly to the deformation and displacement of the lithosphere caused by the earth's internal forces, as well as the accretion and destruction of the ocean floor. The Himalayan movement represents the Cenozoic orogeny in China and has had a profound influence on the geographical environment of Asia. During this movement the elevation difference between western and eastern China increased, the monsoon circulation strengthened, and the natural geographical environment underwent marked regional differentiation: the Qinghai-Tibet region was uplifted into the highest plateau in the world, and its Tertiary tropical and subtropical environment was replaced by alpine desert; Northwest China became increasingly arid as its inland character intensified; the east became a humid monsoon region. The Himalayan movement is generally divided into three episodes: the first occurred at the end of the Eocene and the beginning of the Oligocene, when the Qinghai-Tibet region emerged as land and became an area of denudation; the second occurred in the Miocene, with large crustal uplift accompanied by large-scale faulting and magmatism; the third occurred at the end of the Pliocene and the beginning of the Pleistocene, when the Qinghai-Tibet Plateau as a whole rose strongly, forming the modern landscape pattern. The present altitudes of all the high mountains and plateaus of China are mainly the result of uplift since the third episode of the Himalayan movement.
Deserts normally occur in the belts on both sides of the Tropic of Cancer, the so-called tropic desert belt, and since southern China straddles the Tropic of Cancer, deserts would be expected there as well. However, the high Qinghai-Tibet Plateau blocks the westerly circulation and strengthens the East Asian monsoon, bringing abundant rain to southern China and keeping the land south of the Yangtze green, the so-called oasis of the Tropic of Cancer. In northern China, the northwest is far from the sea and blocked by mountains, so marine moisture can hardly reach it and large deserts occur; North China is also relatively dry, and only the northeast, far from the Himalayan orogen and relatively close to the ocean, is comparatively wet. In this way the Himalayan orogeny determined the geographical and climatic pattern of China.", "Atmosphere: the outermost gaseous sphere held around the earth's surface by the earth's gravity; from the ground upward it is divided into the troposphere, stratosphere, mesosphere, thermosphere and exosphere. Hydrosphere: the continuous sphere formed by the earth's surface water in gaseous, solid and liquid forms; the oceans account for 97%, glaciers 2.15% and groundwater 0.6%. Biosphere: the continuous layer at the earth's surface composed of organisms and their zones of life activity. Water of the aeration zone (vadose water): water held in the zone of aeration. Phreatic water: gravity water with a free surface (water of the saturated zone) lying above the first stable aquiclude below the surface. Confined water: gravity water (interstratal water) held in a permeable layer between two stable aquicludes.", "The velocity of seismic waves depends on the density and elasticity of the medium. Seismic-wave velocity generally increases with depth, but there are two prominent first-order discontinuities, one distinct low-velocity zone and several second-order discontinuities. According to the physical properties of the earth's interior, especially the changes in seismic-wave velocity, the interior can be divided into three first-order spheres separated by two discontinuities: from the outside inward, the Moho surface and the Gutenberg surface divide the earth into the crust, the mantle and the core. The crust is composed mainly of rocks, the mantle mainly of peridotite rich in magnesium, iron and silicon, and the core, the true heart of the earth, mainly of iron and nickel.",
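(Illustrative aside, not from the original notes: the three first-order spheres just described can be tabulated with boundary depths; the depth figures below are common textbook averages added only for illustration.)

```python
# Layer -> (bounding discontinuity beneath it, approximate depth in km).
# The Moho lies at ~33 km under continents but only ~7 km under oceans;
# all figures are typical textbook values, not from the original notes.
EARTH_LAYERS = [
    ("crust",      "Moho surface",        33),
    ("mantle",     "Gutenberg surface",   2891),
    ("outer core", "inner-core boundary", 5150),
    ("inner core", "center of the earth", 6371),
]
for layer, boundary, depth_km in EARTH_LAYERS:
    print(f"{layer}: extends down to the {boundary} (~{depth_km} km)")
```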
"(1) Down-cutting erosion: the process by which a river, with the detrital load it carries, wears down its bed and deepens its valley. It carves deep valleys and river terraces and can also form potholes and funnels. Where the gradient is steep and the flow rapid, strong down-cutting erodes the valley into a \"V\"-shaped gorge whose sides can slope at more than 70\u00b0. The narrowest part of the famous Tiger Leaping Gorge on the Jinsha River is only 30 m wide, its steepest slopes reach 70\u00b0, and the canyon is as much as 3000 m deep. (2) Down-cut channels: where the riverbed gradient is large and the flow rapid, down-cutting is very strong and often carves deeply incised channels. (3) Waterfalls: the rocks exposed in tectonically active mountain areas are mainly igneous and metamorphic rocks, strongly folded and intensely fractured, so cliff landforms are common. Waterfalls often appear where a fault crosses a cliff, or where the lithology differs across a fault so that the riverbed becomes stepped and the water plunges from a height, forming single curtain-like falls or multi-level cascades. Once a waterfall has formed, down-cutting intensifies: the falling water strongly scours the riverbed, the spray erodes the rocks at the base and undermines the rocks above until they collapse, and the plunging water excavates a deep pool. River terraces: as an active mountain range rises, down-cutting strengthens, so that remnants of the former valley floor are left as steps on the new valley sides, forming stepped terrain along both sides of the valley.", "Magmatism: after magma forms, it rises along zones of tectonic weakness into the upper crust or spills out onto the surface; during its ascent its composition changes continuously as the physical and chemical conditions change, and it finally solidifies into rock. This complex process is magmatism, and the rock formed is magmatic rock. Main types: intrusion and eruption. When magma intrudes within the crust, the process is called intrusion and the resulting rock intrusive rock; when magma erupts directly onto the surface or into the air, the process is called extrusion or volcanism, the magma poured out at the surface is called lava, and the rock formed from it extrusive rock or volcanic rock. Deep intrusions occur as batholiths and stocks; shallow intrusions as sills, dikes, lopoliths and laccoliths.", "Lithospheric plates are delimited by belts of strong tectonic activity. According to the mode of relative motion between plates, plate boundaries fall into three types. (1) Divergent plate boundaries, i.e. the axes of the mid-ocean ridges, where the plates on either side move away from each other, the boundary is under extension, and asthenospheric material wells up and solidifies into new oceanic lithosphere; divergent boundaries are therefore also called accretionary or constructive boundaries. (2) Convergent plate boundaries, i.e. the subduction zones near the trenches or the collision zones between continental plates. Where an oceanic and a continental plate converge, the denser, lower-lying oceanic plate always subducts beneath the continental plate, forming a trench at the surface. As the oceanic plate is consumed and disappears from the surface, the continental plate behind it may collide with another continental plate of similar density, producing intense deformation, magmatism and metamorphism and building mountains; the belt of intense deformation so formed is called the plate collision zone (equivalent to what used to be called an orogenic belt). Convergent boundaries are further divided into two sub-types, subduction boundaries and collision boundaries.
After the ancient oceanic plate has disappeared, the blocks that once lay on either side of it are welded together; the surface trace of the ancient subduction zone is called the suture (or suture zone). \u2460 Subduction boundary: the adjacent oceanic and continental plates override one another, generally with the oceanic plate subducting beneath the continental plate; also called a subduction zone. There are two secondary types: the island arc-trench type, with an island arc developed above the subduction zone, seen mainly along the northwestern margin of the Pacific; and the Andean type (mountain arc-trench type), in which the oceanic plate subducts beneath a mountain arc along a continental-margin trench, seen mainly along the western edge of South America. \u2461 Collision boundary: the collision or welding zone between two continental plates. (3) Shear (transform) plate boundaries, i.e. transform faults, where the plates on either side slip horizontally past each other, generally with neither growth nor destruction of plate material. Transform faults are generally distributed near the mid-ocean ridges and sometimes extend to the continental margins, as with the San Andreas Fault in the western United States. On the basis of these three boundary types, the global lithosphere is divided into six major plates: the Eurasian, African, Indian, Pacific, American and Antarctic plates.", "The spatial universality of its subject means that geoscientists must first go into the field and observe nature, treating nature itself as the laboratory, for the vast and complex natural world cannot be moved indoors for study. Field investigation is the most basic and most important part of earth-science work, yielding first-hand information about the object of research.", "Geological age refers to the timing of the various geological events on the earth, and a stratum is the layered rock formed during a given interval of geological time. (1) Division of geochronologic and stratigraphic units: geological history is divided, on defined criteria, into time spans of different rank, the eon, era, period and epoch; these are the geochronologic units. The strata formed during the time corresponding to an eon, era, period or epoch constitute, respectively, an eonothem, erathem, system or series; these are the chronostratigraphic units. Stratigraphic units can be further divided into groups, formations and members. For example, the strata formed in the Phanerozoic Eon constitute the Phanerozoic Eonothem, the strata formed in the Paleozoic Era constitute the Paleozoic Erathem, and the strata formed in the Cambrian Period constitute the Cambrian System.", "(1) Conformity means that the ages of two sets of strata are continuous and their attitudes are concordant and parallel, indicating that deposition between them was uninterrupted; although uplift and subsidence may have alternated, deposition never ceased. (2) Parallel unconformity, also called disconformity (pseudo-conformity), means that two sets of strata are superposed with essentially the same attitude, but their ages are discontinuous and the sediments (strata) of certain ages are missing.
This contact relationship indicates that during the interval the area was uplifted and the land eroded, producing an uneven erosion surface, the unconformity surface, between the two sets of strata; the interval of the missing strata is the period of crustal uplift. The parallel unconformity records uplift and subsidence of the crust, movements that change the distribution of land and sea, so it can also be said to reflect epeirogenic (land-forming) movement. Angular unconformity: the two sets of strata are not parallel, their ages are discontinuous, and strata are missing between them (deposition was interrupted), showing that before the upper set of strata formed there was horizontal compression and uplift that set the two sets at an angle. The angular unconformity records significant horizontal compression with accompanying uplift and subsidence; in contrast to the parallel unconformity, it is generally held to reflect orogenic movement.", "Granite is formed by magmatism. Magmatism: after magma forms, it rises along zones of tectonic weakness into the upper crust or spills out onto the surface; during its ascent its composition changes continuously as the physical and chemical conditions change, and it finally solidifies into rock; this complex process produces magmatic rock. Sandstone is formed by sedimentation, the accumulation of material carried by the various agents in new places when the kinetic energy of the medium decreases, the physical and chemical conditions change, or organisms act. Gneiss is formed by metamorphism, the geological process in which, under particular geological conditions and environments at depth, changes in physical and chemical conditions transform the material composition, texture and structure of pre-existing rocks essentially in the solid state, producing new rocks. Sedimentation is produced by exogenic geological processes, which are driven mainly by energy from outside the earth and occur at the earth's surface; magmatism and metamorphism are produced by endogenic geological processes, which are driven by the earth's internal energy and occur mainly within the earth.", "The research object of historical geology is the strata formed during geological history. A stratum is the totality of layered rocks preserved at the earth's surface, including sedimentary, volcanic and metamorphic strata. The content of historical geology covers the formation of the earth, the origin of life, the evolution of organisms, changes in palaeogeography, the convergence and breakup of plates, and the interaction of the earth's spheres.
It can be further subdivided into three aspects: first, the study of the sequence and age of strata, the division of stratigraphic units, the establishment of stratigraphic systems and the correlation of strata in time and space (stratigraphy); second, the study of the palaeoenvironments, palaeogeography and their evolution recorded by the sedimentary components, sedimentary facies and their distribution in time and space (sedimentary palaeogeography); third, the reconstruction, from sedimentary assemblages, sedimentary palaeogeography, palaeobiogeography, palaeoclimate, palaeomagnetism and other tectonic indicators, of the palaeotectonic setting, the distribution of the ancient plates and their history of convergence and breakup (historical geotectonics). Its tasks are: (1) to study the biological evolution of the organic world during geological history; (2) to study the sedimentary record of palaeogeographic change during geological history; (3) to study the pattern of continental and oceanic plates, the processes of plate convergence and breakup, and the history of tectonic evolution and tectonic movement during geological history.", "Because a particular sedimentary environment hosts a particular set of physical, chemical and biological processes, it produces a distinctive combination of sedimentary characteristics. Sedimentary characteristics that can reflect the conditions of the depositional environment are called facies indicators. They fall mainly into three categories: biological indicators, physical indicators, and rock-and-mineral (geochemical) indicators. Biological indicators: (1) index (facies) fossils; (2) morphological-functional analysis; (3) palaeoecological analysis of communities. Physical indicators: (1) sediment color; (2) sediment texture; (3) primary sedimentary structures: bedding structures, bedding-plane structures, penecontemporaneous deformation structures, and structures of biological and chemical origin. Rock-and-mineral indicators: (1) the textural components of the sediment; (2) authigenic minerals. Other indicators: (1) the vertical succession of sedimentary facies; (2) the lateral variation of sedimentary facies; (3) the spatial geometry of the sedimentary body.", "Sedimentary facies of the continental environment: glacial deposits, including moraine deposits, glaciofluvial deposits and glaciomarine gravel deposits; river deposits, including channel lag deposits, point-bar deposits, natural levee deposits, crevasse splay deposits, floodplain deposits, flood-basin lake deposits, flood-basin swamp deposits and oxbow lake deposits; lake deposits, including clastic deposits, chemical deposits and biological deposits.
Sedimentary facies of the marine-continental transitional environment: essentially the delta environment, comprising the delta plain, the delta front and the prodelta. Sedimentary facies of the marine environment: terrigenous clastic shore and shallow-sea deposits, including tidal-flat deposits and barrier bar-lagoon deposits; shallow-water carbonate deposits, including carbonate tidal-flat deposits, reef deposits and shelf carbonate deposits; bathyal and abyssal deep-sea deposits, including pelagic and hemipelagic background deposits and marine gravity-flow event deposits.", "A vertically superposed binary structure of bottom deposits (coarse channel sediments) overlain by top deposits (fine levee and floodplain sediments), repeated in many cycles. In intermontane reaches the channel is relatively straight, the flow fast and the incision deep, so mainly the coarse clastic channel deposits are preserved; in plain areas the flow is slower, the valley wide and meanders well developed, so channel, point-bar and floodplain deposits are all represented.", "The sedimentary facies are distributed roughly in concentric belts in plan; from the delta plain to the prodelta the grain size changes from coarse to fine, plant remains and terrestrial fossils decrease while marine fossils increase, and the varied types of cross-bedding give way to simple horizontal bedding.", "The water body is closed and the hydrodynamic agents are waves and lake currents; from the lakeshore to the deep lake the sediment becomes finer and the bedding changes from cross-bedding to horizontal bedding. In humid climates lake deposits are mainly fine sandstone, siltstone and claystone, with coarse clastic deposits along the shore and often a distinctive lacustrine biota; in arid climates lakes are dominated by chemical deposits, mud cracks and similar structures are common along the shore, and fossils are generally scarce.", "Stratigraphic division refers to organizing strata into stratigraphic units of different categories according to their material properties; stratigraphic correlation refers to the correlation and extension of stratigraphic units between different areas. Main principles: units are correlated by the equivalence of the material properties of the strata; the boundaries of different categories of stratigraphic units do not necessarily coincide. Main methods: lithological methods (lithologic-association method, marker-bed method, comparison of stratigraphic structure); biostratigraphic methods (index-fossil method, fossil-assemblage method); the method of tectonic movement surfaces; isotopic dating; and magnetostratigraphic correlation.", "Unconformable contacts: angular unconformity, separating folded or tilted lower strata from horizontal upper strata; nonconformity, the surface separating a sedimentary cover from underlying magmatic or deeply metamorphosed rocks; disconformity (pseudo-conformity), the surface between upper and lower strata of parallel or nearly parallel attitude that bears irregular marks of erosion and exposure.
Conformable contacts: diastem, a minor break within the strata caused by an interruption of deposition or a change of depositional environment; continuous contact, the relationship between beds formed by uninterrupted deposition.", "Chronostratigraphic units: eonothem, erathem, system, series, stage and chronozone; geochronologic units: eon, era, period, epoch, age and chron; lithostratigraphic units: group, formation, member and bed; biostratigraphic units: range zone, assemblage zone, acme zone, lineage zone and interval zone.", "A lithostratigraphic unit is a stratigraphic unit defined by the vertical differences in the lithologic character of the strata, from which the stratigraphic system and sequence are established. Lithostratigraphic units comprise the group, formation, member and bed. Group: a unit above the formation, a combination of formations united by similar lithology, related genesis and similar structural type; generally a group consists of formations of similar lithology and structure and related origin, and its top and bottom boundaries are unconformities or clearly marked conformable surfaces. Formation: the basic unit of the lithostratigraphic hierarchy, a body of strata of relatively uniform lithology and definite structural type; its top and bottom boundaries are distinct, either unconformities or marked conformable surfaces, and no unconformity may occur within a formation. Member: a unit below the formation, a subdivision of it based on differences in lithology, structure and genesis within the formation; its boundaries should also be distinct, generally marked conformable surfaces. Bed: the smallest lithostratigraphic unit, of two kinds: a combination of beds of the same or similar lithology, or of basic sequences of the same structure, used for subdividing field sections; or a bed of distinctive lithology and conspicuous character, used as a marker bed or special bed in regional geological mapping.", "A chronostratigraphic unit is the body of strata formed during a specific interval of geological time; it comprises all, and only, the strata formed within that interval. From high to low the ranks are six: eonothem, erathem, system, series, stage and chronozone, corresponding to the geochronologic units eon, era, period, epoch, age and chron.
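(Illustrative aside, not from the original notes: the rank-for-rank correspondence just stated, restated as data; the dictionary itself is only a sketch.)

```python
# Chronostratigraphic rank -> corresponding geochronologic rank,
# from highest to lowest, as listed in the answer above.
UNIT_PAIRS = {
    "eonothem": "eon",
    "erathem": "era",
    "system": "period",
    "series": "epoch",
    "stage": "age",
    "chronozone": "chron",
}
# Example: the strata of the Paleozoic Era form the Paleozoic Erathem,
# i.e. the stratal counterpart of an era is an erathem.
assert UNIT_PAIRS["erathem"] == "era"
```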
Eonothem: the largest chronostratigraphic unit, corresponding to the eon, divided according to the broadest stages of biological evolution, i.e. the existence and mode of life; Erathem: the second-rank unit, corresponding to the era, divided according to the overall aspect of the biosphere and the stages of crustal evolution; System: the unit below the erathem, corresponding to the period, divided mainly according to the stages of evolution of the biota; Series: the subdivision of the system, corresponding to the epoch; one period can generally be divided into two or three epochs according to the aspect of the biota; Stage: the most basic chronostratigraphic unit, corresponding to the age, divided mainly according to evolutionary features at the family and genus level; Chronozone: the lowest chronostratigraphic unit, corresponding to the geochronologic unit chron, comprising all the strata formed during one chron and divided according to the evolution of genera and species.", "Sediment composition and sedimentary assemblage (sedimentary formation): the tectonic state of the crust is uneven, and the amplitude and rate of vertical and horizontal crustal movement differ from region to region, strongly affecting the transport distance, degree of sorting and rate of deposition of sediments, which in turn is reflected in their composition. Analysis of sediment thickness: together with the palaeo-water depth, sediment thickness is an important means of judging the amplitude and rate of vertical crustal movement. Analysis of sedimentary facies and sedimentary palaeogeography: the analysis of sedimentary facies and environments is an important method both for studying the origin of strata and for studying the tectonic setting in which they formed. Analysis of sedimentary basins: the types and characteristics of sedimentary basins are closely tied to their tectonic positions, the tectonic state determining the type and character of the basin. Analysis of tectonic movement surfaces: tectonic movement surfaces, i.e. unconformities and disconformities, are the direct record of crustal movement; an unconformity, whether angular or parallel, results from crustal uplift and denudation.", "Compensated basin: a sedimentary basin in which the subsidence of the basement broadly keeps pace with deposition, so the palaeo-water depth remains unchanged and the lithofacies show no major change. Under-compensated (starved) basin: a basin in which basement subsidence outpaces deposition, so the water deepens and, despite a long span of geological time, the sedimentary fill remains thin. Over-compensated basin: a basin in which deposition outpaces basement subsidence, so the water shallows and the sediment thickness exceeds the subsidence of the basement.",
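(Illustrative aside, not from the original notes: the three basin types above reduce to comparing deposition rate with basement subsidence rate; the 5% tolerance used to treat the rates as "generally consistent" is an arbitrary assumption.)

```python
def basin_type(subsidence_rate, deposition_rate, tol=0.05):
    """Classify a basin by the rate balance defined in the answer above.
    Both rates must be in the same units (e.g. m per Myr)."""
    if abs(deposition_rate - subsidence_rate) <= tol * subsidence_rate:
        return "compensated basin (water depth unchanged)"
    if deposition_rate < subsidence_rate:
        return "under-compensated (starved) basin (water deepens, thin fill)"
    return "over-compensated basin (water shallows, thick fill)"

print(basin_type(100, 102))  # compensated
print(basin_type(100, 60))   # under-compensated (starved)
print(basin_type(100, 160))  # over-compensated
```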
"Tracing of geosutures: plate collision and subduction leave a record of the plates' junction and collision, the geosuture. The geosuture itself is a huge, complex deep fault zone cutting the whole lithosphere, and the geological histories of the blocks on its two sides often differ greatly; distinctive geological records such as ophiolite suites, mélanges and high-pressure metamorphic belts are distributed discontinuously along the suture. Identification of ancient continental margins: the belt between a plate suture and the continental interior is the plate's ancient continental margin, whose sedimentary and structural characteristics differ both from the stable cratonic basins within the continental plate and from the ocean basins. Palaeomagnetic method: by demagnetization techniques the magnetization acquired when a rock formed can be recovered and the palaeolatitude calculated; from the locations of the sampling points, the palaeolatitude of the ancient plate can be determined. By analyzing the palaeomagnetic palaeolatitudes of the same plate at different times, the direction and distance of its movement can be deduced; the position of the ancient magnetic pole can also be calculated, and the orientation of the ancient plate restored from the magnetic declination and the pole position. Systematic study of the ancient magnetic poles of different plates at different times yields the track of the poles through time, the apparent polar wander path. Biopalaeogeography: biopalaeogeographic provinces are geographical divisions with important differences in biological taxa and evolutionary lineages, produced by long-term temperature control and geographical isolation; continents separated by a wide ocean basin often carry different biotas. Palaeoclimate analysis: palaeoclimate is the synthesis of climatic factors such as rainfall, temperature, wind force and wind direction during geological history. Because of relative plate motion, two plates that originally lay at very different latitudes may now be adjacent, so sediments and biotas reflecting very different climates can sit side by side, providing an important basis for locating plate boundaries. Characteristics of magmatic rock assemblages: the various magmatic rock assemblages of the lithosphere are also controlled by tectonic setting; the ocean crust, island arcs and stable continental plates each carry distinctive assemblages, and from the spatial distribution of these assemblages the plate configuration or the position of a suture can be inferred.", "The cyclic model of continental breakup and ocean-basin evolution. Embryonic stage: a continental rift forms by extension within a continental plate; the modern example is the East African Rift. Initial ocean-basin stage: the continental crust splits to form a long, narrow trough in which oceanic crust appears locally; the modern example is the Red Sea. Mature ocean stage: the mid-ocean ridge keeps spreading outward and subduction has not yet begun at the ocean margins, so the basin widens rapidly.
The modern example is the Atlantic Ocean. Declining ocean stage: the mid-ocean ridge continues to spread, but subduction begins along the ocean margins, so the basin contracts and its area shrinks; the modern example is the Pacific Ocean. Remnant ocean-basin stage: as the continental plates converge, the oceanic crust is rapidly consumed and the basin shrinks sharply, leaving small remnant basins; the modern example is the Mediterranean. Terminal stage: with the collision of the continental plates the basin finally closes, the sea disappears and an orogenic belt forms, remnants of oceanic crust (ophiolite suites) being preserved along the collision zone (the ancient suture); the modern example is the Alpine-Himalayan orogenic belt.", "Caledonian tectonic stage: late Neoproterozoic to Silurian (800 Ma~416 Ma); Hercynian tectonic stage: Early Devonian to Late Permian (416 Ma~254 Ma); Old Alpine tectonic stage: Early Triassic to Late Cretaceous (254 Ma~65 Ma); Indosinian tectonic stage: Early Triassic to Middle Triassic (254 Ma~227 Ma); Yanshanian tectonic stage: Late Triassic to the end of the Cretaceous (227 Ma~65 Ma); Neo-Alpine tectonic stage: Cenozoic (65 Ma~); Himalayan tectonic stage: Cenozoic (65 Ma~).", "Metamorphic-thermal events, magmatic events and tectonic movement events", "It refers to the tectonic movement that occurred in North China at the end of the Paleoproterozoic, accompanied by extensive magmatism and metamorphism and by strong folding and denudation of the crust. This movement further consolidated the scattered Archean continental nuclei into a larger continental block, the proto-platform of North China, the embryonic form of the North China plate. From the Luliang period onward, five blocks that would make up the later Chinese mainland can be recognized: the proto-Sino-Korean block, the Yangtze block, the Cathaysian block, the Harbin block and the Junggar block. Of these, the proto-Sino-Korean block was first assembled during the Luliang period from small landmasses such as the Tarim craton and the Sino-Korean craton, forming a unified crystalline basement; the other blocks also underwent tectonic movements of varying intensity during the Luliang period but failed to form unified crystalline basements.", "In the late Caledonian, an east-west trending fold belt formed on the north side of the North China plate and was welded to its northern margin; on the south side the ancient Qilian Ocean and the North Qinling Ocean disappeared, and the Qaidam plate and the Qinling microplate collided with the North China plate; the Yangtze plate collided with the Cathaysian plate to form the southeastern orogenic belt. Except for the residual trough in the Qinzhou-Fangcheng area of southeastern Guangxi and the continental Devonian of the eastern Yunnan belt, the rest of South China was denuded upland or mountains.
The Caledonian movement united South China into a single plate and changed the palaeogeographic pattern of eastern China to \"sea in the south, land in the north\": bounded by the Qinling Mountains, the region to the south was sea and the region to the north was land; a South Qinling rift lay along the northern margin of the Yangtze plate, and the East Qinling collided and docked at the end of the Caledonian.", "The Dongwu (Soochow) movement was a crustal movement in the early Late Permian that triggered transgressions and regressions, sedimentary cycles, lithofacies changes, biotic turnover and volcanism in South China. Its manifestations: large-scale regression, basalt eruptions, and the uplift of the Cathaysian old land.", "In South China the Indosinian movement produced the Hunan-Guizhou-Guangxi highland, separated the eastern and western sea basins, and made the east-west differentiation distinct; the volcanism in the small eastern basins reflects the role of the Pacific plate. It caused large-scale regression in the late Middle Triassic, raised the South China plate above sea level and merged it with the North China plate; the Qinling Ocean disappeared, forming the Qinling fold belt; and the southern Guizhou and Youjiang rift troughs were folded and uplifted. The influence of the Indosinian movement on eastern China: changes in palaeogeography, palaeoclimate and palaeotectonics proceeded in stages, the Triassic of South China and North China showing a twofold division in time, with large-scale regression and a palaeoclimatic change (from arid to humid) occurring together; eastern China changed from north-south confrontation to east-west differentiation, the west carrying large stable basins (the Sichuan, Ordos and Junggar basins) and the east small fault basins; several plates were amalgamated, forming large-scale Indosinian fold belts, after which the major plates of China were essentially assembled; magmatism and mineralization: magmatism at 190-230 Ma produced endogenetic metal deposits in the middle and lower reaches of the Yangtze, the Qinling Mountains and the Sanjiang area, while the circum-Pacific belt began to develop. After the Indosinian movement, the palaeogeographic pattern of China changed from \"sea in the south, land in the north\" to east-west differentiation.", "The Yanshan movement refers to the tectonic movement widespread in China from the Late Triassic to the Cretaceous, named after the Yanshan Mountains near Beijing as its type area. West of the Great Khingan-Taihang line, tectonic activity was weak, with little magmatism or folding of the strata; east of the Great Khingan and Taihang Mountains, tectonic activity was strong, expressed as crustal rupture and the formation of many fault basins.
Volcanic-sedimentary rocks are widely developed in these basins, the strata are strongly deformed and folded, widely distributed unconformable contacts formed, and the strata suffered varying degrees of metamorphism; well-developed tectonic mélanges formed in the Nadanhada and Taiwan areas; and the intense magmatism created the world-famous circum-Pacific metallogenic belt.", "The Himalayan movement is generally divided into three episodes: the first occurred at the end of the Eocene and the beginning of the Oligocene, when the Qinghai-Tibet region emerged as land and became an area of denudation; the second occurred in the Miocene, with large crustal uplift accompanied by large-scale faulting and magmatism; the third occurred at the end of the Pliocene and the beginning of the Pleistocene, when the Qinghai-Tibet Plateau as a whole rose strongly, forming the modern landscape pattern. The present altitudes of all the high mountains and plateaus of China are mainly the result of uplift since the third episode. The movement built the mountains of West Asia, the Middle East, the Himalayas, western Myanmar and Malaysia, and the western Pacific island arcs including Taiwan Island, and closed the ancient Mediterranean between China and India. As a result the elevation difference between western and eastern China increased, the monsoon circulation strengthened, and the natural geographical environment underwent marked regional differentiation: the Qinghai-Tibet region was uplifted into the highest plateau in the world, its Paleogene-Neogene tropical and subtropical environment replaced by alpine desert; Northwest China became increasingly arid as its inland character intensified; the east became a humid monsoon region.", "How to identify sedimentary rocks, igneous rocks and their main rock types? (15 points) Answer: 1. They can be identified by geological occurrence, i.e. by their different origins: igneous rocks are formed by magmatism and include intrusive rocks formed by intrusion, volcanic rocks formed by eruption, and subvolcanic rocks between the two; sedimentary rocks are formed by weathering and denudation, transportation, sedimentation, and final consolidation and diagenesis. (4 points) 2. They can be distinguished by texture and structure: igneous rocks show holocrystalline, aphanitic, glassy, porphyritic and porphyry-like textures, while sedimentary rocks generally show clastic, argillaceous, chemical or biogenic textures; igneous rocks show massive, rhyolitic (flow), vesicular and amygdaloidal structures, while sedimentary rocks show bedded structures, bedding-surface structures (such as ripple marks and mud cracks) and biogenic trace structures. (5 points) 3. The main rock types of igneous rocks are ultrabasic, basic, intermediate and acidic rocks, plus the vein rocks; sedimentary rocks can be divided by genesis and composition into clastic rocks, chemical rocks and biochemical rocks. In addition, there are some sedimentary rocks formed under special conditions.
Clastic rocks mainly include sedimentary clastic rocks and volcaniclastic rocks. Sedimentary clastic rocks are named according to grain size.", "Clastic rocks are usually composed of clasts of several grain-size grades. The rock name is based on the grade making up ≥ 50% of the rock; a grade making up 25% to <50% is used as the main adjective before the rock name, in the form \"××-rich\"; a grade making up 10% to <25% is written in front as \"containing ××\", e.g. \"medium sandstone containing coarse sand, rich in fine sand\". When no grain-size data are available and only visual identification is possible, the basic rock name (e.g. fine conglomerate, coarse sandstone) can still be determined, and sometimes the main adjective as well (e.g. gravelly sandstone).", "Carbonate rocks are mainly composed of five structural components: grains, lime mud (micrite), cement, crystal grains and biological framework.", "Three deformation stages of rock: elastic deformation, plastic deformation and rupture.", "Five modes of deformation: stretching, compression, shearing, bending and twisting.", "Geochronologic units are the time scale for recording relative geological age. The universally used geochronologic units, from large to small, are the five basic units: eon, era, period, epoch and age.", "A chronostratigraphic unit is the body of strata formed during a geochronologic unit. From old to new, the corresponding five basic units are: eonothem, erathem, system, series and stage.", "According to elevation and relief, land topography can be divided into mountain, hill, plain, plateau and basin.", "Classification a) Attitude classification of folds, based on the dip of the axial plane (nearly vertical 90°-80°, inclined 80°-20°, nearly horizontal 20°-0°) and the plunge of the hinge (nearly horizontal 0°-10°, plunging 10°-70°, nearly vertical 70°-90°). Seven types are distinguished: I. upright horizontal fold; II. upright plunging fold; III. vertical fold; IV. inclined horizontal fold; V. inclined plunging fold; VI. recumbent fold; VII. reclined fold. The axial planes of the first three types are vertical, indicating that the two limbs dip in opposite directions with equal dip angles; the axial planes of types IV and V are inclined, indicating that the dip angles of the two limbs are unequal; in type VI (recumbent) and type VII (reclined) folds one limb is overturned (faces downward). 
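The seven-type attitude scheme above amounts to a pair of angle tests on the axial plane and the hinge. Below is a minimal Python sketch, assuming the angle brackets given in the table; separating type V from type VII also needs the pitch of the hinge within the axial plane (about 80°-90° for reclined folds, as discussed just below), which the sketch only flags.

```python
def classify_fold_attitude(axial_plane_dip: float, hinge_plunge: float) -> str:
    """Rough attitude classification of a fold (angles in degrees, 0-90),
    following the seven-type scheme in the notes above."""
    if hinge_plunge > 70:
        return "III. vertical fold"               # hinge nearly vertical
    if axial_plane_dip < 20:
        return "VI. recumbent fold"               # axial plane nearly horizontal
    if axial_plane_dip > 80:                      # axial plane nearly vertical
        return ("I. upright horizontal fold" if hinge_plunge <= 10
                else "II. upright plunging fold")
    # axial plane inclined (20-80 degrees)
    if hinge_plunge <= 10:
        return "IV. inclined horizontal fold"
    # V vs VII also depends on the pitch of the hinge on the axial plane
    # (about 80-90 degrees for a reclined fold), not modelled here.
    return "V. inclined plunging fold (VII. reclined if hinge pitch ~80-90)"

print(classify_fold_attitude(85, 5))   # -> I. upright horizontal fold
```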
The reclined fold is characterized by the plunge direction of the hinge and the dip direction of the axial plane being basically the same, with the axial plane dipping at 20°-80°, the hinge plunging at 10°-70°, and the pitch of the hinge on the axial plane being 80°-90°. b) Classification by the ideal geometric form of the fold; c) classification by the thickness change of the folded layer; d) classification by the geometric relationship among the folded surfaces: the most basic combinations can be summarized as three types, holomorphic folds, discontinuous folds and transitional folds.", "(1) Geometric classification of faults: 1. by the relationship between fault strike and stratum strike: strike fault, dip fault, oblique fault, bedding fault; 2. by the geometric relationship between fault strike and the regional tectonic line: longitudinal fault, transverse fault, oblique fault. (2) Classification by the relative movement of the two walls of the fault.", "Tectonic movement, magmatism, earthquake and metamorphism.", "According to the mode of action, external geological processes can be divided into weathering, denudation, transportation, sedimentation and diagenesis.", "Divergent boundaries, convergent boundaries and transform boundaries.", "According to the stress environment in which they form, joints can be divided into tension joints and shear joints.", "According to the attitudes of the strata above and below the unconformity surface and the crustal movement they reflect, unconformities can be divided into parallel unconformity and angular unconformity. Parallel unconformity: the attitudes of the two sets of strata are parallel to each other, but strata of certain ages are missing between them. Formation process: subsidence and sedimentation → uplift, break in sedimentation and denudation → renewed subsidence and sedimentation. Angular unconformity: strata are missing between the upper and lower sets, and their attitudes differ. Formation process: subsidence and sedimentation → horizontal crustal movement with folding and uplift (often accompanied by faulting, magmatism, regional metamorphism, etc.), break in sedimentation and denudation → renewed subsidence and sedimentation.", "By channel shape, bifurcation parameter and sinuosity, modern rivers can be divided into straight, meandering, braided and anastomosing (reticulated) rivers. Straight rivers, with small curvature and a sinuosity index (sinuosity index = channel length / valley length, illustrated in the sketch below) < 1.5, usually occur only over short distances in the upper reaches of large rivers, or in small rivers. The meandering river is a single-channel river with a sinuosity index > 1.5, distributed mainly in the middle and lower reaches. The braided river is a multi-channel river formed by repeated branching and converging; its channels are wide, shallow and of low sinuosity, and because the channels are unstable and frequently divert and migrate it is also called a wandering river; it develops mainly in the middle and lower reaches, and can also develop on alluvial fans and in glaciofluvial areas. 
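The sinuosity index just defined gives a quick numeric test for channel pattern. A minimal Python sketch follows, assuming single- versus multi-channel character is known independently (the notes separate braided from anastomosing rivers by channel form as well, so the flag here is a simplification):

```python
def classify_channel(channel_length_km: float, valley_length_km: float,
                     multi_channel: bool = False) -> str:
    """Classify a river by the sinuosity index from the notes
    (channel length / valley length): < 1.5 straight, > 1.5 meandering.
    For multi-channel rivers the same cut separates wide, shallow,
    low-sinuosity braided rivers from narrow, deep, curved anastomosing
    (reticulated) ones."""
    sinuosity = channel_length_km / valley_length_km
    if multi_channel:
        return ("braided (wandering)" if sinuosity < 1.5
                else "anastomosing (reticulated)")
    return "straight" if sinuosity < 1.5 else "meandering"

print(classify_channel(42.0, 20.0))  # sinuosity 2.1 -> meandering
```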
The anastomosing (reticulated) river is a curved multi-channel river with narrow, deep channels, developed mainly in the middle and lower reaches in a net-like pattern.", "In the shallow-water zone where a river enters a lake the sediments are coarsest, mainly sandy, with small cross-bedding developed; the sandstone is lenticular with few biological fragments. The main body of the delta foreset is finer than the topset, composed mainly of siltstone and fine sandstone, well sorted, with little cross-bedding, dominated by the massive and graded bedding formed by rapid deposition; bioturbation is strong and the sand bodies are lenticular. Shallow-lake deposits thicken toward the bottom, are fine grained, composed mainly of silt and mud, dominated by horizontal bedding, and contain abundant bioclasts.", "Nine facies belts: (1) basin; (2) open shelf; (3) toe of carbonate slope; (4) platform foreslope; (5) platform-margin reefs; (6) platform-margin shoals; (7) open platform; (8) restricted platform; (9) platform evaporite basin. These belts group into the basin area (X zone), the platform-margin area (Y zone) and the platform area (Z zone).", "The concave bank is strongly eroded by the transverse circulation: the lower part of the valley slope is undercut, the overlying rock loses support and collapses, so the concave-bank slope retreats gradually toward the downstream side of the bend. While the concave bank undergoes lateral erosion, the bottom current continually moves the products of its destruction to the convex bank; when the kinetic energy of the bottom current is reduced by riverbed friction, the load is deposited on the downstream side of the convex bank, so deposition makes the convex bank grow in the downstream direction. As the concave bank keeps retreating and the convex bank keeps advancing, the curvature of the channel keeps increasing and the river becomes more sinuous.", "During relatively stable periods of crustal movement the river is dominated by lateral erosion, and the valley keeps shifting sideways to form a wide valley. If the floodplain built of alluvium in the valley is uplifted by crustal movement, or the river's base level of erosion is lowered, downcutting is renewed and strengthened, the riverbed is lowered, and the former floodplain is relatively raised beyond the reach of ordinary floods, forming a flat, step-like landform on the valley side that floods cannot submerge, called a river terrace. Since the rise and fall of the crust changes the intensity of river erosion, river terraces are often regarded as a sign of crustal movement.", "Tension joint: a joint formed in rock under tensile stress. Its main characteristics: (1) The fracture surface is rough and uneven; tension joints developed in conglomerate or coarse sandstone commonly pass around the grains rather than cutting through them. (2) The joint attitude is unstable, and on the plane the joint is commonly winding or serrated. (3) The joint dies out after extending a short distance along strike, but another tension joint of the same orientation may appear beside it, forming an en échelon pattern on the plane. (4) The two walls are open, with an aperture visible to the naked eye. 
Sometimes the aperture is large at the top, wedge-shaped, and dies out downward. (5) Tension joints are usually sparsely developed, with large spacing and low frequency. (6) The tail ends often branch or form almond-shaped terminations of irregular direction and shape.", "Shear joint: a joint formed in rock under shear stress. Its main characteristics: (1) The joint surface is straight and smooth, and in conglomerate or coarse sandstone it commonly cuts through the grains rather than passing around them. (2) The joint attitude is relatively stable, and the joint extends in a straight line on the plane. (3) It extends far along strike, sometimes forming a fairly regular plume (feather) structure beside the joint, from which the sense of shear (rotation) can be determined: observing perpendicular to the joint trend, if the front joint steps to the right of the rear joint the shear sense is dextral, and vice versa. (4) The two walls are closed and the aperture is very small; sometimes only a hairline seam is visible to the naked eye. (5) Shear joints are densely developed, with small spacing and high frequency. (6) The rocks on the two sides of a joint often show small displacement along the joint surface, offsetting markers across the joint, and scratches may be left on the joint surface. (7) The tail-end variations can be divided into horsetail, diamond and fork types.", "The outcrop trace of inclined strata depends on the attitude of the strata, the topography, and the relationship between the two. Where inclined strata crop out across a valley or a ridge, the trace is \"V\"-shaped; the form of the \"V\" differs with the attitude of the strata and the direction and angle of the ground slope. This rule is called the \"V\"-shape rule. (a) When the strata dip opposite to the ground slope, the outcrop trace bends in the same direction as the topographic contours: in a valley the \"V\" points upstream, and crossing a ridge the \"V\" points down the ridge slope, but the curvature of the outcrop trace is always smaller than that of the contours. (b) When the strata dip in the same direction as the ground slope and the dip angle is greater than the slope angle, the outcrop trace bends opposite to the contours: in a valley the \"V\" points downstream, and on a ridge it points toward the same slope as the ridge. (c) When the strata dip in the same direction as the ground slope but the dip angle is less than the slope angle, the outcrop trace bends in the same direction as the contours: in a valley the \"V\" points upstream and on a ridge it points down the ridge slope; this differs from case (a) in that the curvature of the outcrop trace is greater than that of the contours.", "1) Synsedimentary faults are faults that develop during sedimentation, i.e., faulting while deposition proceeds. 2) They have many synonyms, including syngenetic (contemporaneous) fault, growth fault and sedimentary fault. 
3) Synsedimentary faults are mainly synsedimentary normal faults, developed chiefly at the margins or in the interior of sedimentary basins, and belong to a tectonic type of extensional environments.", "1) Most regional synsedimentary faults extend along the basin margin and are arcuate or arranged in arcuate echelons on the plane, with the concave side of the arc facing the downthrown wall; the fault system steps down from the basin margin toward the basin centre, from older to younger. 2) In profile, synsedimentary faults are mostly steep above and gentle below, curving in a listric (plough-shaped) form with the concave side upward. 3) The strata on the downthrown wall are markedly thickened; the activity intensity of the fault is usually measured by the \"growth index\", the ratio of the thickness of a given interval on the downthrown wall to that on the upthrown wall. 4) Synsedimentary faults often develop associated gravity-slide (sedimentary sliding) structures. 5) Synsedimentary faults often develop \"reverse drag\" on the hanging wall, forming rollover anticline structures.", "1) The main active period of a synsedimentary fault is often the main development period of the oil-generating sag; as the synsedimentary fault advances, the depocentre also shifts, directly controlling the distribution of the oil-generating sag. 2) The secondary synsedimentary fault zones within the sag extend arcuately and step toward the centre of the oil-generating sag, placing the faults in the most favourable position for oil and gas migration and accumulation. 3) The structures associated with synsedimentary faults (rollover anticlines and tilted fault blocks) are the most favourable trap types. 4) Because synsedimentary faults form early and last long, they keep oil and gas in a state of migration, accumulation and re-concentration over a long time, favouring the formation of multiple trap types and superimposed, composite deep-and-shallow oil and gas accumulations.", "The characteristics of a synsedimentary anticline are a thin crest and thick limbs, gentle upper layers and steeper lower layers, and migration of the high point; its relationship to oil and gas accumulation is that a long-lived palaeo-uplift with continuously developing traps and favourable facies belts of good reservoir properties is where oil and gas are highly concentrated, notably at the saddle of the palaeo-uplift. 
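The \"growth index\" used above to measure the activity of a synsedimentary fault is a simple thickness ratio; here is a minimal sketch, with invented example thicknesses for illustration:

```python
def growth_index(downthrown_thickness_m: float,
                 upthrown_thickness_m: float) -> float:
    """Growth index of a synsedimentary (growth) fault: thickness of a
    stratigraphic interval on the downthrown wall divided by the thickness
    of the same interval on the upthrown wall. A value > 1 indicates the
    fault was active while the interval was being deposited."""
    return downthrown_thickness_m / upthrown_thickness_m

# hypothetical example: 300 m on the downthrown wall vs 150 m opposite
print(growth_index(300.0, 150.0))  # -> 2.0
```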
Because flow resistance is small, the sandstone forms tongue-shaped protrusions that thin toward the top; crossing the crest of the palaeo-uplift, it pinches out on the slack-water side and, together with other factors, forms lithologic reservoirs.", "1. The outer three spheres: (1) the atmosphere, (2) the hydrosphere, (3) the biosphere. 2. The inner three spheres: (1) the crust, (2) the mantle, (3) the core.", "Moho surface: the first-order discontinuity between the crust and the mantle; Gutenberg surface: the first-order discontinuity between the mantle and the core.", "The upper crust (layer A'), similar in composition to granite, is called the granitic layer, also the sial (silica-alumina) layer; the lower crust (layer A''), similar in composition to basalt, is called the basaltic layer, also the sima (silica-magnesia) layer.", "The natural forces acting on the Earth that change its material composition, internal structure and surface morphology.", "A mineral is a naturally homogeneous body with relatively fixed chemical composition and physical properties formed by various geological processes; it is the basic unit of which rocks are composed.", "A rock is a mineral aggregate combined in a definite way by various geological processes; it is the main material constituting the crust and mantle.", "Under different external conditions (temperature, pressure and medium), substances of the same chemical composition can crystallize into two or more structurally different crystals, forming minerals of different crystal morphology and physical properties; this is polymorphism.", "Streak: the colour of a mineral's powder, usually observed as the trace left by the mineral on a streak plate (unglazed porcelain).", "Hardness: the resistance of a mineral to external mechanical action such as scratching, pressing and grinding.", "Mohs scale [standard mineral / hardness grade]: talc 1, gypsum 2, calcite 3, fluorite 4, apatite 5, orthoclase 6, quartz 7, topaz 8, corundum 9, diamond 10.", "Cleavage: the property of a mineral crystal to break along definite directions under stress, producing smooth planes.", "Fracture: the irregular surface, without fixed direction, produced when a mineral breaks under stress. The development of fracture and the perfection of cleavage vary inversely.", "Magma is a high-temperature viscous silicate melt rich in volatile components, formed naturally in the deep crust or upper mantle; it is the parent of all magmatic rocks and magmatic deposits.", "Magmatism is the whole process of magma generation, migration, accumulation, change and condensation into rock.", "Magmatic rocks are formed by the cooling and consolidation of magma produced by melting or partial melting of rocks in the mantle or crust.", "When rising magma reaches a position where the external pressure of the overlying strata exceeds the internal pressure of the magma, the magma is forced to stay within the crust and condense and crystallize; this magmatic activity is called intrusion.", "Magma breaking through the overlying strata and erupting onto the surface is called extrusive (eruptive) activity.", "1. Fissure eruption (also called the Icelandic type); 2. 
Central eruption: (1) quiet type; (2) Strombolian type; (3) explosive type.", "(1) Circum-Pacific volcanic belt; (2) Alps-Himalayan volcanic belt; (3) Mid-Atlantic Ridge volcanic belt.", "Occurrence of a magmatic rock refers to the shape and size of the rock body, its contact relationship with the surrounding rock, and the geological-tectonic environment in which it formed.", "Magmatic rocks are divided into four categories by SiO2 content: ultrabasic rocks (SiO2 < 45%), basic rocks (45%-52%), intermediate rocks (52%-65%) and acidic rocks (> 65%).", "The clastic material produced by destruction of pre-existing rocks is deposited in place or after transport, and then undergoes complex diagenesis; rocks so formed by external geological processes are sedimentary rocks.", "(1) Destruction of pre-existing rocks: 1. weathering (physical, chemical and biological weathering); 2. denudation (mechanical and chemical). (2) Transport: 1. mechanical transport (by wind, rivers, glaciers, sea water, gravity, etc.); 2. chemical transport (by rivers, lakes and seas, but not by wind or glaciers). (3) Sedimentation (mechanical, chemical and biological). (4) Diagenesis (from loose sediment to solid rock): 1. compaction; 2. dehydration; 3. cementation; 4. recrystallization.", "Under weathering, the surface layer of the crust forms a thin residual mantle, called the weathering crust, which lies discontinuously on the bedrock.", "(1) Composition of sedimentary rocks: 1. chemical composition; 2. mineral composition (clastic minerals, clay minerals, chemical and biogenic minerals). (2) Colour of sedimentary rocks. (3) Texture of sedimentary rocks: 1. clastic texture; 2. argillaceous texture; 3. chemical and biological texture. (4) Structure of sedimentary rocks: 1. bedding structure: periodic changes of climate, season and other factors during deposition inevitably change the direction and velocity of the transporting medium (such as water), so the quantity, composition, grain size and organic content of the transported material also change, and deposition may even pause for a time; the sediment therefore acquires a layered structure from the differences in composition, colour and texture in the vertical direction, generally called bedding (horizontal bedding, wavy bedding, cross-bedding); 2. bedding-surface structures (ripple marks, mud cracks, salt-crystal casts, rain prints, biogenic traces); 3. nodules (primary and epigenetic); 4. biological fossils.", "(1) Clastic rocks: 1. sedimentary clastic rocks; 2. pyroclastic rocks. (2) Chemical and biochemical rocks. (3) Special sedimentary rocks: 1. tempestite; 2. turbidite.", "Metamorphism is the geological process by which rocks, essentially in the solid state in the crust, are changed in mineral composition, chemical composition, texture and structure by temperature, pressure and chemically active fluids.", "Temperature, pressure and chemical factors.", "(1) Recrystallization of the rock is obvious. (2) 
The rock has a characteristic texture and structure, especially the schistose structure formed by recrystallization of minerals under directed pressure.", "1. Dynamic metamorphism; 2. contact metamorphism; 3. regional metamorphism; 4. regional migmatization.", "Tectonic movement: movement driven by the Earth's internal forces that deforms and displaces the crust and even the lithosphere. Geological structures: the permanent deformation of rock produced by tectonic movement.", "(1) Neotectonic movement: generally, the tectonic movement of the Neogene and Quaternary; in short, the tectonic movement of the most recent period in the history of crustal development. (2) Old tectonic movement: tectonic movement before the neotectonic movement.", "(1) Landform markers: because neotectonic movement is recent and the related landforms are well preserved, geomorphological methods are among the common methods for studying it. (2) Survey data: current tectonic movement cannot leave observable traces in landforms in the short term, so triangulation, levelling, remote sensing, astronomical surveying and other means must be used, that is, repeated observation of elevation and latitude changes at fixed points to measure the direction and rate of tectonic movement.", "(1) Stratal thickness: strata of a certain thickness form in a given sedimentary area in a given time, so analysis of stratal thickness can to a large extent yield quantitative conclusions about the amplitude of uplift and subsidence. (2) Lithofacies analysis: lithofacies are generally divided into marine, continental and transitional facies (e.g. deltaic facies at a river mouth). Lithofacies change with time and with spatial conditions: lateral (horizontal) facies changes within one layer reflect differences in sedimentary environment between areas at the same time, while vertical facies changes reflect changes in the sedimentary environment of one area through time, and such changes are often the result of tectonic movement. (3) Structural deformation: tectonic movement often changes the attitude of strata, producing folds, faults and other structural deformation. (4) Stratigraphic contact relationships (crustal subsidence causes sedimentation and uplift causes denudation, so the contact relationships recorded in the strata are also evidence of tectonic movement): A. Conformable contact: when the crust subsides relatively steadily (or rises without emerging above the sea), a continuous succession is deposited, older strata below and younger above, with no strata missing; this relationship is called conformity. B. Unconformable contact (tectonic movement often interrupts sedimentation, producing strata of discontinuous age; this relationship is called unconformity): a. Parallel unconformity: indicates that over a period of time the sedimentary area underwent marked rise and fall and the palaeogeographic environment changed markedly; b. 
Angular unconformity: indicates that over a period of time the crust underwent folding and uplift and the palaeogeographic environment changed greatly.", "A rock stratum is a layered rock body of the same or similar lithology bounded by two parallel or nearly parallel interfaces.", "Attitude: the spatial disposition of rock strata in the crust (horizontal, inclined, vertical and overturned strata).", "Folding: the bending of rock strata.", "(1) Anticline: an upward-arched bend in which the strata of the two limbs dip outward from the core; (2) syncline: a downward-sagging bend in which the strata of the two limbs dip from the two sides toward the core.", "Rocks in the crust, especially brittle rocks near the surface, readily fracture and dislocate under stress; this is generally called fracture (fault) structure.", "Joints: regular, criss-crossing fractures visible in almost all rocks.", "Fault: a fracture structure along which the rock blocks have been obviously displaced along the fracture surface.", "Fault plane, fault line and fault walls.", "a. Normal fault (hanging wall relatively down, footwall relatively up); b. reverse fault (hanging wall relatively up, footwall relatively down); c. strike-slip (translational) fault (the two walls displaced relative to each other horizontally along the fault plane); d. hinge (pivotal) fault: the movement is rotational, as if the hanging wall rotated about an axis.", "An earthquake is the rapid trembling of the lithosphere, caused when stress concentrated by tectonic movement in parts of the lithosphere deforms rocks to rupture.", "Magnitude classifies the size of the earthquake itself and is related to the energy released: a. ultra-microearthquake: magnitude less than 1; b. microearthquake: magnitude 1 to less than 3; c. weak earthquake: magnitude 3 to less than 5; d. strong earthquake: magnitude 5 to less than 7; e. great earthquake: magnitude 7 and above.", "Intensity is the degree of damage caused by an earthquake to the ground surface and buildings. An earthquake has only one magnitude, but its intensity differs from place to place.", "Circum-Pacific seismic belt, Mediterranean-Himalayan seismic belt, mid-ocean ridge seismic belt, continental rift-valley seismic belt.", "Pacific plate, Eurasian plate, Indian Ocean plate, African plate, American plate and Antarctic plate.", "Extensional (divergent) boundaries, compressional (convergent) boundaries and shear (transform) boundaries.", "1. Bases of stratigraphic division and correlation: (1) sedimentary cycles and lithological change; (2) stratigraphic contact relationships; (3) palaeontology (fossils). 2. Lithofacies-palaeogeographic analysis: (1) classification of sedimentary facies (marine, transitional, continental); (2) main bases of lithofacies analysis (fossils, lithological characters and structures, special minerals); (3) principle of lithofacies analysis: the actualistic analogy (the present is the key to the past). 3. Analysis of tectonic history.", "Sinian ice age; the oldest ice age in China: the South China ice age.", "Quaternary glaciation.", "1. The main points of the continental drift theory: (1) The continents are composed of lighter rigid sial floating on heavier viscous sima. (2) In the late Paleozoic (Carboniferous) the continents of the world were joined into a single Pangaea. 
The vast ocean surrounding Pangaea was a single pan-ocean (Panthalassa). (3) Under tidal force and the centrifugal force of the Earth's rotation, the continents began to drift toward the equator and westward from the Mesozoic onward; Pangaea gradually ruptured, separated and drifted apart, forming the basic pattern of the modern distribution of land and sea. (4) In drifting toward the equator and westward, the leading edge of each continent was compressed and folded into mountains, such as the Cordillera and the Andes; the trailing edge shed fragments through the adhesion and drag of the sima layer, forming island arcs and islands, such as those along the eastern margin of the Asian continent. The Atlantic, Indian and Arctic Oceans formed during continental drift, while the Pacific is the remnant of the pan-ocean. 2. The main arguments for continental drift: (1) the similarity of the outlines of the continental coastlines on the two sides of the Atlantic; (2) the similarity of strata; (3) the continuity of geological structures; (4) the similarity of palaeontology. 3. The fatal flaws of the theory: (1) Can continents float? The melting point of granite is lower than that of basalt; if the internal temperature were high enough to melt basalt and allow the continents to drift while granite remained solid and floating on it, that would violate the laws of physics. (2) Can continents drift? That is, the driving mechanism. Wegener held that the centrifugal force of the Earth's rotation moves continents from high latitudes toward the equator and that tidal force drifts them westward; but calculation shows these forces are several orders of magnitude too small to drive continental drift. (3) Other controversial issues: for example, if continental drift began in the Mesozoic, how are the fold mountains formed in the Paleozoic and earlier to be explained? 4. New evidence for continental drift: because many problems could not be explained, the theory declined in the 1930s; since the 1950s, progress in palaeomagnetism and marine geology has revived the once-silent theory. (1) Good fit of continental outlines; (2) similar polar-wander curves; (3) similar palaeoclimates.", "The main factors affecting and controlling metamorphism are temperature, pressure and chemically active fluids. In metamorphism these factors do not act in isolation but usually act together, coordinating and constraining one another, and play different roles in different circumstances, producing metamorphism of different character. Generally, temperature is the most important factor: rising temperature increases the activity of molecules and atoms in the rock, creating the precondition for metamorphism and mainly causing recrystallization and the formation of new minerals. Pressure acts in two ways. Static (confining) pressure, caused by the weight of the overlying material, increases with depth; it raises the temperature of metamorphic reactions and favours minerals of smaller molecular volume and greater density. Stress is directed pressure, related to tectonic movement; it is stronger in the shallow crust and weaker at depth. 
In the shallow crust the stress of crustal movement is most concentrated and mainly produces changes in rock texture (mechanical transformation). In the deep crust, because of the high temperature, chemical reactions between minerals occur readily: material dissolves in the direction of maximum stress (pressure solution) and precipitates in the direction of minimum stress, so columnar and flaky minerals form under directed pressure. The underground fluids are mainly H2O, CO2, F, Cl, B and other volatiles, generally present in intergranular pores and fractures of minerals; they may come from the pores of the protolith, from dehydration of protolith minerals, or from magma and the deep crust. Fluids act as solvents, promoting dissolution of components and increasing diffusion rates, thereby promoting recrystallization and metamorphic reactions; they can also enter metamorphic reactions as components, forming hydrous or anhydrous minerals. Aqueous solution is also the indispensable medium by which material is introduced or removed during metasomatism. These factors are not isolated but coexist, coordinate and constrain one another; in different circumstances certain factors dominate, giving metamorphism different characteristics.", "The destruction of coastal and seabed rocks by the kinetic energy of moving sea water, the dissolving action of sea water and the activity of marine organisms is called marine erosion. Marine erosion proceeds by mechanical denudation and chemical dissolution, but mainly by mechanical denudation. Mechanical denudation takes two forms: destruction of rock by the direct impact of moving sea water, called hydraulic erosion; and the rubbing and collision of gravel and sand carried by the moving water against the coast or seabed, called abrasion. The main agent of marine erosion is the waves. On a bedrock coast the waves batter the shore with surf and abrade the seabed and coastal rocks with the gravel and sand they carry; on open coasts the tide reinforces the destructive effect of the waves. Under the continuous destruction of waves and tides, a groove extending along the shore forms at the base of the shore wall near wave height, called the wave-cut notch. The notch widens and deepens, the rock above it loses support and collapses, and a steep rock wall, the sea cliff, is formed. As the sea cliff retreats under wave attack, a gently seaward-sloping bedrock platform forms in front of it, called the wave-cut platform (marine abrasion platform). Bottom currents carry the material eroded from the coast beyond the wave-cut platform to the seaward side and deposit it, building the wave-built terrace. Bedrock pillars left standing on the wave-cut platform are called sea stacks. As the wave-cut platform widens, the waves must travel an ever longer distance before striking the cliff base, consuming ever more of their energy, until finally all the wave energy is spent on the platform. 
Marine erosion then tends to cease. Thereafter, if the crust rises so that the wave-cut platform is lifted to a height the sea cannot submerge, a marine terrace is formed.", "Water is an important component of many minerals. According to its form of occurrence in minerals and its role in the crystal structure, it can be divided into: adsorbed water: neutral water molecules mechanically adsorbed on the surfaces or in the pores of mineral grains, such as the film water on clay mineral surfaces; it is completely lost at T ≈ 110 °C, its amount is not fixed, and it takes no part in the crystal composition and has nothing to do with the crystal structure. Crystallization water: takes part in the mineral crystal structure as neutral water molecules; its quantity is fixed and obeys the law of definite proportions; because of lattice binding its loss temperature is high, about 200-500 °C or even higher, and its loss destroys the structure. Structural water (constitutional water): enters the crystal structure as OH-, H+ or H3O+ ions, likewise in definite proportion; the bonding is stronger, and it is lost at 500-900 °C, released as H2O. Zeolitic water: the crystal structures of zeolite minerals contain large cavities and channels in which H2O occupies definite positions up to a fixed upper limit; its escape with changing temperature does not destroy the structure. Interlayer water: neutral water molecules between the structural units of layer silicates, such as the interlayer water of montmorillonite; its content is variable, its escape does not destroy the structure but reduces the interlayer spacing, and on rewetting it is re-absorbed and the mineral swells.", "The American scholar N. L. Bowen (1922) found that when magma cools, the main rock-forming minerals crystallize and precipitate in a definite order, divisible into two series, the continuous reaction series of the plagioclases and the discontinuous reaction series of the dark minerals; this is called the Bowen reaction series. According to the nature of the reactions, Bowen divided the main rock-forming minerals of magmatic rocks into two reaction series. The continuous reaction series consists of the framework aluminosilicate minerals (plagioclases), whose composition changes continuously and gradually without qualitative change of the internal lattice. The discontinuous reaction series consists of the Fe-Mg minerals, whose composition changes discontinuously and whose internal lattice changes qualitatively: from olivine to biotite, for example, the silicate framework changes from isolated (island) tetrahedra to sheets. During crystallization the two series are linked by successive cotectic relationships between the sialic and the ferromagnesian minerals, and in the late stage the two series merge into one, crystallizing potassium feldspar and muscovite and finally quartz, the end products of magmatic crystallization. The reaction series can be used to solve the following practical problems: (1) To determine the order of crystallization of minerals: minerals in the upper part of the series crystallized earlier than those in the lower part. 
Olivine and calcic plagioclase are obviously the earliest minerals to crystallize, while quartz is the final product of magmatic crystallization. (2) To explain the general law of mineral paragenesis in magmatic rocks: because the two series stand in cotectic relationship, when the magma cools to a given temperature a light-coloured mineral and a dark mineral must crystallize together. For example, when the magma cools to about 1550 °C, olivine, orthopyroxene and calcic plagioclase precipitate, forming ultrabasic rocks; when it cools to about 1270 °C, clinopyroxene and plagioclase precipitate together, forming basic rocks. (3) To explain the diversity of magmatic rocks: one and the same magma can form different types of magmatic rock. (4) To explain some textural features of magmatic rocks, such as the normal zoning of plagioclase and the reaction rims of dark minerals.", "In classifying rocks one must first select characteristics that can be identified objectively and that are most closely related to the genesis of the rocks as the basis of classification; secondly, the scheme should suit both field work and laboratory study. On these principles it is now agreed that a sandstone classification should reflect three things: 1. the nature of the parent rock in the source area; 2. the history of transport and abrasion, i.e. the maturity of the rock; 3. the physical condition of the medium at deposition, i.e. the fluidity factor. In terms of specific indices, therefore, the quartz, feldspar, rock fragments and clay matrix of the sandstone should be chosen as the basis of classification: these variables are easy to determine and have genetic significance, and the quantitative relationships among them reflect the genetic character of the sandstone. The unstable clastic components reflect provenance: feldspar is the sign of a granitic parent rock, rock fragments of volcanic, sedimentary and weakly metamorphosed parent rocks; the ratio of feldspar to rock fragments (F/R, the provenance index) reflects the basic character of the parent-rock association in the source area. The history of transport and abrasion can be expressed by the relative volume ratio of stable to unstable components (Q/(F+R), the mineral maturity): in general, the higher the mineral maturity, the better the abrasion and the longer the transport history; the most widespread stable component in sandstone is quartz. The physical condition (density and viscosity) of the medium is an important factor in the mechanical deposition of clastic material, and the presence and amount of clay matrix is a specific index of mechanical differentiation; this property of the medium can be expressed by the ratio of clastic grains to matrix (C/M, the fluidity index). The C/M ratio directly reflects the degree of mixing of sand and mud, i.e. the sorting of the rock: a very small C/M means sand and mud are mixed and sorting is poor, indicating incomplete winnowing and very rapid sediment accumulation.", "1. 
They can be identified by geological occurrence, i.e., by genetic difference: igneous rocks are formed by magmatism, and include intrusive rocks formed by intrusion, volcanic rocks formed by eruption, and subvolcanic rocks between the two; sedimentary rocks are formed by weathering, denudation, transportation, sedimentation and final consolidation and diagenesis.", "A: The main factors affecting and controlling metamorphism are temperature, pressure and chemically active fluids. In metamorphism these factors do not act in isolation but usually act together, coordinating and constraining one another, and play different roles in different circumstances, producing metamorphism of different character. Generally, temperature is the most important factor: rising temperature increases the activity of molecules and atoms in the rock, creating the precondition for metamorphism and mainly causing recrystallization and the formation of new minerals. Pressure acts in two ways. Static (confining) pressure, caused by the weight of the overlying material, increases with depth; it raises the temperature of metamorphic reactions and favours minerals of smaller molecular volume and greater density. Stress is directed pressure, related to tectonic movement, stronger in the shallow crust and weaker at depth. In the shallow crust the stress of crustal movement is most concentrated and mainly produces changes in rock texture (mechanical transformation). In the deep crust, because of the high temperature, chemical reactions between minerals occur readily: material dissolves in the direction of maximum stress (pressure solution) and precipitates in the direction of minimum stress, so columnar and flaky minerals form under directed pressure. The underground fluids are mainly H2O, CO2, F, Cl, B and other volatiles, generally present in intergranular pores and fractures of minerals; they may come from the pores of the protolith, from dehydration of protolith minerals, or from magma and the deep crust. Fluids act as solvents, promoting dissolution of components and increasing diffusion rates, thereby promoting recrystallization and metamorphic reactions; they can also enter metamorphic reactions as components, forming hydrous or anhydrous minerals. Aqueous solution is also the indispensable medium by which material is introduced or removed during metasomatism. These factors are not isolated but coexist, coordinate and constrain one another; in different circumstances certain factors dominate, giving metamorphism different characteristics.", "A: The destruction of coastal and seabed rocks by the kinetic energy of moving sea water, the dissolving action of sea water and the activity of marine organisms is called marine erosion. Marine erosion proceeds by mechanical denudation and chemical dissolution, but mainly by mechanical denudation. Mechanical denudation takes two forms: destruction of rock by the direct impact of moving sea water, called hydraulic erosion; and the rubbing and collision of gravel and sand carried by the moving water against the coast or seabed, called abrasion. The main agent of marine erosion is the waves. 
On a bedrock coast the waves batter the shore with surf and abrade the seabed and coastal rocks with the gravel and sand they carry; on open coasts the tide reinforces the destructive effect of the waves. Under the continuous destruction of waves and tides, a groove extending along the shore forms at the base of the shore wall near wave height, called the wave-cut notch. The notch widens and deepens, the rock above it loses support and collapses, and a steep rock wall, the sea cliff, is formed. As the sea cliff retreats under wave attack, a gently seaward-sloping bedrock platform forms in front of it, called the wave-cut platform (marine abrasion platform). Bottom currents carry the material eroded from the coast beyond the wave-cut platform to the seaward side and deposit it, building the wave-built terrace. Bedrock pillars left standing on the wave-cut platform are called sea stacks. As the wave-cut platform widens, the waves must travel an ever longer distance before striking the cliff base, consuming ever more of their energy, until finally all the wave energy is spent on the platform and marine erosion tends to cease. Thereafter, if the crust rises so that the wave-cut platform is lifted to a height the sea cannot submerge, a marine terrace is formed.", "Answer: Water is an important component of many minerals. According to its form of occurrence in minerals and its role in the crystal structure, it can be divided into: adsorbed water: neutral water molecules mechanically adsorbed on the surfaces or in the pores of mineral grains, such as the film water on clay mineral surfaces; it is completely lost at T ≈ 110 °C, its amount is not fixed, and it takes no part in the crystal composition and has nothing to do with the crystal structure. Crystallization water: takes part in the mineral crystal structure as neutral water molecules; its quantity is fixed and obeys the law of definite proportions; because of lattice binding its loss temperature is high, about 200-500 °C or even higher, and its loss destroys the structure. Structural water (constitutional water): enters the crystal structure as OH-, H+ or H3O+ ions, likewise in definite proportion; the bonding is stronger, and it is lost at 500-900 °C, released as H2O. Zeolitic water: the crystal structures of zeolite minerals contain large cavities and channels in which H2O occupies definite positions up to a fixed upper limit; its escape with changing temperature does not destroy the structure. Interlayer water: neutral water molecules between the structural units of layer silicates, such as the interlayer water of montmorillonite; its content is variable, its escape does not destroy the structure but reduces the interlayer spacing, and on rewetting it is re-absorbed and the mineral swells.", "A: The American scholar N. L. 
Bowen (1922) found that when magma cools, the main rock-forming minerals crystallize and precipitate in a definite order, divisible into two series, the continuous reaction series of the plagioclases and the discontinuous reaction series of the dark minerals; this is called the Bowen reaction series. According to the nature of the reactions, Bowen divided the main rock-forming minerals of magmatic rocks into two reaction series. The continuous reaction series consists of the framework aluminosilicate minerals (plagioclases), whose composition changes continuously and gradually without qualitative change of the internal lattice. The discontinuous reaction series consists of the Fe-Mg minerals, whose composition changes discontinuously and whose internal lattice changes qualitatively: from olivine to biotite, for example, the silicate framework changes from isolated (island) tetrahedra to sheets. During crystallization the two series are linked by successive cotectic relationships between the sialic and the ferromagnesian minerals, and in the late stage the two series merge into one, crystallizing potassium feldspar and muscovite and finally quartz, the end products of magmatic crystallization. The reaction series can be used to solve the following practical problems: (1) To determine the order of crystallization of minerals: minerals in the upper part of the series crystallized earlier than those in the lower part; olivine and calcic plagioclase are obviously the earliest to crystallize, while quartz is the final product. (2) To explain the general law of mineral paragenesis in magmatic rocks: because the two series stand in cotectic relationship, when the magma cools to a given temperature a light-coloured mineral and a dark mineral must crystallize together; for example, at about 1550 °C olivine, orthopyroxene and calcic plagioclase precipitate, forming ultrabasic rocks, and at about 1270 °C clinopyroxene and plagioclase precipitate together, forming basic rocks. (3) To explain the diversity of magmatic rocks: one and the same magma can form different types of magmatic rock. (4) To explain some textural features of magmatic rocks, such as the normal zoning of plagioclase and the reaction rims of dark minerals.", "A: Bedding is the layered structure formed by changes in mineral composition, colour, texture and other characters perpendicular to the original depositional surface. Bedding is not only the basic structural feature of sedimentary rocks but also a good indicator for studying sedimentary environments or facies. By morphology, bedding is generally divided into the following types: 1. Horizontal bedding: the laminae are parallel to one another; it forms mainly in fine silty and argillaceous rocks, mostly in sediments deposited in slow-flowing or still water, such as floodplain, oxbow-lake, lagoon, swamp and closed-bay deposits. 2. 
Parallel bedding: like horizontal bedding, the laminae and the set boundaries are parallel to one another, but it occurs in coarser sandstones, is often accompanied by scour surfaces, and forms under rapid, shallow flow. 3. Wavy bedding: the laminae are wavy, but their general trend is parallel to one another and to the bedding plane. It has two origins: oscillatory waves produce symmetrical ripple laminae, seen mainly in shallow-water, bay and lagoon sediments; weak unidirectional currents produce asymmetrical ripple laminae, seen mostly in floodplain sediments. 4. Cross-bedding: the laminae are oblique to the set boundaries, and the sets may overlap and cut one another. It is the structure seen in section after ripples or sand waves formed by currents (or wind) are buried. The inclination of the laminae reflects the flow direction (wind direction) of the medium, and the thickness of the laminae sets (equivalent to ripple or sand-wave height) reflects its velocity; cross-bedding is therefore an important indicator of flow dynamics (velocity, direction, depth) and of sedimentary environment. Common kinds: a. Tabular cross-bedding: the laminae dip in one direction, produced by unidirectional flow; seen in river-channel sediments. b. Trough cross-bedding: in transverse section the set boundaries are trough-shaped and the laminae conform to the troughs or intersect them at a low angle; in longitudinal section the set boundaries cut one another in gentle arcs with the laminae oblique to them; common in river sediments. c. Wedge cross-bedding: the sets are wedge-shaped; mostly in the shallow-water zones of deltas, lakes and seas. 5. Lenticular bedding: small sandy lenses regularly and continuously enclosed in mud layers, with cross-lamination inside the sand lenses; most common in tidal sediments. 6. Graded bedding: also called gradational bedding; it has no distinct laminae, the whole bed showing mainly a change of grain size from coarse below to fine above; it is the depositional signature of turbidity currents and is quite common. 7. Massive bedding: the lithology of the bed is uniform from bottom to top and no internal lamination is visible to the naked eye; the thickness is generally greater than 1 m; it is the product of rapid accumulation of sediment, and can also be caused by bioturbation.", "Answer: In classifying rocks one must first select characteristics that can be identified objectively and are most closely related to the genesis of the rocks as the basis of classification; secondly, the scheme should suit both field work and laboratory study. On these principles it is now agreed that a sandstone classification should reflect three things: 1. the nature of the parent rock in the source area; 2. the history of transport and abrasion, i.e. the maturity of the rock; 3. the physical condition of the medium at deposition, i.e. the fluidity factor. 
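The three genetic indices this answer goes on to define (provenance index F/R, mineral maturity Q/(F+R), fluidity index C/M) are simple ratios of the four components; here is a minimal Python sketch, with invented component percentages for illustration:

```python
def sandstone_indices(quartz: float, feldspar: float,
                      rock_fragments: float, clay_matrix: float) -> dict:
    """Compute the three genetic indices of a sandstone from the relative
    volumes (e.g. percentages) of its components: provenance index F/R,
    mineral maturity Q/(F+R), and fluidity index C/M, where C is the total
    clastic-grain content and M the clay matrix."""
    total_clastics = quartz + feldspar + rock_fragments
    return {
        "provenance_index_F/R": feldspar / rock_fragments,
        "mineral_maturity_Q/(F+R)": quartz / (feldspar + rock_fragments),
        "fluidity_index_C/M": total_clastics / clay_matrix,
    }

# hypothetical sandstone: 60% quartz, 15% feldspar, 10% rock fragments,
# 15% clay matrix -> F/R = 1.5, Q/(F+R) = 2.4, C/M ~ 5.7
print(sandstone_indices(60, 15, 10, 15))
```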
"Answer: When classifying rocks, we must first select characteristics that can be objectively identified and are most closely related to the genesis of the rocks as the basis for classification; secondly, the classification scheme should be suitable both for field work and for laboratory study. On these principles it is currently agreed that the classification of sandstone should reflect three issues: 1. the nature of the parent rock in the source area; 2. the history of transport and abrasion, i.e. rock maturity; 3. the physical conditions of the medium during deposition, i.e. flow factors. Therefore, in terms of specific indicators, the quartz, feldspar, rock fragments and clay matrix in sandstone should be selected as the basis for classification, because these variables are easy to identify, have genetic significance, and their quantitative relationships reflect the genetic characteristics of the sandstone. The unstable clastic components reflect the material source: feldspar is the sign of a granitic parent rock, while rock fragments are the sign of volcanic, sedimentary and low-grade metamorphic parent rocks. The ratio of feldspar to rock fragments (F/R, called the source index) reflects the basic character of the parent-rock association in the source area. The history of transport and abrasion can be expressed by the relative volume ratio of stable to unstable components (Q/(F+R), called the mineral maturity); in general, the higher the mineral maturity, the better the abrasion and the longer the transport history. The most widely distributed stable component in sandstone is quartz. The physical condition of the medium (density and viscosity) is an important factor affecting the mechanical deposition of clastic material, and the presence and amount of clay matrix in sandstone is a specific indicator of mechanical differentiation. This property of the medium can be expressed by the ratio of detrital grains to matrix (C/M, called the flow index). The C/M ratio directly reflects the degree of sand-mud mixing, that is, the quality of sorting: if the C/M ratio is very small, sand and mud are mixed and sorting is poor, indicating that winnowing was incomplete and sediment accumulated rapidly.", "Clastic rocks, chemical rocks and biochemical rocks. In addition, there are some sedimentary rocks formed under special conditions. (3') 4. Clastic rocks mainly include sedimentary clastic rocks and volcaniclastic rocks. Sedimentary clastic rocks can be further divided into conglomerate, sandstone, siltstone and claystone according to grain size. Pyroclastic rocks can be divided into volcanic agglomerate, volcanic breccia and tuff according to grain size. (3') Chemical and biochemical rocks mainly include aluminous, ferruginous and manganese rocks, siliceous and phosphatic rocks, carbonate rocks, evaporites and combustible organic rocks. (2') Special sedimentary rocks include tempestite and turbidite. (2')",
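The three genetic indices just defined are simple ratios and can be computed directly. The sketch below is minimal and assumes modal percentages of quartz (Q), feldspar (F), rock fragments (R) and clay matrix (M); the function names are my own labels for the text's "source index", "mineral maturity" and "flow index", and the example composition is hypothetical.

```python
# Minimal sketch (not from the original text): computing the three genetic
# indices of the sandstone-classification answer above, assuming modal
# percentages of quartz (Q), feldspar (F), rock fragments (R) and matrix (M).

def source_index(F: float, R: float) -> float:
    """F/R: feldspar vs. rock fragments, reflecting the source-area parent rocks."""
    return F / R

def mineral_maturity(Q: float, F: float, R: float) -> float:
    """Q/(F+R): stable vs. unstable grains; higher means longer transport and abrasion."""
    return Q / (F + R)

def flow_index(Q: float, F: float, R: float, M: float) -> float:
    """C/M: total detrital grains vs. clay matrix, reflecting winnowing and sorting."""
    return (Q + F + R) / M

# Hypothetical modal composition of a feldspathic sandstone (volume %).
Q, F, R, M = 55.0, 25.0, 10.0, 10.0
print(source_index(F, R))        # 2.5   -> feldspar-dominated, granitic source likely
print(mineral_maturity(Q, F, R)) # ~1.57 -> moderate maturity
print(flow_index(Q, F, R, M))    # 9.0   -> little matrix: well winnowed, good sorting
```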
"The crystallization of magma from high temperature to low temperature includes two parallel evolutionary series. On the one hand, there is the continuous solid-solution reaction series of plagioclase, belonging to the light-colored (silica-alumina) minerals, running from calcium-rich plagioclase to sodium-rich plagioclase (that is, from basic to acidic plagioclase); (2') during the evolution of this series the crystal framework of the minerals changes little while their composition changes continuously, which is in fact a continuous isomorphous process. On the other hand, there is the discontinuous reaction series of the dark (ferromagnesian) minerals, crystallizing in the order olivine, pyroxene, hornblende, biotite; (2') in this series there is no continuous compositional transition between adjacent minerals; instead the magma reacts with the early-formed mineral to produce a new mineral, and the crystal framework changes markedly from one mineral to the next. As the temperature falls, the two series merge in the late magmatic stage into a single discontinuous series, crystallizing potassium feldspar and muscovite in turn and finally precipitating quartz. (2') The Bowen reaction series explains, to a certain extent, the crystallization order and paragenetic association of minerals in magma, and provides a simple key to the classification of igneous rocks. The vertical direction indicates the sequence of crystallization from high to low temperature; the horizontal direction indicates that minerals at the same level crystallize at essentially the same time and, following the association law, form rocks of particular types. For example, pyroxene and calcium-rich plagioclase constitute the basic rocks and cannot be associated with quartz; (2') potassium feldspar, sodium-rich plagioclase, quartz and biotite constitute the acidic rocks and cannot be associated with olivine. The farther apart two minerals are in the vertical direction, the smaller their chance of paragenesis. (2')" ] }, "discussion": { "question": [ "Formation of the North China Plate", "The formation process of the Yangtze Plate", "Taking Yichang, Hubei as an example, the Sinian stratigraphic division and paleontological evolution of the Three Gorges area are reviewed.", "Paleogeography and palaeotectonics of the Carboniferous North China Plate (including typical sections, section analysis and spatial variation)", "Taking northern Shaanxi as an example, a review of the Triassic stratigraphic division in the Ordos Basin", "During the development of Jurassic and Cretaceous geological history, China's east and west were clearly differentiated. Where is the boundary?
What are the characteristics of each?", "Key points of Paleogeography and Palaeotectonics in Tertiary China Mainland", "Paleogene Paleoclimate Zoning and Sedimentary Assemblage Types in Eastern China", "Neogene Paleoclimate Zoning and Sedimentary Assemblage Types in Eastern China", "The Cambrian Sedimentary History of the North China Plate", "History of Ordovician Sedimentary Development in the North China Plate", "History of Silurian Sedimentary Development in the North China Plate", "History of Devonian Sedimentary Development of the North China Plate", "History of Carboniferous Sedimentary Development of the North China Plate", "Sedimentary Development History of the Permian in the North China Plate", "Cambrian Sedimentary History of the Yangtze Plate", "Ordovician Sedimentary History of the Yangtze Plate", "Silurian Sedimentary History of the Yangtze Plate", "History of Devonian Sedimentary Development of the Yangtze Plate", "Carboniferous Sedimentary History of the Yangtze Plate", "Sedimentary Development History of the Permian in the Yangtze Plate", "Evolutionary Stages of Precambrian Paleontology and Its Representative Biological Groups", "The biological characteristics of the early Paleozoic", "Overview of the Late Paleozoic biological world", "Plant divisions on a global scale in the Late Paleozoic", "A Survey of Mesozoic Organisms", "What are the boundaries of the Mesozoic plant divisions in China, what climate does each represent, and what are their representative taxa", "main biological groups and representative fossils of each period in the Early Paleozoic", "Main biological groups and representative fossils of each period in the Late Paleozoic", "Main biological groups and representative fossils of the Mesozoic Era", "Early Paleozoic Mineral Resources", "Sedimentary Minerals of the Late Paleozoic", "Mesozoic sedimentary minerals and their distribution", "Mineral deposits of the Cenozoic", "Fracture structures and their field identification marks", "The approximate evolutionary sequence of the biological world (plants and animals) in geological history from the Archean to the Cenozoic", "Major types of global plate tectonic boundaries", "Comprehensively discuss the main manifestations of tectonic movement in strata, landforms and features", "The main stages in the formation of sedimentary rock layers in the Earth's crust", "Wind Sedimentation and Main Sediment Types in Northwest China", "What are the five types of resources on earth?
In which types of rocks are petroleum resources generated and stored, and which geological ages are the main coal-accumulating periods in China?", "Field macro identification marks of fracture structures", "From ancient times to the present, the approximate evolutionary sequence of the Earth's biological world; in which geological age did ancient humans appear?", "Analyze the purpose and significance of studying earth science in light of your own life or practice", "Taking the 5.12 Wenchuan Earthquake of 2008 as an example, analyze the main structural and geomorphological conditions for the occurrence of destructive earthquakes.", "Try to discuss the types of internal and external geological processes and their interplay in shaping the Earth's surface morphology.", "Based on the pattern of atmospheric circulation in the northern hemisphere, comprehensively analyze the reasons for the formation of large-scale deserts, loess and desertification in Northwest China.", "The Mainstream Understanding of the Modern Earth Science Revolution and Its Established Foundation and Structural Power Source", "Dynamic action types and accumulation conditions of oil and gas reservoirs", "How did the Yangtze and Yellow River valleys form?", "Why is there a sudden change in the material composition of the river bed during the process of narrowing along the flow?", "How to solve the closed-form equation of the alluvial river bed?", "Why do alluvial rivers become curved?", "Quantitative interpretation of the impact of changes in underlying surface properties on local climate", "Formation and evolution of haze weather", "Distributed Hydrological Model", "Hydrological Forecasting of Data-Scarce Watersheds", "Thermal and kinetic conditions of soil desiliconization and aluminum enrichment?", "Rapoport's rule: the geographic gradient in species range size", "SLOSS: the \"single large or several small\" debate in nature conservation", "The Ecological Mechanism of the Forest-Grassland Transition Zone", "Formation Mechanism of Alpine Timberline", "Unusual patterns of forest vegetation distribution", "Why are there such large differences in biota in different regions?", "Relative pollen production and pollen source range of major plant species in different vegetation types in China", "Extraction of Low Frequency Signals in Tree Ring Climate Reconstruction", "Causes of China's forest vegetation decline in the late mid-Holocene: climate change or human activities?", "The Mystery of the Origins of Agriculture", "The Mystery of the Prehistoric Flood", "The \"Great Lake period\" of the interstadial of the last glaciation in the arid interior of China", "The \"westerlies mode\" of precipitation variation in the arid area of Inner Asia", "Are the Medieval Warm Period and the Little Ice Age global or regional climate anomalies?", "Scale Effect and Scale Transformation in Geographical Environment Evolution", "Environmental benchmarks and their value assignment", "Interaction of Multiple Pollutants", "How to Quantitatively Distinguish the Hazards of Environmental Pollution and Other Factors to Human Health", "Locking and aging phenomena and mechanisms of chemical pollutants in complex environmental systems", "Environmental Pollution and Environmental Behavior of Emerging Pollutants", "Environmental Behavior of Nanoparticles in the Aquatic Environment", "Mechanism of water eutrophication", "Accumulation Mechanism and Causes of Hyperaccumulator Plants", "Natural Nitrogen Fixation and Nitrogen Removal Mechanisms", "Why is it difficult to determine the
critical value of the development and utilization of renewable resources?", "Why is it difficult to formulate an optimal plan for natural resource development?", "Disaster Chain Transmission Mechanism", "Risk and Loss Assessment of Natural Disasters", "natural disaster cycle", "The Territory System of the Human-Earth Relationship and Its Status in the Earth Surface System", "Formation Mechanism of the \"Point-Axis System\" of Socioeconomic Spatial Structure", "How to understand the spatial structure of social economy and its influencing factors", "Formation Principles of Territorial Functions and Their Main Influencing Factors", "The Action Mode and Mechanism of Cultural and Institutional Factors in Global Environmental Change", "Mechanism of Information Technology Factors on Regional Spatial Reorganization", "Inter-scale integration process of regional culture", "Urbanization Process and Dynamic Mechanism", "Formation Mechanism and Development Trend of Megacities", "Mechanism and law of urban sprawl", "Formation Mechanism and Identification Method of Peri-urbanized Areas", "Formation Mechanism and Space Effect of Industrial Clusters", "Periodicity and Principles of Industrial Space Transfer", "Differentiation and Reconstruction Mechanism of Rural Area Types", "Formation and Development of Agricultural Production Bases in the New Era", "Human Environment Effects on the Formation and Evolution of Transportation Networks", "Principles and Calculation Methods of Interdependence Between Regions", "Spatial accessibility connotation and calculation method", "The Determination Method and Calculation of the Optimum Population Size of a City", "Wonders of Earth: Asking the Earth and Asking the Sky", "Origin of Life", "The Mystery of the Landing of Land Plants", "The Origin of Animals and the Cambrian Explosion", "Origin and early evolution of vertebrates", "Birds, their feathers, and the origin of flight", "The Mystery of the Mass Extinction", "Recovery after mass extinction", "human origin", "Geobiology and microbial geological processes", "The formation, evolution and dynamics of continents", "The geological history of continents on Earth: multiple continental break-up-recombination evolutionary cycles, or gradual outward growth around ancient continental cores?", "The formation, disintegration and evolution of supercontinents", "Multilayer detachment tectonics of the continental lithosphere", "Mantle Plume Structure and Its Verification", "Intraplate tectonic processes", "basin-mountain coupling", "Origin of Intracraton Basins", "The Charm of Tethys: Origin, Evolution, and Resource-Environmental Effects", "UHP metamorphism", "Mysterious Ancient Asian Ocean", "active structure", "Strong Earthquakes and Active Blocks in Mainland China", "How long will the world's oil and gas last?", "Occurrence state and production process of excess coalbed methane", "How to turn the \"violent killer\" of coal mines into valuable clean energy", "Important Strategic Alternative Energy - Unconventional Oil and Gas Resources", "Contribution of Inorganic Effects to Oil and Gas", "Formation and Safe Utilization of Highly Toxic H2S in Natural Gas Reservoirs", "The root cause of abundant oil and gas resources in China's continental basins", "The Crux of the Difficulty in Oil and Gas Exploration in China's Marine Basins", "The dispute over uranium sources in ultra-large uranium deposits and the Earth's uranium heterogeneity", "Anomalous enrichment of mineral resources", "Appropriate Research Methods for Earth's Giant and Complex Systems",
"Landslide Disaster and Its Forecast", "Starting Mechanism and Resistance Law of Debris Flow", "Advanced prediction of geological disasters in underground engineering or underground mining", "Final Safe Disposal of Highly Radioactive Nuclear Waste", "Mechanism and Prediction of Earthquake Induced by Reservoir", "Ground subsidence", "Mystery through the ages - 1908 Tunguska explosion", "How is the deep subducted continental crust reentranted?", "The Mystery of Niobium(Nb)-Tantalum(Ta) in the Formation of Continental Crust", "When were the oldest rocks on Earth formed?", "Why is the Earth the only place in the solar system that has granite?", "Why are stable continental cratons destroyed?", "snowball earth hypothesis", "Drivers of plate tectonics", "Whole-mantle convection and chemical stratification of upper and lower mantle", "lower mantle density trap", "Do mantle plumes exist?", "Can computational simulations be used to predict the material and properties of the Earth's interior?", "The Mystery of Earth's Lead Isotopes", "Heterogeneity of Mantle Composition and Its Causes", "How was the Earth's core formed?", "The Mystery of Light Elements in Earth's Core", "How hot is the Earth's core?", "How did the oldest objects in the solar system form?", "The Hypothesis of the Cause of Impact on the Earth-Moon System", "Distribution and origin of extinct nuclides", "Oxygen isotopic anomalies in the solar system", "Are there other worlds of life in our solar system?", "When did the Earth's atmosphere oxidize?", "Causes of large-scale marine organic matter accumulation events in geological history", "Ocean Chemistry in the Proterozoic Era\u2014Sulphurized Oceans", "Large-scale volcanic eruption is the culprit of biological extinction?", "Does Earth have a deep biosphere?", "When did life on earth begin?", "The End of Earth's Ecosystems: How Long Do We Have?", "Abiotic synthesis and evolution of organic matter on the early Earth", "Seabed cold seeps and their ecosystems", "Can the uplift of the Qinghai-Tibet Plateau cause global climate change?", "Reconstruction of past environmental changes from nuclide 10Be and magnetic susceptibility records preserved in Chinese loess", "Genesis of Dolomite and Oxygen Isotope Fractionation of Carbonate Minerals", "Why can unconventional stable isotopes be fractionated at high temperature?", "Mechanism of Nonmass Isotope Effect", "How to verify the \"global distillation effect\" of persistent organic pollutants?", "Are Toxic Substances Deadly?", "Natural Gas Hydrate: The Invisible Killer of Geological Cataclysm in Geological History?", "Did early life on Earth lead to the formation of the Precambrian ferrosilicon building?", "The role of microorganisms in the formation of minerals", "Formation and development and utilization of natural gas hydrate", "supercritical fluid deep in the earth", "source of water on earth", "Ocean Nd isotope and Nd element concentration paradox", "tsunami\t", "Earthquake Prediction", "heat flow paradox", "Earthquake Early Warning", "Measurement of Underground Absolute Stress", "Why Only a Few Large Earthquakes Have Direct Foreshocks", "Physical process of earthquake source rupture", "Prediction of Strong Surface Motion Based on Physical Process", "Mantle Convection and Plate Tectonics", "Hot spot and mantle plume hypothesis", "seismic tomography", "slow \tearthquake\t", "Detection and Separation of Dynamic Factors in Geodesy", "Is the core overspinning?", "The mystery of the variation of day length on a decadal scale", "Determination of 
high-resolution centimeter-level global geoid", "Research and Detection of the Long-Period Oscillation of the Earth's Liquid Core", "Time-varying information acquisition of the high-resolution Earth gravity field", "Translational Oscillations of the Earth's Inner Core", "What is the source of excitation for the Chandler wobble?", "Cosmic Zero and Evolution", "The complete propagation theory and effective approximation of seismic waves in the earth medium", "High-precision seismic detection of deep resources", "Is it true that \"it is easy to enter the sky and difficult to enter the earth\"?", "The role of fluid in the structure of the earth's layers", "Can Human Activity Induce Earthquakes?", "Vertical Distribution of Radioactive Heat Generation Rates in the Continental Lithosphere", "Origin of Earth's Magnetic Field", "Does the coseismic electromagnetic signal exist?", "Deformation mechanism and dynamics of the continental crust and lithosphere", "Genetic Mechanism of Water-Rock Interaction and the Earthquake Process", "Is the mantle transition zone water-bearing?", "The Existing State of Metastable Olivine in the Mantle Transition Zone", "How is the energy of solar eruptions released?", "Origin of the solar wind", "Formation and dissipation mechanism of solar wind turbulence", "coronal heating mechanism", "Aurora Mystery", "Earth space current systems", "Wave-Particle Interaction in the Magnetosphere", "The Trigger Mechanism and Electron Dynamics of Collisionless Magnetic Reconnection", "Mechanisms of Geomagnetic Storms and Substorms", "Magnetosphere-Ionosphere-Atmosphere Coupling", "Why is the annual variation of the ionosphere anomalous?", "Sporadic layered structure in the E region of the ionosphere: the Es layer", "Exotic phenomena and fundamental physical processes in the transition region of the atmosphere and space", "space weather disaster", "Geomagnetic Navigation and Biological Enlightenment", "How does water vapor enter the stratosphere?", "What kind of three-dimensional observation network can be established to meet the needs of weather and climate change monitoring and forecasting?", "Quasi-biennial oscillation in the atmosphere and its impact", "Earth's Climate System Interaction with Multiple Spheres", "Climate Change and Greenhouse Gases", "The role of bioaerosols in the nucleation of atmospheric ice nuclei", "Accurate Calculation of Atmospheric Vertical Velocity", "The role of aerosol in the formation of cloud precipitation and artificial weather modification", "Atmospheric Boundary Layer and Wind Energy Resource Development", "new particle generation", "Stratospheric influence on tropospheric weather and climate", "East Asian monsoon system", "Polar climate change and its impact", "Solar Activity and Earth's Weather and Climate", "Model Error Estimation Problems in Ensemble Data Assimilation", "Radiative Transfer and Its Effects in the Inhomogeneous Atmosphere", "Combined urban and regional air pollution", "Climate Sensitivity and Feedback", "Influence of the Qinghai-Tibet Plateau on climate change in East Asia", "Electrical Problems and Climate Effects of the Earth's Atmosphere", "Thunderstorm electrification and lightning processes and mechanisms", "Physical and chemical effects of lightning discharge", "Global Carbon Cycle and Climate Change", "climate extremes", "Real-time detection of bioaerosols", "Research on the Mechanism and Prediction of Rainstorm Disaster Weather", "Impact of Urbanization on Climate Change", "The role of human activities in global climate change", "The
role of natural factors in global climate change", "Net climate effects of anthropogenic changes in the global nitrogen cycle", "Characteristics and causes of unique climate change in East Asia", "Climate system models and the simulation and prediction of climate change", "Establishment of the three-dimensional structure theory of the circulation system on the western boundary of the ocean", "Causes and Prediction of Interdecadal Climate Variation", "Ocean Mixing: Prying the Fulcrum of Ocean Movement", "Formation and Evolution Mechanism of the Thermohaline Circulation", "How much room for improvement is there in El Niño forecasting ability?", "sea-air interaction", "Holocene millennial-scale climate fluctuations", "Are Coastal Regions a Source or Sink of Atmospheric Carbon Dioxide?", "Research on Space Remote Sensing Observation of the Deep Ocean", "Age differences in dissolved organic carbon in the ocean", "The Misfortune of Ocean Calcifying Organisms: Ocean Acidification", "Do bioavailable nitrogen stocks in the ocean vary with glacial-interglacial periods?", "Why is there life in the deep sediments of the seabed?", "Will the oceans be saturated with anthropogenic carbon dioxide?", "Source-sink pattern?", "Ocean microbial carbon pump", "Marine viruses - impacting global ecosystems at the nanoscale", "Non-thermophilic archaea: emerging roles in the ocean's global carbon and nitrogen cycles", "Largest Genetic Engineering Lab on Earth - Gene Transfer of Marine Microbes", "Metabolic pathways and their environmental effects in deep-sea microbiomes", "The mystery of marine microbial species diversity: insights from pure-line culture to environmental genomics research", "Environmentally Friendly Marine Fouling Biocontrol", "Deep Sea Hydrothermal Ecosystems", "The process of uplift of the Himalayas and its climatic and environmental consequences", "\"Changing Seas\" and Sea Level Change", "Sediment Transport and Deposition Mechanisms in the Deep Sea", "How to evaluate the impact of large-scale water conservancy projects on estuaries and offshore ecosystems?", "Tearing of subducting plates: causes and consequences", "The impact of marine sedimentary dynamic processes on geological records", "Puzzling observations of mantle seismic anisotropy parallel to the trench", "Why are there no deep earthquakes deeper than 660 km?", "What factors control the size of subduction zone earthquakes?", "The difference between sedimentary rocks and igneous rocks in terms of mineral composition", "The geological time scale? (Either a table or prose is acceptable) (6 points)", "Briefly describe the evidence for seafloor spreading? (7 points)", "Briefly describe the distribution of earthquakes in the world.", "Briefly describe the main types of weathering", "Brief description of the types of parent rock weathering products", "Explain mechanical deposition differentiation", "Briefly describe the main types of diagenesis", "Explain the concept of karst and the basic conditions for its formation", "List the types of terrigenous clastic rocks and explain their grain size content standards.", "Brief introduction to the binary structure of river deposits", "Briefly describe the types of strata contact relationships", "The geological time scale? (Either a table or prose is acceptable) (8 points)", "Briefly describe the Bowen reaction series and the paragenetic association law?
(10 points)", "Briefly describe the distribution of earthquakes in the world.", "Briefly describe the main types of weathering", "Brief description of the types of parent rock weathering products", "Explain mechanical deposition differentiation", "Briefly describe the main types of diagenesis", "Explain the concept of karst and the basic conditions for its formation", "List the types of terrigenous clastic rocks and explain their grain size content standards.", "Brief introduction to the binary structure of river deposits", "Briefly describe the types of strata contact relationships", "The influencing factors and results of metamorphism are briefly described.", "Briefly describe the main types of weathering", "Brief description of the types of parent rock weathering products", "Explain mechanical deposition differentiation", "Briefly describe the main types of diagenesis", "The main types of bedding structures of sedimentary rocks are listed and explained.", "List the types of terrigenous clastic rocks and explain their grain size content standards.", "The geological time scale? (Either a table or prose is acceptable) (7 points)", "Briefly describe the classification of sedimentary rocks and their main rock types (7 points)" ], "answer": [ "Formation of the continental core: the Paleoarchean is dominated by basic eruptions, terrigenous sediments are thin, and supracrustal rocks appear only sporadically. In the Mesoarchean, volcanic rocks, mainly intermediate-basic, were still well developed, but sedimentary rocks had spread over the whole region, indicating that the distribution of supracrustal rocks and the thickness of sediments had increased significantly. The proportion of Neoarchean sedimentary rocks increased markedly, volcanic rocks appeared as interlayers, and the sedimentary rocks show obvious zoning; in Shandong, Inner Mongolia and other places, deposits rich in organic carbon even appeared, and crustal rocks became widely distributed across North China. Archean granite emplacement occurred in three periods: granite and tonalite emplacement at about 3.24 Ga; granite emplacement at about 2.9 Ga; and granite emplacement at 2.7-2.5 Ga. Its scale gradually increased, indicating that the silicon-aluminum (sial) crust continued to expand and thicken. By the end of the Neoarchean the sial crust had begun to take shape, forming the prototype of the North China plate, the continental core. Accretion of the continental core and formation of the primitive plate (Paleoproterozoic): in the Paleoproterozoic the continental core experienced tensional rifting followed by closure and uplift, with intrusion of a large number of granite bodies. The Luliang movement reassembled the initially rifted continental core and further consolidated the crust, finally forming the primitive plate. Volcanic-sedimentary sequences of various scales developed in the early and middle stages, and the molasse accumulations of the late stage represent deposits on the newly consolidated basement.
Rift trough development stage: the Mesoproterozoic was the rift-trough development stage, forming three sedimentary areas within the North China plate: the Yanshan Trough (trending northeast); the western Henan shelf sea (connected to the Qinling Trough to the south); and the Jiaoliao Deep Trough (trending NNE). At this stage the sedimentary pile was extremely thick, reaching tens of thousands of meters, with deposition of compositionally mature terrigenous clastics (a quartz sandstone-carbonate-argillaceous rock assemblage), known as platform-cover-type deposition. Formation of the North China continental plate: the Qinyu movement at the end of the Mesoproterozoic (about 1 billion years ago) caused the overall uplift of North China. In the Neoproterozoic the sedimentary range narrowed; the Qingbaikou Group contains no volcanic material and becomes thinner, belonging to a truly stable type of sedimentation. The parallel unconformity between the Middle and Upper Proterozoic represents the formation of the North China Block.", "A Lower Archean - Lower Proterozoic metamorphic basement exists in the core of the Yangtze region, and this metamorphic basement formed the embryo of the Yangtze plate. In the Mesoproterozoic and Neoproterozoic, the Yangtze plate developed a set of caprock-like sediments, mainly carbonate rocks, clastic rocks and volcanic rocks, which had not yet reached a stable state. The Jinning Movement in the late Neoproterozoic (850~800 Ma) caused the interior of the plate to fold and be metamorphosed again; the Proterozoic strata lie in angular unconformity beneath the overlying Nanhua System, welding the southeastern margin and the Lower Yangtze area onto the Yangtze paleoplate as a single stable zone, thus forming the stable Yangtze continental plate.", "The Sinian section of the eastern Three Gorges (Xiadong) area includes the Doushantuo Formation and the Dengying Formation. The Tianzhushan Member at the top of the Dengying Formation contains abundant small shelly fossils and belongs to the Lower Cambrian. The Sinian Doushantuo Formation and Dengying Formation are both dominated by carbonates. The rocks of the Doushantuo Formation are gray to gray-black, medium- to thin-bedded, with few clasts, generally containing pyrite and chert, and containing triaxon calcareous sponge spicules and monaxon siliceous sponge spicules, reflecting a deeper-water, stagnant depositional setting. The dolomite in the lower part of the Dengying Formation develops intraclasts with oolitic structure and cross-bedding, representing high-energy, oxygen-rich carbonate platform-margin shoal deposits. The middle part contains dark bituminous limestone and siliceous limestone rich in vendotaenid algae (Vendotaenia); the upper medium- to thick-bedded dolomite contains tubular fossils such as Sinotubulites, with bird's-eye structures and algal mats (stromatolites), representing a carbonate tidal flat and lagoon environment deposited under a dry climate. During the Sinian, the biological world evolved faster than before, and some distinctive biological groups formed. In the early Sinian the micropaleoflora were dominated by coccoids, and genera such as Macrocystis and Boliella jeffica appeared.
By the Late Sinian the micropaleoflora were diverse in form, with many genera and species; the forms in the acritarch group were larger, and some membranous shells had obvious spiny structures. The most striking feature of the Late Sinian is the appearance of abundant metazoans and the diversification of phyla. The Ediacaran fauna appeared in this period. It is a fauna dominated by soft-bodied metazoans, mainly worms and coelenterates; in the Sinian strata of China the worms are the most widely distributed.", "The Caledonian movement caused the North China plate to be uplifted into land after the Early Ordovician, where it suffered long-term erosion and planation. With the advent of the Late Carboniferous transgression, iron and aluminum materials became enriched in the ancient weathering crust, forming the famous \"Shanxi-style iron ore\" and \"bauxite layer\" in the lower part of the Benxi Formation. The sandy shale with thin coal seams and the fusulinid-bearing limestone above them are the products of coastal swamp to neritic environments. The Taiyuan Formation is divided into 3 members; each member begins at its base with coarse clastic deposits, which contain silicified wood fossils and develop large-scale tabular, trough or wedge-shaped cross-bedding, locally with wave-ripple cross-bedding, a plain-river to delta sedimentary facies association; the middle parts become finer, with shale and coal seams; the upper parts are limestone containing marine benthic organisms. The cyclicity is very clear, reflecting an environment in which continental facies (the plain-river to delta association) and marine facies (coastal swamp to shallow sea) alternate. On the whole, the thickness of the entire Upper Carboniferous on the North China platform is only a little over 100 meters. On the face of it, the terrain in North China was flat at that time, and the amplitude of crustal movement and the sedimentation rate were relatively low; the cycles may be related to changes in the supply rate of terrigenous clastic material or to frequent changes of global sea level. In the early Late Carboniferous, the lithology and thickness of the Benxi Formation show obvious spatial changes, reflecting palaeogeographic differentiation. In the Benxi area of the Taizi River Basin in Liaoning, the Benxi Formation is 160 m~300 m thick, contains 5~6 layers of marine limestone, and its coal seams are mineable. At Tangshan, Hebei, it is about 80 m thick, containing only 3 layers of marine limestone and 2 thin coal seams. In central and western Shandong it is about 40~65 m thick and contains no recoverable coal seams. As far as Taiyuan, Shanxi, the thickness decreases to less than 50 m, with only one layer of marine limestone and no important coal seams. The Benxi Formation in the Taizi River Basin contains two fusulinid fossil zones, the upper being the Fusulina-Fusulinella zone and the lower the Eostaffella zone; in Tangshan, Hebei and Taiyuan, Shanxi, only the upper zone can be seen. These features indicate that in the early Late Carboniferous North China had a terrain low in the northeast and high in the southwest; the sea water first reached the Taizi River Basin in the northeast and then gradually advanced into North China. Further south, at Fengfeng in Hebei, Jiaozuo in Henan, and over most of Henan and Anhui, deposits of the Benxi Formation are missing.
However, in the Jiawang area of northern Jiangsu (Subei), the Benxi Formation is about 100 m thick, with limestone interlayers up to 50 m thick; its lithology and the foraminifer fossils it contains are very similar to those of South China, and it is probably related to the ancient sea area in the eastward extension of the South Qinling Trough. In the late Late Carboniferous, the transgression in the southern part of North China was more extensive, with obvious overlap in northern Anhui, southern Henan and the Ordos; however, continental coal-bearing deposition occupied the northern Benxi, Beijing, Datong and Ordos-Dongsheng areas. At the same time, the number and cumulative thickness of marine limestone interbeds underwent a seesaw-like change in the north-south direction. There are only a few marine limestone interlayers at Tangshan, Hebei. Southward, in the Qinshui Basin of southeastern Shanxi and at Cixian in southern Hebei, the Taiyuan Formation is 80-100 m thick, the number of limestone layers increases to 6, and marine fossils are abundant. Still further south, in northern Anhui and the Huainan area, the number of limestone layers reaches 12, with a total thickness of 80 m. It can be seen that in the late Late Carboniferous North China had changed to a terrain high in the north and low in the south, and the coastline gradually migrated southward. The coal-bearing quality of the Taiyuan Formation is generally best in the belt between latitudes 34°30′ N and 37°30′ N, which was precisely the area where the coastal swamp environment was most widespread at that time.", "Northern Shaanxi provides a representative section of the continental Triassic in northern China. The Liujiagou Formation and Heshanggou Formation of the Lower Triassic are purple-red sandy argillaceous rocks, and most of the sandstones show cross-bedding; the Ermaying Formation of the lower Middle Triassic is also purple-red fluvial and lacustrine clastic rock, containing the Kannemeyeria fauna, a river and lake clastic deposit of an arid climate. The Tongchuan Formation in the upper part of the Middle Triassic and the Yanchang Formation of the Upper Triassic are together called the Yanchang Group, rich in the Danaeopsis-Bernoullia flora, mainly gray-green and yellow-green sandstone and shale, with black oil shale in the lower part and coal seams at the top; the total thickness is 2000 m, representing a large depression basin in a temperate, semi-humid climate.", "The boundary line runs along the Daxing'anling - Taihang Mountains - Wuling Mountains. Paleogeographic features of eastern China: during the Jurassic, crustal tectonic change and magmatic activity were intense, and many small fault basins dominated by volcanic deposits developed, forming a volcanic activity belt stretching from the bank of the Heilongjiang in the north to the southeast coast in the south, part of the Mesozoic circum-Pacific volcanic belt; during the Cretaceous, magmatic activity weakened relatively and the volcanic belt migrated eastward, and important oil-bearing basins such as Songliao, North China and Jianghan appeared in the middle and late Cretaceous. Paleogeographic features of western China: during the Jurassic, the palaeogeography of western China was characterized by large stable basins and mountain ranges; major basins included the Sichuan-Yunnan Basin, the Junggar Basin, the Tarim Basin, the Qaidam Basin and the Hexi Corridor Basin.
(Affected by the climatic zones, the sedimentary characteristics of these basins differ. In the basins north of the ancient Qinling - ancient Kunlun line, the Lower Jurassic and the lower part of the Middle Jurassic are dark clastic deposits, generally containing important coal beds; from the upper part of the Middle Jurassic to the Upper Cretaceous there are generally variegated and purple-red clastic deposits, often containing salt deposits. The Jurassic and Cretaceous of the Sichuan-Yunnan Basin south of the ancient Qinling - ancient Kunlun line are generally purple-red and variegated clastic deposits.) In the Cretaceous the basins tended to shrink, especially in the Sichuan-Yunnan region.", "The Indian plate finally collided with the Asian plate in the late Eocene, and the Neo-Tethys ocean basin disappeared, while the Indian plate continued to move northward. In the late Eocene (39 Ma) the motion of the Paleo-Pacific plate also changed significantly, its direction shifting from NNW to NWW; a trench-arc-basin system developed along the eastern continental margin, and active back-arc or intracontinental rifting occurred in the interior of the continent. The Paleogene arid climate zone traversed Asia, occupying northwest and southeast China, while the Neogene climate was mainly warm, gradually cooling, and finally entered the Quaternary glacial period. The Paleogene - Neogene of China is dominated by continental deposits, with marine deposits limited to local areas such as southern Tibet, the southwestern margin of the Tarim Basin, and the continental shelf seas of southeastern China. The Paleogene - Neogene of China can be divided into eastern and western parts, with the boundary along the Helan Mountain - Longmen Mountain line; this boundary is an important meridional structural belt in China's regional geology, and the structural framework and main dynamic factors of the eastern and western parts differ markedly.", "The Paleogene paleoclimate of mainland China was clearly divided into latitudinal zones, and four zones can be distinguished in the east: the northern warm-humid climate zone, the central-northern semi-humid to semi-arid climate zone, the central-southern arid climate zone, and the southern tropical-subtropical humid climate zone. The dual control of climatic zones and tectonic zones produced four sedimentary types in eastern China: intracontinental coal-bearing, intracontinental oil-bearing, red clastic gypsum-salt, and continental-margin oil-bearing. Intracontinental coal-bearing deposits: distributed in the northern and southern humid climate zones, that is, the area north of the ancient Yinshan - Yanshan mountains and south of the ancient Nanling mountains. These areas are characterized by coal accumulation but are also rich in muddy oil-source rocks. The north is represented by the Fushun Basin in Liaoning, one of the important coal bases of China. Basins such as Maoming in Guangdong and Baise in Guangxi, south of the ancient Nanling, represent another type of humid coal-bearing basin. The Maoming Basin was still under arid climatic conditions in its early stage, with mainly red clastic deposits, locally with some gypsum layers; in the middle and late stages the climate became distinctly humid, and oil shale and coal seams appeared.
Since saltwater fossils (such as Shenhai Shensuke) occur in the Nanning area, these basins are very likely inland basins that were reached by marine flooding. Intracontinental oil-bearing deposits: distributed in the semi-humid and semi-arid climate zone, that is, south of the ancient Yinshan and Yanshan mountains and north of the Qinling and Dabie mountains, where oil-generating rocks such as gray-black mudstone and oil shale are developed, interbedded with gypsum-salt and red clastic deposits. The Bohai Bay Basin, on the Bohai coast, in central Hebei and in northwestern Shandong, is the typical representative of this sedimentary type and an important oil-bearing basin of eastern China. It is composed of uplifts and half-graben fault depressions; the Paleogene in the fault-depression areas can reach 4000~5000 m in thickness, while on the uplifts it is very thin or even absent. On sections in this area one can see red beds and dark beds, gypsum and coal stringers, and oil shale alternating with one another, reflecting that the area lies in the transition between the northern humid zone and the southern arid climate zone. Red clastic gypsum-salt deposits: distributed in the arid climate zone, that is, the central-southern region between the ancient Qinling and the ancient Nanling mountains. In basins such as the Xiong Basin the Paleogene consists entirely of red clastic rocks. In the Jianghan Basin, in the northern part of this climatic zone, Paleogene gypsum-salt layers and oil shale appear alternately, with colors alternating between those of arid and semi-humid conditions. This sedimentary type extends northwestward in a band as far as the Qaidam and Tarim basins. Continental-margin oil-bearing deposits: the Paleogene - Neogene continental-margin rift basins of eastern China lie mainly in the present South Yellow Sea, East China Sea and South China Sea, under a relatively humid marine climate. They are characterized by marine-terrestrial interbedded deposits with well-developed dark oil-source rock series, and today form an important offshore oil and gas base of China. For example, the Paleogene Yacheng Formation in the Yingqiong Basin is dominated by continental sandstone and mudstone deposits in its early stage and by marine-terrestrial interbedded facies in its late stage; the thickness of the source-rock series exceeds 1000 m.", "Significant changes occurred in the paleoclimate of eastern China in the Neogene. The arid and semi-arid climate zone that crossed eastern China in the Paleogene disappeared, leaving eastern China basically under a humid and semi-humid climate. The Neogene continental rift basins of eastern China entered the stage of post-rift thermal subsidence sags; the basins generally expanded in extent, the strata are comparatively thin, and the Neogene mostly rests in unconformable contact on the Paleogene. The sedimentary characteristics of this stage within the continental plate are mainly shown in two respects: the wide distribution of lignite beds, and large-scale basalt eruptions in the coastal areas.
Since the Neogene of eastern China was basically under a humid and semi-humid climate, lignite beds are distributed in many areas: the Xiaowu Formation in the Sunwu area, the Fujin Formation in the Sanjiang Plain, the Guantao Formation in the Dongying Sag of the Bohai Bay Basin, the Guanghuasi Formation in the Qianjiang Sag, and the Neogene strata of Guangchang Toupo, Jiangxi, all contain lignite or peat layers. The Changchang Formation at Changchang in northern Hainan (Qiongbei) and the Xiaolongtan Formation at Kaiyuan, Yunnan, all contain lignite beds of industrial value; the coal-bearing section of the Xiaolongtan Basin is the most typical. The Miocene Xiaolongtan Formation is composed of white clay interbedded with lignite, with marl in the upper part and clastic rock in the lower part; it yields mammal fossils such as the sharp-toothed pig and is 300~400 m thick. The Pliocene Hetou Formation is composed of gray sandy clay interbedded with lignite and is 150 m thick. Basalt eruptions in the coastal areas were widespread: in the Changbai Mountains in the northeast, the Miaodao Islands in the Bohai Strait, both sides of the Tancheng - Lujiang (Tan-Lu) deep fault, Shengxian in eastern Zhejiang, Zhangpu in Fujian, the Penghu Islands in the Taiwan Strait, the Leizhou Peninsula in Guangdong, northern Hainan Island and elsewhere there are large basalt flows, most of which are Pliocene in age. The sedimentary basins of the continental-margin belt still accumulated very thick Neogene strata; the Neogene of the Yinggehai Basin reaches a maximum thickness of nearly 10,000 meters. Its sedimentary character differs obviously from that of the Paleogene: the marine component of the Neogene is markedly greater and the transgressions more extensive. For example, the Neogene Xiayang, Baowei, Dengloujiao and Wanglou formations in the Beibu Gulf are dominated by neritic deposits, and the Neogene of the Yingqiong Basin consists entirely of neritic and littoral dolomitic limestone and sandy mudstone deposits.", "The North China plate lacked earliest Early Cambrian sedimentation; in the middle Early Cambrian, transgression advanced northward from the Qinling Ocean on the south side, and Huainan, western Henan, Longxian in Shaanxi and the Helan Mountains of Ningxia were the first to be affected, receiving littoral-neritic clastic rocks and phosphatic sandstones (the Houjiashan Formation or Xinji Formation). In the late Canglangpu period, seawater invaded the Yanshan and southern Liaoning areas; in the Yanshan area this is the Changping Formation, a leopard-skin limestone containing the trilobite Palaeolenus. The depositional range of the Mantou Formation gradually expanded from east to west to the Taihang Mountains, the Zhongtiao Mountains, the Ordos, and the western and southern margins of Alxa. During the Maozhuang and Xuzhuang periods of the Middle Cambrian, the transgression extended westward to the Luliang Mountains; the sea around the Helan Mountains in the west also expanded eastward to the middle of the Ordos, and the North China ancient land shrank further. The transgression of the Zhangxia period expanded markedly, and stable shallow-marine carbonate deposits developed widely across North China, except in northern Shaanxi and Dongsheng, Inner Mongolia, which remained palaeocontinent. In the late Cambrian, the palaeogeographic pattern of the North China plate changed significantly.
The Huainan, western Henan and southern Shanxi areas in the south began to rise and the seawater shallowed, forming dolomite-dominated deposits (the Sanshanzi Formation), whose horizon rises from south to north. The Yanliao area in the north subsided relatively and received littoral-neritic limestone deposits. The topography was now high in the south and low in the north, a seesaw-like reversal of the early and middle Cambrian topography, and this feature persisted into the Ordovician.", "In the early Early Ordovician the North China plate was bounded by the Dezhou - Shijiazhuang - Baode line: the northern part was dominated by a normal shallow-sea environment, while the southern environment was mainly an intertidal - supratidal evaporative one, and the thickness of the strata decreases from north to south, indicating that the terrain of North China at this time was low in the north and high in the south. In the middle Early Ordovician the southern part continued to rise into a land denudation zone, while the dolomite representing the intertidal - supratidal evaporative environment migrated northward, forming gypsum-salt deposits in southern Shanxi and northwestern Shandong. The scope of the transgression expanded in the late Early Ordovician, and the Xiamajiagou Formation overlapped southward and northwestward. From the end of the Early Ordovician to the Middle Ordovician the lithofacies were stable and the transgression still relatively extensive. In the middle and southern sections of the Taihang Mountains, the Luliang Mountains and the Zhongtiao Mountains, the Middle Ordovician Fengfeng Formation is a set of thick limestone together with argillaceous limestone and dolomite, containing orthocone cephalopods and conodonts, about 140 m thick; contemporaneous deposits are also common in western Shandong and northern Jiangsu. In the Late Ordovician the crust rose and a large-scale regression occurred, making the North China plate once again a palaeocontinental denudation zone. Deposits occur only at Yaoxian, Shaanxi, and Guyuan, Ningxia, on the southwestern margin: a set of normal shallow-marine carbonate deposits with benthic corals, brachiopods, trilobites, gastropods and crinoids, called the Beiguoshan Formation.", "The main body of the North China plate remained a palaeocontinental denudation area during the Silurian, and Silurian deposits are lacking.", "During the Devonian the interior of the North China plate was still denuded ancient land. By the Devonian the Qaidam block and the North China plate had collided and joined, and coarse clastic molasse accumulated in the piedmont basins of the Qilian Caledonian orogenic belt. In the Gansu Corridor area on the north side of the Qilian Mountains, the Lower and Middle Devonian Xueshan Group is purple-red conglomerate, yielding fossils such as the plant Drepanophycus and the fish Bothriolepis. The Upper Devonian Shaliushui Group is composed of purple-red glutenite, siltstone and argillaceous siltstone, containing plant fossils such as Leptophloeum rhombicum. The Shaliushui Group and the Xueshan Group are in angular unconformity, reflecting that the compression and uplift of the Qilian orogenic belt was still going on.
On the northern margin of the Qaidam block, the early Late Devonian Maoniushan Formation is purple-red conglomerate, sandstone, intermediate-acid volcanic rock and pyroclastic rock containing Leptophloeum rhombicum, and the later Amunike Formation is purple-red conglomerate and glutenite, representing deposits of active piedmont basins on the south side of the Qilian orogenic belt.", "The Xishan section at Taiyuan, Shanxi, is divided, from bottom to top, into the Benxi Formation and the Taiyuan Formation; the Carboniferous-Permian boundary lies in the lower part of the Taiyuan Formation. Section analysis: the lower part of the Benxi Formation forms the \"Shanxi-style iron deposits\" and \"bauxite layers\"; the sandy shale with thin coal seams and the limestone above them are the products of coastal swamp to shallow-sea environments. The Taiyuan Formation is divided into 3 members; each member begins at its base with coarse clastic deposits containing silicified wood fossils and developing large-scale tabular, trough or wedge-shaped cross-bedding, locally with wave-ripple cross-bedding, a plain-river to delta facies association; the middle parts become finer, with shale and coal seams; the upper parts are limestone with marine benthic organisms. The cyclicity is very clear, reflecting an environment in which continental facies (the plain-river to delta association) and marine facies (coastal swamp to shallow sea) alternate. On the whole, the entire Upper Carboniferous of the North China platform is only a little over 100 m thick; the terrain of North China was flat at the time, the amplitude of crustal movement and the deposition rate were both low, and the cycles may be related to changes in the supply rate of terrigenous clastics or to frequent global sea-level changes. Spatial changes: in the early Late Carboniferous, North China had a terrain low in the northeast and high in the southwest; the seawater first reached the Taizi River Basin in the northeast and then gradually advanced into North China, while the transgression in northern Jiangsu came from the south. In the late Late Carboniferous North China changed to a terrain high in the north and low in the south, with the coastline gradually migrating southward.", "From the Early Permian to the early Middle Permian (middle-upper Taiyuan Formation and Shanxi Formation), coal-forming environments appeared generally across North China and the southern Northeast. In the period of the Shanxi Formation, owing to uplift of the ancient land in the northern part of the North China plate, the area north of Taiyuan was entirely continental river-lake deposits, and the offshore peat-swamp environment most favorable for coal accumulation migrated to the central part of North China; further south, in western Henan and the Lianghuai region, multiple layers of marine limestone are present. The total thickness of the strata of this period is only about 200 m, indicating a stable tectonic setting. From the late Middle Permian to the early Late Permian, deposition was generally dominated by variegated to purple-red inland-basin deposits (the Shihezi Group) of increased thickness, generally without recoverable coal seams, indicating greater topographic relief and a drier climate.
However, in the Huainan area south of the Yellow River line (latitude 34°30′ N), the entire Shihezi Group contains important recoverable coal seams, and interbeds rich in Lingula are common. Marine fossils such as siliceous sponges have also been found in the Shangshihezi Formation at Yuxian, western Henan, indicating that the southern part of North China was an offshore marsh environment, frequently affected by marine flooding from the Qinling Trough to the south. In the late Late Permian, red fluvial and lacustrine clastic deposits (the Shiqianfeng Formation) spread across all of North China under an arid climate. In the western section of the Qilian Mountains (Corridor Nanshan) in Gansu, the layer corresponding to the Shiqianfeng Formation (the Sunan Formation) consists of purple-red continental clastic deposits whose local green interlayers contain a mixed Angara-Cathaysia flora. This shows that the late Hercynian movement in the mid-Permian had caused the North China - Qaidam plate and the Siberia - Mongolia plate finally to collide and merge, and the northern trough between them essentially disappeared, promoting the migration and mixing of terrestrial plants of different floras.", "In the Cambrian Yangtze area the transgression was extensive and the strata are clearly twofold: the lower series consists of argillaceous and carbonate deposits rich in fossils, while the middle and upper series are dominated by dolomitic carbonate deposits with few fossils. The Meishucun Stage is defined on small shelly fossils and includes the Meishucun Formation and the lower part of the Qiongzhusi Formation; the appearance and disappearance of the early trilobites Parabadiella and Eoredlichia of the Qiongzhusi Formation mark its upper and lower boundaries. The Canglangpu Stage is the acme of the trilobites; Redlichia appears here and continues to the end of the Early Cambrian. The Longwangmiao Stage is characterized by the disappearance of a large number of earlier trilobites and the appearance of Redlichia chinensis. On this standard section the formations coincide with the stages, but away from the standard section the formation boundaries are often diachronous. The Meishucun Formation rests in parallel unconformity on the karstic erosion surface of the dolomite of the underlying Sinian Dengying Formation; the phosphorite of its lower part is mixed with terrigenous clastics and interbedded with thin dolomite, representing the beginning of transgression and belonging to the transgressive systems tract (TST); the central glauconite interlayer is the condensed section; the upper phosphorite and the topmost dolomite belong to the highstand systems tract (HST), and the top of the dolomite shows obvious dissolution grooves and fillings, indicating a brief sea-level fall and subaerial exposure of the dolomite. The 20-30 cm of phosphatic siliceous rock and glauconitic claystone at the bottom of the Qiongzhusi Formation is a typical condensed section; the black siltstone without trilobites below it is a transgressive systems tract, and the upper part, trilobite-rich calcareous, brown-black siltstone coarsening upward, belongs to the highstand systems tract (HST).
The littoral sandstone of the Canglangpu Formation marks the start of a new transgression, fining upward into sandstone interbedded with shale, and belongs to the transgressive systems tract (TST); the dolomite of the Longwangmiao Formation is typical of the highstand systems tract. The Lower Cambrian of this area can therefore be divided into three sequences, forming a second-order cycle. The Lower Cambrian can also be divided, from bottom to top, into four formations (Shuijingtuo, Shipai, Tianheban and Shilongdong), which likewise constitute a second-order cycle. The Qinjiamiao Formation of the middle series consists of thin- to medium-bedded dolomite interbedded with laminar and stromatolitic dolomite, containing a little Anomocarella in limestone interbeds; it is 190 m thick. The overlying Sanyoudong Formation is thick-bedded dolomite with few fossils, 170 m thick; its top already belongs to the Ordovician. The Middle and Upper Cambrian form another second-order cycle. The Cambrian Yangtze area was an epicontinental sea slightly higher in the west and lower in the east; the Kangdian and Lu ancient lands always stood above water and kept expanding. Two Early Cambrian sequences are seen in the area. The first, represented by the Meishucun Formation and thinning from west to east, consists of calcareous, siliceous and phosphatic deposits. The second corresponds to the maximum transgression in the Yangtze area: its lower part is phosphatic siliceous claystone and carbonaceous silty shale bearing nickel, vanadium and uranium, recording an anoxic, stagnant basin containing only planktonic organisms; its upper, highstand systems tract shows clear spatial differentiation, bounded roughly by longitude 105°E. To the west, sandy mudstone interbedded with carbonate rocks, dominated by benthic trilobites, records littoral and neritic deposition; to the east, terrigenous clastics thin and diminish while carbonate rocks increase, with benthic trilobites and reef-building archaeocyathids, recording a warm, oxygenated surface sea of normal salinity. At the end of the Early Cambrian, dolomite was deposited. The middle and upper series show little lithologic differentiation, consisting of dolomite and dolomitic limestone; dolomite with sandstone interbeds near the land decreases eastward, and the east is entirely dolomite. In the Middle-Late Cambrian the transgression expanded westward and the sea deepened slightly, depositing limestone. The west was hot and dry, with gypsum-salt deposits in southwestern Sichuan and northern Guizhou. In general, the Cambrian Yangtze area was a stable epicontinental sea, slightly elevated in the west and sloping gently eastward.",
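The systems-tract logic used in the Lower Cambrian discussion above (exposure surfaces bounding sequences, with transgressive tract, condensed section and highstand tract stacked between them) is mechanical enough to express as a small program. The following Python sketch is illustrative only and not from the source: the unit names abbreviate the interpretations given above, and the rule that each bounding surface opens a new sequence is a deliberate simplification.

```python
from dataclasses import dataclass

# Tract codes: SB = sequence boundary (exposure/erosion surface),
# TST = transgressive systems tract, CS = condensed section,
# HST = highstand systems tract.

@dataclass
class Unit:
    name: str
    tract: str

# Lower Cambrian interpretation summarized from the text, bottom to top
# (names shortened; tract assignments follow the passage above).
section = [
    Unit("karst surface on Dengying dolomite", "SB"),
    Unit("Meishucun lower phosphorite", "TST"),
    Unit("Meishucun glauconite interlayer", "CS"),
    Unit("Meishucun upper phosphorite and dolomite", "HST"),
    Unit("dissolution surface at dolomite top", "SB"),
    Unit("Qiongzhusi basal phosphatic chert", "CS"),
    Unit("Qiongzhusi trilobite-rich siltstone", "HST"),
    Unit("base of Canglangpu littoral sandstone", "SB"),
    Unit("Canglangpu sandstone-shale interbeds", "TST"),
    Unit("Longwangmiao dolomite", "HST"),
]

def split_into_sequences(units):
    """Group units into depositional sequences; each SB opens a new sequence."""
    sequences, current = [], []
    for u in units:
        if u.tract == "SB" and current:
            sequences.append(current)
            current = []
        current.append(u)
    if current:
        sequences.append(current)
    return sequences

if __name__ == "__main__":
    seqs = split_into_sequences(section)
    print(f"{len(seqs)} sequences")  # prints 3, matching the count in the text
    for i, seq in enumerate(seqs, 1):
        print(f"  sequence {i}: " + " -> ".join(u.tract for u in seq))
```

Run as written, this yields the three sequences stated above; a real analysis would of course start from measured lithology rather than pre-assigned tract labels.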
"The Ordovician was a period of gradually expanding transgression in geological history. Within the Yangtze plate this is shown by onlap onto the Kangdian ancient land, which had been expanding since the Middle Cambrian, producing marked lithofacies changes across the Yangtze region. The Early Ordovician paleogeography of the Yangtze area resembled that of the Middle-Late Cambrian: a huge rimmed carbonate platform with lithofacies belts arranged from west through southeast to north. In the western part of the plate, in western Sichuan and eastern Yunnan east of the Kangdian ancient land, lies the littoral facies belt, with sandstone, shale and calcareous shale showing tidal bedding; debris increases and grain size coarsens toward the ancient land. From there eastward to western Hubei (the northern and southeastern margins of the Yangtze), carbonate deposits dominate with some argillaceous material; bioclasts, intraclasts and ooids are well developed, marking a platform-margin shoal environment. Farther east, the Lower Yangtze region is entirely carbonate deposits. The western ancient land expanded in the Middle Ordovician, and the Early Ordovician facies pattern disappeared; the whole Yangtze area became dominated by carbonate deposition with a little mud, notably the widely distributed shrinkage-patterned limestone of the Baota Formation. The black shale of the Miaopo Formation records a restricted, patch-like sag forming a stagnant underwater basin, surrounded by coeval, laterally equivalent shallow-water carbonates of the Datianba Formation. The early Late Ordovician transgression was the largest, depositing nodular argillaceous limestone. Late in the period sea level fell; the western Kangdian ancient land joined the Yunnan-Guizhou-Guangxi ancient land, producing a stagnant basin during the Wufeng period in which typical graptolite shale facies were deposited.", "The Silurian Yangtze area was mainly a clastic shelf sea; as the Caledonian movement intensified, its sea area shrank continuously. The Longmaxi period opened a new transgression after the Late Ordovician regression. Deposition was then limited to the northern Yangtze region, the southern Yangtze being a large ancient land. The early Longmaxi paleoenvironment resembled that of the Wufeng period, a typical graptolite shale facies. From the late Longmaxi period the Yangtze region became a normal shallow shelf sea, and the transgression expanded into central and southern Guizhou. From northeastern Guizhou to southern Sichuan, Early Silurian sediments resembled those at Yichang, but carbonate deposits developed at the level corresponding to the lower Luojiaping Formation, with very abundant and varied organisms; at Shiqian, Guizhou this unit is known as the Leijiatun Formation. The middle comprises the Xiushan and Huixingshao formations: the Xiushan Formation is a normal shallow-marine shelf facies, while the Huixingshao Formation is a set of purple-red and gray-green sandstones, siltstones and shales containing gastropod, bivalve and fish fossils, recording littoral deposition. Evidently, by the late Middle Silurian the interior of the Yangtze plate, affected by the amalgamation of the Yangtze and Cathaysia plates, had gradually risen to become a subaerial denudation area. Southeastward, in the Xiushui basin of Jiangxi, the Silurian is fully developed: the lower series is graptolite-bearing fine clastic facies, the middle series normal neritic facies, and the upper series continental facies. Farther east, in the Lower Yangtze area, the situation is similar to that of the Xiushui area.
The lower Gaojiabian Formation is graptolite-bearing clastic rock, the middle Fentou Formation is shelly-facies clastic rock, and the upper Maoshan Formation is littoral to transitional clastics with fish fossils, more than 3000 m thick in total. In the Early and Middle Silurian the Yangtze plate bore shallow-marine sediments rich in cephalopods such as Sichuanoceras, trilobites such as the "comet worms", and varied corals of strongly endemic character, with endemic elements exceeding 30%, indicating that the Silurian Yangtze area was an independent plate. The eastern Yunnan area on the western margin of the Yangtze plate, uplifted since the Late Cambrian, received no sediment until the late Middle Silurian. The Guandi Formation is dominated by marl and shale and contains Sichuanoceras. The overlying Miaogao Formation consists of yellow-green and gray-green shale, nodular marl and limestone. Above it lies the Yulongsi Formation, black shale interbedded with thin nodular marl, containing brachiopod, trilobite and conodont fossils, in conformable contact with the overlying Lower Devonian. The late Silurian eastern Yunnan area was thus the subsidence center of the Yangtze plate. In the Late Silurian of South China, deposits existed only in eastern Yunnan and the Qin-Fang area; apart from transitional-facies deposits in the lower Yangtze reaches, all other areas had risen above sea level.", "The Devonian of South China comprised three depositional areas: the Nanhua Sea, the Middle Yangtze and the Lower Yangtze. The Devonian paleogeographic evolution of the Nanhua Sea area is characterized overall by transgression and onlap. Early Devonian strata are not widely distributed, occurring only in eastern Yunnan and the Qin-Fang trough, where they are conformable with the Upper Silurian. In the middle and late Early Devonian the transgression expanded further, especially northeastward, reaching southern Hunan by the late Early Devonian, where continental and coastal deposits appear in the Lower Devonian. The Middle-Late Devonian transgression was broader still: in the Middle Devonian it reached from central Guangxi northeastward to central Hunan and the Hunan-Jiangxi border. The Tiaomajian Formation in the lower Middle Devonian of central Hunan is dominated by fluvial-lacustrine to littoral clastic deposits containing plant remains and fish fossil fragments; the overlying beds are mainly shallow-marine carbonates with abundant brachiopod, coral, stromatoporoid, echinoderm and mollusk fossils. The Xikuangshan Formation in the lower Upper Devonian consists of limestone, marl and argillaceous rocks containing the famous "Ningxiang-style" oolitic iron ore; its upper part is dominated by sandstone and siltstone, a progradational signature of Late Devonian regression. As in the Guizhou-Guangxi area, interplatform-trough deposits also occurred in central Hunan, though on a smaller scale, forming mainly in the Qiziqiao and Shetianqiao periods. The Middle and Upper Devonian near the Hunan-Jiangxi border are dominated by clastic deposits intercalated with limestone, argillaceous limestone and marl.
The fossils include marine organisms such as brachiopods, echinoderms and corals together with terrestrial plants and fish, reflecting interplay between sea and land. The oolitic hematite in the upper Upper Devonian is an important Devonian iron ore bed of South China. In the central Fujian area on the eastern margin of the Nanhua Sea basin, the Upper Devonian Nanjing Group consists of coarse clastics (conglomerate, breccia and sandstone) 2000 m thick, representing active continental deposition probably related to uplift of the southeastern mountains, that is, molasse laid down in piedmont fault basins. Besides the Nanhua Sea, eastern Sichuan, western Hubei and northwestern Hunan in the Middle Yangtze region were also transgressed in the Middle-Late Devonian, developing strata from the upper Middle Devonian to the Upper Devonian; the sequence begins with pure quartz sandstone of fluvial to littoral facies. The Huangjiadeng Formation in the lower Upper Devonian consists of fine sandstone and siltstone interbedded with mudstone and marl, containing brachiopods and plant fragments. The lower part of the overlying Shujingsi Formation is dominated by carbonate rocks with oolitic hematite, oolitic chamosite and siderite, containing the brachiopods Yunnanella and Yunnanellina; its upper part yields mainly plant fossils. In short, the Middle-Late Devonian deposits of the Sichuan-Hubei shallow sea were dominated by epicontinental sediments, and in the Late Devonian the area may have connected with the Nanhua Sea and the Qinling trough. In the Lower Yangtze area only the Upper Devonian Wutong Formation is present: grayish-white to light gray quartz sandstone and glutenite with light gray to yellowish-gray siltstone and mudstone, containing the plant Leptophloeum and fish such as the "Chinese stickleback" and "star-scale fish", recording offshore fluvial and lacustrine basin deposition under a humid climate. In southern Anhui, western Zhejiang and elsewhere, cross-bedding of coastal type and small brachiopod fossils in the Wutong Formation indicate possible marine flooding late in the period. In the Middle-Late Devonian the Sichuan-Hubei sea became further connected with the Nanhua Sea. The Lower Yangtze area is dominated by near-shore fluvial-lacustrine deposits with intercalated marine layers, suggesting possible connection with the oceanic trough to the north. To sum up, from the Early Devonian the sea advanced gradually onto the continent from eastern Yunnan and the Qin-Fang area; the northeastward transgression was especially marked, forming clear stratigraphic onlap. Controlled by paleotopography and structural undulation, the transgression was step-like: the Early Devonian (Pragian to early Emsian) transgression reached roughly northern Guangxi and southern Hunan, the first step; the Middle Devonian Givetian transgression spread from central Hunan to western Jiangxi, forming the second step;
and the Late Devonian Frasnian transgression reached northern Hunan, perhaps submerging the Jiangnan ancient land and connecting with the Middle Yangtze region, forming the third step. The scale of this transgression is consistent with the stratigraphic sequences and the pattern of sea-level change described above.", "The paleogeography in the early Early Carboniferous (Yanguan Stage) was similar to that of the Devonian: a typical epicontinental sea dominated by carbonate-platform deposits, with relatively well-developed benthos such as corals, brachiopods and stromatoporoids. However, at Langdai and Luodian in Guizhou and at Hechi and Liuzhou in Guangxi, northwest-trending belts of siliceous and argillaceous limestone facies reflect a deep-water interplatform-trough environment, the product of incipient rifting of the continental crust. A littoral clastic facies belt developed in central and southern Guizhou on the northern margin of the basin, and the vast ancient land north of Guiyang remained the Upper Yangtze ancient land. The Huguang area east of the Xuefeng ancient land consists of epicontinental marine limestone and marl (Liujiatang Formation), and the belt from the Hunan-Jiangxi border on the eastern edge of the Huguang Sea to Lufeng, Guangdong carries littoral clastic deposits. Eastern Jiangxi and the Fujian-Zhejiang areas are continental deposits (such as the Huashanling and Zhuzangwu formations). In the Lower Yangtze area, Pseudouralinia-bearing coral limestone (Jinling Formation) several meters thick is developed. At Changyang, Yidu and Songzi in western Hubei, the lower part is siltstone and shale and the upper part limestone and dolomite containing Pseudouralinia, more than ten meters thick. The transgression expanded in the late Early Carboniferous (Datang Stage), producing stratigraphic onlap. The Nanhua Sea was mainly carbonate deposits, with a biota marked by Gigantoproductus and a few swimming ammonoids; the interplatform-trough environment persisted. Coastal-swamp coal-forming environments recording short regressions occur across the region, such as the Wanshoushan Formation in eastern Yunnan, the Xiangbai Formation in southern Guizhou, the Simen Formation in Guangxi, the Ceshui Formation in the Huguang area and the Zishan Group in Jiangxi, their horizons rising gradually eastward. The coastal coal-bearing environment represented by the Zishan Formation lies near the eastern edge of the basin; farther east it passes into the Yejiatang Formation (western Zhejiang) and the Lindi Formation (central Fujian), continental deposits intercalated with thin coal seams. In the Lower Yangtze area, the lower part is littoral to shallow-marine sandstone (Gaolishan Formation), the middle limestone (Hezhou Formation) and the upper dolomite (Laohudong Dolomite), with a total thickness of only tens of meters; littoral clastic deposits at the same horizon also occur in the Changyang area of western Hubei. The Late Carboniferous transgression expanded further: western Zhejiang, western Fujian and the Middle Yangtze were all covered by the sea, with relatively stable lithofacies, all carbonate deposits, generally 200-400 m thick.
In Hunan, Guangdong, Guangxi and the Lower Yangtze region these are called the Huanglong Formation and the Chuanshan Formation. The Yunnan-Guizhou-Guangxi area remained the subsidence center, where thickness can exceed 800 m. Coastal tidal flats near the Xuefeng, Upper Yangtze and Kangdian ancient lands are mainly magnesium-bearing carbonate rocks (dolomite) with minor clastic deposits.", "Uplift at the turn of the Early and Middle Permian occurred mainly in the Upper Yangtze area north of the Kunming-Guiyang line and the Jiangnan ancient land. There is an obvious depositional break at the base of the Qixia Formation, above which the coastal-lacustrine terrigenous clastics of the Liangshan Formation are generally developed; in southern Sichuan the Liangshan Formation, with siderite and pyrite layers, rests directly on the Silurian. From the mid-Qixia period the South China plate underwent a major transgression, and the long-eroded Yangtze ancient land subsided into shallow sea; from Nanjing westward into Sichuan the Qixia-stage strata onlap continuously from east to west. This large sea-level rise could be linked to melting of the Gondwana ice sheet as a result of global warming. The lithofacies of the Middle Permian Maokou Stage were clearly differentiated. The siliceous and argillaceous deposits represented by the Dangchong Formation or Gufeng Formation in central Hunan and the Lower Yangtze region rarely contain benthos but are rich in planktonic ammonoids and radiolarians, reflecting a deeper, anoxic, stagnant-water environment. Offshore clastic coal-bearing deposits of Maokou age (Tongziyan Formation) appeared in the eastern Fujian, Zhejiang and Jiangxi parts of the South China plate; the Cathaysia ancient land rose from the mid-Maokou period and became the terrigenous clastic source for the coal-bearing deposits on its western side. Tectonic differentiation of the South China plate strengthened generally in the late Maokou period: the Emei rifting on the western margin of the Yangtze caused voluminous basalt eruptions and region-wide regression, so that the top of the Maokou Stage lacks the Neomisellina fossil zone. In the eastern Lower Yangtze and the Southeast this event is traditionally called the "Soochow Movement". The early Late Permian is characterized by extensive offshore swamp deposits of the Longtan Formation, reflecting the marked regression caused by uplift. From west to east during the Longtan period an obvious facies transition from land to sea is seen: the western, onshore part has fluvial alluvial-plain clastics of the Xuanwei Formation interbedded with coal; the land-sea transition zone has coal-bearing clastics of the Longtan Formation intercalated with marine carbonates; and the eastern and northern Upper Yangtze Sea, farther from the Kangdian ancient land, has bioclastic and reef limestones of the Wujiaping Formation. East of the Xuefeng ancient land, Longtan-period lithofacies and thickness are clearly zoned from northwest to southeast: the Hunan-Jiangxi-northern Guangdong area carries paralic coal-bearing deposits with important recoverable coal seams.
From there southeastward to eastern Guangdong and central Fujian, coarser-grained continental facies without important coal seams dominate, representing the sedimentary type of the Cathaysia paleocontinent margin. A new transgression occurred in South China during the Changxing period. The eastern side of the Kangdian ancient land retained continental coal-bearing deposition, while the rest of the region shows two types of marine sediment: one is the shallow-marine carbonate of the Changxing Formation, containing benthos such as fusulinids, brachiopods and corals, distributed mainly in the Upper Yangtze shallow sea; the other is the siliceous deposits of the Dalong Formation, containing only plankton such as Pseudotirolites, representing a non-compensated, relatively deep-water environment. Overall the two types grade laterally into each other, and one is sometimes seen above the other. To sum up, the Late Permian sedimentary pattern of the South China plate is broadly symmetrical: coarser-grained continental and offshore-marsh facies along the eastern and western paleocontinental margins and carbonate-dominated deposits in the middle, a restricted epicontinental sea fed by terrigenous sources from both sides. In terms of Permian coal accumulation in South China, the coal-bearing horizons are diachronous, shifting gradually from the Maokou stage in the east to the Changxing stage in the west.", "About 3.5 billion years ago, in the Archean, the earliest organisms appeared. Representative biota: filamentous bacteria in the Pilbara stromatolites of Western Australia. About 2.0 billion years ago, in the Paleoproterozoic, an early increase in biodiversity occurred; representative group: filamentous fossils from the Gunflint Formation of the Canadian Precambrian. About 1.8 billion years ago, in the late Paleoproterozoic, the earliest eukaryotes appeared; representative groups: the macroscopic algae of the Chuanlinggou Formation (about 1.75 billion years old), the multicellular green algae found in black cherts of the Wumishan Formation in North China (about 1.2-1.4 billion years old), and the Chlorella-like algae in cherts of the Bitter Springs Formation of Australia (about 1.0 billion years old). About 630 million years ago, at the end of the Neoproterozoic, the earliest metazoans appeared; representative biotas: the Ediacaran fauna, the Weng'an biota and the Miaohe biota.", "Marine invertebrates thrived: the Early Paleozoic is known as the age of marine invertebrates. Small shelly fossils appeared in the earliest Early Cambrian. The most important Cambrian fossils are trilobites, so the Cambrian is also called the age of trilobites. Important Ordovician fossils are graptolites and, among the cephalopods, nautiloids; the Silurian is characterized by monograptids.
Early Cambrian archaeocyathids and brachiopods, together with tabulate corals and single-zoned rugose corals, all have important stratigraphic significance. Cambrian explosion: the Burgess Shale fossil assemblage, discovered in 1909 in the Middle Cambrian of British Columbia, Canada, is exquisitely preserved in deep-water muddy rocks and includes more than 120 species belonging to 12 phyla, among them arthropods, molluscs, brachiopods, coelenterates and worms. Appearance of the earliest vertebrates: jawless vertebrates (agnathans) first appeared in the Middle Ordovician, flourished in the Silurian, and began to adapt to freshwater life in the late Silurian; jawed fishes appeared in the late Silurian. Plant kingdom: the Early Paleozoic was dominated by marine algae; terrestrial psilophytes ("naked ferns") began to flourish in coastal lowlands and marshes in the Late Silurian. Biofacies: common Early Paleozoic biofacies include the planktonic, benthic (shelly) and reef facies. Biogeographic divisions: the Cambrian world was divided into three biotic provinces on the distribution of trilobites. Because different marine invertebrate groups arose and flourished in the Ordovician, biopaleogeographic boundaries drawn on different groups also differ. The Silurian was divided into three provinces on benthic corals and brachiopods.", "Vertebrates underwent important evolution and gradually conquered the continents: fishes flourished in the Devonian, when amphibians appeared; amphibians flourished in the Carboniferous-Permian, when reptiles appeared. Land plants flourished step by step, transforming the terrestrial paleogeographic landscape: small forests appeared in the Late Devonian along with the first primitive gymnosperms; large forests appeared in the Carboniferous, the first important coal-forming period in the world, when clear floral provinces also emerged; by the Late Permian gymnosperms dominated. Marine invertebrates were rich and diverse and their composition changed greatly: from the Early to the Late Paleozoic, the graptolites that had flourished earlier became almost completely extinct and trilobites were greatly reduced, while corals, brachiopods and fusulinids took important positions. At the end of the Late Paleozoic a major biotic extinction event occurred.", "Late Carboniferous: Tropical flora: covering most of China, Japan, Sumatra in Indonesia, Central Asia, Europe and eastern North America, characterized by abundant tall lycopsids, articulate ferns and cordaitaleans, with high trees and dense stands forming a tropical forest landscape.
Among them, Lepidodendron (the "scale tree") could reach 30-40 m in height and 2 m in diameter, but its trunk shows no annual rings. Angara flora: mainly North Asia, the Junggar Basin of Xinjiang and northeastern China; dominated by herbaceous true ferns and seed ferns, its woody plants show clear annual rings, indicating a north-temperate climate; representatives include the "spoon-leaf" plants. Gondwana flora: on the Gondwana continent, a Glossopteris flora represented by Glossopteris itself, characterized by monotonous composition and reflecting the colder climate of the middle and high southern latitudes. Permian: the early and middle Permian flora resembled that of the Carboniferous, but the tropical flora split into two, the Cathaysia flora and the Euramerican flora. The former covered East and Southeast Asia and can be divided into northern and southern subprovinces, characterized by abundant Gigantopteris and Gigantonoclea; the latter covered Europe and eastern North America, with no trace of the Gigantopteris flora at all.", "The Mesozoic biological world is characterized by the prosperity of terrestrial gymnosperms, of reptiles (especially dinosaurs) and of the marine invertebrate ammonoids. Gymnosperms flourished through the Mesozoic; the earliest angiosperms appeared in the Early Cretaceous, and by the Late Cretaceous angiosperms had replaced gymnosperms as the dominant land plants. Primitive mammals appeared in the Triassic and primitive birds in the Jurassic; at the end of the Cretaceous the famous mass extinction of Earth history occurred.", "The Late Triassic flora was divided along the ancient Tianshan (or ancient Kunlun)-ancient Qinling-ancient Dabie line: the south was characterized by the Dictyophyllum-Clathropteris flora, representing tropical to subtropical near-shore environments; the north by the Danaeopsis-Bernoullia flora, representing a temperate, humid inland environment. In the Late Jurassic and Early Cretaceous the floral boundary moved northward, roughly to the Yinshan Mountains: in the north ginkgos were abundant and the ferns were marked by flourishing thorn ferns and Ruffordia; in the south ginkgos were few, small-leaved true ferns were characteristic, and the conifers bore scale-like leaflets clinging to the branches with thickened cuticles, such as Brachyphyllum, reflecting an arid tropical to subtropical climate.", "Small shelly fauna: first seen at the end of the Sinian and flourishing greatly at the beginning of the Cambrian; tiny (1-2 mm) shelled marine invertebrates of many groups, including hyoliths, monoplacophorans, gastropods, brachiopods and prismatic tube shells of unknown affinity; representative fossils include the tube-shaped hyoliths and prismatic shells. Trilobites: the earliest shelled animals to flourish after the small shelly fauna; with numerous Cambrian genera and species, rapid evolution and clear ecological differentiation, their abundant fossils are an important basis for the subdivision and correlation of Cambrian strata.
Early Cambrian trilobites were mainly redlichiids, with representatives such as Redlichia and Palaeolenus; there were also planktonic eodiscids such as Hupeidiscus. The Middle Cambrian is marked by abundant ptychopariids such as Shantungaspis; middle and late representatives include Damesella, Blackwelderia and Drepanura, and ptychopariids continue among middle and late Late Cambrian fossils. The Middle and Late Cambrian also had widely distributed planktonic agnostids, such as Ptychagnostus and Pseudagnostus. In the Ordovician, with the rise and prosperity of swimming nautiloids and planktonic graptolites, trilobites no longer dominated the oceans, in strong contrast to the Cambrian; the illaenid and trinucleid groups prevailed, with representatives such as the "fingerprint head-worm", the "small comb worm", the ancient asaphid forms, Nankinolithus and Dalmanitina. From the Silurian onward trilobites declined markedly; only the phacopids remained important, represented by the "comet worm" and Coronocephalus, the "crown worm". Graptolites: graptolites were a group of colonial marine animals of the geological past, the commonest being the dendroid graptolites and the graptoloids. From the Late Cambrian to the early Early Ordovician dendroid graptolites dominated, such as Acanthograptus and Dictyonema; representative Early Ordovician forms include Didymograptus. The Middle to Late Ordovician was the acme of graptolite development, dominated by biserial scandent graptolites together with the axonate and axonless groups; typical representatives are Nemagraptus, Dicellograptus, Glyptograptus and Climacograptus. In the Silurian the axonless graptolites had disappeared; the biserial scandent forms still flourished early on, after which the monograptids arose and became the principal zonal fossils of the Silurian. Representative Early Silurian forms are various monograptids; the middle Silurian is typified by "Mohs' graptolite"; at the end of the Silurian the graptoloids declined sharply, and only a few monograptids persisted into the Early Devonian. Brachiopods: brachiopods were widely distributed from the Early Cambrian, mainly chitinous-shelled inarticulates such as the "small round shell" forms, with primitive articulate representatives as well. The Ordovician was one of the peaks of brachiopod development: the triplesiids, orthids, pentamerids and strophomenids reached their acme, represented by Sinorthis, Yangtzeella and Hirnantia, and the spiriferids and rhynchonellids were also represented. Silurian brachiopod fossils are relatively fewer, but their internal structures grew ever more complex; the main groups and representatives are the pentamerids with a median septum and spondylium, such as Pentamerus, and the spire-bearing spiriferids, such as Howellella and Tuvaella. Cephalopods: cephalopods appeared in the late Cambrian, and the Early Paleozoic forms were mainly nautiloids with simple sutures.
The Ordovician was an important period of nautiloid development, with enlarged shells and more complex siphuncle structures, mainly straight-shelled (orthoconic) types; representatives include Manchuroceras, Armenoceras and Sinoceras. From the Silurian the nautiloids began to decline. Corals: corals first appeared in the Cambrian; in China tabulate corals are known from the Early Ordovician and began to flourish in the Late Ordovician, chiefly single-zoned rugose corals and tabulates such as the "twisted corals", Agetolites and the "reticulate corals". The Silurian was the first great flourishing of corals, dominated by single-zoned and cystiphylloid (foam-type) rugose corals and by tabulates, which could build reefs; representatives include Cystiphyllum, Stauria, Favosites, Halysites and Heliolites. The Early Paleozoic also had gastropods, bivalves, bryozoans, echinoderms, archaeocyathids and sponges; in particular the microfossil conodonts have in recent years become an important Early Paleozoic group.", "Fish: all kinds of fishes flourished in the Devonian, hence the "Age of Fishes". Freshwater fishes in particular appeared in large numbers, living in inland rivers, lakes and estuaries and reflecting the animal kingdom's conquest of the continents. Early Devonian fishes were mainly agnathans, lower fish-like animals; Middle and Late Devonian fishes were mainly placoderms, whose clear advance was the differentiation of upper and lower jaws, as in Bothriolepis. Amphibians: in the Late Devonian another huge step was taken in the conquest of the continents, the evolution from fish to amphibian; Ichthyostega, about 1 m long, found at the top of the Upper Devonian of eastern Greenland, is a primitive amphibian representative. Amphibians flourished and dominated during the Carboniferous, mostly living near water in rivers, lakes and swamps, represented by newt-like labyrinthodont forms. Reptiles: in the late Carboniferous the appearance of primitive reptiles was another major event in vertebrate evolution, represented by Hylonomus (the "forest lizard") of North America; in the Permian reptiles developed further and diversified markedly, with famous representatives including Dimetrodon of North America, the dicynodonts found on all continents, and Mesosaurus, adapted to life in water. Terrestrial plants: land plants represented by the psilophytes appeared in the late Silurian and developed further in the Early Devonian, when psilophytes were still the main representatives; from the late Early Devonian to the Middle Devonian, primitive lycopsids with differentiated roots, stems and leaves appeared; in the Late Devonian the psilophytes died out, arborescent plants dominated, small forests appeared, and primitive gymnosperms emerged. Carboniferous land plants prospered further, and the first large forests appeared on Earth, represented mainly by lycopsids, articulate ferns, true ferns, seed ferns, cordaitaleans and glossopterids.
Corals: the rugose corals developed greatly in the Late Paleozoic, with varied types and complex structures, passing through three climaxes, in the Middle Devonian to early Late Devonian, the Early Carboniferous and the Early Permian. In the Devonian, double-zoned and foam-type corals were the main forms, represented by the "split corals", the "star corals" and the slipper coral Calceola; by the Carboniferous-Permian, besides the double-zoned corals there were abundant triple-zoned corals, commonly Kueichouphyllum, the "shed corals" and Wentzellophyllum, while the foam corals had become extinct. Tabulate corals still played an important role in the Late Paleozoic and were important reef builders, commonly represented by Syringopora and Hayasakaia. Brachiopods: brachiopods flourished throughout the Late Paleozoic. The Devonian is marked by the appearance of abundant spiriferids ("stone swallows"), such as Euryspirifer and Cyrtospirifer; in addition the terebratuloids (such as Stringocephalus, the "owl-head shell") and the rhynchonellids (such as Yunnanella) were very well developed. In the Carboniferous-Permian the productids arose, with important fossils such as Gigantoproductus and Dictyoclostus. The appearance of specialized brachiopod types in the late Permian (such as Leptodus, the "banana-leaf shell") may signal the great decline of brachiopods at that time. Foraminifera: the Carboniferous-Permian was the heyday of the foraminifera, above all the fusulinids, whose prosperity and rapid evolution make them important zonal fossils. Important early Late Carboniferous fusulinids are the spindle-shaped fusulinas; in the late Late Carboniferous the fusulinids enlarged, the main representatives being Triticites and Pseudoschwagerina. The Permian was the fusulinid acme, with important representatives such as Misellina and Neoschwagerina; in the late Permian the fusulinids dwindled and became morphologically specialized (such as the "trumpet" fusulinids), and at the end of the Permian they became completely extinct. Besides the groups above, cephalopods, tentaculitids and conodonts are also of great significance for the subdivision and correlation of Late Paleozoic strata.", "Terrestrial plants: in the Late Triassic of China, the ancient Tianshan (or ancient Kunlun)-ancient Qinling-ancient Dabie line was the boundary: the south was characterized by the Dictyophyllum-Clathropteris flora, and the north by the Danaeopsis-Bernoullia flora, representing a temperate, humid inland environment. In the Late Jurassic and Early Cretaceous the floral boundary moved clearly northward, roughly to the Yinshan Mountains: in the north ginkgos were abundant and the ferns were marked by flourishing thorn ferns and Ruffordia; in the south ginkgos were few, small-leaved true ferns were characteristic, and conifers bore scale-like leaflets clinging to the branches with thickened cuticles, such as Brachyphyllum, reflecting an arid tropical to subtropical climate. Vertebrates: labyrinthodont amphibians and mammal-like reptiles were very prosperous in the Early and Middle Triassic, among which the dicynodonts and the cynognathids were especially eye-catching.
From the Late Triassic onward, the great development of the dinosaurs and the return of some reptiles to the sea marked a new evolutionary stage for reptiles. Among Jurassic terrestrial dinosaurs the saurischians and ornithischians were extremely prosperous, the saurischians dividing into herbivorous sauropods and carnivorous theropods; the sauropods are represented by Mamenchisaurus, the theropods by Szechuanosaurus, and the ornithischians by Stegosaurus. Some reptiles returned to the ocean in the late Triassic and successfully occupied the marine realm in the Jurassic, represented by the ichthyosaurs; in the air lived the pterosaurs. Mammal-like reptiles such as Bienotherium appeared in the Early Jurassic, and primitive birds such as Archaeopteryx in the Late Jurassic. Cretaceous dinosaurs continued to evolve, represented chiefly by Tyrannosaurus, Psittacosaurus and the hadrosaurs. In the Late Cretaceous the marine mosasaurs took the place of the ichthyosaurs, and the pterosaurs developed further, as with Dsungaripterus. Abundant bird fossils have been found in Lower Cretaceous strata, including Sinornis, Cathayornis, Boluochia and Chaoyangia. The Jurassic and Cretaceous were also times of prosperity for the teleosts and holosteans, the former represented by Lycoptera ("wolf-finned fish") and the latter by Sinamia ("Chinese bow-finned fish"). In the Late Cretaceous, placental mammals also appeared. Marine invertebrates: the Mesozoic is characterized by the prosperity of ammonoids and bivalves, with hexacorals, belemnites, foraminifers, conodonts and gastropods besides. Early Triassic ammonoids had simple ceratitic sutures and simple shell ornament; Late Triassic forms had ceratitic or ammonitic sutures and shells bearing nodes and ribs, such as the "accessory-tooth" ammonoids. Jurassic ammonites had complex ammonitic sutures, as in Arietites and Hongkongites; by the Cretaceous the sutures simplified again and shells took strange, irregular, straight, spiral or uncoiled shapes, as in Baculites and Nipponites. Marine bivalves were also very important in the Mesozoic, especially the Triassic: the Early and Middle Triassic had Pseudoclaraia and Claraia; in the Late Triassic, Burmesia, with its special shell ornament, was the most important representative; in the Jurassic the trigoniids and ostreids prospered; and the Cretaceous is characterized by the rudists. Freshwater lake assemblages: mainly freshwater bivalves, gastropods, fishes, ostracods, conchostracans and insects. In the Late Triassic the "Shaanxi mussels" and pearl mussels were common; in the Early Jurassic there were the wedge mussels and related forms, individual freshwater bivalves serving as representatives, while in North and Northeast China the thin-shelled Ferganoconcha was representative. The Late Jurassic assemblage of Ephemeropsis, Eosestheria and Lycoptera typifies the lake facies and is known as the E.-E.-L. fauna.
The Early Cretaceous assemblage represented by Trigonioides, Plicatounio and Nippononaia is known as the T.-P.-N. fauna.", "Early Paleozoic sedimentary and strata-bound minerals are relatively rich, mainly phosphorus, stone coal, iron, lead-zinc, polymetallic rare elements, gypsum-salt and mercury; the ore-hosting horizons are concentrated in the Cambrian and Ordovician, the Silurian being relatively poor in minerals. A series of black rock series developed in the Early Paleozoic of South China, in the Early Cambrian, the Late Ordovician Wufengian and the Early Silurian Longmaxian. The Early Cambrian black rock series consists of black shale, carbonaceous limestone, nodular phosphorite, silty shale and the like; its thickness varies greatly, from 20 to 950 m, but the horizons are stable, and it is distributed in Zhejiang, Anhui, Jiangxi, Hunan, Guangxi, Hubei, Guizhou, Sichuan, southern Shaanxi and elsewhere. This black carbonaceous series hosts abundant stone coal, vanadium, phosphorus, barium and polymetallic mineral resources. The Wufeng and Longmaxi intervals are mainly a set of black graptolite shale facies, important petroleum source rocks in South China. The base of the Cambrian in South China is generally phosphatic, and the shallow-marine areas of eastern Yunnan, central Sichuan and western Guizhou had good ore-forming conditions, forming large phosphorite deposits; in the black shales at the base of the Cambrian along the margins of the Yangtze plate, stone coal and polymetallic elements occur, with barite reaching the scale of super-large deposits. The Handan-type iron ore of Hebei occurs in the iron-rich carbonate strata of the Majiagou Formation, and the Majiagou Formation in Shanxi and elsewhere is rich in gypsum. The Ordovician carbonate rocks widely distributed in South and North China serve as raw materials for the cement, lime, flux and construction industries.", "The minerals related to sedimentation in China's Late Paleozoic strata are relatively rich and widely distributed, mainly of the following types. Sedimentary minerals related to transgressive overlap and the reworking of ancient weathering crusts: mainly iron, aluminum and refractory clay, such as the hematite and siderite at the base of the Devonian in South China; as the transgression advanced from southwest to northeast, the ore-bearing horizons rise gradually. The large bauxite deposits in the lower Datang Stage of the Yunnan-Guizhou area, and the Shanxi-style iron ore and G-layer bauxite at the base of the Carboniferous Benxi Formation in North China, belong to this type of sedimentary deposit. Sedimentary minerals related to deep and relatively deep water environments: mainly phosphorus and manganese, found chiefly in the Devonian-Carboniferous interplatform-trough settings of South China, in the upper Maokou Formation of the Guizhou-Guangxi area and the Dangchong Formation of the Hunan-Guangdong area. Energy minerals: mainly coal, oil and natural gas. Coal is widespread in the Carboniferous-Permian strata of China; the main coal-bearing intervals are the upper Lower Carboniferous of South China, the Maokou interval of the southeast coast, the Longtan interval of South China, and the Taiyuan and Shanxi formations of North China. The Permian of the Junggar Basin, Xinjiang is one of the target intervals for oil exploration.
The Permian of central Sichuan yields large amounts of natural gas. Evaporite minerals: late Early Carboniferous gypsum deposits are widely distributed in a narrow belt from Kashgar in westernmost Xinjiang eastward through the southern Tianshan and the Hexi Corridor to central Ningxia, and gypsum also occurs in the Shiqianfeng Formation in Hebei, Shaanxi and other places. Strata-bound polymetallic minerals: strata-bound lead-zinc, tungsten, antimony, uranium and pyrite deposits are often hosted in the Devonian carbonate rocks of South China. In addition, the widely distributed Upper Paleozoic carbonate rocks of South China are important raw materials for the smelting, chemical and building industries.", "China's Mesozoic sedimentary and strata-bound minerals are relatively rich; more than ten kinds are known, including coal, oil shale, petroleum, natural gas, rock salt, brine, gypsum, iron, manganese, bauxite and copper-bearing sandstone, of which coal, oil, natural gas, the salts and iron are the most important. Coal: in the Late Triassic both southern and northern China were coal-accumulating. South China was a paralic or offshore environment with abundant coal of good quality (such as the lower coal seams of the Anyuan Group in central Hunan and northern Jiangxi); continental coal-bearing basins were widespread in the Late Triassic of North, Northwest and Northeast China, but the coal quality was poorer. The Early and Middle Jurassic is one of China's important coal-forming periods; the coal-accumulating areas lay mainly in the north, represented by the Beipiao Formation in Liaoning, the Mentougou Formation in Beijing, the Datong Formation in northern Shanxi, the Yan'an Formation in northern Shaanxi, and the Badaowan and Xishanyao formations in Xinjiang, all forming large coalfields. South China still had Early Jurassic coal-bearing deposits, but neither their scale nor their quality matched those of the Late Triassic. In the early Early Cretaceous, important coal-bearing deposits were widespread in Northeast China and northern North China, represented by the Shahai, Fuxin and Chengzihe formations. Oil and natural gas: in the northeastern Sichuan Basin of the Yangtze shallow sea, the Jialingjiang Formation of the late Early Triassic is a well-known natural gas reservoir. The Songhuajiang Group of the Songliao Basin is the source and reservoir of the famous Daqing Oilfield, a world-class representative of large oilfields of continental and transitional facies. The Yingjisha Group on the southwestern margin of the Tarim Basin represents important oilfields of shallow-marine and coastal (lagoonal) type. Gypsum-salt: saline lagoon environments appeared generally in the Yangtze shallow sea from the late Early Triassic to the early Middle Triassic, forming gypsum deposits; in central Sichuan in the Upper Yangtze basin, rock salt and brines also coexist, a prospective area for potash. Important rock-salt deposits often occur in the Late Cretaceous red beds of South China, such as the Hongdihe Formation in central Yunnan and the Hengyang Group in Hunan.", "The main minerals of China's Cenozoic are coal, petroleum, oil shale and various salts, all of important economic value.
Coal: the Paleogene-Neogene is one of the world's important coal-bearing intervals, and China is no exception. Early Paleogene (Paleocene to early Eocene) coal-accumulating areas were mainly in the Northeast and eastern Shandong, represented by the lower Fushun Group of Liaoning. Late Paleogene (late Eocene to Oligocene) coal areas shifted to Hebei and Shanxi, and coal also occurs along the Guangdong coast south of the Nanling and in the Baise Basin of Guangxi. Petroleum: the Paleogene-Neogene is an important oil-bearing rock series in China, with oil of industrial value found in marine, continental and transitional strata. The oil-accumulating areas differ by period: late Paleocene to early Eocene, mainly the Jianghan, Subei and Sanshui basins; late Eocene to Oligocene, mainly the Bohai Bay, Jianghan and Nanyang basins; late Oligocene to Miocene, mainly the Junggar, Qaidam and Tarim basins. Marine oil-bearing layers lie west of Taiwan, in the Kashi area of southwestern Tarim, and in the East China Sea and South China Sea. Gypsum-salt: the Paleogene arid climate belt was widespread and gypsum-salt occurrences are common, extending from the Kashi area in the west, to the Lanping and Simao areas of western Yunnan and the Sanshui Basin of Guangdong in the south, and east to the Jianghan and Hengyang basins; potash has also been found near Dawenkou, Shandong. Neogene salt deposits are known only from the Qaidam and Turpan basins in the northwest. The boron-bearing salt deposits of the Quaternary salt lakes of the Qinghai-Tibet Plateau are likewise important mineral resources of western China", "Fracture: fractures, including joints and faults, form when deformation exceeds the strength limit of a rock. Faults can be identified and judged in the field by the following signs: ① Discontinuity of structural lines and geological bodies: abrupt interruption and offset, in plan, of geological bodies or boundaries such as strata, ore-bearing horizons, rock masses and fold axes suggests possible faulting; care is needed to distinguish this from discontinuities caused by unconformity surfaces, intrusive contact surfaces and the like. ② Repetition or omission of strata: where, relative to the normal sequence, some strata are asymmetrically repeated, suddenly missing, or abruptly thickened or thinned, faults may be present. ③ Slickensides, polished (mirror) surfaces and fault steps: these often form through intermittent fault movement or frictional resistance during slip; the dynamically metamorphosed rocks formed within fault zones are called fault rocks.
④ Geomorphic and hydrologic signs: fault scarps and triangular facets, offset ridges and valleys, offset and deflected alluvial fans, abrupt right-angle bends of drainage, springs aligned along a particular direction, and lakes and swamps distributed discontinuously in strips.", "The organic world on Earth was in its infancy in the Archean; the most primitive organisms, single-celled bacteria and algae, appeared in the ocean in the late Archean. Algae flourished unprecedentedly in the Proterozoic, and low-grade multicellular sponges and coelenterates appeared in the late Proterozoic. Terrestrial plants appeared in the Paleozoic: semi-aquatic, semi-terrestrial psilophytes and lycopods appeared in the Devonian, when amphibians also appeared, and reptiles appeared at the end of the Carboniferous and in the Permian. Birds and mammals appeared in the Mesozoic; in the Miocene one branch of the ancient apes became anthropoid, and the earliest humans appeared in the early Quaternary. (See P232-234 for detailed answers)", "The division of lithospheric plates is based on boundaries of strong tectonic activity. By the relative motion between plates, plate boundaries fall into three types. 1. Divergent plate boundaries, that is, the axes of mid-ocean ridges: the plates on either side move apart, the boundary is stretched and pulled open, and asthenospheric material wells up and congeals into new ocean-floor lithosphere; divergent boundaries are therefore also called accretionary or constructive boundaries. 2. Convergent plate boundaries, that is, the subduction zones near trenches or the collision zones between continental plates. When an oceanic and a continental plate converge, the oceanic plate, being denser and lower-lying, always subducts beneath the continental plate, forming a trench at the surface; as the oceanic plate continues to subduct and disappears from the surface, the continental plate behind it may collide with another continental plate of similar density, producing strong structural deformation, magmatism and metamorphism and building mountains. The belt of strong deformation so formed is called the plate collision zone (equivalent to the orogenic belt of older usage). Convergent boundaries divide into two subtypes, subduction boundaries and collision boundaries. After the demise of the ancient oceanic plate, the continental blocks formerly on its two sides weld together, and the surface trace of the ancient subduction zone is called the suture line (or suture zone). ① The subduction boundary corresponds to the trench or Benioff zone, where adjacent oceanic and continental plates converge and overlap; generally the oceanic plate subducts beneath the continental one, hence the term subduction zone. There are two subtypes: the island arc-trench type, with island arcs developed above the subduction zone, found mainly on the northwestern Pacific margin, and the Andean continental-margin type, found on the western edge of the South American continent. ② The collision boundary is the collision or welding zone between two continental plates.
3. Shear (transform) plate boundaries, i.e., transform faults: the plates on either side slip horizontally past each other, and usually there is neither growth nor destruction of plates. Transform faults are generally distributed near oceanic ridges and sometimes extend to the continental margins, e.g., the San Andreas Fault in the western United States. On the basis of these three boundary types, the global lithosphere is divided into six major plates: the Eurasian, African, Indian, Pacific, American and Antarctic plates. The boundary between the Pacific plate and the Eurasian plate in eastern China is a subduction boundary of the convergent type, and at the next level it is of the island arc-trench type. (p157-158)", "Tectonic movement mainly refers to the deformation and displacement of the lithosphere caused by the earth's internal forces, as well as the growth and demise of the ocean floor. Manifestations in strata: (1) changes in the lithofacies and thickness of strata; (2) changes in the contact relationships between strata: integrated contact, parallel unconformity contact, angular unconformity contact. (Strata: layered rocks formed in a given period of geological history.) Geomorphological manifestations: present-day landforms generally reflect neotectonic movements. Landforms reflecting vertical movement mainly include river terraces, deeply incised valleys, planation surfaces, marine terraces and multiple tiers of karst caves; landforms reflecting horizontal crustal movement mainly include offset mountain ridges and the systematic deflection of drainage systems in the same direction. River terraces: when a river that has already formed a floodplain is strengthened in downcutting, the original valley floor is left in steps on the new valley slopes, forming stepped terrain on both sides of the valley, called river terraces. Deeply incised valleys reflect a change of the crust from relative stability to strong uplift. Manifestation in terrain-deformation measurements and on the ground: uplift of surface structures, submergence by seawater, and so on.", "Rocks formed by diagenesis are called sedimentary rocks. Diagenesis: the process by which loose sediments are consolidated into sedimentary rocks. There are three main ways. (1) Compaction: under the load pressure of the overlying water body and sediments, water is expelled, porosity decreases and volume is reduced. Features: the permeability of the sediment layer decreases, the bonding between particles increases and resistance to erosion is strengthened, but compaction alone cannot transform sediment into rock. (2) Cementation: the process by which minerals precipitated from pore solutions (i.e., cement) bind loose sediments into sedimentary rock. Features: chemical or biochemical processes are involved; the cement may fill some or all of the voids in the rock; the main cements are calcareous, siliceous and ferruginous; clastic rocks lithify chiefly by cementation, and carbonate rocks are prone to strong cementation. (3) Recrystallization: the partial dissolution and recrystallization of the mineral components of a sediment under increasing pressure and temperature, whereby amorphous material becomes crystalline and fine grains become coarse; it is a process by which sediments are consolidated into rock.
Features: before and after recrystallization the crystal form, size and arrangement of the minerals change, but the chemical composition remains unchanged; recrystallization is related to mineral composition, particle size and other factors", "Wind deposition: material deposited by wind is called aeolian deposit, and the deposition is purely mechanical. (1) Modes of accumulation. ① Accumulation by settling: occurs when the wind weakens (the settling velocity exceeds the upward velocity of turbulence and vortices). ② Accumulation against obstacles: detritus accumulates on encountering obstacles (woods, dunes, vegetation, stones, steep slopes, etc.); deposition may occur on either the windward or the leeward slope of the obstacle. (2) Characteristics of aeolian deposits: 1) the detritus consists mainly of sand, silt and a little clay-grade material, with grain sizes below 2 mm; 2) sorting is better than in alluvium, a consequence of the high selectivity of wind transport; 3) even very fine silt-grade particles (mainly quartz) show high roundness; 4) the debris may contain relatively many ferromagnesian and other chemically unstable minerals, such as pyroxene, hornblende, biotite and calcite, which are less common in sediments transported by water; 5) cross-bedding of extremely large scale, formed by the large-scale migration of the aeolian deposits; 6) colors are various, but red tones dominate, while green, black and white are rare. The main sediment types are aeolian sand and aeolian loess. Aeolian sand deposition: 1. Sand piles: when a wind-sand flow encounters an obstacle it deposits through loss of energy, burying the obstacle to form a tongue-shaped sand pile, less than 10 m high, with cross-bedding inside. 2. Dunes: sandy hills formed by aeolian deposition, evolved from sand piles; dunes may be crescent, transverse, longitudinal or star-shaped. 3. Try to discuss the six major plates of the world and the main active boundaries dividing them. What type of plate boundary lies between the Pacific and Eurasian plates in eastern China? The division of lithospheric plates is based on boundaries of strong tectonic activity. According to the relative motion between plates, plate boundaries can be divided into three types: (1) Divergent plate boundaries, i.e., the axes of oceanic ridges, where the plates on either side move apart, the boundary is stretched and separated, and asthenospheric material wells up and congeals into new ocean-floor lithosphere; divergent plate boundaries are therefore also called accretionary or constructive plate boundaries. (2) Convergent plate boundaries, i.e., the plate subduction zones near trenches or the collision zones between continental plates. When an oceanic and a continental plate converge, the oceanic plate, being denser and lower-lying, always subducts beneath the continental plate, forming a trench at the surface; when the oceanic plate continues to subduct and gradually disappears from the surface, the continental plate behind it may collide with another continental plate of similar density, producing strong
structural deformation, magmatism and metamorphism, and mountain building; the zone of strong structural deformation formed in this way is called a plate collision zone (equivalent to what used to be called an orogenic belt), and convergent boundaries can be divided into two subtypes, subduction boundaries and collision boundaries. After the demise of the ancient oceanic plate, the continental blocks originally on either side of it weld together, and the surface trace of the ancient subduction zone is called a suture (or suture zone). ① Subduction boundaries correspond to trenches or Benioff zones, where adjacent oceanic and continental plates override one another; generally the oceanic plate subducts beneath the continental plate, hence the name subduction zone. There are two subtypes: the island arc-trench type, in which island arcs develop above the subduction zone, found mainly on the margins of the Northwest Pacific; and the Andean type, found on the western edge of the South American continent. ② Collision boundaries: the collision or welding zones between two continental plates. (3) Shear (transform) plate boundaries, i.e., transform faults, where the plates on either side slip horizontally past each other and there is usually neither plate growth nor plate demise. Transform faults are generally distributed near oceanic ridges and sometimes extend to the continental margins, e.g., the San Andreas Fault in the western United States. On the basis of these three boundary types, the global lithosphere is divided into six major plates: the Eurasian, African, Indian, Pacific, American and Antarctic plates. The boundary between the Pacific plate and the Eurasian plate in eastern China is a subduction boundary of the convergent type, and at the next level it is of the island arc-trench type.", "The resources of the earth include mineral resources, energy, land resources, water resources and biological resources. Oil is generated in, and occurs in, sedimentary rocks. The main coal-forming periods in China are the Carboniferous, Permian, Jurassic and Tertiary (Paleogene-Neogene). Minerals: generally, all mineral or rock resources buried underground in nature that can be used by humans. Energy has many sources, such as biomass, water power, geothermal energy, wind energy, tidal energy, solar energy and nuclear energy, but at present the most important sources are the non-renewable combustible organic minerals: coal, oil, natural gas and oil shale.", "Fracture: Fractures, including joints and faults, occur when deformation exceeds the strength limit of a rock. In the field, faults can be identified and their movement judged according to the following signs: ① Discontinuity of structural lines and geological bodies: the sudden interruption and offset, in plan view, of geological bodies or boundaries such as rock formations, ore-bearing layers, rock masses and fold axes indicates that a fault may be present; care must be taken, however, to distinguish this from discontinuities produced by unconformity surfaces, intrusive contacts of rock masses, and the like. ② Repetition and absence of strata: where, relative to the normal stratigraphic sequence of a region, some strata are asymmetrically repeated, or are suddenly missing, thickened or thinned, a fault may be present.
③ The presence of slickensides, polished (mirror) surfaces and steps, often formed by intermittent fault activity or by resistance encountered during fault movement; the dynamically metamorphosed rocks formed along fault zones are called fault rocks. ④ Geomorphological and hydrological signs: the presence of fault scarps and triangular facets, offset ridges and valleys, offset and deflected alluvial fans, abrupt right-angle turns of the drainage system, springs aligned along a particular direction, and lakes and swamps distributed discontinuously in strips.", "The organic world on the earth was in its infancy in the Archaean Eon; the most primitive organisms, single-celled fungi and algae, appeared in the ocean in the late Archaean. Algae flourished unprecedentedly in the Proterozoic, and low-grade multicellular sponges and coelenterates appeared in the late Proterozoic. Terrestrial plants appeared in the Late Paleozoic: semi-aquatic, semi-terrestrial psilophytes (naked ferns) and lycophytes appeared in the Devonian, when amphibians also appeared, and reptiles appeared at the end of the Carboniferous and in the Permian. Birds and mammals appeared in the late Mesozoic; in the Miocene one branch of the ancient apes evolved into anthropoids, and the earliest humans appeared from the Pliocene to the beginning of the Quaternary.", "(1) Theoretical significance: revealing the formation and evolution of the earth. (2) Practical applications: 1) guiding the prospecting for ores; 2) providing site selection and site-stability evaluation for the foundations of large projects: earth science helps avoid hidden engineering dangers; 3) preventing and mitigating geological disasters (landslides, collapses, mudflows, earthquakes, volcanoes, land subsidence, etc.).", "Earthquake: a rapid tremor of the earth or its crust. The most notable geomorphological feature of the Longmen Mountains today is their north-south segmentation: in the northern section elevations generally lie between 1000 m and 2000 m, and toward the northeast the boundary between the northern Longmen Mountains and the Sichuan Basin becomes progressively blurred; in the southern section the terrain is steep and the basin-mountain boundary is sharp. Wenchuan County lies in the Neo-Cathaysian structural belt of Jiudingshan, with complex geological structure, well-developed faults and folds, and intense structural reworking of rock and soil. At the same time, the regional tectonic stress field has made joints and fissures in the rock mass well developed, broken the lithology and developed structural planes, greatly changing the mechanical properties of the rock mass and providing conditions for the development of geological disasters. What occurred was a tectonic earthquake, and a shallow-focus one, caused by the squeezing of the Indian plate.
The cause of tectonic earthquakes, the elastic rebound theory of fault genesis: ① under tectonic stress, the crust or lithosphere undergoes elastic strain and accumulates energy; ② when the stress grows beyond the strength limit of the rock at the focus, the rock suddenly ruptures, or an existing fault in the crust is suddenly displaced anew; the rocks on the two sides of the fault recover their deformation by elastic rebound, releasing a large amount of strain energy at once and producing an earthquake.", "Geological action comprises the various processes, arising from natural causes, that continuously change the material composition, structure and surface morphology of the crust or lithosphere. By the type of energy that drives them, geological processes fall into different classes. Exogenic (external-force) geological processes are driven by the earth's external energy and occur mainly at or near the surface; gravitational energy takes part in almost all of them. They change the surface morphology and the composition of crustal rocks, and can be divided into the geological actions of rivers, groundwater, glaciers, lakes and swamps, wind, and the ocean; in order of occurrence they comprise weathering, slope gravity action, denudation, transport, deposition and consolidation-diagenesis. Endogenic (internal-force) geological processes are driven mainly by the earth's internal energy; they include magmatism, metamorphism, tectonic movement and earthquakes. They occur mainly deep underground, though some reach the surface, and they deform, displace or metamorphose the lithosphere, or remelt material to form new rocks. Tectonic movement is the mechanical movement of lithospheric material, in two forms, vertical and horizontal; it deforms and displaces rocks, creates various structural traces, shapes the structure of the lithosphere, lays down the basis for the development of surface morphology, and can cause changes of land and sea. Earthquakes are rapid tremors within the earth caused by the sudden release, as elastic waves, of strain energy accumulated in rocks; they originate at depth and propagate to the surface, and the vast majority are caused by rock fracture during tectonic movement. Magmatism is the whole process of magma from formation and movement to congealing into rock. Magma is a high-temperature (800-1200°C) melt of underground rock, originating discontinuously at the top of the mantle or deep in the crust; after forming, it moves from depth to shallower levels along zones of weakness, and during this movement, as temperature and pressure fall, the magma itself changes and interacts with the surrounding rocks. Metamorphism is the process by which rocks below the weathering zone are transformed into new rocks in the solid state under the influence of temperature, pressure and fluids.
After metamorphism the original structure and mineral composition of a rock are changed to varying degrees, and in some cases the characteristics of the original rock are completely transformed. Weathering is the in-situ decomposition and fragmentation of minerals and rocks in the surface environment under the action of changes in atmospheric temperature, moisture, oxygen, carbon dioxide and organisms. Denudation is the general term for the destruction and transformation of surface rocks and landforms by moving rivers, groundwater, glaciers and wind. Transport is the process by which geological agents carry the products of weathering and denudation from their original sites to other places. Deposition is the process by which the various materials carried by external agents settle and accumulate as kinetic energy diminishes or as the physical and chemical conditions of the medium change. Consolidation-diagenesis is the process by which loose sediments are transformed into hard rock; it commonly occurs when the load pressure of overlying sediments reduces the pores of the sediments below, expels water and strengthens the bonds between debris particles, or when, under the influence of pressure and temperature, the sediment is partially dissolved and recrystallized. Endogenic and exogenic geological processes are interrelated, but their tendencies are opposite: internal forces complicate the composition and structure of the earth's interior and crust and make the surface uneven, while external forces rework the original composition and structure of the crust, level out the surface relief and drive it toward uniformity. In general, endogenic processes control the course and development of exogenic processes; together they shape the earth's surface morphology.", "In northern China, the northwest is far from the ocean and screened by high mountains; oceanic water vapor can hardly reach it, so large deserts have formed there, and wind transport and deposition have produced aeolian sand and aeolian loess. Of the three modes of wind transport, saltation is the chief one, accounting for about 70-80% of the total load; creep (surface traction) comes next, at about 20%; suspension is least, generally under 10%. The saltating and creeping material, about 90% of the load, consists mainly of 0.2-2 mm sand, concentrated within 30 cm of the ground and especially within 10 cm; it moves close to the ground over relatively short distances. Suspended material consists mainly of debris finer than 0.2 mm; it travels farther, and the finer the particles, the farther they go. Dust transported from northwest China reaches the middle and lower reaches of the Yangtze River, and dust from central Mongolia is blown to the Loess Plateau in the northwest, the fine dust being carried very far while coarse particles such as gravel remain in place. This is an important feature of wind transport, and accordingly the grain size of China's aeolian loess tends to fine from northwest to southeast. In addition, as the wind speed changes, the carrying capacity of the wind, that is, the size of the particles it can transport, also changes.
But for a given area and a given period there is always a dominant wind speed.", "The 20th century was a new period in the development of modern earth science. During it a series of changes took place in traditional earth science, of which the most profound was the revolution in solid earth science.", "According to the dynamic-synergy mode of oil and gas migration and accumulation, oil and gas reservoirs can be divided into 3 categories and 8 kinds. The three categories are structural, stratigraphic and lithologic oil and gas reservoirs. The eight kinds are: oil and gas reservoirs formed under a high-pressure potential field; oil and gas reservoirs formed under a low-pressure potential field; natural gas reservoirs formed by the free release of gas dissolved in oil and water; oil and gas reservoirs formed under the action of buoyancy; oil and gas reservoirs; deep-basin gas reservoirs formed by the volumetric expansion of natural gas; coal-bed gas reservoirs formed by molecular adsorption; and methane-hydrate gas reservoirs formed by molecular hydration. The oil and gas reservoir is the basic unit of oil and gas accumulation and the object of oil and gas exploration. Oil and natural gas are dispersed in the early stages of their formation and exist in the source beds; they must migrate and accumulate to form industrial reservoirs that can be exploited, which requires certain geological conditions. These conditions can be summed up in six words: \"source, reservoir, cap, trap, migration and preservation\". Source bed: an oil- and gas-bearing formation with the conditions to generate petroleum. It is rich in organic matter, deposited in a reducing environment, fine-grained and dark-colored, and composed mainly of argillaceous rocks and carbonate rocks. Source beds may be marine or continental; in addition, a source bed must pass through a certain geological history, that is, reach maturity, before oil and gas can form. Reservoir bed: a rock formation that can store oil and natural gas and also yield them; it has good porosity and permeability and is usually composed of sandstone, limestone or dolomite, or of shale, volcanic rock or metamorphic rock with fractures. Cap rock: the rock formation covering the reservoir, of poor permeability and difficult for oil and gas to pass through; it acts as a shield preventing their escape. Shale, mudstone and evaporites are common cap rocks. Trap: when oil and gas migrating in the reservoir bed meet some kind of barrier, they can move no farther forward and accumulate in a local area of the reservoir; such a place of accumulation is called a trap, for example anticlinal and dome traps, or traps formed by faults together with monoclinal strata. Migration: after forming in the source bed, oil and gas are transferred into the porous reservoir bed by pressure, capillary action, diffusion and the like. Under gravity the oil and gas droplets float up to the top of the reservoir bed, but they cannot concentrate there in large quantities. Only when tectonic movement forms a trap can the oil, gas and water in the reservoir bed, driven by pressure, gravity and hydrodynamic forces,
continue to migrate and accumulate in the trap and become oil and gas reservoirs of industrial value. Preservation: suitable conditions are required for oil and gas to be preserved. Only where tectonic movement has not been violent, magmatic activity not frequent and metamorphism not deep is preservation favored; conversely, in areas where numerous tensional faults have developed, denudation has cut deep, or magma has been active, oil and gas cannot be preserved.", "Rivers are the basis for the survival of human beings and many other creatures, and the cradle of human history and civilization. At the same time, as the most important shaping force on the land surface, rivers strongly mold the surface landscape, and their formation and development record the evolution of regional landforms and environments. The Yangtze River and the Yellow River are the first and second largest rivers in China and world-renowned rivers. Both rise on the Qinghai-Tibet Plateau, the \"roof of the world\", cross China's three great topographic steps, and flow eastward into the sea; their gestation and development provided a suitable geographical environment for the formation of Chinese civilization. The present drainage pattern of the Yangtze and the Yellow River is the result of interactions among the hydrosphere, atmosphere and lithosphere through long geological history. When were they born? What stages of development have they passed through? These scientific questions, which have occupied geomorphologists for more than a century, still have no clear answers. Moreover, a river and the environmental elements around it (vegetation, soil, groundwater, etc.) constitute an open system in dynamic equilibrium, and at different scales of time and space the mechanisms and processes by which river systems respond to tectonic activity and climate change are as yet uncertain. Detailed study of the formation of the Yangtze and Yellow River systems and of the interaction mechanisms among the earth's surface spheres is therefore of great scientific significance for understanding the formation and evolution of China's macroscopic landforms and environment and the uplift of the Qinghai-Tibet Plateau, and is also of practical significance for river protection, environmental management and other purposes. Because the Yangtze and the Yellow River cross many different geological-structural units between source and estuary, their formation history and evolution are extremely complicated. At present, debate over the geomorphological evolution of the Yangtze valley centers on the Three Gorges section and the great bend of the Jinsha River. On the origin of the Three Gorges there are many different views, such as river capture by headward erosion, an antecedent river, and a river superimposed from an ancient planation surface [1-4]; dating results have been obtained from studies of the sedimentary strata of river terraces, basins and the estuary, but they differ widely, the shortest being only a few hundred thousand years and the longest more than 2 million years [5-7].
Near Shigu the Jinsha River turns abruptly, at an acute angle, from its original northwest-southeast course to the northeast; this strange bend is called \"the First Bend of the Yangtze River\", and its origin remains an unsolved problem. Over the past hundred years scholars have carried out many studies and advanced their own views, which in summary fall into two camps: the river-capture theory and the non-capture theory. The former holds that the Jinsha River originally ran south from Shigu through the Baihanchang-Jiuhe-Jianchuan longitudinal valley and emptied into the Lancang River or the Red River via the Yangbi River: several rivers originally flowed south along faults, and as the Yunnan Plateau rose the original outlet was blocked while several tributaries eroded headward, eventually causing the lower section of the Jinsha River to turn and flow east [8]. Opinions differ on the time of the capture and breakthrough, ranging from the Eocene [9] to the late Pleistocene [10]. The non-capture theory holds that the horseshoe-shaped great bend here is entirely structurally controlled, related to the conjugate structures trending NW-SSE and NNE-SW, and is not the result of river capture [11]. In the past 20 years, research on the Yellow River system has made remarkable progress. Geomorphological and sedimentological studies reveal that the Yellow River formed by the capture and linking-up of internally drained systems: before the Pleistocene, the middle and upper reaches of the present Yellow River were occupied by a number of mutually independent inland lake systems lacking channel connections; around 1.2 million years ago, after Sanmenxia was cut through, the Yellow River below Lijiaxia came into being and flowed eastward into the sea; about 10,000 years ago the river eroded headward to connect with the ancient Zoige Lake, and the modern drainage pattern was finalized [12, 13]. The relations among its formation and evolution, the uplift of the Qinghai-Tibet Plateau and climate change have also been well explained [14, 15]: the uplift of the plateau during the Late Cenozoic determined the range of river incision, while river deposition and erosion are closely tied to climate change between glacial and interglacial periods. Large differences remain, however, over the detailed course and timing of development: for example, there is no detailed picture of the distribution of the pre-Pleistocene internal drainage systems, the connections between them are still unclear, and opinions differ on when Sanmenxia was cut through; some scholars [16], for instance, hold that the Yellow River partially cut through Sanmenxia 400,000 years ago and cut through completely, reaching the sea, 150,000 years ago. In addition, studies of local areas within the basin have argued that tectonic uplift is the main cause of river-terrace formation and has nothing to do with climate change [17, 18]. Clearly, the development and evolution of the Yangtze and the Yellow River through geological history is a very complicated question; different scholars study it from different angles with different evidence, which is undoubtedly an important reason for the divergent opinions.
To solve these scientific problems it is necessary, with the support of constantly updated technical means, to collect a large amount of data on geology, geomorphology, sedimentation and paleo-environmental evolution, to sift the false from the true, and to carry out detailed research with multidisciplinary methods from a systems perspective.", "It has long been known that the bed material of a river gradually fines from upstream to downstream. The upper reaches of a large river commonly lie in high, steep mountains, where the channel is cut in hard rock and huge stones more than 1 m across are piled on the bed. Downstream, as the channel gradient decreases, the gravel on the bed becomes smaller, and when the river passes from the mountains onto the plains, that is, in its middle and lower reaches, the bed material becomes finer still, passing in turn from gravel to coarse, medium and fine sand. The deposits on the bed are sediment laid down by the river as hydrodynamic conditions change during transport: downstream the gradient declines, the velocity slackens and the maximum grain size the flow can carry decreases, so the bed deposits fine downstream. At the same time the gravel collides with and rubs against the bed as it travels (abrasion), which also reduces its grain size. Long-term study of the downstream fining of alluvium and its causes indicates that the main controlling factors include the selective transport of particles, erosion rate, sediment supply, channel width, water depth, flow velocity and changes in bed slope. Very interestingly, it has also been found that the change of bed material from gravel to sand is discontinuous and abrupt: when the grain size has fallen to about 10 mm it suddenly drops to below 1 mm within a very short distance, and transitional reaches dominated by 1-10 mm material are missing. This phenomenon was first discovered by the Japanese geomorphologist D. Yatsu in 1955 [1], who studied the downstream change in the median grain size of the bed material of two rivers in Japan, the Ogigawa and the Watarasegawa: the former is a sandy river whose median grain size decreases continuously along its course, while the latter is a gravel-sand river whose median grain size decreases discontinuously, jumping abruptly from gravel grade to sand grade with the 1-10 mm grades missing. The phenomenon has since been observed widely in different regions of the world and is universal. What causes it has long puzzled the scientists who study rivers: although more than 50 years of research have produced various hypotheses, none has gained general acceptance, and this seemingly simple problem has become one of the difficult problems of fluvial geomorphology.
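For reference (an addition, not cited in the passage above): the classical first-order description of downstream fining by abrasion is Sternberg's law, which assumes that the loss of grain size is proportional to the distance travelled, so that

$$ D(L) = D_0 \, e^{-\alpha L} $$

where $D_0$ is the initial grain diameter, $L$ the distance downstream and $\alpha$ an abrasion coefficient that depends on lithology. A smooth exponential decline of this kind cannot by itself produce the abrupt gravel-to-sand jump described here, which is part of what makes the missing 1-10 mm fraction so puzzling.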
Smith and Ferguson pointed out three possible causes of this discontinuous change: first, control by a local base level; second, the input of fine-grained sediment from tributaries; and third, the crushing and abrasion of sediment particles [2]. When the local base level of a river rises, deposition occurs and fine sediment is laid over the coarse sediment deposited earlier, so an abrupt change in grain size appears at the downstream end of the depositional reach. Since tributaries are steeper than the main stream, the sediment they deliver is also coarser, so below a tributary confluence the bed sand of the main stream suddenly coarsens. These two factors, however, operate only locally; if the abrupt change of bed material from gravel to sand is a common phenomenon, they cannot reasonably explain it. The fragmentation and abrasion of sediment particles certainly reduce grain size, but cannot convincingly explain the absence of the 1-10 mm size fractions. Some scholars explain the discontinuous change of grain size through changes in hydraulic conditions. Howard's mathematical modeling showed that abrupt changes in grain size are often associated with abrupt changes in bed gradient, representing a critical transition from one river type to another [3, 4]. Ferguson held that the sudden fining of bed sand from gravel to sand can be achieved by grain-size sorting under the nonlinear and critical behavior of sediment transport and deposition as the flow shear force decreases downstream, and he confirmed this hypothesis by mathematical simulation of an idealized river composed of sand and gravel [5]. To truly explain the phenomenon, however, one question must be answered: where did the missing 1-10 mm sediment go? Did it never exist, or is it piled up in some unknown place? If it never existed, sediment of 1-10 mm grain size would have to be physically unstable; but rock types differ from basin to basin, the finer sand grains are produced by weathering or abrasion of coarser ones, and rocks of different mineral composition cannot all be unstable in just these size grades. Therefore still deeper research is needed before the question of why the material composition of the river bed changes abruptly during downstream fining can be answered satisfactorily.", "The bed of an alluvial river is composed of loose sediment, and the long-term interaction between the sediment-laden flow and the bed material has shaped river beds of every posture and expression. For example, the lower Yellow River and the middle and lower Yangtze are both alluvial rivers, yet the wandering reach of the lower Yellow River has a very wide, shallow bed with a steep gradient, while the Jingjiang reach of the middle Yangtze has a narrow, deep bed with a gentle gradient. What is the reason?
The difference in bed form is the result of the long-term adjustment of different rivers to the water and sediment conditions determined by the physical-geographical factors of their basins. An alluvial river is an open system in dynamic equilibrium; its bed geometry is molded in its own deposited sediments and is therefore highly plastic. Under given conditions of incoming water volume and its variation, incoming sediment load, incoming sediment grain size and its variation, and the material composition of the channel boundary, the long-term interaction between the sediment-carrying flow and the bed tends to produce a particular combination of channel width, water depth, gradient and flow velocity, such that the sediment-carrying capacity determined by this combination is just sufficient to convey downstream all the sediment delivered from the basin, with no obvious erosion or deposition of the bed. This state of neither scouring nor silting is the state of sediment-transport balance. It follows that under such balance a definite functional relation forms among the geometric variables of the bed (width, depth, gradient), the hydraulic variable (velocity), and the incoming water, incoming sediment and sediment grain-size composition; on the basis of this relation the geometry of an alluvial river bed can in principle be solved theoretically. This idea marked the leap from qualitative to quantitative study of fluvial processes and forms, and is of great theoretical and practical significance. Starting from it, and according to the laws of water flow and sediment movement, the following equations can be written [1]: (1) the flow continuity equation, Q = B h U; (2) the flow resistance (Manning) formula, U = (1/n) h^(2/3) J^(1/2); and (3) the flow sediment-carrying capacity formula, S = K [U^3/(g h ω)]^m. Here U is the mean velocity of the cross-section; B the river width; h the mean depth; J the hydraulic gradient, which may be taken as the bed gradient; n the Manning roughness coefficient; S the concentration of the bed-material part of the suspended load; g the acceleration of gravity; ω the settling velocity of the bed-material part, which is determined by, and reflects, the grain size of the sediment; and K and m a coefficient and an exponent. In these equations Q represents the incoming-water condition; since the sediment concentration S equals the incoming sediment load divided by the incoming water volume, it represents the incoming-sediment condition once the water volume is given; and the settling velocity ω reflects the sediment grain size. These variables express the water and sediment conditions determined by basin factors and may be treated as constants, and the Manning coefficient n, determined by the boundary conditions of the bed, is also a given constant. Under sediment-transport balance the actual sediment concentration equals the sediment-carrying capacity, so formula (3) expresses the balance condition. The river width, water depth, gradient and velocity are thus the adjustable variables, the unknowns of the equations; if the equations could be solved, the bed geometry of an alluvial river under given flow, incoming sediment and grain-size composition would be obtained.
But, unfortunately, this system of equations contains four unknowns, the river width, water depth, gradient and velocity, yet only three equations; one more condition is needed to close it. How to supply this fourth equation is an unresolved problem of river science. In search of this supplementary condition, various hypotheses (the critical-motion hypothesis, the minimum energy-dissipation-rate hypothesis, the maximum sediment-transport-rate hypothesis, the minimum activity hypothesis, the minimum variance theory, etc.) have been produced in order to close the system. The critical-motion hypothesis addresses pebble-bed rivers in mountain areas: since the sediment entering the channel from the basin can be regarded as wash load that takes no part in forming the bed, formula (3) does not apply and two conditions must be added. From considerations of the force balance of bed material, Li Riming et al. [2] advanced two hypotheses: ① at bankfull flow, every point on the bed boundary is in the critical state of incipient motion; ② no change. A system of four equations is thus obtained, whose solution gives the width, depth and slope of mountain pebble-bed rivers. Dou Guoren proposed the minimum activity hypothesis [3]: under given incoming water, incoming sediment and bed boundary conditions, different bed cross-sections have different degrees of activity, and in adjusting itself the bed strives to establish the cross-sectional form of least activity. He formulated an expression for the activity of the river bed, took its minimum, and obtained the equation expressing the fourth condition; solving the closed system then yields formulas for river width, water depth and bed gradient. The minimum energy-dissipation-rate hypothesis holds that the adjustment of an alluvial river keeps its rate of energy dissipation at a minimum. Writing this hypothesis as a mathematical expression and solving it together with formulas (1), (2) and (3) gives the river width, water depth, gradient and velocity. The dissipation rate can be expressed in different forms, namely per unit river length (γQJ, where γ is the specific weight of water), per unit body of water (γVJ) and per unit bed area (γQJ/B), and the calculations and results differ somewhat according to the indicator used. Zhang Haiyan [4] and Yang Zhida [5] minimized the dissipation rate per unit river length and per unit body of water respectively, and obtained the river width, water depth and bed gradient. The maximum sediment-transport-rate hypothesis holds that the adjustment of an alluvial river maximizes the sediment-transport capacity determined by the bed geometry, so that sediment is transported most efficiently. In fact, sediment transport consumes energy, and maximizing the transport rate at a given rate of energy dissipation is equivalent to minimizing the dissipation rate at a given transport rate; hence the supplementary conditions based on the minimum-dissipation-rate and maximum-transport-rate hypotheses are essentially the same.
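To make concrete how a single supplementary condition closes the system, the following minimal sketch in Python solves equations (1)-(3) together with a purely hypothetical fourth condition, a prescribed width-to-depth ratio B/h = ζ; the closure and all parameter values are assumptions for illustration only, not taken from the text or from any of the hypotheses above.

```python
# Minimal sketch: closing the regime equations (1)-(3) with a hypothetical
# fourth condition (a prescribed width-to-depth ratio B/h = zeta).
# All parameter values are illustrative assumptions, not data from the text.
import numpy as np
from scipy.optimize import fsolve

g = 9.81           # acceleration of gravity, m/s^2
Q = 1000.0         # incoming water discharge, m^3/s (assumed)
S = 1.0            # concentration of bed-material load, kg/m^3 (assumed)
omega = 0.02       # settling velocity of the bed-material load, m/s (assumed)
n = 0.025          # Manning roughness coefficient (assumed)
K, m = 0.05, 0.92  # coefficient and exponent of the capacity formula (assumed)
zeta = 40.0        # hypothetical closure: width-to-depth ratio

def residuals(z):
    B, h, J, U = np.exp(z)  # solve in log space so all variables stay positive
    return [
        Q - B * h * U,                                  # (1) continuity
        U - (1.0 / n) * h ** (2.0 / 3.0) * np.sqrt(J),  # (2) Manning resistance
        S - K * (U ** 3 / (g * h * omega)) ** m,        # (3) transport capacity
        B - zeta * h,                                   # (4) assumed closure
    ]

z0 = np.log([100.0, 3.0, 1e-4, 1.0])  # initial guess for B, h, J, U
B, h, J, U = np.exp(fsolve(residuals, z0))
print(f"B = {B:.1f} m, h = {h:.2f} m, J = {J:.2e}, U = {U:.2f} m/s")
```

Replacing line (4) with a different closure (for instance, one derived from minimizing γQJ subject to (1)-(3)) changes the answer; the point of the sketch is only that the three physical equations leave a one-parameter family of channel geometries until some fourth condition is imposed.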
The above are representative results of establishing formulas for river width, water depth, gradient and velocity by adding one condition to close the system of equations. The supplementary conditions proposed from these various viewpoints, however, are all mere hypotheses that cannot be derived theoretically, and the problem has become one of the unsolved problems of fluvial geomorphology. How to obtain a supplementary condition that rests on a solid physical foundation and can be derived from theory is a problem worth devoting effort to. Of course, it is also possible that under given water-sediment and boundary conditions the bed form is inherently indeterminate and has multiple solutions, this being simply the rich and varied expression of natural rivers; if so, that too requires proof.", "Whether in a broad valley or on a vast alluvial plain, one sees meandering rivers; it is extremely rare for a river to remain straight over a long distance, and even braided rivers are often curved in their individual channels. This shows that alluvial rivers always tend to develop toward curvature. A channel whose sinuosity (the ratio of channel length to valley length) exceeds a certain value (say 1.5) is called a meander in geomorphology, or a channel of meandering pattern. Why do alluvial rivers become curved? The question sounds very simple, yet it contains profound truth. Geomorphologists have studied the formation of the meandering pattern for more than 100 years and have advanced dozens of hypotheses, but no satisfactory explanation has yet been obtained [1]; the riddle of the origin of meanders is one of the unsolved problems of geomorphology. Many of the theories of meander formation rest on the hypothesis that \"the energy-dissipation rate of the river system tends to a minimum\". A river is an open system in dynamic equilibrium, which is expressed as a balance of forces and a balance of sediment transport: if the energy consumed by the river is exactly what is needed to overcome flow resistance and transport the sediment, its energy-dissipation rate is at a minimum, the river is in sediment-transport balance, and the bed neither erodes nor aggrades. The famous physical geographer Richthofen proposed as early as 1886 that, in order to reduce erosion, rivers \"try to change their gradient, thus forming a meander\" [1]. According to modern studies, owing to the self-adjustment of alluvial rivers, when the sediment supplied from the basin exceeds the river's sediment-carrying capacity the river tends, in order to reach balance, to reduce its gradient by increasing its length, thereby reducing its carrying capacity, and so becomes curved [2, 3]. This hypothesis is widely held. Some new findings, however, challenge it. Scholars have found that \"meandering\" in the broad sense does not depend on the presence of river sediment: on glacier surfaces, melting under sunlight produces small streams that drain the meltwater, and it is noteworthy that many of these streams are curved, with geometries similar to those of meanders developed in alluvium.
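A common idealization of these similar planforms (added here for reference, not given in the source) is the sine-generated curve of Langbein and Leopold, in which the direction angle of the channel varies sinusoidally with distance along the channel:

$$ \theta(s) = \theta_{\max} \sin\!\left(\frac{2\pi s}{M}\right) $$

where $s$ is the distance along the channel, $M$ the channel length of one meander wavelength, and $\theta$ the angle between the local channel direction and the mean down-valley direction. For a given sinuosity this curve minimizes the variance of the changes of direction, which is one reason it fits alluvial meanders, glacier-surface meltwater streams and ocean-current meanders alike.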
At the same time, it has been observed that ocean currents usually do not move in straight lines but take on a planform resembling a sinusoidal curve, very similar to the curved course of a typical alluvial meandering bed. Researchers have found a good linear relation between the span of a meander bend and the width of the crossing reach between two bends, the former being about 12 times the latter. The sample used in establishing this relation was large, including natural alluvial meanders and sediment-carrying laboratory model streams as well as ocean currents and glacier-surface streams carrying no sediment; although their scales differ by as much as 6 orders of magnitude, all obey the relation [3]. It has even been found that the water flowing down a clean glass window in a heavy rain often does not run straight, but follows a curved path resembling a sinusoidal curve. These findings indicate that the formation of meanders is connected with some inherent property of flowing water. It may also be pointed out that in the straight reaches of natural rivers, or in straight artificial channels, the flow at low discharge likewise moves in curves, with alternating bank bars; this too shows that the movement of water has an inherent tendency toward bending. Whether in a river or in an artificial channel, once the flow bends, a lateral circulation, or spiral flow, forms. This circulation produces a transverse gradient, which causes the concave bank to be eroded and retreat while deposition occurs on the convex bank, which silts up and grows; the flow thus bends further, increasing the curvature of the channel [2, 4]. Clearly, as long as the flow bends initially, its curvature will keep increasing under this positive feedback, and if the other conditions for the development of a meandering bed are met, the channel will develop into a meandering pattern. Following this line of thought, the American scholar J. F. Friedkin first carried out laboratory experiments on curved channels: he set the inlet of a model stream at an angle, imposing an initial bend on the flow, and in the course of the model stream's evolution this initial curvature grew and propagated downstream, finally producing a curved bed [4]. To solve at last the difficult problem of why alluvial rivers become curved, future research should proceed along two directions, answering the two questions \"why does the movement of water have an inherent tendency to bend\" and \"what external conditions favor the formation of curved channels\". Ancient Chinese scholars observed long ago that water by nature tends to take a winding course, but how to use the theory of modern fluid mechanics to reveal the physical mechanism of this tendency is still a difficult problem. The inherent tendency of flowing water to bend may be related to instabilities and complexities in the motion of groups of water particles.
A way to answer this question may be found through the theory of turbulence, the analysis of the mechanical stability of water movement, and deeper study of the nonlinear characteristics and complex behavior of flowing water. On the question of \"what external conditions favor the formation of curved channels\", much previous work exists. The factors bearing on the formation of the various channel patterns, the meandering pattern among them, can be summarized under three heads: incoming water, incoming sediment, and the boundary conditions of the river bed; which conditions of each kind most favor the development of curved channels has been studied extensively. A relatively low sediment load favors the meandering pattern: when the sediment load increases, the meandering channel turns into a wandering, branched one, and when it decreases the meandering pattern appears again [5]. The material composition of the channel boundary is closely related to channel curvature: when the erosion resistance of the boundary material is very low, the flow cannot be effectively confined, the convex bank is cut away, and a wandering branched channel develops instead of a curved one; when the bank material is very resistant, the concave bank is hard to erode, which also disfavors the meandering pattern [4]. A moderate erosion resistance of the bed and banks is therefore favorable to the formation of curved channels. Future work should study in depth the physical mechanisms by which incoming water, incoming sediment and bed boundary conditions influence the formation of the meandering pattern, and establish comprehensive criteria for discriminating meandering from non-meandering patterns, so as to reveal fully the formation mechanism of the meandering channel pattern.", "The underlying surface is the earth's surface insofar as it exchanges energy and matter with the atmosphere, and includes natural bodies such as rock, soil, water, ice and snow and vegetation, as well as man-made ones such as cities, villages and roads. The underlying surface affects atmospheric heat through ground radiation, latent-heat transport and turbulent transport, and affects atmospheric moisture through evaporation from water surfaces, soil evapotranspiration and plant transpiration; it is therefore one of the important factors in the formation of climate, and also one of the main channels through which human activities affect climate. Changes of the underlying surface may in general be divided into changes of surface features, changes of color and changes of relief. Changes of surface features are changes of the surface cover, such as from vegetation cover to bare soil, or from river valley to large reservoir; changes of color are changes in the basic color of the underlying surface caused by the seasons or by changes of cover, such as plants turning from green to yellow, or grassland turning to sand through overgrazing and reclamation; changes of relief are changes in the geometric form, arrangement and scale of surface units, such as building cities, reclaiming land from the sea, and afforestation.
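The heat-balance reasoning used in this passage can be made explicit with the standard surface energy budget (added for reference, not part of the source text):

$$ R_n = (1-\alpha)\, S_\downarrow + L_\downarrow - L_\uparrow = H + LE + G $$

where $R_n$ is net radiation, $\alpha$ the surface albedo, $S_\downarrow$ incoming shortwave radiation, $L_\downarrow$ and $L_\uparrow$ the incoming and outgoing longwave radiation, $H$ the sensible heat flux, $LE$ the latent heat flux and $G$ the ground heat flux. Changing the underlying surface changes $\alpha$ and the partition between $H$ and $LE$, which is the mechanism behind the heat-island and water-body effects discussed below.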
Changes in the surface features and color of the underlying surface alter the surface albedo and thereby the heat balance and water balance of the surface; changes in its relief alter the surface roughness and thereby the near-surface wind-speed distribution, latent-heat transport and turbulent transport. From the standpoint of physical geography, the study of the relation between the underlying surface and local climate focuses on the local-climate or microclimate effects caused by changes of the underlying surface, and physical-geographical units such as cities, water bodies, green spaces, and ice and snow are the foci of attention. It is now recognized that the main cause of the urban heat island is the transformation of the underlying surface from green, soft, permeable forest, grassland or farmland into gray, rigid, impermeable streets, houses and infrastructure; the resulting changes in surface albedo and in the surface heat balance produce the temperature difference between city and suburbs, and hence the urban heat-island phenomenon [1]. At the same time, the extensive impermeable surfaces of a city change the permeability and runoff paths of the original surface, causing ponding, overflow and the spread of sewage when the city meets a heavy rainstorm. To cope with the inconvenience that urban heat islands and urban rainstorms bring to production and life, the cooling and humidifying effects obtained by enlarging urban green space, modifying the outer wall surfaces of buildings and greening roofs have in recent years been attracting the attention of scholars and urban administrations [2]. Building reservoirs and destroying or restoring wetlands can likewise have significant local climate effects. Studies have pointed out that a water body cools its surroundings by day and warms them by night, its influence extending about 10 km on the leeward side but only about 2 km upwind, and reaching a height of 200-400 m [3]. In addition, the climatic effects of oases and sparse vegetation in arid regions [4], of forest vegetation [5] and of ice and snow cover are also hot topics in the study of the relation between the underlying surface and local climate. Underlying surfaces are not only of many types but also of different spatial scales: there are large-scale questions, such as quantitatively explaining how changes in the properties of farmland, desert, forest and grassland surfaces affect local climate, and small-scale ones involving buildings, green spaces and pools. To study the climatic effects of the underlying surface at these different scales, the main methods are ground observation, remote-sensing monitoring and numerical simulation. Obtaining by ground observation the differences in temperature, humidity, wind direction and speed and other meteorological elements between different underlying surfaces is the basis for an objective understanding and correct evaluation of the climatic effect of the underlying surface.
Ground observation at fixed meteorological stations yields high-precision, continuous, measured values of the various meteorological elements at a given location, while satellite remote sensing yields lower-precision, discontinuous, estimated values of some elements over a given area; combining the two can, to a certain extent, clarify the correspondence between large-scale underlying surfaces and regional climate [6]. For small-scale underlying surfaces, however, such as the patchy, diversified urban surface, both surface meteorological observations and high-spatial-resolution remote-sensing data suffer from insufficient accuracy and continuity, so the climatic effects of individual patches of different underlying surfaces cannot yet be revealed accurately and objectively. Numerical simulation also faces many problems. For example, the NCAR non-hydrostatic mesoscale model MM5V3 can refine its grid only to about 1 km, so the climatic effects of underlying surfaces smaller than 1 km cannot be resolved; land-surface process schemes (the mosaic method, statistical-dynamical methods, etc.) simulate homogeneous underlying surfaces well, but for heterogeneous surfaces there remain problems such as the parameterization of land-surface fluxes and scale conversion [7]. Moreover, since the \"underlying surface\" is a spatial concept, quantitatively explaining the impact of changes in its properties on local climate requires, above all, \"areal\" data. How to convert the \"point\" data observed at meteorological stations into \"areal\" data, and how to fit the \"areal\" data obtained by satellite remote sensing to the measured \"point\" data with high precision, that is, the problem of scale conversion, is likewise a difficulty in studying the relationship between the underlying surface and climate (a minimal interpolation sketch follows below). [Figure 1: Surface temperature changes of different underlying surfaces] To further study the impact of underlying-surface properties on local climate, the first need is quantitative analysis and parametric description of the various attributes of different types of underlying surface, so that definite relationships with meteorological elements can be established, and, on this basis, systematic in-situ meteorological observation. The second is to improve the methods for observing the climatic effect of the underlying surface and to integrate multiple observation techniques so that \"areal\" meteorological data can be obtained directly and with high accuracy [8]. For example, using an automatic weather station together with a portable thermal imaging camera, high-precision \"areal\" surface-temperature data can be obtained within the camera's field of view; combining such small-scale \"areal\" data with simultaneous high-resolution satellite imagery makes it possible to grasp the distribution of the climatic effects of the same type of underlying surface over a larger area.
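Before turning to the third point, here is a minimal sketch of the point-to-area (\"scale conversion\") step raised above: interpolating station (\"point\") observations onto a regular grid (\"areal\" data) by inverse distance weighting, one of the simplest possible choices. The station coordinates and temperatures are invented for illustration; real work would use kriging or similar geostatistical methods and validate against the remote-sensing field.

```python
import numpy as np

def idw_grid(st_x, st_y, st_val, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of station ("point")
    observations onto a regular grid ("areal" data)."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    # distance from every grid node to every station, shape (ny, nx, n_stations)
    d = np.sqrt((xx[..., None] - st_x) ** 2 + (yy[..., None] - st_y) ** 2)
    d = np.maximum(d, 1e-6)          # avoid division by zero at station points
    w = 1.0 / d ** power             # closer stations get larger weights
    return (w * st_val).sum(axis=-1) / w.sum(axis=-1)

# hypothetical air-temperature readings (deg C) from four stations (km coords)
st_x = np.array([0.0, 5.0, 2.0, 8.0])
st_y = np.array([0.0, 1.0, 6.0, 7.0])
st_t = np.array([31.2, 29.8, 28.5, 27.9])
field = idw_grid(st_x, st_y, st_t, np.linspace(0, 8, 81), np.linspace(0, 7, 71))
```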
The third is to refine and improve existing numerical models, raising their sensitivity and resolution and seeking conversion relationships across scales.", "The current understanding of haze weather is that, with the rapid expansion of the economy and accelerating urbanization, aerosol pollution of the atmosphere is becoming more and more serious, and episodes of visibility degradation caused by aerosols are increasingly frequent. Pollutants emitted by human activities, both directly emitted aerosols and gaseous pollutants that form fine secondary aerosol particles through chemical and photochemical conversion, degrade visibility and form haze, which has also been called soot fog, smog, dry fog, aerosol cloud, or atmospheric brown cloud [1-3]. A very succinct description of haze weather is \"a low-visibility event caused by aerosol particles under high-humidity conditions\". The composition of the aerosol that forms haze is very complex. In recent years the environmental effects of increasingly serious haze weather and the climatic effects of aerosol radiative forcing have attracted wide attention from the scientific community, government departments, and the public, and have become a hot topic. In a broad sense, the essence of haze weather is fine-particle aerosol pollution [4, 5], which belongs to the category of atmospheric aerosols. The scientific definition of an aerosol is \"a dispersed system of solid or liquid particles suspended in a gas\". So far, however, there is no uniform, accepted classification of atmospheric aerosols or naming system for their different types. The concept of the atmospheric aerosol has both physical and chemical aspects; aerosols can be classified by source, and they exist mainly as mixtures, rarely as single substances, except for condensation nuclei exposed in an unsaturated atmosphere. Constrained by the K\u00f6hler curve, sulfuric and nitric acid droplets cannot cross the supersaturation hump and oscillate at the submicron scale. After excluding precipitation particles (raindrops, hail, graupel, snow grains, ice pellets, and snow crystals), the water droplets and ice crystals in the near-surface aerosol constitute meteorological fog and mist, and the remaining non-aqueous components are what meteorology calls haze [6]. Controversy over the definition and criteria of haze: the scientific community generally holds that the formation of fog (or mist) requires a nucleation process, which in turn requires a supersaturated environment; particles activate when the ambient supersaturation exceeds their critical supersaturation, or when the particle diameter exceeds the critical diameter. The nucleation humidity of condensation nuclei lies roughly between 95% and 100%, yet in routine meteorological observation the relative-humidity threshold separating haze from fog (mist) is set very low, as low as 55% at some stations and at most 85%, which is clearly too low. Arriving at a reasonable standard will require a large amount of high-precision observation under high-humidity conditions together with theoretical work, on the basis of which identification criteria can be proposed.
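The critical quantities just mentioned come from K\u00f6hler theory. For reference, the standard approximate form of the K\u00f6hler equation (textbook notation, not from the original text) for the saturation ratio over a solution droplet of radius \(r\) is

$$ S(r) \approx 1 + \frac{A}{r} - \frac{B}{r^3}, \qquad A = \frac{2\sigma M_w}{\rho_w R T}, $$

with \(B\) proportional to the dissolved solute mass. Maximizing \(S(r)\) gives the critical radius and critical supersaturation:

$$ r_c = \sqrt{\frac{3B}{A}}, \qquad S_c - 1 = \sqrt{\frac{4A^3}{27B}}. $$

Particles smaller than \(r_c\) sit in stable equilibrium on the curve (haze droplets); only when the ambient supersaturation exceeds \(S_c\) do they activate and grow freely into fog or cloud droplets, which is why the haze/fog boundary is tied to a humidity threshold.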
Bret and Doyle, in studies of the long-term visibility trends affected by haze in the United States and the United Kingdom, pointed out clearly that the original visibility observations must have visual-range obstructions such as precipitation removed and must be corrected for relative humidity to ensure data quality, so that misjudged haze can be separated from mist, and the mist embedded within haze can also be separated out [7, 8]. [Figure 1: Guangzhou on the morning of November 2, 2003, with haze (Baiyun Mountain in the distance). Figure 2: Guangzhou without haze on the morning of November 3, 2003 (Baiyun Mountain in the distance).] Combining the specifications of the World Meteorological Organization and others, a conceptual model for identifying haze has been given in preliminary form (Figure 3: conceptual model for distinguishing haze from fog). Controversy over the definition and criteria of haze continues, mainly because very little is yet known about the hygroscopic growth, mixing state, and nucleation characteristics of aerosol particles, especially submicron particles. The solution depends on accurate observation of aerosol particles, with two keys: first, a fast instrumental method that can reliably determine whether atmospheric particles are water droplets, which awaits developments in optical technology; second, accurate measurement of the nucleation point, i.e., the humidity threshold at which condensation nuclei begin to activate, from the phase-change humidity until they become water droplets entirely. Encouraging progress has been made, with preliminary results giving relative humidities well above 95% and close to 100%. Is there direct observational evidence for a relationship between haze and photochemical smog? In urban aerosols the number concentrations of giant and submicron particles often differ by a factor of 10\u2076, and the deterioration of visibility is contributed mainly by fine particles, which are associated with gas-to-particle conversion; the hypothesized rapid conversion pathway is that gaseous photochemical-smog precursors (nitrogen oxides, carbon monoxide, and volatile organic compounds) emitted by motor-vehicle exhaust and other sources drive photochemical processes under ultraviolet radiation and eventually form fine particles such as organic nitrates. A practical problem also awaits study: the current air-quality evaluation system is based on mass concentration and cannot evaluate haze weather caused by pollution with numerous fine particles. The understanding of photochemical smog is far from clear; knowledge of its precursors and products, in both observational facts and mechanistic analysis, is very poor compared with that of ozone, the marker of photochemical smog, and as for the photolysis rates of smog precursors, the photochemical rates of the products, the interaction between aerosols and photochemical processes, and whether controlling haze would lead to more serious ozone pollution, many questions remain at the current level of understanding. In the relationship between haze and meteorological conditions, is source emission dominant, or are meteorological conditions dominant? The formation of haze weather has air-pollutant source emissions as its internal cause and meteorological conditions as its external cause.
Urban air pollution makes haze appear frequently: the more the emissions of air-pollutant sources exceed standards, the higher the frequency of haze weather. When source emissions reach the capacity limit under the most unfavorable diffusion conditions, haze weather begins to appear; when they reach the capacity limit under ordinary diffusion conditions, haze weather occurs frequently; and when they exceed the capacity limit even under favorable diffusion conditions, haze weather appears every day [9]. Many scholars at home and abroad have studied air quality from the perspectives of synoptic situation, temperature inversion layers, the mixed layer, and various meteorological factors; but compared with research on the physical and chemical characteristics of aerosols, little is known about the relationship between haze weather and meteorological conditions. For example, is the dilution and diffusion of atmospheric pollutants dominated by advective transport, or by vertical exchange and turbulent transport? This remains in doubt. Increased humidity raises the extinction coefficient of aerosols, and haze particles worsen visibility after absorbing moisture, but the details of this process are still unclear. Does haze affect human health? The large numbers of extremely fine particles in haze weather can enter the alveoli through the respiratory tract and harm the population. Satellite measurements show that atmospheric aerosol loading over China's densely populated areas is about 10 times that over Europe and the eastern United States. Exposure to very high aerosol concentrations can cause serious health problems, including respiratory and cardiovascular disease, DNA damage, and lung cancer. Although the biological mechanisms are not yet fully understood, statistics show that haze has increased respiratory morbidity and cardiopulmonary mortality to a considerable extent, and there is evidence that particles emitted by diesel engines contain mutagenic and carcinogenic substances. Statistical results strongly indicate a relationship between increased haze weather and lung-cancer mortality in heavily polluted large cities such as Guangzhou in southern China [10]. To understand the relationship between haze weather and human health clearly, however, it is necessary to analyze the relationship between the daily mortality of urban residents and the daily mean concentrations of PM10, PM2.5, PM1, and black carbon, and, by comparing the health hazards of acute exposure to fine-particle aerosols of different sizes and to black carbon, the spatiotemporal variation of the exposure-response relationship, and the differences in health effects, to test the health hazards of urban residents' acute exposure to fine-particle aerosols and the contribution of the black-carbon component; there is still a long way to go.", "The theory and practice of modern hydrological modeling originated in the late 19th century. Hydrological models today fall into two main categories: lumped models and distributed models. Compared with the relatively simple lumped model, the distributed model parameterizes hydrological processes and the three-dimensional spatial distribution and temporal variation of hydrological variables and parameters, and is characterized by spatially distributed inputs and outputs rather than catchment-averaged quantities (Fig. 1).
The distributed hydrological model that simulates runoff generation and concentration in a watershed on the basis of physical mechanisms (conservation of mass, momentum, and energy) is also commonly called a \"white-box model\". [Figure 1: Schematic diagram of the principle of the distributed hydrological model] Since their inception, the problems facing distributed hydrological models have borne the technical stamp of their times. In the 1970s and 1980s their development was limited mainly by the capability of computers; from the 1990s computers developed rapidly, and computing resources were no longer the bottleneck; in recent years especially, \"3S\" technology (RS, GPS, GIS) for three-dimensional digital information about the earth has effectively promoted the development and spread of distributed models. At present the international hydrological community holds that deep understanding of hydrological systems, modeling of complex systems, and interdisciplinary issues have become the major difficulties that distributed hydrological modeling must face. Beven summarized the problems of distributed hydrological models under five headings: nonlinearity, scale, uniqueness, equifinality, and uncertainty [1]. The first three concern cognition of basic hydrological mechanisms and are undoubtedly the greatest difficulties of current distributed modeling. Nonlinearity. Nonlinear problems lie at the heart of most problems faced by distributed hydrological modeling. The hydrological system is a nonlinear system, and every distributed model must describe nonlinear hydrological processes: to describe runoff within a computational unit, whether the Richards equation or the SCS curve-number method is used, the governing relation is nonlinear (a sketch of the curve-number relation is given after the discussion of scale viewpoints below). Reggiani et al. [2] attempted to apply the conservation equations of mass, energy, and momentum directly at sub-basin and sub-grid scales to solve this parameterization problem, but without success. Another aspect of nonlinearity is that nonlinear systems are highly sensitive to initial and boundary conditions, which are therefore often difficult to determine in distributed simulation. Scale. Scale issues in distributed models are closely related to nonlinearity. They mainly include the upscaling of governing equations and parameterizations, and the coupling of distributed hydrological models with meteorological models (such as general circulation models, GCMs, and the regional model MM5). One of the claimed physical features of the distributed model is that its parameters can be obtained by field measurement; but measurements are parameterized features at the point scale, and applying such point measurements directly to a model computational unit (of a certain shape and area) inevitably introduces scale errors. On how the scale problem may be solved, there are at present two viewpoints, as follows.
Beven [3] believes the scale problem will ultimately prove insoluble and that the scale dependence of distributed hydrological models must simply be accepted; Bl\u00f6schl [4] believes that the scale problem will gradually yield important advances in hydrological theory and practice. Uncertainty. Because uncertainty in hydrological systems is pervasive and complex, and current methods for treating the various uncertainties are still exploratory, research on hydrological uncertainty has become a hot topic in hydrological science. Uncertainty in distributed models includes how uncertainty is expressed, uncertainty analysis, and estimation of parameter uncertainty. Recent progress includes: using multi-criteria integration techniques for uncertain factors to assess the integrity of water-resource systems; using risk-analysis methods to estimate the risk of extreme hydrological events such as floods and droughts; and using the stochastic distribution and identification of uncertain data sources to investigate methods of quantifying uncertainty in hydrological models. To promote the application and development of distributed hydrological models in China, research should focus on the following aspects. \u2460 Strengthen research on the mechanisms of hydrological processes, increase the density of spatial observations, and improve observation methods, striving to understand watershed hydrological processes from the viewpoint and depth of physics and to address the nonlinearity and scale problems of distributed models; this is especially important for regions such as China where hydrological infrastructure is relatively lacking and basic research needs strengthening. \u2461 Draw on mature foreign experience and strengthen research on coupling distributed hydrological models with geographic information systems, including compatibility of model data formats with common GIS data formats and systematic coupling of the models themselves with GIS software. \u2462 Follow the development of remote-sensing technology and improve the application of remote-sensing data in distributed models. \u2463 Study parallel computing structures for distributed hydrological models: although the computing power of microcomputers and workstations can at present basically meet the needs of distributed models, as more partial differential equations are used to describe hydrological processes and more factors are considered in the numerical methods, the computational demands of distributed models will inevitably rise sharply, so parallel structures must be considered in model design. At present the distributed hydrological model is still a research model, and its results are difficult to apply directly to watershed management [5]; in the long run, building a management model, or coupling with one, is the goal of distributed-model development. Chinese hydrological model development need not start from scratch.
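For concreteness, the SCS curve-number relation mentioned under the nonlinearity heading above can be written in a few lines. This is the standard metric form with a hypothetical curve number; a minimal sketch, not any particular model's implementation.

```python
def scs_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS curve-number direct runoff depth (mm) for storm rainfall p_mm.
    S is the potential maximum retention; Ia the initial abstraction.
    The quadratic dependence of Q on P is one concrete example of the
    nonlinearity discussed above."""
    s = 25400.0 / cn - 254.0        # retention S (mm) from the curve number
    ia = ia_ratio * s               # initial abstraction, conventionally 0.2*S
    if p_mm <= ia:
        return 0.0                  # rainfall fully absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# doubling rainfall more than quadruples runoff here -- the response is nonlinear
print(scs_runoff(50.0, 75), scs_runoff(100.0, 75))   # ~9.3 mm vs ~41.1 mm
```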
Building on related research abroad, it is entirely possible to organize multidisciplinary teams to develop and design distributed hydrological models directly for watershed management. In addition, the effective organization of, and interaction among, modelers from many disciplines is an important factor in advancing distributed hydrological models.", "Hydrological forecasting has progressed from empirical formulas and lumped models to distributed models, with fruitful results. Looking back at hydrological forecasting research, one common point is that most studies focus on gauged basins, i.e., appropriate empirical relationships or models are established from existing data and then used for forecasting. Yet the world contains countless data-poor basins, and basins with data may effectively become ungauged when environmental change renders their historical records unusable. How to forecast in such ungauged basins has long been a difficult problem for hydrologists; Chinese hydrologists, represented by Guo Jinghui and Liu Changming, began work in this area as early as the 1950s. To address this problem, the International Association of Hydrological Sciences (IAHS) formally launched the international hydrological program PUB (Prediction in Ungauged Basins) [1], intending to devote the following decade to hydrological forecasting in ungauged basins. In 1965 the hydrological community launched the first International Hydrological Decade (IHD); as hydrology's second ten-year plan, PUB is expected to be another milestone with far-reaching influence on the development of hydrology. Traditional hydrological forecasting first obtains inputs (such as rainfall, human activities, and pollutant discharge at upstream sections) and outputs (such as river flow, industrial and agricultural water use, and pollutant discharge at downstream sections), then establishes a relationship or model between input and output (such as a rainfall-runoff model, water-resources model, or water-quality model), and finally predicts the output from future or design inputs. When any one of the three links, basin input, output, or model, is unknown, hydrological forecasting faces severe challenges. To solve this problem, the PUB executive committee will, over the next 10 years, first select basins with good data in the world's different hydroclimatic regions and carry out field observation programs, producing demonstration basins that well reflect PUB's plans and goals, for detailed scientific study. Second, the status of hydrometric-station data in world water-resources and water-quality management will be raised worldwide, and the close relationship between data and forecast uncertainty will be demonstrated in ungauged demonstration basins. Third, the technical capability of forecasting in ungauged basins will be improved worldwide by using the newest information to constrain the uncertainty of hydrological forecasts, based on understanding of how climatic and land-surface characteristics differentially control hydrological processes.
Fourth, the scientific basis of hydrology will be deepened by improving understanding of how climate and land-surface characteristics control the natural variability of hydrological processes, of forecast uncertainty, and of the impacts of human-induced change in climate and land-surface characteristics. Finally, PUB-related capacity building will be promoted broadly in the community through the transfer of technology. The PUB plan sets five major tasks for the future: \u2460 establish new methods of hydrological interpretation from existing data archives, including data supplementation, reanalysis, inter-basin comparison, and global hydrology; \u2461 raise the theoretical level of describing process variability through detailed process studies; \u2462 enhance, through uncertainty analysis and model diagnosis, the representativeness of hydrological models for hydrological processes, so that hydrological research no longer merely fits the flow hydrograph but mines the key signals of watershed response; \u2463 use new data-collection methods, such as remote sensing, to support the modeling and forecasting of large-scale processes; \u2464 develop hydrological models based on multi-scale expansion, calibration theory, and complex systems. At present, research on uncertainty, on forecasting the factors in each link of the water cycle, and on forecasting hydrological processes in data-deficient basins is the focus of efforts to solve this major hydrological problem. There is still a long way to go, but the PUB program offers an opportunity to get there.", "Desilication-allitization of soil means that, under given biological and climatic conditions, the primary aluminosilicate minerals in the soil are weathered and decomposed, the alkali- and alkaline-earth-metal ions and silicic acid among the products are leached away with water, and iron and aluminum oxides are relatively enriched. Desilication is the premise; allitization is the result and outward expression. In the tropical and subtropical areas of southern China, where rain and heat come in the same season, desilication-allitization is an important soil-forming process of the iron-rich soils and ferrallitic soils. Traditional soil-science textbooks hold that the formation of these soils results from the long-term combined action of two soil-forming processes: desilication-allitization and bioaccumulation. For a long time there have been different views on the migration of silicon and its bioaccumulation during soil formation and evolution in China's tropical regions. It is generally believed that the soils there are undergoing desilication-allitization under hot, rainy bioclimatic conditions and are typical zonal ferrallitic soils. On the basis of the genetic characteristics of different parent materials in Kunming, Hainan, and other regions, and of silicon-leaching data from seepage water and drainage collectors, Zhao Qiguo et al. [1-3] held that desilication and aluminum enrichment are still proceeding, i.e., that so-called modern red-soil formation is still in progress. Huang Zhenguo et al.
[4] likewise held that not all red weathering crusts in southern China are ancient: modern climatic zonality has left a deep imprint on their desilication-allitization, and the development of the red weathering crust has never been interrupted from the Mid-Pliocene through the entire Quaternary, the present crust being an inheritance from the ancient one. Many international studies of tropical regions reach similar conclusions. In contrast, there is the view that the red weathering crust of China's tropics and subtropics lacks obvious signs of desilication. As early as the 1940s, Zhu Xianmo [5] held that the red soils and red weathering crust of southern China are products of long-term action of the paleoenvironment; desilication is a process that once existed, while under modern bioclimatic conditions the small biological cycle dominates soil formation and plays a role of resilication rather than desilication, a view endorsed in [6] and by Gong Zitong [7] with respect to the \"formation of red weathering crust on Quaternary clay\". Lu Jinggang [8] also questioned whether red soil is being formed at present, arguing that one cannot infer that desilication-allitization is still proceeding merely from the presence of silicon in seepage water; moreover, although silicon leaching is relatively evident, the base ions of soils developed on volcanic ejecta since the middle-late Quaternary in subtropical Zhejiang have not been leached away. For more than half a century such debates have not ended. In fact, the weathering decomposition of aluminosilicates and the subsequent leaching of bases are common in humid regions, and desilication can be observed even in temperate or cold alpine regions; yet equally, even in typical tropical soils many topsoils undergo no significant desilication because of biological processes. Can the desilication-allitization of soil proceed, then, and under what thermal and kinetic conditions? In essence this is a problem of thermodynamic and kinetic controls [9]: under what temperature, moisture, and concentration-gradient conditions can desilication proceed? Solving this problem will help to elucidate the migration of soil substances and the formation mechanisms of some important soil features under different environments, provide a scientific basis for understanding the genesis and classification of soils such as the iron-rich and ferrallitic soils of the south, and supply more complete pedological evidence for understanding the biogeochemical cycle of silicon [10].", "A species' range (distribution area) is a basic biological attribute of the species; the superposition of the ranges of all species constitutes the geographical pattern of biodiversity on Earth. Range size differs markedly among species and follows different rules on different continents and oceans, in different biological groups, and along different environmental gradients; its causes involve geographical differentiation, biological evolution, environmental change, and ecological response [1].
Among these patterns, the phenomenon that the latitudinal extent of species ranges widens with increasing latitude was first discovered in 1979 by the Argentine biogeographer E.H. Rapoport. In 1989 the American ecologist G.C. Stevens rediscovered it, named it \"Rapoport's rule\", and held it to be a universal law of biogeography [2]. In 1992 and 1996 Stevens further clarified the rule's content and extended it to mountain elevational gradients and ocean depth gradients, emphasizing that its significance lies in the possibility that the distribution of range sizes and the geographic pattern of species richness are controlled by the same mechanism. Rapoport's rule was first questioned by Rohde et al., and it soon became a swirl of controversy in ecology [3]. Over the past 20 years its universality, testing methods, formation mechanisms, and relationship to other biogeographic patterns have been heatedly discussed and extensively tested, so far without resolution. Summarizing studies across biological groups, environmental gradients, geographical regions, and test algorithms, Gaston et al. concluded that the existing tests are still too few and too unbalanced in these respects for a definite verdict on the rule's universality, and suggested that it be called the \"Rapoport effect\" [4]. That review further stimulated tests along elevational and ocean-depth gradients; regionally, tests on continents other than the Americas have increased; the objects of testing have extended from animals to plants and from land to marine life; and the range of test algorithms has broadened further. These studies found that the outcome of testing Rapoport's rule is strongly affected by multiple factors in the testing process: sampling of species' geographical distributions is often insufficient and of uneven quality; phylogenetic relatedness among species affects the results, yet the phylogenies of many biological groups on Earth remain undetermined; and almost all existing test algorithms carry defects or biases of one kind or another. These factors contribute significantly to the inconsistency of empirical results, hindering judgment of the phenomenon itself and exploration of its underlying mechanism [5, 6]. Regarding the formation mechanism of Rapoport's rule, several hypotheses have been advanced: \u2460 the differential extinction and dispersal hypothesis, aimed mainly at high-latitude areas once covered by glaciers; \u2461 the mean-climate-condition hypothesis, emphasizing gradients in average climatic conditions; \u2462 the interspecific-competition hypothesis, which holds that larger range widths at high latitudes may be related to lower species richness there; \u2463 the evolutionary-rate hypothesis, which holds that the life processes and genetic evolution of tropical organisms are faster than those of cold regions, so the proportion of newer species is higher and their ranges have not yet had time to expand [7]. It has also been pointed out that the latitudinal gradient of richness itself can produce an artificial \"pseudo-Rapoport phenomenon\" under common sampling designs [8]. These explanations are not mutually exclusive, but none alone covers all situations.
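Of the testing algorithms referred to above, Stevens' band method is the most common; the sketch below applies it to synthetic ranges (all data invented for illustration). Its known weakness, also noted above, is that the same wide-ranging species enters many bands, so the band means are not statistically independent.

```python
import numpy as np

def stevens_band_means(lat_min, lat_max, bands):
    """Stevens' band method for testing Rapoport's rule: for each
    latitudinal band, average the latitudinal range sizes of all
    species whose ranges overlap that band."""
    ranges = lat_max - lat_min
    means = []
    for lo, hi in bands:
        in_band = (lat_min < hi) & (lat_max > lo)   # species present in band
        means.append(ranges[in_band].mean() if in_band.any() else np.nan)
    return np.array(means)

# toy ranges for 200 species, wider ranges placed toward higher latitudes
rng = np.random.default_rng(0)
mid = rng.uniform(0, 60, 200)                        # range midpoints (deg)
width = 2 + 0.3 * mid + rng.normal(0, 2, 200).clip(-1.5)
lat_min, lat_max = mid - width / 2, mid + width / 2
bands = [(b, b + 5) for b in range(0, 60, 5)]        # 5-degree bands
print(stevens_band_means(lat_min, lat_max, bands))   # rises with latitude
```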
In recent years, studies of the influence of regional boundaries on species distribution patterns have found that the physical constraint of a boundary, though unrelated to any biotic or abiotic environmental gradient, has an objective, real impact on species ranges and on richness patterns, and affects the outcome of testing Rapoport's rule differently under different algorithms [9]. A comprehensive explanatory framework for Rapoport's rule will therefore take time to establish, and the mechanism linking range size and richness patterns under the constraint of distribution boundaries depends even more on answering the former question. The formation of species ranges involves many ecological, evolutionary, and geographical processes, and Rapoport's rule spans large spatial scales with many factors that are difficult to control; approaches based on simulation models and small-scale controlled experiments are therefore the future direction for verifying the rule and probing its mechanism. Variation in range size exists in all regions of the world and along environmental gradients of all scales; whether a universal rule exists depends on the joint action of many related biotic, abiotic, and historical factors, so the final interpretation and prediction of Rapoport's rule may depend on integrating them [10]. Although Rapoport's rule concerns the size of species ranges, the intrinsic links between this attribute and other biological, ecological, and evolutionary attributes of species are where the charm of this biogeographic puzzle lies. [Figure 1: The latitudinal breadth of ranges of different North American taxa varies with latitude; adapted from reference [2].]", "The theory of island biogeography proposed by MacArthur and Wilson [1] has had a profound impact on nature conservation and remains important today. On its basis, Diamond [2] proposed that one large protected area can preserve more species than several small ones of the same total area, so that reserve design should favor the principle of \"single large\" rather than \"several small\". This view has sparked heated debate ever since it appeared, known as the SLOSS debate (single large or several small) [3]. The reasons for holding that \"one large\" is better than \"several small\" are [4]: large reserves have higher species immigration rates and lower extinction rates [5]; large reserves have greater capacity and carrying capacity, with relatively stable resource supply; the survival of certain populations, and even individuals, requires large areas; large reserves can preserve whole ecosystems and community structures, including entire trophic levels and food chains; large reserves buffer disasters; predators, parasites, and competitors pose less threat in large reserves; a large reserve costs less to manage than several small ones; small reserves tend to lose genetic diversity because inbreeding is severe; and small reserves show a strong Allee effect, i.e., low population density leads to low population growth, since small populations have little chance of finding mates and suffer reduced reproduction and individual survival.
The reasons for holding that \"several small\" is better than \"one large\" are [6]: the claim that \"one large\" preserves more species than \"several small\" does not strictly follow from island biogeography, which under certain conditions can equally imply that several small reserves preserve more species [7] (the species-area arithmetic is sketched after this passage); several small reserves generally encompass habitat diversity and can thereby increase species diversity [8]; the life cycles of many species, especially animals, require different habitats; several small reserves can reduce competition, being independent of one another with less mutual interference; several small reserves can provide refuges for prey; for many species the immigration rate correlates with boundary length rather than with area, so several small reserves have higher immigration rates; several reserves also have more boundaries, which are ecotones and can increase species diversity; several small reserves are more resilient to disasters, for if disaster strikes one large reserve the whole area is likely to be affected, whereas among several small reserves usually only a few are struck; \"islands\" (here, the reserves themselves) are generally undersaturated with species, but small islands reach saturation relatively quickly; and several small reserves can generally accommodate more rare and endemic species [9]. The SLOSS debate is not over, but some studies suggest it has lost its meaning, especially for nature conservation. Diamond's premise [2] was that there is no habitat difference between large and small reserves and that the habitats of the several small reserves are homogeneous, so that species overlap among similar small reserves and their total species number is small. In practice this premise rarely holds, because habitat differences generally do exist among several small reserves. There are other reasons for considering the SLOSS debate meaningless [10-12]: different species have different area requirements, so the answer is species-dependent; island biogeography counts all species, whereas what actually needs protection is often a few rare, particular species, which the theory does not address; on the number of reserves, it is generally believed that a few small reserves can increase species diversity but that too many will reduce it, which again depends on species' area requirements; the purpose of nature conservation is not only species number but also species composition and structure, which the SLOSS debate cannot reflect; the debate takes no account of economic factors, so its applicability is poor; and \"one large\" versus \"several small\" is no longer the question, what matters being the network of reserves, and so on. Hence the view that the SLOSS debate should be replaced by the corridor debate [13] and by metapopulation theory; for example, the extinction rate of a metapopulation is related to habitat area, and SLOSS can be analyzed quantitatively by mathematical methods [14-16].
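The derivation alluded to in the first point above can be made explicit with the species-area relation used in island biogeography (a schematic argument, not a design rule):

$$ S = cA^{z}, \qquad 0 < z < 1. $$

One reserve of area \(A\) holds \(cA^{z}\) species, while \(k\) reserves of area \(A/k\) hold

$$ k \cdot c\left(\frac{A}{k}\right)^{z} = cA^{z}\,k^{\,1-z} > cA^{z} $$

if their species lists do not overlap at all, but only \(c(A/k)^{z} < cA^{z}\) if they overlap completely. The outcome thus hinges on the degree of between-reserve habitat difference and species overlap, which is exactly Diamond's contested premise.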
Since nature conservation cannot be wholly independent of social and economic influences, combining the SLOSS debate with economic, social, and cultural factors [17] to design reserves that satisfy ecology, economy, and culture simultaneously has become a problem that current research and practice urgently need to solve. In short, nature reserves are established to provide conditions and suitable places for the survival and breeding of rare and endangered species and for the conservation and restoration of regional biodiversity; yet because of the demands of human production and life, their area and extent cannot expand without limit. Within a limited area, how should reserves be configured, and what shape and distribution should they take? Whether a single large reserve is better than several small ones has long been disputed in theory; each side's arguments have shortcomings that fail to convince the other, and so far no winner can be declared. With the continuing development and intersection of disciplines, the pure SLOSS debate can no longer fully solve the problem of reserve design; it usually must be combined with other factors such as economy, society, culture, and politics, and the interests of all parties must be weighed. Meanwhile, the traditional SLOSS debate was mostly qualitative or semi-quantitative; with the development of geographic information systems, remote sensing, spatial analysis, biology, and related technologies, reserve design has entered a stage of quantitative mathematical analysis and landscape design and has become a meeting point of many disciplines. Thus, although the SLOSS debate has produced no systematic, standard answer, the discussion of reserve-design concepts that it triggered retains scientific significance; with the development of ecology and related disciplines it has acquired new connotations and a broader extension, and remains a focus of research in ecology and conservation biology.", "Between forests and grasslands on land there is often a forest-grass transition zone, generally known as the timberline. The timberline occurring in high mountains is called the alpine timberline, and between grassland and forest on level ground there is also a dry timberline. The survival mechanisms of the two types have similarities but also their own characteristics, and they have become a focus and difficulty of current research. Grassland and forest are two major vegetation types of very different character; between them appears the forest-steppe transition zone, or xeric forest limit. The tropical savanna distributed under tropical summer-rain climates is characterized by tall grasses and scattered trees; in the temperate zone, besides savanna-like landscapes in North America and elsewhere, extensive treeless grasslands are more widely distributed, and the dry timberline presents a patchy, interdigitated distribution of forest and grassland. The mechanism of forest-grass coexistence is very complex and has been discussed by plant geographers and vegetation ecologists throughout the past century [1, 2].
It is generally believed that the following three kinds of factors may affect the coexistence of forest and grass (Figure 1). [Figure 1: Factors that affect the coexistence of trees and grasses; from reference [1], modified.] The first is climate. Precipitation controls the maximum forest cover; the magnitude, frequency, duration, and seasonal distribution of precipitation determine when roots can use water, and insufficient or ill-timed precipitation prevents normal photosynthesis from adding biomass. Air temperature is superimposed on precipitation, affecting water availability. Sankaran et al. [2], reviewing half a century of research on the savanna zone, concluded that annual precipitation determines the maximum tree cover; yet under constant annual precipitation tree cover still varies greatly, which they explained as the joint effect of soil, fire, and grazing. The second is soil moisture. In climate-dominated forest areas, soil water availability determines water availability in the vegetation root zone; soil water deficit combined with high evaporative demand causes cavitation of xylem conduits and root dehydration, impeding water transport and dehydrating the plant, or photosynthesis is curtailed by stomatal closure, leading to a shortage of the carbohydrates needed for metabolism [3]. The earliest explanation, the two-layer root model proposed for savannas, holds that coexistence arises because grasses confine their roots to the upper soil while tree roots are distributed mainly at depth, minimizing competition for soil water [4]; but with the development of isotopic techniques, more and more studies have found that the two-layer pattern is not universal, and in particular that trees also maintain large numbers of absorptive fine roots near the soil surface [5]. The two-pool model of soil moisture proposed in recent years attempts to explain this: shallow soil water acts as a growth pool, characterized by coupled water and nutrients and rapid use of water, while deep soil water acts as a storage pool, characterized by persistence, low nutrient content, and low water content; trees can draw on different pools at different times, whereas shallow-rooted herbaceous plants can use only surface soil water [6]. Topography influences the forest-grass pattern mainly through soil moisture. Grasslands are generally flat, but flat terrain is not conducive to forest growth; forests tend to grow where topography varies, for at least the following reasons: \u2460 slopes collect water, one reason forests can grow in arid mountains; \u2461 sloping terrain drains well, whereas flat land drains poorly, which is unfavorable to forests; \u2462 topographic diversity can reduce interspecific and intraspecific competition and promote forest formation; \u2463 the seeds of woody plants are relatively large, and differences in elevation aid seed dispersal.
In addition, flat terrain favors the movement of air and the formation of strong winds, which can cause physiological drought in trees and even uproot them, with fatal consequences for forest survival. The third is disturbance. Grazing, fire, and the browsing of young leaves by animals affect tree seed dispersal, stand regeneration, and intraspecific competition in semi-arid regions. High-frequency, high-intensity fire disturbance is generally thought to disfavor tree growth and thereby increase herbaceous cover [7]. The effects of grazing and browsing are very complex, manifesting more as suppression of herbaceous growth and promotion of woody growth [7]. As for the vegetation pattern of the forest-grass ecotone and its drivers, although new research methods have allowed continual refinement of past theories and hypotheses, understanding remains at the level of conceptual models and lacks the ability to predict future dynamics. Unresolved questions include: \u2460 what factors control the relative proportions of woody and herbaceous plants at a given site? \u2461 how do woody and herbaceous plants interact, and does their ratio change under particular environmental conditions? \u2462 how does the net primary productivity (NPP) of woody plants, of herbaceous plants, and of their mixed ecosystem change as the tree-grass proportion changes [1]? In recent years especially, many studies have shown large-scale forest die-off in the forest-grass ecotone under climatic drying [8], while others have shown woody encroachment into grassland zones in many parts of North America, South America, Africa, and Australia [9]. Why does woody encroachment succeed in some areas while trees die off in others? These puzzles still perplex the scientific community; research on the mechanism of the forest-grass pattern will help answer them fundamentally and permit accurate prediction of future forest-grass vegetation dynamics.", "The alpine timberline refers to the transition zone between the upper limit of closed-canopy forest and the alpine shrub-meadow belt, including tree islands and krummholz (Figure 1); low temperature is generally regarded as the main driver of its distribution [1]. Alpine timberlines are therefore highly sensitive to climate change and readily capture early signals of global change, and the associated change processes and their indicators can be used to interpret the impact on, and response of, terrestrial ecosystems to global change (such as biodiversity change, the displacement of natural vegetation belts, and their feedback to the regional climate system); this is one of the important topics of global-change research [2]. However, great uncertainty remains as to whether treeline positions will rise and the productivity of existing stands will increase under past and modern climate warming. Survey data show that the responses of alpine/boreal timberlines to warming in different parts of the world exhibit divergent trends of advance, stasis, or retreat, and consensus on the formation mechanism of timberlines is still lacking [3-5].
[Figure 1: Alpine timberlines on the shady slope (Abies georgei var. smithii, 4300-4400 m) and the sunny slope (Juniperus saltuaria, 4400-4500 m) of the Sejila Pass, southeast Tibet.] The debate over the formation mechanism of timberlines has gone on for more than 100 years, and many hypotheses (frost damage, mechanical disturbance by wind and snow, reproductive failure, carbon starvation, low-temperature physiological drought, low-temperature growth limitation, etc.) have been put forward [1, 5-6]. Because of the complexity and inaccessibility of the alpine environment, however, these hypotheses still lack systematic and effective observational verification, or have only partial, local observations that cannot be extended to other regions. Around the global treeline formation mechanism, the focus of debate is: does the photosynthetic production of plants at high latitude/altitude suffer water or nutrient stress induced by low temperature? K\u00f6rner and colleagues [1, 7-8], reasoning from plant physiology, hold that high-altitude plants generally experience \"carbon saturation\" (low temperature, below about 7\u00b0C, restricts cell division, so surplus photosynthate cannot be used) rather than \"carbon starvation\" (low temperature and the associated changes in soil moisture and nutrients having no direct impact on photosynthesis), because with increasing altitude leaf nitrogen content and maximum photosynthetic rate generally remain constant or even increase, while nonstructural carbohydrate content (NSC, an index of the source-sink balance of photosynthate) tends to rise. On the basis of this low-temperature growth-limitation hypothesis, K\u00f6rner and Paulsen [9] further proposed a growing-season soil temperature threshold of 6.7\u00b10.8\u00b0C as the limiting factor explaining the global distribution of treelines, since cell division in the terminal buds and root tips of most plants is generally inactive below about 7\u00b0C, close to this soil temperature threshold. This growth-limitation hypothesis poses a serious challenge to traditional physiological-ecological model theory (which understands plant growth and distribution through leaf gas exchange) and has attracted wide attention. However, the results of controlled experiments in treeline areas do not fully support it [10], and the trend of plant NSC content with altitude is not generally consistent across regions, species, and seasons [11-13]. At the Emei fir treeline of Gongga Mountain on the eastern Qinghai-Tibet Plateau, NSC measurements in different seasons do not support \"carbon saturation\" and point to a possible winter shortage of carbohydrates [11-12]. In the Andes of South America and the Hengduan Mountains of southeast Tibet, at altitudes above 2500 m, some scholars [14-15] found that the leaf nitrogen content and maximum photosynthetic rate of the same species decreased with altitude, and leaf carbon-isotope data further indicate that in humid alpine treeline areas photosynthetic production is subject to water and nutrient stress exacerbated by low soil temperature [16-17].
Sveinbj\u00f6rnsson [18] holds that changes in plant nonstructural carbohydrate content are a risk-investment strategy for resisting stressful habitats (such as low temperature or drought) and cannot show whether photosynthesis is limited. Wieser and Tausz [5] further pointed out that the source and sink of photosynthate form a linked system in which each promotes or constrains the other, and environmental stress affects both; whether high-altitude plants possess a specialized physiological-ecological adaptation mechanism therefore remains to be tested. Past studies of treeline formation have paid too much attention to short-term physiological measurements at the individual level; long-term observation and controlled experiments on treeline ecosystems are needed to understand the water- and nutrient-use processes that affect plant growth and distribution in treeline areas. Only then can the functional characteristics and significance of the transition from forest to shrub and grass near the timberline be clarified [17, 19]. Moreover, seedlings are more sensitive than adult trees to changes in environmental factors, and whether the treeline rises or falls under future climate change depends critically on the survival and growth of tree seedlings beneath the canopy [4]. Under warming, reduced winter snow cover can expose the surface to extreme low temperature and soil drought, reducing seedling density or preventing seedlings from growing into trees, so the treeline position may remain unchanged or even decline [20]. At the same time, research methods require the further establishment of cross-regional, networked in-situ observation platforms and related mechanistic models, to test the theoretical hypotheses and to understand a globally general mechanism of treeline formation.", "In the Northern Hemisphere, zonal evergreen forest vegetation shows a typical \"bimodal\" distribution along latitude: its two centers are the tropical-subtropical evergreen broad-leaved forests at low latitudes and the temperate/cold-temperate evergreen needle-leaved and mixed needle- and broad-leaved forests at high latitudes; between the two centers lie warm-temperate forests dominated by deciduous broad-leaved trees, and north of the high-latitude evergreen coniferous forests lie the cold-temperate deciduous coniferous forests, the larch forests [1]. Why can evergreen forest not be distributed continuously, instead forming this \"bimodal\" pattern? Why do larch forests appear north of the evergreen forests? These questions have long concerned researchers in plant geography and ecology. It is generally believed that evergreenness (long leaf lifespan) is an adaptation to long-term stress such as high cold and water or nutrient deficiency, whereas deciduousness (short leaf lifespan) reflects rapid growth and adaptation to seasonally stressful environments such as drought or cold winters [2]; leaf lifespan is thus to some degree indicative of plant/vegetation distribution. However, the mechanism by which forest leaf lifespan changes with latitude is still unclear, and how to explain these anomalies in the distribution pattern of forest vegetation remains a puzzle of phytogeography.
Internationally, there are few studies relating vegetation distribution to leaf lifespan. Based on cost-benefit theory, Kikuzawa [3] simulated the latitudinal distribution of optimal leaf lifespan from the length of the growing season and the principle of maximizing carbon harvest, and interpreted the zonal distribution of vegetation as follows. The growing season shows clear zonal differentiation. If the non-growing season is short or almost absent (tropical and subtropical regions), keeping leaves evergreen benefits carbon harvest. As the non-growing season lengthens (warm-temperate regions), the cost of maintaining the canopy through it increases; once that cost exceeds the cost of building new leaves the following year, shedding leaves becomes the more economical strategy. Where the non-growing season lengthens further (temperate and cold-temperate regions), a plant cannot produce enough dry matter within one growing season to repay the cost of leaf construction, and can only repay it in stages by prolonging leaf lifespan and accumulating over the long term; at the same time, low leaf nitrogen content (which is negatively correlated with leaf lifespan) also reduces the respiratory consumption of these high-cost leaves. Where even long-term accumulation cannot repay the cost of leaf construction, the plant can only survive in deciduous form, minimizing leaf construction cost and respiratory consumption while improving photosynthetic efficiency (as in the northern larch forests) [4]. Studies of the physiological and ecological mechanisms show a necessary relationship between the distribution of evergreen vegetation and the retention of nutrients within the system. In the low-latitude tropics and subtropics, plants face severe nutrient leaching in a hot, humid environment, and extending leaf lifespan can improve nutrient-use efficiency to a certain extent [2]; but high metabolism and high consumption in a high-temperature environment do not favor extending organ lifespan further, so leaf lifespans of 1 to 3 years usually result. In temperate and cold-temperate regions at high latitudes, evergreen coniferous forests often have a relatively high leaf area index, so much less sunlight penetrates the canopy to the ground than in broad-leaved forests; this further lowers soil temperature and limits root activity, and thereby the uptake of soil nutrients and water. For high-latitude evergreen coniferous forests it may therefore be more economical to enlarge the plant's own nutrient storage pool by increasing aboveground (especially leaf) biomass, and to improve nutrient-use efficiency by prolonging the residence time of nutrients in the plant through longer leaf lifespans [5, 6].
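Kikuzawa's cost-benefit argument can be made concrete with a toy calculation. In a simplified version of such a model (a sketch of the logic, not the published formulation in full), daily carbon gain declines linearly with leaf age, p(t) = a(1 − t/b); a leaf costs C to build; and the favorable leaf lifespan t* maximizes the mean net gain G(t) = (a·t − a·t²/(2b) − C)/t, giving t* = √(2bC/a), so longevity rises with construction cost and falls with photosynthetic capacity. All parameter values below are hypothetical.

```python
import numpy as np

def optimal_leaf_lifespan(a, b, C):
    """Analytic optimum of G(t) = (a*t - a*t**2/(2*b) - C) / t,
    the mean daily carbon gain over the leaf's life: t* = sqrt(2*b*C/a)."""
    return np.sqrt(2.0 * b * C / a)

def mean_gain(t, a, b, C):
    """Mean daily net carbon gain for a leaf kept t days."""
    return (a * t - a * t**2 / (2.0 * b) - C) / t

# Hypothetical parameters: a = initial daily gain, b = leaf age (days) at
# which gain reaches zero, C = construction cost in the same carbon units.
cheap_leaf  = dict(a=1.0, b=800.0, C=50.0)    # e.g. a thin deciduous leaf
costly_leaf = dict(a=0.6, b=2000.0, C=300.0)  # e.g. a thick evergreen needle

for name, p in [("cheap/high-gain leaf", cheap_leaf),
                ("costly/low-gain leaf", costly_leaf)]:
    t_star = optimal_leaf_lifespan(**p)
    # numeric check of the analytic optimum
    ts = np.linspace(1, 3 * t_star, 20000)
    t_num = ts[np.argmax(mean_gain(ts, **p))]
    print(f"{name}: t* = {t_star:6.0f} days (numeric {t_num:6.0f})")
```

Under these toy numbers the costly, slowly gaining needle is kept for roughly four years while the cheap leaf is shed within a single season, reproducing the qualitative evergreen/deciduous contrast discussed above.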
Recently, Zhang et al. [7] analyzed survey data on forest canopy leaf lifespan from 10 sites along the eastern China forest transect, combined with the vertical transect of Gongga Mountain and global literature data, and found temperature to be the main limiting factor of forest canopy leaf lifespan (leaf lifespan decreases with increasing temperature, and in areas where temperature exceeds 8 °C it increases with increasing precipitation); the correlation follows Weber's law and can be fitted with a logistic function. A map of average canopy leaf lifespan for China, drawn by applying this simple empirical equation to a national database of mean annual temperature and precipitation, agrees well with the early forest vegetation regionalization of China [8] (Figure 1: comparison of the simulated average leaf lifespan of Chinese forest canopies [7] with the early Chinese forest vegetation regionalization [8]). However, a deeper understanding of the internal relationship between leaf lifespan and forest vegetation distribution is still lacking. One important reason is that the relationship between leaf lifespan and an internal factor affecting it, leaf construction cost (the glucose equivalent per unit leaf weight or area), remains unsettled. Moreover, for Larix it is difficult to interpret the distribution from leaf lifespan alone; other, more complicated mechanisms operate, such as a high specific leaf area (leaf area per unit weight), which undoubtedly reduces leaf construction cost, and a canopy of small leaves that transmits light well, which helps raise soil temperature [4]. To study the geographical distribution mechanism of larch, therefore, investigations must proceed from multiple traits in multiple directions.", "Since G. Buffon proposed in 1761 that different regions of the world contain different biota (Buffon's law) [1], work comparing the biota of different regions and exploring why they differ has continued, expanding from mammals to birds, reptiles, insects, plants and other groups [2]. Explanations for this variability appeal to mechanisms such as plate tectonics, climate change, and the dispersal, competition, evolution and extinction of organisms. Although these theories complement one another and reasonably explain today's major biogeographic patterns, the uniqueness of the biota of some regions (such as the Cape of Good Hope in South Africa, and Antarctica) is still puzzling. C. Darwin's five-year round-the-world expedition of 1831-1836 showed the world the very different biota of South America, Australia, Africa and Europe. Through deep study of the biota of these regions, Darwin published the epoch-making Origin of Species and proposed the theory of evolution that changed the entire human world view [3]. In 1856, P. Sclater and A. Wallace divided the world into six zoogeographic regions (Palaearctic, Nearctic, Neotropical, Ethiopian, Oriental and Australian) based on the faunal characteristics of each region [4–6], a scheme still in use today. Compared with this stable zoogeographic division, the global phytogeographic division has been revised continuously. In 1879, A. Engler divided the world into four phytogeographic regions: the northern temperate region, the palaeotropical region, the South American region and the ancient Oceanian region. Later, R. Good and A. Takhtajan added the Australian region and the Cape of Good Hope region [6, 7]. In 2001, drawing on the latest research, B.
Cox removed the Cape of Good Hope and Antarctic regions and added an Indo-Pacific region [8]. These biogeographic divisions amply reveal the striking differences among biota in different parts of the world. What accounts for such large differences? Climate is thought to be the main reason. Specific climatic conditions develop unique biomes: tundra in cold climates; boreal coniferous forests in cold, humid climates; temperate deciduous broad-leaved forests in warm, humid climates; tropical rainforests in hot, humid climates; temperate grasslands in cold, semi-arid climates; deserts in arid climates; and sclerophyllous evergreen shrublands (chaparral) in Mediterranean climates. Quaternary climatic changes had a significant and profound impact on the distribution of organisms: many taxa disappeared in Europe but were preserved and developed in parts of East Asia and North America, and migration, evolution and extinction formed distinctive biota [9, 10]. But why do the biota of Africa and South America differ so greatly in areas with the same climatic conditions (for example, the Cape Verde Islands off Africa and the Galapagos Islands off South America)? Why, though Australia has temperate and tropical climates like those of the Northern Hemisphere, are its organisms so unique? Darwin emphasized heredity and natural selection, and held that the differences among regional biota arose because ancestral species spread from their places of origin along various migration routes into new territories, where they varied and evolved under geographical isolation. The biota of the Cape Verde Islands and of the Galapagos Islands derive from the African and South American continents respectively; the biota of those two continents differ markedly, so the biota that spread from them to the islands also differ, and the isolated island environments caused them to specialize gradually and diverge from each other. Yet how biota in distant Oceania, Africa, South America and elsewhere could cross the ocean barrier and disperse is very puzzling. The theory of continental drift proposed by A. Wegener explained the differences among regional biota from the perspective of the developmental history of the earth [11]. When continents were connected, the diffusion and migration of organisms produced similar biological groups worldwide; when continents separated, isolated habitats arose, environmental change extinguished some ancient species and new evolutionary groups appeared, making the organisms of each continent different from one another. Biota in similar habitats in different regions may appear alike yet be completely different taxa phylogenetically: they are products of the gradual evolution of biota in geographical environments changing with continental drift. The disjunct distribution of many vascular plants between East Asia and eastern North America reflects the independent evolution of the biota of the two regions [12].
After studying the distribution of organisms around the world, Croizat pointed out that plate tectonics is the main reason why the world's bioregions contain different species [13]. The uniqueness of the Australian biota is related to the long isolation of the Australian continent: as early as the Late Cretaceous, 96 million years ago, Australia separated from the other continents, and its ancient biota (such as the marsupials) developed unprecedentedly without competition from more progressive types (such as the placentals). Competition among organisms ultimately determines the biotic character of an area. Because regions differ in developmental history, they form distinctive biological groups, called historical components. When organisms spread into a region, competition with the local historical components sends them in different evolutionary directions, and speciation produces various species groups; different species establish a dynamic balance within biogeographical communities through competition [14]. Climate change, continental drift and biological competition drive the proliferation, evolution and extinction of organisms, and thus form different biota. However, because the factors and processes behind global biotic differences are very complex, under what factors and through what processes a regional biota forms has always been an important problem in biogeography. Exploring the causes and laws of biotic differentiation among the world's regions will help predict the future distribution of organisms and answer such important scientific questions as how future global changes will alter biological distributions. Molecular biology can reveal the essential differences and connections between groups of organisms, and will therefore be an important tool in future research.", "Quantitative reconstruction of past vegetation and land use from fossil pollen data is a huge challenge and an arduous task for palynology in the 21st century [1]. To reconstruct past vegetation quantitatively, the quantitative relationship between pollen and vegetation must first be determined, and that relationship is very complicated, constrained by pollen productivity, pollen source area, pollen preservation, landform and other factors. The difficulty lies in determining the pollen productivity and pollen source range of the major plant types, because different plants differ in pollen productivity and dispersal characteristics [2], and even the same plant differs markedly in pollen productivity between regions [3, 4]. To understand the pollen-vegetation relationship correctly, Davis first proposed the concept of the "R" value [5]; but the R value is simply the ratio of the pollen percentage at a sampling point to the percentage cover of the surrounding vegetation, without considering foreign pollen, which affects the reliability of the R value. In the 1970s, Andersen argued that a linear model could express the quantitative pollen-vegetation relationship, and proposed the concept and calculation of "relative pollen productivity" [6]. The Andersen model takes the influence of exotic pollen into account, but requires absolute pollen data.
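The difference between Davis's R value and an Andersen-style linear model can be shown in a few lines. In the sketch below, R is simply pollen percentage divided by vegetation percentage, while the linear model treats absolute pollen deposition for one taxon as relative productivity times vegetation cover plus a background term for pollen from outside the source area; the taxa and all numbers are invented for illustration.

```python
import numpy as np

taxa       = ["Pinus", "Quercus", "Artemisia"]
pollen_pct = np.array([55.0, 25.0, 20.0])   # pollen percentages at one site
veg_pct    = np.array([30.0, 45.0, 25.0])   # surrounding vegetation cover (%)

# Davis's R value: ratio of pollen % to vegetation %, ignoring
# pollen blown in from outside ("foreign" pollen).
R = pollen_pct / veg_pct
for t, r in zip(taxa, R):
    print(f"R({t}) = {r:.2f}")   # >1: over-represented, <1: under-represented

# Andersen-style linear model for ONE taxon across many sites:
# y_i = alpha * x_i + omega, where y is absolute pollen deposition,
# x is vegetation cover, alpha the relative pollen productivity and
# omega the background (foreign) pollen component.
x = np.array([5.0, 10.0, 20.0, 40.0, 60.0])          # cover (%) at 5 sites
y = 12.0 * x + 300.0 + np.random.default_rng(0).normal(0, 40, 5)  # synthetic
alpha, omega = np.polyfit(x, y, 1)
print(f"pollen productivity ~ {alpha:.1f}, background ~ {omega:.0f} grains")
```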
At present, most palynologists still prefer pollen percentages to interpret changes in pollen assemblages. Parsons and Prentice, and Prentice and Parsons, therefore developed the ERV model from the Andersen model to calculate pollen productivity and pollen source range [7, 8]. It was later improved by Sugita, and the result is commonly known as the Prentice-Sugita model [9], which made it possible to simulate or reconstruct past vegetation from pollen data. Some scholars have used the Prentice-Sugita model to reconstruct quantitatively the past vegetation and land use of parts of Europe [2, 10–16]. Although many studies of topsoil pollen have been carried out in China, research on pollen productivity and pollen source range has essentially not been carried out [17]. The Prentice-Sugita model has been applied successfully in Europe, but European studies show that pollen productivity differs significantly among plant types, and that the productivity of the same plant also varies greatly between regions, so values from one region cannot be applied directly to another [10, 12, 18–21]. The Prentice-Sugita model can be taken as a reference for research on relative pollen productivity and pollen source range in China, but it must be corrected for the characteristics of Chinese vegetation, and its less reasonable components (such as its treatment of sedimentary basins, wind speed and pollen deposition rate) improved. Such work would provide basic data for the quantitative reconstruction of vegetation and land use in the geological past, and a scientific basis for building pollen-based models of past vegetation and climate and for predicting future environmental change.", "Tree rings are formed seasonally as trees grow. Tree growth is controlled mainly by two kinds of factors: ecological factors, including temperature, precipitation and light; and the genetic factors of the trees themselves [1, 2]. Annual rings therefore record not only the age of the tree but also the year-by-year changes of environmental elements during its growth; that is, tree rings have a certain "memory" of environmental change. How do trees grow, and how are annual rings formed? Trunk growth consists of axial growth (increase in height) and radial growth (increase in girth). Radial growth is the main growth process that forms wood. Tree rings are the products of cell division, growth and differentiation in the vascular cambium under the influence of seasonal changes in the external environment [3]. Most trees produce one ring per year. After the growing season begins, the meristem of the cambium starts to divide and form new cells: the cambium divides inward to form secondary xylem and outward to form secondary phloem. The tree disc shown in Figure 1 is a stump cut across; in the middle is the pith, and outward from the pith lie the xylem, cambium and phloem. A complete annual ring can be divided into earlywood and latewood (Figure 1). The xylem remains inside the cambium ring after ring, but the phloem cracks and partly falls off as it is squeezed and deformed during growth; this part is commonly called the bark.
Tree rings have become an important means of studying historical climate change and predicting future climate change because of their wide distribution, high resolution, accurate dating and easily obtained samples [4]. Modern dendrochronology was founded in the early 20th century by the American astronomer A. E. Douglass. As an indispensable proxy for paleoclimate research, tree rings are one of the important technical approaches to studying past global climate change; they have played an extremely important role in the study of past climatic and environmental changes and have yielded fruitful results. Yet as dendrochronology has deepened, it has become clear that using tree rings to study past climate change still faces the problem of how to extract climate information of physiological significance. The most discussed problem in the field at present concerns the extraction of low-frequency signals; handled improperly, such issues may artificially affect the reliability of tree rings for studying climate change. It is well known that the recording of climate signals by tree rings is frequency-dependent [5]. Broadly, tree rings record both high-frequency and low-frequency climate signals. Ring variations at the 10-year scale and shorter are relatively consistent, showing a clear cycle of 3 to 7 years [5]; variations below the 10-year scale are called high-frequency changes, while variations at decadal, multi-decadal and centennial scales are called low-frequency changes. Some studies have pointed out that if the detrending of a tree-ring series or the mathematical analysis is done improperly, only the high-frequency signals can be extracted from the series, and the low-frequency signals cannot be extracted well [6] (Figure 1: schematic diagram of the three-part structure of a tree, with sapwood and heartwood [12]; anatomical photograph of a cross-section of Pinus tabulaeformis showing earlywood, latewood and false rings [13]). Extracting low-frequency signals from high-resolution proxies such as tree rings is of great significance to the study of past climate change, so their extraction has attracted great attention from climatologists and dendrochronologists in recent years. How to extract the low-frequency signals of past climate change accurately from tree-ring records is a difficult problem, for several reasons. First, all trees have a finite lifespan, and tree-ring data cannot provide low-frequency signals beyond the age of the trees. Second, when fitting the tree growth trend (detrending), an ill-chosen mathematical method may artificially filter out the low-frequency information of climate change [7]. Detrending methods for tree-ring series fall mainly into three classes: stochastic curves, deterministic curves and empirical curves [8]. Among the stochastic methods the spline function is widely used; it flexibly fits the growth trend to the shape of the ring-width curve itself [3], a fit that has no physiological meaning for the tree.
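As a concrete illustration of detrending with a deterministic curve, the sketch below fits the modified negative exponential commonly used in dendrochronology, w(t) = a·e^(−bt) + c, to a synthetic ring-width series and divides by the fit to obtain a dimensionless ring-width index; a real chronology would average such indices over many trees.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(t, a, b, c):
    """Modified negative exponential growth curve."""
    return a * np.exp(-b * t) + c

rng = np.random.default_rng(42)
t = np.arange(200)                                   # cambial age in years
raw = 1.5 * np.exp(-0.03 * t) + 0.4                  # biological growth trend
raw += 0.15 * np.sin(2 * np.pi * t / 60.0)           # slow "climate" swing
raw += rng.normal(0.0, 0.08, t.size)                 # year-to-year noise

popt, _ = curve_fit(neg_exp, t, raw, p0=(1.0, 0.02, 0.5))
index = raw / neg_exp(t, *popt)                      # ring-width index (RWI)

print("fitted a, b, c:", np.round(popt, 3))
print("index mean ~ 1:", round(index.mean(), 3))
```

The slow 60-year swing built into the synthetic series survives in the index only to the extent that the fitted curve does not absorb it; a flexible spline fitted to the same series would soak up much of that low-frequency variance.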
Using such a flexible curve to remove the growth trend is therefore very likely to remove some climate information as well. In arid and semi-arid regions, deterministic curves such as linear and negative exponential functions are commonly used to fit the growth trend [3]. The RCS method [9] can currently preserve low-frequency climate information in a chronology, but it too has limitations. For example, it treats the first year of every sample as the first year of the tree's growth, which generally does not match the tree's real physiological age and may underestimate the early growth trend. The regional curve in RCS is still usually fitted with a negative exponential function, which later studies improved [10, 11]; but the tail of the negative exponential also reduces the variance of the early part of the sequence. The RCS curve rests on a series of assumptions, for example that the sampling area shares a common regional climate signal and that this signal can be captured by a sufficiently large number of tree-ring samples spanning different age classes; in practice this assumption is hard to satisfy. The method may also amplify low-frequency trends in the tree-ring series, so one must be very careful when detrending with it, to make sure that the retained low-frequency information is really caused by climate change. In short, current tree-ring detrending methods still have defects. To sum up, the extraction of low-frequency signals from tree rings is a major scientific problem facing dendrochronologists. Despite years of unremitting effort, no satisfactory solution for preserving low-frequency signals has been found. Cook once pointed out that there may be no method in tree-ring climatology that preserves low-frequency signals longer than the tree age, and that even if one exists it is unlikely to be generally applicable [7]. Tree rings play an irreplaceable role, with their unique advantages, in studying historical climate change and predicting future climate trends; but better solutions for low-frequency signal extraction are still needed. Once that problem is solved, climate information of physiological significance can be extracted to the greatest extent, and tree rings will realize still greater potential in the study of past climate change.",
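The RCS idea criticized above (align all series by assumed cambial age, average them into one regional curve, and divide each series by that curve) can be sketched schematically as follows; the data are synthetic and none of the bias corrections from the literature are included.

```python
import numpy as np

def rcs_indices(series_by_age):
    """series_by_age: list of 1-D ring-width arrays, each starting at
    (assumed) cambial age 1 -- the very assumption criticized above."""
    max_age = max(len(s) for s in series_by_age)
    stack = np.full((len(series_by_age), max_age), np.nan)
    for i, s in enumerate(series_by_age):
        stack[i, :len(s)] = s
    regional_curve = np.nanmean(stack, axis=0)   # one curve for all trees
    return [s / regional_curve[:len(s)] for s in series_by_age]

rng = np.random.default_rng(1)
trees = [1.2 * np.exp(-0.02 * np.arange(n)) + 0.3
         + rng.normal(0, 0.05, n) for n in (120, 150, 180, 200)]
indices = rcs_indices(trees)
print([round(ix.mean(), 2) for ix in indices])  # each should hover near 1
```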
"Forests are an important part of terrestrial ecosystems and one of the important links in the global carbon cycle. Studies suggest that forest decline during the Holocene may have increased the concentration of greenhouse gases in the global atmosphere [1]. The potential natural distribution of forests in China is extensive: in the east, from tropical monsoon forests through subtropical evergreen broad-leaved forests, mixed evergreen and deciduous forests and deciduous broad-leaved forests to coniferous forests; in the west, natural forests occur in oases and in alpine areas with relatively abundant precipitation [2]. Against the background of today's global warming, studying the changes and causes of China's Holocene forests is of great scientific significance for understanding the response and feedback of vegetation to global change, and of practical significance for predicting and managing China's forest resources. China has a wealth of historical documents containing many records of forest change, but the detailed records are mainly confined to the past 2,000 years and are unsystematic, so they cannot restore the full picture of forest change during the Holocene [3]. In recent decades, palynologists have done much research on the history and causes of Holocene forest change. Holocene pollen records show that since 6000-5000 years ago (the late mid-Holocene), China's forests have generally declined [4], both in the eastern monsoon region (e.g., Huguangyan Maar Lake in Guangdong [5] and Dahu in Jiangxi [6]) and in the arid and semi-arid west (e.g., Qinghai Lake on the northeastern Qinghai-Tibet Plateau [7] and Daihai in Inner Mongolia [8]). Around 5,000 years ago, northwestern Europe also experienced a large-scale forest decline (marked chiefly by elms), attributed to the joint action of climatic deterioration, pests and diseases, and human activity [9]. There are currently two views on China's late mid-Holocene forest decline. One holds that climate played the major role: the monsoon weakened in the late mid-Holocene under the influence of solar radiation [10], leading to forest decline. The other holds that interference by human activities was the main cause. Although abundant archaeological and documentary evidence shows that forest destruction, deforestation and slash-and-burn farming began 10,000-8,000 years ago [11-14], whether human activities caused the large-scale decline of forest area around 6,000 years ago is not well documented, and pollen records show large-scale forest decline clearly attributable to human activity only within the past 2,000 years [15]. Was the late mid-Holocene decline of China's forests caused by climate change, or by human activities? When did human activity become an important factor in forest change? These questions remain unsolved problems in the field of environmental change. China lacks a pollen indicator of human activity that is roughly synchronous and clearly defined (such as an increase in Ambrosia pollen); for pollen of the grass family, it is difficult to distinguish wild from cultivated plants. To solve these scientific problems, it is necessary, on the basis of accurate dating, to apply multi-indicator analyses of pollen, plant remains, charcoal and phytoliths, combined with historical documents and archaeological data, in systematic studies of the causes of the decline of China's forest vegetation in the late Holocene (climate change or human activities?), so as to understand the respective impacts and degrees of natural climate and human activity on forest change.", "The Agricultural Revolution is one of the most important revolutions in human history. It ushered in a new era: humans began to produce their own food instead of relying solely on nature.
Agriculture stimulated population growth, concentrated the distribution of population, promoted exchange and the accumulation of collective wisdom, greatly advanced the development of human society, and laid the foundation for the birth of civilization. Many scholars believe that the origin of agriculture is closely related to environmental change, from the "oasis theory" advocated by the early British archaeologist Childe [1] and others to the "hilly flanks of the Fertile Crescent" theory derived later. On this basis Binford, an advocate of the New Archaeology, proposed a general theory of agricultural origins, the "marginal zone" theory: agriculture arose in the marginal areas of a species' distribution. But all these theories have met difficulties in explaining the history of agriculture. According to them, for example, it is hard to imagine agriculture originating in tropical jungle, where species are astonishingly rich and diverse; yet in recent years archaeologists have found evidence of early agriculture in the tropical jungles of the Amazon. Among the many current theories of agricultural origins, the "population pressure theory" and the "feasting theory" have received the most attention. According to the population pressure theory, population pressure directly changed the way of life of ancient humans, and the relationship between population and resources changed for two reasons: first, changes in the natural environment reduced the density of the animals and plants on which ancient humans depended; second, population growth approached the maximum carrying capacity of the environment (Figure 1: the possible areas of agricultural origins [4]). Once population growth exceeds the carrying capacity of the environment, people are forced to choose more efficient means of subsistence. As far as the origin of agriculture is concerned, at the end of the Pleistocene humans began to rely on strongly seasonal resources, such as river fish and seasonally migrating birds, for food; they therefore began to settle, which led to population growth and outward migration. Population pressure was felt first in the adjacent, less resource-rich areas, forcing the people there to adopt agricultural production to increase the energy carrying capacity [2]. The feasting theory holds that in the early days of agriculture, with few domesticated animals and plants and unstable harvests, domesticates could not form a large proportion of the human diet of the time, and some domesticated plants had nothing to do with satisfying hunger at all; the domestication of some animals and plants may therefore have been a result of expanding the range of foods and delicacies under conditions of relatively abundant resources [3]. Although these two theories take diametrically opposite views, each has its own rationality, which suggests that the driving mechanisms of agricultural origins may not have been the same in different regions. The meaning behind the two theories is nonetheless profound.
From the perspective of population pressure, the origin of agriculture was a passive and painful process; from the perspective of feasting, it was a journey in search of food. One fact, however, is generally recognized: about 10,000 years ago, relatively stable signs of plant and animal domestication appeared in different parts of the world, such as West Asia, Central America and China, and this was also the beginning of the Holocene, when the climate warmed. The emergence of cultivated crops should thus be the result of interaction among three factors: environment, plants and people. But the source and process of agriculture remain unsolved mysteries. What impetus prompted the origin of primitive agriculture some 10,000 years ago? Why did agriculture emerge simultaneously, in the early millennia of the Holocene, in many unconnected regions of the world? What conditions and motivations did environmental change provide for the origin of agriculture, and how did environment, plants and humans interact to produce it? These questions still demand vigorous study.", "Every Chinese person has known the legend of Dayu taming the flood since childhood, and ancient literature records it often. The Mencius (Teng Wen Gong) says: "In the time of Yao, the waters flowed everywhere under heaven." The Records of the Grand Historian describes Yu's achievements in detail: he threw himself into his work, left home to control the waters only four days after his wedding, and "passed his own door three times without entering." To understand fully the water regime and terrain, he traveled throughout the Nine Provinces, surveyed the mountains and waters, dredged nine rivers, repaired nine great lakes, cut through nine mountain ranges, and finally subdued the flood. In the West there are likewise the stories of the Great Flood and Noah's Ark. According to the ancient Babylonian Epic of Gilgamesh, the gods decided to flood the world to punish its evil and destroy mankind. Only the devout Utnapishtim received divine warning in advance, built a great boat, and brought animals and plants aboard. The flood submerged most of the flat land, and all life on the ground perished; only Utnapishtim and the animals and plants on board were saved. This story can be seen as the origin of the legend of Noah's Ark. More significantly, clay tablets of the Assyrian kingdom unearthed in Middle Eastern excavations also record the Great Flood. In the histories and legends of many peoples of the world there are strikingly similar flood legends, with astonishing parallels in time, place, characters and content. Did a large-scale flood really occur in prehistory? What caused it? Was its timing consistent across the globe? Some believe that most flood legends originated with the Sumerians of Mesopotamia, relying on archaeological finds: flood deposits have been found repeatedly in Mesopotamian excavations. But these deposits do not establish the extent of the flood. An opposite opinion holds that during the last deglaciation a general transgression flooded many coasts and parts of the land, so a worldwide flood did occur.
Many ruins of civilizations submerged beneath the sea, and traces of marine incursion, have become strong arguments for this theory. But it contradicts the legends, according to which heavy rain was general during the Great Flood. Domestic scholars have also tried to demonstrate the existence of the prehistoric flood from multiple angles. Xu Xusheng argued early on for the authenticity of the flood from the standpoint of historical geography and historiography [1]. Later, Yu Weichao [2] noticed that the decline of the late Longshan cultures of eastern China, such as the Liangzhu and Shandong Longshan cultures, coincided in time with the prehistoric floods, which had a profound impact on cultural development. In recent years an important large prehistoric settlement, the Lajia site, was discovered near Lajia village in the Guanting Basin of Qinghai Province, on the upper reaches of the Yellow River. According to Xia Zhengkai and others, it was destroyed about 3,750 years ago by clustered geological disasters such as earthquakes and floods, and floods persisted for nearly 1,000 years thereafter [3]. These findings seem consistent with the legend of Dayu taming the flood, but the area affected by this flood is very small, quite unlike the prehistoric flood of legend; more research is needed for confirmation. In addition, the Taosi site of the late Longshan period, at Xiangfen in Shanxi Province, lies in the legendary area of Yao's activity (present-day southwestern Shanxi). There archaeologists discovered a huge walled settlement of more than 2.8 million square meters, dating to about 4,300 years ago. There are indications that the site was once destroyed by a flood, most likely a prehistoric one. Yet these discoveries, too, differ considerably from the legends of Dayu. None of the explanations of the flood is convincing to this day; the mystery within the legend remains to be solved.", "The Quaternary climate is characterized by cycles of cold glacial and warm interglacial periods (corresponding to the even- and odd-numbered stages of the deep-sea oxygen isotope record, respectively). The interstadial of the last glacial period (approximately 60,000-28,000 years ago) is a special stage: in the deep-sea oxygen isotope record it is assigned an odd-numbered stage (MIS 3) [1]. However, both deep-sea sediments and polar ice-core records show that temperature conditions in this period differed significantly from those of normal glacials and interglacials [1–3]. This period was also an important one for the global spread of modern humans. Because of the peculiarity of MIS 3, the academic community has carried out international and regional MIS 3 research programs, such as the Stage 3 Project in Europe [4] and Stage 3 research in East Asia [5]. Existing results show climatic anomalies during MIS 3 in western China. The temperature recorded in the Guliya ice core on the Qinghai-Tibet Plateau in early MIS 3 was comparable to that of the present interglacial and the last interglacial [6–7], and the temperature recorded at Tengger Lake in late MIS 3 was even 1.5-3.0 °C higher than at present [8].
During MIS 3, huge lakes developed in the northern Qinghai-Tibet Plateau: high lake levels appeared at Hoh Xil and Tianshuihai, a phase called the "Great Lake period" or "pan-lake period" of the Qinghai-Tibet Plateau [9–10]. Huge ancient lakes also appeared in the inland desert areas of China, such as the Tengger Desert, the Badain Jaran Desert and the arid Qaidam Basin [8, 11–12], likewise described as a great-lake period (Figure 1: the reconstructed maximum extent of the "Jilantai-Hetao" ancient great lake, demarcated by the 1080 m contour line, with an image of lake depth, modified after [13]). A new study also found that in early MIS 3 a "Jilantai-Hetao" ancient great lake of nearly 34,000 km² existed in the Jilantai Basin and the Hetao area [13] (Fig. 1). This evidence indicates an anomalously wet period, marked by the development of extensive lakes, in the inland desert areas of China and the arid northern Qinghai-Tibet Plateau, when the natural landscape was very different from today's. Although more and more evidence shows that these regions experienced anomalously humid, and possibly warm, climatic conditions during MIS 3, the reported evidence for the anomalously humid climate comes mainly from ancient lake geomorphology, and the age estimates differ considerably. Did the paleolakes of the MIS 3 great-lake period in the inland arid region form at the same time? Do loess-paleosol and desert deposits record the same humid climate? If an anomalously wet MIS 3 climate existed, what was its spatial extent, and what mechanisms produced it? Undoubtedly, the age and mechanism of the MIS 3 great lakes in the arid inland regions of western China is one of the unresolved problems in the field of climatic and environmental change.", "The Asian continent can be broadly divided, by circulation regime, into "monsoon Asia" and "westerly Asia": the former is the area controlled by the Asian summer monsoon, the latter the area dominated by the westerly circulation, mainly the mid-latitude inland arid region of Asia, the two being bounded by the modern monsoon margin. Much research has been done on climate change in the Asian monsoon region. During the Holocene, changes in effective precipitation (or precipitation) within the monsoon region were highly consistent, and the patterns of monsoon precipitation at different time scales are basically clear: on long time scales monsoon precipitation follows the pattern of solar radiation change, while on decadal-to-centennial scales it is mainly affected by factors such as solar activity [1–4].
In the mid-to-high-latitude inland arid regions of Asia influenced by the westerly circulation, instrumental data show a climatic transition (toward wetter conditions) against the background of global warming in recent decades [5]; records of the Little Ice Age there show "cold and wet" characteristics, opposite to the "cold and dry" conditions of the monsoon region [6]; and Holocene records show an out-of-phase relationship between precipitation changes in the westerly and monsoon regions: when monsoon precipitation intensified in the early Holocene, the westerly region remained dry, and when the mid-Holocene monsoon peaked and then gradually weakened, precipitation in the westerly region reached its maximum before declining in turn [7] (Figure 1: spatial contrast of Holocene climate change between the inland arid region of mid-latitude Asia, influenced by the westerly circulation, and the humid eastern region, influenced by the monsoon circulation). Studying the patterns of precipitation change in the westerly region at different time scales, and their driving mechanisms, has important theoretical and practical significance for understanding precipitation change, regional water-resource use and sustainable socio-economic development in the area controlled by the westerly circulation. At present, precipitation (humidity) records in the inland arid region of central Asia are few and cover short periods, and high-resolution paleoclimate (paleoprecipitation) reconstructions are lacking. Although an out-of-phase relationship between precipitation (humidity) changes in the inland arid region and the monsoon region during the Holocene has been found; although the inland arid region, unlike the monsoon region, shows a combination of "cold-wet" and "warm-dry" climatic characteristics; and although precipitation (humidity) in the inland arid region has not varied in step with the monsoon region over the past century, key questions remain. Is the contrast between the precipitation-change patterns of the monsoon and westerly regions during the Holocene universal? How large is the area in which the "westerly mode" [8] of climate change holds? Does a "westerly mode" of climate change also exist on longer time scales (such as the glacial-interglacial scale [9])? If so, are the drivers of the "westerly mode" (solar radiation, atmospheric and oceanic circulation, the high Asian topography) the same at different time scales? And under global warming, how will precipitation change in China's inland arid regions? These will be the focus and difficulty of future research on climate change in the inland arid regions controlled by the westerly circulation.", "The study of climate change over the past 1000 years is of great significance for understanding the mechanism of modern global warming and for the testing and prediction of climate models. Within the past millennium there were two important climatic events: the relatively warm Medieval Warm Period (MWP, generally taken as AD 800-1300) and the cold Little Ice Age (LIA, generally taken as AD 1400-1900) [1] (Figure 1). The concepts of the MWP and LIA arose from research on regional climate anomalies in northwestern Europe and the North Atlantic [2].
Later, although understanding of the regional expression, duration, amplitude and drivers of the MWP and LIA was gained at the global scale, some issues remain controversial or unstudied. Are the MWP and LIA local or global? How does the 20th-century warm period compare with the MWP? Besides temperature, what anomalies did other climatic and environmental elements (precipitation, humidity, modes of climate variability, etc.) show in these two periods? (Figure 1: Northern Hemisphere temperature anomaly curves for the past millennium reconstructed from proxy indicators, expressed relative to the AD 1961-1990 mean; modified after [3], with the Medieval Warm Period, the Little Ice Age and the warming of the past century indicated.) Because continuous meteorological observations mostly span less than 150 years, the study of climate change over the past millennium must rely on proxy indicators such as tree rings, ice cores, corals, stalagmites, lake sediments and historical documents. These proxies differ in resolution, dating precision and the climate elements they record, but taken together they can provide reliable information on climate change at larger (e.g., hemispheric) scales. Integrated temperature curves show clear imprints of the MWP and LIA in Northern Hemisphere temperature over the past millennium. Although the amplitudes of the individual reconstructions differ (Fig. 1), the fluctuations of mean Northern Hemisphere temperature before the Industrial Revolution mostly did not exceed 1 °C (at the interdecadal scale). The IPCC Fourth Assessment Report therefore stated that the mean Northern Hemisphere temperature of the second half of the 20th century was likely (confidence above 66%) the highest of any 50-year period in the past 1300 years [1]. It should be noted that the interval of maximum MWP temperature shows greater spatial heterogeneity than the global warming of the past 50 years, so large-scale averaging may "smooth away" the high-temperature signal of the MWP present in local series [4]; comparing the amplitude of recent global warming with that of the MWP at large scales therefore remains difficult. Although the MWP and LIA appear clearly in Northern Hemisphere mean temperature reconstructions, what is the situation at the regional scale? For the MWP, besides western Europe, Greenland, eastern North America and other circum-Atlantic regions, historical documents, tree rings and ice cores provide evidence of higher medieval temperatures in northern Eurasia, central Asia and individual study sites in the Southern Hemisphere [5]; historical documents and stalagmites in eastern China and tree-ring records in western China reveal the MWP, but ice-core data from western China do not clearly show it. Even in regions exhibiting the MWP, its beginning, end and interval of maximum temperature differ, as does the amplitude of warming. Moreover, as paleoclimate records accumulate, some sequences show no medieval warmth, or even a cold climate; some scholars therefore do not support the existence of a global MWP [6].
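The reconstructions compared in Figure 1 are all expressed as anomalies relative to the 1961-1990 instrumental mean, and since the choice of baseline shifts every curve up or down, the arithmetic is worth pinning down. A minimal sketch with an invented series:

```python
import numpy as np

years = np.arange(1000, 2001)
# Invented "reconstruction": slow cooling into the LIA, then rapid warming.
temps = (14.0 - 0.0004 * (years - 1000)
         + 0.8 * (years > 1900) * ((years - 1900) / 100.0))

baseline = (years >= 1961) & (years <= 1990)      # the 1961-1990 window
anomaly = temps - temps[baseline].mean()          # anomaly series

print(f"baseline mean: {temps[baseline].mean():.2f} C")
print(f"anomaly in 1000: {anomaly[0]:+.2f} C; in 2000: {anomaly[-1]:+.2f} C")
```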
For the LIA there is comparatively little controversy: evidence worldwide shows a period of low temperature before the 20th-century warm period; for example, sea-surface temperatures off the coast of West Africa were low, and ice-core oxygen isotopes from the low-latitude Qinghai-Tibet Plateau (Northern Hemisphere) and from the Andes and Antarctica (Southern Hemisphere) show cold signals [7]. In China, historical documents and natural proxies such as tree rings, ice cores and stalagmites all provide evidence of the LIA; of course, much evidence also shows warm intervals within the LIA, and regional differences in the beginning, end and severity of the cold period. Some scholars have suggested that the terms MWP and LIA be avoided in research on climate change over the past millennium, because using them as global concepts can be misleading [8]. One ultimate goal of millennium-scale climate research is to understand the dynamics of the earth's climate system on decadal-to-centennial scales. Recent research shows that the MWP and LIA were not warm and cold periods lasting uninterrupted for centuries but contained secondary cold and warm fluctuations, and regional anomalies of hydrology and precipitation have also been found [9]. Although progress has been made on the history and mechanism of climate change in the past millennium, key questions remain. Were the temperature anomalies of the MWP and LIA global? How much did they vary between regions? Were there anomalies of hydrology, precipitation (humidity), modes of climate variability and other elements during these two periods? What dynamic mechanisms produced the anomalies? These problems urgently need solution. With the acquisition of high-quality climate proxy data from different parts of the world, reconstruction of the anomalies of different climate elements during the MWP and LIA is becoming possible, and improved climate models, integrated ever more closely with proxy data [10], will eventually unravel the scientific problems of the past millennium's climate history and driving mechanisms, represented by the question "Are the MWP and LIA global or regional?"", "The hierarchy theory founded by systems theorists [1] and the theory of levels developed by philosophers [2] are the important theoretical bases for deductive research on scale problems and scale effects [3]. Applying hierarchy theory to the analysis of geographic space yields a geospatial hierarchy [4]. From the regional perspective, studying surface natural complexes, revealing the laws of regional differentiation and exploring comprehensive natural regionalization at different scales is the main research field of natural regional systems [5]. The scale effect in geoscience research is concentrated in the spatial differentiation of the natural geographic environment; in other words, the earth's surface is divided into regional systems of large, medium and small scale according to the two most basic laws of regional differentiation, zonal and azonal [6]. From combinations of natural elements with horizontal zonal distribution characteristics at the earth's surface are generated the "temperature zone", the "natural region" and the "natural zone", and from regional differences within the "natural zone" is further generated the "natural subzone
\", thereby forming a large-scale zonal structure of geographical space; based on the division of different geographical types of units (such as geographical landscape type units) at different levels within the region, the methods of system theory and cybernetics are used to analyze the The study of the interaction relationship among them can establish the mesoscale regional geographical system structure of geographical space; analyzing the geographical landscape unit from the perspective of the unity of \"function and structure\" can establish the basic scale geographical landscape structure of geographical space[4] (Table 1). Under different scale backgrounds, geospatial elements often show different spatial forms, structures and details [7]. The \"sequential division method\" and \"combination method\" in the comprehensive physical geographic zoning method in geoscience research are in essence the embryonic form of the \"top-down\" and \"bottom-up\" thinking of geoscientific spatial scale transformation[6]. Table 1 \tScale of geographic space \uf02d Structural analysis model [4] Ten key questions need to be addressed in the study of geographic scale [8]: How does spatial heterogeneity change with scale? How do rate variables change with scale in process studies? How do dominant or dominant processes vary with scale? How do process properties change with scale? How does sensitivity change with scale? How does predictability change with scale? What are sufficient conditions for simple aggregation and disaggregation for scaling transformations? How to express the scale effect of disturbance factors? Can scale transformations span multiple scales or scale domains? Does the noise component vary with scale? But the core is the scale effect and scale conversion in the process of geographical environment evolution. Scale effect refers to the phenomenon that when the spatial data is aggregated to change its amplitude, granularity (or frequency), shape and direction, the analysis results will also change accordingly. In actual research, facing the same research topic and the same research scale, different researchers may choose different observation scales, and the information on different observation scales may obtain different research results after scale conversion [9]. In 1911, Mercer et al. found that the variance between sample values decreased with the increase of the sample size, and the decrease of variance made the estimation accuracy of some sample attribute averages in a certain area improved. In ecology, when changing scales and methods of zoning, changes in scale can affect research results to a large extent. The sensitivity of landscape pattern index to scale change varies with the definition of scale; the spatial autocorrelation coefficient is more sensitive to the change of area unit, and the degree of autocorrelation of a variable in the same landscape varies greatly at different scales. The existence of scale effects has also been confirmed and described in the fields of spatial data mining, hydrology, soil science, sociology, and human geography [10]. In the study of future global climate change trends, due to the inconsistency in scale selection and conversion, the conclusions are often inconsistent. Shi Yafeng et al. 
estimated that global mean temperature will most probably rise by 1.2 °C by 2050 (lower and upper limits 0.8-1.8 °C) and by 2.5 °C by 2100 (range 1.6-3.8 °C), whereas the 2001 report of the IPCC (Intergovernmental Panel on Climate Change) gave a possible rise of 1.4-5.8 °C in global mean temperature by 2100 [10]. Scale conversion is the process of transforming data or information from one scale to another. It is necessary, but in practice its results are often unsatisfactory, and various problems and obstacles appear in research. On the one hand, local information cannot substitute for regional information; otherwise one errs by generalizing from the part. Fields such as prediction of global or regional climate trends, weather forecasting, land consolidation, environmental monitoring, crop growth and yield forecasting, major disaster assessment, and surveys of vegetation, soil types and geological structure are meaningful only with dynamic information covering wide areas; yet people usually observe and obtain such information over very small areas. The surface temperature observed with a thermometer at a meteorological station represents only a few square meters; the soil moisture measured by a hydrologist with a neutron probe represents the soil moisture of an area of less than 10 m². On the other hand, applying large-scale information or models to small regions conceals the detailed energy-flow and material-flow information of small scales. For example, if a GCM (general circulation model) is used to estimate precipitation or air temperature in a region, the output is unsatisfactory because of data errors even if the model itself is error-free; and when ASTER and ETM images are expanded to the roughly 1 km resolution of AVHRR, a great deal of image information is lost and the scale expansion has no practical significance [10]. Scale conversion may go upward or downward: some scholars use "upscaling" and "downscaling" for the two directions, others "scale expansion" and "scale contraction". Upscaling extrapolates fine-scale observations, experiments and simulation results to larger scales; it "coarse-grains" research results, as when GCMs are used to estimate regional precipitation or temperature. Downscaling, conversely, carries macro-scale observations and simulations down to the micro scale; its main task is to transform coarser spatial and temporal resolutions into finer-scale heterogeneity information, so that large-scale observations or model outputs can be applied to local areas and to local practical problems, such as how crop yields, water resources and agricultural production in an area respond to changes in large-scale factors like climate warming and rising CO₂ concentration [11]. Fundamentally, to truly standardize the scale conversion of geography, a framework system recognized by the scientific community must be established.
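Mercer's 1911 observation and the information loss of scale expansion can both be reproduced with a block-aggregation toy: average a fine-resolution field into ever coarser cells, and the between-cell variance shrinks as local detail is smoothed away. The field below is random noise standing in for any surface variable, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
fine = rng.normal(20.0, 4.0, size=(512, 512))   # e.g. soil moisture at 30 m

def upscale(field, factor):
    """Block-average (coarse-grain) a 2-D field by an integer factor."""
    n, m = field.shape
    return field.reshape(n // factor, factor,
                         m // factor, factor).mean(axis=(1, 3))

for factor in (1, 4, 16, 64):
    coarse = fine if factor == 1 else upscale(fine, factor)
    print(f"cell = {factor:3d}x native resolution -> "
          f"variance between cells = {coarse.var():7.4f}")
```

Going the other way (downscaling) cannot recover the discarded within-cell variance without additional information, which is one reason coarse model output applied directly to local questions is unsatisfactory.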
At present, downscaling and upscaling conversion methods are widely used, and the basic conditions for establishing a scale-conversion framework system are in place; Table 2 gives a preliminary scheme.

Table 2	Preliminary framework for geographic scale conversion (entries include an adaptive scale system combining the two paradigms, and methods such as wavelet variance, wavelet entropy and binary-tree transformation)

Scale problems are common problems in the objective world, and only scale-based research on spatial problems can truly reveal the objective laws governing the spatial distribution of geographical objects and phenomena. In recent years many monographs on scale issues have been published at home and abroad, such as Scaling up in Hydrology using Remote Sensing, Scale in Remote Sensing and GIS, Modeling Scale in Geographical Information Science, Scale and Geographic Inquiry: Nature, Society and Method, and Multi-scale Representation Models of Spatial Data and Their Visualization; these results have advanced the in-depth study of scale issues [9]. Because geographic information science places strict requirements on computational units, Goodchild held that scale is the most important issue in geographic information science and even proposed the concept of a "science of scale" [12]. Geographers have recognized the importance of scale in scientific research and have, consciously or not, used scale-conversion methods and techniques in their work. In reality, however, scale conversion lacks a unified and effective theoretical and methodological system, which reduces the comparability of different studies. It is therefore urgent to establish technical standards for geographic scale conversion, and imperative to create a science of scale that serves geography. In addition, advances in science and technology will greatly improve the capacity to acquire data, which can relieve the data bottleneck common in geographic information science; yet in practical application and research, choosing data sources of appropriate scale remains a difficult problem. As geographic information technology is applied ever more widely, the need to bridge the "scale gap" of geospatial data grows ever more pressing, and research in this field needs to be further strengthened.

The so-called environmental benchmark is generally taken to mean the threshold content of a harmful substance in atmospheric, water, soil or other environmental media above which long-term exposure has adverse or harmful effects on the people or organisms living there [1]. In fact, a wide range of data and studies show that an environmental benchmark is not a single maximum no-effect concentration or dose, but a multi-objective function or a range of values based on different protection objects [2]. Environmental benchmarks are the scientific basis for setting environmental standards, and the allowable dose or concentration of a pollutant stipulated in an environmental standard should in principle be less than or equal to the corresponding benchmark value. Environmental benchmarking is a complex system [3].
According to environmental elements, benchmarks can be divided into atmospheric, water and soil environmental benchmarks; according to the objects protected, into health benchmarks, biological benchmarks, ecological benchmarks and physical benchmarks; and there are also remediation benchmarks for polluted environments (Figure 1). To this day, however, many environmental experts and environmental managers in China, and even published dictionaries of environmental science, mistakenly equate environmental benchmarks with environmental quality benchmarks. This has hindered research on remediation benchmarks for polluted environments, with the result that China has still not formulated such benchmarks. Lacking them, China has no corresponding remediation standards or regulations to refer to, or to base environmental management on, when remediating polluted environments or handling environmental accident emergencies. Using environmental quality standards to judge whether remediation of a polluted environment meets requirements, or as the control guide in pollution accidents, not only violates natural laws but also creates serious conflicts with economic and social development, forcing enforcers to "practice deceit"; the purpose of environmental protection is not achieved, and the more the environment is "protected" in this way, the more problems arise and the worse the outcome. Driven by the needs of environmental management, the international community has long focused on environmental benchmark research. In recent years in particular, countries such as the United States, the Netherlands, Canada, France and Denmark have not only advanced research on environmental quality benchmarks but also made great progress on remediation benchmarks for polluted environments, providing a scientific basis and basic data for standards [4–7]. Domestically, in the late 1980s and early 1990s Wu Yanyu et al. proposed using the crop ecological effect method, the soil environmental background value method and the food hygiene standard inversion method to derive soil environmental quality benchmarks [8, 9]. To make the derived benchmarks better reflect actual environmental conditions in China, research was also carried out on soil environmental quality benchmarks under combined chromium–phenol pollution in the agricultural environment [10, 11]. In recent years domestic researchers have made preliminary studies of quality benchmarks for heavy-metal-polluted water and sediments in the Le'an River, a tributary of Poyang Lake in Jiangxi Province [12], and have begun population-based work on p,p'-DDE in water and sediments of the Bohai Bay area; based on long-term research in the Taihu Lake area, a phosphorus remediation benchmark for soil (based on paddy soil) was deduced and proposed for China for the first time [14].
Figure 1	Classification of international environmental benchmarks and their interrelationships

Generally speaking, however, the study of environmental benchmarks in China not only lags far behind that in developed countries but also falls far short of the needs of China's environmental protection. In particular, research on soil environmental quality benchmarks and sediment quality benchmarks lags far behind that on air and water quality benchmarks. One reason is that soil and sediment have long been regarded as places where domestic waste and toxic substances can be dumped and disposed of at will; this traditional view and prejudice prevents people from correctly understanding soil and sediment environmental issues. Moreover, compared with water and atmospheric pollution, the impact of soil and sediment pollution on human health is indirect and latent, so people tend to overlook soil and sediment environmental issues, which has held back the study of their environmental benchmarks. In recent years, as water and atmospheric pollution have gradually been brought under control in developed countries, the environmental pollution of soil and sediment has become increasingly exposed, and soil and sediment benchmarks have received growing attention. Research on environmental benchmarks is generally expensive and time-consuming: accurately deriving a single benchmark value requires a large amount of meticulous work over a long period, and the result remains uncertain; natural variability, combined with the limitations of the state of the art, may mean the final result cannot be expressed as a definite value. In other words, although an environmental benchmark is a concept of pure natural science and an objectively fixed value, assigning it correctly is a long-term and challenging scientific problem. The most critical step in deriving environmental benchmarks is the correct selection of ecological receptors [3, 15]. For example, selecting plants, earthworms, soil microorganisms or freshwater fish as ecological receptors means basing the benchmark values on ecotoxicological studies of plants, earthworms, soil microorganisms or freshwater fish respectively. Usually, to make the derived benchmarks accurate and objective, multiple derivation methods based on multiple ecological receptors must be used, chosen according to land use (for soil benchmarks) or water-body use function (for water benchmarks): sensitive plants or certain crops (soil benchmarks), sensitive terrestrial animals or livestock, poultry and wild vertebrates (soil benchmarks), aquatic organisms or fish (water benchmarks), soil microorganisms (soil benchmarks), and so on. Because the observed or protected objects and targets differ, the benchmark values obtained also differ. To cover as many protected objects as possible, the selected species, communities or populations should not only be typical but should number several, or even several groups.
In this case, the benchmark value obtained for a pollutant may be multiple values or a range of values. Conceptually, environmental quality benchmarks are completely different from remediation benchmarks for polluted environments, so their assignment also differs completely (Figure 2). Since environmental quality benchmarks must follow the laws of the environment's own evolution, they are mostly assigned on the basis of original geochemical background values, the most sensitive ecological indicators, and long-term low-dose chronic toxicity data. The most sensitive ecological indicator is usually the most sensitive index known, or the minimum dose measurable by modern detection techniques that causes adverse effects or abnormal physiological and biochemical reactions in test organisms, that is, the threshold dose. Remediation benchmarks for polluted environments, by contrast, take restoration of the natural ecological function of the environmental system as the goal, and are mostly assigned on the basis of the lethal dose, the median lethal dose and the threshold dose of protected organisms (70% of the population or community) in acute toxicity tests. The median lethal dose, expressed as LD50, is the dose at which 50% of test organisms are observed to die under the given pollutant exposure conditions; for human populations, LD50 is obtained by extrapolating experimental results in mammals and from observations of populations accidentally or deliberately exposed to toxicants.

Figure 2	Conceptual differences and assignment of environmental quality benchmarks versus remediation benchmarks for polluted environments

Precisely because environmental benchmarking is a long-term, challenging scientific problem, related research has always been an important direction and scientific frontier of environmental geoscience and environmental biology, with very important scientific value and practical significance for environmental science and environmental management [15–17]. At the same time, environmental benchmark data are vital scientific data, and the results of benchmark research are shared by society. To resolve the long-standing contradiction between "under-protection" and "over-protection" in China's environmental protection and management, China urgently needs comprehensive research on, and revision of, environmental benchmarks; and to ensure that this work is scientific and suited to the actual needs of China's economic and social development, benchmark research must be strengthened at the national level, especially basic research on remediation benchmarks for polluted environments.
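As a minimal sketch of how a median lethal dose is estimated in practice, the snippet below fits a two-parameter log-logistic curve to hypothetical acute-toxicity data and reads off the dose at 50% mortality. All doses and mortality fractions are invented for illustration; real derivations use standardized protocols (e.g., probit analysis) and replicated tests.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical acute-toxicity data: dose (mg/kg) and the observed fraction
# of test organisms that died at each dose. Values are illustrative only.
dose = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
mortality = np.array([0.02, 0.10, 0.30, 0.62, 0.88, 0.97])

def logistic(log_dose, log_ld50, slope):
    """Two-parameter log-logistic dose-response curve."""
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

# Fit on the log-dose axis, as is conventional for dose-response work.
params, _ = curve_fit(logistic, np.log(dose), mortality, p0=[np.log(8.0), 1.0])
ld50 = np.exp(params[0])
print(f"estimated LD50 ~ {ld50:.1f} mg/kg")  # dose killing 50% of test organisms
```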
Humans emit many kinds of pollutants into the natural environment, such as heavy metals, persistent organic pollutants, endocrine disruptors and radioactive substances. Different pollutants may affect ecosystems and human health through different microscopic mechanisms. It is particularly noteworthy that under realistic conditions multiple pollutants usually coexist in the environment ("combined pollution") and interact with one another, producing joint effects. The main types of joint effect can be broadly categorized as additive, antagonistic and synergistic [1]. So far, the interactions among pollutant effects are very poorly understood. In many cases the harmful effect of coexisting pollutants is not a simple superposition of their independent effects: it can be stronger than that superposition (synergism) or weaker (antagonism). Taking the simplest case, the joint action of two pollutants (or factors) A and B on crops, three modes of action can be defined in general: strengthening, weakening and induction (Table 1) [2]. The note to Table 1 distinguishes: the positive–negative type, in which the directions of action are opposite and there is an interaction; the same-direction type, in which the directions are the same but the magnitudes unequal, and there is an interaction; the zero-difference type, in which one of A+B and A∩B is zero and the other non-zero, and there is an interaction; and the identical type, in which A+B and A∩B have the same direction and magnitude, and there is no interaction. At present, research at home and abroad focuses mainly on the simple interactions of two or three pollutants on organisms and the related influencing factors, and is still in its infancy; for example, coexisting heavy metals (copper, lead, zinc, cadmium, etc.) interact, but the effect of this interaction on their chemical behavior is sometimes not reflected in their biological effects. Although the above classification principles can be extended to the analysis of multi-factor interactions, the actual situation is likely to be far more complicated given differences in sensitivity among test species and individual differences within a species. Because pollutants in the real environment are so varied, with enormous differences in physicochemical properties and environmental behavior, it is practically impossible to measure the joint effects of all combinations of two or more pollutants. The direction of future research is to combine the principles of interaction analysis, biostatistical methods and new detection technologies, starting from the integrated toxicological effects of combined pollution on organisms and ecosystems and their degree, and then probing the physiological, biochemical and physicochemical mechanisms that produce them; to establish models such as the quantitative structure–activity relationship (QSAR) or quantitative structure–bioavailability relationship (QSBR) to predict how interactions affect pollutant behavior under combined pollution, with the necessary experimental verification; and on this basis to evaluate more correctly the integrated environmental impact of pollutants under complex conditions. It is also necessary to establish theories or models that explain and predict the dose–response relationship of interactions (combined pollution), and in particular to stress the necessity and urgency of basic ecotoxicological research under combined-pollution conditions. Many examples show that the integrated toxicological responses to multiple pollutants (factors) differ under different dose conditions.
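The three joint-effect categories named above can be expressed as a simple decision rule against the additive expectation. This is a minimal sketch: the reference model (plain effect addition), the tolerance band and all numbers are assumptions for illustration; formal mixture toxicology more often uses concentration-addition or independent-action reference models.

```python
def classify_joint_effect(effect_a, effect_b, effect_ab, tol=0.05):
    """Classify the joint effect of two pollutants A and B relative to the
    simple additive expectation. Inputs are measured effect sizes (e.g.,
    fractional growth inhibition) for A alone, B alone, and the mixture;
    `tol` is a hypothetical tolerance band around additivity."""
    expected = effect_a + effect_b          # simple effect addition as reference
    if effect_ab > expected * (1 + tol):
        return "synergistic"                # stronger than the superposition
    if effect_ab < expected * (1 - tol):
        return "antagonistic"               # weaker than the superposition
    return "additive"

# Illustrative numbers only: 20% inhibition by A alone, 15% by B alone.
print(classify_joint_effect(0.20, 0.15, 0.45))  # -> synergistic
print(classify_joint_effect(0.20, 0.15, 0.25))  # -> antagonistic
print(classify_joint_effect(0.20, 0.15, 0.35))  # -> additive
```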
Beyond the fact that responses may differ greatly between doses, predicting those differences is one of the main difficulties in the study of pollutant interactions (combined pollution) [3]. It is also necessary to establish the correlation between the characterization of an interaction (combined pollution) and its biological and ecological effects. Such studies will help advance the evaluation of the possible ecological risks and hazards of pollutant interactions (combined pollution) from the qualitative or semi-quantitative stage to the quantitative stage. In summary, any breakthrough in the theory or methodology of pollutant interaction (combined pollution) will significantly improve our understanding of the impact of environmental pollution on ecosystems and human health.

Abundant evidence shows that environmental pollution can directly or indirectly endanger human health: respiratory diseases caused by atmospheric particles [1], cancers caused by trace toxic pollutants such as polycyclic aromatic hydrocarbons and chromium, and Minamata disease caused by methylmercury intake are typical examples. At the same time, many other factors may endanger human health, such as infectious diseases, unhealthy lifestyles, excessive work pressure, and even traffic accidents. In the actual environment, pollution and other harmful factors usually coexist, and interactions among them are likely to change the mode and degree of harm. Hence one of the challenging research problems in environmental science today is how to quantitatively separate the contribution of environmental pollution to human health effects when multiple hazards coexist and interact. Some preliminary studies exist. Among them, environmental epidemiology, which applies epidemiological theory and methods, is one of the more widely used approaches internationally. It focuses on the natural and/or pollution-related factors in the environment that endanger population health, studying the correlation and causality among environmental pollution, other factors and human health, that is, clarifying the exposure–response relationship, and providing a basis for setting environmental health (quality) standards and taking preventive measures. Examples include investigations of endemic diseases caused by natural factors (iodine-deficiency disorders, endemic fluorosis, etc.). In addition, since the 1950s pollution-induced "public nuisance" diseases have appeared one after another, and extensive environmental epidemiological investigations have been carried out to identify their etiology. The purpose of such research is not only to clarify the correlation and causal relationships between environmental pollution or other natural factors and human health, but also to reveal the latent, long-term harm of environmental pollution to human health.
Relevant research in various countries involves: ① investigating the regional, population and temporal distribution, morbidity and mortality of specific diseases or bodily harms in different regions, and continuously observing their development and patterns of change; ② investigating and testing harmful factors in the surrounding environment, including the content distribution, load level, spatio-temporal variation, speciation, transformation behavior and population exposure level of pollutants, and of certain trace elements inherent in the natural environment, in media such as the atmosphere, water, soil and food, together with the conditions under which they cause human harm and disease; ③ analyzing the survey data to determine the scope and degree of pollution and its impact on human health, that is, establishing the exposure–effect and exposure–response relationship curves; ④ on this basis, studying the threshold loads of pollutants or other factors, to provide basic parameters for formulating environmental health (quality) standards; ⑤ comprehensively analyzing the survey data to provide clues or hypotheses for the etiology of pollution-related or environmental diseases, and then establishing causality. To investigate and quantitatively apportion the factors harming human health, the environmental media and the exposed body must be treated as a closely linked whole; isolated, one-sided studies should be avoided. Moreover, the effects of environmental pollutants or other harmful factors on population health are typically low-concentration, long-term chronic hazards. The research sample must therefore be representative, the survey design comparative (exposed versus non-exposed), and the data valid. The larger the sample, the better it reflects the actual situation, but large samples demand much manpower, material resources and time. At the present stage most studies use sampling surveys to save resources and funds, so the expected results hold only within a certain range, and extrapolation of the results carries varying degrees of uncertainty. Investigations "from cause to effect" (prospective cohort studies) or "from effect to cause" (retrospective case-control studies), supplemented by experimental studies, help clarify etiology. In implementation, the criteria for identifying a specific or non-specific disease or pre-disease effect must be unified in advance, and random and systematic interference in sampling or detection methods must be excluded. One current difficulty is that under real conditions it is often hard to guarantee and control the representativeness and accuracy of survey samples or data, leading to deviations or errors in the analysis of results and their causal interpretation.
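The two study designs just named have standard summary measures. The sketch below computes the relative risk (cohort, "from cause to effect") and odds ratio (case-control, "from effect to cause") from a hypothetical 2x2 exposure-disease table; the counts and the Woolf-type confidence interval are illustrative, not data from any cited survey.

```python
import math

# Hypothetical 2x2 exposure-disease table from a cohort study:
#                 diseased   healthy
# exposed            40        960
# non-exposed        10        990
a, b, c, d = 40, 960, 10, 990

rr = (a / (a + b)) / (c / (c + d))   # relative risk ("from cause to effect")
or_ = (a * d) / (b * c)              # odds ratio (case-control designs)

# Approximate 95% confidence interval for the odds ratio (Woolf's method).
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"RR = {rr:.2f}, OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```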
It is important to point out that in the real world joint effects usually exist among multiple environmental pollutants (combined pollution) and between pollutants and other natural factors. The health effects of a non-biological factor are therefore not singular, and a given health effect is often associated with multiple harmful factors; both facts must be considered. When studying a known factor, the interference of other factors should be excluded as far as possible; when studying an unexplained health abnormality or disease, one should try to identify the leading and auxiliary factors. The gradual expansion of interdisciplinary, integrative research, the continuing development of multivariate biostatistics, the wide application of computers and the construction of dedicated mathematical models have opened a broad path toward exploring the dynamic quantitative relationships between environmental pollution, other factors and abnormal human health or public hazards, and in particular toward quantitatively apportioning the shares of environmental pollution and other factors in harm to human health.

Soil and sediment are usually the main sinks and final recipients of chemical pollutants, but when environmental conditions change, pollutants in soil/sediment may be released again and become secondary pollution sources. The occurrence, migration and transformation of chemical pollutants in soil/sediment therefore determine their environmental fate and ecological risk. Owing to the interaction between chemical pollutants and soil/sediment, although the total amount of a pollutant measured by vigorous extraction does not change, its occurrence state differentiates, so that only part of the pollutant is mobile and bioavailable; this phenomenon is called sequestration [1]. Recent studies show that as the interaction time between pollutants and soil/sediment lengthens (aging), the degree of sequestration increases [2]. Sequestration is a double-edged sword: on the one hand, once pollutants are sequestered, their mobility and their risk to the ecosystem drop markedly; on the other hand, the biodegradability and chemical reactivity of sequestered pollutants also decrease, forming persistent residues. The ecological risk of sequestered pollutants is very low, yet most environmental standards are based on the total amount of a chemical pollutant and, by ignoring sequestration, overestimate the risk of environmental pollution. Research on the sequestration and aging of chemical pollutants is thus both a frontier of environmental science and a basis for environmental management decisions. However, because soil/sediment media are so complex, scientific problems and challenges remain despite great attention from scientists and continually deepening research [3]. Sequestration arises from the interaction between pollutants and environmental media such as soil or sediment, so the study of pollutant adsorption/desorption in soil/sediment is an important means of understanding it. Soil/sediment is a highly complex multi-component medium; soil organic matter, for example, is a continuum of organic molecules with molecular weights from several hundred to several million, whose elemental composition, polarity and aromaticity vary greatly.
At present, advanced instrumental methods (elemental analysis, pore-size distribution and surface area, infrared spectroscopy, NMR, etc.) reveal only certain individual properties of organic matter; its exact structure cannot yet be given. Some scholars have proposed dividing organic matter into glassy organic matter with a rigid structure (hard carbon) and rubbery organic matter with a loose structure (soft carbon) [4, 5]. Soil minerals likewise differ in porosity, and organo-mineral complexes are more complex still. The limited understanding of soil/sediment microstructure is therefore one of the challenges in studying the microscopic mechanism of pollutant sequestration. In recent years, as research has deepened, more attention has been paid to the contribution of special soil components to sequestration, such as black carbon and lignin with aromatic structures: although their content in natural soil is low (less than 0.1%), they have a great capacity to adsorb and sequester pollutants [6, 7]. The complexity of binding sites in the medium inevitably makes the adsorption process complex. In the 1980s it was believed that hydrophobic organic pollutants were adsorbed onto soil organic matter mainly by phase partitioning, and the organic-carbon-normalized partition coefficient (Koc) was proposed, implying that adsorption is independent of the nature of the organic matter. As research progressed, however, adsorption was found to be nonlinear, irreversible and slow; it is controlled by multiple processes of different energies, involving not only partitioning but also specific interactions such as pore filling and surface adsorption. At low pollutant concentrations these specific interactions usually dominate; at high concentrations partitioning dominates. Adsorption models have accordingly developed from single-compartment to double- and multi-compartment models [8]. Research into the microscopic mechanism of adsorption and sequestration continues, but attempts to establish whether the aromatic or the aliphatic structures of organic matter play the key role in sequestration have not succeeded, because sequestration is a comprehensive, complex microstructural phenomenon; for example, studies with different model adsorbents indicate that organic matter inside micropores is the microstructure responsible for sequestration. Studies of heavy-metal adsorption mechanisms show that heavy metals are not only adsorbed on surfaces by ion exchange but can also undergo specific adsorption, through coordination and other interactions, on particle surfaces carrying the same charge; surface analysis techniques such as X-ray absorption spectroscopy can provide bonding information for heavy metals. Because the mechanisms of interaction differ, the occurrence states of pollutants in soil/sediment are highly differentiated, and they therefore display different desorption kinetics and bioavailability. Heavy metals can be divided by sequential extraction into different fractions, such as the exchangeable, carbonate-bound, organic-matter-bound and residual fractions.
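A minimal sketch of the dual-mode idea behind the two-compartment sorption models mentioned above: a linear partition term plus a Langmuir term for site-limited specific adsorption, fitted to a hypothetical isotherm. All concentrations, sorbed amounts and starting parameters are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_mode(c, kp, q_max, b):
    """Dual-mode sorption: linear partitioning into 'rubbery' organic matter
    plus a Langmuir term for site-limited, hole-filling type adsorption."""
    return kp * c + q_max * b * c / (1.0 + b * c)

# Hypothetical isotherm: aqueous concentration c (mg/L) and sorbed amount
# q (mg/kg); values are illustrative only.
c = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])
q = np.array([1.2, 4.8, 8.0, 22.0, 34.0, 95.0, 160.0])

params, _ = curve_fit(dual_mode, c, q, p0=[10.0, 20.0, 5.0], maxfev=10000)
kp, q_max, b = params
print(f"Kp = {kp:.1f} L/kg, Qmax = {q_max:.1f} mg/kg, b = {b:.2f} L/mg")
# At low c the Langmuir (specific) term dominates; at high c the linear
# partition term dominates, matching the nonlinearity described above.
```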
Many studies have addressed the relationship between heavy-metal speciation and bioavailability; because organisms differ in absorption and assimilation capacity there is no fully consistent rule, but the overall pattern is that bioavailability decreases progressively across the fractions listed above. For organic pollutants, no reliable solvent-extraction method for characterizing bioavailability has been developed, but they can be classified into readily desorbed and bioavailable, slowly desorbed and less bioavailable, and irreversibly bound, biologically unavailable fractions. Irreversible here means that the pollutant interacts with the soil/sediment so strongly that desorption cannot occur as the reverse process of adsorption. Irreversibility is attributed to pollutants becoming embedded in micropores or bound at high-energy sites, though some scholars suggest that pollutant molecules themselves can cause the collapse of adsorbent micropores. Irreversible processes are the root cause of sequestration. Different processes (physical, chemical and biological) and different target organisms nevertheless differ in their ability to access sequestered pollutants: most scholars hold that only dissolved pollutants can be used by microorganisms, yet one bacterium has been reported to utilize adsorbed naphthalene, and the efficiency of chemical oxidation of adsorbed pyrene is much greater than that of the reversibly desorbed fraction, indicating that chemical oxidants and free radicals can attack adsorbed pollutants. Moreover, sequestration in soil/sediment is a dynamic process, and the occurrence state of a pollutant changes with conditions; for example, studies have found that heavy metals can shift between the exchangeable fraction and more strongly bound fractions as conditions change. The environmental risk of sequestered pollutants is therefore still debated. Two- and three-compartment models including irreversible processes, and mild extraction methods (organic solvents such as methanol and butanol, rapid desorption methods, solid-phase (micro)extraction, semipermeable membrane sampling devices, animal digestive fluids, supercritical fluid extraction under controlled conditions, etc.), have been developed to predict the degree of sequestration, but each applies only to the results of a specific study, and there is as yet no widely recognized method for quantifying it. Regarding aging, the earliest classic report is the 1987 study by Steinberg et al. [9]. They found that the soil fumigant 1,2-dibromoethane was still present in soil 19 years after application. Although 1,2-dibromoethane freshly added to the same soil is highly volatile, water-soluble and quickly degraded by microorganisms, the aged residue in the field soil migrated extremely slowly to the atmosphere and was not biodegradable. This shows that as interaction time with the soil increases, the binding state of a pollutant changes, producing a large difference in environmental availability between newly added and field-aged pollutants. A large number of subsequent studies have confirmed that aging is a common phenomenon for both organic pollutants and heavy metals; it lowers pollutant mobility and bioavailability as well as chemical reactivity.
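The fresh-versus-aged contrast in the Steinberg example can be pictured with the two-compartment desorption kinetics referred to above. The model form (two parallel first-order pools) follows the text; the pool fractions and rate constants below are hypothetical.

```python
import numpy as np

def two_compartment_desorption(t, f_fast, k_fast, k_slow):
    """Fraction remaining sorbed at time t under a two-compartment model:
    a rapidly desorbing pool (f_fast, rate k_fast) and a slowly desorbing,
    increasingly sequestered pool (1 - f_fast, rate k_slow)."""
    return f_fast * np.exp(-k_fast * t) + (1.0 - f_fast) * np.exp(-k_slow * t)

t = np.linspace(0, 100, 6)  # hours
# Hypothetical parameters: freshly spiked vs. field-aged soil. Aging shifts
# mass into the slow pool and lowers its rate constant.
fresh = two_compartment_desorption(t, f_fast=0.8, k_fast=0.5, k_slow=0.01)
aged = two_compartment_desorption(t, f_fast=0.3, k_fast=0.5, k_slow=0.002)
for ti, f, a in zip(t, fresh, aged):
    print(f"t={ti:5.1f} h  fresh remaining={f:.2f}  aged remaining={a:.2f}")
```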
Research on the aging mechanism of pollutants is still deepening. One view holds that during aging pollutants move from low-energy to high-energy binding sites [2]; another proposes that aging reflects the collapse of adsorbent pores around pollutant molecules, which requires a relaxation time [10]. Regarding the sequestration and aging of organic pollutants in soil/sediment, the following scientific problems remain: ① quantitative characterization of soil and sediment microstructure; ② the microstructural mechanism and long-term kinetics of pollutant sequestration; ③ ecological risk assessment of sequestered pollutants; ④ the microbial degradation and chemical reaction mechanisms of sequestered pollutants; ⑤ changes in, and reactivation of, sequestered pollutants. Research on these questions is still developing.

So-called new (emerging) pollutants are not only pollutant types that have newly entered the environment; they also include chemical or biological pollutants newly found, through improved analytical methods or treatment technology, to be widespread in the environment and potentially harmful to ecosystems and to organisms, including humans [1]. The emerging pollutants currently attracting most attention include brominated flame retardants, pharmaceuticals and personal care products, perfluorinated organic compounds, drinking-water disinfection by-products, sunscreens/UV filters, engineered nanomaterials and gasoline additives [2]. These substances are closely tied to everyday production and life, have wide sources, and are produced and used in large quantities, making them potential environmental pollutants. However, little is known about their environmental behavior and ecotoxicological effects, so they have become a focus for government agencies and environmental scientists worldwide, and an important research object of environmental geoscience. Emerging pollutants are of many types: polybrominated diphenyl ethers (PBDEs) come as three commercial products, each a mixture of many congeners [3, 4]; perfluorinated organic compounds (PFCs) include perfluorooctanoic acid (PFOA), perfluorooctane sulfonic acid (PFOS) and fluorotelomer alcohols [5]; and pharmaceuticals and personal care products (PPCPs) comprise hundreds of compounds [6]. In particular, some emerging pollutants are difficult to remove in sewage treatment [6, 7] and can migrate and transform through the atmosphere and the water cycle, so they are widespread in the environment and may adversely affect ecosystems and human health. Owing to this extensive pollution and potential ecotoxicity [8, 9], the Fourth Conference of the Parties to the Stockholm Convention (POPs Convention) in 2009 added tetra- to hepta-brominated diphenyl ethers, perfluorooctane sulfonic acid and its salts, and perfluorooctane sulfonyl fluoride to the POPs list [10].
At present, Europe, the United States and other countries have launched a series of comprehensive research projects, such as the US EPA's PBDEs project plan and PPCPs research plan, aiming to establish standard analytical methods for these emerging pollutants, to study their environmental fate and ecological effects, and to conduct risk exposure assessment. China has also launched projects to promote research in this field. The environmental behavior and effects of emerging pollutants are currently a hot issue in the international environmental science community [11–13]. China's research in this field has only just started, and is essentially at the stage of establishing analytical methods, screening target compounds and studying the acute toxicity of individual pollutants. The regional pollution characteristics, environmental interface behavior, bioaccumulation capacity and biological effects of these emerging pollutants need further study, and work is urgently needed in the following areas: ① regional pollution characteristics, including the pollution of environmental multi-media and of the human body in industry-intensive and heavily polluted areas; ② migration and transformation behavior, including exchange at environmental interfaces, long-range transport, bioavailability, bioaccumulation and biomagnification; ③ ecotoxicological effects, including toxic effects at the molecular, individual, population, community and ecosystem levels. Such research will provide the basis for risk assessment of these pollutants and a theoretical foundation for national environmental management policies.

Natural surface water is a complex system composed of a truly dissolved phase, nanoparticles (NP) and suspended particles, but traditionally a 0.2 μm or 0.45 μm filter membrane has been used to separate the water phase from suspended particles, so NP are often ignored and lumped into the aqueous phase. This has long been a misunderstanding of the concept of the water phase. The characteristics of NP and their interactions with pollutants strongly influence and control the bioavailability and toxicity of pollutants in the water environment. Neglecting the characteristics and functions of NP inevitably biases our understanding of pollutant content, speciation and bioavailability in the traditional water phase, whereas the development of NP research can provide a theoretical basis and technical support for the treatment and control of pollutants in the water environment. However, because research on NP in natural water spans surface chemistry, physical chemistry, water environment science and other fields, and depends on continual breakthroughs and innovation in advanced instrumental analysis, work in this field at home and abroad is still relatively weak.

Figure 1	Sources and composition of NP in the water environment [4]

NP are particles 1–100 nm in size, falling by definition within the colloid category (1–1000 nm) [1], and so have special surface chemical properties, structures, flocculation/dispersion behavior and toxicity [2].
NP are ubiquitous in natural water environments and are heterogeneous particles of diverse origin, composition and physicochemical properties [3]. By source, NP in the water environment fall into three categories: natural, derived from human activities, and engineered [4] (Figure 1). The most common include iron/manganese oxides and hydroxides, aluminosilicates, organic carbon (including humic substances), and biological NP such as bacteria and viruses. The occurrence of NP in the water environment is closely related to human activities, and the NP-related environmental problems brought by the emergence and development of nanotechnology have attracted increasing attention. Over the past two decades great achievements have been made in exploring the "positive effects" of nanotechnology and nanomaterials, but studies of their possible "negative effects" on the environment and human health remain few [5]. Several studies have shown that NP pose multiple potential hazards to the environment and human health: these extremely small particles and nanotubes have specific surface properties that can bind and transport toxic pollutants, and they may themselves become toxic by generating reactive groups; for example, combustion-derived NP inhaled by mammals and humans cause lung lesions. Research is still lacking, however, on the analysis and monitoring of NP content in natural waters, the separation and identification of their occurrence forms, and the mechanisms of their interaction with other pollutants. In recent years some scholars at home and abroad have carried out exploratory work on NP in the water environment, with the enrichment, separation and physicochemical characterization of aquatic NP as the current focus. Centrifugation, ultrafiltration, cross-flow ultrafiltration (CFF) and field-flow fractionation (FFF) have become common methods for separating and enriching NP from water, while scanning electron microscopy, transmission electron microscopy and atomic force microscopy are often used to characterize NP size and morphology. To characterize NP properties more accurately, newer and better sample concentration methods and more selective, sensitive analytical techniques are needed. Compared with coarse particles, NP are more readily transported in the water environment and thus often act as carriers of pollutants, which matters greatly for the behavior of trace pollutants, including heavy metals and organic pollutants, in water. For example, carbon nanotubes control the behavior of some persistent organic pollutants (POPs) through adsorption/desorption of such pollutants in water [6], and their strong affinity for endocrine-disrupting chemicals (EDCs) makes them a likely sink of EDCs in water [7]. The significant differences between NP and coarse particles lie in surface area and surface structure, and in the size effects and dispersion/aggregation behavior they exhibit, which control NP activity in aqueous environments. Changes in NP size are likely to change their surface properties and hence their activity in water, producing anomalous adsorption behavior toward certain pollutants; research results in this area remain controversial [2, 8].
On the one hand, studies show that when metal oxides reach the nanometer scale, their activity and adsorption capacity for metal ions are enhanced beyond the value normalized to specific surface area; on the other hand, some studies find that the adsorption of metal ions on goethite decreases as its size decreases. Studies of the size effect of NP in aqueous environments (especially in the 1–100 nm range) are still lacking, and this effect, together with other NP properties, dominates the mechanisms of NP interaction with other pollutants. NP aggregate under different hydration conditions or during aging, and their dispersion/aggregation behavior can generally be explained by classical Derjaguin–Landau–Verwey–Overbeek (DLVO) theory. Aggregation is driven by the net attraction between particles, arising from covalent bonding, electrostatic forces, dipole and dipole–dipole interactions, van der Waals forces, hydrophobic forces and so on. Different surface charges, particle morphologies, concentrations, temperatures, pH values and ionic strengths produce different aggregation modes among NP; under different hydration conditions, for example, clays such as kaolinite and montmorillonite can aggregate edge–face, face–face or edge–edge. The physicochemical conditions of natural waters are complex and changeable, and NP composition and morphology are highly heterogeneous, making their dispersion/aggregation behavior difficult to study. Dispersion/aggregation changes the effective surface area of the particles, which further affects the behavior of other pollutants associated with NP in the water environment, and is therefore of great significance for studying the water-environment problems caused by NP and their toxicity.
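For orientation, the classical DLVO balance invoked above can be written out for the simplest geometry. This is a textbook sketch (two equal spheres in the Derjaguin approximation with a weak-overlap double-layer term), not a formula taken from the cited studies:

```latex
% Classical DLVO theory (sketch): total interaction energy of two equal
% spheres of radius a at surface separation h, as the sum of van der Waals
% attraction and electrical double-layer repulsion.
\[
  V_T(h) = V_{\mathrm{vdW}}(h) + V_{\mathrm{EDL}}(h), \qquad
  V_{\mathrm{vdW}}(h) = -\frac{A\,a}{12\,h}, \qquad
  V_{\mathrm{EDL}}(h) \approx 2\pi \varepsilon a\,\psi_0^{2}\,
      \ln\!\bigl(1 + e^{-\kappa h}\bigr)
\]
% A: Hamaker constant; epsilon: permittivity of water; psi_0: surface
% potential; kappa: inverse Debye length, set by ionic strength. Raising
% ionic strength increases kappa, compresses the double layer, lowers the
% repulsive barrier in V_T, and so promotes the aggregation behavior
% described in the text.
```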
Eutrophication refers to the excessive accumulation of nutrients such as nitrogen and phosphorus in natural water bodies, which causes rapid proliferation of algae and other plankton, lowers water transparency and dissolved oxygen, kills fish and other higher organisms in large numbers, and degrades water quality; it occurs most readily where water flow is slow, as in lakes, estuaries and bays. From the perspective of geological history, eutrophication is an inevitable stage in the natural evolution of water bodies: dissolved substances (especially nutrients), debris and biological detritus carried by rivers into relatively closed, sluggish waters (such as lakes, estuaries and bays) inevitably shift them gradually from an oligotrophic to a eutrophic state, then toward swamp and dry land, until they finally die out naturally. This natural evolution is very slow, on a time scale of a thousand years or more. Since the middle of the 20th century, however, high-intensity human activities have discharged large quantities of nitrogen, phosphorus and other nutrients into water bodies, artificially accelerating eutrophication and leading to frequent outbreaks of cyanobacterial blooms in lakes and red tides in the sea (Figure 1). Eutrophication caused by human activities has become one of the major water-environment problems facing the world in the 21st century. Water eutrophication involves a series of complex physical, chemical and biological processes. Its causes depend not only on the nitrogen and phosphorus content of the water body but are also closely related to natural and human factors such as regional hydrology and water quality, meteorology and climate, geology and landform, and pollution sources, for example the nitrogen-to-phosphorus ratio, light, temperature, precipitation, wind direction and speed, flow velocity, water-body form, external inputs and internal releases (Figure 1). Nitrogen and phosphorus are generally considered the main limiting factors of eutrophication; judged by the single nutrient factor, eutrophication may occur when total nitrogen and total phosphorus in water reach 0.2 mg/L and 0.02 mg/L respectively [1]. Many studies suggest that phosphorus may be the limiting factor of eutrophication in lakes while nitrogen may be the limiting factor in estuaries and coastal waters, but the mechanisms of nitrogen versus phosphorus limitation in different types of water remain much disputed [1–3]. The main sources of nitrogen and phosphorus in water bodies include domestic sewage discharge, direct industrial discharge, agricultural non-point-source pollution, atmospheric deposition and nutrient release from sediments (Figure 2). Point-source control is relatively easy to implement, whereas non-point-source and endogenous pollution are the difficulties and priorities of eutrophication control. Experience with shallow-lake eutrophication control at home and abroad shows that cyanobacterial blooms can still break out even when the point-source load in a watershed has been reduced to its lowest level, indicating that non-point-source and endogenous loads play a very important role. Among non-point inputs, atmospheric deposition is also significant [4–5]; it is difficult to cut off and may become an uncontrollable input in eutrophication control. In managing internal and external sources, the roles of their relative contributions, the water renewal rate and their interplay in the eutrophication process are still hard to define. Furthermore, from the perspective of aquatic ecosystem structure, algae suitable as fish food readily enter the food-chain cycle, while algae unsuitable for fish consumption mostly aggravate hypoxia through microbial decomposition. Distinguishing the individual and cumulative effects of physical, chemical and biological factors on phytoplankton yield and composition is therefore the key to understanding, predicting and ultimately solving eutrophication [3].
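The single-factor thresholds and the nitrogen-versus-phosphorus limitation question above can be combined into a simple screening rule. The TN and TP thresholds come from the text; the TN:TP ratio cutoffs and the sample values are illustrative assumptions only, since, as noted, nutrient limitation in real waters is still disputed.

```python
def eutrophication_screen(tn_mg_l, tp_mg_l):
    """Screen a sample against the single-factor thresholds quoted above
    (TN 0.2 mg/L, TP 0.02 mg/L) and give a rough limiting-nutrient hint
    from the TN:TP mass ratio. The ratio cutoffs are hypothetical."""
    at_risk = tn_mg_l >= 0.2 and tp_mg_l >= 0.02
    ratio = tn_mg_l / tp_mg_l
    if ratio > 17:
        hint = "phosphorus-limited (typical of many lakes)"
    elif ratio < 10:
        hint = "nitrogen-limited (typical of many estuaries/coasts)"
    else:
        hint = "co-limited / indeterminate"
    return at_risk, ratio, hint

print(eutrophication_screen(tn_mg_l=1.5, tp_mg_l=0.05))
# -> (True, 30.0, 'phosphorus-limited (typical of many lakes)')
```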
Figure 2	Schematic relationship among estuarine and coastal hydrodynamics, nutrient input, eutrophication (algal blooms) and hypoxia/anoxia [3]

As harmful algal blooms continue to spread in fresh waters, estuaries and coastal waters around the world, the algal toxins produced during bloom proliferation, especially the microcystins (MCs) produced by cyanobacteria, have drawn wide concern at home and abroad for their harm to the ecological environment and, through drinking water, to human health; yet the mechanisms and triggers of algal toxin production are still not well understood. Some studies show that not only environmental factors such as light, nutrient concentration and water temperature affect microcystin production, but also that the density of algae-grazing zooplankton in aquatic ecosystems is closely related to algal toxins [6]. Nutrient imbalance caused by eutrophication, together with ecological stresses such as predation, may be the main driver of toxin production [7], but mutant genes may also play an important role in controlling it [8]. Although much research has been done at home and abroad on the migration and transformation, speciation, transport processes, transformation mechanisms and bioavailability of nitrogen and phosphorus nutrients in aquatic ecosystems and their relation to environmental factors, there is still a lack of deep, systematic, long-term field observation and study of the three major interfaces (water-air, water-land and water-sediment) and of the biogeochemical cycling and dynamic mechanisms of nitrogen and phosphorus in the water column. The mechanism of water eutrophication therefore remains an unsolved scientific problem, so that the outbreak timing, spatial extent and severity of algal blooms cannot yet be accurately predicted or forewarned. In addition, against the background of global change, the impacts and feedbacks of climate change on eutrophication and algal blooms still require long-term data accumulation; for example, rising temperature favors the proliferation of some harmful algae (such as cyanobacteria) [9], and changes in rainfall strongly affect regional nutrient input loads [10].

Hyperaccumulator plants, also known as hyperaccumulators, are plants that excessively accumulate heavy metals and other elements, a concept first proposed by Brooks et al. in 1977 [1]. After Chaney proposed in 1983 the idea of using extraction by hyperaccumulator plants to remove heavy-metal pollution from soil [2], research on plant hyperaccumulation of heavy metals gradually attracted attention, becoming a research topic in fields such as environmental geoscience and environmental biology and a frontier of environmental science and engineering [3]. Relevant studies indicate that hyperaccumulator plants should generally possess four basic characteristics [3–5]: ① the critical-content characteristic, namely that the heavy-metal content of above-ground parts such as stems or leaves reaches a certain critical standard, for example 10 000
mg/kg for zinc and manganese, 1000 mg/kg for lead, arsenic, copper, nickel and cobalt, 100 mg/kg for cadmium, and 1 mg/kg for gold; ② the translocation characteristic, namely that the heavy-metal content of the shoots exceeds that of the roots; ③ the tolerance characteristic, namely strong tolerance to heavy metals, meaning that under pollution stress at a given heavy-metal concentration the above-ground biomass (the combined weight of stems, leaves and seeds) does not decline and no obvious toxicity symptoms appear; ④ the enrichment-coefficient characteristic, namely that the enrichment coefficient of the above-ground parts (the ratio of heavy-metal content in the above-ground parts to that in soil or other media) is greater than 1.0. Influential and widely studied hyperaccumulators include the zinc hyperaccumulators Thlaspi caerulescens and Sedum alfredii, the arsenic hyperaccumulator Pteris vittata, and the cadmium hyperaccumulators black nightshade (Solanum nigrum) and marigold (Tagetes patula L.). So far more than 500 species of hyperaccumulator plants have been reported. The metals they over-accumulate include elements essential to plants (such as zinc, copper and manganese), elements non-essential for plant growth (such as cobalt, nickel and vanadium), and especially elements prone to toxicity for plant growth (such as cadmium, lead, mercury and arsenic). Why can hyperaccumulator plants accumulate, in such quantity, toxic elements that normally damage ordinary plants, without their own growth being inhibited? This prompts the questions of how the hyperaccumulation trait of these plants formed and what mechanisms underlie it. A full understanding of this problem has very important theoretical and practical significance for the use of plants in remediating polluted environments and for the safe production of agricultural products [3]. Comparative studies of hyperaccumulators and related non-hyperaccumulator species have yielded some preliminary understanding of the accumulation mechanism. Studies have found that under the same soil heavy-metal levels, hyperaccumulator plants can accumulate heavy metals at up to 100 or even tens of thousands of times the level in non-hyperaccumulator plants. One aspect is activation of heavy metals in the rhizosphere, for example: ① protons secreted by roots promote heavy-metal activation; ② low-molecular-weight organic acids secreted by roots, such as acetic and succinic acid, acidify the rhizosphere environment and promote activation, while organic acids can also chelate solid-phase-bound heavy metals and enhance their solubility; ③ roots secrete metal-chelating molecules such as phytosiderophores and phytochelatins that promote the mobilization of zinc, copper and manganese in the soil; ④ specific heavy-metal reductases on the root cell membrane can reduce high-valence metal ions, thereby increasing heavy-metal solubility [6].
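The four screening criteria listed above translate directly into simple ratio checks. The sketch below applies the critical-content, translocation and enrichment criteria to hypothetical measurements; the tolerance criterion needs a growth experiment and cannot be tested from concentrations alone.

```python
def hyperaccumulator_screen(shoot_mg_kg, root_mg_kg, soil_mg_kg, critical_mg_kg):
    """Check a plant-soil sample against the screening criteria in the text.
    `critical_mg_kg` is the metal-specific critical content (e.g., 100 for
    Cd, 1000 for Pb/As/Cu/Ni/Co, 10000 for Zn/Mn)."""
    tf = shoot_mg_kg / root_mg_kg    # translocation factor (shoot/root)
    ec = shoot_mg_kg / soil_mg_kg    # enrichment coefficient (shoot/soil)
    return {
        "critical content reached": shoot_mg_kg >= critical_mg_kg,
        "translocation factor > 1": tf > 1.0,
        "enrichment coefficient > 1": ec > 1.0,
    }

# Illustrative cadmium example: shoot 150, root 90, soil 20 (all mg/kg).
print(hyperaccumulator_screen(150, 90, 20, critical_mg_kg=100))
# -> all three testable criteria satisfied
```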
Influential and widely studied hyperaccumulators include the zinc hyperaccumulators Thlaspi caerulescens and Sedum alfredii, the arsenic hyperaccumulator Pteris vittata, and the cadmium hyperaccumulators black nightshade (Solanum nigrum) and French marigold (Tagetes patula L.). So far more than 500 hyperaccumulator species have been reported. The metals they over-accumulate include elements essential to plants (such as zinc, copper and manganese), elements non-essential for plant growth (such as cobalt, nickel and vanadium), and above all elements readily toxic to plant growth (such as cadmium, lead, mercury and arsenic). Why can hyperaccumulators accumulate such large amounts of toxic elements that normally damage ordinary plants, without their own growth being inhibited? This raises the questions of how the hyperaccumulation trait formed and what mechanisms underlie it; a full understanding of the problem is undoubtedly of great theoretical and practical significance for the use of plants in the remediation of polluted environments and for the safe production of agricultural products [3]. Comparative studies of hyperaccumulators and related non-hyperaccumulator species have yielded some preliminary insights into the accumulation mechanism. They show that, at the same soil heavy metal levels, hyperaccumulators can accumulate heavy metals at 100 times or even tens of thousands of times the levels found in non-hyperaccumulators. One set of mechanisms is the activation of heavy metals in the rhizosphere: ① protons secreted by the roots promote the activation of heavy metals; ② low-molecular-weight organic acids secreted by the roots, such as acetic acid and succinic acid, acidify the rhizosphere environment and thereby promote activation, while the organic acids can also chelate heavy metals bound to the solid phase and enhance their solubility; ③ the roots secrete metal-chelating molecules such as phytosiderophores and phytochelatins, which promote the mobilization of zinc, copper and manganese in the soil; ④ specific heavy metal reductases on the root cell membrane can reduce high-valence metal ions and thereby increase heavy metal solubility [6]. Whether these mechanisms are unique to hyperaccumulators, however, is still unclear.

Fig. 1	The cadmium hyperaccumulator Celestia nigra reported internationally (left) and the cadmium hyperaccumulator Solanum nigrum discovered independently in China (right)

Under heavy metal pollution stress, ordinary plants generally retain most of the absorbed heavy metals in their roots; hyperaccumulators, by contrast, show shoot contents greater than root contents. A possible mechanism lies in the selective absorption of heavy metals by the roots [3, 7], since a hyperaccumulator usually hyperaccumulates only one or a few heavy metals and shows no such property for others. Like ordinary plants, hyperaccumulators absorb mineral nutrients, including heavy metals, through the apoplast and symplast pathways, and heavy metals enter the plant essentially as ions or metal chelates. Selective absorption may rest on specific transport proteins or channel-regulating proteins, induced by heavy metals, on the plasma membranes of root surface cells or root xylem cells; these control the entry of heavy metals from the soil into the root and their subsequent transport from the root to other parts of the plant. The strong tolerance of hyperaccumulators may arise, on the one hand, from compartmentalized distribution in their leaves, where heavy metals are confined mainly to the apoplast and the vacuoles so that they cannot damage the contents of the cell [8]; on the other hand, antioxidant enzymes such as superoxide dismutase (SOD), peroxidase (POD) and catalase (CAT) scavenge free radicals, while organic compounds such as histidine and phytochelatins (PCs) chelate and detoxify the metals, and this chelation may in turn promote further accumulation of heavy metals in the plant [6, 9]. Progress has also been made on the molecular biology of hyperaccumulation [10], including the cloning and screening of the zinc transporter gene ZNT1 and of the histidine-synthesis genes THG1, THB1 and THD1; the discovery, isolation and identification of functional genes related to hyperaccumulation and tolerance in bacteria, fungi, plants and animals; and the application of transgenic plants to the remediation of heavy-metal-contaminated soil. At present, many leading universities and research institutions around the world are working on the scientific problem of the accumulation mechanisms and causes of hyperaccumulation, including the rhizosphere processes of hyperaccumulators: Is the excessive uptake of heavy metals an active or a passive absorption process, and does membrane peroxidation occur in the roots? What special transporters and powerful molecular detoxification mechanisms do hyperaccumulators possess? Are these transport and detoxification mechanisms constitutive, or are they induced by the heavy metals themselves? These questions will undoubtedly be key research topics and trends in this field in the future.", "Nitrogen is one of the main biogenic elements, constituting basic substances of life such as proteins and nucleic acids.
Nitrogen in the environment exists mainly in three forms: nitrogen gas; organically bound nitrogen (chiefly the nitrogen-containing organic compounds in organisms, together with organic nitrogen compounds such as biological residues and excreta); and inorganic nitrogen compounds (chiefly the bioavailable nitrates, ammonium salts and ammonia). With the participation of microorganisms, these three forms of nitrogen are continuously transformed and cycled in the surface ecosystem through physical, chemical and biological processes such as nitrogen fixation, ammonification (mineralization), nitrification, assimilation and denitrification, as shown in Figure 1.

Fig. 1	Nitrogen cycle in the surface ecosystem (translated from EPA's nitrogen-cycle diagram)

Nitrogen fixation is one of the key links in the surface nitrogen cycle [1]. Although nitrogen is the main component of the atmosphere (about 78%), the nitrogen molecule is very unreactive: breaking the bond between its two atoms requires a great deal of energy, so N2 is generally difficult for organisms to use directly. It must first be combined with oxygen to form nitrate and nitrite, or with hydrogen to form ammonia, before plants can use it. There are three common pathways of fixation: atmospheric fixation, biological fixation and artificial fixation (industrial nitrogen fixation and chemical fixation based on fossil fuels). Although the flux of anthropogenic industrial fixation is now close to half that of natural fixation, natural fixation (atmospheric and biological) is still the main nutrient pathway maintaining healthy plant growth in surface ecosystems. Among these, biological fixation is the most important, supplying about 65% of the available nitrogen in the biosphere [2]. This process mainly converts atmospheric nitrogen into ammonia through fixation by rhizobia, a key feature being that the rhizobia shut down their own assimilation of ammonia (ammonium) nitrogen. This phenomenon is thought to have evolved during long-term symbiosis, but its mechanism is still unclear [3] and may be related to the control of amino acids [2]. At present, the controls on macro-scale nitrogen fixation in terrestrial ecosystems remain poorly understood, which limits effective modeling of ecosystem nitrogen load, net primary productivity and ecosystem carbon storage [4]. Studies of forest ecosystems have shown that nitrogen fixation in the forest moss layer is enhanced when the nitrogen-use efficiency of the ecosystem declines [5], but the mechanism of this enhancement is unclear [6]. In marine ecosystems, iron is generally believed to be the limiting factor for nitrogen fixation, but studies in the Atlantic Ocean suggest that phosphorus may play a more important role in controlling the growth of nitrogen-fixing organisms [7-8]. Evidently, in the various surface ecosystems, from micro to macro scales, many aspects of the nitrogen fixation process remain unexplored. In contrast to fixation, denitrification is the primary nitrogen-removal process, converting bioavailable nitrogen (nitrate, NO3−) into inert nitrogen (N2); it renews the atmospheric nitrogen pool and maintains the ecosystem nitrogen balance.
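As a toy illustration of how these transformations close into a cycle, the sketch below steps a three-pool nitrogen budget (atmospheric N2, organic N, inorganic N) forward in time with first-order transfer coefficients. The pool sizes and rate constants are invented for illustration and are not calibrated to any real ecosystem.

```python
# Toy three-pool nitrogen cycle: fixation, mineralization (ammonification),
# assimilation and denitrification treated as first-order transfers.
# Pool sizes and rate constants are purely illustrative.

pools = {"N2": 1000.0, "organic_N": 50.0, "inorganic_N": 10.0}
k = {"fixation": 0.002,        # N2 -> inorganic N (biological fixation)
     "assimilation": 0.30,     # inorganic N -> organic N (uptake by biota)
     "mineralization": 0.10,   # organic N -> inorganic N (ammonification)
     "denitrification": 0.15}  # inorganic N -> N2 (return to the atmosphere)

dt = 0.1
for step in range(1000):
    fix = k["fixation"] * pools["N2"] * dt
    assim = k["assimilation"] * pools["inorganic_N"] * dt
    miner = k["mineralization"] * pools["organic_N"] * dt
    denit = k["denitrification"] * pools["inorganic_N"] * dt
    pools["N2"] += denit - fix                      # mass is conserved:
    pools["inorganic_N"] += fix + miner - assim - denit
    pools["organic_N"] += assim - miner             # every flux appears twice

print({p: round(v, 1) for p, v in pools.items()})  # approaches a steady state
```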
Under anaerobic conditions, nitrate is reduced by microorganisms (denitrifying bacteria) to nitrogen gas (N2) and nitrous oxide (N2O). In general, the lower the oxygen partial pressure, the stronger the denitrification; but in an environment lacking nitrate, anaerobic conditions inhibit nitrification and deprive denitrification of its substrate. In the surface environment, therefore, nitrification and denitrification often occur together at the redox boundary layer: nitrification in the aerobic layer, denitrification mainly in the hypoxic and anoxic layers. Because the physical and chemical properties of the boundary layer are variable, the nitrification-denitrification process is affected and constrained by many environmental conditions, such as carbon source, nitrogen source, pH, Eh, salinity, temperature, and microbial community structure and activity. Other biogeochemical cycles may also strongly influence nitrification-denitrification, but many of the links and mechanisms are not yet understood. For example, Hulth et al. found that anoxic nitrification occurs within the sediment Mn cycle, and that the simultaneous denitrification may also be coupled with the early diagenesis of other elements [9] (Fig. 2). At the same time, in the surface redox boundary layer the nitrification of ammonia (ammonium) nitrogen is incomplete because of the low oxygen partial pressure, and the nitrite formed as an intermediate of nitrification is reduced, in the second reaction step of denitrification, to N2 and N2O; this is called the coupled nitrification-denitrification process.

Figure 2	Schematic diagram of the anoxic cycle of the nitrification-denitrification process [9]

Because of this coupling, it is quite difficult to measure nitrification and denitrification rates accurately. Techniques such as the N2 flux method, the inhibitor method and the isotope tracer method are commonly used, but the mechanism of the coupling effect is not well understood, and the reliability of the measured rates remains in dispute. In addition, under global change and with human activities increasing the total amount of reactive nitrogen in surface ecosystems, the responses, feedbacks and controls that nitrogen fixation and denitrification will exhibit also need further exploration. For example, the importance of denitrification for nitrogen removal in polluted rivers is difficult to assess because of the difficulty of measuring denitrification rates in situ and the lack of comparable data across regions [10].", "Natural resources are the general term for the various environmental elements or things that, under given historical conditions, can be exploited by humans to improve human welfare or survival capacity, possess a certain scarcity, and are subject to social constraints [1]. There are many classification schemes for natural resources; by renewability they are usually divided into two categories, non-renewable and renewable resources.
Renewable resources are natural resources whose stocks are maintained or increased by natural forces, such as water, land and biological resources; their basic feature is a certain capacity for renewal and self-recovery. The boundary between renewable and non-renewable resources is not absolute but relative [2]: the renewal capacity of renewable resources is limited, and this limit depends both on natural renewal factors and on human factors such as modes of utilization, levels of investment and technological progress, so it is dynamic [3]. Only under rational development and utilization can resources retain the capacity to recover, renew and regrow; otherwise the renewal process is blocked and stocks steadily decline, degrade and become depleted [2]. Overfishing, overhunting and habitat destruction, for example, reduce the quantity and reproductive capacity of biological resources and may eventually lead to species extinction and the loss of biodiversity; similarly, once land resources are overused they may degrade through soil erosion, salinization and desertification and lose their productive capacity. An important theoretical issue here, indeed the main or basic contradiction that resource science seeks to study and resolve, is the contradiction between labor resources and natural resources [4]. On the one hand, the scale of resource development cannot exceed the carrying capacity of renewable natural resources, or it will impair resource renewal and hence continued utilization. On the other hand, if the exploitation of renewable resources is too low, the resources cannot be fully used, and the virtuous circle between the resource economic system and the resource ecological system is likewise not realized [4]. For renewable resources, therefore, determining the critical value of development and utilization is very important: it would help human society maximize resource development without compromising sustainable use, turning resources into wealth. Although determining this critical value is of great scientific and practical significance, it is no easy task to determine it for each region according to local conditions, owing to many constraining factors, including: the heterogeneity of renewable resource distribution in space and time; the interconnection and mutual constraints among the resources of a resource system; the multiple uses of renewable resources; and the uncertainty that future climate and environmental change poses for resource renewal. One major difficulty lies in the significant spatio-temporal differences of renewable natural resources and their associated environmental elements [5]: the dominant elements affecting resource renewal differ markedly among regions and across temporal and spatial scales.
Ecosystems in different regions have their own specific laws of energy flow and material balance, so the speed, scale and integrity of resource renewal differ markedly from region to region. Because of these spatio-temporal differences, a critical development value based on the data (or experiments) of one region is not necessarily suitable for others. Another difficulty is that renewable resources such as climate, water, land and organisms exist in nature as a whole: interconnected and mutually constraining, they form a unified resource ecosystem. A change in one resource causes changes in other resources or environmental conditions, which in turn affect the whole system [4]. For example, deforestation reduces the water storage capacity of an ecosystem, which intensifies soil erosion and reduces soil fertility, in turn affecting forest recovery and productivity. The interactions and feedbacks among ecosystem factors are complex and often involve time lags and spatial continuity, so whether a resource is over-exploited is often only revealed on larger temporal and spatial scales (Figure 1). That is, to determine the development critical value of a given resource, the other related resources and environmental factors of the region must be analyzed systematically, and analysis and monitoring must be conducted at higher temporal and spatial scales.

Figure 1	Perennial rivers, springs and wetlands in the southwestern United States are important water resources for humans, flora and fauna. Since 1800, groundwater extraction in this region has caused many perennial rivers and wetlands to disappear or change, affecting the water supply of riparian ecosystems. The left and right photographs were taken at the same location (Santa Cruz River, south of Tucson, USA) in 1942 and 1989 respectively; comparing them shows that over-abstraction of groundwater lowered the water table by more than 100 feet (1 foot = 0.3048 m), leading to significant degradation of the riparian ecosystem. Source: Robert H. Webb, U.S. Geological Survey

In any one area, multiple renewable resources usually coexist, and developing certain ones inevitably has harmful or beneficial effects on the others. In regional resource development, therefore, more attention should be paid to the comprehensive benefits of utilizing renewable resources. Such benefits cannot be achieved by simple analysis of the critical values of a few individual resources; a systems approach is required to find the optimal regional combination [6]. In other words, resources form a system, and resource science must study them comprehensively across departments, disciplines, regions and time periods. The sustainable development of any single resource is inseparable from the rational development and utilization of the others; the sustainable utilization of water resources, for example, cannot be separated from the comprehensive utilization and rational development of land, biological, climate and other resources.
It is impossible to achieve the sustainable utilization of one resource without considering the others. Against the background of today's global environmental change, the changing multiple uses of renewable resources are constantly reshaping people's values in resource development [7], posing new challenges for rational development and for the determination of critical values. A further difficulty in determining the critical value of renewable resource development lies in the uncertainty of the future climate changes that will affect resource renewal. The climatic and environmental factors on which renewable resources depend are not static; they show significant interannual fluctuations and long-term trends, so the critical value must itself be determined dynamically and predictively. Under the influence of human activities, the atmospheric concentration of CO2 and other greenhouse gases has risen markedly since the industrial revolution, raising global temperature and altering the temporal and spatial patterns of precipitation [8], with significant impacts on renewable resources such as water and biological resources. Although there is broad consensus that the global climate is changing, the carbon emission scenarios used to drive projections still carry large uncertainties, and the future changes predicted by different global climate models (GCMs) differ widely [8]. This uncertainty significantly affects the determination of critical values for climate, water, biological and other renewable resources. In summary, although determining the critical value of renewable resource development and utilization is of great significance, it remains quite difficult at present. Solving the problem requires, on the basis of a deep understanding of the spatio-temporal distribution of renewable resources, detailed research on the internal mechanisms linking renewable resources to one another and to their environment, and a system-optimization perspective from which to determine the maximum development volume of the renewable resources in the system. At the same time, the various uncertainties affecting the critical value must be considered, and the probability distribution of the critical value determined once the value itself has been obtained.",
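The notion of a development critical value can be formalized, in the simplest single-stock case, by the classic maximum-sustainable-yield result for logistic renewal: for dN/dt = rN(1 − N/K) − H, the largest indefinitely sustainable harvest is H = rK/4. The sketch below (all parameter values invented) illustrates this threshold behavior; it is deliberately a one-resource idealization that ignores the spatial heterogeneity, multi-resource coupling and climate uncertainty discussed above.

```python
# Logistic-stock idealization of a "development critical value":
# dN/dt = r*N*(1 - N/K) - H.  The largest harvest H sustainable forever
# is H_msy = r*K/4, taken at stock level N = K/2.  r, K are invented.

r, K = 0.4, 10_000.0          # intrinsic growth rate (1/yr), carrying capacity
H_msy = r * K / 4.0           # maximum sustainable yield
print(f"critical harvest rate: {H_msy:.0f} units/yr at stock {K/2:.0f}")

def project(N0, H, years=200, dt=0.01):
    """Euler-integrate the harvested logistic model; return the final stock."""
    N = N0
    for _ in range(int(years / dt)):
        N += (r * N * (1 - N / K) - H) * dt
        if N <= 0:
            return 0.0        # stock collapses: harvest above the critical value
    return N

print(project(N0=K, H=0.9 * H_msy))  # below the critical value: stock persists
print(project(N0=K, H=1.1 * H_msy))  # above it: the stock is driven to collapse
```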
"Natural resources are the general term for the various environmental elements or things that, under given historical conditions, can be exploited by humans to improve human welfare or survival capacity, possess a certain scarcity, and are subject to social constraints [1]. Natural resources are scarce, and the pressures of agricultural and industrial development are consuming the earth's resources at high speed, giving rise to a series of environmental problems associated with their irrational development and utilization, so that all humanity faces the twin challenges of resource scarcity and environmental degradation. In developing natural resources it is therefore necessary to adhere to the principle of optimal utilization, so that the development of the various natural resources of a region is optimal overall; that is, over the time series of human development, social welfare is maximized with the least consumption of resources [2]. The most basic factors in optimal resource utilization are its goal (the objective function) and the limiting factors faced in realizing that goal (the constraint conditions); a toy numerical illustration is given below. Against the background of a global resource shortage, the principle of optimal utilization has been generally accepted by human society. However, owing to the complexity of resource systems, differences in the temporal and spatial distribution of natural resources, differences in resource values and ethics, and the multidisciplinary character of the problem, there are still considerable gaps in realizing optimal resource utilization in a general sense, and no general scheme for the optimal use of natural resources exists for reference. The goal of optimal utilization is always closely tied to the agent's aims and to the agent's understanding of the resource system, so optimal utilization inevitably involves questions of ethics and value; and since ethics and values are inherently complex, it is difficult to produce an optimization plan satisfactory to all parties. Utilitarianism, egalitarianism, elitism and the Pareto criterion pursue different goals [3]: classical utilitarianism holds that individual or collective behavior should maximize the welfare of society as a whole; egalitarianism measures social welfare by the welfare of the worst-off person; elitism, quite the contrary, measures it by the welfare of the best-off. Even for the concept of sustainable resource utilization, now widely accepted by human society, environmentalists and economists understand its realization differently. Environmentalists regard natural resources (such as energy, primary forests and wilderness) as special assets to be protected in order to achieve sustainable economic development. Economists disagree: they regard natural resources simply as a special kind of productive asset, with substitution possible between natural capital and other forms of capital, so that leaving future generations more oil and gas but relatively little human capital is not optimal [4]. Natural resources, and the environmental problems caused by their development, have marked temporal and spatial distribution characteristics [5]. Because of spatial heterogeneity and the spillover of pollutants from resource development, optimization schemes are usually established at specific regional and spatial scales; the objective functions and constraints selected differ, so the resulting scheme is often only a local optimum rather than a global one. At the same time, regions differ significantly, and the development and utilization of natural resources in each region has its own characteristics and laws.
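The "objective function plus constraint" framing, and the gap between local and global optima, can be illustrated with a toy allocation problem: two regions share a fixed water endowment, and planning each region against an assumed fixed share yields less total welfare than optimizing the split jointly. The regions, welfare functions and all coefficients below are invented for illustration.

```python
# Toy contrast between region-by-region and joint optimization under a
# shared water constraint (all coefficients invented for illustration).
# Welfare in each region is a concave function of the water it uses.

TOTAL_WATER = 100

def welfare_a(w):   # region A: high but saturating marginal value of water
    return 10 * w - 0.06 * w * w

def welfare_b(w):   # region B: lower marginal value of water
    return 4 * w - 0.02 * w * w

# "Local" behaviour: each region plans as if entitled to half the river.
local = welfare_a(TOTAL_WATER // 2) + welfare_b(TOTAL_WATER // 2)

# "Global" behaviour: choose the split that maximizes joint welfare.
best = max(range(TOTAL_WATER + 1),
           key=lambda w: welfare_a(w) + welfare_b(TOTAL_WATER - w))
joint = welfare_a(best) + welfare_b(TOTAL_WATER - best)

print(f"fixed 50/50 split: total welfare {local:.0f}")
print(f"joint optimum: water to A = {best}, total welfare {joint:.0f}")
```

Running this, the joint optimum allocates more water to the region where its marginal value is higher and strictly beats the fixed split, which is the sense in which a locally "fair" rule can be globally suboptimal.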
The optimal resource development plan of one region may not suit another. In addition, the size of the spatial scale significantly affects optimal utilization: small areas are relatively easy to manage, but the development of their resources is affected by the global economy, and the environmental and ecological problems brought about by small-area development can have global impacts. In this context, resource optimization research in different regions and fields has to combine regional development with global development: when formulating resource optimization plans, both domestic and international resources and markets must be considered, along with different economic and political systems and different conceptions of resources. All of these affect the establishment and realization of optimal regional resource use. When the development of natural resources and the formulation of environmental policy are dominated by local governments, those governments often pursue in isolation the maximum welfare of their own jurisdiction rather than of the whole country. Compared with the large-scale problem of maximizing national welfare, small regions can gain locational advantages by harming the interests of other regions. Because the pursuit of such local optima is widespread, many peculiar phenomena have arisen in resource development. In China, for example, excessive deforestation in the upper reaches of river basins has reduced soil and water conservation, and low upstream sewage charges have directly or indirectly aggravated flood losses and reduced the clean water available downstream. Similar phenomena exist in Europe: Sweden has concentrated some factories on its border with Norway, reducing domestic pollution treatment costs and relatively improving the competitiveness of its products; similarly, France's basin agencies (agences de bassin) near the border charge much lower sewage fees than elsewhere in the country [3]. When resource development and the corresponding environmental issues extend across many sovereign countries, solving the problem of optimal utilization and spillovers becomes very complicated. The consequences of human consumption of natural resources used to appear only at local or regional scales, but current human activities affect the earth at larger scales, through climate change, stratospheric ozone depletion, acid rain and the like. The serious consequence of the accumulation of greenhouse gases, mainly carbon dioxide, in the atmosphere is global warming and a series of related environmental problems [6]. From December 7 to 18, 2009, the 15th Conference of the Parties to the United Nations Framework Convention on Climate Change and the 5th Meeting of the Parties to the Kyoto Protocol were held in Copenhagen, the capital of Denmark, attended by representatives of more than 190 countries and regions, including more than 100 leaders of countries, regions and international organizations.
New goals such as reducing fossil energy use, improving energy efficiency and promoting renewable energy, together with the related carbon taxes and the low-carbon economy, were widely discussed there. All of these not only affect the cost of using fossil energy but are also closely tied to water resources (hydropower), land resources (production of renewable alternative energy) and biological resources (biomass energy); such new issues are constantly changing the traditional concepts and methods of optimal resource utilization. Besides spatial factors, the time scale also affects the determination of optimization schemes: different time scales for resource optimization yield quite different plans. The damage that economic construction does to resources and the environment has a lag effect that is difficult to correct through the market mechanism, and the economic and ecological benefits directly produced by natural resource development are generally in tension [2]: in periods of resource shortage, economic benefits tend to be emphasized at the expense of ecological benefits, while in periods of environmental deterioration the ecological benefits of resources are overemphasized and economic benefits neglected. Another basic factor hampering optimal utilization is insufficient understanding of the complexity and internal laws of resource systems. Optimal use in a general sense should rest on a correct understanding of the objective laws of resource systems. The various natural resources of a given area do not usually exist in isolation; interconnected, mutually constraining and interdependent, they form a complete and complex coupled system [2]. The natural resource system is an open system: matter and energy are exchanged not only within it but also, intensely, between it and its surroundings, so any change in one resource causes changes in other resources or in the whole system. For example, the rapid rise of fossil energy prices in recent years stimulated the production of fuel ethanol from grain (Figure 1). Although this relieved the supply pressure on fossil energy to some extent, it contributed substantially to the global food crisis and the rapid rise of food prices (Figure 2). Because of this complexity, a local optimum of resource utilization does not necessarily reflect the global optimum.

Figure 1	Global ethanol fuel production, 1975-2005 (data source: WRI, 2007)
Figure 2	The rising trend of world grain and major agricultural product prices, 2005-2008 (data source: FAO, 2008)

Irreversibility and uncertainty in the economic process also hinder, to a certain extent, the formulation of optimal resource development plans [7]. Once irrational use destroys a natural resource, or exerts a negative impact, the resource cannot be replaced or restored. Wherever there are economic decisions there are impacts on the natural environment, both irreversible and uncertain, so retaining the option to avoid such impacts has value.
In other words, a development project that passes the traditional cost-benefit test may not pass the more complex test that accounts for uncertain and irreversible impacts on resources and the environment. In addition, uncertainty about future technology and resource demand significantly affects optimal utilization. Regarding technology, on the one hand humans understand the negative impacts of new technologies poorly, which can encourage their premature application; on the other hand, humans cannot foresee the technologies that may be developed in the future. Excessive optimism about future technology leads to an excessively high extraction rate and the premature depletion of exhaustible resources, at great cost to future generations [8]; conversely, excessive pessimism leads to an extraction rate that is too low. Likewise, resource demand is highly uncertain: a resource little needed today may be in great demand after some time, and the interests and preferences of future generations may differ considerably from those of the present. It is precisely these uncertainties that leave us short of information when evaluating and formulating resource optimization plans. Fundamentally solving the problem of optimizing the development and utilization of natural resources will depend on the development of resource science itself and the gradual formation of unified, standardized theories and methods on the basis of interdisciplinary research, in particular a systematic understanding of the complexity of resource systems and the mechanisms underlying the heterogeneity of their temporal and spatial distribution. Moreover, optimal utilization requires holistic thinking and long-term vision: the long-term supply capacity of natural resources and the long-term carrying capacity of the environment must be fully considered, and local and overall interests, immediate and long-term interests, balanced in the course of development, so that environment and development are in harmony.", "A large number of disaster events show that humanity's increasingly extensive and profound impact on the natural world has made changes in nature complex and diverse, and disasters with a single cause are fewer and fewer. The cause of any disaster is not isolated but connected with other factors, and the occurrence of any disaster necessarily affects its surroundings, creating conditions for other things or phenomena to occur. Once a major natural disaster occurs, therefore, the interdependence and mutual constraints within natural ecosystems readily produce a chain effect, in which one disaster triggers a series of others that spread from one geographical space to another. This catastrophic succession, with its chain-like, ordered structure in geographical space, is the disaster chain.
In-depth research into the occurrence, development and evolution of disasters, understanding these chain laws and their deductive process, and carefully analyzing and mastering them help us grasp the physical and chemical field changes in a preceding disaster or "chain" process and predict the subsequent disasters, so that effective measures can be taken early to break the disaster chain or minimize its losses; this is of great theoretical and practical significance.

Figure 1	Schematic diagram of an earthquake disaster chain

Research on disaster chains is of two main types. One studies the types, causes, evolution, prediction and defense of disaster chains as a whole; the other defines and describes a specific disaster chain and evaluates the regional risk level according to the proposed chain model. The former is mainly theoretical research, the latter mostly case study. In 1987 the famous seismologist Guo Zengjian first proposed the theoretical concept and classification of disaster chains [1]. He pointed out that a disaster chain is a situation in which a series of disasters occur one after another, and distinguished four types: the causal chain (one disaster creates the triggering conditions for the next, or itself transforms into another disaster); the homologous chain (a succession of disasters all related to some factor outside themselves); the mutually exclusive chain (the occurrence of one disaster prevents another from occurring); and the coincidental chain (disasters that happen close together in time by chance). Later studies added the mutual promotion chain [2], in which two disasters promote each other, as with drought and forest fire. A complete chain process comprises a disaster-causing link, a triggering link, a damage link and a chain-breaking link. Taking the geological disaster chain as an example, the disaster-causing link consists mainly of geological factors formed by geological structures; the triggering link consists mainly of non-geological factors such as heavy rain, earthquakes, and melting ice and snow; the damage link consists of the losses formed after the disaster occurs; and the chain-breaking link refers to engineering treatment and protective measures. A disaster chain in the broad sense is defined as a composite system containing a group of disaster elements, among whose elements and subsystems there is a series of continuous reactions, the whole possessing integrity [3]. A disaster chain of any type is an irreversible dynamic process; it embodies the evolution of different types of catastrophe and the hidden order of nature, namely self-organization, coordination, overall unity and complexity [4]. The field-effect view of natural disasters and the chain concept hold that there is no direct inheritance between adjacent chain events; the occurrence of chain events is determined by the dynamic environment of the local field, the structure of the medium and the combination of many physical and chemical factors, together with the continuous evolution of the natural system. Front-chain events have immediate field effects and late-adjustment field effects that propagate backward in the form of regional links containing implicit, smaller-scale events.
The movement of every point in the field has after-effects, embodied in, and acting on, processes and areas rather than individual points; in other words, it acts on points adjacent in time and space rather than on remote ones. The superposition and evolution of the various dynamic effects on the field and the process lead, when conditions are "ripe", to a larger chain event. Areas and their boundaries that share an internal connection, and whose structural characteristics, modes of movement, dynamic environment and evolutionary process are relatively integrated, form the background of natural disaster chains and an important criterion for judging whether two events are related. In terms of case studies, many scholars have investigated the disaster chains of different disaster types in different regions. Beginning in 1972, the Chinese scientist Geng Qingguo used severe droughts for the medium-term prediction of large earthquakes by way of the drought-earthquake chain, with remarkable results [5]. Since the 1970s, interdisciplinary research on disaster chains has made pioneering progress: studies of the relationships between drought and earthquakes, ground temperature and precipitation, solar eclipses and droughts and floods, and tidal forces and the earth's atmosphere have revealed that natural disasters are markedly nonlinear, open, clustered and concurrent [6]. At the beginning of 2008, low-temperature rain, snow and freezing disasters occurred in China, causing serious economic losses in parts of the south. Zhou Jing et al. [7] analyzed and summarized the characteristics of these low-temperature snow and freezing disasters and proposed using disaster chain theory to study snow and freezing disasters in urban lifeline systems; they introduced the relevant concepts of urban lifeline system disasters, analyzed the disaster-causing factors, formation process and main chain types of freezing disaster chains, and discussed countermeasures for disaster prevention and mitigation. In May 2008 the Wenchuan Earthquake, which shocked the world, occurred in China and induced many secondary mountain disasters. Wang Chunzhen et al. [8] discussed the manifestations, disaster characteristics and causes of the earthquake-induced secondary mountain disaster chain (network), described the disaster-formation processes of the three main chains, summarized four laws of disaster formation for such chains, and briefly analyzed, from natural and human factors, the causes of the secondary mountain disaster chain network of the Wenchuan Earthquake. Han Jinliang et al. [9] expounded the definition and classification of geological disaster chains on the basis of collected data, preliminarily summarized the distribution of geological disaster chains in China, and put forward measures and suggestions for their prevention and control. The difficulty in disaster chain research lies in how to describe quantitatively the influence relationships and transmission processes between different links of the chain; without a breakthrough here, research on disaster chains cannot truly be carried forward in depth.
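One common first step toward quantifying transmission between links is to treat the chain as a directed graph with conditional trigger probabilities and propagate an initiating event by Monte Carlo simulation. In the sketch below, the graph, the event names and all probabilities are invented for illustration; a real model would have to estimate them from case data.

```python
import random

# Hypothetical causal disaster chain as a directed graph:
# edge (A -> B, p) means "given A occurs, B is triggered with probability p".
CHAIN = {"earthquake": [("landslide", 0.30), ("dam_break", 0.05)],
         "landslide":  [("river_blockage", 0.40)],
         "river_blockage": [("flood", 0.60)],
         "dam_break":  [("flood", 0.90)],
         "flood": []}

def simulate(trigger, rng):
    """One Monte Carlo realization of the chain started by `trigger`."""
    occurred, frontier = {trigger}, [trigger]
    while frontier:
        event = frontier.pop()
        for nxt, p in CHAIN[event]:
            if nxt not in occurred and rng.random() < p:
                occurred.add(nxt)
                frontier.append(nxt)
    return occurred

rng = random.Random(42)
runs = 100_000
flood_freq = sum("flood" in simulate("earthquake", rng) for _ in range(runs)) / runs
print(f"P(flood | earthquake) ~ {flood_freq:.3f}")  # chain-propagated risk
```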
Current research therefore proceeds along two main lines. In theoretical research: explore why disasters occur in succession, define the scale or degree of disaster that can trigger a chain, establish a disaster chain transmission model, and characterize the influence mechanisms of the chain at its different links. It is preliminarily believed that energy conservation, transformation and redistribution are among the reasons a chain propagates, but how energy is conserved, transformed and redistributed during the disaster process still requires theoretical deduction. In practical application: study the formation mechanism of the disaster chain in order to predict disasters, establish early-warning mechanisms, and form an overall prevention system for the disaster chain.", "Between 1984 and 2003, more than 4 billion people worldwide were affected by natural disasters, most of them in developing countries, and the economic losses caused by disasters from 1990 to 1999 were more than 15 times those of 1950-1959 [1]. The growing impact of disasters has brought human, economic and environmental losses, and the increase in disaster frequency and intensity has raised disaster risk, leaving many risk-prevention systems unable to respond in time and posing a considerable challenge to existing disaster-prevention planning, disaster management and post-disaster reconstruction. In disaster management it is very important that the government can mobilize existing resources to deal with a disaster quickly and effectively, and comprehensively assess the degree of loss in the shortest possible time. However, the disaster chain and industry chain mechanisms that determine the losses, the qualified input-output data needed to support assessment, and effective, dynamic techniques for assessing direct and indirect losses have all become difficult problems in loss assessment. The consequences of natural disasters mainly comprise casualties, economic impact and social impact, of which the economic impact is an important part. When economic and social structures were relatively simple, the impact of natural disasters on the economy consisted mainly of direct economic losses. As economic and social structures grow more complex, an industry hit by a disaster affects its upstream and downstream industries through the industrial chain, with contagious and amplifying effects, so the impact includes not only direct but also indirect economic losses. Indirect economic loss refers to the loss of production and operation during post-disaster shutdown and reconstruction. With rapid economic and social development, the linkages between industries grow ever closer, and the indirect economic losses caused by disasters grow correspondingly more serious, sometimes even affecting the stable operation of the entire economic system. In early 2008, the low-temperature rain, snow and freezing disaster in China, through power interruptions caused by the collapse of transmission lines, directly threatened the stable operation of the national economy. According to official data, the Wenchuan Earthquake caused more than 800 billion yuan of direct economic losses, and incalculable indirect losses.
Some foreign studies have also shown the importance of indirect economic losses. Taking Hurricane Katrina as an example (Figure 1), Hallegatte [2] found that when direct losses exceed 50 billion US dollars, total economic losses begin to grow faster than direct losses; the effect becomes significant when direct losses reach 100 billion dollars, with indirect losses reaching 39% of direct losses; and when direct losses exceed 200 billion dollars, indirect losses equal direct losses. A complete description of a major natural disaster should therefore include both direct and indirect economic losses. The HAZUS (Hazards United States) loss assessment system used by the U.S. federal emergency management authorities likewise treats indirect economic loss as an important element in its assessments of earthquakes, floods and typhoons. Abroad, the assessment of direct and indirect economic losses is regarded as an important indicator for describing the intensity of natural disasters, assessing economic and social vulnerability, and improving reconstruction decisions. The disaster economic loss assessment models proposed by FEMA (the Federal Emergency Management Agency) [3], the World Bank, Swiss Reinsurance Company, the United Nations, and the United Nations in Latin America all treat indirect economic losses as an important component alongside direct losses. Models applied to direct loss assessment include the steady-state Poisson model, the BP (back-propagation) neural network model and models based on the projection pursuit network algorithm; indirect loss models include the input-output model and the CGE (computable general equilibrium) model. For convenience of calculation, however, these models often assume that after a disaster the regional economy remains in equilibrium, the input-output coefficients are unchanged, there is no adaptation between production and demand, and prices are constant; these assumptions are obviously not reasonable. One of the most important challenges of our time is to reverse completely the trend of continually growing risk, yet the current ability to change the status quo is very limited, mainly because the dynamics of risk are very difficult to understand. In loss assessment this is due to two main factors. First, a major natural disaster in a given area is usually a small-probability event, making it difficult beforehand to study the probability of occurrence and roughly estimate the possible losses. Second, how should the indirect impact of a disaster on the economy be treated as a dynamic process? In reality the economy is not in equilibrium after a disaster, and producers and consumers adapt dynamically: when supply falls, consumers may reduce demand or turn to goods imported from outside the region; during reconstruction and recovery, producers' capacity gradually recovers and local demand shifts back to local purchases. In addition, changes in supply and demand cause price fluctuations, unless prices are regulated by policy. At the theoretical level, experts and scholars at home and abroad agree that indirect economic losses are an important part of the consequences of catastrophes.
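The input-output approach mentioned above can be sketched in a few lines. In the static Leontief model, gross output x satisfies x = Ax + d, so a disaster-induced change Δd in final demand propagates through the industry chain as Δx = (I − A)⁻¹Δd, and the part of Δx beyond the direct shock is the indirect loss. The three-sector coefficients below are invented, and the static model embodies exactly the fixed-coefficient, no-adaptation assumptions criticized above.

```python
import numpy as np

# Static Leontief sketch of indirect loss propagation (all numbers invented).
# A[i, j]: input required from sector i per unit of output of sector j.
A = np.array([[0.10, 0.20, 0.05],    # agriculture
              [0.15, 0.25, 0.20],    # industry
              [0.05, 0.10, 0.15]])   # services

delta_d = np.array([0.0, 100.0, 0.0])          # direct shock: industry demand falls
leontief_inverse = np.linalg.inv(np.eye(3) - A)
delta_x = leontief_inverse @ delta_d           # total output loss, all sectors

direct = delta_d.sum()
total = delta_x.sum()
print(f"direct loss {direct:.0f}, total loss {total:.1f}, "
      f"indirect loss {total - direct:.1f}")   # indirect = ripple beyond the shock
```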
At the practical level, however, when the consequences of disasters are described and counted, attention still focuses on the assessment of direct economic losses: affected population, resettled population, casualties, damaged houses, damaged farmland and factory buildings. Purely direct loss assessments greatly underestimate the total impact of natural disasters on the economic system, and strengthening the assessment of indirect economic impact is essential to reflecting fully the total economic loss caused by natural disasters. Developing such assessment models requires a database of risk events: to estimate disaster losses, the causal relationships at each level of the disaster chain are used to build a disaster database, the risk at each node of the chain is decomposed, and models are used to evaluate the probability of risk occurrence and the possible losses. Loss assessment research should aim to establish how far the current social system is from its normal state, and to what extent external intervention can restore it to its original state or bring it to a balanced state.", "When the variation of abnormal natural phenomena exceeds a certain level and brings harm to human society, it constitutes a natural disaster. Natural disasters are a manifestation of the conflict between man and nature and have both natural and social attributes. At present about 600 natural disasters occur on earth each year, causing tens of thousands of deaths and billions in economic losses [1]. The formative conditions and influencing factors of many natural disasters change periodically, and studying the periodic characteristics of natural disasters is of great significance for understanding their laws and for prediction and forecasting. The periodicity of natural disasters is the phenomenon that their scale (intensity), frequency and degree of damage change regularly and alternately with time. With the development of society and the growth of human security needs, humans need accurate cycles with which to predict the timing of natural disasters in order to cope with the harm they bring. The periodicity of natural disasters is very complex: not only do the periodic characteristics of different disasters differ greatly, but the periodic variation of the same disaster also differs significantly, and different regions show different forms, forming complex series of cycles. From ancient times to the present, people have had some understanding of the regularity and periodicity of natural disasters, but because understanding of their causes and mechanisms has remained superficial, their root causes and essence have not been established. The quantitative analysis and determination of the periodicity of natural disasters has therefore become one of the main problems of disaster science, reflected mainly in the following four aspects. First, natural disasters are not equivalent to the variation of natural factors: a natural disaster involves both a mutated disaster-causing factor and the disaster-bearing body of human society, so it is the unification, in time and space, of natural variation and the human disaster-bearing body.
Although we understand the cycles of some natural phenomena deeply, such as the monthly cycle of the moon around the earth, the annual cycle of the earth around the sun, the 11-year sunspot cycle, the 22-year magnetic cycle and the return periods of comets, the cycles of natural phenomena and of natural disasters differ in time and space, which makes natural disasters complex across different times and places. Second, natural disaster cycles include the cycles of single disasters and of disaster clusters, and because disaster chains exist, the cycle of a single disaster interacts with that of a cluster. Geographical entropy theory holds that, without the input of external energy, the change of classical entropy is directional and irreversible, always tending spontaneously toward higher entropy, that is, toward disorder and chaos. Owing to human activities and natural variation, positive geographical entropy increases, the energy accumulated for natural disasters grows, and the likelihood of disasters rises; disasters generally follow the periodic law of "energy accumulation - energy release - energy accumulation". After a period of activity releases a large amount of energy, time is needed to re-accumulate enough energy to enter the next active stage [2]. Because of disaster chains, however, a single disaster can become the fuse of clustered disasters, in turn altering the cycle of the cluster. Third, natural disaster systems are complex, a complexity determined jointly by the disaster-causing system and the disaster-bearing body. With the development of human society, the variability of natural factors caused by human activities has become more complex; at the same time, the accumulation of human wealth has increased the exposure and vulnerability of disaster-bearing bodies, which also causes cyclical changes in natural disasters. Fourth, data collection is difficult. Data on natural disaster cycles are currently obtained mainly from geological, archaeological and documentary records, but most disaster cycles cannot be found in the literature, or the recorded years are too short to reconstruct long cycles. Scholars at home and abroad have carried out related research on the periodicity of natural disasters, mostly statistical analysis based on existing records, such as the Organization of American States' maximum-likelihood estimation of the storm surge disaster cycle in the Caribbean [3]. Some scholars use geological evidence: four research teams from National Taiwan University and the California Institute of Technology, for example, showed that coastal corals can record earthquake events and greatly improve the accuracy of earthquake-cycle calculations, reducing errors of decades to centuries down to a few years or even less than one year [4]. Many scholars also calculate the frequency of natural disasters from the perspective of risk and then derive their periodicity [5].
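The statistical extraction of a dominant recurrence interval from a historical series can be sketched with a discrete Fourier periodogram. The annual event-count series below is synthetic, with an 11-year cycle planted in noise, so the recovered period is known in advance; real disaster records are far shorter and noisier, which is precisely the difficulty discussed here.

```python
import numpy as np

# Periodogram sketch for extracting a dominant disaster cycle.
# The yearly series is synthetic: an 11-year cycle plus random noise.
rng = np.random.default_rng(0)
years = np.arange(200)
counts = 5 + 2 * np.sin(2 * np.pi * years / 11) + rng.normal(0, 0.5, years.size)

detrended = counts - counts.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)      # cycles per year

dominant = freqs[1:][np.argmax(power[1:])]      # skip the zero-frequency bin
print(f"dominant period ~ {1 / dominant:.1f} years")  # close to the planted 11
```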
As one means of predicting natural disasters, the study of their periodicity faces the following difficulties arising from its complexity and uncertainty. How can the accuracy of inferred disaster cycles be improved effectively? The periods of single disasters and of disaster clusters manifest differently, some as a fixed number and others as an interval; current methods of calculating disaster activity cycles are all statistical methods based on historical data, and the limitations of data and experimental conditions leave large errors in the results. How can the long cycles of natural disasters be reconstructed? The marks left by disasters must be sought in nature to compensate for the shortage of existing records, and existing data and modern science and technology used to reconstruct long-term cycles. Can indoor simulation experiments be conducted? Starting from the disaster-causing mechanisms and processes of natural disasters, simulation experiments on disaster cycles could be carried out with advanced technical means. Through more than a century of research, scientists have gained a basic understanding of the periodicity of some natural disasters, such as droughts, earthquakes and floods, but many important questions remain unanswered: How can the periodic laws of natural disasters be obtained? Is there a definite connection between the cycles of different disasters? Answering them will require the efforts of the next generation of scientists. The purpose of studying disaster cycles is to understand the breeding and occurrence of disasters. The cycle of natural disasters is an important reference for disaster prevention and reduction and for disaster management planning; understanding it, heeding it and using it can raise the scientific level of disaster management, allow rational planning of investment in disaster prevention and mitigation, and better realize the benefits of natural disaster management. Studying the periodic characteristics of natural disasters is thus of great significance for understanding their laws and for prediction and forecasting.", "Long ago, human beings began to understand their surroundings gradually through productive activities, and then explored the relationship between human activities and the geographical environment from a philosophical point of view. During the Spring and Autumn and Warring States periods, a variety of views on man and earth appeared in China, including the "mandate of heaven", mechanical materialism and simple dialectics; thereafter, through the long feudal era, the concept of man and earth made little progress. Western modern geography was earlier in exploring, from different perspectives, the evolution of the geographical environment, its distribution laws and the inherent laws of man-land relations. Scholars represented by Ratzel in Germany, Montesquieu in France and Semple in the United States, influenced by Lamarck's and Darwin's theory of evolution, formed the school of "geographical environmental determinism". The French school of geography is represented by J. Brunhes and P. Vidal de la Blache; this school studies the man-land relationship on the basis of the concept of region, and the "possibilism" it put forward holds that the man-land relationship is relative, not absolute.
Human beings have selective power in the use of resources; they can change and adjust natural phenomena and foresee the consequences of doing so. The more humans change nature, the closer the relationship between the two becomes; this is a simple dialectical point of view [1, 2]. Mr. Wu Chuanjun first introduced the thought of man-land relationships into China and put forward the famous assertion that “the regional system of man-land relationship is the core of theoretical research in geography” [3]. To understand the man-land relationship scientifically, note that the geographical environment is the object and human society the subject. The geographical environment has broad and narrow meanings: the narrow geographical environment refers to the natural complex, while the broad geographical environment refers to the whole formed, according to certain laws, by the interweaving of inorganic and organic natural elements such as rocks, soil, water, atmosphere and organisms with human beings and their activities and with the material and ideological humanistic elements derived from them, such as politics, economy, culture, science and technology, art, customs, religious beliefs and moral values. This complex, open giant system has a certain internal structure and functional mechanism and occupies a certain geographical range in space; it constitutes the man-land relationship regional system. That is to say, \"the regional system of man-land relationship is a system of man-land relations based on a certain area of the earth's surface\"; it shows regional differences in space and develops and changes continuously in time [2]. The objective relationship between man and land is as follows. First, man depends on land: land is the material basis and space on which people live. The geographical environment often affects the regional characteristics of human activities and restricts the depth, breadth and speed of human social activities, and this influence and restriction vary with people's understanding of the land and their ability to use it; a given geographical environment can accommodate only a certain number of people and certain forms of activity, both of which change with the quality of the population. Second, in the man-land relationship people occupy the active position and have the active function: man is the master of the land, and the geographical environment is an object that humans can recognize, utilize, change and protect. Whether the relationship between man and land is harmonious or contradictory depends not on the land but on the people. In short, people must rely on the land they live on as the basis of their survival, and must consciously understand, use and change the land according to its laws, so that the land serves humanity better. This is the objective relationship between man and earth. It will grow ever closer with the continuous improvement of human science and technology and the development of productivity, and it will keep changing as the geographical environment itself changes under human influence. Research on the regional system of man-land relationships focuses on the interaction and feedback between man and nature within the man-land system.
The core goal of the research is to coordinate the man-land relationship: to understand and seek the overall optimization and comprehensive balance of global, national or regional man-land systems in terms of spatial structure, temporal process, organizational sequence, overall effect, synergy and complementarity, and thereby to provide a theoretical basis for effective regional development and regional management [1]. The earth's surface system is a natural-social complex composed of the geosphere, atmosphere, hydrosphere, biosphere and anthroposphere; it is an open giant system in which the anthroposphere and the geospheres interact [4]. Professor Qian Xuesen proposed establishing \"earth surface science\" in 1983, pointing out that the earth's surface is the part of the earth's environment most directly related to humans, extending from the bottom of the stratosphere down to the upper lithosphere, that is, to about 5-6 km below the land surface and about 4 km below the ocean; this part, whose influence on people and on social development is closest, is called the earth's surface system, or geographical system, and the parts of the earth beyond and beneath it form the environment of the earth's surface system [5]. The man-land relationship has evolved over time and has undergone a transformation from domination by nature to domination by man; the influence of modern humans on regions has expanded and strengthened unprecedentedly, and man has become a challenger who breaks the coordination of the man-land relationship. Strengthening research on the impact of human activities on man-land systems is therefore a major trend: studies of major physical geographic processes, such as global climate change, pay increasing attention to the driving mechanisms of human activities, and studies of important human and economic geographic activities attach growing importance to their interaction with resources and the environment. China is currently in a critical period of development, facing major tasks such as global climate change, economic globalization, optimization of the economic structure, protection of the ecological environment, improvement of the quality of the population, and sustainable development; research on the regional system of man-land relationships is thus urgent and practically significant. Mr. Wu Chuanjun proposed that the research contents mainly include: ① theoretical research on the formation process, structural characteristics and development trends of the man-land regional system; ③ the interaction between the two major subsystems of man and land, and the mechanisms, functions, structure, and overall regulation and countermeasures of the transfer and conversion of matter and energy between them; ④ analysis of regional population carrying capacity, the key being the prediction of increases in grain production; ⑤ dynamic simulation models of the regional man-land relationship:
according to the interaction structure and potential of the various elements in the system, such models predict the evolution trend of a specific regional system; ⑥ analysis of the regional differentiation laws and regional types of the man-land relationship; and, finally, optimization and regulation models, that is, multi-objective, multi-attribute optimization models for regional development [1].", "The spatial structure of the social economy refers to the positional relationships of socio-economic objects in space, their degree of agglomeration, and the direction and intensity of their interaction through linear infrastructure. The formation of socio-economic spatial structure is the result both of long-term socio-economic development and of the regional development policies that people implement according to a region's natural, locational, historical and economic characteristics. \"Development\" necessarily means the emergence of socio-economic objects, and the emergence of several such objects produces a certain spatial organization within a certain range. The core of the \"point-axis system\" theory is a generalization of the theoretical model of a region's \"best structure and best development\"; it is also an effective form of spatial organization. The scientific connotation of the \"point-axis system\" is as follows [1, 2]. In national and regional development, most socio-economic elements gather at \"points\" and are connected by linear infrastructure to form \"axes\". The \"points\" here are settlements and central cities of all levels, and the \"axes\" are the \"infrastructure bundles\" formed by transport and communication trunk lines and energy and water channels. An axis exerts strong economic attraction and cohesion on nearby areas: the social and economic facilities concentrated along it diffuse products, information, technology, personnel and finance to nearby areas, where these material and non-material elements combine with regional factors of production to form new productive capacity and promote social and economic development. In national and regional development, industrial agglomeration belts will inevitably form along the \"infrastructure bundles\". Because countries and regions differ in geographical foundations and in the characteristics of socio-economic development, the formation of the \"point-axis\" spatial structure differs in internal dynamics, form, grade and scale; and at different stages (levels) of socio-economic development the resulting spatial structure also has different characteristics, reflected in the degree of agglomeration and dispersion and in the interaction of socio-economic objects. With the further development of the regional social economy, \"point-axis\" inevitably develops into \"point-axis-agglomeration area\", the \"agglomeration area\" being itself a \"point\" of larger scale and stronger external force. \"Development axes\" have different structures and types, and the \"point-axis\" spatial structure system also affects regional development through spatial accessibility and differential land rent. The scientific basis of the \"point-axis system\" is as follows.
W. Christaller put forward the \"central place theory\" as early as the 1930s, deducing the urban hierarchy and its formation mechanism. Theoretical geographers such as Hägerstrand showed in the 1960s and 1970s that, analogously to the principle of spatial interaction of objects, socio-economic objects have two tendencies, spatial diffusion and spatial agglomeration [3]. The growth pole theory proposed by the French economist F. Perroux in the 1950s shows that development within a region often starts from points occupied by one or a few enterprises; growth pole theory is one of the bases of the theory of unbalanced development. These theories are the scientific basis on which the \"point-axis system\" was proposed. Why can the \"point-axis system\" spatial structure reflect the actual organization of socio-economic objects in space, and therefore be applied to socio-economic development planning? The main reason is that the organizational form produced by the spatial interaction of socio-economic objects is a scientific reflection of objective laws. Its formation mechanism has two main aspects. ① Agglomeration and diffusion are the two tendencies of the spatial movement of socio-economic objects. Since the emergence of modern geography, human geographers have studied agglomeration phenomena, agglomeration processes and the resulting spatial patterns of socio-economic objects, holding that the formation of a spatial pattern involves two tendencies, spatial agglomeration and spatial diffusion. Socio-economic objects concentrate at a region or point because of the benefits of agglomeration, but excessive agglomeration at one point inevitably produces a series of side effects, requiring a degree of decentralized or balanced development. ② Progressive diffusion leads to the formation of the \"point-axis system\". Socio-economic \"flows\" spread from one or several diffusion sources along several linear infrastructure bundles (also called \"diffusion channels\") and form new agglomerations of different intensities at different distances from the center. Because the diffusion force decays with distance, the scale of the new agglomerations also decreases as distance increases (a numerical sketch of this distance-decay follows below, after the four formation stages). As sources in adjacent areas diffuse, the diffusion channels connect with one another and become development axes; with the further development of the social economy, the axes extend and new, relatively smaller agglomeration points and development axes keep forming. This \"point-axis progressive\" diffusion can carry a region from unbalanced toward more balanced development. The \"point-axis\" spatial structure system of the social economy forms in four main stages. In the first stage, the equilibrium stage before the \"point-axis\" forms, the earth's surface is a homogeneous space: the settlements of an agricultural society, mainly villages and towns, are distributed in an \"orderly\" yet unorganized state, and this spatially unorganized state is extremely inefficient.
In the second stage, socio-economic objects begin to gather, points and axes begin to form simultaneously, parts of the region become organized, and regional resource development and the economy enter a period of rapid growth; measured against the stages of socio-economic development, this spatial structure belongs to the initial stage of industrialization. In the third stage, the main frame of the \"point-axis system\" has formed, the social economy evolves rapidly, and the spatial structure changes greatly; this characterizes the middle stage of industrialization. In the fourth stage, the \"point-axis\" spatial structure system is complete and the region enters a comprehensively organized state. Its formation is the result of the long-term self-organization of socio-economic elements as well as of scientific regional development policies and plans. From a macro perspective the spatial structure has returned to an \"equilibrium\" stage: social and economic organization is highly efficient, but population growth and economic growth, the marks of earlier development, are no longer rapid. The socio-economic spatial structures of these four stages embody the general laws of major countries and regions and are consistent with stage differences in the level and structural characteristics of socio-economic development. The \"point-axis system\" can configure and improve the spatial structure of productivity and the spatial organization of the whole social economy so that a country or region develops optimally. The main effects are: ① development in the \"point-axis system\" mode complies with the objective requirement that socio-economic objects aggregate at points in space to exert agglomeration effects; ② it gives full play to the role of central cities at all levels; ③ it achieves the best spatial combination of the production layout with linear infrastructure; ④ it facilitates convenient connections between regions and between urban and rural areas; ⑤ determining key development axes within regions at all levels across the country can better combine national and regional strategies. The \"point-axis system\" theory is a theory of the socio-economic spatial structure and its formation mechanism that is widely used in land planning at various levels in China; it is also an effective spatial model under market conditions. It therefore remains applicable to China's regional planning in the new era and is an important basis and means for spatial structure analysis and spatial planning [4]. Since the founding of New China, the country has carried out large-scale land development and regional development; in the spatial organization of social and economic development, except for the period of \"Third Front\" construction, it has basically met the requirements of the \"point-axis system\" spatial structure model.
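Returning to the progressive-diffusion mechanism (item ② above), here is the distance-decay sketch referred to earlier. It is a toy model, not part of the theory's formal statement: the source positions, strengths and the exponential decay form are all illustrative assumptions.

```python
import math

# Minimal sketch of "point-axis" progressive diffusion with distance decay.
# Source strengths, positions and the decay constant are invented for
# illustration; the theory itself only states that diffusion force decays
# with distance.

sources = {0: 100.0, 60: 40.0}   # position (km) on the axis -> source strength
decay = 0.05                      # assumed decay constant (1/km)

def intensity(x: float) -> float:
    """Diffusion intensity received at axis position x (km)."""
    return sum(s * math.exp(-decay * abs(x - pos)) for pos, s in sources.items())

# New agglomerations are smaller the farther they sit from the sources.
for x in (0, 10, 30, 60, 90):
    print(f"x = {x:3d} km  ->  intensity = {intensity(x):6.1f}")
```

The printed intensities fall off away from each source and rise again near the second one, reproducing the pattern of new agglomerations of decreasing scale along a diffusion channel.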
According to this theoretical model, analysis of factors such as the distribution of resources and economic potential across China's regions indicates that the eastern coastal zone and the Yangtze River zone should be the strategic focus of China's land development and economic layout; these two first-level axes form a \"T\" shape. The \"T\"-shaped structure strategy scientifically reflects the spatial distribution framework of China's economic development potential and achieves the best spatial combination of the productivity layout with transportation, water and land resources, urban support, and domestic and foreign markets. The construction of these two first-level axes can drive the economic and social development of the whole country.", "Spatial structure refers to the degree and form of spatial agglomeration formed by the interaction of socio-economic objects in space. What this theory summarizes is therefore not the spatial distribution law of a single element but a synthesis of almost all socio-economic objects; it is also called the \"overall location theory\". The prototype of spatial structure theory appeared in 1906 in the human-geographic \"landscape\" theory of the German geographer O. Schlüter; afterwards both Christaller and Lösch further developed the concept of \"landscape\". It was the German scholar E. von Böventer who carried out the systematic theoretical analysis and model derivation of spatial structure theory: he tried to integrate the location theories of Weber, Thünen and Lösch, holding that location theory should investigate and clarify as deeply as possible not only production and goods but also the geographical distribution of production factors, including residence, employment and mobility [1]. Because the socio-economic domain is so rich, the spatial combination of a regional social economy can be observed at different scales, levels and angles, different questions can be raised, and each can produce its own development practice and theory; but to understand and plan the spatial structure of a region scientifically, these basic issues should be studied comprehensively and anatomically. The connotation of socio-economic spatial structure includes at least six aspects [1]: ① the spatial structure composed of socio-economically \"sparse\" and \"dense\" belt-shaped or planar regions, which generally refers to the problem of unbalanced socio-economic development between large regions. This structural problem stems mainly from natural zonal differences, from positions relative to the sea and to historical political centers, and from locational relationships with international economic agglomerations; the unbalanced development of large regions has its own objective law, namely the inverted-\"U\" law relating economic growth and unbalanced development. ② The framework or skeleton of socio-economic spatial organization. For example, the growth pole model proposed by Western scholars and the \"point-axis system\" model proposed by the author reflect effective forms of socio-economic spatial organization; they are important structural models for formulating rational productivity distributions and urban development strategies in large regions, and operational models for scientifically solving the problems of \"sparse\" and \"dense\".
③ Optimal enterprise scale, city scale and central place hierarchy, whose theoretical foundations include agricultural location theory, central place theory, cluster theory and agglomeration economics. ④ The spatial structure of land use centered on urban settlements or markets. ⑤ Spatial interaction, which includes not only flows of goods, people and finance between regions but also the diffusion processes and benefits of innovation, information and technical knowledge. ⑥ The characteristics and evolution of the spatial structure at each stage of socio-economic development. This analysis of connotation shows that socio-economic spatial structure is multi-dimensional and has a definite system and structure. The spatial structure of regional development and the sectoral structure of regional development are two strategic issues of equal significance, and analysis of spatial structure characteristics reveals the features and problems of a regional socio-economic system; it is an important indicator of regional development status [2]. The spatial structure of socio-economically \"sparse\" and \"dense\" belt or planar regions, that is, the spatial pattern of socio-economic development among large regions, is an important topic of regional development research in China. As early as 1984 the author put forward the \"point-axis system\" theory of socio-economic spatial structure and proposed that China's regional economic development and land development over the following decades take the coast and the Yangtze River as the first-level axes, the two forming a \"T\"-shaped macro-structure [3]; for decades this theory and model have been widely applied in planning and practice across the country and its regions. The factors influencing the formation and development of spatial structure are also key issues in China's regional development research. Mineral resources, water resources and transportation were once important factors in the industrialization and regional development of most countries at home and abroad, and they influenced, even determined, the basic pattern of China's regional development and productivity distribution. Since reform and opening up, and especially with the structural adjustment of recent years, the role of these traditional factors has been declining. In the 1990s, international regional studies began to explore new factors in the formation and development of spatial structure, focusing on the impacts and changes that globalization, information technology and the knowledge economy bring to national and regional development; informatization and economic internationalization have become the dominant factors in high-growth regions. The process of China's reform and opening up is also a process of economic internationalization: while greatly promoting the sustained and rapid development of China's economy, economic internationalization is also significantly changing the country's regional development pattern [4].
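Item ⑤ above, spatial interaction, is commonly formalized in the regional-science literature (not specifically in this text) by a gravity-type model; the sketch below shows the basic computation with invented city \"masses\" and distances.

```python
# Minimal gravity-model sketch for spatial interaction (item 5 in the list
# above). T_ij = k * M_i * M_j / d_ij**b is a standard regional-science
# form; the "masses", distances, k and b here are invented for illustration.

masses = {"A": 8.0, "B": 3.0, "C": 1.5}      # e.g., population in millions
dist = {("A", "B"): 150.0, ("A", "C"): 300.0, ("B", "C"): 200.0}  # km

k, b = 1.0, 2.0  # scaling constant and distance-decay exponent (assumed)

def interaction(i: str, j: str) -> float:
    """Predicted interaction volume between regions i and j."""
    d = dist.get((i, j)) or dist[(j, i)]
    return k * masses[i] * masses[j] / d ** b

for pair in dist:
    print(pair, f"{interaction(*pair):.6f}")
```

Larger and closer pairs of regions interact more strongly, which is the regularity behind the flows of goods, people and finance mentioned in ⑤.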
Since the coming of the information age, with the continuing spread of globalization and of information technology and its applications, space and distance, relation and connection have been endowed with new connotations; elements such as information and knowledge have penetrated the socio-economic system, and the spatial structure now faces the constraints of new influencing factors [5, 6]. It is therefore necessary to keep tracking systematically the development trends of countries and regions and to grasp accurately the new factors and new patterns affecting regional development: to study the combined effects of China's natural foundation, economic globalization, informatization and other factors on land development and regional development at different stages, the changes and laws of unbalanced land and regional development, and the impact of national development strategies and their implementation on regional patterns. Socio-economic objects show two spatial tendencies, agglomeration and dispersion. In the coming decades in China, spatial agglomeration will be the main tendency; that is, land and space utilization needs to move in the direction of \"high density, high efficiency and economy\". As China participates further in economic globalization, the development of foreign capital and foreign trade will promote and strengthen the formation of China's \"T\"-shaped spatial pattern and foster metropolitan economic areas with a certain international competitiveness [7]. The new question is how, and to what extent, the gaps in China's regional development can be narrowed; this requires deep study of the laws of unbalanced regional development. In practice, it requires studying the development of regions of all kinds (large regions, metropolitan areas, industrial agglomeration belts and various special types of areas), including the modes and approaches for the coordinated development of resources, environment and the social economy in each major type of area (metropolitan areas, major industrial agglomeration belts, ethnic-minority frontier areas, key resource development areas, areas with severely damaged ecological environments, etc.). Based on China's natural foundation, its current socio-economic development goals and the many regional problems produced by long-term rapid growth, and following scientific principles and indicators, a functional zoning plan for China's development and governance over the next 15-20 years (with functional areas classified to the second or third level) should be proposed, determining the main functions and development principles of each major functional area and the supporting conditions for promoting the sustainable development of the different functional areas.", "The earth's surface space where human beings live is an interrelated, interacting system organized in a certain order by various elements, including human activities [1-4]. This complex system can be regarded as a functional whole [5, 6]: each part of the system forms functional areas according to the different functions it carries, and the role a part plays in the system is its \"territorial function\". In the early development of geography, research on territorial function attracted wide attention from geographers around the world. The French human geographer P. Vidal de la Blache
explored the formation of functional areas from the perspective of the man-land relationship and pioneered regional research [7]; the German geographer A. Hettner made such research concrete, studying the spatial distribution of regions carrying various functions and carrying out the earliest systematic regionalization work; H. G. Hommeyer made a further hierarchical division of functional regions, marking the first recognition of the scale-varying attributes of regional functions; and the British geographer A. J. Herbertson carried out a division of functional regions on the global scale [8]. After the start of the 20th century, geographers in the United States, the former Soviet Union, Germany, Japan and other countries further expanded the scope of research on regional functions and applied it widely in fields such as land development, urban planning, agricultural zoning and land use [9-11], and Chinese geographers have made unremitting explorations of it [12-23]; even so, difficulties remain that urgently need to be solved [24, 25]. Territorial functions have the attributes of subjective cognition, diverse composition, interaction, spatial variation and temporal evolution; these attributes indicate that the formation and evolution of territorial functions are affected by extremely complex factors and mechanisms. A region carrying a certain function is called a functional area, so the formation of territorial functions can also be regarded as the formation of functional areas. The formation of functional areas is inseparable from the spatial balance process of regional development. Spatial balance of regional development means that the per capita level of the comprehensive development state of any region tends to become roughly equal, where the comprehensive development state can be composed of economic development, social development and the ecological environment. The formation of functional areas should therefore be a positive process for realizing the spatial balance of regional development, that is, it should help narrow the gaps in per capita comprehensive development level among different functional areas; if this condition cannot be met, functional areas are difficult to realize, or are unreasonable [26]. Territorial functions change constantly in time and space, and the factors affecting their evolution have long been propositions of concern to modern geography. It is now generally believed that the main factors include the following: first, the development and growth of functional areas themselves, for example the expansion of urban scale, which changes the territorial functions of cities; second, new factors and new mechanisms of regional development, for example globalization, which has changed the territorial functions of China's coastal areas; and third, changes in development concepts and values, for example the concept of ecological civilization, which has affected the territorial functions of formerly underdeveloped areas with well-preserved ecosystems.", "Global environmental change is one of the hot issues in the world today. Since the 1990s, economic globalization has promoted the development of global productivity and accelerated the growth of the world economy.
At the same time, it has caused a series of environmental problems. With social and economic development, global environmental change, with climate change at its core, is affecting all aspects of human society extensively and profoundly and has increasingly become a major issue bearing on national security, social progress and sustainable development. The core issue of global environmental change is the increasingly serious problems of resources, environment and development faced by human beings. As research has deepened, scholars have come to realize that global environmental change is not simply a scientific issue of the natural environment system itself but a comprehensive, complex issue involving international politics, economy, society and the humanities; research on global environmental change has therefore attracted ever more participation from related disciplines, showing a trend toward integrated research on natural and human elements [1]. In response, the Earth System Science Partnership (ESSP) was established internationally, consisting of four global environmental change science programmes: the World Climate Research Programme (WCRP), the International Geosphere-Biosphere Programme (IGBP), the International Human Dimensions Programme on Global Environmental Change (IHDP), and DIVERSITAS (An International Programme of Biodiversity Science). Since the 1990s the field of global environmental change has gradually strengthened research on human factors, which is mainly related to two things: one is the IHDP itself; the other is the cultural and institutional turn in geography since the 1990s. IHDP emphasizes the intersection, penetration, synthesis and integration of nature and society and of science and policy, and its research fields are relatively broad; generally they fall into two categories, core projects and ESSP joint sustainability projects. IHDP currently has seven core programmes: Global Environmental Change and Human Security (GECHS), Industrial Transformation (IT), Land-Ocean Interactions in the Coastal Zone (LOICZ, co-sponsored with IGBP), Earth System Governance (ESG), the Global Land Project (GLP, co-sponsored with IGBP), Urbanization and Global Environmental Change (UGEC), and Integrated Risk Governance (IRG). There are four ESSP joint sustainability projects: Global Environmental Change and Food Systems (GECAFS), the Global Carbon Project (GCP), the Global Water System Project (GWSP), and Global Environmental Change and Human Health (GECHH) [2]. At the same time, after the quantitative revolution of the 1950s and 1960s, the political economy school of the 1970s and the new regionalism of the 1980s, the field of geography underwent a cultural and institutional turn in the 1990s [3, 4]. People began to reflect on the model of social development and gradually realized that non-economic factors (society, culture, institutions, etc.), especially cultural factors, play an important role in the dynamic mechanisms and spatial characteristics of economic activities.
Faced with a series of environmental and social problems, people began to hope to offset various negative effects by giving full play to the role of culture; since the 1990s, therefore, cultural and institutional factors have become an important research direction and hot spot in Western geography. Generally speaking, their modes and mechanisms of action include the following aspects. ① Emphasis on re-understanding the \"economy\". The economy itself is increasingly understood as a semantic, discursive phenomenon produced by the \"expertise\" created by economists; the \"economy\" is no longer an objective fact but a rhetoric, and the economic system is open and socially embedded. ② Emphasis on the inseparability of the economy from culture and institutions, that is, the embeddedness of the economy. Scholars study comprehensively how cultural and institutional factors affect the existence of economic organizations and environmental evolution, holding that economic development has its own cultural foundation and cultural process. All economic behavior is also social behavior: economic processes, individual motivations and the like must be understood within broader socio-economic and political rules, processes and traditions, which may be formal or informal. Cultural traditions, consumption patterns and lifestyles have an important impact on regional economic development, and different types of culture form different socio-economic systems and environmental spaces; for example, participants in economic activities exhibit different behavioral characteristics according to gender, race, class and cultural difference, and the institutional environment likewise produces different economic behaviors. ③ Emphasis on cultural and institutional networks. In regional development and environmental change, attention has shifted from the institutional arrangements of a single location or organization to comprehensive institutional networks, and the concept of \"institutional thickness\" has been proposed. It holds, first, that in regional development and environmental change there are many kinds of subjects and institutions, including families, enterprises, governments, business associations, financial institutions, development agencies, trade unions, research and innovation centers and resource groups, which together provide the basis for localized or common-practice activities in social networks; second, that some subjects and institutions maintain organic connections of cooperation and exchange, producing a high degree of interaction that promotes the generation of knowledge and the formation of innovation; and third, that institutions share a strong sense of locality, that is, all subjects form a common consciousness around regional socio-economic development goals, regional environmental change, or specific agendas and projects. ④ Emphasis on the governance of the regional economy. Scholars have shifted from emphasizing institutional forms and structures to emphasizing institutional processes, stressing the co-evolution of culture, institutions, economy and environment in order to understand the institutional dynamics of regional development; economic geographers study not only formal institutions but also informal institutions, including networks of relationships, culture and customs.
In this process, scholars have studied how the institutional environment promotes technological innovation and how the regional cultural-institutional environment (milieu) promotes the development of industrial districts, thereby revealing the \"institutional space\" of technological innovation and diffusion [3-8]. Cultural and institutional factors and the ideas above are crucial to the study of regional sustainable development as well as of global environmental change. Of course, culture and institutions are rich concepts whose connotations need further research and discussion, and research on informal institutions in particular needs to be strengthened; in addition, the relationship between regional culture and institutions and regional regulation, as well as the formation processes of regional cultures and institutions, also needs further exploration.", "The development of today's world is dominated by two interrelated trends, globalization and informatization. From the perspective of geography, informatization can be understood as the significant reduction in the time and space barriers to the transmission of information and knowledge brought about by the widespread application of information technology [1]. That is to say, wherever the information infrastructure reaches, the availability of information and knowledge converges, and the law of spatial distance friction loses its effect to a certain extent. This historic change has greatly promoted the global exchange of economy, culture and consumption; to a certain extent, information technology is one of the most powerful forces shaping the socio-economic system of this era. In recent years, the astonishing progress and wide application of information technology have had a huge impact on social and economic development [2]: information technology has shaken the transaction modes of the traditional economy, changed people's consumption patterns and spatial cognition, and accelerated the process of knowledge innovation. In this situation, information technology is becoming an increasingly important location factor [1]. What spatial impacts this new location factor will bring, and through what mechanisms, are important questions for contemporary geographers: they matter both for understanding the core characteristics of spatial evolution in our era and as factors to be considered in formulating future regional policies. Since the 1990s, the regional spatial reorganization caused by information technology has attracted intense attention from scholars of many disciplines and has sparked debate over the impact of information technology. Geographers pay most attention to its spatial impact, and the core of the debate is how to understand the regional spatial changes that information technology brings. The first question is: does information technology narrow or widen regional disparities? Because information technology lets the transmission of information and knowledge break through, to some extent, the law of spatial distance friction, some scholars believe it can narrow the development gap between developed and underdeveloped regions.
However, more scholars do not accept this view and worry about newly emerging forms of regional differentiation such as the \"digital divide\" and \"digital differentiation\". Because information infrastructure requires heavy investment, and users must possess certain knowledge and be able to afford the relatively high costs, there are huge spatial and social differences in the penetration of information technology and its facilities: backward areas and poor people are isolated from the process of informatization, which inevitably increases the regional differentiation of social and economic development. This phenomenon exists at the global, national and even regional levels. The second question is: does information technology promote agglomeration or diffusion? As early as a century ago, Marshall pointed out that every reduction in the cost of transport and communication changes the forces that localize industry and strengthens the \"centrifugal force\" of the industrial layout [3]. In the past decade or so, many scholars have emphasized the effect of spatial proximity on enterprise development and agglomeration, holding that a strong \"centripetal force\" also exists. Their core point is that although information technology has made the exchange of information between people more convenient than ever, it still cannot replace face-to-face communication, and it is the need for face-to-face communication that makes gathering necessary. Yet information technology has steadily loosened the constraints of space and distance on social, economic and cultural life, and in particular the global diffusion of economic activities cannot be ignored. Scholars therefore conclude that under the new information technology \"centrifugal force\" and \"centripetal force\" coexist, leading to the simultaneous localization and decentralization of economic development [4, 5]. They further relate \"gathering\" and \"scattering\" to the type of economic activity and to the product life cycle: economic activities and innovative activities that rely on tacit knowledge tend to agglomerate, while routinized economic activities whose knowledge is embedded in technical systems tend to disperse; likewise, new economic activities tend to agglomerate, while mature, routinized ones tend to disperse. They believe that under the influence of information technology the general trend of socio-economic activities is decentralization, but with concentration inside the dispersion: what information technology causes is \"dispersed concentration\". (Figure 1: analysis framework of the spatial impact of information technology [6].) Generally speaking, the regional spatial reorganization driven by information technology is a complicated process, and controversies and opinions are diverse; the judgment of \"gathering\" versus \"scattering\" depends on the spatial scope of the research [6], and agglomeration observed between regions may appear as dispersion within urban areas.
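A toy calculation with invented numbers shows how the same data can read as concentration at one scale and dispersion at another, which is the scale-dependence point just made:

```python
# Toy illustration (invented data): the same economy can show
# concentration between regions and dispersion within a region.

# Firms counted by (region, city), for two years.
data = {
    2000: {("East", "CoreCity"): 60, ("East", "Suburbs"): 20, ("West", "All"): 40},
    2020: {("East", "CoreCity"): 70, ("East", "Suburbs"): 90, ("West", "All"): 40},
}

for year, counts in data.items():
    total = sum(counts.values())
    east = sum(v for (r, _), v in counts.items() if r == "East")
    core = counts[("East", "CoreCity")]
    print(f"{year}: East share of nation = {east/total:.0%}, "
          f"CoreCity share of East = {core/east:.0%}")
```

Between regions, concentration rises (the East's national share grows from 67% to 80%), while within the East the core city's share falls from 75% to 44%: \"dispersed concentration\" in miniature.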
At the regional level, scholars have summarized three modes of regional spatial reorganization caused by information technology: ① the \"back office\" mode, in which office automation based on modern information technology lets large enterprises complete management and clerical work over computer networks, so that such jobs can proliferate anywhere with good information infrastructure and low labor and rent costs; ② the \"telework\" mode, in which employees work from home through modern information facilities; ③ the \"teleport\" mode, a high-technology office area or office building that provides advanced communication facilities (especially network facilities) so that small and medium-sized enterprises can share them [7]. At the city level, information technology promotes the emergence of a new type of urban spatial organization based on network relationships of telecommunications and physical facilities: the network city. Scholars have also studied the impact of information technology on traditional cities, pointing out four major effects on urban development: collaboration, substitution, derivation and enhancement [2]. In the regional spatial restructuring brought about by information technology, the business and production activities of enterprises disperse to small cities or to the suburbs of large cities, while control and management activities tend to congregate in core cities; this reinforces the urban hierarchy and the dual \"core-periphery\" structure. The specific impact of information technology on regional spatial structure is related to the changes it causes in working methods and in enterprise organization and management. On the one hand, information technology enables enterprises to reduce transaction costs and raise productivity, fostering flexible production models [8]; on the other hand, it lets enterprises share through networks resources that are scarce for any individual enterprise, such as technology, market information and experts, further reducing transaction costs and improving flexibility and rapid-response capability. In addition, because the development of information technology reduces the spatial friction of transmitting knowledge and information, the spatial range of the connections of small and medium-sized enterprises keeps growing. Scholars have also tried to uncover the mechanism of the regional spatial reorganization caused by information technology; Liu Weidong and others argue that \"time cost\" is its core mechanism [6, 9]. The transformation of enterprise business operation caused by information technology leads to particular spatial outcomes; in particular, ever-shorter product life cycles and mass customization are likely to reshape the spatial organization of firms. In the new era characterized by globalization and informatization, the lowest monetary production cost may no longer guarantee victory in market competition: as product life cycles grow shorter and shorter, the timing of new product launches becomes critical to business success. The widespread use of information technology has therefore made \"time cost\" play an increasingly important role in the spatial organization of enterprises.
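To make the \"time cost\" argument concrete, the sketch below (all figures invented, not data from the cited studies) compares two candidate plant locations when revenue decays with time-to-market; the cheaper but slower site can lose to the faster one.

```python
# Toy "time cost" comparison (all numbers invented): a location with
# higher production cost but shorter time-to-market can win once the
# value lost per week of delay is accounted for.

candidates = {
    # site: (unit production cost, weeks of delivery/launch delay)
    "low-cost inland site": (8.0, 6),
    "higher-cost coastal site": (10.0, 1),
}

unit_price = 20.0            # assumed selling price per unit
value_decay_per_week = 0.04  # assumed revenue fraction lost per week of delay

for site, (cost, weeks) in candidates.items():
    effective_price = unit_price * (1 - value_decay_per_week * weeks)
    margin = effective_price - cost
    print(f"{site}: margin per unit = {margin:.2f}")
```

Under these assumptions the higher-cost coastal site yields the larger margin per unit, illustrating why minimizing monetary cost alone no longer guarantees competitiveness.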
Generally speaking, the regional spatial reorganization caused by information technology, that is, whether information technology promotes agglomeration or diffusion, remains much debated in geography. There is a time lag between the application of information technology and the regional spatial transformation it causes, and empirical studies of its impact are still too few to support a convincing conclusion. At the same time, the reorganization is a complicated process whose trends can be observed in different sectors and at different spatial levels, and the conclusions of research so far depend greatly on the spatial levels and sectors studied. Much remains to be studied, and many issues still need more empirical verification.", "The tradition of regional culture research can be traced back to the ancient Greek geographers Ptolemy and Strabo, and ancient geographical literature covers geo-cultural regions at different scales; taking China as an example, these include spaces of different scales such as Lingnan culture, Huaxia culture and East Asian culture. Regional culture is one of the important components of regional geography: the regional geography of Alfred Hettner and Richard Hartshorne, and the regional studies of Vidalian geography, record the cultures of regions large and small. In the late 19th and early 20th centuries, the rise of geographical environmental determinism let the study of regional culture break away from the chorographic tradition, that is, it began to explore the geographical causes of cultural phenomena [1]. But the appearance of possibilism interrupted this causal analysis, and the delineation of cultural regions has always outpaced the exploration of cultural causality. In the 1920s, cultural geography was born at the University of California, Berkeley, and regional culture has been part of the research content of cultural geography ever since. Carl O. Sauer, the \"father of cultural geography\", opposed Hartshorne's regional holism and analyzed regional culture from the perspective of landscape analysis [2]. Regional culture, in the conceptual form of the cultural region, is listed as one of the five themes of cultural geography research [3]. In traditional cultural geography, cultural regions are divided into formal regions, functional regions and vernacular regions. These cultural regions are distinctly different from the \"territories\" of regional geography: the Chinese-language region, for example, is a formal cultural region covering many traditional regional cultures, while the Lingnan cultural region, or the Dixie cultural region of the United States, is a vernacular cultural region much larger than the local cultural units of anthropological research. Cultural regions cover areas of very different sizes, and scholars have introduced a series of new concepts to distinguish cultural regions of different scales, chiefly: ① the cultural realm, such as the East Asian cultural realm; ② the cultural world, such as the Arab world; ③ the cultural sphere, such as the Anglosphere. However, how cultural regions of different scales nest in space has always been a problem that has never been fundamentally resolved.
When cultural geography intersects with other branches of human geography, the problem of the spatial nesting of cultural regions at different scales emerges. For example, when economic geography discusses how transnational corporations embed themselves in the social and cultural environment of a host country, this question must be answered [4]: investors need to consider whether to embed in the national cultural region of the host country or in the local cultural region of the investment destination. These two cultural regions of different scales stand in the relationship of a complex cultural system containing a simple one, or of a national ideology and culture governing a local institutional culture; if that relationship cannot be clarified, the question of embedding cannot be discussed. For another example, where political geography and cultural geography intersect, the integration of different cultural regions is also involved: the U.S. war in Afghanistan was no longer a state-to-state relationship but a relationship between the United States and the Taliban within Afghanistan's Muslim culture, and it also involved Afghanistan's relationship with its Muslim neighbor Pakistan. Such cultural and political identities spanning cultural regions, countries and regions within countries cannot be avoided when judging group interest relations in geopolitical research. Since the 1980s, new cultural geography has appeared in international geography [5]. It accords with the fourth turn in the history of geographic thought and has reformed traditional cultural geography in both research methods and theory [6]. The concept of place has become the core concept of regional culture research in new cultural geography; \"place\" here refers to the continual decomposition of the grand cultural regions of the past into smaller cultural regions. In this process of decomposition, Marxist political-economic models help cultural geographers analyze the differences between cultural regions in terms of structural and functional differences; feminist theory and post-colonial theory help them establish the subjectivity of cultural regions from the standpoints of women and nations; and post-structuralism and psychoanalysis help them further establish the uniqueness of places, the smallest cultural regions. The deep map has even become a fashionable method of recording and describing regional culture [7]. As cultural regions are broken down further and further, the world is increasingly divided into small cultural regions; and as descriptions of cultural regions approach reality, the comprehensive strengths of geographical regional cognition are lost. The integration of fragmented small cultural regions with large cultural regions has therefore been raised for research and discussion. Some new cultural geographers criticize the static view of space in traditional geography, so the process by which cultural regions of various scales integrate in the course of cultural-region change is also a problem they are committed to researching.
There are different methods for integrating cultural regions of different scales, for example the holistic analysis tools of the regional school, the landscape-combination method of the landscape school, the superorganic stability theory of cultural superorganicism [8], and the cultural description of non-representational geography. However, there is not yet a method for integrating cultural regions of different scales on which all scholars agree.", "Urbanization refers to the process by which rural population is transformed into urban population and rural areas into urban areas. By connotation and form of expression, the urbanization process can be decomposed into four aspects: population urbanization, reflecting the concentration and distribution of urban population; economic urbanization, reflecting urban economic growth; spatial urbanization, reflecting urban spatial expansion; and social urbanization, reflecting the diffusion of urban civilization. The economic growth process of urbanization is its internal driving force, the processes of population concentration and spatial expansion are its external manifestations, and the diffusion of urban civilization is its final result. Viewed over time, the urbanization process generally follows an \"S\"-shaped curve and passes through three stages of development: emergence, development and maturity [1]. Viewed over space, urbanization can generally be divided into four stages: early urbanization, suburbanization, counter-urbanization and the metropolitan belt [2]. Since urbanization in different countries and regions has its own rules and characteristics and is a dynamic evolutionary process, the study of the urbanization process has always been a hot and difficult topic in urban geography. The dynamic mechanism of urbanization is the comprehensive system formed by the mechanisms that generate the power necessary to promote the occurrence and development of urbanization, together with the economic relations and organizational institutions that maintain and improve this mechanism [3]. The urbanization process unfolds under the interaction of two forces, \"driving and braking\": whether it advances depends on whether the driving force or the braking force prevails [4]. The dynamic mechanism of urbanization has long been a focus of research by scholars in many countries [5]. It is generally believed that the pull of the city (industry) and the push of the countryside (agriculture) are the two basic drivers, shaped concretely by regional resource conditions, the geographical environment, policies, opening to the outside world, the utilization of foreign capital, the spread effects of large and medium-sized cities, the role of community governments, the behavior of farmers, and other factors [6].
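The \"S\"-shaped law mentioned above is commonly modeled with a logistic curve; the sketch below uses that standard form with invented parameters (the saturation level, growth rate and midpoint year are illustrative assumptions, not values from the cited literature).

```python
import math

# Minimal sketch of the "S"-shaped urbanization curve as a logistic
# function; saturation level, growth rate and midpoint year are invented.

U_MAX = 0.80   # assumed saturation urbanization level (80%)
K = 0.08       # assumed growth-rate parameter (1/year)
T_MID = 2000   # assumed year of fastest urbanization

def urbanization(year: int) -> float:
    """Urbanization level in a given year under the logistic model."""
    return U_MAX / (1.0 + math.exp(-K * (year - T_MID)))

# Slow emergence, rapid development, then maturity near saturation.
for y in (1950, 1980, 2000, 2020, 2050):
    print(y, f"{urbanization(y):.2f}")
```

The printed values rise slowly at first, fastest around the midpoint year, and flatten near saturation, matching the three stages of emergence, development and maturity.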
Since the population, economic, spatial, and social urbanization processes are complex, dynamic, and regionally differentiated, it is necessary to reveal their characteristics and laws of change, to study their driving and braking mechanisms, and to propose early-warning schemes and measures for rationally regulating the speed and quality of urbanization; these are the main directions of future research.", "The city grows radially from its center, and the whole city maintains a single mass as it expands outward. When the population exceeds 10 million, a city is called a megacity. Recently the number of megacities (especially in third-world countries) has increased rapidly, even exceeding the growth rate of million-population cities (worldwide, the number of cities with a population of one million has increased from 140 in 1960 to more than 600 today), and the larger the city, the faster the growth of population and other indicators. There are now about 22 megacities in the world with populations of more than 10 million. Megacities keep sprawling within a compact area, and the problems that accompany the growth of large cities are becoming increasingly serious (Figure 1, Figure 2). How should the urban spatial structure formed by rapid urbanization be understood? How can this urbanization space be guided and regulated? What is the mechanism of its formation? These will become scientific problems faced by urban geographers. (Figure 1: the mega-urban region of the United States. Figure 2: urban agglomeration areas of China; data for Taiwan Province are temporarily lacking.) In the 1950s, similar terms appeared to describe this urban phenomenon. The French geographer Jean Gottmann first used the term \"megalopolis\" in 1957 for the 960-km-long urbanized region along the Atlantic coast of the northeastern United States, from Boston south to Norfolk [1]; the term was later also applied to several other densely populated regions of large cities around the world, such as the Great Lakes region of the United States, the east coast of Japan, England, Northwest Europe, and the Yangtze River Delta in China. It now mainly refers to an extremely large urbanized area formed by the coalescence of many metropolitan areas with close economic, social, and cultural interactions. Generally speaking, the \"megacity contiguous area\" is considered a regional spatial product of human society driven by industrialization and urbanization. Since the 1980s, breakthroughs in information technology have promoted economic globalization. In this context, the transfer of technology and information through trade and investment benefits the recipients, while overlapping regulatory environments \"compress\" development space, further agglomerating the rapidly growing urbanization space; a new spatial order is emerging in urbanized areas, cities are more closely connected, and a multi-polar, multi-level global urban network is taking shape. In 2001, Hall argued that Chinese and European cities would still share some characteristics in the 21st century [2], mainly in three respects: ① globalized cities: some cities in China and Europe will become part of a complex global economy, exchanging goods and providing services to one another on a global scale; ② mega-city regions.
These are vast networked urban complexes with intricate structures, consisting of as many as 30 to 40 cities and the small towns around them, a kind of \"polycentric mega-city region\"; ③ mega-projects, which are the concrete embodiment of globalized cities and mega-city regions. In 2004, Taylor [3], professor of geography at Loughborough University in the UK, published World City Network: A Global Urban Analysis, the research result of the Globalization and World Cities Study Group and Network (GaWC) established in 1998. Its analysis of how advanced business-service firms serve globalization challenges the conventional view that the world is just a jigsaw puzzle of political districts: the city in the network is treated as a space of flows, and geographically as a space of places. In 2005, Wu Zhiqiang and others proposed the concept of the global region [4], and Li Hongwei gave it a definition [5]. Also in 2005, the American Regional Plan Association, in compiling \"America 2050\", proposed the concept of \"beyond megalopolis\" [6]. Its latest identification criteria are: ① connects at least two existing metropolitan areas (MAs); ② is expected to have a total population of more than 10 million by 2040; ③ derives a series of adjacent metropolitan areas (MAs) or micropolitan areas; ④ constitutes an \"organic\" cultural region with a significant historical background and common features; ⑤ occupies a roughly similar natural environment; ⑥ connects a huge urban core; ⑦ forms a functional urban network through flows of goods and services; ⑧ creates a geographical unit suitable for large-scale regional planning; ⑨ is located in the United States; ⑩ is based on counties as units [7]. Afterwards, \"beyond megalopolis\" was officially renamed \"megaregion\"; the 10 megaregions, including the Northeast Coast, the Midwest, Southern California, and the Gulf of Mexico coast, hold a population of about 197 million, accounting for 68% of the United States total and gathering 80% of its cities with populations over one million [6]. In 2006, Hall [8] published The Polycentric Metropolis: Learning from Mega-City Regions in Europe, further emphasizing that the mega-city region is spatially not single-centered but functionally polycentric. This is a brand-new urban form, consisting of 10 to 50 towns that are physically separate but functionally interconnected, clustered around one or more larger central cities and organized through a new functional division of labor into distinct functional urban regions (FURs), connected by the \"space of flows\" of highways, high-speed railways, and telecommunication cables. There are 8 mega-city regions in Europe: southeastern England, the Ruhr and Rhine-Main in Germany, the Randstad in the Netherlands, the Paris region in France, central Belgium, Greater Dublin, and northern Switzerland. Zhang Min, Gu Chaolin, and others
analyzed the deepening globalization process and the strengthening urban integration of the Yangtze River Delta, pointed out that the Yangtze River Delta is the region in China most likely to be built into a global urban region, and then, from the perspectives of the spatial construction and functional organization of a global urban region, proposed improving Shanghai's functions as a global city, fostering the sub-global city functions of Nanjing, Hangzhou, Suzhou, and Ningbo, strengthening functional connections between cities, and building an integrated regional network support system, so as to construct a \"multi-center, multi-level\" Yangtze River Delta global urban region [9]. Zhang Xiaoming and Zhang Cheng, noting the huge economic power displayed through the new division of functions, referred to the research methods of the POLYNET project team and related domestic results, constructed FURs to delimit the mega-city region of the Yangtze River Delta, and used data on the structure of producer-services employment to analyze the functional linkages among its 16 major FURs [10]. Zhang Xiaoming also analyzed the characteristics of the Yangtze River Delta mega-city region from the perspectives of polycentricity, functionality, and networks, concluding that it is a polycentric, networked urban region [11]. Yan Xiaopei, Mao Jiangxing, and others took the Pearl River Delta as an example to analyze the human factors of land-use change in mega-urban regions [12]. In 2007, Hall argued that China's metropolitan regions not only share attributes with other metropolises in the world but also have their own characteristics. In the global economic landscape, China occupies a very particular position: it has become the \"new world factory\", producing many advanced consumer goods by combining low production costs with advanced technology, much as many of today's developed economies did in their early development (Germany in the 19th century, Japan in the mid-20th century, Silicon Valley in the United States). The speed of development and the size of China's polycentric metropolitan regions are again markedly different: production processes are organized in \"clusters\" of discrete regions, within highly networked urban agglomerations, especially in the Yangtze and Pearl River deltas [13]. In 2008, Zou Deci and others completed the \"Research on the Planning and Construction of China's Large Urban Contiguous Areas\", concluding that China's large urban contiguous areas are mainly distributed in the economically developed eastern coastal region, six in all from north to south: central and southern Liaoning, Beijing-Tianjin-Tangshan, the Shandong Peninsula, the Yangtze River Delta, the west coast of the Taiwan Strait (Fujian), and the Pearl River Delta. Of these, the Yangtze River Delta and the Pearl River Delta have basically taken shape as contiguous metropolitan areas, while Beijing-Tianjin-Tangshan, central and southern Liaoning, the Shandong Peninsula, and the west coast of the strait show the rudiments of the spatial form of a contiguous metropolitan area [14].
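Taylor's world-city-network analysis cited above, like the POLYNET-style studies of functional linkages among FURs, rests on an interlocking network model: inter-city links are inferred from the office networks of advanced producer-service firms, the link between two cities being the sum over firms of the product of the firms' \"service values\" in each city. Below is a minimal sketch of that calculation; the cities, firms, and scores are invented for illustration.

```python
# Interlocking network model (Taylor / GaWC): the link between two cities
# is sum over firms of the product of the firms' service values (0..5,
# importance of the firm's office) in each city; a city's network
# connectivity is the sum of its links to all other cities.
# The service-value matrix below is purely illustrative.
service_value = {
    "Shanghai": {"firmA": 5, "firmB": 3, "firmC": 2},
    "Nanjing":  {"firmA": 2, "firmB": 2, "firmC": 0},
    "Hangzhou": {"firmA": 3, "firmB": 0, "firmC": 1},
}

def link(a, b):
    """r_ab = sum_j v_aj * v_bj over all firms j."""
    firms = set(service_value[a]) | set(service_value[b])
    return sum(service_value[a].get(j, 0) * service_value[b].get(j, 0)
               for j in firms)

def connectivity(a):
    """N_a = sum of links from city a to every other city."""
    return sum(link(a, b) for b in service_value if b != a)

for city in service_value:
    print(city, connectivity(city))
```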
In recent years, the development of cities around the world shows that, under economic globalization, the geographical spatial structure of rapidly urbanizing areas has changed drastically in economy, society, technology, transportation, information, and management. Traditional urban geographical representation, research on spatial mechanisms, and even planning methods have fallen behind these changes and no longer fit new developments; urban geographers, urbanists, and planners need to study and solve the related problems together.", "Cities have existed for 5,000 years, but even in 1800 AD urban populations accounted for only 2% of the world's population. In the past 200 years the world's urbanization has accelerated, and economic globalization, still in the ascendant, has made cities everywhere develop at unprecedented scale and speed. In 2006, more than half of the world's population lived in cities. According to United Nations forecasts, between 2000 and 2030 the world's urban population will soar from 2.4 billion to 5 billion, and its share of the world's total population will rise from 47% to more than 61%. Rapid urbanization often leads to uncontrolled urban growth, and blind urban sprawl and expansion inevitably make urban problems ever harder to resolve. So-called urban sprawl refers to the blind expansion of urban space without organization or prior planning, ignoring the needs of transportation and service facilities. Urban sprawl was a major problem of 20th-century urban development in Western countries, represented by the United States, that emphasize the market and consumer sovereignty. On current trends, contiguous urbanized sprawl areas will have formed all over the world by the end of the 21st century (Figure 1). (Figure 1: the urbanized sprawl space that may appear worldwide at the end of the 21st century.) Early on, urban sprawl referred only to the spatial expansion of cities, but with the global urbanization process and the disorderly, land-hungry spread of urban land, most now believe that urban sprawl has caused a series of problems of environmental and energy waste, economic inefficiency, social injustice, and loss of community culture, and may even endanger the sustainable development of cities and the world. The connotation of early urban sprawl mainly described the discontinuous development and use of urban space, and later gradually came to cover motorized travel, single-function land use, and low-density development [1]. From a regional perspective, urban sprawl often leads to the disorderly spread and expansion of urban space, putting enormous pressure on resources and the environment and making the tension between people and land increasingly severe. At the urban level, urban sprawl often results in the severing of urban-natural ecological connections, various kinds of environmental pollution, traffic congestion, difficulty in housing choice, alienation in social interaction and psychology, the lack of (or long lags in) basic urban functions, increased crime and terrorism and the resulting insecurity, and imbalances between behavior and dreams, ideals, and values. This situation is deteriorating rapidly with the passage of time, and if not governed it may eventually threaten the survival and development of the whole of human society.
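One widely used partial quantification of the sprawl just described compares how fast built-up land grows relative to population: the ratio of the land consumption rate to the population growth rate, the basis of UN SDG indicator 11.3.1. Values well above 1 are a symptom of sprawl. A minimal sketch with invented numbers:

```python
import math

def lcrpgr(area_t1, area_t2, pop_t1, pop_t2, years):
    """Land-consumption rate / population-growth rate over a period.

    Both rates are annualized log growth rates; a ratio well above 1
    means urban land is expanding faster than urban population (one
    symptom of sprawl). All inputs below are invented for illustration.
    """
    lcr = math.log(area_t2 / area_t1) / years
    pgr = math.log(pop_t2 / pop_t1) / years
    return lcr / pgr

# Hypothetical city: built-up area grows 40%, population 15%, over 10 years.
print(round(lcrpgr(500.0, 700.0, 2.0e6, 2.3e6, 10), 2))  # -> about 2.4
```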
Today urban experts and professionals, decision makers, and the general public alike feel that the disorderly sprawl and expansion of cities has become an increasingly serious scientific and practical social problem. Despite this, no effective solution has been found. Should we abandon the urban way of living? Inhibit the growth of cities? Demolish existing \"problem\" cities? Obviously none of these is desirable. The solution to urban sprawl is of course not to eliminate cities, but to control and govern existing sprawl. Since the reform and opening up, with the establishment of China's urban land market mechanism, sprawl has become the main mode of spatial expansion in many of China's large cities. Moreover, China's scarce cultivated land, large population, and rapid urbanization make this urban problem even more complex and difficult, and the mechanisms and laws of urban sprawl and expansion have become a century-scale problem for urban geographers and urban scientists.", "The peri-urbanized (semi-urbanized) area is the primary stage of, and a transitional type in, the transition from rural area to urbanized area in the course of industrialization and urbanization. Since the reform and opening up, under the combined effects of inflowing foreign capital, metropolitan radiation and diffusion, and rural industrialization, rural areas with relatively good location and endowment conditions, such as the Yangtze River Delta and the Pearl River Delta, have generally developed a transitional regional type that is \"like a village yet not a village, like a city yet not a city\", with mixed urban and rural land use and rapid change in socio-economic structure: the semi-urbanized area. From the perspective of the urbanization process, these areas have completed the shift of their industrial structure from agriculture to non-agricultural industries, but the spatial transfer and agglomeration of population and industry are not yet complete, leaving them in a state of \"semi-urbanization\" [1]. Case studies show [2] that the prominent feature of China's semi-urbanized areas is that their industrial structure and employment composition are already highly non-agricultural, showing the embryonic form of an urban economy; at the same time these areas still keep rural household registration, land, and administrative systems, the spatial agglomeration of industry and population remains relatively low, and they present a distinctive regional landscape of \"villages like towns, towns like countryside\". With the rapid advance of China's urbanization, a large number of semi-urbanized areas have formed and will exist for a long time. On the one hand, peri-urbanized areas are active areas of economic growth and the main places absorbing migrant workers, playing a key role in the healthy development of urbanization and in urban-rural overall planning.
Relying on superior locations, abundant and cheap land and labor, and preferential and flexible policies, semi-urbanized areas attract large amounts of foreign direct investment, industry diffusing from central urban areas, and industrial agglomeration in rural areas; they have become China's main processing and manufacturing bases and active areas of economic growth, rapidly completing the non-agricultural transformation of their economic and industrial structure. At the same time, peri-urbanized areas offer many new job opportunities (such as labor-intensive manufacturing), low barriers to entry, and a low cost of living, and represent an important upward ladder, thus absorbing most of the urban population newly transferred from rural areas. On the other hand, peri-urbanized areas are in a critical period of urban-rural transformation; resource and environmental problems and social conflicts are relatively acute, and social transformation and spatial restructuring are urgently needed. In semi-urbanized areas, urban and rural land uses are intertwined, the spatial layout is scattered, and resource and environmental problems are relatively serious; at the same time, the management system lags badly, landless farmers grow in number, migrants pour in, the original rural society has disintegrated while new urban communities have not yet formed, and the social structure is changing drastically. Peri-urbanized areas are a global phenomenon. Since the mid-to-late 20th century, with ever more frequent exchanges of factors between city and countryside, closer functional links, and blurring landscape boundaries, emerging regions or landscape types, quite different from the old urban-rural dual landscape and with urban and rural functions and landscapes mixed and interwoven, have sprung up in large numbers in developed and developing countries alike. As early as the 1950s, Gottmann mentioned this type of area in his megalopolis theory: \"The non-urban land between cities is no longer a rural area dominated by agricultural economic activities in the traditional sense, but is closely connected with the city, with landscapes and products completely different from the city's, providing recreational places for the urban population while obtaining various services from the central city.\" [3] Because the traditional urban-rural dual-structure theory and growth-pole theory are at a loss before this phenomenon, researchers at home and abroad have carried out a large number of empirical studies and theoretical explorations, successively proposing theoretical concepts or paradigms such as the urban fringe, the edge city, the extended metropolitan region, Desakota, and urban-rural integration, to summarize and explain this emerging regional type of mixed urban-rural functions and landscapes and to guide spatial planning. However, the research on urban fringe areas prevalent in Europe and America from the 1940s to the 1960s and in China in the 1980s [4] is only an important but incomplete theoretical summary of the semi-urbanization phenomenon, and it retains a theoretical \"urban bias\",
ignoring the driving force of rural development or rural urbanization, and in spatial scope focusing only on the areas around cities while ignoring originally rural areas whose non-agricultural economies are highly developed. The formation mechanism of the edge cities that have recently appeared in developed countries in Europe and America differs greatly from China's semi-urbanization phenomenon, as do the problems involved, the policies and measures that should be adopted, and the future directions of development. The semi-urbanized area and the interlaced zone of urban-rural land use in urban expansion areas [5,6] closely resemble China's semi-urbanization in landscape characteristics and formative dynamics and are of strong reference value, but they cover only a small part, not the whole, of peri-urbanized areas. The Desakota model proposed by the Canadian geographer T. G. McGee essentially reflects a region-based, relatively dispersed urbanization path, attaching great importance to the regional spatial changes brought about by two-way, interdependent urban-rural exchanges [7,8]. Obviously, this model has important guiding value for China's urbanization practice and is an important theoretical basis for research on China's peri-urbanization; however, the dynamics of the formation and development of China's peri-urbanization are more complex than those of Southeast Asian countries, being more strongly affected by policies and institutions. Concepts such as urban-rural integration or networking carry a strong idealistic or subjective color and are highly controversial in China; some scholars call them \"new utopias\" with poor operability. Peri-urbanization is an innovative theoretical paradigm proposed by international scholars in the late 1980s in studying the characteristics of urbanization and urban development in developing countries, and it has become a new theoretical frontier and hotspot of international urban research [9,10]. In recent years Chinese scholars have begun typical case studies of peri-urbanized areas, such as Dongguan, Shaoxing, and the Hangzhou-Ningbo corridor, and have begun to explore the definition and characteristics of typical peri-urbanized areas in China. Generally speaking, however, although the serious lag of China's urbanization behind its industrialization has attracted wide attention, the unique geographical type of the peri-urbanized area has not yet received the attention it deserves from domestic academia and management; it remains a theoretical puzzle urgently awaiting deciphering through practice. First, how are peri-urbanized areas to be identified? How should the boundaries among peri-urbanized, urbanized, and rural areas be determined? Should the same defining indicators be adopted at different spatial scales or time points? These key issues have not yet reached consensus in the academic community. Most existing methods and indicator systems for delimiting peri-urbanized areas are based on specific case studies; a set of identification methods applicable at the national scale and at different time points is lacking, and a national map of the spatial distribution of peri-urbanized areas has yet to be compiled.
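To make the identification question concrete, the sketch below classifies county units with two indicators the literature often combines, the non-agricultural employment share and population density. The thresholds and data are hypothetical placeholders, exactly the kind of parameters on which, as noted above, no consensus yet exists.

```python
# Hypothetical two-indicator classification of county units.
# Thresholds are illustrative placeholders, not consensus values.

def classify(nonfarm_share, density):
    """Assign a unit to rural / peri-urbanized / urbanized.

    nonfarm_share : share of employment outside agriculture (0..1)
    density       : population density in persons per km^2
    """
    if nonfarm_share >= 0.7 and density >= 1500:
        return "urbanized"
    if nonfarm_share >= 0.5 or density >= 600:
        return "peri-urbanized"
    return "rural"

counties = {                      # invented example data
    "county_A": (0.85, 2400),
    "county_B": (0.62, 900),
    "county_C": (0.30, 250),
}
for name, (share, dens) in counties.items():
    print(name, classify(share, dens))
```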
Second, understanding of the formation mechanism and evolutionary dynamics of China's peri-urbanized areas still needs deepening. In the urbanization of rural areas into urbanized areas, is the peri-urbanized area inevitable or accidental? Is it temporary and transitional, or potentially long-term? Is the formation mechanism of China's peri-urbanized areas basically the same as in developed countries, or does it have its own characteristics? Third, what are the development directions or trends of peri-urbanized areas, and what are the models of spatial restructuring and the regulatory policies? In the early stages of their formation and development, peri-urbanized areas relied mainly on advantages such as superior location, preferential policies, and abundant land and labor to attract large amounts of domestic and foreign investment, and quickly completed the non-agriculturalization of their industrial structure. However, this strategy of extensive growth and low-price competition is increasingly unsustainable; facing ever-tighter resource and environmental constraints, peri-urbanized areas must upgrade their industrial structure and reorganize their competitive advantages in good time. At the same time, semi-urbanized areas are in a stage of urban-rural social transformation: with the influx of migrants, the original rural communities have disintegrated and the social structure has changed drastically, so ways and models of community integration urgently need to be explored.", "Entering the new economic era, traditional location theory has encountered many spatial phenomena that it finds hard to explain in the study of industrial spatial organization. The new economy is a networked, globalized, high-risk, dynamic knowledge economy. In it, the composition of industries and occupations has changed profoundly, economic globalization is ever more evident, entrepreneurial dynamism is rising, competition is white-hot, the information technology (IT) revolution is unprecedentedly active, and government has become active as well; its novelty lies in its distinctive characteristics of knowledge-intensity, effectiveness, externality, and permeability. Against this background, the formation mechanism of industrial spatial clusters and their spatial effects have attracted the attention of academia, business, and government [1-8]. \"Cluster\" originally meant the concentrated occurrence of the same or similar things in a region. In the 1970s, Czamanski introduced the cluster into economics and proposed the concept of the \"industrial cluster\". In 1990, the American scholar Porter re-proposed the concept of industrial clusters in \"The Competitive Advantage of Nations\" [9], as a collection of companies and institutions concentrated in a location. Scholars at home and abroad have discussed the concept of industrial clusters in detail from different disciplines and perspectives [10] (Table 1), but its essence is basically the same, covering the following main points. First, an industrial cluster is a spatial agglomeration of economic activities that corresponds to a certain region and rests on specialization and collaboration (Table 1 collects representative definitions). Secondly, an industrial cluster depends on a specific social network and is a complete value-added network of relevant actors covering one or more industries from input through output and even circulation. Finally, an industrial cluster is a new, efficient form of economic organization between market and hierarchy, within which modern resources such as knowledge and technology can flow fully. Table 1. Representative definitions of the industrial cluster. Foreign scholars: Porter: a cluster is the phenomenon of interrelated companies and institutions of a specific industry gathering at a specific geographical location, a group of firms relying on mutual interaction as a necessary condition for raising their production efficiency and competitiveness. Redman: a cluster is a geographically concentrated group of enterprises along the production chain of one product or a series of similar products; similar and related enterprises concentrated in one area can jointly achieve a coordinating effect, and firms choose to join the cluster on the basis of mutual cooperation so as to expand economic activity and mutual transactions. Other definitions emphasize that certain characteristics of an open industrial environment attract industrial activities to gather there and form clusters. Domestic scholars (Wei Jiang, Wang Jici): a cluster is a geographical agglomeration of interconnected enterprises and institutions in a certain field, within which there are vertical linkages along the industrial chain and horizontal linkages between competing and complementary firms; and a cluster is a group of geographically close, interconnected firms and related institutions in a specific industry, linked together by commonality and complementarity in the industry. In the formation mechanism of industrial clusters, key enterprises are the basis determining whether a cluster can be born. When a cluster grows to a certain extent, the external economies of scale and of scope brought by emphasis on location and by the pursuit of specialization and division of labor become its main driving force [11]. When a cluster matures, the innovation network based on social capital becomes the key factor maintaining its stability [12]. Rootedness and networks are the two main signs of mature cluster development; rootedness and the regional innovation network are mutually embedded, so that cluster development relies on a deeply localized social environment and forms its own unique advantage in global competition, rather than depending solely on external forces. Therefore, whether key enterprises exist and how to cultivate them, and how enterprises take root and build networks, have become the main problems restricting the formation of industrial clusters. The spatial effects of industrial clusters are mainly manifested as the spatial agglomeration and diffusion effects of clusters and the spatial proximity effect of clusters; each is taken up in turn after the brief measurement note below.
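A brief measurement note before these effects are discussed: empirical cluster studies commonly begin by flagging regions where an industry is over-represented, using the location quotient (the regional employment share of an industry divided by its national share). A minimal sketch with invented employment figures; an LQ well above 1 flags a candidate spatial concentration, but not, by itself, a cluster in the full sense defined above, which also requires the linkages and institutions described in the text.

```python
def location_quotient(region_ind, region_total, nation_ind, nation_total):
    """LQ = (regional industry share) / (national industry share).

    An LQ substantially above 1 marks a candidate concentration;
    linkage and network evidence is still needed to call it a cluster.
    """
    return (region_ind / region_total) / (nation_ind / nation_total)

# Invented figures: 60k of 400k regional jobs are in the industry,
# versus 1.2m of 40m jobs nationally.
print(round(location_quotient(60_000, 400_000, 1_200_000, 40_000_000), 2))
# -> 5.0
```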
First, owing to the specialization characteristic of industrial clusters, the enterprises within them establish effective spatial network relations with upstream and downstream production and service industries; industrial clusters therefore have a spatial agglomeration effect, which promotes the geographical concentration of enterprises in the same industry. Second, industrial clusters have a spatial diffusion effect: enterprises within a cluster can save production costs and specialize further through division of labor or diffusion, finally giving the cluster as a whole higher efficiency than before. In addition, the spatial proximity effect of clusters effectively reduces the transportation costs of frequent transactions between enterprises and lets the enterprises of a cluster form a flexible economic and social complex. The industrial agglomeration network characterized by such flexible specialization forms, through division of labor and cooperation, a dense regional network organization that jointly faces rapidly changing external markets and technical conditions; this promotes highly efficient production within agglomerations, saves the transaction costs of individual enterprises, effectively enhances regional competitiveness, and finally, through cumulative circular effects, promotes the continuous growth and competitiveness of the whole regional economy [13-16]. However, whether these spatial effects of a cluster can come into play is often constrained by historical conditions, the political and economic systems, and various complex factors within and between enterprises, which is also the difficulty of research on the spatial effects of industrial clusters. With the practice of industrial clusters in the United States, Britain, Italy, and other countries since the 1980s, the formation mechanism of clusters and the significance of their spatial effects for regional economic development have been fully recognized in theory and practice. Yet although scholars at home and abroad have analyzed the formation mechanism and spatial effects of clusters from different angles, quantitative analysis and in-depth research on the formation mechanisms, developmental laws, and spatial effects of clusters in different periods are still lacking; the limitations of existing theory thus open new space for further research.", "Industrial space transfer is an important factor in the formation of the international or inter-regional division of labor. It refers to the economic behavior and process by which a country or region, after changes in resource supply or product demand at development stages such as innovation, maturity, or decline, transfers production, sales, research and development, or even corporate headquarters to another country or region in order to advance its own industry. Western scholars [1-7], in empirical analyses of long-run industrial development trends, have analyzed the periodicity and principles of industrial space transfer, focusing on the economic motivations of industrial transfer from developed countries to other countries,
the evolution of the objects of transfer, the effects of transfer, and related issues, and have formed several core concepts and theories: the flying-geese model of industrial space transfer [1-3], product life-cycle theory [4], and the mechanism of labor-intensive industrial space transfer [5]. The limitation and difficulty of research on the periodicity and principles of industrial space transfer lie in the fact that specific transfer theories are usually produced in specific historical periods, and many analyses are based on the transfer phenomena of a particular stage of economic development; they explain the transfer phenomena of that period well, but as industrial space transfer keeps developing, the explanatory power of traditional transfer theory keeps declining, and this constitutes the core difficulty of research on the periodicity and principles of industrial space transfer. At the same time, different transfer theories take different research subjects, so none can fully explain the transfer phenomena addressed by the others, which makes each one-sided. In addition, previous theories mostly studied transfer between countries from the national perspective and lacked research on transfer between regions within a country. With the continuous expansion of the scale of industrial space transfer, the diversification of its modes, and the diversification and growing complexity of its subjects, some scholars have begun to analyze the periodicity and principles of industrial space transfer in depth from new perspectives such as new economic geography and the industrial value chain [8-10], in order to break through the limitations of traditional theory and provide a richer theoretical explanation for the study of industrial space transfer.", "The significance and progress of research on rural territorial types: before modern cities emerged, differences in rural development were shaped mainly by physical-geographical elements. When human beings entered the stage of rapid industrialization and urbanization, the mode of social production changed enormously, and a modern industrial civilization dominated by towns gradually replaced the original agricultural civilization. The evolution of urban-rural relations and the changing intensity and content of human activities between urban and rural areas directly affect traditional rural modes of production and life, and at the same time affect the coupling between the rural natural ecosystem and the human social-ecological system, so that rural development displays a composite of economic, social, and ecological functions different from the past. These changes in rural regional functions have in turn brought huge changes in rural production relations and ways of life; as a landscape manifestation, these changes have solidified into rural territorial space in certain forms and, together with historical accumulation, have created the diversity and complexity of today's rural territorial landscape.
The classification of rural areas has always been an important research field in human geography, especially rural geography [1]. However, influenced by national strategic needs and limited by scientific cognition, traditional rural geography in China has focused on rural settlement geography (or village geography) and agricultural geography [2]. Around the time of the reform and opening up, some scholars studied, and partly promoted, comprehensive or special agricultural zoning out of the need to take stock of conditions and optimize the distribution of agricultural productive forces [3]; after the reform and opening up, influenced by the unbalanced growth strategy that gave cities the dominant position, studies of rural territorial types were relatively few, though some regional special studies discussed the division of rural areas; for example, Guo Huancheng divided rural economic types when studying the rural geography of the Huang-Huai-Hai region [4]. In the 21st century, the \"three rural\" issues (agriculture, rural areas, and farmers) have increasingly become a focus of national macro-strategy; to formulate more targeted policies and regulations, research on the differentiation pattern of rural areas has attracted academic attention and produced many results, but the existing results still lack in-depth discussion of the dynamic mechanism and of optimization and restructuring. This is mainly reflected in the following: ① the classification of rural areas lacks a holistic view of urban and rural areas together; rural areas are often discussed purely in rural terms, and the study of rural territorial types has not yet been placed against the historical background of evolving urban-rural relations; ② systematic research on the dynamic mechanism of rural territorial differentiation is lacking; the classification of rural regions has not been preceded by an answer to why one rural landscape or another emerges, let alone scenario simulations or predictions of how the various territorial types may evolve against the background of urban-rural transformation and development; ③ rural hollowing, marked by abandoned homesteads and idle land during the rapid non-agricultural transfer of the rural population, is becoming a prominent problem in the adverse evolution of rural territorial systems, and its theoretical analysis, regional identification, pattern simulation, and visual expression urgently require systematic, in-depth research on the dynamics and causes of rural territorial differentiation patterns, together with mechanisms and models of rural territorial restructuring that meet the requirements of coordinated urban-rural development; this is also an urgent task for the innovation and development of rural geography. Under the new situation of urban-rural transformation and development, with the rapid advance of China's urbanization and the gradual formation of regional dominant-function patterns, the rural territorial system has become the human geography system corresponding to urban areas, and a key research field of rural geography in which the complex, wide-ranging geographical problems of rural areas urgently need deep exploration.
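As one illustration of how such a rural typology could be made operational, the sketch below groups rural units by simple indicator profiles with a plain k-means step. The indicators, data, and number of types are all assumptions for demonstration; in real use the features would first be normalized so that no single indicator dominates the distances.

```python
import random

# Each rural unit described by (non-farm income share, distance to the
# nearest city in km, share of idle homestead land). Numbers are invented.
units = {
    "village_1": (0.75, 12.0, 0.05),
    "village_2": (0.70, 18.0, 0.08),
    "village_3": (0.25, 85.0, 0.30),
    "village_4": (0.20, 95.0, 0.35),
    "village_5": (0.45, 40.0, 0.15),
}

def dist2(p, q):
    """Squared Euclidean distance between two feature tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: alternate assignment and centroid update."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

cents = kmeans(list(units.values()), k=2)
for name, p in units.items():
    label = min(range(len(cents)), key=lambda i: dist2(p, cents[i]))
    print(name, "-> type", label)
```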
At present, China's urban and rural development is entering a transition period; rural development faces many practical problems, and some contradictions are still intensifying. The main contradictions are threefold. First, the contradiction of the urban-rural dual economic system, manifested in unreasonable and disharmonious phenomena in production, distribution, and circulation between urban and rural areas [5]; urban-rural separation and isolation hinder the orderly flow of factors between them and have continuously widened the urban-rural gap. Second, the contradiction over the fairness of resource allocation and benefits, prominently manifested in the unreasonable allocation of rural land use among economic development, ecological protection, and food security. Third, the \"new agriculture, rural areas and farmers\" problems brought about by institutional and policy changes: grain farmers in traditional grain-producing areas, forest farmers in state-owned forest areas who long relied on logging, and miners in energy- and mineral-intensive areas are deeply affected by adjustments such as strict cultivated-land protection, natural-forest protection, and the closure or relocation of small enterprises; with urbanization and rural economic development, China's rural areas are differentiating, and the stratification of farmers' employment and income is becoming very obvious. From a geographical point of view, an important reason for these urban-rural and human-land contradictions lies in the long-standing \"urban orientation\" of national macro-policy: insufficient urban-rural coordination and interaction; lagging planning, management, and construction of rural development; policies for supporting and strengthening agriculture not yet matched by complementary measures; a scattered rural settlement pattern, a weakly organized production system, and a poor-quality living environment; and large social groups, such as fishermen, herdsmen, migrants, and migrant workers, who find it hard to live and work in contentment. All of these are major obstacles to the orderly flow of factors and the optimal allocation of resources between urban and rural areas. To analyze and solve these problems, the core tasks of research on China's rural territorial system are to reveal the main controlling factors and interaction mechanisms of the spatial differentiation of rural territory, to identify rural territorial types and their spatial organization, to study the multifunctionality of different rural areas and the dominant functions of different types, and to explore the coupling process and dynamic mechanism of an urban-rural territorial system oriented by dominant functions. In the future, rural geography should focus on reconstructing the territorial pattern of \"three integrations and one improvement\" across different types of rural areas [6]. The main contents include: ① spatial restructuring; in the process of urban-rural transformation and development, rural settlements are gradually shifting from the single function of \"living\" to the multiple functions of \"living, production, and ecology\".
According to Christaller's central place theory, the core of spatial integration is to form \"central places\" with a definite hierarchical relationship, to reconstruct an orderly rural structure for the rational flow of urban-rural factors and the relative concentration of rural factors, to guide the concentration of farmers toward central villages (communities), and to promote the development of modern rural space in the direction of ecology, intensification, and high efficiency. ② Organizational restructuring. The comprehensive improvement of natural villages promotes the scaling-up of rural production, and effective organizational subjects are urgently needed to promote the intensive allocation of factors. At present, the main subjects of village regulation are government organizations at all levels, but divided management by multiple departments has weakened efficiency, and specialized rural economic cooperation organizations are developing relatively slowly, so integrating multi-level organizations is particularly important; the relationships of rural economic cooperation organizations should be promoted from credit guarantee to contract and then to property-rights linkages, advancing the community-based, specialized, and shareholding development of rural organizations. ③ Industrial reshaping. The spatial and organizational restructuring of villages in the new era will promote the concentration and agglomeration of rural production factors and build a new platform for reshaping and upgrading rural industries. Against the background of building a resource-saving and environment-friendly society, unique agricultural resource advantages, a good ecological environment, and broad entrepreneurial space are pushing rural industries toward park-based, high-efficiency development; it is urgent to build innovative models of urban-rural coordinated development, and their long-term mechanisms, at the level of industrial interaction, and to vigorously develop industrialized, efficient modern agriculture and rural joint-stock enterprises. Relying on the integration of rural space, organization, and industry, we should deeply explore the development dilemmas, solutions, and coping strategies of typical rural problem areas, study the docking mechanisms and models between different types of rural areas and the development of central cities and towns, and study scientific approaches and countermeasures for comprehensively improving rural productivity and competitiveness. Influencing factors of rural territorial type differentiation and core issues of its optimization and restructuring: rural development and its landscape differentiation are external manifestations of human production relations and ways of life, deeply affected by many factors in the natural, economic, social, institutional, and policy spheres. Natural factors, including geomorphology, resource endowment, and environmental quality, are the material basis of rural territorial differentiation; economic factors, including industrial structure, economic strength, and income level, are its basic driving force; and social factors, including population quality, education and culture, and social security, are the environmental conditions of rural territorial differentiation.
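For reference, the hierarchy implied by the central place theory invoked at the start of this passage is geometric. A worked note, assuming Christaller's K = 3 (marketing-principle) system, in which each market area of one level contains three market areas of the next level down:

```latex
% Christaller, K = 3 (marketing principle): market areas at level n
M_n = K^{n-1} = 3^{n-1}, \qquad n = 1, \dots, L
% central places whose highest rank is level n
P_1 = 1, \qquad P_n = M_n - M_{n-1} = 2 \cdot 3^{n-2} \quad (n \ge 2)
```

So a four-level K = 3 system has 1 + 2 + 6 + 18 = 27 centers in all, matching its 3^3 = 27 lowest-level market areas; the same logic of a few higher-order centers serving many lower-order places is what underlies guiding farmers toward a limited number of central villages.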
Under China's urban-rural dual system, institutional and policy factors play an important regulating and guiding role in rural territorial differentiation. For a long time, the spatial isolation and relative closure of China's rural territorial system has been the result of traditional systems and institutional constraints, and it has become the main obstacle to the optimization and restructuring of rural territorial space. Facing the national strategic need for overall urban-rural development in the new era, domestic scholars have carried out research and practical exploration of urban-rural spatial organization, touching on some issues of rural spatial restructuring, such as the merging of villages and townships, the renovation of hollow villages and the construction of central villages, and the optimization and restructuring of rural space [7,8]. However, because simulation analysis of urban-rural interaction mechanisms in the transition period and systematic research on rural spatial restructuring mechanisms still need deepening, existing academic research and practical work cannot yet adapt to, or accurately grasp, the scientific and technological demands posed by the spatio-temporal dynamics and differentiation laws of rural territory and by the pathways for regulating them. The core scientific issues that research on rural territorial type differentiation and its optimal restructuring still needs to break through include: ① the spatio-temporal correlation of rural spatial restructuring, organizational restructuring, and industrial reshaping; owing to the comprehensive influence of differences in history, culture, and concepts, as well as different modes of modern production and life, constructing a new pattern of rural territorial types is not easy, and it is urgent to adopt the comprehensive perspective of rural territorial systems and their optimization, promoting innovation of systems and mechanisms and the optimal design of policies so as to combine new rural construction organically with urbanization [9]; ② the multi-spatial-scale transformation of rural spatial restructuring, an important theoretical topic that requires revealing, at macro, meso, and micro scales, the coupling relationships between rural spatial restructuring and regional development, urban-rural overall planning, rural landscape, and intensive land use, as well as the regional differences in forms of rural economic development and in rural spatial restructuring [10]; ③ the dynamic mechanism and multifunctionality of the evolution of rural territorial patterns.
This requires in-depth analysis of the endogenous and exogenous economic and social dynamics of the evolution of rural territorial patterns and, in accordance with the requirements of building a new pattern of integrated urban-rural economic and social development, systematic forward-looking simulation and prediction of rural spatial restructuring patterns and of the models and potential of the comprehensive improvement of hollow villages, exploring, in the course of coordinating urban and rural development, the dynamic balance between protecting cultivated land to ensure food security and meeting the construction-land demand of healthy urbanization, so as to promote the sound development of rural territorial systems and the formation of urban-rural interaction patterns, and to provide important support for building new urban-rural relations and for decisions promoting sustainable rural development.", "The construction of agricultural production bases and its significance: agricultural production bases are concentrated production areas that occupy a relatively important position in the national or regional agricultural economy and can stably supply large quantities of agricultural products inside and outside the region over the long term, including bases for grain, vegetables, animal husbandry, fishery, fruit, and other products. The construction of agricultural production bases is a complex systematic project with many influencing factors, involving intricate links such as industry selection, locational layout, development planning, production management, storage, transportation and circulation, benefit sharing, and risk avoidance. Systematically summarizing the solutions to these links in theory and integrating them in policy design and practice yields typical models of regional agricultural production base construction. China has a vast territory, diverse geographical types, and abundant labor resources, but the contradiction between a large population and scarce land is prominent. If reasonable models can effectively organize farmers' decentralized operations, build agricultural production bases on the basis of regional comparative advantage, and carry out specialized production of agricultural products, then labor productivity, land-use efficiency, and the commodity rate of agricultural products can be significantly improved, which is of great significance for strengthening cultivated-land protection, enhancing agricultural production capacity and competitiveness, raising the added value of agricultural products, increasing farmers' income, and promoting rural development and new rural construction. The transformation of agricultural production bases is an inevitable trend in China's urban-rural transition period for vigorously developing sustainable, efficient modern agriculture and strengthening the diversity of rural territorial functions. China's practice of agricultural production base construction and its existing problems: since the founding of New China, promoting the construction of agricultural production bases has been an important task for China's agricultural development and grain production. During the planned-economy period, China produced two modes: land reclamation by the Corps, and government-led construction.
The former refers to the construction, beginning in the 1950s, of commodity grain bases in Northeast China, cotton and sugar-beet production bases in Xinjiang, and tropical agricultural production bases, such as rubber, in Hainan; this special strategy of a special period achieved the dual goals of reclamation-plus-frontier-defense and comprehensive agricultural development. The latter refers to the construction of commodity grain base counties beginning in the 1980s, typically featuring strong central-government support, with local governments contributing part of the funds and organizing base construction; it had a significant impact. Geographers played an important scientific and technological support role in the site selection, planning, and construction of agricultural production bases at this stage. Generally speaking, because agricultural products were in serious shortage, the administratively driven base construction of the planned-economy period met weak resistance, ran small risks, and achieved relatively good results. In the course of marketization, however, base construction has faced growing practical problems such as low production efficiency, the declining enthusiasm of producers, and serious shortfalls in productive inputs. After the 1990s, the supply of agricultural products became structurally surplus, farmers generally increased production without increasing income, and a strategic adjustment of agricultural structure followed. Against the background of continuous market-oriented reform, farmers have more production autonomy, and base construction characterized by agricultural industrialization has entered a period of rapid development, especially prominent in the developed coastal areas. Local governments, leading agriculture-related enterprises, specialized markets, cooperative organizations, village collectives, and farmers have all become stakeholders. The main development models are: the leading-enterprise-driven model centered on agricultural product processing and distribution enterprises; the market-driven model centered on specialized wholesale markets for agricultural and sideline products; the intermediary-organization linkage model linked by various intermediary organizations; the cooperative-integration model, in which farmers' cooperatives grow into corporate entities engaged in the integrated operation of agricultural production; the large-scale specialized-household model driven by large-scale planting and breeding operations; and the urban \"vegetable basket\" project-driven model. These play an important leading and guaranteeing role in meeting urban and rural residents' demand for agricultural products, promoting the development of agricultural production, and steadily increasing farmers' income. The experience of China's agricultural production base construction since the market-oriented reform undoubtedly offers important reference for future practice. Case analysis and theoretical research show that the above models are often only integrations of solutions to parts of the chain, and real or potential problems remain in some fields and in institutional guarantees.
For example, in the leading-enterprise-driven base development model, although contracts reduce the market transaction costs enterprises incur in purchasing agricultural products, avert raw-material supply risks, reduce the losses to farmers' interests caused by falling market prices, and make property rights relatively clear, the binding force of such short-term contracts is insufficient, so either party may breach them. In addition, owing to information asymmetry and unsound rural grassroots economic organizations, farmers have weak bargaining power and lack a voice; they are often in an extremely passive and vulnerable position, their interests are frequently infringed, and frictions between enterprises and farmers occur repeatedly. In the intermediary-organization linkage model, leading enterprises mainly entrust intermediary organizations to regulate farmers' behavior, so that farmers determine production varieties, scale, and standards according to the enterprises' wishes, while farmers entrust intermediary organizations to negotiate with leading enterprises and to strive for and safeguard their interests as far as possible. But intermediary organizations, as rational actors with relatively independent interests and ample information about both leading enterprises and farmers, are likely to engage in \"rent-seeking\" in the process, or in offside behavior that \"helps\" farmers make decisions, such as forcing farmers to transfer their land to join the production base. In recent years, some foreign consortiums and industrial and commercial enterprises have become heavily involved in modern agriculture, building agricultural sightseeing parks by \"renting instead of expropriating\" land. Farmers obtain some economic benefits in the short term, but from the perspective of the long-term development of regional agriculture and the rural economy, this mode of land development and base construction will profoundly affect the sustainability of the regional agricultural production system. Opportunities and challenges for the construction of China's agricultural production bases in the new era: entering the 21st century, the internal and external environment of China's agricultural development has changed drastically, and opportunities and challenges for the construction and development of agricultural production bases coexist. The new opportunities are: ① the consumption capacity of urban and rural residents has been greatly improved; ② after accession to the WTO, the export prospects of superior agricultural products are broader; ③ after the market-oriented reform, farmers' autonomy in production and management has been greatly enhanced; ④ against the background of rapid urbanization, farmers' willingness to transfer land may grow even stronger; ⑤ the government's capacity and willingness to support agriculture have increased, and it has become a social consensus to \"nurture agriculture with industry and lead the countryside with cities\", to scientifically promote the construction of the new countryside, and to realize integrated urban-rural development.
At the same time, some new challenges are emerging: ① consumer demand is shifting further toward concentration, high quality, branding, and green products; ② against the background of economic globalization, the impact of multinational companies on the domestic agricultural product market has intensified; ③ technical barriers to international trade in agricultural products have increased; ④ the water and soil resource situation for agricultural production is not optimistic, with water shortages and agro-ecological degradation becoming increasingly prominent; ⑤ the large outflow of rural young and middle-aged labor has to some extent weakened the main body of agricultural production; ⑥ environmental pollution from intensive agricultural production and its hazards are intensifying, and the quality and safety of agricultural products face severe tests. Under the new situation, the development of China's agricultural production bases urgently needs innovation that addresses both the unsolved problems exposed in practice and the current challenges, achieving innovation in scaled, standardized, and market-oriented base construction. Specifically: ① how to grasp the relationship between market supply and demand amid economic globalization, that is, to scientifically analyze, simulate, and predict supply and demand in international and domestic markets, and to research, track, and develop markets; ② how to grasp competition rules and innovation mechanisms, that is, to create decision-making mechanisms enabling foreign-trade enterprises to use international trade rules for agricultural products quickly and effectively; ③ how to select leading industries, that is, to choose them according to local conditions, give full play to unique advantages, and form competitive products through base construction so as to effectively prevent the vicious competition caused by industrial convergence; ④ how to improve the benefit distribution mechanism, that is, to coordinate the interest relationships and guarantee mechanisms among multiple subjects, especially among enterprises, cooperative organizations, and farmers; ⑤ how to build the supporting systems, that is, the standardization system for production and circulation and the system for transforming and popularizing agricultural scientific and technological achievements; ⑥ how to integrate agricultural production factors, that is, to innovate land circulation, secure industrial financing, cultivate new farmers, and improve farmers' capacity for settlement and employment; ⑦ how to exert brand effects and avoid industrial risks, that is, to create and operate brands, realize brand management, reduce the operating risks of enterprises in base construction, and strengthen farmers' rights and interests; ⑧ how to protect the agricultural ecological environment, that is, to realize ecological conservation and raise production capacity during the development of agricultural production bases, implement cleaner production, reduce environmental pollution, and protect biodiversity.
To create a favorable environment for the formation and development of China's agricultural production bases, innovations should be made in deeper-level institutions and policies: ① establish and improve credit evaluation and reward-and-punishment systems for leading agriculture-related enterprises, and give full play to their role in organizing and driving farmers; ② improve laws and regulations, standardize the operation of farmers' professional cooperative organizations, and effectively protect farmers' rights and interests in land transfer; ③ scientifically cultivate the main actors in agricultural production base construction, institutionalize farmers' vocational education and cultural and skills training, and foster new farmers capable of leading the development of modern characteristic agriculture and new rural construction; ④ promote the construction of agricultural informatization projects, realize the sharing, intercommunication, and interaction of comprehensive agricultural information, and steadily advance the system for transforming and popularizing agricultural scientific and technological achievements; ⑤ study and issue regulations (laws) promoting new rural construction and farmers' entrepreneurship, encourage leading agricultural enterprises, farmers' professional cooperatives, and farmers to start businesses, and guide the optimization of agricultural structure, higher per-unit yields, and greater efficiency [9], so as to achieve the sustainable development of agricultural production bases; ⑥ improve the environmental impact assessment and supervision mechanism for agricultural production base construction to ensure ecological, clean, and safe agricultural production. For the construction of key grain production bases in particular, we should proceed from the overall goal of ensuring national food security, innovate the national policy guarantee system for base construction, operation, management, and benefit distribution, gradually optimize the spatial layout of agriculture and grain production, and improve the comprehensive production capacity and competitiveness of agricultural production bases. In the new period of economic globalization and the rapid transformation of China's urban and rural development, many external and internal factors affecting the construction of agricultural production bases coexist, confronting base construction with greater difficulties and uncertainties. For example, different crop types, different regional customs, and changing market environments may pose serious challenges to the development models of agricultural production bases and to their demonstration and popularization. Therefore, from a macro-strategic perspective supported by science and technology, more attention should be paid to how to scientifically coordinate and couple the various influencing factors in the formation of China's agricultural production bases, and to researching and proposing base models with outstanding comparative advantages, good growth prospects, and strong competitiveness, together with a standardized system of construction criteria.
Under the new situation, and facing medium- and long-term national planning and decision-making on regional agriculture and food security, in-depth research on the laws of regional differentiation of agricultural production in China driven by economic construction against the background of climate change, on the multifunctionality of agricultural production areas, on the construction of agricultural market systems, and on the coupling between agricultural modernization and new rural construction should be regarded as an important direction and frontier of modern geography, especially agricultural geography and rural geography.", "The transportation network is one of the important types and main components of infrastructure networks. It is the basic material condition that sustains human survival and livelihood, and an important foundation for improving a region's capacity for sustainable development. Transportation facility networks play an important guiding role in the formation of regional spatial structure, especially in the spatial agglomeration and dispersal of regional social, economic, and resource elements. Studying the basic laws governing the emergence and expansion of transportation facility networks and the mechanisms by which they influence regional development is therefore of great significance for scientifically understanding the essence of regional development and constructing a rational regional spatial order. Over a long period, large-scale transportation facility networks have formed in various regions, attracting the attention of geographers and giving rise to a series of studies. The key research contents include the spatial structure and temporal evolution of transportation networks; centering on the laws of network expansion, development schemas of universal significance have been constructed successively. Gould, Hilling and Taaffe, Ogundana, Gilbert, Stanley, Hayuth, and Notteboom systematically studied the development and evolution of port systems, successively distilling theoretical models of five stages, six stages, and four stages. O'Connor, taking Southeast Asia as an example, discussed the spatial pattern of airline networks and proposed a four-stage evolution theory. O'Kelly proposed the famous hub-and-spoke network theory, which forms the theoretical cornerstone of aviation network research. Chinese scholars Jin Fengjun, Cao Youhui, Cao Xiaoshu, Han Zenglin, Wang Chengjin, and Wang Jiao'e conducted in-depth research on the spatial laws of the emergence and development of railway networks, port systems, expressway networks, aviation networks, and traffic networks in densely urbanized areas. At present, most existing research remains at the level of the evolution laws of transportation facility networks; research on the human-environment effects of transportation network development, which matters greatly for probing regional development mechanisms, is still relatively weak. What effects does the long-term formation and evolution of a transportation facility network have on the human environment? How do they operate? How can they be regulated?
These questions have not been well answered and will become important research content for transport geography in the future. The ultimate goal of transport geography is to provide scientific support for optimizing the human living environment. How the construction of transportation facilities affects the regional socio-economic spatial structure and ultimately changes the human living environment and the human-land relationship is a key area deserving in-depth research. Exploring the human-environment effects of transportation infrastructure requires scientific observation and simulation analysis at both theoretical and practical levels from a comprehensive ecology-society-economy perspective: summarizing the \"synergy\" process (that is, the coupling relationship and spatial pattern) between transportation facility construction and urbanization and regional development in China since the 20th century, the spatial effects of \"synergy\" and \"non-synergy\", the laws of spatial coupling among infrastructure, location, and regional development policies, the role of major infrastructure construction in the evolution of urbanization and regional development patterns together with its strategic environmental assessment, and the social and environmental effects of infrastructure in key urbanized areas.", "As an ancient discipline, geography is characterized by the diversity among regions. Regional differentiation leads to interactions between different regions, so regional interdependence and interaction has become one of the main subjects of geographical research. Ritter, the founder of modern geography, upheld this proposition in both theory and practice: geography should first study all interrelated phenomena existing in all regions of the world [1]. The famous American scholar Hartshorne also pointed out that geography and history are sciences that study the world in combination; geography seeks comprehensive knowledge of the regional differences of the world, and considers phenomena only in terms of their geographical importance, that is, their relationship to the totality of regional differences [2]. The British scholar Haggett focused on regional differences in human activities and the resulting spatial interactions; in the general analytical procedure of locational structure that he established, regional interactions at different levels ranked first in the order of analysis [3]. Entrusted by the Behavioral and Social Science Research Council of the National Academy of Sciences, the geography panel headed by Taaffe proposed six guiding topics with the greatest influence on the future system of theoretical geography, including flows (vehicle flows, cargo flows, flows of people), traffic networks, and spatial diffusion theory [4]. In his works, Wu Chuanjun regards inter-regional relationships as one of the major research tasks of geography [5]. Regional interdependence in today's world is often manifested in the regional division of labor, regional competition, regional cooperation, and the various material, information, and capital flows arising from regional interactions. At the same time, regions at different levels exhibit different patterns of dependence: at the macro level, regional dependence is often manifested as competition and cooperation among nations.
As globalization deepens, capital flows globally and production processes are broken into many links and reconfigured worldwide through the medium of multinational companies; the service industry is more interdependent, and the mutual influence between regions grows stronger. The most intuitive manifestation of macro-level regional dependence is that the financial crisis in the United States dealt a blow to global economic development. Regional interdependence at the meso level is reflected in the interdependence between cities, and between cities and their surrounding hinterlands. Cities with isomorphic industrial structures compete for common markets and resources, while the dependence between cities and their hinterlands is often manifested in exchanges of resources, industries, population, the ecological environment, and other elements. Regarding the principles of regional interdependence, Adam Smith's \"absolute cost theory\", Ricardo's \"comparative advantage theory\", and the Heckscher-Ohlin factor endowment theory partially explain this relationship from an economic standpoint, while Lu Dadao's \"point-axis\" development model provided theoretical support for regional interdependence and development sequence from the perspective of geography [6]. However, the academic community has not yet reached a consensus on how to calculate regional interdependence. Carroll introduced distance into the regional interaction formula and articulated the famous law of distance decay: the farther apart two regions are, the weaker their interaction [7]. Taaffe measured regional interaction through the strength of economic linkage, introducing population size and inter-regional distance into the dependence formula [8]. Muller's choice growth model emphasizes using transportation networks, logistics, and flows of people to measure inter-regional connections [9]. Berry and Black each used vector factor analysis of inter-regional commodity flows to measure the interrelationships between regions [10,11]. Yang Wuyang used the shortest-path method of graph theory to delimit the scope of regional interaction [12]. Experts and scholars at home and abroad have successively proposed theories and methods such as the basic gravity model, comprehensive scaling, and diffusion potential, studying inter-regional interaction from different perspectives [13,14]. Because regional interdependence manifests differently at different spatial scales, because different regions combine different elements and structures, and because the same region has different development goals at different stages, academic research has so far reached no consensus on the principles, laws, and calculation methods of regional interdependence. Traditional measures of regional dependence generally introduce distance as an inverse function of regional interaction. However, with the development of new technologies, especially information technology, and the deepening of globalization, the distance-decay model is being challenged. For example, Brown and Kevin point out that the adoption of innovations drives cities to grow at certain rates, thereby changing the network of connections delimited by the gravity model [15].
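To make the gravity-model logic above concrete, here is a minimal, self-contained Python sketch (not from the original text); the city masses, coordinates, constant k, and decay exponent b are all invented for illustration, with b = 2 echoing the classic inverse-square form of the distance-decay law.

```python
# Hypothetical sketch of the basic gravity model of regional interaction,
# T_ij = k * M_i * M_j / d_ij**b, with distance-decay exponent b.
# City names, masses, and coordinates below are invented for illustration only.

import math

cities = {
    # name: (population in 10k persons, x in km, y in km)
    "A": (900, 0.0, 0.0),
    "B": (450, 120.0, 50.0),
    "C": (120, 300.0, 180.0),
}

def interaction(m1, m2, d, k=1.0, b=2.0):
    """Gravity-model interaction strength; b=2 gives the classic
    inverse-square distance decay mentioned in the text."""
    return k * m1 * m2 / d ** b

for a in cities:
    for c in cities:
        if a < c:  # each unordered pair once
            (m1, x1, y1), (m2, x2, y2) = cities[a], cities[c]
            d = math.hypot(x2 - x1, y2 - y1)
            print(f"{a}-{c}: d = {d:6.1f} km, T = {interaction(m1, m2, d):8.3f}")
```

Raising b strengthens the decay, so nearby pairs dominate the interaction network; lowering it flattens the pattern, which is one way to mimic the weakening of distance friction that information technology is said to bring about.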
Therefore, the principles, laws, and calculation methods of regional interdependence remain an unresolved geographical problem.", "Accessibility is an important index in geography for examining the convenience of connection between a node (region) and other nodes (regions) in a spatial system, or among all nodes in the entire system, and it is an important parameter describing such relationships. Accessibility can be used to measure several attributes of different nodes in a regional spatial structure, such as differences in transportation costs, in the transaction costs of social and economic exchanges with the outside world, and in basic capabilities for development. Research on accessibility has a long history; the notion has been implicit since classical location theory. After agricultural and industrial location theory were put forward, accessibility attracted the attention of scholars in urban planning, transport geography, and related fields, producing many studies focused on the definition of the concept and on measurement methods. Regarding the concept, scholars at home and abroad have offered interpretations in terms of time, space, sociology, and psychology, but no generally accepted definition exists yet. Most scholars hold that accessibility is the convenience with which a specific transportation system can be used to reach an activity site from a given location. A typical foreign case is the evaluation of trans-European road and railway networks; some scholars have studied the accessibility of different regions and transport modes. A small number of scholars focus on rural accessibility, measuring the frequency and quality of transport services; others focus on urban residents' accessibility to various public service facilities (hospitals, parks, schools, libraries, etc.), extending the study of accessibility into the inner space of the city, which has become an important field of applied accessibility research. Regarding measurement methods, geographers have constructed a series of mathematical models. The weighted average travel cost method is a widely used model: $A_i = \sum_{j=1}^{n} T_{ij} M_j / \sum_{j=1}^{n} M_j$, where $A_i$ is the accessibility of node $i$; $T_{ij}$ is the shortest time needed to reach economic center $j$ from node $i$ through the traffic network; and $M_j$ is the quality of economic center $j$, such as GDP, employment, or population. Some scholars adopt opportunity accessibility or daily accessibility, which count the population or economic activities reachable from a node within a specific transport cost or time limit. The third method is the potential model: $P_i = \sum_{j=1}^{n} M_j / C_{ij}^{a}$, where $P_i$ is the economic potential of node $i$; $M_j$ is the quality of economic center $j$; $C_{ij}$ is the transportation cost from node $i$ to center $j$; and $a$ is the distance-friction coefficient between $i$ and $j$. At present, most scholars use these three methods to measure the accessibility of spatial nodes. Correctly expressing and evaluating spatial accessibility is of great significance for improving regional transportation networks, rationally guiding the layout of population and industry, and revealing regional development mechanisms.
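The three measurement approaches just described can be illustrated with a small hypothetical example; the node masses, travel-time matrix, time budget, and friction coefficient below are assumptions for illustration, not values from the literature cited above.

```python
# Toy computation of the three accessibility measures described in the text:
# (1) weighted average travel time  A_i = sum_j(T_ij * M_j) / sum_j(M_j),
# (2) daily (cumulative-opportunity) accessibility within a time budget,
# (3) potential model               P_i = sum_j(M_j / C_ij**a).
# Masses and the travel-time matrix are assumptions for illustration.

import numpy as np

M = np.array([500.0, 200.0, 80.0])      # "quality" of each centre (e.g. GDP)
T = np.array([[0.0, 1.5, 3.0],          # shortest travel times in hours
              [1.5, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
n = len(M)

# (1) weighted average travel time: lower values mean better accessibility
A = (T * M).sum(axis=1) / M.sum()

# (2) opportunities reachable within a 2-hour budget (node itself excluded)
budget = 2.0
D = np.array([(M * ((T[i] <= budget) & (np.arange(n) != i))).sum()
              for i in range(n)])

# (3) potential model with friction coefficient a = 1; self-potential omitted
a = 1.0
P = np.array([sum(M[j] / T[i, j] ** a for j in range(n) if j != i)
              for i in range(n)])

print("weighted avg time :", A)
print("daily accessibility:", D)
print("potential          :", P)
```

Note that the three measures need not rank nodes identically: a node near one very large centre can score well on the potential model yet poorly on the weighted average time, which is one reason the choice of measure matters in applied work.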
To scientifically analyze the connotation of accessibility and design measurement models accordingly, the spatial scale of accessibility must first be clarified, because the specific objects of measurement differ across scales. Within a region, accessibility reflects the difficulty of interaction between a city or area and other cities or areas, and its measurement emphasizes the basic attributes of nodes. Within the spatial scope of a city, accessibility mainly concerns the convenience with which a given group of people can use a given public service facility, and its measurement should pay more attention to the spatial distribution of social classes and groups and to methods of integration. Future research should focus on the following points. First, full attention should be paid to differences in natural background conditions; topography in particular has a decisive impact on travel time and travel cost, and the large difference between straight-line distance and actual travel distance is a question that scholars have long tended to overlook. Second, the connotation of accessibility needs further expansion and enrichment, incorporating notions such as the accessibility of regional culture, the removal of political barriers, the differentiation of social classes, the difficulty of Internet diffusion, and the elements of an individual's surrounding environment. Third, accessibility evaluation methods have not kept pace with a fast-changing economy and society; the development of information and communication technology in particular may change people's activities in time and space and affect how people understand and accept accessibility (for example, its risk and comfort dimensions). Beyond traditional measures of travel time and travel cost, future measurement methods should take these changes into account.", "The optimal population size of cities is a continuation of the discussion of moderate population. In social development, the demographic factor has a dual character: people exist both as producers of social material wealth and as consumers. There is a definite correlation between population size and development, hence the discussion of moderate population. The so-called moderate population is the most suitable population of a country or region; it aims to find an ideal population state between \"overpopulation\" and \"underpopulation\". This theory took shape after the mid-19th century and became popular in the Western world, especially in the 1920s and 1930s [1]. With the development of cities, population has continued to gather in them and the scale of urban land has kept expanding, so Western urban economists began to discuss the relationship between urban population size and urban economic benefits; the optimal urban population size received extensive attention and heated discussion from the 1950s to the 1970s [2]. China also discussed a similar issue in the 1980s: should the strategic focus of China's urban development be placed on large cities or on small towns? The school emphasizing small towns held that more small towns could reduce or even eliminate the gap between urban and rural areas, avoid the ills caused by the blind expansion of large Western cities, and provide employment for farmers through the development of township enterprises.
The school emphasizing large cities rested mainly on the view that \"the economic agglomeration and scale benefits of large cities are higher than those of small cities\", holding it an objective law independent of human will that the larger the city, the higher its efficiency. This debate stems from uncertainty about the optimal population size of cities; evidently, the optimal population size has very important guiding significance for urban development. So, is there a population size at which a city's benefits are maximized? How would that value be calculated? The British urban economist K. J. Button argued that an optimal city population size exists in theory, making the judgment with a cost-benefit curve of city scale (Figure 1) [3]. Figure 1. The cost/benefit curve of city scale. In the figure, AB is the average benefit curve: as the city expands, per-capita benefit rises rapidly at first, then the upward trend weakens, and finally it declines. MB is the marginal benefit curve, the benefit brought by each additional member of the city. AC is the average cost-of-living curve of the city, which tends to rise as the urban population grows and the urban area expands, though with a very small population it may decline somewhat at first; MC is the marginal cost curve, the expenses required by each additional member. P1 is the minimum reasonable size of the city: before P1, the cost of each additional resident exceeds the average benefit of urban residents, so cities with populations below P1 are uneconomical. P2 is the scale at which the per-capita net benefit of urban life is highest and the gap between AB and AC is largest, that is, the average benefit of urban residents is very high while the average cost is very low; this is the most ideal scale for existing urban residents. But at this point MB > MC: each additional resident still brings more benefit than cost, so the urban population should still grow. P3 is the scale at which the city's total net benefit peaks; social benefit is highest here, which is most ideal for decision-makers. Beyond P3, the cost of each new resident exceeds the benefit, but from the standpoint of residents' average benefit, AB still exceeds AC, so population may continue to move in. P4 is where AB = AC; if population growth cannot be halted here, the city will exceed the upper limit of its optimal size and become uneconomical. Although Button's cost-benefit curve shows that an optimal city population size exists in theory, it gives no method of measurement. Later, many scholars tried different methods to measure it: for example, the cost-benefit method put the optimal population size of Beijing at 11.64 million [4]; a two-way optimization method put the optimal city size of Guangzhou at 6.1 million [5]; and the possibility-satisfaction method put the moderate population of Jinan in 2020 at 4.5 million [6].
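As a toy illustration of how such figures can be derived, the following minimal sketch assumes purely hypothetical quadratic AB and AC curves and locates P1-P4 numerically; it mimics the structure of Button's argument rather than reproducing any published calculation, and P1 is taken here simply as the lower AB = AC crossing.

```python
# Minimal numerical sketch of Button's cost-benefit reasoning with assumed
# (hypothetical) quadratic curves: AB(p) average benefit, AC(p) average cost.
# P2 maximizes per-capita net benefit AB - AC, P3 maximizes total net benefit
# p*(AB - AC), and P1/P4 are the AB = AC crossings.

import numpy as np

p = np.linspace(0.1, 20.0, 2000)          # population in millions (assumed)
AB = -0.05 * (p - 8.0) ** 2 + 5.0         # average benefit per resident
AC = 0.03 * (p - 2.0) ** 2 + 2.0          # average cost per resident

net_avg = AB - AC                         # per-capita net benefit
net_tot = p * net_avg                     # total net benefit of the city

P2 = p[np.argmax(net_avg)]                # most ideal for existing residents
P3 = p[np.argmax(net_tot)]                # most ideal for society / decision-makers
cross = np.where(np.diff(np.sign(net_avg)))[0]
P1, P4 = p[cross[0]], p[cross[-1]]        # lower and upper AB = AC crossings

print(f"P1={P1:.2f}  P2={P2:.2f}  P3={P3:.2f}  P4={P4:.2f} (million people)")
```

With these assumed curves the ordering P1 < P2 < P3 < P4 emerges automatically, matching the qualitative sequence in the text; changing the coefficients shifts all four points, which previews the point made next about the sensitivity of such estimates.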
These calculations all rely on mathematical methods, establishing various mathematical models on complex index systems; but because the factors considered and the data used differ, the models yield different results even for the same city. In fact, the optimal population size of a city can only be a relative concept: different evaluation angles and standards yield different optimal scales. Moreover, the optimal scale varies with time: as technology changes, people's values, which constitute the evaluation standard, also change, and even the same standard can lead to different optimal scales at different times. This is reflected in Button's cost-benefit curve: because the shapes of the curves are not well determined, P1, P2, P3, and P4 have no definite values, and different curve shapes correspond to different optimal population sizes. When the urban population reaches a certain level, high population density and declining benefits cause the population to disperse, density falls, and the urban area expands; a new cost/benefit curve is then generated, creating new intersection points. Determining the optimal population size of a city is of great significance for transforming the mode of urban economic growth, formulating urban economic and social development strategy, and revising urban plans. It is therefore necessary to study the optimal size of a specific city within a given historical period and according to its specific conditions. However, a unified and universally accepted method for calculating optimal city size has not yet been found, and researchers in the new century will need scientific and advanced methods to solve the problem.", "The earth has nurtured human beings, and human beings are constantly exploring the earth. The earth contains endless scientific mysteries, so human exploration never reaches a limit. In a sense, the most intelligent thing in the world is not humanity but nature itself. Everything has its own way, and nature is the most harmonious. Human beings can only survive by adapting to natural change, grow wise by exploring nature's mysteries, and progress by mastering nature's laws. Whatever one studies or does, and whether one is male or female, old or young, anyone living on this earth must consciously or unconsciously accept the baptism of natural change. All the natural environments we come into contact with are related to the earth, and all are related to earth science. Faced with a series of question marks, people must first return to the source: how did the earth form? How was life born? Among the eight planets of the solar system, is the earth old or young? Why do human beings live only on the earth? At what stage of its growth is the earth, now nearly 4.6 billion years old? What are its development prospects? ... Is the earth itself alive? The earth is a living star: the mountains are its bones, the flowing water its meridians, the magma like its blood, and earthquakes the beating of its pulse. Whenever an earthquake strikes or a volcano erupts, the seemingly sleeping earth becomes violently active, displaying enormous energy. Where does this energy come from? How is it gathered and released?
Matter is the basis of energy. The earth is a giant with an average radius of 6371.004 km, a surface area of 5.11×10⁸ km², a volume of 1.083×10¹² km³, an average density of 5.518×10³ kg/m³, and a mass of 5.974×10²⁴ kg; it is composed of chemical elements. Of the 112 elements discovered so far, only 94 exist in nature; the rest, element 95 and beyond, appear only in artificial reactors. Even among the first 94, four (technetium Tc, promethium Pm, neptunium Np, plutonium Pu) were first obtained by artificial synthesis and are hardly seen in nature. The entire earth, then, is essentially \"directed\" by these 90-odd elements, which form molecules and aggregate into the minerals and rocks, the mountains and waters and endless treasures, that make up the earth and the whole material world. Will the earth's resources run out? From the standpoint of the indestructibility of matter, resources will not be exhausted; they only change from one form to another. When one material is used up, another replaces it; the cycling and succession of matter never ends. Many substances that seem useless today will become useful in the future. Take basalt, which can be seen everywhere: who would have thought it could be drawn into fibers as fine as silk floss, spun, and made into various special materials and products used in aerospace, aviation, military, transportation, fire protection, construction, and many other fields? Basalt has become an important new high-performance, non-polluting green material of the 21st century, making \"turning stone into gold\" a reality. Among the hundred-odd elements there are more than 1,700 radioactive isotopes (nuclides), constantly decaying with different half-lives. Nuclides with short half-lives died out before or during the early formation of the earth, while nuclides with extremely long half-lives play little role within the earth's present age; only nuclides with half-lives between 10⁶ and 10¹⁰ years, such as 40K, 235U, 238U, 232Th, 247Cm, and 244Pu, play an important role in the evolution of the earth [1], the most important being 235U. The decay energy they produce is 5.1×10²⁰ to 23×10²⁰ cal/a [2], and the cumulative energy since the earth formed is 5.85×10³⁰ to 8×10³⁰ cal [3]. Concentrated, these energies would suffice to melt the earth; but because energy accumulates over a long process and is released as it accumulates (in earthquakes and volcanic eruptions, for example), the earth remains relatively stable. Radiant energy is converted into thermal energy, mechanical energy, and so on, becoming the main driving force of earth movement (sea-land changes and the uplift, subsidence, accretion, and denudation of geological bodies...) and of material transformation. As decay proceeds, the mass of the parent isotopes decreases and is eventually exhausted; the corresponding radiogenic output weakens, the earth's internal energy diminishes, and the earth's vitality is affected in turn. More than 140 years have passed since Mendeleev established the periodic table of elements in 1869, and apart from vacant positions being filled by later generations, it has seen no major change.
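The half-life argument above can be checked with a few lines of arithmetic; this sketch is not from the original text, the half-lives used are standard reference values, and the 4.6-billion-year age of the earth follows the text.

```python
# Back-of-the-envelope check of the half-life argument: the surviving
# fraction of a nuclide after time t is N/N0 = 0.5 ** (t / T_half).
# Half-lives are standard values; Earth's age is taken as 4.6e9 years.

T_EARTH = 4.6e9  # years

half_lives = {       # years
    "40K":   1.25e9,
    "235U":  7.04e8,
    "238U":  4.47e9,
    "232Th": 1.40e10,
}

for nuclide, t_half in half_lives.items():
    frac = 0.5 ** (T_EARTH / t_half)
    print(f"{nuclide:>5}: {frac:.4f} of the initial amount remains")

# A nuclide with a 1e6-year half-life would retain 0.5**4600 of its initial
# amount, effectively zero; such short-lived "clocks" are extinct today,
# while 238U and 232Th still heat the earth after 4.6 billion years.
```

This is exactly why the text singles out half-lives between 10⁶ and 10¹⁰ years: shorter ones have decayed away entirely, while far longer ones release their energy too slowly to matter over the earth's lifetime.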
So, do elements other than those already discovered (synthetic elements aside) exist in nature? Different elements have different atomic weights and Clarke values; elements abundant on this planet are rare on another. Why do their abundances vary so greatly? How do their decay and fractionation properties differ? What effects do they have on the formation of resources and the evolution of climate and environment? Many radioactive elements are clocks for dating geological bodies, and the earth's age is measured by them; yet so far the oldest measurable samples found on earth are only about 4 billion years old, nearly 600 million years younger than the earth itself. What did the earth look like during those ancient 600 million years (4.6 to 4.0 billion years ago)? Is there no matter surviving from that time, or did the matter of that time fail to retain its radioactive clocks? What was the state of matter then? These remain mysteries. Everything has a beginning and an end. If the universe is more than 11 billion years old, then the earth, at only 4.6 billion years, is still in its prime; in the family of eight planets it may be the youngest, for only it has water, life, and volcanic activity. Volcanic activity is a symbol of a planet's vitality. Apart from Jupiter's moon Io, no other body shows present volcanic activity, indicating that their vitality has waned; extinct volcanoes show that volcanic activity occurred in the past, but it cannot be inferred from this that water and life existed wherever there was volcanic activity. Different planets have different stages of evolution and different life processes; even if water is found on some planets, life need not exist there. Earth-shaking eras: over the 4.6 billion years of the earth's evolution, multiple eras are recognized from ancient times to the present, such as the Archean, Proterozoic, Paleozoic, Mesozoic, and Cenozoic. Each era has its specific meaning and environment and marks an important dividing line between gradual and sudden change. The main bases for dividing these geological ages were biological markers and tectonic movements. Current research shows many new discoveries in the evolution from lower to higher organisms, and even in the origin of human beings, including many abrupt events (such as the Cambrian explosion) and a tendency for origins to be pushed ever further back in time. So, should some geological eras be re-divided? What should the markers and boundaries of division be? In the era before isotopic dating, organisms served as the main markers; with the advance of science and technology, many geological ages and events must now be determined by multiple indicators and comprehensive factors. In the long river of geological history there have been many earth-shaking eras in which nature changed fundamentally. Besides the geological eras already established, certain moments within a given era also deserve attention: though brief scenes in the long span of geological history, they represent important geological events and environmental mutations. In the Cenozoic, for instance, the uplift of the Qinghai-Tibet Plateau was a global geological event accompanied by times of great change.
As mentioned above, around 45 Ma the Indian plate collided with the Eurasian plate, the Qinghai-Tibet Plateau was uplifted, the continental margin dispersed, and the East Asian continental rift system and marginal seas (back-arc basins) formed in succession, basically establishing the modern natural pattern; this was the most important geological event of the early Cenozoic worldwide. Around 35 Ma, following the collision of the Indian and Eurasian plates, strong strike-slip and distortion occurred along the northern and eastern margins of the Qinghai-Tibet Plateau, forming the Hengduan Mountains nearly orthogonal to the strike of the plateau terranes, which suggests that the Qinghai-Tibet Plateau may also have been affected by the eastward subduction of the Arabian plate to the west. The push of the Arabian plate is in turn not unrelated to the extension of the East African Rift; and do the expansion of the East African Rift and the magma lake at its northern end not, as some Western scholars suggest, indicate that the African continent will split in two, giving the earth an eighth continent and a fifth ocean? The ice sheet of the Antarctic continent also began to form around 35 Ma, and major changes took place in the biological world. Around 13 Ma the world changed drastically again: tectonic activity was intense and volcanism spread across the globe. This period was the main formative period of the East African Rift and the East Asian continental rift system and an important uplift period of the Qinghai-Tibet Plateau; along the Tancheng-Lujiang (Tan-Lu)-Yitong-Yilan fault, which stretches for thousands of kilometers from south to north along the eastern edge of the Asian continent, a series of volcanic eruptions occurred almost simultaneously, with ages clustering around 13 Ma and magmas dominated by mantle-derived alkaline basalt. 2.6 Ma marks the beginning of the Quaternary, when the world entered a cold period and humans evolved. Throughout the earth's development, gradual change has always bred sudden change, and sudden change has resolved into gradual change; every change is accompanied by the evolution and mutation of geology, biology, climate, and environment, which form a complete unity, complementing and corroborating one another. These major geological events occurred in specific historical periods; what forces and factors, then, drive global geological events? At present the earth is in a relatively intense period of activity: not only do earthquakes, volcanic eruptions, and other tectonic movements occur frequently, but climate change is also in a period of high-frequency oscillation, and historically its amplitude has tended to grow while its cycles shorten. No wonder climate was so bitterly disputed at the Copenhagen Conference; the climate issue seems no longer a scientific question but a political one, and discussing climate no longer the business of scientists but of politicians.
People are generally concerned that climate warming will lead to a series of natural disasters, that the trigger of warming is the increase in carbon dioxide, and that this increase is caused by human emissions; it therefore seems necessary to investigate how humans emit carbon dioxide and who should bear the responsibility. On the surface such logic seems reasonable, but in fact the fundamental problem lies in climate change itself. It is true that increased carbon dioxide is an important factor in rising temperature, but climate change and warming are not caused by carbon dioxide alone; they are constrained by many factors, the principal ones depending on the earth itself. The earth is a giant system: the release of the earth's energy, changes in geological bodies, the migration and exchange of matter, the interaction of the lithosphere, hydrosphere, atmosphere, and biosphere at the earth's surface, the earth's position in nature and its relationship with the outside world (the universe), the amount of solar radiation received, and so on, are the essence of climate change. Consider that a single large volcanic eruption (such as that of Mount Pinatubo in the Philippines in 1991) releases tens of millions of tons of sulfur dioxide, carbon dioxide, and other greenhouse gases, and that dozens of eruptions of various sizes occur every year: how considerable the natural release of greenhouse gases must be! It is neither fair nor objective to place the responsibility for rising carbon dioxide on human emissions alone, least of all on emissions from developing countries. Nature has many ways of emitting carbon dioxide, and there were periods without human beings, such as the Cretaceous, when carbon dioxide concentrations were higher than today. As is well known, about two-thirds of the earth's surface is water and one-third land, and human settlement occupies only about half of that land; the scope of human activity thus covers less than 20% of the earth's surface, so its impact on the global climate is limited, at most adding fuel to the flames. Climate change has its own causes and laws, independent of human will. To say this is not to encourage humanity to emit carbon dioxide freely: conserving energy, reducing emissions, and pursuing a low-carbon economy benefit both the climate and socio-economic development, and should be humanity's conscious choice. Will history repeat itself? In the long course of history the earth has undergone endless changes: flat lands have risen into mountains, seas have turned into mulberry fields, the climate has swung between hot and cold... all of nature moves forward with its inherent frequency and pace. History is a mirror; nature and society often present things that are surprisingly similar and that repeat themselves. Although some creatures, such as the dinosaurs, are extinct and will not return, natural disasters and climatic changes will appear again and again, sometimes arriving quickly, sometimes slowly. Before nature, human beings seem small and helpless.
The best way is to continue extensive and deep scientific exploration, to understand nature and master the laws of natural change; the deeper people's understanding of nature, the stronger their ability to adapt to its changes, and the better they can turn passivity into initiative and become nature's masters. Of course, this is a long process.", "The origin of life is an eternal mystery and the most extraordinary event on earth. So far there is no sufficient evidence of life on any planet other than Earth. At the beginning of the universe, about 10 billion years ago, the basic elements of life, such as carbon, hydrogen, oxygen, nitrogen, sulfur, and phosphorus, were produced. Perhaps in the later evolution of galaxies, organic molecules such as amino acids, purines, and pyrimidines began to form and disperse into interstellar dust and nebulae; under certain conditions these molecules may have aggregated into biomacromolecules such as polypeptides, and then, through the evolution of the genetic code and several pre-biological systems, finally produced primitive life with a cellular structure. This series of evolutionary events probably occurred during the formation of the earth and is called the process of the origin of life [1]. We are not yet sure when or how life began. Generally speaking, the origin of life cannot be earlier than the formation of the solar system and the earth, so the process should have occurred between 4.6 billion years ago (the formation age of the solid earth) and 3.5 billion years ago (the age of the oldest fossils [2]). Seen from today, on the primitive earth the transformation of inorganic compounds into simple organic compounds (amino acids, purines, pyrimidines, etc.) and their polymerization into biological macromolecules (polypeptides, polynucleotides, etc.) involve chemical reactions that are not very complicated, whose possibility has been demonstrated by numerous laboratory simulations. Organic carbon compounds also occur in meteorites and cosmic dust as old as the earth and the solar system, further evidence that the early earth could have produced biological macromolecules. The question is: how did these biomacromolecules evolve into simple single-celled life? This step is the central event in the origin of life, and the \"gap\" between the non-living and the living is difficult to bridge. How did life originate? There have been many hypotheses since ancient times. In the Spring and Autumn Period 2,500 years ago, Laozi wrote in the \"Tao Te Ching\" that \"Tao begets one, one begets two, two begets three, and three begets all things\", that is, all things evolved slowly from few to many; this may be the earliest statement about the origin of life. Several influential hypotheses follow. The first is \"creationism\": the first chapter of the \"Old Testament\" writes that God created all things in the world within seven days, a concept generally accepted in the medieval West and, it must be said, still accepted by many people today. But creationism has no scientific basis and offers no scientific explanation of the origin of life. The second is \"spontaneous generation\", widely popular before the 19th century, which held that living things could arise from non-living matter at any time.
For example, the Greeks believed that insects were born from the soil, and the Egyptians believed that life came from the Nile; in ancient China there was the saying that rotting grass begets fireflies. In the mid-19th century the French microbiologist Pasteur refuted the theory of spontaneous generation completely with scientific experiments. The third is the \"theory of biogenesis\", also quite popular in the 19th-century West, which held that life is inherent in the universe; it is in essence agnostic. In the second half of the 20th century it developed into the current \"cosmic germ (panspermia) theory\". Even now many scientists believe that forming the enzymes (proteins) and genetic material necessary for life takes hundreds of millions of years, for which there was not enough time on the early earth; they therefore hold that life must have come to the earth from somewhere in the universe in the form of spores or other life forms. This idea has some basis: comets and some chondrites contain not only water ice but also organic compounds such as amino acids, terpenes, ethanol, purines, and pyrimidines. Life may have been produced on comets and brought to the earth, or, when comets and meteorites struck the earth, these organic molecules may have undergone a series of reactions that produced life. In 1859, with the publication of Darwin's \"Origin of Species\", biological science underwent unprecedented change, and a glimmer of light fell on the eternal mystery of life's origin. This is the modern \"theory of chemical evolution\", which holds that life evolved slowly from inorganic matter in some corner of the primitive earth. The chemical-evolution theory of the origin of life first gained experimental support from the American scholar Miller in 1953 [3]: he sealed methane, ammonia, hydrogen, and water in a flask, inserted electrodes, and passed electric discharges through the mixture; after several days a considerable quantity of amino acids had formed in the flask. This experiment carried human understanding of the origin of life a large step forward: it can be inferred that on the early earth, lightning may have synthesized inorganic molecules into organic molecules at ordinary temperature and pressure, which were further assembled into organic macromolecules, thereby giving rise to life. In 1967 the American scholar Brock discovered large numbers of thermophilic organisms in the hot springs of Yellowstone Park [4]; in 1977 Corliss found large numbers of thermophilic microorganisms at hot springs on the Pacific sea floor [5]. These discoveries added new evidence for the chemical-evolution theory of the origin of life. Submarine hot springs and terrestrial hot springs share many characteristics, such as high temperature, abundant reducing gases, and large populations of thermophilic microorganisms. This distinctive environment may resemble that of the early earth, with its high temperature and reducing atmosphere, and the earliest life forms may have been microorganisms adapted to high temperature. Modern molecular biology shows that some thermophilic microorganisms in hot springs carry ancient genes and are indeed root types of the tree of life.
Based on existing evidence, the process of the origin of life can be sketched roughly as follows: at the beginning of the earth's formation, the atmosphere was full of gases such as CH4, CO, CO2, NH3, N2, and H2; under the action of various energy sources, simple organic compounds (amino acids, purines, pyrimidines, etc.) were synthesized on the surfaces of heavy metals or clays (acting as chemical catalysts) and then polymerized into biological macromolecules (polypeptides, polynucleotides, etc.), which may have accumulated in hot pools formed by volcanic eruptions on the early earth. These macromolecules self-selected, then self-organized, self-replicated, and mutated to form nucleic acids (genetic material) and active proteins; compartmentalizing structures (such as lipid membranes) arose in step; and finally, gene-controlled metabolic reactions supplied energy for gene replication and protein synthesis. In this way a self-replicating protocell enclosed by a biomembrane was produced. This protocell may have been heterotrophic or chemoautotrophic, and may have resembled the modern thermophilic archaea living near hot springs. Many key steps in this model of the origin of life are still poorly understood and cannot be reproduced in the laboratory, such as how organic molecules self-select, how the genetic code originated, how compartment structures (the cell membrane and the membrane structures within the cell) formed, and how the complex metabolic processes inside the cell arose. We are still a long way from solving the ancient mystery of the origin of life. Research on the origin of life is a comprehensive subject involving biology, chemistry, geology, astronomy, and many other disciplines.", "Introduction: terrestrial plants are the largest and an indispensable group of primary producers in the modern earth ecosystem; they are vital to the stability of the ecosystem and inseparable from human survival, yet little is known about how they arose. The emergence of terrestrial plants was an important step and a major event in the co-evolution of organisms and environment on earth: it opened the way for the further development of the plant kingdom, provided the necessary food chain for the evolution and development of the animal kingdom, and greatly improved and optimized the natural environment, eventually leading to the establishment and refinement of today's terrestrial ecosystem. The landing of terrestrial plants is an enduring mystery of paleobotany [1], and deciphering it involves chiefly: which plants landed first? When did they land? What were the leading factors driving the landing? How did plants develop and occupy the different terrestrial ecological domains after landing? And by what process did today's vegetation form? The prelude to the landing of terrestrial plants: before terrestrial plants emerged, lichens played an important role in transforming the terrestrial environment.
Lichens are among the most widely distributed pioneer organisms on the earth. A lichen is a symbiotic complex formed by the intimate combination of a fungus with a green alga or cyanobacterium, with stable morphology and special structure. However, because they are fragile and difficult to preserve as fossils, fossils of lichens, and of the fungi closely related to them systematically, are quite rare. At present, the earliest lichen fossils in the world come from Weng'an, Guizhou, China, where lichen-like fossils are preserved in phosphorite beds of the Doushantuo period (about 635-551 million years ago) [2]. This discovery pushed the geological record of lichen fossils back by about 200 million years and shows that fungi had already formed symbiotic relationships with photoautotrophs 600 million years ago; it suggests that before land plants came ashore, lichens may already have been transforming the rocky land surface and had become pioneers in the establishment of terrestrial ecosystems. It was the appearance of these lichens, and their partial transformation of the terrestrial environment, that enabled terrestrial plants to complete their landing journey. The forerunners of the landing of terrestrial plants In modern terrestrial ecosystems, bryophytes are an inconspicuous group of plants. Because they are small and lack vascular tissue, their probability of preservation as fossils is very low. Bryophyte fossils were first found in the Late Devonian, but studies of Early Paleozoic microfossils show that before the appearance of terrestrial vascular plants, a bryophyte-like plant had already landed successfully, had become the main producer of early terrestrial ecosystems, could live in a variety of terrestrial environments, and was a pioneer in the transformation of terrestrial ecological environments. Cryptospores are a type of organic-walled microfossil believed to be produced by bryophyte-grade plants, and the search for the earliest cryptospores has become an important means by which paleontologists probe the origin of terrestrial plants. The earliest confirmed cryptospore fossils come from strata about 460 million years old [3], and Ordovician-Silurian cryptospores were diverse and distributed worldwide. From studies of the development and distribution of cryptospores it can be inferred that bryophyte-like terrestrial plants had appeared at least by the Middle Ordovician and persisted into the earliest Devonian; then, with the rise of terrestrial vascular plants in the Devonian, they gradually withdrew from the stage of history. The true landing and spread of terrestrial plants The Ordovician-Silurian was a key period for the origin and early evolution of terrestrial vascular plants, and finding and studying fossils of terrestrial vascular plants of this age has long been one of the hotspots of international paleobotanical research. To land and survive on land independently and over the long term, a plant must meet three basic conditions [4]: a support system for the plant body and a transport system for water and nutrients; organs allowing reproduction independent of water bodies; and organs for gas exchange that also limit evaporative water loss. Only plants meeting these three conditions could complete a true landing; such plants are called terrestrial vascular plants.
Cooksonia is recognized as one of the representative members of early terrestrial vascular plants, and its earliest fossil record comes from strata about 425 million years old [5]. The plant branched dichotomously several times and bore terminal sporangia containing in situ trilete spores; stomata developed on its cuticle, and its vascular tissue was composed of annular tracheids. However, a series of questions, such as when the earliest terrestrial vascular plants appeared, what biological characteristics they had, how vegetation evolved at the early stage, and how vegetation formation related to the establishment of terrestrial ecosystems, are far from resolved. According to the existing fossil evidence, before 420 million years ago terrestrial vascular plants were small, structurally simple, and few in species, indicating that for a long period after landing, constrained by the harsh conditions of the land and by their own simple organization, terrestrial vascular plants evolved very slowly; by about 400 million years ago, however, the diversity of terrestrial vascular plants had increased explosively, almost comparably to the rapid evolution of marine organisms in the Cambrian Explosion [6]. Early terrestrial vascular plants comprised three main groups [4]: the rhyniophytes, the zosterophyllophytes, and the trimerophytes. Rhyniophytes were small, simply and dichotomously branched, with terminal, spherical or ellipsoidal sporangia, and were the main plant group of early terrestrial ecosystems. Zosterophyllophytes bore sporangial spikes at the tops of their axes; their sporangia were mostly kidney-shaped with short stalks at the base, and they are usually regarded as the sister group of the modern lycophytes. Trimerophytes were more complex than rhyniophytes and are regarded as ancestral to many important plant groups, such as the true ferns, the progymnosperms, and the sphenopsids. Apart from the lycophytes, the most prominent feature of early vascular plants was the absence of leaves, so the origin of leaves has become a very important question. Eophyllophyton, from the Early Devonian of Yunnan, China, has clear leaf-like structures [7]. The appearance of leaves, however, lags the earliest fossil records of terrestrial vascular plants by nearly 50 million years, showing that leafless plants remained viable for a long time. The appearance of leaves may have been triggered by falling atmospheric carbon dioxide levels; leaves greatly increased the photosynthetic capacity of plants and raised the primary productivity of the entire terrestrial ecosystem. By about 390 million years ago, as plant species multiplied, plant body size had also changed enormously: taller, tree-like plants had appeared, and forests had formed in relatively humid alluvial plain areas [8]. The establishment of early forest ecosystems provided excellent conditions for the survival and reproduction of terrestrial faunas and actively promoted the subsequent landing of tetrapods. As terrestrial vascular plants continued to expand into harsher ecological domains, water became one of the most important factors limiting plant reproduction: early terrestrial vascular plants all reproduced by spores, whose reproduction remained tied to water. By 385 million years ago, plants that reproduced by seeds had appeared.
The emergence of seeds brought major changes to plant evolution, giving plants far greater adaptability and in turn affecting the survival and reproduction of animals. With the successive appearance of tree-like plants and seed plants, plants had occupied even harsh terrestrial environments by about 370 million years ago; the diversity pattern of terrestrial vascular floras was basically in place, and terrestrial vascular plants had become one of the most important forces shaping the earth's terrestrial ecosystems. Research Prospects At present our understanding of the landing journey of terrestrial plants is only conceptual, and many links in the chain still require definite plant fossil evidence. The Ordovician-Devonian is the key interval for deciphering the mystery of terrestrial plant landings, and fossil materials of various kinds from this interval can provide important evidence for it. Through comprehensive study of different types of plant fossils around the world, we can build a clear picture of the characteristics and appearance of plants at each stage of the landing process and finally decipher the mystery of the landing journey of terrestrial plants. China has the fossil and stratigraphic resources needed to decipher this mystery and should be able to make breakthroughs in this field. The deciphering of the \"Mystery of China\" [1] has become one of the hotspots in the study of terrestrial plant landings, and exploring the evolution of plants in China during the Silurian-Devonian is regarded as one of the important goals of the international paleobotanical community. China's Ordovician-Devonian plant fossil materials are rich and remarkable; studied in depth, they should reveal the formation, differentiation, and radiation of early vegetation and the formation of the different phytogeographic provinces, and contribute to deciphering the mystery of the landing of terrestrial plants.", "Animals are the most complex and advanced life forms on earth. Yet after life appeared on the earth, about 3 billion years of slow evolution passed before animals appeared, near the end of the Neoproterozoic more than 600 million years ago; rapid evolution then followed in the early Cambrian, 530 million years ago. In the history of life, the evolutionary event in which almost all the basic animal groups originated within a short interval of the early Cambrian is called the \"Cambrian Explosion\". The origin of animals and the process and mechanism of the Cambrian Explosion are listed among the top ten mysteries of natural science today and have long been major scientific questions attracting wide attention and continuous exploration. The posing of the Cambrian Explosion problem The puzzle posed by the Cambrian explosion of animals was noticed by Darwin and by the geologists and paleontologists of his time as early as the 1830s. This is clearly stated in the tenth chapter of Darwin's \"Origin of Species\": \"There can be no doubt that the Cambrian and Silurian trilobites evolved from some crustacean, and this crustacean must have lived long before the Cambrian period....
If my theory is true, it is indisputable that before the lowest Cambrian strata were deposited a considerable period elapsed, as long as, or probably far longer than, the entire interval from the Cambrian to the present day.... But why have no fossil-rich strata from before the Cambrian been found? I cannot give a satisfactory answer.... This phenomenon, at present inexplicable, may indeed be urged as a strong argument against the theory.\" However, no one regarded the Cambrian Explosion as a real evolutionary event until the middle of the 20th century: the sudden appearance of diverse animal fossils in the Cambrian was always explained either by the absence of Precambrian strata or by the failure of Precambrian fossils to be preserved or discovered. The change in understanding can be traced to 1948, when the American stratigraphic paleontologist Cloud pointed out that, as far as the geological record shows, the appearance of the various multicellular animals in the Cambrian was genuinely rapid, and used the term \"eruptive evolution\" to emphasize the rapid, large-scale radiation of animals in the Cambrian [1]. In 1956 the German paleontologist Seilacher, on the basis of trace fossils of the Precambrian-Cambrian transition, pointed out that the explosive Cambrian evolution was real [2]. In 1968 Cloud [3] further clarified that the Cambrian Explosion was a real and rapid radiative evolutionary event: regardless of whether it took place within a few million years or somewhat longer, it was abrupt relative to geological time. After that, the concept of the Cambrian Explosion gradually attracted attention in paleontology. Since the 1970s, a variety of mineralized animal shells and skeletal fossils (often called \"small shelly fossils\") have been found in strata below the first Cambrian trilobites; the rapid skeletalization they record deepened the mystery of the Cambrian Explosion further. With the continuing discovery of Precambrian and Cambrian fossil assemblages around the world, and especially with the discovery and in-depth study of the late Precambrian Ediacaran biota, the Cambrian Chengjiang biota, and the Burgess Shale biota, the Cambrian explosion of animals has been accepted by more and more paleontologists as a real evolutionary event [3]. At the same time, the Cambrian explosion of animals is supported by molecular phylogenetic trees established through studies of 18S rDNA. The nature of the Cambrian Explosion On the basis of research on the Burgess Shale fossils, Gould of Harvard University proposed a new model of biological evolution in his 1989 book \"Wonderful Life\", emphasizing the revolutionary significance of the explosive Cambrian evolution of animals. To capture the importance of the Cambrian Explosion and recast the model of biological evolution, Gould proposed distinguishing \"disparity\" from \"diversity\": diversity measures the number of species, while disparity measures the degree of difference among body plans. In this view, the history of biological evolution after the Cambrian is a history of decreasing disparity and increasing diversity (Figure 1).
This evolution model outlines an inverted evolutionary tree, exactly the opposite of Darwin's traditional tree of life. That is to say, over the more than 500 million years from the Cambrian Explosion to the present, most of the basic body plans have disappeared one after another; only some have continued to evolve, and within these surviving branches the diversity of organisms has gradually increased. Figure 1. Two models of biological evolution: a. the traditional Darwinian gradualist model; b. Gould's Cambrian Explosion model. So far, the essential characteristics of the Cambrian Explosion revealed by the fossil evidence can be summarized as follows. (1) All animal body plans, including that of the vertebrates, appeared rapidly within a short geological interval in the early Cambrian. A body plan (bauplan) is the suite of anatomical features that reflects the spatial arrangement of the organs of the animal body. (2) All the animal phyla of the modern earth (a phylum being the highest-rank group comprising all organisms sharing the same body plan) had already appeared in the Cambrian, yet they are only part of the body plans that appeared then; many bizarre Cambrian body plans became extinct soon afterwards or died out gradually. (3) A complex ecosystem similar to that of the modern ocean was established in the early Cambrian: animals had occupied different tiers and habitats within the seafloor sediments and the overlying seawater and had built complex food chains that included giant carnivores. The Cambrian Explosion is therefore no longer understood simply as an explosion of animal body plans, but also as an event of ecological expansion and rapid establishment of complex ecosystems, whose importance can be compared only with the origin of life and the origin of intelligence. The explosive Cambrian evolution of animals poses the following difficult questions for evolutionary biologists. (1) Why did all the animal body plans appear rapidly in the Cambrian? That is, after more than 3 billion years of slow evolution, why did all kinds of complex animals appear suddenly in the early Cambrian? (2) Why did no new animal body plans appear after the Cambrian? That is, why did animal body plans become evolutionarily conservative after the Cambrian? (3) Since the body plan is established at a specific stage of embryonic development, the \"phylotypic stage\", the conservatism of body plans after the Cambrian raises a further question: why is the phylotypic stage of animal embryonic development evolutionarily conserved? The origin of animals before the Cambrian Explosion Every evolutionary event has its own process of occurrence and development. Even if the Cambrian explosion of animals was a real evolutionary event, we still cannot suppose that animals appeared on the earth out of nothing; animal ancestors must have existed before the Cambrian. Molecular phylogenetic trees and molecular clock studies support the divergence of the major animal groups in the Precambrian. Finding the last common ancestor of animals has therefore become a difficult problem for paleontologists, for no one knows what the earliest animal ancestors looked like.
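As a point of reference for the molecular clock estimates invoked above, the basic relation behind such dating is worth stating (this is the standard textbook formula, not one given in this text, and it assumes a strict clock whose rate has been calibrated against fossils of known age):

\[
t = \frac{d}{2r}
\]

where d is the genetic distance between homologous sequences of the two lineages (substitutions per site), r is the substitution rate per lineage (substitutions per site per million years), and t is the time since divergence; the factor of 2 arises because both lineages accumulate changes independently. For illustration, d = 1.2 and r = 0.001 give t = 600 million years, a Neoproterozoic divergence; this is how molecular data can place the splits of the major animal groups in the Precambrian even though their fossils first appear in the Cambrian.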
Given that the animal affinities of the known Ediacaran fossils, which flourished at the end of the Precambrian, are widely doubted, we cannot rule out the hypothesis that animals already existed in the Precambrian but that these primitive ancestors were very small, or so strange in form that we fail to recognize them. Developmental biologists and paleontologists have proposed specific models of these primitive ancestors. In 1995 a hypothesis proposed by the American evolutionary developmental biologist Davidson and colleagues [6] attracted the attention of paleontologists. Davidson and colleagues argued that the life cycle of the primitive animal ancestors had a two-stage character, with adults transforming rapidly from trochophore-like larvae through indirect development. On this hypothesis, the ancestors of animals are inferred to have been inconspicuous trochophore-like forms, which may have existed in the Precambrian and which possessed \"set-aside cells\" capable of developing into adults. The hypothesis reminded paleontologists that Precambrian animals may have existed as trochophore-like larvae that were constrained by the environment and did not develop into adults. Such Precambrian larva-like animals would have been small and difficult to preserve as fossils, which would explain the absence of a Precambrian animal fossil record; once environmental conditions permitted, large adults could arise rapidly through indirect development. The hypothesis is thus also consistent with the Cambrian Explosion. In 1997 the paleontologists Fortey and colleagues put forward a similar hypothesis, the \"body size increase hypothesis\": the ancestors of animals lived before the Cambrian as very small individuals in the crevices between the sand grains of marine sediments, and the Cambrian explosion of animals simply reflected an increase in animal body size as oxygen levels rose. The hypothesis of microscopic Precambrian animal ancestors was opposed, however, by Budd and Jensen [7]. Through morpho-functional analysis they argued that the earliest bilaterally symmetrical animals must have had a certain body size in order to maintain tissues and organs such as a body cavity, a vascular system, complex musculature, and supporting structures. Moreover, one of the main groups of protostomes, the molting animals (ecdysozoans), lacks a two-stage life cycle, which also calls Davidson's planktotrophic-larva hypothesis into question. If the bilaterian ancestor had been minute, then the descendants of this simple, tiny ancestor (e.g., platyhelminths, nematodes, rotifers) should sit at the base of the bilaterian tree, which contradicts the molecular tree. On this view, the ancestor of the bilaterians should have been a large, coelomate creature of complex form, ready for the Cambrian Explosion. In fact, the abundant and varied phosphatized embryo fossils discovered in terminal Precambrian rocks of South China [8], together with fossils of animal resting eggs and cysts [9], provide reliable fossil evidence bearing on the hypothesis of microscopic Precambrian animal ancestors. Another consideration is that, since all the bilaterian stem groups left fossils during the Cambrian Explosion, there is no reason to doubt that the common ancestor of the bilaterians existed before the Cambrian Explosion. Attention has therefore also turned to the trace fossil record.
Although trace fossils could also have been made by ciliate protists or cnidarians, bilaterians possess compressible body cavities and the capacity for locomotion, so trace fossils are considered a reliable fossil record of bilaterally symmetrical animals. Judging from the Precambrian trace fossils discovered around the world, the earliest traces appear no earlier than 560 million years ago, and this may mark the earliest appearance of bilaterally symmetrical animals. Because the Ediacaran biota flourished at just this time, Budd and Jensen suggested that some members of the Ediacaran biota may have been primitive bilaterians: first, some Ediacaran organisms share features common to cnidarians and bilaterians (certain features of radial symmetry); second, certain large cnidarian-grade Ediacaran forms may represent primitive types predating the appearance of bilateral symmetry. However, the Ediacaran fossils discovered so far still lack the anatomical characters of bilaterally symmetrical animals. Finding animals that predate the Cambrian Explosion therefore remains an important task for paleontologists. How to decipher the Cambrian Explosion At present the scientific community approaches the origin of animals and the process and mechanism of the Cambrian Explosion from two directions: the biology itself and the environmental background. From the biological side, the Cambrian explosion of animals must be explained at several levels, including genes, development, physiology, morphology, and ecology; what is needed is a comprehensive hypothesis built from multiple perspectives. At the same time, the survival and evolution of organisms are closely tied to the environment: ever since life appeared on the earth it has participated in the changes of the earth's lithosphere, hydrosphere, and atmosphere, and those environmental changes have in turn affected the evolution of life, so organisms and environment have co-evolved. That life evolved slowly for about 3 billion years and did not undergo rapid evolution until the end of the Neoproterozoic is obviously inseparable from the huge environmental changes the earth underwent during that interval. With improved techniques and methods, more and more geologists (structural, geochemical, geophysical, sedimentary, and environmental geologists, among others) have joined the study of environmental change before and after the Cambrian Explosion, and such research has become a multidisciplinary hotspot in the geosciences. The assembly and breakup of the Rodinia supercontinent in the early Neoproterozoic; the subsequent prolonged glaciations of the \"Snowball Earth\", with the attendant rise of atmospheric oxygen and dramatic swings in atmospheric carbon dioxide; and changes in the physical and chemical conditions of the ocean (temperature, salinity, and the concentrations of various trace elements, among others) may all bear directly on the origin of animals and the great radiation of multicellular organisms. However, the causal links between these environmental changes and the origin of animals and the Cambrian Explosion still need to be demonstrated by more reliable methods, on a global scale and within a high-precision time framework.
From this point of view, multidisciplinary research on a global scale is the best way to explore and solve the mystery of the Cambrian Explosion.", "Introduction The field of evolutionary developmental biology (\"evo-devo\"), which has emerged over the past decade or so, has built a bridge between the study of biological form, embryonic development, and gene regulation, and has provided a new theoretical framework for studying the nature of life and biodiversity in the 21st century. Paleontology plays a unique role in it: it can reveal the origins of the major biological groups and the historical order in which major characters appeared; it can fill the gaps that living organisms leave in our picture of biodiversity; it provides the macroscopic spatio-temporal coordinates and a unified evolutionary framework for reconstructing the history of life and exploring the origin of biodiversity; and it can supply new integrative problems for disciplines such as molecular biology and developmental biology. The fossil record shows that higher-level groups such as the vertebrates, the jawed vertebrates, the bony fishes, the lobe-finned fishes, and the tetrapods originated in succession during the Paleozoic, 530-380 million years ago, laying the foundation for the radiation of terrestrial vertebrates [1]. Fossils of early vertebrates, represented by forms such as Ichthyostega and Guiyu, fill the morphological gaps between the major groups of living vertebrates (jawless and jawed vertebrates, cartilaginous and bony fishes, fishes and tetrapods); they help clarify the origin and order of appearance of the important vertebrate characters, and thus provide indispensable information for solving the mysteries surrounding these major events in the history of life. The origin of vertebrates The vertebrates, the cephalochordates, and the urochordates together constitute the phylum Chordata, and in the exploration of vertebrate origins the relationships among these three subphyla are among the questions of greatest concern. It is generally accepted that vertebrates are more closely related to cephalochordates than to urochordates. However, because of the huge morphological gaps between the living members of the three subphyla and the scarcity of early fossils, we long lacked clear knowledge of the order in which the important anatomical features appeared during the origin of vertebrates. The discovery of Haikouichthys and Myllokunmingia in the Chengjiang biota not only provided a fossil record of the earliest vertebrates, indicating that vertebrates had begun to diverge in the Cambrian ocean 530 million years ago, but also showed us the combination of characters that the most primitive vertebrates may have possessed [2]. The exquisite details of cartilaginous skull and soft anatomy displayed by Haikouichthys and its relatives laid a foundation for further comparative anatomical research. The discovery of early cephalochordate and urochordate fossils in the Chengjiang biota has filled the morphological gaps between the three living chordate groups from another direction and provided more comparative information for the study of vertebrate origins. Nevertheless, because the fossil record is incomplete and the soft-tissue information preserved in Early Cambrian fossils is ambiguous, research on the origin of vertebrates is still full of uncertainties.
We need to discover more well-preserved Cambrian chordates, especially vertebrate fossil material, and we also need taphonomic studies to help us interpret more accurately the anatomical features the fossils present. The origin of jawed vertebrates The jawed vertebrates (gnathostomes) are among the most successful biological groups on the earth today; they include the cartilaginous fishes and the bony fishes (ray-finned fishes, coelacanths, lungfishes, and tetrapods) and account for more than 99.7% of living vertebrate species [3]. Thanks to jaws, vertebrates left behind a simple filter-feeding existence and took up active predation, opening up far wider living space. However, because the living jawless vertebrates are few and highly specialized, the origin of the jaw, and how jawless vertebrates gradually acquired the other key characters, such as paired appendages, paired nostrils, the horizontal semicircular canal of the inner ear, and cellular bone, have long been hotly debated topics, and many questions remain unresolved. The latest results of developmental and molecular biology show that the origin of the jaw is closely linked to the origin of the paired nostrils of jawed vertebrates [4]. In the early embryonic development of jawed vertebrates, the premandibular neural crest migrates rostrally, invading the space between the two nasal sacs and Rathke's pouch; in lamprey development, by contrast, the nasal sac and the hypophyseal tube develop from a single placode, the nasohypophyseal plate, so the forward migration of the premandibular neural crest is blocked by this plate and can only proceed along its ventral surface, eventually forming the upper lip. In this sense, the separation of the two nasal sacs from each other and from the hypophyseal system was the most decisive evolutionary event preceding the origin of the jaw, and it directly facilitated that origin. Regrettably, however, fossils have so far been unable to provide strong supporting evidence for this hypothesis. The extinct galeaspids are more closely related to jawed vertebrates than to lampreys. Galeaspids are one of the three most diverse groups of the armored jawless fishes (ostracoderms), but their distribution is limited to China and northern Vietnam [1]. In-depth comparative anatomical study of the galeaspid braincase, clarifying key internal features such as the distribution of the cranial nerves, the hypophyseal system, the inner ear, and the nasal sacs, may provide key new evidence for exploring the origin of the jaw. The early divergence of jawed vertebrates and the origin of bony fishes In traditional evolutionary classification, the jawed vertebrates are divided into the placoderms, the acanthodians, the cartilaginous fishes, the bony fishes, the amphibians, the reptiles, the birds, and the mammals; the placoderms and acanthodians became extinct in the Paleozoic, while amphibians, reptiles, birds, and mammals have four limbs and are collectively known as the tetrapods, or terrestrial vertebrates. Since the 1960s, cladistics has risen within evolutionary biology, and the new systematics requires that a natural group include an ancestor and all of its descendants.
Therefore, tetrapods as traditionally defined should be included within the bony fishes (Osteichthyes). On this classification, the jawed vertebrates are divided into the placoderms, the acanthodians, the bony fishes, and the cartilaginous fishes. The Silurian to Early Devonian was the initial stage of jawed vertebrate evolution, but globally most Silurian jawed vertebrate fossils are known only from microfossils (such as the scales of acanthodians or cartilaginous fishes), so understanding of the transitional forms among the four classes of jawed vertebrates long remained at the stage of \"blind men feeling an elephant\", and exploration of their early divergence stagnated. In recent years a series of fossil discoveries from China, above all the report of the Silurian Xiaoxiang vertebrate fauna represented by Guiyu [5], has provided rare fossil material for exploring the early divergence of jawed vertebrates and the origin of bony fishes. Early bony fishes represented by Psarolepis and Guiyu combined characters of bony fishes with those of other jawed vertebrate groups; they go a long way toward filling the morphological gaps among the four major jawed vertebrate groups and provide new evidence that prompts us to re-examine the early divergence of jawed vertebrates and the origin of bony fishes [6, 7]. In addition, with transitional forms continuing to emerge and some important anatomical characters being reinterpreted, the academic community has begun to question the monophyly of the placoderms and of the acanthodians [8]. The discovery of more, and more complete, early jawed vertebrate material in Silurian strata will therefore help to test whether the placoderms and acanthodians are monophyletic, and will also help in exploring the origin of bony fishes. The earliest undisputed fossils of cartilaginous fishes (teeth or nearly complete individuals) come from the Early Devonian [9]. Silurian cartilaginous fish fossils are isolated fin spines or scale microfossils; although these show some paleohistological features peculiar to cartilaginous fishes, no Silurian braincases or teeth of cartilaginous fishes have been found, so the taxonomic position of these materials still needs corroboration by more data. Silurian cartilaginous fish fossils are mostly distributed in Asia. The mongolepids are one important representative: they were first discovered in western Mongolia and later in South China and the Tarim Basin. The sinacanthids, found in South China and Tarim, are another important representative of early cartilaginous fishes. Owing to the lack of complete material, the true affinities of these Silurian cartilaginous fish fossils remain in considerable doubt, and their paleohistological characters await further description. In addition, the relationship between the mongolepids and the sinacanthids needs to be clarified, and fixing the taxonomic positions of the mongolepids and the other Silurian groups will directly affect estimates of the time at which the cartilaginous fishes and bony fishes diverged. The early divergence of the lobe-finned fishes and the origin of tetrapods In the traditional classification, the lobe-finned fishes include only those fishes with lobed paired fins; today there survive only six species of lungfishes (four African, one Australian, and one South American) and two species of coelacanths (the African Latimeria chalumnae and the Indonesian Latimeria menadoensis).
Since tetrapods derive from lobe-finned fishes of about 380 million years ago, cladistic classification places them within the subclass Sarcopterygii. After hundreds of millions of years of evolution and repeated extinction events, huge morphological gaps separate the three living groups of the Sarcopterygii (lungfishes, coelacanths, and tetrapods), and studies of living organisms alone can hardly resolve their interrelationships. Fossils of early lobe-finned fishes have played an indispensable role in the exploration of how fishes came ashore: they fill in the \"missing links\" and supply empirical data for clarifying the evolutionary relationships among the major lobe-finned groups and the order in which important characters evolved. The early Early Devonian was a critical period for the early divergence of the lobe-finned fishes, and the fossil material found in the Xishancun and Xitun Formations at Qujing, Yunnan Province (the Xitun vertebrate fauna) records this evolutionary process well. Over the past 30 years, continuing discoveries and detailed studies of sarcopterygian fossils from the Xitun vertebrate fauna have greatly changed the traditional understanding of sarcopterygian phylogeny, provided a new perspective and empirical evidence on the divergence of the living sarcopterygian groups, and revealed South China as the center of origin and early divergence of the sarcopterygians. However, since Youngolepis appears at the very beginning of the Early Devonian, the divergence of the three major sarcopterygian lineages should already have been completed around the Silurian-Devonian transition, and the origin of the sarcopterygians must be traced back to an even earlier age. The landing of fishes was not a simple matter of growing four legs and toddling ashore: to pass from life in water to life on land, these pioneers among terrestrial vertebrates required a series of wholly new transformations of their body structure. For a long time, however, paleontologists' understanding of the fish landing process rested mainly on studies of the Late Devonian Eusthenopteron and Ichthyostega. The problem is that Ichthyostega is so specialized, and so many \"missing links\" separate it from Eusthenopteron, that the origin of tetrapods was hard to explain. In the past 20 years, paleontologists have made a series of breakthrough discoveries of Devonian tetrapods and near-tetrapods [10], which have given us a vivid picture of this evolution and have thoroughly changed our traditional understanding of how fishes came ashore. The new finds tell us that some important tetrapod characters appeared while these animals still lived in water; the appearance of the digits, for example, was not originally an adaptation to life on land, but may have helped them lift the head out of the water to breathe air. Studies of the origin of the tetrapod limb have provided outstanding examples of fossil, embryological, and gene expression data corroborating one another. In-depth study of Kenichthys established the homology between the internal nostrils (choanae) of tetrapods and the posterior external nostrils of fishes, and showed that Kenichthys lies at the transitional stage from external to internal nostril.
Although the upper jaw of Kenichthys still consists of the maxilla and the premaxilla, the two bones do not meet: there is a gap between them, and this gap marks the position of the posterior external nostril (or incipient internal nostril) of Kenichthys. This means that in sarcopterygian evolution the maxilla and premaxilla split apart and later reconnected, opening a channel for the nostril to \"drift\" and providing evidence for the \"drift\" hypothesis of the origin of the internal nostril. Many puzzles remain concerning the anatomical changes that accompanied the origin of tetrapods. We now have reasonable hypotheses for why the forelimbs and shoulder girdle evolved as they did, but for lack of sufficient fossil clues we still have no satisfactory explanation for the origin of the hind limbs and pelvic girdle. The character changes indicated by the fossil record also need to be tested against embryological and gene expression data: we do not know, for example, what developmental mechanism reconnected the maxilla and premaxilla during the origin of the internal nostril, a question closely tied to developmental biology that requires the joint participation of evolutionary and developmental biologists. In addition, our understanding of the environmental constraints on the origin of tetrapods is still very limited; we need to know in detail the distribution and dispersal patterns of the early tetrapods, and the selective pressures acting at each stage of the key anatomical changes. It can be expected that new fossil discoveries and interdisciplinary interaction will tell us a more complete story of how fishes came ashore.", "Introduction Although the hypothesis that birds originated from dinosaurs is now widely accepted, several related questions remain unsolved. Which group of dinosaurs is most closely related to birds? When did birds and dinosaurs first diverge? Are the three remaining fingers of birds equivalent to the second, third, and fourth fingers of primitive tetrapods, or to the first, second, and third? What was the most primitive feather structure, and in which group did it first appear? Did bird flight originate in the trees or on the ground? When, and how, did the warm-bloodedness of birds arise? These questions involve paleontology, ornithology, evolutionary developmental biology, and other disciplines; they are hot topics today and will remain difficult scientific questions for years to come. The origin of birds The scientific hypothesis that birds originated from dinosaurs was put forward by Huxley as early as 1868. After nearly half a century of silence it was revived through the efforts of Ostrom and others, and since the 1980s it has gradually become a theory widely accepted by scholars around the world; the discovery of feathered dinosaur fossils in China undoubtedly added fuel to this acceptance. That birds arose from a small theropod within the saurischian dinosaurs has become the consensus of most scholars. However, controversy remains over which dinosaurs are most closely related to birds: the dromaeosaurids, the troodontids, the oviraptorosaurs, or some combination of them are generally considered the closest relatives of birds.
However, phylogenetic analyses by different scholars often yield different results, and newly discovered dinosaur species keep challenging existing hypotheses [1, 2]; the search for the closest ancestors of birds is therefore destined to be a long and deepening process. Another problem in research on bird origins is the debate over the homology of the avian fingers. Traditional paleontological evidence supports the view that the three remaining fingers of birds correspond to the first, second, and third fingers of primitive tetrapods, whereas embryological studies mostly conclude that birds retain the second, third, and fourth fingers of the ancestral type. Although this \"contradiction\" does not pose a real challenge to the hypothesis that birds originated from dinosaurs, as some scholars have claimed, how to explain it is undoubtedly a major academic question. Some scholars have proposed a \"frame shift\" during embryonic development in an attempt to resolve the \"contradiction\" [3]. Recently, on the basis of ceratosaur fossils (a branch of the theropods) discovered in Jurassic strata in Xinjiang, Xu Xing and colleagues proposed a new \"lateral shift\" hypothesis, arguing that the three fingers of tetanuran theropods may in fact also be the second, third, and fourth fingers [4]. Research on these questions involves many aspects of paleontology and evolutionary developmental biology; with more study and synthesis of the evidence, the academic community should be able to reach a consensus before long. The question of the timing of the origin of birds depends largely on the discovery of new fossils. Since the oldest known birds come from the Late Jurassic, the ancestors of birds must have appeared before the Late Jurassic. A variety of small feathered theropods very close to birds have now been found in Middle-Late Jurassic strata in China, largely resolving the embarrassment paleontologists once faced, namely that the supposed ancestors of birds often came from strata younger than Archaeopteryx. Some scholars have suggested that the origin of birds may go back as far as the Late Triassic, but the existing fossil evidence cannot support this inference. The origin of feathers Feathers were once regarded as a character unique to birds, but with the discovery of feathered dinosaur fossils, the distribution of feathers among the vertebrates and their origin have become hot research topics. Because of the imperfect preservation of fossil feathers, however, interpretations of their structure and homology often arouse great controversy. Recent fossil discoveries show that feathers were not only widely distributed among saurischian dinosaurs but may also have occurred in ornithischians [5]. Do the hair-like integumentary structures found in some pterosaurs [6] also represent structures homologous with the primitive feathers of dinosaurs? If so, the distribution of feathers would be wider still; at present there is no solid evidence either supporting or refuting this idea. The development of modern feathers has been much studied, but understanding of the form and structure of fossil primitive feathers still differs greatly.
Some scholars hold that the hair-like integumentary structures of some dinosaurs may be subcutaneous fibers with nothing to do with feather evolution [7]; most, however, hold that they represent the primitive feather type [8], which may have had a hollow structure. In the origin of feathers, whether the rachis or the branching structure appeared first is also debated; clearly, research here still needs more convincing fossil evidence. The functional evolution of feathers is likewise a mystery: most scholars now agree that the origin of feathers may not have been directly related to flight, but whether feathers first served thermoregulation, sexual display, or other adaptive needs remains open. The identification and study of feather color in fossil birds is certainly an exciting recent development: eumelanosomes have been identified in Early Cretaceous bird feather fossils from Brazil [9]. The potential for identifying different melanosomes in the birds and dinosaurs of the Jehol Biota of China is undoubtedly great; not only could the colors of their feathers be restored directly, but this may also help settle the homology of some controversial primitive feathers. Ultimately, however, it remains an open question to what extent original feather color can be recovered. The origin of bird flight Two opposing hypotheses about the origin of bird flight have long stood: arboreal origin and terrestrial (cursorial) origin. Studies of the habits of the most primitive birds, and of some arboreal dinosaurs discovered in recent years, have lent the arboreal hypothesis more support; some scholars, however, argue from functional morphology that the ancestors of birds could have gained sufficient power for flight by flapping their forelimbs while running, and so maintain the cursorial hypothesis [10]. Still others consider the two hypotheses not mutually exclusive, holding that both running and climbing played important roles in the origin of bird flight. Since most scholars agree that birds originated from small theropod dinosaurs, and that many structures and functions of the bird forelimb had already begun a gradual evolution in their dinosaur ancestors, the key to the problem is to analyze how the flight apparatus and its functions evolved. What roles running and tree-climbing played in the origin of bird flight will undoubtedly remain a question to be explored for a long time. Other related questions Around the origins of birds, feathers, and flight there are many other questions worth pursuing. For example, paleohistological analysis of early birds and dinosaurs can reconstruct their growth rates and metabolic capacities; did early birds and dinosaurs thermoregulate as modern birds do; and, with the discovery of more and more dinosaurs closely related to birds and the recognition that many originally \"avian\" characters are widely distributed among dinosaurs, the very definition of birds has attracted increasing attention.", "All kinds of organisms on the earth, from their first appearance to the present, have been caught up in a continuous process of birth and death. Statistics show that more than 99% of the species that have ever lived on the earth have been replaced since life began, and extinction is happening all the time [1].
As early as the 1840s, John Phillips, a professor at the University of Oxford in Britain, recognized from the fossil assemblages preserved in British strata that life on Earth had passed through at least three great stages of development: the era of ancient life (the Paleozoic), the era of middle life (the Mesozoic), and the era of new life (the Cenozoic). Phillips realized that each stage was dominated by a distinctive biota and that an obvious replacement of organisms occurred between the three stages, with large numbers of organisms becoming extinct at the transitions [2]. This was among the earliest statements of the idea of mass extinction by a Western scholar, and parts of it are still cited today. For a long time, however, the role of mass extinction events in the evolution of life on Earth attracted little attention. Then, in 1980, the American physicist Luis Alvarez and his geologist son Walter Alvarez, together with two other scientists, published an article in \"Science\" reporting anomalously high concentrations of iridium, an element characteristic of extraterrestrial meteorites, in the Cretaceous-Paleogene boundary clays of Italy, Denmark, and elsewhere, and proposing that the giant dinosaurs that had once dominated the earth were driven extinct by an asteroid impact [3]. Since then, mass extinction has been a scientific problem that scientists have actively debated. At about the same time, Professor Jack Sepkoski, a famous scientist at the University of Chicago, systematically tabulated the biodiversity recorded through geological history and constructed the diversity curve of the earth's organisms over the past 600 million years (Figure 1). He found that since the appearance of conspicuous organisms on the earth there have been five mass extinction events: at the end of the Ordovician, about 440 million years ago; in the late Devonian, about 364 million years ago; at the end of the Permian, about 252 million years ago; at the end of the Triassic, about 205 million years ago; and at the end of the Cretaceous, about 65 million years ago [4]. Each of these five mass extinctions took place within a short interval, and each wiped out at least 76% of the species on the earth. The end-Permian mass extinction had the greatest impact: more than 95% of marine species and more than 75% of terrestrial species became extinct (Table 1). In recent years more and more scientists, and the public media, have come to realize that mass extinction events played a decisive role in the evolution of the biological world and could even have destroyed it entirely. The causes of the mass extinctions remain unsettled; what is certain is that each extinction had its own \"causes and consequences\", and the situation is quite complicated: external causes dominated, but the macroevolutionary dynamics of the organisms themselves (internal causes) cannot be ignored. What, then, brought such great catastrophes upon the living world? In recent years scientists in many countries have explored this question extensively, and the work is very difficult. First, these events happened hundreds of millions of years ago, and the earth has since undergone hundreds of millions of years of vicissitudes.
Geological records from those times are only rarely preserved, and most organisms decomposed after death; scientists must search for and evaluate the evidence in rocks, which are often buried deep underground. Second, fossils are of many kinds, and once found they must be identified and dated by paleontologists, which demands rich professional knowledge and experience. In addition, mass extinction events were usually accompanied by large-scale environmental changes, and reconstructing the earth's environmental background of hundreds of millions of years ago is difficult. Only through careful analysis drawing on paleontology, stratigraphy, paleoecology, geochemistry, biogeochemistry, isotope geochronology, mineralogy, and other fields can results be obtained on particular aspects of the problem. Figure 1. Biodiversity of the earth's oceans in the Phanerozoic [4]. Table 1. Timing and main biological groups affected by the five mass extinctions (data based on Sepkoski's published marine biostatistics): end-Ordovician, bryozoans among other groups; late Devonian, reef-building organisms, brachiopods, and trilobites; end-Permian, rugose corals, trilobites, blastoids, and brachiopods; end-Triassic, large amphibians, archosaurs, and ammonoids; end-Cretaceous, rudist bivalves among other groups. Chinese scholars have accumulated decades of research on these mass extinctions. Especially since the late 1980s, thanks to China's uniquely complete geological sections, its rich and varied sedimentary records, and increased support from national agencies, a series of advances has been made in the study of the three Paleozoic mass extinctions [5-11]. So far, of the five mass extinctions only the end-Cretaceous event, which extinguished the dinosaurs, is generally attributed to an extraterrestrial impact; for the causes of the other four there is still no unified conclusion. After nearly 30 years of research, most scientists believe that rapid deterioration of the earth's environment was probably the leading factor in mass extinction, including large-scale volcanic eruptions driven by magmatic activity in the earth's interior, which injected great quantities of harmful gases into the atmosphere and produced a greenhouse effect; the large-scale release of methane stored beneath the seabed; the overturning of stratified ocean waters, producing global anoxia; icehouse effects; and cosmic ray bursts. Others hold that, as in \"Murder on the Orient Express\", the extinctions resulted from several factors acting together [12, 13]. The reason the mass extinctions of geological history have attracted such attention from scientists around the world in recent years is that more than 70% of modern biologists believe the earth is now experiencing an unprecedented mass extinction event, with the global environment deteriorating rapidly; yet its severity is hard to gauge on the timescale of human civilization. According to statistics compiled by the United States National Museum of Natural History in 1998, because of human destruction of the earth's biosphere, some scientists predict that about half of the species on the earth will be extinct within a hundred years.
The list of endangered species released each year by the International Union for Conservation of Nature and Natural Resources (IUCN) shows that the scale and speed of extinction are clearly intensifying, at an intensity far exceeding any mass extinction in geological history.", "In the history of the evolution of life on Earth, the mass extinctions are the most eye-catching events. They are not only manifestations of episodic change in biological evolution but, more importantly, direct markers of major coupled changes among the earth's spheres, and thus an important basis for dividing geological history into stages. From the standpoint of the development of life, however, a mass extinction is a severe disturbance of the normal course of evolution: it not only causes a sharp decline in biodiversity, but also severely damages the relatively complete structure of the earth's ecosystems built up over long ages, so that the composition of the biota and the structure of ecosystems visibly \"regress\" to the \"lower\" states of early evolutionary stages. This phenomenon is most obvious after each of the Phanerozoic mass extinction events. For example, after the end-Permian mass extinction there was an interval of millions of years in the Early Triassic during which no coal formed on any continent, known as the \"coal gap\"; no reefs built by true reef-building metazoans formed in the world's oceans, the \"reef gap\"; and cherts, the siliceous sediments formed with the participation of silica-secreting organisms (such as radiolarians) that had been widespread in deeper waters, also disappeared, the \"chert gap\" (Figure 1). The land and sea of that time evidently presented an extremely depressed ecological landscape. Yet across the whole history of life on earth, the pace of biological development has never stopped. Although each mass extinction briefly stalled evolution, it also stored up momentum for greater evolutionary advances: after each mass extinction, the biota repaired and rebuilt itself, new leaps occurred in functional organization and ecological adaptation, and new evolutionary radiations followed. The contemporary biological world is the product of countless extinctions and revivals across billions of years of life's development. In this sense, mass extinction creates the conditions for rapid biotic development, and recovery is the real driving force of major evolutionary change. In the biological and geological records of the Phanerozoic, the mass extinctions are the most distinctive events, and their correlation with associated major geological perturbations is the most significant, clearly marking major coupled changes among the earth's spheres; they have therefore long been among the most important topics in the geosciences. In recent years, however, as scientists have extended their focus from the extinctions themselves to the biotic recovery that followed, they have found that the ecosystems of these post-extinction intervals are in fact far richer in content, embodying a very complex history of biological and environmental evolution.
Theoretically, since rebuilding a system is much more complex than destroying it, the reorganization of ecosystems after a mass extinction and the ecosystems of the recovery period contain more geological-history information. In the process of geological catastrophe marked by mass extinction and recovery, the extinction represents the intensity of the geological forcing, the surviving organisms and their ecosystems indicate the limits of the event's effect, and the recovery process is an indirect measure of the degree of damage the event caused. The geological record of the biological recovery period therefore contains richer information on major geological upheavals, and it is also of reference value for analyzing the mass extinction and its causative events. However, the study of life processes and ecosystem evolution during the recovery period is itself a major problem in geoscience, because the fossil record after a mass extinction is markedly \"impoverished\", forming a huge contrast with the pre-extinction record in both quality and quantity. Moreover, owing to the enormous environmental effects of the extinction event, the post-extinction environment was markedly \"specialized\", producing some rare sedimentary products and leaving the entire ecosystem in a highly abnormal state; many important geological phenomena observable in the stratigraphic record cannot be studied with conventional theories and thinking, which adds to the difficulty of studying this interval. Precisely for these reasons, biological recovery after mass extinction has become a new hot spot in contemporary geoscience research. In the history of life on Earth, no matter which mass extinction is considered, some organisms always managed to live through the event. Judged by their evolutionary and ecological adaptations, these survivors of major extinction events fall mainly into three types. The first comprises survivors of the principal taxa that constituted the biota and ecosystems before the extinction; they represented the most advanced stage of biological evolution when the event struck and were the representative groups of their time. These are the typical survivors of extinction events. To survive, they underwent major changes in body structure and ecological adaptation compared with the pre-extinction period. The most notable change is the \"miniaturization\" of individuals: the individuals of surviving taxa became significantly smaller, and their structures markedly simplified, to cope with the harsh, resource-poor environment of the time. Urbanek [1] named this miniaturization the \"Lilliput effect\", after the \"Lilliput\" episode in Jonathan Swift's novel Gulliver's Travels (1726). However, this \"miniaturized\" mode of life is only an adaptive strategy in the history of biological evolution and does not represent the forward direction of evolutionary development, so it achieved no new development during the biological recovery after the mass extinction.
Another type of group that passed through extinction events is the \"opportunistic taxa\" that before the extinction lived mainly in ecologically marginal settings. Because these organisms lived in \"abnormal\", special marginal environments before the mass extinction, the extinction event affected them relatively little; on the contrary, they benefited from it to some extent. After the mass extinction, these opportunists not only developed in their own \"abnormal\" environmental domains but expanded into \"normal\" ecological domains, becoming so-called \"disaster taxa\" and dominating the ecosystems of the time. However, because such \"opportunistic organisms\" originated under special ecological conditions, their biological structure and ecological adaptations are usually markedly specialized and opportunistic, and they cannot become the mainstream of biological evolution. Although they played a positive role in promoting the restoration of the ecosystem, in the course of biological recovery their dominant position was gradually replaced or marginalized by newly evolved organisms. The third type comprises the so-called \"ecological generalists\", which before the extinction were able to live in a variety of ecosystems. Their salient feature is a wide range of ecological adaptation: they can tolerate many environmental conditions but usually do not constitute the main body of any ecosystem. They therefore have relatively strong resistance to extinction events, and more of them survive mass extinctions. However, such organisms usually do not greatly change their ecological domains after the extinction; although their physiological structure and ecological functions may advance, no major shifts occur. They therefore generally do not become the proliferating \"bloom\" taxa of the post-extinction interval, but they do play a role in maintaining and developing the post-extinction ecosystem. Alongside extinction, biological origination also proceeds continuously. Studies have shown that some important post-extinction groups all originated in the last interval before the mass extinction. Among these \"progenitors\", a few developed into post-extinction bloom taxa, but more evolved into the newly recovering organisms. These recovering taxa are the most important source of biological evolution and of the establishment of new ecosystems after the mass extinction. Compared with the extinct groups, the newly arisen organisms generally show a qualitative leap in physiological structure and ecological function. However, the evolution of the new taxa and the establishment of their ecological status are usually relatively slow processes. After each of the major Phanerozoic extinction events, the recovery of organisms and reconstruction of ecosystems generally took 2 to 3 million years, and sometimes more than 5 million years. The ecosystem of the survival interval immediately after a mass extinction is very depressed: apart from a small number of holdover organisms, \"post-disaster bloom taxa\" dominate the various ecological spaces, because the ecological environment of this interval has not yet recovered from the catastrophe that caused the extinction.
This persistent high-stress environment was a \"paradise\" for opportunistic organisms; apart from a very small number of \"miniaturized\" holdovers and \"ecological generalists\" that could still gain a foothold or join the ranks of the opportunists, the newly arisen groups derived from earlier \"progenitor\" stocks either died out or persisted in a depressed state, and only after the recovery period did they gradually find their place in the ecosystem and develop into its main body. In the history of Phanerozoic biotic differentiation, the rise and fall of microorganisms such as cyanobacteria at the base of the food chain is closely tied to the differentiation of the animals that feed on them. In normal pre-extinction ecosystems they were suppressed, and the microbialites, such as stromatolites, produced by their mass proliferation could be preserved only in marginal environments, such as saline lagoons, where their consumers could not survive. When a mass extinction wiped out most of the consumer animals, they rapidly proliferated in normal shallow-sea environments and became one of the important components of ecosystem structure in the immediate post-extinction and early recovery intervals. However, once new faunas emerged during the recovery period, they were quickly driven back to their original marginal ecological domains. They thus show the characteristics of typical post-disaster bloom taxa. Similarly, after each of the earlier Phanerozoic mass extinctions there was extensive development of post-disaster bloom faunas with similar behavior. For example, the bivalve Claraia of the earliest Triassic, after the end-Permian mass extinction, originated in the latest Permian shortly before the extinction, evolved typical opportunistic ecology in the Early Triassic, and was able to proliferate in the widely distributed oxygen-poor calcareous-muddy mixed-facies marine environments of the time, becoming one of the dominant elements of Early Triassic marine ecosystems and leaving a rich imprint in the stratigraphic record. Similar \"disaster taxa\" (disasters) include many other bivalve groups, such as Posidonia and Eumorphotis. These groups were able to rise rapidly in the abnormal ecosystems of the time because of their relatively advanced physiological structure and ecological adaptations. The reason bivalves were finally able, during the great geological upheaval at the Paleozoic-Mesozoic transition, to take over the ecological position that brachiopods had occupied in normal shallow seas for more than 200 million years of the Paleozoic is that they have stronger metabolic capabilities and a relatively low dependence on the oxygen content of the environment [2]. These thin-shelled, highly mobile bivalves could therefore \"bloom\" in the oxygen-poor marine environments of the earliest Triassic. Some cephalopods with relatively simple shell structures in those oceans may have had similar characteristics. Clearly, the survival and development of these disaster taxa played a positive role in improving the ecological environment of the time and accelerating the process of biological recovery.
The survival interval after a mass extinction is characterized by \"survivors\". These \"survivors\" usually occupy no important position in the ecosystem, but live in ecologically marginal areas in \"miniaturized\" form. Although at the start of the survival interval they sometimes become dominant elements in certain ecological domains, once new opportunistic organisms invade those domains they are quickly repelled, and eventually they are squeezed out of the ecosystem and become extinct. Thus, although the great majority of \"surviving organisms\" escaped the mass extinction itself, they were eliminated during the survival interval and the initial recovery. The articulate brachiopods after the end-Permian mass extinction, especially the productids and spiriferids, are typical representatives of this type. By contrast, although some pre-extinction \"ecological generalist\" taxa lived mainly in marginal ecological domains before the extinction, they still occupied dominant positions in similar domains during the post-extinction survival interval and could maintain and develop their position through the recovery period and the subsequent radiation. The most typical representatives of this kind are the gastropods and the lingulid (tongue-shaped) brachiopods, which behaved similarly in each of the Phanerozoic mass extinction events. In the ecosystems of the survival interval and early recovery, the lack of higher-level consumers simplified the survival and competitive relationships among organisms, and the struggle between organisms and the environment became the primary contradiction; the record of organisms' transformation of the environment thus became an important marker of biological recovery. In the geological records of this interval one often sees sedimentary features that are rare in normal Phanerozoic ecosystems, such as stromatolites and other laminated microbial sedimentary structures, microbial-mat wrinkle structures, flat-pebble (\"bamboo-leaf\") conglomeratic limestones, and seafloor carbonate cement fans. These are developed not only in abnormal sedimentary facies of marginal ecological areas but are also preserved in various normal marine sedimentary records, and so are often called \"unusual facies\". Such special sediments not only indicate the lack of reworking by metazoans at the time, but also reflect abnormal chemical conditions in the marine (and atmospheric) environment, thus directly recording the incomplete development of both the biological and environmental components of the ecosystem. At the same time, the trace-fossil record, most intimately tied to the sedimentary substrate, indicates the recovery of the ecosystem even more directly. Not only does the diversity of trace makers directly reflect the level of biotic differentiation during recovery, but the form of the traces indicates the environmental state at and below the sediment surface, the organisms' main habitat: the depth of burrows, for example, indicates the oxygen content of bottom waters and sediment, and the regularity of burrow numbers and feeding structures indicates the level of competition among organisms.
Viewed over the whole evolution from mass extinction to full ecosystem recovery, the mass extinction is extremely fast but represents the greatest turning point in the Earth's ecosystems; the post-extinction survival interval is relatively short, but its biotic replacement and ecological transitions are significant; the recovery period is usually a relatively long process, and its controls can be summarized in two aspects. The first lies in the organisms themselves and is directly related to the intensity of the mass extinction: the more complete the destruction of biological taxa and ecosystem structure, the slower the pace of biological recovery; it is also related to the level of biological evolution, recovery generally being faster in younger geological records. For example, biological recovery after the end-Cretaceous extinction was significantly faster than after the end-Permian, while recovery after the end-Permian extinction was slower than after the Late Devonian mainly because of its greater extinction intensity. The second control on recovery is the ecological environment: the nature and intensity of the environmental events that caused the mass extinction matter, but it is the persistent environmental state after the extinction that directly controls the recovery of ecosystems and organisms. After the end-Permian mass extinction, the Early Triassic biological recovery was the longest following any Phanerozoic mass extinction. New results in recent years show that the main reason for this \"delayed\" recovery, which lasted more than 5 million years, is not only that the end-Permian extinction was the largest of the Phanerozoic, but also that the ecological environment of the entire Early Triassic remained turbulent and harsh [3]. The main reason this harsh environment persisted so long and hindered recovery is that abnormal environmental events causing deterioration continued through the survival and recovery intervals [4]. Biological recovery is thus a long-term process of progressive biotic development and gradual improvement of environmental conditions; when organisms and environment reach a new balance in their joint evolution and can develop sustainably, the ecosystem enters a new radiation period, and biological evolution and ecosystem development rise to a new stage. When we compare the biota and ecosystem structure after the recovery period with those before the mass extinction, it is not hard to find that, however great the gap between the two and however long the biotic crisis and recovery lasted, some biological lineages are always identical or continuous across them. These lineages flourished before the mass extinction, declined sharply during it, almost disappeared from the fossil record during the survival interval and early recovery, but reappeared and flourished after the recovery period, becoming important members of the radiation-period ecosystems. Jablonski [5] borrowed a biblical allusion to name these \"resurrected\" groups Lazarus taxa.
Obviously, the recognition and study of Lazarus taxa are very important for research on biological recovery after mass extinctions. The reappearance of these taxa not only changes the quantitative measures of the recovery fossil record (rates of diversification, extinction and recovery, etc.), but also directly affects the extinction-survival-recovery-radiation pattern. The length of time from their disappearance to their reappearance is related to the high-stress physical and chemical conditions that persisted from the mass extinction into the recovery period, but where they \"hid\" during this \"difficult\" interval remains a mystery. Some scholars proposed a \"refugium\" hypothesis, in the spirit of the biblical story: when disaster (mass extinction) came, these organisms hid in some kind of refuge and did not return until the environment recovered. For a time, scholars around the world searched hard for this \"refuge\", but to date no convincing answer has been found, and in recent years new doubts have been raised about whether such \"shelters\" exist at all. Further studies have shown that some of the \"Lazarus taxa\" recognized earlier are probably just \"homeomorphs\", ecologically convergent in form and structure. Throughout the history of life on Earth, mass extinctions provide us with a remarkable vantage point, and material, for observing the qualitative leaps in the evolution of life. The progress of life is built on constant struggle with the Earth's environment, seeking balance while adapting to and transforming it. Biological evolution in the recovery period is the most typical progressive evolution, yet we still know very little about biological recovery after mass extinctions. At present, relatively more data have accumulated on ecosystem evolution and biological recovery in the Early Triassic (see Figure 1). After the end-Permian mass extinction, a short survival interval at the beginning of the Triassic was followed by a long recovery spanning the entire Early Triassic and lasting into the beginning of the Middle Triassic. During this interval, strong carbon isotope fluctuations are considered a manifestation of the extreme instability of the ecological environment; normal marine benthic organisms, represented in the fossil record by gastropods, show marked \"miniaturized\" ecological adaptation; groups that had dominated normal benthic ecosystems before the extinction and again in the post-recovery radiation, such as certain calcified algae, calcareous sponges and corals, essentially disappear from the fossil record; metazoan reefs in normal shallow seas disappear entirely, reef-like buildups dominated by binding organisms appearing only late in the recovery, with various \"abnormal\" sediments widely distributed in these facies belts; trace fossils record mainly shallow burrowing, deep burrows being largely absent; and records of marine anoxic events run through the survival and recovery intervals and may be one of the important factors controlling recovery, though the longer duration of anoxia in deep-water areas may itself be related to biological recovery activity.
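As an aside on the carbon isotope marker just mentioned: the following is a minimal sketch of the standard delta notation in which such fluctuations are reported. The VPDB ratio used is a commonly cited value; the stratigraphic series is synthetic, purely for illustration.

```python
# delta13C (per mil, vs the VPDB standard) = (R_sample / R_standard - 1) * 1000,
# where R = 13C/12C. A sharp negative shift up-section is the kind of signal
# read as environmental instability in Early Triassic sections.

R_VPDB = 0.011237  # 13C/12C of the PDB/VPDB standard (commonly cited value)

def delta13C(r_sample: float) -> float:
    """Return delta13C in per mil relative to VPDB."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Synthetic delta13C series up-section, with one sharp negative excursion.
series = [2.1, 2.0, 1.8, -1.5, -2.0, 0.5, 1.9, 2.2]
shifts = [b - a for a, b in zip(series, series[1:])]
print("largest negative shift (per mil):", min(shifts))  # -> -3.3
```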
In addition, consistent with the ecosystem evolution in normal shallow seas, there is a \"chert gap\" in deep-water areas and a \"coal gap\" on land [6]. Figure 1\tThe evolution of biological and environmental markers during the Early Triassic recovery period [5] In short, the biological recovery after a mass extinction contains a wealth of key information on biological evolution and on biota-environment interaction and co-development. Unraveling this scientific process will play an important guiding role in correctly understanding and managing the coordinated development of contemporary life, including humans, with the environment, and the remediation of various extreme ecological environments. However, international research on biological recovery after mass extinctions has only just begun; a large number of scientific facts remain to be discovered, and greater efforts are needed to understand this process correctly.", "Human beings are a special biological species among all living things on Earth. They belong to a branch of the mammals, walk upright, make and use tools, and possess a high degree of intelligence and culture. Humans maintain a constant curiosity about the origin, process and mechanism of our own species; the scientific and religious worlds have always treated it as a major topic and have tried to study and interpret it. Yet even today the origin of humanity remains a major unsolved basic scientific problem, and claims about it appear frequently in the press. In the many links of human origin and evolution, many mysteries await solution and many hypotheses await testing. Current scientific research on human origins focuses on two questions: the origin of the earliest humans and the origin of modern humans. The core scientific problems can be decomposed as: when, where, and from what kind of ancient ape did the first humans evolve? When and where did modern humans originate, and who were their direct ancestors? What is the driving force of human origin and evolution? Scientists have put forward hypotheses for these questions, but the evidential basis is still very weak. The scientific exploration of the origin of the first humans began with Darwin. In The Descent of Man, and Selection in Relation to Sex, published in 1871, Darwin proposed on the basis of morphological similarity that humans may have evolved from an ancient African ape similar to the orangutan during the Eocene, through variation, heredity and natural selection [1]. Subsequent paleoanthropological research provided fossil evidence for Darwin's inference. Many human fossils were discovered in Africa in the 20th century: the first Australopithecus skull, about 3 million years old, was found in South Africa in 1924; fossils of Australopithecus, Homo habilis and Homo erectus were discovered in East Africa from 1959 onward, including the roughly 3.2-million-year-old \"Lucy\" skeleton found in Ethiopia in 1974; the 4.4-million-year-old Ardipithecus (\"ground ape\") was found in Ethiopia in 1994; the 6-million-year-old \"Millennium Man\" (Orrorin) was found in Kenya in 2000; and the roughly 7-million-year-old \"Toumai\" (Sahelanthropus), discovered in Chad in Central Africa (see Figure 1), is considered the earliest human fossil found so far.
These discoveries led most scholars to believe that humans capable of walking upright first arose in Africa [2], and the timing of the appearance of human ancestors determined from fossil material tends to match the human-ape divergence time deduced by molecular biology from differences in the biochemical composition of modern humans and apes. But not all scholars accept that Africa was the earliest birthplace of humanity. The Javanese Homo erectus skull was discovered in Indonesia in 1891, and the first skull of Peking Man was discovered in China in 1929. Because of these important fossil finds, together with suitable geological environments and climatic conditions, Asia was once considered by many scholars to be the cradle of humanity. The Ramapithecus found on the Indian-Pakistani subcontinent was considered to represent a human ancestor. In the 1970s Lufengpithecus, about 8 million years old, was discovered in Yunnan, China; to this day a few scholars believe that it is close to Australopithecus in morphology and represents an early transitional form in the transformation from ape to human [3]. Figure 1\tThe skull fossil of \"Toumai\" from Chad At present, the academic community generally agrees that humans evolved from early apes. However, many ancient ape fossils have been found in Africa, Asia and Europe. What are the phylogenetic relationships among these ancient apes? Which of them is the common ancestor of humans and living apes? Academia has no definite answer. The cause and process of the transformation from ape to human are likewise still under study. The traditional and popular view is that the ancient apes ancestral to humans originally lived in trees; as the climate gradually dried, the dense forests in parts of Africa shrank and disappeared, forcing some apes down from the trees to live on the ground, which led to functional differentiation of the upper and lower limbs and thus to humans who habitually walk upright. However, a view based on new observational studies holds that habitual upright walking as a form of locomotion may have arisen before the human-ape split, with some now-extinct ancient apes having already developed the ability to walk upright on two legs. The reason these questions are hard to settle is, first of all, the limitation of research materials. For the origin and evolution of humans, the most important and direct material is human fossils. As biological organisms, human remains have very little chance of being fossilized and preserved, and fossils buried deep in ancient strata have very few chances of being discovered and studied. The fossil evidence is therefore inevitably fragmentary and intermittent; the chain of human evolution cannot be pieced together completely from fossil material, and many links must be missing. Africa currently has the most ancient human fossil material, of the greatest age, and the record there is comparatively complete and systematic, which is why most scholars consider it the birthplace of humanity; but many gaps in time and region remain.
In Asia, and especially South China, because of an overall gap in material from about 8 million to 2 million years ago, there is nothing from the Australopithecus and Homo habilis stages, so the region is excluded from human origins by the academic mainstream; future discoveries, however, may change this pattern. The second limitation is that of scientific and technical means. At present, research on ancestral remains is still at the level of skeletal morphology: behavioral capacities such as upright walking are studied by observing key parts of fossils, and evolutionary relationships are established by comparing morphology among individuals, so there are many artificial uncertainties and limitations, and it is not yet possible to work at the molecular level by extracting and analyzing the genetic material and biological information of fossils. In addition, the dating of human fossils and the reconstruction of living environments of millions of years ago cannot yet meet the requirements of accurately reconstructing the origin and evolution of early humans (see Figure 2). Figure 2\tThe long evolutionary path from ape to human As for the origin of modern humans, it was not at first considered a problem, but it has now become a hot academic topic. The traditional view is that after Homo erectus formed in Africa, some groups left Africa around 1.8 million years ago, spread to Europe and Asia, and then evolved into early Homo sapiens and late Homo sapiens (i.e., modern humans). In 1984, the Chinese scholar Wu Xinzhi and the American scholars Wolpoff and Thorne jointly proposed the \"multiregional evolution\" theory of modern human origins, arguing that each of the world's four major races is continuous with the earlier humans of its own region, from whom most of its genes derive [4]. For the origin of modern Chinese and East Asians generally, Wu Xinzhi proposed the hypothesis of \"continuous evolution with incidental hybridization\": modern humans here evolved mainly from local early Homo sapiens and even Homo erectus, with occasional small immigrant populations exchanging and merging genes with the native population [5]. In 1987, three American geneticists led by Cann published a paper in the journal Nature in which, based on studies of the mitochondrial DNA of representative individuals from populations around the world, they proposed that the ancestor of modern humans was a woman who lived about 200,000 years ago (\"Eve\"), whose descendants left Africa for Asia and Europe about 130,000 years ago and completely replaced the local populations, becoming the direct ancestors of all modern humans [6]. This view is known as the \"replacement theory\", the \"out of Africa theory\" or the \"Eve theory\". Several Chinese geneticists have supported the replacement theory on the basis of genetic variation in the Y chromosome and elsewhere, proposing further that Peking Man and its descendants in China all went extinct, and that the modern humans here are descendants of new humans who migrated in via West Asia 60,000 to 50,000 years ago [7]. The \"multiregional evolution\" and \"out of Africa\" theories of modern human origins are currently in fierce conflict. For many people, our direct ancestors remain a mystery.
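To make the DNA-based dating behind the replacement theory concrete: the following is a minimal constant-rate molecular-clock sketch. All numbers are placeholders, not measured values; the point is how strongly the inferred date depends on the assumed mutation rate, the weakness discussed in the next passage.

```python
# For a neutral locus under a constant-rate clock, divergence time
# T ~ d / (2 * mu), where d is pairwise sequence divergence
# (substitutions per site) and mu is the substitution rate per site per year.

def divergence_time_years(d: float, mu: float) -> float:
    """Time since two lineages split, assuming a constant-rate clock."""
    return d / (2.0 * mu)

d = 0.002  # hypothetical pairwise mtDNA divergence between two populations

# The same divergence yields very different dates under different assumed rates:
for mu in (1e-8, 2e-8, 4e-8):
    print(f"mu={mu:.0e} /site/yr  ->  T = {divergence_time_years(d, mu):,.0f} yr")
```

A fourfold change in the assumed rate moves the inferred split from 100,000 to 25,000 years in this toy case, which is why non-constant mutation rates undermine such reconstructions.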
The two viewpoints differ completely, and each has strengths and weaknesses. The multiregional evolution theory rests on direct evidence, the fossil material, and argues for continuous evolution from early to modern humans from the continuity of fossil hominin morphology and the inheritance of Paleolithic culture in the relevant regions. It has a certain material advantage, especially in East Asia as represented by China, and since it does not rule out limited gene exchange it is the more inclusive of the two. However, the fossil and cultural materials on which it relies are incomplete, and there are many weak points in the transition from ancient to modern humans, especially in the interval between 100,000 and 40,000 years ago. This is due on the one hand to the scarcity of fossil material and on the other to a dating bottleneck for this period, which makes accurate dating of the relevant material impossible. The advantage of the replacement theory is its modern technical means: it deduces the relationships and evolutionary sequence of human populations from the degree of variation in modern human DNA. However, modern molecular biology has shown that the rate of gene mutation is not constant, and there are many uncertainties and deficiencies in deducing ancient human evolutionary history from modern human DNA under an assumed rate of genetic change; moreover, the DNA tests performed on the few available ancient human fossils (especially Neanderthals) are not yet very convincing, because too few base pairs have been recovered. In addition, this hypothesis is rarely supported by fossil material and is contradicted by human fossils and cultural remains in many places. Another serious flaw is that the theory does not address the relationship between the modern populations that migrated out of Africa and the native populations. Beyond origins, there are many scientific issues in human evolution that greatly interest the public but are difficult for research to resolve. The formation of the human races is one of them. Modern humans can be divided into four major races: the Mongoloid (commonly \"yellow\"), the Caucasoid (commonly \"white\"), the Negroid (commonly \"black\") and the Australoid (commonly \"brown\"). Each has its areas of concentrated distribution across the world, and they differ in skin color, hair form, head shape, body build, blood types and organ morphology. But they all belong to the same species and can interbreed and produce offspring. Regarding the cause of the races, a \"racial hierarchy\" view appeared in history, holding that races are superior or inferior owing to different degrees of evolution. Modern scholarship has abandoned this view and instead holds that all races on Earth are equal, are descendants of a common ancestor, and that differences among them are the result of long-term adaptation of peoples living in different regions to their particular environments. Skin color, for example, is determined by the amount of melanin in the skin; melanin absorbs ultraviolet radiation from the sun and so protects the tissues beneath the skin.
Near the equator the sun's ultraviolet radiation is very strong, and people with less melanin survive less easily, so over long evolution central and southern Africa and Australia became the homelands of dark-skinned peoples [8]. However, much about the motive forces and processes of racial formation is still unknown. Simple \"environmental determinism\" cannot fully explain the causes of racial differences; many factors, such as heredity, variation and natural selection, may have played a role and require further analysis and research. The human history recognized by academia is about 7 million years long. For this long evolutionary process, scientific research has so far traced only a rough outline; there are blind spots in many respects, and many hypotheses remain to be tested. In the future, with the continuing discovery of more valuable material, the continuing development of scientific and technical means, and the strengthening of interdisciplinary research, we will learn more about our own history.", "Geochemistry, geophysics and geobiology are the disciplinary systems that study, respectively, chemical motion, physical motion and the motion of life in geological history; they are disciplines formed by crossing chemistry, physics and the life sciences, respectively, with earth science. Of these, geochemistry and geophysics have developed for nearly a century and are very mature in theoretical system and technical method. Because the motion of life is extremely complex, subsuming physical and chemical motion, and because geobiology needs technical support from geochemistry and geophysics, its development has lagged far behind. Internationally, substantive research on geobiology began only in the 21st century. In 2000 a symposium of the American Academy of Microbiology formally described geobiology as the study of the interactions of the biosphere and the geosphere; Blackwell launched the journal Geobiology in 2003; in 2001 the US National Research Council proposed \"geobiology\" as one of six major opportunities for basic research in earth science, the US National Science Foundation followed in 2004, and in 2008 \"life-earth interaction and influence\" was listed among the top ten problems in solid earth science. Geobiology is established and developed in the study of life processes in geological periods. Taking the evolution of the Earth's environments and of life as its main thread, it attends to the co-evolution of life and environment [1], focusing on the impact of the Earth's environments on the biosphere in different geological periods, and on the action and influence of the biosphere on the atmosphere, hydrosphere and lithosphere [2, 3]. Within this two-way interaction, much research has been done, from the standpoint of multicellular animals and plants, on the effect of environment on organisms, and many theories have formed; but research on the effects of organisms on the environment, whether at the macroscopic or the microscopic level, is very weak. To break through this key bottleneck of geobiology, the role of organisms in shaping the environment, and to establish a new cognitive system for earth system science, breakthroughs must first be made in microbial geology.
Microorganisms are the earliest life forms on Earth and the base of the ecosystem. From the standpoint of ecosystem energy flow, whether in normal ecosystems powered by solar energy or in \"dark\" ecosystems powered by subsurface heat and chemical energy, the organisms that first take energy from the environment are mainly microorganisms, and the organisms that finally return energy from the biosphere to the environment are also mainly microorganisms. From the standpoint of ecosystem composition, traditional research lacks systematic study of most members of the base of the ecosystem, and lacks deep study of the archaea and bacteria, which make up two of the three domains of the current system of life. Yet it is precisely these microorganisms that play vital roles in global change, in the formation of mineral resources, and in the modification of the Earth's surface systems. Microorganisms are thus the support system of life on Earth, and microbial geological processes and their evolution are the frontier and the difficulty of current geobiology. The difficulties in studying microbial geological processes and evolution are two. First, after long geological ages, the microorganisms of geological periods are difficult to preserve in ordinary rocks. Second, even where microorganisms are preserved, their morphology and structure are generally simple; unlike multicellular animals and plants, they cannot be directly identified and classified from the morphology and structure of hard skeletons. Because of the preservation problem, most current microbial geology focuses on certain protists of the geological past and on certain microorganisms of modern extreme environments, while the mechanisms that maintain microbial ecosystems, the relationships between microorganisms and macroorganisms, and the processes and laws of microorganism-environment interaction remain poorly understood [4]. Solving these problems depends on breakthroughs in technical methods, especially on combining the technologies of the earth sciences and the life sciences. A number of geoscience techniques now enable the tracing of microbial geological processes at the molecular and atomic (isotopic) levels. Compound-specific carbon and hydrogen isotope compositions of geological microbial lipids can extract information about microorganisms and their environmental conditions at the molecular and atomic levels, and can be combined with isotopic tools for metal elements such as iron and molybdenum to probe the interaction of geological microorganisms with various geological environments. At the same time, field surveys and laboratory simulation experiments in modern life science point the way to exploring and partially reproducing typical geomicrobiological processes. Experimental simulation of the interaction of different microbial functional groups with clay minerals in the oxic zone, the nitrate-reduction zone, the iron-manganese-reduction zone, the sulfate-reduction zone and the methanogenic zone will help clarify the interaction between microorganisms and different minerals near the water-sediment interface during early diagenesis, and thus reveal how microbial geological processes act on the Earth's environment through the carbon and sulfur cycles.
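To make the isotopic tracing idea concrete, the following is a minimal two-endmember mass-balance sketch of the kind used when a measured carbon isotope value is read as a mixture of two sources, for example a microbial methane-derived source and normal marine carbon. The endmember values are assumptions for illustration, not measurements.

```python
# Two-endmember isotope mass balance:
#   d_mix = f * d_a + (1 - f) * d_b   =>   f = (d_mix - d_b) / (d_a - d_b)
# where f is the fraction of endmember A in the mixture.

def endmember_fraction(d_mix: float, d_a: float, d_b: float) -> float:
    """Fraction of endmember A in a two-component isotopic mixture."""
    return (d_mix - d_b) / (d_a - d_b)

d_methane = -60.0  # assumed delta13C of a methane-derived carbon source (per mil)
d_marine = 0.0     # assumed delta13C of normal marine dissolved carbon

f = endmember_fraction(d_mix=-6.0, d_a=d_methane, d_b=d_marine)
print(f"inferred methane-derived carbon fraction: {f:.1%}")  # -> 10.0%
```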
To overcome the second difficulty, microbial geology focuses its research on certain special groups within the vast microbial system. Although microorganisms of the geological past cannot, like multicellular animals and plants, be classified and studied at the genus level by morphology and structure, the geological processes in which microorganisms participate, such as continental weathering, oceanic carbon fixation and early diagenesis, have profoundly changed the Earth's environment, and these processes are tied mainly to certain microbial functional groups [5, 6]. Whether in the modern world or in geological periods, microbial functional groups at different levels, such as autotrophic and heterotrophic microorganisms, aerobic and anaerobic bacteria, sulfate-reducing bacteria and methanogens, all strongly affect the Earth's sphere systems. The geological processes of microbial functional groups link microbial ecology with biogeochemical processes, and are the bridge for studying the interaction between microorganisms and the environment. The study of microbial functional groups in different geological environments is therefore the key to breakthroughs in microbial geological processes and evolution. It involves scientific questions at two levels: first, how different microbial functional groups respond to various geological environmental events, such as eutrophication, global anoxia, global warming, abnormal biotic crises and various extreme environmental conditions; and second, how different microbial functional groups act upon and affect various geological environments, including the modes, conditions, results and mechanisms of their action, such as the impact of weathering and erosion by microbial functional groups on the hydrosphere and pedosphere [7], the impact of microbial activity on the atmosphere and ocean systems [8], and the impact of microbial deposition and diagenesis on the lithosphere, atmosphere and hydrosphere [9].", "As the most basic units of the Earth's solid shell, continents are the parts most closely tied to human life and have long drawn wide attention from the earth science community. Continental studies have advanced with the progress of earth science. In the 1960s, with the establishment of plate tectonics, earth science underwent an unprecedented revolution, and applying the plate tectonic view to continents greatly enriched and advanced the study of continental tectonics. Since the 1980s, however, as research deepened, geoscientists have gradually realized that continents differ from oceanic plates: they are \"assorted platters\" of complex material composition and structure that have undergone long evolution and transformation. The classical theory of plate tectonics so far cannot comprehensively explain all the problems of continental formation, evolution and dynamics. It has therefore become urgent to develop, and even go beyond, plate tectonics and to establish a new theory that reasonably summarizes the formation, evolution and dynamics of continents. In the 1990s, countries around the world successively proposed national programs of \"continental dynamics\" research, aiming to lead the earth science community, integrate research forces and techniques, and strengthen research on the formation, evolution and dynamics of continents.
Although nearly 20 years of effort have accumulated a large amount of new data and brought important progress in the study of the material composition, structure, formation and evolution of continents, the answer to this scientific question is still far away, and continental dynamics remains one of the most important scientific problems facing earth science in the 21st century. Unlike the oceans, which cover two-thirds of the Earth's surface but record only about 5% of Earth history, the continents are complex platters assembled from blocks and from materials and structures formed at different times over more than 4 billion years, and they suffered long and complex material and structural transformation during the Earth's formation and evolution. It is the continents, the archives of the Earth's long evolutionary history, that give us a good carrier for exploring continental dynamics and allow us to engage its core scientific questions. The origin and characteristics of continents. Continents are complex assemblages of blocks and components that have undergone repeated transformation over the 4.6 billion years of complex geological history since the Earth formed. Existing studies hold that the continents derive from differentiation of the mantle early in the Earth's history: partial melting of high-density mantle material produced low-density material that differentiated, migrated upward, and was added to the shallow crust. This process of differentiation, migration and emplacement of mantle material produced the differing compositions and layered structures of the continents at different depths. But given our very limited knowledge of the Earth's early history, it is not yet possible to determine whether this mechanism has operated throughout the 4.6-billion-year formation and evolution of the continents. Did early partial melting and differentiation affect the composition of later magmas, and how strongly? With continuous extraction of low-density material, the mantle's density and structure must have changed: how did this change control and affect the dynamic processes of the continents? Moreover, petrological results confirm that mantle-derived rocks are mainly basaltic, which plainly contradicts the intermediate average composition of the present crust. The basic questions facing continental dynamics therefore remain: How do continents form? What are their basic constituents? What is the chemical signature of the lithospheric mantle beneath the continents? Is the subcontinental mantle a residue left after extraction of crustal material? Is there any essential difference between continental and oceanic mantle material? At the same time, because the continental \"assortment\" is composed of different blocks, and because of lateral differences in partial melting and emplacement, continents show obvious heterogeneity in material composition and structure both vertically and horizontally. Together with the differences among continents formed in different historical periods, continents of different regions and ages may differ in origin and character.
Therefore, on a global, three-dimensional scale, studying the genesis and basic characteristics of continents, and comparing continents of different geological periods, remains a challenging scientific proposition. Growth and preservation of continents. Continental growth includes vertical growth and lateral growth. The formation of crust is realized through migration of deep-seated material to the surface, tending ultimately toward a relatively balanced and stable continental crust. The generation, migration and emplacement of magma are the basic processes of vertical continental growth, while lateral accretion is another important mode of growth: the welding and accretion, at continental margins, of different continental blocks, subduction complexes and subduction-related magmas through ocean-continent and arc-continent subduction and continent-continent collision. But what is the relationship between this lateral accretion and magmatism at active continental margins? How were the accreted materials modified by lateral accretion and subsequent tectonism? How has magma within the continents been produced? How do continents grow, and by what processes? These remain important issues for the geoscience community. Why continents can be preserved for so long has likewise been a long-standing scientific question. Most studies so far conclude that about 80% of the present continents formed in the Mesoarchean and Neoarchean and are characterized by the tonalite-trondhjemite-granodiorite (TTG) suite. But the crux of the problem is: why have these rocks been preserved so long? Most current studies attribute it to the low density of continental material, which makes it difficult to return to the mantle. Yet more and more results show that low-density continental material can still be deeply subducted. Is low density, then, really the reason the continents are preserved? How much continental material has been subducted? What are the proportions of continental growth and preservation in different geological periods? Clearly, why the continents have been preserved and how they evolved over long spans remains an important question for further study. In recent years, studies of lithospheric delamination have shown that mantle delamination often occurs beneath continental orogenic belts in post-orogenic periods, thinning the lithosphere. Preliminary studies of the North China Craton in China show that it experienced large-scale thinning and destruction during the Yanshanian. Geophysical studies, however, show a deep continental root beneath the North American continent, inserted into the underlying mantle. What does this difference mean? How did ancient continental roots form, and how were they preserved? What is their significance for continental dynamics? Existing studies of the composition and structure of continents and their tectonic processes have shown that the continental lithosphere is not the \"rigid\" block assumed by plate tectonics, but a semi-viscous, elastoplastic body, silica-alumina-rich, anisotropic, heterogeneous, with a multilayered block structure and rheological behavior.
The rheological properties of the continental lithosphere directly control the continent's tectonic response and deformation behavior under applied forces. Continental lithosphere with different rheological structures will show completely different strain behaviors and modes even under the same boundary conditions and tectonic stress, and this rheological structure itself changes as the continent evolves. Little is known about the rheological properties and structure of continents, their variation with depth, or their dynamics; the exploration of continental dynamics must therefore strengthen research on continental rheology, and especially on deep geology, fluid geology and geodynamics. Besides the complexity of large-scale structures, rocks at different structural levels within continental blocks also deform differently, with brittle deformation at shallow levels and ductile deformation at depth. The boundary between brittle and ductile deformation, however, is often indeterminate: the transition zone is a complex function of depth, temperature, pressure, mineral composition, rock fabric, strain rate, fluids and other factors (a minimal strength-envelope sketch follows below). Therefore, continental deep structure, subcontinental mantle motion and its surface response, the mechanisms of volcanic and seismic activity, continental composition and its tectonic response, and the physical, chemical and biological processes and dynamics of continental tectonics will remain important scientific problems of continental dynamics. The relationship between continental tectonics and plate tectonics. Since their initial formation, the continents experienced rapid growth and accretion in the Archean, were preserved and evolved over long spans, and underwent tectonic evolution and transformation as important participants in plate tectonics. Yet their relationship to plate tectonics has been a controversial and substantive problem. The key questions are: how far back in geological history can plate tectonics be traced? Did the Archean have plate tectonics, and was it the same as the modern plate tectonic regime? Although Archean ophiolites have been reported, there are still large disagreements over whether the present oceanic crust can be used as an analogue for Archean oceanic crust. Moreover, under the Archean tectono-thermal regime, whether rigid blocks of any size existed, what the state and size of the continents were, how the TTG suite formed, and many other issues remain puzzling scientific problems. Plate tectonics at continental margins is an indisputable fact. But because of the continents' complex composition and structure, tectonic deformation is far from confined to plate boundaries, and diffuse deformation is widespread in continental interiors; this is a prominent problem encountered when plate tectonics is applied to continental tectonics. In addition, as plate motion is transmitted through the continental lithosphere, its kinematics and evolution may change, involving not only mutual thrusting and superposition but also lateral displacement. How far into a continent can such plate-driven vertical superposition and lateral strike-slip penetrate? Do plate tectonic regimes and patterns change as continental material and structure evolve?
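As the sketch promised above: a minimal, generic yield-strength-envelope calculation showing how a brittle-ductile crossover emerges from the joint dependence on depth, temperature and strain rate. All parameter values are textbook-scale assumptions (a quartz-dominated crust, a fixed linear geotherm and a fixed strain rate), not measurements for any real continent.

```python
# Frictional (Byerlee-style) strength grows linearly with depth, while
# power-law creep strength drops with temperature; their crossover marks
# the brittle-ductile transition for the assumed conditions.
import math

RHO, G = 2800.0, 9.8          # crustal density (kg/m^3), gravity (m/s^2)
GRAD = 25.0e-3                 # assumed geothermal gradient, K per meter (25 K/km)
R = 8.314                      # gas constant, J/mol/K
# Assumed power-law creep parameters for a quartz-rich crust:
A, N, Q = 1e-4, 3.0, 1.9e5     # MPa^-n s^-1, stress exponent, J/mol
STRAIN_RATE = 1e-14            # assumed tectonic strain rate, 1/s
FRICTION = 0.6                 # effective friction coefficient

def brittle_strength_mpa(z_m: float) -> float:
    # Crude frictional strength: proportional to lithostatic pressure.
    return FRICTION * RHO * G * z_m / 1e6

def ductile_strength_mpa(z_m: float) -> float:
    T = 273.0 + GRAD * z_m  # temperature at depth z, in kelvin
    return (STRAIN_RATE / A) ** (1.0 / N) * math.exp(Q / (N * R * T))

for z_km in range(5, 41, 5):
    z = z_km * 1000.0
    b, d = brittle_strength_mpa(z), ductile_strength_mpa(z)
    regime = "brittle" if b < d else "ductile"
    print(f"{z_km:3d} km: friction {b:7.1f} MPa, creep {d:10.1f} MPa -> {regime}")
```

Under these assumptions the crossover falls near 10-15 km; changing the geotherm, strain rate or creep parameters moves it substantially, which is the indeterminacy the text describes.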
How are they related to the repeated assembly and breakup of supercontinents in Earth history? And beyond plate tectonics, does the continent have tectonic dynamics and processes of its own? These remain challenging scientific problems in continental dynamics. The relationship between continents and the Earth system. Continents are basic components of the Earth system, the integrated products of the Earth's internal and external dynamics and of the interaction of its spheres. Research on continental dynamics therefore needs to be set within the framework of global ocean-continent tectonic evolution. The complex processes of continental formation, evolution, preservation and destruction are controlled not only by the endogenic geological processes of the Earth's interior (magmatism, earthquakes, tectonic movement, metamorphism) but also by exogenic geological processes arising from the interaction of the Earth's outer spheres (biosphere, hydrosphere, atmosphere), such as weathering, denudation, transport and deposition. The study of continents must therefore also be set within the Earth system if a final solution of the problem is to be expected. The biggest challenge lies in how to express quantitatively, from the macroscopic to the microscopic, the control and influence of the different dynamics of the Earth system on continental formation and evolution, and how continental structure controls and affects the surface system and its evolution. The relationship between continental evolution and global change. Continental evolution and global change are interdependent processes in the Earth system with complex internal connections that are at present poorly understood. Studies of the Alps, the Qinghai-Tibet Plateau, the Himalayas, the Qinling and other orogenic belts and plateaus show that continental evolution plays an important role in controlling global climate change. Basin sediments record climate, hydrology and tectonic-depositional processes in detail, and lake basins and young active structures provide good carriers for establishing the relations among tectonic movement, landform evolution and climate change on certain time scales. Existing research, however, is limited to the small-scale effects of mountain uplift on climate change. The difficulty of future research will be to extend this small-scale experience to the control of global continental evolution over global climate change, and even to the control of global climate change over continental surface geomorphology and surface processes, and in turn over the adjustment of deep continental material and structure. Moreover, the sustainable development of human society requires us to study the relations and laws of continental tectonic evolution and global change at the scale of human existence, not only at the geological scale.", "Continents are the homes on which human societies depend for survival and reproduction. There are four main landmasses on the Earth today: the largest is Eurasia-Africa, the second the Americas, and the two smaller ones the Australian continent and the Antarctic continent. When and how the continents formed, and how they evolved afterward, has always been a scientific subject of close and continuing concern to human society, and to geologists in particular.
Unlike the ocean floor, which is composed mainly of silica-magnesia (sima) rocks, the continental crust is composed mainly of silica-alumina (sial) rocks. It is now known that the oldest rocks on the ocean floor are only somewhat more than 100 million years old, whereas the oldest rocks making up the continents formed about 4 billion years ago. Geological research has found that ocean-floor rocks form mainly by solidification of mantle material welling up at the mid-ocean ridges; the material then migrates toward both sides and finally descends in the trenches at the ocean margins to return to the mantle, a material cycle that continuously renews the ocean-floor rocks. Continents are different: their material cycle is expressed mainly as erosion, transport, deposition, diagenesis, metamorphism, melting, solidification and crystallization of ancient rocks, followed by renewed uplift and denudation. As a result, although ancient rocks exist on the continents, they are relatively few in number, and information about ancient rocks is often preserved within young rocks. Geologists have been studying the geological history of the continents for more than 100 years. For a long time most geologists believed that the geological environment of the early Earth was very different from that of today. Taking roughly 1.8 billion years as the boundary, before 1.8 billion years the outer layer of the Earth known as the crust was relatively thin and the geothermal gradient relatively high, so rocks were metamorphosed to a relatively deep degree, and the ancient rocks on Earth are therefore mostly deeply metamorphosed crystalline rock series. Before the 1950s, because very little was known about the ocean floor, the geological community discussed the geological history of the Earth, and of the continents in particular, mainly on the basis of continental geology. The ancient marine rocks of the Earth were thought to have formed in oceanic troughs; such troughs, known as geosynclines, were filled with sediment and then folded to form the Earth's mountains. Abroad, the Scandinavian Mountains of northwestern Europe, the Hercynian mountains of central Europe and the Alps of southern Europe were all regarded as products of geosyncline evolution. In China, the mainstream view in the geological community was based on this theory until the 1960s; however, because magmatic rocks and multiple unconformity surfaces were discovered in China's mountains, it was held that, unlike mountains elsewhere, the mountains of China were the product of multi-cycle geosyncline evolution, an interpretation known as the multi-cycle tectonic theory[1]. Within Chinese geological circles, two completely different views formed regarding the formation and evolution of the Chinese mainland. One holds that in the early history of the Earth the continents were relatively small and grew gradually to their present size over a long geological period, expanding outward from early continental nuclei to proto-platforms, platforms and the present-day continents[2,3]. The other holds that the evolution of the Chinese mainland passed through giant cycles, each further divided into multiple orogenic or geosynclinal cycles[1,4,5].
With deeper study of the ocean floor, the fact that the sea floor is constantly renewed was discovered and the phenomenon of continental drift was confirmed; the advent of plate tectonic theory triggered a revolution in the epistemology of the geosciences. When plate tectonic theory was applied to continental geology, it was found that the so-called geosynclines are actually ancient ocean basins; that mountains are the product of accretion along the margins of such ancient oceans and of their final closure by collision; and that the so-called multi-cycle tectonic evolution of geosynclines is actually the process from accretion along such ancient ocean margins to the final collision of different continental margins[6]. Geologists recognized that the different geodynamic environments present at the surface today, continental rifts (East Africa), passive continental margins (the Atlantic margins), active continental margins (the Pacific margins) and continental collision zones (Himalaya-Alps), are typical examples of the different stages of continental formation and evolution, and that together they constitute one cycle of continental formation and evolution. According to differences in the underlying geodynamic environment, such a cycle can be further divided into three stages: continental breakup and dispersion, convergence and reassembly, and intracontinental evolution[7]. Existing data show that almost all rocks in the geological record can be matched with rocks formed at these different stages; in orogenic belts, for example, the Archean TTG rock series are very similar to modern island-arc complexes. These similarities reveal that such cycles of continental formation and evolution have been repeated many times in Earth history. In recent years, studies of the ancient rocks of the Australian continent have shown that continents and oceans similar to those of today appeared on Earth about 4 billion years ago[8,9]. Through geological history the continents have undergone repeated breakup and reassembly, forming different supercontinents in different periods. The supercontinents currently recognized by most geologists include the Columbia supercontinent formed around 1.8 billion years ago, the Rodinia supercontinent formed around 1 billion years ago, and the Pangaea supercontinent formed around 300 million years ago. Some scholars even hold that two earlier supercontinents, Ur and Kenorland, existed around 3 billion and 2.5 billion years ago respectively[10]; and some scholars consider such a cycle incomplete, arguing that the formation-evolution cycle of a continent should also include the continent's formation stage, that is, the breakup of a continent is both the end of one continental life cycle and the beginning of a new one.
According to the available data, from about 3.8 billion years ago to the present the geological history of the Chinese mainland can be divided into six cycles: the Shihua cycle (before 2.8 billion years), the Ancient Hua cycle (2.8-2.35 billion years), the North China cycle (2.35-1.75 billion years), the South China (Nanhua) cycle (1.75 billion-800 million years), the Cathaysia cycle (800-270 million years) and the Pan-China cycle (270 million years to the present). The geological records of the first three cycles are preserved mainly in North China and formed the North China (Sino-Korean) craton; the Nanhua cycle formed the South China (Yangtze) and Tarim cratons; the Cathaysia cycle formed the Chinese mainland, and indeed the Eurasian continent, apart from individual regions of eastern China; and the Pan-China cycle formed the Chinese mainland and even the present global continental pattern[7]. In short, nearly a hundred years of research have shown that the continents are the product of multiple supercontinent breakup-reassembly-evolution cycles in geological history, rather than of gradual outward growth from ancient continental nuclei; it is now essentially certain that the formation and evolution of continents, like that of the oceans, is cyclic in nature. The day the life of an ocean ends is the time a new continent converges and emerges. However, what factors cause existing continents to break up and new continents to converge, why different cycles last for different lengths of time, and why different regions show different geodynamic settings during the same period still require further study.

The tectonic state and dynamic mechanism of the Earth's early history, the formation, breakup and evolution of supercontinents, and the criteria for supercontinent reconstruction have long been explored and debated problems in the geosciences. This essay briefly introduces the formation, breakup and evolution of supercontinents in Earth history and the research status of, and main problems surrounding, related questions. Supercontinents in geological history. A supercontinent is a combination of almost all the continental blocks on Earth. The formation of supercontinents is closely tied to the horizontal movement of continental blocks through geological history; that is, the "birth" of plates constrains the formation of supercontinents. There are, however, different views on when plates were "born" in geological history. Early on it was believed that the plate mechanism applied only to the Mesozoic, but its applicability was gradually extended back to the end of the Mesoproterozoic and even the end of the Paleoproterozoic, the main reason being that the hallmark of Phanerozoic oceanic crust, the ophiolite suite, had not been found in the Earth's early history. The dynamics of the pre-plate regime are generally considered to be related to upwelling asthenosphere similar to mantle plumes, that is, to an underplating effect beneath the early crust. Many scholars have emphasized the characteristics of, and differences between, the various stages of Earth history when discussing its staged development. At present some scholars tend to believe that plate movement and supercontinent formation already existed in the Neoarchean.
However, compared with modern plate tectonics, operating since about 1 billion years ago, the early Earth was characterized by many oceanic microplates and many intra-oceanic arcs, so the signatures and characteristics of plate movement differ significantly between geological stages. There may have been four supercontinents in Earth history; from oldest to youngest they are Kenorland, Columbia, Rodinia and Pangaea. The Kenorland supercontinent. Kenorland may have been a supercontinent existing at the end of the Neoarchean. It is generally considered to have comprised at least Laurentia (North America), Baltica (Europe), Australia and the Kalahari craton (southern Africa). Among these ancient cratons there is evidence of convergent continental margins and continent-continent collisions at 2.6-2.4 billion years ago; they appear to have converged at the end of the Neoarchean into the first supercontinent in Earth history, Kenorland. The level of research is still low, however, and no reconstruction map of Kenorland has yet been produced. The Columbia supercontinent. Columbia is the supercontinent reassembled between 1.9 and 1.85 billion years ago after the breakup of Kenorland. The key evidence for its existence comes from eastern India and the Columbia region of North America, for which Rogers et al.[1] named the supercontinent Columbia (Fig. 1, upper left). Hoffman[2] used the term Nuna when describing the collage of Paleoproterozoic North American terranes, so different scholars refer to the supercontinent of this period differently, some using Columbia and some Nuna. (Fig. 1: upper left, reconstruction of the Columbia supercontinent[1]; upper right, reconstruction of the Rodinia supercontinent[4]; below, reconstruction of the United Continent, Pangaea[1].) The Rodinia supercontinent. Rodinia was a supercontinent formed from the late Mesoproterozoic to the early Neoproterozoic after the breakup of Columbia (Fig. 1, upper right). Through the Grenville orogeny and events close to it in age, several separate continental blocks gradually converged into a supercontinent during the Mesoproterozoic. McMenamin et al.[3] first proposed the concept of the "Rodinia" supercontinent, pointing out that Rodinia was a global supercontinent formed by continental collision 1 billion years ago. The United Continent (Pangaea). Pangaea is the supercontinent composed of Gondwana and Laurasia that assembled at the end of the Paleozoic (about 250 million years ago), after the breakup and dispersal of Rodinia (Fig. 1, below); it is the youngest and most thoroughly studied supercontinent. Some geologists have proposed that a supercontinent called Vaalbara existed before Kenorland, composed of the Kaapvaal craton of South Africa and the Pilbara craton of Western Australia and regarded as the earliest supercontinent in Earth history, formed about 3.3 billion years ago; but it was too small to be compared with the Proterozoic and Phanerozoic supercontinents. Supercontinent evolution. Each of the supercontinents mentioned above existed for only a relatively short interval of geological history, whereas their rupture, dispersal and reassembly occupied relatively long intervals.
The processes of formation, breakup and reassembly of the early supercontinents are still unclear, but the outlines of supercontinent formation from Columbia through Rodinia to Pangaea have gradually become clear. From the breakup of Kenorland to the formation of Columbia: Kenorland began to break up at the beginning of the Paleoproterozoic (about 2.4 billion years ago), and before Columbia finally formed, three large continents, Ur, Nena and Atlantica, came into being. The orogenies that began around 1.9 billion years ago caused these three continental groups to converge gradually into a supercontinent. From the rupture of Columbia to the formation of Rodinia: the rupture of Columbia began more than 1.7 billion years ago. After the breakup, thick sequences of clastic rocks and carbonate rocks, with minor volcanic rocks, were deposited on the dispersed continental margins and in continental interiors, forming famous Mesoproterozoic strata such as the Lower-Middle Riphean of Russia, the Lower Vindhyan of India, the Belt Supergroup of North America, and the Changcheng Group-Jixian Group of North China. Magmatic events corresponding to the breakup of Columbia also occurred, among which the anorthosite-rapakivi granite assemblage is particularly striking. Some scholars therefore hold[5] that these sedimentary basins and magmatic rocks, and the huge mineral-resource potential associated with them, are related to planetary-scale rifting events between 1.8 billion and 1 billion years ago. The rupture and breakup of Columbia laid the foundation for the convergence of Rodinia, which took shape in the early Neoproterozoic (1 billion ~ 900 million years ago). From the breakup of Rodinia to the formation of the United Continent: Rodinia began to break up about 800 million years ago. The dispersed blocks of Australia, India, Antarctica, Congo and South America were welded by the Pan-African orogeny of 600-500 million years ago, which closed the Mozambique Ocean and formed the Gondwana block group of the southern hemisphere. Another group of blocks, mainly Laurentia and Baltica in the northern hemisphere, was united through the early Paleozoic Caledonian movement by the closure of the proto-Atlantic (Iapetus) Ocean; the later addition of the Siberian block formed the main body of the northern continent. The southern and northern continents converged at the end of the Paleozoic to form the youngest supercontinent on Earth, Pangaea. In the course of Earth history, beginning with Kenorland at the end of the Neoarchean, supercontinents have repeatedly broken up and re-formed, and the geological history reflecting this process is called the supercontinent cycle. Each supercontinent cycle begins with the rupture of an old supercontinent and ends with the formation of a new one; the supercontinent cycle is thus the largest and longest geological cycle in Earth history. The differences and particularities of these supercontinent cycles are important features of global tectonic evolution in geological history.
Main outstanding problems. Research on supercontinents is still at the stage of hypothesis and data accumulation. Geoscientists differ not only on whether supercontinents existed in geological history; among scholars who accept their existence there are also many differences of opinion, and there are blind spots in the research, manifested as follows. ① There is a lack of deep, systematic research on the methods and criteria of supercontinent reconstruction: although some scholars have proposed criteria for reconstructing Phanerozoic continents using paleoclimatic markers, these criteria are difficult to apply to the reconstruction of Proterozoic supercontinents. ② In studies of global tectonics, the level of research in different countries and regions is uneven, so the data needed to explore supercontinent reconstruction and evolution are often lacking and the evidence for supercontinent restoration is insufficient. ③ Although some scholars have proposed several models for the dynamic mechanism of supercontinent formation and breakup based on present-day plate tectonics and deep geophysical data from the western Pacific, such work is clearly in its infancy. ④ The starting time of plate movement and the time of formation of the first supercontinent are still debated, the current focus being mainly the signature of initial plate movement; it is clearly unrealistic to define the birth of plate movement by the presence or absence of ophiolites or ophiolitic mélange as in modern plates, so the signatures of plate movement in early geological history are still being explored and debated. Among scholars currently studying supercontinents, tectonicists and paleomagnetists predominate; the range of disciplines and researchers involved is not broad enough, and the lack of large-scale participation by economic geologists, stratigraphers and geochemists has held back the rapid improvement of research results. Only with the participation of a broad community of geoscientists can the study of global tectonics and supercontinents make more significant progress.

Working model. A decollement or detachment layer is a relatively weak rock-structure combination layer lying above an underlying rock assemblage and below a controlling overlying folded rock assemblage, characterized by strong shear deformation. Multiple detachment layers may develop simultaneously or in succession during lithospheric deformation, and detachment layers may develop at different structural scales, so multi-layer detachment tectonics is a principal mode of lithospheric deformation. The discovery of large-scale low-angle faults, such as extensional detachment faults[1,2] and large low-angle thrust faults[3,4], led structural geologists to propose that the continental lithosphere may be stratified according to physical properties. When the lithosphere undergoes large-scale deformation, structural detachment may therefore occur along interfaces where physical properties change abruptly, forming lithosphere-scale decollement faults; these detachment faults divide the lithosphere into structural layers with different petrophysical properties.
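The mechanical layering invoked here is often pictured as a yield-strength envelope: frictional (Byerlee-type) strength grows roughly linearly with depth, power-law creep strength drops steeply with temperature, and the smaller of the two limits rock strength at each depth. The crossover defines a brittle-ductile transition, and a weak quartz-dominated lower crust between a strong upper crust and a strong uppermost mantle is the essence of the "sandwich" geometry discussed below. The following is a minimal sketch of that calculation, with generic textbook-style parameter values assumed purely for illustration (none of them come from this article):

```python
import numpy as np

# Illustrative yield-strength envelope for continental lithosphere.
# Brittle strength: Byerlee-type friction, sigma_b ~ f * rho * g * z.
# Ductile strength: power-law creep, sigma_d = (rate/A)^(1/n) * exp(Q/(n*R*T)).
# Assumed setup: wet-quartz crust over dry-olivine mantle, 15 K/km geotherm.

R = 8.314                # gas constant, J/(mol K)
rate = 1e-15             # geological strain rate, 1/s
g, f = 9.8, 0.6          # gravity (m/s^2), friction coefficient

z = np.linspace(1e3, 100e3, 100)            # depth, m
T = 280.0 + 0.015 * z                       # linear geotherm, K
crust = z < 35e3                            # Moho assumed at 35 km
rho = np.where(crust, 2800.0, 3300.0)       # densities, kg/m^3
A = np.where(crust, 3.2e-4, 7.0e4)          # creep constant, MPa^-n / s
n = np.where(crust, 2.3, 3.0)               # stress exponent
Q = np.where(crust, 154e3, 520e3)           # activation energy, J/mol

brittle = f * rho * g * z / 1e6                              # MPa
ductile = (rate / A) ** (1.0 / n) * np.exp(Q / (n * R * T))  # MPa
strength = np.minimum(brittle, ductile)                      # the envelope

for zi, si in zip(z[::20] / 1e3, strength[::20]):
    print(f"depth {zi:5.1f} km -> strength ~ {si:8.1f} MPa")
```

With these particular values the quartz crust turns weak below roughly 15-20 km while the olivine mantle is strong again from the Moho down to about 55-60 km, reproducing the strong-weak-strong profile along which lithosphere-scale detachment would preferentially localize.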
Because each structural layer lies at a different depth and under different temperature-pressure conditions, the deformation behavior of each structural layer also differs. A variety of hypothetical models have been proposed for the multi-layer detachment structure of the lithosphere. For example, for the Himalayas, southern Tibet and other orogenic belts, a mid-crustal ductile channel flow model has been proposed in which, under lithospheric thickening, flow in a mid-crustal channel adjusts the excess mass; this explains the observed coexistence of extensional and compressional structures at the surface[5~7]. Jackson's[8] sandwich model of the continental lithosphere (Fig. 1: Jackson's lithosphere sandwich structure model[8]) uses the relationship between the physical state of lithospheric rocks and temperature-pressure conditions to explain why detachment structures arise and persist. The evidence. The multi-layer detachment model of the lithosphere has gained support from field observation and experiment, mainly as follows. ① Support from field observation. Because the deformation traces of deep structural layers of the lithosphere can be exhumed by tectonic action, structural traces formed in different structural layers during the same tectonic event can "coexist" at certain special structural sites. Detailed structural analysis of such sites, supplemented by petrological, geochemical and geochronological analysis, can yield new understanding. For example, from observations of low-angle normal faults in the North American Cordillera, Wernicke[1] and Lister et al.[2] proposed that low-angle normal faults are mid-crustal ductile shear zones formed by extensional detachment (Fig. 2); the footwall forms a metamorphic core complex, while the hanging wall, tilted, forms basin-and-range structure. In section, a metamorphic core complex consists of a deep metamorphic core, a mid-level ductile shear-detachment fault, and shallow brittle deformation structures, a characteristic three-layer architecture. For in-depth analysis of ductile detachment faults, the type of shear and its variation can be determined quantitatively from the kinematic vorticity, from which a structural kinematic model can be established[7, 9].
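The kinematic vorticity referred to here has a compact standard definition that makes the distinction between pure shear and simple shear quantitative: it compares the spin (rotational) part of the velocity-gradient tensor with its stretching part, giving 0 for pure shear, 1 for simple shear, and intermediate values for general shear. A minimal sketch (the tensors are generic examples for illustration, not data from the works cited):

```python
import numpy as np

def w_k(L):
    """Truesdell's kinematic vorticity number ||W|| / ||D|| for a
    velocity-gradient tensor L, where D is its symmetric (stretching)
    part and W its antisymmetric (spin) part."""
    D = 0.5 * (L + L.T)
    W = 0.5 * (L - L.T)
    return np.linalg.norm(W) / np.linalg.norm(D)

pure_shear   = np.array([[1.0, 0.0], [0.0, -1.0]])   # Wk = 0
simple_shear = np.array([[0.0, 1.0], [0.0,  0.0]])   # Wk = 1
general      = pure_shear + simple_shear             # Wk ~ 0.45

for name, L in (("pure", pure_shear), ("simple", simple_shear), ("general", general)):
    print(f"{name:8s} Wk = {w_k(L):.2f}")
```

Estimating Wk across a natural ductile detachment zone, for instance from porphyroclast or oblique-fabric data, is what allows the pure-shear and simple-shear contributions to the detachment to be separated.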
In the Hunan-Hubei-Chongqing-Sichuan border area of the middle-upper Yangtze region, South China, Yan et al.[10], through observation and analysis of the relationships among fault-fold combinations of different styles under thrusting and nappe transport, proposed for the shallow lithosphere a thick-skinned to thin-skinned structural model of layered progressive thrusting controlled by multi-layer detachment (Fig. 3). All of this shows that multi-layer detachment tectonics can develop in different structural layers of the lithosphere. (Fig. 2: three detachment tectonic models for the formation of low-angle normal faults and metamorphic core-complex structures[1,2]; 1, brittle upper crust; 2, ductile crust; 3, lithospheric upper mantle; 4, asthenosphere; 5, magmatic rock.) ② Evidence from deep geophysical exploration. The most important support for the multi-layer detachment structure of the lithosphere comes from deep geophysical exploration, including deep seismic reflection, magnetotelluric sounding and Bouguer gravity anomaly surveys. After the Wenchuan earthquake of 12 May 2008, Burchfiel et al.[11], on the basis of Bouguer gravity anomalies and other data, argued that the upper crust of the Longmenshan had not thickened significantly; rather, a middle-lower crust containing one to several ductile flowing layers, while flowing eastward, was blocked and thickened against the rigid central Sichuan block beneath the Sichuan Basin, producing the vertical uplift of the Longmenshan and an accumulation of anomalous stress that eventually triggered the Wenchuan earthquake of May 2008. This indicates a multi-layer detachment structure of the lithosphere beneath the Longmenshan and the area to its west, and this structural model of Burchfiel et al.[11] further supports the channel flow model[5~7]. In many mountain belts, detachment faults have been recognized as a dominant factor in orogeny; for example, deep seismic reflection profiling (COCORP) revealed a huge near-horizontal thrust detachment zone beneath the Blue Ridge of the Appalachians in the United States, along which an allochthonous rock sheet 5-15 km thick carried the Precambrian crystalline rocks of the Blue Ridge over the Lower Paleozoic sedimentary cover, with a thrust distance from southeast to northwest of up to 260 km[3,4]. ③ Support from experimental simulation. To test the multi-layer detachment model of the lithosphere, petrological and geochemical analyses have been combined with high-temperature, high-pressure experiments on rock samples from different structural levels; on this basis the delamination model of the lithosphere[12] was proposed, which better explains deep structural detachment of the lithosphere. (Fig. 3: formation model of multi-layer detachment nappe and thin-skinned structures in the Yangtze block, South China[6]; Fz, Fc, Fs and Ft denote detachment surfaces at the base of the Sinian, the base of the Cambrian, within the Silurian, and in the lower Triassic, respectively[10].) Key scientific issues. Although the multi-layer detachment structural model of the continental lithosphere is supported by observation and experiment, some key scientific issues remain unresolved, so the model is still a working hypothesis. These issues mainly include the following. ① How can the properties of the interfaces between detached structural layers be determined? That is, what is their original state at depth? The physical properties and deformation behavior of structural-layer interfaces are the most critical issues in multi-layer detachment deformation. What we observe in the field are structural traces of the lithosphere exposed at the surface after multi-layer detachment and deformation, whereas interfaces inferred from deep geophysical data admit multiple solutions; how to tie the two together becomes the crux of the matter. Some researchers have proposed testing with high-temperature, high-pressure experiments, but because of limitations on strain rate and on boundary conditions such as temperature and pressure, current simulations remain far from the actual environment of natural deformation. ② What is the deformation process of structural-layer detachment?
That is, what deformation and metamorphic processes has the detached layer experienced? Observing at the surface, we invert the deformation process from kinematic indicators produced during deformation; in practice, however, many kinematic indicators of the same age prove inconsistent or even opposite in sense. Moreover, the deformation process as currently inverted places no constraint on timing or deformation rate, and may even ignore the deformation path entirely, so the inverted process may differ greatly from the natural one. ③ What causes the deformation differences between structural layers? Structural layers observed in the field show huge differences in deformation style and assemblage and in the metamorphic temperature-pressure conditions they record, yet the interface between two such layers, the ductile detachment shear zone, may be only centimeters thick. What cause, what tectonic process, what tectonic force can produce such an enormous deformation-metamorphism contrast between adjacent structural layers? ④ How large is the scale of detachment? Since multi-layer detachment is founded on observations of low-angle faults, the variation and downward extension of low-angle faults are the most important basis for determining the scale of detachment; yet no study so far has accurately determined the change in attitude and the scale of low-angle faults at depth. A related question is whether such detachment occurs only in the middle and upper lithosphere or can occur throughout the continental lithosphere: is delamination at the base of the lithosphere[12], for example, a type of detachment tectonics? ⑤ Does ductile channel flow really exist in the middle and lower crust? Is it one layer or several? Is the multi-layer detachment tectonics of the lithosphere controlled by the flow of these ductile layers, and to what extent can ductile crustal flow control lithospheric deformation? How can the nature of such a flowing medium be determined? Questions about its relative viscosity and rheology have no definitive answers as yet. Because these key problems remain unresolved (traditional field structural geology is limited by surface outcrop; interpretation of deep geophysical data carries uncertainty and non-uniqueness; rock geochemical data are insufficient for analyzing geological relationships; and, above all, the boundary conditions that experimental structural geology can impose are far from the physical, chemical and geological reality of deep continental lithospheric deformation), only by combining structural geology, geophysics and rock geochemistry and exploiting the complementary strengths of these disciplines can the multi-layer detachment structure of the lithosphere be studied objectively and in greater depth.
With the continued refinement of field structural analysis, the discovery of new key evidence and structural phenomena, and the continued application of modern analytical and simulation techniques in the earth sciences, determination of the temperature-pressure conditions and deformation characteristics at different depths in the lithosphere will gradually become a reality; on that basis the multi-layer detachment model of the continental lithosphere will be continually revised and improved.

The theory of plate tectonics is of epoch-making significance in earth science. It explains many geological phenomena, for example the concentration of most active volcanoes and earthquakes along plate boundaries. Yet it has been strongly challenged by processes that cannot be attributed to plate-boundary interaction, above all large-scale magmatism in plate interiors (continental and oceanic flood basalts) and oceanic hotspot volcanic chains. Wilson proposed the hotspot hypothesis on the basis of the linear distribution and sequential eruption ages of volcanic islands and seamounts in the Pacific, Atlantic and Indian Oceans[1]. He argued that hotspots are relatively fixed, so that chains of volcanic islands form as lithospheric plates drift across them. On the basis of the hotspot hypothesis, Morgan[2] formally proposed the mantle plume hypothesis, holding that Wilson's hotspots are the surface expression of slender columns of hot material (mantle plumes) that originate at the core-mantle boundary and rise slowly. Its basic point is that the high-temperature, low-viscosity layer near the core-mantle boundary deep within the Earth, the D″ layer, can generate a columnar rising flow of hot matter and energy; when, after crossing the mantle, this flow reaches the cold lithosphere, its top commonly spreads into a trumpet shape, forming a thermal body with a spherical-cap head and a narrow tail column, the mantle plume structure. Although later scholars have understood the structure of mantle plumes somewhat differently, the basic meaning and outline are much the same (Fig. 1: a model of the mantle plume[3]). During ascent, the huge spherical head of a hot mantle plume can cause crustal uplift and large-scale flood-basalt volcanism, forming continental or oceanic flood basalts, and can also cause regional metamorphism, crustal melting and crustal extension of various scales; as the overlying plate moves, the narrow tail of the plume produces a chain of hotspot volcanoes. Beyond this, the hypothesis can also explain many other geological phenomena, such as Archean komatiites, continental breakup, geomagnetic polarity reversals, mass extinctions, global climate change and sea-level rise. It is regarded as another new tectonic theory after plate tectonics and an important supplement to plate tectonic theory. In the first 20 years after it was proposed, however, the mantle plume hypothesis aroused little interest. After Campbell and Griffiths carried out their famous mantle plume simulation experiments[4] and published the results in 1990, the hypothesis became popular: the number of papers on mantle plumes increased almost tens of fold within a few years and has remained high since. Even so, voices opposing the plume hypothesis have never ceased[5~7].
Indeed, over the last 10 years the skeptics have gained ground, and a dedicated website (www.mantleplumes.org) is now devoted to the debate. The mantle plume hypothesis has three main points: ① plumes originate as slender columns of hot material rising slowly from the core-mantle boundary; ② the mantle beneath a hotspot is anomalously hot; ③ mantle plumes are relatively fixed. All three points have been questioned on the basis of findings in regions generally recognized as hotspots. For example, seismic tomography shows that the mantle thermal anomaly beneath the Yellowstone region of the United States is confined to the shallow mantle above 200 km[8], and beneath Iceland to the mantle above 400 km[9]. Heat-flow measurements find that heat-flow values in the Iceland region do not differ from those of other non-plume active regions, and in some large igneous provinces (LIPs) there is no petrological evidence of high-temperature products such as komatiite and picrite. Moreover, many hotspots are in fact migrating, though more slowly than the plates (hotspot velocities are generally <1 cm/a, whereas plate velocities are generally 2~10 cm/a)[10]; the Atlantic "hotspots", for example, were not fixed relative to the Pacific "hotspots" before about 50 Ma[11].
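This velocity contrast is also why hotspot tracks can serve as a first-order plate-motion reference: to first order, the age of a volcano in a chain equals its distance from the active hotspot divided by the plate speed. A back-of-envelope check, using rounded, commonly quoted values for the Hawaiian chain assumed here purely for illustration:

```python
# First-order age progression along a hotspot track: age = distance / plate speed.
# Assumed illustrative values: Pacific plate ~9 cm/a over a (nearly) fixed
# hotspot; Midway lies roughly 2400 km from the active Hawaiian hotspot.
plate_speed_cm_per_a = 9.0
distance_km = 2400.0

age_ma = distance_km * 1e5 / plate_speed_cm_per_a / 1e6   # km -> cm, a -> Ma
print(f"predicted age at {distance_km:.0f} km along the chain: ~{age_ma:.0f} Ma")
# ~27 Ma, close to radiometric ages near Midway. A hotspot drifting at
# <1 cm/a perturbs such estimates by only about 10%, which is why slow
# hotspot migration weakens, but does not destroy, the fixity argument.
```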
In addition, Anderson, a representative of the plume opponents, has criticized the evidence for mantle plumes point by point and proposed non-plume explanations[12]. ① Large magma volumes: a mantle plume is not the only way to produce a large volume of magma; other mechanisms can do so as well, for example where abundant fractures and sufficient magma conduits allow melt to collect in a magma chamber and then erupt. ② Massive eruption of flood basalt in a short time requires special conditions: although high temperature and a high degree of melting can yield large magma volumes, other conditions can also produce short-lived massive eruptions, such as melting of enriched mantle triggered by the addition of volatiles, or melting of eclogite-rich sources. Geophysical estimates put the potential temperature of the mantle at 1350-1400°C[7], about 100°C higher than in the petrological model; this temperature allows extensive melting of eclogite. Melting experiments show[13] that 60%~80% melting of eclogite can produce LIP basalt; under these conditions peridotite begins to melt, and the eclogite and peridotite melts react. Thus when eclogite is delaminated into the mantle, it melts to a high degree at ambient mantle temperature to produce LIP flood basalts. ③ Volcanic migration tracks in LIPs as the trace of a plume tail: fewer than half of the large igneous provinces have plume-tail tracks, and some so-called tracks may be structural features that existed before the eruption. ④ High 3He/4He as an indicator of a lower-mantle source: high 3He/4He values are taken as evidence of a plume because modern hotspots such as Hawaii, Iceland and Yellowstone have high ratios, but Anderson considers this a hypothesis, not evidence, and some mid-ocean-ridge basalts also have high 3He/4He values[14~16]. ⑤ Pre-eruption geology indicating extension rather than uplift: the presence of large numbers of dykes before eruption indicates extension of the crust rather than uplift, and extension leads to melting of enriched mantle to form basaltic magma. Some proponents of the mantle plume hypothesis have answered these doubts. For example, the absence of volcanic migration tracks is attributed to isotope dating not yet being accurate enough; bending of hotspot tracks is attributed to plate motion[17]; the scarcity of picrite in large igneous provinces is attributed to its high density, so that it is filtered out by the crust (unable to rise to the surface, it remains deep in the crust, while the basaltic magma derived from it by fractional crystallization in the crust rises to the surface)[18]; and the failure of geophysical data (seismic tomography) to image plume tails is attributed to the plume tail being too narrow (100~200 km) for the resolution of current geophysical methods[19]. A notable exception is that Montelli et al., using a new finite-frequency technique, did image mantle plume tails, finding that the Ascension, Azores, Canary, Easter, Samoa and Tahiti plumes originate at the core-mantle boundary[20]; Montelli et al. noted that plumes are more difficult to image in the lower mantle than in the upper mantle[20]. Identifying ancient mantle plumes is more difficult still, mainly because the principal means of judging the existence of modern hotspots, geophysical methods, reflect only the modern state, and it is hard to judge whether Paleozoic or even Precambrian plumes existed. The lines of evidence generally accepted as important for ancient plumes include: ① crustal uplift before eruption; ② picrite or komatiite representing high-temperature products (about 300°C hotter than asthenospheric mantle); ③ radial dyke swarms; ④ a hotspot migration track; ⑤ a large volume of continental flood basalt. In practice it is difficult to observe all of these phenomena in ancient large igneous provinces; Xu Yigang et al., for example, considered that three of these lines of evidence could be observed in the Emeishan large igneous province[21], while other large igneous provinces sometimes preserve only one or two. From the present standpoint, therefore, although the mantle plume hypothesis has made great progress, it will take some time before it is generally accepted by geologists; it is to be expected that with advances in science and technology, especially deep-probing techniques, the hypothesis will gradually be tested, developed and perfected.

Continental subduction orogeny is a recognized mechanism of plate dynamics. Geological phenomena such as high-pressure to ultrahigh-pressure metamorphism, ductile deformation and crustal melting have been found in many orogenic belts produced by it, and are well documented in belts such as the Alps, the Himalayas, the Kunlun-Qilian-Qinling-Sulu-Dabie mountains, and the northern orogenic belt. With the continued development of plate tectonics, research has begun to expand from plate-margin orogeny to intraplate tectonic processes.
In orogenic belts, however, subduction of oceanic crust always precedes subduction of continental crust and drags the continental lithosphere down into the asthenosphere. Intracontinental subduction, for which evidence of earlier oceanic subduction is lacking, is rare and little studied. Given the particularity of intraplate tectonic processes, research must focus on linking the deep process with the shallow response. At present, research on intraplate tectonics in China concentrates on the interior of the North China Craton, the South China Plate, and the Cenozoic great mountain chains of northwestern China. The North China Craton is at present the only region in the world where an originally thick Archean lithospheric mantle has been destroyed severely and on a large scale. The loss of deep lithospheric mantle and the change in its physical and chemical properties controlled shallow geological events at crustal level, such as large-scale extension, large-scale intracontinental rotation, and large-scale structural and geomorphic change. These events are expressed as major changes in the Mesozoic geothermal field, abundant magmatic and volcanic events, the widespread development of half-graben basins and of metamorphic core complexes or extensional domes, and the intracontinental rotation indicated by paleomagnetism. In recent years, studies of the large-scale extension and the volcanic-magmatic events of the North China Craton have shown that the key geological events occurred in the Early Cretaceous, but their correspondence with deep processes remains to be established: whether they are a simple response to craton destruction or a staged expression of the destruction process, and the mechanism of destruction of the North China Craton itself, are still debated. Deep process and shallow response are therefore central to exploring the intracontinental orogenic process. The South China Plate consists mainly of the Yangtze Craton and the Cathaysia Block, with the South China fold belt between them; after colliding and amalgamating in the Neoproterozoic, the two persisted as a single block through geological history. Many mountain chains (orogenic belts) in South China formed in the Phanerozoic, and most are regarded as collisional orogens like the Qinling-Dabie; in another type of mountain chain, however, such as the Xuefeng, Wuling, Mufu and Gannan mountains, no Paleozoic-early Mesozoic ophiolites have been found, and these are regarded as typical intracontinental orogenic belts. The South China Plate differs markedly from the North China Craton: strong Early Paleozoic and early Mesozoic intracontinental deformation is well documented by the regional unconformities beneath the Middle Devonian and the Late Triassic strata. Studies show that Devonian-Permian platform carbonate deposits, deep-water trough deposits and abundant hydrothermal activity are extensively recorded in the Hunan-Hubei-Guangdong-Guangxi region, yet oceanic crust has never been reported from these extensional basins[1,2]. During the Mesozoic, extensive collisional orogeny occurred around South China; the compressive stresses transmitted into the continent exerted a broad influence and formed a series of intracontinental orogenic belts, but collisional orogeny itself was not conspicuously developed within the block.
This process of intracontinental orogeny and intraplate tectonics is characterized by closure of early intracontinental rifts under regional stress, followed by collision, amalgamation and orogeny of the blocks on the two flanks of the rift. Previous studies have shown this to be a typical feature of the late evolution of rifts (basins) and passive continental margins[3], and we prefer to call such belts "intracontinental orogenic belts without a suture" (non-scar intracontinental orogens). In different areas, the strong folding of pre-Middle Devonian strata, the folding and thrust-nappe deformation of pre-Late Triassic strata, and the large-scale emplacement of granite with accompanying metamorphism are clearly displayed; the uplift of the Gannan-Wuyi mountains and of the Xuefeng mountains may be the shallow response to an intracontinental process of lithospheric stretching, convergence and subduction. Intracontinental deformation in the late Mesozoic finally established the present tectonic framework of South China. The Tianshan is an important late Paleozoic orogenic belt, and the Cenozoic uplift of the Tianshan is famous worldwide as intracontinental compression, reactivation and uplift produced by the far-field effect of the India-Eurasia collision. Its most remarkable feature is that, although a Cenozoic orogenic belt, the Tianshan lies far from the suture scar of plate collision; yet even this hardly conceals the complexity of its composite orogeny. The uplift of the Tianshan played a crucial role in climate change across Eurasia and has become a hot research topic in recent years. It is evident that intraplate tectonic processes, and especially the deep process and its shallow response, are a new focus of attention in the study of lithospheric evolution.

Basins and mountains, the two most striking geological-geomorphic units of the Earth's surface, are complementary and closely integrated entities. Their complementarity was long ago explained by the principle of gravitational equilibrium (isostasy), but basin-mountain coupling itself received little attention from the academic community; only in the past 20 years has its significance been emphasized and systematic research been carried out. Chinese scholars have made important contributions to this research[1~3], mainly concerning structure, sedimentation, deep dynamics and the mechanisms of basin-mountain coupling. Structural features. As the name suggests, basin-mountain coupling is the pairing of basins with mountains; the basic characteristics of a basin-mountain system are similarity in time of formation and identity of dynamic mechanism[2~4]. Studies show that horizontal subduction and collision of lithospheric plates produce coupling of compressional orogenic belts with foreland basins; horizontal extension of continental plates produces coupling of rift basins with basin-margin mountains; and near-horizontal oblique strike-slip between plates may couple extensional pull-apart basins with mountains, or compressional flexural basins with mountains. These constitute the basin-mountain coupling systems of the plate dynamic regime. Vertical intrusion of deep magma leads to uplift and orogeny of the crust and lithosphere, with lateral spreading to form basins, constituting the basin-mountain coupling system of the mantle-plume dynamic regime.
The basins and mountains of a basin-mountain system form in the same geological period; on a finer division, the mountains often form slightly earlier and the basins slightly later. Basin-mountain coupling systems commonly show enormous elevation contrasts. For example, the foreland basin on the south side of the High Himalaya lies close to sea level, while the mountains north of the basin exceed 8000 m; on the flank of the East African rift, Kilimanjaro rises to 5895 m; the floor of Lake Baikal lies as deep as 1636.5 m, while the mountains flanking the lake basin exceed 2000 m; and the Dead Sea, a strike-slip pull-apart basin, has a lake surface at -392 m, and although the heights of the mountains on its two sides vary, the relief between basin and mountains generally exceeds 4000 m.
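Such relief is exactly what the principle of gravitational equilibrium links together: in the simple Airy scheme, topography of height h is held up by a crustal root of thickness r = h * rho_c / (rho_m - rho_c), so high mountains, deep roots and, by mass balance, adjacent subsidence go together. A minimal sketch, with standard textbook densities assumed for illustration:

```python
# Airy isostasy: equal-mass columns at the compensation depth imply a mountain
# of height h is supported by a crustal root r = h * rho_c / (rho_m - rho_c).
rho_c, rho_m = 2800.0, 3300.0        # assumed crust / mantle densities, kg/m^3

for h_km in (2.0, 5.0, 8.0):         # mountain elevations, km
    r_km = h_km * rho_c / (rho_m - rho_c)
    print(f"elevation {h_km:.0f} km -> crustal root ~ {r_km:.0f} km")
# An ~8 km high range implies a ~45 km root, i.e. roughly doubled crust,
# which is why mountain masses and neighboring basins are complementary.
```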
Sedimentary features. Whatever the dynamic mechanism and stress-field conditions under which a basin-mountain system forms, it develops a characteristic sedimentary sequence of relatively great thickness. The mountains provide the fill for the basin; continued subsidence provides accommodation space for the transported material; and the relief between basin and mountain provides the energy and the pathways for transporting the eroded material. In basin-mountain systems of extensional stress fields, such as continental rifts, early sedimentation is dominated by lateral transport and supply, forming coarse clastic alluvial-fan accumulations; axial river-valley transport and supply follow; and finally lacustrine deposits occupy the center of the fault depression, with alkaline volcanic intercalations locally developed owing to mantle magmatism. Deposits of basin-mountain systems in compressional stress fields, such as the foreland basins of the Alps, are characterized by molasse accumulation sequences comprising a lower marine molasse and an upper continental molasse. During uplift of the mountains, coarse clastic series of conglomerate, coarse sandstone, sandstone and siltstone formed near the mountain front; fine clastic series of siltstone and mudstone formed farther out; and farther still, carbonate and evaporite series of large-lake type were deposited. The sedimentary characteristics of strike-slip pull-apart basins resemble those of continental rifts: deep basins bounded by steep strike-slip extensional faults and filled with thick sedimentary rocks are common, the subsidence and depositional centers migrate continuously, and the uplift-depression pattern is complex. Strike-slip compressional basins are often characterized by clastic supply, transport and deposition in both lateral and axial directions; their subsidence and depositional centers also migrate continuously, so that the strata in the basin center are older than those on the lateral margins. Deep structural features. The basin-mountain coupling relationship is the result of mutual adjustment between matter and structure interacting in the Earth's interior; of the layers of the Earth system, the dynamics of the lithosphere (lithospheric mantle plus crust) exert the greatest influence on basin-mountain coupling. The continental lithosphere has a multi-layered structure: the upper crust is of low density and strongly brittle; the lower crust is of higher density and strongly ductile; the middle crust is a brittle-ductile transition layer; and the lithospheric mantle has still greater density and strength. The differences among these material layers are the internal cause of multi-level decoupling and of the basin-mountain coupling it produces. Mantle convection in the asthenosphere is not only the driving force for the generation and evolution of the tectonic stress field; it can also control lithospheric deformation, composition and thickness change, and may even cause lithospheric delamination and induce de-rooting and collapse of the lithosphere, expressed at shallow levels as mountain uplift and denudation and at depth as magma migration, upwelling and underplating or delamination. Genetic mechanisms of basin-mountain coupling. Two main types of basin-mountain coupling dynamics are currently recognized, vertical and lateral. ① Basin-mountain coupling formed by a vertical stress mechanism belongs to the mantle-plume dynamic regime, in which the buoyant rise of a mantle plume or mantle-derived magma drives the vertical stress field. The rise of the Yellowstone mantle plume in North America, for example, caused the overall uplift of the Colorado craton into a plateau with an average elevation of about 1500 m, together with a north-south trending group of basins, forming the famous Basin and Range structure. In zones of crustal shortening, the loading and stacking of large volumes of crustal material depress the asthenosphere; as compression weakens, the buoyancy of the asthenosphere gradually increases, and the rising mantle buoyancy destabilizes the mountain belt, causing collapse, denudation and lateral extension and forming extensional basin-mountain coupling. ② Basin-mountain coupling formed by a horizontal stress mechanism belongs to the lithospheric-plate dynamic regime. The compressional, extensional and shear tectonic stress fields produced by the asthenosphere are the direct sources of force causing basin-mountain coupling, corresponding respectively to the coupling of foreland basins with collisional (or subduction-accretion) orogenic belts under compression, of graben with horst under extension, of pull-apart basins with mountain masses under transtension, and of flexural subsidence basins with compressional uplifted mountain masses under transpression. The coupling of foreland basins with compressional orogens is governed by the nappe loading-flexure mechanism: continued lateral compression and convergence of lithospheric plates thrust crustal rock sheets over one another along detachment surfaces, loading them vertically and stacking them into mountains; at the front of the stack, the huge mass flexes and depresses the underlying crust, forming compression-type basin-mountain coupling. This can be divided into two coupling types: the circum-Pacific oceanic-crust subduction type, and the Tethyan continent-continent or arc-continent collision type with its peripheral foreland basins.
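The nappe loading-flexure mechanism can be made concrete with the classical thin-elastic-plate result: a load emplaced at the edge of a plate floating on the mantle deflects it over a flexural wavelength set by the plate's rigidity, producing a deep foredeep against the load that shallows basinward. A minimal sketch, with generic parameter values assumed for illustration (not values from this article):

```python
import numpy as np

# Flexure of a broken elastic plate loaded at its end (x = 0, the thrust front):
# w(x) = w0 * exp(-x/alpha) * cos(x/alpha), with flexural parameter
# alpha = (4D / ((rho_m - rho_fill) * g))^(1/4).
E, nu, Te = 70e9, 0.25, 30e3              # Young's modulus, Poisson's ratio, elastic thickness
rho_m, rho_fill, g = 3300.0, 2400.0, 9.8  # mantle and basin-fill densities, gravity

D = E * Te**3 / (12.0 * (1.0 - nu**2))               # flexural rigidity, N m
alpha = (4.0 * D / ((rho_m - rho_fill) * g)) ** 0.25
w0 = 4000.0                                          # assumed deflection at the front, m

for x in np.linspace(0.0, 4.0 * alpha, 5):
    w = w0 * np.exp(-x / alpha) * np.cos(x / alpha)
    print(f"x = {x/1e3:6.0f} km -> deflection {w:8.1f} m")
# alpha here is ~95 km: the foredeep is deepest at the mountain front and
# feathers out (with a slight forebulge) over one to two hundred kilometers,
# the belt-like geometry of foreland basins described in the text.
```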
Foreland basins are distributed in belts and are characterized by rapid accumulation, great depositional thickness and abundant organic matter; they are good petroleum-generating and petroleum-storing structures and can host large oil fields. Basin-mountain systems formed by extensional stress fields include the graben-horst assemblage, the rift-basin-margin-mountain assemblage, and the back-arc basin-magmatic arc-mountain assemblage. The Mesozoic-Cenozoic rift basin-granite mountain systems of eastern China formed by back-arc extension under the subduction regime of the western Pacific. The contrast in physical properties among the brittly fracturing upper crust, the ductilely extending lower crust and the flowing asthenosphere favors stretching and slip, while the detachment zones between the three regulate and balance the amount of extension. The driving forces include convection of mantle magma, underplating of basaltic magma, delamination of the lithosphere, low-angle normal faulting and metamorphic core complexes[4], local back-arc convection, and mantle diapirism. Back-arc rift basins are characterized by large area, stable horizons, abundant organic matter and well-developed petroleum-generating and petroleum-storing structures, and large oil fields often form in them. Pure strike-slip cannot produce basin-mountain coupling. Under a horizontal stress mechanism, strike-slip with a compressional component can form compressional folded mountains and flexural subsidence basins, while strike-slip with an extensional component can form pull-apart basins and basin-margin mountains. Strike-slip pull-apart and strike-slip flexural basins are characterized by small size, great depth, thick accumulation and rich source rocks, and are important producing areas of oil and gas resources. ③ How is Chinese-style basin-mountain coupling, that is, intraplate compressional basin-mountain coupling, to be understood? According to plate theory, the orogenic belts of the Earth are formed either by continent-continent collision, arc-continent collision or terrane accretion after the closure of ocean basins, or by the subduction of oceanic plates, with matching basins formed correspondingly in the piedmont lowlands. In Cenozoic northwestern China, however, a unique type of intracontinental compressional basin-mountain coupling appeared that is unrelated to these mechanisms. Existing basin-forming theories could not explain its origin, and foreign scholars therefore called such basins Chinese-style basins[6]. Through the efforts of Chinese scholars this problem has now been basically solved. The huge Tianshan range is a Cenozoic intracontinental uplifted mountain belt, and the Tarim and Junggar basins on its two sides are peripheral basins of compressional character, coexisting with the mountains in mutual dependence. Studies have shown[7] that this phenomenon is the far-field intracontinental response to the Cenozoic India-Tibet continent-continent collision. To distinguish these basins from classical foreland basins, they are called "regenerated foreland basins", and on this basis a basin-mountain coupling conceptual model has been proposed in which the lithosphere of the Tarim and Junggar basins subducts beneath the Tianshan lithosphere and delaminates.
The outline of the model is as follows: owing to the India-Tibet collision, the lithosphere at the northern margin of the Qinghai-Tibet Plateau and the lithosphere of the Tarim Basin form a \"V\"-shaped collision structure, which pushes the high-strength lithosphere beneath the Tarim Basin northward to subduct under the Tianshan lithosphere; on the other side, under the southward compressional stress from Siberia, the lithosphere of the Junggar Basin subducts southward beneath the lithosphere of the northern margin of the Tianshan. The northward subduction of the Tarim lithosphere caused the shallow rock masses of the Tianshan to thrust southward as nappes, stacking and loading, and forming the regenerated foreland basin and regenerated foreland thrust belt on the southern margin of the Tianshan, as well as the rejuvenated Tianshan itself. The coupling mechanism of the Junggar basin-mountain pair is similar. The subduction of the cold intracontinental basin lithosphere beneath the mountains can be partially offset by the thrusting of the overlying mountain belt. Since the Pliocene, the southward thrusting of the mountains has caused the northern basin margin to deform first, forming a piedmont fold belt; deformation then migrated southward, producing multiple rows of thrusts and nappes. Both the regenerated foreland basin thrust belt and the foreland thrust belt contain many staircase-like reverse faults and fault-related folds, forming numerous structural traps. The thrusting and napping of the mountain body also control the subsidence and migration pattern of the basin, which is high in the north and low in the south: activity in the northern piedmont has ceased, but thrusting in the south is still in progress. This basin-mountain coupling model can explain the origin of many basins in western China, together with basin subsidence, mountain uplift, deep lithospheric structure, and thermal characteristics. Research prospects: \"basin-mountain coupling\" is an original idea of Chinese scholars, with no equivalent formulation elsewhere; to be recognized by the international academic community it requires further research and exploration. Although many results have been obtained, some problems remain. The outstanding ones are that the intrinsic connection between shallow basin-mountain coupling and deep lithospheric processes is not yet clear, and that a quantitative expression of the relationship between basin formation and lithospheric rheological properties is still lacking. It is believed that through in-depth research these problems will be resolved and the concept of \"basin-mountain coupling\" will gain international recognition.", "Various types of sedimentary basins develop on the continental crust, such as extensional basins, compressional basins, and strike-slip related basins. The tectonic subsidence and depositional processes of these basins have been extensively studied, and their formation mechanisms are well understood. Intracratonic basins are another type, occurring within cratons (continental blocks with stable Archaean-Proterozoic metamorphic basements); they are also referred to in the literature as intracontinental basins.
Typical intracratonic basins include the Paleozoic Michigan, Illinois, and Williston basins in North America; the Early Paleozoic basins of South China (whether or not they are true intracratonic basins is debated) and of the North China block should also be classified as intracratonic basins. Intracratonic basins not only develop relatively complete sedimentary sequences, which provide the stratigraphic record for reconstructing the tectonic evolution of the crust, but are also very important oil- and gas-bearing basins. Although intracratonic basins have been studied for a long time, the mechanism of tectonic subsidence of their basement is still unclear. Intracratonic basins have the following important characteristics: ① they develop in continental interiors, far from plate boundaries; ② in plan view they are roughly rounded, at various scales; ③ their sediments are generally thousands of metres thick; ④ the subsidence-depositional centre lies in the middle of the basin; ⑤ there are no large-scale faults at the basin margins or in the interior; ⑥ volcanic activity is poorly developed; ⑦ the tectonic subsidence curve resembles that of passive continental margin basins. To explain these geological characteristics, various genetic models have been proposed; the main subsidence mechanisms are summarized briefly below. Thermal contraction: since intracratonic basins and passive continental margins have very similar tectonic subsidence histories, some researchers attribute the tectonic subsidence of intracratonic basins to thermal contraction of the lithospheric mantle. This view assumes that upwelling of deep hot material first causes surface uplift, leading to denudation of the uplifted crust and thinning of the upper crust; as the deep hot material gradually cools and contracts, the upper crust subsides back toward its original position, and the denuded areas become depressions or basins. However, cratons have thick, stable lithosphere and poorly developed magmatism, so it is difficult to link basin subsidence to deep thermo-tectonic processes. In addition, many studies show that intracratonic basins did not experience an early uplift stage. Slow stretching: recent modelling results show that if the continental lithosphere undergoes very slow tectonic stretching, its interior will subside continuously to form cratonic basins. Excluding other factors such as water and sediment loads, the subsidence curves of cratonic basins show a near-linear gentle slope, indicating a very small strain rate (~10⁻¹⁶ s⁻¹) and a stretching factor (β) of only 1.1–1.3. It is worth noting, however, that extension should produce faults, whereas normal faults are usually not developed in cratonic basins. Sediment loading: other studies attribute the subsidence of intracratonic basins to sediment loading, arguing that such basins originate where sediments filled an earlier depression in the craton; the sediment load then promotes further subsidence, finally forming the intracratonic basin (see the sketch below).
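A minimal check of the loading argument above, using the densities quoted in the following paragraph and the standard local (Airy) compensation balance; the formula is a textbook relation, not something given in the original text.

```python
# Local isostatic amplification of sediment loading.
# Densities follow the text; the balance S = d0 * rho_m / (rho_m - rho_s)
# is the standard Airy-type result for fully compensated fill.
rho_m = 3.3   # mantle lithosphere density, g/cm^3
rho_s = 2.6   # crust / sediment fill density, g/cm^3
d0    = 1.0   # depth of the pre-existing depression, km

S = d0 * rho_m / (rho_m - rho_s)   # total accommodated sediment fill
print(f'total sediment fill : {S:.1f} km')        # ~4.7 km
print(f'extra subsidence    : {S - d0:.1f} km')   # ~3.7 km, cf. the 3-4 km quoted
```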
If the density of the lithospheric mantle is 3.3 g/cm³, the density of the crust is 2.6 g/cm³, and the depth of the early depression is about 1 km, continuous filling by sediment can drive a further 3–4 km of subsidence. This model requires a pre-existing depression, but for many intracratonic basins that assumption does not hold. Phase change: with increasing temperature and pressure, some rocks of the lower crust and upper mantle may undergo phase changes: greenschist can transform into amphibolite, and gabbro into garnet granulite or eclogite. The density of gabbro is about 3.0 g/cm³, while that of eclogite is 3.4 g/cm³. The density increase of deep crustal rocks would cause the upper crust to flex downward, forming an intracratonic basin. The phase change may be related to the intrusion of hot asthenospheric material into the lithosphere. However, because the lithosphere is markedly heterogeneous, rock compositions capable of phase change are not necessarily present everywhere, and a phase change or density increase in only a small volume of material can hardly cause flexural subsidence at the surface. Intraplate deformation: under compressive stress from the continental margin, the crust or lithosphere of the continental interior may buckle into longitudinal folds, with the fold synclines forming intracratonic basins. Relevant studies show, however, that the tectonic stress required to fold the crust is very large; compressive stress transmitted from the continental margin to the interior cannot directly produce folded syncline basins, and long-distance transmitted compressive stress usually only deepens pre-existing depressions. If the compressive stress persisted, structural nappes thrusting toward the basin interior would appear at the basin margins, yet large thrust faults are not developed at the margins of typical cratonic basins. The above analysis shows that the genetic mechanism of intracratonic basins remains unclear, and none of the proposed models can reasonably explain their basic characteristics and evolution. The formation and evolution of intracratonic basins may involve several dynamic processes acting in combination; exploring the interrelationships between the different mechanisms should therefore be the key to studying their genesis.", "Introduction: the famous Austrian geologist Suess (1893), in the paper \"Are the depths of the ocean permanent?\", introduced the Greek goddess Tethys into the geological literature for the first time. He envisioned Tethys as a now-vanished ocean that stretched from the East Indies (now Indonesia) through the Himalayas to Asia Minor and on to Europe, in whose place now rise sky-reaching mountains spanning Europe and Asia: the ranges of Tibet, the Himalayas, and the Alps. In his later masterpiece \"The Face of the Earth\", based on a classification of the global marine Triassic, and especially a comparison of the oceanic Triassic, he once again clearly expressed this assumption, and he further linked Tethys to the continent of Gondwana.
It is generally believed that it was Neumayr and related scholars who foreshadowed the Tethys proposition. When Neumayr (1885) summarized the global Jurassic system and its paleobiogeography, he found that a seaway connected the Caribbean with Myanmar, which he named the \"Central Mediterranean\" because it was sandwiched between the southern and northern continental groups. From the very beginning, then, the Tethys proposition has been a major global problem involving all aspects of earth science, and it has remained a prominent geoscience question for a century. Origin: when did Tethys begin to form? The \"Central Mediterranean\" outlined by Neumayr ran between the two continental block groups during the Jurassic, so its age is Jurassic; the Tethys defined by Suess in 1893 is Triassic. Huang Jiqing (1945, 1987) pointed out that since Suess regarded Tethys as a wide sea north of the Gondwana continent, it should have started to form in the late Paleozoic, at the latest in the Permian, because Gondwana is the continental group characterized by the Carboniferous-Permian Gondwana flora. He further divided Tethyan history into three periods: Paleo-Tethys (Carboniferous-Permian), Meso-Tethys (Mesozoic), and Neo-Tethys (Cenozoic). Bullard et al. (1965) and Smith and Hallam (1970) successfully fitted together the continents around the Atlantic and Indian oceans with the help of computers, confirming the plausibility of the united supercontinent conceived by Wegener and revealing a triangular embayment opening eastward into that supercontinent. Dietz and Holden (1970) held that the supercontinent existed in the Permian and that the eastward-opening embayment was Tethys. Stocklin's (1974, 1977, 1989) geological work in Iran showed that, in addition to the Mesozoic oceanic record of the Zagros Mountains (the Tethys first identified by Suess), the Alborz Mountains to the north preserve a Paleozoic marine record, the two juxtaposed as distinct geological units; he called the former Neo-Tethys and the latter Paleo-Tethys. Subsequent studies confirmed the existence of Paleo-Tethys and Neo-Tethys in Turkey (Şengör, 1984, 1989, 1990). In summary, although interpretations differ, a consensus has been reached that Tethys developed from the Late Paleozoic onward. But is there an even older Tethys? Here the answers diverge. In Europe, remnants of an early Paleozoic ocean have been discovered, which some scholars call the Central European Ocean and others the \"Proto-Tethys\", that is, the most primitive Tethys (still others call it the pre-Hercynian ocean). Dewey and Bird (1970) held that an Ordovician ocean existed between Laurentia of North America, Baltica of Europe, and Africa, which they called \"Proto-Tethys\". In China, Ordovician marine relics have been found in the Kunlun and Qilian mountains and are considered \"Proto-Tethyan\" (Pan Yusheng, 1994). How did Tethys begin to form? Clearly, the different understandings of Tethys above raise major questions: did Tethys form by the breakup of the united supercontinent, or did it evolve from the older Proto-Tethys? Is there a major stage of geological evolution represented by the united supercontinent between Proto-Tethys and Tethys?
Should Proto-Tethys and Tethys be placed in a single, unified Tethyan developmental sequence? The German geoscientist Wegener (1912, 1915) proposed the hypothesis of continental drift. He held that in the late Paleozoic the continents were assembled into one supercontinent, Pangea; after the Jurassic it split apart and the fragments drifted into their modern configuration. Later, Staub (1928) and du Toit (1937) revised this view, arguing that before the supercontinent disintegrated it comprised two parts, Laurasia in the northern hemisphere and Gondwana in the southern hemisphere, with the Tethys Sea in between. As noted above, Bullard et al. (1965), Smith and Hallam (1970), and Dietz and Holden (1970) accepted the existence of the united supercontinent, with the eastward-opening embayment being Tethys. Scotese's (2002) recent global paleogeographic research also shows a united supercontinent in geological history (Figure 1), with his Paleo-Tethys and Tethys roughly equivalent to the Paleo-Tethys and Neo-Tethys of Stocklin and Şengör. It seems, then, that the day the supercontinent assembled was also the day Tethys began. Paleo-Tethys, however, was not a simple triangular embayment opening eastward into the supercontinent: the major landmasses of East Asia, such as North China, South China, and Indochina, all floated within this semi-enclosed gulf. Aubouin et al. (1980) sketched the opening history of Tethys: they held that Tethys started from a \"permanent Tethys\" in the western Pacific (the Paleozoic Pacific) and opened scissors-like from east to west across the supercontinent, lying between western Eurasia and the African continent in the Triassic and evolving into a \"regenerated Tethys\" between North and South America in the Early-Middle Jurassic. However, because of the superposition and reworking of the modern western Pacific, how the \"permanent Tethys\" began its journey toward Tethys remains a mystery. Geological history is punctuated by repeated assemblies and breakups of ancient continents (for example, the Proterozoic Rodinia), and the existence of the united supercontinent suggests that Proto-Tethys corresponds to an earlier cycle of assembly and breakup. Proto-Tethys and Tethys are thus geological events of different times and places and do not appear to form a single evolutionary series. Yet was there a Tethys Ocean that evolved from Proto-Tethys? This question remains unresolved, and the restoration and reconstruction of Proto-Tethys requires further in-depth research. Figure 1: Late Permian (255 Ma) global paleogeographic reconstruction (after Scotese, 2002); PANGEA: united supercontinent; PALEO-TETHYS OCEAN: Paleo-Tethys Ocean; TETHYS OCEAN: Tethys Ocean; PANTHALASSIC OCEAN: Panthalassa. Evolution: it is generally believed that Tethys comprises Paleo-Tethys and Neo-Tethys. What, then, is the evolutionary relationship between the two? As mentioned above, in West and Central Asia the Paleo-Tethys lies to the north and the Neo-Tethys to the south. Studies show that between the two lies a ribbon-like continental group, which Şengör named the Cimmerian continent.
Şengör held that it was the Cimmerian continent, which separated from the northern margin of Gondwana, drifted northward, and rotated counterclockwise, that closed the Paleo-Tethys on its north side and opened the Neo-Tethys on its south side. Later Gondwana disintegrated further, and the detached Africa and India converged on and collided with Eurasia, whereupon Tethys finally died out. As noted above, however, many East Asian landmasses lay free within the Paleo-Tethys Ocean, and some were also related to the Paleo-Pacific. They are characterized by the Cathaysian paleoflora, belonging neither to Gondwana nor to Laurasia, and certainly not to the Cimmerian continent; we call them the Cathaysia group (Chen Zhiliang, 1994). Here the closure of Paleo-Tethys in some areas had nothing to do with the opening of Neo-Tethys, and in some areas the Paleozoic Tethyan ocean persisted into the late Mesozoic. It should also be mentioned that Paleozoic to Mesozoic paleo-oceanic systems have been recognized along the East Asian margin bordering the Pacific, from Nadanhada and Sikhote-Alin through Japan and the Ryukyus to Taiwan and Palawan (Mizutani Shinjiro, 1989; Shao Ji'an et al., 1991), providing a new opportunity for further research on Tethys and its relationship with the ancient Pacific. In addition, the Caribbean Tethys, \"Atlantic Tethys\", and \"American Tethys\" of the Western Hemisphere each followed their own unique evolutionary trajectories, which hold many unsolved mysteries. It goes without saying that a remarkable chapter in the evolution of Tethys has yet to be revealed. Resources and environmental effects: the formation and demise of Tethys produced a variety of natural resources, including metals, non-metals, and energy, as well as environmental consequences. Among the mineral resources, oil and gas are the most prominent: the oil and gas resources of the Tethyan belt account for 68% of the world total (Klemme et al., 1991). Within the belt, of more than 80 important sedimentary basins, 24 are large oil and gas basins that account for 97% of the belt's total oil and gas reserves; among them, the Central Arabian and Zagros basins of the Middle East alone account for 71.5% of the belt's total (Zhao Chongyuan, 2000). Why does the Tethyan belt control such a concentrated distribution of oil and gas resources? Where will the next giant oil and gas fields be found? These resource questions have become lingering puzzles for geological explorers. The Alps-Himalayas and the Qinghai-Tibet Plateau formed by the demise of Tethys have profoundly affected the global landscape, environment, and ecology, and have induced a series of natural hazards; the resulting aridification of Northwest China contributed to the formation of the Loess Plateau; the rise of the Qinghai-Tibet Plateau has also directed China's great rivers eastward, making soil erosion a very prominent problem. These processes are still continuing, even intensifying, and the environmental effects of Tethys have become a pressing environmental issue that we must face.", "Ultrahigh-pressure (UHP) metamorphism is one of the most significant breakthroughs in solid earth science in recent years. It not only broadens the scope of metamorphic research but also makes clear that crustal rocks can be subducted to mantle depths and then returned to the surface.
Such a round trip of subduction to depths of hundreds of kilometres and return has prompted geoscientists to ask: how were these rocks subducted into the upper mantle, and how did rocks subducted to such depths return to the surface? UHP metamorphism was first discovered by the French metamorphic petrologist Chopin in eclogite-facies metasedimentary rocks of the Alps[1]. It means that the temperature-pressure conditions of metamorphism reach or exceed the stability field of coesite, and it is marked by the appearance of coesite, diamond, and other UHP minerals in metamorphic rocks. Coesite is a high-pressure polymorph of quartz; the lithostatic pressure at which it forms is generally greater than 2.6 GPa (at 600 °C). It usually occurs as mineral inclusions within the main metamorphic minerals of eclogite. Because coesite partially retrogresses to quartz with an increase in volume, radial cracks develop in its host mineral (Figure 1). Compared with the classic concept of metamorphism, which emphasizes changes of temperature and pressure within the crust, the discovery of coesite in metamorphic rocks extends the scope of metamorphic research from the crust to the upper mantle, and it has likewise pushed plate tectonic theory from an emphasis on horizontal motions of the lithosphere toward the study of crust-mantle interaction. Figure 1: the earliest reported coesite inclusion in garnet[1] (from the Dora Maira metasedimentary rocks, Western Alps); coes: coesite; qz: quartz; gt: garnet. So far, more than 20 occurrences of UHP metamorphic rocks containing coesite or diamond have been found in various orogenic belts. Most have protoliths of continental-crust affinity; oceanic-crust rocks that underwent UHP metamorphism are comparatively rare. China exposes UHP metamorphic rocks relatively completely; the belts identified to date are, from east to west, the Sulu-Dabie UHP metamorphic belt[2], the North Qinling[3], the North Qaidam margin-Altyn Tagh[4~6], and the Southwest Tianshan UHP metamorphic belt[7,8], including both continental and oceanic types. According to the peak temperature of UHP metamorphism and the mode of occurrence of the rocks, UHP metamorphism can be divided into three categories: low-temperature, intermediate-temperature, and high-temperature UHP metamorphism[9]. ① Low-temperature UHP metamorphism: the peak temperature is below 600 °C, and it is characterized by the appearance of glaucophane-bearing eclogite. In geological occurrence, this kind of UHP eclogite is associated with blueschist and corresponds to Coleman's C-type eclogite. The protoliths belong to a typical oceanic-crust assemblage, and well-preserved examples show typical pillow-basalt structure. Low-temperature UHP eclogites are rarely found; typical examples discovered so far include the Zermatt-Saas belt of the Western Alps and the eclogite-blueschist belt of the Southwest Tianshan in China. This type of UHP metamorphism generally occurs in cold subduction zones, with geothermal gradients around 5 °C/km, so it is also called cold-subduction-zone metamorphism.
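As a rough consistency check on the figures just quoted (2.6 GPa; ~5 °C/km), pressure can be converted to an equivalent lithostatic depth via P = ρgh. A minimal sketch follows; the overburden density is an assumed round number, not a value from the text.

```python
# Convert the coesite stability pressure to lithostatic depth, then see what
# a "cold" subduction geotherm implies at that depth.
g      = 9.8      # gravity, m/s^2
rho    = 3.0e3    # mean overburden density, kg/m^3 (assumption)
P_coes = 2.6e9    # minimum pressure for coesite stability, Pa (from text)

depth_km = P_coes / (rho * g) / 1e3
print(f'lithostatic depth for 2.6 GPa : ~{depth_km:.0f} km')   # ~88 km

T = 5.0 * depth_km   # ~5 C/km cold-subduction gradient (from text)
print(f'temperature at that depth     : ~{T:.0f} C')           # ~440 C
```

The result, roughly 90 km depth at only ~440 °C, is consistent with the statement that low-temperature UHP metamorphism (peak T below 600 °C) records cold subduction.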
Cold subduction of a plate produces low-temperature UHP metamorphism and can carry a large amount of aqueous fluid into the mantle; these fluids are a principal cause of changes in the geophysical and geochemical properties of the mantle and of geological hazards such as earthquakes and volcanism. Studies suggest that the most likely mineral carriers of surface-derived water into the mantle are hydrous minerals such as phengite (Si-rich white mica), lawsonite, and antigorite, together with dense hydrous magnesium silicate phases (DHMS) and nominally anhydrous minerals (NAM) containing trace water. Current research on this type of low-temperature UHP metamorphism focuses on how such UHP eclogites rise to the surface, that is, how oceanic crustal rocks of relatively high density return through continental crust of relatively low density, and on what geochemical processes occur during cold-subduction UHP metamorphism. ② Intermediate-temperature UHP metamorphism: the peak temperature is 600~900 °C, usually characterized by the occurrence of kyanite-bearing eclogite and gneiss. This is the most widely exposed type of UHP rock reported so far, examples being the Sulu-Dabie and North Qaidam margin-Altyn Tagh UHP belts in China, the Caledonian UHP belt in Norway, the UHP belt of the Bohemian Massif in Europe, and the UHP rocks of the Kokchetav Massif in Kazakhstan. The common field characteristic of this type of belt is that the UHP eclogite is associated with surrounding high-grade metamorphic gneiss, and the relationship between the eclogite and its host gneiss has been disputed. The UHP mineral coesite has now been found in the gneiss surrounding the eclogite[10,11], providing an answer to this dispute. The protoliths of this type of eclogite are continental crustal rocks, so the eclogite is regarded as a typical product of deep continental subduction[12,13]. Research on such UHP rocks remains the mainstream of UHP metamorphic studies. The questions needing further study are how the deep continental subduction recorded by UHP eclogites occurs, how crustal rocks are subducted to mantle depths where the surrounding density is relatively high and then returned to the surface, and whether such deep continental subduction is a general or a local phenomenon in geological history. ③ High-temperature UHP metamorphism: the peak temperature is around 900 °C or above, meaning that UHP metamorphism took place under ultrahigh-temperature conditions; this type is therefore also called metamorphism in the Earth's deep interior. Research on this type focuses mainly on garnet-bearing mantle peridotites and eclogites in orogenic belts.
There are currently two contending views on this type of UHP metamorphism: one holds that mantle rocks emplaced earlier within the continental crust were subducted to mantle depths together with the continental crustal rocks and then returned to the surface; the other holds that rocks of the original mantle were carried to mantle depths by the subducting plate, underwent UHP metamorphism, and then returned to the surface. Research on metamorphism in the Earth's deep interior bears on our overall understanding of the Earth's internal structure and has long been a hot field of solid earth science; current questions include the relationship between these mantle-derived rocks and the subducted crustal rocks, and the maximum pressure attained by this type of UHP metamorphism. The recognition of UHP metamorphism has challenged many aspects of solid earth science and thereby advanced plate tectonic theory. It is now generally accepted that UHP metamorphism during deep continental subduction occurs because deeply subducted oceanic crust drags the trailing continental crust down to upper-mantle depths (Fig. 2a); after the slab breaks off at the ocean-continent junction, the deeply subducted continental crust, being of lower density than the surrounding mantle rocks, is uplifted under continued tectonic compression and folded back to the surface (Fig. 2b). This model was proposed with the Sulu-Dabie UHP belt, the largest in the world, as its example. Although it successfully explains the formation of the Dabie UHP belt, many problems remain unexplained. Figure 2: tectonic evolution model of deep continental subduction and UHP metamorphism[14] (taking the Dabie UHP belt as an example): the Yangtze plate subducts beneath the Sino-Korean plate, and the subducted oceanic crust drags the Yangtze craton to upper-mantle depths beneath the Sino-Korean plate, where UHP metamorphism occurs; when oceanic and continental crust separate at their junction, the subducted continental crust, owing to its buoyancy and under continued convergent compression, is uplifted and returned to the surface. For example, why is there no evidence in the Dabie Mountains of a contemporaneous stage of oceanic-crust subduction? Were the UHP belts of the Dabie Mountains, hundreds of kilometres long, subducted and exhumed as a single whole? UHP metamorphism has now been studied for 20 years, yet our understanding of it is still quite limited. Real solutions to the problems above require further in-depth, multidisciplinary, and integrated research. It can be expected that deeper study of UHP metamorphism will push solid earth science to a new climax.", "To the south of the ancient Siberian and East European blocks, and to the north of China's Tarim and North China blocks, there once lay a vast ocean, known in geology as the \"Paleo-Asian Ocean\".
Over roughly a billion years, as the Paleo-Asian Ocean progressively shrank, accretionary wedges, island arcs, and other micro-blocks were successively welded onto the outer margin of Siberia, which thus grew outward and expanded to form the majestic Central Asian Orogenic Belt in central Asia. The Central Asian Orogenic Belt records the accretionary orogenic processes between the southern margin of Siberia and the Tarim and North China blocks. Its tectonic evolution since the Paleozoic represents an important stage in the southward growth of the Asian continent and in the Phanerozoic evolution of the Paleo-Asian Ocean. Central Asian orogenesis is characterized by the amalgamation of many blocks; it embodies important scientific issues such as strong Phanerozoic continental-crust growth and bears on current theoretical questions about continental orogen models[1~5]. At the same time, this distinctive accretionary orogenesis created the Central Asian metallogenic domain and its oil and gas resource base. However, no consensus has been reached on the tectonic evolution of the Central Asian Orogenic Belt since the Paleozoic; in particular, the existence of ancient continental blocks and their contribution to crustal growth have long been controversial. Figure 1: simplified tectonic map of the Central Asian Orogenic Belt[3]. There are several different views on the nature of the micro-blocks within the Paleo-Asian Ocean: one holds that ancient blocks with enigmatic Precambrian crystalline basement exist, such as the Kazakhstan and Tuva-Mongolian micro-blocks[6,7]; another holds that the basement of the Paleo-Asian ocean basin consists mainly of Paleozoic restricted ocean basins (small ocean basins), incipient ocean basins, and island arcs, without excluding some small intervening ancient continental masses[2,3,8,9]. Figure 2: tectonic-paleogeographic pattern of the Devonian Paleo-Asian Ocean[5]. The main reason for these divergences is that the Central Asian Orogenic Belt experienced long-lasting accretionary orogenesis and underplating by mantle-derived magmas, which strongly reworked the Central Asian continent[1~3]; moreover, in some special blocks of Central Asia, such as the Junggar block, the underlying crust is Paleozoic oceanic crust and no large rigid ancient basement exists[9]. Recent research shows that the main body of the Tuva-Mongolian block consists of late Proterozoic-early Paleozoic continental-margin accretionary complexes[10]. One difficulty in resolving these disputes is that most of Central Asia is covered by very thick Mesozoic and Cenozoic sediments, so extensive direct observation is impossible, and understanding of the original configuration rests largely on geophysical surveys and comparative geology of the basin margins. A second difficulty is that different scholars interpret the existing geological and geophysical data differently. ① Whether Precambrian strata or ancient metamorphic rock series exist: recent detrital zircon data suggest an older age record, but whether true Precambrian rocks exist remains inconclusive. ② Interpretations of the geophysical data differ, especially of the aeromagnetic and gravity anomaly data.
Some scholars attribute the hidden magnetic bodies to a Precambrian crystalline basement; others attribute them to intermediate-basic volcanic rocks or to oceanic-ridge tholeiites. In recent years there has been new progress in the study of the geological and geophysical character of the Junggar block. On the geochemical and isotopic evidence of intermediate-acid plutons, some researchers now accept the proposition that \"the basement of the Junggar block is essentially Paleozoic oceanic crust, though the possibility of small continental blocks within the ocean basin cannot be ruled out\". The tectonic nature of these blocks, however, is still shrouded in mystery. To unravel the mystery of the Junggar and other Central Asian micro-blocks, it will be necessary to combine systematic analysis of the strata, structure, sedimentation, and magmatic activity of the basins and surrounding orogenic belts with newly acquired petro-geochemical and geophysical data, and in particular to carry out drilling step by step, so as to provide a theoretical basis for the further development of the mineral and oil and gas resources of western China and Central Asia.", "Driven by the needs of research on and prevention of earthquakes and other hazards, and by engineering and urban safety, active tectonics, which studies the tectonic activity closest to the present, has developed by leaps and bounds in recent years and has combined ever more closely with observations of modern crustal motion (including modern space-based Earth observation) to form a very active emerging branch discipline[1]. Active structures are structures that have been active since the late Pleistocene, are still active now, and will remain active for some time into the future, such as active faults, active folds, active basins, active volcanoes, and the crustal and lithospheric blocks they bound. Because active tectonics represents the latest tectonic activity, closest to the present moment in the history of tectonic and neotectonic development, it is most closely tied to studies of environmental evolution, resource evaluation, and natural hazards, and it is one of the main foundations for the study of modern geodynamics. In the 1970s, the seismic safety requirements of newly built nuclear power plants and large hydropower stations in the United States drove great progress in active tectonics and seismic hazard assessment, and the basic framework of active tectonics research was established. The most representative classic works include Sieh's[2,3] research on the slip rate and paleoseismic history of the San Andreas fault; Schwartz and Coppersmith's[4] research on the activity habits of the Wasatch fault; and Wallace's[5] research on the earthquake behavior of active faults in the Basin and Range Province. In particular, Sieh[2] identified 14 paleoseismic events in trenches at Pallett Creek on the San Andreas fault, recovered the fault's activity history over the past several thousand years, and pioneered the use of trenching to study Holocene paleoearthquakes on active faults, opening a new frontier in earthquake research.
These studies not only advanced earthquake geology itself but also stimulated development across earth science, creating new frontier fields such as paleoseismology and fault segmentation. After censuses of many active tectonic belts in different regions during the 1960s and 1970s, China entered a new stage of quantitative research on active tectonics in the 1980s. Through special studies of several important active tectonic belts, especially practice on the Haiyuan active fault zone, and drawing on the experience of regional geological mapping, a 1:50,000 active-tectonics mapping technique was developed and, in the 1990s, extended to nearly 20 major active tectonic belts across the country[6]. The work addresses geometry and internal structure, kinematics and slip rates, paleoearthquakes and large-earthquake recurrence intervals, seismic rupture zones and coseismic displacements, segmentation and rupture processes, deformation mechanisms and dynamics, and seismic risk assessment. Dozens of active fault zones, active fold zones, and active basin zones have been studied nationwide, thousands of quantitative geometric and kinematic data have been obtained for these belts, and a new 1:4,000,000 map of active tectonics in China has been compiled[7]. The most important and most difficult topics in active tectonics are fault activity habits, paleoearthquakes, and late Pleistocene dating. Fault activity habit: fault activity habit refers to the basic characteristics of fault movement, such as its mode, rate, amplitude, history, and active period. It embodies the internal relationship between fault activity and the earthquake-generating process and can reveal the spatial and temporal regularities of earthquake activity. Quantitative data provided by active-fault research, such as slip rate, coseismic displacement, recurrence interval, and elapsed time, are the basic data for establishing earthquake recurrence laws. The fault slip rate is the rate at which a fault slips over a given period. It represents the long-term, average activity level of a fault zone and can be used to compare the relative activity of different fault zones; because it also reflects the rate of strain-energy release on a fault zone, it is often used in probabilistic seismic hazard evaluation of faults. Coseismic displacement is the surface displacement produced by a single earthquake; it reflects the energy of that earthquake and can thus be used to estimate the maximum magnitude. Dividing the coseismic displacement by the fault slip rate gives the recurrence interval of the largest earthquake. The recurrence interval is the time required for the characteristic (largest) earthquake to recur in place on a fault zone. There are direct and indirect methods for determining it. The direct method identifies paleoseismic events in trenches and dates them; the best example is the paleoseismic study at Pallett Creek on the San Andreas fault[2]. The indirect method divides the coseismic displacement by the slip rate[4,5], as in the sketch below.
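A minimal sketch of the indirect method just described; both input numbers are illustrative assumptions, not measurements from the text.

```python
# Indirect estimate of the recurrence interval of the largest earthquake:
# average recurrence = per-event coseismic displacement / long-term slip rate.
slip_rate_mm_yr = 5.0   # long-term fault slip rate, mm/yr (assumed)
coseismic_d_m   = 4.0   # per-event coseismic displacement, m (assumed)

recurrence_yr = coseismic_d_m * 1e3 / slip_rate_mm_yr
print(f'average recurrence interval ~ {recurrence_yr:.0f} years')  # 800 years
```

Comparing this interval with the elapsed time since the last event is the basis of the probabilistic hazard evaluation mentioned above.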
Fault segmentation became a new field of active tectonics in the middle and late 1980s. Its premise is that a large fault can be divided into several mutually independent segments, each of which can be regarded as an independent rupture unit with its own activity history, the rupture of one segment neither controlling nor being controlled by the rupture of adjacent segments[4,8]. The segmented behavior of active faults is of great significance for earthquake hazard assessment. Paleoearthquake research: paleoseismology is a young branch of earth science that took shape in the 1970s. It is regarded as the most productive field in the study of the latest tectonic activity and earthquake hazard prediction, and it remains a leading research frontier today[1]. Paleoearthquake research identifies, from offsets preserved in Quaternary strata and from other earthquake-related geological and geomorphic evidence, the signs of paleoearthquakes that occurred before historical records, and determines their ages, frequencies, and intensities, thereby answering two key questions of hazard prediction: when did prehistoric earthquakes occur, and how often did they recur[4]. Large earthquakes in mainland China often have in-place recurrence intervals of thousands of years, whereas reliable historical earthquake records span only decades to hundreds of years[7,8]. Obviously a time window of a few decades of historical records cannot adequately represent recurrence behavior with periods of thousands of years. For seismic hazard prediction, paleoearthquake results along active fault zones largely make up for the shortness and limitations of the historical record, allowing us to understand long-term fault activity habits over several earthquake recurrence intervals and to estimate the likely timing of future earthquakes. In recent years, paleoearthquakes on different types of active structures have been studied with micro-geomorphological and trenching methods, and multiple classes of paleoseismic indicators, structural, sedimentary, geomorphic, and secondary, have been summarized. Using these indicators, multiple paleoearthquakes have been identified in many active tectonic belts, their recurrence intervals obtained, and paleoearthquake chronologies compiled. Studies show that the repeated rupture and displacement of a single fault segment, that is, the recurrence of paleoearthquakes on it, may follow the characteristic-earthquake or quasi-periodic mode, while the paleoearthquake recurrence of a whole active fault zone or active tectonic belt may be an irregular, clustered rupture process (the cluster mode); a simple way to distinguish these modes is sketched below.
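One common quantitative way to separate the recurrence modes named above is the coefficient of variation (CV) of the inter-event times in a paleoearthquake sequence. The event ages below are invented purely for illustration; the CV thresholds are the usual rule-of-thumb interpretation, not values from the text.

```python
import numpy as np

# Hypothetical paleoearthquake ages (ka before present, oldest first)
event_ages_ka = np.array([9.8, 7.9, 6.1, 4.2, 2.0, 0.1])
intervals = -np.diff(event_ages_ka)   # inter-event times, kyr

cv = intervals.std(ddof=1) / intervals.mean()
print(f'mean interval = {intervals.mean():.2f} kyr, CV = {cv:.2f}')
# CV << 1 : quasi-periodic (characteristic-like) recurrence
# CV ~ 1  : random (Poisson-like) recurrence
# CV >> 1 : temporally clustered recurrence
```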
Late Pleistocene dating: determining the ages of late Pleistocene sediments and geomorphic surfaces is an indispensable part of active tectonics, and important progress has been made in recent years[9], thanks largely to considerable advances in dating methods and techniques. These include accelerator mass spectrometry ¹⁴C dating, thermal ionization mass spectrometry uranium-series dating (TIMS), cosmogenic-nuclide exposure dating, U-Th/He dating, single-aliquot and single-grain OSL dating, and laser microprobe Ar-Ar dating, together with innovations to the original methods, which make high-precision dating of trace, micro-area, and single-grain samples possible. Such work provides not only one-dimensional time information but also two-dimensional (time-temperature), three-dimensional (time-temperature-pressure), and even multi-dimensional (time-temperature-pressure-isotopic tracer) quantitative information on geological processes. The application of high-precision dating has deepened research on paleoearthquake event ages and active-fault slip rates and provided a basis for earthquake risk prediction. In addition, in recent research on continental dynamics, especially the intensively studied evolution of the Qinghai-Tibet Plateau and intracontinental orogenic processes, chronological methods and geochemical techniques using elemental and isotopic tracers have become important means of reconstructing tectonic processes. By studying isotopic and elemental changes and their mechanical mechanisms, inverting the dynamic mechanisms and tectonic processes they may represent, and constraining their timing, a solid foundation has been laid for a comprehensive and profound understanding of neotectonic deformation. The main difficulties in active tectonics lie in two aspects: how to obtain diverse observational data so as to improve their reliability, and how to use new techniques and methods to determine the ages of sediments and landforms accurately. Future research will focus mainly on the following: integrating geological, geochemical, geomorphological, and geodetic methods to determine reliably the slip rates of faults in different periods, the spatial and temporal evolution of fault activity, and the completeness of the paleoearthquake history of fault zones; using high technology to improve the dating precision of paleoearthquake events; establishing theoretical models of earthquake recurrence intervals for fault zones and regions; and using the results of active tectonics and present-day tectonics to study continental dynamics.", "China has the largest number of continental earthquakes of any country: with 7% of the world's land area, it experiences 33% of the world's continental earthquakes[1]. Since the second half of the 20th century, earthquake deaths in China have reached 285,000, accounting for 54% of the deaths caused by the seven major natural disasters in China over the same period[2].
The magnitude 7.8 Tangshan earthquake of 1976 instantly reduced a new industrial city of a million people to ruins, killing 240,000 people, seriously injuring 160,000, and causing direct economic losses of more than 10 billion yuan[3]; the magnitude 8 Wenchuan earthquake in Sichuan in 2008 caused nearly 80,000 deaths and economic losses of more than 800 billion yuan. Why do so many strong earthquakes occur in mainland China? What are their spatial characteristics, and what factors control their occurrence? Mainland China lies in the southeastern Eurasian plate, hemmed in by the Indian, Pacific, and Philippine Sea plates. The interactions between these plates, together with deep geodynamic processes within the plate, have created active structures of different types, different states of motion, and different mechanical properties in mainland China, and these control the spatial distribution of its strong earthquakes[4,5]. One of the most notable features is that large late Quaternary active faults are well developed, cutting mainland China into active blocks of different orders[5~7]; nearly all earthquakes above magnitude 7 have occurred on the boundary zones of these active blocks (Figure 1). Active blocks are geological units that are divided and bounded by tectonic belts that have been continuously active since the late Cenozoic (3 to 5 million years ago) and through the late Quaternary (the past 100,000 to 120,000 years), and that move in a relatively unified fashion[8]. Tectonic activity is strong at active-block boundaries and relatively weak in block interiors, and most strong earthquakes occur in the active tectonic belts along block boundaries. Active-block boundaries may coincide with block boundaries from earlier geological history, or they may be new and discordant with the old boundaries. Active blocks are hierarchical: first-order blocks may contain sub-blocks, but the deformation between different blocks, and between blocks of different order, is coordinated within a larger regional framework. Internal deformation of active blocks takes two forms: one is relative stability, without large-scale internal deformation; the other is relative motion between internal secondary blocks, with a certain level of tectonic activity whose intensity and frequency are nonetheless much lower than those of the boundary belts. Active-block motion is driven both from the plate boundaries and from depth; the basal boundaries of blocks are controlled by detachment zones at different levels, and differences in deep dynamics produce differences in shallow brittle deformation and strong-earthquake activity. The fact that the vast majority of earthquakes above magnitude 7 in mainland China occur on active-block boundaries indicates that block motion and block interaction are the direct controlling factors in the preparation and occurrence of earthquakes. The earthquake process involves two interrelated fundamental elements, the tectonic background and the seismogenic environment[9].
The tectonic background refers to the large-scale dynamic setting that supplies the energy for earthquakes, including the driving forces at plate boundaries, the drag of the mantle or asthenosphere on the brittle lithosphere above, strain transfer, interactions between different layers of the lithosphere, and interactions between active blocks. The seismogenic environment refers to the local conditions for strong earthquakes, which depend on the structural geometry, the physical properties of the medium, fault activity habits, the degree of strain accumulation, and the earthquake recurrence behavior of the area concerned. Earthquakes are the sudden instability and rupture that follow continuous strain accumulation, up to a limiting state, on discontinuities under the regional tectonic stress supplied by the tectonic background[10]. Strong earthquakes therefore tend to occur where discontinuous deformation is most intense, namely on the fault systems that cut the crust. In particular, the fault zones that form active-block boundaries, because they cut deep into the crust and are highly continuous, favor the accumulation of high strain and the preparation of large earthquakes; this may be an important reason why most strong earthquakes occur in active-block boundary zones. Figure 1: map of the main active faults, active blocks, and strong earthquakes in mainland China[8]; black lines are main active faults; black dots are historical earthquakes of magnitude 7 and above; orange lines and irregular areas are the boundary zones of first-order active blocks; light blue lines and irregular areas are the boundary zones of second-order active blocks. Much has been learned about the spatial distribution and causes of earthquakes, but many key scientific questions remain poorly understood: for example, the three-dimensional geometry of the active blocks, their states of motion and activity habits, the dynamic processes and activity histories of the boundary belts, the physical processes by which strong earthquakes are prepared and occur, and the relationship between earthquake preparation and boundary-belt dynamics. Research on such questions is of great significance for understanding how strong earthquakes are prepared and occur in mainland China. Starting from the most basic characteristics of late Quaternary tectonic deformation in mainland China, and comprehensively using geophysical, geological, geodetic, and geochemical methods to determine the outlines of the active blocks and their boundary faults, their deep-shallow coupling relationships, and the deep geophysical background, then characterizing the motion of the active blocks and exploring the relationship between block interaction and strong-earthquake occurrence, is an effective path toward understanding the mechanism of continental strong earthquakes and advancing earthquake prediction.", "Oil and gas are extremely important strategic resources. Fluctuations in oil prices, major oil and gas discoveries, the emergence of important new technologies, major mergers and acquisitions among companies, and policy changes in producing countries all touch the nerves of the entire international community.
Recently there have been two sharply different schools of thought on the world's oil and gas resource potential and development prospects. Optimists believe that the world holds enough oil and gas for humanity to use until new alternative energy sources emerge; pessimists believe that oil and gas are already scarce and that the industry has entered its twilight. How much oil and gas the world holds, and for how long it can be produced, have gradually become questions of general international concern. Oil and gas resources are the natural accumulations of petroleum formed by geological processes in the Earth's crust, including crude oil, natural gas, natural gas liquids, and their associated substances. The quantity, distribution, and quality of oil and gas resources are an important basis on which a country formulates energy policy and macro-development strategy, and on which an oil company formulates its development plans. Many countries and multinational oil companies attend not only to their own oil and gas resources but also to those of the world. Resource assessment is carried out under the guidance of existing scientific theory, using technical methods appropriate to the degree of knowledge available; assessed resource values therefore change with the development of theory, the improvement of understanding, the progress of methods, and changing economic conditions. Because the distribution of oil and gas in the crust is affected by many factors, resource prediction involves many uncertainties that are difficult to recognize in advance, and so far no method can accurately calculate the undiscovered oil and gas of a given area. Consequently many methods for estimating undiscovered oil and gas reserves (resources) have appeared, with nearly 30 in common use. The United States Geological Survey (USGS) has set up a World Energy Resources Project team for world oil and gas assessment and is currently one of the authoritative bodies in this field. In its assessments the USGS stipulates that resources must be economically recoverable and defines a time frame, counting only the resources to be discovered, and the reserve growth of known fields, that are meaningful within the next 30 years. The USGS advocates analogy or statistical methods (extrapolation from existing exploration and production data) rather than geochemical methods that directly calculate the volumes of hydrocarbons that may have been generated, migrated, accumulated, and preserved. This is because: ① in the geochemical approach, the theory and mechanisms of hydrocarbon generation and migration in the subsurface still hold many unresolved problems requiring further research.
and some parameters, such as the migration coefficient and accumulation coefficient of the generated hydrocarbons, are strongly affected by subjective factors and difficult to determine correctly; ② the oil and gas exploration effort in the United States has been very large, with a great number of exploration and development wells drilled and a wealth of information and data accumulated, so statistical or analogy methods rest on a relatively reliable basis and their results are highly credible. Since the late 1970s, USGS has carried out five rounds of world oil and gas resource assessment, covering undiscovered resources, the reserve-growth potential of known fields, remaining recoverable reserves, and cumulative production. The overall trend of the USGS results is that assessed resource volumes increase with time (Fig. 1). The world's assessed oil resources rose from 233.75 billion tons in 1984 to 410.85 billion tons in 2000, an increase of 75.74%; natural gas resources rose from 210.4 billion tons of oil equivalent in 1987 to 419.2 billion tons of oil equivalent in 2000, an increase of 99.24%. (Fig. 1: Changes in USGS assessments of world oil and gas resources.) The distribution of oil and gas resources around the world is extremely uneven (Fig. 2 and Fig. 3). Undiscovered oil resources are concentrated in the Middle East and North Africa, North America, the former Soviet Union, and Central and South America, which account for 31.82%, 19.98%, 16.05% and 14.59% respectively, or 82.40% in total, while Europe, the Asia-Pacific and South Asia account for only small shares. Undiscovered natural gas resources are mainly distributed in the former Soviet Union, the Middle East and North Africa, and North America, accounting for 31.01%, 26.37% and 13.10% respectively, or 70.48% in total. (Fig. 2: Regional composition of the world's undiscovered oil resources. Fig. 3: Regional percentages of the world's undiscovered natural gas resources.) The world's remaining recoverable natural gas reserves stand at 177.10×10¹² m³. In terms of regional distribution (Fig. 4 and Fig. 5), the world's remaining oil reserves are concentrated in the Middle East and North America, which account for 55.58% and 15.64% of the total, or 71.22% together; the remaining natural gas reserves are concentrated in the Middle East and the former Soviet Union, at 41.44% and 32.07%, or 73.51% together. By country, the remaining oil reserves are mainly distributed in Saudi Arabia, Canada, Iran, Iraq, Kuwait, Venezuela, the United Arab Emirates and Russia, which together hold 78.39% of the total; the remaining natural gas reserves are concentrated in Russia, Iran and Qatar, at 26.86%, 15.85% and 14.26% respectively, or 56.98% together. (Fig. 4: Composition of the world's remaining recoverable oil reserves in 2008. Fig. 5: Composition of the world's remaining recoverable natural gas reserves in 2008.) The continuous growth of reserves underpins the sustainable development of the world's oil and gas industry.
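The statistical and analogy-based assessments favored by USGS, as described above, are probabilistic in nature: input parameters are treated as distributions and the result is reported as fractiles rather than a single number. As a rough illustration of the idea only (not of the actual USGS methodology), the following Python sketch draws volumetric parameters from assumed triangular distributions and reports P90/P50/P10 recoverable volumes; every distribution range and value in it is an invented assumption for demonstration.

```python
import random

def simulate_undiscovered_oil(n_trials=100_000):
    """Toy Monte Carlo sketch of a play-level volumetric assessment.

    Recoverable = area * net pay * porosity * oil saturation * recovery / FVF.
    All distribution ranges below are illustrative assumptions, not USGS data.
    """
    results = []
    for _ in range(n_trials):
        area_km2  = random.triangular(50, 400, 150)    # productive area
        net_pay_m = random.triangular(5, 60, 20)       # net reservoir thickness
        porosity  = random.triangular(0.08, 0.25, 0.15)
        oil_sat   = random.triangular(0.5, 0.8, 0.65)
        recovery  = random.triangular(0.15, 0.40, 0.25)
        fvf       = 1.2                                # formation volume factor
        # rock volume in m^3: km^2 -> m^2 is a factor of 1e6
        oip_m3 = area_km2 * 1e6 * net_pay_m * porosity * oil_sat / fvf
        results.append(oip_m3 * recovery)
    results.sort()
    # Report the standard P90/P50/P10 fractiles used in resource assessment
    # (P90 = value exceeded in 90% of trials, i.e. the 10th percentile).
    return {p: results[int(len(results) * q)]
            for p, q in (("P90", 0.10), ("P50", 0.50), ("P10", 0.90))}

for label, vol_m3 in simulate_undiscovered_oil().items():
    print(f"{label}: {vol_m3 / 1e6:,.0f} million m3 recoverable")
```

The spread between P90 and P10 is exactly the uncertainty that the text says no method can eliminate; repeated assessment rounds narrow it as drilling data accumulate.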
Although large volumes of oil and gas are extracted from the ground every year and used in fields as diverse as energy and the chemical industry, oil and gas reserves have increased rather than decreased. Statistics show that the world's remaining recoverable oil reserves rose from 1368.78×10⁸ t in 1991 to 1838.84×10⁸ t in 2008, an increase of 34.34%, while the remaining recoverable natural gas reserves rose to 177.10×10¹² m³ over the same period, an increase of 31.20%. In recent years in particular, with advances in offshore geophysical prospecting, drilling, production, and deep-sea equipment and technology, three fast-growing new forces with great potential have emerged in new areas and new fields: oil sands, represented by Canada; heavy oil, represented by the Orinoco belt of Venezuela; and deep-sea oil and gas, represented by West Africa, Brazil and the Gulf of Mexico. These have added new impetus to the development of the world's oil and gas industry. The world's petroleum industry now has a history of 150 years, and oil and natural gas are today's most important energy sources. In 2008, oil and gas accounted for 58.9% of the world's primary energy and played a huge role in promoting world economic development. "How long will the world's oil and natural gas last" has become a difficult point in predicting medium- and long-term trends in world energy; opinions differ, and it is hard to give an exact time limit. According to statistics from the US Oil & Gas Journal, at 2008 production rates the world's remaining recoverable oil reserves could sustain production for 50.4 years and the remaining recoverable natural gas reserves for 58.0 years. In other words, even with no new discoveries, proven oil and gas reserves could be produced for at least another 50 years, and the actual situation should be more optimistic. With advances in science and technology and improving economic conditions, it is expected that over the next 20 to 30 years newly added recoverable reserves will exceed the volumes produced, and the world's remaining recoverable reserves will continue to grow. Beyond conventional oil and gas, unconventional resources such as coalbed methane, oil sands, oil shale, shale gas and natural gas hydrate are also enormous, far exceeding conventional resources in quantity. As technology advances and economic conditions improve, unconventional resources will also relieve, to some extent, the pressure that world economic development places on conventional oil and gas. It can therefore be argued that throughout the 21st century the world's oil and gas resources will be sufficient and, under normal circumstances, production will meet the needs of global economic development. Oil and gas are fossil fuels that emit large quantities of greenhouse gases when consumed, driving global warming and endangering the living environment, health and safety of humankind. Developing new energy sources (mainly wind, solar, geothermal, biomass and nuclear energy) and a low-carbon economy, and gradually reducing the share of fossil fuels in primary energy consumption, is an inevitable trend in the development of human society.
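The "50.4 years" and "58.0 years" figures above are reserves-to-production (R/P) ratios: remaining recoverable reserves divided by one year's production. A minimal Python sketch of the arithmetic follows; the 2008 production figure in it is back-calculated from the quoted R/P ratio, not a number given in the text.

```python
def rp_ratio_years(remaining_reserves, annual_production):
    """Reserves-to-production ratio: years of supply at a constant rate."""
    return remaining_reserves / annual_production

# Growth of remaining recoverable oil reserves quoted in the text:
growth = 1838.84e8 / 1368.78e8 - 1
print(f"1991-2008 reserve growth: {growth:.2%}")          # -> 34.34%

# An R/P ratio of 50.4 years against 1838.84e8 t of reserves implies
# 2008 oil production of roughly 36.5e8 t (inferred, not quoted):
implied_production_t = 1838.84e8 / 50.4
print(f"Implied 2008 production: {implied_production_t / 1e8:.1f}e8 t")
print(f"R/P check: {rp_ratio_years(1838.84e8, implied_production_t):.1f} years")
```

Note that an R/P ratio is a snapshot, not a countdown: as the text observes, both the numerator (through new discoveries and reserve growth) and the denominator change every year.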
If major breakthroughs are made in the technologies for developing and utilizing new energy, it is quite possible that new energy will replace fossil fuels such as oil and natural gas before those resources are exhausted, and human society will enter a new historical era.", "It is generally believed that coalbed methane (commonly known as gas) differs from conventional natural gas in that it occurs not in a free state but mainly in an adsorbed state within the micropores of coal, and that the Langmuir equation can be used to calculate the adsorption capacity of a coal seam for methane, that is, its maximum adsorption capacity. During coal formation, however, the amount of gas generated at different coalification stages is often 10 or even several tens of times the maximum adsorption capacity calculated from the Langmuir equation (Table 1). Where has that gas gone? In the past it was mostly assumed to have dissipated over the long geological history. Yet the gas released per ton of coal during coal-and-gas outbursts in mines is also far higher than the maximum adsorption capacity of the seam: the average outburst gas volume at the Yubari New Coal Mine in Japan (measured on October 16, 1981) was 150 m³/t; at the Luling Coal Mine in Huaibei (measured on April 7, 2002) it was 89 m³/t; and at the Daping Coal Mine in Zhengzhou (measured on October 20, 2004) it was 183 m³/t. Even allowing for gas contributed by the surrounding rock during production and outbursts, this is far from bridging the gap between measured gas contents and Langmuir adsorption capacity. Where does the excess coalbed methane come from, and in what state does it occur? These are unresolved scientific problems that must be faced. Alexeev and colleagues at the Ukrainian Academy of Sciences, using proton nuclear magnetic resonance (¹H NMR) and X-ray diffraction, found that the "microcrystalline" structure of organic matter in coal changes significantly after methane adsorption, suggesting that coalbed methane may also occur in other forms; that is, it may exist in coal as a solid solution rather than only in adsorbed form, especially in deep formations at pressures above 2 MPa. The Chinese scholar Qin Yong has asked whether a considerable part of the CH₄ in coal seams might occur in a form similar to that of CH₄ in natural gas hydrates, and has pointed out that the traditional adsorption model cannot fully and truthfully reflect the occurrence state of coalbed methane under formation conditions. Coal-and-gas outbursts occur almost exclusively in strongly deformed tectonic coals. Is the deformation of coal, then, related to the occurrence of excess coalbed methane, and does it at the same time control coal-and-gas outbursts in mines?
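The Langmuir isotherm invoked above has a simple closed form, V = V_L·P/(P_L + P), where V_L is the Langmuir volume (maximum adsorption capacity) and P_L the Langmuir pressure. The Python sketch below uses invented values of V_L and P_L purely for illustration; its point is that adsorbed volume saturates at V_L no matter how high the pressure, which is why outburst volumes of 89~183 m³/t cannot be explained by adsorption alone.

```python
def langmuir_adsorption(pressure_mpa, v_l=30.0, p_l=2.5):
    """Langmuir isotherm: adsorbed gas volume (m3/t) at a given pressure.

    V = V_L * P / (P_L + P); V_L (m3/t) is the maximum adsorption capacity
    and P_L (MPa) the pressure at which half of V_L is adsorbed.
    The default V_L and P_L here are illustrative, not measured values.
    """
    return v_l * pressure_mpa / (p_l + pressure_mpa)

# Even at very high pressure the adsorbed volume can never exceed V_L,
# so measured outburst volumes of 89-183 m3/t point to storage
# mechanisms beyond adsorption.
for p in (1, 5, 10, 30):
    print(f"P = {p:>2} MPa -> V = {langmuir_adsorption(p):5.1f} m3/t")
```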
The research team of the Chinese scholars Hou Quanlin and Ju Yiwen explored this question and found that nanoscale (<10 nm) pores in mylonitic coal are tens or even hundreds of times more abundant than in coal with primary structure, which may be where the excess coalbed methane resides. They also found that strong ductile and brittle deformation affect the macromolecular structure of coal quite differently: ductile deformation destroys the aromatic carbon structure of coal macromolecules to a certain extent, producing large numbers of CH fragments that still retain some chemical bonds, which may be the basis for generating excess coalbed methane. None of this has yet been scientifically confirmed, and to some extent it remains speculation. The occurrence state of excess coalbed methane, and the processes that release it during coal-and-gas outbursts, are still open questions.", "The gas encountered during coal mining is the "violent killer" that ranks first among the five major coal mine disasters. This methane-rich gas residing in coal seams is also called coalbed methane. In China, high-gas and outburst-prone mines account for 46% of all mines, and deaths caused by gas disasters each year account for 80% of the deaths in the global coal industry. Besides causing heavy casualties and economic losses, methane leaking into the atmosphere during coal mining also aggravates the global greenhouse effect. China's annual methane emissions rank first in the world, accounting for one third of global methane emissions from coal mining, which has drawn wide international attention. Many people speak of gas with lingering fear as a "gas tiger", while others eagerly regard it as China's most realistic and reliable clean alternative to conventional natural gas in the 21st century. Rational use of coalbed methane is of great practical significance for improving coal mine safety, adjusting China's energy structure and effectively reducing greenhouse gas emissions. In other words, if we develop coalbed methane effectively, we can turn this "violent killer" of coal mines into a valuable clean energy source. How, then, is gas formed, and what are the status and prospects of its control and utilization? Coalbed methane is a self-sourced, self-stored natural gas that occurs in coal seams and adjacent rocks in adsorbed or free states; it is generated by microbial biochemical action and by coalification as buried plant material is converted into lignite, bituminous coal and anthracite. Its main component is CH₄ (methane), and it belongs to the unconventional natural gas resources. Coalbed methane has a wide range of uses: power generation, industrial fuel, chemical feedstock and residential fuel. The vast majority of coalbed methane exists in an adsorbed state in the matrix pores of coal, in dynamic equilibrium under a given pressure. When dewatering lowers the reservoir pressure below the critical desorption pressure, coalbed methane desorbs from the coal matrix pores, migrates through fractures, and is produced.
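The critical desorption pressure just mentioned follows directly from inverting the Langmuir isotherm: for a seam whose actual gas content Gc is below the Langmuir volume V_L, desorption begins once the pressure falls to P_cd = P_L·Gc/(V_L − Gc). A minimal sketch, again with invented V_L and P_L values:

```python
def critical_desorption_pressure(gas_content, v_l, p_l):
    """Pressure (MPa) at which desorption starts for an undersaturated seam.

    Inverting the Langmuir isotherm V = V_L*P/(P_L+P) for the pressure at
    which the adsorbed volume equals the actual gas content Gc gives
    P_cd = P_L * Gc / (V_L - Gc). Requires Gc < V_L; values below are
    illustrative, not field data.
    """
    return p_l * gas_content / (v_l - gas_content)

# A seam holding 14 m3/t against a 30 m3/t Langmuir capacity must be
# drawn down to ~2.19 MPa before any gas flows, which is why long
# dewatering periods precede gas production:
print(critical_desorption_pressure(gas_content=14, v_l=30, p_l=2.5))
```

The gap between initial reservoir pressure and P_cd is what the water-only first production stage described next must remove.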
The production process can be divided into three stages. The first is the saturated single-phase water flow stage, in which only water is produced. The second is the unsaturated flow stage: as dewatering lowers the reservoir pressure below the critical desorption pressure, CH₄ begins to desorb from the pore surfaces of the coal, forming bubbles in the water within pores and fissures; these bubbles do not yet coalesce into a continuous gas phase, they hinder the flow of water to some extent, and the relative permeability to water decreases. The third is the two-phase flow stage: as reservoir pressure continues to fall, more gas desorbs and gas saturation increases until the bubbles merge into a continuous gas stream that migrates to the borehole and is produced, with the relative permeability to gas increasing gradually (Fig. 1). There are two ways to extract coalbed methane: surface drilling, and underground gas drainage. If coalbed methane is extracted before coal mining, the gas content encountered during mining is reduced by 70% to 85%, greatly improving safety conditions. Commercial development of the coalbed methane industry succeeded as early as the end of the 1980s, and the United States was the first and most successful country to mine coalbed methane. The United States has abundant coalbed methane resources, about 21.19×10¹² m³, ranking third in the world. Basins with large-scale coalbed methane production include the San Juan, Black Warrior, Powder River, Uinta and Raton basins. The US coalbed methane industry started in the 1970s and developed on a large scale in the 1980s; by 2006, production had reached 540×10⁸ m³, comparable to China's conventional natural gas output. In 2008, Canada's coalbed methane production also exceeded 100×10⁸ m³. Britain, Germany, Poland and other countries have likewise succeeded in draining and utilizing mine coalbed methane. (Figure 1: Schematic diagram of the coalbed methane production stages [1].) Technological innovation and progress are the most important drivers of success in the coalbed methane industry. Through adaptive research and popularization of new technologies, Canada compressed the 20-year development path of the United States into just 4 years, achieving two leapfrog advances. Basins such as San Juan and Alberta succeeded precisely because the technologies used were adapted to their geological characteristics. The following drilling, completion and stimulation technologies have played an important role in advancing coalbed methane development worldwide. Multi-branch horizontal well technology: a multi-branch well has several lateral branches drilled from one main wellbore (vertical, directional or horizontal), forming an interconnected network in the coal seam that communicates as many of the seam's fractures and cleats as possible, speeding up drainage, depressurization, gas desorption and migration (Fig. 2). It not only raises output and improves recovery but also shortens the gas production time and greatly improves economics.
Multi-branch horizontal wells are suitable for areas with thick, laterally continuous coal seams of low permeability. Application of this technology in the San Juan Basin has given very good results: daily gas production per well increased by an average of 6 to 10 times. In addition, multi-branch horizontal wells greatly reduce the land occupied during coalbed methane development, which benefits environmental protection. (Fig. 2: Schematic diagram of a multi-branch horizontal well.) Openhole/cavity completion technology: the Powder River Basin is a typical low-rank coal basin characterized by thick coal seams, high permeability and undercompacted reservoirs. To avoid damaging the coal seam, openhole/cavity completion in overpressured areas enlarges the exposed area of the seam and increases single-well production, while shortening the drilling cycle and reducing cost (Fig. 3). (Figure 3: Schematic diagram of openhole completion of coalbed methane wells, after Wyoming State Engineer's Office data, 2001.) Coiled tubing fracturing and small-scale nitrogen fracturing: coiled tubing fracturing fractures the sandstones and coal seams of coal-measure formations continuously, in stages from bottom to top. The technique suits areas of medium to high permeability with thin coal seams and has performed well in the Alberta Basin of Canada. Small-scale nitrogen fracturing mainly fragments the near-wellbore coal; it avoids the formation damage caused by injected fracturing fluid, prevents reduction of the gas-phase relative permeability, controls pore blockage caused by swelling clay minerals, and reduces wellbore contamination. Because it quickly creates numerous fractures around the wellbore, it can greatly shorten the time needed for drainage and depressurization. Global coalbed methane resources are extremely rich. According to 2003 statistics from the International Energy Agency (IEA), they may exceed 260×10¹² m³, 90% of which lies in the 12 major coal-producing countries; Russia, Canada, China, the United States and Australia each hold more than 10×10¹² m³. China's coal resources amount to 5.57×10¹² t, ranking third in the world. According to the new round of national oil and gas resource assessment in 2006, the prospective coalbed methane resources in coal seams buried shallower than 2000 m are 36.81×10¹² m³, roughly equal to China's conventional natural gas resources and about 13% of the world's total coalbed methane resources; recoverable resources are 10.87×10¹² m³, with a very low proven rate of only 0.36%. They are mainly distributed in the Ordos, Qinshui, Junggar, eastern Yunnan-western Guizhou, Erlian, Turpan-Hami and Tarim basins, leaving huge room for development. In 2008, China's coal mines produced 2.74 billion tons of coal; 58×10⁸ m³ of coalbed methane was drained underground, of which 18×10⁸ m³ was utilized, a utilization rate of about 31%, far below the level of the world's developed countries. Surface drilling for coalbed methane in China began in the early 1990s and, after 20 years of development, has entered the stage of commercial development.
As of December 2008, more than 2,800 vertical coalbed methane wells and 65 multi-branch horizontal wells had been drilled nationwide, with an annual production capacity of 15×10⁸ m³ and an output of about 7.5×10⁸ m³ (Table 1). (Table 1: China's coal production, underground coalbed methane drainage volume, and surface coalbed methane development capacity over the years [2].) After years of exploration and development practice, China's coalbed methane industry has entered the stage of small-scale commercial development. The distribution and characteristics of China's coalbed methane resources have been basically clarified, great progress has been made in gas drainage theory and engineering practice, and some industrial technologies and standards suited to China's coalbed methane industry have taken initial shape. In the course of development, however, many problems requiring solutions have also emerged. For example, the average drainage rate of mine gas is only 23%, far below that of the United States, Australia and other major coal-producing countries [7]. In surface drilling and production, most exploration operations and geological research still follow American practice, and there is an urgent need to form a body of coalbed methane exploration and development theory and technology tailored to China's geological conditions.", "Unconventional oil and gas resources are hydrocarbon resources that differ from conventional oil and gas in accumulation mechanism, occurrence state, distribution pattern, and the technologies required for exploration and development. Because conventional oil and gas in some regions cannot meet the needs of economic and social development, and because technological progress has greatly reduced the cost of developing unconventional resources, their development and utilization have become feasible. With the advent of the post-petroleum era, conventional production in many regions has begun to decline, and unconventional oil and gas have gradually come to play an important role in the global energy mix, becoming a strategic supplement to, or substitute for, conventional energy. Unconventional oil and gas resources comprise unconventional oil resources and unconventional natural gas resources. Unconventional oil resources include oil shale, oil sands (bitumen), shale oil and heavy oil; unconventional natural gas resources include coalbed methane, shale gas, tight sandstone gas (deep basin gas), biogenic gas, natural gas (methane) hydrate and water-soluble gas. The spatial distribution of the various unconventional and conventional oil and gas resources within sedimentary basins is shown in Fig. 1. (Fig. 1: Spatial distribution of unconventional and conventional oil and gas resources [1].) Global unconventional oil and gas reserves are abundant but unevenly distributed, explored to very different degrees, and very unevenly developed. Some resource types (such as tight sandstone gas, coalbed methane, shale gas, heavy oil and oil sands) have achieved large-scale commercial development in certain countries and regions and become important alternative energy sources; others are still at the exploration and research stage.
Commercial development and utilization of some types may still require a long period of exploration (natural gas hydrate, for example). In recent years, oil sands exploration and development in Canada has advanced by leaps and bounds, to the point that oil production from oil sands exceeds conventional oil production; oil shale retorting and comprehensive utilization have begun to take shape in China, Estonia, Brazil, Australia and other countries; the United States has achieved large-scale commercial production of tight sandstone gas, coalbed methane and shale gas; and the exploitation of gas of inorganic origin in China has achieved initial results. The development and utilization of these unconventional resources have yielded remarkable economic benefits and become an important supplement to conventional oil and gas. In addition, 122 occurrences of natural gas hydrate have been discovered worldwide; once development technology breaks through, hydrate is very likely to become the energy darling of the future. Since coalbed methane has already been treated in its own topic, the following briefly describes, for the other unconventional resources, their origin and distribution, global and Chinese resource volumes, and the main development and utilization technologies. Oil shale is a high-ash solid combustible organic sedimentary rock from which shale oil can be obtained by low-temperature retorting. Its oil content is greater than 3.5%, its organic matter content is high and mainly sapropelic or mixed in type, and its calorific value is generally greater than 4.18 kJ/g. In early 2000, Dr. Dyni of the US Geological Survey compiled statistics on the proven geological resources of oil shale (oil shale with an oil yield greater than 40 L per ton) in 33 countries; the corresponding shale oil amounted to 4110×10⁸ t. China is rich in oil shale resources, though their exploration level is low. According to the new round of national oil and gas resource assessment in 2006, China's shale oil geological resources are 476×10⁸ t, concentrated in the Songliao, Ordos, Lunpola, Junggar, Qiangtang, Qaidam and Maoming basins. The main technologies for oil shale development and utilization include oil shale mining, oil shale combustion for power generation, shale oil retorting, and comprehensive utilization. The in-situ conversion process (ICP) is a new technology of recent years: the subsurface is heated through heating wells so that the oil shale is pyrolyzed and cracked in place, the oil and gas are then produced separately through production wells, and a freeze wall around the heated zone prevents the ingress of water and the escape of other products. Oil sands are also known as natural bitumen. According to research by the United States Geological Survey (USGS) in 2004, the world's recoverable oil sands resources are 1035.1×10⁸ t, about 31.96% of the world's total recoverable oil resources, second only to conventional oil (1514×10⁸ t recoverable) and greater than the recoverable resources of heavy oil (690×10⁸ t).
The global distribution of oil sands is very uneven, concentrated mainly along the circum-Pacific and Alpine enrichment belts. According to the new round of national oil and gas resource assessment in 2006, China's technically recoverable oil sands resources are 22.58×10⁸ t, mainly distributed in the Junggar, Tarim, Qiangtang, Ordos, Qaidam, Songliao and Sichuan basins. Depending on the nature of the deposit, the mining methods mainly include surface methods (open-pit and roadway mining, combined with surface carbonization separation and thermochemical separation) and in-situ methods, including cold production with sand, cyclic steam stimulation, steam-assisted gravity drainage (SAGD), solvent injection, downhole catalytic upgrading, and hydrothermal cracking. Shale gas is natural gas that exists in shale in free form in pores and natural fractures, in adsorbed form on the surfaces of kerogen and clay particles, and even in dissolved form in kerogen and asphaltenes; it is continuously generated biochemical gas, thermogenic gas, or a mixture of the two [4]. The world's geological resources of shale gas are 456×10¹² m³ [2], roughly equal to the combined geological resources of coalbed methane (256×10¹² m³) and tight sandstone gas (210×10¹² m³). As shale gas exploration and development deepen, shale gas resources and reserves will increase significantly: in 1996 USGS put the technically recoverable reserves of Barnett shale gas in the Fort Worth Basin at 850×10⁸ m³; the estimate jumped to 7419×10⁸ m³ in 2004 and surged to 2.66×10¹² m³ in 2008. Shale gas and coalbed methane are somewhat similar in origin and occurrence, and shale gas can be produced through "venting-depressurization-production" and "dewatering-depressurization-production" schemes. The spread of technologies such as horizontal wells, pinnate multilateral wells, foam fracturing, refracturing, slickwater fracturing, staged multi-stage fracturing and multi-well synchronous fracturing lifted US shale gas production from 100×10⁸ m³ in 2000 to 600×10⁸ m³ in 2008. The Paleozoic of southern China, the Carboniferous-Permian of North China, the Jurassic of Northwest China and the Triassic of the Ordos Basin all have favorable conditions for shale gas accumulation. Tight sandstone gas reservoirs are natural gas reservoirs in sandstones with low porosity (<12%), low permeability (<0.1×10⁻³ μm²), low gas saturation (<60%) and high water saturation (>40%), in which gas flows slowly. Tight sandstone gas was first discovered in the San Juan Basin of the United States in 1927. Because most tight sandstone gas discovered in North America lies in basin centers or structurally deep parts of basins, Masters proposed the concept of the "deep basin gas reservoir" in 1979 [5]. After the 1980s, Walls and others proposed the concept of the "tight sandstone gas reservoir", and later scholars added the concepts of the "basin-centered gas reservoir", "continuous gas reservoir" and "root-edge gas". Tight sandstone gas reservoirs generally have large gas-bearing thicknesses and wide distribution areas, so their resources are relatively large.
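The four cutoffs in the definition above amount to a simple screening rule. A minimal sketch in Python, using only the thresholds quoted in the text (the conversion 0.1×10⁻³ μm² ≈ 0.1 mD is a standard unit equivalence, and the sample values are invented):

```python
def is_tight_gas_reservoir(porosity, perm_md, gas_sat, water_sat):
    """Screen a gas-bearing sandstone against the cutoffs quoted above.

    porosity, gas_sat and water_sat are fractions; perm_md is permeability
    in millidarcies (0.1e-3 um^2 is roughly 0.1 mD). Cutoffs follow the
    definition in the text; they are a screening rule, not a full
    reservoir classification.
    """
    return (porosity < 0.12 and perm_md < 0.1
            and gas_sat < 0.60 and water_sat > 0.40)

print(is_tight_gas_reservoir(0.08, 0.05, 0.55, 0.45))  # True: tight
print(is_tight_gas_reservoir(0.18, 5.00, 0.75, 0.25))  # False: conventional
```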
The world's geological resources of tight sandstone gas are estimated at about 210×10¹² m³ [2], with technically recoverable reserves of 10.5×10¹²~24.0×10¹² m³, ranking first among unconventional natural gases. Tight sandstone gas development in the United States has grown rapidly: annual production was only 226×10⁸ m³ in 1970 but reached 1700×10⁸ m³ by 2007 (US DOE Energy Information Administration, 2008), nearly one third of total US natural gas production. China is relatively rich in tight sandstone gas. Preliminary estimates by some scholars for six basins or regions that possess the basic geological conditions for tight sandstone gas formation, including the Ordos and Turpan-Hami basins, put the prospective resources at the 10¹² m³ scale, with the Ordos Basin alone accounting for about 50×10¹² m³, roughly half of the total. At present, tight sandstone gas accounts for about one fifth of China's total annual natural gas production and has become a bright spot in production growth. Natural gas hydrates are ice-like crystalline compounds with a cage structure composed of water molecules and gas molecules; because the gas is mostly methane (>90%), they are also called methane hydrates or "combustible ice". With its high energy density, wide distribution, large scale, shallow burial and favorable accumulation conditions, natural gas hydrate is an ideal alternative energy source for the future. Global gas hydrates are mainly distributed in the Pacific, Indian, Atlantic and Arctic oceans, the Antarctic, other waters (inland seas and lakes), and terrestrial permafrost regions. Extrapolating from the resource accumulations calculated for areas with data, the global methane resources in gas hydrates are 6107×10¹²~7001×10¹² m³ [3]; Canada, the United States and Japan hold relatively large shares. China's natural gas hydrate has good prospects for exploration and development, with methane resources in hydrate of 77.9×10¹² m³ [3], mainly distributed on the Qinghai-Tibet Plateau, in the South China Sea, and in the Okinawa Trough of the East China Sea. Shallow biogenic gas generally refers to natural gas formed by microbial fermentation and synthesis from organic matter deposited in the shallow biochemical zone (burial depth generally no more than 1000 m) and from organic matter in source rocks at the immature stage, sometimes mixed with gas formed by early low-temperature degradation. Shallow biogenic gas occurs in shallowly buried, geologically young, weakly evolved rock formations, and its composition is mainly methane. According to one set of statistics [4], cumulative proven shallow biogenic gas reserves worldwide reached 13.8×10¹² m³, 21.3% of the then-total global proven natural gas reserves (66.4×10¹² m³); according to Chen Ying's statistics (1994), the cumulative proven shallow biogenic gas reserves of the world amount to 15.5×10¹² m³. Shallow biogenic gas reservoirs of industrial value have been discovered in dozens of countries, including Canada, Germany, Italy, Spain, Japan, the former Soviet Union, the United States and China. Preliminary estimates put the predicted shallow biogenic gas resources of 19 Chinese basins at 2.66×10¹²~2.95×10¹² m³.
By the end of 2006, about 30 biogenic gas reservoirs had been discovered in China, with proven shallow biogenic gas reserves of 3330×10⁸ m³, concentrated mainly in the Songliao, Qaidam and Yinggehai-Qiongdongnan basins. Among them, the Sanhu area of the Qaidam Basin is a typical biogenic gas province that has achieved large-scale commercial development. Deep-source gas refers to natural gas of inorganic origin from the deep crust or upper mantle, also called abiogenic gas. Deep-source gases mainly include methane, carbon dioxide, sulfur dioxide, hydrogen sulfide, hydrogen, water vapor and trace noble gases. The typical deep-source gas reservoirs identified abroad so far are mainly distributed in the United States, Hungary, Austria, Australia, New Zealand, Japan and other countries, but no figures for deep-source gas resources are available for individual countries. The deep-source CO₂ gas pools discovered to date are mainly distributed around the Pacific Rim; famous examples include the Gambier and Caroline dome-type liquid CO₂ fields in South Australia and, in the Rocky Mountains of the United States (from south to north), the Bravo CO₂ field in New Mexico, the McElmo CO₂ field in Colorado, and the Kevin-Sunburst CO₂ field in Montana. Many studies have shown that the generation and accumulation of high-concentration CO₂ gas are closely related to the development of deep fault zones, plate movement, earthquakes, and pressure release caused by fault activity. China's deep-source gas reservoirs are mainly distributed in the Mesozoic-Cenozoic extensional basins of eastern China, concentrated along both sides of the Tan-Lu fault zone, and consist mainly of mantle-derived noble gases and carbon dioxide. By the end of 2004, China's proven geological reserves of CO₂ had reached 264.9×10⁸ m³, with recoverable reserves of 187.5×10⁸ m³ [5], represented mainly by Wanjinta in the Songliao Basin, Huangqiao in the Subei Basin, and the Sanshui Basin. Although unconventional oil and gas resources have great potential and a certain foundation for development, their special reservoirs and modes of occurrence mean that large-scale commercial utilization still faces many challenges. Because the overall exploration level of unconventional oil and gas is low and the proven share of resources small, resource assessment and exploration must be intensified to raise the proven rate. The accumulation mechanisms of unconventional oil and gas and the main factors controlling their enrichment also need further study. Unconventional reservoirs have low permeability and strong heterogeneity, and differ greatly from region to region; foreign development technologies and experience cannot be applied wholesale to China's geological conditions. It is therefore necessary to develop technologies suited to the characteristics of China's reservoirs; large-scale commercial oil shale development technology and natural gas hydrate development technology, for example, require continued research. Since the waste water, waste gas and waste residue
generated during the development of unconventional oil and gas can pollute the environment and damage human health, investment in environmental protection must be increased and comprehensive utilization technologies adopted to turn waste into a resource, protect the ecological environment, and follow a path of sustainable development. In addition, because unconventional oil and gas wells typically have low daily output and long development periods, investment is large and payback slow, making short-term economic returns difficult; the state and society therefore need to provide preferential policies and investment in science and technology to promote the formation and large-scale application of technologies specific to unconventional oil and gas resources.", "The maturation of the organic theory of petroleum generation brought with it a corresponding suite of organic research ideas and methods, which made the theory increasingly complete. At the same time, as exploration practice deepened, reserves grew and new types of oil and gas fields were discovered, people gradually recognized the limitations of purely organic research methods and tried to supplement them with inorganic approaches, achieving progress to varying degrees. In recent years, a series of oil and gas discoveries and studies related to inorganic processes or inorganic environments have been reported at home and abroad, including natural gas of inorganic origin, oil and gas reservoirs in volcanic rocks, oil and gas shows in metamorphic basement, hydrocarbon substances in deep-sea hydrothermal areas, and hydrocarbon inclusions in magmatic rocks. These indicate that inorganic processes may be closely related to the formation of oil and gas or of oil and gas reservoirs. How oil forms has always been an important proposition in petroleum geology, attracting the attention of more and more geologists, organic geochemists, physicists, biologists and explorers. One school is the currently prevailing theory of organic origin, in which petroleum consists of hydrocarbons formed by the thermal degradation of organic matter; the other holds that petroleum comes from inorganic sources at depth. The organic theory has long guided oil and gas exploration and development and has achieved fruitful results, making important contributions to human civilization and economic development; almost all the large oil and gas fields discovered so far are of organic origin. Whether inorganic (abiotic) processes can form large volumes of oil and natural gas has been debated in the scientific community for more than a century. With the development of earth science, evidence for the existence of inorganic (abiotic) natural gas has gradually been found, for example discoveries based on C and He isotope evidence. Most such gas fields, however, are dominated by mixed-source gas or are non-hydrocarbon gas pools, such as CO₂ gas pools (fields).
The theory of inorganic oil generation: Humboldt proposed in 1804 that "petroleum is the product of distillation at great depth, and oil from deep primitive rocks flows out in volcanically active areas." In the early 19th century, people first noticed mud volcanoes and oil seeps related to magmatic activity. In 1866, Berthelot proposed that the interaction of alkali metals deep in the earth with recycled crustal CO₂ could form petroleum. The chemist Mendeleev (1834~1907) proposed the inference that petroleum came from inorganic synthesis underground, a hypothesis later developed into the carbide theory; the carbide hypothesis was discussed in more detail in modern times by Hunt and others. From the 1950s onward, scientists in Russia and Ukraine developed the hypothesis of inorganic origin. The Russian geologist Kudryavtsev put forward the modern inorganic origin hypothesis in 1951: after analyzing the geological setting of the oil sands of the Athabasca area of Alberta, Canada, he concluded that no source rock could have generated hydrocarbons on such a scale, and that the most likely explanation was deep inorganic oil seepage [1]. Nevertheless, some scholars hold that humic coals are the source rocks of the oil sands in this area. In the West, the inorganic hypothesis attracted renewed attention after Gold argued that thermophilic bacteria living in the earth's crust could account for the biomarker compounds found in petroleum, which was taken as support for the hypothesis [2]. In addition, several well-known scholars have discussed the abiotic (inorganic) origin of oil and natural gas. The principal discoveries or direct observations cited by proponents of inorganic hydrocarbon generation are as follows [2~8]: ① methane occurs on planets, meteors, moons, comets and other bodies; ② oil and methane are found in many non-sedimentary rocks; ③ bituminous coal may be the result of deep hydrocarbon leakage; ④ the distribution of metal elements in crude oil more closely matches chondrites and serpentinized upper-mantle peridotite than oceanic crust, continental crust or seawater; ⑤ hydrocarbons are accompanied by helium and other rare gases; ⑥ bacteria have been found in deep drilling in Iran, Australia, Switzerland and Canada; ⑦ oil and gas reservoirs occur in igneous rocks; ⑧ deep structures are related to oil; ⑨ natural inorganic synthesis of hydrocarbons in the Lost City hydrothermal field may occur given ultrabasic rocks, water and moderate heat. To sum up, the evidence for inorganic hydrocarbon generation falls roughly into two categories: first, methane and other hydrocarbons found in environments or areas where no organic matter is involved; second, reservoirs whose formation, or the places where hydrocarbons accumulate, are closely related to deep processes in tectonically active areas. The first category has been reproduced in the laboratory; the key questions are how much hydrocarbon forms under such geological conditions and whether it can reach the scale of industrial exploitation, which has not been verified in practice. The second category can almost entirely be explained by inorganic processes or elements participating in the organic formation and accumulation of hydrocarbons; their positive role and contribution have not received enough attention in the past, and they deserve it now.
Contribution of inorganic processes to oil and gas. Geological catalysis: a large body of exploration practice and research shows that subsurface chemical, temperature and pressure environments have a very important influence on the formation and composition of oil and gas. Inorganic compounds such as water, minerals and trace elements (transition-group elements, heavy metals, radioactive elements, etc.) can participate in the hydrocarbon evolution of organic matter as reactants or catalysts. Fischer-Tropsch-type synthesis reactions, written schematically as CO₂ + H₂ → CnHm + H₂O + Q and HCO₃⁻ + 4H₂ → CH₄ + OH⁻ + 2H₂O, offer a mechanism for forming hydrocarbons of inorganic origin, and geological catalysis is very important in these reactions. The geological agents that catalyze hydrocarbon generation mainly include clay minerals, metal oxides, trace elements, radioactive elements, and water (as a hydrogen supplier). Studies show that geological catalysts can not only significantly increase the yield of gaseous hydrocarbons but also markedly affect the yield of liquid hydrocarbons. Previous work has examined the catalytic effects of kaolin-group minerals, calcium carbonate (CaCO₃), magnetite (Fe₃O₄), ferrous sulfide (FeS) and elemental sulfur (S) on oil and gas formation. The role of inorganic minerals in hydrocarbon generation: inorganic components adsorb organic components, and the organic matter content of sediments bears a definite relationship to the adsorption capacity, that is, the surface area, of the minerals. This adsorption may be an important process in the formation of kerogen. The distribution in basin sediments of fine-grained, strongly adsorbent clay minerals and of various oxides-hydroxides and their amorphous colloids may therefore determine the organic matter content of a formation and hence its hydrocarbon-generating potential, and may be one of the important conditions controlling the stratigraphic and temporal distribution of oil and gas resources. Mineral surfaces can accelerate the thermal degradation of acetic acid, and transition metals strongly catalyze the activation of C-C bonds in organic components. In 1979, Johns and colleagues discussed in detail the mechanism by which clay mineral surfaces catalyze oil and gas generation, pointing out that clay minerals, being fine-grained, of large surface area and strongly adsorbent, act as catalysts that promote hydrocarbon generation, lower the hydrocarbon-generation threshold temperature, and increase the amount of hydrocarbons generated. In studying the mechanisms of oil and gas generation, therefore, the possible catalytic and inhibitory effects of inorganic components should be fully considered in addition to temperature. Effects of uranium and its radioactivity on hydrocarbon generation: radioactivity can provide a certain amount of energy for organisms to maintain their reproduction and development; hydrogen is produced in this process, and the combination of that hydrogen with carbon in geological bodies (a Fischer-Tropsch-type synthesis) may be one of the ways oil and gas are generated.
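Written out as balanced equations, representative reactions of this type are, for the methane end member (the CnHm form quoted above is schematic; the first two equations below follow the forms given in the text, and the third is the generic Fischer-Tropsch alkane synthesis, added here for completeness):

```latex
\begin{align*}
  \mathrm{CO_2} + 4\,\mathrm{H_2} &\longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} + Q\\
  \mathrm{HCO_3^-} + 4\,\mathrm{H_2} &\longrightarrow \mathrm{CH_4} + \mathrm{OH^-} + 2\,\mathrm{H_2O}\\
  n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} &\longrightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}
\end{align*}
```

All three are exothermic hydrogenations, which is why the text emphasizes hydrogen sources (water, radiolysis, serpentinization) and catalysts as the controlling factors.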
At the same time, radioactivity can cause microorganisms to proliferate rapidly or die off in large numbers, in either case supplying organic matter for oil and gas generation beyond what a normal living environment provides. For sedimentary basins, analyzing the effect of heat from the decay of radioactive elements in sedimentary strata on surface heat flow and on the maturity of organic matter, especially in source rocks, is of great theoretical and practical significance. Previous studies have pointed out that the effect of radiolysis on organic matter is partly similar to the thermal effect of deep burial: besides a reduction in the H/C ratio, organic matter with high uranium content is also oxidized. Distinguishing the thermal effects of deep burial from the effects of other factors (type of organic deposition, alpha radiolysis, depositional conditions) is important for determining the true nature of the original organic matter. The research of Cassou and colleagues in 1975 showed that radioactivity increases the maturity of organic matter, the kerogen of uranium-bearing samples being distinctly more evolved, and that "radiation damage" is stronger the closer the sample lies to the uranium ore body. Organic matter near uranium mineralization is relatively highly evolved; the organic matter of black rock series in uranium-enriched districts is more mature, the bitumen reflectance there is higher than in ordinary uranium districts, and the H/C ratio of the kerogen is lower. In both uranium-enriched and ordinary uranium districts, the uranium content of the black rock series shows a clear positive linear correlation with the bitumen reflectance of the rock, and no close relationship with organic carbon content. After Chegel proposed in 1957 the use of actinide-bearing catalysts in the polymerization of unsaturated hydrocarbons, attention turned to uranium-bearing catalysts. Supported uranium oxide catalysts can oxidize and eliminate volatile pollutants, catalyze the oxidation of alkanes or olefins to formaldehyde, and catalyze the oxidation of alkenes to unsaturated aldehydes. Uranium halides, substituted halides and uranates also show good catalytic activity in some reactions: UI₃(THF)₄ is a good Lewis acid catalyst for the Diels-Alder reaction, and uranium halides, nitrates, acetates and the like are good catalysts for Friedel-Crafts reactions. Uranium may thus exert both catalytic and radioactive effects in the conversion of organic matter to oil and gas, but the specific influence and mechanism remain to be explored. The relationship between other metal elements and oil and gas: Keith and Swan described the association of metal ores with oil during hydrothermal activity and concluded that this symbiosis shows oil to be part of the product of a hydrothermal reaction series, pointing out that the temperatures of these hydrothermal hydrocarbons are much higher than the oil window [9]. Charlou and colleagues proposed that minerals such as pyrite, chalcopyrite and sphalerite act as catalysts in the formation of higher-molecular-weight hydrocarbons [10].
They further propose that the degree of polymerization increases with increasing pressure. The trace element composition of petroleum matches that of chondrites, serpentinized peridotite and primitive mantle material much better than that of oceanic or continental crust, and does not correlate with seawater. Hydrogen generated during serpentinization takes part, in turn, in Fischer-Tropsch reactions catalyzed by trace elements present in peridotite, until oil is formed. Related scientific issues: whether inorganic (abiotic) processes can form petroleum and hydrocarbon gas resources is a scientific problem that has been explored for more than a century without resolution. It involves two questions. The first concerns humanity's demand for energy: can inorganic (abiotic) hydrocarbons form resources on a scale worth developing and utilizing? The second is that resolving the problem bears on our understanding of major earth science issues such as inorganic-organic interaction, the evolution of the earth, and the origin of life. The theory of inorganic oil formation still has its advocates in academic circles, and there is as yet no definite conclusion on whether inorganic processes can form oil and gas resources on a large scale. Many of the phenomena (lines of evidence) cited by the inorganic theory arise because inorganic substances and processes provide environments favorable to the formation, accumulation and transformation of organic matter, or because exchanges of energy and matter between organic and inorganic substances, with the participation of inorganic processes, make oil and gas of organic origin richer and more varied in composition and distribution. For example, petroleum contains a variety of elements from the deep earth: is it possible that, in the environments of survival, reproduction and transformation described above, organisms take up large amounts of deep-earth elements that are subsequently converted into hydrocarbons and dissolved in petroleum? These are indeed questions worthy of further exploration and of important scientific significance [11].", "On December 23, 2003, a major blowout occurred at a natural gas well in Kai County, Chongqing. Because of the high H₂S content of the uncontrolled gas, it caused heavy casualties among nearby residents. The accident shocked not only the oil industry but the whole of society, and aroused strong public interest in how natural gas with high H₂S content forms, how to guard against it effectively, and how to develop and utilize it scientifically. H₂S, an acidic non-hydrocarbon gas, occurs mainly in marine limestone and dolomite gas reservoirs and occasionally in clastic formations. Most H₂S is formed by the reduction of sulfates by hydrocarbons. Dissolved sulfate is unstable in the presence of organic matter, including hydrocarbons, kerogen, crude oil, bitumen and light hydrocarbons, as well as gaseous organic compounds. The reduction of sulfate by hydrocarbons under the action of microorganisms is called BSR (bacterial sulfate reduction); the reduction of sulfate by hydrocarbons under abiotic conditions (in the presence of inorganic minerals) is called TSR (thermochemical sulfate reduction). Cohn et al.
[1] first discovered that the sulfur bacterium Beggiatoa can generate H₂S under certain conditions, although they did not name the reaction BSR; the process was formally named by Beijerinck [2] in the Journal of Bacteriology, Parasitology, Infectious Diseases, and Hygiene. BSR generally occurs in sedimentary basins at low formation temperatures (below 80°C), in sediment types including subsurface aquifer soils, marine sediments, organic reef carbonates, layered or dispersed evaporites, and clastic deposits. The TSR reaction was first proposed by Toland [3] on the basis of experiments dissolving sulfate with hydrocarbons. It was subsequently found that TSR can occur below 175°C under experimental conditions [4], while under actual geological conditions the temperature required may be 100~140°C [4]. TSR thus occurs only in diagenetic and hydrothermal environments at geological temperatures above 100~140°C, accompanied by the generation of large amounts of H₂S (Fig. 1). Typical examples include the Permian Zechstein carbonate and sulfate deposits of Northwest Europe [6], the marine carbonates and dolomites of northeastern Sichuan in China [7, 8], and the Jingbian gas field in the Ordos Basin studied by Yan et al. [9]. The reaction pathways and products of BSR and TSR have been reported extensively [10~12]. Biodegradation of crude oil under aerobic or anaerobic conditions is a prerequisite for BSR, because sulfate-reducing bacteria use the residues of biodegradation as nutrients; in anaerobic environments, sulfate-reducing bacteria can coexist with methanogens and attack methane as a carbon source. No such step is required for TSR, which can proceed in the absence of biodegradable hydrocarbons. Whether by TSR or BSR, the products include H₂S, calcite, dolomite, elemental sulfur and solid bitumen; these co-products have been observed in actual geological bodies and even serve as indicators of whether redox reactions of hydrocarbons have occurred. If metals participate or are present during TSR or BSR (whether metal ions take part directly, or the reaction products are transported into environments containing metal ions), different metallic minerals precipitate. First, alkaline-earth metals can precipitate calcium carbonate (calcite and dolomite) as cement or as a replacement of dissolved sulfate (gypsum and anhydrite). The reaction of polysulfide complexes with bicarbonate can also precipitate calcite: such bicarbonate forms when sulfides are bio-oxidized to sulfur, and it then precipitates as calcite. TSR and BSR can also generate other carbonate minerals, such as ankerite, siderite, witherite and strontianite. If alkali or transition metals are involved, dispersed or stratiform metal sulfide deposits can form, with minerals including pyrite, galena and sphalerite. (Fig. 1: Schematic diagram of the formation of H₂S-bearing natural gas [13].) During these reactions, the acidic solutions produced by sulfide precipitation can partially dissolve the surrounding rocks, improving reservoir properties to a certain extent.
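Commonly cited net reactions for the two processes, given here as representative simplifications rather than quotations from the source, show why H₂S, carbonate minerals and solid bitumen appear together as co-products:

```latex
\begin{align*}
  \text{TSR:}\quad & \mathrm{CaSO_4} + \mathrm{CH_4} \longrightarrow \mathrm{CaCO_3} + \mathrm{H_2S} + \mathrm{H_2O}\\
  \text{BSR:}\quad & \mathrm{SO_4^{2-}} + 2\,\mathrm{CH_2O} \xrightarrow{\text{bacteria}} \mathrm{H_2S} + 2\,\mathrm{HCO_3^-}
\end{align*}
```

In the TSR form, anhydrite supplies the sulfate and methane the reductant, directly yielding the calcite cement described above; in the BSR form, CH₂O stands for generic sedimentary organic matter, and the bicarbonate produced is what later precipitates as calcite.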
At the same time, in the course of TSR and BSR, the CO2 and H2S generated make the formation waters acidic, which favors the formation of high-quality carbonate reservoirs. H2S is highly toxic: when its concentration in air reaches about 1000 mg/m^3, exposed workers die almost instantly, as if electrocuted. Desulfurization treatment is therefore required during the development of natural gas fields, as shown in Figure 2 [14]. The raw gas, acid gas, rich solvent, acid water, and acid waste residues of natural gas purification plants all contain large amounts of H2S. Pretreating these materials and residues reduces the H2S concentration in the surrounding air and prevents H2S from accumulating in low-lying areas such as ditches and creating safety hazards. In 2000, acid-water outflow from a purification plant poisoned many students passing by; in 2003, the Quxian purification plant suffered a poisoning accident at an acid-waste-residue ditch. Because H2S is highly corrosive to production equipment, monitoring of H2S corrosion should be especially strengthened during production. Workplaces prone to H2S leakage should be equipped with fixed mechanical exhaust devices, special gas masks, emergency repair and rescue equipment, tool boxes, and emergency start buttons. Purification plants should keep a sufficient safety distance from residential areas and should regularly organize surrounding residents to drill emergency response plans. Natural gas rich in H2S is a special resource of important economic value: scientifically developed and processed, the purified gas is a valuable low-carbon clean energy, and the recovered sulfur is an important chemical raw material. In this special field of high-H2S gas reservoirs, many scientific problems remain: understanding of gas reservoir formation and enrichment needs to be deepened, the mechanism by which H2S transforms reservoirs needs systematic explanation, and technologies and schemes for the safe and efficient development of such gas reservoirs need to be perfected. Effectively preventing the corrosion of pipelines and facilities by the acid gases H2S and CO2 also requires the joint efforts of the scientific community. Figure 2 \tFlow chart of desulfurization treatment in a natural gas purification plant [14]: 1, 3. mechanical filter; 2. activated carbon filter; 4. lean-solvent replacement pump", "China's continental basins are rich in oil and gas resources, as proven by long-term exploration and production practice; they have made important contributions to China's petroleum industry, economic development, and social needs. The theory of continental oil generation is regarded by Chinese scientific and industrial circles as a world-leading scientific and technological achievement. Since the 1960s, oil and gas exploration in continental basins has gradually attracted the attention of countries and oil companies all over the world and has made great progress. At present, however, the annual crude oil production of the world's continental oil- and gas-bearing basins accounts for only about 6% of total world oil production, of which more than 80% comes from China.
Outside China, there are more than 20 continental oil- and gas-bearing basins worldwide with oil and gas fields of industrial value, yet the total recoverable oil reserves of these basins are smaller than those of China's Songliao Basin or Bohai Bay Basin alone. Why does continental oil so favor Chinese basins? Now that countries and major oil companies worldwide pay close attention to oil and gas exploration in continental basins, the disparity is clearly not caused by uneven exploration. It must be closely related to the particularity of the geodynamic environment of the Chinese mainland and to the two important characteristics it determines: the strong activity of Chinese basins and their active deep processes [1]. Oil-forming theory of continental basins: Before the 1930s it was generally believed that only marine basins could generate oil. The discovery of the Daqing Oilfield in China proved that continental basins can not only form oil and gas (reservoirs) but also form large and even super-large oil fields. The development of China's continental oil-generation theory has roughly passed through stages of geological outcrop research, petrochemical research, and geochemical research with simulation experiments, against a background of continuous research and unremitting exploration and discovery. The theory of continental oil generation, first summarized in Northwest China, has guided petroleum geological exploration in eastern China. The necessary conditions for continental oil generation are the existence of a sufficient amount of oil-source parent material and a stable reducing environment for the accumulation and preservation of organic matter and its transformation into petroleum. The basic geological condition for the formation of continental source rocks in China is that deep-water to semi-deep-water lacustrine sedimentary areas that developed stably for a relatively long period, dominated by subsidence, are the most favorable areas for the formation and development of source rocks. This \"deep-water depression\" theory emphasizes that the formation of source layers is controlled mainly by structural-thermal conditions: the formation of sedimentary depressions is the prerequisite for continental oil generation, and the formation of continental source layers is related mainly to deep-water lacustrine deposits, whereas climate and water salinity are not the decisive factors for the formation and development of continental source rocks. From the late 1960s to the 1970s, research institutes across China widely applied the world's advanced experimental techniques, centered on organic geochemistry and grounded in the study of the lithological characteristics of source rocks, and made much important progress on the selection of geochemical indicators for source rocks, the identification of kerogen parent-material types, the division of maturity and thermal-evolution stages, and oil-source correlation. Numerous research institutes, universities, and oilfield units conducted comparative and systematic experimental tests and simulation studies on organic-matter types, oil-generation threshold temperatures, the thermal-evolution characteristics of organic matter in oil-bearing basins, and the genetic relationships between crude oils and source rocks in various oilfields, bringing China's continental oil-generation research to a theoretical and systematic stage.
The understanding reached at this stage can be focused into one point: there is no essential difference in mechanism between continental and marine oil generation. The results of oil and gas exploration in China's continental basins have repeatedly shown that oil and gas fields are always distributed within hydrocarbon-generating sags or in adjacent areas nearby. Hydrocarbon-generating sags are therefore the main factor controlling the generation of oil and gas and the distribution of oil and gas fields in continental basins; a hydrocarbon-generating sag is an oil-forming area. This is the \"source-control theory\", and it plays a very important guiding role in oil and gas exploration. China's sedimentary basins, especially the Mesozoic and Cenozoic basins, are marked by strong activity and active deep processes, which are determined by the structure and dynamic evolution of the Chinese mainland itself and by its special tectonic location. For example: \u2460 the pre-Late Proterozoic continental blocks that make up the Chinese mainland are small and poorly stable; \u2461 the active belts around the margins of these blocks are large in scale and highly active; \u2462 the Chinese mainland was affected by different dynamic systems during the Phanerozoic, so its dynamic environment was changeable and its evolutionary history complicated; \u2463 today it is converged upon from multiple directions by the Pacific-Philippine plate, the Tethys-India plate, and the Siberia plate, which differ markedly in activity intensity and character and have extremely complicated histories of development and evolution [1]. However, few people ask why the oil and gas resources of continental basins favor China, while most other continental basins in the world are not rich in oil and gas. To answer this question we must start from the geological characteristics of China's continental basins. Characteristics of China's continental basins: The main body of the Chinese mainland is composed of three ancient continental blocks (or cratons), the Sino-Korean (North China), Tarim, and Yangtze blocks, together with smaller blocks between them. From the Sinian to the Early Paleozoic, sedimentation on China's continental blocks was dominated by marine facies. After the end of the Paleozoic, the scattered Chinese blocks were gradually spliced together, forming a unified continent that is a mosaic of blocks and orogenic belts. By the end of the Middle Triassic, except for Tibet and some areas bordering the Pacific, the main body of the Chinese mainland had risen to become land, and a large number of continental sedimentary basins of varying size, dominated by fluvial and lacustrine deposits, developed. The vast, thick fluvial and lacustrine deposits among them are not inferior to marine deposits in organic-matter abundance or hydrocarbon-generating ability, providing a solid material basis for the generation of abundant oil and gas resources. The Chinese mainland is highly active.
This determines that the continental basins developed on it have large subsidence, wide distribution, deep fault (flexural) depressions, broad water bodies, rapid filling, thick sediments, and complex, diverse sedimentary systems and filling styles; the overall geothermal field is relatively high; the basin interiors are generally strongly segmented or differentiated, with alternating uplifts and hydrocarbon-rich depressions (sags); and trap types are diverse, many forming syndepositionally, with many structural traps still forming after the basin-forming stage. These are favorable aspects for oil and gas occurrence. The unfavorable factors are that the sedimentary successions, structural characteristics, geothermal fields, and hydrodynamic conditions of the basins are relatively complex, change rapidly in plan view, and were strongly reworked late in their history; the degree of late reworking increases with the age of basin formation. Corresponding to the strong activity of China's continental basins are active deep processes: interaction among the earth's layers and upward migration of deep material are common, and basin geothermal gradients are relatively high. This has an important impact on the oil and gas content and resource scale of the basins. In the past, studies of organic-matter maturation and hydrocarbon generation paid much attention to the level and evolution of the basin geothermal field, and the positive and negative effects of magmatic activity on source-rock maturation and on hydrocarbon accumulation and occurrence have also been studied to some extent. Recently the catalytic and synthetic hydrocarbon-generating roles of deep hydrothermal fluids or gases during organic-matter evolution have also been discussed. However, little attention has been paid to the significance of deep material and the geothermal field for the key issue of the survival and reproduction of organisms during basin development [1]. Compared with marine basins, China's continental basins developed over shorter time spans; only a faster sedimentation rate can compensate for the shorter time, and a faster subsidence rate and a larger subsidence range provide greater accommodation space, which controls deposition. China's continental basins are highly active, and their deep processes favor the creation of ample accommodation space, providing broad sites and favorable conditions for the deposition of thick strata, the development of multiple favorable source-reservoir-caprock assemblages, and the formation of high-quality source rocks. These conditions prepared a solid material foundation for the formation of rich oil and gas resources and of many large and medium-sized oil and gas reservoirs (fields). Characteristics of oil and gas distribution in China's continental basins: The occurrence and development of China's continental basins passed through stages of uplift, fault depression, downwarping (depression), and contraction. The period of relative uplift is the initiation period of a basin; the fault-depression and depression periods are the peak of basin development, when hydrocarbon-generating material develops and oil forms and accumulates; and the contraction period is when the basin tends toward extinction and the oil and gas pools are further formed, adjusted, and finally positioned.
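The accommodation-space argument a few sentences above can be stated compactly with the standard sequence-stratigraphic rate balance (a schematic relation given here for orientation, not taken from the cited reference): accommodation is created by subsidence plus base-level (lake- or sea-level) rise, and the basin's fill state follows from comparing sediment supply with it.

```latex
\frac{dA}{dt} \;=\; \underbrace{\frac{dS}{dt}}_{\text{subsidence rate}}
\;+\; \underbrace{\frac{dL}{dt}}_{\text{base-level rise}},
\qquad
\text{supply} \approx \frac{dA}{dt}\ \text{(compensated)},\quad
\text{supply} < \frac{dA}{dt}\ \text{(undercompensated)},\quad
\text{supply} > \frac{dA}{dt}\ \text{(overcompensated)}.
```

A rapidly subsiding continental basin can thus keep dA/dt high enough for thick lacustrine fills to accumulate even over a geologically short life span.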
The four periods described above appear unevenly in different basins, which produces differences in oil-forming conditions from basin to basin. Rift-depression basins pass through multiple structural cycles and form multiple good source-reservoir-caprock assemblages, with well-developed source rocks and reservoirs. The crust of faulted basins is dominated by stretching, and various types of anticlinal traps can form under various geological stresses; these are the geological basis for large and medium-sized oil and gas reservoirs. Intense extensional rifting in faulted basins occurred in multiple episodes, forming multi-stage, multi-series fault-block oil and gas reservoirs of various shapes and types; fault-block reservoirs, superimposed and interconnected, thus constitute complex oil and gas provinces. This is characteristic of faulted basins. The bedrock of rift basins undulates greatly, and overlap lines, pinch-out lines, and unconformities are extensive, favoring the formation of various subtle (hidden) oil and gas reservoirs, especially ancient buried-hill reservoirs. Each dustpan-shaped (half-graben) rift forms by the tilting of fault blocks, so large stratigraphic oil and gas reservoirs can form on both the steep and the gentle slopes. The sedimentary environments and reservoir types of continental basins are also varied: alluvial fans in piedmont environments; channel sand bodies, swamp reticular sand bodies, and lenticular sand bodies in plain environments; and, in coastal environments, lake-basin delta systems (meandering-river deltas, fan deltas) together with the derived mouth bars, sheet sands, subaqueous distributary-channel sands, granular limestones (oolitic shoals, bioclastic shoals, etc.), and subaqueous channel sands. Deposition in these different environments established different types of reservoirs and oil and gas pools. In rift-depression basins, sand bodies of various stages and sources extend from the basin margins into the oil-generating area of the lacustrine basin, and the distribution area of such a sand body is an oil and gas enrichment area. Abundant exploration practice has proven that in continental basins the lake interior is the oil-generating area and the lake margin is the oil and gas enrichment area, with the main oil and gas fields distributed near the oil-generating areas. China's continental basins are characterized by large water areas, long development periods, deep subsidence, and high geothermal gradients, all very favorable for generating oil and gas. A basin often contains several sags, and each sag contains several fault troughs (sub-sags); each fault trough is an oil-generating center, so oil-generating areas are widely distributed and oil and gas migration is active. There is lateral migration and in-place accumulation, as well as vertical migration and redistribution along inherited faults, so secondary oil and gas reservoirs can form in overlying strata lacking source rocks and in adjacent areas. Vertical migration of oil and gas is active in rift basins; fault sealing and connectivity clearly affect the height to which oil and gas migrate, and the main faults control the enrichment and distribution of oil and gas on their two sides. Generally, crude oil is lighter at depth and heavier upward, while natural gas is heavier below and lighter above, and secondary gas reservoirs are common.
Thanks to these favorable geological conditions, some hydrocarbon-rich sags in eastern China attain \"whole-sag oil-bearing\" status, with high reserve abundance; large oil fields can be found even in small sags. Medium and large oilfields in China's different types of continental basins formed under different geological backgrounds and occurrence conditions, but the formation of oil and gas fields in China's continental basins can be attributed to four common features: first, the long-term continuous subsidence of semi-deep to deep lacustrine sags controls the formation of source rocks and of oil-generating (hydrocarbon-rich) sags; second, oil-generating sags control the distribution of oil and gas fields, that is, oil and gas fields are distributed mainly in enrichment areas near oil-generating sags; third, large structural belts of various kinds in and around the oil-generating sags combine with favorable sedimentary facies belts to form favorable oil and gas accumulation zones; fourth, compound oil and gas accumulation areas (belts) are a common accumulation pattern in continental faulted basins, and different geological settings host different types of compound accumulation areas (belts) with different reservoir bodies and sequences [5]. The superposition of China's continental basins and the multi-cycle, multi-stage nature of their structural evolution complicate the spatio-temporal relationships between structural belts and accumulation stages, resulting in complex patterns of oil and gas distribution [4]. Why China's continental basins are rich in oil and gas resources: In discussing whether large and medium-sized oil and gas fields can form in continental basins, whether abundant, high-quality source rocks can form in the sedimentary environment is the sharper and clearer question about the environmental conditions necessary for continental oil generation [6~8]. Multi-cycle tectonic movements and multi-stage basin superposition give most of China's sedimentary basins the character of superimposed basins [9]. Large, nutrient-rich freshwater to brackish lake basins preserve abundant sapropelic kerogen, and good thermal sealing conditions favor the transformation of organic matter and the formation of high-wax oil [10]. In continental basins, the presence or absence of oil-generating sags and the scale and character of hydrocarbon-rich sags directly determine whether oil and gas exist in a basin and at what scale, while the survival and abundance of organisms during basin evolution, and whether their remains could be preserved in time, are the key link. China's continental basins had relatively short development periods, variable depositional environments, and frequently migrating depocenters, which are generally unfavorable for the enrichment of organic matter; but the higher ground temperatures and the frequent, widespread upwelling of deep material made it possible for organisms to live in dense communities, to flourish and become enriched within a relatively short time, and to be buried and preserved promptly (because these organisms generally lived at the bottom of the lake) [1]. The structure and dynamic evolution of the Chinese mainland itself and its special tectonic location determine that China's sedimentary basins, especially the Mesozoic and Cenozoic basins, are obviously characterized by strong activity and active deep processes.
These two important characteristics directly determine the basic nature, overall appearance, evolutionary course, and late-stage reworking of China's sedimentary basins, and they profoundly influence and constrain, in both positive and negative ways, the occurrence environment, accumulation characteristics, distribution rules, and resource scale of China's Mesozoic and Cenozoic continental oil and gas, giving the oil- and gas-rich continental basins of China their Chinese characteristics [1]. Although some of the reasons why China's continental basins are rich in oil and gas have been discussed above, why are they rich while most continental basins elsewhere in the world are not? The existing theory of oil formation in continental basins does not answer this question; deeper research on, and comparison of, continental basins at home and abroad is needed. The root cause, or main controlling factor, of the rich oil and gas resources in China's continental basins remains a world-class problem to be explored. The problems raised above open many research avenues for studying the distribution laws of oil and gas resources in continental basins and ultimately revealing their root causes.", "Current status of oil and gas exploration in the world's marine basins: Worldwide, oil and gas occur predominantly in marine facies, with continental facies second. More than 90% of the world's oil and gas reserves occur in marine strata (basins), distributed mainly in the Middle East, Central Asia-Russia, North America, South America, Africa, and the Asia-Pacific region [1]. Among these, the Persian Gulf region of the Middle East is the most richly endowed: its oil reserves account for about 2/3 of the world total, mostly in marine formations (basins). By the end of 2008, 951 large oil and gas fields had been discovered worldwide; their reserves account for more than 50% of discovered world reserves, and they are distributed mainly in oil- and gas-rich marine strata (basins) such as the Persian Gulf and West Siberia [2]. The world's largest oil field, the Ghawar Oilfield in Saudi Arabia, has recoverable reserves of about 13.3 billion tons, and the largest gas field, the North Field, has recoverable reserves of 22 billion tons of oil equivalent; both formed in marine strata (basins) [3]. Most confirmed wells with stable daily output above 1,000 tons produce from marine oil and gas fields [4]. Today, however, the contradiction between international energy supply and demand is acute, and energy security has increasingly become a focus of every country's attention. With innovations in petroleum geological theory and improvements in exploration technology and methods, marine oil and gas exploration worldwide is in the ascendant. China has made brilliant achievements in continental oil and gas exploration and has formed a continental oil-generation theory with Chinese characteristics, making it one of the world's major oil producers. But why has oil and gas exploration in China's marine basins, dominated by Paleozoic strata, been so slow to achieve a major breakthrough? What makes this exploration so difficult?
Can marine basins carry the hopes of a \"second venture\" for China's oil and gas industry? These questions are of wide concern. History and current status of oil and gas exploration in China's marine basins: The history of oil and gas exploration in China's marine basins has accompanied the development of the petroleum industry, with repeated setbacks and occasional glories. From local exploration in the first half of the 20th century, to the discovery of the Renqiu Oilfield and the confirmation of the Sichuan marine gas-bearing basin in the 1970s, and on to the exploration breakthroughs in the Ordos and Tarim Basins in the 1980s, marine exploration has gathered momentum [5]; 28 onshore marine basins and 22 offshore marine basins have been identified. By the end of 2007, the 234 marine oil and gas fields nationwide (excluding the southern South China Sea) held cumulative proven oil-in-place reserves of 223,479.34×10^4 t and recoverable reserves of 41,884.23×10^4 t, about 8% and 5.5% of the national totals respectively; cumulative proven geological reserves of natural gas were 20,761×10^8 m^3 and recoverable reserves 13,616×10^8 m^3, about 28% and 33% of the national totals respectively [6]. Most proven marine oil and gas fields lie in the three large craton basins of Sichuan, Tarim, and Ordos. The largest marine oil field in China, the Tahe-Lunnan Oilfield, was discovered in the Tarim Basin, with proven geological oil reserves of 5.875×10^8 t and natural gas of 183.98×10^8 m^3. The Puguang Gas Field in the Sichuan Basin, so far the largest integrated marine gas field in China, has proven natural gas reserves that have grown to 3,560.875×10^8 m^3, with further breakthroughs and continuing reserve growth; at the same time, important natural gas discoveries have been made in the Ordos Basin, such as the Changqing, Sulige, and Daniudi Gas Fields. The total area of marine sedimentary strata in China reaches 455×10^4 km^2, distributed mainly in the Tarim, Sichuan, and Ordos Basins, North China, South China, Qinghai-Tibet, and the offshore areas; of this, marine sedimentary areas on land cover 330×10^4 km^2, and the Cenozoic marine basins offshore cover about 125×10^4 km^2. Excluding the Mesozoic marine sedimentary area of the Qinghai-Tibet Plateau, the onshore marine sedimentary area is 230×10^4 km^2 [5]. These strata are widely distributed and have great potential, but their exploration level is relatively low; compared with other marine oil provinces of the world, their discovered oil and gas resources remain small in scale and contribution. Discussion of why oil and gas exploration in China's marine basins is difficult: Owing to the geological conditions and characteristics of these marine basins, oil and gas exploration in them is relatively difficult.
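The reserve figures just quoted can be cross-checked by backing out the implied national totals from each marine share (a rough consistency sketch using only the numbers in the text; function and variable names are ours):

```python
# Back out implied national totals from China's marine reserves (end of 2007)
# and their stated shares of the national totals, as quoted in the text.
def implied_national_total(marine_value: float, share: float) -> float:
    return marine_value / share

oil_in_place    = 223479.34e4   # t,   cumulative proven oil in place (~8%)
oil_recoverable = 41884.23e4    # t,   recoverable oil (~5.5%)
gas_proven      = 20761e8       # m^3, cumulative proven gas (~28%)
gas_recoverable = 13616e8       # m^3, recoverable gas (~33%)

for name, value, share in [
    ("oil in place   ", oil_in_place,    0.08),
    ("oil recoverable", oil_recoverable, 0.055),
    ("gas proven     ", gas_proven,      0.28),
    ("gas recoverable", gas_recoverable, 0.33),
]:
    print(f"implied national {name}: {implied_national_total(value, share):.2e}")
```

The implied national totals, roughly 2.8×10^10 t of oil in place, 7.6×10^9 t of recoverable oil, and 7.4×10^12 m^3 of proven gas, are mutually consistent with the quoted shares.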
The main reasons for this difficulty can be grouped into six aspects. (1) The strata are relatively \"old\" and the exploration targets deeply buried: abroad, the marine strata hosting industrial oil and gas accumulations are mainly late Mesozoic and Cenozoic deposits, with the Paleozoic and older strata accounting for a small proportion; China's marine strata are just the opposite, concentrated in the Late Proterozoic to Triassic with the Paleozoic as the main body, extending up into the Triassic in the south, with Jurassic and Cretaceous marine strata in parts of the western and northeastern margins [7]. The impact of this feature on exploration lies mainly in the following [8]: first, the sources, development conditions, and hydrocarbon-generating mechanisms of the organic matter are unclear; second, the loss and preservation of oil and gas reservoirs are very complicated; third, because Mesozoic-Cenozoic continental basins were superimposed later, the exploration targets are generally buried 3-7 km deep, complicating drilling and wellbore engineering and increasing exploration difficulty; and fourth, the changes in temperature-pressure environment and reservoir properties at great depth may affect oil and gas accumulation differently than in middle and shallow layers, and discussion of this has only just begun; this also means that the exploration targets are dominated by gas reservoirs. (2) The source rocks are multi-sourced and multi-stage, with low organic-matter abundance but high thermal evolution: overseas marine source rocks are mainly black shales of Jurassic and younger age, with organic carbon contents generally above 0.5% and locally 5% or more; the source rocks of the Persian Gulf Basin, for example, have hydrocarbon-generation potentials of up to 20 kg per ton of rock, characteristic of late-stage hydrocarbon generation [8]. China's marine basins, by contrast, often develop multiple sets of source rocks; the Paleozoic types include mudstone, carbonate rock, and coal measures, giving multi-stage hydrocarbon generation, but the organic-matter abundance, especially of the carbonate rocks, is generally below 0.15%, the hydrocarbon-generation potential is below 0.2 kg per ton of rock, and the thermal evolution of the organic matter is relatively high; most Upper Proterozoic to Lower Paleozoic organic matter in China's three craton basins, for example, is in the wet-gas to dry-gas stage of thermal evolution. The reliability of current work in this area still needs further research and verification by exploration practice. (3) Reservoirs are of many types and strongly heterogeneous: carbonate reservoirs abroad are mainly Cretaceous and younger limestones and Paleozoic dolomites with good reservoir properties; China's marine carbonate reservoirs are distributed from the Sinian to the Mesozoic, are geologically old, have long evolutionary histories, and were strongly reworked late, forming many reservoir types with strong heterogeneity, and the various reservoir types in turn form traps of many kinds.
(4) The history of oil and gas accumulation is complex: most marine oil and gas reservoirs in China experienced multi-stage accumulation, dispersal, reserve adjustment, and late to ultra-late emplacement. Secondary oil and gas reservoirs are common, but views on the accumulation periods differ and the main accumulation period is unclear; oil and gas sources may be self-sourced or externally sourced, single-source, multi-source, or mixed, and oils and gases of different character can enter a reservoir in stages, making their compositions, marker compounds, and isotopic characteristics complex and oil-source correlation more difficult; the distribution of oil and gas reservoirs is far less \"source\"-controlled than in continental basins, and accumulation conditions are complex. (5) Intense late-stage reworking: the marine basins abroad in which large oil and gas fields have been discovered have relatively simple development histories, were not superimposed and reworked by multi-stage later tectonic movements, and are mostly preserved as prototype basins [8]; their oil and gas distribution is relatively simple, which aids reservoir prediction. Late reworking, by contrast, is one of the main characteristics of Chinese sedimentary basins, with the following notable features: it is widespread, with obvious spatial differences; it is intense, and the older the basin, the stronger the reworking; and the later the time, the more stages there are, each with different characteristics [9]. Most Chinese marine basins have therefore experienced multi-cycle superposition and multi-stage structural change [5,9], which directly causes multi-stage secondary hydrocarbon expulsion, mixing of oil and gas from multiple sources, and modification or destruction of reservoirs, complicating the formation and distribution of oil and gas and dissipating a large amount of resources. (6) Poor (complex) preservation conditions: preservation conditions are the key factor for Paleozoic oil and gas accumulation [9]; reworking and preservation are a pair of opposed yet organically connected contradictions, and oil and gas reservoirs form, are destroyed, and are preserved within this dynamic balance of contradictions. Directly related to preservation conditions are the development, quality, and integrity of regional and local caprocks (especially gypsum-salt layers). The late reworking of China's Paleozoic marine basins is intense and uneven, regional gypsum-salt beds are generally undeveloped, and preservation conditions are generally better in the west than in the east and better in the north than in the south [11]. Hydrocarbon accumulation in most basins is therefore complex and diverse, increasing the difficulty of exploration. The crux of the difficulty of oil and gas exploration in China's marine basins: The above analysis shows that this difficulty is closely related to the strong tectonic activity and late reworking of the Chinese mainland itself.
The symptoms are reflected mainly in the following aspects. \u2460 Congenital deficiency: more than 95% of the world's Paleozoic marine oil and gas is concentrated in the North American, Eastern European, and North African-Middle Eastern cratons (including the Arabian plate). Paleozoic oil and gas is most abundant on the North American craton, which accounts for 56% and 35% of the world's Paleozoic oil and gas reserves respectively; the Eastern European craton is second and the North Africa-Middle East craton third [12]. Compared with these three most oil- and gas-rich Paleozoic blocks, the North China, Yangtze, and Tarim blocks show no significant difference in the scale, duration, or depositional thickness of their Paleozoic marine sediments. However, because China's continental blocks are small, each block itself forms the main body of its sedimentary basin, internal differentiation is weak, and there is no strongly undulating pattern of uplifts; deposition was mainly extensive, with little lateral change in lithology or thickness, so hydrocarbon-generating sags are undeveloped. Nor were widespread gypsum-salt layers of appreciable thickness formed in the upper part of the marine succession during marine regression, as in North America and Eastern Europe, so good regional caprocks are lacking. These are two obvious \"congenital deficiencies\" in the basic conditions for oil and gas occurrence in China's Paleozoic marine strata [10]. \u2461 Acquired damage: during ocean subduction, continental amalgamation and collision, and the intense intracontinental deformation that followed, the sedimentary strata at the margins of each block bore the brunt and were intensely reworked. In general, continental-margin deposits enjoy favorable conditions for hydrocarbon formation and accumulation; but on most Chinese blocks, except at a few special locations, these deposits have largely been eroded, and what remains has been metamorphosed to varying degrees, strongly deformed, or deeply buried, with very few well-preserved occurrences. Multiple episodes of strong tectonism also deformed the block interiors markedly: marine strata deposited early may be exposed at the surface and strongly denuded, even stripped away entirely, including the carbonate rocks; or they were superimposed and deeply buried beneath overlying Mesozoic-Cenozoic basins, as in the Ordos, Sichuan, and Tarim Basins [13]. The scarcity of remaining marine strata at the ancient continental margins and the intense late reworking of the basins are two significant \"acquired damage\" factors with a crucial, even decisive, influence on the hydrocarbon-generation and accumulation conditions of China's Paleozoic marine strata [10]. Moreover, the Mesozoic marine basins of the Qinghai-Tibet region rest on even smaller blocks, and their subsequent reworking, especially thermal reworking, was stronger, so the sedimentary successions and late reworking of these block margins and interiors resemble those of the Paleozoic marine basins described above. These are the crux of the complexity of China's marine basins and of the difficulty of achieving exploration breakthroughs [10].
Relatively speaking, the prospects for oil and, especially, natural gas in superimposed, deeply buried marine basins are good [13]. In areas of relatively strong late structural deformation, it is important to determine whether the shallow section and the middle-deep section of a basin deformed synchronously and in a coordinated way [10]. Faced with such complex exploration targets, the original continental petroleum geology theory and foreign marine petroleum geology theory can no longer effectively guide China's marine oil and gas exploration or meet development requirements. It is urgent, building on continental petroleum geology and foreign marine petroleum geology and on the oil and gas geological characteristics and occurrence conditions of China's marine strata, to innovate a theoretical system of marine petroleum geology with Chinese characteristics to guide exploration practice. Fortunately, with the refinement of China's marine exploration theory and advances in science and technology, more and more marine oil and gas fields are being discovered, showing that China's marine strata (basins) have good prospects and huge potential, and that marine oil and gas exploration in China has entered a new stage of development [14].", "Uranium was discovered by the German chemist M. H. Klaproth in 1789 and named after Uranus, the planet discovered in 1781; its element symbol is U. Since the discovery of uranium's radioactivity in 1896 and of nuclear fission by O. Hahn and F. Strassmann in 1939, uranium has become extremely valuable and one of the most important elements. Until the discovery of nuclear fission, uranium was used as a glass colorant; today, as is well known, it is the raw material of nuclear weapons and nuclear power plants. With economic and social development, China has decided to develop nuclear energy vigorously, creating a major, long-term demand for uranium resources. Super-large uranium deposits are individual ore fields or mineralized enrichment areas containing tens of thousands to hundreds of thousands of tons of uranium; breakthroughs in finding them are an important guarantee of meeting the huge uranium demand of nuclear energy development. The uranium source of exogenic super-large uranium deposits is basically clear; for example, the uranium in sandstone-type deposits comes mainly from the ore-bearing layer itself and from the eroded source area. The enrichment and source of uranium in endogenic super-large uranium ore fields and mineralized enrichment areas, however, has long been an important scientific problem troubling nuclear geologists, and its solution is important not only for elucidating the formation mechanism of these deposits but also in practical terms.
After nearly a century of uranium exploration, the distribution of endogenic super-large uranium deposits worldwide has proven strongly heterogeneous. The main districts are the uranium ore concentration areas of South China; the unconformity-type uranium concentration area of the Athabasca Basin and the paleo-conglomerate-type uranium-gold concentration area of Blind River in Canada; the Russian-Mongolian Far East uranium concentration area; the South African paleo-conglomerate-type uranium-gold concentration area; the Bohemian uranium concentration area of Europe; and the South Australian uranium concentration area represented by the Olympic Dam uranium-copper-gold deposit. Together these districts hold more than 80% of the world's uranium resources. The main characteristics of these super-large deposits and districts are that the older the formation age, the larger the mineralization scale and the greater the depth; single deposits are large, and multiple elements are mineralized together. The paleo-conglomerate-type uranium-gold deposits of South Africa and Canada both formed in the Archaean, with mineralization depths of several kilometers, and both uranium and gold reach industrial grade; the gold reserves of the single Witwatersrand deposit in South Africa approach 50,000 tons, about 40% of world reserves, and its uranium exceeds 100,000 tons, occurring mainly as uraninite (crystalline uranium oxide), indicating formation in an oxygen-poor reducing environment. One naturally asks where so much uranium and gold came from. Many publications hold that it came from uranium-enriched rock series (formations); how, then, did those original uranium-enriched formations become enriched in uranium? The unconformity-type uranium deposits of Canada are controlled by the Archaean-Proterozoic unconformity, with Mesoproterozoic mineralization ages. The uranium mineralization is not only large, of the order of hundreds of thousands of tons, but also very rich, with grades up to 15%; even the nickel in the uranium ore reaches industrial grade, the nickel content of the Key Lake uranium deposit, for example, reaching 2%. Another typical super-large deposit is the Olympic Dam uranium mine of South Australia, whose single-deposit uranium resource exceeds 1 million tons; its mineralization age is also Proterozoic and its mineralization depth several kilometers, and elements paragenetic with uranium include copper, gold, silver, rare earths, and iron. How mineralization of such scale occurred, and what common enrichment mechanism and material source could serve so many elements of different geochemical behavior, has not yet been scientifically resolved. The relatively young uranium concentration areas are the hydrothermal uranium districts of southern China, Bohemia in Germany and the Czech Republic, and eastern Russia and Mongolia; single deposits there are not large, but the total scale is hundreds of thousands of tons and the mineralization depth several thousand meters.
The degree of enrichment in these ancient super-large uranium deposits shows that the elements underwent progressive differentiation during geological evolution; why the deposits are so concentrated and where the ore-forming material came from has long puzzled researchers. Regarding the uranium source of endogenic super-large uranium deposits and uranium districts, many methods have been applied since uranium deposits were first studied, and there are two main conclusions: one is that the uranium comes from source-area rocks, as in the paleo-conglomerate-type uranium-gold deposits; the other is that it comes from the ore-bearing wall rocks or from depth, as in hydrothermal uranium deposits related to granites and volcanic rocks. In any case, the overall background of super-large uranium deposits and districts is uranium-rich, especially the rocks of the Precambrian uranium-producing areas, showing that super-large uranium enrichment areas are closely tied to original uranium enrichment. Hence the major scientific question that the heterogeneity of the original distribution of uranium on the earth controls the occurrence of super-large uranium deposits and districts; what, then, causes that heterogeneity? The heterogeneity of uranium distribution meant here includes differences in the distribution of uranium at the earth's surface and in the deep crust, mantle, and core, possibly even the existence of a uranium-bearing core. Direct evidence for heterogeneous original uranium distribution comes from lunar KREEP rocks, which have basaltic SiO2 contents but are rich in uranium, thorium, and rare earths; they formed at the beginning of lunar evolution more than 4 billion years ago and underwent no later evolution, indicating that uranium was enriched, and heterogeneously distributed, early in the Moon's history. In general, because uranium is lithophile and oxyphile, it tends to be enriched in the crust during the earth's evolution; yet many studies show that uranium is heterogeneously distributed in the deep crust and mantle. For example, the uranium content of kimberlite from the deep mantle, especially phlogopite-rich kimberlite, is more than 100 times that of the average mantle, and the uranium and thorium contents of deep-sourced alkaline rocks are often many times those of the mantle. The American geophysicist J. Marvin Herndon holds that a uranium fission reactor operates inside the earth's core, citing as direct evidence the fission product 3He found in volcanic ejecta; he estimates that 64% of the earth's uranium has accumulated in the core, and argues that the theory also accounts for changes in the earth's magnetic field. Modern geophysics and geology show that core and mantle material can reach the crust through mantle plumes, especially early in earth history. Natural nuclear reactors do exist in nature: in 1972 a natural uranium fission reactor that operated 1.8 billion years ago was discovered at Oklo, Gabon, in Africa; it ran intermittently for hundreds of thousands of years and released enormous energy. The source of the uranium, or original uranium, in super-large uranium deposits and enrichment areas remains to be revealed.
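The plausibility of the Oklo natural reactor can be checked with a one-line decay calculation (an illustrative sketch; the half-lives are standard values and the function name is ours): because 235U decays faster than 238U, the 235U fraction 1.8 billion years ago was far higher than today's 0.72%, roughly the enrichment of modern light-water-reactor fuel.

```python
import math

# Half-lives (years) of the two main uranium isotopes (standard values).
T_HALF_U235 = 7.04e8
T_HALF_U238 = 4.468e9

def u235_fraction_at(t_years_ago: float, f235_today: float = 0.0072) -> float:
    """Back-calculate the 235U atom fraction t years ago from today's value."""
    lam235 = math.log(2) / T_HALF_U235
    lam238 = math.log(2) / T_HALF_U238
    # Going back in time, each isotope's abundance grows as exp(+lambda * t).
    r_today = f235_today / (1.0 - f235_today)   # 235U/238U atom ratio today
    r_past = r_today * math.exp((lam235 - lam238) * t_years_ago)
    return r_past / (1.0 + r_past)

print(f"235U fraction 1.8 Gyr ago: {u235_fraction_at(1.8e9):.1%}")  # ~3.1%
```

At roughly 3% 235U, natural uranium plus groundwater as a moderator could go critical, which is why such reactors were possible then but are not today.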
Research on this uranium-source question is of great practical significance for guiding uranium prospecting and of great scientific significance for elucidating the evolution of the earth.", "The enrichment and mineralization of almost all mineral resources show obvious regionalization. The world's super-large ore fields (deposits) of all kinds are extremely rich in reserves but very few in number: they account for only 7% of discovered deposits yet hold 65% of the mineral reserves [1]. Among them, super-large ore fields (deposits) are extremely rare. For example, two-thirds of the world's remaining proven oil reserves lie in the Persian Gulf Basin of the Middle East, and it is estimated that no second oil region of comparable resource scale will be found. Another example is the Olympic Dam deposit, a super-large Cu-U-Au deposit discovered in Australia in 1976, with astonishing reserves: 3×10^8 t of Cu, 1,200 t of Au, 12×10^5 t of U, and a large amount of iron, with reserves of 20×10^9 t. Its copper reserves are almost equal to the sum of all of China's copper reserves, its uranium reserves rank first in the world, its gold endowment is rare worldwide, and it also carries large amounts of silver, cobalt, and rare earths. People have long hoped to find a second such deposit; over the past two decades they have searched in Canada, South America, and other places of similar geological background and evolution for a second Olympic Dam, with little success [2]. In China's Bayan Obo iron deposit, the value of the rare earths and niobium far exceeds that of the iron; it is simultaneously a super-large deposit of iron, rare earths, and niobium, and its rare earths account for more than half of the world's proven rare-earth reserves. The author calls this phenomenon \"partial enrichment\": what is \"partial\" is anomalous and extraordinary. Beyond their obvious economic significance, these ultra-rich petroliferous basins and super-large deposits have extraordinary geological backgrounds and ore-controlling (reservoir-controlling) factors, which have aroused wide interest in the international geoscience community. Distribution characteristics of super-large deposits. Temporal distribution: taking China as an example, the distribution of large and super-large deposits through geological time is obviously uneven, with a clear tendency in the distribution of ages. Earlier workers compiled statistics on 807 large and super-large deposits of 52 major mineral commodities in China. The results show the distribution by era (percentage of deposits) to be: Archaean 4.8%, Proterozoic 11.1%, Early Paleozoic 6.9%, Late Paleozoic 22.3%, Mesozoic 34.9%, and Cenozoic 20%. From the Archaean to the Mesozoic the number of large and super-large deposits formed increased steadily, peaked in the Mesozoic, and fell sharply in the Cenozoic (a quick tabulation of these figures appears below). Spatial distribution: the spatial distribution of super-large oil and gas fields or deposits can be divided into three types: point type, belt type, and leader type [3].
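For reference, a tiny sketch (variable names ours) tabulates the age distribution quoted above and confirms that the percentages close to 100%:

```python
# Deposit-count percentages by era for the 807 large and super-large
# deposits of 52 commodities in China, as quoted in the text.
age_distribution = {
    "Archaean": 4.8,
    "Proterozoic": 11.1,
    "Early Paleozoic": 6.9,
    "Late Paleozoic": 22.3,
    "Mesozoic": 34.9,
    "Cenozoic": 20.0,
}

for era, pct in age_distribution.items():
    print(f"{era:<16}{pct:>6.1f}%  ~{round(807 * pct / 100)} deposits")

print(f"total: {sum(age_distribution.values()):.1f}%")  # 100.0%
```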
The role of deep processes in super-large mineral enrichment areas: Two factors play a decisive role in the formation of super-large deposits: the massive outflow of deep ore-bearing fluids, which promotes the formation of submarine exhalative-sedimentary ore bodies; and, under suitable conditions, extremely strong mechanical erosion of rocks. Both processes are global in character, operating over vast areas of the continents and oceans. At the same time, there must be conditions favoring the preservation of previously formed deposits from total or partial destruction, and such conditions are most often found in platform areas. Deep faults are not only channels for ore-forming solutions, jets, and associated magmatic activity; they also determine, to a large extent, the character of the adjacent sedimentary basins. During the formation of stratiform and quasi-stratiform deposits controlled by long-active deep fault zones, minerals are widely dispersed on the one hand and concentrated in \"ore-conduit\" areas on the other. Large and super-large copper-polymetallic, mercury-antimony, and sometimes gold, fluorite, and celestite deposits usually occur where hydrothermal activity (including submarine exhalative and post-volcanic hydrothermal fluids) was most vigorous, and they are usually distributed in the zones where the lithofacies and lithology of the ore-hosting rock series change most; lithofacies-lithology change is likewise a controlling factor of sedimentation and sedimentary exhalation, and of hydrothermal and telethermal mineralization. An \"ore conduit\" is often the channel for deep magma and mineral migration; it forms at structurally weakened zones at bends of regional ore-controlling faults. Intersections of deep faults, and sites where magmatic material and high-temperature fluids gush out, can be such weakened, permeable zones for ore-forming solutions and extrusive products; kimberlite pipes are a typical example. Such mineralization is very rich and large in scale because deposits of this kind are directly connected to deep ore-forming sources: iron-ore magmas, magmatically segregated sulfide melts, volatile-rich residual magmas, high-temperature gas-liquid jets, and hydrothermal fluids of various temperatures. Sr, Nd, C, O, and S isotope studies of the super-large rare-earth deposit at Bayan Obo show that the ore-forming elements, rare earths and niobium, came from the mantle, and that the ore-forming solution was a mantle fluid carrying mantle-derived volatiles such as CO2. The influence of the earth's interior and deep processes on effective source rocks is manifested mainly in two respects: first, the energy and material brought by deep fluids or hydrothermal fluids may participate in the formation and transformation of effective source rocks; second, the series of material and energy effects produced by deep faults plays an important role in effective source rocks, especially high-quality (high-efficiency) source rocks [4,5]. The Cenozoic basins of eastern China generally developed around deep and large faults, and basin structure, sedimentary evolution, and petroleum geological conditions are clearly controlled by the deep tectonic effects of these faults and by their strike-slip and pull-apart tectonics.
Around these faults there are obvious differences in the degree of source-rock development, source-rock quality, thermal-evolution conditions, source-reservoir-seal matching, trap scale and type, and preservation conditions, and the faults exert important control on resource distribution. Some proven CO2 gas pools are related to deep inorganic sources. Because many sets of high-efficiency, high-quality source rocks developed in the inherited deep fault depressions around the deep faults, the overall thermal evolution of the source rocks is high and large and medium-sized structural traps are well developed, so the hydrocarbon-rich sags are distributed around the deep and large faults. In summary, the earth's interior and deep processes have an important influence on the formation of super-large mineral deposits, organic and inorganic alike, and deserve full attention in future mineral evaluation. Significance of biological action in super-large metal deposits: The Early Proterozoic Witwatersrand gold-bearing conglomerate deposit of South Africa is the most important gold deposit in the world. Total gold production since the dawn of civilization is estimated at about 65,000 tons, of which 55% (36,000 t) came from this super-large South African deposit; including its proven reserves (51,000 t), the gold produced so far amounts to only about 60% of its endowment [6]. Studying the genesis of the gold associated with kerogen in this deposit, Mossman and Dyer proposed that, in an anoxic environment, gold was weathered out of Archaean source rocks under the action of sulfur-cycle microbial communities; it was transported in rivers in solution, or as colloids stabilized by humic acids or by intermediate products of the sulfur cycle, to catchment floodplains; and there, in the presence of extensive prokaryotic microbial mats, gold was precipitated as a result of oxygen production in the local microenvironment within cyanobacterial colonies. They also support a biomineralization model based on several experimental findings on the interaction of modern prokaryotes with gold. As to how the gold was dissolved and transported, Reimer [7] argued that its dissolution during weathering of the source rocks may have been aided by cyanide-producing microorganisms, the gold becoming an organically protected colloid during transport. In the Witwatersrand deposit there is clear evidence of the original biological community: the abundant kerogen is the remains of prokaryotic communities or microbial algal mats; gold and kerogen are intimately intergrown, the gold occurring as filaments and filament-bonded aggregates, with gold contents in kerogen as high as several percent by weight. By one count, 50% of the gold ore in the deposit contains kerogen. The close association of kerogen and gold is thus not only important evidence of biomineralization but also a gold-bearing marker layer guiding the exploration and development of the deposit. The formation of super-large deposits is the product of many factors and stages; although organisms may play an important, even critical role, super-large deposits can hardly form without the other conditions.
Biomineralization itself also occurs under particular sedimentary-geological, paleogeographic and paleoclimatic background conditions, and a deposit is further modified after its formation. Although some progress has been made, biomineralization remains the weakest link in ore-deposit studies. Considering the importance of the mineral and energy resources hosted in the sedimentary lithosphere, and the role of biomineralization in forming them, it is urgent to pay attention to and strengthen research on biomineralization.

Significance of abiotic action in super-large oil and gas fields
Abiotic (inorganic) components adsorb organic components, and the organic-matter content of sediments is related to the adsorption capacity, that is, the surface area, of the minerals; such adsorption may be an important process in kerogen formation. The distribution in basin sediments of fine-grained clay minerals with adsorption capacity, and of various oxides-hydroxides and their amorphous colloids, may therefore be a key factor determining the organic-matter content, oil-generating potential and oil-gas distribution of a formation, and one of the important conditions behind the strata-controlled and time-controlled character of oil and gas resources. Mineral surfaces can accelerate the thermal degradation of acetic acid, and transition-metal elements strongly catalyze the activation of C-C bonds in organic components; therefore, besides temperature, the possible catalytic and inhibitory effects of inorganic components should be fully considered in studies of the oil and gas generation mechanism. The effect of radiolysis on organic matter resembles the thermal maturation caused by deep burial: besides a reduction of the H/C ratio, organic matter with high uranium content is also oxidized. Distinguishing deep-burial thermal effects from the effects of other factors (organic-matter type, alpha radiolysis, depositional conditions) is important for determining the true nature of the original organic matter. The research of Cassou et al. shows that radioactivity increases the maturity of organic matter: the kerogen of uranium-bearing samples is distinctly more evolved, and, with little chloroform extract, their gaseous hydrocarbons are markedly drier than those of other samples. The closer to the uranium ore body, the stronger the "radiation damage": organic-matter maturity is higher in radioactive areas, and the degree of organic-matter evolution is deeper near uranium mineralization. The organic matter of black rock series in uranium-enriched mining areas is more mature: its bitumen reflectance is higher, and the H/C ratio of its kerogen is lower, than in ordinary uranium mining areas. In both uranium-enriched and ordinary uranium mining areas, the uranium content of the black rock series shows an obvious positive linear correlation with the bitumen reflectance of the rock, and no close relationship with the organic-carbon content.
At present, the worldwide surge in shale gas exploration and development shows that shale gas reserves are very large and that shale gas may become an important replacement resource in the future energy field; its formation, enrichment and distribution mechanisms, and the contribution of inorganic effects to shale gas generation, need further study.

Other factors affecting the formation of super-large deposits
The distribution and formation of super-large deposits are related to the heterogeneity of the mantle, and perhaps even to the original heterogeneity of the earth at its formation [8]. Synsedimentary faults form during the development and evolution of sedimentary basins and control the spatial pattern of the basin, its internal sedimentation and volcanism, fluid activity and mineralization; they are one of the basic factors in the formation of super-large strata-bound deposits of SEDEX, VMS and MVT type [9]. The hydrothermal-sedimentary ore-forming mechanism is likewise a factor favoring the formation of super-large deposits [2]. Super-large ore deposits tend to occur at particular structural positions, a characteristic of real guiding significance in prospecting for them. There have been many studies worldwide of the controlling factors of super-large ore deposits; various scholars have discussed from different angles the control exerted by particular factors and have proposed a series of metallogenic models attempting to explain their genesis. It should be pointed out that, although the geological characteristics, metallogenic conditions and ore-controlling factors of super-large deposits are now understood to some extent, no substantial breakthrough has yet been made in explaining the metallogenic mechanism of large and super-large ore fields (deposits); this remains a major and arduous research problem [10].

The earth is a typical giant complex system, composed of multiple complex large systems such as the "earth surface" [1], sedimentary basins and orogenic belts. The earth displays the distinctive characteristics of a typical complex system [2]: it is a natural, open, large-scale dissipative dynamical system with specific functions. The earth forms and evolves while dissipating various internal dynamic processes; at the same time, its exospheres (hydrosphere, atmosphere, biosphere) and the exogenous geological processes driven by other planets continuously transfer large amounts of matter and energy to it. Through continuous exchanges of matter and energy with the external environment, and through internal dissipation of energy, the earth maintains its own specific overall characteristics and functions, such as material movement, structural organization, and development and evolution. Integrity, hierarchy and correlation are the three elements a general (complex) system should possess. Integrity means that the earth system and its behavior are relatively independent, and that the parts together constitute a whole with definite functions [3~6]. Hierarchy, or divisibility, means that the whole earth system can be divided into multiple subsystems or components with different functions, relatively independent and non-homogeneous, such as the different depth levels of the earth
and geological processes such as tectonic movement, magmatism, metamorphism and sedimentation, together with their secondary and lower-order sub-processes. Correlation means the close relationship, interaction, mutual influence, mutual adaptation and interdependence between subsystems (such as the various geological processes) in the earth system; these links are generally indispensable and are the essence of nonlinearity [7]. The author believes correlation to be the most important of the three: interaction between subsystems is mutual, and no component of the system is independent of, or exerts a merely one-way influence on, the other subsystems [8].

The absence of a characteristic scale, common in the earth's complex systems, is also widespread in sedimentary basins. For example, the largest oil- and gas-bearing basins, such as the Persian Gulf, West Siberia and East Siberia basins, exceed 300×10⁴ km² in area, while a small one such as the Jinggu Basin in China covers only 88 km², a difference of more than four orders of magnitude. Likewise, strictly speaking, no two basins in the world have the same plan shape and cross-sectional structure; and the volume and distribution of sedimentary rocks in basins, the types of mineral deposits and the richness or poverty of resources, the shapes of ore bodies (beds) and the scale of resources, and so on, all vary in a thousand ways, their measured values spanning wide ranges.

Non-equilibrium state and dynamic process
During the formation and evolution of the earth, the overall structure of its complex system, the interaction and coupling between subsystems, the influence of external systems, and the occurrence of any geological process or mineral formation are always in a non-equilibrium (or far-from-equilibrium) state of dynamic change. Tending toward stable equilibrium is an intrinsic property of material motion①. This dynamic process shows the earth's dynamical system and its subsystems constantly approaching and evolving from a non-equilibrium state toward a new equilibrium state; the latter is then destroyed by new changes, enters a new non-equilibrium state, and develops toward yet another equilibrium. During this evolution new overall behaviors and functions emerge, causing the abrupt or qualitative changes that divide the earth's evolution and geological processes into stages. This seemingly repetitive breaking of existing balance and tending toward new balance is in fact a gradual evolution punctuated by sudden changes: the earth develops, evolves and displays stage-like characteristics.
(① Liu Chiyang, Tending toward stable equilibrium is a fundamental property of material motion, Northwest University graduate thesis, 1980.)

Nonlinearity of characteristics and processes
The earth system and its subsystems, and their spatial structure, temporal evolution, dynamics and processes, have significant nonlinear characteristics; small perturbations can lead to mutations [3]. Within the same tectonic unit or the same evolutionary stage of the earth, the various geological processes, dynamic environments and geomorphic features, although obviously uneven (that is, nonlinear) in their variations, still show a certain overall gradualness and similarity; between units or between stages, however, most geological features and dynamic environments show significant abrupt changes.
These mutations in time and space divide the evolution of the earth into stages with large differences and divide space into blocks (tectonic units) with obviously different characteristics, each changing through its own evolutionary stages; the spatial distribution and evolutionary process of the whole system thus show obvious nonlinearity.

Failure to satisfy the principle of superposition
Most phenomena in the classical (traditional) natural sciences conform to the principle of superposition and can be handled by reductionism. In contrast, the overall function of a complex system is greater than, or at least not equal to, the sum of the functions of its parts; this is the main property of complex systems [1] and the expression of their integral character. The formation and evolution of the earth is not a simple combination or superposition of the internal and external dynamic geological processes of the spheres mentioned above, but the result of their mutual influence, interaction and synergistic coupling. The overall function of the earth's dynamic system is therefore neither the same as that of any subsystem nor equal to the sum of the subsystem functions; that is, the overall function (and behavior or characteristics) of the earth system cannot be obtained from, or replaced by, the functions of its subsystems [8]. The motion laws (rules) that explain system behavior, characteristics and functions cannot be obtained by directly examining the laws satisfied by each component [9].

Diversity, heterogeneity and uncertainty (randomness)
The geological processes involved in the formation and evolution of the earth are complex and diverse, covering almost all dynamic geological processes of the earth system in the broad sense and involving the activities of the planets of the solar system; hence the diversity of behaviors and functions of the earth's dynamical system. The evolutionary processes and stages, geological functions, structural characteristics and material compositions (including the various mineral resources) of different units of the earth, or of different sub-units within the same unit, show obvious diversity, heterogeneity, uncertainty and compartmentalization, and often change with time (they are time-varying) [10~12]; their mathematical models are high-dimensional and nonlinear. In the earth sciences many non-periodic random uncertainties are caused by external small-scale noise (random fluctuations), but internal nonlinearity is often the main source of randomness [7]. It should be noted that the obvious uncertainties in the influencing factors (parameters) and material composition of the earth's complex giant system lie not only in the difficulty of quantitative calculation or fine description, but also in the fact that some major influencing factors and local details are not yet fully known. For example, super-large ore fields (deposits) are very few worldwide, yet they hold abundant mineral reserves [13]; explorations of the formation mechanism of this local enrichment of mineral resources [10] have continued [13], but no substantial breakthrough has been made so far; obviously, some of the main controlling factors remain unclear.
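As a minimal, purely illustrative sketch of the point made above, that internal nonlinearity alone can generate apparently random behavior, consider the logistic map; the example is generic and is not taken from the cited literature, and all numbers are invented.

```python
# A deterministic nonlinear system producing apparently random (chaotic)
# output: the logistic map x_{n+1} = r * x_n * (1 - x_n).
def logistic_orbit(r, x0, n):
    """Iterate the logistic map n times starting from x0 with parameter r."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# r = 3.2 gives a period-2 oscillation (a 'subharmonic'); r = 4.0 is chaotic.
periodic_tail = logistic_orbit(3.2, 0.2, 100)[-4:]
a = logistic_orbit(4.0, 0.200000, 50)[-1]
b = logistic_orbit(4.0, 0.200001, 50)[-1]  # tiny perturbation of the start value
print("period-2 tail:", [round(x, 4) for x in periodic_tail])
print("sensitive dependence:", round(a, 4), "vs", round(b, 4))
```

The two chaotic runs start almost identically yet end far apart, which is the "small perturbations can lead to mutations" behavior described in the text, produced here with no external noise at all.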
Self-organized criticality, catastrophe and ordered complexity
Complexity lies between randomness and order, born at the edge of order and chaos [9,14]; geological systems and ore deposits grow fractally at the edge of chaos [3,15]. Self-organized criticality is an intrinsic property of complex systems and their subsystems. A system in a critical state always spontaneously and suddenly "emerges", in some window of space-time, into order of property, dynamic behavior or function through the cooperative coupling and interaction of subsystems within the nonlinear system structure, and then reaches a new balance (order); "emergence" is an overall characteristic of the system produced by local interactions [3]. It is worth noting that a small change of some parameter in the critical state may cause a catastrophe; that is, the catastrophe cannot be measured by the magnitude of the parameter change [3,4,15], as in the occurrence of earthquakes, landslides and volcanic eruptions. In the formation of the earth and in its various geological processes and mineral resources there are obvious spatial imbalances or local extreme instabilities, random disturbances in time, and uncertainties in influencing factors and development trends. Yet because this process contains a great number of multi-period self-organizing behaviors, the earth system and the various accumulation (mineralization) systems, together with the geological-process subsystems in which they participate and their products, possess staged, relatively stable characteristics, an overall relatively stable function and an ordered structure. The stages do differ markedly, but together they constitute an orderly, correlated, progressive process of development and evolution with overall functions (basins, orogenic belts, various mineral deposits, etc.) and an ordered spatial distribution of geological structures, ore deposits and the like.

Self-similarity and fractals
In nature there is a general self-similarity of a kind of material or phenomenon, between parts or between part and whole, in shape, function, space-time and other respects; such a similar form is called a fractal. It is generally believed that nonlinearity, randomness and dissipation are the necessary physical conditions for the emergence of fractal structures [4]. Fractal theory points out the commonality and self-similarity of irregular natural phenomena at different scales (part and whole), reveals and enriches the complexity and diversity of the part-whole relationship in nature, and provides a bridge and tool for understanding and describing complex systems such as the earth and for passing from part to whole. It therefore becomes possible to study and compare, in an all-round, multi-scale way, the fractal self-similarity of certain geological behaviors, geological phenomena and mineralization.

Irreversibility and non-periodicity
The evolution of the earth, geological processes, mineral formation and biological evolution all become more complex and rich with time; overall, the evolution is staged (non-periodic), and long-term processes are irreversible. For example, the evolution of the earth's surface hydrosphere, atmosphere and biosphere directly controls the physical-chemical environment of the shallow lithosphere and obviously affects the formation and character of sedimentary rocks and exogenous mineral deposits; hence different stages of geological history produce exogenous or sedimentary deposits of different types and characteristics.
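Returning to the fractal self-similarity discussed above, a minimal box-counting sketch for estimating a fractal dimension from a two-dimensional point set (for example, digitized fault traces or deposit locations) might look as follows; it is a generic illustration under simple assumptions, not a method from the cited references, and the test data are synthetic.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point set.

    points: (n, 2) array of coordinates scaled into the unit square.
    sizes:  box edge lengths, e.g. [1/2, 1/4, ..., 1/128].
    Returns the slope of log N(s) versus log(1/s).
    """
    counts = []
    for s in sizes:
        # Count the boxes of edge length s that contain at least one point.
        occupied = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Synthetic sanity check: points scattered along a straight line segment
# should give a dimension close to 1.
t = np.random.rand(5000)
line = np.c_[t, 0.5 * t]
sizes = [2.0 ** -k for k in range(1, 8)]
print(round(box_counting_dimension(line, sizes), 2))  # ~1.0
```

Applied to real map data, a slope between 1 and 2 would quantify the "part resembles whole" scaling that the text describes qualitatively.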
It has been proved that, unlike a linear system, in which periodic input causes periodic output, the output of a nonlinear system often contains subharmonics of frequency multiplication or division, or even non-periodic chaos [17]. In the earth's natural complex giant system the participating materials are not uniform and the dynamics and processes are nonlinear, so no two geological phenomena, mineral deposits or geological processes are completely the same; even within one type, individuals differ greatly. Under equally strict comparison, any geological process, environment and result (geological phenomenon or mineral deposit) is unique and unrepeatable. In summary, the earth is a complex, relatively independent, giant open dynamical system [2,10].

Research methods suited to the earth's giant complex system
Finding research methods suited to the earth's giant complex system is a major scientific problem. The earth is a typical complex giant system, and complexity science is a new stage in the development of systems science; research ideas and methodology must evolve accordingly. As in other fields of complexity science [4], research on the earth's complex giant system and its methodology is still at an exploratory stage. Qian Xuesen proposed a methodology for dealing with "open complex giant systems", namely the "meta-synthesis from the qualitative to the quantitative" [1]. Regarding the complexity of geoscience and its research objects, Chinese scholars have carried out much useful research and exploration on the earth system, geological systems, basin systems, accumulation (mineralization) processes, geochemical systems and various geological processes [2,3,7,8,10,15~21], promoting the development of nonlinear and complexity research in the earth sciences and related fields. Building on this work, the author has also discussed the thematic connotation and the principles of research ideas and methodology for the complex giant systems of the earth and of basins [2,10]. However, these research ideas and methodologies were proposed while weighing both the characteristics of complex-systems science and its methodological evolution and the practical needs of current research and production; limited by the immature state of complex-systems theory and methods, these more specific research ideas still carry traditional disciplinary thinking and linear methods. Complex systems contain nonlinear problems, and studying them from a purely linear perspective leads to inaccurate or even incorrect results [22]. Exploring and constructing scientific theories and research methods suited to the earth's complex giant system is therefore an international scientific problem that has only begun to be touched and is far from solved.
Its progress will depend on the overall advance of science and technology and on collaborative, multidisciplinary research. The author believes that, as the understanding of complex systems gradually deepens and research develops, the theories and methodologies used to study the earth's complex systems will acquire more complex (nonlinear) features.

"Landslide" refers to the sliding of rock and soil masses on a slope along some interface. If, during their formation and movement, landslides affect human beings, their production or their living environment and cause losses, they are called landslide geological disasters. It should be pointed out that landslide disasters in the broad sense include all processes and phenomena of slope material movement, such as landslides in the narrow sense, collapses and toppling; this paper mainly concerns landslide disasters in the narrow sense (see Fig. 1).

Fig. 1 Jiweishan landslide at Wulong, Chongqing (June 5, 2009)

The most basic model describing a landslide is shown in Fig. 2 and resembles the slider-on-a-slope model of middle-school physics. The slope mass that may slide is simplified as a "slider" on an inclined plane. If the resisting (anti-sliding) force on the slider, friction plus cohesion (fWcosα + CL), is greater than the driving (sliding) force produced by the slider's own weight (Wsinα), the slider is stable and no landslide occurs; once the driving force exceeds the resisting force, the slider moves and a landslide occurs. In engineering geology the ratio of the resisting force to the driving force is called the stability coefficient K. Obviously, if K > 1 the slope is stable; if K < 1 the slope is unstable; if K = 1 the slope is in a limit-equilibrium state.

Fig. 2 Mechanical model describing a landslide

For engineering slopes that require treatment (slope cutting, reinforcement, drainage, etc.), the K value after treatment is usually required to be greater than 1, to ensure a safety margin; this K is then also called the safety factor.
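A minimal numerical sketch of the slider model just described follows; the symbols match the text (W slider weight, α slope angle, f friction coefficient, C cohesion, L length of the sliding surface), while the input values are invented for illustration.

```python
import math

def stability_coefficient(W, alpha_deg, f, C, L):
    """Stability coefficient K = resisting force / driving force for the
    planar slider-on-a-slope model described in the text.

    Resisting force: f*W*cos(alpha) + C*L  (friction plus cohesion on the surface)
    Driving force:   W*sin(alpha)          (downslope component of the weight)
    """
    a = math.radians(alpha_deg)
    resisting = f * W * math.cos(a) + C * L
    driving = W * math.sin(a)
    return resisting / driving

# Invented illustrative numbers (per metre of slope width): W = 5000 kN,
# 30 degree slope, friction coefficient 0.5, cohesion 20 kPa over a 100 m surface.
K = stability_coefficient(W=5000.0, alpha_deg=30.0, f=0.5, C=20.0, L=100.0)
print(f"K = {K:.2f} ->", "stable" if K > 1 else "unstable or at limit equilibrium")
```

With these numbers K is about 1.67, so the hypothetical slope is stable with a comfortable margin; dropping f or C (for example, by wetting of the sliding surface) pushes K toward the limit-equilibrium value of 1.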
In most mountainous areas, slopes are the most basic geological environment for human survival, and also the most hazardous and risk-laden. China is a country where landslide geological disasters occur extremely frequently and disaster losses are extremely heavy, especially in the west. Statistics show that there are currently more than one million geological hazard (and hidden-danger) points in China, with tens of thousands to hundreds of thousands of new occurrences every year, of which more than 34,000 are major hazards; areas of geological-disaster activity account for about 45% of the national land area. The great majority of China's geological disasters are concentrated in the southwest and northwest, where nearly 100 cities and towns, thousands of villages and nearly 10 million people are directly threatened each year. Nearly 1,000 people die from such disasters annually, direct economic losses are about 20 billion yuan, and the indirect losses caused by interrupted traffic and destroyed production and living facilities, above all from severe and extraordinarily large geological disasters, are even harder to estimate. Since the 1980s, mainland China has experienced more than 100 severe or extraordinarily large geological disasters that each caused more than 30 deaths, or economic losses above 10 million yuan, or major social impact; typical cases are listed in Table 1. These major disasters not only brought heavy casualties and property losses but also created serious social and public-security problems, some of which attracted international attention.

Table 1 Large catastrophic landslides and rock avalanches in China since the 20th century
Name | Location | Date | Volume (10⁴ m³) | Slope type | Triggering factors | Notes
Haiyuan earthquake landslides | Haiyuan, Ningxia | 19201216 | — | loess | M8.5 Haiyuan earthquake | 675 large landslides and more than 40 dammed lakes induced; many villages destroyed, about 100,000 deaths
Diexi landslides | Maoxian, Sichuan | 19330825 | 21000 | Triassic shallow-metamorphic rock | M7.5 Diexi earthquake | towns and villages destroyed, 6,800 deaths; dammed lake later breached, causing about 8,000 deaths downstream
Chana landslide | Gonghe, Qinghai | 19430207 | 25000 | Paleogene-Neogene semi-diagenetic lacustrine strata | weathering and freeze-thaw | destroyed Chana Village, 114 deaths
Luquan landslide | Luquan, Yunnan | 19651122 | 39000 | Permian Emeishan basalt and Triassic strata | long-term creep | buried 5 villages including Laoshenduo, 444 deaths
Tanggudong landslide | Yalong River, Sichuan | 19670608 | 6800 | sandy slate | river erosion and long-term creep | blocked the Yalong River for 9 days and nights; dam 335 m high, dam-break flood peak 5.7×10⁴ m³/s
Yanchihe rock collapse | Yichang, Hubei | 19800603 | 150 | near-horizontal layered slope | underground mining | mine destroyed, 284 deaths
Jipazi landslide | Yunyang, Chongqing | 19820718 | 1500 | ancient landslide (layered, fragmented) | heavy rain | Yangtze River channel interrupted for 7 days, economic loss nearly 100 million yuan
Saleshan landslide | Dongxiang, Gansu | 19830307 | 3100 | loess cap over Paleogene-Neogene mudstone | creep and freeze-thaw | 237 deaths
Xintan landslide | Zigui, Hubei | 19850612 | 3000 | ancient landslide and collapse accumulation | accumulated rainfall | population relocated in time, no casualties
Yankou landslide | Yinjiang, Guizhou | 19960718 | 1500 | oblique limestone slope | quarrying at the slope foot | barrier dam 65 m high, forming an 8 km long dammed lake
Qianjiangping landslide | tributary of the Three Gorges Reservoir | 20030713 | 2400 | bedding landslide in sandy mudstone | reservoir impoundment | 14 deaths, losses of 57.35 million yuan
Tiantai landslide | Tiantai Township, Xuanhan, Sichuan | 20040905 | 2500 | gently dipping bedding sand-mudstone slope | rainstorm | 1,255 people relocated; 23 m high landslide dam formed a 20 km long barrier lake, 20,000 people affected
Danba landslide | Danba, Sichuan | 20050301 | 220 | accumulation-layer landslide | long-term creep and human disturbance | houses destroyed, treatment cost 10.66 million yuan; threatened the safety of the county town
Surveys over the past ten years also show that, owing to rapid economic development and ill-considered human activities, the number of man-made geological disasters in China is rising quickly; human activity has overtaken natural factors to become one of the most active causes of geological disasters, and more than 70% of large catastrophic landslide disasters are directly or indirectly related to human activities. In addition, because of unreasonable urban planning and construction, and especially rapid urbanization, a series of large cities in western China (Chongqing, Guiyang, Lanzhou, Xining, etc.), a large number of medium and small cities, and thousands of villages and towns have suffered from geological disasters.

Research on landslide disasters has passed roughly through the following stages. ① From the 1950s to the mid-1960s: human activities were small in scale and construction sites relatively simple, so human impact on the environment was slight and disaster and environmental problems were not prominent. In this period the analysis of landslide hazards relied mainly on the "geological history analysis" method of the former Soviet Union and on the basic theory of soil mechanics; the geological body was treated as a relatively homogeneous continuum for evaluating and predicting landslide hazards. ② From the mid-1960s to the 1970s: first, the Vajont landslide in Italy in the early 1960s made people realize that the occurrence of geological disasters is not a process that can be described simply by limit-equilibrium theory. In China, the rise of "Third Front" construction and the development of large hydropower projects involved complex sites and revealed a series of large landslide geological disasters that are difficult to understand from statics alone, in particular how sliding surfaces form and the mechanism and process of slope deformation and failure. The development of rock mechanics in this period provided a theoretical source for solving the problem: it helped engineering geologists recognize the deformability of rock masses, the time-dependence of deformation, and the controlling role that rock-mass structure can play in deformation and ultimate failure. This opened the era of "geological process mechanism analysis" of the formation and evolution of geological disasters, and a number of representative geomechanical models of slope deformation and failure were proposed. The proposal of "mechanism analysis theory" was a qualitative leap in the epistemology of geological disasters; limited, however, by the theory and research methods of the period, this complex process could not yet be described mechanically and quantitatively, and analysis remained largely qualitative, based on "conceptual models". ③ The 1980s: a period of significant progress in the means and methods of geological-disaster analysis.
On the one hand, with the development of computer technology and advances in modern mechanics and numerical-analysis theory, simulation began to be applied to geological-disaster analysis, especially mechanism analysis: linear elastic and elastoplastic simulations for different media, visco-elasto-plastic simulations accounting for time effects, discrete-element simulations of large deformation and movement processes, and later even whole-process simulations. With these new methods, understanding of geological hazards no longer stopped at the "conceptual model" stage: simulation upgraded "conceptual models" to "theoretical models", revealing from the internal process (mechanism) the development of slope geological hazards and the formation of sliding surfaces, together with the slope-stability state reflected in this process and the information it carries about subsequent change, and providing important theoretical methods and tools for the stability evaluation and prediction of complex slopes. This stage advanced the academic system of "geological process mechanism analysis" to the new stage of "mechanism analysis with quantitative evaluation". Mathematical statistics, fuzzy comprehensive evaluation and grey-system forecasting theory were introduced into the evaluation and prediction of slope instability; all these methods, however, remained essentially within the traditional linear framework. ④ Since the 1990s: another qualitative leap has occurred in the epistemology of landslides and other complex geological processes and disaster systems, marked by the introduction, in the early 1990s, of nonlinear science, an important part of modern systems science, into the study of geological hazards. People now understand the physical composition of complex disaster systems from general systems science and, with the help of nonlinear science, recognize the openness, complexity and nonlinearity of geological-disaster systems, a historic shift from linear to nonlinear thinking. On this view, geological disasters are the result of a series of non-equilibrium instability events in open systems far from equilibrium that produce spatial, temporal, functional and structural self-organization. On this basis some preliminary nonlinear dynamic equations describing the behavior of avalanche-type disasters have been established, and evaluation and prediction theories and models based on catastrophe theory, fractal theory, synergetics, neural-network theory and nonlinear dynamics have been proposed. It can be seen that research on landslide geological disasters has moved, epistemologically, from closed to semi-open and open in the understanding of the system, from deterministic to random and non-deterministic in system behavior, and from linear to nonlinear in system connotation.
On the theoretical side, traditional statics, modern rock mechanics, modern mathematical mechanics and nonlinear scientific theories have been successively introduced into application; in terms of purpose, research has advanced from understanding the mechanism of disasters to evaluating and predicting their occurrence, a hard journey from understanding nature toward transforming nature. Even so, it must be admitted that the evaluation, prediction and early warning of landslide geological disasters have not been fundamentally resolved. Facing the variability of the geological environment, the complexity of disaster mechanisms, and the randomness and suddenness of their temporal and spatial evolution, present theory and understanding are still very limited, and technical means often prove ineffective; the prediction and forecasting of geological disasters remains a worldwide problem, and in China's western regions the difficulties are especially prominent. We therefore have to bear the severe challenges and huge risks this brings, and urgently need major breakthroughs in theory and in technical methods in this field.

The main difficulties in landslide prediction and forecasting at present lie in the following aspects. The first is the construction of prediction models for individual landslides. Dozens of forecasting models have been established, together with more than ten discrimination criteria based on changes in physical quantities such as surface and internal deformation, groundwater, rainfall intensity and acoustic emission. Typical examples include the work of Saito in Japan in the 1970s; the synthesis of forecasting practice at China's Xintan landslide in the 1980s and the research of Zhang Zhuoyuan and Wang Lansheng on rainstorm-induced horizontal-push landslides; the monitoring and forecasting of loess landslides by Wang Gongxian and others in the 1990s; and the studies of typical landslide forecasting models and criteria carried out by Huang Runqiu and Xu Qiang, in combination with the prevention and control of major geological disasters in the Three Gorges reservoir area, since 2000. These works, however, mainly establish theoretical forecasting models from the external signs that appear before a disaster occurs, and implement medium- and long-term predictions. The key to a breakthrough in landslide prediction is how to link organically the geological-mechanical mechanism of landsliding with the occurrence and development of landslide deformation, so as to form scientifically sound landslide prediction models.
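As one concrete example of the single-landslide forecasting models mentioned above: Saito-type forecasts exploit accelerating (tertiary) creep, and a widely used later variant, Fukuzono's inverse-velocity method, extrapolates 1/v to zero to estimate the failure time. The sketch below illustrates that idea only; the monitoring numbers are invented, and real slopes often deviate from the assumed linear 1/v trend.

```python
import numpy as np

# Hypothetical displacement-rate monitoring data (mm/day) during
# accelerating (tertiary) creep; the values are invented for illustration.
t = np.array([0, 5, 10, 15, 20, 25], dtype=float)  # days since monitoring began
v = np.array([2.0, 2.6, 3.6, 5.5, 11.0, 30.0])     # measured velocity, mm/day

# Fukuzono-style forecast: for many accelerating slopes 1/v decreases roughly
# linearly with time; extrapolating the fitted line to 1/v = 0 estimates the
# time of failure.
inv_v = 1.0 / v
slope, intercept = np.polyfit(t, inv_v, 1)
t_failure = -intercept / slope
print(f"estimated failure time: about day {t_failure:.1f}")
```

The approach needs dense, low-noise monitoring and works only once the slope has entered accelerating creep, which is one reason the text calls for coupling such empirical criteria to the underlying geological-mechanical mechanism.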
The second concerns hazard zoning, risk assessment and risk management of regional landslide disasters. With the continuing deepening of basic research on geological disasters and their prevention, a new key direction that closely combines disaster reduction and prevention with engineering construction, the study of hazard zoning, risk assessment and risk management of geological disasters, has attracted more and more attention. In the past 10 years the United Nations has contributed significantly to the understanding of disaster risk: first in the conceptual shift from "disaster reduction" to "disaster risk reduction"; in emphasizing the integration of disaster-risk management into mainstream planning for sustainable development; in proposing two types of disaster-risk management, anticipatory and remedial; and in stressing reform of risk-management mechanisms, that is, the rise of risk management and innovation in comprehensive risk-management technology, in particular the shift from "crisis management" to "risk management" as a key term in the field. The theme of the 32nd International Geological Congress, held in Italy in August 2004, was "From the Mediterranean Area toward a Global Geological Renaissance: Geology, Natural Hazards and Cultural Heritage"; geological-disaster evaluation and risk assessment and management was one of its themes, especially the development and application of GIS in hazard assessment and early warning. The International Conference on Landslide Risk Assessment and Management held in Canada in May 2005 focused on the latest developments in landslide hazard and risk-assessment techniques and in risk management. Although the concept of geological-disaster risk management was put forward long ago, only in the past ten or twenty years has it been applied in practice to risk reduction, with a number of successful examples. The French government attaches great importance to reducing geological-disaster risk: in the 1990s it proposed a new policy for natural-disaster management, risk prevention planning (PPR), which combines land use, public information and compensation for major disaster events, and France has now established a comprehensive regulatory and legislative framework for natural-disaster risk prevention, including preventive information and insurance compensation. In 1997 the Swiss Federal Government issued practice rules for landslide hazard and land-use planning, to guide landslide hazard mapping and the formulation of landslide risk zoning. Hong Kong has established a complete slope-safety management system, the "Slope Safety System"; since its introduction in the 1970s the risk of geological disasters has fallen markedly and losses of life and property have been greatly reduced: the annual probability of death by landslide per person is now about 5×10⁻⁷, roughly ten times lower than in 1976. Japan's geological-disaster risk management has its own characteristics: it puts the relationship between people and geological disasters in a prominent position and attaches great weight to the value of human life; it combines basic research on risk assessment, prediction and forecasting with the cities concerned, and implements management measures accordingly. At present, geological-disaster risk assessment and management has become a focus of disaster reduction and a research hotspot; in particular, the International Consortium on Landslides has pushed geological-disaster risk management to a new climax, and one of the core objectives of the "2006 Tokyo Action Plan" is to promote geological-disaster risk
analysis and decision-making research, to respond to risk on a global scale, and to establish a dynamic, global network of the International Programme on Landslides (IPL); the establishment of this network and its operating system is expected to play an effective role in the risk management of related surface-system hazards, and its implementation is to take account of the comprehensive management of multiple hazards across multiple government departments. The third is landslide monitoring and early warning. The rapid development of modern surveying technology, information technology, computer technology and related fields, especially the integration of 3S technologies, provides advanced technical support for geological-disaster monitoring, information transmission and the dynamic simulation of disasters, and has brought unprecedented opportunities for establishing monitoring and early-warning systems. Since the 1990s technological innovation in geological-disaster monitoring has made great progress, and a series of efficient, applicable, high-precision, automated monitoring means and methods has come into use. The development of optics, electronics, informatics, computer technology and communication technology has brought vitality to the research and development of monitoring instruments: the types of information that can be monitored and the monitoring methods are becoming ever richer; the accuracy of certain methods, the directness of the information collected and the ease of operation have improved; modern communication technology is fully used to raise the speed, accuracy, security and automation of remote monitoring-data transmission; and higher technological content with lower costs lays a foundation for the economical monitoring of geological disasters. Japan, the United States and other developed countries have achieved remarkable results in the monitoring and early warning of sudden geological disasters, having basically realized wireless real-time, automatic and intelligent monitoring and early warning of major hazards; Taiwan has likewise installed automated monitoring and early-warning systems at major landslides and debris flows that may pose potential threats, to give advance warning of major events. In mainland China, because geological disasters are numerous and widespread, the basic work of geological-disaster investigation is still relatively weak and the national inventory of hidden hazard points is not fully known; about 90% of the geological disasters that occur each year fall outside our monitored "line of sight". In response, the Ministry of Land and Resources has established a geological-disaster prevention and early-warning system with Chinese characteristics based on group monitoring and group prevention; beyond this, monitoring technology still needs to be raised to a higher level if it is to play its due role in disaster reduction.

Debris flow is a type of sudden geological disaster characterized by a complex formation mechanism, abrupt outbreak and strong destructive force.
Accurate early warning and prediction of debris-flow disasters require a grasp of the initiation mechanism of debris flows and of the resistance law governing their movement; these are precisely the biggest scientific problems in the current field of debris-flow research. Solving them would help not only the forecasting of debris-flow disasters but also the difficult calculation of flow velocity, thus providing a reliable basis for the design of debris-flow control structures. Debris-flow initiation is divided into two classes, geotechnical and hydraulic [1]. The initiation mechanism of geotechnical debris flows resembles that of landslides: the soil on the slope surface loses stability and begins to move, movement of the lower surface drives the mass as a whole, and the mass gradually passes into a flowing debris-flow state. Geotechnical debris flows mostly occur on steep slopes and are called slope debris flows; sometimes they run into channels of gentler gradient and become channel debris flows. The initiation mechanism of slope debris flows is that rainfall (or snowmelt, etc.) infiltrates to the sliding surface and reduces the shear strength of the soil. How the water infiltrates into the soil, how the shear strength is reduced, and how the mass passes into a flowing state after overall movement are all difficulties and hotspots in current research on slope debris-flow initiation. The initiation mechanism of hydraulic debris flows is that floods scour the deposits in a debris-flow channel, scouring down to the bed and entraining large amounts of sediment to form a debris flow. Hydraulic debris flows are channel debris flows, typically large in scale, long in travel distance and strongly destructive; they are the main type of debris-flow disaster in China. The path from precipitation to channel flood is itself complex, and the hydrological processes of small debris-flow basins (80% of debris-flow basins are smaller than 10 km²) are more complicated still. The formation of a debris flow by channel flooding is affected by the slope and width of the channel, the compactness, shape, grain size and grain-size distribution of the deposits, and the velocity and discharge of the flood; the process is very complicated [2]. The whole chain from rainfall (including snowmelt) to flood formation, initiation of channel deposits and formation of a channel debris flow is at present the hot and difficult topic in research on channel debris-flow formation. The movement velocity of a debris flow is the most important of its parameters and is central to the evaluation and prevention of debris-flow disasters, and the resistance law of debris-flow movement is the key to determining that velocity. Debris-flow resistance consists mainly of basal (including sidewall) frictional resistance and internal resistance. Basal and sidewall resistance depends chiefly on the roughness of the bed and walls; its contribution is larger when the flow is shallow, but its contribution to debris-flow resistance in general is relatively limited.
The internal resistance of debris flows is very complex because the composition of debris fluid is very complex: particle sizes range from ultrafine grains below 1 μm to boulders above 1 m, spanning more than six orders of magnitude; the composition and content of the clay minerals among the fine particles differ from case to case, and with them the properties of the debris fluid; and the sediment volume concentration ranges from 30% to 80%, so large a span that debris fluids of very different concentration behave very differently [3]. The difficulty of studying internal resistance lies in its core problem: the shear resistance is hard to determine. Many models attempt to describe the shear resistance of debris flows, but because of the complexity of their composition no model yet describes it well [4]. Experimental research on shear resistance is also limited by available techniques and progresses slowly: in experiments the coarse particles settle, whereas in actual debris flows quite large particles remain suspended, and if the coarse particles are removed the properties of the remaining fluid differ greatly from those of the real debris fluid [5]. Large-scale model experiments (see Fig. 1) and field observations (see Fig. 2) cannot yield enough data on the internal shear-resistance law, owing to the high sediment content of, and the mud in, debris fluids [6,7]. To solve the velocity problem of debris flows it is therefore necessary to understand their resistance law, and the focus of that work is the law of internal shear resistance, which is a difficulty and a hotspot of debris-flow research.

Fig. 1 Large-scale debris-flow experiment in the United States
Fig. 2 Debris-flow observation at Jiangjiagou, Yunnan, China
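Among the many shear-resistance models mentioned above, one of the simplest treats the debris slurry as a Bingham (yield-stress) fluid, τ = τ_B + η(du/dy). Under the strong assumptions of steady, uniform, laminar sheet flow on a wide slope, the plane Buckingham-Reiner solution gives a closed-form mean velocity, sketched below; all input numbers are invented, and real debris flows routinely violate these assumptions.

```python
import math

def bingham_sheet_velocity(h, theta_deg, rho, tau_B, eta, g=9.81):
    """Mean velocity (m/s) of steady, uniform, laminar sheet flow of a
    Bingham fluid of depth h (m) on a slope of angle theta (degrees).

    rho: bulk density (kg/m^3); tau_B: yield stress (Pa);
    eta: plastic viscosity (Pa*s).
    """
    tau_bed = rho * g * h * math.sin(math.radians(theta_deg))  # bed shear stress
    if tau_bed <= tau_B:
        return 0.0  # bed shear below the yield stress: the material does not flow
    xi = tau_B / tau_bed
    # Plane Buckingham-Reiner result: V = tau_bed*h/(6*eta) * (1-xi)^2 * (2+xi);
    # with xi = 0 this reduces to the Newtonian sheet-flow value tau_bed*h/(3*eta).
    return tau_bed * h / (6.0 * eta) * (1.0 - xi) ** 2 * (2.0 + xi)

# Invented illustrative numbers: 1 m deep slurry on a 3 degree channel,
# density 2000 kg/m^3, yield stress 500 Pa, plastic viscosity 100 Pa*s.
print(round(bingham_sheet_velocity(1.0, 3.0, 2000.0, 500.0, 100.0), 2), "m/s")
```

The example returns about 1.1 m/s; the point of the sketch is how strongly the result depends on τ_B and η, the two quantities the text identifies as hard to measure for real debris fluids.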
With the deepening of China's resource and energy development and infrastructure construction, more and more deeply buried trans-mountain railway and highway tunnels, underground hydropower stations and mine tunnels are being, or will be, built in the western region. Under the deep-burial conditions of the west, large-scale underground excavation or mining often encounters strong dynamic phenomena and disasters associated with high ground stress, high fluid pressure or high gas pressure, such as rockburst, large deformation, high-pressure water inrush, and coal and gas outburst (Table 1). Many cases at home and abroad show how severely such construction-stage geological disasters can set back underground projects. The Dayaoshan Tunnel [1] on the Heng-Guang railway line suddenly gushed water and sand in its middle section, with a maximum discharge of 50,000 t/d; the tunnel was submerged and construction was forced to stop. The Yuanliangshan Tunnel [2] on the Yuhuai railway suffered a rare gushing of water and mud in karst strata, causing major economic losses and casualties. In Line II of the Qinling Tunnel [3] the cumulative rockburst sections reached 7.2 km, 40% of the total tunnel length, completely exceeding the earlier prediction that "rockbursts may occur in the deeply buried trans-mountain sections"; strong and widespread rockbursts also occurred in sections buried only a little over 100 m deep. Rockburst in the auxiliary caverns of the Jinping II Hydropower Station was severe: by statistics, cavern A experienced 2,447 m of slight, 508 m of moderate and 304.5 m of strong rockburst, and cavern B 1,976.2 m of slight, 691 m of moderate, 254 m of strong and 36 m of extremely strong rockburst, posing a serious threat to construction personnel and equipment [4]; water-inrush hazards there were also especially prominent, with water pressures up to 10 MPa, seriously hindering construction. In the Jiazhuqing Tunnel [5], coal seams were cut through 78 times at the working faces during construction; because of high gas content and high gas pressure, high-pressure gas often shot out of boreholes together with water vapor, and gas concentrations severely exceeded permitted limits many times.

Table 1 Geological hazards during construction of some long, complex tunnels in Southwest China

Advance prediction of geological hazards in underground engineering and underground mining has thus become one of the most important factors restricting underground construction. Advance geological prediction is a key topic in the prevention and control of geological disasters during underground construction, and is of great significance for construction, safe operation and cost saving. During construction, the advance forecasting of geological conditions and of possible geological disasters ahead of the tunnel face can be decisive for normal construction and smooth holing-through: successful prediction prompts timely countermeasures that forestall trouble, whereas without it builders are often helpless in the face of sudden construction-stage geological disasters, and the work suffers major setbacks. After decades of development, advance geological prediction has evolved from a stage of purely geological analysis to a stage of comprehensive prediction combining geological analysis with geophysical exploration. Chinese research on advance geological prediction began in the late 1950s [6] and was really applied to tunnel and underground construction in the early 1970s. Domestic universities and research institutes have done much useful work on the research and application of advance-prediction technology, proposing new methods and improving technical equipment, with results successfully applied in engineering construction, such as the land sonar method proposed by Zhong Shihang [7]. Over the past decade, construction-stage geological hazards in deep tunnels and underground projects have attracted great attention from the academic and engineering communities.
Many scholars have carried out fruitful research: for example, Huang Runqiu et al. studied formation mechanisms in depth [10], and Li Tianbin et al. discussed the advance prediction of hazards such as rockburst and large deformation [11]. However, because of the complexity of geological hazards in underground construction, there is still a lack of simple, effective advance-prediction methods for rockburst and large deformation under high ground stress, water inrush under high fluid pressure, and gas outburst under high gas pressure. For rockburst and large deformation there are as yet no effective instruments for on-site detection, and prediction accuracy needs improvement; on-site advance prediction of coal and gas outburst relies mainly on advance horizontal drilling, which is time-consuming and laborious; there are many methods for calculating water inflow at the construction stage [8,9], but their accuracy is often insufficient, and the calculated inflow sometimes differs enormously from the actual inflow; and although some construction-stage geophysical methods can detect karst caves and groundwater ahead of the face, their accuracy is strongly affected by the detection environment, the interpretation methods are not yet mature, there are no simple interpretation markers, and interpretations are non-unique. Thus, although scientists and engineers worldwide have made long and unremitting explorations of underground-engineering geological disasters in theory and in technical methods, how to predict these disasters in advance during construction and to prevent and control them, so as to reduce casualties and the impact on construction progress, remains an unsolved worldwide problem.
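For scale, and only as a first-order illustration of the construction-stage water-inflow estimates mentioned above (the specific methods of refs [8,9] are not reproduced here), the classical Goodman approximation for steady inflow to a deep circular tunnel below the water table can be sketched as follows; the parameter values are invented, and such formulas can easily be off by an order of magnitude in fractured or karstic rock.

```python
import math

def goodman_inflow_per_metre(k, H, r):
    """First-order steady-state inflow estimate (m^3/s per metre of tunnel)
    for a circular tunnel of radius r (m) whose axis lies a depth H (m)
    below the water table, in rock of hydraulic conductivity k (m/s).

    Classical approximation Q = 2*pi*k*H / ln(2H/r), valid only for H >> r
    and homogeneous rock.
    """
    if H <= r:
        raise ValueError("formula assumes the tunnel lies well below the water table (H >> r)")
    return 2.0 * math.pi * k * H / math.log(2.0 * H / r)

# Invented illustrative numbers: k = 1e-6 m/s, water table 100 m above the axis,
# tunnel radius 5 m.
q = goodman_inflow_per_metre(1e-6, 100.0, 5.0)
print(f"{q * 86400:.1f} m^3/day per metre of tunnel")  # seconds -> days
```

The example gives roughly 15 m³/day per metre of tunnel; comparing such estimates with observed discharges like the 50,000 t/d at Dayaoshan shows why purely analytic inflow calculations must be backed by advance geophysical detection.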
precursor indicators, comprehensive integrated prediction indices, and an intelligent method system for predicting rockburst and large deformation. For gas outburst, it is necessary to deepen understanding of the factors controlling gas storage and enrichment in coal seams and of the mechanism of coal and gas outburst in tunnels, determine simple and practical indicators and criteria for discriminating gas outburst risk, and establish a technical method system for advance gas detection, comprehensive advance prediction, and gas monitoring and early warning. For water inrush, it is necessary to further study the formation mechanisms and models of water inrush in tunnels and underground works under complex geological conditions, establish water hazard assessment methods, develop advance detection equipment sensitive to water bodies, establish interpretation criteria for advance geophysical prediction of water inrush, and build a comprehensive advance prediction method system for water inrush. Through such research, combined with continual supplementation, revision, and improvement in engineering practice, the technical problem of advance prediction of geological hazards in underground engineering should gradually be solved, bringing great economic and social benefits to engineering construction.

With the rapid development of China's nuclear power industry, the disposal of high-level radioactive waste has become a major safety and environmental protection issue: specifically, how to finally dispose of the high-level waste generated by the reprocessing of nuclear power plant spent fuel and by the development of nuclear weapons. High-level radioactive waste refers mainly to the high-level liquid waste produced by spent fuel reprocessing and its vitrified form; for countries implementing a "once-through" policy of direct spent fuel disposal, it also includes the spent fuel itself. High-level radioactive waste is a special waste posing a huge potential threat to the environment. The neptunium, plutonium, americium, technetium, and other radionuclides it contains are strongly radioactive, highly toxic, long-lived, and heat-generating; once they enter the human living environment they cause great harm and are difficult to remove. The safe disposal of high-level radioactive waste has therefore become a major issue bearing on the sustainable development of nuclear energy, environmental protection, and the well-being of future generations, and it is also a major scientific and technological problem. China's high-level radioactive waste comes mainly from pressurized water reactor nuclear power plants, national defense nuclear facilities, CANDU (Canadian Deuterium Uranium) reactors, and high-temperature gas-cooled reactors that may be built in the future.
Reprocessing the spent fuel of pressurized water reactors produces high-level vitrified waste, high-level solid waste, and α waste; the production activities of national defense nuclear facilities and the treatment and decommissioning of military nuclear facilities also produce high-level vitrified waste, high-level solid waste, and α waste. There is currently no policy on the spent fuel of CANDU reactors or of the high-temperature gas-cooled reactors that may be built in the future. In addition, reprocessing the spent fuel of research reactors and nuclear submarines also yields high-level waste, though in relatively small amounts, and long-lived intermediate-level waste and high-risk radioactive sources likewise require deep geological disposal. For the final disposal of high-level radioactive waste, schemes such as "space disposal", "deep trench disposal", "ice sheet disposal", "rock melting disposal", and "deep borehole disposal" have been proposed. After years of research, the generally accepted feasible solution is deep geological disposal: burying the waste in a geological body roughly 300-1000 m below the surface to isolate it permanently from the human living environment. The underground facility in which high-level radioactive waste is buried is called a high-level radioactive waste repository, and such repositories generally adopt a "multiple barrier system" design. Different countries, according to their geological conditions, have selected different lithologies as the natural barrier. The selected site must meet siting requirements for regional tectonic and engineering-geological stability; good adsorption capacity and slow groundwater flow are the most basic requirements for the natural barrier. For the engineered barriers, not only their mechanical strength but also their chemical and thermal stability and radiation resistance must be considered. Because high-level waste contains strongly radioactive, highly toxic, long-lived radionuclides such as neptunium, plutonium, americium, and technetium, its disposal in deep geological formations is extremely difficult. How to reliably isolate the waste from the human living environment, how to convince the public that disposal safety can be guaranteed, and how to persuade residents near a proposed repository site to accept its construction have all become difficult issues in the disposal process. Moreover, the whole undertaking has never been attempted before, so practical engineering experience is lacking. The deep geological disposal of high-level radioactive waste is thus an extremely complicated systems project, one that is long-term, complex, arduous, comprehensive, and exploratory, as shown in the following respects. ① Research and development is difficult. Building a geological repository for high-level radioactive waste faces a series of major scientific, technical, and engineering problems, including how to select a qualified site, how to evaluate a site's suitability, how to choose engineered barrier materials for isolating the waste, how to design and construct the repository, and how to evaluate the safety performance of the disposal system on a time scale of more than 10,000 years.
These problems involve cutting-edge interdisciplinary scientific issues spanning geology, hydrogeology, radiochemistry, rock mechanics, engineering science, materials science, mineralogy, thermodynamics, nuclear physics, radiation protection, computer science, social science, economics, and more. Moreover, developing a repository is a long-term, systematic, multidisciplinary joint research process that generally passes through stages of basic research and site selection, underground laboratory research, and repository construction. ② The safety evaluation period is extremely long. The internationally recognized safety evaluation period is about 10,000 years (the United States now requires an even longer one) [1]. The geological repository for high-level radioactive waste is the engineering project with the longest required safety evaluation period in the world so far; there is no predecessor experience to draw on, and the work is highly exploratory. The extremely long assessment period adds great uncertainty to predictions of astronomical, geological, and human-environment changes over that span of time. ③ The research and development cycle is very long. Judging from current international experience, it generally takes about 50 years from pre-selection of a repository site to completion of the repository [2~5]. For example, the United States proposed the idea of geological disposal of high-level radioactive waste in 1957 and began research and technology development; the repository was expected to be completed in 2010 (recently, owing to approval problems, this is estimated to slip to 2018), a span of 54 years. Finland began in 1976, and its repository is to be completed in 2020, 45 years later, which shows the long-term nature of the work. ④ The investment in research and development is large. The amount depends on each country's specific circumstances. For the Yucca Mountain repository in the United States, for example, the total life-cycle budget from site selection through construction and operation is 57.5 billion US dollars (recently raised to 96.2 billion), of which 7.1 billion has been spent so far. Therefore, the research and development of geological disposal must consider not only the stability, nuclear safety, and technical feasibility of the disposal project but also cost-benefit analysis, in order to achieve reasonable economic results. In addition, the public is deeply concerned about the safe disposal of high-level radioactive waste, and the success or failure of public acceptance work largely determines the success or failure of a disposal project; under the influence of the public, local governments, ethics, and politics, original plans are sometimes postponed or even cancelled. The issue of safe disposal of high-level radioactive waste has attracted great attention from the countries concerned and from international organizations, especially countries such as the United States, France, Sweden, and Japan, which use policies, regulations, institutions, funding, and scientific research to ensure the safe disposal of high-level radioactive waste.
Attaching great importance to the problem at the national level, improving the management system, and establishing an implementing agency are the most important means by which nuclear countries address high-level radioactive waste. Because its safe disposal is so difficult, requires such long-term research and development, and demands such large capital investment, the international community generally holds that the safe disposal of high-level radioactive waste is an act of the state rather than of individual enterprises, and that the government bears ultimate responsibility for it. The high level of national attention is reflected in the mobilization of extensive legislative, administrative, and judicial power to manage high-level radioactive waste and to drive the major decisions on its safe disposal. For example, the United States enacted the Nuclear Waste Policy Act Amendment in 1987, designating Yucca Mountain for evaluation as a high-level waste disposal site, and France enacted a law in 1991 requiring research on the geological disposal of high-level radioactive waste. For implementation, the Western nuclear countries have without exception established implementing agencies for geological disposal of high-level radioactive waste, which concretely carry out the work, including site selection, engineering design, safety assessment, and research and development. By law, their funds come from special government funds and from nuclear electricity charges (generally about 1% of nuclear power plant electricity revenue is collected for the high-level waste disposal fund); research and development generally accounts for 10% to 15% of the total investment in geological disposal. Working strictly according to law, with national attention, financial support, and dedicated implementing agencies, the Western nuclear countries have made decades of unremitting efforts in research, development, and scientific and technological breakthroughs, and have achieved several milestones. For example, the United States approved the Yucca Mountain site in Nevada in 2002, with the repository expected to be completed in 2018; Finland approved the Olkiluoto site in 2001, with the repository expected in 2020; and Sweden approved the Forsmark site in 2009, with the repository also expected in 2020. These major advances have played a vital role in promoting research on the safe disposal of high-level radioactive waste, promoting the sustainable development of nuclear energy, and supporting the revival of global nuclear power. China's nuclear military facilities have produced a certain amount of high-level liquid waste; after decades of temporary storage the environmental risk is growing, and its vitrification and final geological disposal are imminent. The 11 nuclear power units currently operating in China produce about 370 t of spent fuel per year.
According to the Nuclear Power Medium- and Long-Term Development Plan (2005-2020) approved by the State Council, the reactors built before 2020, together with the 18 reactors then under construction, will eventually produce 80,000 t of spent fuel (comparable to the total spent fuel accumulated in the United States by 2030). If the scale of China's nuclear power reaches 100 GW, the total spent fuel produced by all these plants will reach 140,000 t. These high-level wastes from the nuclear military industry and from nuclear power plants are increasing year by year, and the environmental risks with them, so safe disposal must be carried out as early as possible; this is dictated by the urgent need to safeguard China's environment, the interests of future generations, and the sustainable development of the nuclear industry. Research on geological disposal of high-level radioactive waste in China began in the mid-1980s and has made a certain degree of progress over the past 20 years. The Nuclear Industry Beijing Institute of Geology, a subsidiary of the China National Nuclear Corporation, has carried out research on site pre-selection for the high-level waste repository, and preliminary research has also been carried out on safety evaluation and related follow-up topics. Generally speaking, however, this research and development work is still at an early stage, and there is a long way to go before the staged tasks of geological disposal are completed. In 2003 China promulgated the Law of the People's Republic of China on the Prevention and Control of Radioactive Pollution, which stipulates that "high-level radioactive waste shall be disposed of in deep geological formations"; in 2007 the State Council approved the Nuclear Power Medium- and Long-Term Development Plan (2005-2020), which set the goal of "building an underground laboratory for high-level radioactive waste disposal in China by 2020", taking the geological disposal of high-level radioactive waste in China a step further.

A reservoir-induced earthquake (Reservoir-Induced Seismicity, RIS) is an earthquake caused by changes in reservoir water level. It differs from natural earthquakes and is a geological hazard associated with human reservoir-building activity; it is called a reservoir earthquake for short [1~3]. At present more than 140 cases of reservoir-induced earthquakes are known worldwide [4], among which four were strong earthquakes with magnitude greater than MS6.0: the 1962 Xinfengjiang Reservoir earthquake in China (MS6.1), the 1963 Kariba Reservoir earthquake in Zambia (MS6.1), the 1966 Kremasta Reservoir earthquake in Greece (MS6.3), and the 1967 Koyna Reservoir earthquake in India (MS6.5). These strong earthquakes caused casualties, damage to surface structures, and property losses to varying degrees. For reservoir-induced earthquakes, even weak or micro-earthquakes of magnitude 1~3 can produce epicentral intensities of Ⅴ to Ⅵ because their foci are very shallow, causing walls to crack and tiles to fall; and because they are often accompanied by violent shaking and a roaring of the mountains, they frequently cause panic among local residents. Research on the mechanism of reservoir-induced earthquakes proceeds from the characteristics of water as the inducing factor.
As a liquid, water has weight and various physical and chemical properties such as near-incompressibility, solvent power, and fluidity. When a reservoir is impounded, the change in water level creates a potential energy difference, and the water loads, unloads, and infiltrates the crustal rocks through pores and fissures. Acting as additional stress, as pore pressure, and as a chemical agent, it changes the stress state of the rock and the mechanical properties of fault gouge and fracture surfaces, causing faults to pass from a stable to an unstable state, the brittle crust to deform, and earthquakes to occur. The process can be described simply by the Coulomb shear failure criterion [5, 6]. The stress state of the crustal rock is determined by the maximum principal stress σ1 and the minimum principal stress σ3, which define a Mohr circle of radius (σ1 − σ3)/2 centered at (σ1 + σ3)/2 on the normal-stress axis. The shear strength of the fault plane is expressed as

τ = τ0 + μ(Sn − P)

where τ0 is the inherent shear strength (cohesion) of the medium, μ is the friction coefficient of the fracture, Sn is the normal stress on the fracture surface, and P is the pore pressure. When the Mohr circle does not intersect the shear strength line of the fault plane, the rock is stable; when the two intersect, the rock is unstable. Loading or unloading by the water can increase the maximum principal stress σ1 or decrease the minimum principal stress σ3, enlarging the Mohr circle or shifting it to the left, which may bring it into contact with the failure envelope τ = τ0 + μ(Sn − P). Water infiltrating along fractures can reduce τ0, reduce the friction coefficient μ, reduce the normal stress Sn, or increase the pore pressure P, all of which shift the failure envelope or change its slope so that the Mohr circle intersects it, eventually causing rock instability and earthquakes.

Figure 1 Schematic diagram of the change in the fault instability criterion before and after impoundment

During changes in reservoir water level, the water load and pore pressure drive the diffusion of pore-water pressure, and the hydraulic diffusion coefficient can be estimated from the expansion of the epicentral area and the time lag of earthquake occurrence. By the time relationship between impoundment and earthquake generation, reservoir earthquakes fall into two types, quick-response and delayed-response: the former results from the elastic stress of the water load compressing the pores of rock at the reservoir bottom, that is, "consolidation" raising the pore pressure; the latter results from the diffusion of pore-water pressure. Studies indicate that the factors governing reservoir-induced earthquakes include reservoir scale, lithological conditions, structural conditions, seepage conditions, stress state, and seismic background [7], from which the location and upper-limit magnitude of reservoir-induced earthquakes are estimated.
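To make the geometry of this criterion concrete, the following minimal sketch (my own illustration; the function name and the MPa values are invented for the example) tests whether the Mohr circle touches the Coulomb envelope for given principal stresses, cohesion, friction coefficient, and pore pressure:

```python
import math

def coulomb_unstable(sigma1, sigma3, tau0, mu, pore_pressure):
    """Return True if the Mohr circle touches or crosses the Coulomb envelope
    tau = tau0 + mu * (Sn - P), i.e. the rock is in an unstable state."""
    radius = (sigma1 - sigma3) / 2.0           # Mohr circle radius
    center = (sigma1 + sigma3) / 2.0           # circle center on the Sn axis
    # Distance from the center (center, 0) to the envelope line, written as
    # mu*Sn - tau - (mu*P + tau0) = 0.
    dist = abs(mu * (center - pore_pressure) - tau0) / math.sqrt(mu * mu + 1.0)
    return dist <= radius                      # intersection means potential slip

# Hypothetical stresses in MPa: the same stress state is stable when dry but
# fails once impoundment raises the pore pressure.
print(coulomb_unstable(60.0, 30.0, 5.0, 0.6, 0.0))    # False: stable fault
print(coulomb_unstable(60.0, 30.0, 5.0, 0.6, 20.0))   # True: unstable fault
```

Raising P while the total stresses stay fixed shrinks the effective normal stress Sn − P, which is the essence of impoundment-triggered instability described above.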
Statistics show that when dam height exceeds 100 m and storage capacity exceeds 5 billion m³, the likelihood of reservoir-induced earthquakes is greatly enhanced. Reservoir earthquakes can occur along existing fault planes, large cleavage planes, or fracture planes; regional tectonic stress, gravity, and lateral compressive stress from mountains may supply the initial stress; and lithology and seepage conditions are the most important controlling factors. Carbonate and granite formations are prone to reservoir-induced earthquakes, and karstified formations are the most prone of all, because carbonate strata contain numerous karst caves and granite strata contain numerous fractures, which serve both as good channels for seepage and as mechanically weak surfaces. Impoundment changes the hydrogeological conditions of the reservoir area, destabilizing the local crust; the brittle crust releases its elastic energy, triggering reservoir-induced earthquakes. Prediction methods for reservoir-induced earthquakes fall into two main categories [8]. The first predicts, from the distribution and scale of geological structures in the reservoir area, the possibility of induced earthquakes after impoundment using the seismotectonic method, the stratigraphic analogy method, and the structural probability method; taking full account of the post-impoundment reservoir scale and of lithological, structural, seepage, stress-state, and seismic-background conditions, the seismogenic sections and upper-limit magnitudes of induced earthquakes are calculated. The first two methods are deterministic, while the last is probabilistic. Methods of this category apply mainly to hazard assessment of reservoir-induced earthquakes before and after reservoir construction (a toy screening sketch in this spirit follows below). The second category uses a dense seismic network to monitor the process of reservoir-induced seismicity and predict its development trend. Statistical analysis of earthquake cases shows that reservoir-induced sequences are of two types, foreshock-mainshock-aftershock and foreshock-swarm-aftershock, indicating that the process differs from that of natural earthquakes: foreshock activity gradually links smaller fault surfaces into larger ones, with an evident process of gestation, occurrence, and development. By deploying a dense mobile seismic network in the reservoir area before and after impoundment, the characteristics and evolution of the induced seismicity in time, space, intensity, sequence, and focal mechanism can be studied and its cause and development trend judged, providing a scientific basis for the safe operation of the reservoir and for government countermeasures to mitigate disasters caused by reservoir-induced earthquakes.
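As the crude first-pass illustration promised above (a toy example of my own, not a method from the cited literature), the empirical factors just listed can be combined into a qualitative screening flag for whether detailed induced-seismicity study is warranted:

```python
# Toy screening based on the statistics quoted in the text: dam height,
# storage capacity, karstification, and pre-existing faults. All thresholds
# and the scoring scheme are illustrative assumptions.

def ris_screening(dam_height_m, storage_m3, karstified, faulted):
    """Return a rough qualitative flag for reservoir-induced seismicity risk."""
    score = 0
    if dam_height_m > 100:      # high dams load the crust more heavily
        score += 1
    if storage_m3 > 5e9:        # large reservoirs raise pore pressure over wide areas
        score += 1
    if karstified:              # karst conduits are efficient seepage channels
        score += 1
    if faulted:                 # existing faults provide ready slip surfaces
        score += 1
    return ["low", "low", "moderate", "high", "high"][score]

print(ris_screening(dam_height_m=150, storage_m3=6e9,
                    karstified=True, faulted=True))   # -> "high"
```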
Land subsidence is a vertical deformation phenomenon in which the ground elevation slowly decreases under the influence of various factors; in severe cases it becomes a geological disaster. Human activities and geological processes are its main causes, among which the unreasonable exploitation of groundwater is the most important: the major land subsidence areas in China and worldwide are caused by the overexploitation of groundwater. Subsidence in some areas is also related to oil and gas extraction, geothermal utilization (as at Yangbajing in Tibet), dense high-rise construction (as in Shanghai), crustal tectonic activity (as at Xi'an), natural compaction of underconsolidated soil layers (as at Tianjin), and so on. Subsiding areas are generally extensive and the process is slow, so subsidence is not easily detected at an early stage and does not readily attract attention. It occurs mostly in large and medium-sized cities, greatly affecting production, daily life, and transportation and causing great losses and harm; it has become a serious environmental geological problem that affects and restricts the sustainable development of the local economy. Land subsidence was first recorded in Mexico City in 1891, but because the amount of subsidence was then small and the damage not obvious, it received little attention. Subsequently, land subsidence has occurred in more than 50 countries and regions; representative cases include Long Beach in the United States; the Koto district of Tokyo, Kanagawa Prefecture, and Niigata City in Japan; Mexico City in Mexico; Bangkok in Thailand; London in England; Moscow in Russia; the Po River delta in Italy; the northern coast of Germany; The Hague in the Netherlands; Hanoi in Vietnam; and Jakarta in Indonesia. Since land subsidence was first discovered in Shanghai in 1921, more than 90 cities and regions in China have experienced it to varying degrees. The Yangtze River Delta, represented by Shanghai, and the Bohai Rim, represented by Tianjin, have formed two major regional subsidence zones, and in a few areas of Tanggu and Hangu in Tianjin the ground elevation has subsided below mean sea level [1, 2]. Other areas, such as the Fen and Wei river valleys, the southeast coastal plain, the Songnen Plain, and the lower Liaohe Plain, have also experienced serious land subsidence. Another problem worth noting is that some places suffer a related geological hazard, ground fissures, distributed across 17 provinces and municipalities including Shaanxi, Hebei, Jiangsu, Shandong, and Henan, at roughly 1,000 sites with more than 6,000 individual fissures. In some areas, such as Xi'an, Datong, and Wuxi, land subsidence and ground fissures often appear together, causing still greater harm.
The harm caused by land subsidence is manifold, mainly: ① loss of ground elevation, aggravated flooding, and reduced effectiveness of flood control and drainage works; ② damage to building foundations from subsidence, especially differential settlement, seriously affecting the normal use and service life of buildings and the normal operation of subways, tunnels, bridges, expressways, high-speed railways, elevated roads, urban water and gas supply and other underground pipe networks, the West-East Gas Pipeline, and high-rise buildings; ③ reduced navigation capacity of waterways; ④ scrapping of water well facilities; ⑤ loss of validity of ground survey benchmarks. According to estimates by the relevant departments, every 1 mm of land subsidence in Shanghai causes an economic loss of 10 million yuan, and the more economically developed the area, the greater the loss caused by subsidence. The underground rock (or soil) layers bear the load of the self-weight of the overlying strata. In Terzaghi's view, this load is borne jointly by the solid particles that make up the soil layer and by the water (or oil and gas) in the pores between particles. In the natural state, the overlying load is balanced by the effective stress between particles and the water pressure (or oil-gas pressure). If water is pumped from the soil layer, the head drops by ΔH and the corresponding water pressure decreases by ρgΔH, but the overlying load is unchanged, so the effective stress acting on the skeleton of solid particles increases by ρgΔH (where H is head, ρ is the density of water, and g is the acceleration due to gravity). An increase in the effective stress on the skeleton necessarily compresses the soil layer and expels water from the formation, much as squeezing a water-soaked sponge by hand; the ground subsides as a result (a numerical illustration follows the list of water-level modes below). Knowing this principle, it is easy to see that different soil layers have different properties and deform differently under the overlying load. Laboratory experiments and field observations show that the deformation of a soil layer depends not only on its properties but also on the history of groundwater level changes it has experienced, because changes in groundwater level essentially reflect the change process of effective stress, which controls the deformation of the layer. Hence the pattern of groundwater level change in a region governs the deformation of its aquifer system. For example, water level changes in the Shanghai area can be summarized into the following five modes: ① after rising from a low value to a certain height, the water level rises and falls repeatedly within a certain range, and the soil layer undergoes repeated loading and unloading,
exhibiting the characteristics of elastic deformation; ② if the water level keeps declining from cycle to cycle, falling below the lowest level ever reached, the soil layer undergoes repeated loading and unloading in which the stress added by each loading exceeds the stress removed by unloading, so the effective stress keeps increasing and the layer shows not only residual deformation but creep as well; the deformation is viscoelastic-plastic; ③ after the water level falls from a high value to a low one, it fluctuates within a small range and its mean remains basically unchanged or rises slightly; the effective stress on the soil stays basically constant, the soil creeps, and deformation continues; ④ the water level keeps declining from cycle to cycle but remains above the historical minimum; the deformation of the soil layer is almost synchronous with the water level change, creep is negligible, and a certain degree of plasticity appears; ⑤ the water level gradually recovers and rises overall; the effective stress decreases and the deformation is close to elastic. Under different modes of water level change, not only do water-bearing sand layers deform differently, the aquitards deform differently as well. It is important to note that the same aquifer may experience different water level change modes at different times. Taking the fourth confined aquifer of Shanghai as an example, in the early period (from 1970 to the mid-1990s) mode ② prevailed; since 1998 the decline in water level has been contained, the level has basically stabilized and risen slightly, and mode ③ has appeared. From the 1950s to the 1990s, the second confined aquifer of Shanghai experienced modes ⑤, ①, and ④ in succession. Clearly, the water level change mode in an area shifts with local pumping and recharge, so different deformation characteristics appear. Across the whole country, pumping and recharge conditions are even more varied and ever-changing, so the deformation of soil layers everywhere must be extremely complicated. In summary, because soils and soil layers experience different patterns of groundwater level change, the deformation characteristics of soil layers in regional land subsidence differ markedly. Not only do different layers deform differently, the same layer also deforms differently in different parts of the drawdown funnel: elastic deformation appears at the funnel's edge and viscoelastic-plastic deformation at its center, with a possible transition type in between. More importantly, the same layer at the same site exhibits different deformation characteristics at different stages of settlement because of different water level change modes; for example, around 1991 the fourth confined aquifer at the layered benchmark of the Shangmian No. 17 factory, at the center of Shanghai's drawdown funnel, showed evidence of both elastic and viscoelastic deformation.
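As promised above, here is a minimal numerical sketch (my own illustration; the layer thickness and compressibility values are invented for the example) of how a head drop translates into an effective stress increase and one-dimensional compression:

```python
# Terzaghi-style back-of-envelope: a head drop of dH lowers pore pressure by
# rho*g*dH and, with the total overburden load unchanged, transfers exactly
# that much stress onto the soil skeleton.

RHO_W = 1000.0   # density of water, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def effective_stress_increase(head_drop_m):
    """Increase in effective stress (Pa) for a given head drop (m)."""
    return RHO_W * G * head_drop_m

def settlement_estimate(head_drop_m, layer_thickness_m, mv):
    """One-dimensional compression: s = mv * d_sigma' * H0.
    mv is the coefficient of volume compressibility (1/Pa), assumed constant
    here, although the text stresses that it really varies with stress history."""
    return mv * effective_stress_increase(head_drop_m) * layer_thickness_m

# Example: a 10 m head drop in a 40 m clayey layer with mv = 5e-8 1/Pa.
print(effective_stress_increase(10.0))        # 98100 Pa, about 0.1 MPa
print(settlement_estimate(10.0, 40.0, 5e-8))  # about 0.196 m of settlement
```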
An area often has many soil layers in the vertical direction: as few as four or five, totaling nearly 100 m in thickness, or as many as twenty or so, or even more, totaling hundreds of meters to nearly a thousand meters. Not only do the layers differ in properties, they also alternate between aquifers and aquitards, and a single layer is often itself heterogeneous and anisotropic, producing different deformation characteristics. Pumping and recharge conditions differ as well, so under different modes of water level change, varied and complex deformation characteristics develop. As research has deepened, some earlier understandings have gradually been revised. For example, the deformation of water-bearing sand layers was once considered elastic; later, abundant layered-benchmark observations and laboratory tests confirmed that the deformation of some sand aquifers is not only nonlinear but also shows obvious lag. That is, under one water level change mode a sand aquifer deforms elastically, while under another it deforms viscoelastic-plastically; the same is true of aquitards. A model for predicting regional land subsidence is synthesized by coupling a groundwater flow model with a subsidence model. Since horizontal deformation in a subsidence area is very limited, the subsidence model usually takes a vertical one-dimensional form, while the flow model is three-dimensional with variable coefficients, heterogeneous and anisotropic. Among the types of deformation, elastic, elastoplastic, and viscoelastic deformation have been treated by earlier workers; although viscoelastic-plastic deformation has been studied extensively in theory, applying it to land subsidence still poses many difficulties. In recent years Chinese researchers have developed a model with few parameters and a rigorous theoretical basis, the modified Merchant model. It transforms the original Merchant model, which can describe only viscoelastic deformation, so that it can also describe instantaneous elastic and instantaneous plastic deformation as well as viscoelastic and viscoplastic deformation, meeting the simulation requirements: it describes viscoelastic-plastic deformation, and several other models are special cases of it. By changing the parameter settings, settlement models for different deformation behaviors are obtained; and because few parameters are involved, it is suitable for simulating land subsidence over large areas [3]. In the same way, the flow model corresponding to the modified Merchant model can be derived, and the flow models under the various constitutive relations obtained by earlier workers can also be recovered by changing the parameter settings [3]. Given the flow model and the subsidence model, how to couple the two into a complete regional land subsidence model became an urgent problem. The original two-step method has no real coupling and cannot reflect the change of parameters during settlement, while the fully coupled Biot model has too many parameters to be applied to practical problems. The newly proposed approach transforms the two-step method into a coupled two-step model.
Through the relationships between permeability coefficient and void ratio, and among the volume compressibility of the soil, effective stress, and void ratio, the changes of permeability coefficient and specific storage with settlement are reflected, bringing the coupling of the flow model and the settlement model close to coupling in the true sense. The whole coupling process is completed by continual iteration, gradually reducing the error between calculated and observed values [3], thereby capturing the continuous change of permeability and specific storage as settlement proceeds (a schematic iteration loop is sketched at the end of this passage). Another difficulty in this type of simulation is that the vertical thickness of each soil layer is limited (several meters to tens of meters) while the horizontal extent reaches tens of thousands of square kilometers: refining the elements to match the vertical scale is likely to exceed the capacity of ordinary computers, while enlarging the horizontal element size produces badly distorted elements whose horizontal and vertical dimensions differ too greatly. The problem can only be solved by changing the composition of the basis functions in the finite element method, so that the basis functions themselves satisfy the seepage differential equation and the parameters within an element need no longer be constant [4]. For a whole subsidence area, the range is large, geological and hydrogeological conditions differ greatly, and soil deformation is complex, so one or two deformation models cannot describe the land subsidence of the entire area; the appropriate stress-strain-time relation must be chosen, according to the water level change mode, to calculate the deformation of each soil layer. Not only must each aquifer and aquitard be modeled separately, the same soil layer must also adopt different models according to its position in the drawdown funnel, zoned by water level change mode; and at the center of the funnel, the same layer must use different models for different stages of water level change. All of this is superimposed on the traditional zoning of hydrogeological and soil mechanics parameters by soil properties. As a result, each soil layer may have 20 to 30 deformation zones, and the total number of zones in the whole area can easily reach several hundred. The division and determination of the zones must rest on observation data from a regional monitoring network of marks at different depths, through which the laws of soil deformation are grasped and the models established. The structure of a large subsidence area is complex, and without abundant exploration data and layered-benchmark observations it is difficult to determine the deformation laws and build the model. Finally, it should be pointed out that there is as yet no mathematical model for ground fissures. Land subsidence monitoring (including leveling, GPS surveying, InSAR, and bedrock-mark and layered-mark monitoring) is the basis for grasping the overall behavior of land subsidence, for analysis and research, and for formulating countermeasures; China is building such a regional monitoring network.
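The following is the schematic iteration loop promised above: a minimal sketch of a coupled two-step cycle. The relaxation-style flow "solver", all numerical values, and the Kozeny-Carman-like permeability update are illustrative assumptions of my own, standing in for the real void-ratio relationships of the published model:

```python
# Coupled two-step sketch: run the flow model, feed the head change to the
# settlement model, update permeability from the new void ratio, and repeat
# until the settlement increment converges.

RHO_G = 9810.0                                # rho * g for water, Pa per m of head

def flow_step(head, target, relax):
    """Placeholder flow solver: head relaxes toward the pumped level."""
    return head + relax * (target - head)

def settlement_step(d_head, mv, thickness):
    """1-D compression from a head drop via d_sigma' = rho*g*|dH|."""
    return mv * RHO_G * max(-d_head, 0.0) * thickness

def update_permeability(k, e, strain):
    """Toy coupling: compaction reduces void ratio e, which reduces k."""
    e_new = e - strain * (1.0 + e)            # void-ratio change from strain
    return k * (e_new / e) ** 3, e_new        # Kozeny-Carman-like dependence

head, target = 0.0, -10.0                     # 10 m of eventual drawdown
k, e, mv, H = 1e-6, 0.8, 5e-8, 40.0
total, d_s = 0.0, 1.0
while d_s > 1e-6:                             # iterate until increments vanish
    new_head = flow_step(head, target, relax=0.5 * k / 1e-6)
    d_s = settlement_step(new_head - head, mv, H)
    k, e = update_permeability(k, e, d_s / H)
    total += d_s
    head = new_head
print(total)                                  # approaches mv*rho*g*10*H ~ 0.196 m
```

The design point, as in the text, is that the parameters fed to the flow step are no longer constants: each pass re-derives them from the deformation just computed.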
Once land subsidence has occurred, it is difficult to reverse and difficult to control. Even if measures such as restricting or banning groundwater extraction and carrying out artificial recharge can gradually restore the groundwater level and slow the subsidence rate, the subsidence will continue to spread for a long time. The management of land subsidence should therefore put prevention first, combine prevention with control, and combine the general control and mitigation of subsidence with the management of key areas. The main prevention and control measures are: ① formulate laws and regulations to ensure the effective implementation of prevention and control measures; ② strengthen the management of groundwater resources, optimize the extraction layout, control the quantity and level of groundwater extraction, and use groundwater resources rationally; ③ carry out artificial recharge of the aquifers to effectively restore and stabilize the groundwater level; ④ strengthen the unified management of water resources at the basin scale and promote water-saving technologies; ⑤ do a good job of monitoring and research on land subsidence, with the emphasis on prevention.

On June 30, 1908, a huge explosion occurred in the Tunguska region of Siberia, Russia, with hundreds or even thousands of times the energy of the Hiroshima atomic bomb; more than 2,000 square kilometers of forest were instantly reduced to ashes. Atmospheric infrasonic pressure fluctuations from the explosion were recorded thousands of kilometers away in the United Kingdom, and the ground shaking it induced was registered as far away as Washington in the United States and Java in Indonesia. A powerful shock wave crossed the North Sea, and British meteorological stations recorded violent fluctuations in atmospheric pressure lasting about 20 minutes. The explosion filled the sky over Siberia and Northern Europe with rare shining silvery clouds; after sunset the night sky glowed so brightly that newspapers could be read on the streets of London at night. It was an extremely rare natural explosion in human history, and although many scientists from all over the world have tried for a century to find out the truth, the mystery has never really been solved. The principal hypotheses with some scientific basis are the following. Hypothesis 1: the nuclear explosion theory. In 1927 the Soviet scientist Kulik led a team on a scientific expedition to the explosion area, followed by five further surveys, but found no strong evidence for the cause of the explosion. From 1958, the Soviet scientist Plekhanov and many other scientists conducted a series of investigations of the Siberian explosion area, measured the radioactive dose of large numbers of soil and plant samples, and found that the dose at the center of the explosion was 1.5 to 2 times higher than at 30-40 km away. Dr. Fashilyav, a professor at Minsk University, stated: "So far, quite profound genetic changes have taken place in this area, not only in plants but also in small insects. Various types of bees and insects hardly found anywhere else in the world have appeared in the area. In addition, some trees and plants stopped growing, while others grew several times faster, some even hundreds of times faster than trees and plants before 1908." After in-depth investigation and research, Dr.
Fajilia declared in 1960: "The situation shows that here, especially at the center of the explosion, there was a general electromagnetic disturbance, indicating that a huge electromagnetic hurricane destroyed everything in this area." In 1961 a scientist calculated that the light radiation energy of the Siberian explosion accounted for about 30% of the total energy, roughly similar to the light radiation of a nuclear explosion. Scientists observing the effects of nuclear explosions have found that when a nuclear explosion occurs at one location, a bright glow appears on the opposite side of the earth, together with electromagnetic interference attributed to the reflection of radar waves. When the Siberian explosion occurred in 1908, the British explorer Ellinster was exploring deep in the Antarctic, which lies roughly opposite Siberia; he was camped near Mount Erebus, and on the day of the explosion the expedition observed and recorded an unexplained, intense aurora. Some scientists hold that the "purple-white aurora", "silver clouds", "strange sunsets", "daylight at night", and other phenomena produced after the 1908 Siberian explosion are almost exactly the same as those of the hydrogen bomb test conducted by the United States at Bikini Atoll in 1954, the difference being that the American test was smaller in scale. From these shared or similar phenomena, some scientists speculate that the 1908 Siberian explosion may have been a nuclear explosion. But in 1908 humanity possessed no nuclear weapons, so how could a nuclear explosion of such scale have happened? Hypothesis 2: the antimatter and miniature black hole theories. In 1965 three American scientists proposed that the Tunguska explosion might have been caused by a kind of antimatter falling to earth from space, an anti-meteorite. In their report they argued that on that day a meteorite composed of antimatter accidentally entered the earth's atmosphere and caused the disaster, believing that the annihilation of half a gram of "anti-iron" with half a gram of iron would suffice to produce destructive power equivalent to the atomic bomb exploded over Hiroshima. In 1973 two scientists at the University of Texas, Jackson and Ryan, argued on the basis of black hole theory that the 1908 Siberian explosion was caused by the strong gravity of a miniature black hole. In their view, if such a miniature black hole entered the earth's atmosphere and passed through the earth in 1908, it could explain all the phenomena of the explosion; they concluded that "the small black hole passed out through the earth somewhere in the Atlantic Ocean between Iceland and Newfoundland (Canada)." Some scientists supported this conclusion, but many disagreed, because if the Siberian explosion had happened as they described, the same anomalous phenomena should also have occurred on the side of the earth opposite Siberia, perhaps even with traces of the miniature black hole exiting the earth; yet nothing of the kind has been found on the other side of the earth.
Hypothesis 3: the meteorite impact theory. Some scientists believe the 1908 Siberian explosion was caused by a meteorite colliding with the earth. Many scientists who oppose this hypothesis have raised objections: if it had really been a meteorite, the impact would have rapidly excavated the thick crustal material and exposed and displaced mantle material, forming a huge crater at the center of the explosion, like the craters found in Canada and America, notably the Brent crater in central Ontario and the Chubb crater in eastern Quebec, with diameters of up to about 10 km. However, no such huge crater has been found in Siberia. In 2007 Italian scientists suggested that Lake Cheko might have been an explosion crater, but offered no strong evidence beyond its morphology. Strikingly, aerial surveys confirmed that the area damaged by the explosion reached 2000 km², with a central zone of about 3 km². There is also a very strange phenomenon: some trees remained standing but were stripped of all their leaves, and after the explosion the trees in this area grew remarkably, even astonishingly, fast. To date no impact crater or explosion wreckage has been found, which makes the Tunguska explosion all the more mysterious and leads some scientists to reject the meteorite impact theory. Hypothesis 4: the comet impact theory, currently a popular one. The first to propose it was Petrov, an academician of the Soviet Academy of Sciences. He held that the Tunguska explosion was caused by a comet, a loose snowball from the distant reaches of the solar system. As it broke through the atmosphere toward the earth's surface at some 40,000 kilometers per hour, friction produced superheated gas, and when this gas reached the ground it generated a shock wave with the destructive power of several atomic bombs; since the comet evaporated quickly, no crater or debris was left on the earth as physical evidence. If the Tunguska explosion was the impact of an extraterrestrial body, anomalies of the platinum-group elements (PGE), indicator elements of extraterrestrial matter, should remain in the explosion layer, because the abundance of platinum-group elements in solar system material is 4 to 5 orders of magnitude higher than in crustal material. The research team of the Chinese scholars Hou Quanlin and Xie Liewen has explored this diligently, and in 1995, 1999, and 2004 discovered anomalies of platinum-group elements such as iridium (Ir), palladium (Pd), rhodium (Rh), and ruthenium (Ru) in the explosion layer of the explosion area, echoing the anomalies of light elements such as C, N, and H discovered in the same layer by Professor Kolesnikov of Moscow State University. This strongly supports the comet impact theory; it has been calculated that the exploding comet nucleus had a radius of about 160 m, a mass of tens of millions of tons, and an explosion energy equivalent to tens of millions of tons of TNT.
In short, explaining the real cause of the Tunguska explosion will still require the unremitting efforts and exploration of scientists all over the world; the "hundred-year mystery" will eventually be solved.

Preface

The discovery in the continental crustal rocks of continental collision orogenic belts of coesite and diamond, minerals that can form only under ultrahigh pressure, proves that during continental collision the continental crust can be subducted to mantle depths (see the other section of this book: why can the continental crust, with its low specific gravity, subduct into the mantle, which has a high specific gravity?). Since continental crustal rocks containing coesite or diamond formed by subduction of the crust to mantle depths and metamorphism under ultrahigh-pressure conditions, we call such rocks ultrahigh-pressure (UHP) metamorphic rocks. Although UHP metamorphic rocks formed deep in the mantle, we can still observe them at the surface today, which shows that they returned to the surface at some time in the past. Moreover, the return of UHP metamorphic rocks from mantle depths to the surface must be fast enough that, during cooling and decompression, coesite has no time to revert to quartz and diamond no time to revert to graphite. What tectonic process and mechanism bring deeply subducted continental crust, that is, UHP metamorphic rocks, rapidly back to the surface has therefore become an important scientific question. In addition, the Alps and the Himalaya are Cenozoic orogenic belts whose collisional orogeny is still in progress, and UHP metamorphic rocks are exposed in these belts too, indicating that their exhumation occurred during continental collision. Determining the mechanism by which UHP metamorphic rocks are exhumed and exposed is thus of great significance for a full understanding of the continental collision process.

The age and speed of exhumation of ultrahigh-pressure metamorphic rocks

When UHP metamorphic rocks begin to be exhumed after their formation, and how fast the exhumation is, are the questions people first need to clarify. The exhumation of UHP metamorphic rocks is a process of cooling and decompression accompanied by retrograde metamorphism, so accurate determination of the peak metamorphic age (at maximum metamorphic pressure) and the retrograde metamorphic ages of UHP rocks constrains the timing and average velocity of their return. In addition, the closure temperature theory of isotopic chronology provides an effective tool for understanding the exhumation and cooling history of UHP metamorphic rocks. According to this theory, a given radiogenic isotope system in a mineral can be regarded as a closed system, and begins keeping time, only when its temperature falls below a specific value, called the closure temperature (Tc) of that isotope system in that mineral. Each isotopic age therefore actually records the time at which the corresponding mineral cooled to its closure temperature.
In this way, if we can accurately determine isotopic ages for a range of closure temperatures in UHP metamorphic rocks, we can construct temperature (T)-time (t) cooling curves that accurately reveal their uplift and cooling history. The difficulty lies in finding, for geological bodies with a common cooling history, enough mineral-isotope systems with different closure temperatures, and in carrying out accurate isotope dating (errors under 5 Ma). Figure 1 shows the measured cooling curves of the Shuanghe UHP eclogite and its country rock (granitic gneiss) in the Dabie Mountains [1]. It reveals that the UHP metamorphic rocks of the Dabie Mountains, together with their country rocks, experienced two episodes of rapid uplift and cooling: the first was rapid uplift immediately after UHP metamorphism at 226 Ma, cooling to 500°C by 219 Ma; during 219-180 Ma there was a near-isothermal interval at 450-500°C in which the UHP rocks rose no further; during 180-165 Ma the UHP rocks underwent a second rapid uplift, cooling from 450°C to 300°C. Two time nodes, at ~220 Ma and ~180 Ma, thus mark sudden changes in the exhumation rate of the UHP rocks; they correspond to deep tectonic events in the continental subduction zone that remain to be studied and identified.

Figure 1 Temperature (T)-time (t) cooling curve of the Shuanghe UHP metamorphic rocks in the Dabie Mountains [1]. Yellow squares are cooling ages of the UHP rocks and white squares cooling ages of the country rocks; the size of each square shows the errors of the age and the corresponding closure temperature (the time unit Ma is millions of years); the dating mineral assemblages for each age are summarized in reference [1]

Evidence for the time at which UHP metamorphic rocks reached the surface can also be sought in the sedimentary strata of the basins around the collisional orogen. Such basins receive the products of weathering and denudation of the uplifted mountain belt, forming clastic sedimentary strata; if a stratum contains gravels of UHP metamorphic rock or clasts of the corresponding high-pressure metamorphic minerals, then UHP rocks were already exposed at the surface during the depositional age of that stratum. The Hefei Basin on the north side of the Dabie Mountains began to receive coarse clastic strata in the Jurassic, in which UHP metamorphic rock gravels and high-pressure metamorphic minerals have been found, proving that UHP rocks were exposed at the surface of the Dabie Mountains in the Jurassic [2]. The results of these various methods show that the first rapid uplift and exhumation of the UHP metamorphic rocks occurred during the deep subduction of the continent, an important constraint for exploring the exhumation mechanism. However, multiple episodes of tectonic uplift are required before large volumes of UHP metamorphic rocks (such as the Dabie-Sulu UHP metamorphic belt) are exposed at crustal levels [1].
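To illustrate how such a cooling curve is used, the short sketch below (my own example; the 700°C peak temperature is an assumed illustrative value, while the other pairs are rounded from the Dabie history just described) computes average cooling rates between dated points:

```python
# Pairs of (cooling age in Ma, closure temperature in deg C), oldest first.
cooling_points = [
    (226, 700),   # assumed peak UHP metamorphic temperature (illustrative)
    (219, 500),   # cooled to ~500 C by 219 Ma (first rapid uplift)
    (180, 450),   # ~450-500 C plateau during 219-180 Ma
    (165, 300),   # cooled to ~300 C by 165 Ma (second rapid uplift)
]

# Average cooling rate between successive dated points, in deg C per Myr.
for (t1, T1), (t2, T2) in zip(cooling_points, cooling_points[1:]):
    rate = (T1 - T2) / (t1 - t2)
    print(f"{t1}-{t2} Ma: {rate:.1f} C/Myr")
# Two fast segments (~29 and ~10 C/Myr) bracket a near-isothermal plateau
# (~1.3 C/Myr), reproducing the two-stage exhumation described in the text.
```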
Exhumation mechanism of UHP metamorphic rocks Because the UHP metamorphic rocks discovered early on were all eclogite or jadeite quartzite blocks of various sizes (from a few centimeters to hundreds of meters) wrapped in large areas of granitic gneiss, and no UHP metamorphic minerals had been found in the granitic gneiss itself, some scholars believed that UHP metamorphic rocks were merely a small number of rock fragments in the collisional orogenic belt that had been pushed up to the surface as a tectonic m\u00e9lange. Later, however, coesite, a typical UHP metamorphic mineral, was also found in metamorphic zircons from the granitic gneiss, proving that the granitic gneiss country rock likewise experienced deep subduction and UHP metamorphism [3]. Furthermore, the gneisses have the same metamorphic age and cooling history as the UHP rocks they enclose [1] (Fig. 1). The UHP metamorphic rocks exposed at the surface are therefore no longer seen as sporadic eclogite fragments, but as large areas of exposed silica-alumina gneiss enclosing eclogite lenses, and the exhumed deeply subducted continental crust is essentially low-density silica-alumina crust. Since the density of the metamorphosed silica-alumina crust at a depth of 100 km (3.03 g/cm3) is less than that of the surrounding mantle rocks (3.24 g/cm3), the resulting positive buoyancy becomes the main driving force for the exhumation of the deeply subducted silica-alumina crust[4], whereas the deeply subducted high-density (3.74 g/cm3) mafic lower crust[4] cannot return because of its negative buoyancy. Two exhumation mechanisms have been proposed within this buoyancy-driven model of the subducted silica-alumina continental crust, namely the \"slab break-off\" mechanism[4] and the \"thrust exhumation\" mechanism[4, 5]. The \"slab break-off\" mechanism: the buoyancy experienced by the deeply subducted silica-alumina continental crust in the mantle increases with subduction depth. When the buoyancy grows to equal the drag of the dense subducting oceanic crust, the subduction velocity of the continental crust falls to zero; under the combined action of the downward drag of the subducting oceanic crust and the upward buoyancy of the subducted continental crust, the lithosphere between them is pulled apart, so that the continental lithosphere breaks off from the dense subducted oceanic slab. Having lost the drag of the oceanic slab beneath it, the deeply subducted continental lithosphere rebounds and rises rapidly under buoyancy[4]. This mechanism can explain when the subduction of the continental crust ends and exhumation begins, but it fails to explain how the UHP metamorphic rocks are lifted further to the surface once they have risen out of the mantle and the buoyancy has disappeared. The \"thrust exhumation\" mechanism: the buoyancy acting on the deeply subducted continental crust in the mantle acts only on the low-density silica-alumina crust, while the dense subducted mafic lower crust experiences negative buoyancy; this generates a shear force between the subducting silica-alumina continental crust and the mafic lower crust that causes the subducted continental crust to fracture along the weak lower crust (Fig. 2b).
The subducted silica-alumina continental crust is thereby decoupled from the underlying subducted mafic lower crust and lithosphere. Under the action of buoyancy, the decoupled subducting silica-alumina continental crust thrusts back toward the surface as a whole (Fig. 2c). The continental crust subducted subsequently beneath this thrust fault repeats the process at depth, forming a second thrust slab that pushes the first one further toward the surface (Fig. 2d). Under the assumption that the lower crust is a plastic, easily flowing material, tectonic physics experiments have demonstrated the feasibility of this exhumation mechanism[5] (Fig. 2). Figure 2 \tContinental crust subduction and thrust exhumation experiment. The experiment assumes that the lower crust is a plastic, easily flowing material, and shows that during subduction the continental crust and lithosphere separate along the lower crust and form a main reverse fault, with the deeply subducted continental crust thrust back under buoyancy[5]. 1 subducted continental crust; 2 upper crust; 3 lower crust; 4 denuded sediments. The above model is built on the \"jam sandwich\" model of the mechanical strength of the continental lithosphere, in which the upper-middle crust and the lithospheric mantle are rigid while the lower crust sandwiched between them is ductile and forms a low-viscosity zone. However, studies over the past 10 years have shown that the \"jam sandwich\" model is an inadequate description of the mechanical structure of the continental lithosphere. Because of compositional differences among the upper, middle and lower continental crust, at least two low-viscosity zones appear at different depths in the continental silica-alumina crust as temperature and pressure increase[6]. The deeply subducted silica-alumina continental crust should therefore be able to split into at least two UHP rock slices during uplift and exhumation. Chinese scholars were the first to notice this problem, finding that the Dabie UHP metamorphic belt can be divided into three slices: Pb isotope studies show that the North Dabie UHP slice has lower-crustal characteristics while the South Dabie UHP slice has upper-crustal characteristics. A multi-slice differential exhumation model of UHP metamorphic rocks was accordingly established, in which, during deep subduction of the continental crust, the silica-alumina crust splits from top to bottom into several slices that are uplifted and exhumed in succession[7~9]. The Sulu HP-UHP metamorphic belt can likewise be divided into multiple slices with different metamorphic histories and can be explained by the same multi-slice differential exhumation model[10]. The multi-slice differential exhumation model of deeply subducted continental crust reveals that the essential difference between continental and oceanic plate motion lies in the inhomogeneity of the internal composition and mechanical strength of the continental crust, which causes the continental crust itself to split into several thin slices, each following a different movement trajectory.
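All of these exhumation models, single-slice or multi-slice, rest on the buoyancy contrast quoted earlier. A back-of-envelope check using the densities given above (3.03, 3.24 and 3.74 g/cm3) is sketched below; it simply evaluates the buoyancy force per unit volume.

```python
g = 9.8  # gravitational acceleration, m/s^2

def buoyancy_per_m3(rho_rock, rho_mantle):
    """Buoyancy force per cubic metre (N/m^3); positive means upward.

    Densities are given in g/cm^3 and converted to kg/m^3.
    """
    return (rho_mantle - rho_rock) * 1000.0 * g

# Densities at ~100 km depth quoted in the text (g/cm^3):
print(buoyancy_per_m3(3.03, 3.24))  # silica-alumina crust: ~ +2060 N/m^3, rises
print(buoyancy_per_m3(3.74, 3.24))  # mafic lower crust:    ~ -4900 N/m^3, sinks
```

The signs alone capture the argument: the silica-alumina crust is driven upward while the mafic lower crust is held down, which is why only the sialic slices return to the surface.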
However, the location, timing and conditions of the multiple splits in the silica-alumina continental crust, as well as the thickness and vertical stacking of the rock slices, are still unclear, and the movement trajectory of each slice after splitting also needs further research. Fig. 3 \tDifferent cooling T-t curves of the three UHP rock slices in the Dabie Mountains [9]. SDZ. South Dabie UHP metamorphic zone; CDZ. Central Dabie UHP metamorphic zone; NDZ. North Dabie UHP metamorphic zone", "One of the important features that distinguishes the earth from the other planets in the solar system is that it has a hard outer shell formed by chemical differentiation: the crust[1]. The earth's crust is made up of two parts, the continental crust and the oceanic crust. The continental crust accounts for about 40% of the total surface area of the earth's crust (including the continental shelf); it is the part directly inhabited by humans, providing rich biological and mineral resources and a vast space for activity. A detailed understanding of the formation mechanism and evolution of the continental crust is therefore not only of great significance for understanding the evolution of the earth, but also closely related to human daily life. Through the long-term efforts of earth scientists, we have come to know that the average thickness of the continental crust is about 40 km; that the average composition of the bulk continental crust is that of an intermediate igneous rock (andesite), characterized by enrichment in large-ion lithophile elements such as Cs, Rb, Ba and Th and depletion in high-field-strength elements such as Nb, Ta and Ti; that the element abundances of the bulk continental crust are complementary to those of the depleted upper mantle[2, 3]; and that the growth and destruction of the continental crust have been going on from the Archaean to the present, with several periods of rapid accretion[4] (Fig. 1). However, there is still no clear and unified understanding of the specific formation mechanism and evolution of the continental crust, or of how it acquired a bulk andesitic chemical composition from mantle-derived material. Most geologists believe that the most important source of material for the chemical framework and features of the continental crust was magma produced by partial melting of subducting plates in the early earth [1, 5, 6], possibly accompanied by later compositional reworking [7], and that island-arc magmatism related to subduction [4] is the main way in which the continental crust grows [8]. However, there are also different views. Kamber et al. [9] argued that the formation mechanism of the Archaean continental crust (represented by the Archaean TTG suite) was similar to that of modern island arcs but had nothing to do with plate subduction; Smithies et al. [10] argued that, compared with subduction zones, melts produced at the base of thickened oceanic crust played the more important role in the formation and accretion of continents. All of these different models of continental crust formation are based on the same observational data, namely that the continental crust is depleted in Nb and Ta and that its Nb/Ta value is significantly lower than that of the other terrestrial reservoirs, yet they offer
different explanations for the formation mechanism. The geochemistry of Nb-Ta has therefore become key to explaining the formation and evolution of the continental crust. Nb and Ta are adjacent elements of the same subgroup in the periodic table. Their ions have the same valence (+5) and almost the same effective radius (~0.64 \u00c5). According to basic geochemical theory they are therefore typical twin elements and should not fractionate in most magma-related geological processes. In other words, the concentrations of Nb and Ta in different geological bodies may vary, but their Nb/Ta values should not vary greatly. However, the large amount of analytical data accumulated so far shows that there are obvious differences in Nb/Ta among the first-order geological units of the earth. The average Nb/Ta of the upper continental crust is 12~13, that of the lower continental crust is about 9, that of the bulk continental crust is 11[3], and that of the depleted upper mantle is 15.5[1]. These data show: \u2460 that very significant Nb-Ta fractionation occurs during the formation of the continental crust, which is closely tied to the mantle; \u2461 that the Nb/Ta values of the continental crust and the depleted mantle are not complementary, both being significantly lower than the Nb/Ta of the primitive mantle (17.5) or of chondrites (17.3~17.6)[11]. There is thus an obvious crust/mantle decoupling, and from the point of view of mass balance there should exist another reservoir with a relatively high Nb/Ta value (>17.5) to balance the low Nb/Ta values of the continental crust and depleted mantle. So far, however, the scientific community has reached no unified understanding of where this high-Nb/Ta reservoir lies. The Nb-Ta crust/mantle decoupling and the missing high-Nb/Ta reservoir have therefore become an unsolved mystery of current geochemical research (Figure 2). Figure 2 \tRelationship between Nb content and Nb/Ta value of the major geological reservoirs of the earth [3, 11]. Given the important relationship between subduction zones and the genesis of the continental crust, many studies in recent years have been devoted to the geochemical behavior of Nb and Ta during subduction and its implications for the origin of the low Nb/Ta of the continental crust. When studying the trace element compositions of seamount basalts in the eastern Pacific, Niu Yaoling et al. [12] found very obvious fractionation among Nb, Ta and some related high-field-strength elements, and attributed it mainly to subduction of the oceanic crust. At the same time, they proposed that the Nb-Ta depleted from the continental crust might be stored in the mantle sources of oceanic basalts. This discovery directly stimulated research on possible Nb-Ta fractionation during subduction and its significance for the formation mechanism of the continental crust, which has become one of the hot fields of geochemistry since the beginning of the 21st century. Since 2000, a series of results published in top academic journals such as Nature and Science and in first-rank geochemical journals such as GCA have discussed this problem.
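The mass-balance argument for a hidden high-Nb/Ta reservoir can be illustrated with a short calculation. Because the Nb/Ta of a mixture is the Ta-weighted mean of the Nb/Ta of its end-members, no combination of continental crust (11) and depleted mantle (15.5) can average back to the primitive-mantle value of 17.5. The Ta mass fractions below are arbitrary illustrative values.

```python
# Nb/Ta values quoted in the text: bulk continental crust 11,
# depleted mantle 15.5, primitive mantle / chondrites ~17.5.
R_CC, R_DM, R_PM = 11.0, 15.5, 17.5

def mixture_ratio(f_ta_cc):
    """Nb/Ta of a crust + depleted-mantle mixture.

    f_ta_cc is the fraction of the mixture's total Ta held in the crust;
    Nb/Ta of a mixture is the Ta-weighted mean of the end-member ratios.
    """
    return f_ta_cc * R_CC + (1.0 - f_ta_cc) * R_DM

for f in (0.0, 0.2, 0.5, 1.0):
    print(f'Ta fraction in crust {f:.1f}: mixture Nb/Ta = {mixture_ratio(f):.1f}')
# Every mixture lies between 11.0 and 15.5, always below 17.5, so a third,
# high-Nb/Ta reservoir is required to balance the silicate earth.
```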
Rudnick et al. [1] investigated rutile-bearing refractory eclogites formed during subduction and found that their Nb/Ta values vary greatly, with an average higher than that of the primitive mantle. They therefore proposed that Nb-Ta fractionation caused by partial melting of rutile-bearing eclogite during subduction can produce the low Nb/Ta values characteristic of the continental crust, while the refractory eclogite partly enters the lower mantle to become a reservoir with high Nb/Ta. However, subsequent experiments showed that, because rutile is more enriched in Ta than in Nb, partial melting of eclogite in the presence of rutile leads to melts with Nb/Ta values higher, not lower, than those of the continental crust; melts produced by rutile-bearing subducting slabs are therefore unlikely to carry the low Nb/Ta of the continental crust [5, 13]. Based on the Nb-Ta distribution coefficients between hornblende and melt (the distribution coefficient being the concentration ratio of an element between two phases when a system reaches equilibrium), Foley et al. [5] proposed that the early continental crust formed by the melting of amphibolites, with rutile-bearing eclogites contributing little to the formation of the continental crust. However, Rapp et al. [6] pointed out that the model of Foley et al. [5] cannot explain the major elements and the trace elements other than Nb and Ta of the early continental crust (the TTG suite), characteristics that must be considered when studying the formation mechanism of the continental crust. Their experimental data showed that granitic melts produced by partial melting, under eclogite-facies metamorphic conditions, of hydrous basalts with initially low Nb/Ta values have low Nb/Ta values and other major and trace element characteristics similar to those of the early continental crust; hydrous basalts with initially low Nb/Ta may therefore have been the main source of the TTG suites of the early earth. However, many studies have shown that such hydrous basalts with initially low Nb/Ta are very rare, far too rare to form the continental crust, and that such rocks are generally severely depleted in incompatible elements [11, 12]. Later, in a study of the Dabie-Sulu UHP metamorphic eclogites, Xiao et al. [14] found zoning of Nb/Ta values along rim-core-rim profiles of rutile, indicating that obvious Nb-Ta fractionation occurred during plate subduction and was related to fluids with low Nb/Ta values at a certain stage. On this basis they pointed out that during subduction the interior of the slab is cold and its exterior hot: the hot part forms a fluid with low Nb/Ta before rutile appears, and this fluid flows into the cold part to create regions with low Nb/Ta; dehydration-melting of these cold parts then produces melts with low Nb/Ta, which may be an important mechanism for generating the low Nb/Ta of the early continental crust. However, Aulbach et al. (2008) found that the origin of the high and low Nb/Ta values of eclogites may be related to metasomatism by lithospheric mantle fluids rather than to subduction. A recent study by Niu and O'Hara (2009) showed that the depleted mantle contains more Eu, Sr, Nb, Ta, Ti and other such elements than previously assumed, and so can complement these elements depleted in the continental crust. At the same time, they proposed that andesitic melts produced in continental collision zones by partial melting of amphibolite whose protolith is mid-ocean ridge basalt (MORB) may be the main source of continental growth.
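The partition-coefficient reasoning above can be made concrete with the standard batch-melting equation, C_l = C_0/(D + F(1 - D)). The bulk partition coefficients in the sketch below are hypothetical placeholders, chosen only so that rutile retains Ta more strongly than Nb (D_Ta > D_Nb), as in the experiments cited.

```python
# Batch-melting sketch of how rutile can fractionate Nb from Ta.
# For each element, melt concentration C_l = C_0 / (D + F*(1 - D)),
# where D is the bulk solid/melt partition coefficient and F the melt
# fraction. The D values below are hypothetical placeholders.
def melt_nb_ta(r_source, d_nb, d_ta, F):
    """Nb/Ta of a batch melt from a source with Nb/Ta = r_source."""
    c_nb = 1.0 / (d_nb + F * (1.0 - d_nb))  # relative Nb enrichment in melt
    c_ta = 1.0 / (d_ta + F * (1.0 - d_ta))  # relative Ta enrichment in melt
    return r_source * c_nb / c_ta

for F in (0.05, 0.1, 0.3):
    print(f'F = {F:.2f}: melt Nb/Ta = {melt_nb_ta(15.5, 0.3, 0.6, F):.1f}')
# With D_Ta > D_Nb, small melt fractions yield melts with Nb/Ta well
# *above* the source value -- higher than continental crust, which is
# precisely the objection raised against the rutile-eclogite model.
```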
To sum up, there is still no unified understanding of the geochemical behavior of Nb and Ta, of its significance for the formation mechanism of the continental crust, or of the location of the possible high-Nb/Ta reservoir. None of the models proposed so far can fully explain the specific fractionation mechanism of Nb-Ta during the formation of the continental crust or the causes of the low Nb/Ta values of the continental crust and depleted mantle. Further research on the Nb-Ta paradox also depends on our understanding of element fractionation during subduction and of the deep processes of the earth.", "The oldest material in the solar system is the meteorites, which formed about 4568 Ma (4.568 billion years) ago [1], but when the oldest rock on the earth formed is a scientific issue currently under very hot discussion. According to the theory of planet formation, the earth, as a member of the solar system family, should also have formed at 4568 Ma. But the reality is that the material so far found on earth is much younger than that age. Two kinds of ancient earth material are currently known worldwide: one is minerals that persist in a stable state, such as zircon; the other is natural rocks. Occurrences of zircons older than 3.8 billion years have been found in Western Australia, the central United States, eastern and northwestern Canada, and in eastern Hebei, Tibet and Qinling in China. These ancient zircons commonly occur in younger sedimentary rocks. For example, zircons 4.4 billion years old were found in the Proterozoic sedimentary rocks of Jack Hills, Western Australia [2]; these are the oldest material yet found on earth (Figure 1). Rocks older than 3.8 billion years occur mainly in Canada, Greenland, Antarctica and China. Among them, the Slave area of northwestern Canada contains confirmed occurrences of rocks about 4 billion years old, the oldest rocks found on earth so far [3, 4]. Figure 1 \tU-Pb concordia diagram of the world's oldest zircons [2]. In the study of the earth's ancient rocks, China occupies a very important position. In the early 1990s, Chinese scientists discovered material older than 3.8 billion years in eastern Hebei and the Anshan area of Liaoning [5], making China one of the few countries in the world preserving records of the earth's earliest geological history [6]. Later, material about 3.8 billion years old or even older was successively discovered in Hubei, Gansu and Tibet. In the Anshan area in particular, the 3.8-billion-year-old material discovered so far is exposed in the Dongshan Scenic Area within the city and in the Baijiafen area east of the city. An important outstanding problem, however, is whether the 3.8-billion-year-old zircons found at Anshan represent the formation age of the rocks or are merely ancient relics within younger rocks[7]; further work is still needed.
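The zircon ages quoted in this essay come from U-Pb decay systematics. Below is a minimal sketch using the standard 238U and 235U decay constants (Jaffey et al., 1971); an age is \"concordant\" when the two chronometers agree, which is what the concordia diagram of Figure 1 displays.

```python
import math

# Standard uranium decay constants (Jaffey et al., 1971), in 1/yr:
L238 = 1.55125e-10   # 238U -> 206Pb
L235 = 9.8485e-10    # 235U -> 207Pb

def ratios_at(t_yr):
    """Radiogenic daughter/parent ratios a closed zircon accumulates in t years."""
    return math.expm1(L238 * t_yr), math.expm1(L235 * t_yr)

def age_206_238(ratio):
    """206Pb*/238U age in years."""
    return math.log1p(ratio) / L238

r206, r207 = ratios_at(4.4e9)   # the Jack Hills zircon age quoted above
print(f'206Pb*/238U = {r206:.3f}, 207Pb*/235U = {r207:.2f}')
print(f'recovered age: {age_206_238(r206) / 1e9:.2f} Ga')
```

For a 4.4 Ga zircon the two ratios plot on the concordia curve at the same age; discordant analyses, by contrast, betray later lead loss or mixing, which is exactly the worry raised above for the Anshan zircons.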
Although the oldest minerals found on earth are 4.4 billion years old and the oldest rocks 4 billion years old, this does not mean that no rocks formed on earth during its first 500 million years. Why, then, does the earth preserve so little rock or other material from that interval? One possibility has to do with impacts on the early earth. It is currently believed that, early in the earth's formation, a celestial body the size of Mars collided with it [8]. The energy of this impact melted the earth as a whole, so that its surface was completely covered by hot magma, forming the so-called \"magma ocean\". Obviously, the magma ocean would erase all material records of the early earth, making them difficult to find; however, the possibility of finding meteorites ejected from the earth on other bodies such as the moon cannot be ruled out. The second possibility is that rocks formed in the earth's earliest period differed considerably from present ones, chiefly in having higher density. Because numerous small-scale plate subduction systems existed at that time, the rocks formed then could not be preserved; as the earth's evolution continued, newly formed rocks evolved toward lower density, thereby escaping subduction and being preserved. These early material records are an important object for studying the early evolution of the earth that humans inhabit [9]. Scientists have now studied the internal mineral inclusions of these ancient zircons, the crystallization temperatures of the zircons themselves, their trace elements, and their O-Hf-Nd-Li-Xe isotopes, and have reached a new understanding of the earth's early hydrosphere and lithosphere. The main conclusions are as follows: \u2460 Hf isotope data show that these zircons originated from the remelting of material about 4.5 billion years old, reflecting the recycling of material early in the earth's formation; \u2461 the internal mineral inclusions in the zircons (mainly quartz and hydrous muscovite) and the O-Li-Xe isotope data show that the magmas from which the zircons crystallized were water-bearing and derived from partial melting of sedimentary rocks, reflecting the existence of flowing water, that is, a hydrosphere, on the earth at that time; \u2462 according to the Ti thermometer, the crystallization temperatures of these zircons lie between 600 and 780\u00b0C, averaging about 680\u00b0C, and, combined with the mineral inclusion data, the geothermal gradient at the time of zircon crystallization was 35\u00b0C/km (at which gradient a ~680\u00b0C magma corresponds to a depth of roughly 20 km), equivalent to the average geothermal gradient of the earth today. In fact, however, because of the abundance of radioactive heat-producing elements early in the earth's history, the geothermal gradient then should have been at least three times the present one. A reasonable explanation for this paradox is that these magmas formed in a relatively low-temperature environment, very likely similar to today's plate subduction zones. If this explanation holds, the earth began the process of plate tectonics shortly after its formation.
Clearly, we need to find more early earth rocks and minerals to answer questions about the state of the earth and the geological processes that took place within the first 500 million years after its formation.", "The solar system is composed of eight planets including the earth, among which the earth is distinguished from the others by its blue oceans and green land. From the perspective of earth science there are many other features that set the earth apart, and granite is one example. We are all familiar with granite, an important rock type on earth. Many of our scenic spots are actually granite landforms, such as Mount Huang in Anhui (left in Figure 1), Mount Jiuhua and Mount Tianzhu, Mount Sanqing in Jiangxi, Mount Tai and Mount Lao in Shandong, Mount Qian in Liaoning, and Mount Hua in Shaanxi. Many of our building stones are also granite; for example, the \"Night Rose\" (right in Figure 1) used to decorate the National Centre for the Performing Arts is a granite produced in Shanxi. Granite is also associated with numerous mineral deposits: the formation of the tungsten-tin deposits of South China is closely related to granite, and in fact many of the metallic minerals that humans use are related to granite. Figure 1 \tHuangshan granite scenery (left) and a famous granite building stone (right, \"Night Rose\"). Photo source: Internet. Granites come in various colors and have different characteristics, but mineralogically they are composed mainly of light-colored feldspar (both plagioclase and potassium feldspar) and quartz, with small amounts of mica and hornblende in which iron and magnesium are the main elements. Through more than a hundred years of research, geologists have found that granite is an important rock type making up the earth's continental crust, whereas the oceanic crust differs from the continental crust[1]: it is composed mainly of plagioclase, pyroxene and olivine with high mafic contents, and its main rock types are basalt (volcanic rock erupted at the surface) and gabbro (magmatic rock intruded at depth). So why is there granite only on the continents? To answer this question we need to understand how granite forms. The earth is composed of the core at its center, the mantle around it and the crust at the surface, and the average thickness of the crust in continental regions is about 40 km [2]. When the earth first formed it had no core, mantle or crust, but as it evolved, its interior melted. Since the earth's initial material was rich in iron and magnesium, its melting produced only basaltic magma, which formed a crust similar in composition to the oceanic crust; but when basaltic rock melts again, it produces magma dominated by silicon and aluminum, which crystallizes as granite, and the density of granite is significantly less than that of basalt. In this way, as the earth's oceanic plates subduct, the denser oceanic crust sinks into the mantle while the continents composed mainly of granite can be preserved for a long time. The oceanic crust seen today is everywhere younger than 200 million years, while the continental crust can be as old as 4 billion years. As time went on, the preserved continental crust was melted many times, so that its composition continuously evolved toward higher silicon-aluminum contents, while the remaining mafic material could be returned to the mantle in various ways [3~5].
This process has been repeated again and again, so that the earth's continental crust has continued to develop in a stable direction. It can be said that the amount of granite is an important petrological indicator of the degree of crustal development. According to the information currently available to mankind, no other planet in the solar system has been found to have granite; in other words, granite is an important petrological marker distinguishing the earth from the other planets of the solar system. This situation is related to the degree and history of these planets' evolution. As mentioned above, granite is a petrological indicator of crustal evolution; it is precisely because the crusts of these planets are relatively poorly evolved that they lack the granites found on earth. Take the moon, the celestial body most studied by humans apart from the earth: lunar meteorites and samples returned from the moon show that granite does not exist there, or is extremely rare. The main reason is that the moon \"died\" after its early magma-ocean stage; apart from occasional meteorite impacts, there is currently no geological activity on the moon. Another important reason for the absence of granite on other celestial bodies is water. Water plays a very important role in all geological processes; without water, all rocks are difficult to melt, whereas adding water to a rock significantly lowers its melting point and thereby promotes melting. Water is therefore an essential component for the formation of granite[6]. It is precisely the presence of water that makes the appearance of large amounts of granite possible on earth, something the other bodies lack. Might granite not also exist on bodies outside the solar system? We cannot yet give a definite answer. But the question is very important, because only a maturely evolved body can host life similar to that on earth, and finding life beyond the earth has always been a human dream.", "Cratons are important constituent units of the earth's surface[1] (Fig. 1), most of which formed in the early earth, before 1.8 billion years ago. Most of these ancient cratons have thick lithospheric roots of more than 200 km and, owing to their low density, low heat-flow values and high rigidity, they can escape modification by later geological processes and remain stable. After their formation they experienced essentially no obvious tectonic-magmatic-mineralization activity, and they show no significant seismic activity today, making them the most stable regions on earth; examples are the Siberian craton in Russia, the Wyoming craton in the United States and Canada, and the Kaapvaal craton in South Africa. Figure 1 \tDistribution map of the global cratons[1]. The exception is the North China Craton in northern China, which shows completely different characteristics from the other major cratons of the world. After its formation around 1.8 billion years ago this craton remained basically stable, forming the carbonate sedimentary sequences widely distributed across North China; since the Mesozoic, however, beginning around 200 million years ago, the North China Craton has undergone large-scale, intense structural deformation, magmatic activity and basin formation, accompanied by the emplacement of abundant metallic and energy resources.
In addition, strong seismic activity is an important geological feature of this craton, exemplified by the Tangshan earthquake of 1976. The originally stable North China Craton therefore changed its nature later in its history. Current human understanding of such changes in the nature of stable continental cratons, a special geological phenomenon, is still quite limited. What mechanism caused an originally stable craton to become unstable? In the 1950s, Chinese scientists recognized this distinctive geological feature of the North China Craton and proposed the concept of \"platform activation\"[2], but a clear understanding of the origin or geodynamic nature of the phenomenon was lacking. Studies of mantle-derived kimberlites and basalts in the 1980s and 1990s brought a qualitative leap in understanding. Data from diamonds and mineral inclusions in the kimberlites of Mengyin, Shandong and Fuxian, Liaoning show that when the kimberlites formed (about 470-480 million years ago) the lithosphere was about 200 km thick[3], whereas studies of mantle xenoliths in basalts give a lithospheric thickness of only about 80-120 km[4, 5]. Geophysical sounding data are basically consistent with these results[6], showing that the lithosphere of eastern North China has been thinned by more than 100 km since the early Paleozoic[3]. Since the academic community put forward the scientific proposition of \"lithospheric thinning in North China\", numerous studies have addressed the timing, spatial extent, vertical amplitude, mechanism and geodynamic controlling factors of this thinning[7, 8]. These studies further reveal that the crustal deformation, magmatic-mineralization activity and basin formation in North China during the Mesozoic and Cenozoic may all be related to the lithospheric thinning. That is to say, the Mesozoic-Cenozoic geological evolution of the North China Craton is characterized not only by thinning of the deep lithosphere but also by intense reworking and movement of the shallow crust; in other words, the North China Craton as a whole no longer possesses the stability of a craton. We call this process, in which the stable character a craton ought to have no longer exists, craton destruction [8], or decratonization [9]. Although craton destruction is also reflected to different degrees in other cratons of the world, such as those of North America, South America, Siberia and India, the academic community agrees that North China is the most typical region of craton destruction in the world[10], because the other cratons, despite some thinning, still retain their stable cratonic properties. Exactly why stable cratons are destroyed remains a scientific mystery. Since 2007 the National Natural Science Foundation of China has run a major research program on \"Destruction of the North China Craton\" in order to study this important and distinctive geological phenomenon deeply and systematically. According to current research results, the main mechanisms proposed for craton destruction are delamination, thermal erosion and extension.
Delamination refers to the process in which early thickening increases the density of the deep crust; this gravitational instability makes the dense crust sink into the asthenosphere together with the underlying lithospheric mantle, thereby thinning the lithosphere. The space originally occupied by the delaminated material is replaced by asthenosphere, whose high temperature heats the overlying crust, thereby destroying the craton. The thermal erosion model holds that \"baking\" by the asthenosphere beneath the craton softens and partially melts the overlying material, so that under the tangential shear stress generated by horizontal flow this material is converted into part of the asthenosphere; in this way the lithosphere is thinned and the craton destroyed. Extension, finally, is thinning of the lithosphere purely by mechanical stretching. Since the properties of the lithospheric mantle changed fundamentally before and after the thinning in North China, pure mechanical stretching clearly cannot by itself explain the destruction of the North China Craton. North China belongs to the East Asian continent adjacent to the Pacific Ocean, and the subduction of the Pacific plate since the Mesozoic cannot be ignored, because subduction of an oceanic plate carries large amounts of water into the mantle, and this water, once in the deep earth, migrates upward and softens the overlying rigid lithospheric mantle. To solve the problem of the destruction of the North China Craton, therefore, the geodynamic processes of eastern China and indeed the whole East Asian continent must be understood from a broader perspective.", "Neoproterozoic glaciations are an important spectacle in the evolutionary history of the earth[1]. Not only did they appear on almost all continents, but those continents were then in low-latitude, low-altitude settings, in sharp contrast to the Quaternary glaciers, which occur in polar or alpine plateau regions. However, whether the glacial deposition under the cold climate of the middle Neoproterozoic was a global geological event, and whether it reflects global marine glaciation or regional continental glaciation, remain highly controversial. Low-latitude glacial remnants (glaciomarine sedimentary rocks or tillites) are widely developed in strata of this age, indicating that the earth was once in a cold climate covered with ice and snow. Around this anomalous geological phenomenon, people have sought since the 1960s to unravel the mystery in various ways, and various hypotheses have emerged, the most famous being the Snowball Earth Hypothesis[2, 3]. Was there a global glaciation event at the earth's surface during this period? What caused the global or regional glacial melting? Did meltwater from the glaciers affect the chemical composition of seawater? Answering these questions is very important for understanding the evolution of life in the late Precambrian, because after this glacial event the evolution of life on earth achieved a great leap, from simple eukaryotes to metazoans, with multicellular animals appearing for the first time and the earth's oxygen greatly increasing; this too is a question that scientists have persistently studied.
The Snowball Earth Hypothesis holds that there was a severe glaciation (Snowball Earth) event in the Neoproterozoic, during which all the earth's oceans froze and the continents were likewise covered by ice and snow, with only a small amount of liquid water, melted by geothermal heat, beneath an ocean ice layer about 2 km thick. Caltech geology professor Joseph Kirschvink was the first to use the term \"Snowball Earth\", in 1992 [2]. He argued that at that time the planetary albedo at middle and high latitudes was very high, leading to the formation of extensive glaciers; sea level then dropped, increasing the land area, which further raised the earth's albedo. At the same time, the increase of continental area in the tropics favored the weathering of silicate rocks and the burial of atmospheric CO2, strengthening the \"icehouse effect\". The continued influence of these two factors caused the earth to cool steadily, forming a \"snowball earth\". After the \"Snowball Earth\" formed, volcanic activity on the earth continued to release greenhouse gases such as CO2; after long accumulation these gases finally became strong enough to produce a huge \"greenhouse effect\", the earth's temperature rose, and the global glaciers melted again. Hoffman et al. [3] of Harvard University further developed the Snowball Earth hypothesis. They noted that 800 million years ago the earth's continents were not separated but joined near the equator as the supercontinent Rodinia. Stretching of the lithosphere split Rodinia into several smaller landmasses, greatly lengthening the coastline. This had two consequences: first, biological activity along the shores increased, and enhanced photosynthesis absorbed large amounts of CO2; second, weathering along the enlarged continental margins consumed still more CO2. These two processes rapidly reduced atmospheric CO2, the \"greenhouse\" became an \"icehouse\", producing a huge ice and snow cover, which in turn triggered a runaway ice-albedo feedback and finally formed a \"Snowball Earth\". It is calculated that the ice cover was then 1 km thick and advanced to the vicinity of the equator, and that the earth's temperature dropped to about -50\u00b0C. Buried by ice and snow, both photosynthesis and continental silicate weathering ceased. However, the volcanic activity accompanying the break-up of the supercontinent grew ever stronger, releasing large amounts of CO2; after accumulating for as long as 10 Ma, the CO2 finally became sufficient to create a \"greenhouse effect\" that rapidly melted the \"Snowball Earth\". During the melting, the temperature of the whole ocean may have reached above 50\u00b0C. The Snowball Earth Hypothesis not only explains the low-latitude, low-altitude glacial phenomena and links global climatic cooling with supercontinent assembly and break-up, but also accounts for geological and geochemical phenomena such as the banded iron formations, the post-glacial cap carbonate deposition and the carbon isotope excursions in carbonate rocks, so it has attracted widespread attention from scientists. If the Snowball Earth did occur, it means that a third climate state existed in earth history, the pan-glacial state[4].
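The ice-albedo feedback at the heart of the hypothesis can be illustrated with a zero-dimensional energy-balance sketch: raising the planetary albedo lowers the equilibrium temperature, which favors more ice and a still higher albedo. The numbers below are illustrative only; among other simplifications, greenhouse warming is ignored and the present-day solar constant is used, although the Neoproterozoic sun was a few percent fainter.

```python
# Zero-dimensional energy balance: absorbed sunlight = emitted infrared,
# S0*(1 - albedo)/4 = sigma*T^4, solved for the equilibrium temperature T.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1361.0       # present-day solar constant, W/m^2

def equilibrium_T(albedo, solar=S0):
    """Effective radiating temperature (K), ignoring greenhouse warming."""
    return (solar * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(f'modern-like albedo 0.30: {equilibrium_T(0.30):.0f} K')
print(f'ice-covered albedo 0.60: {equilibrium_T(0.60):.0f} K')
# Raising the albedo from 0.3 to 0.6 drops the equilibrium temperature by
# roughly 30 K, illustrating the runaway tendency; a long volcanic CO2
# build-up is then needed to close the gap and melt the ice again.
```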
However, the Snowball Earth hypothesis has also met challenges [5~7]. Paleoclimate model calculations and some sedimentary records show that the Neoproterozoic glaciers did not cover the entire earth; rather, there was a dynamic glacial environment with locally unfrozen areas, leading to the snow-water earth (\"slushball\") hypothesis[5, 7]. The Snowball Earth debate extends far beyond geology. On the basis of solar radiation energy modelling, the paleoclimate system can be divided into three discontinuous climate states[4], namely the glacial-interglacial state (as in the Quaternary), the ice-free state (as in most of the Mesozoic) and the pan-glacial state (the Neoproterozoic Snowball Earth). Because the Snowball Earth hypothesis connects extreme climate change (development of low-latitude, low-altitude glaciers) with the evolution of global plate tectonics (break-up of a supercontinent), geochemical anomalies (carbon isotope excursions) and the late Neoproterozoic evolution of life (diversification of multicellular metazoans) into a single series of earth-system upheavals, it has greatly stimulated enthusiasm for studying the abrupt changes of the Neoproterozoic earth system. Many special studies at home and abroad have addressed the scientific issues related to these abrupt changes, but accurate dating of the sedimentary records of glacial activity remains a problem. It is generally believed that there were four periods of glacial activity in the Neoproterozoic[8~13], namely the Kaigas (780~740 Ma), Sturtian (720~660 Ma), Marinoan (650~635 Ma) and Gaskiers (585~575 Ma) glaciations, but whether all of them were global ice ages is still controversial[8, 9]. One view is that the deposits of the Sturtian and Marinoan glaciations are exposed on all continents and are glaciomarine, so these two glaciations were global (Fig. 1), whereas the Kaigas and Gaskiers glaciations were only local mountain glaciations [10, 13]. Another view is that all four so-called glacial periods were continental glaciations that are not globally correlatable, their different start and end times reflecting differences in depositional environment between continental interiors and margins in different periods[6]. Since the ages of glacial strata are mostly inferred by indirect methods, the specific age of each glacial period remains much debated[8~13]. Accurate dating of the Neoproterozoic glacial formations not only provides a precise age framework for these special rock-stratigraphic units but, more importantly, bears on the correct understanding of the depositional environments of the glacial formations themselves and of the underlying and overlying strata. Did the climate turn cold before the global ice ages? How long did the cold climate last before the ice ages began? How can cold climates in geological time be identified, and what are their signatures? Did Neoproterozoic glacial activity start before or after the break-up of the supercontinent? What impact did rift magmatism have on local or global glaciers during the Cryogenian? Why was there a locally active hydrological cycle during a Snowball Earth event? Did rift magmatism drive groundwater hydrothermal circulation and water-rock reaction? Can records of Neoproterozoic hydrothermal activity still be found today?
Besides the sun's heat supply to the earth's surface, what role did the heat delivered to the surface by rift magmatism and hydrothermal circulation in the earth's interior play in climate change? Fig. 1 \tDistribution of glaciomarine deposits (red five-pointed stars) on the continents during the late Neoproterozoic Sturtian and Marinoan glaciations [4]. During the Neoproterozoic, all the world's continents display magmatic activity to different degrees, but the temporal-spatial and causal relationships between this magmatism, the break-up of the supercontinent and the \"Snowball Earth\" event remain quite controversial [10, 14]. Is there a causal relationship between Cryogenian magmatism and hydrothermal circulation? Was low-\u03b418O magma (depleted in the heavy oxygen isotope) formed by recycling of local crustal material in the rift tectonic belts? Is the late Precambrian evolution of life related to hydrothermal activity in rift tectonic belts? Besides changes in atmospheric CO2 concentration, did the heat of magmatic activity play a role in melting local ice and snow? Does the formation of low-\u03b418O magma in rift belts show that surface water entered deep magma chambers within the earth? Was there significant exchange of energy and matter between the earth's interior and exterior during the Snowball Earth event? Clearly, understanding the temporal and spatial relationships between Cryogenian magmatic activity and hydrothermal circulation is another important link. Systematic geochemical studies of the protoliths of metamorphosed igneous rocks on the northern margin of the South China Block[10, 14] have found a correspondence between mineral oxygen isotope composition and zircon U-Pb age: zircons of middle Neoproterozoic age consistently exhibit lower oxygen isotope ratios. These low-\u03b418O zircons crystallized from low-\u03b418O magmas formed by partial melting of hydrothermally altered rocks, indicating that rift magmatism and hydrothermal alteration reached supersolidus temperatures. It can be inferred that groundwater (hydrothermal) circulation in the surrounding rocks was activated while magma was emplaced in the rift setting, the magmatic rocks themselves being depleted in 18O by high-temperature water-rock reaction; recycling of local crustal material occurred at fault-subsidence sites, where the hydrothermally altered rocks were remelted to form low-\u03b418O magma. Evidently a large-scale hydrothermal circulation driven by magmatic heat appeared on the earth in the middle Neoproterozoic, a typical example of the interaction between \"ice\" and \"fire\" in geological history, which is of great significance for understanding the exchange of energy and matter between the earth's interior and exterior during the Cryogenian Period. The Neoproterozoic was one of the most significant periods of change in the history of the earth's evolution [1, 4]: supercontinent break-up and rift magmatism, low-latitude glaciation and climate change, water-rock reaction and magmas depleted in the heavy oxygen isotope (18O), the end of the billion-year-long sulfidic ocean, the second rise of atmospheric oxygen, the diversification of multicellular metazoans and the Cambrian explosion are all related to the changes in the earth's environment during this period.
From all this derive many important scientific questions related to the abrupt changes of the Neoproterozoic earth system, such as: How large was the global continental area covered by ice and snow (Snowball Earth hypothesis versus snow-water earth hypothesis)? What caused the cold climate and even the large areas of sea ice worldwide? What caused the Snowball Earth event to start and to end? Did magmatism end, or start, the Snowball Earth event? Is there a direct relationship between middle Neoproterozoic rift magmatism and the melting of ice and snow? What caused the changes in the carbon isotope composition of seawater under the ice cover? Why did the largest carbon isotope excursion in earth history occur around the ice age? How did atmospheric and ocean chemistry change during this period? Was it the freezing processes, or the hydrothermal activity associated with rift magmatism, that promoted the late Precambrian biological radiation represented by multicellular metazoans? These issues involve a series of major scientific questions, such as the exchange of energy and matter between the earth's inner and outer spheres during the late Precambrian; they have attracted extensive attention from earth scientists and have become one of the focal difficulties of international and domestic earth science research in recent years. To solve these problems it is necessary to strengthen research on the formation environments and material sources of Neoproterozoic chemical sedimentary rocks, to distinguish the similarities and differences in provenance and chemical composition between sedimentary and diagenetic fluids, to study the tectonic background and initiation mechanism of hydrothermal alteration, and to seek mineral oxygen isotope proxies for continental glacial activity. Testing the Snowball Earth hypothesis is, in the end, a way to understand the link between climate change at the earth's surface and the movement of matter in the earth's interior.", "The theory of plate tectonics is one of the most fundamental theories of earth science, providing a theoretical framework for understanding the distribution of earthquakes, volcanoes and orogenic movements. Both continental and oceanic plates are lithospheric blocks floating on the asthenospheric mantle, and the blocks move relative to one another. The oceanic lithosphere forms at mid-ocean ridges, cools and thickens as it moves away from the ridge, and subducts and is destroyed at trenches. One of the main reasons why the continental drift hypothesis was not accepted before the 1950s was the lack of a driving mechanism that could explain large-scale movement of the continents. It is now generally believed that mantle convection is the main cause of the relative motion of the plates. Mantle convection is the main way in which the heat of the earth's interior spreads outward, and also the way in which that heat is converted into the kinetic energy of surface plate motion; plate tectonic movement can be regarded simply as the surface manifestation of mantle convection. But how exactly is heat converted into the driving force of plate motion? And how can the driving forces of plate tectonic movement be determined precisely? In the 1920s and 1930s, Wegener held that tidal forces and differential centrifugal forces were the main driving forces of continental drift, but Jeffreys et al.
proved that tidal forces and differential centrifugal forces are several orders of magnitude smaller than the forces required for continental drift. Thus, although Wegener publicly put forward his idea of continental drift in 1912, and by the third edition of his book \"The Origin of Continents and Oceans\" (1922) the idea and its supporting evidence had been fully set out, the idea of continental drift was essentially abandoned for more than 20 years, from the 1930s to the mid-1950s [2, 3]. Interestingly, the concept of mantle convection had appeared as early as the 19th century, but it did not attract Wegener's attention; instead it was Holmes who, in the 1930s and 1940s, tried to show that mantle convection is the internal dynamic mechanism of continental drift[4]. By the early 1960s the seafloor spreading theory was established, followed by the plate tectonic theory incorporating basic concepts such as continental drift. With the in-depth study of mantle convection in the 1980s and 1990s, a basic consensus was reached on the types of forces that mainly drive plate tectonic movement [2, 3]. To understand the driving forces of plate motion, let us first look at how the plates actually move today. Figure 1 shows the present distribution of the major global plates and their velocities[5], and Table 1 lists the geometries and velocity statistics of the 12 major plates given by Forsyth and Uyeda[6]. These results show that plate velocity is not directly related to plate size, nor closely related to the length of oceanic ridge on the plate boundary, but that it is negatively correlated with the continental area contained in the plate and positively correlated with the length of trench on the plate boundary. In addition, plate velocity seems to have little relationship with the age of the subducting lithosphere (Fig. 1, Fig. 2): the lithosphere of the Pacific plate at subduction is much older than that of the Nazca plate [7] (Fig. 1), yet their velocities are comparable [5, 6] (Fig. 2). Figure 1 \tThe major global plates and their velocities [5] Table 1 \tThe geometries and velocities of the major global plates [6] Plate motion is the surface expression of mantle convection, and mantle convection results from the uneven distribution of temperature, and hence of density, within the earth[3]; the analysis of plate driving forces proceeds from this idea. The oceanic plate forms at the mid-ocean ridge and gradually cools and thickens as it moves away. Thermal contraction makes the density and thickness of the lithosphere increase with distance from the ridge, with the direct consequence that seafloor depth also increases away from the ridge; at the trench, the lithosphere subducts into the asthenospheric mantle (Figure 3). Force analysis of this plate model groups the forces acting on a plate into three categories[2, 6] (Fig. 3): subducting slab pull, ridge push and mantle viscous forces.
The cold lithosphere subducted at a trench is denser than the surrounding hot mantle, generating a downward gravitational force that is transmitted through the rigid lithosphere to the whole plate; this constitutes the so-called slab pull. The mid-ocean ridge stands higher than the surrounding seafloor, and the gravity acting on this elevated material pushes the two flanks of the ridge apart; this gravity sliding is generally called ridge push. When there is relative motion between a plate and the asthenospheric mantle, mantle viscous forces come into play: if the plate moves faster than the asthenospheric mantle beneath it, the plate feels the viscous resistance of the underlying mantle; conversely, if the plate moves more slowly than the underlying asthenospheric mantle, the viscous traction at the base of the plate acts as a driving force. In addition to these three main forces, a plate is also subject to trench resistance or trench suction, depending on the motion of the trench and the two adjacent plates (Figure 3). Fig. 2 \tAge distribution of the oceanic lithosphere [7] Fig. 3 \tSchematic force analysis of an oceanic lithospheric plate [3] According to the force analysis of this plate model, combined with the actual pattern of plate motion today, one view holds that slab pull is the main driving force of plate motion, with ridge push playing only a secondary role [2, 3, 6]. However, some scholars argue that if the influence of mantle viscosity is considered, the net pull transmitted from the subducting slab to the oceanic lithosphere may be roughly equivalent to the ridge push[2, 3]. In fact, the observations above show that the main oceanic lithospheric plates move at essentially similar speeds, which indicates that the viscous force at the base of the oceanic lithosphere is very small and that the negative buoyancy of the subducted lithosphere is largely balanced by the viscous drag of the mantle on the slab. The obvious difference is that plates containing continents move more slowly; whether this is because such plates contain few or no subducting slabs, or because their continental lithospheric roots experience greater viscous resistance, is not yet clear [3]. The simple force analysis of the plate model does give an intuitive picture of the driving forces of plate tectonics, but it does not describe the force state of the whole system. Studies show that a steadily cooling, convecting mantle develops boundary layers at its upper and lower surfaces: instability of the hot lower boundary layer is likely to generate mantle plumes, while instability of the cold upper boundary layer generates cold downwellings [2, 3, 8]. The upper cold boundary layer can be identified directly with the lithosphere, and the downwelling generated by its instability is plate subduction[2]. When the subducting lithosphere passes through the phase transitions at 410 km and 660 km, it is also acted on by forces arising from the phase changes. In addition, the movement of any density anomaly inside the mantle, including instabilities of the lower boundary layer, exerts force on the surface plates through the mantle flow [2, 3, 9, 10].
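The relative magnitudes of these forces can be illustrated with a textbook order-of-magnitude scaling for slab pull per unit length of trench, F ~ (slab density excess) x g x (slab thickness) x (slab length). All numbers in the sketch below are rough illustrative values, not measurements.

```python
# Order-of-magnitude estimate of slab pull per unit trench length.
g = 9.8           # gravitational acceleration, m/s^2
delta_rho = 70.0  # excess density of the cold slab vs ambient mantle, kg/m^3
h = 80e3          # slab thickness, m
L = 600e3         # slab length hanging in the upper mantle, m

slab_pull = delta_rho * g * h * L   # N per metre of trench
print(f'slab pull ~ {slab_pull:.1e} N/m')
# ~3e13 N/m. Published order-of-magnitude estimates of ridge push are
# roughly ten times smaller (~1e12-1e13 N/m), which is one reason slab
# pull is commonly regarded as the dominant driving force.
```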
A correct understanding of the driving forces of plate motion is the basis for any further understanding of how plate tectonics develops and evolves. Only by starting from the convective system as a whole can we gain a comprehensive picture of those forces [2, 3, 9, 10], yet current research models still contain too many artificial ingredients [9, 10]. A complete understanding requires a model in which the plates and mantle convection are coupled and governed by a unified set of physical laws, that is, a convection model in which plates emerge naturally from the model itself. What is more, there is at present no reliable way to measure the driving forces of plate motion directly and so test the theoretical models.", "The depth of 660 km below the Earth's surface is an important interface of the Earth. It is a phase-transition surface at which mantle peridotite transforms from the spinel phase to the perovskite phase; geophysically it appears as a seismic discontinuity at which the propagation speed of seismic waves jumps. This boundary divides the mantle into two layers, the upper mantle and the lower mantle. Is there, then, a difference in material composition between the two? Beginning in the 1970s, with the development of geochemistry, especially isotope geochemical analysis techniques and the accompanying theory, geochemists inferred from the differences between mid-ocean ridge basalts and oceanic island basalts that the chemical compositions of the upper and lower mantle differ markedly: the upper mantle is depleted in large-ion lithophile elements and other incompatible elements that readily enter magma, whereas the lower mantle is relatively enriched in incompatible elements. Beyond these elemental differences, there are also significant differences between the upper and lower mantle in radiogenic isotope systems such as strontium and neodymium. Since the radioactive parents of these isotopes have very long half-lives, the isotopic contrast implies that the chemical difference between the upper and lower mantle may be more than a billion years old [1~4]; this is the so-called chemical stratification of the mantle (Mantle Chemical Stratification). Considering that simple whole-mantle convection would mix upper- and lower-mantle material and destroy this stratification, many scholars argued that the upper and lower mantle must convect separately, with little material exchange between them. Geophysical studies, however, indicate that subducting slabs can cross the boundary between the upper and lower mantle and enter the lower mantle [5]. Since that boundary is most likely a phase-transition surface controlled by temperature and pressure, its depth stays essentially fixed; if subducting slabs cross it into the lower mantle, an equivalent amount of lower-mantle material must pass upward across it into the upper mantle. Some researchers regard this large-scale exchange of upper- and lower-mantle material as direct evidence of whole-mantle convection.
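As a rough plausibility check on such large-scale exchange, one can estimate how long plate recycling would need to process one whole mantle volume. The sketch below uses commonly quoted round numbers for total trench length, mean convergence rate, and plate thickness; all of them are assumptions for scale only:

    # Back-of-envelope: if subducting slabs enter the lower mantle, how long
    # would plate recycling take to process the whole mantle once?
    # All inputs are rough, commonly quoted values, used purely for scale.

    trench_length = 4.5e4 * 1e3   # total trench length, m (~45,000 km)
    v_subduction = 0.06           # mean convergence rate, m/yr (~6 cm/yr)
    plate_thickness = 1.0e5       # subducting lithosphere thickness, m

    flux = trench_length * v_subduction * plate_thickness   # m^3/yr

    mantle_volume = 9.1e20        # whole-mantle volume, m^3 (~9.1e11 km^3)
    print(f"subduction flux ~ {flux / 1e9:.0f} km^3/yr")            # ~270
    print(f"one mantle mass processed in ~ {mantle_volume / flux / 1e9:.1f} Gyr")

The answer, a few billion years, is comparable to the age of the Earth, which is why subduction into the lower mantle, if it occurs, is expected to leave a whole-mantle geochemical imprint over Earth history.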
Considering that geochemical observations typically record traces left over the Earth's long evolution, whereas geophysical methods observe, directly or indirectly, processes under way today, some researchers proposed that the stratification of the upper and lower mantle was established early in Earth history, while the penetration of subducted oceanic lithosphere across the upper-lower mantle boundary may be only a recent phenomenon, too limited to have substantially altered the chemical compositions of the two reservoirs; this would reconcile the contradiction between the geophysical and geochemical observations [6]. This model, however, conflicts with the geochemical evidence. For example, the ratios of certain characteristic element pairs, such as Nb/U and Ce/Pb, are essentially the same in mid-ocean ridge basalts and oceanic island basalts, and are much higher than the corresponding ratios of the primitive mantle [1, 7]. Because the elements in each pair do not fractionate significantly from one another during partial melting of the mantle or during magma fractional crystallization, their ratios in basalts can represent the ratios of their source regions. The source of mid-ocean ridge basalt is generally taken to be the depleted upper mantle, while the source of oceanic island basalt is inferred to lie in the lower mantle; hence the Nb/U and Ce/Pb values of the upper and lower mantle are essentially the same, and both are much higher than the primitive-mantle values. Among the major geological processes in nature, it is plate subduction that can change Nb/U and Ce/Pb significantly. During subduction and dehydration, U and Pb are far more mobile than their partners Nb and Ce, so the fluids released from the subducting slab, the associated island-arc volcanic rocks, and even the continental crust all have low Nb/U and Ce/Pb values, from which it follows that the residual subducted slabs have high Nb/U and Ce/Pb values. That the Nb/U and Ce/Pb values of mid-ocean ridge basalts and oceanic island basalts are both much higher than the primitive-mantle values therefore means that large amounts of residual subducted slab material reside in both source regions, and that the two have been affected to a similar degree. In other words, over the long history of the Earth's evolution both the upper and the lower mantle have been closely linked to recycled subducted oceanic lithosphere; subduction of lithosphere into the lower mantle is a common phenomenon in Earth evolution, not a peculiarity of the present day.
Figure 1 Compositional differences and material exchange between the upper and lower mantle. Left: the early view, in which subducted slabs, being insufficiently dense at the upper-lower mantle boundary, cannot cross it and instead accumulate there, while mantle plumes rise from the lower mantle. Right: a model interpretation of geophysical data showing oceanic lithosphere crossing the boundary into the lower mantle [5]
If large-scale material exchange between the upper and lower mantle has always taken place, how can the two preserve such stark differences in chemical and isotopic composition?
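Before turning to these questions, the claim above that ratios such as Nb/U survive melting can be illustrated with the standard batch-melting equation; the source concentrations and partition coefficients below are illustrative placeholders, chosen only because both elements are highly incompatible:

    # Why ratios of similarly incompatible elements (e.g. Nb/U) survive
    # partial melting: batch-melting sketch, C_L = C_0 / (D + F*(1 - D)).
    # Concentrations and D values are hypothetical placeholders.

    def batch_melt(c0, D, F):
        """Concentration in the liquid for batch melting with melt fraction F."""
        return c0 / (D + F * (1.0 - D))

    c0_Nb, c0_U = 0.7, 0.02     # assumed source concentrations (ppm)
    D_Nb, D_U = 0.003, 0.004    # both highly incompatible, nearly equal D

    for F in (0.01, 0.05, 0.10, 0.20):
        nb = batch_melt(c0_Nb, D_Nb, F)
        u = batch_melt(c0_U, D_U, F)
        print(f"F={F:.2f}: melt Nb/U = {nb / u:.1f} (source Nb/U = {c0_Nb / c0_U:.1f})")

For any melt fraction the melt ratio stays within a few percent of the source ratio, which is why Nb/U and Ce/Pb can be read as source-region fingerprints, changed only by processes such as subduction dehydration that treat the paired elements differently.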
Moreover, why are the Nb/U and Ce/Pb values of the upper and lower mantle so similar while their isotope and trace-element compositions differ so markedly? To answer this, scientists have proposed a series of models and hypotheses intended to reconcile the geochemical and geophysical observations [8~11]. Among them the \"Transition Zone Chemical Filter\" hypothesis [10] and the \"Serpentinite Segregation\" hypothesis [11] are representative. The transition-zone chemical filter hypothesis holds that the mantle between 410 and 660 km, the transition zone (Transition Zone), is water-rich, its minerals being able to store relatively large amounts of water. When lower-mantle material rises across the 660 km interface, hydrous minerals decompose and release water; since water greatly lowers the solidus of silicates, this dehydration triggers \"dehydration partial melting\", and incompatible elements enter the melt. The residue, now depleted in incompatible elements, continues to rise, while the melt, being denser than the surrounding mantle rocks under these temperature and pressure conditions, stays in the transition zone and is eventually carried back down into the lower mantle by subducting slabs; the incompatible elements are thereby returned to the lower mantle. A mantle plume rising from the lower mantle, being relatively dry and fast, is little affected by this chemical filter and does not undergo dehydration melting in the transition zone, so its products, the oceanic island basalts, retain their enrichment in incompatible elements [10]. The serpentinite segregation hypothesis holds that the lithospheric mantle of a subducting slab is usually serpentinized to a considerable degree. Because serpentine is less dense than mantle peridotite, the serpentinized portion separates from the slab during subduction, while the oceanic crust subducts onward into the lower mantle. Since the oceanic lithospheric mantle is depleted while the oceanic crust is relatively enriched in incompatible elements, the net result is the chemical stratification of the upper and lower mantle [11]. A recent study found that serpentinite in the Zhaheba ophiolite, Xinjiang, was probably subducted to depths of more than 300 km in the mantle and then returned to the surface by buoyancy [12], a finding that supports the serpentinite segregation hypothesis. Both hypotheses appear able to explain the chemical stratification of the mantle, but both require further testing. The main questions facing the former are: Is the elemental fractionation produced by dehydration partial melting in the transition zone similar to that produced by partial melting at low pressure? Does a melt layer actually exist in the transition zone, and can such melt be detected geophysically? The main questions facing the latter are: How much of the oceanic lithospheric mantle is serpentinized? And during subduction, is the serpentinized portion merely dehydrated, or does it physically separate from the slab?
Testing these hypotheses will require the combined efforts of geophysics, geochemistry, geology, and high-temperature and high-pressure experiments.", "High-temperature and high-pressure experiments show that after a subducted oceanic slab crosses the 660 km interface into the lower mantle, its upper basaltic crust undergoes phase transitions that make it denser than the surrounding mantle rocks [1]; once in the lower mantle, therefore, the subducted slab can hardly return to the upper mantle by its own buoyancy. Under lower-mantle temperatures and pressures, basaltic and picritic magmas are likewise denser than the surrounding mantle rocks [2], so even if the subducted oceanic crust melts, the resulting magma still cannot rise back into the upper mantle buoyantly. On the basis of these experimental results, some scholars concluded that the lower mantle is the \"cemetery\" of subducted oceanic crust: once oceanic crust is subducted into the lower mantle, it never returns to the surface [3]. This is the so-called lower-mantle density trap (Lower Mantle Density Trap). Compared with the mantle sources of oceanic basalts, subducted oceanic crust is rich in incompatible elements, and the inevitable consequence of the density trap is a marked compositional difference between the upper and lower mantle. This agrees with observation: the so-called chemical stratification of the mantle is probably related to the accumulation of subducted oceanic crust in the lower mantle [3, 4]. Is the subducted oceanic crust, then, truly unable to escape the lower-mantle density trap, remaining in the lower mantle forever? In conflict with this view stands an important hypothesis about mantle plumes: that the recycling of subducted slabs is the principal cause of mantle plumes.
Figure 1 Density relations at the upper-lower mantle boundary [3]. Left: density of solid subducted oceanic crust compared with mantle peridotite (red line); above the 660 km interface the subducted crust is denser than peridotite and can keep sinking, at about 660 km it is briefly less dense than peridotite, and once it passes through the interface its density again exceeds that of peridotite. Right: basaltic and komatiitic melts are denser than mantle peridotite over a wide depth range.
Oceanic islands, the volcanic chains they form, and large igneous provinces are all regarded as products of mantle plumes. In 1980, Hofmann and White proposed that subducted oceanic crust, after entering the mantle, separates from the surrounding peridotite and, under the control of density, sinks into the deep mantle to accumulate at some level, probably the core-mantle boundary; this accumulation layer can locally reach 100 km in thickness. Eventually, heated from within, it rises as diapirs and undergoes partial melting, forming the plume sources of oceanic island basalts and other hotspot volcanic rocks [5]. Once put forward, this model quickly attracted the attention of the research community.
A great deal of geochemistry [6], geophysical simulation of the recycling of subducted oceanic crust [7], petrology, and high-temperature and high-pressure experimentation [8] has been carried out around this model, and the results support the presence of recycled subducted oceanic crust in mantle plumes; the model therefore long remained the dominant theoretical framework. The most important geochemical observations supporting recycled oceanic crust in mantle plumes come from characteristic element ratios such as Nb/U and Ce/Pb: representative plume-derived oceanic island basalts have Nb/U and Ce/Pb values similar to those of mid-ocean ridge basalts, and both are much higher than the corresponding primitive-mantle ratios [9]. Because the two elements in each pair have nearly identical incompatibilities during mantle partial melting and magma fractional crystallization, they do not fractionate appreciably from each other; among the major geological processes in nature, it is plate subduction that changes Nb/U and Ce/Pb. During subduction and dehydration, U and Pb are far more mobile than their partners Nb and Ce, so changes in these ratios point to plate subduction, and the fact that Nb/U and Ce/Pb in both the upper and lower mantle exceed the primitive-mantle values is evidently linked to the recycling of subducted slabs. Beyond composition, the evolution of mantle plumes has its own distinctive features. A plume is generally thought to begin with an enormous plume head, whose products are large igneous provinces such as Emeishan in China, the Siberian Traps in Russia, the Deccan Traps in India, and Ontong Java in the southwest Pacific. These provinces can form within 1-2 Ma, erupting enormous volumes of magma, up to millions of cubic kilometres, and their magmas are comparatively silicon-rich. The head is followed by a slender plume tail, which can persist for tens of millions or even hundreds of millions of years, usually erupting much smaller volumes of magma with a larger proportion of silicon-poor compositions; the famous Hawaiian-Emperor chain, for example, has existed for at least 70 Ma. Why do mantle plumes have such enormous melting capacity? Are their special geological features directly related to the recycling of subducted oceanic crust? Why do plume head and plume tail differ so greatly? These are questions worth studying. Theoretical, experimental, and numerical studies in geodynamics show that entrainment by a thermal plume inevitably carries some of the subducted oceanic crust accumulated at the base of the mantle away from the bottom [10~12]. Geophysical simulations, which take into account the density of the chemical layer at the base of the mantle, the size of the thermal plume, the viscosity structure of the mantle, and the temperature contrast between the plume and the ambient mantle, indicate that oceanic crust accumulated at the base of the mantle can survive throughout Earth history [10~12]; this is also supported by geochemical observations [13]. Moreover, entrainment of subducted oceanic crust enhances the melting capacity of the plume head [14].
In essence, these models and simulations hold that the subducted oceanic crust is not the cause of the mantle plume but merely an appendage entrained during its ascent. Seismic tomography shows that many mantle plumes do originate at the base of the mantle [15]. At the core-mantle boundary there are regions of anomalously low seismic velocity, and studies show that these low-velocity regions are not simply thermal anomalies: they may contain dynamically unstable, chemically anomalous layers [16], and local partial melting may have occurred [17]. Plate reconstructions show that many large igneous provinces correspond to these low-velocity regions, implying that mantle plumes may all originate from the large low-velocity regions at the core-mantle boundary [18]. If these anomalous regions are remnants of subducted oceanic crust, then subducted oceanic crust should be the direct cause of plume formation rather than a passive passenger, and the problem of the lower-mantle density trap can no longer be avoided.
Fig. 2 The mineral segregation model [19]. Subducted oceanic crust is transformed in the lower mantle into four main minerals: Mg-silicate perovskite, Ca-silicate perovskite, stishovite, and the Ca-ferrite phase; the first two are denser than the surrounding mantle rocks, which is why subducted crust is denser than the ambient peridotite in the lower mantle, while the latter two are less dense than the mantle rocks.
The simplest way to lower the density of subducted oceanic crust is to separate its light and heavy minerals, and the partially molten low-velocity zone at the core-mantle boundary is the most favourable place for such separation. The separated light fraction contains abundant stishovite, a silica phase that is an effective flux and can greatly increase the degree of partial melting of the mantle during ascent. On this basis, Sun et al. recently proposed the \"Mineral Segregation Model\" [19]. According to this model, subducted oceanic crust transforms in the lower mantle into the four minerals named above, Mg-silicate perovskite, Ca-silicate perovskite, stishovite, and the Ca-ferrite phase, of which the first two are denser, and the latter two less dense, than the surrounding mantle rock. When subducted oceanic crust enters the lower mantle, its density rises sharply and it sinks to the core-mantle boundary. Heated by the surrounding mantle and the core, and by its own radioactive elements, its temperature keeps rising and its internal viscosity falls, forming a low-velocity zone at the core-mantle boundary; at the same time the grains of its constituent minerals gradually coarsen. Under the combined effect of the two, the constituent minerals begin to segregate under the control of density. As the heavy minerals are lost, the density of the subducted crust decreases and its silicon content rises, until it finally migrates upward under buoyancy.
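Whether such density-controlled segregation can actually occur depends critically on the local viscosity, one of the open questions noted just below. A minimal Stokes-settling sketch makes the point; the grain size and density contrast are assumed purely for illustration:

    # Stokes settling velocity of a dense mineral grain,
    # v = 2 * drho * g * r^2 / (9 * eta). The outcome hinges almost
    # entirely on viscosity; drho and r below are assumed values.

    import math

    g = 9.8         # m/s^2 (roughly similar at the core-mantle boundary)
    drho = 200.0    # grain-matrix density contrast, kg/m^3 (assumed)
    r = 1e-3        # grain radius, m (1 mm, assumed)
    SEC_PER_YR = 3.15e7

    def stokes_v(eta):
        return 2.0 * drho * g * r**2 / (9.0 * eta)

    for eta in (1e2, 1e6, 1e10, 1e15):   # Pa*s: melt-rich -> solid-state creep
        v = stokes_v(eta)
        print(f"eta = 10^{math.log10(eta):.0f} Pa*s: "
              f"v = {v:.1e} m/s = {v * SEC_PER_YR:.1e} m/yr")

Millimetre-scale grains settle at geologically useful rates only if melt lowers the effective viscosity by many orders of magnitude relative to solid-state creep, which is why the model leans on partial melting in the low-velocity zone.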
The upward-migrating recycled component is silicon-rich and hot, and can markedly lower the solidus of mantle peridotite, greatly increasing the degree of melting and producing large volumes of magma. Because of the added silicon, the magma produced is still silicon-rich basalt even at high melt fractions, and this forms the plume head: a large igneous province. After the extraction of large amounts of melt, the rocks in the plume conduit become relatively silicon-poor and the melt yield drops sharply; at this stage the slender plume tail forms [19]. The mineral segregation model thus overcomes the lower-mantle density trap while reasonably explaining phenomena such as large igneous provinces. The main questions it faces are: Is subducted oceanic crust really the main constituent of the low-velocity regions at the core-mantle boundary? Does the viscosity in those regions permit mineral segregation? How are trace elements partitioned during segregation, and can the model account for the trace-element compositions of the mantle plumes we observe today? Answering these questions will require high-temperature and high-pressure experiments together with geophysics, geochemistry, and other disciplines.", "The mantle plume theory evolved from the hotspot hypothesis. On the basis of the linear distribution of volcanic islands and seamounts in the Pacific, Atlantic, and Indian Oceans and the systematic progression of their eruption ages, Wilson (1963) proposed that there exist relatively stationary hotspots in the deep Earth where melting can occur; a chain of volcanic islands forms as a lithospheric plate drifts across such a hotspot [1]. Morgan (1971) interpreted Wilson's hotspots as the surface expression of slender columns of hot material, mantle plumes, rising slowly from the core-mantle boundary, and further conjectured that these plumes constitute the upwelling limbs of the mantle convection system. The mantle plume is independent of the plate tectonic system, and its driving force is the transfer of thermal energy from the core to the lower mantle; the enormous energy it carries causes large-scale melting of the deep mantle [1], forming large igneous provinces that are vast in extent (on the order of 10^6 km^2) yet brief in duration (<1 Ma). Plume activity may in turn affect plate breakup and motion: the breakup of the supercontinents Rodinia and Pangaea and the opening of the Red Sea, for example, may all be related to mantle plumes. The plate tectonic theory and the mantle plume theory are thus two complementary components of global tectonics. Since mantle plumes are thought to originate at the core-mantle boundary at 2900 km depth, they can be neither seen nor touched;
only seismic waves can detect anomalies deep within the Earth [2]. However, owing to the limitations of seismic data coverage and of modern seismological analysis methods, views on the deep structure of the Earth differ; furthermore, a slender plume tail is only about 100 km across, which is very small on a global scale, making seismic identification difficult. The result has been a variety of quite different findings and opinions, and a fierce debate over whether mantle plumes exist at all [3~4] (see http://www.mantleplumes.org). Morgan's plume theory rests on three hypotheses: ① plumes originate as slender columns of hot material rising slowly from the core-mantle boundary; ② the mantle beneath a hotspot is anomalously hot; ③ as a plate moves over a stationary hotspot, a volcanic chain forms whose ages become progressively older in the direction of plate motion. All three have been called into question in recent years, because some of the phenomena they predict are not supported by observation. For example, in classic plume regions such as Yellowstone and Iceland, seismic tomography shows that the thermal anomaly in the mantle is confined to the shallow mantle at 200-400 km, with no anomaly extending to 2900 km depth, and surface heat-flow values in hotspot regions are indistinguishable from those in regions without plume activity. The Hawaiian-Emperor volcanic chain is the birthplace of the plume theory; its two segments meet at a bend of about 60°. This was long attributed to a change in the direction of Pacific plate motion at ~43-47 Ma, but the trends of magnetic stripes and fracture zones, together with reconstructions of plate motion, do not support such a change; it has therefore been argued that the Hawaiian plume is not stationary, as once imagined, but moves at a rate of several centimetres per year. In addition, Anderson et al. [5] criticized the evidence for mantle plumes point by point and offered non-plume alternatives. ① One hallmark of a large igneous province is the eruption of a huge volume of magma in a short time, which the plume school holds can only be explained by the special dynamics of a plume; Anderson counters that short-lived eruptions may be linked to changes in plate stress or to plate reorganization, that mantle convection induced at plate edges and rifts can also produce voluminous magma, and that if the mantle source contains eclogite or other low-melting-point material, large basaltic provinces can form without anomalously high temperatures. ② Volcanic chains with linearly varying ages have nothing to do with plumes; they result from decompression melting of mantle material caused by lithospheric cracking and stress release. ③ Some large igneous provinces have anomalously high 3He/4He values, which geochemists regard as strong evidence that plumes come from the lower mantle.
Anderson argues that the evidence that high 3He/4He indicates a lower-mantle source is insufficient: the inference rests on the fact that high 3He/4He values are found in large igneous provinces already attributed to plumes, such as Yellowstone, Hawaii, and Iceland, so interpretation and hypothesis are being conflated; high 3He/4He values, he points out, can also come from the upper mantle. ④ A plume-related thermal anomaly should uplift the crust before magma eruption, yet some large igneous provinces preserve no geological record of crustal uplift, and the observations can be explained by plate stresses without invoking a plume. ⑤ Physical models suggest that the enormous pressure of the deep mantle inhibits the buoyant rise of hot material, so the plume model is untenable in terms of dynamics and rheology. The doubts about the plume theory thus stem mainly from the uncertainty of deep geophysical sounding. The key question is whether existing geophysical techniques can detect features that, like mantle plumes, are small (<100 km) on a global scale. Model interpretations of the same region differ widely according to the techniques and methods used: the seismic tomographic model of Iceland by Foulger et al. (2000) confined the mantle anomaly to the shallow mantle, whereas Zhao (2001) and Wolfe et al. (1997) concluded that the thermal anomaly beneath the region extends to the core-mantle boundary; the divergence may therefore reflect the imperfection of seismic tomography itself. Nonetheless, the observation that mantle thermal anomalies invariably occur beneath volcanic chains or large igneous provinces seems to favour the plume theory, since deep-seated thermal anomalies unconnected to plumes would be expected to appear at random locations on the Earth. It is worth noting that scientists at Princeton University, using new tomographic techniques, reported slender columnar anomalies rising from the core-mantle boundary beneath more than a dozen places, including Hawaii [6], lending support to the plume hypothesis; but the same results show that beneath some \"hotspot\" regions the anomalies bottom out within the lower mantle or at the upper-lower mantle boundary rather than at the core-mantle boundary. This led Courtillot et al. (2003) [7] to recognize three different types of hotspots (Fig. 1), originating respectively at the core-mantle boundary, at the upper-lower mantle boundary, and in the upper mantle.
Fig. 1 Three different types of mantle plumes [7]
Hotspots originating at the core-mantle boundary are called primary hotspots, and there are at least seven of them on Earth today, including Hawaii, Iceland, and Afar. Treating all of these types indiscriminately in discussions of mantle plumes is a major source of the current disagreement. A further, philosophical, question is whether discrepancies between model predictions and observations overturn the theoretical basis of a hypothesis, or merely call for refinement of the original theory.
Scholars opposed to the plume theory take certain tomographic results as primary evidence that plumes do not exist, while others note that the plume tail predicted by the theory is only about 100 km in diameter, a resolution existing techniques may not achieve, so differing interpretations at this stage are to be expected. Discrepancies between model predictions and observations may also arise because theoretical models rest on relatively simple assumptions, whereas the Earth's long evolution and the superposition of many geological processes make the geological system complex and the testing of predictions difficult. These are challenges that the development of any Earth-science theory must pass through, and they also present opportunities for researchers. Systematic, rigorous, and scientific identification of mantle plumes in different periods of geological history is the key to answering the questions above, and it demands new techniques and fresh research ideas. If the identification of modern plumes relies mainly on geophysical methods, the identification of ancient plumes depends more on the geological record, for which the following five criteria are important [8~9]: ① crustal uplift preceding large-scale volcanism; ② radiating dyke swarms; ③ deep geophysical signatures; ④ systematic age progression along volcanic chains; ⑤ the material composition of plume-derived magmas.", "The Earth is a huge machine with its own energy-supply, circulation, lubrication, display, and power systems. It has been running for nearly 4.6 billion years, from its birth to the present. If those 4.6 billion years of activity were compressed into a two-hour film, you would see the Earth rumbling along, with scenes far more shocking than volcanoes and earthquakes. To understand the workings of the Earth machine, however, one must understand each of its systems; to understand each system, one must know its components and their properties. What, then, are the components of the Earth's systems? They are the chemical elements, and the minerals, rocks, magmas, and fluids built from them. And how are the properties of minerals, rocks, magmas, and fluids to be studied? Understanding nature has traditionally meant observation and experimental simulation, but the 21st century offers another route: computational simulation. \"A scholar may know the affairs of the world without leaving home.\" In today's information age one need not even be a scholar; knowledge of the world is easily obtained. A doctor can now assess the condition of a patient hundreds of miles away without leaving the clinic. Can computational geoscientists, then, predict what is going on underground without going into the field? There are no fewer things underground than above it: the Earth's interior holds all manner of fluids, at least 3,000 mineral species, and the many rocks composed of those minerals, all of which undergo various changes under different temperature and pressure conditions.
The range of temperature and pressure inside the Earth is enormous: temperature runs from around plus or minus tens of degrees at the surface to more than 5,000 degrees in the core, and pressure from 1 atmosphere at the surface to about 3.5 million atmospheres in the core. Across such ranges, fluids, minerals, and rocks undergo not only quantitative changes (in density, enthalpy, chemical potential, and so on) but also qualitative ones (phase transitions and chemical reactions). With so many species and so many changes over so wide a range of conditions, observation and experimental simulation alone cannot suffice. A great deal has been learned about the Earth's interior materials from the observations and experiments of geologists, but many conditions in the deep Earth are extreme (temperatures and pressures too high, geological processes too long, phenomena too microscopic, compositions too numerous and complex, and so on), which imposes severe limits on observation and experiment; computational simulation overcomes many of these limits and thus offers a very promising approach to understanding the Earth and other planets. Computational geochemists have two major tasks: first, to do what experiments can do, establishing the validity of computational methods by reproducing existing experimental results; and second, once the methods are validated, to study problems that are inconvenient to observe and difficult to probe experimentally. Geochemistry concerns the composition, properties, and transformations of matter at different depths inside the Earth, and the specific questions are many. What is the composition of the core? What are the main minerals of the mantle, and do they change with depth? Do rocks melt to produce magma under upper-mantle conditions? How does the composition of the Earth's fluids change with depth, and are the fluids evenly distributed in the mantle? What does water look like in the deep Earth, and how do fluids react with minerals? Do minerals familiar at the surface (quartz, for example) change in the deep Earth? How do the density, viscosity, heat content (enthalpy), and chemical activity of a fluid vary with temperature and pressure? How are isotopes fractionated and enriched into minerals? Does the mantle produce gas or oil? How much thermal energy is available deep in the Earth? Under what conditions does combustible ice form? How do mineral dehydration and degassing trigger earthquakes? And so on. There are many such questions, and more specialized ones besides, every one of them difficult. Most require a combination of observation, experiment, and computation; some can be addressed only by computational simulation. A few examples below illustrate the unique ability of computational modelling to predict the composition and properties of Earth materials. The Earth's \"body fluids\": geological fluids. Fluids are the key to the lubrication system of the Earth machine; without fluids there would be no plate subduction, no volcanoes, no earthquakes, no concentration of ore-forming materials, indeed no internal activity of the Earth at all. Geological fluids are the \"blood\" of the Earth: they permeate all of its spheres and sustain its vigorous activity and evolution.
They are at once the carriers of material transport and the global cycling of the elements, and the media of energy transfer, and they play key roles in many geological processes. During plate subduction, for example, minerals bearing water, carbon, or sulfur travel down with the slab, undergo dehydration and degassing reactions, and yield water, carbon dioxide, sulfur dioxide, nitrogen, and other derived fluids. How, then, do these fluids keep the planet alive? A fluid's \"physiology\" can be worked out only once its physicochemical properties are known. For more than half a century, chemists and geochemists in hundreds of laboratories have measured a large body of data (about 60,000 data points) on the physical and chemical properties of fluids. Owing to experimental limitations, most measurements are confined to a few hundred degrees and a few thousand atmospheres; only for water have experiments recently reached 70,000 atmospheres, corresponding to a depth of about 230 km, and most experiments on fluid systems are limited to near-surface conditions. Geological fluids are diverse, and they exhibit different physical and chemical properties at different depths and in different media; even the large body of existing experimental data falls far short of covering the geological conditions at all depths in the Earth. To measure the many properties of so many systems under all geological conditions by experiment alone is impossible within any reasonable time; the only way forward is to build theoretical models. Such models must satisfy two conditions: ① predictive power: a model must not merely reproduce existing data, but must above all extrapolate beyond the space of existing experimental data; ② accuracy within, or close to, experimental error. The most common theoretical model is the equation of state. How to integrate molecular-dynamics computer simulation, quantum-mechanical simulation, statistical mechanics, and thermodynamic theory to construct equations of state for geological fluids, so as to predict the behaviour and properties of the various fluids under the temperature and pressure conditions found throughout the Earth's interior, is thus a major challenge facing computational geochemists. Isotopic studies of meteorites from Mars and from the asteroid Vesta show that their Fe and Si isotopic compositions are lighter than those of the Earth or the Moon [1~5]. The original explanation was that during the giant impact that formed the Moon, the enormous energy evaporated silicates and the escape of the light isotopes left the Earth and Moon with heavier Fe and Si; but Mg, whose volatility is similar to that of Si, shows no such difference, and neither does the more reactive Li [6, 7]. This led to another hypothesis: the Fe and Si isotopic anomalies of the Earth and Moon arise because the Earth's core formed before the Moon did. The core consists mainly of iron and nickel with a small amount of Si [8, 9]; equilibrium fractionation of Fe and Si isotopes between the core and the bulk silicate Earth [5] sent the light Fe and Si isotopes into the core.
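A well-known result of equilibrium stable-isotope theory helps frame this argument: at high temperature, the equilibrium fractionation between two phases A and B falls off roughly as the inverse square of temperature,

\[ 10^{3}\,\ln\alpha_{A\text{-}B} \approx \frac{A}{T^{2}}, \]

where the constant A depends on the contrast in bonding environment between the two phases (and, at core pressures, on pressure as well). Fractionations that are measurable at magmatic temperatures therefore become very small at core-forming temperatures, which is why detecting them, or ruling them out, under the relevant conditions is so demanding.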
How much isotope fractionation occurs under such extreme temperatures and pressures therefore becomes the key to testing this hypothesis. Experiments have reached 77,000 atmospheres without detecting any fractionation [10]; but that pressure is far below the pressure at the core, and theoretical simulations [11] find that the fractionation does exist and can fully account for the Fe and Si isotope differences, although experimental verification is not yet possible. Under what conditions do diamonds form deep in the Earth? Under what conditions does natural gas form? The water and carbon dioxide we see at the surface may, under the changed temperature, pressure, and oxygen-fugacity conditions of the deep Earth, give rise to other fluids (such as natural gas) and to diamond. Present technology cannot directly observe what becomes of water and carbon dioxide hundreds of kilometres down; experimental simulation is possible but very difficult. So far only Matveev (1997) has made experimental measurements at 1000 °C and 24,000 atmospheres [12], and Sokol et al. have measured the fluid compositions attending diamond formation at 1400 °C and 10,000 atmospheres [13]; for other temperature and pressure conditions there are no experimental studies. To map the formation mechanisms of natural gas (methane + hexane) and diamond across so wide a range of deep-Earth conditions by experiment alone is clearly impossible. Recently, by combining molecular-dynamics simulation, statistical-mechanical theory, thermodynamic equations of state, and free-energy minimization, scientists have predicted the behaviour of C-H-O fluids (that is, fluids composed of water and carbon dioxide) under upper-mantle conditions and obtained results fully consistent with experiment [14], providing a theoretical model for studying the formation of natural gas and diamond at temperatures up to 2000 °C and pressures up to 100,000 atmospheres. But the temperature inside the Earth exceeds 5,000 degrees and the pressure reaches several million atmospheres; what happens to water and carbon dioxide across that full range remains a major task for computational geochemists. The examples above are just a few typical cases of inferring the composition, properties, and transformations of Earth materials by computational simulation. Studying geochemical problems by integrating molecular-dynamics and quantum-mechanical computer simulation with statistical mechanics and thermodynamics is one of the important directions in the development of geochemistry.", "Lead has four isotopes: 204Pb, 206Pb, 207Pb, and 208Pb. Of these, 204Pb is non-radiogenic, and its abundance has remained unchanged since the solar nebula condensed to form the planets (including the Earth), whereas part of the 206Pb, 207Pb, and 208Pb is produced by the radioactive decay of 238U, 235U, and 232Th respectively, so their abundances increase with time.
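The formulas cited below as (1)-(4) are not reproduced in this text; in standard form, the U-Th-Pb decay relations they refer to can be reconstructed as follows (λ238, λ235, and λ232 are the decay constants of 238U, 235U, and 232Th, and the asterisk marks radiogenic Pb):

\[ \frac{^{206}\mathrm{Pb}^{*}}{^{204}\mathrm{Pb}} = \frac{^{238}\mathrm{U}}{^{204}\mathrm{Pb}}\left(e^{\lambda_{238}t}-1\right) \quad (1) \]
\[ \frac{^{207}\mathrm{Pb}^{*}}{^{204}\mathrm{Pb}} = \frac{^{235}\mathrm{U}}{^{204}\mathrm{Pb}}\left(e^{\lambda_{235}t}-1\right) \quad (2) \]
\[ \frac{^{208}\mathrm{Pb}^{*}}{^{204}\mathrm{Pb}} = \frac{^{232}\mathrm{Th}}{^{204}\mathrm{Pb}}\left(e^{\lambda_{232}t}-1\right) \quad (3) \]
\[ \frac{^{207}\mathrm{Pb}^{*}}{^{206}\mathrm{Pb}^{*}} = \frac{1}{137.88}\cdot\frac{e^{\lambda_{235}t}-1}{e^{\lambda_{238}t}-1} \quad (4) \]

Formula (4), the quotient of (2) and (1), depends only on t, which is what makes the Pb/Pb isochron so powerful a chronometer; with modern decay constants, t = 4.55 Ga corresponds to an isochron slope of roughly 0.6.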
According to the law of radioactive decay, the radiogenic Pb isotope composition of rocks and minerals can be expressed by dividing formula (2) by formula (1): here 206Pb*, 207Pb*, and 208Pb* denote the radiogenic daughters produced by the parents 238U, 235U, and 232Th respectively, the modern 238U/235U value is 137.88, and t is the formation age of the rock or mineral. Since most meteorites show no significant U-Pb fractionation after their formation, the formation age of the planets (including the Earth) can be calculated from the measured Pb isotope compositions of meteorites using formula (4). Patterson was the first to do so: from the Pb isotope compositions of three stony meteorites and two iron meteorites he obtained a Pb/Pb isochron age of 4.55 billion years (Fig. 1) [1], representing the formation age of the planets of the solar system; it agrees, within error, with the most recently determined planet-formation age of 4.57 billion years [2]. As a planet of the solar system, the Earth has the same initial Pb isotope composition and formation age as the other planets, so this isochron is also called the \"Geochron\" (Earth isochron). After its formation the Earth differentiated into distinct layers (core, mantle, and crust) and distinct rock reservoirs, which have different U/Pb values, Th/Pb values, and ranges of Pb isotope composition. By the principle of overall mass and isotope balance, the Pb isotope compositions of the Earth's layers and rock reservoirs should, taken together, fall on the Geochron. A large body of Pb isotope analyses shows, however, that the rocks of the various mantle and crustal reservoirs are generally enriched in radiogenic Pb and fall to the right of the Geochron (Fig. 2), contrary to the Earth's bulk Pb isotope evolution. Allègre first raised this problem in 1969 [3]; it is the famous Terrestrial Pb-isotope Paradox, now called the First Terrestrial Pb-isotope Paradox.
Figure 1 The Earth's Pb/Pb isochron age [1]
Figure 2 Pb isotope compositions of different reservoirs [14~16]
If the Pb isotope composition of the Earth as a whole is balanced, then the paradox implies that there may exist rock reservoirs of low μ (238U/204Pb) that we have not yet discovered. Because the upper mantle is the Earth's largest rock reservoir and the mid-ocean ridge basalts (MORB) derived from it are the Earth's most voluminous magmatic rocks, many scientists have tried to study the evolution of the Earth's Pb isotope composition through MORB. The Pb isotope composition of MORB varies widely (Fig. 2), reflecting the complexity of its U-Th-Pb system; studies of the Th/U value of the MORB mantle source therefore help in understanding the evolution of its Pb isotope composition. The 232Th/238U value (κ) of MORB can be calculated both from its 208Pb/206Pb value and from the disequilibrium uranium-series daughter 230Th. Galer and O'Nions calculated the 232Th/238U value of MORB from Pb isotopes (κPb) to be about 3.7 [4], but the value calculated from 230Th (κTh) is only 2.5.
The clear difference between the two calculations indicates that Pb isotopes in the upper mantle have not evolved in a closed system. This complexity of U-Th-Pb evolution in the upper mantle is known as the Second Terrestrial Pb-isotope Paradox. The first paradox is the core of the \"terrestrial lead-isotope mystery\" and has attracted geochemists the world over, who have carried out a great deal of work and proposed many hypotheses; the most representative are the following. The core Pb-accretion model. Allègre et al. first proposed the model of Pb accretion into the Earth's core [5]: when the early differentiation of the Earth formed a core composed mainly of iron and nickel, the chalcophile character of Pb caused it to be concentrated in the core while U remained in the silicate mantle. The core therefore has a very low μ value, its Pb is poorly radiogenic and falls to the left of the Geochron, balancing the silicate mantle and crust whose Pb isotope compositions fall to the right of it. This model requires a protracted core-formation time (about 100 million years or more). Because samples of the core cannot be obtained directly, its Pb isotope composition is unknown and the model cannot be tested directly. Over the past decade or so, however, studies of other short-lived radionuclides have shown that the core formed very quickly, in no more than 30 million years [6, 7]. Recent work by Lagos et al. [8] shows that although Pb is chalcophile, its affinity is not strong enough for it to enter the core in large quantities; and Sims' work on other chalcophile elements (such as As, Sb, W, Mo) [9] shows that most of them did not enter the core in large quantities during core formation, which likewise argues against the wholesale extraction of Pb into the core. Even if the core is enriched in low-radiogenic Pb, it can balance only about 30% of the highly radiogenic Pb in the mantle and crust [10]; at least some additional low-radiogenic-Pb reservoirs must exist in the mantle or crust if the terrestrial lead-isotope mystery is to be reasonably explained. The lower-crust low-radiogenic-Pb reservoir model is based on the low-radiogenic-Pb character of lower-crustal granulites in Scotland: O'Nions et al., Zartman et al., and Kramers et al. proposed that the lower crust may be an important low-radiogenic-Pb reservoir [10~12]. Compared with the upper crust, the lower crust is depleted in the radioactive heat-producing elements U, Th, and K and enriched in plagioclase and Pb, giving it a low μ value; it could therefore explain the first terrestrial lead-isotope paradox.
The model further holds that the crust and crust-derived sediments have high κ values, and that during plate subduction over the past 2 billion years U and Pb, being more mobile than Th, were preferentially recycled into the upper mantle; the upper mantle was thereby depleted in Th relative to U, lowering κTh, while its Pb was not depleted relative to U, leaving κPb normal. The model thus seems able to explain the second lead-isotope paradox as well. However, a growing number of Pb isotope studies of lower-crustal xenoliths show that the Scottish granulites do not represent the lower crust as a whole: most lower-crustal rocks have more radiogenic Pb compositions, falling on or to the right of the Geochron, and even those with relatively unradiogenic Pb are still more radiogenic than the ancient lower-crustal Pb composition proposed by Kramers et al. [10]. The lower-crust low-radiogenic-Pb reservoir model is therefore not supported by the available observations; a small amount of Archean lower crust may indeed have low μ and high κ, but that portion of ancient lower crust still cannot reasonably resolve the terrestrial lead-isotope mystery. The subduction-zone Pb-enrichment model. Chauvel et al. proposed that the rise of μ in the MORB mantle source and the evolution of its Pb isotopes are linked to the preferential transfer of Pb into the crust [13]: during alteration of the oceanic crust (composed of MORB), Pb is concentrated in oxides and sulfides and thus separated from elements of otherwise similar behaviour; because Pb oxides and sulfides are unstable during subduction metamorphism, they break down into the fluid phase and enter the mantle wedge, while the U/Pb value (μ) of the eclogite-facies subducted crust rises. The high-μ subducted crust returns to the mantle, slowly raising the mantle's μ with time, so that the Pb isotope composition of MORB falls to the right of the Geochron; meanwhile the Pb-enriched (low-μ) mantle wedge partially melts to form island-arc magmas that are added to the crust, lowering the μ of the continental crust. This model requires the μ of the continental crust to fall generally to the left of the Geochron, but in fact the observed μ of the crust is generally higher than that of the upper mantle; for the model to hold, the lower crust would have to have a very low μ to balance the crust as a whole, which is not the case. The low-μ mantle reservoir model. Since the crust generally has high μ, the deep mantle, apart from the upper mantle (the MORB source region), remains the most likely undiscovered low-μ reservoir; the Pb isotope composition of such a reservoir should be complementary to that of the crust and upper mantle, and it would most plausibly have evolved through the crust-mantle system. Murphy et al. noticed that among all mantle-derived magmatic rocks, only certain alkaline rocks (especially lamproites and type II kimberlites) have Pb isotope compositions falling to the left of the Geochron (Fig. 2) [14].
These alkaline rocks probably derive from a low-μ mantle reservoir. Murphy et al. suggested that oceanic crust, together with some sediment, was subducted into the 400-670 km transition zone between the upper and lower mantle and remained isolated from the upper mantle for a long time (more than one to two billion years) [14]. The Pb isotope composition of this low-μ reservoir evolved in two stages: in the first stage the subducting slab inherited a high 207Pb/206Pb value; in the second, the loss of U and Th lowered its μ and κ, slowing its radiogenic Pb growth so that the reservoir fell to the left of the Geochron. This model requires that subducting slabs be unable to cross the 400-670 km mantle transition zone, so that the material can be stored at the top of the transition zone as a relatively isolated reservoir without entering the lower mantle; but that requirement is inconsistent with the generally accepted whole-mantle convection model. Moreover, the alkaline rocks concerned (especially lamproites and type II kimberlites) are volumetrically trivial on Earth, and growing evidence suggests that they originate from the boundary region between the asthenospheric upper mantle and the lithospheric mantle rather than from the deeper 400-670 km transition zone. Recently, Malaviarachchi et al. found that mantle peridotites from the Horoman orogenic belt in Japan have very unradiogenic Pb isotope compositions [15], falling to the left of the Geochron (Fig. 2). These Horoman peridotites are probably residues of ancient partial melting of the upper mantle that have been preserved for more than a billion years. Although this discovery offers new clues to the terrestrial lead-isotope mystery, it too faces problems [16]: first, Horoman-type peridotite could solve the Pb balance only if it made up about one third of the mantle, yet apart from the Horoman peridotites no comparably large mantle reservoir has been found; second, Pb should have been stripped into the melt during the early partial melting of these peridotites, yet they in fact have anomalously high Pb contents (much higher than other mantle sources, even the primitive mantle), and the origin of this excess Pb remains to be studied. In short, over the past 40 years geochemists have advanced hypothesis after hypothesis and searched extensively for the low-radiogenic-Pb reservoir, but no complete answer has been found; the \"terrestrial lead-isotope mystery\" remains an unsolved mystery of geochemistry.", "Preface. Since the birth of plate tectonics in the 1960s, the driving force of the Earth's plate motions has been one of the basic scientific questions of greatest interest to Earth scientists. Most scientists believe that the driving force comes mainly from the convective motion of the underlying mantle, and in order to work out how mantle material moves, many geochemists turned to mantle geochemistry in the 1970s and 1980s.
Hot, deep mantle upwells toward the cooler, shallow mantle and partially melts by decompression, producing basaltic magma. These basaltic magmas erupt at the surface, become part of the crust, and carry information about the mantle with them. Although direct sampling of the mantle is not yet possible, its chemical composition can be learned by studying basalts and the mantle rock fragments (xenoliths) they carry. Isotopic and trace-element tracing is one of the most effective means geochemists have for this, for two reasons. (1) Radiogenic isotope ratios are not fractionated during partial melting of the mantle, so the isotopic composition of a basaltic magma uncontaminated by continental crust directly represents that of its mantle source. (2) When the mantle partially melts to form basaltic magma, incompatible lithophile elements such as Rb, U, Th and the light rare earth elements (LREE) enter the melt and are ultimately transferred to the crust. This enriches the crust in these elements and depletes the mantle that produced the melt (the depleted mantle), giving it lower Rb/Sr and U/Pb values and a higher Sm/Nd value. Such depletion events are recorded by isotopes: 87Rb, 235U, 238U and 147Sm are all naturally radioactive, so reducing their abundances slows the accumulation of their decay products 87Sr, 207Pb, 206Pb and 143Nd. After a period of evolution, the isotopic composition of the depleted mantle becomes significantly different from that of mantle never depleted in these elements. The oceanic crust is composed mainly of basalt, and oceanic basalt is the best sample for probing the chemical composition of the mantle: it is young, so its isotopic composition needs no age correction, and it is free of continental crust contamination, so its isotopic composition directly represents that of its mantle source region. There are two main types of oceanic basalt (Fig. 1). ① Mid-ocean ridge basalt (MORB): basalt produced by upwelling and partial melting of the upper mantle at oceanic plate boundaries, that is, at mid-ocean ridges, where it forms new oceanic crust; it provides information on the upper mantle. ② Ocean island basalt (OIB): basalt erupted at hot spots within oceanic plates. Oceanic islands form where the volcanic cone rises above the water surface, and island chains form by long-term intermittent eruption during plate motion, as in the Hawaiian chain. Ocean island basalts come from mantle plumes originating deep in the mantle (the lower mantle or the core-mantle boundary) and can provide information about the deep mantle.

Figure 1 Schematic diagram of mantle structure, convection, and the production and subduction recycling of oceanic crust [1]

Heterogeneity of mantle isotope composition. Mantle rocks are mainly iron- and magnesium-rich peridotite. Although the rock type and major-element composition of the mantle are rather uniform, Sr, Nd and Pb isotopic surveys of oceanic basalts show that the isotopic composition of the mantle is very heterogeneous.
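As a concrete illustration of the tracing logic just described, the toy calculation below evolves two hypothetical mantle reservoirs, one undepleted and one depleted, for 2 billion years after an assumed depletion event. All starting ratios and parent/daughter values are rough, assumed numbers chosen only to show the direction of the effect; they are not measured data.

```python
# Sketch: how a depletion event changes the long-term Sr and Nd isotope
# evolution of a mantle reservoir. All input numbers are illustrative
# assumptions, not measured values.
import math

LAMBDA_RB87 = 1.42e-11    # decay constant of 87Rb, 1/yr
LAMBDA_SM147 = 6.54e-12   # decay constant of 147Sm, 1/yr

def ingrow(ratio_then, parent_daughter, lam, t_yr):
    """Radiogenic ingrowth: R_now = R_then + (P/D) * (exp(lam*t) - 1)."""
    return ratio_then + parent_daughter * math.expm1(lam * t_yr)

t = 2.0e9  # evolve for 2 Gyr after the hypothetical depletion event

# Assumed parent/daughter ratios after the event: depletion lowers Rb/Sr
# (87Rb/86Sr) and raises Sm/Nd (147Sm/144Nd).
for label, rb_sr, sm_nd in [("undepleted", 0.090, 0.197),
                            ("depleted",   0.045, 0.214)]:
    sr = ingrow(0.7030, rb_sr, LAMBDA_RB87, t)   # assumed initial 87Sr/86Sr
    nd = ingrow(0.5100, sm_nd, LAMBDA_SM147, t)  # assumed initial 143Nd/144Nd
    print(f"{label:11s} 87Sr/86Sr = {sr:.5f}  143Nd/144Nd = {nd:.5f}")
# The depleted reservoir ends with lower 87Sr/86Sr and higher 143Nd/144Nd,
# exactly the DMM signature described in the text.
```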
The 87Sr/86Sr-143Nd/144Nd diagram (Fig. 2) and the 87Sr/86Sr-206Pb/204Pb diagram (Fig. 3) show clearly that the isotopic composition of the mantle involves at least four endmembers. ① Depleted MORB-type mantle (DMM), with the highest 143Nd/144Nd and the lowest 87Sr/86Sr values, represented mainly by Atlantic (Atlan.) and Pacific (Pac.) MORB. ② Enriched mantle-1 (EM-1), with the lowest 143Nd/144Nd and 206Pb/204Pb values and a relatively high 87Sr/86Sr value, represented mainly by the Walvis Ridge ocean island basalts in the Atlantic. ③ Enriched mantle-2 (EM-2), with the highest 87Sr/86Sr value, represented mainly by the Samoa ocean island basalts in the Pacific. ④ High-U/Pb mantle (HIMU), with the highest 206Pb/204Pb value, represented mainly by the St. Helena ocean island basalts in the South Atlantic [2]. The isotopic compositions of other oceanic basalts can be produced by mixing these four endmembers in different proportions. Beyond these four, Zindler and Hart argued for a fifth independent mantle endmember, "PREMA" (prevalent mantle) (Fig. 2) [2], whose isotopic composition lies between DMM, EM-1, EM-2 and HIMU but is not simply a mixture of the four: most ocean island basalts (such as Hawaii and Iceland), most continental basalts, and some Indian Ocean MORB have isotopic compositions close to PREMA, and it is hard to imagine essentially the same four-endmember mixture being reproduced so faithfully all over the globe. Later, Hart et al. found that the depleted convergence point of many ocean island basalt isotope arrays was not DMM, and they named this focal point another independent endmember, FOZO (Fig. 2, Fig. 3) [3].
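The statement that other basalt compositions "can be produced by mixing these four endmembers in different proportions" has a simple quantitative form, worth recording here. The notation is ours; this is the standard two-component mixing relation for an isotope ratio.

```latex
% Two-component mixing for an isotope ratio R (e.g. 87Sr/86Sr), where
% C_A and C_B are the concentrations of the element (Sr) in endmembers
% A and B, and f is the mass fraction of A in the mixture:
R_{\mathrm{mix}} \;=\;
\frac{f\,C_A\,R_A \;+\; (1-f)\,C_B\,R_B}{f\,C_A \;+\; (1-f)\,C_B}
% Because the endmembers have different Sr/Nd concentration ratios,
% mixing trajectories in 87Sr/86Sr-143Nd/144Nd space are hyperbolas
% rather than straight lines.
```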
How did the main components of the mantle form? The mantle formed through large-scale melting and rapid core-mantle differentiation early in the Earth's history and should therefore have had a relatively uniform chemical composition; the transition metals (Cr, Mn, Fe, Co, Ni), for example, are fairly uniformly distributed in it. Why, then, are the isotopic compositions of the mantle (and the corresponding Rb/Sr, Sm/Nd and U/Pb values) so inhomogeneous? How the various isotopic endmembers formed, and where they reside in the mantle, have become very interesting scientific questions, and finding the cause of the inhomogeneity is of great significance for understanding the movement of crust and mantle material. Most discussion has focused on the origin of the four endmembers with extreme isotopic compositions: DMM, EM-1, EM-2 and HIMU.

Fig. 2 87Sr/86Sr-143Nd/144Nd of oceanic basalts [3]

Fig. 3 87Sr/86Sr-206Pb/204Pb of oceanic basalts [3]

The origin of the depleted MORB mantle (DMM) is relatively simple and uncontroversial: it is the result of the Earth's crust-mantle differentiation [3]. In the course of crust-mantle differentiation through geological history, partial melting of the upper mantle first produced basaltic crust, most of it oceanic crust formed from MORB; remelting of basaltic crust then produced granite, which built the continental crust. Continuous partial melting of the upper mantle during crust-mantle differentiation lowered its Rb/Sr value and raised its Sm/Nd value, giving the depleted upper mantle its low 87Sr/86Sr and high 143Nd/144Nd values [2, 3]. If this interpretation is correct, the depletion of incompatible elements in the depleted upper mantle should balance their enrichment in the continental crust. A crust-mantle mass balance based on the observed degree of depletion of the present upper mantle shows that depleting only 25%-30% of the mantle is enough to supply the incompatible-element inventory of the entire continental crust, and this volume corresponds almost exactly to the mass of the upper mantle (above the 660 km boundary) [3].
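A back-of-envelope version of this mass balance fits in a few lines. The masses and Rb concentrations below are round, literature-style numbers used here only as assumptions; with them, the stripped fraction of the mantle does come out near the 25%-30% quoted above.

```python
# Rough check of the crust-mantle mass balance: what fraction of the
# mantle must be stripped of an incompatible element (Rb here) to supply
# the continental crust's inventory? All inputs are assumed round numbers.
M_CRUST = 2.1e22      # mass of continental crust, kg (assumed)
M_MANTLE = 4.0e24     # mass of the whole mantle, kg (assumed)
C_CRUST = 32e-6       # Rb in bulk continental crust, mass fraction (assumed)
C_PRIMITIVE = 0.6e-6  # Rb in primitive, undepleted mantle (assumed)

# If melting strips essentially all Rb from the source region:
fraction = (M_CRUST * C_CRUST) / (M_MANTLE * C_PRIMITIVE)
print(f"depleted mantle fraction ~ {fraction:.0%}")  # ~28%, cf. 25%-30%
```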
There are two possible origins for enriched mantle-1 (EM-1) [2]. ① Mantle metasomatized by fluids released from subducted oceanic crust. Dehydration experiments on oceanic-crust rocks show that in the fluid Pb is more soluble and mobile than U, Rb more than Sr, and Nd more than Sm [4]. Fluids released from subducted oceanic crust therefore have high Rb/Sr and low U/Pb and Sm/Nd values, and mantle metasomatized by them acquires, after long-term ingrowth, the isotope geochemical signature of EM-1: high 87Sr/86Sr together with the lowest 143Nd/144Nd and 206Pb/204Pb values. ② Since the lower continental crust has similar isotopic characteristics, delamination of mafic lower continental crust into the mantle can also impart similar geochemical characteristics to the local mantle. Sr-Nd-Pb isotopes alone cannot distinguish these two origins; doing so requires a wider range of isotopic and trace-element studies.

Enriched mantle-2 (EM-2) is characterized by the highest 87Sr/86Sr values and is therefore attributed to mixing of the mantle with terrigenous sediments recycled into it along with subducted oceanic crust [2]. In recent years, basalts with 87Sr/86Sr values as high as 0.720 have been found on the seabed of Savai'i Island, the representative ocean island basalt locality of the Samoa Islands; they have low Ce/Pb values similar to the continental upper crust, together with negative Nb, Ti and Eu anomalies [5]. This finding supports the view that the EM-2 endmember represented by the Savai'i ocean island basalts does indeed involve recycled terrigenous sediment. The origin of the negative Nb, Ta and Ti anomalies may be complicated, however. For example, andesite samples collected by Haase et al. (2005) on the Pacific-Antarctic Ridge also show negative Nb, Ta and Ti anomalies but lower 87Sr/86Sr values, which they explained by amphibole fractionation from the magma combined with assimilation of seawater-hydrothermally altered rocks [6]. Only by studying these trace-element anomalies together with the anomalously high 87Sr/86Sr values can it be determined whether the EM-2 endmember is really related to recycled terrigenous sediment.

The high-U/Pb endmember (HIMU) has long been regarded as the result of subducted oceanic crust recycled back into the mantle: during metamorphic dehydration of the subducting crust, Pb is more mobile than U and is largely carried away by the released fluids, leaving the residual subducted crust with a high U/Pb value [2]. This explanation was later supported by eclogite-facies dehydration experiments on oceanic-crust rocks by Kogiso et al. (1997), who found that Pb was more mobile than U, Rb than Sr, and Nd than Sm, so that the dehydrated residual oceanic crust not only has higher U/Pb and Sm/Nd values but also a lower Rb/Sr value than MORB [4]. Long-term evolution of such eclogite recycled into the mantle can explain the high U/Pb value and highly radiogenic Pb isotope signature of the HIMU endmember, but it contradicts the HIMU endmember's lower 143Nd/144Nd value (requiring a lower Sm/Nd value) and higher 87Sr/86Sr value (requiring a higher Rb/Sr value) relative to MORB (Fig. 2). This is the difficulty in explaining the genesis of the HIMU endmember by recycled subducted oceanic crust. To resolve it, scientists have explored three directions. ① The dilemma shows that the geochemical differentiation occurring during subduction is not as simple as in the experiments of Kogiso et al. (1997), and slab geochemistry is still not fully understood. For example, their experimental temperature was as high as 900°C, whereas the actual temperature of eclogite-facies metamorphism of subducted oceanic crust may be much lower; the exhumed fragments of subducted oceanic crust found so far are low-temperature (T = 560-700°C) coesite-bearing eclogites [7, 8]. Studies of these coesite-bearing eclogites show that, because allanite and apatite are both LREE-rich minerals, the rare earth elements released by the breakdown of one during eclogite-facies metamorphism can be fully taken up by the other [7]. It can therefore be inferred that at lower metamorphic temperatures the dehydration of subducted oceanic crust may not produce a significant change in Sm/Nd, so a deeper study of the geochemistry of slab subduction is the key to this puzzle. ② Kogiso et al. (1997) speculated that oceanic crust recycled into the mantle may undergo some as yet unknown re-differentiation in the lower mantle that raises its Rb/Sr value and lowers its Sm/Nd value [4]; this assumption awaits verification by high-temperature, high-pressure experiments. ③ Perhaps the origin of the HIMU endmember has nothing to do with recycled subducted oceanic crust at all, because even under low-temperature metamorphic conditions Rb remains more mobile than Sr, so the problem of the low Rb/Sr value of recycled crust persists. Niu and O'Hara (2003) pointed out that recycled subducted oceanic crust meets many difficulties in explaining OIB: melting of oceanic crust cannot produce the high-MgO magmas of OIB; ancient subducted oceanic crust is too depleted to source most modern OIB; and above all, once recycled oceanic crust enters the lower mantle its density exceeds that of the surrounding mantle rocks, producing negative buoyancy that prevents its return to the upper mantle (see the article "Lower Mantle Density Trap" in this book) [9]. Clearly, whether the heterogeneity of the mantle is related to subducted oceanic crust is an important scientific question requiring further study.

The structure of the Earth is like an onion: beneath the thin crust lie a thick silicate mantle and a sizable iron core (Fig. 1).
In 1906, Oldham deduced the existence of a low-velocity zone in the Earth's interior from changes in the amplitude of seismic compressional waves passing through it, and thereby discovered the Earth's core. The core's radius is about 3480 km; it occupies 1/6 of the Earth's volume and nearly 1/3 of its mass. Its mass fraction is much larger than its volume fraction because the core's density is more than twice that of the mantle. About 5% of the core is the solid inner core; the rest is molten. The pressure inside the core ranges from 1.36 million to 3.6 million bar, and to remain liquid at such pressures the core must also be very hot: the temperature at the solid-liquid interface between inner and outer core is estimated to exceed 5000°C [2].
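These proportions are easy to check. The sketch below recomputes the volume and mass fractions from the quoted core radius and an approximate core mass; the mass value is an assumed, literature-style number.

```python
# Quick consistency check of the core's quoted proportions.
R_EARTH = 6371e3   # mean Earth radius, m
R_CORE = 3480e3    # core radius, m (value quoted above)
M_EARTH = 5.97e24  # Earth mass, kg
M_CORE = 1.94e24   # core mass, kg (approximate, assumed)

vol_frac = (R_CORE / R_EARTH) ** 3       # ~0.16, about 1/6
mass_frac = M_CORE / M_EARTH             # ~0.33, about 1/3
density_ratio = mass_frac / vol_frac     # core vs. bulk-Earth mean density

print(f"volume fraction ~ {vol_frac:.2f}, mass fraction ~ {mass_frac:.2f}")
print(f"core is ~{density_ratio:.1f}x denser than the Earth's average")
```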
Figure 1 Earth's layered structure [1]

How did the Earth's core form? Early scholars proposed two distinct core-formation hypotheses, corresponding to two opposite models of the Earth's gravitational accretion [3]. In both models, the disk-shaped solar nebula of gas and dust supplied the raw material for the Earth's birth and growth by gravitational accretion. The homogeneous accretion model assumes that the Earth grew as a sphere of uniformly mixed iron and silicate minerals in an essentially unchanging nebular environment; the layered structure formed afterwards, as the denser material melted and sank toward the Earth's center. In contrast, the heterogeneous accretion model emphasizes that the solar nebula evolved from hydrogen-rich to oxygen-rich as the Earth's accretion proceeded, so the Earth's interior accreted as metal while its exterior formed directly as oxides.

Figure 2 The homogeneous and heterogeneous accretion models of the Earth correspond to different core-formation processes [3]

Core formation occurred billions of years ago; how can we find out which model is true? If the core originated in a uniformly mixed Earth, it should have exchanged material with the mantle to a large extent, whereas the heterogeneous accretion model implies very limited exchange of matter between core and mantle. Scientists have now roughly estimated the compositions of the Earth's core and mantle by analyzing terrestrial rocks and meteorites from near-Earth space, and have measured experimentally how siderophile (iron-loving) elements partition between metal and silicate. Experimental geochemical data obtained at atmospheric pressure reveal that the abundances of siderophile elements in the mantle exceed, to varying degrees, the values expected from core-mantle equilibrium, which seemed to support core formation in the changing redox environment of the heterogeneous accretion model [4]. According to newer experimental results, however, the excess of siderophile elements may be only apparent: if core-mantle differentiation occurred in a molten state under high temperature and pressure, the homogeneous accretion model can explain the mantle's content of the key siderophile elements [5]. In fact, several lines of evidence show that the early Earth probably underwent large-scale melting, forming a "magma ocean" nearly 1000 kilometers deep [6]. Droplets of iron fell like rain through this ocean of molten silicate and collected in "ponds" of iron at the magma-ocean floor, from which they continued to the Earth's center either as diapirs or through percolation networks. Because the iron ponds had ample time to exchange material with the mantle, today's mantle should record the chemical equilibrium established between core and mantle at the magma-ocean floor. The idea of a magma ocean was originally proposed to explain the light-colored anorthosites of the lunar highlands [7]. The hypothesis that the Earth's core formed in a deep magma ocean implies large-scale melting and rapid core-mantle separation in the early Earth. A deep magma ocean is supported by modern theories of planet formation, which indicate that the Earth's infancy was rich in energy supplies, including heat released by the decay of short-lived radioactive elements and the kinetic energy delivered by frequent, high-speed meteorite impacts. These energies provided enough heat to form the magma ocean [6]. The hypothesis of rapid core formation is also consistent with recent radiometric dating. The "isotopic geological clock" based on the tungsten (W)-hafnium (Hf) system tells us that the Earth's layered structure is not primordial but formed through later evolution [8, 9]: the separation of core and mantle essentially took place within less than 30 million years. Compared with the Earth's history of more than 4 billion years, this is very rapid; arguably, the core is almost as old as the Earth itself. Core formation began just as the Earth's gravitational accretion was completed, or proceeded simultaneously with it.
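Why the Hf-W clock can only date events this early is a matter of simple decay arithmetic; the sketch below uses the commonly quoted 182Hf half-life of about 8.9 million years. If metal segregates while 182Hf is still alive, the silicate Earth develops a measurable 182W excess; after a few half-lives the clock is dead.

```python
# Fraction of 182Hf (half-life ~8.9 Myr) remaining after a delay t:
# this is why core formation within ~30 Myr counts as "fast".
HALF_LIFE = 8.9  # Myr, commonly quoted value for 182Hf

for t in (10, 30, 60, 100):  # Myr after solar system formation
    remaining = 0.5 ** (t / HALF_LIFE)
    print(f"t = {t:3d} Myr: {remaining:6.2%} of the initial 182Hf left")
# After ~30 Myr only ~10% survives; by ~100 Myr the chronometer is
# effectively extinct and can record nothing later.
```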
Figure 3 Cartoon depicting the formation of the Earth's core in a deep magma ocean after a giant impact [6]

According to computer simulations of planet formation, many of the planetesimals accreted in the early stages of the Earth's growth had already satisfied the conditions for core formation, and the Earth's core was probably assembled largely from the cores of these planetesimals [10]. It has recently been suggested that, as the Earth's accretion proceeded, many of these pre-existing small cores re-equilibrated chemically with the silicate mantle deep in the magma ocean and carried silicon (and perhaps oxygen) into the Earth's core [11]. Scientists are doing more experiments to test these hypotheses and to determine the depth of the hypothesized magma ocean. The formation of the core is one of the most dramatic events in Earth's history: it released a huge amount of gravitational potential energy and set the initial conditions for the Earth's evolution and internal dynamics. Understanding core formation also bears directly on our understanding of the core's composition, the nature of the core-mantle boundary, and the relationship between the upper and lower mantle. Through more than a century of research, scientists have gained a basic understanding of the time, conditions and process of core formation, but many important questions remain unanswered. How did iron-rich alloy droplets penetrate the solid mantle below the magma ocean? Was core formation completed within the first 100 million years of Earth's history, or has it continued to this day, as some scholars claim? Which light elements entered the Earth's core along with the sinking iron? To what extent are core and mantle in chemical equilibrium? Will the core solidify completely as the Earth slowly cools, causing the Earth to lose its dipole magnetic field as Mars did? At the research frontier, some scientists are searching for clues to core formation in the records of the stable isotopes of silicon and iron [12]; others are using cutting-edge experimental and theoretical techniques, such as diamond anvil cells, focused ion beams and molecular-dynamics calculations, to investigate chemical partitioning and diffusion at the core-mantle boundary at millions of atmospheres and thousands of degrees; still others are using synchrotron X-ray sources at large national laboratories to image iron droplets and diapirs migrating through a silicate matrix. The formation of the Earth's core thus remains an unresolved fundamental problem, and the goal of finally unraveling it may await the next generation of scientists.

Preface. The Earth's core occupies 1/6 of the planet's volume. Located at the center of the Earth, hidden below nearly 3000 km of rock, it is the most inaccessible part of the planet, and so far we have obtained no sample of it. Experience has shown that drilling deep into the Earth is more challenging than flying into outer space: spacecraft have already passed beyond the boundary of the solar system, more than 10 billion kilometers away, yet the deepest borehole ever drilled reaches less than 14 kilometers below the surface. Nor is a volcanic eruption likely to bring fresh core samples up. The chemical composition of the Earth's core can therefore only be inferred by indirect means. The core's mass is 1/3 of the Earth's total. Scientists generally believe that more than 80% of that mass is iron; other relatively abundant constituents include nickel (about 5% of the core's mass) and one or more elements lighter than iron, such as hydrogen, oxygen, carbon, silicon or sulfur. These insights combine many kinds of observational and experimental data, including seismological measurements of the core's density and wave velocities, geodynamic constraints from the Earth's magnetic field, geochemists' estimates of the mantle's composition, cosmochemists' studies of meteorites, and laboratory tests on relevant materials [1].

Evidence for light elements in the Earth's core. Francis Birch put forward the idea that the core contains light elements as early as the beginning of the 1950s [2]. He noticed that the density of the liquid core is nearly 20% less than that of iron at the corresponding pressure, and inferred that the core must contain elements lighter than iron. Recent advances in seismology and mineral physics have not only provided more precise estimates of the density deficit of the outer core but have also revealed a smaller, non-negligible density deficit in the solid inner core [1]. According to the latest estimates, relative to solid iron the density deficit of the outer core is about 6%-10%, and that of the inner core 1%-3%.
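How a density deficit translates into a light-element budget can be illustrated with an ideal volume-mixing model. The density assigned to the light component at core pressure below is a loose assumption; real equations of state are far more involved, so this is only a sketch of the logic.

```python
# Toy estimate: light-element mass fraction needed to explain a given
# density deficit, assuming ideal volume mixing with pure iron.
def light_fraction(deficit, rho_light_over_fe=0.4):
    """Mass fraction x from 1/rho_alloy = (1-x)/rho_Fe + x/rho_L,
    with rho_alloy = (1 - deficit) * rho_Fe. Densities are expressed
    relative to iron; rho_light_over_fe is an assumed placeholder."""
    inv_alloy_minus_fe = deficit / (1.0 - deficit)     # units of 1/rho_Fe
    inv_light_minus_fe = 1.0 / rho_light_over_fe - 1.0
    return inv_alloy_minus_fe / inv_light_minus_fe

for d in (0.06, 0.10):  # the outer-core deficit range quoted above
    print(f"deficit {d:.0%} -> ~{light_fraction(d):.1%} light elements by mass")
# Gives a few percent to ~7% by mass, the right order of magnitude for
# the light-element contents discussed in this article.
```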
Beyond the density deficit, cosmochemical and geochemical studies also point to light elements in the core. The bulk chemical composition of the Earth has been estimated from the composition of the solar photosphere and of meteorites, and analyses of rocks from near the Earth's surface (especially mantle xenoliths and mantle-derived basalts) give reasonable estimates of the mantle's composition. Treating the Earth as composed of core plus mantle, the composition of the core can then be calculated by mass conservation. For example, sulfur is severely depleted in the mantle, indicating that most of the Earth's sulfur is concentrated in the core [3]. Scientists interested in the core-formation process have studied how various elements partition between iron-rich alloys and silicates, and they too find that light elements such as sulfur and carbon accompany iron into the core.

Figure 1 The "density deficit" of the Earth's core [1]. PREM: Preliminary Reference Earth Model; ICB: inner-core boundary; CMB: core-mantle boundary; Fe 7000 K: iron at a temperature of 7000 K; Hugoniot: the high-temperature states generated by shock compression

The importance of light elements. The identity and content of the light elements in the Earth's core is a fundamental question of modern Earth science. Its answer directly affects our understanding of the Earth's origin, evolution and dynamics, and would help clarify a series of related questions: how the core formed, how much volatile material the Earth holds, how hot the core is, and how the geomagnetic field is generated and evolves. Light elements lower the melting point of core material [4]; to estimate the temperature at the inner-outer core boundary, one must know which light elements dominate the core and in what amounts (Fig. 2). Light elements are also essential to maintaining the geomagnetic field. The field is believed to be generated by convection of molten iron alloy in the core, the so-called geodynamo, whose energy can come from primordial heat stored in the core, from latent heat released by crystallization of the inner core, and from the decay of radioactive elements. In recent years, some scientists have proposed that chemical buoyancy may be an important driving force of convection in the core [5]: the buoyancy arises from the enrichment of light elements at the inner-outer core boundary as the inner core grows (Fig. 3). In addition, it has been found that the light element sulfur can increase the siderophile character of potassium and concentrate it in the core; potassium-40 is radioactive and could help power the geodynamo [6].

Figure 2 Iron-sulfur binary phase diagram. At ambient pressure the melting point falls from 1800 K for pure iron to about 1300 K at the eutectic point (containing about 30% sulfur)

Determining the nature and content of light elements in the core. How can the content of light elements in the core be estimated more accurately? From the density of an iron alloy containing a given light element, one can calculate how much of that element is needed to account for the density deficits of the inner and outer cores.
Does the calculated content agree with experimental data on the partitioning of that element between solid and liquid iron? Would an iron alloy of that composition produce density and seismic-velocity gradients consistent with those of the core? These are some of the criteria that can be used to test light-element composition models. In addition, seismic observations indicate the absence of large-scale immiscible fluids in the liquid outer core; that is, under core temperatures and pressures the alloy of iron and light elements must form a homogeneous liquid rather than a mixture like oil and water, which is another criterion for discriminating among light-element models [7]. A further test is the effect of light elements on the ability of liquid iron alloy to wet silicate minerals. During core formation, iron alloys bearing light elements probably had to traverse the solid lower mantle to reach the Earth's center, and to do so the alloy must wet silicate mineral surfaces well enough to form a connected permeable network carrying it from shallow to deep levels [8]. From a geochemical point of view, knowing the partition coefficients of light elements between iron alloy and silicate, their contents in the Earth's core can be calculated from their abundances in the mantle. Studies show that the partition coefficient depends on pressure, temperature and the compositions of the phases involved, and especially on the redox state of the system. For example, at low pressure and temperature and under reducing conditions oxygen is essentially non-siderophile, but its ability to enter the core increases greatly as temperature and pressure rise [9].

Figure 3 Chemical buoyancy in the Earth's core. As the solid inner core grows, light elements are enriched at the inner-outer core boundary, promoting convection of the molten iron alloy in the outer core and powering the geodynamo; q: heat; ΔT: temperature difference
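The geochemical reading of a partition coefficient in the paragraph above reduces to a one-line mass balance; the sketch below makes it explicit. The masses and the example concentration and coefficient are placeholders, not measurements.

```python
# Equilibrium mass balance behind the partition-coefficient argument:
# D = C_core / C_mantle, so C_core = D * C_mantle. Inputs are assumed.
M_MANTLE = 4.0e24  # kg (assumed)
M_CORE = 1.9e24    # kg (assumed)

def core_content(c_mantle_ppm, D):
    """Core concentration (ppm) implied by mantle content and metal-
    silicate partition coefficient D, assuming core-mantle equilibrium."""
    return c_mantle_ppm * D

# Hypothetical element: 50 ppm in the mantle, D = 10 at equilibrium.
c_mantle, D = 50.0, 10.0
c_core = core_content(c_mantle, D)
bulk = (c_core * M_CORE + c_mantle * M_MANTLE) / (M_CORE + M_MANTLE)
print(f"core: {c_core:.0f} ppm; implied bulk-Earth content: {bulk:.0f} ppm")
```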
In addition, some elements are mutually exclusive and cannot coexist freely in the core; silicon and oxygen are one such pair, and recent experiments have found that this incompatibility also changes unexpectedly with temperature and pressure. Light elements are often volatile, and since the abundance of volatile elements in the bulk Earth is very uncertain, their exact content in the core cannot be obtained from mass conservation alone. Recently, some scholars have begun to use the distribution of stable isotopes between core and mantle to infer the presence of light elements (such as silicon) in the core [11]; this approach may prove very promising.

Conclusion. Scientists around the world have spent decades studying the light elements of the Earth's core through theory, experiment and observation [1]. While earlier studies focused on sulfur and oxygen, data on hydrogen, carbon and silicon have been accumulating in recent years. Because most experiments are still confined to temperatures and pressures below those of the core and to simplified compositions, there is a long way to go before the identity and amounts of the core's light elements are clearly understood. In Brett's words, discussion of the composition of the Earth's core is still limited by too few data and too many extrapolations [10]. This situation is expected to improve as the latest experimental and computational techniques reach the temperature, pressure and chemical conditions of the core, and as the newest geochemical measurement methods are applied. A new generation of scientists will need breakthrough innovations to unravel this difficult, unresolved mystery.

The Earth's core is a hot sphere made mostly of iron. It is the energy source of the Earth's deep activity, such as mantle convection and volcanism; in particular, the geomagnetic field generated by convection in the liquid outer core has a huge influence on life. The temperature distribution of the core is therefore of great significance and is one of the central problems of deep-Earth research. Because the core sits at the center of the Earth, invisible and intangible, its exploration must rely on inference and physics-based calculation. As early as the 18th century, the naturalist Buffon heated iron balls of different sizes to high temperature, recorded their cooling times, and extrapolated the result to the volume of the Earth to infer the Earth's age. With the development of computing, modeling and calculation of the Earth's core have gradually become possible, and some computer models have reproduced properties of the core such as reversals of the magnetic field. High-performance computers likewise play a key role in determining the core's temperature. The exact temperature of the Earth's core remains a mystery to this day. Because of the core's position, all clues come indirectly from seismic-wave data; with the constraints on phase transitions and structure that these data provide, the core's temperature can be estimated using high-temperature, high-pressure experiments or computational simulation. In principle the core can be modeled with fluid mechanics and heat transfer, and its temperature distribution obtained by solving the governing equations, but this requires the temperature at some point as a boundary condition, without which no definite solution exists. Fortunately, seismic data show that the core consists of a solid inner core and a liquid outer core, so the temperature at the inner-outer core boundary can be anchored to the melting point of iron at the corresponding pressure. Research on the core's temperature therefore centers on the melting point of iron and its alloys at high pressure, and the most common approach is: use seismic data to determine the pressure at the inner-outer core boundary (about 3.29 million atmospheres), determine the solid-liquid equilibrium temperature of iron and its alloys at that pressure, and take this as the boundary temperature, giving the study of the core's temperature distribution a firm anchor. The melting point of iron and its alloys at 3.29 million atmospheres can be determined either by experiment or by theoretical calculation. On the experimental side, current high-pressure studies of iron are mostly limited to below 200 GPa.
To reach such pressures, diamond anvil cells or shock-wave methods are usually used. A diamond anvil cell is a pressure chamber built from synthetic diamonds, in which lasers and other probes can measure properties of the sample; shock-wave experiments generate transient high pressure with a small, controlled explosion, and the temperature is calibrated from the spectrum of the emitted thermal radiation. High-pressure experiments are expensive, and the actual temperatures and pressures are difficult to control. In particular, pressure calibration depends on assumptions about the high-pressure behavior of certain minerals; these assumptions cannot themselves be verified experimentally and can only be extrapolated from properties at known, lower pressures. For example, a common pressure marker in static high-pressure experiments is magnesium oxide: because no phase transition of MgO with increasing pressure has ever been observed, it is assumed to remain stable and is used as the gauge that marks the pressure. If one day MgO were found to transform under some pressure, the experimental pressure scale would be wrong, and all results based on it would become suspect. High-pressure experiments therefore generally carry large uncertainties. They are nonetheless of great value for understanding the formation of the early core and the interiors of smaller planets, because in those settings the pressures are lower and experiments can give reliable results [2].

Fig. 1 Extracting information on the structure of the Earth's core from seismic waves

For the reasons above, theoretical calculations based on first-principles molecular simulation play an irreplaceable role in studies of the Earth's core, and it is worth briefly introducing the basic idea. "First principles" means predicting the properties of matter from the most basic laws of physics. In general, only four fundamental physical constants are required: the electron mass, the electron charge, the vacuum permittivity and Planck's constant; their values have been measured to high precision by a large number of experiments and do not depend on any other experimental parameters. In principle, then, first-principles molecular simulation can predict the properties of any substance under any temperature and pressure, which makes it ideally suited to studying the Earth's core under its extreme conditions. Because present computing power is still limited, some basic approximations are introduced in practice. For example, since a nucleus is thousands of times heavier than an electron, the nuclei can be treated as stationary while the electronic states are computed; the motions of the electrons and of the nuclei can thus be calculated separately. The electrons are treated quantum-mechanically, by density functional theory, quantum Monte Carlo and similar methods, while the nuclei can be assumed to obey Newton's laws of motion and can be propagated by molecular dynamics.
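A toy version of the scheme just described may make it concrete: nuclei advance by Newtonian (velocity-Verlet) dynamics, with a simple Lennard-Jones pair potential standing in for the expensive first-principles electronic-structure force evaluation. This is purely illustrative, not a model of iron or of the core.

```python
# Toy Born-Oppenheimer-style molecular dynamics: classical nuclei, with a
# Lennard-Jones potential replacing the per-step DFT force calculation.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Forces from the 12-6 Lennard-Jones potential V = 4*eps*(s^12 - s^6)."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = float(r @ r)
            s6 = (sigma * sigma / d2) ** 3
            coef = 24.0 * eps * (2.0 * s6 * s6 - s6) / d2
            f[i] += coef * r
            f[j] -= coef * r
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=200, mass=1.0):
    """Integrate Newton's equations; in ab initio MD each force call
    would be a fresh electronic-structure calculation."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

# Eight atoms on a loose cubic lattice, initially at rest.
pos = 1.5 * np.array([[i, j, k] for i in (0, 1) for j in (0, 1)
                      for k in (0, 1)], dtype=float)
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel)
print("kinetic energy after 200 steps:", round(0.5 * float((vel**2).sum()), 4))
```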
\tEarth's interior density and pressure changes with depth obtained from seismic wave data There are two main methods in today's geosciences to determine the melting point through calculation and simulation. One is through the two-phase coexistence method, that is, the initial state of the simulation is set to be half liquid and half solid, and after a period of operation at a given pressure and temperature, see which phase grows. Constantly adjust the temperature to determine the temperature at which the two phases can coexist for a long time, that is, the melting point. This method is easy to understand, but there are obvious uncertainties: for the reliability of the simulation results, a sufficient number of atoms must be used, which results in a heavy computational burden. Typically, systems of more than 1000 atoms take months or even more than a year to complete, and macro-scale effects cannot be included [3]. Another method is to use the thermodynamic integration method to calculate the Gibbs free energy: according to the laws of thermodynamics, when the temperature and pressure conditions are determined, the ultimate criterion for determining the stability of a certain structure is the Gibbs free energy, which is a Quantities related to energy, pressure, volume, temperature, and entropy of a system. The melting point can be determined from the intersection point of the free energy curve by calculating and determining the free energy of the solid-liquid two-phase under the given pressure and temperature conditions respectively. The determination of the free energy value needs to introduce a reference system with known free energy. Starting from this system, it gradually transitions to the system to be solved, and the energy difference between the system to be solved and the reference system is obtained through molecular dynamics simulation and integrated. , and finally obtain the free energy of a given system. This method is cumbersome in principle, but it is more reliable. At present, the representative work in related fields is the study of the melting point of iron under the pressure of the earth's core by Alfe et al. from the Global University of London (UCL). , the melting point of iron should be around 6200 K. The premise of this result is that the solid iron in the core exists in a close-packed hexagonal structure, but the solid structure of iron under high temperature and pressure is controversial. Which solid phase coexists. The biggest controversy centers on the hexagonal close-packed and body-centered cubic structures. The quantum mechanical calculation of pure iron shows that the hexagonal close-packed structure is more stable, but according to the research of Balonoshko\u2019s group at the Royal Swedish Institute of Technology, from the seismic wave data, it seems that the solid iron in the earth\u2019s core exists more like a body-centered cubic structure[4 ]. The more reasonable explanation now recognized is: the light elements mixed in the pure iron system strengthen the stability of the body-centered cubic structure [5]. Figure 3 \tDetermine the melting point through the Gibbs free energy curve. In the diagram, Gl and Gs are liquid and solid Gibbs free energies respectively. The temperature value corresponding to their intersection point is the melting point. The types of light elements mixed in the earth\u2019s core may be sulfur, oxygen, silicon, carbon, and hydrogen. 
The light elements mixed into the Earth's core may include sulfur, oxygen, silicon, carbon and hydrogen. According to metallurgical principles, admixture of light elements changes the melting point of iron. Exactly how these elements affect the temperature of the Earth's core is still unclear; answering that requires knowing how abundant they are and how they partition between the solid and the liquid. Current research calculates the distribution of elements between the solid and liquid phases from their chemical potentials, and is generally limited to binary systems such as iron-carbon, iron-silicon, iron-sulfur or iron-oxygen; mixtures of three or more elements are for now out of reach, so it is assumed that the different elements do not interact. The distribution of these light elements in the core and their precise influence on the melting point of iron are key directions for future research. The difficulty is circular: the exact temperature must be known to calculate the distribution of the light elements, yet the distribution of the light elements shifts the temperature. Existing work ignores this feedback and simply uses the melting point of the pure iron system as an approximation. Determining the composition-temperature interaction evidently requires an iterative process, sketched below: given a composition, compute the temperature; then recompute the composition at the new temperature, and repeat until successive iterations differ by a sufficiently small amount.
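The iteration just described is an ordinary fixed-point loop. In the sketch below the two update functions are invented placeholders for the real (and very expensive) free-energy and partitioning calculations; only the control flow is meant to be illustrative.

```python
# Fixed-point iteration between composition x and temperature T.
def melting_point(x_light):
    """Hypothetical melting point (K), falling as light-element content rises."""
    return 6200.0 - 3000.0 * x_light

def partition(T):
    """Hypothetical light-element fraction in the liquid at temperature T."""
    return 0.05 + 1.0e-5 * (6200.0 - T)

x, T = 0.0, melting_point(0.0)
for i in range(50):
    x_new = partition(T)            # composition at the current temperature
    T_new = melting_point(x_new)    # temperature at the new composition
    converged = abs(T_new - T) < 0.01
    x, T = x_new, T_new
    if converged:
        break
print(f"converged after {i} iterations: x = {x:.4f}, T = {T:.0f} K")
# With these gentle placeholder functions the loop contracts quickly;
# whether the real calculation converges at all is, as the text notes,
# an open question.
```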
There is no doubt that this demands a huge amount of computation, and whether the process converges at all is unknown. The determination of the core's temperature is clearly a problem in which everything affects everything else: composition, pressure, density and solid structure all enter, and reliable conclusions cannot be drawn from theoretical calculation alone; evidence from cosmochemistry, seismology and other disciplines must be combined. Humans may never be able to send probes to the Earth's core, but with further advances in space exploration, seismic sounding and high-performance computing, the calibration of the core's temperature will gradually become more accurate.

The oldest material in the solar system consists of the refractory inclusions in primitive chondrites, also called calcium- and aluminum-rich inclusions (Ca,Al-rich inclusions, or CAIs). They are composed mainly of high-temperature refractory minerals (condensation temperatures ≥ 1300 K) [1], commonly oxides and silicates such as melilite and spinel. Refractory inclusions vary in size (tens of microns to several centimeters) and in shape (rounded to irregular); they occur chiefly in carbonaceous chondrites and are comparatively rare in other chondrite classes (Fig. 1, Fig. 2).

Figure 1 An irregular refractory-inclusion fragment in the Ningqiang carbonaceous chondrite, composed of melilite and spinel and rimmed by a layer of aluminum-rich, high-calcium pyroxene (Al-Diop)

Figure 2 A rounded refractory inclusion in the NWA 2140 carbonaceous chondrite, composed of melilite, anorthite, spinel and related minerals, together with a small amount of platinum-group precious-metal particles

Refractory inclusions are enriched in refractory elements such as Al, Ca and the rare earth elements, and poor in volatile elements such as Na and Fe. Their chemical compositions closely match theoretical calculations of nebular condensation, marking them as the earliest condensates of the primordial solar nebula [1]. The oldest age measured in the solar system, 4.567 billion years, comes from refractory inclusions [2]. They are also rich in the relics of short-lived radionuclides [3]; such nuclides have very short half-lives and could only have existed in the solar system's earliest objects, which further shows that refractory inclusions are the first objects to have formed in the solar system. As the most primitive material of the solar system, refractory inclusions carry a wealth of physical and chemical information about its early formation and evolution and provide important evidence for understanding its origin. It was once thought that the primordial solar nebula was a uniformly mixed gaseous nebula, but work on refractory inclusions has revealed isotopic anomalies in many elements, and has even identified stardust from outside the solar system, so-called presolar material, which existed before the solar system formed [4]. These discoveries have revolutionized understanding of the solar system's origin. Deeper study has also shown that refractory inclusions are a very complex family of objects: some preserve traces of gas-solid fractionation of the rare earth elements, while others show evidence of crystallization from a melt. Their textures and mineral chemistry indicate a series of complex processes, including condensation, evaporation, melting, crystallization and later alteration [5], and the formation of some inclusions involved several repetitions of these processes [6]. What physical process accompanied the formation of refractory inclusions; in other words, what was their formation mechanism? This conundrum has puzzled planetary chemists for years. The process must have occurred very early in the solar system's history and lasted only a short time. Many astrophysical events have been invoked to explain the origin of refractory inclusions, including bipolar jets in the early stages of star formation and FU Orionis outbursts. One difficulty has never been satisfactorily resolved: the residence time of refractory inclusions in the protosolar nebula. Kinetic theory indicates that centimeter-sized particles could float in the solar nebula for only about 10,000 years before being accreted into the proto-Sun.
Yet both isotope chronology and the abundances of short-lived radionuclides indicate that refractory inclusions remained in the solar nebula for at least 2 million years, and there is as yet no good theory to explain this observation. The problem became still more complicated with the discovery of a special class of refractory inclusions in primitive meteorites, the FUN inclusions (Fractionation and Unidentified Nuclear effects). Their constituent minerals are even more refractory than those of ordinary refractory inclusions and they show pronounced isotopic anomalies, but FUN inclusions usually contain no relics of short-lived radionuclides. The absence of such traces would normally mean that an object formed late, after the short-lived nuclides had fully decayed and could leave no record. But the minerals of FUN inclusions are highly refractory, implying that they condensed from the solar nebula early; FUN inclusions also show many non-mass-dependent isotopic fractionation effects, implying that the primordial nebula was incompletely mixed, which again points to their being among the earliest solids of the solar system. These observations are mutually contradictory and hard to explain with existing theory. It has been suggested that FUN inclusions formed extremely early, before the short-lived radionuclides were introduced into the solar system, but this hypothesis still lacks theoretical and observational support; in fact, some short-lived radionuclides existed before the solar system formed and persisted into its early history [3]. How did refractory inclusions form, and what role did they play in the formation and evolution of the solar system? These questions await further study.

Hypotheses for the origin of the Earth-Moon system. There are four classes of hypothesis for the formation of the Earth-Moon system: co-accretion, fission, capture and giant impact [1]. The co-accretion (binary) hypothesis holds that the Earth and Moon condensed and accreted as a pair in the same region of the solar nebular disk, which accounts for their identical oxygen isotope compositions but cannot explain the Moon's missing or very small metallic core, nor the angular momentum of the Earth-Moon system. The fission hypothesis holds that the Moon formed from mantle material flung off by the Earth's rotation, which would explain the Moon's small or absent metallic core and its terrestrial oxygen isotope composition, but the angular momentum and kinetic energy of the Earth-Moon system are insufficient to throw mantle material off the Earth. The capture hypothesis holds that the Moon formed in a metal-poor region of the nebula and was later captured by the Earth, explaining the Moon's small metallic core but not its oxygen isotope composition, which is identical to the Earth's.

Figure 1 Schematic diagram of the giant-impact model of the Earth-Moon system [5]

More and more evidence indicates that the Earth-Moon system probably formed through a giant impact. William K. Hartmann and Donald R. Davis carried out simulations of a giant Earth-Moon impact as early as 1975 [2]. Subsequent research has continually refined the giant-impact model so that it satisfies the observed angular momentum of the Earth-Moon system,
the Moon's small metallic core, the respective masses of the two bodies, and other constraints [3, 4]. In the giant-impact hypothesis, the solar nebular disk condensed and accreted into planetesimals kilometers to hundreds of kilometers across, which then collided and merged into the major planets. The giant impact occurred late in the Earth's formation. The impactor, called Theia (the mother of the sun god in Greek mythology), formed near a Lagrangian point (a gravitational balance point between the Earth and the Sun); as its mass grew to roughly that of Mars, its orbit became unstable and it eventually struck the Earth at an oblique angle. The impact ejected much of the impactor's mantle and a considerable part of the Earth's mantle into space as melt and vapor; about 2% of the material formed a disk around the Earth, roughly half of which condensed and accreted to form the Moon, while most of the impactor's metallic core sank into the Earth's core (Figure 1). The giant impact must have occurred late in the formation of the proto-Earth, after metal and silicate had already differentiated into core and mantle. Model calculations show that most of the impactor's metal sank into the Earth's core, so the time of metal core-silicate mantle separation for the Earth and Moon can stand in for the time of the giant impact. Under reducing conditions W is siderophile and tends to enter the metallic core, while Hf is a typical lithophile element that resides in the silicate phase. The extinct-nuclide 182Hf-182W system therefore dates the metal core-silicate mantle differentiation of the Earth and Moon, which approximates the time of the giant impact: about 30-60 Ma after the solar nebula began to condense. (On extinct nuclides and their use in dating, see the article "Distribution and Origin of Extinct Nuclides in the Solar System".)

The main evidence for the giant impact. The Earth-Moon giant-impact hypothesis has become the mainstream view because more and more lunar evidence supports it. The most critical evidence is that the Moon has the same oxygen isotope composition as the Earth, falling on the common Earth-Moon oxygen isotope mass-fractionation line [6]. Oxygen has three stable isotopes, and its composition is usually expressed as the per-mil deviation of the ratios 17O/16O and 18O/16O from a standard substance (that is, δ17O and δ18O). Oxygen isotope variations produced by physical and chemical processes follow the law of mass-dependent fractionation, but meteorites of each chemical group have characteristic oxygen isotope compositions that do not fall on a single mass-fractionation line, reflecting the heterogeneity of oxygen isotopes in the solar system. The Earth and Moon are far larger than the minor planets, yet their oxygen isotopes fall on exactly the same mass-fractionation line. Theoretical calculations show that in the high-energy state after the giant impact, the Earth's silicate magma ocean and the material that formed the Moon underwent turbulent mixing and equilibrium exchange, completely homogenizing their oxygen isotope compositions [5]. (For more on oxygen isotopes, see "Oxygen Isotope Anomalies in the Solar System".)
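For readers unfamiliar with the notation, the mass-fractionation line discussed here has a standard compact form; the 0.52 slope below is the commonly used terrestrial value, and the notation is ours.

```latex
% Mass-dependent fractionation ties the two oxygen ratios together; in
% delta-space the terrestrial fractionation line has slope ~0.52:
\delta^{17}\mathrm{O} \;\approx\; 0.52\,\delta^{18}\mathrm{O}
% Deviation from that line is measured by
\Delta^{17}\mathrm{O} \;=\; \delta^{17}\mathrm{O} - 0.52\,\delta^{18}\mathrm{O}
% Each meteorite group carries its own characteristic Delta-17O, whereas
% lunar and terrestrial samples share Delta-17O ~ 0 on a common line,
% which is the key observation cited above.
```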
Other important evidence for the giant-impact hypothesis includes the strong depletion of volatiles in lunar rocks, the Moon's small or absent metallic core, and a redox state similar to the Earth's. The main lunar lithologies are basalt and anorthosite, and their volatile components are markedly depleted compared with samples from the Earth, Mars and Vesta [7]. Plots of element pairs with similar geochemical behavior but different volatility (such as K-La and Mn-Fe) clearly show how depleted lunar rocks are in volatiles relative to the Earth: the K and La contents of different planetary bodies fall on different straight lines, reflecting trends of magmatic crystallization and melting differentiation (the lines) and the differing degrees of depletion of the volatile element K in each body's starting material. It is to be expected that during the recondensation of high-temperature magma and vapor into the Moon after the giant impact, high-temperature evaporation caused the loss of volatile components. Lunar orbiters have used on-board magnetometers to measure the slight perturbations of the Earth's magnetic field caused by the Moon and thereby estimate the size of the lunar core [8]. The results show that the lunar core is very small, with a radius of (340±90) km, accounting for only 1%-3% of the Moon's mass; for comparison, the Earth's core accounts for about 33% of the Earth's mass. Moreover, the similar Hf/W values of lunar rocks and the Earth's mantle indicate that the Earth and Moon had essentially the same redox state when metal-silicate differentiation occurred. The unusually small lunar core is therefore most plausibly the result of a giant impact that sent most of the metal into the Earth's core. In addition, the anorthositic composition of the lunar highlands and the existence of KREEP-rich rocks in Oceanus Procellarum (KREEP: rich in potassium (K), rare earth elements (REE) and phosphorus (P)) indicate that the Moon once experienced a global-scale magmatic melting event (a magma ocean), and the giant-impact hypothesis readily supplies the enormous energy required to form one.

The status and open problems of the giant-impact hypothesis. Compared with the extreme complexity of the Earth-Moon system, numerical simulation of the giant impact is only a framework model, mostly built on smoothed-particle hydrodynamics. The constraints considered are a handful of physical quantities: the size of the proto-Earth (related to the impact time), the mass ratio of impactor to proto-Earth, the number of impactors (one or several), the relative velocity and geometry of the impact, the post-impact angular momentum of the system, the sizes of the Earth and Moon, the relative sizes of their metallic cores, and so on. Clearly, simulation of the giant-impact process and its outcome bears importantly on our understanding of the material composition and evolution of the Earth and Moon, and provides a geochemical basis for testing the giant-impact hypothesis.
The status quo and open problems of the giant impact hypothesis. Compared with the extremely complex Earth-Moon system, numerical simulations of the giant impact are only framework models, mostly built on smoothed-particle hydrodynamics, and the constraints considered are a few physical quantities: the size of the proto-Earth (related to the impact time), the impactor-to-proto-Earth mass ratio, the number of impactors (one or several), the relative velocity and geometry of the impact, the post-impact angular momentum of the system, the sizes of the Earth and the Moon, the relative sizes of their metal cores, and so on. Clearly, simulating the giant impact process and its aftermath has an important bearing on our understanding of the material composition and evolution of the Earth and the Moon, and provides a geochemical basis for testing the hypothesis. Open questions include, for example: the material composition of the Earth and the Moon before and after the giant impact; the recondensation and accretion of the ejecta and the possible formation of asteroids; the thermal state of the early Earth and Moon after the impact and the existence and size of magma oceans; elemental and isotopic fractionation during the interaction of high-temperature magma with evaporating gas; and gravity-driven separation of the impact products. The Earth has gone through 4.5 billion years of evolution and its early record has been largely erased, while our understanding of the Moon is still at the discovery stage. Testing the giant impact hypothesis will remain a key scientific question for Earth and lunar science for a long time to come. The implementation of a new round of deep-space exploration programs, including China's Chang'e project, will further deepen our understanding of the origin of the Earth-Moon system.

The discovery of extinct nuclides. Meteorites, the oldest "fossils" left over from the formation and evolution of the solar system, preserve the excesses of daughter isotopes produced by the decay of short-lived radionuclides that existed in the early solar system. Because the half-lives of these radionuclides are 0.1~103 Ma, much shorter than the age of the Earth, they have all decayed away and are therefore called extinct nuclides. Taking 26Al as an example, its half-life is 0.73 Ma and its decay product is 26Mg. To detect the excess 26Mg produced by the decay of 26Al, the three stable isotopes of Mg (24Mg, 25Mg, and 26Mg) must be measured. The Mg isotope composition of a sample is usually expressed as the per-mil deviation of the 26Mg/24Mg and 25Mg/24Mg values from a standard substance: δ25Mg = [(25Mg/24Mg)sample/(25Mg/24Mg)standard − 1] × 1000‰, δ26Mg = [(26Mg/24Mg)sample/(26Mg/24Mg)standard − 1] × 1000‰. Isotopic variations produced by physical and chemical processes follow the law of mass fractionation, satisfying δ26Mg = 1.92 × δ25Mg. Defining Δ26Mg = δ26Mg − 1.92 × δ25Mg, mass-dependent processes give Δ26Mg = 0. Lee et al. analyzed the Ca- and Al-rich inclusions in the Allende carbonaceous chondrite and found a clear excess of 26Mg (Δ26Mg > 0) that correlates positively with the Al/Mg value, confirming for the first time that the excess 26Mg was formed by the decay of 26Al [1]. The advancement and popularization of secondary ion mass spectrometry (SIMS) has made possible in-situ Mg isotope analysis of the Al-rich, Mg-poor minerals in Ca- and Al-rich inclusions, greatly advancing research on 26Al and other extinct nuclides. Because of the very short half-life of 26Al, only the oldest solar system samples, namely Ca- and Al-rich inclusions and Al-rich chondrules, can preserve the excess 26Mg produced by its decay. A large number of analyses show that Ca- and Al-rich inclusions have the highest 26Al/27Al value (5×10−5), which is also taken as the initial 26Al/27Al value of the solar system, while the initial 26Al/27Al values of chondrules are systematically lower (~1×10−5) [2].
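The δ and Δ definitions above translate directly into code. Here is a small sketch of the Δ26Mg bookkeeping, with made-up per-mil values purely for illustration.

```python
def capital_delta_26mg(d25mg: float, d26mg: float) -> float:
    """Excess 26Mg beyond mass-dependent fractionation: delta26Mg - 1.92*delta25Mg."""
    return d26mg - 1.92 * d25mg

# A sample affected only by mass-dependent fractionation sits at Delta26Mg = 0:
print(capital_delta_26mg(d25mg=2.0, d26mg=3.84))  # -> 0.0
# A CAI-like sample with radiogenic 26Mg from in-situ 26Al decay plots above it:
print(capital_delta_26mg(d25mg=2.0, d26mg=5.0))   # -> 1.16 per mil excess
```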
In addition to 26Al, other extinct nuclides that have been discovered include 10Be, 36Cl, 41Ca, 53Mn, 60Fe, 107Pd, 129I, 146Sm, 182Hf, 244Pu, etc. Table 1 lists the half-lives, initial ratios, and other relevant parameters of these extinct nuclides.

Causes of the extinct nuclides. The origin of the extinct nuclides remains disputed, with at least two mainstream views: formation within the solar system, and addition from outside the solar system. Formation within the solar system is represented by the solar irradiation hypothesis, which holds that the earliest Sun passed through a T Tauri stage of intense solar wind radiation [4]. This intense radiation heated and evaporated ferromagnesian silicate dust in the region near the Sun (<0.1 AU), forming refractory inclusions rich in Ca and Al, while farther from the Sun the dust only melted to form silicate chondrules. At the same time, solar high-energy particles interacted with matter to produce 26Al and other extinct nuclides. Since Ca- and Al-rich inclusions formed closer to the Sun than chondrules, the content of extinct nuclides such as 26Al in the former is correspondingly higher. After their formation, the Ca- and Al-rich inclusions were ejected outward to the region of chondrite condensation and accretion (Fig. 1). Clearly, under the solar irradiation hypothesis, the difference in 26Al content between Ca- and Al-rich inclusions and chondrules has nothing to do with time, only with the place of formation. The view that the extinct nuclides were added from outside is represented by the supernova source hypothesis, which holds that they formed in a nearby supernova and were added to the solar nebula when it exploded; the same explosion triggered the collapse of the solar nebula and thus the formation of the solar system. Under the supernova source hypothesis, the extinct nuclides were uniformly distributed in the solar nebula, and the difference in initial ratios between different components represents the difference in their formation times, that is, an isotope clock.

(Table 1: Discovered extinct nuclides and their important parameters [3, 5]: half-life /Ma, daughter isotope, reference nuclide, initial ratio.)
(Figure 1: Schematic diagram of the solar wind radiation hypothesis [4].)

Both the solar irradiation hypothesis and the supernova source hypothesis are testable, but the evidence obtained so far supports solar wind radiation in some cases and a supernova source in others. The abundances and relative ratios of 26Al and the other extinct nuclides are consistent with both models. However, the values given by the solar irradiation model apply only to regions very close to the Sun: moving outward, the solar wind flux decreases as the square of the distance, so the abundances of irradiation-produced nuclides decrease correspondingly, and with the shielding by nebular dust and gas the falloff should be even faster. The large body of existing data, however, cannot constrain the spatial distribution of extinct nuclides in the solar nebula, chiefly because most analyses are of Ca- and Al-rich inclusions, which probably all formed in the same region of the nebula.
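The inverse-square argument in the previous paragraph is easy to make concrete. The sketch below compares the falloff of irradiation-produced nuclide abundance with and without an exponential shielding term; the reference distance and shielding length are invented, illustrative parameters, not values from the literature.

```python
import math

R0_AU = 0.05  # assumed reference distance near the Sun where production = 1.0

def relative_production(r_au, shielding_length_au=None):
    """Relative abundance of irradiation-produced nuclides at distance r_au (AU).

    Pure inverse-square falloff of solar-wind flux, optionally damped by an
    exponential term standing in for dust/gas shielding in the nebula.
    """
    f = (R0_AU / r_au) ** 2
    if shielding_length_au is not None:
        f *= math.exp(-(r_au - R0_AU) / shielding_length_au)
    return f

for r in (0.05, 0.1, 0.5, 1.0, 2.5):
    print(f"r = {r:4.2f} AU: bare {relative_production(r):9.3e}, "
          f"shielded {relative_production(r, shielding_length_au=0.5):9.3e}")
```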
The newly discovered 36Cl may provide a way to reveal the spatial distribution of extinct nuclides in the solar nebula [5]. Unlike other extinct nuclides (such as 26Al, 10Be, 41Ca), Cl is a highly volatile element: it is incorporated when Ca- and Al-rich inclusions or chondrules, after forming and being transported to the accretion regions of different chemical groups, react with Cl-bearing gas during alteration. In practice, however, 36Cl analysis is very difficult, especially in ordinary chondrites and enstatite chondrites, where Ca- and Al-rich inclusions are very rare and only a small fraction contain the Cl-rich secondary minerals suitable for 36Cl work. The discovery of 10Be provided important evidence for the solar irradiation hypothesis [6], because this nuclide can only be formed by spallation under high-energy particle irradiation, not in supernovae. However, 10Be was found in some Ca- and Al-rich inclusions that show no detectable 26Al or 41Ca [7], indicating that the latter two extinct nuclides are unrelated to irradiation. The discovery of 60Fe gives another important constraint on the origin of the extinct nuclides, because this nuclide cannot be produced by high-energy particle irradiation [8]. In recent years, with the development of Pb-Pb isotope dating, the analytical precision has reached <1 Ma [9, 10]. High-precision Pb-Pb dating of Ca- and Al-rich inclusions and chondrules gives absolute age differences (~2 Ma) in good agreement with the interval ages given by 26Al, indicating that the difference in 26Al/27Al values between these components is a function of time; many more high-precision Pb-Pb ages are nevertheless still needed. In addition, studies of the correlations between different extinct nuclides can provide important tests of their proposed origins. However, because the closure conditions of the different extinct nuclide systems differ considerably, later alteration and thermal metamorphism on the nebula and on asteroids often disturb the systems to varying degrees.

Applications of extinct nuclides. Since the half-lives of extinct nuclides are only 0.1~103 Ma, their most important and widespread application is the precise dating of early solar system events (0~100 Ma), under the premise that the extinct nuclides were uniformly distributed in the solar nebula. Unlike the absolute ages given by the traditional U-Th-Pb, Rb-Sr, Sm-Nd, and K-Ar systems, an extinct nuclide can only give the interval age between two components from the difference in their initial ratios; if the initial ratio of the solar nebula has been determined, it can serve as the zero point of the clock and the formation time of a sample can be obtained. Depending on the nature of each extinct nuclide system, it can be applied to dating different kinds of events. Ca- and Al-rich inclusions often underwent late-stage alteration, forming Al-rich, Mg-poor minerals such as anorthite and sodalite, whose Mg isotopes indicate that the alteration occurred at least 1.5 Ma after the inclusions formed. In some achondrites, excess 26Mg formed by the decay of 26Al has been detected [11], indicating that magmatic melting events in the early solar system occurred very early, about 1~5 Ma after the solar system began to form.
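The agreement quoted above between the 26Al interval age and the ~2 Ma Pb-Pb age gap can be reproduced with the decay law. A minimal sketch, using the half-life and initial ratios given in the text:

```python
import math

HALF_LIFE_AL26_MA = 0.73
DECAY_CONST = math.log(2) / HALF_LIFE_AL26_MA

def interval_age_ma(ratio_early: float, ratio_late: float) -> float:
    """Time for the live 26Al/27Al ratio to decay from ratio_early to ratio_late."""
    return math.log(ratio_early / ratio_late) / DECAY_CONST

# Canonical CAI initial value vs. a typical chondrule initial value:
dt = interval_age_ma(5e-5, 1e-5)
print(f"26Al interval age ≈ {dt:.2f} Ma")  # ~1.7 Ma, close to the ~2 Ma Pb-Pb gap
```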
36Cl is highly volatile, and its host minerals formed mainly during low-temperature alteration on the nebula or on asteroid parent bodies, so it promises to be an important isotopic timescale for low-temperature processes in the early solar system. W is siderophile while Hf is a typical lithophile element; during metal-silicate melt differentiation W partitions mainly into the metal while Hf stays in the silicate phase, so the 182Hf-182W system is the most important method for studying the timing of metal core-silicate mantle differentiation, including that of the Earth. Core-mantle differentiation on the Earth occurred before 182Hf had completely decayed, so a small amount of 182Hf remained in the silicate mantle and decayed to 182W. Compared with the W isotope composition of chondrites, the silicate Earth shows a 182W excess, giving a core-mantle differentiation time of about 30 Ma after the formation of the Ca- and Al-rich inclusions (an absolute age of about 4.54 billion years, based on the CAI Pb-Pb age of 4.567 billion years) [12, 13]. The 146Sm-142Nd system is widely used to study the crust-mantle differentiation of the Earth and the Moon [14], but the time range it gives is relatively broad (100~240 Ma after the formation of Ca- and Al-rich inclusions). The discovery of extinct nuclides such as 26Al and 53Mn in achondrites indicates that magmatic melting in the solar system began very early, and the decay of extinct nuclides was an important energy source for the early thermal evolution of asteroids and terrestrial planets. At the same time, whether under the solar irradiation hypothesis or the supernova source hypothesis, the formation and distribution of extinct nuclides are closely tied to the formation and early evolution of the solar system. With the application of a new generation of high-spatial-resolution, high-sensitivity, high-precision secondary ion probes, new breakthroughs can be expected in the study of extinct nuclides.

The discovery of oxygen isotope anomalies. Oxygen has three stable isotopes, 16O, 17O, and 18O, whose average relative abundances in the solar system are 99.759%, 0.037%, and 0.204%, respectively. The oxygen isotope composition of a sample is expressed as the per-mil deviation of its 18O/16O and 17O/16O values from a standard substance (mean ocean water): δ18O = [(18O/16O)sample/(18O/16O)standard − 1] × 1000‰, δ17O = [(17O/16O)sample/(17O/16O)standard − 1] × 1000‰. Since the mass difference between 18O and 16O is twice that between 17O and 16O, any physical or chemical process changes the oxygen isotope composition along δ17O = 0.52 × δ18O, that is, by isotopic mass fractionation. It is therefore usually sufficient to measure the 18O/16O value of a sample and calculate the 17O/16O value from the mass fractionation law. Clayton et al. selected the Ca- and Al-rich refractory inclusions of the Allende carbonaceous chondrite (the earliest aggregates formed in the solar system; see the article "Distribution and Origin of Extinct Nuclides"), measured 16O, 17O, and 18O, and discovered for the first time that their oxygen isotope compositions do not fall on the Earth-Moon mass fractionation line of slope 0.52 but instead define a straight line of slope 1 (Fig. 1) [1]. Oxygen isotope anomalies therefore generally refer to isotope compositions that deviate from the Earth-Moon mass fractionation line.
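The δ notation and the 0.52 slope above suggest an obvious derived quantity, the deviation Δ17O from the Earth-Moon fractionation line, which is what "anomaly" means quantitatively. A small sketch with illustrative numbers:

```python
def capital_delta_17o(d17o: float, d18o: float) -> float:
    """Deviation from the Earth-Moon mass fractionation line (slope 0.52)."""
    return d17o - 0.52 * d18o

# A purely mass-fractionated terrestrial sample sits on the line:
print(capital_delta_17o(d17o=2.6, d18o=5.0))      # -> 0.0
# A 16O-rich, CAI-like composition plots far off it:
print(capital_delta_17o(d17o=-50.0, d18o=-50.0))  # -> -24.0 per mil
```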
The oxygen isotope composition of the Ca-Al-rich refractory inclusions can be regarded as a mixture of two end members: one is solar system material on the Earth-Moon mass fractionation line, and the other is a 16O-rich component. Since the discovery of the oxygen isotope anomaly, analyses of a large number of meteorite samples have shown that the 16O-rich anomaly resides mainly in the Ca- and Al-rich inclusions, and that the δ18O and δ17O values of the 16O-rich end member are about −50‰. Meteorites are classified into chemical groups according to their whole-rock chemical compositions and rock-mineralogical characteristics, each group representing a different parent asteroid (or planet). Whole-rock oxygen isotope analyses show that the different chemical groups have characteristic oxygen isotope compositions (Fig. 1): ordinary chondrites and Mars fall above the Earth-Moon mass fractionation line (16O-poor anomalies), while Vesta (the HED meteorites), the silicate inclusions of iron meteorites, and carbonaceous chondrites fall below it (16O-rich anomalies). This shows that the oxygen isotope composition of the solar system is heterogeneous, and oxygen isotopes can serve as an important parameter for assigning meteorites to chemical groups.

Formation mechanism of the oxygen isotope anomaly. The most direct explanation for the anomaly of the Ca- and Al-rich inclusions is mixing of 16O-poor initial solar system material with 16O-rich extrasolar material. However, isotope analyses of other elements (such as Mg, Si, Ca) in the most 16O-rich minerals (spinel and diopside) found no corresponding anomalies [3]. Moreover, although various presolar grains have been found in the matrices of the most primitive chondrites over the past 20 years, the proportion of 16O-rich grains is very low.

(Figure 1: Oxygen isotope composition of solar system materials [2]. The Earth-Moon mass fractionation line, slope about 0.52, represents the oxygen isotope compositions of Earth and Moon materials; the CCAM line, slope about 1, represents the anhydrous minerals of carbonaceous chondrites and can be interpreted as mixing of a 16O-rich component (δ18O and δ17O about −50‰) with Earth-like material. Mars, the ordinary chondrites (H, L, LL), and the R and CI chondrites are relatively poor in 16O compared with the Earth and fall above the Earth-Moon line; on the contrary, the other meteorite groups are relatively rich in 16O and fall below it. The content of presolar material in meteorites is very low [4].)

In 1983, Thiemens and Heidenreich, synthesizing ozone from oxygen in the laboratory, discovered for the first time non-mass-dependent isotope fractionation produced by a chemical reaction [5]: the ozone was poor in 16O while the residual oxygen was rich in 16O, and on the three-isotope diagram of oxygen they defined a straight line of slope 1, the same as the mixing line of the Ca- and Al-rich inclusions. However, this chemical reaction cannot explain the oxygen isotope anomalies in meteorites, because the solar nebula was composed mainly of hydrogen, CO was the main oxygen-bearing species, and the contents of O2 and O3 were extremely low.
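The claim earlier in this passage, that mixing with a −50‰ end member produces a slope-1 line rather than the 0.52 fractionation slope, can be checked numerically. A sketch assuming simple linear mixing in δ space, i.e., comparable oxygen contents in both end members:

```python
import numpy as np

# End members in (d18O, d17O) per-mil space, values taken from the text:
earth = np.array([0.0, 0.0])         # Earth-Moon composition
rich_16o = np.array([-50.0, -50.0])  # 16O-rich end member (~ -50 per mil in both)

f = np.linspace(0.1, 1.0, 10)[:, None]  # mixing fraction of the 16O-rich component
mix = (1 - f) * earth + f * rich_16o    # linear mixing (equal O contents assumed)

slopes = mix[:, 1] / mix[:, 0]          # d17O / d18O along the mixing trend
print(slopes)  # all 1.0 -> a slope-1 mixing line, unlike the 0.52 fractionation line
```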
The third mechanism proposed for the oxygen isotope anomaly in extraterrestrial matter is photochemical: a self-shielding isotope effect in the photolysis of CO, the most abundant oxygen-bearing molecule in the Milky Way. A CO molecule excited to a higher state by an ultraviolet photon decomposes into ground-state C and O atoms. The UV band required for this stimulated decomposition is narrow and isotope-dependent. Because 12C16O is overwhelmingly abundant, the surface of the nebula absorbs essentially all photons of its characteristic energy, while photons of the other energies penetrate into the nebular interior, so 13C16O, 12C17O, and 12C18O are selectively photolyzed relative to 12C16O, producing 16O-poor oxygen atoms; these then react with other components to form 16O-poor H2O and mineral grains. The model requires the solar nebula itself to have had a 16O-rich composition at least similar to that of spinel (δ17O and δ18O about −50‰), while the oxygen isotope compositions of the asteroids, the Moon, the Earth, and Mars were shifted away from it by this photochemical processing.

Applications of isotope anomalies. Although the mechanism of the oxygen isotope anomalies remains unclear, the characteristic oxygen isotope compositions of the different meteorite chemical groups and the heterogeneity of oxygen isotopes in the solar nebula make oxygen isotopes a widely used tracer. Ca- and Al-rich inclusions are the earliest aggregates formed in the solar system; besides oxygen isotope anomalies they also carry the daughters of various extinct nuclides, and they are a research focus in cosmochemistry. The whole-rock oxygen isotope compositions of the different chemical groups differ markedly, yet the Ca- and Al-rich inclusions in all of them fall on the same slope-1 mixing line, indicating that the inclusions formed in a single source region and then migrated to the accretion regions of the asteroids. Earth materials and lunar rock samples fall on the same oxygen isotope mass fractionation line, which has become the most important geochemical evidence for the giant impact hypothesis of the Earth-Moon system (see the article on that hypothesis). The silicate phases of eucrites, diogenites, howardites, and mesosiderites have exactly the same Δ17O, confirming that they formed on the same parent body, Vesta. The oxygen isotope compositions of Martian meteorites fully satisfy the mass fractionation relationship, indicating that early Mars may have passed through a global magma ocean stage that homogenized its oxygen isotopes.

The discovery of non-mass-dependent fractionation has also opened a new avenue for research on the Earth's atmosphere and environment [6]. Components found to show non-mass-dependent fractionation include O3, CO2, CO, and N2O, as well as sulfate, nitrate, and perchlorate aerosols. Since non-mass-dependent fractionation is tied to photochemistry, this isotope effect is highly characteristic evidence that a substance has participated in atmospheric cycling. The positive Δ17O anomaly (+4.6‰) of sedimentary sulfate can be explained by volcanic sulfur injected into the ozone layer being oxidized to sulfate by O3 or H2O2 carrying a positive Δ17O anomaly (+30‰) [7].
The negative Δ17O (−0.7‰) anomaly of gypsum and barite deposited since 750 Ma may record surface weathering involving stratospheric O2 that carries a negative Δ17O anomaly. Since the oxygen isotope composition of stratospheric O2 is set by the O3-CO2-O2 reaction, an increase in CO2 partial pressure is equivalent to an increase in the positively anomalous oxygen source, driving the Δ17O of O2 to more negative values; the model can therefore be inverted to estimate the CO2 partial pressure of the ancient atmosphere. The results show that the early Cambrian atmosphere had a high CO2 partial pressure, with a peak at 635 Ma [8]. Sulfur and oxygen belong to the same group of the periodic table and take multiple valence states. Sulfur isotope measurements of sulfide and sulfate in Martian meteorites revealed a negative Δ33S anomaly, and laboratory simulations of SO2 and H2S photolysis yield similar results, so the sulfur isotope anomalies in Martian meteorites point to sulfur cycling through the Martian atmosphere [9]. Measurements of the sulfur isotopes (32S, 33S, 34S, 36S) of sulfide and sulfate in Precambrian sedimentary and metamorphic rocks not only found non-mass-dependent sulfur fractionation but showed that the anomaly varies with time: for samples with ages between 2090~2450 Ma, Δ33S increases from 0.02‰ to 0.34‰ with increasing age; in still older samples Δ33S ranges from −1.29‰ to +2.04‰; younger samples show essentially no non-mass-dependent fractionation. This pattern may reflect the influence of the atmosphere on the Earth's early sulfur cycle [10]. In addition, sulfur isotope analyses of sulfide inclusions in diamond have also found non-mass-dependent anomalies, indicating that atmospheric sulfur enters the Earth's deep cycling [11].
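Δ33S, used throughout this passage, is defined analogously to Δ17O as the deviation of δ33S from the mass-dependent expectation. A minimal sketch, assuming the commonly used exponent 0.515 for the 33S/34S mass-fractionation relation; the sample values are illustrative, not measured data.

```python
def capital_delta_33s(d33s: float, d34s: float) -> float:
    """Deviation of d33S from mass-dependent fractionation.

    Uses the exact-form MDF prediction with the common exponent 0.515;
    for small deltas this reduces to d33S - 0.515 * d34S.
    """
    mdf_d33s = 1000.0 * ((1.0 + d34s / 1000.0) ** 0.515 - 1.0)
    return d33s - mdf_d33s

# Phanerozoic-style sample: purely mass-dependent, Delta33S ~ 0
print(f"{capital_delta_33s(d33s=5.15, d34s=10.0):+.2f} per mil")
# Archean-style sample carrying a photochemical (MIF) signature
print(f"{capital_delta_33s(d33s=7.00, d34s=10.0):+.2f} per mil")
```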
Problems and challenges. Oxygen isotope anomalies in the solar system have been known for more than 30 years, but their cause remains unclear. The three main hypotheses, namely presolar interstellar material, chemical reaction effects, and self-shielding during photolysis, can each explain part of the observations. Recent progress tends to favor self-shielding during photolysis. Under this hypothesis, the Ca- and Al-rich inclusions, long regarded as anomalous, actually record the oxygen isotope composition of the solar system, while the oxygen isotopes of the Moon, the Earth, and Mars are products of non-mass-dependent fractionation. Since the Sun holds 99.9% of the mass of the solar system, and assuming the solar wind can stand in as a sample of the Sun, the hypothesis can be tested by analyzing the oxygen isotopes of the solar wind. Lunar soil has been irradiated by the solar wind for a long time, and the solar wind oxygen isotope composition can be obtained by profiling the metal grains within it; however, the two existing reports give completely opposite, 16O-rich versus 16O-poor, results [12, 13]. Another approach is to launch deep-space probes to collect solar wind samples directly: for example, the solar wind samples returned by the American Genesis mission, now under analysis, may provide key information for resolving the oxygen isotope anomaly.

The non-mass-dependent fractionation effects tied to the atmospheric evolution of the Earth and Mars are a new research focus. Beyond laboratory simulation, more atmospheric components and aerosols of various kinds need to be analyzed; because the effects are generally small, high-precision isotope analysis of trace components is very challenging. At the same time, more complete system models (covering atmospheric circulation, chemical reaction and photolysis, isotope fractionation, and so on) are needed to invert the information carried by non-mass-dependent fractionation.

According to the observations of modern physics and cosmology, the observable universe shares the same material basis, and the physical laws we have discovered apply throughout it. The Earth is, so far, the only planet known to host life as we know it. When scientists emphasize "life as we know it", they mean life built on the same biochemical basis as ours: its fundamental chemistry is based on carbon, and its existence is inseparable from liquid water [1, 2]. Is our Earth the only oasis of life in the solar system and the universe? The significance of this simple yes-or-no question is self-evident. To search for life beyond the Earth, the first place we can investigate is our own solar system. We now know that although life on Earth is enormously diverse, it has a single origin; the real significance of searching for life beyond the Earth is therefore to find a second, independent origin of life, which would be far more meaningful than finding a Martian, or animal fossils, on Mars. If we found that another independent lineage of life needs water as we do, is based on the chemistry of carbon, and uses amino acids as the most basic elements of its genetic architecture, then we could say that in every corner of the universe where life exists, it follows similar evolutionary paths, whether quickly or slowly. Given similar matter and evolution, if a technological civilization exists in some other corner of the universe, we might achieve some level of communication with it; but if the biochemistry of another form of life is completely different from ours, then even if it has reached some very high stage of evolution, we may still find communication difficult. Finding a second origin of life would thus answer a philosophical question: are we the only living world in the universe?

There are eight planets in our solar system, and many more moons orbiting them. Where is life most likely to exist? Mercury is so close to the Sun that huge tidal forces have brought its rotation period close to 2/3 of its orbital period, and the rotation continues to slow; the side facing the Sun is always scorching while the side facing away is always frigid, no single place can hold liquid water, and the chances of life there are slim. The atmospheric temperature of Venus is above 450°C, and its near-surface environment cannot maintain liquid water, so the possibility of life there is also very small. If we could find a second origin of life, we would be able to find a third, and thus demonstrate the universality of life in the universe; perhaps Mars holds the key to this question. Mars today is a dry, cold planet constantly exposed to ultraviolet radiation.
Although the surface environment of Mars is currently harsh and unsuitable for life, Mars is the planet now considered most likely to have harbored life in the past or even the present [3]. Recent theoretical simulations and observations show that liquid water may exist near the Martian surface. For example, the Phoenix lander, which touched down in the north polar region of Mars in May 2008, found ice beneath the soil; earlier, the rovers Spirit and Opportunity found ferric sulfate on the Martian surface. These observations all indicate that water-rock interactions exist on Mars now and existed in the past [4]. There is now direct evidence that massive surface runoff existed on Mars 4 billion years ago, with riverbed features very well preserved. About 3 billion years ago there was still local surface flow; afterwards, apart from the solid water of the two polar ice caps, liquid water gradually disappeared from the surface. The main reasons for the loss of water are, first, that Mars's small gravity allows water to escape from the atmosphere easily, and second, that its weak magnetic field can hardly fend off ionizing radiation from the Sun, so large numbers of water molecules were dissociated and escaped from the atmosphere. American and European space agencies have been exploring Mars with increasing frequency because the Martian near-surface meets the conditions for the existence and evolution of life. In the near future, humans will be able to collect suitable samples on Mars and analyze them with the techniques of molecular biology already in hand, to confirm whether there is life on Mars and whether it resembles Earth life at the molecular level (Figure 1).

Exploration in recent years has revealed water ice on the Martian surface, ice beneath the Martian soil, and, by remote sensing, possible methane anomalies in the atmosphere. All these signs indicate that conditions sufficient for Earth-type microorganisms exist in the Martian subsurface. According to estimates by students of life's evolution, forming the first cells on Earth took about 200 million years, perhaps as little as 20 million years, while the surface of Mars was able to maintain liquid water for nearly 1 billion years. Life may therefore well have arisen there early on, and the genetic structures of that life, or the biomarker minerals formed by its metabolism, may be preserved to this day. Moreover, we cannot rule out liquid water about 3 km below the Martian surface, that is to say, a deep biosphere; the existence of the Earth's deep biosphere lends good support to the possibility of one on Mars. When Mars's dry, cold climate took hold and its water was lost, ions dissolved in the water crystallized into typical evaporite minerals, and research now shows that some microbes can survive in the tiny fluid inclusions of such evaporite crystals for millions of years or longer.

(Figure 1: The next step of Mars exploration is to use robotic drilling equipment to obtain water and soil samples from the Martian subsurface and directly analyze whether genetic material is present.)

The giant planets beyond Mars are hostile to life because their atmospheres are too thick and their surface temperatures too low, but many of their moons, owing to special environmental conditions, are of particular significance for the search for life beyond the Earth.
Jupiter's moon Europa has an ice shell several kilometers thick, beneath which may lie an ocean more than 100 kilometers deep [5]. The water, tides, volcanic energy, and nutrients it possesses all support the view that it may host life (Figure 2). Compared with the Earth, Europa is an extreme environment: extremely low temperature in the ice shell, extremely high radiation at the surface, extremely high pressure at the seafloor, and changing salinity in the ocean; in addition, Europa's hydrosphere may have a rather low pH. We know nothing about possible life forms on Europa: unicellular or multicellular? We do not even know whether any life there would be based on the chemistry of carbon. Since Europa receives little sunlight, photosynthesis is unlikely to drive its ecosystems, but the array of hydrocarbons produced by surface radiation could support some forms of life, and life in the water or at the ocean floor could sustain an ecosystem by harvesting chemical energy.

The Voyager probes found that the atmosphere of Saturn's moon Titan is composed mainly of nitrogen and methane, and that ultraviolet radiation acting on nitrogen, methane, and their derivatives has produced many complex organic compounds. Titan has the only nitrogen-dominated atmosphere in the solar system besides the Earth's, and its environment can help us understand the chemical evolution of the primitive Earth and of life on it. Although Titan's atmosphere shares many similarities with the Earth's, its average surface temperature is only −180°C, and in contrast to the Earth's watery oceans, Titan's atmosphere contains only trace amounts of water vapor. What Titan teaches us is that sufficiently complex organic matter can form without water. On the early Earth, with its oceans, the chemical evolution toward life must have proceeded differently [6]; materials peculiar to the Earth, such as the clays, sulfides, carbonates, and oxides formed at its early surface, may have played a more important role in the evolution of complex organic matter. The satellites of the other giant planets deserve some consideration, but in general they lack the conditions for life to arise and evolve. The technological achievements of mankind are now sufficient to explore the entire solar system; since no aliens have come to Earth to reveal the mystery of life in the universe, more careful study of Mars and of the giant planets' satellites (such as Europa and Titan) is a very realistic plan for the foreseeable future.

(Figure 2: Beneath Europa's frigid ice there may be an ocean of water more than 100 kilometers deep; direct water-rock interaction there can produce minerals and other compounds, providing the basic conditions for life.)

The oxygen content of today's atmosphere is about 21%, whereas the atmosphere of the newly formed Earth contained no oxygen, or only a negligible amount. So when did the atmosphere begin to hold appreciable oxygen? This question matters greatly for understanding the transformation of the surface environment and the evolution of early life through geological history. Through the mineralogical, petrological, and geochemical evidence preserved in well-preserved Precambrian sedimentary rocks, together with geochemical model calculations, one can define and estimate the oxygen content of the ancient atmosphere and determine when it became oxidizing.
However, because well-preserved original sedimentary records are scarce over long stretches of time, both the interpretation of analytical data and geochemical modeling can give different answers, so estimates of the oxygen content of the Earth's early atmosphere vary widely. Geologists have found detrital grains of uraninite (UO2) and pyrite (FeS2) in fluvial deposits older than 2.3 Ga in South Africa. These are oxidation-sensitive minerals, easily oxidized in the modern atmosphere, and on this basis some researchers infer that atmospheric oxygen was then low, possibly below 0.1% of the present atmospheric level (Figure 1) [1]. At the same time, geologists have found Fe oxides in paleosols dating to 2.2 Ga, indicating that atmospheric oxygen then was well above 1% of the modern level [1]. Some researchers therefore proposed a sharp rise in atmospheric oxygen between 2.4 and 2.2 Ga, known as the Great Oxidation Event (Fig. 1). A minority in the geosciences holds different views: for example, the group of Professor Ohmoto at Pennsylvania State University has reported primary hematite in sedimentary rocks of 3.46 Ga and 2.76 Ga, and argues on this basis that the early atmosphere was already oxidizing from very early times (>3.4 Ga) [2, 3]. More independent geological and geochemical evidence is therefore needed to settle the question.

(Figure 1: Possible evolution of the oxygen content of the Earth's atmosphere through geological history [5]; PAL denotes the present atmospheric level of oxygen.)

The carbon isotope age curve of the Earth's early marine carbonates shows a large positive carbon isotope excursion between 2.22 and 2.06 Ga, reflecting a possibly anomalously high burial rate of organic carbon during this period and hence an anomalously high rate of O2 production [4]. Sulfur isotope studies of sulfate and sulfide in marine sediments likewise show that the difference in δ34S between sulfate and sulfide is generally small before 2.4 Ga but increases markedly afterwards [5]. These lines of evidence all point to a marked rise in atmospheric oxygen around 2.4 Ga.
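The link asserted above between a positive carbonate carbon isotope excursion and enhanced organic carbon burial comes from a steady-state isotope mass balance. A minimal sketch, assuming round canonical values (a mantle-like input of −5‰ and a photosynthetic offset of 25‰, both assumptions here rather than values from this article):

```python
def f_organic(d13c_carb: float, d13c_input: float = -5.0, eps: float = 25.0) -> float:
    """Fraction of carbon buried as organic matter, from isotope mass balance.

    Steady state: d13c_input = f*d13c_org + (1 - f)*d13c_carb,
    with d13c_org = d13c_carb - eps. Solving for f gives the expression below.
    """
    return (d13c_carb - d13c_input) / eps

print(f"baseline  (d13C_carb =   0 per mil): f_org ≈ {f_organic(0.0):.2f}")   # ~0.2
print(f"excursion (d13C_carb = +10 per mil): f_org ≈ {f_organic(10.0):.2f}")  # ~0.6
```

A positive excursion in carbonate δ13C thus maps directly onto a larger buried organic fraction, which is why it is read as a pulse of O2 production.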
Another independent indicator of changing atmospheric oxygen comes from the discovery of non-mass-dependent sulfur isotope fractionation. Farquhar et al. (2000) reported the variation of non-mass-dependent sulfur isotope fractionation in sulfide and sulfate from sedimentary rocks across Earth history [6]. They found that the sulfur isotope compositions of rocks older than 2.3 Ga deviate from the mass fractionation line, whereas rocks younger than 2.3 Ga show no non-mass-dependent fractionation. The non-mass-dependent sulfur effect is linked mainly to photochemistry, that is, the photolysis of sulfur-bearing gases under ultraviolet light. This study therefore provides independent support for oxygenation of the atmosphere after 2.3 Ga: before 2.3 Ga the atmosphere had not yet formed an ozone layer, ultraviolet light reached the surface directly, and non-mass-dependent sulfur isotope fractionation could be produced; after 2.3 Ga an ozone layer probably existed, so geological samples no longer record the effect. This has become independent evidence strongly supporting the Great Oxidation Event. Recently, however, experiments have shown that sulfate can acquire non-mass-dependent sulfur isotope fractionation through thermochemical reduction in the presence of organic matter [7], so whether the disappearance of the effect after 2.3 Ga directly reflects atmospheric oxygen content remains a subject for further study.

So far, most of the evidence supports a sharp rise in atmospheric oxygen around 2.3~2.4 Ga, but opinions still differ on its cause. The traditional view links the rise to the emergence of blue-green algae (cyanobacteria), the most primitive organisms capable of producing oxygen by photosynthesis. Yet previous studies show that blue-green algae were already part of the marine ecosystem at least by 2.7 Ga, so the Great Oxidation Event clearly lags their appearance. Another model ties the increase in atmospheric oxygen to a shift in the oxidation state of the mantle. Kump et al. speculated that the early mantle was reducing and delivered large amounts of H2, CO, and CH4 to the Earth's surface [8], keeping early atmospheric oxygen very low; around 2.45 Ga, extensive volcanism related to mantle plume activity drove a large upwelling of oxidized material from the deep mantle, oxidizing the upper mantle, so volcanic gases shifted from reducing to oxidizing and atmospheric oxygen rose. However, studies of the redox state of the mantle through geological history do not support this interpretation, since the mantle's redox state has not changed significantly since the Archean. Holland argued instead that the rise in atmospheric O2 originated from rising seawater sulfate and the transfer of large amounts of sulfate into the oceanic crust during high-temperature reaction between mid-ocean ridge basalt and seawater; the subducted sulfate then acted as a sink for H2 in volcanic arc settings, shifting volcanic gases from reducing to more oxidizing and thereby allowing oxygen to accumulate in the atmosphere [9]. Many other models and explanations, such as an increased rate of H2 escape, an increased rate of oxygen production, and even major changes in tectonics, have also been proposed. The timing and cause of the rise of oxygen in the early atmosphere have been debated for decades and are still not settled; although opinion has converged considerably, dissenting views remain hard to refute completely. Solving the problem will depend on integrated research across many disciplines, including chemistry and biology.
Research on this issue will help deepen our understanding of the Earth's early surface environment and the evolution of life.

Preface. With the intensification of human activities, the nutrients (essential elements such as phosphorus, nitrogen, and silicon) delivered to rivers, lakes, and oceans have increased, and eutrophication and algal blooms in aquatic ecosystems have become very common. The cyanobacterial blooms of Taihu Lake (Figure 1) and the red tides of the Yangtze and Pearl River estuaries, widely discussed by the public, are in essence all caused by changes in the nutrient regime. Besides human activities, continental weathering, upwelling, and submarine hydrothermal fluids are also important causes of nutrient increase in some regions; where these forcings overlap, water-body productivity is more likely to rise, which is why coastal zones are often zones of high productivity.

(Figure 1: Cyanobacterial bloom in Taihu Lake; photograph from the internet.)

No biogeochemical study of modern marine nutrients can describe the large-scale marine organic matter accumulation events of geological history: the existing evidence shows that when these events occurred, eutrophication affected almost an entire ocean basin, something without parallel on the modern Earth. In the Phanerozoic such events occurred many times: large-scale marine organic matter accumulation took place in the Early Cambrian (concentrated at 542~521 Ma), the Late Ordovician/Early Silurian (445~439 Ma), the Late Devonian/Early Carboniferous (374~345 Ma), the Late Permian (260~251 Ma), the Late Triassic (228~203 Ma), the Late Jurassic (155~150 Ma), the Early Cretaceous (125~93 Ma), and other periods. The organic carbon content of the resulting sediments can reach about 40% [1], a level not found in any modern marine sediment. The impact of these events on the Earth system is profound. First, they change the configuration of the carbon reservoirs: a large share of carbon enters the sedimentary reservoir as organic matter, most of it converted from atmospheric CO2 by photosynthesis, so atmospheric CO2 falls sharply, with revolutionary consequences for the surface climate and ecosystems; after an accumulation event the climate is often dry and cold and biomass declines steeply. Second, large-scale accumulation of organic matter fundamentally changes the chemistry of ocean water: stratified, anoxic water, lowered pH, and lowered contents of certain trace elements may lead to the collapse of aquatic ecosystems and the extinction of some organisms. For modern humans, the significance of these ancient accumulation events is that the fossil energy we depend on, petroleum, comes mainly from such deposits; the parent material of this organic matter is chiefly algae, rich in lipids, from which more than 80% of today's crude oil formed.
Understanding the causes of large-scale organic matter accumulation in the ocean is a major task of Earth system science, but unfortunately research at home and abroad is still at the stage of data accumulation and remains far from fundamentally solving the problem.

Research history, formulation of the problem, and research difficulties. Organic-rich black rock series and oil shales offer a great many scientific problems that must be studied by experts in different fields, and for a long time our research has concentrated on the following levels. ① The nature and evolution of organic matter. Through studies of organic matter types [2, 3] and of macromolecular structure in rocks, the nature of organic matter in marine sediments has been basically clarified: it is complex organic matter composed mainly of lipid compounds [4], and its properties vary greatly at different stages of thermal evolution; at the highly mature stage it is pyrolyzed into organic matter dominated by aromatic structures. ② The parent sources of organic matter and water-column processes. Sedimentary marine organic matter inherits the stable components of marine biological organic matter, and comparison with the compositions of potential biological parents shows that it comes mainly from algal material, the dominant algal species differing from period to period; in addition, part of marine organic matter comes from the bacteria that decompose algae. Research on parent sources has greatly stimulated work on the stable components of different biological organic matter and on the water-column processes that follow the death of organisms, and today we can draw on this accumulated knowledge to identify different biological parents within sedimentary organic matter. ③ Reconstruction of water-column biogeochemical processes. Accurate parent-source and process identification lets us reconstruct the paleobiogeochemistry of the water column from sedimentary organic matter [5], estimating primary productivity and studying water-column stratification and bacterial decomposition in detail [6], a remarkable research achievement. A large body of such reconstructions shows that the formation of sedimentary organic matter differed greatly between periods, which has stimulated research into its formation mechanisms. In recent years many leading scholars have linked this research to the carbon cycle; for example, the large-scale accumulation of organic matter at the North Atlantic C/T boundary (93.6±0.8 Ma) may have drawn down atmospheric CO2 by 80% [7], which is considered the cause of the cold, dry climate that followed. This work has also made many organic geochemists realize that what controls large-scale accumulation of marine organic matter may be a special biogeochemical process, that is, a special enrichment mechanism of nutrients. How to find the relevant evidence in geological bodies and connect it organically is the hard problem of current research.
The difficulty of the research shows itself, first, as an interdisciplinary problem that cannot be solved by the work organic geochemists excel at alone: the special nutrient supply mechanism involves the sources and rapid generation of nutrients, and inorganic geochemists must participate as well. The second difficulty is that although modern biogeochemical results offer mechanistic guidance, this is a study without modern analogues, so the principle of interpreting the past by the present is hard to apply, a major obstacle to solving the problem quickly. The third difficulty is finding a point of entry: the existing technical routes are not easy to implement. Studying the problem from sedimentary records is probably the best approach, but it only resolves the time series; the spatial relationships among the elements of an event still require comprehensive regional research.

Possible causes of large-scale organic matter accumulation in the ocean. Large-scale accumulation of marine organic matter is attributed to high productivity together with an environment favorable to preservation. Organic matter is generally easily oxidized, and a reducing environment favors its preservation [8]; preservation conditions matter most when productivity is low, whereas under high productivity the abundant supply of organic matter keeps the bottom of the water body reducing, so the reducing environment ceases to be the main control and high productivity becomes the decisive factor in high organic matter accumulation. High primary productivity requires special nutrient supply, and special nutrient supply requires special or extreme geological events. From this perspective we can roughly surmise the possible causes of the Phanerozoic accumulation events; they amount to the following three situations. The first mechanism is nutrient supply by volcanic activity. Volcanism is short-lived, but if its area is large enough, dissolving volcanic ash can supply abundant nutrients; the high organic matter accumulation of the Late Devonian/Early Carboniferous (374~345 Ma) and the Early Cretaceous (125~93 Ma) may be related to this. The second mechanism is nutrient supply after a great ice age. Physical weathering is intense during a glacial period, but the products are locked in ice and can hardly reach the ocean; late in the glacial period these nutrients enter the ocean within a short time, causing eutrophication and hence organic matter accumulation. The Precambrian Marinoan and Sturtian glaciations and the Late Ordovician/Early Silurian (445~439 Ma) glaciation are very likely to have supplied nutrients for the large-scale accumulations that followed. The third mechanism is nutrient supply after extreme drought. Physical weathering under arid conditions accumulates nutrients on land, which can enter the ocean on a large scale during a later climatic transition, providing the conditions for algal blooms.
It is possible that the organic-rich layers of the Paleogene lakes of eastern China formed in this way. All three situations are special nutrient supplies driven by extreme conditions, and for any given interval several mechanisms may coexist; volcanic activity, for example, may cool the climate and generate glaciers, the two together promoting organic matter accumulation. None of these three hypotheses has yet been verified against the sedimentary record, and that verification is where future work must concentrate. If terrestrial systems, and especially the biosphere, are to be brought into the study of sphere-level interactions, what can link events in the inorganic world with the organic world? It can be said with confidence that the nutrient mechanism is a crucial link, and only by studying the large-scale accumulation of marine organic matter from the perspective of biogeochemistry can its mechanism truly be solved.

The chemical evolution of the Precambrian ocean is closely tied to the rise of atmospheric oxygen. The early Earth's atmosphere contained essentially no free oxygen, and the ocean too was anoxic; vigorous volcanism and submarine hydrothermal fluids made seawater rich in Fe2+, forming an iron-rich, anoxic ocean. This ocean chemistry persisted until about 1.8 billion years ago, when the banded iron formations (BIF) being deposited in the ocean abruptly disappeared from the Earth. For a long time, the disappearance of iron-rich seawater was regarded as the result of ocean oxidation: the Fe2+ dissolved in seawater was oxidized to Fe2O3 and precipitated [1, 2]. Oxygenation of the ocean is directly related to the first stage of atmospheric oxygen rise, which occurred 2.4 billion years ago; across that event the O2 content of the atmosphere rose from less than 0.1% PAL (present atmospheric level) to nearly 10% PAL. The traditional understanding was that the disappearance of BIF 1.8 billion years ago marked the end of the anoxic ocean, with the ocean (including its deep water) oxidized from then on (Fig. 1).

(Figure 1: Chemical evolution model of the ancient ocean (deep sea) [4].)

After the first stage of atmospheric oxygen rise, the surface seawater was undoubtedly oxidized, but was the deep water oxidized too, or did it remain anoxic? Canfield proposed the Proterozoic "sulfidic ocean" hypothesis in 1998, the first challenge to the traditional ocean oxidation model [3], and geochemists accordingly call the sulfidic ocean the "Canfield Sea". A "sulfidic ocean" is one whose water contains free H2S; a sulfidic ocean is necessarily anoxic, but an anoxic ocean is not necessarily sulfidic. The oceans of the Archean and Paleoproterozoic were anoxic, but because the water was rich in Fe2+ there could be no free H2S: they were iron-rich (ferruginous) oceans. The disappearance of BIF and the rise of the sulfidic ocean are considered the combined result of increased atmospheric oxygen and continued anoxia of the deep ocean [3, 4]. As atmospheric oxygen rose, primary productivity in the surface ocean increased and the flux of sinking, degrading organic matter increased with it, consuming the oxygen diffusing into the deep sea, which therefore remained anoxic.
On the other hand, sulfide minerals in continental rocks were oxidized, and the amount of sulfate (SO42−) delivered to the ocean by rock weathering and rivers grew from small to large. The low sulfate content of Archean seawater had limited the rate of (bacterial) sulfate reduction; the rise of atmospheric oxygen 2.4 billion years ago raised the dissolved sulfate concentration of the deep sea, and the amount of buried organic matter also increased, so sulfate reduction accelerated and ever more H2S was generated. Once the rate of bacterial sulfate reduction exceeded the supply rate of reactive iron, all the iron was consumed by reaction with H2S to precipitate pyrite, finally leaving a sulfidic ocean with excess H2S [3, 4]. This happened 1.84 billion years ago, after which no more BIF were deposited on Earth. Not until the "Snowball Earth" events of 800~600 million years ago, and the second stage of atmospheric oxygen rise that followed, did the Proterozoic sulfidic ocean, which had lasted about 1 billion years, come to an end.

The Proterozoic "sulfidic ocean" hypothesis has won widespread support and attention because it successfully explains why the evolution of aerobic eukaryotes stagnated for so long, multicellular eukaryotes diversifying only after the end of the "Snowball Earth" event (635 million years ago) [5]. Under sulfidic ocean conditions, not only did the Fe content drop greatly, but so did the contents of Co, Mn, Ni, Zn, Cu, and other elements essential to life. Under oxidizing conditions Mo forms MoO42−, which dissolves readily in water and is easily transported; in the presence of H2S, Mo forms insoluble sulfides or MoS42−, which is adsorbed by organic matter and enters the sediment. Mo and Fe are used by nitrogenase to fix N2 (reducing N2 to ammonia for biological use) and by nitrate reductase for NO3− assimilation, so these two elements are vitally important [4, 5]. Ocean sulfidation deprived most Proterozoic marine environments of these essential elements, potentially restricting the nitrogen cycle, depressing primary productivity, and limiting the ecological distribution and evolution of eukaryotic algae.

The second stage of atmospheric oxygen rise ended the Proterozoic sulfidic ocean. With atmospheric oxygen rising again, could the deep sea hold dissolved oxygen as it does today? The question is still debated. Carbon and sulfur isotope studies of Ediacaran marine sedimentary rocks in Oman [6] and in China's Three Gorges region [7] show that the ocean underwent multiple stages of oxidation after the "Snowball Earth" event, but that the deep sea was still anoxic at 551 million years ago. Recently, evidence from iron speciation in sediments has revealed that during 760~530 million years ago the deep sea returned to an anoxic, iron-rich state [8]: during this interval a shortage of sulfate and H2S left iron in surplus, and the water changed from sulfidic back to iron-rich [4, 8]. Neither the cessation of sulfate input during the "Snowball Earth" glaciations nor the burial of pyrite and its subduction into the mantle during ocean sulfidation easily explains why the ocean became sulfate-poor just as atmospheric oxygen was rising.
It was not until the \"Snowball Earth\" events of 800-600 million years ago, and the second-stage rise of atmospheric oxygen, that the sulfidic ocean, which had lasted about 1 billion years of the Proterozoic, came to an end. The Proterozoic sulfidic-ocean hypothesis has won wide support and attention because it successfully explains why the evolution of aerobic eukaryotes stagnated for so long, with multicellular eukaryotes not diversifying until the end of the \"Snowball Earth\" events (635 million years ago) [5]. Under sulfidic-ocean conditions, not only did the Fe content drop sharply, but so did the contents of Co, Mn, Ni, Zn, Cu and other elements essential to living organisms. Under oxidizing conditions Mo forms MoO42−, which dissolves readily in water and is easily transported; but in the presence of H2S, Mo forms insoluble sulfides or MoS42−, which is adsorbed by organic matter and enters the sediment. Mo and Fe fix N2 through nitrogenase (reducing N2 to ammonia for biological use) and carry out NO3− assimilation through nitrate reductase; the role of these two elements is therefore very important [4,5]. Ocean sulfidation deprived most Proterozoic marine environments of these essential elements, potentially restricting the nitrogen cycle, depressing primary productivity, and limiting the ecological distribution and evolution of eukaryotic algae. The second-stage rise of atmospheric oxygen ended the Proterozoic sulfidic ocean. With atmospheric oxygen rising again, could the deep sea hold a certain amount of dissolved oxygen, as it does today? This is still debated. Studies of carbon and sulfur isotopes in Ediacaran marine sedimentary rocks from Oman [6] and China's Three Gorges region [7] show that the ancient ocean underwent multi-stage oxidation after the \"Snowball Earth\" events, but that the deep sea was still anoxic as late as 551 million years ago. Recently, evidence from iron speciation in sediments has revealed that between about 760 and 530 million years ago the deep sea reverted to an anoxic, iron-rich state [8]: a shortage of sulfate and H2S left iron in surplus, and the water column changed from sulfidic back to ferruginous [4,8]. Neither the cessation of sulfate input during the \"Snowball Earth\" glaciations nor the burial and subduction of pyrite into the mantle during ocean sulfidation can easily explain why the ocean became sulfate-deficient just as atmospheric oxygen was rising. Moreover, neither the appearance of Ediacaran evaporites nor the sulfur-isotope fractionation data support long-term maintenance of low sulfate concentrations in the ocean during this period. What, then, was the chemistry of the Ediacaran ocean? The problem is unsolved. There is growing evidence that the ocean of this period was chemically stratified and may have comprised oxic, anoxic iron-rich, and even sulfidic water masses at the same time, at different depths or in different paleogeographic settings. Perhaps it was a transitional period; if so, when exactly did the deep ocean become largely oxygenated, as it is today? The answer bears on the diversification of the metazoans. Likewise, not much direct evidence for the sulfidic ocean of roughly 1.8 billion to 800 million years ago has been published, and what exists faces challenges of its own; sedimentary hematite of this age, for example, is plainly at odds with a globally sulfidic ocean. How extensive was the Proterozoic sulfidic ocean: the entire open ocean, or only certain basins? Why did submarine-hydrothermal and other sedimentary iron deposits still appear during this interval? Were there sulfidic basins before the Proterozoic sulfidic ocean? Analysis of iron speciation (FeHR/FeT, FeP/FeHR, DOP), stable isotopes (S, C, and Mo, Fe, etc.) and redox-sensitive trace elements in marine sediments from different basins and different periods may yield more information on ancient-ocean redox states, chemical structure, and their transitions (a sketch of how the iron-speciation proxies are read appears at the end of this entry). Today the deep ocean is fully oxygenated, stirred evenly by deep currents flowing from the poles toward the equator, and modern sulfidic waters occupy less than 0.5% of the total ocean area, the largest being the Black Sea, followed by the Cariaco Basin off Venezuela [4]. In the Phanerozoic, oceanic anoxic events (OAEs), anoxic at the scale of an ocean basin and lasting millions of years, occurred repeatedly: the widely exposed organic-rich black shales of the Mesozoic indicate seafloor anoxia and possibly large-scale sulfidation, and extinction events such as the Jurassic Toarcian, the Late Devonian Frasnian/Famennian, and the greatest of all at the end of the Permian are often associated with deep-sea anoxia and sulfidation. But the ocean sulfidation of about 1.8 billion to 800 million years ago is without parallel, and it marks an important era in the evolution of ocean chemistry.
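The iron-speciation proxies named above lend themselves to a simple screening calculation. The sketch below uses commonly cited rule-of-thumb thresholds (FeHR/FeT ≈ 0.38 to flag anoxia; within the anoxic field, FeP/FeHR ≈ 0.7 to flag euxinia); both the thresholds and the sample numbers are illustrative assumptions, not data from the papers cited in this entry.

```python
# Minimal sketch: classifying water-column redox state from iron speciation.
# FeHR = highly reactive iron, FeT = total iron, FeP = pyrite iron (all wt%).
# Thresholds are widely used rules of thumb and are assumptions here.

def classify_redox(fe_hr: float, fe_t: float, fe_py: float) -> str:
    if fe_t <= 0 or fe_hr <= 0:
        raise ValueError("iron contents must be positive")
    if fe_hr / fe_t < 0.38:
        return "oxic (no enrichment of highly reactive Fe)"
    if fe_py / fe_hr > 0.7:
        return "anoxic and sulfidic (euxinic)"
    return "anoxic and iron-rich (ferruginous)"

# Hypothetical shale samples: (FeHR, FeT, FeP) in wt%
for name, fe in {"sample A": (0.4, 2.0, 0.05),
                 "sample B": (1.5, 3.0, 1.2),
                 "sample C": (1.2, 2.5, 0.3)}.items():
    print(name, "->", classify_redox(*fe))
```

Applied down a measured section, such a classification is how the ferruginous-to-sulfidic-to-ferruginous transitions described in this entry are read from the rock record.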
", "Mass extinction refers to the global phenomenon in which a whole family, a whole order, or even a whole class of organisms disappears completely within a very short span of geological time, or survives only in a few forms. There have been several mass extinction events in the history of the earth, at least five of them in the Phanerozoic (the past 540 million years). The largest occurred at the end of the Permian (250 million years ago), extinguishing more than 95% of marine species and more than 75% of terrestrial species [1,2]; the amphibians and reptiles that had flourished on land, and marine invertebrates such as the corals, suffered heavy losses. The familiar trilobites disappeared entirely: none survived into the Mesozoic. After the end-Permian event the biological world remained depressed for a long interval (about 5~6 Ma), and at the start of the Triassic life was in places reduced to a state comparable to that at the end of the Precambrian. The best-known extinction is the end-Cretaceous event (65 million years ago) that eliminated the dinosaurs, but its scale was only about a third of the end-Permian event. That there have been many mass extinctions in geological history is an accepted fact, but their cause remains unresolved. Two models are currently most popular: large-scale volcanism, and asteroid impact. The former rests mainly on the temporal coincidence of large volcanic events with mass extinction events (Fig. 1), including the end-Guadalupian extinction at the close of the Middle Permian and the Emeishan basalt eruption, the end-Permian mass extinction and the Siberian volcanic eruption, the end-Triassic extinction and the Central Atlantic volcanic eruption, the Jurassic Toarcian extinction and the Karoo flood basalts, and the end-Cretaceous extinction and the Deccan basalt eruption of India [3~5]. Large-scale volcanism means the eruption of an enormous volume of magma in a short time: the Siberian basalts, for example, extend thousands of kilometres from north to south, cover about 2 million km2 with an average thickness of more than 1000 m, and represent roughly 2.6 million km3 of erupted volcanic material. An eruption of such scale would inject huge quantities of volcanic ash into the stratosphere, blocking solar radiation to the detriment of living things, while the accompanying large-scale regression would degrade the environments on which marine life depends. At the same time, the ash, CO2, sulfur gases and other volatiles entering the atmosphere and hydrosphere would alter global climate and seawater composition; the resulting greenhouse effect could melt ice on a large scale, raising sea level, shrinking the living space of terrestrial plants and animals, and triggering a chain of further reactions with major consequences for the surface life system (Figure 2). The other model emphasizes meteorite impact. A great impact not only riddles the earth with \"a thousand holes\", forming craters, but can also set off large-scale fires, deliver a degree of radiation, and deform the crust or shift sea level. The ash produced as meteorite fragments burn and explode on entering the atmosphere spreads through the air, carrying large amounts of extraterrestrial material, including substances rare in solid form on Earth, such as iridium and fullerenes, and gases such as CO2 and SO2, which in turn modify solar radiation, produce a greenhouse effect, change the atmosphere and hydrosphere, and threaten organisms with extinction. The largest confirmed impact structure, in the Gulf of Mexico, is attributed to a bolide about 10 km in diameter. The enrichment of extraterrestrial substances such as iridium at the Cretaceous-Paleogene boundary (65 million years ago) is the main evidence for this hypothesis, and it is widely believed that a meteorite impact caused the end-Cretaceous mass extinction,
driving a large number of species, including the dinosaurs, off the stage of Earth's history. (Figure 1: the temporal correspondence between large igneous provinces, biological events and marine anoxic events in geological history [5]; for example, the two extinction events of the late Permian correspond in time to the Siberian and Emeishan large igneous provinces respectively, and the end-Cretaceous extinction to the Deccan basalts.) At the beginning of the 21st century scientists briefly believed they had found solid evidence for the impact hypothesis. Becker et al. [6] of the Scripps Institution of Oceanography argued that fullerenes in Permian rocks, and the exotic gases trapped within them, might be the residue left when a meteorite struck the earth and burned; Farley and Mukhopadhyay [7], analyzing by the same method, found no extraterrestrial material such as fullerenes or noble gases at all. In fact the samples Becker analyzed did not come from the end-Permian boundary layer itself but from strata several metres below it, so even if a planetary impact did occur near the end of the Permian, it would have preceded the biological extinction. Some scientists claim to have found a Permian crater off the northwest coast of Australia, with abundant melted rock and shattered quartz crystals in Permian strata over a wide surrounding area; the crater's diameter is estimated at 125 miles, suggesting that 250 million years ago a body larger than Mount Everest struck the earth with the energy of a million nuclear bombs, nearly destroying all life. If true, this would lend important support to the impact hypothesis, but impact metamorphism at the structure has not been confirmed and no precise age for the crater has been reported. In fact, although a large impact can explain a rapid, large-scale extinction, it cannot explain why many known large impact events caused no extinction, nor can it explain mass extinction among seafloor organisms. Statistically, the earth is struck by a body more than 1 km across roughly every 500,000 years on average, yet the frequency of mass extinctions is far lower than the frequency of such impacts (a back-of-the-envelope version of this argument is sketched below). (Figure 2: possible mechanisms by which volcanism affects surface life systems.) Wignall's work on sedimentary rocks in Greenland shows that the end-Permian extinction was not completed in a geological instant but unfolded over some 80,000 years: first a minority of marine organisms disappeared, then terrestrial organisms, and finally most organisms in the ocean. These results likewise conflict with the meteorite theory. The volcanic hypothesis has its own weaknesses. Eruptions concentrated in a short interval, on the order of hundreds of thousands to a million years, are after all regional phenomena, and whether they can truly drive global environmental change and extinction remains to be demonstrated; nor are all extinction events associated with volcanism.
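The frequency mismatch invoked above is easy to make concrete. A toy comparison, using only the figures quoted in this entry (the 500,000-year impact interval and the "at least five" Phanerozoic extinctions):

```python
# Back-of-the-envelope check of the frequency argument in the text:
# >1 km impactors arrive roughly every 0.5 Myr, while the Phanerozoic
# (~541 Myr) records only ~5 mass extinctions.
phanerozoic_myr = 541            # duration of the Phanerozoic, Myr
mass_extinctions = 5             # "at least 5" major events
impact_interval_myr = 0.5        # one >1 km impact per ~500,000 years

n_impacts = phanerozoic_myr / impact_interval_myr             # ~1082
extinction_interval_myr = phanerozoic_myr / mass_extinctions  # ~108 Myr

print(f"expected >1 km impacts: ~{n_impacts:.0f}")
print(f"mean spacing of mass extinctions: ~{extinction_interval_myr:.0f} Myr")
print(f"impacts per mass extinction: ~{n_impacts / mass_extinctions:.0f}")
# Roughly 200 large impacts for every mass extinction: most large impacts
# evidently triggered no extinction, the weakness of an impact-only model.
```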
High-precision dating has also cast doubt on the temporal coupling of the previously matched events. For example, a recent re-determination of the age of the Permian-Triassic boundary (252 Ma) suggests that the end-Permian extinction occurred before the main large-scale Siberian eruptions. It is worth noting that 10 Ma before the Siberian basalts, the Emeishan basalts (~260 Ma) erupted in southwestern China. Was that a prelude to the end-Permian extinction? Was the mass extinction gradual? These questions deserve thought. Clearly the \"culprit\" behind the mass extinctions has not been identified; perhaps there was never just one, but two or more. The answer to this major unsolved scientific question awaits new evidence and more rigorous demonstration.", "Thomas Gold was an accomplished astrophysicist at Cornell University in the United States, with an intense interest in, and uncommon intuition about, what happens in nature. At an oil well in Sweden in 1991 he saw something that baffled him: when the Swedes drilled a 6-km-deep exploration well, the pipe became clogged with a black substance of unknown nature. Further work by Gold and his Swedish colleagues showed it to be magnetite of extremely fine grain size, only tens of nanometres. The composition of magnetite is Fe3O4, and geologists know that such magnetite is not native to these rocks, because igneous rocks form at high temperature and their magnetite tends to be much coarser. Gold's intuition was that the magnetite might be biogenic, and he boldly speculated that many microorganisms must live deep within the earth, reproducing there on meagre supplies of mineral energy, carbon and heat. Gold's intuition proved correct, and on the basis of further work he soon proposed the concept of a \"deep, hot biosphere\" and, from his understanding of the fundamental physical-chemical conditions under which life can exist, wrote a book describing what such a biosphere should look like [1]. The hypothesis gained increasing support in the years that followed. In 1994 Swedish scientists isolated from that same oil well a strain of thermophilic iron-reducing bacteria that transforms ferric iron particles into magnetite [2]. In 1997, scientists working with the U.S. Department of Energy discovered the same species of microorganism at depth in West Virginia and in Colorado, two sites more than 1000 km apart [3] (Fig. 2). Since then, similar microorganisms have been found in deep oil reservoirs in Siberia and in deep samples from continental ultra-deep drill holes in eastern China, all pointing to depths of 1000~3000 m within the continents. In view of the significance of probing the deep biosphere, NASA funded a joint project of Princeton University, Indiana University, the University of Tennessee and Oak Ridge National Laboratory, whose main goal is to study, from the perspectives of geochemistry and geomicrobiology, the ways in which microorganisms of the deep environment extract the minute amounts of energy and carbon held in deep rocks to sustain their survival and reproduction.
Because detection is extremely costly and some technical problems remain unsolved (for example, how to drill deep samples without contaminating them with genetic material from the surface biosphere), our present understanding of the deep biosphere is only the tip of the iceberg. Geochemical and thermodynamic estimates indicate that the rocks of the earth, or of other bodies such as Mars or Europa, will react with hydrothermal fluids to produce ample hydrogen, and hydrogen can drive a chemoautotrophic system, for example an ecosystem dominated by methanogens [4]. In sedimentary environments on Earth, such as sedimentary rocks, karst terrains and oil-bearing sediments, the evidence for microbial life is undisputed. Ultrabasic rocks crystallize at very high temperature and contain few voids, so they are not the best candidates for hosting life; but they are rich in multivalent metals in the reduced state, and when they react with hydrothermal fluids they readily release an energy-rich gas, hydrogen. Basic rocks such as basalt contain less reduced metal but have more pore space and react more readily with water. In the mid-1990s geochemists drilled a 1200-m hole in the Columbia flood-basalt plateau of Washington State, USA, and their analyses suggested that deep in the basalt there may be a microbial ecosystem that uses deep-earth energy (such as hydrogen) as its energy source and dissolved inorganic carbon as its carbon source. If this result is reliable, the deep part of this one flood-basalt province alone holds considerable biomass, for it is 3 km thick and some 300,000 km2 in area. To verify these results, microbiologists and geologists also examined microorganisms 200 m beneath a hot spring called Lidy in Idaho and confirmed an ecosystem that not only withstands high temperature, high pressure, and carbon- and energy-poor conditions, but is supported entirely by energy and carbon sources from deep in the earth. DNA extracted from the rocks showed that more than 90% of the organisms there are hydrogen-dependent, methane-producing bacteria. How deep, then, can living things survive? Recent research shows that some microorganisms tolerate temperatures as high as 120ºC; converted through the earth's geothermal gradient, the deep biosphere should then extend to 4 km or deeper. Biologists estimate the upper temperature limit for life on Earth at about 150ºC, implying that life may persist to crustal depths of about 5 km; at higher temperatures the molecules essential to life, such as proteins, lose their biochemical activity. By a simple estimate, the organisms living below the surface contain as much biological carbon as all the organisms at the earth's surface (continents, oceans and atmosphere) combined [1]. Half of the earth's living matter, in other words, may not need sunlight at all; indeed even in the surface biosphere a considerable proportion of organisms do not depend on solar energy, but obtain energy directly from oxidation-reduction reactions of chemical substances in the geological environment. How, then, did this deep biosphere form?
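The depth figures just quoted follow from nothing more than the geothermal gradient. A minimal sketch (the ~25 °C/km continental gradient and 15 °C surface temperature are generic assumptions, not values from the studies cited above):

```python
# Rough depth limit of the deep biosphere from a linear geotherm:
# depth ~ (T_limit - T_surface) / gradient.
def max_biosphere_depth_km(t_limit_c: float,
                           t_surface_c: float = 15.0,
                           gradient_c_per_km: float = 25.0) -> float:
    return (t_limit_c - t_surface_c) / gradient_c_per_km

for t in (120.0, 150.0):  # observed tolerance vs. estimated upper limit, deg C
    print(f"T_limit {t:.0f} degC -> ~{max_biosphere_depth_km(t):.1f} km")
# ~4.2 km for 120 degC and ~5.4 km for 150 degC, matching the "4 km or
# deeper" and "about 5 km" figures in the text for a typical gradient.
```

With that rough bound in mind, we return to the question just posed: how did the deep biosphere form?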
An intuitive view is that the creatures of the deep biosphere all descended into the depths from the surface biosphere. Microbiologists have indeed found that some deep microorganisms were carried down from the surface by various geological processes, remained there, and over long evolution gradually adapted to the harsh environment; microorganisms carried by surface water can penetrate as much as 10 km into the crust along rock fractures. But this is not the whole story. Analyses show that some deep-seated microorganisms are only distantly related to those at the surface, yet are closer, evolutionarily, to microorganisms of the deep ocean thousands of miles away. How did that come about? There is no definite answer yet, but one possible explanation is that these deep microbes shared a common origin in an ocean of the distant past; as land and sea separated, some were trapped in the continental crust and gradually adapted to the high temperature, high pressure and permanent darkness of the deep crust. The thermophilic, nanomagnetite-producing microorganisms isolated from depth in West Virginia, for example, are estimated to have been trapped in sediments during deposition about 140 million years ago and carried by subsidence to depths of more than 2000 m [3]. The discovery of the deep biosphere has greatly encouraged scientists in the search for life beyond Earth, for it implies that on Mars, or on planets beyond the solar system, even where the surface is far too cold for life, there will always be some depth at which, as temperature rises, conditions become suitable. The existence of a deep biosphere also suggests that life on Earth could survive cosmic events that wipe out life at the surface. In view of all this, could the graphite bearing a biological carbon-isotope signature found in metasedimentary rocks 3.85 billion years old, and even in rocks 4.2 billion years old, have been produced by still earlier deep life? (Figure 1: the crystal structure of magnetite, whose structural formula can be written (Fe3+)A[Fe3+Fe2+]BO4; the A site is the tetrahedral site in the figure, the B site the octahedral site. Figure 2: Thermoanaerobacter sp. strain TOR39, isolated from depths of 800~2200 m underground; it withstands temperatures up to 75ºC, and the cells in the image are surrounded by magnetite produced by reduction of ferric iron in the environment.)", "The question of origins is among the deepest that humans ponder, spanning science, philosophy and religion. Seen through evolution, the origin of humankind, the origin of life on earth, and the origin of the earth itself are all links in the origin of time, space, matter and energy. The origin of life on Earth reaches back into the remotest past and is closely tied to the origin of the universe (Figure 1). Because the earth has been continuously and profoundly reworked since its formation, no complete record of its early evolution survives; fortunately, from geological, biological, comparative-planetary and astronomical research we can reassemble, piece by piece like building blocks, a picture of the early evolution of the earth and of life.
(Figure 1: the birth of the universe, nuclear chemistry inside stars, the formation of planets, the formation of the earth's oceans, and the synthesis of organic molecules are all essential links in the origin and evolution of life.) The earth is at least 4.5 billion years old, and the solar system older still. The earth is the only planet in the solar system known to bear life and, so far as we know, the only such planet in the universe. When did life on earth begin? How did the earliest life differ from the living world of today? Do the origin and evolution of life on earth have universal significance in the cosmos? For roughly the first 700 million years after the earth formed, life was impossible. Heat generated by complex processes in the earth's interior, together with frequent bombardment from outside, put the earth through a magma-ocean stage, and for a long time its atmosphere consisted mainly of nitrogen, water vapor and CO2 [1]. Today the atmospheric pressure at mean sea level (at 25ºC) is one atmosphere; on the earliest earth the partial pressure of CO2 alone was some 40 atmospheres and that of water vapor as high as 120 atmospheres. Since both water vapor and CO2 are powerful greenhouse gases, and the surface heat flow of the early earth was itself extremely high, the temperature of the atmosphere could reach 450ºC, resembling the atmosphere of Venus today. At such temperatures the atmosphere reacts with rocks to form secondary minerals capable of catalyzing the synthesis of the organic matter of life (Fig. 2). As the earth's internal heat production declined and atmospheric CO2 decreased, the surface gradually cooled; CO2 dissolving in water co-precipitated with dissolved metal ions such as Ca2+, Fe2+ and Mg2+ to form carbonates, a very important mechanism for fixing atmospheric CO2. The earth's oceans may have formed as early as 4.4 billion years ago [2], and by current understanding of the origin of life, wherever liquid water exists stably, life may arise. But life on earth probably did not begin that early, because in the early days of the solar system large meteorites and comets struck the earth frequently; many scientists believe the greatest impacts could have completely vaporized the oceans, several times over. The last such giant impact occurred about 3.9 billion years ago [3]. Since then the oceans have persisted: despite many later large impacts, they have never again been wholly vaporized. Although we cannot be certain that life appeared about 3.8-3.9 billion years ago, just after the ocean formed, it is certain that the formation of the ocean, that is, of stable liquid water at the earth's surface, was the most important condition for the beginning of life, since the other ingredients of life, elements such as carbon, oxygen and the metals, had long been present on the earth [4]. The oldest preserved sedimentary rocks are 3.85 billion years old, and their existence is indisputable evidence that the ocean had formed; the oldest surviving metasedimentary rocks are found at Isua, Greenland.
These rocks formed 3.8 billion years ago and show that rock and seawater were already interacting, so the ocean can be assumed to have existed by then. Could the graphite with light carbon-isotope values found in rocks older than 3.8 billion years be of biogenic origin [5]? It is still hard to say. (Figure 2: at high temperature olivine reacts with supercritical carbon dioxide-water fluid to form Mg(OH)2 and other secondary minerals, which played an important role in setting the pH of the initial ocean and in catalyzing early organic synthesis.) It is generally thought that the chemical evolution from lifeless matter to the first cellular life took about 200 million years, though some believe 20 million years may have sufficed. Conceiving the first life required a certain stock of simple organic molecules together with catalytic clay minerals, oxides or sulfides [4,6]. We still do not know what the first cell looked like, nor even whether the first protocell was a functional system defined by energy and material metabolism or one capable of transmitting its own information. The former is easier to explain chemically and energetically; the latter is closer to the hereditary character of life. In any case, the metabolism of matter and energy and the heredity essential to life both emerged within a short time. On the current account, the first life on earth arose from purely chemical evolution and is the common ancestor of all terrestrial life [7]. From this base, life differentiated into the bacteria, the archaea, and the eukaryotes, which include animals, plants and algae. Analyses of evolutionarily stable genes indicate that bacteria are as old as archaea, while the eukaryotes diverged from the archaea later [7]. What, then, produced the slight differences between bacteria and archaea in the primordial ocean? One wonders whether one of them came from beyond the earth, from Mars for instance; some have long insisted that the seeds of terrestrial life may have come from space, and it is conceivable that a Martian meteorite fell into the early ocean carrying the seeds of Martian life. Another possibility, of course, is that bacteria and archaea were born in slightly different environments on the earth itself, differing in temperature, acidity or chemical composition because the ocean was not perfectly homogeneous. Whether the first cells were heterotrophic or autotrophic is also much debated. Among present-day single-celled life, some obtain energy by directly oxidizing ready-made organic matter, others by transforming the energy state of inorganic substances such as Fe2+, H2S, CH4 and H2 [8]. Which came first? The answer may depend on the geochemical environments of the early earth. A land environment, if any yet existed, would not have been the best cradle for life, because the atmosphere then had no ozone layer to block deadly ultraviolet radiation. Shallow seas receive sunlight and can hold the necessary chemical ingredients, but UV radiation remains a problem there too. In the deep ocean, for instance on the flanks of a submarine volcano, there is heat enough to form catalytic substances and chemical energy enough to drive synthesis, with no ultraviolet nuisance; this may be where life was first born.
Hydrothermal vents on the mid-ocean ridges offer similar settings. If life indeed began in such places, sunlight was unnecessary to it from birth; only later, perhaps, did microbes able to use solar energy evolve on the foundation of these first users of chemical energy. From its emergence, the first life took a large part in transforming the surface conditions of the earth. Professor J. William Schopf of the University of California, Los Angeles found in 3.5-billion-year-old marine chert the oldest fossils of photosynthetic bacteria yet known [9], corroborated by stromatolites found all over the world, although it took another 800 million years before organic molecules from such cells came to be preserved in sediments that survive to this day. Sulfate-reducing bacteria appeared by about 3.47 billion years ago [10]. Bacteria that mainly reduce Fe3+ must postdate the appearance of abundant Fe3+ oxides on the seafloor, that is, they appeared during the formation of the global banded iron (iron-silica) deposits, and therefore later than the cyanobacteria, for it was water-splitting by cyanobacteria that produced the oxygen which oxidized the Fe2+ of seawater. Subsequently, as ocean temperature and redox conditions changed, the reduction of Mn4+ and of NO3− also appeared in the Early Proterozoic. Of all these events the oxidation of Fe2+ was the most important: it utterly changed the mineral composition of the earth's surface and opened a new page for the later evolution of the atmosphere and of life.", "Global warming is one of the hottest public topics of recent years. It is an indisputable fact that CO2 in the global atmosphere has risen from less than 300 parts per million before industrialization to 385 parts per million now [1]. Many scientists and government officials agree that CO2 from the burning of fossil fuels is responsible for the growing number of severe weather events, and developed countries and emerging developing countries argue endlessly over carbon-emissions trading; many developed countries have announced near-term plans to cut CO2 emissions and ask developing countries to follow suit, lest excessive emissions further degrade the global environment. But how many people know that the long-term trend of atmospheric CO2 is an irreversible decline, and that its level will eventually fall so low that photosynthesis becomes difficult, dooming the plants and, with them, the entire animal world that depends on plants for food? Has anyone seriously considered what the extinction of the animals and higher plants that crown the evolution of terrestrial life would mean for life on earth and in the universe? Today's earth, with its blue sky and white clouds, lush greenery, birdsong and fragrant flowers, is full of vitality, but all this came only after tremendous change. When the earth first formed, the temperature of the atmosphere exceeded 450ºC and its main components were CO2, N2 and water vapor. With the formation of the oceans, atmospheric CO2 dissolved rapidly into seawater and co-precipitated with its metal ions as carbonates, locked away in the seabed for the long term.
By 3.9 billion years ago the temperature of the atmosphere had fallen below 90ºC, and life on earth began. Around 3.5 billion years ago cyanobacteria appeared in the ocean and began a transformation of the earth's oceans and atmosphere that lasted more than a billion years: by 1.8 billion years ago the atmosphere and oceans had become oxidizing. Then, with the advent of aerobic respiration, the algae flourished and raised atmospheric oxygen further. Rising oxygen in turn produced the ozone layer, which screened out ultraviolet radiation and made the continents fit for the expansion of life. In the Cambrian came at last the explosive radiation of animals and plants, and plants soon occupied the continents and dressed them in green. Life, evolving from nothing, from simple to complex, has in nearly 4 billion years remade the earth into what we see today. Will this ecosystem last forever? Where is it headed? All life on earth requires carbon as the basic element of its molecules. Studies of the evolution of the earth's ecosystems consider carbonate deposition in the oceans, atmospheric CO2, carbon in the biosphere and carbon in fossil fuels, and the cycling among them. But does the carbon circulating through the whole system never increase or decrease? This is a question most ecological studies ignore. In fact the total carbon at the earth's surface is changing: it is increased mainly by the magmatic activity of volcanoes and mid-ocean ridges, which releases CO2 from the deep earth into the surface geospheres and biosphere, and decreased mainly by plate subduction, which carries part of the carbonate deposited on the ocean floor back into the deep earth [2]. The decrease is imperceptible over thousands, tens of thousands, even millions of years, but on a billion-year scale it is obvious. So the question can be recast: for how long will the earth's carbon remain usable by living things? (Figure 1: when carbon dioxide in the atmosphere falls to a critical value, a carbon cycle of this kind becomes difficult to maintain, and the earth's ecosystem crowned by animals and plants will undergo major change.) Several factors bear on the stability of the earth's ecosystem. One is that as the sun brightens, the habitable zone of the solar system will drift outward, a process that will accelerate the loss of the earth's water [3,4]. The earth's ecosystem has remained stable so long because, on the one hand, the steady decline of atmospheric CO2 has weakened the greenhouse effect while, on the other, the gradually brightening sun has delivered ever more energy, offsetting the effect of falling CO2 and keeping the earth from slipping straight into a permanent ice age [4]. The biosphere, once formed, also exerts a certain regulation that protects the ecological system from collapse; yet the changing surface temperature also reshapes the biosphere's structure. CO2 has always been an important greenhouse gas of the earth's atmosphere; the trouble is that it is now no more than a trace gas.
As the natural fixation of CO2, dominated by silicate weathering, proceeds, CO2 will fall to levels that can no longer sustain plant photosynthesis, the energy source of all oxygen-breathing life. Many plants require at least 150 ppm of atmospheric CO2 for photosynthesis, against a present concentration of 385 ppm. Lovelock and Whitfield [5] calculated that in about 100 million years atmospheric CO2 will drop below 150 ppm; if so, the age of plant life is 95% over. Ken Caldeira and James Kasting recalculated the lifespan of the biosphere in a 1992 Nature paper [4], arguing that the Lovelock-Whitfield threshold of 150 ppm was not strict enough: although most plants cannot photosynthesize below about 150 ppm, a great many plants of middle and low latitudes use a somewhat different photosynthetic mechanism and can tolerate atmospheric CO2 as low as 10 ppm. Allowing for this, Caldeira and Kasting [4] found that such plants could persist for another 0.5-1 billion years. Franck et al. (2001), using more accurate rock-weathering and CO2-flux data, reached similar results: photosynthesis can continue for roughly another 500-800 million years [3]. If the ecosystem's own regulation of its environment is included, the present ecosystem may last about 1.2 billion years. (Figure 2: an earth without plants would look inhospitable, yet microbial life would remain, and persist for a long time.) The biosphere has already passed the optimal period of its biomass production [6]; our present ecosystem is one of declining production. By earlier estimates, the terrestrial plant ecosystem may end in 600-1000 million years. Once the plants are gone, the meandering rivers will give way to the close-packed small channels now seen at the edges of deserts and grasslands (Figure 2). With the disappearance of plants the earth's albedo will rise, desertification will spread, and CO2 will for a time increase; but with too few plants left to temper the warming, plants will keep vanishing in great numbers and animals will retreat to higher latitudes. The food and oxygen on which animals depend will dwindle, and the end result will be the disappearance of the animal world. Microorganisms, which survive all manner of extreme environments, including those with very little or no oxygen, plainly do not require animals and plants; so there may follow another long age of microbes, lasting until at last the seawater of the earth evaporates away [6], perhaps billions of years hence. Human beings, as animals with technological civilization, may not vanish promptly with the plants and animals, but our descendants will have to spend ever more energy to survive in a harsher environment. Perhaps they will use their wisdom and natural resources more wisely and rationally, so that a beautiful living world like the earth's can endure that much longer.
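The timescales quoted above can be roughed out with a deliberately naive linear extrapolation (the drawdown rate below is back-calculated from the Lovelock-Whitfield figure of ~100 Myr to reach 150 ppm; real models such as Caldeira and Kasting's include the weathering-temperature feedback and give much longer lifetimes):

```python
# Naive linear CO2 drawdown, calibrated to the numbers quoted in the text.
co2_now_ppm = 385.0
t_to_150ppm_myr = 100.0                                       # Lovelock & Whitfield
rate_ppm_per_myr = (co2_now_ppm - 150.0) / t_to_150ppm_myr    # ~2.35 ppm/Myr

def myr_until(threshold_ppm: float) -> float:
    """Myr until atmospheric CO2 falls to the threshold at the linear rate."""
    return (co2_now_ppm - threshold_ppm) / rate_ppm_per_myr

print(f"to 150 ppm (limit for most plants): ~{myr_until(150):.0f} Myr")
print(f"to  10 ppm (hardier plants):        ~{myr_until(10):.0f} Myr")
# The linear rule gives only ~160 Myr even for the 10 ppm limit; the
# 0.5-1 Gyr lifetimes of Caldeira-Kasting and Franck et al. arise because
# weathering slows as CO2 and temperature fall, stretching out the decline.
```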
", "\"Tao begets one, one begets two, two begets three, and three begets all things\" (Laozi, Chapter 42). From ancient times to the present, human beings have never stopped seeking the \"Tao\" of life on earth, that is, how life on earth originated. Many views and hypotheses have been advanced; the theory of the chemical origin of life is one of them. It holds that small inorganic molecules on the early earth reacted to form small organic molecules, which reacted to form organic macromolecules, which assembled into multimolecular systems, finally giving birth to life. As early as the 1950s Stanley L. Miller performed a famous experiment: subjecting CH4, NH3, H2O and H2 to electric sparks, he obtained amino acids, showing that organic substances important for the origin of life could be synthesized abiotically [1]. At the end of the 1970s hydrothermal activity was discovered on the seabed, and H2 along with possibly abiotic organic compounds such as CH4 and C2H6 were found in the hydrothermal fluids [2]; many researchers therefore concluded that in hydrothermal environments, which resemble the environment of the early earth, some organic matter can form by non-biological pathways and supply the material accumulation needed for the origin and early evolution of life [3]. C and N are the elements most important to life; the early atmosphere consisted mainly of CO2 and N2 with a little H2 [4], and today's submarine hydrothermal systems still deliver H2 produced by the reaction of rocks containing low-valence iron with H2O. Understanding the reactions that transform CO2, N2 and H2 into organic matter has therefore become the first task in exploring the abiotic synthesis and evolution of organic matter on earth [5]. Broadly, three approaches are used to study abiotic organic synthesis: isotope tracing, theoretical analysis, and experimental simulation. Because the earth's surface teems with biogenic organic matter, and carbon-isotope signatures are often ambiguous when identifying abiotic organics [6], theoretical analysis and experimental simulation are the usual tools for studying the abiotic pathways that produce organic matter, their evolution, and their significance for the origin of life [7]. Theoretical analysis shows that CO2, N2 and H2 can react to form hydrocarbons, which can react further to form other organic substances such as ribose and amino acids; even in solution, the dissolved CO2, N2 and H2 and their products can, in theory, react to form organic matter [5,8,9]. For example, dissolved CO2, N2 and H2 may react as follows [5]. CO2 and H2 react to form carboxylic acids, aldehydes, alcohols and alkanes; the reaction of dissolved CO2 (CO2,aq) and H2 (H2,aq) to produce alkanes can be written nCO2,aq + (3n+1)H2,aq → CnH2n+2,aq + 2nH2O, where n is the carbon number of the alkane and the subscript \"aq\" denotes the dissolved state (the same below).
N2 reacts with H2 to synthesize NH3: N2,aq + 3H2,aq → 2NH3,aq. CO2 and N2 react with H2 to form HCN: 2CO2,aq + N2,aq + 5H2,aq → 2HCN,aq + 4H2O. The carboxylic acids, alcohols, aldehydes, NH3 and HCN formed by these reactions may react further to form other organic substances, including the bases, ribose and the amino acids [8]. HCN reacts with CH2O to form the bases (adenine, guanine, cytosine, thymine, uracil); the classic example is 5HCN → C5H5N5 (adenine). The CH2O formed from CO2 and H2 condenses into ribose and deoxyribose, e.g. 5CH2O → C5H10O5 (ribose). Aldehydes react with HCN, with NH3 taking part, to form amino acids; the overall reaction can be written RCHO + HCN + H2O → RCH(NH2)COOH, where R represents an alkyl group. Generally, lower temperatures (e.g., below 300ºC) and higher pressures favor the formation and stable existence of organic matter. Moreover, since CH4 and NH3 are the most fully reduced forms of C and N respectively, they tend to dominate once the reaction system reaches equilibrium, with only very low contents of the other products. Experiments, however, show that the reactions of CO2 and N2 with H2 (and of their products) to synthesize organic matter are difficult to carry out: McCollom and Seewald, for example, observed no reaction of CO2 with H2 in solution to form alkanes [10]. Yet precisely because the reaction is sluggish, the complete reduction of CO2 and N2 to CH4 and NH3 is prevented, leaving openings for other products to form under non-equilibrium conditions [5]. Catalysts have therefore been introduced experimentally to study these abiotic syntheses and the catalysts' effects on the reactions. Without a suitable catalyst, CO2 and H2 in solution scarcely react to form alkanes [10]; experiments confirm that in the presence of nickel-iron minerals CO2 and H2 react to produce abundant CH4 [11], and in the presence of chromite they produce CH4, C2H6 and C3H8 [12]. Recently, Chinese researchers used cobalt-bearing magnetite to catalyze the reaction, obtaining C1~C5 alkanes; they found very few branched hydrocarbons among the products, and the abundance of the straight-chain hydrocarbon products falls off log-linearly with carbon number [13]. These results show that the kind and properties of the mineral catalyst determine whether the CO2-H2 reaction proceeds at all and what products it yields. So far, experiments have established only that CO2 and H2 can react in solution to form certain alkanes, and under which catalysts; the course and mechanism of the reaction over mineral catalysts remain unclear. Are there mineral catalysts that can drive the reaction of CO2 and H2 in solution to alkanes of higher carbon number? Can CO2, N2 and H2 undergo the reactions listed above to form NH3, HCN and the rest in solution, and can these react further to form bases, ribose and amino acids? What are the conditions, courses and mechanisms of such reactions? These and related questions still call for deep study.
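As an aside on the chain-length observation above: a log-linear fall-off of straight-chain products with carbon number is the signature of a stepwise chain-growth (Fischer-Tropsch-type, Anderson-Schulz-Flory) mechanism. The sketch below illustrates that pattern only; the chain-growth probability of 0.4 is an arbitrary illustrative value, not a fitted result from [13]:

```python
import math

# Anderson-Schulz-Flory sketch: if a growing chain adds one more carbon
# with probability alpha, the mole fraction of the n-carbon product is
#   x_n = (1 - alpha) * alpha**(n - 1),
# so log(x_n) is linear in n -- the log-linear relationship noted above.
alpha = 0.4  # illustrative chain-growth probability (assumed, not from [13])

for n in range(1, 6):  # C1..C5 alkanes, the range seen in the experiment
    x_n = (1 - alpha) * alpha ** (n - 1)
    print(f"C{n}: mole fraction {x_n:.4f}, log10 = {math.log10(x_n):+.2f}")
# Successive log10 values differ by the constant log10(alpha) (~ -0.40),
# i.e., a straight line of abundance versus carbon number on a log scale.
```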
", "Cold seeps are seepages of fluid, rich in H2S, CH4 and other hydrocarbon compounds, that issue from below the seafloor sediment interface at temperatures close to that of seawater. In 1983 the American scientist Charles Paull first discovered a cold seep at the Florida Escarpment in the Gulf of Mexico [1], and reports of cold seeps have continued to accumulate around the world ever since. Modern active cold seeps develop mostly on passive and active continental margins and in other fault-dissected sea areas; meanwhile reports of cold seeps in the geological record (paleo-cold seeps), formed mainly between the Devonian and the Quaternary, keep growing (Fig. 1). Unlike hydrothermal venting, which is ephemeral (a 10-year scale), cold seepage can persist for a very long time (a 10,000-year scale). Although a great deal of research has been devoted to cold seeps and many valuable insights gained, our understanding rests on barely 20 years of accumulated work, and many major scientific questions remain unresolved. (Figure 1: distribution of modern and paleo-cold seeps worldwide, modified from [2].) One direct consequence of cold-seep activity is that large amounts of CH4 enter the ocean and even the atmosphere. CH4 is a more potent greenhouse gas than CO2, and a large flux of CH4 to the atmosphere would inevitably accelerate global warming. How much CH4, then, enters the atmosphere through cold seeps each year? The question is still unsettled, because the biogeochemical processing of CH4 in cold-seep systems is poorly understood. In fact most of the CH4 seeping upward from depth is consumed by methane-oxidizing microbial communities as it passes through the anoxic sediment layer; this microbially mediated anaerobic oxidation of CH4 usually proceeds together with sulfate reduction (Fig. 2), and its chemical equation can be written CH4 + SO42− → HCO3− + HS− + H2O. The rate of anaerobic CH4 oxidation, however, varies enormously in space (by 1-2 orders of magnitude [5]), and the spatial extent of cold seeps across the oceans is likewise unclear, so more accurate data on both are the key to constraining the role of anaerobic CH4 oxidation in the global CH4 budget and indeed in the global carbon cycle. In-situ observation, microscopy, stable- and radio-isotope measurement, and molecular-level analysis of biomarker compounds and gene composition are the principal means of attacking these problems; expanded marine geological surveys to determine the spatial extent of cold seeps are equally crucial. (Figure 2: schematic of fluid migration, a typical cold-seep biome, gas-hydrate distribution and cold-seep carbonate precipitation in a continental-margin cold-seep system. The inset shows methane and sulfate concentration profiles in anoxic sediment; at the sulfate-methane interface both concentrations reach their minimum owing to coupled anaerobic methane oxidation and sulfate reduction.
The inset at upper right is a schematic of the coupled metabolism of anaerobic methane oxidation and sulfate reduction, with the methane-oxidizing archaea in red and the sulfate-reducing bacteria in green; synthesized from [3] and [4].) Another notable feature of cold seeps is the cold-seep biome that forms on the seafloor. Below about 200 m no light reaches the seabed and photosynthesis is impossible, so the deep-sea environment was long considered a forbidden zone for life; yet cold-seep systems host a food chain whose primary producers are chemoautotrophic bacteria, an ecosystem with a highly distinctive community structure (Fig. 2) [2,6]. On the basis of these primary producers live primary consumers such as tube worms, clams, mussels and polychaetes; secondary consumers such as starfish, sea urchins and shrimps; and higher consumers such as fish, crabs, flatworms and cold-water corals; the dead are finally decomposed, by nematodes among others, and returned to the environment, completing the cold-seep ecosystem. At higher taxonomic levels cold-seep communities resemble those of hydrothermal environments, but cold-seep systems combine high biomass with low biodiversity, and cold-seep organisms usually grow slowly; some large tube worms may be hundreds of years old and are considered among the oldest animals on earth [6]. The flourishing and death of the biome is governed by the seepage itself: once a seep goes \"dormant\" (methane stops seeping), the community dies and a new one forms near the new vent. Cold-seep organisms are extremely sensitive to changes in their environment, so communities can change sharply over distances of a few metres [5]. Marine surveys in recent years have produced a wealth of primary data and greatly broadened our knowledge of cold-seep organisms; we already know that the evolution of cold-seep ecosystems is driven by the geological activity of the seeps, but how the communities respond to changes in that activity is unclear. Moreover, because cold-seep research is specialized, heavily dependent on high technology and large investment, and unevenly developed around the world, little is known about the evolution of cold-seep ecosystems in time and, above all, in space. Comparative study of cold-seep systems at different water depths within one sea area and across different sea areas, deeper research into ancient cold-seep ecosystems alongside continued work on modern ones, and exploration of geological-biological interaction in cold-seep systems through geological history are the effective measures for breaking through this bottleneck. In intensity, cold seepage ranges from violently \"eruptive\" systems to \"diffusive\" systems invisible to the naked eye. Earlier studies suggested that seepage intensity tracks sea level, strengthening at low stands and weakening at high stands [7,8]; this remains to be confirmed, and if the link is real it should be global and should hold through geological history. Furthermore, how do cold-seep ecosystems respond to changes in seepage intensity? Did life on earth originate in cold-seep systems?
Research on cold-seep ecosystems bears directly on our understanding of the laws of geological-biological interaction in extreme environments. It is fair to say that the study of cold seeps and their ecosystems has only just begun; scientists at the frontier are hunting for clues to these questions, and many mysteries await the scientists of the future.", "About 50 million years ago plate motion brought India into collision with the Asian continent, setting off a major orogeny that built the world's greatest orogenic belt, the Himalaya-Tibet belt, and the roof of the world, the Qinghai-Tibet Plateau. The formation and evolution of this orogenic belt and plateau is one of the frontier fields of modern earth science [1]. At least until the Cretaceous the Chinese mainland still stood high in the east and low in the west; the uplift of the Qinghai-Tibet Plateau reversed that topography into today's pattern of high west and low east [2] and changed the distribution and courses of the great rivers of Asia [2,3], and with them the delivery of fresh water and sediment from land to ocean. The original Yangtze, for example, was developed only in eastern China, while the upper Jinsha River flowed south into the Indian Ocean; with the main uplift of the plateau the Jinsha joined the proto-Yangtze, forming what is now the world's third largest river. The rise of the plateau not only remade the terrain but also profoundly influenced the climate of Asia and of the globe, and the specific processes and mechanisms of these effects are hotly debated scientific questions [4~6]. Within Asia, the uplift raised a vast expanse of formerly tropical, subtropical and temperate land above 4500 m into an alpine region, the \"third pole of the world\" where ice, snow and permafrost are concentrated. The uplifted plateau acts as an atmospheric heat source in summer and a cold source in winter, so that southerly winds prevail over much of Asia in summer, carrying abundant water vapor from the low-latitude oceans, while dry, cold northerlies prevail in winter, producing the strong Asian monsoon [7,8]. The plateau also splits the westerly circulation, and the southern branch in summer and the northern branch in winter reinforce the monsoon. At the same time the Himalayas and the plateau bar oceanic moisture from the interior, creating in Asia the world's largest inland arid region, and the plateau's intensification of the Siberian High makes inner Asia drier still in the winter half-year. In the Paleogene, before the uplift, the Asian continent had an east-west zonal climate pattern governed by the planetary wind system: southern China resembled today's Sahara, an arid zone under the subtropical high, and it was the establishment of a strong monsoon circulation that turned the south into a humid land. Chinese loess and other geological records show that this change of climate pattern took place 22-24 million years ago [9].
The most significant change in global climate during the Cenozoic is the long cooling marked by the formation and growth of the polar ice sheets, and the Tibetan Plateau may have played an important part in it. First, the uplift of the plateau can, by altering the atmospheric circulation, reduce the heat supplied to the high latitudes of the northern hemisphere, cooling them and the globe. Second, the weathering of silicate rocks exposed by uplift absorbs atmospheric CO2 and converts it into HCO3−, which rivers carry into the surrounding basins and oceans, lowering the atmospheric CO2 concentration. Third, the monsoon created by the uplift itself intensifies chemical weathering and hence the absorption of CO2. Fourth, rivers sweep large amounts of terrestrial organic matter, along with other eroded material, into basins and oceans, which likewise draws down CO2 [6]. Fifth, some of the minerals delivered to the ocean become nutrients for marine organisms, raising productivity and absorbing CO2. The last four processes all reduce atmospheric CO2, driving global cooling and ice-sheet development; among them the effect of chemical weathering on CO2 is probably the most critical, and it has been invoked to explain the alternation of icehouse and greenhouse intervals throughout earth history (as in the BLAG hypothesis; the weathering reactions themselves are sketched at the end of this entry). Some studies argue in turn that the climate change caused by uplift intensifies the denudation of the plateau, provokes isostatic adjustment of the crust, and drives further uplift, the famous \"chicken-and-egg\" debate over cause and effect between the Qinghai-Tibet Plateau and climate change [10]. Our understanding of the link between plateau uplift and East Asian climate is now comparatively good, but the link between the plateau and global climate remains at the stage of conceptual models and hypotheses, awaiting confirmation by a great deal of future work [5,8,11]. Prominent questions include: when did the plateau rise high enough to affect climate? Is the history of the Asian monsoon strictly tied to the growth of the plateau? How much does each of the processes by which uplift affects global climate actually contribute? Several key problems stand in the way. First, the history of plateau elevation and areal extent is still very uncertain, and accurate paleoaltimetry techniques must be found and developed. Second, the various ways uplift acts on the carbon cycle need to be better understood, and tracking and quantifying the sensitivity of climate to CO2 concentration on different time scales is another key. Third, how strongly does climate change affect denudation, and how does denudation feed back into crustal isostasy and further uplift? In fact the uplift of the Qinghai-Tibet Plateau influences global climate in still other ways that have not yet been recognized or given due attention. For example, the major reorganization of Asian climate caused by the uplift has large effects on terrestrial ecosystems and must affect the global carbon cycle, about which we so far know little; the uplift also strongly affects the distribution and area of low-latitude wetlands and the water cycle in many ways, and must therefore regulate terrestrial carbon pools and the atmospheric concentrations of other greenhouse gases (such as CH4 and N2O). The effects of these processes on global climate have yet to be noticed.
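The weathering mechanism invoked several times above can be condensed into the classic pair of reactions (written here for a generic Ca-silicate; a standard textbook simplification, not a result of the studies cited in this entry):

```latex
% Silicate weathering on land: consumes two CO2 per Ca released
\mathrm{CaSiO_3} + 2\,\mathrm{CO_2} + \mathrm{H_2O} \longrightarrow
\mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} + \mathrm{SiO_2}
% Carbonate precipitation in the ocean: returns only one CO2
\mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} \longrightarrow
\mathrm{CaCO_3} + \mathrm{CO_2} + \mathrm{H_2O}
% Net effect: one CO2 locked up as carbonate per unit of silicate weathered
\mathrm{CaSiO_3} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{SiO_2}
```

Uplift-driven erosion keeps exposing fresh silicate and so accelerates the first reaction, which is why the weathering term is regarded as the most potent of the CO2-drawdown processes listed above.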
Among such unrecognized pathways, for example, the major changes in the Asian climate pattern caused by plateau uplift strongly affect terrestrial ecosystems and must therefore affect the global carbon cycle, yet we currently know little about this; the uplift also strongly affects the distribution and area of low-latitude wetlands and the water cycle in various ways, and must regulate terrestrial carbon pools and the atmospheric concentrations of other greenhouse gases (such as CH4 and N2O). The effects of these processes on global climate have not yet been noticed. These unresolved or unrecognized problems are the charm of research on the relationship between the Qinghai-Tibet Plateau and the environment, and one of the breakthrough points for future earth system science.", "Finding Quaternary strata that truly record environmental change events on different time scales and can be assigned reliable ages has become an important topic in past global change (PAGES) research. Because of its wide distribution, continuous deposition and rich environmental information, Chinese loess, together with deep-sea sediments and polar ice cores, is known as one of the three pillars of global change research. Chinese loess systematically records the continuous history of climatic and environmental change in inland Asia since the Quaternary and the Miocene, as well as geomagnetic polarity reversals and excursions. Using the relationship between increased production of cosmogenic nuclides and weakened paleomagnetic intensity during polarity transitions, environmental changes can be traced independently and stratigraphic horizons can be locked to events of known age[1-3], providing a new line of thinking for this research.

I. Difficulties in tracing global paleomagnetic variation with Chinese loess

The geomagnetic field is the barrier against the solar wind and low-frequency cosmic rays, and strongly influences space weather. To date, the causes of magnetic polarity reversals and excursions on time scales of ten thousand to a million years, which are crucial to the Earth's surface environment, and their environmental consequences remain unsolved academic issues. The long-term history of paleomagnetic field intensity is generally obtained by magnetic measurements or by using accelerator mass spectrometry (AMS) to measure the cosmogenic radionuclide 10Be in sedimentary layers. Zhu Rixiang and colleagues conducted detailed rock-magnetic research on Chinese loess, and in 2001 used magnetic methods to estimate the relative paleointensity history from Chinese loess[4]. Because their results do not fully agree with those obtained from marine sediments, they concluded that the magnetic parameters in loess may be affected by climate change and pedogenesis on the Loess Plateau, so that estimating the history of paleomagnetic field intensity from the magnetic records of Chinese loess is comparatively complicated. Compared with traditional paleomagnetic inversion of the geomagnetic field's evolution, exploiting the geomagnetic field's shielding of cosmic rays through tracer studies of cosmogenic 10Be offers higher detection sensitivity and can reveal in detail the geomagnetic modulation of nuclide production in the atmosphere.
Consequently, many studies have traced the history of the paleomagnetic field with 10Be in marine sediments and ice cores. In eolian loess, however, 10Be deposition is complex: the measured record includes both 10Be carried with dust from distant source regions to the Loess Plateau by strong winds, and newly produced 10Be - modulated by the geomagnetic field - delivered to the plateau surface by local precipitation. Because monsoon rainfall and dust fall are extremely inhomogeneous over the Loess Plateau, the geomagnetic modulation signal in loess 10Be records is masked (Fig. 1). Unlike the 10Be records of ocean, lake and ice-core sediments, which show the geomagnetic modulation directly, measured loess 10Be records show no obvious modulation signal; as a result, for a long time there were no reports of tracing paleomagnetic events or paleomagnetic history from loess 10Be records. Figure 1: The complexity of 10Be deposition and of magnetic mineral particles in loess-paleosol sequences. The measured magnetic susceptibility of loess consists of two parts: magnetic minerals carried by dust from the source regions and deposited on the Loess Plateau by strong winds, which form the dustfall susceptibility, and magnetic minerals formed in situ during pedogenesis, which form the pedogenic susceptibility. From the high similarity (r = 0.95) between the 10Be concentration curve and the magnetic susceptibility curve measured in Luochuan loess over the past 130,000 years (Fig. 2), it can be inferred that the two are associated with climatic factors (precipitation and dustfall) in similar ways. In 2003 the author proposed treating magnetic susceptibility as a surrogate index for the climatically influenced component of loess 10Be[5]; by separating the geomagnetically modulated 10Be component from the climatically influenced component in the measured concentrations and applying the \"average concept\"[6], the production history of atmospheric 10Be and the history of the paleomagnetic field over the past 80,000 years were reconstructed[5,6]. The reconstruction clearly traces the major geomagnetic excursions of the last 80,000 years (the Mono Lake and Laschamp events) and agrees well with the well-known paleomagnetic variation curves SINT-200 and NAPIS-75 of the past 200,000 years (Fig. 3). Although this simple and effective method has been further confirmed on the Luochuan and Xifeng profiles spanning roughly the last 130,000 and 300,000 years, the correlations of loess 10Be and of magnetic susceptibility with climatic factors cannot be perfectly consistent, and the extent to which the inconsistent part affects the reconstruction is still unknown; the method therefore needs verification on different sections over longer time spans.
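The separation idea described above can be sketched schematically. This is a minimal illustration, not the published algorithm: the linear form of the climate term and all names here are assumptions.

```python
import numpy as np

# Treat magnetic susceptibility as a surrogate for the climate-controlled
# (precipitation + dustfall) part of the loess 10Be signal, and read the
# residual as the geomagnetically modulated production signal.

def separate_be10(be10, chi):
    """be10: measured 10Be concentrations down a profile;
    chi:  magnetic susceptibility of the same samples."""
    be10, chi = np.asarray(be10, float), np.asarray(chi, float)
    a, b = np.polyfit(chi, be10, 1)        # climate-related component
    be10_climate = a * chi + b
    be10_geomag = be10 - be10_climate      # residual: geomagnetic modulation
    # "average concept": express as production-rate variation about the mean
    rel_production = 1.0 + be10_geomag / be10.mean()
    return be10_climate, rel_production
```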
In addition, since the production rate of 10Be in the atmosphere depends not only on geomagnetic field strength but also on the primary cosmic-ray flux and on modulation by the solar electromagnetic field, the conversion equations or curves currently used to derive paleomagnetic intensity from a reconstructed atmospheric 10Be production rate contain approximating assumptions. Figure 2: The high similarity (r = 0.95) between the 10Be concentration curves and magnetic susceptibility curves measured in the Luochuan and Xifeng loess over the last 130 ka. Figure 3: The reconstructed geomagnetic variation curve clearly traces the major geomagnetic excursions of the last 80,000 years (the Mono Lake and Laschamp events) and agrees well with the well-known geomagnetic variation curves SINT-200 and NAPIS-75.

II. Limitations of existing methods for reconstructing paleo-precipitation on the Chinese Loess Plateau

The ecologically fragile Loess Plateau is a key area for ecological protection and reconstruction in China. Quantitative reconstruction of paleo-monsoon precipitation on the plateau is an important part of this work, providing a scientific basis for evaluating the environment, predicting the future, and informing government decision-making. For the quantitative reconstruction of paleo-monsoon precipitation, different scholars have established different climate proxies. The good correlation between the magnetic susceptibility records of Chinese loess deposits and deep-sea oxygen isotope records suggested the potential of loess susceptibility for studying paleoclimate change, and over the past two decades many geologists have held that magnetic susceptibility, more than other proxies, is sensitive enough to reflect paleoclimate change[7-12]; an important step has thus been taken toward the temporal reconstruction of paleo-rainfall. The dustfall susceptibility contributed by dust from the source regions has nothing to do with rainfall, whereas the formation of pedogenic susceptibility is a complex chemical and biochemical process, controlled by climatic conditions (rainfall, monsoon and air temperature) and closely related to soil conditions (parent material, topography, weathering, vegetation, humidity, temperature, etc.). Because modern techniques still cannot accurately recover these paleoclimatic and paleosol parameters, and the regression analyses adopted by researchers cannot resolve the role of each factor in pedogenesis, researchers could only work under the premise that rainfall is the most important factor controlling pedogenic susceptibility; each did their best while excluding or setting aside other factors, and carried out their own paleo-rainfall reconstructions. Their reconstruction formulas and results therefore have serious limitations. The reconstruction equations of Sun Donghuai et al.[7] and Han et al.[8] treated even the dustfall susceptibility from the source regions as related to rainfall; the work of Maher et al.[9] and An et al.[10]
either used the separated pedogenic susceptibility directly as an index of summer rainfall[10] or regressed it against rainfall alone[9]; both approaches ignore the \"dilution\" of pedogenic susceptibility by dust[11], even though their own papers note that slow, continuous dustfall accompanies pedogenesis. In 2001 Porter and Hallet[12] proposed a simple MS (magnetic susceptibility) model to identify and estimate this dilution effect. Although their regression equation includes a dust deposition rate factor, it represents only the lateral, spatial distribution of surface loess, not the vertical depth distribution within a profile. For example, the formula is based on surface data from nearly forty sections whose mean annual temperatures span only 6-14°C; this mere 8°C range led to a conclusion that puzzled the researchers themselves - that air temperature has no effect on pedogenic susceptibility - whereas a 130,000-year profile spans the full temperature difference between the ice ages and the present. In fact, except for the work of An[10], these studies all regress surface susceptibility against local modern rainfall. The resulting formulas correctly capture the best fit within the modern data of the selected sections, but because they fold the influence of all climatic and soil factors other than rainfall - even the dustfall susceptibility - into the contribution of rainfall, they distort the inherently linear relationship between rainfall and pedogenic susceptibility into various nonlinear ones (logarithmic, high-order polynomial). Worse, they apply regression equations established from 10-30 years of measured data to reconstruct the rainfall history of glacial-interglacial cycles, which amounts to treating every factor other than rainfall, including the dust deposition rate, as fixed at modern values throughout glacial-interglacial time - plainly inconsistent with reality. For example, the fourth-order polynomial derived by Han et al., by the mean-value concept, holds only at the modern mean temperature, which differs greatly from the mean temperature over the glacial-interglacial period. As another example, the dissertations of two of the author's students showed that the RSD (relative standard deviation, the ratio of the root mean square of the fluctuation amplitude to the mean) of the reconstructed dust-fall flux reached 24% and 40%, respectively (Fig. 4). Moreover, based on the mean-value concept and on error propagation through the regression equations, the author made an \"average effect estimate\" of the influence on pedogenic susceptibility of changes in all factors other than rainfall and dust flux. The estimates show that the susceptibility changes caused by these unconsidered factors amount to 17% (last ~80,000 years) and 10% (last ~130,000 years) of the mean pedogenic susceptibility - problems that everyone reconstructing paleo-rainfall from loess magnetic susceptibility must confront and solve.
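The RSD quoted above follows directly from its verbal definition; a minimal sketch (the variable dust_flux is hypothetical):

```python
import numpy as np

def rsd(series):
    """Relative standard deviation as defined above: the root mean square
    of the fluctuations about the mean, divided by the mean."""
    series = np.asarray(series, dtype=float)
    return np.sqrt(np.mean((series - series.mean()) ** 2)) / series.mean()

# e.g. rsd(dust_flux) would give ~0.24 for the Luochuan record cited above,
# where dust_flux is an array of reconstructed dust-fall fluxes.
```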
Figure 4: Dust-fall flux curve of the Luochuan loess over the last 130,000 years. Its relative standard deviation (the ratio of the root mean square of the fluctuation amplitude to the mean) is RSD = 24%, with an abrupt change around 80,000 years. In 1993 Heller et al.[13] proposed the \"10Be-magnetic susceptibility model\" for loess and used it to separate the pedogenic susceptibility from the measured susceptibility, attempting to reconstruct paleo-precipitation. One problem with this model is that it neglects the modulation of the loess 10Be signal by geomagnetic field changes; moreover it can only give average rainfall over long intervals (tens of thousands of years), with errors that are too large. Sun Jimin et al. (1999) used loess geochemical parameters, and Wang Lixia et al. (2005) used a transfer function between the organic carbon isotope δ13C and mean annual rainfall, to reconstruct paleo-monsoon rainfall on the Loess Plateau quantitatively. Since a proxy is usually related to several factors - organic carbon δ13C, for example, responds to precipitation, temperature and light - a gap remains between the rainfall obtained through such proxies and the true values. In 2007, using the essentially constant ratio between the atmospheric production of 7Be and 10Be, the author[5] transformed the linear regression established between modern rainfall and 7Be in rain into a relation between rainfall and precipitation-derived 10Be in loess, and reconstructed the rainfall curve of the Luochuan area over the past 80,000 years. Although the curve compares well with the δ18O records of the Hulu and Dongge cave stalagmites, extending a regression established from 2-3 years of data to 80,000 years is itself problematic. The biggest difficulty is how to obtain the correct precipitation 10Be to substitute for 7Be, because the former is both modulated by the geomagnetic field and diluted by dust, whereas 7Be in modern rain is affected by neither (① Zhou WJ, et al., internal report). In short, the existing methods for reconstructing paleo-rainfall on the Chinese Loess Plateau all have serious limitations, and they await exploration by an ambitious younger generation.", "Limestone and dolomite are the most widely distributed chemical sedimentary rocks on Earth: the former consists mainly of calcite and aragonite (both composed of calcium carbonate, CaCO3), the latter mainly of the mineral dolomite (calcium magnesium carbonate, CaMg(CO3)2). Dolomite is widespread in the marine strata of the geological record; in particular, most of the carbonate rocks in Precambrian strata older than a billion years are thick, massive dolomites[1,2]. There is broad agreement on the origin of limestone - direct chemical precipitation of calcium carbonate - but the origin of dolomite has been a major controversy in sedimentology for many years, centered on the dispute between primary precipitation and secondary metasomatism[2,3]. Although thermodynamic theory predicts that dolomite should precipitate directly from seawater, its observed distribution in modern marine sediments is in fact extremely limited.
Where does so much magnesium come from? Could there be dolomite deposited directly from seawater, i.e., primary dolomite? And if primary dolomite existed in geological history, why has no large amount of it been found forming in modern oceans? Moreover, dolomite has never been synthesized in the laboratory under near-surface temperature and pressure conditions[4]. Whether the dolomite so widespread in the geological record - especially in Precambrian strata - is primary dolomite precipitated directly from ancient seawater or formed by metasomatism of calcium carbonate has therefore remained an unresolved scientific problem. Oxygen isotopes have been used extensively by geochemists to study the origin of sedimentary carbonate rocks (limestone and dolomite) and the temperatures at which they precipitated from water bodies. Interpreting such isotopic data in nature, however, depends on our knowledge of the magnitude and direction of oxygen isotope fractionation between carbonate minerals and fluids, and the oxygen isotope fractionation behavior of the different carbonate mineral-water systems (dolomite, calcite and aragonite) in nature has always been one of the difficulties of stable isotope geochemistry[4,5]. A correct understanding of the oxygen isotope fractionation mechanisms of these minerals is key to resolving the debate over the origin of dolomite: oxygen isotope study can in principle answer whether dolomite precipitated directly from seawater or formed by replacement of limestone by Mg-rich fluids. Some geological observations suggest that dolomite may be a direct chemical deposit in marine evaporite basins, yet no one has demonstrated such a precipitation process in laboratory simulation. On the other hand, it is entirely possible that dolomite formed by diagenetic metasomatism of pre-existing calcium carbonate by Mg-rich fluids. Many models have been proposed for this secondary replacement - the concentrated normal-seawater model, the seawater evaporation model, the mixed-water model, the burial dolomitization model, and so on - and the source of Mg in the fluid is the key issue in each. The driving mechanisms of dolomitization include burial compaction, seepage reflux, thermal convection of fluids, and fluid flow driven by sea-level rise and fall. The mechanism of this secondary metasomatism, however, is still not well understood. Previous studies indicate that the difficulty of precipitating dolomite directly from solution is a restriction of crystallization kinetics rather than thermodynamics; experiments have shown that microbial activity promotes dolomite formation, and the kinetic barriers to low-temperature dolomite synthesis can be overcome with microbial participation[6,7]. Analysis of the oxygen isotope compositions of cogenetic dolomite and calcite may provide a means of probing the chemical mechanism of dolomite formation, but the magnitude and direction of dolomite-calcite oxygen isotope fractionation must first be clarified[4,5].
Since dolomite has not been synthesized at low temperature (25°C), the dolomite-water oxygen isotope equilibrium fractionation coefficient cannot be determined directly at that temperature; it must instead be extrapolated from oxygen isotope studies of certain high-temperature mineral assemblages containing dolomite and calcite. Such extrapolation indicates that at 25°C dolomite should be significantly enriched in 18O relative to associated calcite. In nature, however, dolomite-calcite pairs in sedimentary carbonate rocks often show very little oxygen isotope fractionation. Calculations with crystal-chemical models of the minerals indicate that dolomite's oxygen isotope fractionation behavior is similar to calcite's, the dolomite-calcite equilibrium fractionation at 25°C being a small positive value in some estimates and a large positive value in others - the latter similar to the result obtained by simple extrapolation of high-temperature laboratory data. Whether oxygen isotopes are inherited during the polymorphic transformation of aragonite to calcite is therefore one of the keys to resolving the origin of dolomite. When calcium carbonate precipitates from solution it can appear as any of three polymorphs: aragonite, calcite or vaterite. Which polymorph actually forms in nature depends on many factors, including temperature, pH, nucleation and growth rates, the presence or absence of impurities, the partial pressure of carbon dioxide, and the degree of supersaturation of the solution[8-11]. All three polymorphs occur in natural minerals and in the shells and bones of marine organisms, with aragonite and calcite the most abundant and vaterite very rare. Studies have therefore focused on oxygen isotope fractionation in the aragonite-calcite-water system, but the results are interpreted very differently[5]. Early laboratory precipitation experiments and some observations in nature found the calcite-aragonite oxygen isotope fractionation to be negative; conversely, other natural observations find it to be positive. Certain marine organisms with aragonitic shells show very little or no oxygen isotope fractionation relative to the associated calcite. In partial-equilibrium hydrothermal experiments calibrating the calcium carbonate-water oxygen isotope fractionation coefficient at 100°C, the extrapolated equilibrium fractionation obtained with natural calcite as the starting material is much larger than that obtained with aragonite as the starting material. Crystal-chemical model calculations, for their part, indicate that aragonite should be distinctly depleted in 18O relative to calcite. From these observations and theoretical results one can infer either that calcitic shells formed by polymorphic transformation of aragonitic shells whose isotopic composition changed little or not at all after deposition, or that the calcitic and aragonitic shells originally formed at different temperatures.
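The extrapolation described at the start of this passage can be made concrete. Mineral-water oxygen isotope calibrations are conventionally written as 1000 ln α = A × 10⁶/T² + B (T in kelvin), with A and B fitted to high-temperature experiments. In the sketch below, the calcite-water coefficients are one commonly quoted calibration, while the dolomite-water pair are placeholders used only to illustrate the sign and rough size of the extrapolated offset:

```python
def thousand_ln_alpha(T_kelvin, A, B):
    """1000 ln(alpha) for a mineral-water pair in the usual A/T^2 + B form."""
    return A * 1e6 / T_kelvin**2 + B

T25 = 298.15                                      # 25 degrees C in kelvin
calcite  = thousand_ln_alpha(T25, 2.78, -2.89)    # calcite-water calibration
dolomite = thousand_ln_alpha(T25, 3.20, -3.30)    # dolomite-water (placeholder)

# Extrapolated dolomite-calcite fractionation at 25 C: a positive value of a
# few per mil, i.e. dolomite enriched in 18O relative to coexisting calcite.
print(round(dolomite - calcite, 2))
```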
The question is: do assemblages with negative calcite-aragonite fractionation represent isotopic disequilibrium, and if so, what causes the disequilibrium fractionation? Resolving these discrepancies and understanding their geological significance is thus one of the difficulties of basic research in low-temperature oxygen isotope geochemistry. From the existing data it is conjectured that large dolomite-calcite oxygen isotope fractionation may arise in one of two processes: ① diagenetic replacement or metamorphic alteration of calcite, with both calcite and dolomite in isotopic equilibrium with seawater; or ② polymorphic transformation of aragonite to calcite without isotopic re-equilibration with the ambient medium, with both aragonite and dolomite having precipitated in isotopic equilibrium with seawater. In either case the isotopic composition of the dolomite remains unaffected. Conversely, does a similar δ18O value for dolomite and calcite indicate one of the following origins: ① dolomite is a primary chemical sediment, with both dolomite and calcite precipitated in isotopic equilibrium with seawater; or ② dolomite formed by replacement of pre-existing calcite through a solid-state diffusion mechanism that left the oxygen isotope composition unchanged? The outstanding questions are: Is the large dolomite-calcite fractionation observed in some natural mineral assemblages a disequilibrium feature? Is calcite that is depleted in 18O relative to coexisting dolomite formed from aragonite by polymorphic transformation? Did the aragonite-to-calcite transformation proceed without isotopic redistribution with the medium, or did dolomite and aragonite both precipitate in isotopic equilibrium with seawater? These questions constitute the key to the origin of dolomite and remain scientific problems that geochemists continue to explore.", "The use of stable isotopes to trace plate subduction has long been a hotspot of geochemical dynamics. Whether oceanic or continental crust is being subducted, stable isotope studies can place important constraints on key aspects of the subduction process (time scales, crust-mantle interaction, recycling of subducted material, etc.). The approach rests on the following understanding: low-temperature water-rock exchange produces significant isotope fractionation, whereas high-temperature magmatic processes have little effect on stable isotope fractionation. Before subduction, low-temperature interaction of the oceanic crust with seawater or marine sediments produces significant fractionation, giving the subducted material an isotopic composition different from the mantle's. After subduction, these isotopic signatures are preserved through high-temperature partial melting and are eventually expressed in oceanic basalts. The subduction process and the recycling of crustal material can therefore be traced through the stable isotope signatures of oceanic basalts. Theoretical and experimental studies of traditional stable isotopes such as C, H and O all show that equilibrium isotope fractionation at high temperature is small.
However, the recent development of high-precision analytical methods and the study of non-traditional stable isotopes have shown that significant kinetic isotope fractionation can occur in high-temperature processes, greatly advancing the study of high-temperature fractionation mechanisms and their application to tracing high-temperature geological processes. Non-traditional stable isotopes are those whose ratios cannot be measured with high precision by traditional instruments - thermal ionization or gas-source mass spectrometers - but can now be measured with satisfactory precision by the multi-collector plasma mass spectrometry developed in recent years: Li, Mg, Cl, Ca, Cr, Fe, Cu, Zn, Se, Mo, Tl, and so on. Besides fractionating significantly in low-temperature geological processes, non-traditional stable isotopes have recently been found, in both laboratory synthetic samples and natural samples, to undergo considerable fractionation by thermal or chemical diffusion during high-temperature geological processes[1-5]. According to the research of Richter et al.[4], light isotopes diffuse faster than heavy ones during chemical diffusion, so significant isotope fractionation can occur before diffusive equilibrium is reached; the degree of fractionation depends on the relative mass difference of the isotopes and on the initial conditions of diffusion. For the same initial conditions, the larger the relative mass difference, the larger the diffusive fractionation; for the same element, the larger the initial contrast in elemental activity (concentration) across the diffusion couple, the larger the fractionation. The Tin Mountain pegmatite and its country rocks in the United States are a classic natural example of isotope fractionation caused by chemical diffusion (Fig. 1). Teng et al.[2] analyzed the lithium contents and isotope compositions of the pegmatite and its country rocks and found that marked lithium isotope fractionation occurred as lithium diffused from the pegmatite into the country rock; the measured sample data agree well with curves calculated from one-dimensional diffusion theory. Figure 1: Kinetic lithium isotope fractionation during one-dimensional diffusion. The light isotope 6Li diffuses faster than the heavy isotope 7Li, so samples farther from the pegmatite are enriched in the light isotope. Squares: lithium content of the country rock; dots: lithium isotope composition of the country rock; stars: country-rock samples unaffected by diffusion; solid and dashed lines: theoretical simulation curves. Besides chemical diffusion, thermal diffusion under a temperature gradient can also cause significant elemental differentiation and isotope fractionation[1,3,5]. For example, when a compositionally homogeneous basaltic melt (SUNY-MORB) undergoes thermal diffusion under a temperature gradient, the diffusion of major elements enriches Si at the low-temperature end while Mg, Al, Ca, Fe and other elements become enriched at the high-temperature end.
At the same time, the stable isotopes of O, Si, Mg, Ca and Fe all undergo significant kinetic fractionation, with the light isotopes of these elements becoming enriched at the high-temperature end and the heavy isotopes at the low-temperature end (Fig. 2). Figure 2: Kinetic isotope fractionation during thermal diffusion[1,5]. In an initially homogeneous basaltic melt under a temperature gradient, O, Mg, Si, Ca, Fe and other elements diffuse, accompanied by significant isotope fractionation, expressed as δX = [(iX/jX)sample/(iX/jX)initial − 1] × 1000. In actual geological research, however, the kinetic isotope fractionation generated by chemical or thermal diffusion during high-temperature processes is often indistinguishable from the fractionation produced by low-temperature processes, which further complicates the use of non-traditional stable isotopes to trace plate subduction. For example, during the partial melting of mantle peridotite to form basalt, iron isotope fractionation gives both the basalt and the residual peridotite iron isotope compositions that deviate from chondrites[6], making it difficult to constrain the iron isotope composition of the mantle directly from basalts and residual peridotites. On the other hand, high-temperature isotope fractionation provides a new tool for studying high-temperature geological processes. For example, iron isotope fractionation during magmatic processes is caused mainly by differences in oxygen fugacity between melts and minerals[6]; variations of iron isotopes in oceanic basalts can therefore be used to infer changes in oxygen fugacity during actual mantle melting and magma evolution. As another example, Lundstrom[7] showed through experimental research and theoretical calculation that during the formation of some granites, continuous underplating of high-temperature magma into the magma chamber can sustain a thermal gradient for a long time; the resulting thermal diffusion differentiates the elements, forming granite at the low-temperature end and cumulate amphibole gabbro at the high-temperature end, accompanied by isotope fractionation. Since the degrees of fractionation of two stable isotope systems during thermal diffusion are always linearly and positively correlated - Fe and Mg isotopes, for example - systematic measurement of multiple stable isotope compositions in a suite of magmatic rocks can determine whether thermal diffusion played a major role in its formation.
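The chemical-diffusion fractionation described for the Tin Mountain case can be illustrated with a minimal one-dimensional sketch (this is not the model of Teng et al.). Richter and co-workers express the mass dependence of melt diffusivities as D_heavy/D_light = (m_light/m_heavy)^β; the β and grid parameters below are illustrative, of the order reported for Li in silicate melts:

```python
import numpy as np

beta = 0.215
D6 = 1.0                                   # 6Li diffusivity, arbitrary units
D7 = D6 * (6.015 / 7.016) ** beta          # 7Li diffuses slightly more slowly

n, dx, dt, steps = 200, 1.0, 0.2, 5000     # explicit scheme: D*dt/dx**2 < 0.5
c6 = np.full(n, 0.01)                      # country rock: low Li
c7 = np.full(n, 0.12)                      # with a natural 7Li/6Li ratio ~ 12
c6[:20], c7[:20] = 1.0, 12.0               # pegmatite end: Li-rich reservoir

def step(c, D):
    # one explicit finite-difference time step of the 1D diffusion equation
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])

for _ in range(steps):
    step(c6, D6)
    step(c7, D7)

# delta-7Li relative to the initial ratio: a trough develops ahead of the
# diffusion front because 6Li outruns 7Li, enriching the light isotope in
# samples farther from the pegmatite, as in Fig. 1.
d7Li = ((c7 / c6) / 12.0 - 1.0) * 1000.0
print(d7Li.min())
```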
Although great progress has been made in recent years in the study of high-temperature fractionation mechanisms of non-traditional stable isotopes, many key issues remain to be resolved: ① although experiments and theoretical calculations both show that chemical diffusion can produce significant isotope fractionation, existing research is limited to melt-melt systems, and the effects of elemental diffusion within minerals, between minerals and melts, and between minerals on isotope fractionation are still unclear - yet these are the basis for applying high-temperature isotope fractionation to trace geological processes; ② although experiments have found that thermal diffusion can cause significant isotope fractionation, the underlying principles and mechanisms are still unclear; ③ experimental studies approaching the actual geological processes and conditions of nature are lacking - for example, the effect of oxygen fugacity on the chemical and thermal diffusion of variable-valence elements and on the associated isotope fractionation is still unknown, and this is the key to using isotopes to constrain changes of oxygen fugacity in high-temperature magmatic processes; ④ although experiments and theory show that high-temperature diffusion can cause isotope fractionation, reported natural examples are still very few, which leaves in doubt whether isotope fractionation by high-temperature processes is a common geological phenomenon. To this end, systematic non-traditional stable isotope studies need to be carried out across thermal or chemical boundaries, such as the boundaries between enclaves and their host rocks and between intrusive rocks and country rocks.", "There is a mass difference between molecules composed of different isotopes, and the resulting differences in the physical and chemical properties of the molecules are called the isotope effect. Isotope fractionation is the phenomenon whereby, within a system, the isotopes of an element are partitioned between two substances or two phases in different proportions. As early as the 1940s, Urey[1] calculated the equilibrium constants of isotope exchange reactions from the small changes in molecular free energy caused by isotopic substitution, laying the theoretical foundation for the development and application of isotope geochemistry. Isotope fractionation is driven by the slight differences in the physical and chemical properties of isotopically substituted molecules; since these differences scale mainly with the relative mass difference between the molecules, so does the magnitude of the fractionation[1]. In general, the thermodynamic and kinetic fractionation of stable isotopes caused by physical, chemical and biological processes is almost always mass-dependent. For example, oxygen has three stable isotopes, 16O, 17O and 18O, and most terrestrial samples define a line of slope about 0.5 in a δ17O-δ18O diagram, δ17O = 0.516 δ18O, called the terrestrial mass fractionation line (Fig. 1). Similarly, mass-dependent sulfur isotope compositions generally satisfy δ33S = 0.515 δ34S and δ36S = 1.89 δ34S.
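For concreteness, deviations from these mass-fractionation lines - the capital-delta notation defined in the next paragraph - can be computed directly from the slopes quoted above; a minimal sketch:

```python
def Delta17O(d17O, d18O):
    """Deviation of d17O from the terrestrial mass fractionation line."""
    return d17O - 0.516 * d18O

def Delta33S(d33S, d34S):
    return d33S - 0.515 * d34S

def Delta36S(d36S, d34S):
    return d36S - 1.89 * d34S

# A mass-dependently fractionated sample gives Delta ~ 0; an Allende-type
# composition with d17O ~ d18O gives a clearly non-zero Delta17O.
print(Delta17O(5.0, 9.7))   # ~0: on the terrestrial fractionation line
print(Delta17O(5.0, 5.0))   # ~2.4: a mass-independent signature
```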
Figure 1: The terrestrial oxygen isotope mass fractionation line and the mass-independent fractionation line of meteorites[9]. In 1973, however, Clayton et al.[2] of the University of Chicago first discovered that the calcium-aluminum-rich inclusions in the Allende chondrite have isotopic compositions deviating from the mass fractionation line: they define a line of slope about 1 in the δ17O-δ18O diagram, with the characteristic δ17O = δ18O (Fig. 1). The discovery of this mass-independent isotope fractionation aroused great interest among scientists. The magnitude of mass-independent fractionation is generally expressed by the degree of deviation from the mass fractionation line: for oxygen isotopes, Δ17O = δ17O − 0.516 δ18O; for sulfur isotopes, Δ33S = δ33S − 0.515 δ34S and Δ36S = δ36S − 1.89 δ34S. If Δ ≠ 0, a mass-independent isotope effect is present. Subsequently, Thiemens and Heidenreich[3] reported that mass-independent isotope fractionation also occurs during the chemical reaction forming ozone from molecular oxygen. Over the past decade or so, mass-independent fractionation has been found in a large number of terrestrial samples: besides ozone, nitrate and sulfate in atmospheric particulates show clear mass-independent oxygen and sulfur isotope fractionation[4]; sulfate samples from some arid regions of the earth show mass-independent oxygen isotope fractionation[5]; and some early geological samples (older than 2.3 billion years) tend to show mass-independent sulfur isotope fractionation[6]. These discoveries have greatly expanded understanding of mass-independent isotope effects, but a unified understanding of their mechanism is still lacking. The mass-independent fractionation in the Allende meteorite was first linked to early nucleosynthesis[2]; later it was found that chemical reactions can also produce mass-independent effects[3]. The effect is generally believed to be associated mainly with photochemical reactions, although some recent studies show that it can also occur in certain thermochemical reactions[7]. Understanding the mechanism of the mass-independent isotope effect is therefore of great significance for applying it to problems in planetary and earth science. For example, the mass-independent sulfur isotope fractionation found in early-Earth geological samples (older than 2.3 billion years) is taken to reflect an atmosphere then lacking oxygen, with no ozone layer yet formed, so that ultraviolet light irradiated the earth's surface directly - a result of great importance for reconstructing the evolution of the atmosphere; but so long as a thermochemical origin cannot be ruled out, this conclusion remains open to question. Since the discovery of mass-independent fractionation, a great deal of experimental and theoretical work has addressed its origin. The δ17O = δ18O mass-independent effect observed during ozone formation was first attributed to the isotopic self-shielding of 16O[3].
However, later theoretical studies showed that after the photodissociation of O2, isotope exchange among oxygen atoms is much faster than ozone formation, so any mass-independent composition imparted to the original O2 by photochemical self-shielding would be erased; the self-shielding effect therefore cannot adequately explain the mass-independent isotope effect. Various mechanisms were subsequently proposed by different scholars. Some suggested that the mass-independent fractionation in ozone formation is controlled by differences in molecular symmetry[8]; the models developed include symmetry-induced kinetic isotope effects and symmetry and parity constraints associated with non-adiabatic collisions and transitions between different electronic states. A large body of later experiments, however, showed that these models cannot explain the observed isotope effects[9]. In parallel, much theoretical and experimental work has examined the process of mass-independent fractionation itself: Hathorn and Marcus[10], for example, developed an intramolecular theory of mass-independent isotope effects, and independent theories grounded in quantum mechanics have been used to simulate the fractionation process[9]. These theories, however, mainly address gas-phase chemical reactions; theoretical understanding of the mass-independent fractionation produced by solid-phase or gas-solid reactions is still lacking, and a more satisfactory theoretical model of the mechanism and process remains to be established. Mass-independent isotope fractionation promises broad applications across planetary and earth science, including the origin of the solar system, the evolution of the Earth's early atmosphere, the formation and transport of aerosols, and the formation of the atmospheric ozone layer. Despite much progress on its mechanism and processes, the problem is far from solved; its study involves the intersection of quantum mechanics, physical chemistry, photochemistry and other disciplines, and is extremely challenging. With deeper theoretical and experimental research, the mechanism of the mass-independent isotope effect will become clearer, promoting its wide application in planetary and earth science.", "With the continuing intensification of human activity, large amounts of persistent organic pollutants (POPs) are discharged into the environment, posing an important potential threat to human health and the living environment. Most POPs harm organisms - they are carcinogenic, teratogenic and mutagenic - degrade only slowly in the environment, bioaccumulate and biomagnify, and can migrate far from their points of discharge through air, water and migratory organisms, eventually accumulating in remote ecosystems and harming the organisms there.
To reduce the environmental impact of POPs, the 19th Governing Council of the United Nations Environment Programme passed Decision 19/13C in February 1997 and established an intergovernmental negotiating committee on POPs; in 2001 the Stockholm Convention on Persistent Organic Pollutants was signed by 90 countries. The first batch of POPs brought under global control by the Convention comprises 12 organic compounds, known as the \"dirty dozen\". They fall into three categories: ① organochlorine pesticides, including aldrin, dieldrin, endrin, DDT, heptachlor, chlordane, mirex and toxaphene; ② industrial chemicals, namely hexachlorobenzene and polychlorinated biphenyls; ③ unintentionally released by-products, namely polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans. As new chemicals keep appearing, the international community has extended the scope of POPs accordingly: in May 2009, nine further organic pollutants posing potential hazards to human health and the natural environment (including two isomers of HCH, tetra- to hepta-brominated polybrominated diphenyl ethers, chlordecone, hexabromobiphenyl, lindane and perfluorinated compounds) were added to the control scope of the Stockholm Convention. Although the production and use of most POPs have been stopped or restricted, POPs remain ubiquitous in the global environment: DDT and other POPs have been detected in the waters[1,2], organisms[3-5] and atmosphere[6,7] of the Arctic and Antarctic, and POPs are also widespread in some remote alpine environments[8,9]. Long-range migration of POPs is evidently ubiquitous. E. D. Goldberg first proposed the \"global distillation\" hypothesis to explain how semi-volatile POPs such as DDT and PCBs pass from terrestrial systems into the ocean[10]. On this view the earth is like a great flask, the sun its heater, and environmental media such as soil and vegetation the solvents in the flask: POPs volatilize in warm regions, migrate toward high latitudes, and condense and accumulate at the poles. Subsequently Mackay, Wania and co-workers[11-14] used the fugacity concept to simulate the global transport of POPs from the warm tropics to the polar regions and quantitatively calculated their transport rates through different environmental processes, enriching and refining the hypothesis, which they defined as \"global fractionation and cold condensation\"[16] (Fig. 1). The model results suggest that the global distillation of POPs is controlled mainly by temperature - volatilization increases and deposition decreases as temperature rises - so that POPs move poleward through repeated cycles of volatilization, settling and re-volatilization. At the same time, differences in physico-chemical properties among POPs, such as volatility and atmospheric half-life, produce differences in their routes and rates of transport to high latitudes: highly volatile POPs undertake long-range transport more readily, while POPs easily degraded in the atmosphere are transported in smaller total amounts. The composition of a POPs mixture therefore changes during long-range migration - the so-called fractionation effect.
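A schematic two-compound sketch of this fractionation effect follows; all parameters are hypothetical and are chosen only to make the mechanism visible, not taken from the cited models:

```python
import numpy as np

# Per poleward "hop" a compound survives atmospheric degradation and
# re-volatilizes with a temperature-dependent probability, so the airborne
# mixture drifts toward the more volatile, longer-lived compound.

hops = np.arange(31)
T = 298.0 - 2.0 * hops               # air temperature falls toward the pole

def survival(T, T_half, k_loss):
    """Per-hop survival: re-volatilization drops as T falls below T_half,
    and a fixed fraction k_loss is degraded each hop (all hypothetical)."""
    revolat = 1.0 / (1.0 + np.exp((T_half - T) / 5.0))
    return revolat * (1.0 - k_loss)

# compound A: more volatile and persistent; compound B: less volatile and
# more easily degraded (loosely, HCH-like versus DDT-like behavior)
airborne_A = np.cumprod(survival(T, T_half=250.0, k_loss=0.01))
airborne_B = np.cumprod(survival(T, T_half=275.0, k_loss=0.05))

ratio = airborne_A / airborne_B      # composition shifts toward A poleward
print(ratio[0], ratio[-1])           # the fractionation effect in miniature
```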
Because of variations in environmental factors (such as seasonal changes in rainfall and temperature), POPs do not migrate to high latitudes in a single step: they tend to volatilize and migrate in the warmer summers and to deposit in the colder winters, displaying a hopping migration - the so-called \"grasshopper effect\". By analogy with global distillation, alpine areas are also regarded as natural condensers of POPs: air temperature falls with altitude along a mountain profile, leading to enrichment of POPs in the alpine environment (Fig. 2). Unlike global distillation, this \"mountain cold-trapping\" process is controlled not only by temperature but also by precipitation, which changes with altitude[16]. It is worth noting that not all POPs show significant global distillation and fractionation effects; perfluorinated compounds, for example, being weakly volatile and readily soluble in water, should in theory show an insignificant distillation effect and, especially, an insignificant fractionation effect. Figure 1: Global distillation and cold condensation of POPs (after Wania and Mackay[15]). Although global distillation successfully explains the presence of POPs in remote areas, verifying the hypothesis by field observation is very difficult, mainly because the study region is too vast for comprehensive field sampling, and POPs are still being emitted in many places around the world, making it very hard to identify the true sources of the POPs found in remote areas. The scattered evidence obtained so far lies at three levels. ① Simple detection of POPs in the environmental media of polar regions[1-6], or study of media accumulated there over many years to explore temporal changes in the deposited POPs[18] (POPs generally show an emission peak between first use and final ban). Such data reflect long-range migration indirectly but do not touch the mechanisms predicted by global distillation. ② Simple cross-scale sampling, studying POPs in environmental media at different latitudes in search of patterns that would verify the global distillation of POPs. For example, Simonich and Hites, studying POPs in tree bark from 32 countries, found that the lighter POPs (such as hexachlorobenzene and HCH) show latitudinal distribution patterns, while the distribution of DDT shows no obvious pattern[19]. Compared with the first level, such work tracks the possible traces of POPs across latitudes from a global perspective, but it still fails to address the environmental behavior of POPs during the \"distillation\" itself. Studies of this kind remain few; the main difficulty lies in choosing a suitable environmental medium, since different media differ in their capacity to take up POPs. ③ Indirect verification through \"mountain cold-trapping\" research: because the theory of mountain cold-trapping parallels that of global distillation, tracing the environmental behavior of POPs in mountain regions indirectly verifies the global distillation effect.
Such evidence, however, is partial and indirect, and cannot illuminate the global distillation of POPs from a mechanistic perspective. How to obtain full field verification of the global distillation effect - especially direct, kinetic verification - has become a major scientific problem in environmental science today. Figure 2: The \"mountain cold-trapping\" effect for POPs (after Wania et al.[17]). To truly verify global distillation, it is necessary on the one hand to strengthen global cooperation and study the environmental behavior of POPs comprehensively in different regions, which is the foundation of any verification; on the other hand, new techniques must be developed for tracking the dynamic behavior of POPs at large scales. Stable isotope tracing (stable isotopes being those with no detectable radioactivity, partly the end products of radioactive decay and partly primordial) may hold an absolute advantage in this direction. The technique has been widely favored since its inception; in particular, the development of compound-specific isotope analysis has made it an important means of pollutant source apportionment. In general, POPs emitted from different regions carry their own characteristic isotopic compositions, i.e., specific 13C/12C values[20], and after a series of environmental processes the isotope fractionation of the lighter POP components is more pronounced than that of the heavier ones[21]. The contribution of long-range-transported POPs can therefore be calculated from the isotopic abundances of POPs emitted in different regions of the world and their degrees of fractionation in the environmental media of different regions. However, the technique requires complex sample pre-treatment to eliminate interference from other carbon sources, and the concentrations in ordinary samples rarely reach instrumental detection limits, so it has not yet shown its full advantage in the study of the global distillation of POPs.", "As Paracelsus (1493-1541, the Swiss alchemist and physician) said: all substances are poisons, and there is none that is not; but in the right dose a poison can become a remedy. In fact, dose aside, whether a poison is lethal depends on whether organisms can take it up effectively - the core of what makes a poison poisonous - and this involves the concept of bioavailability, a concept environmental scientists commonly use in evaluating the health risks of chemical pollutants. Broadly, bioavailability is the degree to which a chemical substance migrates from environmental media into organisms: under the same conditions, the more of a pollutant organisms absorb, the greater its bioavailability and the greater its potential risk to them. Over the hundred-odd years of global industrialization, incalculable amounts of natural and synthetic chemicals have been released into the earth's environment, seriously polluting the atmosphere, soils and waters on which humanity depends for survival. Since soils and sediments (that is, the deposits of lakes, rivers, oceans, etc.)
are the main repositories of chemical pollutants, and their microstructure is very complex, the measurement of bioavailability is usually subject to large uncertainties, and these attract particular concern. In most cases, therefore, bioavailability refers specifically to the degree to which humans or ecosystems are exposed to pollutants in soils or sediments[1]. The indigenous peoples of South America had applied the concept of bioavailability unconsciously long before Columbus arrived: curare, for example, is harmless when eaten but highly toxic when injected. Modern environmental toxicology defines bioavailability as the rate and extent to which a pollutant enters an organism's systemic circulation. There is clearly a difference between a poison entering an organism and entering its circulatory system, and whether a poison is fatal depends on its route of entry - intravenous injection and oral ingestion can give completely different results. In this respect the modern toxicological concept of bioavailability is accurate; however, the results of toxicological experiments depend on many physiological and biochemical factors, so their uncertainty is relatively large. In view of this, environmental scientists have proposed various other definitions of bioavailability[2-4]. All aim to describe the rate at which, or relative extent to which, organisms take up a pollutant from soil or sediment relative to the original total amount, but each definition has a slightly different emphasis. This multiplicity of definitions has seriously affected understanding of the biological effects of pollutants[5] and has hindered the formulation of sound plans for remediating contaminated soils and sediments; it may reflect the genuine technical difficulty of defining bioavailability. With this in mind, the U.S. National Research Council, after reviewing the various definitions in detail, declined to give a further explicit definition and instead introduced the notion of \"bioavailability processes\" to describe how organisms absorb a compound from soil or sediment[6]. The bioavailability processes comprise (A) interconversion between the bound and free states of the pollutant, transport of the (B) free or (C) bound pollutant to the vicinity of a biological membrane, (D) passage of the pollutant across the membrane, and (E) entry of the pollutant into the biological system (Figure 1). Figure 1: Bioavailability processes[6]. In soil and sediment, the pathways determining biological exposure to a pollutant include: (A) interconversion between its bound and free states; transport of the (B) free or (C) bound pollutant to the vicinity of a biological membrane; (D) passage across the membrane; and (E) entry into the biological system. Another concept closely related to bioavailability is bioaccessibility. Most workers have held that there is no substantial difference between the two, and indeed the definitions of bioavailability above more or less subsume the meaning of bioaccessibility. Semple et al.[7], however, argued that the two concepts address different steps in an organism's uptake of a pollutant. Departing from the view of the U.S. National Research Council, Semple et al.
Unlike the National Research Council, Semple et al. [7] associate steps A–C in Figure 1 with bioaccessibility and step D with bioavailability. On this understanding, bioaccessible pollutants are potentially bioavailable, but because they are sequestered by organic matter in the soil or sediment they are temporarily out of reach of organisms and thus not yet bioavailable. This reinterpretation of bioavailability and bioaccessibility by Semple et al. [7] removes much of the uncertainty that the complex microstructure of soils and sediments introduces into the meaning of bioavailability; it is arguably the most reasonable account of bioavailability to date, and it suggests new approaches to quantifying it. Methods for measuring bioavailability fall into four categories according to what they characterize: direct biological indicators, indirect biological indicators, direct chemical indicators, and indirect chemical indicators [8]. Biological uptake expressed by direct biological indicators includes the influence of all biotic and abiotic factors and is the most accurate expression of bioavailability. Typical direct biological applications are measurements of bioaccumulation and of critical body residue. The difference is that bioaccumulation is the concentration of a pollutant that accumulates in an organism without yet causing toxicity, expressed either as accumulation at sites where no toxicity is produced or as accumulation at toxic sites below the lethal threshold, whereas the critical body residue is the lethal threshold concentration accumulated at the site of toxic action; together the two reflect the continuum of pollutant accumulation in the organism. Direct biological methods must, however, also account for pollutant losses to biotransformation and excretion in vivo in order to recover the true bioavailable amount. Indirect biological indicators reflect the response of organisms to pollutant exposure; bioavailability obtained this way is obviously tied to the species, and even to individuals within a species, so its uncertainty is relatively large. Indirect chemical indicators are chemical concentrations of pollutants in soil or sediment obtained by various chemical methods. For example, different solvents can be used to extract pollutants from soil or sediment, and the results compared with exposure in organisms to determine which extraction best predicts bioavailability [9,10]. Another method uses an organism's digestive fluid to extract pollutants as an indicator of bioavailability [11]. In recent years, chemical biomimetic techniques, being fast, simple, and cheap, have become an effective means of characterizing bioavailability [12]. Note that a chemical parameter can serve as an indirect chemical indicator of bioavailability only if it correlates with a biological response. A direct chemical indicator, by contrast, is a purely theoretical parameter that does not exist in practice, since only the organism itself can determine how much of a pollutant is bioavailable. The usual practice is therefore to fit selected indirect chemical indicators against a direct biological indicator until a reliable correlation is established; if a biological effect can then be predicted from the measurement of a chemical parameter, that parameter is considered a direct chemical indicator [8], as sketched below.
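A minimal sketch of that fitting procedure follows, using synthetic data; the indicator names and numbers are illustrative assumptions, not values from the cited work.

```python
# Minimal sketch (synthetic data): correlating an indirect chemical
# indicator (e.g., mild-solvent extractable concentration) with a direct
# biological indicator (e.g., bioaccumulated concentration) to test
# whether the chemical measurement predicts bioavailability.
import numpy as np

extractable = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2])     # mg/kg in soil
bioaccumulated = np.array([0.5, 1.1, 1.6, 2.4, 2.9, 4.1])  # mg/kg in organism

# Least-squares fit: bioaccumulated ~ a * extractable + b
a, b = np.polyfit(extractable, bioaccumulated, 1)
r = np.corrcoef(extractable, bioaccumulated)[0, 1]
print(f"slope={a:.2f}, intercept={b:.2f}, r={r:.3f}")
# A strong correlation (r close to 1) would support using this
# extraction as a practical proxy for bioavailability.
```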
It should be pointed out that, among the above methods, what indirect chemical methods largely capture is closer to bioaccessibility, and no method yet characterizes the bioavailability of pollutants in soil and sediment well. Finding better characterization techniques will be a goal of environmental scientists for some time to come; that is to say, bioavailability is still an environmental parameter that cannot be accurately quantified. As Semple et al. [7] lamented: can the fraction of a substance x used by an organism y be measured at all? If so, how?", "Natural gas hydrate, commonly known as combustible ice, is an ice-like clathrate compound formed from water and gas molecules such as methane under low-temperature, high-pressure conditions. It is widely found in seabed sediments along continental margins and in polar permafrost. Preliminary estimates put the global carbon stock of natural gas hydrate at 10 trillion tonnes, twice the combined carbon in all proven fossil fuels (coal, oil, and natural gas) and roughly 3000 times the methane carbon in the earth's atmosphere. The scientific community generally regards natural gas hydrate as a clean energy source of great future potential. However, gas hydrate may also be an invisible killer behind earth catastrophes, for it is extremely sensitive to changes in the depositional environment. Once its temperature–pressure balance is destroyed, large-scale decomposition can release methane, producing a strong greenhouse effect and in turn regional or even global climatic, environmental, and ecological catastrophes (Figure 1). Figure 1: Schematic diagram of the large-scale decomposition of natural gas hydrates and release of methane during geological history. Over the earth's long evolution there have been many global geological catastrophes, such as the termination of the late Neoproterozoic "Snowball Earth", the mass extinction at the Permian–Triassic boundary, and the extreme warming at the Paleocene–Eocene boundary. What ultimately directed these devastating disasters? Hypotheses such as volcanic eruptions and bolide impacts have become classics embraced by many science enthusiasts, yet they remain controversial in academic circles. As a potential driver of abrupt climatic and environmental change, natural gas hydrate is considered a plausible trigger of many global geological disasters and has received growing attention in recent years. At the edge of the Spitsbergen shelf in the Arctic, destabilization of gas hydrates has led to the explosive escape of large amounts of methane into the water column, possibly raising regional atmospheric methane levels [1]; this lends present-day support to the hypothesis that gas hydrates decomposed and released methane during periods of major geological change.
Gas hydrate is like an invisible ghost: when it destabilizes and rapidly vents its methane, the hydrate itself is seldom preserved directly in geological bodies. How, then, can we find the clues gas hydrates may have left in the strata of major geological events? The geochemical behavior of stable carbon isotopes (δ13C) provides an effective tracer for studying the carbon cycle and carbon sources in the lithosphere, hydrosphere, atmosphere, and biosphere. Geologists have found relatively brief but significant negative δ13C excursions near the boundary strata of several geological turning points. Such global, transient negative carbon-isotope anomalies indicate that a huge amount of light carbon (12C) was injected rapidly into the ocean–atmosphere system. Where could so much light carbon come from? Dickens et al. [2] used a carbon-cycle mass balance to reveal the answer: because hydrate methane is enriched in light carbon (δ13C ≈ −60‰), decomposition and release of about one tenth of the gas hydrate reservoir within a short period (less than 10,000 years) would ultimately shift the carbon isotopes of the global marine and terrestrial carbon pools (carbonate rocks, organic matter, etc.) by −2‰ to −3‰. By contrast, oxidation of organic carbon pools (δ13C ≈ −25‰) or CO2 from volcanic eruptions (δ13C ≈ −7‰) can hardly produce such a large negative shift in so short a time; a simple mass balance, sketched below, shows why.
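The following is a minimal sketch of the isotope mass balance behind arguments of this kind. The reservoir sizes are illustrative assumptions of the right order of magnitude, not the values used by Dickens et al. [2]; only the δ13C end-members come from the text above.

```python
# Minimal sketch of the carbon-isotope mass balance for a light-carbon
# injection into the exchangeable ocean-atmosphere-biosphere pool.

M_bg = 40_000.0   # Gt C, exchangeable carbon pool (assumed)
d_bg = 0.0        # per mil, its mean delta-13C (assumed)

sources = {
    "hydrate methane": -60.0,   # per mil
    "organic carbon":  -25.0,
    "volcanic CO2":     -7.0,
}

M_add = 2_000.0   # Gt C injected (illustrative, ~1/10 of a hydrate reservoir)

for name, d_src in sources.items():
    d_mix = (M_bg * d_bg + M_add * d_src) / (M_bg + M_add)
    print(f"{name:16s}: shift = {d_mix - d_bg:+.2f} per mil")

# hydrate methane: about -2.9 per mil, matching the observed excursions;
# the heavier sources would need far more carbon to do the same.
```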
In addition, the sedimentology and geochemistry of modern seafloor cold-seep carbonates provide a good reference for tracing this invisible killer of the earth's climate, that is, the indirect traces gas hydrates leave in the strata of major geological events. Cold-seep carbonates generally have extremely negative carbon-isotope values (δ13C < −25‰) and often develop distinctive sedimentary structures [3]. At the same time, the anaerobic methane-oxidizing microbial communities in cold-seep carbonates often leave behind special organic molecules, called biomarkers, which can indicate methane-seepage activity. Using these relatively reliable detection tools, geologists have found important evidence supporting the release of hydrate methane in several major geological event layers, clearing away a corner of the fog and advancing the hydrate-methane-release hypothesis. In the late Neoproterozoic (about 635 Ma), cap carbonates 1–10 m thick were deposited directly on the thick tillites of the Marinoan glaciation; they are distributed worldwide and mark the end of "Snowball Earth". Who had the power to make the snowball vanish in an instant? Several special geological phenomena have caught geologists' attention: tent-like structures, flat-topped geodes, sheet cracks, algal laminae, barite fans, framboidal pyrite, and isopachous or botryoidal cements (possibly originally aragonite), all sedimentological fabrics similar to modern cold-seep carbonates [4]. Moreover, the carbon isotopes of the cap carbonates show a global negative anomaly (with magnitude up to −5‰) whose duration was an extremely short 100,000 years or so. The latest research shows an extremely negative carbon-isotope anomaly (δ13C < −40‰) in cap carbonates of the Three Gorges area of China, and carbon this light can only derive from methane [4,5]. These lines of evidence indicate that methane released by decomposing gas hydrate was most likely the terminator of "Snowball Earth". Of course, this hypothesis has also been strongly questioned: for example, the extremely negative carbon-isotope values of cap carbonates occur only in certain regions and may have no global significance; and the organic-matter content of the tillites and overlying carbonates is very low, arguably too low to supply enough methane for hydrates. At the Paleocene–Eocene boundary (about 55 Ma), the earth experienced a global warming event, and large numbers of benthic and planktonic foraminifera went extinct. Who made the trouble and plunged the earth into such dire straits? In 1995, Dickens and his collaborators published a paper in the well-known geoscience journal Paleoceanography arguing for the first time that decomposition and release of natural gas hydrate was the primary trigger of this extreme warming event [2]. The gas hydrate hypothesis subsequently attracted a large number of geologists and became a popular field of contemporary geoscience research, and relevant evidence has been unearthed one after another. Negative carbon-isotope anomalies in foraminifera, carbonates, and organic matter were detected not only in cores from Antarctica (ODP 690) but also in cores from the distant Caribbean Sea (ODP 1001) and Atlantic Ocean (ODP 1051) [6]. Moreover, coincident with the negative carbon-isotope shift, abundant biogenic barite appeared, very similar to the barite in modern cold-seep carbonates [6]. In addition, seismic data show extensive slumping at the continental-shelf edge at that time, which may have destabilized seafloor gas hydrates and released methane by decomposition [7]. Based on the well-constrained negative carbon-isotope anomaly and its short duration (about 100,000 years), most scholars agree that hydrate methane was the culprit of this extreme warming event. To further confirm the reliability of the hydrate hypothesis, we need to look for more sedimentary and geochemical evidence of the kind found in cold-seep carbonates, especially extremely negative carbon-isotope values and biomarker compounds indicating the activity of anaerobic methane-oxidizing communities. In the late Quaternary, polar ice cores recorded in detail the changes in atmospheric methane over the past tens of thousands of years, and sharp increases in methane were basically synchronous with the end of each ice age. What made the earth pass out of the cold winter again and again? Most scientists currently attribute the alternation of glacial and interglacial periods to cyclic changes in the earth's orbit. While the role of orbital forcing cannot be denied, we should not ignore the positive feedback of methane on climate change. So who supplied the large amounts of methane to the atmosphere?
Kennett et al. studied the carbon-isotope fluctuations recorded by benthic and planktonic foraminifera over the past 60,000 years in the Santa Barbara Basin, USA, found four large negative anomalies (up to −4‰) (Fig. 2a), and proposed that they record at least four episodes of hydrate methane release [8]. Other late Quaternary successions, such as those of the Gulf of Papua, the northwest Pacific, and the South China Sea, also show negative carbon-isotope anomalies possibly caused by hydrate methane release. It is particularly meaningful that Hinrichs et al., analyzing sediment samples likewise collected from the Santa Barbara Basin from the perspective of molecular fossils, found that the abundance of the biomarker diplopterol peaks at the same horizons (Fig. 2b) [9], strongly suggesting that the fluctuations in atmospheric methane were most likely driven by gas hydrates. Beyond the major events above, are other catastrophes in earth history also related to gas hydrates? Quite possibly. From the negative carbon-isotope anomalies near the relevant boundary layers, some scientists have suggested that the mass extinction at the Permian–Triassic boundary (about 251 Ma) and the global oceanic anoxic events of the Jurassic Toarcian (about 183 Ma) and Cretaceous Aptian (about 117 Ma) may likewise have been directed by the ghost of gas hydrate. However, stronger evidence for the hydrate hypothesis of these catastrophic events is still lacking, and extensive geological surveys are needed to unravel these mysteries. Figure 2: Carbon-isotope evolution of late Quaternary foraminifera in the Santa Barbara Basin, USA, and abundance fluctuations of the biomarker diplopterol [9]. Over the long geological history, gas hydrate has played an important role in the global carbon cycle and climate change, and it has very likely caused many global catastrophic events. But many important questions remain unresolved. What triggers the decomposition and release of natural gas hydrate, and what shuts it off? How much hydrate must decompose to cause a large negative global carbon-isotope shift? By what process does the huge amount of released methane pass through seawater into the atmosphere? How does methane released from gas hydrates cause marine and terrestrial extinctions? The earth we live on is an extremely complex coupled system in which changes in single factors can trigger cascades of global change, much like the "butterfly effect". There is therefore still a long way to go to confirm the mechanism by which natural gas hydrates trigger abrupt changes in the earth's climate, and the joint efforts of geologists are needed to solve this mystery.", "It is generally believed that life on Earth began after the formation of the oceans 3.8 billion years ago [1]; microbial life then evolved photosynthesis, which splits water and releases oxygen into the ocean as a waste gas [2]. This oxygen reacted with iron dissolved in seawater, depositing iron oxides. Formation of these massive deposits, the banded iron formations, began 3.5 billion years ago (Fig. 1) and declined sharply 1.8 billion years ago as atmospheric oxygen rose and the oceans became more oxidized.
Banded iron formations are distributed worldwide: the oldest occur in South Africa and Zimbabwe, the largest in Australia, and large ones also occur in North China. Their origin is among the most controversial questions in sedimentary geology and theories of earth evolution, but a growing number of geologists and biologists now believe that these rocks record the interaction between the earth's fledgling microbial biosphere and the geospheres in which it lived. Banded iron formations supply 95% of the iron ore for today's global steel industry. Many mysteries about them remain, for example: ① When did their deposition begin? ② What geological process supplied the huge source of iron, and by what mechanism was the iron transported into the ocean? ③ By what mechanism was dissolved iron deposited from the ocean? And perhaps most important: ④ How did their deposition change ocean chemistry, and what happened to the microbial world that already filled the oceans during this long process? These are questions geologists have pondered since banded iron formations were discovered [3]. Banded iron formations are characterized by alternating iron-rich and silica-rich layers of varying thickness (Figure 2) [4]. This strongly contrasting composition can be seen in any outcrop, with thin bands of nearly millimeter scale and thick bands exceeding a meter [5]. These iron deposits are known as Precambrian iron formations; their iron content is generally above 15%, mostly 25%–35%, with hematite or magnetite interbedded with chert or with iron silicates and iron carbonates. Even at the microscopic scale, the boundaries between the Fe-rich and Si-rich bands remain sharp. There was almost no oxygen in the ocean or atmosphere 3.5 billion years ago, and since Fe2+ is far more soluble in water than Fe3+, a huge amount of Fe2+ accumulated in the ocean. Most researchers agree that anaerobic photosynthetic bacteria played a large role in iron oxidation in the Archaean ocean, though controversy remains [6,7]. Setting aside reaction efficiency, there are three possible mechanisms for oxidizing Fe2+ in the ocean: ① oxidation by atmospheric oxygen; ② oxidation by inorganic photochemical processes driven by ultraviolet radiation; ③ oxidation by O2 produced as a by-product of photosynthesis. The first possibility can be ruled out: even a very low atmospheric oxygen concentration would suppress the supply and accumulation of Fe2+ over geological time scales. The modern ocean, for example, homogenizes globally on a roughly millennial scale; any free oxygen in the atmosphere would be consumed quickly by Fe2+. The abundance of iron formations deposited in the Proterozoic thus indicates that free atmospheric oxygen was then very scarce. There is now ample evidence that microorganisms in early Proterozoic stromatolites could fix CO2 by a process that requires light energy but not the two-step, oxygen-evolving mechanism of photosynthesis.
Although there is still no conclusive evidence for when photosynthesis began, the widespread occurrence of stromatolites in ancient strata around the world, including Archaean strata, is recognized as evidence for the existence of blue-green bacteria [8]. These bacteria performed photosynthesis under anaerobic conditions, producing oxygen as a side effect. About 2.7 billion years ago, purple bacteria began to flourish in the Late Archaean ocean [9], a development linked to the accumulation of early atmospheric oxygen. Over the following billion years or so, their continued flourishing raised atmospheric oxygen to about a quarter of its present concentration. The long prosperity of the purple bacteria owed on the one hand to the superiority of their photosynthetic energy metabolism, and on the other to the fact that the oxygen they produced was toxic to most anaerobic microorganisms. Eukaryotic cells appeared 1.8 billion years ago, and after another 600–800 million years photosynthesis finally appeared in eukaryotes. Thereafter the emergence and prosperity of cyanobacteria raised atmospheric oxygen further, to its current level [10]. Plants, the descendants of the cyanobacterial lineage, appeared around 500 million years ago, decorating the earth's surface into the landscape we know today. Iron from the earth's crust was dissolved into the ocean and deposited by complex geochemical and biological processes. This process persisted in the early oceans for at least 1.7 billion years and profoundly changed the mineralogy and geochemistry of the earth's surface and the evolution of the hydrosphere and biosphere. Since the continents were still forming and growing, more of the globe was covered by ocean, so iron deposition was largely global in character; the Brockman Iron Formation of the Hamersley province in Australia, for instance, occupies a basin of about 100,000 km². Figure 1: Stromatolites are the oldest fossil records of life, indicating that life existed on earth 3.5 billion years ago. Figure 2: The silica and iron bands of Precambrian iron formations record information about the atmosphere, ocean, and organisms of the time. More than 50 trillion tonnes of iron deposits remain in them today, and at least 500 trillion tonnes of iron were deposited on the global surface in all [11]. By about 1.7–1.8 billion years ago, oxygen had accumulated to the point that, having exhausted the dissolved Fe2+ in the ocean, it began to build up in the atmosphere. The appearance of oxygen in the oceans and atmosphere also caused some of the earliest biological catastrophes on earth: vast numbers of anaerobic organisms that had thrived and evolved in the oceans for nearly 2 billion years went extinct as oxygen disrupted the electron-transfer processes essential to their respiration. On the other hand, as organisms adapted to aerobic respiration and acquired biochemical machinery for using energy more efficiently, evolution accelerated greatly. Multicellular life, algae, and protozoa followed one after another, eventually leading to the Cambrian "explosion of life", which opened the most colorful page in the evolution of life.", "Introduction: Microorganisms are the oldest and simplest organisms on earth.
Studies have found that microorganisms participate in or affect the nucleation, crystallization, and growth of many minerals in nature, ultimately leading to mineral formation; microbiologists refer to these processes collectively as "microbial mineralization". Biominerals are the end products of microbial mineralization: mineral phases formed through the metabolic activity of microorganisms or under their biological control. Biominerals have specific properties that distinguish them from their counterparts of inorganic origin in shape, size, crystallinity, and isotopic and trace-element composition [1]. It is increasingly recognized that the ubiquitous microorganisms of nature, with their intrinsic capacity for biomineralization, are an extremely important factor driving geochemical cycles and shaping the earth's environment. As early as 1887, Winogradsky discovered that the chemosynthetic autotrophic bacterium Beggiatoa trevisan could oxidize H2S to elemental S, leading to the formation of native sulfur minerals. In 1838, Ehrenberg found that the aerobic neutrophilic iron-oxidizing bacterium Gallionella ferruginea was closely associated with the formation of reddish-brown iron oxides in swamps. In 1975, Blakemore first discovered magnetotactic bacteria, which carry magnetite (Fe3O4) within their cells [2]. In 1990, Zierenberg et al. found seafloor hydrothermal microorganisms replaced by arsenic-bearing and other minerals and proposed that silver mineralization may be biogeochemically controlled. In 1997, Taylor et al. found that sulfur-oxidizing bacteria in hydrothermal fluids of the East Pacific Rise could excrete irregular filaments of elemental sulfur, and argued that these filaments were a direct result of the bacterial oxidation of H2S in the fluid. In 2000, Labrenz et al., drawing on molecular-biological and organic-geochemical evidence, confirmed that the precipitation of spherical sphalerite is closely tied to the metabolism of sulfate-reducing bacteria [3] (Fig. 1). In recent decades, research on microbial mineralization has flourished across geology, geochemistry, microbiology, paleontology, bionics, medicine, and materials science, and evidence has accumulated that microorganisms participate in forming more than 70 kinds of minerals, including oxides, hydroxides, carbonates, sulfides, sulfates, phosphates, chlorides, fluorides, and silica. Microbial mineralization is currently divided into two main types according to the role the microorganisms play: microbially induced mineralization and microbially controlled mineralization [1]. Microbially induced mineralization means that the metabolic activities of microorganisms alter the pH, pCO2, Eh, and accumulation of organic matter (polysaccharides, proteins, etc.) in their surroundings, changing the local microenvironment and triggering mineral precipitation (Figure 2). In this process the microorganisms are merely a driving force; they control neither the type of mineral nor where it forms. Figure 1: Metabolic activity of sulfate-reducing bacteria leads to precipitation of spherical sphalerite [3]. Figure 2: Model of microbially induced mineralization [1].
The hallmark of biologically induced minerals is their heterogeneity: their morphology, water content, composition, size, and structure vary greatly with the environment. In nature, the formation of calcium carbonate driven by cyanobacterial photosynthesis is a classic example of microbially induced mineralization [4]. During photosynthesis, cyanobacteria take up and fix dissolved inorganic carbon, raising the pH of the surrounding water and pushing the carbonate equilibria toward calcium carbonate precipitation, so that calcium carbonate minerals nucleate and grow. This microbially induced calcification, tied to the metabolic activity of cyanobacteria, is considered the most common way microorganisms participate in calcium carbonate precipitation. In biologically induced mineralization, the microbial cell wall and the extracellular polymeric substances (EPS) secreted by microorganisms play a major role as mineralization sites. Nucleation is a necessary condition for crystal formation and growth; without nucleation sites, mineral precipitation does not occur spontaneously even in near-saturated or supersaturated solution. The chemical functional groups (R—COOH, R—OH, R—NH2, etc.) widely present on microbial cell walls and EPS provide the necessary nucleation centers for biomineral formation. As ore-forming cations gather on the cell wall and EPS, exposed carboxyl and phosphoryl groups supply the negative charges needed for electrostatic and chemical adsorption of the cations; once complexation occurs, the chemically bound metal ions can in turn serve as nucleation sites for further complexation and mineralization. The tendency of a water to precipitate a given mineral can be expressed by its saturation state, as sketched below.
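Here is a minimal sketch of the saturation-index bookkeeping behind photosynthesis-driven calcification. The concentrations and equilibrium constants are rough, illustrative values for 25 °C, not field data.

```python
# Minimal sketch: saturation index SI = log10(IAP / Ksp) for calcite,
# illustrating how a photosynthesis-driven pH rise favors CaCO3
# precipitation.
import math

Ksp_calcite = 10 ** -8.48          # solubility product of calcite (approx.)
K2 = 10 ** -10.33                  # HCO3- <=> H+ + CO3^2- (approx.)

Ca = 2e-3                          # mol/L Ca2+ (assumed)
HCO3 = 2e-3                        # mol/L HCO3- (assumed)

for pH in (7.5, 8.5):              # before vs. after photosynthetic CO2 uptake
    H = 10 ** -pH
    CO3 = K2 * HCO3 / H            # carbonate from the second dissociation
    SI = math.log10(Ca * CO3 / Ksp_calcite)
    print(f"pH {pH}: SI = {SI:+.2f}")  # SI > 0 means supersaturated

# Raising pH by one unit raises CO3^2- (and hence SI) by one log unit,
# which is why cyanobacterial photosynthesis can trigger calcification.
```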
Microbially controlled mineralization refers to microorganisms using cellular activity to direct the nucleation and growth of biominerals and to control their morphology and the locations where they form. Although different species exert different degrees of control over mineral formation, almost all microbially controlled mineralization takes place in relatively isolated microenvironments. Vesicles usually provide this ideal, relatively isolated mineral-forming microenvironment: they guide nucleation within the cell, control the composition and morphology of the biominerals, and "process" and "assemble" precise, species-specific mineralization products, which in turn often confer specific physiological functions on the organism. Fig. 3: Microbially controlled intracellular mineralization [1]. The classic representatives of microbially controlled mineralization are the magnetotactic bacteria, Gram-negative bacteria found in both terrestrial and marine environments that can move along magnetic field lines and oxygen-concentration gradients [5]. The uniquely shaped, membrane-bounded magnetic mineral particles (magnetosomes) in these bacteria display the exquisite workmanship of microbially controlled mineralization. Magnetosomes are generally 20–120 nm long, composed mainly of magnetite or greigite of high chemical purity, and are arranged in chains within the cells. Their crystal shapes differ from those of magnetite particles of inorganic origin, including cubo-octahedral, hexagonal prismatic, and bullet-shaped forms, and their formation is governed by strict biological and biochemical processes within the microbial cell. Magnetosomes in magnetotactic bacteria have been regarded as magnetic fossils of life and have become important clues in the search for early life on earth and for extraterrestrial life. Figure 4: Chains of variously shaped magnetite magnetosomes in magnetotactic bacterial cells. Several geoscience problems are linked to microbial mineralization: the role of microorganisms in mineral formation is an emerging, cross-cutting research field spanning many disciplines. Current foci include the relationship between microbial mineralization and the formation of metal deposits, the origin of life on earth, the search for extraterrestrial life, the geochemical cycling of elements, and the molecular mechanisms of microbial mineralization. The Proterozoic cherty banded iron formations constitute the world's most abundant iron-ore resources, including the most important large and super-large iron deposits. Their origin has long been debated, the problem centering on how dissolved Fe2+ in the ocean was oxidized to Fe3+ and precipitated. Recent studies suggest that marine microbial activity (by cyanobacteria and chemoautotrophic iron-oxidizing bacteria) may be the main cause of these Proterozoic super-large iron deposits [6]: cyanobacterial photosynthesis raised the oxygen content of the water, precipitating large amounts of iron oxide from the ocean of that time, while chemoautotrophic iron-oxidizing bacteria can directly oxidize Fe2+ to Fe3+ and likewise precipitate abundant iron oxides. Besides the great iron deposits, the formation of some gold and apatite deposits is also closely related to microbial activity. Although the exact origin of these deposits has not been settled, the theory of microbial mineralization offers a new line of reasoning for explaining their formation. Microbial mineralization can not only produce biominerals outside or inside microbial cells; in some cases it can completely replace entire microorganisms, forming microbial fossils that are preserved permanently. Since microorganisms were the first life to appear on earth, the search for microbial fossils in ancient strata can provide important information on the origin of life on earth, the conditions under which it arose, the environments in which early life lived, and its evolution. The oldest microfossils currently preserved in the geological record are 3.5 billion years old. Did life on earth arise earlier, and under what circumstances and conditions? Answering these questions requires a continued search for older microbial fossils to push back the time record of the origin of life. In addition, biominerals in geological environments can serve as a biosignature for re-reading the origin and evolutionary history of extraterrestrial bodies (Mars and Europa) and for seeking evidence of extraterrestrial life.
It is now known that many microorganisms on earth participate directly or indirectly in mineralization, converting dissolved ions into solid minerals and influencing the biogeochemical cycles of many important elements (Fe, Mn, Si, Ca, S, P, C, etc.). For example, mineralization by iron-oxidizing bacteria can significantly affect the cycling of Fe2+ and Fe3+ in the environment, while mineralization by sulfate-reducing bacteria can locally dominate the speciation of S and Fe in aqueous environments, shaping their biogeochemical cycles. It remains unclear, however, how much microbial mineralization contributes to these cycles and how fast the processes occur. Quantifying the contribution of microbial mineralization to the earth's elemental cycles and understanding its biogeochemical effects are therefore questions of great concern. The diversity of the microorganisms involved and of the biominerals they produce makes the study of mineralization mechanisms especially attractive. Research on these mechanisms has progressed from simple microscopic morphology to the molecular level: revealing the role of active biological substances in biomineral synthesis, searching for the functional genes that control biomineral formation, and probing the molecular mechanisms of biomineralization have given the field new vitality. Decades of in-depth work on microbial mineralization have built a bridge between inorganic minerals and organic life, greatly advanced both earth science and life science, and demonstrated the vigorous vitality of this research field.", "Introduction: Natural gas hydrate (also known as "combustible ice") is a clathrate compound with a non-fixed hydration number, in which gas molecules are enclosed under high pressure and low temperature in polyhedral cages of hydrogen-bonded water molecules; it adopts three common structures, I, II, and H (Figure 1). Since the main component of natural gas is methane, it is often called methane hydrate (ideally 8CH4·46H2O) [1]. Figure 1: The three common natural gas hydrate crystal structures [2]. Natural gas hydrates occur mainly in modern deep-water seabed sediments and in high-latitude permafrost. Each cubic meter of gas hydrate can release about 160–180 m³ of natural gas at standard conditions [1], a figure that follows directly from the crystal stoichiometry, as the sketch below shows.
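A minimal sketch of that check follows; the hydrate density is an assumed typical value, not a quoted measurement.

```python
# Minimal sketch: checking the ~160-180 m3 gas yield per m3 of hydrate
# from the ideal formula 8CH4.46H2O.

M_CH4, M_H2O = 16.04, 18.02        # g/mol
rho_hydrate = 0.91e6               # g/m3, assumed density of methane hydrate
V_molar_gas = 0.0224               # m3/mol at standard conditions

M_formula = 8 * M_CH4 + 46 * M_H2O # g per formula unit (8 CH4 molecules)
moles_formula = rho_hydrate / M_formula
moles_CH4 = 8 * moles_formula      # mol CH4 per m3 of hydrate
V_gas = moles_CH4 * V_molar_gas
print(f"{V_gas:.0f} m3 of CH4 per m3 of hydrate")  # about 170 m3
```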
It is estimated that the carbon in global natural gas hydrate resources is twice the total carbon in all proven traditional fossil fuels, giving hydrate great prospects for energy development. Countries around the world, especially developed countries and those short of energy, have therefore paid great attention to it: the United States, Japan, Canada, Germany, South Korea, India, and China have all formulated research and development programs for natural gas hydrate, and the United States and Japan even successively proposed the goal of commercial hydrate development by 2015. At the same time, gas hydrate is closely tied to the earth's environment. On the one hand, natural gas hydrate is a clean energy source whose main gas component is methane, so its rational development and use would greatly reduce the environmental pollution and other negative impacts of current conventional energy consumption; on the other hand, studies have shown that sudden decomposition and release of seabed gas hydrates has repeatedly caused geological disasters in earth history, such as global warming events and submarine landslides. The development and utilization of natural gas hydrate thus has not only strategic energy significance but also important environmental significance. Hydrate formation process: natural gas hydrate is a solid formed from water and natural gas dissolved in it, but how does it form? Gas hydrate formation involves two stages, nucleation and growth. Nucleation is a process of emergence from nothing, in which a new, stable hydrate phase is born from the methane–water system; growth is a process from small to large, in which methane and water molecules replicate the cage structure on the surface of the crystal nucleus. The mechanism by which crystal nuclei are born is still unclear, and several competing hypotheses exist. The most famous is the cluster-nucleation hypothesis of Sloan et al. [1], which holds that water molecules form cage-like clusters around gas molecules, and that these clusters aggregate step by step into larger clusters until a critical nucleus is produced. Trout's group disputed this hypothesis, showing by free-energy calculations that multiple cage-like water clusters should thermodynamically disperse rather than aggregate, and proposed instead a local-structure hypothesis: gas molecules happen by chance to adopt a locally ordered arrangement, whereupon the surrounding water molecules adjust their orientations around them and evolve toward the hydrate structure [5]. Recently, Guo Guangjun et al. put forward a cage-adsorption hypothesis: cage-like water clusters can form spontaneously in methane solution, though with very low probability; there is a strong mutual attraction between water cages and methane molecules; by adsorbing gas molecules a cage prolongs its own lifetime and promotes the formation of other cages around it; and the mixed stacking of these cages first forms an amorphous hydrate phase, which finally transforms structurally into hydrate crystals [6]. Each hypothesis has its own merits, and existing experimental and computational evidence cannot yet clearly distinguish among them. Moreover, all of them consider idealized single-gas + water systems; describing hydrate formation in real natural systems remains a distant goal, because the seabed and tundra are sediment–water systems, seawater contains salt, and natural gas has a complex composition including larger gas molecules, all of which are important factors affecting hydrate nucleation and growth.
Development and utilization of hydrate: gas hydrate has huge energy potential, but how can it be developed for human use? The key conditions controlling hydrate stability are temperature and pressure; if these are artificially changed, the hydrate loses its equilibrium and decomposes, releasing natural gas. Scientists have accordingly proposed three main methods. Heat injection: steam, hot fluids, or other heat sources are injected into the hydrate layer to raise its temperature and decompose the hydrate, and the evolved methane is collected through conduits or conventional gas pipelines. Depressurization: the pressure of natural gas that has accumulated beneath the hydrate layer, or at gas pockets created by thermal stimulation or chemical injection, is lowered so that the hydrate destabilizes and converts into gas and water, from which the natural gas is recovered. Chemical injection: brine, ethanol, ethylene glycol, calcium chloride, or other agents that lower the hydrate's stability point are pumped downhole to induce decomposition and recover the gas. To date, the former Soviet Union, Canada, the United States, Japan, and other countries have carried out experimental hydrate development in their permafrost regions. Beginning in 1969, the former Soviet Union tested depressurization at the Messoyakha gas hydrate reservoir in Siberia with good results: by the time production ceased in 1990, after 17 years of intermittent production, about 36% (some 5.17 billion m³) of the natural gas extracted from the Messoyakha field had come from the hydrate layer [1]. The hydrate development pilot (the Mallik project) in the Mackenzie Delta permafrost of northwestern Canada began in 1998 [7,8]. In 2002 the Mallik 5L-38 test first produced 470 m³ of natural gas over 5 days from hydrate-bearing sediments by heat injection; in 2007 a test using simple depressurization produced a total of 830 m³; and a 6-day (139-hour) test in 2008 achieved production rates of 2000–4000 m³/d and cumulative production of about 13,000 m³. From 2001 to 2009 the United States ran a hydrate development test program on the North Slope of Alaska; the results indicate that producing methane from hydrate reservoirs by heat injection costs about 6 times as much as conventional natural gas, equivalent to oil at US$20 per barrel, while the cost of depressurization is comparable to that of conventional gas, though transportation remains a major problem [9]. In fact, the exploitation of marine gas hydrate still faces many scientific and technical difficulties. What methods and technologies can extract gas hydrate more economically? How effective are different methods under different geological conditions, and how efficient are they on longer time scales and larger scales? How can safety and environmental problems during exploitation be controlled? These issues must be resolved by future research.", "The Earth is a water-rich planet.
Volcanic eruptions have brought considerable water from the deep earth to the surface and atmosphere, while plate subduction recycles water back into the deep earth. Within the earth's interior it is not yet clear whether the core contains water, but the mantle and crust certainly do; water can exist as a separate fluid phase (gas or liquid), dissolve in silicate melts, reside in hydrous mineral phases (such as hornblende and mica), or even dissolve in nominally anhydrous minerals (such as olivine and pyroxene). As is well known, the critical point of pure water lies at 374°C and about 0.22 kbar (22 MPa). Above this temperature and pressure, gas-phase and liquid-phase water in the deep earth form a single homogeneous aqueous fluid (capable of dissolving some silicate material), also called an aqueous fluid or the first type of supercritical fluid. The crust and mantle are composed mainly of silicate minerals, so fluid activity in the deep earth can be represented by a silicate + H2O system (Fig. 1). In the subsolidus region of this system (low temperature, zone I), water-rich fluid coexists with silicate minerals. In the supersolidus, or partial-melting, region (high temperature, zone II), when water is undersaturated it dissolves mainly in the melt phase, and hydrous melt coexists with silicate minerals; when water is saturated, water-rich fluid and hydrous melt coexist with silicate minerals. The physical and chemical properties (composition, viscosity, density, electrical conductivity, etc.) of hydrous silicate melt and water-rich fluid are completely different, and the two are easy to distinguish. With increasing pressure and temperature, however, the solubility of water in silicate melt and of silicate in aqueous fluid both increase, and above a critical pressure and temperature the two become completely miscible, with the water:silicate ratio free to vary arbitrarily (Figure 1 shows the temperature–pressure phase diagram of the deep-earth silicate + H2O system and the phase relations among minerals, hydrous melt, aqueous fluid, and supercritical fluid). Above this critical curve the solidus disappears (zone III): water-rich fluid and hydrous silicate melt can no longer form independent phases, and only a single fluid phase exists, the second type of supercritical fluid. The critical temperatures and pressures differ among compositional systems, and besides the silicate + H2O system, the deep earth also hosts supercritical fluids of the silicate + CO2 and silicate + H2O + CO2 systems. The first type of supercritical fluid belongs to the liquid–vapour miscible system; it forms below the water-saturated solidus of the silicate material and is enriched to varying degrees in volatiles and minor silicate components. The second type belongs to the liquid–melt miscible system; it forms above the temperature and pressure of the critical curve where the solidus of the silicate + H2O system disappears (Fig. 1), and is enriched to varying degrees in silicate melt components.
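For orientation, a minimal sketch of the lithostatic pressure estimate P = ρgh shows why essentially all deep-earth depths exceed the 0.22 kbar critical pressure of pure water quoted above; the crustal density is an assumed average.

```python
# Minimal sketch: lithostatic pressure P = rho * g * h versus the
# ~0.22 kbar critical pressure of pure water.

rho = 2800.0      # kg/m3, mean crustal density (assumed)
g = 9.8           # m/s2

for depth_km in (1, 10, 30):
    P_kbar = rho * g * depth_km * 1000 / 1e8   # 1 kbar = 1e8 Pa
    print(f"{depth_km:3d} km: {P_kbar:5.2f} kbar")
# ~0.27 kbar at just 1 km depth: almost everywhere in the deep crust
# and mantle, any free water is above its critical pressure.
```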
These supercritical fluids have physical and chemical properties completely different from those of pure vapor or hydrous melt: they typically have a strong capacity to dissolve low-volatility substances (silicate material, ore-forming elements, etc.), low viscosity, and large diffusion coefficients [3,4]. Such properties bear importantly on many geological and geochemical processes, for example by reducing rock strength, increasing water–rock reaction rates, accelerating material migration and element diffusion, lowering the temperature of partial melting, changing partitioning behavior during melting, and affecting the distribution of trace elements and isotopes among phases. Research on the physical and chemical properties of deep-earth supercritical fluids and their effects on rock rheology, partial melting, phase equilibria, and element transport and differentiation is therefore of great significance for understanding subduction-zone processes, crust–mantle differentiation and mantle compositional heterogeneity, and the migration and enrichment of ore-forming elements. The discovery and study of supercritical fluids date back to the 19th century, when the chemical industry first investigated the supercritical behavior of systems such as H2O and CO2 [3,4]. Supercritical fluid technology is now widely applied in chemical extraction and separation, reaction engineering, materials science, environmental protection, food, medicine, and other fields; the chemical industry studies mainly the miscibility and supercritical behavior between liquid and gas phases. Given the bulk compositions of the mantle and crust and the high-temperature, high-pressure environment of the deep earth, supercritical fluids of silicate + H2O composition (i.e., silicate and water miscible in unlimited proportion) are overwhelmingly dominant there. The upper mantle consists mainly of peridotite, the oceanic crust and lower crust mainly of basalt or mafic metamorphic rocks, and the upper crust mainly of granitic rocks. To study the physical and chemical properties of deep-earth supercritical fluids and their effects on deep processes is therefore essentially to study the supercritical behavior of fluids in the peridotite + H2O, basalt + H2O, and granite + H2O systems: on the one hand, petrological phase-equilibrium experiments determine the pressure–temperature critical points and critical curves of these systems; on the other, experiments and natural samples reveal the effects of supercritical fluids on rock rheology, partial melting, phase equilibria, and element distribution. Despite the importance of deep-earth supercritical fluids for understanding many major geological processes, our knowledge of them is still very limited. How can the first and second types of supercritical fluid be distinguished? How does the composition of a supercritical fluid change with temperature, pressure, and other physical and chemical conditions in different compositional systems (such as peridotite + H2O, basalt + H2O, and granite + H2O)? How do such fluids affect mineral phase equilibria, partial melting of rocks, and element partitioning?
What are the species and dissolution mechanisms of poorly soluble elements (such as the high-field-strength elements Nb, Ta, Ti, Zr, and Hf and the ore-forming elements W, Sn, Mo, Cu, Pb, Zn, and Au) in supercritical fluids? What is the relationship between element migration and ore-forming enrichment in subduction zones and the formation and evolution of supercritical fluids? Resolving these problems is crucial to understanding deep-earth processes, crust–mantle differentiation, and mineralization. On the one hand, the study of deep-earth supercritical fluids and their effects relies on high-temperature, high-pressure experiments. There have been important advances here, such as determination of the phase equilibria of some simple silicate + H2O systems and of the quantitative relations among minerals, melts, and fluids under different temperature and pressure conditions [7,8,10]. The main apparatus currently used includes the hydrothermal diamond-anvil cell, the piston cylinder, and the large multi-anvil press; the difficulty is that some experimental methods cannot clearly establish the relation between melt/fluid composition and temperature and pressure, so improving experimental techniques and methods is the key to this problem. On the other hand, research on supercritical fluids and their effects also focuses on natural high-temperature, high-pressure metamorphic minerals and their fluid inclusions and fluid-melt inclusions, looking for records of supercritical fluid activity [11,12]. Ultrahigh-pressure metamorphic rocks in subduction zones provide a natural laboratory for this work, and in-situ microanalysis of trace elements in minerals is its main method. Studying dehydration and partial melting of mantle and crustal minerals, especially under subduction-zone temperature and pressure conditions, and quantitatively determining the element partition coefficients between the relevant minerals and supercritical fluids, hydrous melts, and water-rich fluids, is an important direction for both experimental design and natural-sample research.", "Water is the basis of human existence and controls the nature and evolution of the entire biosphere. Without water, many of the earth's dynamic processes (plate motion, volcanic eruptions, etc.) might not take place. Not only is there abundant liquid water at the earth's surface; "water" also exists in the deep earth in the form of other H-bearing species (OH, NH4+, etc.). H is chemically active and, when conditions are suitable, readily combines with the O that is widespread in the earth to form H2O. Although the main constituent phases of the deep earth (lower crust, upper mantle, transition zone, and lower mantle) are "nominally anhydrous minerals" whose ideal formulas contain no H, such as olivine, pyroxene, garnet, and their high-pressure polymorphs, both analyses of natural samples and high-temperature, high-pressure experiments show that these minerals contain structural water, mainly as OH defects [1]. Experiments even show that hydrogen can exist stably in the earth's core as iron hydride (FeHx) [2].
Although our current estimates of the water content of the deep earth still carry great uncertainty, its existence is an indisputable fact. The question is: when and from where did the earth's water come? Was it present when the earth formed, or was it added from extraterrestrial sources later in its evolution? If there are multiple sources, what proportion does each account for? The answers not only constrain our understanding of the earth's formation and evolution but also bear on the origin and evolution of life on earth. Hydrogen from different extraterrestrial sources has different isotopic compositions (D/H ratios), so hydrogen isotopes are an important basis for judging the source of the earth's water. The solar system formed from the primitive solar nebula 4.5 billion years ago, and the main body of the earth likewise formed by collision and accretion of dust and planetesimals in that nebula [3]. Did the earth's water exist from the earliest days of the earth's formation, subsequently evolving through physical and chemical processes into its present forms, distribution, and isotopic composition in the earth's various layers, with neither loss of H nor addition of extraterrestrial H since the earth formed? If so, the present D/H value of the whole earth should match that of the initial primitive solar nebula (nuclear fusion changes D/H in the sun, but not in the earth and the other planets). The mean D/H of the whole earth is estimated at 149×10⁻⁶, and that of the ocean at 155×10⁻⁶ (Fig. 1), whereas the initial D/H of the primitive solar nebula estimated by different models is only (20–80)×10⁻⁶ (Fig. 1). It is therefore unreasonable to suppose that the earth's water and the nebula share a single origin with closed-system evolution ever since; what other extraterrestrial matter might then be a source of the earth's water? In fact, the earth has not been isolated since its formation but continually exchanges material with outer space, so comets, meteorites, and interplanetary dust particles (IDPs) entering the earth are all potential sources of its water. In 1985, Kerridge [4] pointed out that the average D/H of comets is (310±40)×10⁻⁶ (Fig. 1), so mixing between cometary water and primitive solar nebula water could explain today's anomalous terrestrial D/H, as the sketch below illustrates. Jessberger et al. [5] and Bockelee-Morvan et al. [6], in 1988 and 1998 respectively, noted that the water content of carbonaceous chondrites can be as high as 10% and that their average D/H lies in the range 130×10⁻⁶–180×10⁻⁶ (Fig. 1), the same range as the earth's value, so they too are naturally interpreted as a possible source of the earth's water.
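The following minimal sketch works out the two-end-member D/H mixing implied by that argument. The end-member values are those quoted in the text; the calculation itself is illustrative.

```python
# Minimal sketch: two-end-member D/H mixing, estimating what fraction of
# the earth's hydrogen would have to be cometary to raise the nebular
# value to the terrestrial one.

dh_earth = 149e-6          # bulk earth D/H
dh_comet = 310e-6          # mean cometary D/H [4]

for dh_nebula in (20e-6, 80e-6):   # range of nebular model estimates
    # dh_earth = f*dh_comet + (1-f)*dh_nebula  =>  solve for f
    f = (dh_earth - dh_nebula) / (dh_comet - dh_nebula)
    print(f"nebula {dh_nebula*1e6:.0f}e-6: cometary fraction = {f:.2f}")

# Roughly 30-45% of terrestrial hydrogen would need to be cometary --
# the same order as the ~50% noted below, far above the <=10% that
# current estimates allow for the comet contribution.
```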
However, the above explanations all run into difficulties. For cometary water mixed with the Earth's own water to match the present terrestrial D/H value, the cometary contribution would have to reach about 50% of the Earth's existing water, whereas the cometary contribution is currently estimated to be at most 10%. If meteorites were the main source of the Earth's water, then given the Earth's bulk water content (about 3.3%) and the meteorites' water content of up to 10%, one obtains the unreasonable result that meteorites would have to account for nearly 33% of the total mass of the Earth. (Figure 1: D/H values of water from different source regions.) Whether one adopts a single-source view (that the Earth's water was present from the start and evolved as a closed system) or a multi-source view (that the Earth's water includes additions from extraterrestrial sources), no interpretation of the source of the Earth's water can yet reconcile all the available data. The difficulties currently encountered in exploring the source of the Earth's water arise, on the one hand, from inconsistencies between the existing data and the models, such as mismatches between different isotope systems, uncertainties in the parameters adopted in the various models, and inconsistencies between chemical and physical models; on the other hand, our means of observation cannot yet comprehensively trace the more distant potential water sources in outer space, and we lack an accurate understanding of the changes of mineral phases in space and of their isotope fractionation mechanisms. To solve the problem of the source of the Earth's water will therefore require a more complete and accurate model of the Earth's formation, which is an important constraint on the Earth's original material composition and the key to constraining the source and evolution of its water. At the same time, more and more reliable observational data must be obtained, by increasing the detection of extraterrestrial water sources, collecting comprehensive, diverse and abundant source materials, and analyzing elements and isotopes with more accurate methods. We believe that with the gradual improvement of theoretical models and the continuous advance of observational means, new breakthroughs will be made on this difficult problem, with a profound influence on our understanding of the evolution of the Earth and of life.", "The Nd isotope method is an isotope geochemical method developed in the 1970s. It has been widely used in solid-Earth research fields such as mantle and crustal evolution, where it has proved a powerful geochemical tracer. The study of Nd isotopes in marine authigenic sediments and seawater began in the 1980s, and after more than 20 years of development it has become a powerful tracer of modern oceans, ancient ocean water masses, and ocean circulation. At the same time, studies of the concentrations of Nd and other rare earth elements in seawater have found a contradiction, or decoupling, between the Nd concentration ([Nd]) and the Nd isotope composition (εNd) of seawater, which is called the Nd paradox.
It has two expressions [1,2]. First, the εNd values of the three oceans differ markedly, that is, Nd is not uniformly mixed in the ocean, indicating that the residence time of Nd in seawater is shorter than the ocean mixing time, i.e., less than about 1500 years; yet the distribution of [Nd] in seawater resembles that of the nutrient element Si, so the residence times of the two should be similar, and the residence time of Si is known to be as long as about 10,000 years. This suggests that part of the Nd in the ocean may come from an unobserved, unknown source (a "missing source"). Second, observational data show that εNd effectively traces the flow, circulation and mixing of mid-deep water bodies; but [Nd] has a distribution similar to that of nutrient elements such as Si, implying that during the flow of mid-deep currents there should be continuous addition of Nd from shallow surface waters at different geographic locations. Such additions should make the Nd isotope composition of mid-deep currents vary with location, blurring the tracer signal of εNd in the water, which is inconsistent with the observations just mentioned. Nd isotopes as tracers of mid-deep ocean currents. The general pattern of modern ocean thermohaline circulation is as follows: in the Norwegian and Greenland seas of the North Atlantic, cold, salty surface water sinks to form North Atlantic Deep Water (NADW), which flows southward to the Antarctic; high-salinity water under the ice of the Antarctic Weddell Sea sinks to the seabed to form Antarctic Bottom Water (AABW); surface water sinking at shallower levels around the Antarctic forms Antarctic Intermediate Water (AAIW); together these waters make up the Antarctic circulation. The mid-deep water of the Antarctic circulation flows into the Indian Ocean and the Pacific Ocean, which is why the ¹⁴C age of Pacific mid-deep water is the oldest (>1000 years). In the Pacific these water bodies eventually upwell and transform into surface water, which then flows along the surface through the South China Sea region back to the North Atlantic, where, together with part of the northward-flowing Antarctic Intermediate Water and Antarctic Bottom Water, it forms North Atlantic seawater, completing the cycle (Figure 1: ocean thermohaline circulation [3]; the red curves represent surface currents, the blue curves mid-deep currents, the purple curves bottom currents, and the yellow dots mark areas where surface water sinks to form deep water). Conservative tracers such as temperature, salinity, and hydrogen and oxygen isotopes are mainly used to trace modern deep currents. A conservative tracer is one whose exchange with the atmosphere stops once the water body leaves the surface for the deep sea; after that, the tracer characteristics of the water body are fixed, and they change only when the water body physically mixes with other water bodies of different characteristics.
However, such tracers are generally not preserved in the sedimentary record, so to reconstruct the state of ancient oceans one has to use non-conservative tracers that are relatively easy to deposit and preserve, such as the δ¹³C and Cd/Ca values of foraminiferal calcium carbonate shells. But δ¹³C is affected by many factors such as temperature and biological productivity, and Cd/Ca is affected by sedimentation dynamics, so the paleo-current conditions reflected by these two tracers are sometimes contradictory. The Nd isotope composition (εNd value), by contrast, is readily preserved by Fe-Mn oxides, fish teeth and foraminifera, is not affected by those factors, and is stable after deposition, so it can record the εNd of the water body at the time of deposition. Studies of the Nd isotope composition of modern seawater and Fe-Mn oxide sediments show that Nd is a semi-conservative tracer: its residence time is only slightly shorter than the global ocean circulation time, so Nd can migrate with currents over long distances without mixing homogeneously across the globe, and thus provides a good trace of deep-water flow (see below and Figure 2: Atlantic salinity section and profiles of εNd versus water depth [8]). In tracing paleo-currents, therefore, εNd is at present a more advantageous and promising tracer than δ¹³C and Cd/Ca, which is why marine εNd has received more and more attention and application. The geographical variation of Nd isotope composition in the modern ocean is large and closely related to ocean circulation. Moreover, numerous vertical εNd profiles from the global ocean show that surface-water εNd varies greatly even within the same basin, whereas mid-deep water εNd values tend to be uniform [Fig. 3(a)]. These observations provide the basis for tracing mid-deep currents with εNd. The εNd value of North Atlantic deep water is −13~−14 [4], and that of Pacific mid-deep water is −2~−4 [5]; this echoes the old continental crust around the North Atlantic (with low εNd) and the relatively young crust around the Pacific with its many Mesozoic-Cenozoic island-arc volcanic rocks (with high εNd), indicating that oceanic Nd comes mainly from the weathering and erosion of the surrounding continents and island-arc rocks. Nd from seafloor hydrothermal fluids, by contrast, is deposited near the hydrothermal vents, and its contribution to seawater Nd can be neglected. The εNd value of the Antarctic circulation is −9~−8 [6], the result of mixing of Atlantic and Pacific waters. The εNd values of Indian Ocean mid-deep water are −7~−9 [7], different from those of the Ganges-Brahmaputra and Indus inputs (−10~−12), indicating that these rivers contribute little Nd to the Indian Ocean, but the same as those of the Antarctic circulation, consistent with the hydrographic fact that Indian Ocean mid-deep water is transported northward from the Antarctic circulation; the Nd of the Indian Ocean therefore comes mainly from the Antarctic circulation.
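A quick sanity check on the statement that the Antarctic circulation value is a mixture: a minimal sketch assuming simple two-endmember, concentration-weighted mixing, with the εNd endmembers taken as midpoints of the ranges quoted above and hypothetical equal [Nd] in both endmembers.

```python
# Concentration-weighted two-endmember eNd mixing:
#   eNd_mix = (f*c1*e1 + (1-f)*c2*e2) / (f*c1 + (1-f)*c2)
# Endmember eNd values are midpoints of the ranges quoted above;
# equal [Nd] in both endmembers is a hypothetical simplification.

def mix_eNd(f: float, e1: float, c1: float, e2: float, c2: float) -> float:
    """eNd of a mixture containing water-mass fraction f of endmember 1."""
    return (f * c1 * e1 + (1 - f) * c2 * e2) / (f * c1 + (1 - f) * c2)

E_ATLANTIC, E_PACIFIC = -13.5, -3.0
for f in (0.4, 0.5, 0.6):  # fraction of Atlantic-derived water
    e = mix_eNd(f, E_ATLANTIC, 1.0, E_PACIFIC, 1.0)
    print(f"Atlantic fraction {f:.0%}: eNd = {e:.1f}")
```

With roughly equal contributions the mixture falls in the −8 to −9 range quoted for the Antarctic circulation, consistent with the mixing interpretation.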
The εNd value of the modern ocean thus not only shows geographical differences but is closely related to ocean circulation. How εNd traces modern currents can be seen clearly by comparing the Atlantic salinity section with vertical εNd profiles (Fig. 2): the vertical variation of εNd closely follows the salinity distribution of the water bodies being traced. In the North Atlantic and around the Antarctic, the temperature- and salinity-controlled pycnocline disappears, vertical mixing is strong, and both salinity and εNd are weakly stratified. Both salinity and εNd show that North Atlantic Deep Water can be traced southward at least to 45°S; in the South Atlantic one can see NADW sandwiched between the northward-flowing Antarctic Intermediate Water and Antarctic Bottom Water, its salinity decreasing and its εNd increasing, reflecting mixing with those two Antarctic water masses. This is a good example of salinity and εNd jointly tracing mid-deep water bodies. Distribution of Nd concentration in the ocean and its contradiction with Nd isotopes. [Nd] is lowest in the Atlantic, highest in the Pacific, and intermediate in the Indian Ocean, a pattern similar to that of the SiO2 concentration ([SiO2]). [SiO2] is depleted at the surface by biological uptake, enriched in deep water by remineralization of sinking biogenic debris, and accumulates as the deep water ages, which is why [SiO2] is elevated in the Pacific. Some researchers [1,7,9] have proposed a vertical cycling pattern for oceanic Nd similar to this [SiO2] distribution mechanism: part of the Nd carried by dust entering the ocean dissolves into surface seawater; in deep-water source regions (such as the northern Atlantic) this Nd sinks with the water to depth, while in other sea areas Nd sinks with particles and rejoins the water column as the particles dissolve or desorb in the deep sea. The inheritance of εNd and the increase of [Nd] would then result from the combination of lateral transport and vertical cycling. However, as noted above, the oceanic Nd isotope composition acquires its character mainly in the North Atlantic and the Pacific, mixing of the two yielding intermediate values around the Antarctic; and even within a single basin the εNd of surface seawater varies greatly. Hence, if downward transport of Nd from the surface were an important oceanic process, it would alter the original εNd character of the deep water and degrade the ability of Nd isotopes to trace deep water bodies, contradicting the good εNd tracing described above; at the same time the vertical variation of εNd should shrink or disappear, yet every ocean basin has profiles with large vertical εNd variation [Fig. 3(a)], and in these same profiles [Nd] increases steadily with depth by up to a factor of two [Fig. 3(b)]. For another example, in the South Atlantic and southwest Indian Ocean [Nd] increases steadily with depth while εNd varies in a zigzag manner [Fig. 3(a)], reflecting water bodies of different origins at different depths (Fig. 2) that [Nd] alone cannot distinguish.
If the increase of [Nd] with depth were due to vertical addition from the surface during the lateral migration of deep water, the added Nd would have to have the same isotopic composition as each water body; but surface-water εNd varies greatly over short lateral distances [Fig. 3(a)], so surface water cannot be the main source of the addition to the deep. Mixing calculations likewise show that this vertical variation of Nd concentration is unlikely to be the result of mixing of different water bodies [2]. (Fig. 3: profiles of Nd isotope composition (a) and Nd concentration (b) with water depth in the modern ocean; data from [4-7], etc.) In short, Nd isotopes trace the mid-deep currents of the global ocean well, but the increase of Nd concentration with depth and water age contradicts the Nd isotopic characteristics. Recent "reversible scavenging" model calculations claim progress in explaining this contradiction [10], but the premises of that model are too simple and need to be tested against measurements. The cause and mechanism of the increase of Nd concentration in the deep ocean therefore still require further study. Although the groundwater Nd input to the ocean and the isotope exchange between seawater and ocean-basin margins revealed in recent years [11] cannot explain the above contradiction, some scholars believe that studying the interaction between seawater and basin-boundary rocks and sediments, especially volcanic ash sediments, may be the direction of future efforts to search for undiscovered potential Nd sources and to explain the oceanic Nd paradox.", "How to judge quickly and correctly, after a major earthquake, whether it will trigger a tsunami is still an unresolved scientific problem. This reflects the fact that we still lack a deep understanding of the characteristics of earthquakes that can trigger tsunamis (tsunami earthquakes) and of the mechanism of tsunami generation, and further research is urgently needed. So what is a tsunami? Why do tsunamis happen? How can tsunami disasters be prevented and mitigated? This starts with the causes of tsunamis. Causes of tsunamis. A tsunami is a train of huge waves. Large-scale, sudden rises and falls of the seabed, caused by submarine volcanic eruptions, submarine or coastal landslides, collapses and slumps, impacts of meteorites or comets, and submarine earthquakes, can all trigger tsunamis [1]. Among these many causes the most important is the submarine earthquake, especially the "dip-slip" earthquake characterized by up-and-down dislocation along the fault plane. A large-scale, sudden rise and fall of the seabed disturbs the seawater over a wide area, from the sea surface to the seabed, and the disturbance spreads in all directions in the form of waves: this is a tsunami. A tsunami propagates across the ocean very fast, at 200~250 m/s, i.e., 720~900 km/h, comparable to the speed of a jet airliner. The wave height of a tsunami in the open ocean is usually a few tens of centimeters to one meter, much smaller than that of a storm surge (typically 7~8 m). A tsunami crossing the open sea is like an army passing silently in the night. Ocean-going ships encounter tsunamis from time to time; a ship that meets a tsunami at sea rides over it easily and is in no danger at all.
However, when a tsunami approaches the coast, and especially when it enters a harbor (hence the Japanese writing of tsunami with the borrowed Chinese characters 津浪 or 津波, pronounced "tsunami", from which the English word tsunami is taken; it is also known as a harbor wave, both meaning "wave in the harbor"), its speed slows, the waves pile up rapidly, and the wave height can reach tens of meters, as if the sea were standing upright (hence the tsunami is also called a "sea stand"); like a towering wall of water it rushes ashore, irresistible, sweeping the coast and causing huge casualties and losses. Characteristics of tsunamis. Tsunamis, storm surges and the ordinary ocean waves seen every day at the seaside are all so-called gravity waves, that is, waves whose restoring force is gravity [1,2]. Gravity tends to return the ocean from a disturbed state to the undisturbed state; as a gravity wave propagates, gravity transfers energy, in the form of the wave motion, from regions of relative excess to regions of relative deficiency. More specifically, tsunamis (Fig. 1b), like ordinary ocean waves and storm surges (Fig. 1c), are gravity surface waves, i.e., gravity waves in which the amplitude of the seawater particle motion decays with depth. Although a tsunami is a gravity surface wave like ordinary waves and storm surges, it differs from them significantly. The cause is different: ordinary waves and storm surges are driven by wind or storms at the sea surface, whereas tsunamis are mostly caused by sudden rises and falls of the seabed. The period and wavelength are different: the period of a tsunami is as long as 200~2000 s and its wavelength 10~100 km, while the period of a storm surge is only 6~10 s and its wavelength about 100 m. Although both are gravity surface waves, ordinary ocean waves and storm surges are "deep-water waves" (Fig. 1c): the particle motion is confined to a layer of the order of 100 m below the surface of the deep ocean; the seawater particles move in advancing circles in a vertical plane, and the amplitude decays rapidly with depth, vanishing at a depth of about half a wavelength (of the order of 100 m) (Fig. 1c). This is why a submerged submarine rides out rough seas unmoved. (Figure 1: schematic diagram of the propagation of gravity waves in the ocean [1]. a: a gravity wave of wavelength λ and wave height ζ propagating in the x direction at phase velocity c in an ocean of depth H; b: "shallow-water wave" (long-period gravity wave); c: "deep-water wave" (short-period gravity wave).) For the same reason, a pressure gauge placed at the sea surface records ordinary waves up to several meters high almost continuously, but can hardly detect the much smaller tsunami signals (even of large tsunamis) buried in the ordinary wave signal (see Fig. 1b); therefore, to monitor the occurrence and propagation of tsunamis effectively, pressure gauges should be installed not only at the sea surface but also on the deep seabed. Unlike ordinary waves and storm surges, the wavelength (10~100 km) of a tsunami (Fig.
1b) is much larger than the depth of the sea (a few kilometers), to which the whole ocean is like a pool of shallow water; as a gravity surface wave, a tsunami is therefore a "shallow-water wave". When it propagates in the ocean, its amplitude decays so slowly with depth that there is almost no attenuation; moreover, the vertical extent of the particle motion is much smaller than the horizontal extent: the orbit is an ellipse so flattened that it almost degenerates into a straight line, so that the seawater particles of the entire ocean, from the surface to the seabed, oscillate back and forth synchronously in the horizontal direction, carrying a large amount of energy against the coast (Fig. 1b). The propagation speed is different. A tsunami is a long-period gravity wave; its high-frequency cutoff is 0.01~0.02 Hz, i.e., periods of 50~100 s. Its propagation speed is very high, as mentioned above up to 200~250 m/s, about 15 times the speed of ordinary waves. That a tsunami propagates at 200~250 m/s while its amplitude hardly attenuates with depth explains its extraordinary destructive power. In an ocean of depth H, the phase velocity c of a propagating gravity wave is c = √[(g/k)·tanh(kH)] (1), where k = 2π/λ is the wavenumber, λ the wavelength, g the gravitational acceleration, and tanh the hyperbolic tangent function. What is here called frequency dispersion is called "dispersion" in physics: the variation of the wave propagation velocity (phase velocity or group velocity) with period (or frequency). A tsunami is a long-period gravity wave: when its period is of the order of 100~1000 s, that is, when the wavelength λ is much greater than the water depth H (λ >> H), the tsunami, as a long-period gravity wave ("shallow-water wave"), shows no frequency dispersion, and formula (1) simplifies to c = u = √(gH) (2), where u is the group velocity. Ordinary ocean waves are short-period gravity waves: when the period is of the order of 10 s, i.e., very short, formula (1) simplifies to c = √(gλ/2π) = 2u (3), which means that, as short-period gravity waves ("deep-water waves"), ordinary ocean waves are dispersive surface waves whose phase velocity c is twice the group velocity u, both being proportional to the square root of the wavelength λ.
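The following minimal Python sketch evaluates the full dispersion relation (1) and compares it with the shallow- and deep-water limits (2) and (3); the depth and wavelengths are illustrative values chosen to reproduce the speeds quoted above.

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def phase_velocity(wavelength_m: float, depth_m: float) -> float:
    """Full gravity-wave dispersion relation (1): c = sqrt((g/k) tanh(kH))."""
    k = 2 * math.pi / wavelength_m
    return math.sqrt(G / k * math.tanh(k * depth_m))

# Shallow-water limit (2): a 100 km tsunami wave in a 4000 m deep ocean.
c_tsunami = phase_velocity(100e3, 4000)
print(f"tsunami:   c = {c_tsunami:.0f} m/s ~ sqrt(gH) = {math.sqrt(G*4000):.0f} m/s")

# Deep-water limit (3): a 100 m wind wave in the same ocean.
c_wave = phase_velocity(100, 4000)
print(f"wind wave: c = {c_wave:.1f} m/s ~ sqrt(g*lambda/2pi) = "
      f"{math.sqrt(G*100/(2*math.pi)):.1f} m/s, with group velocity u = c/2")
```

With H around 4000~6000 m this gives the 200~250 m/s quoted above, and the wind-wave speed of about 12.5 m/s confirms the roughly 15-fold speed ratio.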
The ease of excitation is different. Ordinary waves and storm surges are readily excited by wind or storms at the sea surface, whereas most tsunamis are generated by submarine earthquakes, and the ability of a submarine earthquake to excite a tsunami decreases sharply with increasing focal depth and increasing frequency. For the same focal depth, therefore, frequency is the most important characteristic quantity determining the efficiency with which an earthquake excites a tsunami. Within the solid Earth, the amplitude of the "eigenfunction" that determines this efficiency is very small: for earthquakes with focal depth greater than 60 km, the amplitude of the eigenfunction is only about 10⁻³ of the surface displacement when the period is about 10³ s, 10⁻⁵ when the period is about 10² s, and 10⁻⁷ when the period is about 50 s. That is to say, an earthquake with focal depth greater than 60 km can excite only very long-period tsunami waves; only at extremely long periods, for an extremely large earthquake, and under extremely favorable conditions can it trigger a disastrous tsunami. This has been confirmed by a large number of historical tsunamis and by modern observations. Tsunami disasters. The tsunami is a natural disaster of extremely low frequency, whose recurrence time in any one place is much longer than a human life span. Statistics since 1980 show that on average about one great earthquake of magnitude 8 or above occurs on Earth each year, and only about one in ten such great earthquakes occurs on the seabed and excites a tsunami. Moderate earthquakes, around magnitude 6.5, may generate small tsunamis with wave amplitudes of only a few centimeters, detectable with modern pressure gauges on the deep-ocean floor. Small tsunamis occur several times a year; larger ones about once a year. People easily take lightly natural disasters such as extremely large earthquakes and tsunamis, which occur extremely infrequently and recur in any one place over times much longer than a human lifetime. In the northern Indian Ocean, for example, only six tsunamis were recorded in history [3]: the earliest recorded tsunami in the region, in 326 BC, which struck the army of Alexander the Great; the tsunami off the coast of Iran in 1008 AD, caused by a local earthquake; the tsunami of August 27, 1883, from the eruption of the Krakatoa volcano in Indonesia; a tsunami from an earthquake in the western Bay of Bengal in 1884; the tsunami triggered by the magnitude 8.1 earthquake in the Andaman Sea in June 1941; and the tsunami triggered by the magnitude 8.1 earthquake of November 27, 1945, 70 km south of Karachi. China, India, Indonesia, Japan, the Philippines, the east coast of the United States, Côte d'Ivoire in Africa (formerly known as "Ivory Coast") and even Europe have all been struck by tsunamis many times since ancient times. Among the many natural disasters, the tsunami is indeed an event of extremely low frequency and probability, and its danger has clearly been greatly underestimated. An earthquake-generated tsunami is a tsunami excited by an earthquake. From the characteristics of tsunamis analyzed above, it is not hard to see what factors affect whether an earthquake triggers a tsunami. The main factors are [1,2]: ① the size of the earthquake (measured by the seismic moment M0 or the moment magnitude MW); ② the focal mechanism; ③ the focal depth; ④ the process of source rupture. Earthquake size. Natural earthquakes are produced by sudden dislocations of subsurface rock, so the size of an earthquake is related to the area of the fault plane, the relative displacement of the rock on the two sides of the fault, and the rigidity of the medium. The size of an earthquake is usually measured by the "seismic moment" M0 or the "moment magnitude" MW. The seismic moment is defined as M0 = μAD (4), where A is the area of the fault plane, D the average dislocation (slip) on the fault plane, and μ the rigidity (shear modulus) of the medium. Correspondingly, the moment magnitude is defined as MW = (2/3)(lg M0 − 9.1) (5), where M0 is in N·m.
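A minimal sketch of formulas (4) and (5): computing the seismic moment and moment magnitude of a hypothetical rupture; the fault dimensions, slip and rigidity below are illustrative values only.

```python
import math

def seismic_moment(area_m2: float, slip_m: float, rigidity_pa: float) -> float:
    """Formula (4): M0 = mu * A * D, in N*m."""
    return rigidity_pa * area_m2 * slip_m

def moment_magnitude(m0: float) -> float:
    """Formula (5): MW = (2/3) * (lg M0 - 9.1), with M0 in N*m."""
    return (math.log10(m0) - 9.1) / 1.5

# Hypothetical rupture: a 100 km x 50 km fault plane with 2 m average
# slip in crust of rigidity 3e10 Pa (a commonly assumed crustal value).
m0 = seismic_moment(100e3 * 50e3, 2.0, 3e10)
print(f"M0 = {m0:.1e} N*m  ->  MW = {moment_magnitude(m0):.2f}")
```

By the same formula, MW = 6.5 and MW = 9.5 correspond to M0 of roughly 7×10¹⁸ and 2×10²³ N·m, matching, up to rounding conventions, the range quoted in the next paragraph.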
As formula (5) shows, the moment magnitude is calculated from the seismic moment. When MW < 7.25, the moment magnitude MW and the magnitude measured from surface waves (the "surface-wave magnitude" MS) are essentially consistent; but when MW > 7.25, the surface-wave magnitude MS begins to "saturate", that is, the measured MS falls below the moment magnitude MW that reflects the true size of the earthquake; and when MW = 8.0~8.5, MS reaches full saturation, i.e., no matter how much larger MW becomes, the measured MS no longer increases. Therefore, when determining the size of a great earthquake, using any magnitude scale other than MW will underestimate the size of the earthquake because of magnitude saturation, leading to a wrong judgment of whether it will trigger a tsunami. Thus, whether for tsunami warning or for monitoring and studying seismic activity, the seismic moment [formula (4)], or the equivalent moment magnitude calculated from it [formula (5)], should be measured. Clearly, for 6.5 ≤ MW ≤ 9.5, M0 spans about five orders of magnitude, from 6.3×10¹⁸ N·m to 2.0×10²³ N·m; other things being equal, the larger the earthquake, the larger the tsunami, so the tsunamis triggered by earthquakes of different sizes can differ enormously in intensity. Focal mechanism. The parameters characterizing the focal mechanism of an earthquake are the strike of the fault plane (the angle, measured from north N, of the intersection of the fault plane with the ground) φ, the dip angle (the angle between the fault plane and the ground) δ, and the slip angle λ [the angle between the slip vector e (the direction in which the "hanging wall" of the fault slides relative to the "footwall") and the strike of the fault plane, counterclockwise positive] (Figure 2). In general, pure strike-slip faults (λ = 0° or 180°) are less likely to generate tsunamis [2,4], while pure dip-slip faults (λ = 90° or 270°) are more likely to do so [2]. This does not mean, however, that strike-slip faults never trigger tsunamis: a pure strike-slip fault on the seafloor also produces seafloor uplift and subsidence, and although the uplift and subsidence are not as large as those of a pure dip-slip fault of the same size, it can still trigger a tsunami. Theoretical calculation and analysis show that, other things being equal, the seafloor uplift and subsidence produced by a pure dip-slip fault is about 4 times that produced by a pure strike-slip fault, and the height of the tsunami it excites is likewise about 4 times as great [2]. Focal depth. The importance of focal depth for tsunami generation seems self-evident. It should be pointed out, however, that the focal depth usually quoted refers to the depth of the initial rupture point of the source, whereas the crucial parameter for tsunami warning, often overlooked, is the depth of the "centroid" of the seismic moment tensor released by the earthquake (the centroid depth).
Naturally, deep earthquakes are less likely to generate tsunamis than shallow ones, especially shallow earthquakes whose fault planes break the seabed surface. In fact, other things being equal, within 2000 km of the epicenter the tsunami wave height produced by a deep-focus earthquake is only a fraction of that produced by a shallow one; beyond an epicentral distance of 2000 km, however, the focal depth has little influence on the size of the tsunami [2]. (Figure 2: schematic showing the strike φ of the fault plane, the dip direction (defined as φ + 90°), the dip angle δ, the hanging wall and footwall of the fault, the slip vector e, and the slip angle λ.) The process of source rupture. A real seismic source is not a point: it has a definite shape and size. The length of a seismic fault, for example, ranges from a few meters (corresponding to an earthquake of MW ≈ 0) to hundreds of kilometers (corresponding to MW ≈ 8). The main difference between the tsunami generated by a source of finite size and that generated by a point source lies in the short-period components. So far, in many studies, and especially in tsunami warning, the "point-source moment tensor" model is still widely used to calculate tsunamis [2]. However, the Sumatra-Andaman MW 9.0 earthquake and the mega-tsunami it induced show that the dynamic process of the rupture, and especially its directivity, has a non-negligible effect on the propagation of tsunami energy, at least for extremely large earthquakes and their tsunamis [5]. Preliminary analysis of the rupture process of the Sumatra-Andaman earthquake of December 26, 2004 [6] showed that it was, overall, a unilateral rupture propagating from south-southeast to north-northwest; the resulting focusing of tsunami energy toward the north-northwest, the so-called "seismic Doppler effect", caused huge losses in the northern Indian Ocean. Had the rupture of this mega-earthquake propagated in the opposite direction, southward, the losses in areas and countries such as Banda Aceh and Thailand would not have been so great, though the losses south of Sumatra-Andaman might then have been much larger. Tsunami earthquakes. Why can some large earthquakes trigger large tsunamis, even anomalously large ones (the so-called "anomalous tsunami earthquakes"), while others cannot? This concerns the mechanism of "tsunamigenic earthquakes", or tsunami earthquakes for short. Some hold that the reason for the huge difference is that earthquakes able to trigger tsunamis have particularly slow source rupture processes of particularly long duration [7]. Others believe that some large earthquakes trigger large tsunamis because they occur in the wedge-shaped toe of the accretionary complex of the overriding plate in the subduction zone, where the depth is shallow and the rigidity of the medium small, whereas ordinary interplate earthquakes occur where the depth is greater (about 10~40 km); the former can therefore generate a large tsunami while, because the rigidity of the medium is small, their seismic moment is relatively small [8].
Still others think that, generally speaking, the bigger the earthquake, the bigger the tsunami it excites, and that there is no real puzzle: the apparent difference or contradiction arises from inappropriately using the surface-wave magnitude MS to measure earthquake size, since MS is already fully saturated by moment magnitude MW = 8.7 [9]. Using normal-mode theory, it can be shown by calculation that under certain geometric conditions a source located in a shallow sediment layer may generate a much larger tsunami than a source located in the solid earth [10]. Waveform simulations show that the degree of undulation ("roughness") of the seafloor topography near trenches is related to the occurrence of large earthquakes and tsunamis [1]. These findings suggest that a slow source rupture process in the sedimentary layer of a shallow subducting slab favors the excitation of a large tsunami, and highlight that determining source rupture processes, and in particular studying especially slow ruptures such as "slow earthquakes" and "silent earthquakes", is of great significance for clarifying the mechanism of tsunami excitation, and hence for preventing and mitigating tsunami disasters. The characteristics, or criteria, of earthquakes capable of triggering large tsunamis are extremely important for understanding the tsunami, a natural phenomenon of very low frequency, for reducing the false-alarm rate of tsunami warning, and hence for preventing and mitigating tsunami disasters. It would clearly be worthwhile to explore in depth which of the many possible factors plays the major role in making earthquakes differ so greatly in their ability to generate large tsunamis. Tsunami warning. As stated above, how to judge quickly and correctly, after a major earthquake, whether it has triggered a tsunami is still an unresolved scientific problem, reflecting our still shallow understanding of tsunami earthquakes and of the tsunami-generation mechanism; further research is urgently needed. Nevertheless, at the current level of understanding, tsunami warning can still make a real contribution to preventing and mitigating tsunami disasters. The physical basis of tsunami warning is that seismic waves travel faster than tsunamis. The propagation speed of seismic longitudinal waves (P waves) is 6~7 km/s, 20~30 times that of a tsunami. At large distances, therefore, the seismic waves arrive tens of minutes or even hours earlier than the tsunami, the exact value depending on the epicentral distance and on the propagation speeds of the seismic waves and the tsunami. For example, at an epicentral distance of 1000 km the seismic longitudinal wave arrives in about 2.5 minutes, while the tsunami takes more than an hour; the huge tsunami triggered by the MW 9.5 Chile earthquake of 1960 did not reach the coast of Japan until 22 hours later!
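The sketch below turns this speed difference into a warning lead time, assuming a P-wave speed of 6.5 km/s and the shallow-water tsunami speed √(gH) from formula (2); the ocean depth and the distances are illustrative choices.

```python
import math

G = 9.8          # m/s^2
VP = 6500.0      # assumed average P-wave speed, m/s

def lead_time_s(distance_m: float, depth_m: float) -> float:
    """Seconds between P-wave arrival and tsunami arrival at a coast."""
    v_tsunami = math.sqrt(G * depth_m)   # shallow-water speed, formula (2)
    return distance_m / v_tsunami - distance_m / VP

for dist_km in (100, 1000, 17000):      # local, regional, trans-Pacific
    dt = lead_time_s(dist_km * 1e3, depth_m=4500)
    print(f"{dist_km:6d} km: lead time ~ {dt/3600:5.2f} h")
```

With a 4500 m deep ocean this reproduces the roughly one-hour margin at 1000 km and the ~22-hour Chile-to-Japan travel time quoted above, and it shows why warning is hardest for local tsunamis only about 100 km away.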
If the time difference arising from the different propagation speeds of seismic waves and tsunamis is used to analyze the seismic-wave data and determine the seismic parameters quickly and accurately (the origin time, epicenter location, focal depth, seismic moment, focal mechanism, source rupture process, etc.), and these are matched with the records of pressure gauges deployed in advance in sea areas where tsunamis may occur (as noted above, pressure gauges should be deployed not only at the sea surface but also on the seabed), it becomes possible to determine whether the earthquake has triggered a tsunami, and how large the tsunami is. Then, on the basis of measured bathymetric charts, seabed topographic maps, and the topography of the coastal areas that may be struck, the arrival time and intensity of the tsunami at the coast can be simulated and calculated; space technologies such as satellites, remote sensing, and interferometric synthetic aperture radar (InSAR) can monitor the progress of the tsunami across the sea; and modern information technology can deliver the warning in time to residents of the coastal areas at risk. Together with publicity, education and the popularization of scientific knowledge about tsunami prevention and mitigation, and with training and drills for coping with tsunami disasters in the coastal areas concerned, there is hope of saving thousands of lives and avoiding great property damage when a tsunami strikes. Tsunami warning thus has a reliable physical basis; it is not only sound in theory but feasible in practice, and there have been successful examples. For instance, shortly after the 1946 tsunami that inflicted severe casualties and property damage on the city of Hilo, Hawaii, the Pacific Tsunami Warning Center was established in Hawaii in 1948, and it has effectively averted possible greater losses from subsequent tsunamis. If the countries around the Indian Ocean had established a tsunami warning system like that of the Pacific countries before the 2004 Indian Ocean tsunami, the tsunami caused by the Sumatra-Andaman earthquake would never have caused such huge casualties and property losses. The warning scheme described above is most effective for "distant tsunamis"; for an "offshore tsunami" ("local tsunami"), in which the submarine earthquake triggering the tsunami is very close to the coast, say only tens to hundreds of kilometers away, the time difference between seismic-wave arrival and tsunami arrival is only a few minutes to tens of minutes, and effective warning is much more difficult. To judge quickly and correctly after a major earthquake whether it has triggered a tsunami, and to improve the warning of offshore and local tsunamis, research on tsunami physics must be strengthened. The above has briefly introduced tsunamis, tsunamis induced by earthquakes, the earthquakes capable of inducing them, and the physical basis of tsunami warning. We have seen that, like earthquakes, tsunamis are a natural phenomenon: humans live on a constantly changing, very active planet.
The Earth is the common home of humankind. It provides the resources, energy and environment on which we depend for survival, but from time to time it also makes waves, bringing disasters. As natural phenomena, tsunamis and earthquakes are vivid manifestations of the Earth's ceaseless movement and change; as natural disasters, tsunami disasters and earthquake disasters are just two of the many natural disasters facing humankind. In the face of natural disasters, human beings should strive to study and understand them, rely on science and technology, seek ways to avoid and mitigate them, and learn to "live with disasters" and to "promote the benefits and avoid the harms". Tsunami disasters can be prevented and mitigated by establishing tsunami warning systems; but given our still shallow understanding of the characteristics of tsunami earthquakes and of the tsunami-triggering mechanism, the false-alarm rate of tsunami warnings, especially warnings of offshore tsunamis, is still very high. To raise the level of tsunami warning, research on the physics of tsunamis must be strengthened.", "Earthquake prediction. Earthquake prediction means stating explicitly the location, time and size of a future earthquake (the "three elements" of an earthquake) together with their uncertainty intervals and the degree of credibility of the prediction [1]. Earthquake prediction is usually divided into long-term (more than 10 years), medium-term (1~10 years), and short-term and imminent prediction (1 day to hundreds of days, and less than 1 day) [2]. Sometimes the latter is subdivided into short-term (10 days to hundreds of days) and imminent (1~10 days and less than 1 day) prediction. The division into long-term, medium-term, short-term and imminent prediction is based mainly on (objective) need; it is an artificial (subjective) division without a physical basis, and the boundaries are neither sharp nor fully standardized. In everyday language, and even among professionals, "earthquake prediction" and "earthquake forecast" are often not distinguished, and both often refer to what is here called short-term and imminent prediction. Internationally, some seismologists call statements that do not meet the above definition "forecasts", also known as "probabilistic (earthquake) forecasts", and statements that do meet it "deterministic (earthquake) predictions". In this usage, what are here called "long-term prediction" and "medium-term prediction" should be called "long-term forecast" and "medium-term forecast". In China, it is customary to call the results of research by scientists and research institutes on the location, time and size of future earthquakes "earthquake predictions", while the warnings about future earthquakes issued according to law by the competent government departments are called "earthquake forecasts". The size of the "target magnitude" matters when evaluating earthquake predictions: because small earthquakes are far more numerous than large ones, they are far easier to "predict" correctly by chance. It is not easy to hit a magnitude 6.0 earthquake by chance in a given area and time window, but it is quite possible to "match" a magnitude 5.0 earthquake by chance.
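To see why the target magnitude matters, the following sketch combines the Gutenberg-Richter frequency-magnitude law, lg N = a − b·M, with Poisson statistics to estimate the probability of a chance "hit" in a given window; the a and b values are hypothetical regional placeholders.

```python
import math

# Hypothetical regional Gutenberg-Richter parameters: with a = 4.0 and
# b = 1.0, the region averages 10**(4 - M) quakes of magnitude >= M per year.
A, B = 4.0, 1.0

def chance_hit_probability(target_mag: float, window_yr: float) -> float:
    """Poisson probability of >= 1 event of magnitude >= target in the window."""
    rate = 10 ** (A - B * target_mag)       # expected events per year
    return 1.0 - math.exp(-rate * window_yr)

for m in (5.0, 6.0, 7.0):
    p = chance_hit_probability(m, window_yr=1.0)
    print(f"M >= {m:.1f} within 1 yr: chance-hit probability = {p:.1%}")
```

With these placeholder values a magnitude 5.0 "hit" comes free about one time in ten, while a magnitude 6.0 hit by chance is ten times rarer, which is the asymmetry the text describes.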
Advances in earthquake prediction. Since the 1960s earthquake prediction, especially medium- and long-term prediction, has made some meaningful progress [3,4]. In long-term prediction the most prominent advances are: ① in the circum-Pacific seismic belt, almost all large earthquakes have occurred in gaps identified in advance by the "seismic gap" method; ② using the "seismic gap" method, American seismologists in 1984 officially predicted that the Parkfield M6 earthquake would occur in 1988±4.3 (that is, by the beginning of 1993 at the latest); it finally occurred 11 years later than predicted, at 17:15:24 UTC on September 28, 2004; ③ using the "seismic gap" method, American seismologists successfully predicted the Loma Prieta M6.9 earthquake of October 18, 1989 in California; ④ in China there have been some successful earthquake cases based on the identification of intraplate seismic gaps. Despite this progress in long-term prediction, however: ① the "Tokai earthquake" predicted by Japanese seismologists with the seismic gap method has not occurred in the 31 years since 1978; ② for the Loma Prieta earthquake the actual situation did not exactly match the forecast, and coincidence still cannot be ruled out; ③ the Parkfield earthquake occurred 11 years later than predicted. These circumstances suggest that accurate forecasting is difficult even for seemingly regular sequences of earthquakes at plate boundaries. In medium-term prediction: ① retrospective studies of many earthquake sequences using the "stress shadow" method have yielded very meaningful results; ② Japanese seismologists successfully predicted the 1978 magnitude 7.7 earthquake at Oaxaca in southern Mexico; ③ Keilis-Borok (Кейлис-Борок, В. И.) of Russia and his colleagues proposed a medium-term prediction method for strong earthquakes called "Time of Increased Probability" (TIP), and used it to predict, successfully, the magnitude 8.1 earthquake in Hokkaido, Japan on September 25, 2003 and the magnitude 6.5 earthquake at San Simeon in central California on December 22, 2003. Despite this progress in medium-term prediction, problems remain that cannot be ignored, for example: ① the "stress shadow" method is still at the retrospective-research stage and has not been used in prediction experiments; ② the Keilis-Borok methods have yet to be comprehensively tested; ③ the authenticity of the seismicity-pattern precursors on which the Oaxaca prediction was based is still in doubt. In contrast to the progress in medium- and long-term prediction, progress in short-term and imminent prediction has been small. For more than 40 years seismologists have worked to find "deterministic earthquake precursors", that is, anomalies observed without exception before every earthquake and inevitably followed by a major earthquake once they appear, but no breakthrough has been made.
Beginning in 1989, the Sub-commission on Earthquake Prediction of the International Association of Seismology and Physics of the Earth's Interior (IASPEI) organized an expert group to carry out two rounds of evaluation of the "significant earthquake precursors" nominated by experts from various countries [1,5]. A "significant earthquake precursor" is "a quantitative, measurable change in an environmental parameter that occurs before an earthquake and is considered to be related to the preparation process of the main shock". The first round ran from 1989 to 1990, the second from 1991 to 1996. In the two rounds 37 nominations were reviewed, of which only 5 were approved: ① foreshocks, hours to months before the earthquake; ② "preshocks", months to years before the earthquake; ③ seismic quiescence before strong aftershocks; ④ decreases in radon content and water temperature in groundwater before the earthquake; ⑤ crustal deformation reflected in the rise of groundwater level before the earthquake. Even though these 5 items were recognized as significant earthquake precursors, that does not mean they can be used to predict earthquakes. Foreshocks, for example, are undoubtedly precursors, but how to recognize a foreshock as such remains an open problem. After the 1980s the focus of international research on earthquake precursors shifted to the search for transient slip precursors before major earthquakes. To this end the United States Geological Survey (USGS) established an earthquake prediction test site at Parkfield in central California and deployed dense seismic and precursor observation networks to detect foreshocks and other possible precursors. However, the predicted Parkfield magnitude 6.0 earthquake not only occurred 11 years later than predicted; no foreshocks were detected before it, and no earthquake precursors have been identified in the analyses made so far. Why is earthquake prediction so difficult? Earthquake prediction is a recognized hard scientific problem. What exactly makes it so difficult? In a nutshell, the difficulties are mainly the following three [4]. The "impenetrability" of the Earth's interior. The "impenetrability" of the Earth's interior means that humans cannot yet penetrate the high-temperature, high-pressure depths of the Earth to set up stations and install instruments to observe the earthquake source directly. Seismologists can only observe from a fairly sparse and very unevenly distributed network of stations on the Earth's surface (in many cases only on land, which occupies only about 30% of the surface area) and at very shallow depths (boreholes at most a few kilometers deep), and must use the very incomplete, insufficient, and sometimes even imprecise data so obtained to infer ("invert") the situation in the Earth's interior. The Earth's interior is very heterogeneous and far from "transparent": seismologists "seeing" into the Earth from its surface do not see even as well as one "sees flowers through fog", and the image of the Earth's interior so obtained is blurred. All of this greatly limits our understanding of the environment of the earthquake source and of the source itself. The "infrequency" of large earthquakes.
Large earthquakes are rare, "infrequent" events, and their recurrence times are much longer than a human life span and than the period of modern instrumental observation, which limits the accumulation of observations of the phenomena and the advance of empirical knowledge by seismology as an observational science. So far, research on precursory phenomena before major earthquakes is still at the stage of summarizing and studying individual earthquake cases; the practical, reliable empirical laws needed to establish a theory of earthquake occurrence are lacking, and both the summarizing of empirical laws and the construction and testing of theory are constrained by the rarity of large earthquakes. The complexity of the physical process of earthquakes. An earthquake is a natural phenomenon occurring in a very complex geological environment, and the seismic process is an extremely complex physical process at every level from the macroscopic to the microscopic. The complexity and variability of earthquake precursors are probably closely related to the complexity of the geological environment of the source region and to the highly nonlinear, complex nature of the seismic process. The predictability of earthquakes. Earthquake prediction has been one of the goals of greatest concern to seismologists worldwide for more than a century [6]. In the 1970s, soon after the Soviet Union reported that the seismic wave velocity ratio (the ratio VP/VS of the P-wave velocity VP to the S-wave velocity VS) decreased before earthquakes, an anomaly of the velocity ratio before an earthquake was also observed in the Blue Mountain Lake area of New York State. There followed numerous reports of precursory phenomena such as anomalies of wave velocity and velocity ratio before earthquakes, the proposal of physical mechanisms for earthquake precursors such as the dilatancy-diffusion model and the dilatancy-instability model, and the successful prediction of the Haicheng earthquake in China in 1975. All this made the international seismological community extremely optimistic about earthquake prediction; it was even believed that "even though the physical mechanism of earthquake occurrence is not well understood, earthquakes, like weather, tides, and volcanic eruptions, can be predicted to some degree". At that time even many famous geophysicists were convinced that systematic short-term and imminent prediction was feasible and that earthquakes would soon be predicted routinely, the key being simply to deploy enough instruments to detect and measure the precursors. It was soon found, however, that the observational and theoretical foundations of earthquake prediction were problematic: when the wave velocity ratio anomalies were remeasured, the previously reported results could not be reproduced; questions were raised as to whether the geodetic, geochemical and electromagnetic anomalies reported after earthquakes were genuine precursors related to the earthquakes; and theoretical models, as well as laboratory experiments on rock dilatancy, microfracturing and fluid flow, did not yield the temporal evolution of precursory anomalies suggested earlier. Then the empirical prediction methods in use failed to deliver a short-term or imminent prediction of the 1976 Tangshan earthquake in China.
In the 1980s and 1990s, the Parkfield earthquake on the San Andreas fault predicted by American seismologists and the Tokai earthquake predicted by Japanese seismologists did not occur (the former finally occurred 11 years late, on September 28, 2004; the latter has still not occurred), which made many people pessimistic. For more than a century views on earthquake prediction have ranged from very optimistic to extremely pessimistic, and the different views have been debated; in recent years, especially, there has been fierce debate around the predictability of earthquakes [2,7]. Some experts hold that the seismic system, like many other systems, is a system with "self-organized criticality" (SOC), that is, a system fluctuating at the edge of a critical state with no characteristic length scale, and that phenomena with self-organized criticality are inherently unpredictable. Critical phenomena in SOC systems generally obey power-law distributions, and the Gutenberg-Richter law of seismology is a power law; these experts therefore conclude that earthquakes are a self-organized critical phenomenon and that the seismic system is an SOC system. Furthermore, they argue, since self-organized critical phenomena are inherently unpredictable, earthquakes are unpredictable; and since earthquake prediction is so difficult or even impossible, it should be abandoned and no longer studied [7]. But whether earthquakes are a self-organized critical phenomenon is not a question that can be settled by "democratic vote" with "the minority obeying the majority"! That most people think earthquakes are a self-organized critical phenomenon does not make them one! L. Knopoff pointed out [2,8] that the most important observational basis for the self-organized criticality of earthquakes, the power law derived from the Gutenberg-Richter law, is only an apparent phenomenon, so the inference that earthquakes possess "scale invariance" is a misconception, which arises from neglecting the effect of aftershocks. Knopoff demonstrated that earthquake phenomena are not without characteristic scales; they have at least four. It is intriguing that the theoretical model used by many researchers who believe "earthquakes are unpredictable" in studying the self-organized criticality of earthquakes is exactly the famous Burridge-Knopoff spring-slider model (the BK model) proposed by Knopoff and his student 40 years earlier [9]. These researchers reached the conclusion that "earthquakes are unpredictable" on the basis of the BK model or other very simple earthquake-like models. Geller, R. J., who takes a negative view of earthquake prediction, argued [7] that although these theoretical simulations use very simple earthquake-like models, their very simplicity shows how easily even a deterministic model becomes unpredictable, so there is no reason to think that the conclusions of these theoretical studies do not apply to earthquakes. Knopoff [2,8], however, held that these researchers had misused his BK model: they had not properly considered the physics of earthquakes, so that although they simulated certain phenomena, what they simulated was not the earthquake phenomenon.
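For readers unfamiliar with the models at issue, here is a minimal toy sketch in the spirit of the simple earthquake-like SOC models mentioned above (a uniformly driven threshold lattice with nearest-neighbor stress redistribution, loosely following the Olami-Feder-Christensen variant, not the original BK spring-slider itself); the grid size, dissipation parameter and run length are arbitrary illustrative choices.

```python
import random

# Toy threshold lattice loosely in the spirit of the simple
# "earthquake-like" SOC models discussed above. Drive all cells
# uniformly, topple any cell whose stress reaches the threshold, and
# pass a fraction ALPHA of its stress to each of its 4 neighbors.
N, THRESH, ALPHA = 32, 1.0, 0.2          # arbitrary illustrative choices
grid = [[random.random() for _ in range(N)] for _ in range(N)]

def avalanche() -> int:
    """Uniformly drive the lattice to its next event; return event size."""
    bump = THRESH - max(max(row) for row in grid) + 1e-12
    for row in grid:
        for j in range(N):
            row[j] += bump
    unstable = [(i, j) for i in range(N) for j in range(N) if grid[i][j] >= THRESH]
    size = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESH:
            continue
        stress, grid[i][j] = grid[i][j], 0.0
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:  # open boundaries dissipate stress
                grid[ni][nj] += ALPHA * stress
                if grid[ni][nj] >= THRESH:
                    unstable.append((ni, nj))
    return size

sizes = [avalanche() for _ in range(5000)]
# Crude frequency-size tally over logarithmic bins: a heavy, roughly
# power-law-like tail shows up as slowly decaying bin counts.
for lo in (1, 2, 4, 8, 16, 32):
    print(f"size {lo:3d}-{2*lo - 1:3d}: {sum(lo <= s < 2*lo for s in sizes):5d}")
```

Whether such toy lattices capture real seismicity is precisely what Knopoff disputed; the sketch only illustrates how easily a very simple threshold model produces a broad, power-law-like event-size distribution.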
He pointed out that the apparent power law of earthquakes corresponds only to a transitional phenomenon, not to a self-organized critical state toward which the system eventually evolves; earthquake phenomena are self-organized (SO) but not critical (C). The complex geometry of geological structure makes mainshocks and aftershocks follow roughly the same fractal-like distribution, which makes them easy to confuse; regardless of the reliability of the power law itself, it is a fallacy simply to deduce from the power law that earthquakes have self-organized criticality and then conclude that \"earthquakes cannot be predicted\". Knopoff [2] sharply exposed the fallacy in the logic of the researchers advocating that \"earthquakes are unpredictable\". Their reasoning, he pointed out, runs like this: \"mammals (self-organized critical phenomena) have 4 legs (follow a power-law distribution), and tables (earthquake phenomena) also have 4 legs (follow a power-law distribution); therefore the table (earthquake phenomenon) is also a mammal (self-organized critical phenomenon), or the mammal (self-organized critical phenomenon) is also a table (earthquake phenomenon)\". The discussion and debate over the predictability of earthquakes, a theoretical issue closely tied both to the practice of earthquake prediction and to universal laws of nature, is still going on. The difficulty of earthquake predictability arises from the impossibility of measuring with high precision the state of a fault and its surroundings and from the fact that the physical laws governing it remain largely unknown; if the situation in these two respects can be improved, it may still be possible in the future to predict earthquakes several years in advance. The difficulty of predicting an earthquake several years ahead is comparable to that of a weather forecast several hours ahead, but the information about the Earth's interior required for earthquake prediction is far more complex than the atmospheric information required for weather forecasting, and far harder to obtain, because it all comes from underground (the \"impenetrability\" of the Earth's interior). Thus the limit on earthquake predictability may lie not in the inherent limitations of deterministic chaos theory but in the unavailability of extremely large amounts of information. The scientific way to achieve earthquake prediction relies on scientific and technological progress and on the community of scientists to resolve its difficulties; one cannot, as some developed countries have done, ignore the urgent need for short-term and imminent prediction and simply count on some leap or major breakthrough in basic research decades from now. In this respect earthquake prediction is not quite the same as purely basic research. Its characteristics are [4]: (1) \"urgency\" in time, that is, the question must be answered immediately, without hesitation or excuse; (2) \"incompleteness\" of the information about the impending earthquake; (3) \"high risk\".
The above characteristics of earthquake prediction mean neither that its strict scientific standards can be lowered, nor that earthquake prediction can be disregarded because our understanding of earthquakes is insufficient and the information on any impending earthquake is incomplete (put in extreme terms, understanding will never be \"sufficient\" and information never \"complete\"). Over more than a century, through the unremitting efforts of several generations of seismologists, the understanding of earthquakes has indeed made great progress, but much remains not understood. At present earthquake prediction is still in an early stage of scientific exploration, and the capability of earthquake prediction, especially short-term and imminent prediction, is still very low, far short of urgent social needs. Solving this earth-science problem, which urgently needs an answer yet requires long-term exploration, can only rely on the progress of science and technology and on the community of scientists. Scientists should, on the one hand, do their best to apply knowledge representing the highest current level to earthquake prediction and, on the other, faithfully communicate to the public the information about earthquakes available at the highest current level of understanding, both positive and negative. Strengthen the observation of earthquakes and their precursors. To overcome the observational difficulties faced by earthquake prediction, efforts in earthquake observation and research should turn \"passive observation\" into \"active observation\": seismic networks should cooperate to intensify observation, and not only natural seismic sources but also artificial sources should be used to probe the Earth's interior. In the observation and study of earthquake precursors, we should continue to strengthen the monitoring of precursory phenomena and broaden the scope of the search for precursors. Earthquake precursors involve many disciplines and broad fields, including geophysics, geodesy, geology, and geochemistry. Much experience, including the lessons of the 2004 Parkfield earthquake prediction experiment, shows that with current thinking and practice reliable earthquake precursors are indeed not easy to detect. The effort to keep searching for precursors along existing lines should not be lightly given up; at the same time, finding new paths, proposing new ideas, adopting new methods, and exploring new precursors should be advocated and encouraged. Since the 1990s, advances in space-based Earth observation and digital seismic observation have brought leaps in the resolution, coverage, and dynamic range of observations of modern crustal movement, the Earth's internal structure, earthquake source processes, and earthquake precursors, and the application of high technology [such as space geodetic techniques like the Global Positioning System (GPS) and interferometric synthetic aperture radar (InSAR), and \"seismo-satellites\" for detecting earthquake precursors] together with multidisciplinary research has brought new opportunities for earthquake prediction research. Multidisciplinary cooperation and mutual penetration are powerful means of finding and reliably identifying earthquake precursors.
Adhere to scientific experiments on earthquake prediction: the earthquake prediction test site. The complexity and variability of earthquake precursors may be closely related to the complexity of the geological environment in the source region and so vary from place to place; different \"strategies\" should therefore be adopted in different earthquake-risk areas, testing and developing different prediction methods with different emphases. This is not only scientifically reasonable but also financially economical. We should learn from the experience and lessons of earthquake prediction test sites in various countries, including China's, and pay special attention to the fact that successful experience in one area may not apply to others, just as the experience of successfully predicting the 1975 Haicheng earthquake in China did not apply to the 1976 Tangshan earthquake. We should make full use of China's geographical advantages, select suitable areas, and use earthquake prediction test sites, an important and effective approach, to carry out scientific experimental research on earthquake prediction under strict, controllable conditions and with pre-determined, operable criteria. With multidisciplinary cooperation, intensive observation, and the close integration of monitoring, research, prediction, and forecasting, persistent work can be expected to yield very valuable data on fault activity, crustal deformation, earthquake precursors, and seismicity in different tectonic environments, which will help deepen the understanding of earthquakes and overcome the difficulties of earthquake prediction. Systematically implement basic, comprehensive programs of observation, detection, and research on the Earth's interior and earthquakes. To overcome the observational difficulties faced by earthquake prediction, such programs should include: (1) strengthening the observation of earthquakes and their precursors; (2) scientific drilling in seismically active areas aimed at probing source regions; (3) trenching across fault zones to study paleoearthquakes; (4) laboratory fracture experiments on rock samples under high temperature and high pressure; (5) computer-based numerical simulation of the seismic process; and so on. Strengthen domestic and international cooperation. Earthquake prediction research is deeply limited by the lack of \"samples\" (the \"infrequency\" of large earthquakes) required to establish an earthquake theory on empirical laws. At present, most academic journals publishing papers on earthquake prediction practice provide hardly any of the original data, and their language is so vague that other researchers cannot independently examine and evaluate the work after reading it; in addition, the data cannot be shared. These factors exacerbate the aforementioned difficulties.
We should face up to and change the effectively closed state of earthquake prediction research and carry out broad, in-depth domestic and international academic exchange and cooperation; strengthen the construction of earthquake information infrastructure to facilitate data sharing; and make full use of the convenience of the information age to establish virtual, distributed, \"wall-less\" joint research centers, so that researchers engaged in earthquake prediction can use instruments and equipment, obtain observational data, access computing facilities and resources, and exchange with peers regardless of where they are, north, south, east, or west, and whether they are inside or outside professional institutions. Prospects for earthquake prediction. Above, the state of international research on earthquake prediction and forecasting has been briefly reviewed from both its positive and negative aspects, the scientific difficulties encountered have been analyzed, and the scientific approaches that should be adopted to resolve them have been set out. Since the 1960s some meaningful progress has been made in medium- and long-term prediction, such as the confirmation of large seismic gaps at plate boundaries, \"stress shadow\" zones, seismicity patterns and pattern recognition, and the predicted Parkfield earthquake in the United States, which, 11 years late, finally occurred. At present the overall level of earthquake prediction, especially short-term and imminent prediction, is still not high and remains far from social needs. We have also pointed out that although earthquake prediction is a difficult geoscience problem that needs an urgent answer yet requires long-term exploration, it is not impossible; its difficulty can serve neither as an excuse to relax or abandon earthquake prediction research, nor as a reason to give up that research and one-sidedly emphasize that we need only build earthquake-resistant fortifications. An earthquake, as a natural phenomenon, is a manifestation of the vitality of the Earth, the unique planet of the solar system inhabited by human beings; its occurrence is inevitable. Earthquake disasters, however, should be and can be avoided or mitigated through effort. In the face of earthquake disasters, seismologists should be brave enough to meet the challenge and advance despite the difficulties; they should strengthen research on the laws of earthquake occurrence and on disaster-causing mechanisms, improve the level of earthquake prediction and forecasting, and enhance the ability to prevent and mitigate earthquake disasters. The difficulties faced by earthquake prediction can be resolved neither by relying solely on empirical methods nor by ignoring urgent social needs and waiting for some leap or major breakthrough in basic research decades from now. What can be pointed out optimistically is that, compared with the situation more than 40 years ago, the scientific problems facing seismologists today have not increased; rather, the problem has been exposed more clearly than before. Moreover, the progress of earthquake observation technology since the 1960s and the development and application of high technology have brought historic opportunities to research on earthquake prediction and forecasting.
Relying on scientific and technological progress, strengthening the observation of earthquakes and their precursors, selecting suitable locations, carrying out and persisting in scientific experiments on earthquake prediction with prediction test sites as an important vehicle, and systematically and persistently carrying out basic observation, detection, and research on the Earth's interior and earthquakes, we may be cautiously optimistic about the prospect of realizing earthquake prediction.", "Are earthquake faults in a state of high stress or of low stress? This is a scientific problem that has troubled seismologists for decades [1]. From seismic observations it can be estimated that the \"stress drop\" (the stress released during an earthquake) is 1-10 MPa, with an average of about 6 MPa (1 MPa = 10 bar). The stress drop of \"interplate earthquakes\" (earthquakes occurring between plates) is lower, about 3 MPa; the stress drop of \"intraplate earthquakes\" (earthquakes occurring within plates) is higher, about 10 MPa [2]. These observations are very consistent with Chuji Tsuboi's estimate [3] that the critical seismic strain is of the order of 10^-4. However, according to laboratory experiments under high temperature and pressure, the strength of rock is estimated to be as high as hundreds of MPa [4]. From the experimental results and theoretical simulations of rock friction, this difference can be attributed to the stress drop of each seismic event being only a small part of the total stress acting when the earthquake occurs; that is, the seismic stress drop is only a fraction of the total stress, called a \"fractional stress drop\". When a fault slides at velocity v under shear stress σd, the heat generated by friction per unit time and unit area is σd·v, owing to the work done against friction. If the frictional stress on the fault plane during a large earthquake were very high, say hundreds of MPa, a large amount of heat should be generated, enough to cause melting on the fault plane and an anomaly in heat flow [5]. However, heat flow measurements made across the San Andreas fault show no heat flow anomaly (Fig. 1; the solid line indicates the increase in heat flow that frictional heating under a shear stress of 50 MPa would produce [6,7]). The absence of an anomaly indicates that the frictional stress on the fault plane during earthquakes should be quite low (less than tens of MPa); that is, seismic faulting occurs in a low-stress state, or the seismic fault is much weaker than expected. The orientation of crustal stress near the San Andreas fault leads to a similar conclusion. If the fault were strong, frictional theory would require the maximum principal compressive stress axis to lie at about 23° to the fault strike; yet earthquake focal mechanism solutions, geological data, and observations such as borehole stress measurements show that the compressive stress axis near the fault zone is almost perpendicular to the fault [8]. This shows that the fault plane behaves like a free surface.
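To see why high friction should leave a thermal signature, a back-of-the-envelope sketch helps. Assuming a long-term slip rate of about 35 mm/yr (a commonly quoted figure for the San Andreas fault), the frictional heat flux q = σ·v can be compared with typical continental heat flow of roughly 60-80 mW/m²:

```python
# Rough frictional-heat estimate behind the heat flow paradox: a fault
# sliding at long-term rate v under shear stress sigma dissipates
# q = sigma * v (W/m^2) per unit fault area, which should appear as
# excess surface heat flow near the fault if sigma is high.
YEAR = 3.156e7                        # seconds per year

def frictional_heat_flux(sigma_mpa, v_mm_per_yr):
    sigma = sigma_mpa * 1e6           # shear stress, Pa
    v = v_mm_per_yr * 1e-3 / YEAR     # slip rate, m/s
    return sigma * v                  # dissipated power, W/m^2

for sigma in (5, 50, 200):            # low vs. high frictional stress, MPa
    q = frictional_heat_flux(sigma, 35.0)   # ~35 mm/yr assumed slip rate
    print(f"sigma = {sigma:3d} MPa -> q ~ {q*1e3:.0f} mW/m^2")
# ~50 MPa already yields ~55 mW/m^2, comparable to the entire background
# heat flow -- yet no such near-fault anomaly is observed.
```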
The above contradictory observations are called the heat flow paradox, also known as the fault strength paradox or the San Andreas (fault) paradox. So far there is no generally accepted, convincing explanation for these seemingly contradictory observations. One explanation is that the effective stress on the fault plane is lowered by high pore pressure; however, it is questionable whether the fault zone can sustain a pressure much higher than hydrostatic. Another explanation is that the fault zone is filled with low-strength, clay-rich fault gouge and is therefore weak; this explanation, however, runs into the difficulty that experiments on fault gouge show that its strength is not low, that fault gouge has normal strength, unless the pore pressure is very high [9,7,10,11]. Studies of rock friction properties, crustal stress measurement, and heat flow measurement in fault zones are of great significance for understanding whether earthquakes occur under high or low stress. While many of these issues remain to be studied in depth, there is no doubt that resolving the heat flow paradox [the fault strength paradox or San Andreas (fault) paradox] is crucial for elucidating the physical mechanism of earthquake occurrence [12].", "The importance of earthquake early warning. Most earthquakes result from the sudden release of a large amount of strain energy accumulated near plate boundaries or on faults within plates as a result of material movement in the Earth's interior. Earthquake disasters are sudden and severe and have always been a huge threat to human society (Chen et al., 2008). China is located in a region prone to strong earthquakes and has suffered particularly severe earthquake disasters; the 1976 Tangshan earthquake and the Wenchuan earthquake in particular brought enormous economic and human losses, causing disasters rarely seen in the world. Research on earthquake prevention and disaster reduction is among the most important content of geophysical research in China. If earthquakes could be accurately predicted, casualties and some economic losses could of course be effectively avoided. However, at the current level of research in earthquake physics, short-term earthquake prediction remains a worldwide problem, and breakthroughs are unlikely in the near future. At present, the countermeasures taken by developed countries and regions such as the United States and Japan are: before an earthquake, to strengthen the seismic fortification of buildings so that strong ground shaking does not cause disaster; before strong ground shaking forms, to take emergency measures to avert part of the disaster (earthquake early warning); and after earthquake disaster has formed, to determine its extent quickly and allocate relief resources according to severity so as to provide efficient relief (rapid earthquake disaster assessment). Earthquake early warning and rapid assessment of earthquake disasters are the main research content of real-time seismology [3,7].
Earthquake early warning means determining the location and strength of an earthquake in the shortest possible time after it occurs and, exploiting the speed of modern communications and the time difference between the fast but weak P wave and the slow but strong S wave, issuing a warning before the destructive seismic waves (mainly surface waves) arrive, so that emergency measures can be taken to reduce losses of life and property. For example, the Wenchuan earthquake occurred more than 100 kilometers west of Chengdu, and its surface waves took about 40 seconds to arrive. Using seismic stations near the epicenter (assuming two stations within 50 km) and the earthquake's P waves, the location and strength of the earthquake could be determined preliminarily within a dozen seconds, giving Chengdu more than 20 seconds of warning time to suspend critical activities (school classes, important meetings, hospital operations, high-speed trains, financial transactions, military exercises, production of dangerous industrial products, etc.) and to shut off electricity, gas, tap water, and other utilities, reducing losses from secondary disasters. The status quo of earthquake early warning. Earthquake early warning evolved from the rapid release of earthquake information (rapid earthquake reporting): when the source information of a rapid report arrives earlier than the destructive seismic waves, the rapid report becomes an early warning. The idea of earthquake early warning was proposed long ago. In 1868, J. D. Cooper proposed building an earthquake observation station on the Hayward fault east of San Francisco and using the telegraph to warn San Francisco of earthquakes. Although his suggestion was not implemented, it provided a meaningful idea for later generations. After more than 100 years of development of seismology, earthquake early warning has finally become a reality. There are two schemes for modern earthquake early warning. One is based on the rapid earthquake reporting system: the stations closest to the epicenter in a seismic network are used to determine the three elements of the earthquake quickly, and then, thanks to modern communications and the travel time of the destructive waves, people far from the epicenter gain a certain amount of time to take action. Since this scheme (network early warning) has already determined the basic elements of the earthquake, warnings can be issued to the relevant areas; but it has a relatively large blind zone, because several stations are needed to determine the earthquake's parameters. The other scheme is single-station (on-site) early warning, which exploits the fact that the P wave arrives early but, being weak, causes no damage, while the S wave arrives late but strong. Seismometers at the site continuously monitor for earthquakes, and if strong P waves are detected a warning is issued that destructive seismic waves are approaching. This scheme needs only one station, so its blind zone is small (Wu et al., 2001); but since it can warn only the site itself, its applicability is limited.
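The warning-time arithmetic in the Chengdu example can be made concrete with a small sketch. The wave speeds, station distance, and 10 s processing delay below are illustrative assumptions, not values from the text:

```python
# Back-of-envelope early-warning window, assuming typical crustal speeds:
# P ~6.0 km/s carries the first information; damaging S/surface waves
# travel at roughly 3.0 km/s.
VP, V_DAMAGE = 6.0, 3.0                 # km/s (assumed)

def warning_window(target_km, station_km=30.0, processing_s=10.0):
    """Seconds of warning at a target city: damaging-wave arrival time
    minus the time to detect P at a nearby station plus processing
    (the alert itself is assumed to travel effectively instantly)."""
    alert_time = station_km / VP + processing_s
    return target_km / V_DAMAGE - alert_time

for d in (50, 100, 300):
    print(f"target at {d:3d} km: ~{warning_window(d):.0f} s of warning")
# ~2 s at 50 km (inside the blind zone), ~18 s at 100 km (the
# Chengdu-like case), ~85 s at 300 km.
```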
A better solution is to combine network early warning with single-station early warning so as to reduce the blind zone and enlarge the warning range [7]. Earthquake early warning systems have already been applied to earthquake disaster mitigation. Beginning in the 1960s, the Japanese railway system developed the UrEDAS system, which uses the observations of a single seismic station to determine the basic elements of an earthquake roughly; if a strong earthquake is detected, the system notifies high-speed trains to brake so as to avoid derailment. After the 1989 Loma Prieta earthquake in California, Bakun (1994) proposed a warning system for the strong aftershocks of that earthquake: aftershocks were monitored by a dense seismic network in the mainshock area, and warnings were sent to construction workers (especially those working at heights), giving them more than 20 seconds to evacuate to safety. Earthquake early warning systems have also been established in Taiwan, Mexico, and elsewhere. The most critical goal of early warning research is to determine the elements of an earthquake in the shortest possible time. Intuitively, however, early warning of large earthquakes seems impossible. For example, the Wenchuan earthquake rupture lasted nearly 100 seconds before finally becoming a magnitude 8 earthquake. Must one then wait until the quake is over to know how big it ended up being? The major achievements of real-time seismology in recent years have answered this in the negative. The results of Olson and Allen (2005), published in Nature, show that the earthquake rupture process has a certain determinism; that is, the information in the first few seconds of an earthquake is basically enough to judge when it will end and how big it will eventually become (Ellsworth et al., 1996). They computed characteristic times (τc, τp) of earthquakes from the waveform of the first few seconds of the P wave recorded at stations at short epicentral distances and found that these characteristic times correspond well with magnitude (the τc computation is sketched after this paragraph). Their research shows that four seconds of P wave are enough to judge the magnitude. The work of Wang et al. (2009) clearly shows that the τc method can also be applied to the Wenchuan earthquake sequence for early warning. Precise statement of the problem. The main task of earthquake early warning is to determine source parameters such as the magnitude, location, rupture direction, and focal mechanism as quickly and accurately as possible, so that the necessary protective measures can be taken before the destructive seismic waves arrive. The key to early warning is fast and reasonably accurate determination of seismic parameters, and one therefore faces the problem that only a few stations are available for that determination. The core issues of earthquake early warning are: (1) determination of magnitude and source parameters from a small number of stations (single, double, or a few stations); (2) the relationship between the low- and high-frequency characteristics of P waves and the magnitude, and its confirmation; (3) the relationship between the physical process of earthquake initiation (nucleation) and the whole process of earthquake rupture.
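The τc referred to above is commonly defined (following Kanamori's formulation) as τc = 2π·sqrt(∫u² dt / ∫u̇² dt) over the first few seconds of P-wave displacement u(t), a weighted-average period that scales with magnitude. A minimal sketch under that assumed definition, checked against a pure sinusoid whose period it should recover:

```python
import numpy as np

def tau_c(u, dt):
    """Characteristic period tau_c from the first seconds of P-wave
    displacement u: tau_c = 2*pi / sqrt(r), r = int(udot^2) / int(u^2)."""
    udot = np.gradient(u, dt)                 # numerical velocity
    r = np.trapz(udot**2, dx=dt) / np.trapz(u**2, dx=dt)
    return 2.0 * np.pi / np.sqrt(r)

# Sanity check: for a sinusoid of period T, tau_c should return T.
dt = 0.01
t = np.arange(0.0, 3.0, dt)                   # first 3 s of the P wave
u = np.sin(2 * np.pi * t / 1.5)               # hypothetical 1.5 s period signal
print(f"tau_c ~ {tau_c(u, dt):.2f} s")        # ~1.5
```

In practice the displacement record is high-pass filtered first and τc is regressed against magnitude using many events; the sketch shows only the core computation.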
For example, how long after the onset of rupture is the information sufficient to determine the final scale (magnitude) of the rupture? Difficulties of earthquake early warning. There are two difficulties in earthquake early warning research. From a technical point of view, owing to the high complexity of crustal structure, the speed and accuracy with which a few stations can determine earthquake location, magnitude, and mechanism are limited; methods for determining seismic parameters from one or two stations are therefore a difficult point of early warning research, although some preliminary schemes exist [8]. From a theoretical point of view, the initiation of earthquake rupture is a very complicated problem, and the interaction between faults is very complex. Anderson et al. (2003) found that ruptures on some faults propagate along a single fault and eventually develop into earthquakes of moderate size, while other earthquakes start on one fault and quickly trigger activity on another, longer fault, producing strong quakes with magnitudes as high as 7.8. Without a detailed study of the interactions between faults, it is therefore difficult to predict the magnitude of an earthquake. Another practical example is the 2002 Alaska earthquake sequence: a magnitude 6.7 earthquake occurred on October 23, followed ten days later by the nearby Denali mainshock. The rupture at the beginning of the Denali mainshock showed a thrust-type focal mechanism, which after 20 seconds quickly transformed into a strike-slip mechanism; the rupture stopped after extending more than 300 kilometers, with a final magnitude of 8.1 [1]. If the initial thrust-type rupture had not continued to develop, the earthquake might have been only magnitude 7 or less. This example shows that accurately predicting the magnitude of an earthquake in advance is very difficult; if this is the mechanism of most earthquakes, early warning will also be very difficult. However, the results of Olson and Allen (2005) show that the magnitude of the Denali earthquake can still be estimated from the 3-4 s after rupture initiation. Thus, during the seismogenic process, adjacent faults may interact so that the stress distribution on each fault acquires a certain spatial correlation, and even a short interval of information at the start of rupture may carry information about the overall rupture process. The work of Ellsworth and Beroza (1995) seems to indicate that the nucleation phase of earthquakes can be observed and correlates clearly with the final magnitude. However, Kanamori and Mori (2004) rejected this model, finding no obvious difference between large and small earthquakes within 0.1 s of onset. Perhaps only when the rupture has lasted long enough can the overall information of the rupture be estimated; current early warning results suggest that 3-4 s may be enough, and the specific physical mechanism still needs further research.
Earthquake early warning studies the evolution of a rupture after its initiation, while earthquake prediction studies where, when, and how large (in time and space) a rupture may occur before it happens; prediction and early warning thus study what happens before and after the earthquake, respectively. However, earthquake rupture is not an instantaneous process; it should be a process of destabilization followed by rapid acceleration [5], so in a strict theoretical sense the time boundary between prediction and early warning is not sharp. From the observational point of view, current early warning addresses only the post-initiation physical process that is clearly stronger than the noise captured by seismographs. Most earthquake ruptures start at the bottom of the seismogenic zone, at least 5-10 km from observation points at the surface, so their details are hard to observe. Compared with earthquake prediction, however, the changes in the stress and deformation fields after rupture initiation are significantly stronger, so research on early warning is much less difficult than research on prediction. Solving the early warning problem relies on the intersection of multiple disciplines: only by combining high-precision, short-distance, multi-station, multi-method field observation of rupture initiation, multi-time-scale numerical simulation of earthquake dynamics, and high-time-resolution, high-strain-rate rock mechanics experiments can the problem of earthquake early warning be better solved.", "How large are the stresses that drive plate motion? Under how high a shear stress environment do large earthquakes occur? Is there any difference in the level of crustal shear stress between seismic and aseismic zones? Progress in understanding these basic issues depends on the development of techniques for measuring the absolute magnitude of stress deep underground. The San Andreas fault in the United States is a well-known plate-boundary fault. Since the 1960s, American scholars have debated at length whether the fault is in a state of high or of low shear stress [1]. The current mainstream view is that the heat flow measured over the fault zone is low, and that the angle between the fault and the maximum principal compressive stress direction, determined from earthquake focal mechanisms and borehole breakouts near the fault zone, is large [2]; these two observations indicate that the fault zone is in a state of low shear stress. But this remains an inferred view, which needs to be confirmed by direct measurement of absolute stress magnitude. Hydraulic fracturing stress measurements in the 3.5-km-deep Cajon Pass borehole, 4 km northeast of the fault trace, show that the shear stress does not seem to be low [3]; but the hole is not very deep and lies some distance from the fault, so the level of shear stress in the fault zone itself remains uncertain. Many people assume that the gestation of a large earthquake is a process of accumulating high shear stress. For example, according to rock experiments, the dilatancy model of earthquake gestation [4] requires the shear stress in the seismogenic region to exceed 1/2 to 2/3 of the rock fracture strength
before dilatancy of the rock mass occurs; another example is the \"strong body\" model of earthquake gestation [5], which holds that a strong body is a crustal block capable of accumulating high shear stress. Whether a high shear stress zone exists before a large earthquake bears on the physical basis of research into large-earthquake precursors. The assumptions of these models need to be confirmed by observation, above all by observation of the magnitude of shear stress at depth. At present there are very few measurements of the absolute magnitude of in-situ stress. For example, when the World Stress Map was compiled by international cooperation in the 1990s, of the nearly 10,000 items of crustal stress data provided by scholars from many countries, most were determinations of principal stress direction; absolute stress data from depths greater than 100 m accounted for less than 4%, and measured data from depths greater than 1 km were only a handful. Stress measurements made not long ago in the world's ultra-deep and several deep boreholes have given some insight into the stress state of the deep crust. Stress measurements and estimates in the 9-km-deep KTB borehole in Germany at the end of the 20th century showed that the shear stress reaches the order of 100 MPa at about 8 km [6] (Fig. 1). If the rock friction coefficient is assumed to be 0.6-0.7, the shear stress measured from 3 to 8 km is close to the upper limit allowed by the experimental law governing fault slip, Byerlee's law. In a water-injection-induced microseismicity experiment at 9 km depth in the borehole, raising the water pressure by only about 1 MPa triggered many microearthquakes, which also shows that the shear stress there is close to the critical state. The borehole is located at the western end of the relatively stable Bohemian massif in southeastern Germany; scientists still find it difficult to explain why the shear stress is so high in this tectonically rather stable area [7]. This also shows that an aseismic area is not necessarily a low-shear-stress area. However, this is only a \"snapshot\", and more evidence needs to be accumulated. Figure 1: Stress measurements and calculations for the KTB ultra-deep borehole in Germany [6]. SH and Sh are the maximum and minimum horizontal principal stresses, and Sv is the vertical principal stress; for each depth segment between 3 and 6.8 km, the Sh and SH values estimated from combined analysis of borehole breakouts and drilling-induced fractures are given as minimum (open triangle), intermediate (cross), and maximum (open diamond) values. In the main borehole of the Chinese Continental Scientific Drilling project in Donghai County, Jiangsu Province, the borehole breakout method was used to estimate a shear stress of about 6 MPa at 1216 m depth and about 20 MPa at 5000 m depth [8]. The shear stress there is clearly lower than at the corresponding depths of the KTB borehole, but stress obtained from borehole breakouts alone is only a rough estimate, and China does not yet have the technical capability for deep-hole hydraulic fracturing stress measurements.
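The Byerlee-law ceiling invoked above can be illustrated with a simplified single-coefficient version of the law, assuming lithostatic vertical stress, hydrostatic pore pressure, and a friction coefficient of about 0.6-0.7; the densities and coefficient below are illustrative assumptions:

```python
# Sketch of a frictional-strength ceiling on crustal shear stress,
# a simplified stand-in for Byerlee's law with hydrostatic pore pressure.
RHO_ROCK, RHO_WATER, G = 2700.0, 1000.0, 9.8    # kg/m^3, kg/m^3, m/s^2

def friction_limit(depth_km, mu=0.65):
    z = depth_km * 1e3
    sigma_n_eff = (RHO_ROCK - RHO_WATER) * G * z   # effective normal stress, Pa
    return mu * sigma_n_eff / 1e6                  # max sustainable shear stress, MPa

for z in (3, 5, 8):
    print(f"{z} km: tau_max ~ {friction_limit(z):.0f} MPa")
# ~33, ~54, and ~87 MPa -- the 8 km value is of the same order as the
# ~100 MPa inferred in the KTB borehole, consistent with the stress
# there being near the frictional critical state.
```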
Deeper understanding of many problems of tectonic dynamics and earthquake causes requires more measurements of the absolute magnitude of differential stress underground, and current deep absolute stress measurement technology is not adequate. At present the main method for measuring deep in-situ stress is hydraulic fracturing, and measuring stress below 3 km with this method is still very difficult; ultra-high-pressure water injection is also very difficult. The actual KTB measurements (Fig. 1) likewise show that, apart from the minimum principal stress, the maximum principal stress, which is essential for determining the shear stress, could be measured by hydraulic fracturing only down to 3 km; below that depth the maximum principal stress calculated by other methods may carry very large errors. Seismic waves can penetrate deep underground, and it has long been proposed to use seismic wave observations to study the stress state at depth. There has been significant progress in inferring principal stress directions, but there is still no effective method for obtaining the absolute magnitude of shear stress from seismic waves. At present the study of S-wave splitting is considered one way to obtain some information about the subsurface stress field. In studies of crustal S-wave splitting, some researchers proposed a \"stress-forecast\" method of predicting earthquakes from temporal changes in S-wave splitting parameters. They assumed that, under differential stress, subcritical growth of aligned cracks occurs in the crust, giving rise to dilatancy anisotropy in the rock medium of the seismogenic area [9]. According to this model, the direction of the maximum principal compressive stress of the crust can be deduced from the polarization direction of the fast S wave, and the gestation of an earthquake can be inferred from the change with time of the travel-time difference between the fast and slow S waves. Even if this model holds, the absolute magnitude of crustal differential stress cannot be determined from S-wave splitting; on the contrary, its proponents have pointed out that better knowledge of the stress state, pore pressure, and rheological properties of the crust would itself help to test the model [9]. In short, to measure the absolute magnitude of subsurface differential stress (or shear stress) widely in more areas, and to learn more about the initial (or background) stress field underground, it is necessary to find, develop, and improve techniques and methods for measuring the absolute magnitude of differential stress in the deep crust.", "On February 4, 1975, a magnitude 7.3 earthquake occurred in Haicheng, China. Because a sequence of direct foreshocks occurred in the three days before the main shock, the short-term, imminent prediction of a strong earthquake achieved a breakthrough for the first time anywhere in the world. The foreshocks of the Haicheng earthquake began to \"accelerate\" two days before the main shock, that is, their frequency and intensity increased, but they became relatively quiet about 7 hours before the main shock (Fig. 1). Later, Japanese scholars reported an earthquake case with characteristics similar to the Haicheng foreshocks [1] (Fig. 1).
Fig. 1: Frequency of foreshocks and aftershocks (Ms ≥ 1.5) of the 1975 Haicheng earthquake in China and of the 1945 Mikawa earthquake in southern Aichi Prefecture, Japan. The successful prediction of the Haicheng earthquake brought a ray of hope for solving the problem of short-term and imminent prediction of strong earthquakes. Although foreshocks had been studied in Greece and Japan before the Haicheng earthquake [2,3], it was after Haicheng that \"direct foreshocks\" (accelerating foreshock sequences occurring from a few hours to a dozen or so days before a strong earthquake) received extensive attention [4,5]. However, in the year after the Haicheng earthquake, the Tangshan earthquake in Hebei Province, for which no direct foreshocks were recorded, left seismologists somewhat confused about short-term prediction. The two earthquakes were not far apart; why did one have foreshocks and the other not? In fact, in the 10 years from 1966 to 1976, four earthquakes above magnitude 7 occurred in North China. Two of them (the 1966 Xingtai earthquake and the 1975 Haicheng earthquake) had foreshocks, and two (the 1969 Bohai earthquake and the 1976 Tangshan earthquake) had none. All occurred in the tectonic setting of the North China Depression, so why do some have foreshocks while others do not? The recent Wenchuan M8.0 earthquake of 2008 likewise had no recorded direct foreshocks. Later studies of large shallow earthquakes of magnitude 6 to above 7 worldwide noted that the proportion with direct foreshocks is only around ten percent [5]. A similar study of shallow strong earthquakes in China (Ms ≥ 5.5 in the east, Ms ≥ 6.0 in the west) found that those with direct foreshocks account for about 9% [6]. Some regions have reported higher proportions of earthquakes with foreshocks. However, most existing statistics decide the presence or absence of foreshocks by specifying a time, space, and magnitude window on the basis of conventional catalogs; it is not certain that these \"foreshocks\" were accelerating sequences located in the initial rupture area of the main shock. In the future, more detailed case studies are needed, including relocating foreshock sequences as far as possible and using the P/S amplitude ratios or waveform comparisons recorded by the limited stations available to infer the similarities and differences of foreshock focal mechanisms, so as to extract the specific features of foreshock occurrence. Why do some major earthquakes have foreshocks while more do not? This is a scientifically unsolved problem. It involves understanding the physical process by which large ruptures initiate and the underground physical conditions of the initiation region. It is also an entry point for discussing the physical basis of short-term precursors of major earthquakes: if precursors of an impending earthquake appear, they must be related to some physical process, and one of the most probable such processes is the initiation of the large rupture. The direct foreshock is itself a short-term precursor of a major earthquake; if this precursor does not appear before most major earthquakes (under existing observational conditions), one is led to ask whether other possible precursors will
necessarily appear before all major earthquakes. Some explanatory models have been proposed for why direct foreshocks occur before some large earthquakes. For example, some researchers use the static fatigue of materials to explain the acceleration of foreshocks [7]: the greater the stress on a material, the shorter its \"life\" before failure. They assume that many asperities (strong points) exist in the initial rupture zone of the fault; under stress loading some asperities fail first, transferring the stress they carried to the others, so that the stress concentration on the remaining asperities keeps growing and the time interval between successive shocks keeps shortening, which can explain the acceleration of foreshocks. This explanation does not address why most large earthquakes have no foreshocks: for those, does the rupture initiation region simply lack a fault structure with asperities? Given that reservoir-induced earthquakes all have foreshocks, some researchers explain reservoir foreshocks by changes of pore pressure in the rock, and have even used fracture experiments on water-bearing sandstone to produce micro-rupture sequences simulating foreshocks [8]. However, the foreshocks of reservoir earthquakes typically develop over several months or longer, quite unlike the foreshock sequences of tectonic earthquakes, and they are generally distributed over a wide area rather than only near the initial rupture zone of the main shock; the foreshocks of tectonic earthquakes and of reservoir-induced earthquakes may therefore have different mechanisms. If the effect of water is invoked to explain foreshock occurrence, it is also hard to explain why two of the four strong North China earthquakes mentioned above had foreshocks while the other two did not. Some researchers regard the foreshock sequence as a manifestation of pre-slip in the initial rupture region before the main rupture destabilizes and expands, the foreshocks being vibrations generated by brittle failure at obstacle points during pre-slip. In laboratory stick-slip experiments on rock blocks with pre-cut surfaces, a small amount of pre-slip has indeed been found before the block destabilizes into large slip [9]; for actual earthquakes, strain changes associated with foreshock sequences would need to be observed to support this view. However, since slow slip in a local area can be observed only at very close range, and the probability of a large earthquake with significant foreshocks is not high, obtaining and accumulating creep records corresponding to foreshocks remains very difficult under current observational conditions. Whether fault creep or strain change accompanies foreshocks should nevertheless be a target of future strain observation. Note that the fault pre-slip discussed here occurs only on the initial slipping segment of the fault, not over the whole fault eventually formed by the large earthquake. There are always many aftershocks after a large earthquake, and observations of postseismic creep along the entire fault are not uncommon.
The faults formed by major earthquakes have scales of tens to hundreds of kilometers, but most of the fault is formed by the dynamic, rapid expansion of rupture from a small initial rupture area; whether slow slip occurred at many places on this rapidly formed fault before the main shock still needs confirmation. Beyond the observational difficulties discussed above, the difficulty of the foreshock problem for experiment and theory may lie in the fact that rupture initiation is a highly nonlinear physical problem, in which small variations of some factor in the physical system can lead to vastly different outputs; working out exactly which factors affect the rupture initiation process may be a long and difficult undertaking.", "Since the great San Francisco earthquake of 1906, seismologists have generally accepted the view that \"earthquakes are caused by dislocation on faults\", and the seismic waveforms recorded at the surface are the response to this dislocation. Using seismic waveforms to infer the dislocation on the seismic fault is therefore an important means of understanding the rupture process of the earthquake source. At present, research on source rupture processes commonly starts from seismic waveform records observed at the surface and uses mathematical inversion theory to obtain the distribution in time and space of slip on the earthquake fault. Because it does not involve the dynamics that cause the motion, this kind of research is called \"source kinematic inversion\". Shortly after each of the major earthquakes that have occurred around the world in recent years, the source rupture process has been obtained by such inversion. With richer seismic records and improved computing, the method has matured considerably: synthetic seismograms calculated from the inverted models agree well with the actual observations, especially the records used in the inversion. However, this method does not explain the rupture process from the physical essence of the earthquake: the kinematic model describes how the source ruptures but does not answer why it ruptures in that way. Source kinematic studies take the slip on the fault plane as the basic parameter, and the inversion usually assumes a source time function and a rupture propagation velocity; these assumptions implicitly constrain the rupture mode. From a physical point of view, the slip on the fault plane that causes the earthquake is the result of a series of complex geological processes: geological movement changes the stress state in the source region, and when the stress accumulated in some local region exceeds the limit the medium can bear, dislocation occurs. The study of the stress field near the fault therefore explores the gestation, occurrence, and arrest of the earthquake from its physical essence. Research concerned with the dynamics that drive earthquake fault motion is called \"source dynamics\" [1,2]; it takes the stress field on and around the fault plane as the basic parameter, and the slip on the fault is the result of that stress field.
From a mathematical point of view, the study of source dynamics amounts to solving the equation of motion under given boundary conditions and initial conditions, the solution being the dislocation on the fault. Although the problem is complex, it is a well-posed, solvable problem: since the equation of motion is determined for a given Earth medium, the corresponding displacement solution can be obtained as long as the boundary and initial conditions are given. But how are these boundary and initial conditions to be given? That obviously depends on the stress state near the fault, and this stress state is the result of complex geological processes whose details are still hard to know accurately at the current level of technology. The biggest difficulty in studying the physical process of source rupture therefore lies in our poor knowledge of the stress state underground. How to observe directly and quantitatively, or calculate reliably, the stress field near a fault is the key to understanding the physical essence of earthquakes at a fundamental level. In view of the complexity of the problem, current research on the physical process of source rupture is still exploratory: the stress state of the source region is assumed, the equation of motion is solved mathematically, and the distribution of slip on the fault plane in time and space is obtained. Relevant earthquake data are then calculated and simulated from the resulting source dynamic model and compared with the corresponding observations; the assumed stress state model is revised according to the comparison, until a reasonable source dynamic model is finally obtained and insight is gained into the regional stress field. A reasonable source dynamics model should simulate the earthquake observations well and properly explain some phenomena of the earthquake from a physical point of view; this is a trial-and-error method of research. So far, source dynamics has no direct inversion method comparable to that of source kinematics and remains largely a forward-modeling exercise, because the relationship between the stress field on the fault and the seismic observations at the surface is highly nonlinear and complex; the corresponding forward calculations are themselves very complicated, and inversion methods still face technical difficulties. If a breakthrough in inversion is achieved in the future, so that source dynamic models can be obtained by inverting surface observations, it will greatly advance understanding of the physical process of earthquake occurrence. On the other hand, even the current forward modeling of source dynamics has difficulties, the most important of which may be the friction criterion during source rupture, that is, the constitutive relation between stress and slip in the rupture process. The friction criterion is important because it controls how an earthquake is triggered, how the rupture propagates after triggering, and how it eventually stops. So far, friction criteria have mostly been derived from rock fracture experiments in the laboratory.
The commonly used ones are the slip-weakening friction criterion, the slip-rate-weakening friction criterion, and the rate- and state-dependent friction criterion [3]; determining the parameters in these criteria is another difficult point of source dynamics. Even in terms of the simplest parameter, geometry, there is a huge difference between the rock samples in experiments and the faults of natural earthquakes, so how to apply rock-physics experimental results to the fault rupture of natural earthquakes remains a subject for further research [4]. Research on the physical process of source rupture is not isolated; it is closely related to other branches of geology and geophysics. For example, obtaining an accurate rupture process depends on detailed knowledge of the medium in the source region and on understanding of its geological and tectonic background; otherwise the assumptions taken as premises become baseless. Conversely, in-depth study of the physical process of source rupture can also promote the development of the related sciences. With the continuous improvement of observations and techniques, it is to be believed that research on the physical process of earthquake source rupture, as one of many related research endeavors, will continue to deepen, and that people will finally unravel the mystery of earthquake occurrence.", "With the development of earthquake science, we can now know the epicenter, precise location, and magnitude of a destructive earthquake within a few minutes of its occurrence, as well as the rough intensity distribution. However, there is still no credible basis for accurately predicting when seismic events will begin and end. The key to preventing earthquake disasters is therefore to strengthen the necessary precautions. Statistics show that the casualties and building damage caused by earthquake disasters are closely related to the rational selection of seismic design codes and to the establishment of effective post-disaster emergency rescue systems, and both of these depend on the intensity of ground motion. Accurately predicting the intensity of ground shaking (the acceleration, velocity, and displacement of surface particles, the duration of shaking, and the frequency content of the seismic waves) is thus crucial to the selection of seismic design parameters in engineering and to the rapid, correct response of emergency rescue systems. Shear rupture of a fault in the crust leads to sudden relative slip of its two sides, accompanied by the radiation of mechanical energy (seismic waves) and their propagation through the Earth. Surface particle motion includes the coupled response of the surface medium to the arriving seismic waves, and the response of surface structures to this ground motion (mostly caused by shear waves) is the direct cause of building damage (Fig. 1). Accurate prediction of ground motion intensity thus actually involves exploring the source/fault rupture process, the propagation and attenuation of seismic waves, and the structure of near-surface media (Fig. 1). Fig. 1: Conceptual model of strong ground motion: 1. fault rupture; 2. radiation, propagation, and attenuation of seismic waves; 3. site response.
Synthetic ground motion can be expressed as U(t) = Source(t)·Path(t)·Site(t), where t denotes time. At present, research on methods for predicting strong ground motion intensity focuses on the establishment and verification of four different models: ① the empirical attenuation relationship model; ② the stochastic model; ③ the kinematic model; ④ the dynamic model. All of these models must be grounded in seismic observation data. The empirical attenuation relationship model is based on statistical regression analysis of strong-motion records; it establishes how surface intensity measures at different magnitudes, such as horizontal peak acceleration and velocity and the acceleration and velocity response spectra at different response periods, attenuate with increasing fault distance. The stochastic model characterizes the acceleration, velocity and displacement time histories of ground motion as Gaussian noise of limited duration and bandwidth, that is, its amplitude spectrum satisfies the Brune [1] far-field spectrum U(f) ∝ M0/[1 + (f/fc)²], where U is the surface displacement at frequency f; M0 is the seismic moment (M0 = μ·Δu·A, where μ is the shear modulus of the fault, and A and Δu are the fault rupture area and average displacement); and fc is the corner frequency (fc = 2.34β/(2πr), where β is the shear-wave velocity and r is the characteristic scale of the fault). The vibration frequencies of ground motion synthesized by the stochastic model are generally greater than or equal to 1 Hz. The kinematic model addresses the synthesis of broadband (0 Hz ≤ f ≤ 10 Hz) ground motion: through a detailed description of the slip function and slip velocity on the mainshock fault plane, together with the seismic wave propagation process, it completes the synthesis of ground motion from the source, along the propagation path, to the site; representative results at present include the empirical Green's function method, the composite source model, and the hybrid method. The dynamic model starts from given internal boundary conditions on the fault (the friction criterion) and initial conditions (the initial stress field) and solves the elastodynamic equations numerically to simulate the dynamic rupture process of the fault and the corresponding ground motion field; commonly used numerical methods include the finite difference, finite element, boundary element and boundary integral methods, and the frequency of the calculated ground motion is generally less than or equal to 1 Hz. These models each have advantages and disadvantages in practice: the stochastic model is simple and easy to compute but lacks a precise physical meaning; the kinematic model clearly does not account for the complexity of the fault rupture process; and the dynamic model is computationally expensive and still lacks the high-frequency content of ground motion. With the continuous accumulation of strong-motion data, understanding of the fault rupture process and the corresponding ground motion has also deepened.
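To make the stochastic model described above concrete, the sketch below shapes finite-duration Gaussian noise by the Brune far-field spectrum in the frequency domain, in the spirit of the stochastic method; the moment, corner frequency and duration are illustrative assumptions, and the path and site terms are omitted:

```python
import numpy as np

def stochastic_ground_motion(duration, dt, m0, fc, seed=0):
    """Toy stochastic-model synthesis: window Gaussian noise,
    shape its spectrum by the Brune source spectrum
    U(f) ~ M0 / (1 + (f/fc)^2), and transform back.
    Amplitudes are in arbitrary units (path/site terms omitted)."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    noise = rng.standard_normal(n) * np.hanning(n)   # finite-duration noise
    spec = np.fft.rfft(noise)
    f = np.fft.rfftfreq(n, dt)
    brune = m0 / (1.0 + (f / fc) ** 2)               # Brune far-field shape
    shaped = np.fft.irfft(spec * brune, n)
    return shaped / np.abs(shaped).max()             # normalized trace

trace = stochastic_ground_motion(duration=20.0, dt=0.01, m0=1.0, fc=0.5)
print(trace[:5])
```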
In fact, the saturation and oversaturation of near-field horizontal peak acceleration revealed recently by the NGA (next generation attenuation) project [2] have posed new challenges to the construction of strong ground motion prediction models. The saturation or oversaturation of horizontal peak acceleration at magnitudes Mw ≥ 7.0 (Fig. 2) may indicate that there is no single simple correspondence between moment magnitude (fault scale + slip displacement) and shaking intensity, which means that the similarity between small and large earthquakes breaks down above a certain magnitude [3]. For accurate prediction of strong ground motion, future work should focus on building models based on physical processes. Specifically, a new generation of attenuation relationships can be established by combining actual observations with simulation results, and dynamic models should consider adopting friction criteria appropriate to different rupture-friction mechanisms. Recently, Choy and Boatwright [5] studied two earthquakes of the same magnitude (Mw 6.7) in Japan and showed that although the magnitudes were identical, their radiated seismic wave energies differed by a factor of 4; the apparent stress σa = μEs/M0 (μ is the shear modulus, Es the seismic wave energy, and M0 the seismic moment) likewise reflects the difference in ground motion intensity. In particular, near-field strong ground motion may be affected by the fault type and the regional tectonic background. Kanamori [6] investigated, under the slip-weakening criterion, how differing modes of stress release during dynamic friction affect energy radiation. Our recent preliminary analysis of the 2001 Mw 7.8 earthquake west of the Kunlun Mountain Pass and the 2008 Mw 7.9 Wenchuan earthquake likewise shows that although the moment magnitude of the Kunlun Mountain Pass earthquake is slightly smaller than that of the Wenchuan earthquake, its energy radiation was much greater (E_Wenchuan ≈ 1.4×10^16 N·m, σ_Wenchuan ≈ 1.85 MPa; E_Kunlun ≈ 3.2×10^16 N·m, σ_Kunlun ≈ 5.3 MPa). Fig. 2 NGA peak ground acceleration (PGA) attenuation curves [2,4]: A. compared with earlier attenuation relationships [4], PGA saturates when Mw ≥ 7.0, and the PGA predicted by NGA is much smaller than earlier results; B. comparison with observations from the 1992 Mw 7.3 Landers earthquake in the USA; C. comparison with strong-motion observations from the 2008 Mw 7.9 Wenchuan earthquake in China. For research on the relationship between magnitude and ground motion intensity, more observational data and more processing and analysis are essential. Building an equivalent kinematic model on the basis of the dynamic model requires an in-depth investigation of how the slip function and slip-rate function on the fault plane are described. In addition, the directivity factor of rupture propagation, D = 1/(1 − (v/β)cosθ) (where v and β are the rupture propagation velocity and the S-wave velocity, respectively, and θ is the angle measured from the rupture propagation direction at the source), and the caustics formed by SH-wave directivity during supershear rupture (with critical angle θc = cos⁻¹(β/v), where θc is the angle between the surface observation point and the rupture direction and v is the rupture propagation velocity) should be properly considered in the model.
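A brief numerical sketch of the directivity factor just defined; the rupture and shear-wave velocities are assumed illustrative values:

```python
import numpy as np

def directivity_factor(theta_deg, v_rupture, beta):
    """Rupture directivity factor D = 1 / (1 - (v/beta) * cos(theta)).
    theta is the angle from the rupture propagation direction."""
    theta = np.radians(theta_deg)
    return 1.0 / (1.0 - (v_rupture / beta) * np.cos(theta))

BETA = 3500.0       # assumed S-wave velocity, m/s
V_R = 0.8 * BETA    # assumed sub-shear rupture velocity

for angle in (0, 45, 90, 135, 180):
    print(f"theta = {angle:3d} deg -> D = {directivity_factor(angle, V_R, BETA):.2f}")

# For supershear rupture (v > beta), the SH caustic angle is:
v_super = 1.4 * BETA
print("caustic angle:", np.degrees(np.arccos(BETA / v_super)), "deg")
```

The factor-of-five amplification at theta = 0 relative to theta = 180 shows why stations ahead of the rupture front experience much stronger shaking.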
Recent rock-mechanics experiments also show that the slip function and slip rate during fault rupture differ significantly from those of the Brune [1] model. Taking into account the contribution of the fracture energy during slip weakening, the revised Brune model can be expressed as in [7], where u and u̇ are the slip displacement and slip velocity, respectively; Δτd is the dynamic stress drop; t is time, with tc ~ r/β; and Dc is the critical slip distance. The contribution of the fracture energy should be included in model calculations. Specifically, the following key issues should be studied in depth in future work: the influence of fault segmentation, fault geometry and fault type on ground motion; the directivity of rupture propagation and the influence of caustics on ground motion during supershear rupture; site effects, that is, the amplification or de-amplification of ground motion by shallow velocity/density structure and its nonlinear aspects; basin-model analysis, that is, the influence of basin floor and topographic relief on ground motion; the differences in ground-motion characteristics between buried and surface-rupturing faults; the influence of different dynamic fault-slip mechanisms on ground-motion intensity, and how rock-mechanics experimental results can be applied to macro-scale model calculations; the establishment of dynamic models and equivalent kinematic models based on the physical process of fault rupture; the development of numerical techniques, including the applicability, validity and comparability of different numerical methods; and model/parameter uncertainty analysis. In-depth research on these topics requires the support of more experimental data. Extrapolating laboratory rock-mechanics results to large-scale numerical simulation requires more seismic observations across different magnitudes and scales to resolve the scaling relationship between large and small earthquakes; extending the numerical computation of dynamic models to the high-frequency band (≥ 1 Hz) requires new computational methods and synthesis methods that conform to physical principles; and the development of fine crustal velocity models is likewise an important link in strong ground motion simulation. Realizing these goals will clearly involve a number of challenging topics, but we have reason to believe that solving, or partially solving, the above problems is of great significance for strengthening and perfecting the earthquake disaster defense system.", "Mantle convection is an important process in the Earth's interior. On geological time scales it controls the formation and distribution of continents and oceans, and affects the Earth's climate, glacial cycles, the evolution of life, and the formation of resources and energy [1].
For today's Earth, the surface manifestation of mantle convection is the tectonic motion of plates. Plate tectonics is a model describing this motion: it holds that the Earth's surface is divided into several rigid plates that move relative to one another; plates accrete at mid-ocean ridges and are consumed by subduction at trenches; plate velocities are of the order of a few centimeters per year; and large numbers of earthquakes, volcanic events and orogenies occur at plate boundaries [1] (Fig. 1). The main content of plate tectonic theory was essentially established in the 1960s, and continental drift is an integral part of it [1,2]. Figure 1 Schematic diagram of the major plates and their motion (from USGS). Research shows that over roughly the last 1 billion years of Earth's evolution, the continents have twice assembled into supercontinents and broken apart again [3]: Rodinia was basically assembled by about 900 Ma and began to break up around 750 Ma; Pangea began to assemble around 330 Ma and began to break up around 175 Ma. Many studies indicate that the Earth has had active plate tectonics over a considerable portion of its history, while research also suggests that, within the solar system, the Earth is the only planet with plate tectonic motion. Direct questions then follow: why can the Earth produce plate motion? When did it start? Will it stop? If plate motion existed very early, was it any different from today's? To address these questions we must start from mantle convection, because mantle convection is the direct cause behind plate motion. Fig. 2 The supercontinent Pangea (a) at 195 Ma and the supercontinent Rodinia (b) at 750 Ma. Mantle convection is the main mechanism by which heat is transported outward from the Earth's interior, and it is thus the internal cause of plate motion, continental drift, volcanism, earthquakes and orogeny; it also affects the development and evolution of the Earth's magnetic field. Mantle convection controls the overall development and evolution of the Earth. The problem is that we cannot enter the Earth's interior to observe mantle flow directly. Theory, experiment and numerical simulation all show that the geometry of the fluid domain, the viscosity structure, and the Rayleigh number strongly influence the flow. Figure 3 shows a convection image for a two-dimensional rectangular region of constant viscosity at a Rayleigh number of 10^5; two symmetric boundary layers form at the upper and lower boundaries. For a three-dimensional spherical shell, if the upper mantle is less viscous and the lithosphere stiffer, a convective pattern dominated by the degree-1 spherical harmonic forms, and under the action of a supercontinent this pattern evolves into one dominated by the degree-2 spherical harmonic [3] (Fig. 3). This provides an explanation for the Earth's plate motion and continental drift over roughly the last billion years. Fig. 3 Numerical simulation results of convective motion.
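For a rough sense of scale of the Rayleigh number mentioned above, the following sketch evaluates the standard definition Ra = ρgαΔT·d³/(κη) with commonly quoted order-of-magnitude mantle values, all assumed here for illustration:

```python
def rayleigh_number(rho, g, alpha, delta_t, d, kappa, eta):
    """Thermal Rayleigh number Ra = rho*g*alpha*dT*d^3 / (kappa*eta)."""
    return rho * g * alpha * delta_t * d**3 / (kappa * eta)

ra = rayleigh_number(
    rho=4.0e3,      # mantle density, kg/m^3 (assumed)
    g=10.0,         # gravity, m/s^2
    alpha=2.0e-5,   # thermal expansivity, 1/K (assumed)
    delta_t=1.5e3,  # superadiabatic temperature contrast, K (assumed)
    d=2.9e6,        # mantle depth, m
    kappa=1.0e-6,   # thermal diffusivity, m^2/s (assumed)
    eta=1.0e21,     # dynamic viscosity, Pa*s (assumed)
)
print(f"Ra ~ {ra:.1e}")   # of order 1e7, far above the critical value (~1e3)
```

With Ra several orders of magnitude above critical, vigorous convection of the kind simulated in Fig. 3 is expected.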
From studies of continental drift and mantle convection, present-style plate motion can be traced back at least to the late Proterozoic; in fact, many geological, geophysical and geochemical observations suggest that plate tectonic motion can be traced back to the mid-Archean, or even the Hadean. Analysis of the paleomagnetism of Archean and Proterozoic rocks shows that the geomagnetic field already existed at least 3.5 billion years ago [4] and had the characteristics of the modern field. Paleomagnetic pole studies show that there was relative motion between southern Africa (the Kaapvaal craton) and North America (the Superior craton) 2.7 to 2.1 billion years ago [5]. From geochemical analyses of ancient crystals and rocks, such as Hadean zircons and Archean rocks, scientists infer that the early Earth's surface was also cool, with both oceans and land, not greatly different from the present environment [6]; and many Archean island arcs have geochemical characteristics similar to modern ones. Structural geology shows that some ancient orogenic belts, such as the Trans-Hudson in Canada, the Svecofennian in Finland, and the Mazatzal-Yavapai in the United States, have structures similar to those produced by modern plate subduction and collisional convergence [5]. These studies suggest that plate tectonics may have existed throughout Earth's history. Although it remains controversial whether the principles of plate tectonics can be applied to geodynamic phenomena in the Precambrian, scientists generally believe that as early as the Archean, convection was an important process in the mantle, and was more vigorous than it is now; active plate tectonics dates back to at least the mid-Archean (3.1 billion years ago). However, a few scientists still disagree with this view, notably Stern, who argues that the geological record of ophiolites, blueschists and ultrahigh-pressure metamorphic rock assemblages associated with subduction provides insufficient evidence that modern-style plate subduction began before the Mesoproterozoic [7]. If so, this would mean that the Earth lacked plate tectonics for about four-fifths of its life. The oceanic lithosphere is in fact the upper boundary layer of the convecting mantle, and its subduction is essentially caused by the gravitational instability that results from the cooling and thickening of the lithosphere: when the lithosphere has cooled sufficiently, its negative buoyancy produces instability, driving the convective motion of the mantle [8] and thereby the motion of the surface plates, especially when the influence of the core-mantle boundary layer is small. For the present Earth, it takes on average about 100 million years of cooling for the lithosphere to reach subduction [1,2]. For the early mantle, if it was 50℃ hotter than at present, roughly 3 to 3.5 billion years of cooling would have been needed before subduction could occur [9]. Does this mean that plate tectonics could not have arisen too early? In the Earth's early days the mantle was hotter than now, and mantle convection should have been more vigorous; if plate tectonics existed then, surface plate motion would have been faster than now, and the oceanic lithosphere would have been thinner, since it had less time to cool. On the other hand, higher past mantle temperatures also imply that the oceanic crust, formed by the melting and differentiation of mantle material rising at mid-ocean ridges, was thicker than it is now [1,2].
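The "~100 million years of cooling" figure quoted above can be related to the standard half-space cooling estimate of thermal boundary-layer thickness, δ ≈ 2.32√(κt); the sketch below, with an assumed thermal diffusivity, is only an order-of-magnitude illustration:

```python
import math

def boundary_layer_thickness(kappa, t_seconds):
    """Half-space cooling estimate: delta ~ 2.32 * sqrt(kappa * t)."""
    return 2.32 * math.sqrt(kappa * t_seconds)

KAPPA = 1.0e-6                       # thermal diffusivity, m^2/s (assumed)
t = 100e6 * 365.25 * 24 * 3600       # 100 million years in seconds

print(f"lithosphere thickness after 100 Myr: "
      f"{boundary_layer_thickness(KAPPA, t)/1e3:.0f} km")
```

The result, of order 130 km, matches the thickness at which old oceanic lithosphere is commonly said to become gravitationally unstable.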
Since oceanic crust is less dense than the mantle lithosphere, thickening of the crust increases the buoyancy of the oceanic lithosphere, and thick oceanic crust atop thin lithosphere would be very unfavorable for subduction. Davies [9] calculated that if the early mantle was 50°C hotter than at present, subduction of the oceanic lithosphere may not have become possible until the last 0.9 to 1.4 billion years. Considering that the transition from basalt to eclogite in the crust would have increased its density, he argued that early plate tectonics may nonetheless have existed, but at a slower rate and in a different form than today. Plate tectonics is the most basic theory of earth science, and clarifying its most basic open issue, the time when plate tectonics began, is of great theoretical significance. Because scientists understand early plate tectonic activity differently, experts in geology, geochemistry, geophysics and geodynamics have begun to gather to discuss this question [10]. The crux of the matter is that, precisely because of plate tectonic motion itself, very few records of the Earth's early days remain; geoscientists are working hard to gather clues and reconstruct the picture from them.", "Hotspots are areas of unusual volcanic activity that cannot be directly linked to plate tectonic activity [1]. Most volcanoes, such as those around the Pacific Rim, are associated with plate subduction or seafloor spreading; they are not hotspots. Morgan attributed hotspot volcanism to thermal plumes originating deep in the mantle [1,2]. Mantle plumes are nearly cylindrical upwellings of hot mantle rock and represent a basic form of mantle convection [2,3]; hence the popular statement that a hotspot is the surface manifestation of a mantle plume [4]. The hotspot and mantle plume hypothesis is now cited by most textbooks [2~4]. But the questions remain: how many hotspots are there on the Earth's surface? Is there really a mantle plume beneath each hotspot? If not, what causes the hotspot? If so, where does the plume come from, and how does it form? Hotspots are areas of the Earth's surface that are not associated with plate tectonic activity but have experienced prolonged active volcanism. This definition is not particularly strict, which is one reason no consensus has been reached on the total number of hotspots [1~6]. Generally speaking, hotspots lie in plate interiors, Hawaii being an example; however, anomalous areas of thickened crust at mid-ocean ridges are sometimes also counted as hotspots, a typical example being Iceland. Several catalogues of hotspots have been published, with counts ranging from 20 to more than 100 [1~6]. Many hotspots leave a long trail of volcanic products, such as Hawaii (Fig. 1a); the relative motion between hotspots is very small, and many hotspots are accompanied by topographic uplift. Fig. 1 (a) Schematic diagram of the magmatic rocks and topography of the oceanic Hawaii-Emperor volcanic chain; (b) the Réunion volcanic chain and the Deccan basalt province. Hotspot magmas are basalts, like those of mid-ocean ridges, but differ from them in many aspects of chemical composition [2~4,7]; some hotspots are also connected by volcanic chains to large igneous provinces or flood basalt provinces (Fig. 1b).
The concept of a stable heat source beneath a hotspot goes back to Wilson [3], but it was Morgan who first clearly proposed that hotspot magma comes from a deep mantle plume, and who held that mantle plumes are columnar bodies of hot, buoyant mantle rock originating at the core-mantle boundary [2] (Fig. 2). Morgan's mantle plume hypothesis, that surface hotspots originate from deep mantle plumes, was quickly accepted by scientists, and the existence of mantle plumes is consistent with our current understanding of mantle dynamics [1,3,4]. Mantle convection is driven by boundary-layer instabilities: the subducting lithosphere results from instability of the upper boundary layer of the convecting mantle, and there must likewise be a thermal boundary layer at the bottom of the convecting mantle, whose instability generates mantle plumes [3]. Both laboratory experiments and numerical simulations confirm this view (Fig. 2). Fig. 2 Growth of a mantle plume originating from the bottom thermal boundary layer. Although most geological, geophysical and geochemical evidence is indirect, the mantle plume hypothesis can explain it reasonably well. Analysis of the global distribution of hotspots shows that their spatial distribution is relatively stable. As a plate moves over a plume, the magma erupted by the plume builds a chain of volcanoes on the plate, and the volcanoes along the chain become progressively older with distance from the plume; this is consistent with observation (Fig. 3). Laboratory simulations and numerical calculations both show that a rising high-temperature, low-viscosity plume consists of a large head and a narrow tail [3,4] (Fig. 2); interaction of the tail with the lithosphere forms a volcanic chain like Hawaii's, while the plume head forms a large basalt province (Fig. 1 and Fig. 3). Meanwhile, geochemical observations show many differences in incompatible-element and isotope contents between ocean island basalts and mid-ocean ridge basalts, indicating that they likely come from different regions of the Earth [7]; a direct inference is that mid-ocean ridge basalts come from the upper mantle while ocean island basalts come from the bottom of the mantle. Thus, although direct evidence is scarce, the mantle plume hypothesis has been accepted by most scientists (see Figure 1 in reference [8]). Fig. 3 Schematic diagram of plume-lithosphere interaction.
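For a feeling of how fast a buoyant plume head might rise, one can use the Stokes velocity of a sphere in a viscous fluid, v = 2Δρ·g·a²/(9η), ignoring the viscosity contrast between plume and mantle; every value below is an assumed, order-of-magnitude input:

```python
def stokes_velocity(delta_rho, g, radius, eta):
    """Stokes rise speed of a buoyant sphere: v = 2*drho*g*a^2/(9*eta)."""
    return 2.0 * delta_rho * g * radius**2 / (9.0 * eta)

# Assumed order-of-magnitude values for a thermal plume head.
drho = 4.0e3 * 2.0e-5 * 300.0   # rho*alpha*dT ~ 24 kg/m^3 density deficit
v = stokes_velocity(drho, g=10.0, radius=4.0e5, eta=1.0e21)

print(f"rise speed ~ {v:.1e} m/s ~ {v*3.15e7*100:.0f} cm/yr")
```

A rise speed of tens of centimetres per year, comparable to plate speeds, is consistent with the picture of plume heads traversing the mantle within tens of millions of years.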
However, not all scientists support the mantle plume hypothesis; typical skeptics include Anderson and Foulger [8], who doubt that plumes exist in the mantle at all. Their first objection is that the mantle plume is an unproven hypothesis, yet scientists all agree that their observations can be explained by it, which is itself abnormal. They also have many observational arguments [8]: ① volcanic chains with progressive age changes have not been observed near all hotspots; ② hotspot positions are not fixed, and hotspots move relative to one another; ③ plumes are held to originate from the core-mantle boundary, but seismic inversion does not support this; ④ there is no petrological evidence that the magma beneath all hotspots is hotter than its surroundings; ⑤ not all large basalt provinces are associated with mantle plumes, and some plumes have no corresponding large basalt province. Davies rebutted all of the opponents' arguments, but evidently did not convince them [9]. Seismology should provide independent judgment in the mantle plume debate, since seismic tomography can image the present structure of the Earth's interior: if thermal plumes exist in the mantle, seismology should be able to detect them. Plumes, however, are clearly not as easy to detect as subducting slabs. A subducting slab is more than 100 kilometers thick, its temperature differs from the surroundings by more than 600 degrees Celsius, and its nearly plate-like, two-dimensional geometry makes it much easier to detect; a mantle plume is a columnar three-dimensional structure only about 100 kilometers in diameter with a temperature anomaly of only two or three hundred degrees Celsius. Nevertheless, seismic tomography has yielded images of thermal columnar structures in the mantle [10] (Fig. 4). Because detection is so difficult, however, the inversion results from different seismic studies are not fully consistent, which is in fact one of the arguments advanced by the skeptics. Plates and plumes are both products of mantle convection [3,4], but in essence plate tectonic theory does not need plumes; many observations that plate tectonics cannot reasonably explain are exactly what the plume hypothesis can explain, which is why the plume hypothesis is classified in many textbooks as a component of plate tectonic theory. For the mantle plume hypothesis to become a mantle plume theory, however, scientists must answer many questions, including the basic ones above. Research from theory to observation, spanning geophysics, geology, geochemistry and geodynamics, is essential. On the one hand, we need to improve seismological detection theory and methods to provide more, and higher-precision, structural images of mantle plumes. Fig. 4 Seismic wave velocity structure beneath Hawaii from seismic tomography. On the other hand, we also need more and better geochemical observations to understand the similarities and differences between hotspot magmas and other magmas. At the same time, we need to connect geophysics with geochemistry so as to exploit multiple kinds of observations together, and here geodynamic models can play a very important role [12].", "The term tomography comes from medical X-ray computed tomography (CT), which appeared in the 1970s. Shortly thereafter, CT techniques entered geophysics and were used to probe physical parameters of the Earth's interior.
Seismic tomography is the method of using seismic waves to probe the velocity structure of the Earth's interior. Because the seismic wave velocities of interior materials (rocks, metals) correlate fairly stably with their physical properties, seismic tomography yields knowledge of the material structure of the Earth's interior, and it is a major tool for understanding the deep Earth. The Earth's radius exceeds 6000 kilometers, while the deepest borehole reaches only a little over ten kilometers, so direct access to the deep interior is unrealistic; seismic waves, however, can penetrate to any depth. For more than 100 years, seismographs around the world have recorded an enormous volume of seismic data, still growing daily, and using these data for tomography is a practical way to obtain the Earth's internal structure. Earthquake disasters are among the greatest natural disasters suffered by humankind, and both earthquake prediction and engineering seismic design require accurate knowledge of the material properties around faults and of plate motions, so seismic tomography is also an indispensable tool for earthquake prevention and disaster reduction. Before the 1970s, geophysicists compiled P- and S-wave travel-time tables from global observations and, using simple travel-time inversion methods, obtained important knowledge of the Earth's internal structure, such as the discovery of the Moho, the core-mantle boundary and the inner core-outer core boundary, and the establishment of the one-dimensional layered Earth model, Bullen's model. In the early 1970s, Aki and Dziewonski pioneered three-dimensional seismic tomography [1]. After the 1980s, with the continual growth of global seismic networks, the use of broadband digital data, the practical application of inversion theory, and the rapid development of computers, 3D seismic tomography flourished [2-5]. For the three decades from the 1970s to 2000, the high-frequency ray approximation was the mainstay of seismic tomography; in the decade since 2000, finite-frequency seismic tomography has been developed. Traditional methods have recovered the static large-scale structure of the Earth's interior, the crust, mantle, inner core, outer core, lithosphere, asthenosphere and so on, and these results are no longer controversial. But when used to invert small-scale fine structures such as mantle plumes, the tomographic results are still very rough, and so cannot yet provide a credible quantitative basis for geodynamic problems such as mantle convection and plate motion. In applied geophysics, inaccurate imaging leads to huge economic losses, which is why 3D tomography has so far not become practical in seismic exploration. The reasons the imaging results fail to convince are the approximations of the forward modeling method and the non-uniqueness of the inverse problem. Traditional seismic travel-time tomography is based on the high-frequency ray theory of seismic waves and uses first-arrival travel-time differences as its data.
From the forward modeling perspective, since the true first arrivals are always submerged in noise, first-arrival picks carry a systematic error; and as seismic wavelength and epicentral distance increase, ray theory suffers from the problem of wavefront healing. If the characteristic scale of an anomalous body in the medium is smaller than about 0.5√(λL) (where L is the propagation distance and λ the wavelength), the first-arrival travel-time difference predicted by ray theory deviates seriously from the correct value, and then even arbitrarily dense rays cannot recover the correct image. From the inversion perspective, a ray's travel time is related only to the velocity anomalies on the ray path, so the coefficient matrix of the inversion equations is a large sparse matrix. To invert small-scale fine structure, the grid in the inversion region must be refined. Figure 1 is a two-dimensional schematic of grid density versus seismic ray voids; squares marked with a "+" indicate voids through which no ray passes. At a 4×4 grid, every square is traversed by rays; when the grid is refined to 8×8, 23 squares are not crossed by any ray, and the void area is about 36% of the inversion region. In three dimensions the void region grows even faster with grid refinement; since seismic rays are one-dimensional curves in a three-dimensional volume, the ratio of ray voids to the inversion region tends to 100% as the grid becomes infinitely dense. This gives an intuitive sense of why it is difficult to invert 3D fine structure with ray tomography. Traditional seismic waveform tomography mainly computes long-period S-wave or surface-wave waveforms; for practical reasons, the velocity structure of the great-circle plane between source and station is simplified to an average velocity structure, and the waveform perturbation is related only to velocity perturbations within that great-circle plane, so it shares the forward and inverse defects of ray theory. Since 2000, Dahlen et al. [6~10] have opened up the research field of three-dimensional finite-frequency tomography. Finite-frequency tomography is an inversion method based on the elastodynamic equations, using the relative travel times of seismic waves as its data. In contrast to the "infinitely high frequency" assumption of ray theory, it applies to waves of any frequency, hence the name "finite-frequency" tomography. It still requires some approximations, but because the relative travel time depends mainly on the peak of the seismic phase, it is less affected by noise, the wavefront-healing problem is overcome, and the accuracy of the forward theory can be guaranteed. From the inversion perspective, the data it uses (full-wave correlation travel times, correlation amplitudes, group velocities, polarization angles, etc.) are related to velocity anomalies over a large region surrounding the ray path, so the coefficient matrix of the inversion equations is a dense matrix.
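Before turning to the kernel-coverage schematic (Figure 2, discussed next), the ray-void argument above can be reproduced numerically. The toy sketch below shoots straight rays across a unit square, marks the grid cells each ray crosses, and reports the uncovered fraction as the grid is refined; the ray geometry is an arbitrary assumption, so the numbers differ from the article's Figure 1, but the trend is the same:

```python
import numpy as np

def void_fraction(n_cells, n_rays=12, n_samples=400, seed=1):
    """Fraction of an n x n grid not crossed by any straight ray.

    Rays join random points on the left and right edges of the unit
    square; each ray is densely sampled and the cells hit are marked.
    """
    rng = np.random.default_rng(seed)
    hit = np.zeros((n_cells, n_cells), dtype=bool)
    t = np.linspace(0.0, 1.0, n_samples)
    for _ in range(n_rays):
        y0, y1 = rng.random(2)                  # entry/exit heights
        x, y = t, y0 + (y1 - y0) * t            # straight ray path
        i = np.minimum((x * n_cells).astype(int), n_cells - 1)
        j = np.minimum((y * n_cells).astype(int), n_cells - 1)
        hit[i, j] = True
    return 1.0 - hit.mean()

for n in (4, 8, 16, 32):
    print(f"{n:2d} x {n:2d} grid: void fraction = {void_fraction(n):.2f}")
```

The same fixed set of rays leaves an ever larger fraction of cells untouched as the grid is refined, which is the intuition behind the 100% limit stated above.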
Figure 2 is a two-dimensional schematic of grid density versus the space covered by finite-frequency kernels. At a 4×4 grid, no region is left uncovered; when the grid is refined to 8×8, only one square is left uncovered by the kernels. In three dimensions, the kernel-covered space grows with grid refinement at roughly the same rate as in two dimensions. Since a finite-frequency kernel occupies a three-dimensional volume, in the limit of an infinitely dense grid the fraction of the inversion region left uncovered tends to 1 − Vk/V (where Vk is the volume occupied by the finite-frequency kernels and V is the volume of the inversion region). In other words, as long as an appropriate source-station distribution is chosen so that the finite-frequency kernels completely cover the inversion region, the data of finite-frequency tomography always carry all the velocity-structure information of that region. This strong internal connection between the data and the velocity structure is of great significance for overcoming the non-uniqueness of the inverse problem. In addition, finite-frequency tomography can filter seismic waves into sub-bands and extract multi-frequency travel-time information for joint inversion, reducing the underdetermination of the inverse problem. In conclusion, finite-frequency tomography has begun to show its potential for improving the accuracy and resolution of seismic imaging. But this field is still very young; many new concepts remain vague, and it is far from having established a rigorous theoretical system or produced convincing high-resolution inversion images. More scholars are needed to pursue in-depth research and verification and to push forward the development of 3D seismic tomography. Figure 2 Schematic diagram of the relationship between inversion-region grid density and the space covered by finite-frequency kernels", "China is one of the countries subject to frequent high-intensity earthquakes; people still remember the huge loss of life and property caused by the magnitude 8.0 Wenchuan earthquake of 2008. Earthquakes are a major way of releasing the energy accumulated by plate motion, and most occur on active faults at plate boundaries or within plates. Although scientists have long understood the general physical mechanism of earthquakes, the preparation process of an earthquake and the changes in fault properties before and after it can only be inferred from surface observations, so research related to earthquake prediction and fault activity remains a recognized worldwide problem in the earth sciences. Scientists have recently discovered a mode of fault activity that, with further study, is thought to have the potential to bring breakthroughs in earthquake prediction. This mode of fault activity lies between the fast rupture that produces ordinary earthquakes and stable slow creep, and scientists call it the "slow earthquake" [1]. Compared with ordinary earthquakes, the dislocation displacement of a slow earthquake is limited and the rupture slip lasts longer, so there is no, or only very weak, seismic wave radiation; slow earthquakes therefore usually cause no damage and are rarely noticed.
The seismic signals generated by slow earthquakes last a relatively long time, and clear longitudinal waves (P waves) and shear waves (S waves) are difficult to identify; they resemble the "volcanic tremor" observed near active volcanoes and are therefore called "non-volcanic tremor" [2]. According to magnitude and source rupture duration, slow earthquakes can be subdivided into "low-frequency earthquakes" (source duration < 1 s) and "very-low-frequency earthquakes" (source duration ~20 s); non-volcanic tremor is generally considered to consist of many low-frequency earthquake swarms [3]. Relatively large slow earthquakes (moment magnitude Mw > 6) can usually produce surface deformation and can thus be observed by the Global Positioning System (GPS) or other geodetic instruments; their sources slip for days or months, and they are often called "slow-slip events". Slow earthquakes have been found in the circum-Pacific subduction zones, on the San Andreas fault in California, in Hawaii and in Taiwan [4]. Scientists have found that near some subduction zones, non-volcanic tremor is usually accompanied by slow-slip events at regular time intervals, a coupling phenomenon named "episodic tremor and slip" (ETS) [5]. Observations of slow earthquakes began many years ago. Earlier studies found that some earthquakes have source durations longer than predicted by the standard moment-duration relationship; such events usually occur on oceanic strike-slip faults or within near-surface sedimentary formations, or are associated with glacial activity. The newly discovered slow events (low-frequency earthquakes, very-low-frequency earthquakes, non-volcanic tremor, etc.) are mostly distributed in the deep parts of subduction zones or continental strike-slip faults, usually below the locked zone where ordinary earthquakes occur (Fig. 1). These weak seismic signals were usually dismissed as noise, and were detected only recently in areas with highly sensitive seismometers and continuous recording. Similarly, although slow-slip events were discovered many years ago, their importance was not appreciated until large, dense GPS networks were established in recent years. Fig. 1 Schematic diagram of the structure and earthquake distribution near a subduction zone.
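Since event sizes above are quoted both as moment magnitude Mw and as seismic moment M0, the standard conversion Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m, is worth keeping at hand; this is a generic helper, not code from the article:

```python
import math

def moment_to_mw(m0):
    """Moment magnitude from seismic moment (M0 in N*m)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def mw_to_moment(mw):
    """Seismic moment (N*m) from moment magnitude."""
    return 10.0 ** (1.5 * mw + 9.1)

print(f"Mw 6.0 -> M0 = {mw_to_moment(6.0):.2e} N*m")
print(f"M0 = 1e19 N*m -> Mw {moment_to_mw(1e19):.2f}")
```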
Although systematic research on slow earthquakes has only just begun, there have been many breakthrough results in recent years [4]. Nevertheless, many problems remain, which can be roughly divided into three classes: ① the physical mechanism of slow earthquakes; ② the conditions under which slow earthquakes occur; ③ the relationship between slow earthquakes and large earthquakes. Regarding the physical mechanism, there are currently two views: one holds that slow earthquakes, like ordinary earthquakes, are generated by shear motion along fault planes [5]; the other holds that, like the volcanic tremor mentioned above, they are related to the movement of fluids in the deep crust [2]. The latest observations show that slow-earthquake hypocenters are mostly concentrated near plate-boundary faults [3] and that their focal mechanisms are double-couples consistent with ordinary earthquakes, which supports the first view. By contrast, the non-volcanic tremor found in the Cascadia subduction zone of the northwestern United States is distributed over a considerable depth range in the hanging wall of the fault plane [6], an observation better explained from the perspective of fluid movement. Clearly, improving the location accuracy and focal-mechanism solutions of slow earthquakes is of great significance for further understanding their physical mechanism. Although slow earthquakes have been discovered on many plate-boundary faults, scientists have no unified understanding of their distribution and the conditions that generate them. First, not all plate boundaries exhibit slow earthquakes; second, although slow earthquakes have also been observed in plate interiors, so far there are only a few cases. This uneven distribution is at least partly related to the uneven distribution of high-resolution seismic and geodetic networks. Even so, in some regions where seismic stations are distributed fairly uniformly, such as Japan, ETS events are found only in the Shikoku subduction zone of southern Japan. For this reason scientists initially thought ETS events occurred only in relatively young subduction zones, perhaps controlled by temperature or rock properties; the latest research, however, has found ETS events in relatively old subduction zones as well, such as Alaska and Costa Rica. Scientists generally believe that slow earthquakes occur near faults where large amounts of fluid are present; in this case the fluid pressure is high, approaching lithostatic pressure, so the effective friction on the fault is reduced, favoring fault slip and giving rise to slow earthquakes. The abundant fluid near subduction zones is probably produced mainly by dehydration of the subducting plate, while the source of the fluid responsible for high pore pressure in strike-slip faults is not clear. Investigating slow earthquakes on a global scale, and especially strengthening slow-earthquake observation within China and other plate interiors, will help resolve the scientific question of the conditions under which slow earthquakes occur. Another related scientific puzzle is the relationship between slow earthquakes and ordinary large earthquakes. Slow earthquakes themselves cause no significant loss of life or property, but scientists hope to learn more about ordinary destructive earthquakes by studying them, for the following reasons. First, slow earthquakes usually occur above or below the locked zone where ordinary earthquakes occur (Fig. 1), so delineating that zone is critical for assessing the strong ground motion and associated hazards induced by large thrust earthquakes near subduction zones. Second, stress changes of only a few kilopascals (kPa), much smaller than one atmosphere, caused by teleseismic surface waves [7], Earth tides [8], and even typhoons [9], can trigger slow earthquakes; being so sensitive to stress changes, slow earthquakes can serve as natural "strain gauges" for observing changes in fault properties before and after large earthquakes. In addition, slow earthquakes, while releasing energy accumulated by plate motion, may themselves trigger large earthquakes [10].
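The effective-friction argument above can be made concrete with the standard effective-stress relation τ = μ(σn − p): as pore pressure p approaches the lithostatic normal stress σn, the shear resistance collapses. Depth, density and the friction coefficient below are assumed illustrative values:

```python
def shear_resistance(mu_f, sigma_n, pore_pressure):
    """Frictional shear resistance tau = mu_f * (sigma_n - p), in Pa."""
    return mu_f * (sigma_n - pore_pressure)

DEPTH = 30e3                      # m, assumed fault depth
SIGMA_N = 2800.0 * 9.8 * DEPTH    # ~lithostatic normal stress, Pa
MU_F = 0.6                        # Byerlee-type friction coefficient

for ratio in (0.4, 0.9, 0.99):    # pore pressure / lithostatic ratio
    tau = shear_resistance(MU_F, SIGMA_N, ratio * SIGMA_N)
    print(f"p/sigma_n = {ratio:.2f} -> tau = {tau/1e6:7.1f} MPa")
```

With near-lithostatic pore pressure, the shear resistance drops from hundreds of MPa to a few MPa, which is why kilopascal-level tidal or teleseismic stress perturbations can plausibly trigger slip.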
If changes in slow seismic activity, such as slow-slip events or non-volcanic tremor, can be observed before major earthquakes, it will be of great significance for understanding the laws of large-earthquake activity, for accurately predicting major earthquakes, and for earthquake prevention and disaster reduction. Although observational research in this area is still controversial [10], most seismologists remain confident in the breakthroughs that slow-earthquake research can bring. With the spread of high-precision seismic and geodetic networks in China and worldwide, the development of slow-earthquake identification and location techniques, and breakthroughs in theoretical models and related experiments, the scientific problems related to slow earthquakes are expected to be solved one by one. Seismology is still a young science compared with many other disciplines, and the problem of slow earthquakes offers a good research direction for young people interested in earthquake science.", "All geodetic observations are carried out on the Earth's surface or in near-Earth space, and every observation contains the combined influence of various geodynamic factors: motion of the Earth's surface, motion of the Earth's interior, changes in the gravity field, and the overall rotation of the Earth. Conversely, geodetic observations are an important reference for studying these dynamical phenomena. The various geodynamic factors, however, interact with and influence one another, and screening out and separating their effects is one of the difficult problems of geodetic theory and practice. In-depth research on separating the various dynamic factors in geodesy will not only help geoscientists understand the states, laws and dynamical mechanisms of the Earth's various motions, but will also help humankind understand and predict changes in the Earth's environment and formulate plans for the sustainable development of human society, and thus has important scientific significance. Establishing and maintaining an Earth reference frame requires separating the influence of the various dynamic factors. To study the figure of the Earth and its gravity field, to describe the Earth's various geometric and dynamic characteristics, and to meet the needs of high technology such as aviation and spaceflight, it is necessary to establish a high-precision, long-term stable Terrestrial Reference Frame (TRF). After more than a decade of effort, the accuracy of the International Terrestrial Reference Frame (ITRF) has advanced greatly, and the ITRF series issued by the International Earth Rotation Service (IERS) is internationally recognized as the most accurate and stable reference frame [1,2]. The ITRF takes the Earth's center of mass as its origin, where the center of mass is that of the entire Earth including the oceans and atmosphere. Because of the redistribution of the Earth's mass, the center of mass is displaced relative to the origin of the reference frame [2]; this displacement is called geocenter motion.
Calculations from various space-geodetic techniques (such as SLR, GPS and DORIS) show that the annual variation of the geocenter relative to the origin of the Earth reference frame is several millimeters [8,9]. The ITRF is realized by a set of coordinates of a large number of stations fixed on the Earth's surface and assumed to move only linearly, and these coordinates are computed from space-geodetic observations such as VLBI, SLR, GPS and DORIS, which play different roles in establishing and maintaining the frame: VLBI links and orients the terrestrial frame with respect to the space inertial system, SLR determines the absolute coordinates of frame points, and GPS, being economical and fast, densifies the frame. In using ground or space observations to establish the Earth reference frame, one must consider the impact of various geodynamic factors on the observations, such as crustal motion, tidal loading, atmospheric loading and the local stability of stations, and even the nonlinear effects these factors produce in the geodetic coordinate frame [3-6]. In general, the effects of geodynamic factors (crustal motion, changes in the Earth's rotation, Earth tides, sea-surface topography, sea-level change, and so on) on geodetic observations or results are computed from particular models, and every model has its assumptions. For example, the NNR-NUVEL1A model, the standard global plate-motion model of the ITRF2000 frame, assumes in its construction that plates are rigid, that plate boundaries are narrow, and that deformation exists only in boundary zones and does not occur within plate interiors; in fact plates are not rigid, and plate motion is a compound of overall rotation and internal deformation [7]. With the development of high technology, the accuracy demanded of the ITRF, and hence of the corresponding geodetic observation and processing, grows ever higher, and calibrating the coordinate frame from surface or space geodetic observations alone increasingly fails to meet requirements. It is therefore necessary to study the influence of the various geodynamic factors on the ITRF from the standpoint of geological change and geodynamic processes, and to develop correction models for geodetic observations for the various geodynamic factors; constructing such correction models requires a detailed distinction among the natures and mechanisms of those factors.
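Rigid plate motion of the NNR-NUVEL1A type reduces to the relation v = ω × r between a plate's Euler (rotation) vector and a site's position vector; the sketch below evaluates it for a made-up Euler vector and site, purely to illustrate the rigid-rotation assumption just described:

```python
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius, m

def site_velocity(euler_deg_per_myr, lat_deg, lon_deg):
    """Surface velocity v = omega x r for a rigid plate (m/yr)."""
    omega = np.radians(np.asarray(euler_deg_per_myr)) / 1.0e6  # rad/yr
    lat, lon = np.radians([lat_deg, lon_deg])
    r = R_EARTH * np.array([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])
    return np.cross(omega, r)

# Hypothetical Euler vector (deg/Myr, Cartesian) and site coordinates.
v = site_velocity([0.1, -0.3, 0.4], lat_deg=35.0, lon_deg=105.0)
print(f"site speed ~ {np.linalg.norm(v)*1000:.1f} mm/yr")
```

In such a model every site on a plate inherits its velocity from a single rotation vector, which is exactly the assumption that breaks down where internal deformation matters.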
Monitoring and interpreting geodynamic phenomena likewise requires separating the influences of the various dynamic factors. Beyond the high-precision reference frame, monitoring and interpreting geodynamic phenomena requires studying the detailed changes of the various geodynamic processes; however, the influences of different geodynamic factors on geodetic observations are often alternating and cumulative. For example, changes in the coordinates of a surface point may come from crustal motion, geocenter motion or polar motion, and it is difficult to isolate the influence of a single geodynamic phenomenon from geodetic observations alone. Therefore, using geodetic observations to monitor these changes, or to perform inversions, is prone to ill-posedness: the inversion solution is not unique. In addition, the number of surface frame points is extremely limited, so the spatial resolution is very low, and frame-point coordinates are often re-observed only every few years, so the temporal resolution is also very low. Inverting for various geodynamic parameters from such low-spatial-resolution, low-temporal-resolution geodetic observations without a reliable prior model is quite unrealistic. Another problem in using geodetic methods to monitor geodynamic phenomena is observational error. Measurement errors are unavoidable, and in many cases their magnitude exceeds that of the geophysical signal to be extracted, that is, the signal-to-noise ratio is small; extracting small signals from observations with larger errors again requires prior information from geophysical models. However, the accuracy and resolution of existing prior geophysical models are often low. For example, the famous crustal-motion model NNR-NUVEL1A, an important model for correcting geodetic deformation, was established from geological and geophysical data spanning millions of years: in time it is an average over several million years of geological history, and in space it is computed at the scale of continental plates [5]. It is often difficult to use such a model to correct geodetic observations in particular local areas, or as prior information for inverting and separating other geodynamic parameters. Internationally, great progress has been made in recent years on theoretical models of crustal motion, the establishment of the Earth reference frame, the interactions among atmosphere, ocean and lithosphere, Earth rotation and gravity-field change, mantle convection and mantle dynamics, the relative motions of the Earth's internal layers and their dynamical effects, mantle viscosity, dissipation and post-glacial rebound, and the motion of the Earth's inner core and core models [5], but the level of detail is still insufficient, and further in-depth research combined with geodetic observations is needed.", "The Earth has a layered structure. Since the solid inner core sits within the liquid outer core, whose viscosity is very low, super-rotation of the inner core (that is, a rotational angular velocity of the inner core greater than that of the mantle) becomes possible [1,2]. Lambeck deduced early on that inner-core super-rotation should exist [3]. To explain the generation mechanism of the geomagnetic field, Glatzmaier and Roberts asserted [4] that the magnitude of the inner-core super-rotation is about 3°/a. Inspired by their work, Song and Richards [5] and Su et al. [2], through analysis of about 30 years of seismic data, successively asserted super-rotation rates of 1.1°/a and 3°/a, respectively. A series of subsequent studies, however, showed that if inner-core super-rotation exists, its magnitude does not exceed 0.5°/a [1,6]. Seismic observations show that P waves passing through the inner core along the north-south direction arrive about 3% faster than those travelling in equatorial directions, an observation that can be explained by assuming that the inner core has axisymmetric anisotropy [1,2,5].
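A common simplified way to express such axisymmetric anisotropy is a velocity perturbation of the form δv/v = ε·cos²ξ, where ξ is the angle between the ray and the symmetry axis; the sketch below, with ε = 3% as suggested by the observation above and an assumed 1200-km inner-core path, shows the resulting travel-time contrast (the parameterization itself is an illustrative simplification):

```python
import math

V0 = 11.0e3      # reference inner-core P velocity, m/s (assumed)
EPS = 0.03       # 3% polar-equatorial velocity contrast
PATH = 1.2e6     # ray-path length through the inner core, m (assumed)

def travel_time(xi_deg):
    """Travel time along a path at angle xi from the symmetry axis,
    with v(xi) = V0 * (1 + EPS * cos(xi)^2)."""
    v = V0 * (1.0 + EPS * math.cos(math.radians(xi_deg)) ** 2)
    return PATH / v

dt = travel_time(90.0) - travel_time(0.0)
print(f"polar path arrives {dt:.1f} s earlier than equatorial path")
```

A few seconds of travel-time advance is easily measurable, which is why inner-core anisotropy, and any rotation of its fast axis, leaves a detectable seismic signature.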
Su and Dziewonski [7] analyzed nearly 30 years of data from the International Seismological Centre (ISC) database and found that the inner core's anisotropy symmetry axis points toward (79.5°N, 160°E), that is, it deviates from the Earth's rotation axis by about 10° [2] (see Fig. 1). It is on the basis of this hypothesis that scientists have detected inner-core super-rotation from seismic data; if the anisotropy symmetry axis coincided with the Earth's rotation axis, super-rotation could not be detected from long-term seismic observations. Figure 1 O-XYZ is the Cartesian coordinate system fixed to the mantle, with OZ pointing along the rotation axis of the Earth (mantle); the solid inner core sits within the very-low-viscosity liquid outer core; the inner core's rotation axis coincides with its anisotropy (fast) axis and makes an angle of about 10° with the Earth's rotation axis; the inner core's rotation axis turns about the Earth's rotation axis, which is the precession of the inner core [9]. However, the assumption that the inner core's anisotropic symmetry axis does not coincide with its rotation axis is hardly convincing. In fact, if it is assumed that the inner core's rotation axis is its anisotropy symmetry axis, the results given by Song and Richards [5] and Su et al. [2] can also be explained by a possible precession effect [8]. One can reason as follows: the inner core's rotation subjects its material to centrifugal force, so that it contracts toward the poles and bulges at the equator, forming an ellipsoid [9]; in the formation of the solid inner core, no direction can be preferred except the direction of the inner core's rotation, since along it there is no centrifugal force. Therefore, if the inner core is axisymmetrically anisotropic, the anisotropy axis most likely coincides with the rotation axis [9]. Since the inner core rotates sufficiently freely within the liquid outer core, much as a planet moves in free space [1,2], it can be deduced that its rotation axis should precess, just as the Earth's rotation axis precesses; it is therefore hard to imagine the inner core's rotation axis coinciding with the Earth's, except by some coincidence. Given that the angle between the inner core's anisotropy axis and the Earth's rotation axis is 10° [2], it is reasonable to infer that the super-rotation deduced from seismic-wave analyses [2,5] is actually the precession of the inner core [8]. Is the super-rotation detected from seismic data true super-rotation, or a precession effect? This remains an unresolved problem. Even if it is super-rotation in the true sense, there is still much controversy over its magnitude; many scientists believe [1,10] that current seismic analyses are insufficient to determine it. To pin the rate down further, Zhang et al. [6] analyzed 18 sets of seismic waveform doublets spanning 35 years and determined a super-rotation rate of 0.27°~0.53°/a. At this point, the question of the inner-core super-rotation rate appeared to be settled.
However, to explain the origin of the geomagnetic field, scientists have proposed various geodynamo theories; currently the most popular are the Glatzmaier-Roberts dynamo [4] and the Kuang-Bloxham dynamo [11]. The former requires an inner-core super-rotation rate of 3°/a, while the latter requires the rate to be a variable quantity, taking positive or negative values. Obviously, both theories are inconsistent with the super-rotation rate of 0.27°~0.53°/a currently deduced from seismic data. Where does the problem lie? Is the seismically determined super-rotation rate in error [2,5,6], or are the Glatzmaier-Roberts [4] and Kuang-Bloxham [11] dynamo theories flawed? This is an unresolved puzzle. If the rotation axis of the inner core coincides with its anisotropy symmetry axis, then the precession of the inner core necessarily produces a time-varying gravitational field whose magnitude depends on the angular rate of the precession. Assuming a 10° angle between the anisotropy symmetry axis and the Earth's rotation axis [2] and an inner-core precession rate of 0.5°/a [6], a 1° super-rotation of the inner core (actually inner-core precession, taking two years) produces a maximum gravity change of 0.37 μGal at the Earth's surface (Fig. 2) [9]. Therefore, by monitoring time variations of the gravity field it may be possible to extract the signal of inner-core super-rotation (inner-core precession). The superconducting gravimeter is a high-sensitivity, high-precision gravity instrument whose observation accuracy has reached the nGal (10⁻² nm/s²) level; there are about 30 superconducting gravity stations worldwide, with very rich observation records. Fig. 2  Surface gravity changes caused by a 1° over-rotation of the inner core, computed at the centers of a 5°×5° grid [9]; the vertical axis is the gravity change (in μGal), and the two horizontal axes are latitude and longitude (in degrees). The gravity-field changes caused by inner-core super-rotation (inner-core precession) are global signals (Figure 2); if superconducting gravity observations from many stations worldwide can be stacked to suppress noise and amplify the common signal, gravity-field monitoring may become an effective means of detecting the super-rotation (precession) of the Earth's inner core. Whether multi-station superconducting gravity data can indeed detect it is a difficult problem awaiting solution.", "The rotational motion of a rotationally symmetric rigid Earth (with the two equatorial principal moments of inertia equal, A = B) can be described by Euler's dynamical and kinematical equations, which admit analytical solutions; in that case the rotation rate (and hence the length of day) is constant. The real Earth, however, is layered (crust and mantle, outer core, inner core), is elastic, and has three unequal principal moments of inertia A ≠ B ≠ C.
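The constancy of the rotation rate for the symmetric rigid body (A = B) just mentioned can be checked numerically. The sketch below integrates the torque-free Euler equations with synthetic, illustrative moments of inertia; it is a minimal demonstration, not an Earth model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Torque-free Euler equations for a rigid body with principal moments A, B, C.
# With A = B (rotational symmetry), w3 stays constant and |w| does not vary:
# the "constant rotation rate / constant length of day" case in the text.
A, B, C = 1.0, 1.0, 1.005          # synthetic moments of inertia (not Earth values)

def euler_rhs(t, w):
    w1, w2, w3 = w
    return [(B - C) / A * w2 * w3,
            (C - A) / B * w3 * w1,
            (A - B) / C * w1 * w2]  # identically zero here, since A == B

w0 = [0.01, 0.0, 1.0]              # small tilt between w and the symmetry axis
sol = solve_ivp(euler_rhs, (0.0, 5000.0), w0, rtol=1e-10, atol=1e-12)

print("w3 min/max           :", sol.y[2].min(), sol.y[2].max())   # constant
wnorm = np.linalg.norm(sol.y, axis=0)
print("variation of |omega| :", wnorm.max() - wnorm.min())        # ~ 0
```

The equatorial components w1, w2 merely precess with constant amplitude, which is why neither w3 nor the rotation rate |ω| changes; once A ≠ B ≠ C and internal couplings enter, this simplicity is lost and the Euler-Liouville treatment below is needed.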
In this case the Earth's rotation is described by the Euler-Liouville equation, which has no analytical solution. Moreover, the Earth's rotation is affected not only by the gravitational attraction of external celestial bodies but also by the coupling effects (electromagnetic, topographic, gravitational, viscous) among the interior layers, as well as by the ocean and the atmosphere. This makes the Euler-Liouville equation extremely difficult to solve; only numerical or approximate solutions can be sought. One manifestation of the non-uniformity of the Earth's actual rotation rate is the variation of the length of day. Length-of-day variations contain different periodic components, from short-term variations through decadal-scale variations to long-term (secular) variations. The secular change is caused mainly by tidal friction and lengthens the day by about 1.7 ms per century [1]. Table 1 lists the number of days in a year in different geological ages [2], from which the length of the day in ancient times can be estimated. Short-period (high-frequency) length-of-day changes are hard to predict and are caused mainly by mass transport within the Earth system (including the flow of the atmosphere and oceans): variations on time scales of a few days are closely related to atmospheric angular momentum, while variations on scales of tens of days, and the seasonal terms, are attributed mainly to the ocean and atmosphere [3]. Explaining the decadal-scale (10~20 year) variation of the length of day is a problem of wide concern to scientists. Hide and Dickey [4] once put the amplitude of the decadal variation as high as 4 ms, but recent studies [5] indicate that it is about 2.2 ms; the data provided by IERS (http://www.iers.org) and Figure 1 also suggest that the decadal results of Gross et al. [5] are credible. Owing to insufficient understanding of the Earth's internal dynamics, the geophysical mechanism driving the decadal variation remains controversial. Atmospheric and oceanic effects cannot explain the observations well [6]: the study of Gross et al. [5] showed that the atmosphere and ocean contribute only about 14% of the decadal length-of-day variation, so after their effects are removed, about 1.9 ms remains to be explained. Figure 1  (a) Length-of-day observations from the International Earth Rotation Service (IERS), containing variations at different scales; (b) length-of-day changes computed from atmospheric data; (c) length-of-day variation after removing the atmospheric influence. In recent decades scientists have proposed various core-mantle coupling mechanisms to explain the decadal variation; currently the most discussed are electromagnetic coupling, topographic coupling, and gravitational coupling. Studies show [7] that viscous coupling is small (even under extreme assumptions about the viscosity coefficient) and contributes negligibly to the decadal-scale length-of-day variation.
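To make the decomposition concrete, the sketch below builds a synthetic length-of-day series (an assumed ~2 ms decadal oscillation plus an annual term and noise, roughly mimicking the magnitudes quoted above) and isolates the decadal part with a zero-phase low-pass filter; real series would come from the IERS.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Synthetic daily LOD series: a ~15-year "decadal" oscillation of ~2.2 ms
# peak-to-peak, an annual term, and measurement noise (all values assumed).
days = np.arange(0, 50 * 365)                       # 50 years, daily sampling
t_yr = days / 365.25
lod = (1.1 * np.sin(2 * np.pi * t_yr / 15.0)        # decadal component (+-1.1 ms)
       + 0.35 * np.sin(2 * np.pi * t_yr)            # annual component
       + 0.05 * np.random.default_rng(1).standard_normal(days.size))

# Zero-phase low-pass filter with a ~5-year cutoff isolates the decadal part.
fs = 365.25                                         # samples per year
cutoff = 1.0 / 5.0                                  # cycles per year
sos = butter(4, cutoff / (fs / 2.0), output="sos")
lod_decadal = sosfiltfilt(sos, lod)

print("peak-to-peak of decadal estimate: %.2f ms"
      % (lod_decadal.max() - lod_decadal.min()))    # ~2.2 ms
```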
The interaction between electric currents in the mantle and the geomagnetic field produces a Lorentz force, leading to electromagnetic coupling between core and mantle, which may be one of the important factors explaining the decadal length-of-day variation [8]. Although many scholars have studied the electromagnetic coupling mechanism, explanations of the decadal variation based on it are still not convincing [8]. To obtain the electromagnetic coupling torque, the toroidal and poloidal magnetic fields at the core-mantle boundary must first be obtained; the poloidal field is usually determined with the NHDC (non-harmonic downward continuation) model [9]. Given a core-mantle boundary velocity model, the toroidal field can be inverted, and the electromagnetic coupling torque then solved for. Although NHDC is a very effective approach to approximating the real Earth, it has a fatal weakness: the assumed conductivity distribution of the mantle. Because the deep Earth is in a state of extreme high temperature and pressure, detailed information about it is hard to obtain. The flow in the outer core exerts forces on the rough topography of the core-mantle boundary, generating topographic coupling torques [10]. These torques act on the mantle and transfer angular momentum between core and mantle, changing the mantle's rotation rate (i.e., the length of day). Determining the topographic coupling torque requires a core-mantle boundary velocity model and the topographic relief of the boundary. Although many topographic and velocity models of the core-mantle boundary exist internationally [10], they differ considerably from one another, and sufficiently accurate information is lacking. Studies show [10] that topographic coupling contributes only about 10% of the observed decadal variation (1.9 ms). Until some twenty years ago, gravitational coupling in the Earth's interior received little attention, perhaps because the interior was then regarded as spherically symmetric. In fact the Earth is a layered triaxial ellipsoid, and the flattening differs from layer to layer: the equatorial flattening of the inner core is about 1/416 (its polar flattening is not yet clear), while at the Earth's surface the flattening is about 1/298. It is precisely this triaxial layering, and the variation of flattening between layers, that lets gravitational coupling affect the length of day. Since the triaxiality of each layer (mantle, outer core, inner core) is not well constrained, it is difficult to establish a gravitational coupling torque model that closely matches the real Earth. Szeto and Xu [11] gave a gravitational coupling model, but they neither performed numerical calculations nor tried to use their model to explain the decadal length-of-day variation. This is a subject for future research.
How to explain the decadal length-of-day variation from the Euler-Liouville equation for a triaxial, layered, anelastic rotating Earth is a difficult problem awaiting solution.", "Determining a high-resolution, centimeter-level global geoid is one of the goals pursued by geodesy in the 21st century. The Earth gravity field model EGM2008 provides a high-resolution (5′×5′) global gravity potential reference, but its accuracy is only about 10 cm [1]. The GOCE satellite system launched in March 2009 can provide a global gravitational potential reference at 1°×1° resolution with an accuracy equivalent to about 1 cm in geoid height [2], but its resolution falls short of EGM2008. How to determine a high-resolution centimeter-level global geoid from the various information sources is a challenging problem of the 21st century. The geoid is the gravity equipotential surface closest to mean sea level. To determine the geoid, a mean or reference ellipsoid is usually introduced, such as the GRS80 reference ellipsoid system [3]. Once the distance N from the reference ellipsoid to the geoid (the geoid undulation) is determined, the position of the geoid is determined (Fig. 1). The geoid datum can be chosen by convention, provided the resulting geoid best approximates the (tide-free) mean sea surface. Taking the geoid as the boundary surface and using the gravity anomaly Δg on the geoid (the difference between gravity on the geoid and normal gravity at the corresponding point of the reference ellipsoid), the disturbing potential T on the geoid can be obtained by the Stokes method, and the geoid undulation then follows from the Bruns formula [4]. To compute the disturbing potential T, however, surface gravity observations must be reduced to the geoid. This requires the orthometric height H of each gravity point, i.e., its altitude (the distance from the ground point to the geoid, Fig. 1), and also the mass density between the geoid and the ground. Although the orthometric height can be approximated by leveling plus gravimetry, the elevation datum is a particular sampling point of the mean sea surface, not a point on the geoid being sought. Moreover, a rigorous orthometric height requires the mean gravity between the measuring point and the geoid; in practice the mean normal gravity is used instead, giving an approximate orthometric height whose error can reach the order of 1 m. In addition, leveling errors accumulate with line length: points reachable only by long leveling lines, and areas of complex terrain, carry large cumulative errors, and in mountainous areas leveling is difficult to carry out at all. These are nearly insurmountable obstacles, and centimeter-level accuracy is hard to achieve. Furthermore, the mass density between the geoid and the ground is usually taken as the constant 2.67 g/cm³, whereas actual densities in different regions depart considerably from this constant; the resulting error can reach the meter level in mountainous areas. This is why it is difficult to determine a centimeter-level geoid by the Stokes method.
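The normal gravity entering the gravity anomaly Δg above is given in closed form by Somigliana's formula; for the GRS80 system it can be evaluated as follows (the constants are the published GRS80 values; the test latitudes are arbitrary).

```python
import math

# Somigliana's closed formula for normal gravity on the GRS80 reference
# ellipsoid (constants from the published GRS80 system).
GAMMA_E = 9.7803267715      # normal gravity at the equator, m/s^2
K       = 0.001931851353    # Somigliana constant
E2      = 0.00669438002290  # first eccentricity squared

def normal_gravity(lat_deg):
    """Normal gravity on the ellipsoid at geodetic latitude lat_deg (m/s^2)."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)

for lat in (0.0, 45.0, 90.0):
    print(f"latitude {lat:5.1f} deg : gamma = {normal_gravity(lat):.7f} m/s^2")
# The gravity anomaly is then dg = g_observed - normal_gravity(lat), after the
# observed gravity has been reduced to the appropriate boundary surface.
```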
Taking the Earth's surface as the boundary surface and using the surface gravity anomaly Δg (the difference between gravity at the Earth's surface and normal gravity at the corresponding point of the telluroid), the disturbing potential T at the surface can be obtained by the Molodensky method, and the height anomaly ζ (the distance from the reference ellipsoid to the quasi-geoid, see Fig. 1) then follows from the Bruns formula [4,5]. The Molodensky method avoids the mass-adjustment (gravity reduction) problem and does not require the mass density between the ground and the geoid (or quasi-geoid) [5]. However, what the Molodensky method determines is the quasi-geoid (Fig. 1), which is not a gravity equipotential surface, and this limits its application value; converting from the quasi-geoid to the geoid again requires crustal density or gravity distribution information. Figure 1  The green line is the reference ellipsoid, the blue line the geoid, the red line the quasi-geoid, and the orange line the Earth's surface; the distances from the Earth's surface to the geoid and quasi-geoid are the orthometric height (H) and the normal height (H*), respectively, and the distances from the geoid and quasi-geoid to the reference ellipsoid are the geoid undulation (N) and the height anomaly (ζ); in ocean areas the geoid approximately coincides with (mean) sea level. Because of the problems above, global centimeter-level accuracy is hard to reach with either the Stokes or the Molodensky method, so determining the high-resolution centimeter-level global geoid remains a difficult problem. To solve it, Chinese scholars have proposed a theoretical scheme that determines the global centimeter-level geoid from the gravity field model EGM, the crustal density model CRUST, and the digital elevation model DEM; the basic ideas are as follows [6,7]. Assume the Earth's surface position is given with centimeter-level precision, the Earth's external gravity potential is given at high resolution with accuracy equivalent to the centimeter level in height, and a sufficiently high-resolution crustal density model is given. The gravitational potential produced by the matter in the shallow surface layer (the region between ∂Γ and ∂Ω, see Fig. 2) is denoted V1(P); it can be computed from the density distribution ρ1 by the usual Newtonian potential integral, V1(P) = G ∫_{Γ−Ω} (ρ1/l′) dτ, P ∈ Γ, where G is the gravitational constant, Γ denotes the region outside the surface ∂Γ (which contains the region Ω outside the Earth), Γ−Ω denotes the shallow surface layer (Fig. 2), and l′ is the distance between the field point P and the volume element dτ. Then the gravitational potential V0(P) = V(P) − V1(P), P ∈ Ω, produced in the exterior region Ω by the matter enclosed by ∂Γ can be determined; this equation is defined only in the exterior region Ω, because the potential V in the region Γ−Ω is not known in advance (V is given only in the space Ω outside the Earth).
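The Newtonian integral V1(P) = G ∫ (ρ1/l′) dτ just defined can be approximated numerically once a density model is given. The toy sketch below sums the integrand over a coarse rectangular block of constant density; the geometry, density value, and field point are illustrative assumptions, far simpler than a real CRUST/DEM-based shallow layer.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Toy "shallow layer" element: a 10 km x 10 km x 2 km block of constant
# density, discretized into cells; V1(P) is approximated by the Riemann sum
#   V1(P) ~= G * sum(rho * dV / l')
rho = 2670.0                                   # kg/m^3 (the usual crustal constant)
xs = np.linspace(-5e3, 5e3, 40)
ys = np.linspace(-5e3, 5e3, 40)
zs = np.linspace(-2e3, 0.0, 10)                # block top at z = 0
dV = (xs[1] - xs[0]) * (ys[1] - ys[0]) * (zs[1] - zs[0])
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")

def v1(p):
    """Approximate Newtonian potential of the block at field point p (m^2/s^2)."""
    l = np.sqrt((X - p[0])**2 + (Y - p[1])**2 + (Z - p[2])**2)
    return G * np.sum(rho * dV / l)

print("V1 at 1 km above the block:", v1(np.array([0.0, 0.0, 1e3])), "m^2/s^2")
```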
The true gravitational potential produced by the matter enclosed by ∂Γ is regular and harmonic in the region Γ outside ∂Γ, and it is known in the space Ω outside the Earth. Hence the potential V*(P), P ∈ Γ, produced in the whole domain Γ (the region outside ∂Γ) by the matter enclosed by ∂Γ can be obtained by the fictitious-compression recovery method [8]; V*(P) is in effect the natural downward continuation of V0(P) (P ∈ Ω) to the boundary ∂Γ, using as boundary values the values V0|∂K of V0(P) (P ∈ Ω) on a sphere ∂K enclosing the Earth (e.g., the Brillouin sphere). The resulting field V*(P) in the region Γ outside ∂Γ agrees with the corresponding true field (the potential produced in Γ by the matter enclosed by ∂Γ) [9]. Then the true gravitational potential produced by the whole Earth in Γ is V(P) = V0(P) + V1(P), P ∈ Γ, and the gravity potential in Γ can be written W(P) = V(P) + Q(P), P ∈ Γ, where Q(P) is the centrifugal potential. The Earth's gravity potential is thus obtained throughout the region Γ, which contains the geoid. The position of the geoid can then be determined accurately from the geoid equation V(P) + Q(P) = W0, where W0 is the gravity potential constant on the geoid, for example the value provided by the GRS 80 system [3]: W0 = 62 636 860.850 m²/s². At present a new W0 can also be determined, under a best-approximation criterion between the geoid and the mean sea surface, from a known high-precision gravity field model (such as EGM2008) and a satellite mean sea surface model [7]. The geoid equation can be solved by heuristic and iterative (or search) techniques [6,7]. Figure 2  The red solid line is the Earth's surface (∂Ω); the blue dotted line is the geoid (∂G), which does not everywhere lie inside the Earth; the green solid line (∂Γ) lies inside both the Earth and the geoid; the region between ∂Γ and ∂Ω is the shallow surface layer; the ray ol intersects the bottom of the shallow layer ∂Γ and the geoid ∂G at PΓ and PG, respectively. In principle, the high-resolution centimeter-level global geoid can be determined by this scheme. Its characteristic is that neither leveling (and hence prior knowledge of orthometric heights) nor the Stokes or Molodensky method is needed; instead it requires GPS technology (or other effective positioning techniques, with satellite altimetry at sea) to provide the Earth's surface position with centimeter-level accuracy, the density distribution of the shallow surface layer determined with high precision from geological exploration and abundant seismic data, and a high-resolution external gravity potential field with accuracy equivalent to the centimeter level in height.
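The final step above, solving the geoid equation V(P) + Q(P) = W0 along radial rays, can be sketched with a deliberately crude potential: a point-mass gravitational term plus the centrifugal potential, with GRS80-style constants. A real implementation would use the recovered field V(P); here the radial bisection search is the point being illustrated.

```python
import math

# Toy geoid solver: along a radial ray, find r such that W(r) = W0, where
# W = V + Q with a point-mass gravitational potential V = GM/r and the
# centrifugal potential Q = 0.5 * w^2 * r^2 * cos^2(lat).
GM = 3.986005e14          # m^3/s^2 (GRS80)
OMEGA = 7.292115e-5       # rad/s   (GRS80)
W0 = 62636860.850         # m^2/s^2 (GRS80 geoid potential constant)

def W(r, lat_deg):
    c = math.cos(math.radians(lat_deg))
    return GM / r + 0.5 * OMEGA**2 * r**2 * c**2

def geoid_radius(lat_deg, lo=6.2e6, hi=6.6e6):
    """Bisection search for the radius where W = W0 (W decreases with r here)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if W(mid, lat_deg) > W0:
            lo = mid          # potential too high -> still inside, move outward
        else:
            hi = mid
    return 0.5 * (lo + hi)

for lat in (0.0, 45.0, 90.0):
    print(f"lat {lat:4.1f}: geoid radius ~ {geoid_radius(lat) / 1e3:.3f} km")
```

Because the toy potential omits the Earth's oblateness (J2 and higher terms), the radii it returns differ from the real geoid by several kilometers; only the search logic carries over.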
If a high-resolution, high-precision centimeter-level global geoid is determined, the global height datum can be unified, laborious leveling can be replaced by GPS measurement, and three-dimensional satellite positioning in gravity space can be realized, which is of great scientific significance and practical value. In the near future, determining the Earth's surface position at high resolution with centimeter-level accuracy, and the external gravity field at high resolution with accuracy equivalent to the centimeter level in height, will not be the problem; the key issue is determining the density of the shallow surface layer with higher accuracy and resolution. The crustal density model currently available is CRUST 2.0 [10], whose resolution is only 2°×2°; there is still a long way to go to determine crustal (or shallow-layer) density models of 1°×1° or even higher resolution. This makes the determination of the high-resolution centimeter-level global geoid a difficult problem for the 21st century. Cooperation among scientists of many disciplines could greatly shorten the time needed to solve it.", "In addition to generating body waves and surface waves, which involve local motions of the Earth, large earthquakes also excite free oscillations of the Earth on a global scale. Since the frequencies of the Earth's free oscillations are closely related to the structure of the Earth's interior, the free oscillations recorded during major earthquakes can be inverted for important parameters such as the density and Lame parameters of the Earth's interior. The traditional free oscillations of the Earth have the elastic stress of the medium as their restoring force, and their periods do not exceed about 1 h; they divide into two classes, spheroidal and toroidal. The former are accompanied by volume dilatation of the Earth's medium and hence produce corresponding changes of the gravity field, while the latter involve no volume change and generally produce no gravity disturbance. Observations of the Earth's free oscillations are therefore an important basis for studying the internal structure of the Earth. Since the free oscillations were first observed in the 1960s, scientists have pointed out that, once excited, free oscillations may also exist within the liquid core (the so-called \"core modes\"). Compared with the traditional spheroidal or toroidal oscillations, the free oscillations of the liquid core have gravity and/or the Coriolis force as their main restoring forces and therefore have relatively long eigenperiods, from several hours to about one day [1~5]. Many scholars have used realistic Earth models and various numerical methods to discuss the spectral characteristics of the core modes theoretically and to delimit the possible ranges of their eigenperiods. However, because of the complexity of the outer-core flow and the limits of our knowledge of the structure and stratification of the deep Earth, the eigenperiods predicted for the core modes by different theoretical methods, or by the same method with different Earth models, differ very widely.
It is therefore necessary to study the free oscillations of the liquid core with actual observation data and to determine their frequencies, so as to provide a basis for studying the structure and stratification of the liquid core. However, even when excited, the core-mode signal at the Earth's surface is very weak and may be swamped by relatively large environmental noise, so observing it is very difficult. The invention of the superconducting gravimeter, and its long-term continuous observations on a global scale, have opened broad prospects for research in this field. The superconducting gravimeter has extremely high sensitivity and stability, an extremely low noise level and drift, and an extremely wide dynamic frequency response; it can detect global geodynamic effects with periods from seconds (seismic surface waves) to years (related to changes in the Earth's rotation). Superconducting gravimeters have successfully observed all fundamental spheroidal modes of angular order less than 48, the splitting of the spectral peaks of the fundamental modes 0S2 and 0S3 due to the Earth's rotation and ellipticity, and toroidal-mode signals coupled into the spheroidal oscillations; they play an important role in constructing long-period seismograms below 1 mHz and in studying the deep structure of the Earth. Theoretical studies show, moreover, that for excitation by large deep earthquakes the surface gravity signal of liquid-core oscillations is of the order of 10⁻² nm/s² [6], basically within the observation accuracy of superconducting gravimeters; and since background noise differs in character from region to region, stacking the observations of stations in different parts of the world can effectively remove the regional effects present in single-station or local results and raise the signal-to-noise ratio of the global harmonic signals, yielding more reasonable and accurate results. At the end of the 20th century many scholars tried to detect the various core-mode signals from high-precision surface gravity data, especially long-term continuous superconducting gravimeter records, and published their results [7~14]. Melchior and Ducarme analyzed three consecutive years of superconducting gravimeter data at the Brussels station; after the Hindu Kush earthquake of December 30, 1983 (magnitude 7.2, depth 222 km) and the Mindanao earthquake of November 20, 1984 (magnitude 7.1, depth 202 km), they found in the power spectrum of the gravity residuals (observed gravity minus the gravity tide signal) a signal with a period of about 13.9 h and a peak amplitude of 10×10⁻²~15×10⁻² nm/s², whereas no such signal appeared for other earthquakes of large magnitude but shallow focal depth; they suggested that the signal might be internal gravity waves excited by the deep earthquakes [7]. Aldridge and Lumb, however, identified several spectral peaks of internal inertial waves in the amplitude spectrum of the gravity residuals of the Brussels superconducting gravimeter [8]; Cummins et al. discussed the internal gravity waves and internal inertial waves of the liquid core using IDA gravity data recorded after a deep earthquake [9]; and Smylie et al.,
using the \"product spectrum\" of the gravity residuals of long-term continuous superconducting gravimeter records at four stations in Central Europe, and taking into account the splitting of the eigenperiods of the normal mode, obtained the eigenperiod and quality factor of the translational oscillation of the inner core (also known as the Slichter mode), and from these estimated the density near the Earth's center and the fluid viscosity near the inner core boundary [9,10]. These observational results, especially Smylie's, attracted great attention in the international geoscience community and made this a hot research topic. Many scholars have made similar attempts with different numerical methods, but most obtained conclusions different from Smylie's [12~18]. According to current observations and research, the long-period core mode most likely to be observed is the Slichter mode. For a non-rotating, spherically symmetric Earth model, the simplified Slichter mode is the first-order spheroidal oscillation with the longest period, its eigendisplacement being the first-order spheroidal displacement 1S1 (the left subscript denoting the overtone order); the Earth's rotation and ellipticity split its spectral peak into a triplet with azimuthal orders m = −1, 0 and 1, corresponding respectively to the prograde equatorial, axial, and retrograde equatorial translations. Theoretical simulations show that the period of the Slichter mode is very sensitive to the density contrast across the inner core boundary (ICB), while other factors near the ICB (viscosity, electromagnetic forces, a transition zone, etc.) have little effect on the period. Although the Slichter periods predicted from different Earth models differ considerably, the splitting characteristics of the Slichter triplet depend only weakly on the Earth model and can therefore serve as an important criterion for identifying the mode. Much research work remains to be done, however: on the one hand, more objective Earth models and more effective methods are needed to obtain theoretical eigenperiods of the core modes for comparison with, and detection in, actual observations; on the other hand, the analysis and processing of the observations must be improved so as to remove interfering factors as far as possible, raise the observational signal-to-noise ratio, and obtain more accurate results. The Study of the Earth's Deep Interior (SEDI) group of the International Union of Geodesy and Geophysics (IUGG) initiated and organizes the Global Geodynamics Project (GGP), and the study and detection of core modes from the long-term, continuous, synchronized observations of superconducting gravimeters worldwide (using a common data acquisition format and common analysis software) is one of the main topics of GGP research. It must be noted that, because the surface gravity signal caused by the motion of the inner core is weak, detecting the translational oscillation of the solid inner core remains a very difficult task; it is a frontier project of international earth science research.
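The multi-station stacking idea that runs through this entry can be illustrated with a toy \"product spectrum\": a weak harmonic common to all stations survives multiplication of the normalized single-station amplitude spectra, while incoherent noise peaks are suppressed. All numbers below (station count, sampling, the 5 h period, the noise level) are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic records from several "stations": a weak common harmonic with a
# 5.0 h period buried in independent noise at each station.
n_sta, n = 6, 2**15
dt = 60.0                                  # 1-minute sampling, seconds
t = np.arange(n) * dt
period = 5.0 * 3600.0                      # common signal period: 5 h
signal = 0.05 * np.sin(2 * np.pi * t / period)
records = [signal + rng.standard_normal(n) for _ in range(n_sta)]

freqs = np.fft.rfftfreq(n, dt)
spectra = [np.abs(np.fft.rfft(r)) for r in records]

# Product spectrum: multiply normalized single-station amplitude spectra;
# only peaks common to all stations survive.
prod = np.ones_like(spectra[0])
for s in spectra:
    prod *= s / s.max()

band = freqs > 0
f_peak = freqs[band][np.argmax(prod[band])]
print("detected period: %.2f h (true: 5.00 h)" % (1.0 / f_peak / 3600.0))
```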
The main difficulties are the following. ① Although the ideal accuracy of superconducting gravimeter observations in the laboratory is of the order of 10⁻² nm/s², the translational-oscillation signal of the solid inner core to be detected is very weak, and the background noise at actual stations is often above this level. ② The latest multi-station stacking techniques can effectively suppress regional background noise and relatively amplify the globally coherent signal, but the threshold of harmonic signal this technique can identify is about 7×10⁻³ nm/s², of the same order of magnitude as the ideal laboratory precision of the superconducting gravimeter. ③ So far there is no internationally accepted theoretical model of the translational oscillation of the solid inner core to serve as a reference in actual detection. ④ The mechanical mechanism of the translational oscillation of the Earth's solid inner core is not yet very clear: is it triggered by deep earthquakes, or by the strong electromagnetic eddy fields arising from the iron-rich, high-temperature liquid outer core together with the topographic coupling torque at the core boundary produced by the Earth's rotation? In-depth research and reliable conclusions therefore depend on a reasonable theoretical model. By accumulating long-period, high-precision superconducting gravimeter observations worldwide and processing the data deeply and carefully, and on the premise of improving the resolution of the gravity signals, a comprehensive understanding of this scientific question can be expected.", "The gravity field is the most basic and direct physical quantity reflecting density changes of the Earth's medium and its dynamic behavior in various settings (solid Earth tides, internal heat flow, mass exchange between solid and fluid, surface loading, seismotectonic movement, etc.) [1]. The gravity field and its changes reflect the density distribution and state of motion of matter at the Earth's surface and in its interior, and from the temporal and spatial changes of the gravity field the processes of mass transport and exchange in the Earth system can be inferred and monitored (Figure 1) [2]; the higher the spatio-temporal resolution of the gravity field, the more time-varying information about the Earth's material system it contains. High-resolution time-varying information on the Earth's gravity field is therefore of great significance for studies of geodynamic processes and for practical applications. Changes of the Earth's gravity occur on a whole hierarchy of time scales: continental drift, seafloor spreading and orogeny over hundreds of millions of years; ice ages, and the tectonic and oceanic responses they cause, over tens of thousands of years; and gravity changes caused by mass transport over decades, a year, half a year, a month, a day, or even shorter intervals (such as earthquakes). Monitoring and interpreting gravity changes has always been an important part of earthquake prediction research; at the same time, the spatio-temporal variation of the gravity field has a non-negligible influence on the orbit control and motion of spacecraft. Changes in the gravity field can be obtained in two ways: repeated ground gravity observations and satellite gravity observations. From the pendulum gravity apparatus invented by C.
Huygens in the 17th century to the μGal-level high-precision relative and absolute gravimeters that emerged in the late 20th century, observing the change of the gravity field with time has become possible [3]. However, continuous ground gravity stations are still relatively few, high-precision repeated gravity observations are time-consuming, and continuous time-varying gravity information is hard to obtain. With the rapid development of satellite Earth observation, its advantages of all-weather operation, global coverage and uniform precision have become ever more attractive, and scientists can obtain information on changes of the global gravity field through satellite measurement techniques. Satellite laser ranging, operational since the mid-1980s, can accurately measure the time variation of the ultra-long-wavelength part of the gravity field (below about degree 6), but the spatial resolution of its time-variable part has reached its limit and cannot be improved, so a more effective approach had to be found; this prompted the rise of satellite gravimetry [4]. After nearly 30 years of research and development, measuring the static and time-variable gravity field by satellite-to-satellite tracking and by spaceborne gravity gradiometry has become mature and practical. In recent years the European Space Agency (ESA), NASA and their partners have launched the CHAMP, GRACE and GOCE satellites [5] (Fig. 2). CHAMP, a high-low satellite-to-satellite tracking gravity satellite developed under the German Aerospace Center (DLR), was launched successfully in July 2000, mainly to measure the Earth's gravity field and magnetic field. Because unstable performance of some onboard equipment affected the accuracy of the gravity-field solutions, its models are not widely used in practice; relatively speaking, CHAMP was a conceptual experiment. Figure 1  The spatio-temporal scales of gravity-field variation and the associated geodynamic processes [2]; the red lines indicate the theoretical spatio-temporal resolving power of the CHAMP, GRACE and GOCE satellite missions. Following CHAMP, the German DLR and the US NASA jointly developed the GRACE mission, launched on March 17, 2002, mainly to detect the gravity field and global climate change [6]. The GRACE system consists of two identical satellites flying in tandem about 200 km apart in a near-polar circular orbit; it employs low-low satellite-to-satellite tracking, precise K-band microwave inter-satellite ranging, precise three-axis accelerometers for measuring non-conservative forces, star cameras for determining the satellites' inertial attitude, and other advanced instruments and techniques, providing a reliable guarantee of globally covered, high-precision, high-resolution observations. Studies show [7] that the monthly gravity-field model series solved from GRACE data can reflect the month-to-month changes of the Earth's gravity field at spatial scales greater than about 400 km, with quite high accuracy. Figure 2  The gravity satellites currently in orbit.
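The figures quoted above - degree 6 for the SLR-resolved ultra-long-wavelength field, and the ~400 km spatial scale of GRACE monthly fields - are linked by the usual half-wavelength rule for spherical-harmonic degree l: resolution ≈ πR/l ≈ 20000 km / l. A short check:

```python
import math

R_EARTH_KM = 6371.0

def resolution_km(degree):
    """Half-wavelength spatial resolution of spherical-harmonic degree l."""
    return math.pi * R_EARTH_KM / degree   # ~20000 km / l

for l in (6, 50, 100, 250):
    print(f"degree {l:3d} -> ~{resolution_km(l):6.0f} km")
# degree 6  (the SLR-resolved "ultra-long-wave" part) -> ~3300 km
# degree 50 (GRACE monthly fields)                    -> ~ 400 km
```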
On March 17, 2009, ESA successfully launched GOCE, the first satellite gravity gradiometry mission. It determines the Earth's gravity field from satellite orbit perturbations and gravity gradiometry. The GOCE platform carries GPS/GLONASS receivers, a gravity gradiometer composed of three-axis accelerometers, attitude control systems, and other equipment related to measuring the Earth's gravity field. Its main purpose is to provide a global gravity-field model of high spatial resolution and high precision: the spatial resolution will reach 80~200 km, and at best about 65 km [5]. The development of satellite gravimetry has brought gravity measurement into a new era and greatly advanced the recovery of time-varying gravity information; however, satellite gravimetry provides mainly the medium- and long-wavelength part of the field, and determining high-frequency gravity information must still rely on traditional gravimetric techniques. In summary, acquiring high-resolution, high-precision time-varying gravity information depends on two things. The first is gravity observation technology and the extraction of time-varying information from the payload data. In satellite gravity observation, future gravity missions (such as GRACE Follow-On) will use advanced laser ranging or drag-free technology; for extracting time-varying information from payload observations (orbit positions and velocities, inter-satellite range and range-rate from the K-band/laser interferometry systems, non-conservative forces from onboard accelerometers, gravity gradient tensors from the gradiometer, etc.), various inversion approaches have been studied at home and abroad (the energy method, the dynamic method, etc.) [8], but deficiencies and limitations remain, and a high-precision, full-band, efficient gravity-field inversion method is still needed. The second is the integrated processing of the time-varying gravity information obtained by the different observation techniques: ground gravimetry, satellite gravimetry and other technologies each have their own advantages and disadvantages, and how to combine their data is likewise a problem that must be solved to obtain high-resolution time-varying information on the Earth's gravity field. In the face of today's ever more pronounced global change, high-resolution spatio-temporal gravity-field change information is essential for monitoring geophysical processes such as terrestrial water storage, the water cycle, changes in ocean mass, seismic deformation, ice-mass balance, and post-glacial rebound, and it has great significance and application prospects. With improving observation technology and advancing data-processing methods, scientists will determine the time-varying gravity field with ever higher resolution and accuracy.", "In 1960, for the first time in history, modern instruments recorded the spectrum of the Earth's free oscillations excited by the great Chilean earthquake. This not only opened the curtain on a new branch of geophysics but also inspired deeper research into the Earth's free oscillations.
While studying the Earth's free oscillation spectrum in 1961, the American scientist Slichter [1] noticed that a long-period spectral line in the low-frequency band might correspond to a translational oscillation of the Earth's inner core, whose restoring force is gravity rather than the usual elastic stress. In 1974 the American scientist Busse [2] showed that the eigenperiod of this translational oscillation is very sensitive to the density contrast across the inner core boundary. The density contrast at the inner core boundary is still hard to determine accurately, yet it is of great significance for studying the dynamics of the Earth's core and the origin of the geomagnetic field. Since observation of the inner core's translational oscillation can place important constraints on the density contrast at the inner core boundary, this research has become one of the international frontiers of the physics of the Earth's interior. To commemorate Slichter's pioneering work, the eigenmodes of the inner-core translational oscillation are commonly called Slichter modes. The eigenfrequency (or period) of the Slichter mode depends on the deep structure of the Earth, and scientists have studied its theoretical spectral characteristics with different models and methods. Busse treated the inner core as rigid and gave an analytical expression for the Slichter eigenfrequency. Smith [3] and Rogister [4] used generalized spherical harmonic expansions to study the translational oscillation of the inner core of a rotating, slightly elliptical Earth. Dahlen [5] applied second-order perturbation theory and the Rayleigh variational principle to the eigenperiod and spectral splitting parameters of the Slichter mode. Because Coriolis coupling makes the long-period motion of the outer-core fluid very complicated, Smylie and Rochester [6] established a variational principle for liquid-core motion using the subseismic approximation and obtained the Slichter eigenfrequencies of the 1066A and CORE11 Earth models. The Slichter periods estimated theoretically from different Earth models differ considerably, so determining the inner-core translational oscillation from measured data is of great significance. The translational oscillation of the Earth's inner core has a long period (4~5 h) and a weak signal. The broadband seismometer is the most commonly used instrument for detecting seismic signals, but it suffers nonlinear quantization effects in the long-period band and is thus unsuited to detecting the Slichter mode. Early searches for the Slichter mode were based on LaCoste spring gravimeter records, but irregular instrumental drift and high noise are weaknesses of spring gravimeters that are difficult to overcome. The superconducting gravimeter, with its extremely high sensitivity and stability, extremely low noise level, and extremely wide dynamic frequency response, can effectively detect long-period geodynamic effects [7]; since its advent it has therefore become the most important instrument for detecting the Slichter mode.
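A crude back-of-envelope model conveys why the Slichter period is so sensitive to the density contrast at the inner core boundary. Treating the inner core as a rigid sphere restored purely by buoyancy in a uniform fluid (no rotation, elasticity, or self-gravity corrections - far simpler than Busse's full analysis) gives ω² = (4πGρf/3)(Δρ/ρs). With PREM-like ICB densities, assumed here purely for illustration, the period lands in the 4~5 h range quoted above and shifts visibly with Δρ:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

def slichter_period_h(rho_fluid, drho):
    """Crude buoyancy-oscillator period (hours) for a rigid inner core.

    rho_fluid : fluid density at the ICB (kg/m^3)
    drho      : density jump across the ICB (kg/m^3)
    Neglects rotation, elasticity and self-gravity corrections entirely.
    """
    rho_solid = rho_fluid + drho
    omega2 = (4.0 * math.pi * G * rho_fluid / 3.0) * (drho / rho_solid)
    return 2.0 * math.pi / math.sqrt(omega2) / 3600.0

# PREM-like fluid density at the ICB (assumed), scanned over density jumps
for drho in (400.0, 600.0, 800.0):
    print(f"ICB density jump {drho:5.0f} kg/m^3 -> period ~ "
          f"{slichter_period_h(12160.0, drho):.2f} h")
```

The run shows periods of roughly 5.3, 4.4 and 3.8 h for jumps of 400, 600 and 800 kg/m³: a measured Slichter period would thus pin down Δρ, which is the motivation stated above.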
The Study of the Earth's Deep Interior (SEDI) group of the International Union of Geodesy and Geophysics (IUGG) initiated and organizes the Global Geodynamics Project (GGP), and detecting the Slichter mode from the long-term, continuous, synchronized observations of superconducting gravimeters worldwide is one of the main topics of GGP research. Smylie [8] used long-term continuous superconducting gravimeter records from four stations in Central Europe, taking into account the spectral splitting of the eigenfrequency, to obtain the eigenperiod and quality factor of the inner-core translational oscillation (the Slichter mode), and from them estimated the density near the Earth's center and the fluid viscosity near the inner core boundary. Smylie's detection result attracted great international attention; many scholars [9~13], including the Chinese scholars Sun Heping [12] and Xu Jianqiao [13], have searched for the Slichter mode with different methods, but results consistent with its theoretical values have not yet been obtained. For a non-rotating, spherically symmetric Earth model, the Slichter mode is the first-order spheroidal oscillation with the longest period, its eigendisplacement being the first-order spheroidal displacement 1S1. Slichter modes have gravity, rather than elastic stress, as the main restoring force and therefore have comparatively long eigenperiods (several hours). In analogy with energy-level splitting in atomic physics, the Earth's rotation and ellipticity split the Slichter spectral peak into three translational-oscillation peaks (m = 0, ±1): translation of the inner core along the Earth's rotation axis (m = 0), and retrograde (m = 1) and prograde (m = −1) translations in the equatorial plane (relative to the direction of the Earth's rotation); together these are called the Slichter modes. Figure 1 is a schematic diagram of the inner-core translational oscillation (m = 0). The excitation source of the Slichter mode may be asymmetric crystallization of liquid-core material at the inner core boundary (ICB) or large earthquakes deep in the Earth. The former causes a small displacement of the inner core's center of mass, after which the Earth's gravity field drives the inner core to oscillate about its equilibrium position; the latter is often accompanied by a first-order spheroidal disturbance of the mantle, which deforms the core-mantle boundary (CMB) correspondingly, is transmitted through the compressible fluid outer core to the ICB, and so excites the Slichter mode.
Fig. 1  Schematic diagram of the translational oscillation of the inner core along the rotation axis. Observing the Earth's inner-core translational oscillation (the Slichter mode) is still a very difficult task, for the following reasons: ① although the ideal accuracy of laboratory superconducting gravimeter observations is at the 10⁻² nm/s² level, the translational-oscillation signal of the solid inner core to be detected is very weak; ② multi-station stacking can suppress regional background noise and relatively amplify the globally coherent signal, but the threshold of harmonic signal this technique can identify is of the same order of magnitude as the ideal laboratory precision of the superconducting gravimeter; ③ so far there is no internationally recognized theoretical model of the inner-core translational oscillation to serve as a reference in actual detection; ④ the mechanical mechanism of the translational oscillation of the solid inner core is still not very clear: is it triggered by deep earthquakes, or by the topographic coupling torque at the inner core boundary? Through the accumulation of high-precision superconducting gravimeter observations worldwide and deeper research on the relevant theoretical models, scientists are expected to reach a further understanding of this cutting-edge scientific issue.", "To explain the Chandler wobble of the Earth, consider the most familiar example. Spin a plastic disc and throw it into the air: unless you are skilled, you will find that the disc not only rotates but also wobbles. This is a famous problem of classical mechanics, studied by Euler (Leonhard Euler, 1707-1783) as early as the 18th century [1]. The same is true of the Earth: leaving aside the gravitational attraction of the Sun and Moon, in addition to rotating, its rotation axis also wobbles. This wobble is known as the Earth's free nutation, or Eulerian nutation. More specifically, when the Earth system is excited or driven by some internal force in the absence of external torque, the deviation between its figure axis and its rotation axis, or the adjustment of mass motions within the system, causes changes in its rotation speed (changes in the length of day) and changes in the relative position of the figure axis and the rotation axis (polar motion). Treating the Earth as a rigid body, Euler found a free nutation period of 305 days. Ever since Euler's theory appeared, people have sought evidence of the Earth's free wobble in astronomical records, without a breakthrough until the end of the 19th century. From a large body of astronomical observations, Chandler (Seth Carlo Chandler, 1846-1913) discovered two kinds of wobble in 1891. One is a forced wobble with a period of one year, also called the annual polar motion, driven mainly by climatic change - the migration of the atmosphere, oceans, land water and glaciers, as well as winds and ocean currents. The other is a free wobble of the Earth with a period of about 14 months, called the Chandler wobble.
The magnitude of the Chandler wobble is 0.1~0.3 arcsec (1 arcsec = 1000 mas and corresponds to a distance of about 30 m on the ground), so the wobble amplitude at the Earth's surface is 3~9 m. The Chandler wobble is a free oscillation, and after its discovery three scientific questions arose: ① can its period be explained quantitatively? ② given that any free oscillation in physics is damped, what makes up the loss and maintains the wobble? ③ where is the energy of the Chandler wobble dissipated? [2] Before long the first puzzle had a plausible explanation. The Earth is not a rigid body; it is compressible to some extent and can at best be treated as an elastic body (strictly speaking, the Earth is not even elastic but viscoelastic). It is for this reason that the period of the Earth's free nutation (the Chandler wobble) is lengthened to 14 months. As for the third question, there are so far only conjectures - the energy may be consumed in the mantle, the oceans, or elsewhere - and no unified conclusion yet. Here we are concerned chiefly with the second question. To observe the Earth's free wobble, the International Latitude Observatories were established in 1899; because the Earth's free wobble shows up as changes in the latitudes of surface stations, it is also referred to as latitude variation. In the 20th century the International Latitude Service was succeeded by the International Earth Rotation Service (IERS), which provides Earth-wobble data services to the whole world [3]. One of the purposes of IERS is to measure, at regular intervals, the direction in which the Earth's rotation axis points. If a marker were planted where the axis pierces the Earth's surface (near the North Pole), then after many years the wobble track of the axis could be drawn, as in Figure 1. Fig. 1  Trajectories of polar motion on the Earth's surface. Figure 1 shows two kinds of observed wobble track. One is the daily ground track (the dotted track) of the Earth's wobble from the beginning of 2000 to the end of June 2009, with an amplitude of 10~15 m; each point is accurate to about 0.05 mas (equivalent to 1.5 mm on the ground). The track of the Earth's wobble (the polar motion) runs counterclockwise, taking somewhat more than a year per loop and tracing a spiral that alternately grows and shrinks. From Figure 1 one notices a problem: the polar motion track does not circle the position of the North Pole (the five-pointed star in the figure). What is going on? This goes back to how the original \"North Pole\" was calibrated: by international agreement, the North Pole is the center point of the polar motion track of 1900-1905, the Conventional International Origin (CIO). Today's center point (near the red \"2000\" position in the figure) has drifted about 10 m from the CIO (the star). The other kind of observation is the track shown by the red line in Figure 1, the mean pole position of each decade since 1900. Roughly speaking, the record shows that the \"true\" North Pole does not hold its ground but keeps drifting toward longitude 80°W (northeastern Canada) at a rate of about 10 cm per year.
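The conversions used above (1 arcsec ≈ 30 m of pole displacement on the ground; 0.05 mas ≈ 1.5 mm) follow directly from arc length = angle × Earth radius, as a short check shows (mean Earth radius assumed):

```python
import math

R_EARTH_M = 6.371e6   # mean Earth radius, m

def polar_motion_to_metres(angle_mas):
    """Ground displacement of the pole for an angle given in milliarcseconds."""
    rad = angle_mas / 1000.0 / 3600.0 * math.pi / 180.0
    return rad * R_EARTH_M

print("1 arcsec ->", round(polar_motion_to_metres(1000.0), 1), "m")        # ~30.9 m
print("0.05 mas ->", round(polar_motion_to_metres(0.05) * 1000, 2), "mm")  # ~1.5 mm
```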
To summarize the track: besides circling, the Earth's pole also drifts - a situation somewhat like a typhoon and its eye. The cause of the polar drift is still unclear, though a plausible explanation relates it to mass exchange between the oceans and the ice caps and to the post-glacial uplift of the continents. Why does the polar motion track not only circle but also change in size? This is the combined effect of the annual wobble and the Chandler wobble: since the two have similar periods and amplitudes, their superposition produces the familiar \"beat\" phenomenon of mechanics and acoustics, giving the polar motion a 6.4-year periodic modulation [4]. To analyze the polar motion in more detail, the total motion can be decomposed into specific spectral components by spectral (or filtering) methods. The uppermost panels of Fig. 2 and Fig. 3 show the X and Y components of the polar motion, and the three lower panels show the decomposed time series of the Chandler wobble, the annual wobble, and the remaining signals (including noise). Fig. 2  X component of the polar motion. Fig. 3  Y component of the polar motion. The question that concerns scientists is the second one raised above. Theoretically, the Earth's Chandler wobble must be maintained by one or more excitation sources; otherwise it would eventually die away. The annual wobble, being seasonal, is evidently driven by meteorological change; yet although global data on the atmosphere, oceans, land water, ice and snow are now observed fairly well, careful calculation still cannot fully account for the observed annual wobble. As for the Chandler wobble, without an excitation source its energy would be exhausted by the imperfect elasticity of the Earth's interior, tidal friction, core-mantle coupling, dissipation in the liquid core, mantle rheology and so on, and it would eventually (within some 30~100 years) stop. Yet more than a hundred years of observation show the Chandler amplitude varying with time, with no fixed period of variation: evidently the wobble is continually being re-excited. What, then, excites the Chandler wobble? Some scholars hold that the excitation most likely comes from changes in ocean-bottom pressure and in atmospheric mass, which together can currently explain about 60% of the Chandler wobble [5,6]. It is worth noting that the Chandler wobble is time-varying, while most of the excitation data now available come from climate models; lacking observations, the climate models themselves have obvious limitations, and the atmospheric and oceanic excitations fit well only over particular time spans - there is as yet no good explanation of the Chandler variation over long periods. Other excitation sources may lie in the Earth's outer core, land water, the cryosphere, the crust (earthquakes), sudden changes of the geomagnetic field, and so on [7,8].
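The 6.4-year modulation mentioned above is simply the beat between the annual (365.25 d) and Chandler (~433 d, an approximate value) terms. A short check computes the beat period and locates the envelope minima of a synthetic superposition of the two wobbles:

```python
import numpy as np

P_ANNUAL, P_CHANDLER = 365.25, 433.0        # days (Chandler period approximate)

# Beat period of two close frequencies: 1 / |f1 - f2|
beat_days = 1.0 / abs(1.0 / P_ANNUAL - 1.0 / P_CHANDLER)
print(f"beat period: {beat_days:.0f} days = {beat_days / 365.25:.1f} years")

# Envelope of the superposition of the two wobbles: it swells and shrinks,
# mimicking the spiralling polar-motion track described above.
t = np.arange(0.0, 20 * 365.25)             # 20 years, daily sampling
envelope = 2.0 * np.abs(np.cos(np.pi * (1.0 / P_ANNUAL - 1.0 / P_CHANDLER) * t))

# Envelope minima (track at its smallest) recur once per beat period:
interior = (envelope[1:-1] < envelope[:-2]) & (envelope[1:-1] < envelope[2:])
minima_yr = t[np.r_[False, interior, False]] / 365.25
print("envelope minima at (years):", np.round(minima_yr, 1))   # ~3.2, 9.6, 16.0
```

The printed minima are spaced ~6.4 years apart, reproducing the modulation period quoted in the text.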
Geophysicists take such interest in the Chandler wobble because changes in the Earth's rotation reflect the non-spherical figure of the Earth, the motions of the atmosphere, oceans and land water, the cryosphere, motions in the Earth's interior, the anelasticity of the mantle, the non-coincidence of the rotation axis with the inertia axis, and the coupling of the liquid outer core with the other layers of the Earth system. For this reason, scientists can use changes in the Earth's rotation (including the Chandler wobble) to better understand and study the Earth and to test the correctness of Earth models, which is of great theoretical and practical significance [9]. What exactly is the excitation source of the Chandler wobble? This question has puzzled geophysicists for more than a hundred years and remains a mystery, but it also spurs scientists to work hard toward the ultimate answer.", "The "origin and evolution of the universe" has been a research topic of the international scientific community for many years, engaging such authorities in physics as Einstein and Hawking. The hypotheses proposed so far for the origin of the universe include steady-state cosmology (H. Bondi, T. Gold, F. Hoyle) and the big bang hypothesis (G. Lemaître, G. Gamow). Steady-state cosmology holds that the universe is stable in space and time: as the universe expands, the resulting dilution of matter is compensated by the continuous creation of new matter out of nothing. The difficulty with this hypothesis is that radio astronomers have observed that galaxies were more crowded in the past, suggesting that the universe is not steady; its continuous creation of matter has also been questioned theoretically. The big bang hypothesis includes G. Lemaître's primeval-atom hypothesis and G. Gamow's primeval-fireball hypothesis. G. Lemaître was a priest before he studied cosmology; inspired by Einstein's theory, he proposed a hot, extremely dense primeval atom whose explosion set off the expansion of space, for which he is known as the "father of the big bang". The primeval-fireball hypothesis holds that the universe has evolved from hot to cold while expanding continuously; evidence such as the ages of celestial bodies, the redshift, and the helium abundance supports Gamow's hypothesis. In the study of the origin of the universe, one hypothesis of milestone importance is the so-called inflation theory (A. Guth). It points out that the universe could not simply have exploded with only about 1 kg of newly created matter and then entered the evolutionary stage; rather, it must have gone through a stage of violent inflation 10⁻³⁵~10⁻³² s after the big bang, in which not only did the universe expand by a factor of about 10⁵⁰, but the enormous amount of matter that fills the universe was also created. The evolution of the universe is bound up with its origin; on the basis of the big bang hypothesis, the evolution of the universe is calculated from the Planck time onward. Hubble's law states that light from galaxies exhibits a systematic redshift, and that the farther away the source, the greater its speed of recession: what Hubble discovered was the expansion of the universe. Expanding universes are of three kinds, namely the "open universe", the "closed universe" and the "critical universe".
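Hubble's law as stated above is usually written as a linear relation between recession velocity and distance; a sketch in equation form (the numerical value of the Hubble constant is an assumption of this note, quoted only as the accepted order of magnitude):

```latex
% Hubble's law: recession velocity grows linearly with distance
v = H_0\, d, \qquad H_0 \approx 70~\mathrm{km\,s^{-1}\,Mpc^{-1}}
% e.g. a galaxy at d = 100~\mathrm{Mpc} recedes at v \approx 7\times10^{3}~\mathrm{km\,s^{-1}}
```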
The \"open universe\" expands infinitely and expands forever, and the \"closed universe\" is finite and finally shrinks to a \"big crunch\". The boundary between the two expanding universes is the \"critical universe\", which is infinite and will last forever swell. For \"closed universes\", only when the expansion is very close to \"critical universes\" can they exist long enough to form stars and possibly evolve life (Figure 1). The basic theories put forward by the British scientist Stephen Hawking on the \"Origin and Evolution of the Universe\" include: \u2460 \"Principle of Unknowability\". The singularity is ultimately unknowable, and thus should be completely uninformative. Comments suggest that this principle is consistent with the view that the primordial universe is in a state of maximum disorder (thermal equilibrium). \u2461 \"variable dimension theory\". The universe originally had four dimensions of space, but no time dimension. Among them, one-dimensional space spontaneously becomes time, so that the universe can change and evolve freely, so that it can expand and give birth to life. \u2462 \"Quantum Gravity Theory\". It is used to discuss the very early stage of the universe, and it is proposed that \"as long as the universe has a beginning, it is conceivable that there is a creator (referring to God)\". [1,2] Universe Zero and Evolution \t\u00b7 615 \u00b7 Fig. 1 For \"closed universe\", only the expansion is very close to \"critical universe\", in which space-time can form stars and evolve life, as shown in the shaded area in the figure Shown by John D. Barrow. The Origin of the Universe. Translated by Bian Yulin. Shanghai: Shanghai Science and Technology Press. 14 Research on the \"Origin and Evolution of the Universe\" is a multidisciplinary, long-term open academic project. It is difficult for us to expect to get a quasi-result after 10 or 20 years of work, but to move forward step by step. Newton, Einstein, and Hawking almost all believed that there is a \"God\". And Engels believed that \"the possibility that all movements will stop sooner or later does not exist\", this is the Marxist world outlook. This topic is theoretical research supplemented by scientific experiments, such as the work of the European Large Hadron Collider, the extension of Hubble's redshift, the mathematical simulation of dark matter, etc. [3]; we do not need to recognize the Bible, but learn from its seven-day creation It is also like the Chinese \"Yi began with Taiji, Taiji divided into two, so heaven and earth were born\" and the universe formation process of \"Taichu (Taiji)\u2192Taishi and Taisu\u2192Chaos\u2192Taiji Division\". These scriptures also show that there is an acknowledgment that the universe had a beginning. Among them, there are too many scientific problems to enumerate. This topic only explains a few basic problems. \"Universal zero\" includes zero time, zero space, and zero matter in the universe. \"Zero universe\" is the basis of the Big Bang hypothesis and the connotation of \"universe zero\". \"Universal zero\" is the process of increasing probability, \"+, \uf02d\", \"brightness, darkness\", balance or \"nothing\"? The change of \"universe zero\", such as the cause of the big bang, is it one of the cycles or the only time? Does the existence of \"universe zero\" necessarily lead to the existence of \"God\"? The relationship between Planck time and \"cosmic zero\". The scale meaning of \"cosmic zero\". 
The significance of \"black hole\" and \"white hole\" in the evolution of the universe [3]. Fundamental laws of the evolution of the universe, as evidenced by the chaotic process. The relationship between the expansion of the universe and the creation of matter. The significance and proof of the existence of positive, antimatter and dark matter, as well as the mathematical and physical simulation assumptions of dark matter. The content and function of quantum gravity theory [4]. The rationality problem introduced by Einstein's cosmological constant. Hawking's rationality problem that \"the current state of the universe can be caused from a considerable number of different initial structures\" derived from the new inflation model[1,2]. A diagram of the relationship between \"universe zero\" and the evolution of chaos. Including general relativity, the second law of thermodynamics, the law of indestructibility of matter and other traditional physical laws in the coupling degree of \"universe zero\" and the evolution of chaos. The basic law of chaotic evolution, the material properties of cosmic nebulae, the nature of space-time changes, and the existence significance of regenerated matter. The quality of the universe, the direction and rate of evolution, the description method and credibility proof of the evolution of the universe. The different levels and spatial ranges that make up the universe, the power source of the change of material properties in the universe, and the structure and basis of the new cosmology.", "The earth's interior has a layered structure (Jeffreys, 1939; Bullard, 1957; Bullen, 1963, 1975). From the perspective of seismic wave velocity changes, there are multiple low-velocity layers in the earth's interior. The earth's surface (crust) is bounded by the Moho interface, and the stratification of strata in sedimentary basins is deformed to varying degrees. In the earth medium, the propagation of seismic waves has been studied for many years (more than one hundred years since the Lamb problem), and numerous results have been obtained. Different scholars at home and abroad adopt different methods when summarizing the theory and process of seismic wave propagation. Here, in order to connect with the proposed \"topic\", the academic contributions of several influential scholars are used as the outline to describe the research on the earth's medium so far. The main result of internal seismic wave propagation. These scholars are Ewing et al., Fu Chengyi, Brekhovskikh (Russia) [2] Aki K et al. [1], Li Daqian, Qian Zuwen, Crampin, Zhang Zhongjie, Guo Ziqiang. Elastic waves in layered media (translated by Liu Guangding, schooled by Wang Yaowen) by Ewing et al. [1] is an excellent monograph on wave mechanics. The academic results provided by this book include point and line source integral solutions and their calculation methods. Wave propagation at interfaces in two-phase media, layered half-space problems, effects of gravity and viscosity on wave propagation, wave propagation in inhomogeneous media, introducing anisotropic media problems. Among them, Mr. Fu Chengyi, a Chinese scholar, published two articles on the basic properties and propagation of seismic waves on Geophysics at the same time as early as 1946, and then published the research results on the propagation characteristics of elastic waves in horizontal layered media in 1950. On this basis, Chinese scholar Ma Enze et al. 
published in 1964 the results of research on seismic-wave propagation in an elastic half-space with a thin loose overburden. The Russian monograph Waves in Layered Media by Brekhovskikh, published in the same period as W. M. Ewing's book, is an equally important academic work. This book (translated from the Russian by Yang Xunren) mainly expounds the theory of propagation of electromagnetic and elastic waves in layered media. The main difference from Ewing's results is that Ewing strove for analytical solutions, whereas Brekhovskikh provided high-precision approximations; the parallel treatment of electromagnetic and elastic waves is also a distinctive feature. The contributions of Aki K. et al. include the mathematical properties of the seismic source, computational methods for seismic waves in 3D heterogeneous media, the use of improved characterization of fault-zone interiors for earthquake prediction, and proofs relating the reasonableness of basic models to the inversion of real Earth structure. These contributions are collected in Quantitative Seismology: Theory and Methods (Vol. I & Vol. II). Two important branches of wave science are nonlinear waves and wave properties in anisotropic media. Professor Li Daqian of Fudan University and Professor Qian Zuwen of the Institute of Acoustics, Chinese Academy of Sciences, have obtained a series of advanced results on nonlinear wave propagation, including a rigorous proof of the well-posedness of the Cauchy problem for the n-dimensional nonlinear wave equation, and the fundamental characteristics of nonlinear elastic waves in solid media, of finite-amplitude waves in propagation, and of waves in dispersive media and bounded spaces. Qian Zuwen's book Nonlinear Acoustics is an important work on nonlinear wave propagation. S. Crampin obtained a series of important results on seismic-wave propagation in anisotropic media: he confirmed that anisotropy exists generally in the layers of the lithosphere and that seismic shear waves have an azimuthally anisotropic response related to the size of cracks; he showed that when a shear wave passes through an EDA (extensive-dilatancy anisotropy) medium it splits into crack-related, orthogonally polarized fast (qS1) and slow (qS2) waves; and he proposed detection methods. Professor Zhang Zhongjie of the Chinese Academy of Sciences revised Snell's law and proved theoretically that orthogonally polarized seismic waves qP, qS1 and qS2 exist in anisotropic media. Professor Guo Ziqiang of the University of Science and Technology of China established a theory of magneto-thermo-viscoelastic waves, whose starting points include a Kelvin body and a Maxwell body in parallel, neglect of the displacement current, Ohm's law with the pyroelectric effect, a linear relation between permeability and deformation, and an isotropic strain-displacement relation [3]. For reasons of space, other relevant results are omitted, among which the important ones relate to inversion imaging techniques for hyperbolic equations by Claerbout J. and others. The Earth medium is extremely complex, and its three-dimensionally varying properties at different scales are difficult to describe with general analytical formulas.
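As a toy illustration of the shear-wave splitting Crampin describes: the delay time between the fast (qS1) and slow (qS2) waves accumulated over a path is the path length times the difference of slownesses. A minimal sketch with illustrative velocities (the numbers are assumptions, not measured values):

```python
def splitting_delay(path_m: float, v_fast: float, v_slow: float) -> float:
    """Delay time (s) between fast and slow split shear waves over a path."""
    return path_m * (1.0 / v_slow - 1.0 / v_fast)

# Illustrative values: 10 km path, ~4% shear-wave anisotropy around 3.5 km/s
dt = splitting_delay(10_000.0, v_fast=3570.0, v_slow=3430.0)
print(dt)  # ~0.11 s, roughly the order of delay reported in crustal studies
```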
In exploration seismology, seismic-wave propagation is a basic theoretical component. Traditional propagation theories and methods approximate the propagation of seismic waves in the Earth medium in two respects: one is the governing equation, the other is the assumed properties of the Earth medium (approximation techniques for equation inversion are omitted here). So far these two kinds of approximation have been reasonable for practical applications, but their problems include insufficient precision and the question of the rationality of the stage at which approximations are introduced: applying an approximate formula valid for long-distance core-mantle propagation to the problem of distinguishing near-surface solid minerals is an example of unreasonable use. Yet the Earth medium is a unified whole: three-phase, four-phase and even multi-phase materials appear at different depths, indicating that depth is a major parameter; porosity, which is related to mineralized water and is probably present everywhere, is another important parameter. With improving understanding of the Earth's structure and the growth of deep resource exploration, a more complete propagation theory and effective approximations are needed to describe seismic-wave propagation through the properties of the Earth medium (Fig. 1).

Fig. 1 Schematic diagram of the unified law of seismic-wave propagation in the Earth's interior. Note: VΣ is the seismic-wave velocity; {N} the properties of the Earth medium; {C} the structure of the Earth; E the energy, a quasi-stable controlling quantity. I: establish a complete theory of seismic-wave propagation generated by a general source in the Earth medium; II: make effective approximations to the complete propagation process, including simplification of the field response, reasonable approximation of the physical parameters synchronized with it, precision analysis, and study of inversion algorithms; III: well-posedness studies, analysis and application of analytical solutions, and treatment of rationality issues; IV: techniques for simultaneous, accurate inversion of the Earth's internal structure and physical properties, matched to the complete propagation theory.", "The exploration of oil and gas resources is gradually moving from middle-deep and deep layers to still deeper sedimentary formations, and the exploration of solid mineral resources is shifting from the "first space" to the "second space" (500~1500 m). By comparison, structural oil and gas reservoirs are generally easier to explore than stratigraphic and lithological reservoirs, but when the depth exceeds 8 km, or even reaches 10 or 11 km, improving the imaging accuracy of oil-gas structures also becomes difficult. For convenience of description, oil and gas resources deeper than 8 km are here defined as "deep", and solid mineral resources deeper than 1500 m are likewise defined as "deep". The cost of exploring deep resources is much higher than that of exploration at ordinary depths. Whether from the standpoint of economic investment or of raising the technical level, deep resource exploration requires high-precision work, which differs from the usual exploration process in new areas in the past.
High precision here means processing seismic data with an SNR of 1:0.25 to a resolution of 1 m vertically and 5 m horizontally; the imaging technology is required to resolve beds as thin as 2 m and fault throws >1 m, and to distinguish resources and surrounding rocks of different facies and lithologies. This clearly exceeds the precision required for medium-deep resource exploration. Existing exploration experience indicates that it is difficult to determine stratigraphic structure down to 6 km with profile data from 60-fold coverage, and harder still to determine lithological oil and gas reservoirs; this topic requires simultaneous inversion and interpretation of structure, strata, lithology, and facies. For solid-mineral exploration in the "first space", we know that irregular interference is much stronger than in oil and gas data, and difficulties in imaging, parameter extraction, and common-reflection-point stacking have been obstacles to everyone's work. Beyond the difficulties of the "first space" and "second space", the deep solid-resource exploration proposed here faces new and greater difficulties caused by increased depth and by the complexity of structural and mineralization conditions. In addition, the mining area is not the same as the target area; the actual target area is far larger than the mining area, so we must do "sieving" work without regard to economic input. Huge volumes of seismic exploration remain to be done in domestic deep-resource target areas such as the Songliao, Tarim, Sichuan and Ordos basins, and the solid mineral deposits of the Xingmeng convergence zone, the Anhui-Jiangxi-Xiangmen belt, and the Qinling-Dabie mountain mineral belt. Under the constraint of precision, a series of applied basic-theory and applied-technology problems must be solved (Fig. 1), among which many are difficult; this topic offers only preliminary opinions on the most important scientific and technological problems. The work target is the newly defined deep resources, and the work level is high precision. This requires establishing new solution methods based on the complete theory of seismic-wave propagation and examining the accuracy and effect of the solutions; integrating relevant data from rock physics, mineral-deposit science, structural geology, petroleum geology, and mathematics, supported by the theories, methods and techniques of geophysics and seismology; studying solution accuracy and error-reduction methods with basic methods and technologies; and meeting the requirements of the topic with the minimum economic investment, so as to provide favorable conditions for the sustainable development of the national economy. The difficulties and methods of the work must be analyzed in detail: SNR, resolution, imaging accuracy, adaptation to complex and irregular ore bodies, petrophysical properties and multi-phase parameters, new issues specific to deep resources (such as how experience with shallow and medium-deep resources and structural lithology carries over to deep targets, and the difficulties caused in other processing links by raising the dominant frequency of the seismic waves), changes in data-acquisition technology, and so on.
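To get a feel for the 1~2 m vertical-resolution target stated above, a common rule of thumb (the quarter-wavelength, or Rayleigh, criterion; used here as an assumed yardstick, not as the topic's own specification) links resolvable bed thickness to the required dominant frequency:

```python
def required_dominant_frequency(thickness_m: float, velocity_ms: float) -> float:
    """Dominant frequency needed to resolve a bed of the given thickness,
    using the quarter-wavelength (Rayleigh) criterion: thickness = v / (4 f)."""
    return velocity_ms / (4.0 * thickness_m)

# Illustrative: resolving a 2 m bed in rock with v = 4000 m/s
print(required_dominant_frequency(2.0, 4000.0))  # 500 Hz
# A 1 m target doubles the requirement to ~1000 Hz, far above the
# bandwidth usually recovered from depths of 8~10 km.
```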
Fig. 1 The scope of deep mineral-resource exploration. On the left, detection of the "second space" or "third space" of solid minerals is already entering the engineering stage, and the accompanying problem is mining technology; as depth increases, existing means show a series of inadequacies. On the right, for oil and gas exploration, the problems of the shallow and middle layers (<8 km) have not been completely solved, yet the design of deep oil and gas exploration has already begun, with foundations still to be perfected in method and technology, accuracy and error, test production, and comprehensive research; research objectives must be reasonable and feasible, and working methods effective. What is posed here is thus largely a question, the purpose being to adapt exploration seismology to the needs of national economic development and then to improve it; whether this can bring revolutionary change remains open.

Connection with the complete theory of seismic-wave propagation [1]. Seismic exploration of deep resources should rest on a complete theory, but the theory cannot be applied in full in specific work: whether electromagnetic effects, thermoelastic effects and randomness need to be taken into account, along with multiphase media, inhomogeneity, anisotropy, and nonlinearity, requires careful discussion and analysis. Because of the difficulty of the problem, premature and excessive approximation is not allowed. There is also the problem of technical matching between different work links. Considering the complexity of deep resource exploration, the redesign of 3D observation includes common-reflection behavior and control technology for complex deep structures, methods for cross-checking constraints between different geophysical techniques at different scales, asymmetric observation technology (the application conditions of combined-receiver versus single-receiver methods, combinations of observations of different precision, etc.), and the design and effect of shaped seismic sources; one may even design "compound-method" observation, composite design and application of exploration technologies, and innovative processing and interpretation systems. One of the main foundations of high-precision seismic detection of deep resources is the signal-to-noise ratio of the data, within which amplitude-preserving broadband denoising is the core issue: it must meet or even exceed the requirements of medium-deep and shallow exploration, for example simultaneous amplitude-preserving processing, automatic suppression of strong regular interference, and random-noise reduction over a band as wide as 200 Hz. Work at the depths defined here (>8 km for oil and gas, >1500 m for solid minerals) is at present almost blank. One of the main difficulties in deep oil and gas seismic exploration is stratigraphic and lithological reservoirs, which require not only a good data foundation but also an innovative interpretation system. Complicated geological and physical conditions, such as pressure, make work that was already difficult in shallow and medium-deep oil and gas exploration still more complicated and difficult at great depth, and excessive drilling is not permitted.
Only a leap in exploration seismology technology can meet these demands; otherwise the work goal will not be achieved [1].", "It was once common in the scientific community to say that "going up to the sky is easy; going into the earth is hard". The reasons may include: ① humans can use space vehicles to enter space (this does not refer to observation equipment such as radio telescopes); ② the depth of direct entry into the earth is very limited (the Kola superdeep borehole of the former Soviet Union, about 12.3 km, is the deepest); ③ even if science and technology develop rapidly, progress in underground engineering will remain slow. So is it true that entering the sky is easy and entering the earth is hard? The answer is no. It can be summarized thus: the sky is almost infinite, and the aerospace goals of spacecraft emerge endlessly; the earth medium, though hard and difficult to enter, is finite. "Both going up to the sky and entering the earth are hard" is the scientific understanding. Here we mainly discuss entry into the earth. The purpose of entering the earth is to detect the structure and internal state of the earth, develop mineral resources, and benefit mankind. The means of entering the earth fall into three categories: drilling engineering (category A), geophysical techniques (category B), and geochemical testing (category C). The difficulty of category A lies in drill-tool materials and power supply; the difficulties of category C include the limited supply of inclusions and the limits of testing technology; category B is essentially indirect entry into the earth. The scientific community cannot ignore such indirect means: they can measure structures and motion states of the deep earth far beyond the reach of categories A and C, though their accuracy needs continual improvement. Category B, geophysical technology, broadly includes the gravity field, the magnetic field, the geoelectric field, natural seismic wavefields, artificial seismic wavefields, remote sensing, and so on. The earth's gravity field is a very important physical field: in processing aimed at underground targets, the solid-tide factor cannot be ignored, gravity waves deserve further consideration, and underground targets should be probed step by step using high-order harmonic analysis of the field. The origin and secular change of the earth's magnetic field is an important scientific problem; when the magnetic field is used for studies of the earth's interior, the key issue is resolving power. Interpretation of geoelectric-field observations is conditional, and supplementing them with artificial-source electric-field detection may improve accuracy. The core problem of the "earth-penetrating electromagnetic missile" equipment proposed at home and abroad in recent years is the limitation of energy. Underground structure interpreted by remote sensing keeps deepening, and remote sensing is a detection method with great application prospects. By contrast, the wavefield generated by natural earthquakes has the advantages of wide coverage, great depth, and many opportunities for improving accuracy; artificial seismic wavefields give high accuracy but at high cost. Geophysicists should judge and analyze their own working capabilities reasonably, and advocate combining regional with local detection, observation with interpretation, and detection scale with accuracy.
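Regarding the "high-order harmonic analysis" of potential fields mentioned above: a standard rule of thumb in geodesy relates the maximum spherical-harmonic degree of a field model to the smallest surface feature it can resolve (half-wavelength ≈ 20000 km / degree). A small sketch, with target sizes chosen purely for illustration:

```python
HALF_CIRCUMFERENCE_KM = 20_000.0  # half of Earth's circumference

def degree_for_resolution(half_wavelength_km: float) -> float:
    """Spherical-harmonic degree needed to resolve features of a given size."""
    return HALF_CIRCUMFERENCE_KM / half_wavelength_km

print(degree_for_resolution(100.0))  # degree ~200 for 100 km targets
print(degree_for_resolution(10.0))   # degree ~2000 for 10 km targets
```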
Only by using physical-field data that carry enough information from depth (including the target) can the target solution be obtained; the target solution comprises the internal structure and state of the earth (Fig. 1) [1]. The information about the earth's interior obtained from long-term scientific research and experiment is very valuable: for example, the layered structure of the earth, the differential motion near the inner core, and the inhomogeneity, anisotropy, anelasticity, thermoelectric behavior, nonlinearity, heterogeneity, and randomness of the earth medium. Are the rotational differential motions derived from seismology coupled to precession? Is the fluid part of the earth's deep core and mantle structure in a single state only? May there be material properties in the earth that have never yet been observed at the surface? Does the "into the earth" project extend to other planets? Although on a macro level these may not all rank as major scientific problems, they are still problems for our geophysical community.

Fig. 1 "Into the earth": drilling and geophysical methods. The dotted line in the figure marks the stage of drilling into the earth medium, whose premise is solving physical and mechanical problems such as the hardness and melting point of the drill; even if it became possible to drill into the B'' layer, its large-scale lateral variations would still be hard to resolve. Geophysical methods may be the main means, but without improved accuracy they fall far short of quantitative description. What accuracy is appropriate: 10⁻² m? 10⁻³ m? Accuracy (H)? The "?" in the figure indicates undetermined structure and state.

On the basis of the background above, we list the scientific problems of "entering the earth" as follows. (1) The scientific community recently reported that "the surface hardness of neutron stars reaches 10 billion (10¹⁰) times that of steel". We do not discuss here the technical means of studying that hardness, but extend the result in another direction. First, why is the hardness so great? On what does the hardness of matter in the universe depend? Is it possible to reorganize molecular structure so as to change the hardness of a substance? Even if an ultra-hard material were obtained and used as a category-A drilling tool, the problem of melting point would remain; it is suggested that a miniature "heat-transfer" device be installed inside the drill head to solve the melting-point and power-supply problems at the same time. (2) Within the range of earth scales (including medium and small scales), establish geophysical field equations of mixed scales and comprehensive states, with corresponding new inversion techniques. The differences between this problem and another scientific problem (high-precision seismic detection of deep resources) include: ① here the field is a comprehensive geophysical field; ② what is sought is a quasi-large-scale dynamic equation and its inversion technique. The new inversion technique raises the problem of discriminating between error and accuracy, and a new discrimination method must be provided. (3) On the "step-by-step" research process: combine distant goals with progress on short-term goals, combine theoretical discussion (allowing assumptions and approximations) with mathematical simulation, and verify theories and methods.
(4) The scientific problem of "entering the earth" requires multidisciplinary research and is a synthesis of human civilization worldwide. We recommend drawing up (and implementing) a global "into the earth" detection schedule. Its main contents include: ① materials research (disciplines: chemistry, physics, materials science, mechanics, geology, geophysics, etc.), with the goal of preparing category-A means and tools and conducting the corresponding experimental research; ② research on the different layers of the earth with different geophysical methods (category B) at different positions and depths, with the goals of reaching the B'' layer in the near term (50 years) and the D'' layer in the mid term (100 years), where the material properties of the earth include its state and changes, and the structure is resolved to an accuracy of 1000~100 m; ③ establishment of a detection organization: set up five short-term detection and observation areas around the world, and complete the demonstration work for the mid-term and long-term detection and observation areas; develop detection equipment, including requirements on theoretical level, matching between detection level and theoretical level, and equipment replacement; and establish a global geophysical detection database (version U0), the U0 version collecting existing detection results (after demonstration) as well as the existing body of detection theory.", "The geoscience community has accepted the knowledge that the earth has a layered structure, and the layering is not uniform from shallow to deep. Fluids at the earth's surface such as rivers and seas, oil-gas water near the surface, mine water, mineralized water in the crust, magma chambers, molten material in the asthenosphere (B''), and fluids of the core-mantle transition zone (several thousand kilometers thick) all count as fluids within the earth's layered structure. Geoscience, physical chemistry, biology and other academic communities have carried out much fruitful research on fluids in the earth [1,2]. Type I fluids include fluids in basins and in solid-mineral settings (<12 km); Type II refers to magma chambers and other melting zones in the crust (<40 km, generally in thick crust); Type III refers to material of the asthenosphere (B''); and the material of the D''~E region belongs to Type IV (Fig. 1) [1]. For Type I fluids, although the problems have not been completely solved (e.g., the detection of mine water), much research has been invested and much verified knowledge exists (omitted here). The remaining three types are also accepted by the geophysical community, but not entirely. Mr. Xie Hongsen has pointed out clearly [2] that "the lateral density inhomogeneity in the lower mantle may reflect subduction-zone material carried down into the mantle by mantle convection", and that, to establish the role of core-mantle chemical reaction in the migration and evolution of earth materials and its relation to mantle convection, core convection, mantle plumes, and subducting oceanic plates, "a variety of methods must be used to make more complete and detailed observations of the core-mantle boundary region, among which seismological observation must be strengthened." The above shows that fluid motion exists in the earth and interacts closely with other materials and their structures.
B'' fluids, collectively called asthenospheric fluids, are measured by different geophysical methods, such as the low-velocity layer of seismology and the low-resistivity layer of geoelectrics (other indicators, e.g. geothermal, are less distinct), and their identification is not exact.

Fig. 1 Fluid states that may exist in the earth's layered structure undergo scale transformations, and their physical properties and structures may be multidimensional functions; the existence of small-scale fluids near the surface cannot be fully determined (e.g., mine water). Geophysical methods are the main means of detecting fluids in the earth.

For the D''~E core-mantle transition zone the problem is more complicated. An obvious question is: why is it so thick? During the formation of the earth there was a "clustering effect", a feature that may also exist in other planets; the result of clustering is a relatively balanced state, but why then does the density of the clustered material not change gradually? Why do different materials of similar density differ so greatly in thickness? It is evidently not "sedimentation". What is the function of the fluid layer? Does the existence of fluid layers differentiate the motion of matter in the earth's interior, or does fluid viscosity counteract this differentiating factor? There are other reports on the earth's fluid problem, such as claims that the material is "close to a fluid state, but not a fluid", the state of B'', changes in the thickness of the fluid layer, lateral or even bidirectional changes of fluid state, and possible fluids beyond the four types above. "The role of fluids" includes two aspects. The first is the determination of the fluid: methods, accuracy, scale, fluid state, properties and extent, the relation of the fluid to adjacent materials, and the evolution and cause of the fluid. The second is the role of the fluid: in the large-scale equilibrium state of the earth, in mesoscale plate motion, and in small-scale mineral resources. The role of fluids in the earth's layered structure contains a large number of scientific problems to be explored by geoscience (especially geophysics), physical chemistry, and biology; a few are given here for reference. Engineering experience in exploration seismology shows that basin oil and gas can be indicated by "bright spots" and "flat spots", yet on most seismic stacked sections such indications are hard to see, and reservoirs are often determined only after repeated processing and interpretation. Why can we not obtain "bright spot" and "flat spot" profiles? Is the reflection coefficient not large enough? Can they be obtained by suitable processing, and with what accuracy? It is known that the existence of mine water is closely related to coal-seam mining, but where is the water? How does it move? What drives it? All this shows that a set of scientific problems remains unresolved even for Type I fluids.
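On the "bright spot" question just raised: the normal-incidence reflection coefficient depends on the acoustic-impedance contrast, so a gas charge that lowers a sand's velocity and density strengthens the reflection. A minimal sketch with purely illustrative rock properties (all numbers are assumptions):

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence reflection coefficient from acoustic impedances."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Shale over brine sand vs. shale over gas sand (illustrative values)
shale = (2400.0, 2900.0)       # density kg/m3, Vp m/s
brine_sand = (2250.0, 2700.0)
gas_sand = (2050.0, 2100.0)

print(reflection_coefficient(*shale, *brine_sand))  # ~ -0.07: weak reflection
print(reflection_coefficient(*shale, *gas_sand))    # ~ -0.24: a "bright spot"
```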
Type II fluids, such as the partially molten layer in the crust of south-central Qinghai-Tibet found by the INDEPTH project, have been inferred to have the nature of "fluid" after comprehensive study by various geophysical methods. The large Moho fault zone in the Songliao Basin interpreted by SONGLIAO-DRIP opens a channel for the thermal upwelling of mantle material, which may be related to the formation of a low-resistivity zone tens of kilometers thick. These scientific results have, to a large extent, interpretations acceptable to the academic community. But how accurate is near-vertical seismic reflection technology, especially for material state and properties beyond structural imaging? Considering the investment required, is there an alternative technology, and how would it substitute? The results of the "National 3D Lithospheric Structure and Tectonics" project show that it is difficult to resolve the state of the B'' layer with near-vertical reflection seismics, mainly because of the three-dimensional variation in kind and degree of the B'' layer's material properties. Although the Vp distribution obtained from natural-earthquake imaging can show these variations, it is clearly not accurate enough. In particular: do B'' materials have fluid properties? What composition causes the fluid-like signature? Are physical-chemical tests generalizable? We can only recommend: use an integrated detection system as (lateral) control, supplement it with near-vertical reflection seismics as a new solution for control, and deploy Vp imaging for improved precision. It is suggested that institutions and organizations jointly investigate the fluid problem of the D''~E zone. The four types of fluids play different roles in the material structure and dynamic balance of the earth [2]. The B'' zone may regulate the quasi-equilibrium state of the evolution of the earth's upper-mantle structure, and the fluids that may exist in the thousands-of-kilometers-thick D''~E region may play an important transitional role in the core-mantle relationship and in global stability. The questions here are: why do B'' and D''~E exist, and where? Where do they fit in the earth's balance system (or in equations yet to be established)? How do they act on the earth? Are they related to surface ecosystems? Do they require a dynamic observation system for their study? So far, few hypotheses have been advanced about the D''~E zone.", "An earthquake is a natural phenomenon; its occurrence results from the continuous movement of the earth's interior, and in general has nothing to do with human activity. Since the 1960s, however, many cases of human-induced earthquakes have been discovered. The first discovery was that injecting water into deep wells induced earthquakes in Colorado [1]. The Rangely oilfield lies in northwestern Colorado; since 1957 water had been injected into the reservoir to increase crude-oil production. Nearby seismic stations recorded seismicity around Rangely: from October 1969 to November 1970 more than 1000 small earthquakes occurred, and the number and size of the induced events were directly related to the pressure and volume of the injected water. Another case of injection-induced earthquakes also occurred in Colorado. In 1963, at the Rocky Mountain Arsenal near Denver, the capital of Colorado, contaminated water was injected into a well more than 3 km deep, and small and medium earthquakes began to occur near Denver, where historically none had been recorded. In 1970 the injection was changed to pumping, and the small and medium earthquakes ceased. Later, injection-induced earthquakes were also discovered in many areas of other countries, for example the salt mines of Sichuan, China [2], and the Daqing and Shengli oilfields. These phenomena all indicate that water injection is closely related to seismicity. Why can water injection induce earthquakes? Most likely because pressurized water penetrating the rock raises the pore pressure, reduces the frictional resistance on rupture surfaces in the crust, and eventually causes slip.
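This mechanism is often summarized by the Coulomb failure criterion with effective normal stress: slip becomes possible when shear stress exceeds μ(σn − p), so raising the pore pressure p pushes a fault toward failure. A minimal sketch with invented stress values, for illustration only:

```python
def coulomb_failure_stress(tau, sigma_n, pore_pressure, mu=0.6, cohesion=0.0):
    """Coulomb failure stress (Pa): positive values mean the fault can slip.
    Effective normal stress = total normal stress minus pore pressure."""
    return tau - cohesion - mu * (sigma_n - pore_pressure)

TAU, SIGMA_N = 40e6, 80e6  # assumed shear / normal stress on the fault, Pa
print(coulomb_failure_stress(TAU, SIGMA_N, pore_pressure=10e6))  # -2e6: stable
print(coulomb_failure_stress(TAU, SIGMA_N, pore_pressure=20e6))  # +4e6: slip possible
```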
However, many reservoirs show no earthquakes after water injection, mainly because there are no fractures underground there; that is, the basic condition for earthquakes is absent. The injections mentioned above are shallow. The international continental deep drilling program drilled a well about 10 km deep in Germany and conducted a deep injection-induced-earthquake test at a depth of 9 km [3]; a large number of earthquakes likewise occurred, related to the tectonic environment and the stress field. The magnitudes of all these induced earthquakes lie mostly in the range 3~4, that is, small and medium events; among artificially induced earthquakes, larger magnitudes have occurred only in reservoir-induced events. Since the beginning of the 20th century, large-scale water-conservancy projects such as reservoirs have been widely built in many countries, and it has been found that seismicity near a reservoir area (both the number and the magnitudes of earthquakes) increases significantly upon impoundment; this is called reservoir-induced seismicity, or reservoir earthquakes for short. Reservoir-induced earthquakes first occurred in 1931 at the Marathon Reservoir in Greece; since then it has been realized that engineering activities such as building reservoirs can induce earthquakes. More than 10,000 large and medium-sized reservoirs have been built in the world, but only about a hundred of them have induced reservoir earthquakes [4], mostly small and medium events that did no damage to the reservoirs; among these, a dozen or so destructive reservoir earthquakes have occurred, with a maximum induced magnitude of 6.5 (Table 1). Reservoir impoundment and induced seismicity is a very complicated issue: although it has been studied for many years, the work remains largely phenomenological and statistical, and understanding of the mechanism is still insufficient. Precisely for this reason, an academic debate arose in 2009 over the relationship between the 2008 Wenchuan earthquake in China and the construction of the Zipingpu Reservoir. Zipingpu is a large water-conservancy project on the Minjiang River in Sichuan: officially started in March 2001, impounded from September 2005, and fully completed in 2006. Its total storage capacity is 1.112 billion m³, and the maximum height of its concrete-faced rockfill dam is 156 m, making it one of the few high dams of this type in China. The reservoir is a little more than ten kilometers from the epicenter of the Wenchuan M8 earthquake of May 12, 2008. Was the Wenchuan earthquake triggered by the impoundment of Zipingpu? Richard [6] and Moore [7] believed that the construction of the reservoir was "an artificial inducement of the Sichuan earthquake".
Chen [8,9] holds that, from the standpoints of phenomenology and mechanical analysis, the Wenchuan earthquake differs greatly from typical reservoir earthquakes and was not a reservoir earthquake caused by impoundment. This academic disagreement is not limited to the Wenchuan earthquake: whether the construction of the Three Gorges Reservoir could induce a major earthquake is likewise a current scientific problem. Research on human-induced earthquakes has at least two aspects: first, how to estimate, before the construction of large projects, the possibility and hazard of induced earthquakes, so as to minimize earthquake disasters; second, in areas where destructive large earthquakes may occur, whether artificially inducing small earthquakes can convert a large earthquake into many small ones, and a destructive event into many non-destructive ones, so as to achieve earthquake control. Clearly, a continually deepening understanding of human-induced earthquakes is of great significance both for the development of the discipline and for practical application.", "The energy released by the decay of the radioactive elements uranium, thorium, and potassium in rocks is the main source of the earth's internal heat, and these elements were concentrated mainly in the shallow crust during the differentiation of the earth's spheres. The vertical distribution of radioactive heat production in the continental lithosphere is very important for studying the thermal-rheological structure and dynamic evolution of the lithosphere, and it has long been both a hot and a difficult issue for geothermal scientists. The classic vertical-distribution model is the exponential model proposed by Lachenbruch [1] in 1970 (Fig. 1), in which the radioactive heat production of crustal rocks decays exponentially with depth. Subsequent studies, however, have continually questioned this [2]. Some studies indicate that heat production in the upper crust first increases with depth and then decreases [3,4]; others point out that heat production does decrease with depth, but not exponentially [5]. The implementation of continental scientific drilling projects has given scientists the opportunity to observe directly the distribution of radioactive heat production in the shallow crust, providing an excellent test of crustal heat-production models.

Fig. 1 The exponential model of crustal heat production [1]: A is the heat production rate, z the depth, and A₀ the surface heat production rate

More than 20 continental scientific drilling projects have been carried out worldwide, but none of them has observed an exponential decay of heat production. The German continental scientific drilling project shows that radioactive heat production has a layered distribution, with step changes at lithological interfaces [6].
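The exponential model in Fig. 1 is commonly written A(z) = A₀·e^(−z/D), where D is a characteristic depth scale (often of order 10 km); integrating it over depth gives the crustal contribution A₀·D to surface heat flow. A short sketch with illustrative parameter values (the numbers are assumptions, not data from the drilling projects discussed here):

```python
import math

def heat_production(z_km: float, a0: float = 2.5, d_km: float = 10.0) -> float:
    """Lachenbruch exponential model A(z) = A0 * exp(-z/D), in microW/m3."""
    return a0 * math.exp(-z_km / d_km)

# Heat production at a few depths (microW/m3)
for z in (0.0, 5.0, 10.0, 20.0):
    print(z, round(heat_production(z), 2))

# Integrated crustal contribution to surface heat flow: A0 * D
print(2.5e-6 * 10_000.0)  # 0.025 W/m2 = 25 mW/m2, a plausible crustal share
```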
The vertical distribution of heat production revealed by the Chinese Continental Scientific Drilling project, located in the Sulu ultrahigh-pressure metamorphic belt, is particularly distinctive: the average heat production of rocks in the upper 5 km of the crust is 1.23 μW/m³; from the surface the heat production increases with depth, drops abruptly at about 1650 m, and then continues to increase with depth (Fig. 2). In addition, studies of the North China Basin and the southeastern coastal areas of China based on rock samples from oil drilling likewise found no exponential decay of heat production with depth [7]. Borehole data, though direct and reliable, can reveal the vertical distribution only in the uppermost crust. To make up for this deficiency, many scholars have studied heat production using exposed crustal sections, but such studies are affected by many factors, such as geological structure, and can hardly reveal the vertical distribution of heat production through the entire crust. For this reason, some scholars use geophysical data to invert indirectly for the heat production of deep crustal rocks. Rybach and Buntebarth proposed an empirical relation between seismic wave velocity and heat production based on laboratory observations of both quantities [9], and similar work followed around the world.

Fig. 2 Vertical distribution of heat production revealed by the Chinese Continental Scientific Drilling [10]: (a) lithology; (b) vertical distribution of heat production, where the scatter points are measured values, the line the mean, and the shading the standard deviation

However, such studies have two problems: first, many different rock types share the same wave velocity while differing considerably in heat production; second, the empirical formula is derived from laboratory data, so temperature and pressure corrections must be made in practical applications, and the dependence of wave velocity on temperature and pressure is itself a difficult problem. Early studies of lithospheric heat production paid more attention to the vertical distribution in the continental crust and often ignored the contribution of radioactive heat production in the upper mantle. Although the radioactive heat production of mantle rocks is very low, its contribution to the thermal-rheological structure and evolution of the lithosphere is still significant and can provide important constraints on the study of the earth's thermal history and mantle convection [10]. Admittedly, the magnitude of radioactive heat production in the mantle is still quite controversial: most scholars believe it is extremely low, about 0.002 μW/m³, but outcrop data show a range of 0.014~0.46 μW/m³, and the mantle heat-production data revealed by the Chinese Continental Scientific Drilling project [8] are even more scattered, varying from 0.02 to 1.76 μW/m³.
The relationship of radioactive heat production to the type, composition, age, and properties of mantle rocks still requires much research. The vertical distribution of radioactive heat production in the continental lithosphere thus remains a major problem of theoretical geothermics. To solve it we cannot rely entirely on continental scientific drilling: after all, the deepest scientific borehole, on the Kola Peninsula, reaches only 12,261 m, about one third of the average thickness of the crust. A combination of geological, geophysical, and geochemical means must be employed.", "The development history of geomagnetism. The geomagnetic field is one of the basic physical fields of the earth. It effectively shields cosmic rays and protects life on the earth, and therefore plays a vital role in the evolution of terrestrial life. As for the origin of the geomagnetic field, however, Einstein recognized as early as the beginning of the 20th century that it was one of the most important fundamental problems in physics, and it has not been completely solved to this day. Since the geomagnetic field cannot be touched, humans were unaware of its existence through most of their long history. It was not until the 6th century BC that the ancient Greek philosopher Thales observed that magnets exert an attraction. The ancient Chinese also noticed the action of magnets by the 3rd century BC and invented the first magnetic pointer (the sinan); around the 10th century AD the compass was invented, becoming one of the four great inventions of ancient China and providing humanity with a means of quantitatively describing the magnetic field. In ancient times the compass was used mainly in navigation; with the development of seafaring it was brought by sea to the Arab region in the 12th and 13th centuries and later introduced to Europe by the Arabs. The invention and application of the compass played a vital role in Zheng He's seven voyages to the western oceans 600 years ago and in Columbus's discovery of the New World 500 years ago, a model of the successful human use of the geomagnetic field for navigation. Early human observation of the geomagnetic field, however, was limited to magnetic declination, which can be traced back to a group of monks in Tang Dynasty China around 720 AD; in addition, Shen Kuo in the Northern Song Dynasty discovered the declination through the observation that the compass needle does not point due south. With these simple observations of declination, people began to ponder the formation and evolution of the geomagnetic field. In the 13th century AD it was thought that the north-pointing property of the magnetic needle might be closely related to certain stars; later it was believed that large magnetite deposits at the earth's poles made the needle point north. This hypothesis is of course untenable, but it was of great significance, because it marked the beginning of the transfer of human thinking from the sky to the earth, a step toward the truth. To the development of modern geomagnetism the Chinese contributed little; instead, Europeans began gradually to improve the observation and description of the geomagnetic field in a more scientific way.
In 1576 the Englishman Robert Norman first discovered the geomagnetic inclination and noticed its variation. This discovery was very important for understanding the cause of the geomagnetic field: before him, Westerners had believed that the source of the field lay at the two poles of the earth and was a power bestowed on the world by God, and his work made people realize that the geomagnetic field might be a natural phenomenon. In the 16th century the British established a geomagnetic station in London to observe the declination; in the 17th century it was discovered that the declination changes with time; and at the end of the 18th century de Rossel began to observe how the strength of the geomagnetic field varies with latitude. As observational data accumulated, the German mathematician Gauss determined the spherical harmonic coefficients of the geomagnetic field in 1838 on the basis of potential theory, setting the precedent for the quantitative study of the geomagnetic field with modern physical and mathematical methods. The significance of this work is also that the complex geomagnetic field can be decomposed into different components, so that its evolution can be studied on different scales of time and space, providing a new method for interpreting the field's complex behavior. To obtain more ancient information about the geomagnetic field, Delesse and Melloni in the mid-19th century began to study the correlation between the magnetization recorded in rocks and the geomagnetic field. At the beginning of the 20th century, David and Brunhes discovered volcanic rocks whose direction of magnetization is opposite to that of the present geomagnetic field, revealing for the first time that the direction of the geomagnetic field can reverse by 180°. This discovery became another major milestone in the understanding of the earth's magnetic field.

Geomagnetic polarity reversal and the earth's internal dynamics. At the beginning of the 20th century, the discovery of geomagnetic polarity reversal laid a foundation for the later revolution in the earth sciences (the theory of plate tectonics); it not only deepened human understanding of the origin of the geomagnetic field but also further changed our understanding of the formation and evolution of the earth itself. Subsequent studies found that polarity reversal is not confined to the earth: the magnetic fields of the sun and the Milky Way behave similarly [1]. A simple definition of geomagnetic polarity reversal is a change of sign of the dipole term g₁⁰ of the geomagnetic field. But if the field were dominated by the equatorial dipole terms g₁¹ and h₁¹, a sign change of g₁⁰ would have little effect on its overall behavior; likewise, because of non-dipole contributions, a record of reverse magnetization found at one place on the earth's surface cannot by itself confirm a sign change of g₁⁰. Therefore, statistically, a geomagnetic polarity reversal must have stable global character, which implies that it is globally isochronous on short (e.g., millennial) time scales and can thus serve as an effective marker for time correlation.
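For a concrete sense of the dipole terms g₁⁰, g₁¹, h₁¹ discussed above: the strength and tilt of the centered dipole follow directly from these three Gauss coefficients. A small sketch using approximate IGRF-13 values for 2020 (quoted from memory, so treat the numbers as illustrative):

```python
import math

# Degree-1 Gauss coefficients, nT (approximate IGRF-13 values for 2020)
g10, g11, h11 = -29404.8, -1450.9, 4652.5

# Strength of the centered dipole and its tilt from the rotation axis
m = math.sqrt(g10**2 + g11**2 + h11**2)
tilt = math.degrees(math.acos(abs(g10) / m))

print(round(m))        # ~29805 nT equivalent dipole coefficient
print(round(tilt, 1))  # ~9.4 degrees: the axial term g10 dominates today
```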
Although Brunhes had discovered reversely magnetized volcanic rocks early in the 20th century and proposed that they might be caused by a reversal of geomagnetic polarity, this view was not generally accepted at the time, for two reasons: first, there were not enough data to demonstrate the global nature of polarity reversal; second, some magnetic minerals in volcanic rocks possess self-reversing magnetization. In response to these doubts, various scholars carried out more detailed studies. First, remanent magnetizations opposite to the present geomagnetic field were found in many different volcanic rocks. Second, it was recognized that self-reversal of magnetization occurs only in certain high-temperature oxidized titanomagnetites and is very rare among magnetic minerals. The reversed remanence directions found in so many different volcanic rocks could therefore only have been caused by reversals of geomagnetic polarity, not by self-reversal of the magnetic minerals. Once the reality of polarity reversals was confirmed, the natural questions were: how often does the geomagnetic polarity reverse, and how does the field change polarity? For more than a hundred years these questions have been a focus of earth-science research; in particular, since the 1950s, with the implementation of ocean drilling programs and the development and application of isotope dating, construction of the geomagnetic polarity time scale (GPTS) has advanced greatly [2,3]. After more than half a century of effort by scientists in many countries, we now have a fairly clear picture of the reversal sequence over the past 158 Ma, within which there were 295 stable polarity intervals and about 200 short polarity events [4]. Research on the Phanerozoic reversal sequence has also made great progress: three superchrons have been identified, namely the Moyero Reversed Superchron (MRS), lasting about 30 Ma (roughly 490-460 Ma) [5], the Kiaman Reversed Superchron (KRS), lasting about 50 Ma (roughly 310-260 Ma) [6], and the Cretaceous Normal Superchron (CNS), lasting about 37 Ma (roughly 120-83 Ma). Together these three superchrons occupy about 20%-25% of Phanerozoic time [7]. These results show at least that, since the Mesozoic, the geodynamic processes associated with the geomagnetic field can occupy two states, reversing and non-reversing, and that the variation of reversal frequency is itself an extremely complicated process. The frequency of geomagnetic reversal (FGR) is the number of polarity reversals per million years. Cox [2] studied the statistics of the polarity time scale for the past 10 Ma and found the polarity interval lengths (τ) to be consistent with a Poisson (memoryless) reversal process. Studying a longer record (165 Ma), Zhu Rixiang et al. [8] argued that τ follows a lognormal rather than a Poisson-type distribution.
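The statistical question about the intervals τ can be made concrete with a short sketch: fit both candidate distributions to a set of interval lengths and compare likelihoods. The "intervals" below are synthetic stand-ins generated for illustration, not the real GPTS data.

```python
import numpy as np
from scipy import stats

# Do polarity-interval lengths tau look Poisson-process-like (exponential
# intervals) or lognormal?  Synthetic toy intervals, in Myr, stand in for data.
rng = np.random.default_rng(0)
tau = rng.lognormal(mean=-1.0, sigma=0.9, size=300)

# Fit both candidate distributions and compare log-likelihoods.
lam = 1.0 / tau.mean()                                # MLE rate of exponential
ll_exp = np.sum(stats.expon.logpdf(tau, scale=1.0 / lam))
shape, loc, scale = stats.lognorm.fit(tau, floc=0.0)  # MLE lognormal fit
ll_logn = np.sum(stats.lognorm.logpdf(tau, shape, loc, scale))

print(f"exponential log-likelihood: {ll_exp:.1f}")
print(f"lognormal   log-likelihood: {ll_logn:.1f}")
# A clearly higher lognormal likelihood would favor Zhu et al.'s conclusion;
# with real GPTS intervals one would also apply a formal test (e.g. K-S).
```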
McFadden and Merrill [9] found that FGR decayed gradually from 160 Ma to 118 Ma, then entered the Cretaceous Normal Superchron (CNS); after 83 Ma, FGR gradually increased again. The rates of change of FGR before and after the CNS differ significantly: specifically, FGR increased faster than it had decreased. To understand the mechanism of polarity reversal, one effective approach is to understand how the field changes during the polarity transition itself. For this we need to address several basic issues. ① The time scale of a polarity transition. From the westward drift rate of the non-dipole field and the thickness of the outer core, the overturn period of the liquid outer core can be estimated roughly at 500 years [10], while the magnetic field in the solid inner core varies on a time scale of thousands of years [10]. On the other hand, paleomagnetic studies show that the Matuyama-Brunhes (MB) polarity transition lasted at least about 5000 years [11,12]. We therefore have reason to believe that the solid inner core plays an important role in controlling geomagnetic polarity reversal [10]. ② How the direction of the field changes during the transition. In the 1970s, for lack of high-resolution records, polarity reversal was mostly regarded as a single fast event. Later geomagnetic studies showed that a complete polarity transition comprises several rapid reversal episodes [11,13]. Researchers have further examined whether the field is controlled by the dipole or the non-dipole components during these rapid episodes, and found that the virtual geomagnetic pole (VGP) paths during transitions not only tend to follow circum-Pacific longitudes but also often cluster at specific places (such as over Australia) [11,14], implying that the field may still be dominated by dipole components during the transition. Numerical modeling likewise shows that regions of increased heat flux at low latitudes correspond to significant geomagnetic activity, affecting the equatorial dipole most clearly and tying the distribution of transitional VGPs closely to the heat flux through the core-mantle boundary (CMB); even so, the dipole component remains dominant during the transition [15]. ③ How the field strength changes during the transition. Paleomagnetic studies show that a drop in field strength is a necessary condition for reversal: the decrease in intensity precedes the onset of the directional polarity change, and the recovery of intensity lags the end of it. Further statistical analysis indicates that a reversal can truly proceed only after the field strength has dropped to about 20% of its normal value [16].
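The VGP paths referred to in point ② come from a standard conversion. The sketch below implements the usual geocentric-dipole formulas that map a site's declination D and inclination I to a virtual pole; the site coordinates and the example direction are arbitrary illustrative values.

```python
import numpy as np

# Standard virtual geomagnetic pole (VGP) calculation: convert a site's
# declination D and inclination I into a pole position under the
# geocentric-dipole assumption.
def vgp(site_lat, site_lon, D, I):
    """All angles in degrees; returns (pole_lat, pole_lon)."""
    slat, slon = np.radians(site_lat), np.radians(site_lon)
    D, I = np.radians(D), np.radians(I)
    p = np.arctan2(2.0, np.tan(I))            # magnetic colatitude: cot(p) = tan(I)/2
    plat = np.arcsin(np.sin(slat) * np.cos(p)
                     + np.cos(slat) * np.sin(p) * np.cos(D))
    beta = np.arcsin(np.sin(p) * np.sin(D) / np.cos(plat))
    # Longitude branch, following the usual paleomagnetic convention.
    if np.cos(p) >= np.sin(slat) * np.sin(plat):
        plon = slon + beta
    else:
        plon = slon + np.pi - beta
    return np.degrees(plat), np.degrees(plon) % 360.0

# A nearly axial direction maps to a high-latitude pole; transitional
# (shallow or reversed) directions map to the low-latitude VGPs whose
# clustered paths are discussed in the text.
print(vgp(45.0, 120.0, D=5.0, I=60.0))
```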
④ The globally heterogeneous expression of reversals. On a global scale, the duration, the field strength, and the onset time of one and the same polarity reversal differ from place to place on the Earth's surface. For example, in a study of the MB transition, Leonhardt and Fabian [17] found that the transition lasted only a few thousand years in the Atlantic and eastern Pacific regions but could reach tens of thousands of years in Africa and elsewhere. In the South Atlantic the reversal began earliest (about 770 ka), while in the central Pacific it began at least 5000 years later than in the South Atlantic (Fig. 2). ⑤ Precursors of polarity reversal. Studying continuous marine sediment records, Hartl and Tauxe [18] found another episode of markedly decreased intensity 20-25 ka before the MB reversal. Taking these findings together, we believe the geomagnetic field underwent a precursory excursion roughly 20,000 years before the MB reversal. The behavior of the field during polarity transitions and the long-period trend of FGR carry rich information on the dynamics of the Earth's interior. Glatzmaier et al. [19] found from theoretical model calculations that the pattern of heat flow at the core-mantle boundary (CMB) is closely tied to the behavior of the geomagnetic field: when the heat flow has a latitudinally zonal pattern, the simulated reversal behavior agrees well with observation, in particular reproducing the preferred circum-Pacific VGP paths. Theory indicates that the fluid motion in the core is slow beneath the Pacific [20], whereas lower-mantle P-wave velocities in the Pacific region are high [21]. Evidently the polarity reversal process is controlled not only by changes in the state of fluid motion in the core but also by the structure of the lower mantle: the lower mantle's structure, composition, and heat-flow state are all closely linked to the flow pattern of the outer core and to the geodynamo process. On longer time scales, the long-period variation of FGR is likewise related to the lower mantle [22,23]. During the CNS, the peculiar behavior of the geomagnetic field coincided with changes in global heat flux. Courtillot and Olson [7] showed that reversal frequency is directly proportional to the heat flow through the CMB, and that the strength of this dependence itself depends on the state of the geodynamo, that is, whether it leans toward a reversing or a non-reversing regime. If the geodynamo sits in a transitional state between the two regimes, a small change in heat flow through the CMB can end a superchron, and may also alter the field strength. If heat flow through the CMB controls the end of a superchron, what controls its beginning?
The spacing of superchrons is related to long-period changes in mantle dynamics, which in turn are tied to the Wilson cycle. Originally proposed to describe continental collision and breakup, the Wilson cycle is now also invoked in connection with atmospheric CO₂, seafloor spreading rates, global sea-level change, polar wander, continental tectonics, and magmatism. Although these events are non-periodic, the possible correlation between Wilson cycles and superchrons deserves attention. At the same time, reversal frequency also depends on the spatial pattern of heat flow through the CMB. Numerical simulations show that heat flow through the CMB is greatest at the equator and at the poles. If the dynamics of the Earth's interior tend to reinforce this pattern, the dynamo will produce a stable, high-intensity axial dipole field; in the opposite case the geodynamo generates an unstable, weak dipole field, and polarity reversals occur. During the transition from the non-reversing to the reversing state, the field intensity should decrease, and so should the dipolarity of the field. In addition, the location of mantle plumes is closely related to polarity reversal: a plume generated at the equator or at the poles will not end a superchron, whereas one generated at mid-latitudes easily will. Plumes originating at the CMB are an important agent for ending superchrons; they produce flood basalts (traps) at the surface and may cause mass extinctions. At the same time, we should note that even within the Phanerozoic only some traps are associated with superchrons and extinctions. What, then, makes some mantle plumes "killer" plumes while others are not? Courtillot and Renne [24] pointed to differences in how plumes impinge on the surface, such as trap volume, chemical composition, eruption location, the climate state at the time of eruption, and the number of eruptive pulses the plume triggers. So which plumes can become killer plumes? Only those that originate at the CMB, rise rapidly, and still carry sufficient thermal energy when they intrude the lithosphere. A related question is which plumes affect both the surface environment and core dynamics. The ascent time of the plume may be one of the key factors: if the interval between the end of a superchron and the trap eruption is taken as 10-20 Ma, the plume must rise at about 0.3 m/a, an order of magnitude faster than plate motion (see the sketch below). Such a speed requires either very large buoyancy or ascent along pre-existing channels as chemically distinct, rapidly rising material. Courtillot and Olson [7] argued that mantle plumes can indeed link the two realms. Large-scale mantle convection has a time scale of about 200 Ma and can change the spatial and temporal distribution of heat flow at the CMB; when CMB heat flux is high, reversals are more frequent, and vice versa.
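The ascent rate quoted above follows from simple arithmetic, made explicit in the sketch below; the 10 Ma transit time is the low end of the text's 10-20 Ma range, and the comparison plate speed is an assumed typical value.

```python
# Back-of-envelope check of the plume ascent rate quoted in the text: a plume
# born at the core-mantle boundary (~2900 km deep) that surfaces 10 Ma after
# the end of a superchron must rise at roughly 0.3 m/a.
mantle_depth_m = 2.9e6          # CMB depth (m)
transit_time_a = 10e6           # assumed transit time (years); text gives 10-20 Ma
rise_rate = mantle_depth_m / transit_time_a
plate_rate = 0.03               # typical plate speed, m/a (a few cm per year)
print(f"plume rise rate ~ {rise_rate:.2f} m/a "
      f"(~{rise_rate / plate_rate:.0f}x a typical plate speed)")
# ~0.29 m/a, an order of magnitude above plate speeds, as stated in the text.
```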
In this process the thermal instability of the D″ layer plays an important role: because the D″ layer is anomalously hot, its viscosity is lower than that of the overlying lower mantle, and, driven by thermal buoyancy, plumes generated in the D″ layer rise through the mantle on a time scale of about 20 Ma, which neatly explains the time lag between the end of a superchron and the associated extinction. In addition, mantle convection can deform the shape of the CMB and thereby affect the geodynamo: a change in CMB shape puts the adiabat out of balance with gravitational equilibrium, producing horizontal temperature gradients. If the mechanical coupling between the fluid outer core and the lower mantle is taken into account, the thermal structure of the lower mantle can be linked to the mode and frequency of polarity reversal. In this thermo-mechanical core-mantle coupling model the dynamics of the D″ layer are extremely important, yet research methods capable of probing them are still lacking. To study the long-term evolution of the geomagnetic field in depth, we also need to know how the field behaved before 165 Ma. Coe and Glatzmaier [25] examined how the different components of the field affect its stability. As is well known, the field can be decomposed by spherical harmonics and, according to the expansion coefficients, divided into even and odd families. The angular dispersion S of VGPs is a function of latitude λ and can be written S = [a² + (bλ)²]^(1/2), where a and b are related to the even and odd families, respectively. The ratio b/a therefore measures the relative contributions of the two families, and a larger ratio represents an increasingly asymmetric field. Theoretical calculations show that the larger the contribution of the odd terms, the more stable the field; for example, the asymmetry of the field was highest during the CNS. Early in the Earth's evolution the inner core was relatively small, and the resulting field should have been highly asymmetric on a global scale; Coe and Glatzmaier [25] therefore argued that the geomagnetic field should have been more stable in the early Earth than it is now. These conclusions receive some support from paleomagnetic data: for example, the FGR recorded by Late Archean basalts in Western Australia is about 0.03 Ma⁻¹, more than fifty times lower than the average FGR since 165 Ma (1.7 Ma⁻¹) [26]. In addition, Elston et al. [27] studied a sandstone and igneous succession in North America spanning from about 1468 Ma to somewhat younger than 1401 Ma that records only four reversals. Such studies are still far too few; what is especially needed now is work on continuous Precambrian sections suitable for magnetostratigraphy.
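The dispersion law quoted above is easy to evaluate. In the sketch below the constants a and b are assumed round numbers chosen only to illustrate the shape of S(λ) and the meaning of the b/a ratio; they are not values fitted to paleosecular-variation data.

```python
import numpy as np

# VGP angular-dispersion law quoted in the text: S(lat) = sqrt(a^2 + (b*lat)^2).
# a and b below are assumed illustrative values (degrees; degrees per degree).
a, b = 12.0, 0.25

def vgp_scatter(lat_deg):
    return np.sqrt(a**2 + (b * lat_deg) ** 2)

for lat in (0, 30, 60):
    print(f"lat {lat:2d}: S = {vgp_scatter(lat):.1f} deg")

# The ratio b/a indexes the odd (antisymmetric) vs even (symmetric) families:
# a larger b/a means a more asymmetric field, which the text links to greater
# stability (fewer reversals), e.g. during the CNS.
print(f"b/a = {b / a:.3f}")
```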
Judging from the trend of FGR, the field is currently in an actively reversing interval. Over the past hundred years the dipole moment has decayed at about 0.05% per year; if it continued to decay steadily at this rate, it would take another 800-1000 years for the field strength to fall to the threshold at which a reversal can actually occur. The problem is that we cannot be sure the field will keep declining in the future as it has over the past 100 years, so we cannot say whether it will reverse within the next 1000 years. In fact, Constable and Korte [28] argued that although the field strength is decaying, its present value is still above the average for the past 0.78 Ma, and its decay rate is consistent with the variations of the past 7000 years. Moreover, the average field strength during the Brunhes chron is higher than the average over the past 165 Ma, implying that the Earth might again enter a superchron-like quiet state. To sum up, geomagnetic polarity reversal is a breakthrough point for understanding the origin of the geomagnetic field. Although a great deal of relevant data has accumulated over recent decades, the frequency of polarity reversal and the spatio-temporal behavior of the field during transitions remain frontier topics of the geosciences. We believe that, to deepen research in this field, it is very important to select sections in the circum-Pacific region suitable for studying the morphology of the field during polarity transitions [29]. In addition, study of the Steens Mountain volcanics of southeastern Oregon found the direction and intensity of the geomagnetic field changing at the astonishing rates of 3°/a and 300 μT/a, respectively [30], only about 1/4-1/5 of the value originally envisioned. The properties of the lower mantle will therefore also be one of the key scientific issues in the future. From observation, people realized that the geomagnetic field resembles that of a magnetic dipole placed at the Earth's center, and that its most important characteristic is its dynamic nature. Its strength changes with time, and once the strength has fallen to a certain level, a complete polarity reversal or a short-lived excursion often follows within a few thousand years. Furthermore, the reversal frequency (the number of reversals per million years) is not constant and shows a certain correlation with the evolution of field strength. Any mechanism or model proposed for the origin and evolution of the geomagnetic field must therefore honor these observational constraints. Mechanism of the origin of the geomagnetic field: Early explanations of the origin of the geomagnetic field were many and varied. On the basis of abundant geophysical observations, the mainstream view today is that the field originates in the Earth's outer core. Under high temperature and pressure the outer core is filled with a liquid conducting fluid whose motion, driven by various energy sources (for example, buoyancy generated by the growth of the solid inner core and its release of heat), generates a magnetic field: the so-called self-excited geodynamo. The magnetic field in the outer core can be decomposed into a poloidal field (which can penetrate the mantle and be observed at the surface) and a toroidal field (which cannot be observed directly).
Differential rotation of the fluid in the outer core shears the poloidal field, partially winding it into a toroidal field and thereby strengthening the toroidal field overall. Subsequent models have sought the specific patterns of outer-core flow that could sustain this self-excitation. A fluid-motion model reasonably consistent with reality was found in the early 1970s, but only with the growth of computing power, in 1995, was the working of a three-dimensional, nonlinear geodynamo first genuinely simulated. The outer core is not independent: it is bounded by the inner core and the lower mantle. Existing models suggest that a complete reversal of the geomagnetic poles may require the joint action of the inner and outer cores, whereas minor polarity events (excursions) affect only the magnetic field in the outer core, the field in the inner core remaining stable. In such simulations the choice of parameters is crucial, so we must recognize their limitations clearly: the simulated time spans are not long enough, the parameters are simplified, and so on. Nevertheless, in some respects these simulations offer interesting predictions for paleomagnetists to test. For example, Professor Peter Olson of Johns Hopkins University and his collaborators found that when mantle convection strengthens, the thermal energy of the outer core is released quickly, or when cold mantle material reaches the core-mantle boundary, the inner core grows rapidly; if the rotation rate of the core changes little, this can simultaneously raise the reversal frequency and lower the intensity of the dipole field [31]. As for the influence of the lower mantle, none of the earlier models could overcome this bottleneck. Recently Professor Zhang Keke of the University of Exeter and his team improved the modeling by incorporating, for the first time, a mantle with inhomogeneous electrical conductivity, establishing a global (whole-Earth) dynamo model [32] and taking another step toward the real structure of the Earth. Summary: Although great progress has been made both in paleomagnetic observation and in theoretical simulation of the geodynamo, it is only a small step toward solving the problem of the origin of the geomagnetic field. The main difficulties to be overcome are the limits of computing speed, the lack of accurate knowledge of the properties of the Earth's interior, and the need for better paleomagnetic instruments and data. Theoretical simulation involves massive computation, and compared with the real, complex behavior of the Earth, the temporal and spatial resolution of current simulations is many orders of magnitude too coarse. Besides, for humans it is easy to go up to the heavens but hard to go down into the Earth: we can soar into space, yet so-called ultra-deep drilling has reached only a dozen or so kilometers underground. For the properties of the outer core below 2900 km we must rely mainly on seismology and on high-temperature, high-pressure experiments for indirect estimates. Even so, viewed over the history of the subject, we are steadily closing in on the mystery of the origin of the geomagnetic field.
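The notion of a self-excited dynamo that reverses spontaneously, central to the models discussed above, can be illustrated with a classical toy system: the Rikitake coupled two-disc dynamo, whose disc currents flip sign irregularly. The parameter values and initial conditions below are assumed; this is a conceptual cartoon, not the three-dimensional magnetohydrodynamic simulations described in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rikitake two-disc dynamo: two coupled Faraday discs whose currents x1, x2
# feed each other's field coils; w is the differential rotation rate.
# mu (resistive dissipation) and A (torque asymmetry) are assumed toy values.
mu, A = 2.0, 5.0

def rikitake(t, y):
    x1, x2, w = y
    return [-mu * x1 + w * x2,          # induction vs. ohmic decay, disc 1
            -mu * x2 + (w - A) * x1,    # induction vs. ohmic decay, disc 2
            1.0 - x1 * x2]              # constant driving torque minus back-reaction

sol = solve_ivp(rikitake, (0.0, 300.0), [1.0, 0.5, 1.0], max_step=0.01)

# Sign flips of x1 are the toy analogue of polarity reversals.
signs = np.sign(sol.y[0])
n_flips = int(np.sum(signs[:-1] != signs[1:]))
print(f"toy 'reversals' during the run: {n_flips}")
```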
Through breakthroughs in paleomagnetic instruments and observation techniques we hope to obtain more data with which to invert for the geomagnetic field, especially its early history; through progress in seismology and in high-temperature, high-pressure experiments, more reasonable parameters for the Earth's interior can be obtained; and through faster computers, models closer to the real Earth can be built. With the continuing advance of science and technology and humanity's drive to explore the mysteries of nature, we will in the end understand how the geomagnetic field is generated and how it evolves.", "Although a large body of observations and research results preliminarily shows that electromagnetic phenomena in the seismogenic process objectively exist, and electromagnetic data have played a certain positive role in earthquake prediction research, serious doubts remain about seismo-electromagnetic phenomena. One of the biggest puzzles is why anomalous electromagnetic changes are frequently reported before earthquakes, while there are few or even no reports of anomalous electromagnetic changes during the coseismic stage, when stress changes and fault displacement are most pronounced. The existence of coseismic electromagnetic signals has thus become a key problem restricting further development of earthquake-related electromagnetic research. The reason this problem attracts so much attention is that its solution bears on whether we can better understand the relationship between electromagnetic observations and earthquake generation, and whether we can deepen our knowledge of the multi-physics processes of earthquakes. Over the past few decades the study of earthquake-related electromagnetic phenomena has received increasing attention, yet almost all of it has targeted pre-earthquake phenomena, mostly at the stage of accumulating empirical cases, while research on the mechanism of earthquake-related electromagnetic signals is still exploratory. For this reason some scholars have begun to question whether electromagnetic phenomena and earthquakes are reasonably correlated at all, and a closely related focal issue is how to recognize and understand coseismic electromagnetic signals. Faced with these doubts, some researchers have carried out observational experiments with higher instrument sampling rates and have successfully recorded some coseismic electromagnetic signals [1,2]. A recently proposed fault electromagnetic model, based on stress changes during rupture and the piezoelectric effect of rocks, also provides a preliminary theoretical basis for these observations [3]. It should be pointed out, however, that strictly speaking coseismic electromagnetic signals comprise three different kinds: ① co-rupture electromagnetic signals generated directly by the rupture at the source; ② co-converted-wave electromagnetic signals, generated when a seismic wave meets an interface and is partly converted into an electromagnetic signal that then propagates to the observation point; and ③ co-seismic-wave electromagnetic signals generated when the seismic wave itself reaches the observation point.
Whether in the coseismic electromagnetic observation reports mentioned above [1,2] or in the recently reported coseismic electromagnetic phenomena from aftershock monitoring of the 2008 Wenchuan earthquake [4], what was recorded is in fact only the third kind, co-seismic-wave electromagnetic signals. To date there is no field observation report of co-rupture electromagnetic signals of the first kind. As for co-converted-wave signals of the second kind, although oil and gas exploration studies have reported successful field recordings of electromagnetic signals generated by the seismoelectric effect of artificial explosions [5-7], there has so far been no report of co-converted electromagnetic signals associated with a natural earthquake. The absence of observations of the first two kinds of coseismic electromagnetic signals for natural earthquakes has become a vexing problem for researchers. Relevant conjectures include: the coseismic electromagnetic signal may be higher in frequency than the pre-earthquake signal and attenuate faster in the relatively conductive upper crust, making it hard to detect; the density of existing electromagnetic station networks is insufficient; the coseismic signal itself may be too weak for existing instruments; and there may even be no coseismic electromagnetic signals directly related to earthquake rupture at all. Unfortunately, given the great complexity of earthquake preparation and occurrence and our relatively limited understanding of the mechanisms of seismo-electromagnetic signals, none of these speculations can yet be settled. Although the coseismic electromagnetic problem faces many difficulties, the laboratory experiments, field observations, and theoretical modeling accumulated so far point to possible ways forward. For example, given the low signal-to-noise ratio of seismoelectric signals in field tests and the incomplete understanding of the seismoelectric effect, numerical simulation based on a suitable theoretical model [8] is an effective way to deepen understanding of the effect, yet such numerical studies are still few [8-11]. If, further, the research methods of source physics are combined with the theoretically relatively clear mechanisms of mechanical-electrical coupling and fluid-solid coupling, the study of electromagnetic interactions during source rupture can be expected to help gradually unravel the mystery of the coseismic electromagnetic signal.", "Since plate tectonics was proposed in the 1960s, the crustal motion fields of most oceanic regions of the world have been successfully explained by the plate-motion model: the crust-lithosphere consists of a limited number of rigid blocks with no internal deformation, and the crustal motion field consists mainly of the relative motions of these blocks, which form fault zones at their boundaries, producing compressional, extensional, and shear dislocations. But can such a model explain the continental crustal motion field?
This issue has been debated ever since plate theory was born. One school holds that, like the oceanic crust, the continental crust can be divided into a limited number of active blocks whose interiors are stable, with crustal motion occurring mainly along the few large active faults (mainly strike-slip faults) that form the block boundaries; that is, crustal motion is expressed chiefly as a horizontal field of block motion [1]. The other school holds that active faults are widespread in the continental crust and that deformation is broadly distributed rather than confined to a few large strike-slip fault zones [2]. Applied to the deformation field of the Qinghai-Tibet Plateau and its surroundings, these two hypotheses lead to diametrically opposed models. The "block motion" view holds that the northward push of the Indian plate caused large-scale eastward extrusion of the plateau, with relatively minor internal deformation [3]. The "distributed deformation" view holds that the northward push of the Indian plate thickened the crust within the plateau, with deformation widely distributed internally and relatively minor eastward extrusion [4]. Each hypothesis has long had some observational support, but both have remained controversial because of the uncertainty of the data. Since the 1990s GPS has been widely applied to monitoring the crustal deformation field, and a series of projects, especially campaign observations on the Qinghai-Tibet Plateau and its surroundings, have provided strong constraints. The results show that deformation within the plateau is widely distributed on scales of tens to hundreds of kilometers, with roughly uniform shortening in the NE direction and extension in the ESE direction [5]. GPS observations give a slip rate across the Altyn Tagh fault of only about 9 mm/a, much smaller than the traditional "block motion" model predicts and consistent with recent geological results [6]. Recently the "block motion" model has been refined: the active blocks are divided more finely, boundary zones are added, and more of the northward push of the Indian plate is absorbed by relative motions of blocks within the plateau [7]. Still, this model differs from the "continuous deformation" model in that deformation occurs only at block boundaries rather than being distributed within blocks. Kinematically, "block motion" and "continuous deformation" are the two end-member models of the crustal deformation field, and the real mode of deformation must lie between them. If the crust is divided into active blocks, the number of blocks and their areas follow a fractal (power-law) relationship, and the "block motion" and "continuous deformation" models correspond to the extreme cases of very low and very high fractal dimension, respectively (see the sketch below).
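The fractal view of crustal blocks can be sketched as follows: if the number of blocks with area at least A falls off as a power law, the exponent separates the "block motion" end (a few large blocks) from the "continuous deformation" end (many small blocks). The block areas below are synthetic, purely for illustration.

```python
import numpy as np

# If the number of blocks with area >= A follows N(A) ~ A**(-q), the exponent q
# (a proxy for fractal dimension) distinguishes the two end-member models.
rng = np.random.default_rng(1)
areas = (rng.pareto(a=1.2, size=500) + 1.0) * 1e3   # assumed block areas, km^2

A_sorted = np.sort(areas)[::-1]
N = np.arange(1, len(A_sorted) + 1)                 # rank = N(A >= A_sorted[i])

# Estimate the power-law exponent from the log-log slope.
slope, _ = np.polyfit(np.log(A_sorted), np.log(N), 1)
print(f"estimated exponent: {-slope:.2f}")
# A small exponent -> a few large rigid blocks dominate ("block motion");
# a large exponent -> many small blocks, approaching the continuum end.
```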
The difficulty, however, is that because of limited spatial observation density there are still large uncertainties in studies of small-scale block motion, so the resulting block-motion models can answer neither the geometric distribution of small blocks at the high end of the fractal spectrum nor the applicability of the continuous-deformation model. The debate over the deformation mode of the continental crust is, from a dynamic point of view, also a debate over the nature and mechanical behavior of the continental crust-lithosphere medium. One focus is whether a relatively weak lower crust and uppermost mantle exist, that is, whether the lithosphere has a "sandwich" rheological structure [8]; another is whether continental fault zones cut the whole lithosphere at depth or exist only in the upper crust, becoming diffuse within the weak lower crust. Structural physics, petrology, seismology, and magnetotelluric sounding have all provided evidence for a weak layer (or low-velocity, high-conductivity layer) in the lower crust and uppermost mantle, but how weak it is, and whether ductile shear zones can form within it and decouple the upper crust from the lower crust or lithosphere, is still debated. Around this problem a series of mechanical models has been developed, attempting to use observations to constrain the rheological structure of the crust-lithosphere and its deformation mechanism and dynamic evolution under plate push and gravitational driving. Royden et al. [9] and subsequent models propose a lower-crustal flow of very low viscosity (about 10^17 Pa·s) within the Qinghai-Tibet Plateau, whose flow drives the deformation field around the plateau. Flesch et al. [10], however, constrained a lithospheric mechanical model with GPS velocity-field data and seismic shear-wave splitting directions and concluded that the lithosphere beneath the plateau is vertically coupled, challenging the lower-crustal-flow model. The difficulty in this area is that deep structure can only be observed indirectly; direct observational evidence from deep within the lithosphere is unobtainable. In summary, the debate over the crustal deformation field of the continental crust, particularly the Qinghai-Tibet Plateau and its surroundings, continues: from large-scale to regional deformation, from surface displacement to deep coupling, from kinematic pattern to dynamic mechanism. Deeper research on this issue will comprehensively advance our understanding of the physical structure, mechanical properties, driving modes, deformation mechanisms, and dynamic processes of the continental crust and lithosphere.", "With accelerating urbanization and economic development, losses from disasters (especially earthquake disasters) grow exponentially [1]. The M8 earthquake at Wenchuan, Sichuan Province in May 2008 is the most direct and forceful proof.
Understanding the generative mechanism of earthquake disasters and ultimately predicting their occurrence so as to mitigate losses is the common wish of all humanity. As everyone knows, more than 70% of the Earth's surface is covered by seawater, and the saying "however high the mountain, the water rises as high" reminds us that the remaining less than 30% that is land is itself a multiphase body containing water. Studies show that the occurrence of all kinds of disasters at the Earth's surface is related, more or less, to the action of water: whether debris flows and landslides at the surface or earthquakes thousands of meters below it, they are the most direct manifestations of water-rock interaction. Owing to the special properties of water, the Earth's surface has become the sphere where multiple interactions are most pronounced, and the impact of disasters on human life is rapidly changing the scope of earth science (the theme of the 2009 General Assembly of the European Geosciences Union, EGU2009). Although this much is relatively well accepted in the earth-science community, the relationship between water-rock interaction and the mechanisms of the various disasters remains an unsolved mystery. To predict disasters successfully, especially earthquakes, and so reduce losses, it is necessary and urgent to study the relationship between water-rock interaction and the earthquake-generating process. The famous seismologist Bruce A. Bolt said that without water there would be no tectonic earthquakes [2], a remark that captures the important role of water in natural seismicity. But exactly what happens between water and rock that may lead to earthquakes remains inconclusive. Some results based on long-term observation hold that, over the earthquake preparation-occurrence cycle, pore-fluid pressure in shallow water-bearing porous media varies cyclically [3,4], and geophysical imaging shows that earthquake occurrence is related to the presence of deep fluids. These findings are generally accepted, but the mechanism is not clear. Studies show that water may play a positive, promoting role in earthquake preparation and occurrence, or a negative, damping role [5]. The question can be framed with the Mohr circle plotted against the Griffith failure envelope (with Coulomb friction at low confining pressure and high stress). If pore pressure P increases, the effective principal stresses fall to (σ₁ − P, σ₃ − P) and the Mohr circle moves toward the failure envelope, so that unstable slip can begin: in water-rock interaction, water then favors rock failure instability (earthquakes), playing a positive, promoting role. Conversely, if pore pressure decreases, for example through local rock dilatancy, the Mohr circle moves away from the Griffith failure envelope: water is then unfavorable to failure instability (earthquakes) and plays a negative, damping role (Figure 1).
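The effective-stress argument can be made quantitative with a small sketch. A Coulomb line stands in for the failure envelope, and all numbers (cohesion, friction coefficient, principal stresses) are assumed illustrative values; the point is only how pore pressure P moves the Mohr circle toward failure.

```python
import numpy as np

# Raising pore pressure P shifts the Mohr circle left toward a Coulomb failure
# line tau = c + mu*sigma_n, shrinking the margin against frictional failure.
# All numbers are assumed illustrative values (MPa), not data from any fault.
c, mu = 10.0, 0.6            # cohesion and friction coefficient (assumed)
s1, s3 = 120.0, 60.0         # total principal stresses (assumed)

def failure_margin(P):
    """Distance from the Mohr circle to the Coulomb line (negative = failure)."""
    center = 0.5 * (s1 + s3) - P          # effective mean stress
    radius = 0.5 * (s1 - s3)              # pore pressure leaves the radius alone
    # Perpendicular distance from (center, 0) to the line mu*sn - tau + c = 0,
    # minus the circle radius.
    return (mu * center + c) / np.sqrt(1 + mu**2) - radius

for P in (0.0, 20.0, 40.0):
    print(f"P = {P:5.1f} MPa -> margin = {failure_margin(P):6.2f} MPa")
# Increasing P eats away the margin -- the Mohr-circle version of "water
# promotes rupture"; a pore-pressure drop (dilatancy) does the opposite.
```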
The Mohr-circle picture above provides a conceptual account of the mechanism linking water-rock interaction to earthquake genesis. Fig. 1 A simple schematic diagram of the mechanical effect of pore-fluid pressure change. A complementary approach is rapid-response scientific drilling into fault zones. The International Continental Drilling Program (ICDP) is promoting such exploratory research worldwide, the aim being to drill as soon as possible after an earthquake in order to understand how crustal rocks deform during the event. Drilling into the Chelungpu fault zone after the Chi-Chi earthquake of September 21, 1999 in Taiwan found direct evidence that water in the fault zone promoted the earthquake [6], which caused a sensation. After the Wenchuan earthquake Chinese scientists also organized rapid-response drilling experiments (led by academician Xu Zhiqin and researcher Wu Zhongliang; an exchange meeting on the results of the second drilling phase was held at the Baijiatuan National Geophysical Observatory on February 16, 2009). Preliminary results show that water likewise plays an immeasurable role in fault instability and rupture. All of this shows how close the relationship between water and earthquakes is. During earthquake preparation, pore-fluid pressure changes not only at the surface and in the shallow crust within a certain range around the epicenter; the earthquake itself can also change the water level (pore pressure) in wells both near and far. It has been found that there is no one-to-one correspondence between anomalies observed before earthquakes and the earthquakes that actually occur, which makes prediction all the more difficult. Observations of distant earthquake-induced water-level (pore-pressure) changes have become commonplace worldwide [7-12], and research on them is an international hotspot, because explaining these phenomena helps us understand long-range earthquake correlations; yet even the successful examples remain controversial. Whether the above principle of water-rock interaction causing rock failure matches the real process must be verified against the evidence provided by continually improving detection, observation, and experimental techniques; such verification will directly help us understand the mechanism of earthquake generation correctly, improve the accuracy of earthquake prediction, and so achieve disaster mitigation. With abundant observational data, new research methods, fresh perspectives, and the mining of the deeper physical meaning of the data may provide some impetus toward unraveling the mystery of the earthquake preparation process.", "In the 1960s seismological research found that seismic wave velocities jump abruptly at depths of about 400 km and 650 km within the Earth. After more than ten years of research these two seismic discontinuities were confirmed to be global, and the region between them came to be defined as what is today called the mantle transition zone.
Petrological and thermodynamic studies attribute the two discontinuities to phase transitions: at about 400 km olivine transforms to wadsleyite, which has a modified spinel structure, and at about 650 km ringwoodite transforms to magnesium perovskite (Mg-perovskite) plus magnesiowüstite. Although the mantle transition zone is only about 242 km thick on average, it is of great significance for geodynamics, geochemistry, and the deep-Earth water cycle. One of the main geodynamic questions still being debated is whether the Earth's interior convects as a whole mantle or in layers, and one key to answering it is a full understanding of the mantle transition zone. Bercovici et al. [1] argued that if the transition zone holds a large amount of water, whole-mantle convection will occur, which shows how important water in the transition zone is. Figure 1 Water cycle and water content in the mantle. The cycling, distribution, and speciation of water in the mantle are not yet clearly understood; the total water content of the mantle is estimated at between 1/4 and 4 times the mass of the oceans [2-4], stored mainly in solid minerals, aqueous fluids, and melts. The water contents of wadsleyite and ringwoodite, the main minerals of the transition zone, are far less studied than that of olivine, the main mineral of the upper mantle. This is first because the two minerals cannot be found in rocks exposed at the surface (they can occur in meteorites) and must be synthesized from olivine by phase transformation in high-temperature, high-pressure experiments, and second because of the limits of high-pressure apparatus and measurement techniques. Large-volume high-pressure apparatus began to be developed for the geosciences in the 1960s, and Fourier-transform infrared spectroscopy (FTIR) is still the main means of determining sample water contents, although in practice its results carry large errors and the data are scattered and of uneven quality. Despite these difficulties, Kohlstedt et al. [5] found that at 14-15 GPa and 1100 °C wadsleyite can hold 2.4 wt% water (24,000 ppm), and at 19.5 GPa and 1100 °C ringwoodite can hold 2.7 wt%. Litasov and Ohtani [6] and Demouchy et al. [7] found that at 12-13.5 GPa, from 800 °C to 1200 °C, the water solubility in wadsleyite is 2-3 wt% and nearly independent of temperature, while at higher temperatures it decreases with increasing temperature, to about 0.3 wt% (3000 ppm) at 1600 °C. High-temperature, high-pressure solubility experiments thus show that the two principal transition-zone minerals are capable of holding large amounts of water, but this does not mean the transition zone actually contains several wt% of water. Another major transition-zone mineral is majorite. Bolfan-Casanova et al. [8] reported a water content of about 677 ppm (1 ppm = 1×10⁻⁶) for majorite synthesized at 17.5 GPa and 1500 °C; Katayama et al. [9] measured about 600 ppm and 550 ppm for majorite synthesized at 20 GPa and 1400 °C and 1500 °C, respectively.
From these experimental results it is inferred that the mantle transition zone may be a large water reservoir, but how much water it actually contains remains controversial. Most mantle minerals are normally insulators, but with rising temperature they become semiconductors, and added water makes them more conductive. Comparing conductivity profiles inverted from geophysical field data with laboratory conductivity data can therefore constrain the temperature, water content, and other properties of the Earth's interior. Huang et al. [10] measured the conductivity of wadsleyite and ringwoodite at high pressure (14-16 GPa) and high temperature (873-1473 K) and established the relationship between their conductivity, temperature, and water content; combining this with one-dimensional conductivity-depth profiles inverted from geophysical data for the North Pacific, they inferred a transition-zone water content there of about 0.1-0.2 wt%, far higher than in the upper mantle (50-200 ppm). Hirschmann [11], however, argued that the transition zone is quite reduced, with oxygen fugacity near or below the iron-wüstite buffer rather than, as Huang et al. had it, more oxidized, near the nickel-nickel oxide buffer; noting that Huang et al.'s experimental results had not been corrected for oxygen fugacity, he estimated the North Pacific transition-zone water content at about 200-300 ppm, roughly the same as the upper mantle. Huang et al. [12] then approached the problem from another direction: geophysical inversions show a roughly tenfold conductivity contrast across the 410 km discontinuity, whereas wadsleyite and olivine measured under identical experimental conditions differ in conductivity only slightly, by a factor of about 0.3-1. Using the relationship among conductivity, water content, and oxygen fugacity to estimate the ratios of oxygen fugacity and of water content above and below the 410 km discontinuity, they concluded that water content changes by about a factor of ten across 410 km, consistent with their previous conclusion. Yoshino et al. [13] also measured the conductivity of hydrous and anhydrous wadsleyite and ringwoodite at low frequencies and parameterized the conductivity-water relation with a different functional form, concluding that the North Pacific transition zone should be dry. Thus both Huang and Yoshino used the relationships among conductivity, water content, and temperature for wadsleyite and ringwoodite, combined with geophysical inversion results, to estimate the water content of the transition zone, yet reached completely different conclusions.
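Part of why the two groups disagree is visible in the algebra itself. The sketch below compares two commonly used functional forms for hydrous-mineral conductivity, with assumed illustrative constants rather than either group's published fits; inverting the same observed conductivity through different laws returns very different water contents.

```python
import numpy as np

# Two commonly used parameterizations of hydrous-mineral conductivity:
#   (1) sigma = s0 * Cw**r * exp(-H / (k*T))                   (power-law in water)
#   (2) sigma = s0 * Cw * exp(-(H - alpha*Cw**(1/3)) / (k*T))  (water-dependent barrier)
# Cw is water content (wt%), T absolute temperature.  All constants below are
# assumed toy values, not the published fits of either group.
k = 8.617e-5                       # Boltzmann constant, eV/K

def sigma_powerlaw(Cw, T, s0=1e3, r=0.7, H=0.9):
    return s0 * Cw**r * np.exp(-H / (k * T))

def sigma_modified(Cw, T, s0=1e3, H=1.0, alpha=0.2):
    return s0 * Cw * np.exp(-(H - alpha * Cw ** (1.0 / 3.0)) / (k * T))

T = 1500.0                         # K, a transition-zone-like temperature
for Cw in (0.01, 0.1, 1.0):
    print(f"Cw = {Cw:4.2f} wt%:  "
          f"power-law {sigma_powerlaw(Cw, T):9.3e}  "
          f"modified {sigma_modified(Cw, T):9.3e}  (toy units)")
# The same observed conductivity, inverted through different laws, implies
# very different water contents -- one root of the Huang-Yoshino disagreement.
```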
The main reasons for the discrepancy may be as follows: ① differences in measurement technique: both groups used impedance spectroscopy to measure mineral conductivity, but Yoshino chose very low frequencies (0.1-0.01 Hz) and measured the sample in series with a reference resistor; ② different functional forms were used to parameterize the effect of water on conductivity; ③ both used infrared spectroscopy (whose error can be as large as 30%) to measure sample water contents, yet even for the same sample different groups obtain different values, and there may be a systematic bias between the two. Future conductivity measurements will therefore need to control experimental conditions such as oxygen fugacity, water fugacity, and metal activity more precisely, and to improve cell assemblies and measurement techniques, in order to better constrain the water content of the mantle transition zone.", "Olivine, the gem-grade mineral (peridot), is a solid solution with formula (MgₓFe₁₋ₓ)₂SiO₄; its end members are forsterite, Mg₂SiO₄, and fayalite, Fe₂SiO₄. Olivine is the principal mineral of the upper mantle (the mantle region from below the crust down to the 660 km discontinuity). It occurs there, however, as polymorphs with three different structures: α-phase olivine (olivine proper), β-phase olivine (wadsleyite), and γ-phase olivine (ringwoodite). Which polymorph is present depends on the local temperature and pressure. Generally speaking, olivine proper exists only in the mantle above the 410 km discontinuity; below 410 km it takes the β-phase and γ-phase forms. However, when a subducting slab (the part of an oceanic plate that descends into the Earth's interior) sinks into the deep mantle, the low temperatures may leave olivine no time to transform, so it can persist metastably below the 410 km discontinuity within the mantle transition zone (the depth range between the 410 km and 660 km discontinuities), forming metastable olivine. Whether metastable olivine actually exists in the transition zone has received wide attention, because the question involves not only the structure and composition of the Earth's interior but also two focal issues of deep-Earth physics: deep earthquakes (those with focal depths greater than 300 km) [1] and the fate of subducted material [2]. The presence of metastable olivine would give the subducting slab a different rheological structure (the mechanical response of the material to load) and a different density structure (the compactness of the material), hence a different stress state (the internal stress of the deforming body) and different deformation behavior. If metastable olivine really exists in the transition zone, it will certainly affect the occurrence of deep earthquakes and the fate of subducted material. In the 1970s the earliest phase-transition kinetics calculations indicated that metastable olivine does exist in subduction zones [3,4].
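The metastability argument is kinetic: the olivine-wadsleyite transformation is thermally activated, so in the cold core of a slab it may simply stall on subduction time scales. The cartoon below encodes that idea with an assumed kinetic cutoff temperature and a deliberately crude slab-core geotherm; every number is a toy value, not fitted kinetic data.

```python
# Cartoon of a metastable-olivine wedge: where the slab core is colder than an
# assumed kinetic cutoff, the olivine -> wadsleyite reaction stalls and olivine
# persists below 410 km.  Cutoff and geotherm are toy values for illustration.
T_kinetic_cutoff = 600.0        # deg C, assumed; below this the reaction stalls

def slab_core_temperature(depth_km):
    """Crude slab-core geotherm: cold at 410 km, warming as the slab heats up."""
    return 300.0 + 1.5 * (depth_km - 410.0)   # deg C, toy parameterization

for depth in (410, 500, 600, 660):
    T = slab_core_temperature(depth)
    state = ("metastable olivine" if T < T_kinetic_cutoff
             else "wadsleyite/ringwoodite")
    print(f"{depth} km: T ~ {T:5.0f} C -> {state}")
# With these toy numbers olivine survives to roughly 600 km in the cold slab
# core -- the kind of wedge whose reality the cited studies [3-8] dispute.
```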
Newer experimental results [5] and phase-transition kinetics calculations [6], however, suggest that large amounts of metastable olivine can hardly persist in subduction zones, a conclusion later supported by seismological observations [7]. Yet the results of such seismological work depend on the choice of velocity model, and according to further seismological observations there seems to be no metastable olivine in subduction zones [8]. Research on this question is limited not only by geophysical data but also by the precision of high-temperature, high-pressure experiments. So far, high-precision broadband seismometers have not been deployed long enough for high-precision geophysical inversion; at the same time, because many subduction zones lie beneath the sea, it is difficult to deploy large numbers of seismic stations, which greatly limits the acquisition of useful data. The deployment of ocean-bottom seismometers (OBS) will help solve this problem. Because the olivine phase transitions occur at very high pressure, current experimental techniques can handle only small samples of about 1 mm, which greatly limits experimental precision; moreover, olivine is a solid solution with rather complex phase relations, which makes designing high-temperature, high-pressure experiments difficult. The latest theoretical results in phase-transition kinetics [9] show that the classical nucleation-and-growth theory ignores the grain shape of the product phase, which strongly affects estimates of the nucleation rate and calculations of product grain size. It has also been found that existing experimental results are concentrated in a narrow range of temperature and pressure and cannot effectively constrain the growth rate of the olivine phase transition, which largely limits the accuracy of the experimental results. Geoscientists are now trying to overcome a series of technical difficulties and, guided by the new theoretical results, to redesign olivine phase-transition kinetics experiments in order to obtain more accurate data. Judging from the current state of research, breakthrough progress on this problem can be expected within ten years.", "Coronal mass ejections (CMEs) and solar flares are the two major manifestations of solar eruptions and represent the most violent releases of energy and matter in the solar system. A single large eruption can release energy equivalent to billions to tens of billions of nuclear bombs and can hurl more than ten billion tons of magnetized plasma into the heliosphere. When the products of an eruption reach the Earth, they can strongly disturb the electromagnetic and plasma environments of near-Earth space, such as the magnetosphere and ionosphere, producing space-weather disasters with major impacts on modern society and human life [1]. Accurate forecasting of such events requires a comprehensive and clear understanding of the physical processes involved in eruptive phenomena such as CMEs and flares, among which the study of the eruption mechanism is particularly critical.
"Coronal mass ejections (CMEs) and solar flares are the two major manifestations of solar eruptions, representing the most violent releases of energy and matter in the solar system. A large eruption can release energy equivalent to the explosion of billions to tens of billions of nuclear bombs and can hurl more than ten billion tons of magnetized plasma into the heliosphere. When the products of solar eruptions reach the earth, they can strongly disturb the electromagnetic and plasma environments of near-Earth space, such as the magnetosphere and ionosphere, producing space weather disasters with major impacts on modern society and human life [1]. Accurate forecasting of such events requires a comprehensive and clear understanding of the physical processes involved in solar eruptive phenomena such as CMEs and flares, among which the study of the eruption mechanism is particularly critical. Figure 1: a CME event captured by the coronagraph aboard the SOHO satellite①. The red disc in the figure is the coronagraph occulter that blocks the strong light near the sun, and the white circle in the middle marks the size of the sun. The first and last frames are separated by two hours and ten minutes, during which the CME moves outward at roughly 800 km/s. The image is taken from the SOHO web page, http://sohowww.nascom.nasa.gov, where more images and animations of solar eruption events can be found, as on the NASA website. (① SOHO, the Solar and Heliospheric Observatory, is a scientific satellite launched jointly by ESA and NASA in 1995; it has operated for more than ten years at the first Lagrangian point, the solar-terrestrial gravitational balance point 1.5 million km from the earth, and has provided a large amount of high-quality scientific data for research on solar eruptions.) In the study of the solar eruption mechanism, the most important and basic questions are the nature of the energy released during an eruption, the mechanisms of energy accumulation and rapid release, and how, and into what forms of energy, it is released; simply put, where the energy comes from and where it goes. At present most scholars believe that the energy released in CME eruptions comes from the magnetic field of the corona. This magnetic energy is slowly transported into and stored in the corona through processes such as convective motion below the photosphere, twisting of the magnetic field at and above the photospheric surface, and the emergence of new magnetic flux regions; under certain physical conditions the stored magnetic energy can be released rapidly and converted into the kinetic and thermal energy of the plasma particles. This picture of one slow and one fast physical process comes from the current mainstream theoretical model of solar eruptions, the energy storage-and-release model [2]. There are two main mechanisms for the rapid release of magnetic energy: one is instability of the magnetic field system on macroscopic scales; the other is a process originating in smaller-scale regions called magnetic reconnection (the breaking and rejoining of magnetic field lines of different directions in a plasma, which rapidly converts magnetic energy into plasma kinetic and thermal energy and is sometimes vividly described as annihilation of the magnetic field). For solar flares it is now generally accepted that magnetic reconnection explains the rapid release and conversion of magnetic energy, a considerable part of which is gained by particles and quickly converted into sharply enhanced radiation in soft X-rays and other bands; in the CME process the released magnetic energy goes mainly into flinging the large-scale magnetized structure out into the heliosphere, the specific release mechanism is still controversial, and very likely both of the above processes, on their different scales, are at work, with the relative importance of the mechanisms varying from event to event. In fact, CMEs are closely related to flares. Early CME research in the 1970s already found this correlation and took it to mean that CMEs are driven by flares; but scholars soon discovered many observational facts that contradict this: some CMEs are not accompanied by flares, many flares are not accompanied by CMEs, and even in combined CME-flare events there is no fixed order of occurrence. Scholars therefore now tend to believe that the two are the responses, at different levels of the solar atmosphere, of the same kind of magnetic eruption process: physically connected, but with no causal relationship [3,4].
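The quantitative side of such CME-flare comparisons starts from height-time measurements in coronagraph image sequences like that of Figure 1. A minimal sketch of how a plane-of-sky speed and acceleration are extracted; the data points are hypothetical, chosen only to resemble the roughly 800 km/s event in the caption:

```python
# Hypothetical height-time points mimicking a coronagraph CME sequence;
# a quadratic fit gives projected (plane-of-sky) speed and acceleration.
import numpy as np

R_SUN_KM = 6.96e5
t_s = np.array([0, 1800, 3600, 5400, 7800])             # s since first frame
h_km = np.array([2.5, 4.4, 6.4, 8.5, 11.4]) * R_SUN_KM  # apparent heights

coef = np.polyfit(t_s, h_km, 2)        # h(t) ~ a*t^2 + v0*t + h0
accel_kms2, v0_kms = 2 * coef[0], coef[1]
v_mean = (h_km[-1] - h_km[0]) / (t_s[-1] - t_s[0])
print(f"mean projected speed ~ {v_mean:.0f} km/s")
print(f"fit: v0 ~ {v0_kms:.0f} km/s, accel ~ {accel_kms2*1e3:.1f} m/s^2")
```

Note that, as stated under difficulty ② below, such single-viewpoint values are only projections on the plane of the sky.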
Statistical results show that the acceleration profile of a CME is closely related to the profile of the flare's soft X-ray flux [5], and faster CMEs are usually accompanied by stronger flares, which suggests that CME acceleration may be related to the magnetic reconnection process that powers the flare. However, the magnetic field configuration before and during the eruption, the triggering of magnetic reconnection or of macroscopic instability, the specific course of magnetic energy release and redistribution, the role of magnetic reconnection in the CME process and its relationship to macroscopic instability all remain to be studied and resolved. Solar eruptions can interact strongly with the ambient corona and with the interplanetary magnetic field-plasma system, producing a rich variety of accompanying phenomena, mainly including global-scale coronal disturbances (such as the so-called EIT waves and coronal dimming), enhanced electromagnetic radiation in many bands (such as radio bursts), shock waves, and energetic particle events. These processes show us different aspects of the physics of solar eruptions; this seemingly simple phenomenon actually holds rich natural mysteries. Research on the relevant physical processes currently faces the following difficulties: ① although the structure of the coronal magnetic field plays the most critical role in solar eruptions, there is still no reliable technique for measuring the coronal magnetic field directly, and the observational resolution of the photospheric magnetic field and of many other parameters is not sufficient to resolve the smallest-scale physical processes; ② the study of CME topology and of the acceleration process depends mainly on coronagraph observations, yet to obtain good contrast a coronagraph must always occult the strong light near the sun, which also hides the region where the CME starts and undergoes its main acceleration, and a single-viewpoint instrument can only give the projection of the CME speed on the plane of the sky; ③ in model construction, the three-dimensional nature of CMEs and flares and the coupling of many spatial scales make numerical modeling very difficult. These difficulties mean that the physical mechanism of solar eruptions cannot be fully clarified in a short time. Nevertheless, many space missions around the world have made the study and solution of this problem their main scientific goal.
For example, in addition to the aforementioned SOHO, the Hinode ("Sunrise") satellite and the STEREO (Solar Terrestrial Relations Observatory) mission were launched in September and October 2006 respectively. Hinode's main scientific goals are to understand how the sun's magnetic field is generated, how energy is transported upward from the photosphere, and how that energy is released in eruptions. STEREO consists of two essentially identical spacecraft, one on each side of the earth, which like a pair of human eyes view the sun stereoscopically; its main scientific goals are to understand the physical processes that lead to solar eruptions, to track the three-dimensional evolution of CME acceleration and propagation, and to study energetic particle acceleration and the three-dimensional structure of the solar wind. In addition, several newer satellite missions have been proposed internationally, such as SDO (the Solar Dynamics Observatory) planned for launch in 2011, Solar Orbiter planned for 2015, and the Solar Probe Plus program still in preparation; domestic scientists have also proposed a number of large satellite projects such as the "Kuafu" program and the Space Solar Telescope. With higher spatio-temporal resolution and more novel observing designs, these plans are providing, and will provide, opportunities for in-depth study and eventual resolution of the physical problems involved.",
"The solar wind is the high-speed plasma flow that streams out of the sun's outermost atmosphere (the corona) and continuously sweeps past the major planets and through all of interplanetary space. The idea that certain regions of the sun emit continuous streams of charged particles was recognized by scientists long ago: geomagnetic activity recurs with a 27-day period, and viewed from the earth the sun's rotation period is exactly 27 days, so certain regions of the sun affect the earth periodically on a 27-day basis, evidence that they continuously emit particle streams. The recognition that the coronal gas expands continuously to form the solar wind, however, was derived theoretically; this is a classic example of a physical theory preceding observational experiment. In 1958 Parker, a professor of astrophysics at the University of Chicago, published the paper in which he proposed the concept of the solar wind [1]. He argued that the coronal ionized gas is not simply in static equilibrium in the sun's gravitational field, but that under the corona's high temperature it expands and flows steadily outward, forming a supersonic flow: the solar wind. When the concept was first put forward the scientific community did not accept it; it was hard to imagine thermal pressure driving a supersonic solar wind, and whether it was a "strong wind" or a "breeze that dies away" was fiercely debated. Four years later the United States launched the Mariner 2 spacecraft to Venus, and its direct measurements of the interplanetary plasma and magnetic field confirmed the existence of a continuous supersonic plasma flow; the existence of the solar wind became generally accepted [2]. How does the solar wind form, and can coronal ionized gas really be continuously accelerated into a supersonic flow? We know that rocket engines accelerate rockets by ejecting combustion gases, and the exhaust jet must reach supersonic speed to provide a large thrust. The combustion chamber of a rocket engine feeds a nozzle whose opening first narrows and then gradually expands; a nozzle of this particular geometry is called a Laval nozzle. When a compressible gas flows through a duct of decreasing cross-section its velocity increases and soon approaches the speed of sound; where the orifice suddenly opens out behind the narrow throat (the Laval nozzle), the flow becomes supersonic. Dynamically, the acceleration of the solar wind into a supersonic flow is similar, and, surprisingly, the effect of the sun's gravitational field is analogous to the geometric flow-constricting effect of the Laval nozzle [2]. The gas-dynamic equation of solar wind acceleration admits multiple solutions: there is one solution curve that reaches the speed of sound as it passes the critical point and then continues to accelerate, forming the supersonic solar wind, while the other solutions never reach the sound speed at the critical point and fall off from a maximum, forming "breeze" solutions. It is not that theorists favored the supersonic solution out of partiality; observation confirmed it, and one has to accept nature's choice: the sun blows a supersonic solar wind. The theoretically calculated solar wind speed at the earth's orbit is about 330 km/s, and the measured data broadly agreed, which seemed sufficient: a wonderful theory, backed by experiment!
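The transonic "wind" branch just described can be written in closed implicit form for an isothermal corona and solved numerically. A minimal sketch, assuming a fully isothermal electron-proton corona at an illustrative temperature (the speed it returns at 1 AU depends strongly on that choice):

```python
# A sketch of Parker's isothermal transonic wind: solve
#   (v/cs)^2 - ln(v/cs)^2 = 4 ln(r/rc) + 4 rc/r - 3
# for v(r), taking the branch that passes through the critical point.
# Assumes a fully isothermal corona; T is an illustrative choice.
import numpy as np
from scipy.optimize import brentq

G_MSUN = 1.327e20                   # m^3/s^2
K_B, M_P = 1.381e-23, 1.673e-27
T = 1.0e6                           # K, assumed coronal temperature
cs = np.sqrt(2 * K_B * T / M_P)     # isothermal sound speed, e-p plasma
rc = G_MSUN / (2 * cs**2)           # critical (sonic) radius

def parker_speed(r):
    rhs = 4 * np.log(r / rc) + 4 * rc / r - 3
    f = lambda M: M**2 - np.log(M**2) - rhs
    # wind branch: subsonic inside rc, supersonic outside
    return cs * (brentq(f, 1e-8, 1.0) if r < rc else brentq(f, 1.0, 50.0))

au = 1.496e11
print(f"sonic point: {rc/6.96e8:.1f} solar radii")
print(f"v(1 AU) ~ {parker_speed(au)/1e3:.0f} km/s")  # a few hundred km/s
```

The "breeze" solutions mentioned above correspond to staying on the subsonic root at all radii instead of switching branches at the critical point.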
Soon, however, new measurements began to show faster solar wind, with speeds above 600 km/s and even up to 800 km/s, and the thermal-pressure acceleration theory was in trouble: it cannot supply such speeds. There must be another, non-thermal mechanism at the origin of the solar wind to supply energy to the solar wind particles. The main component of the solar wind is ionized hydrogen plasma, and the earlier thermal-pressure-gradient acceleration theories did not include electromagnetic effects. Observations show that the high-speed solar wind streams out of the sun's open-field regions, the coronal holes [3,4,5]. When solar activity is low, coronal holes are mainly distributed at high latitudes around the sun's north and south poles, and the wind evaporating from high latitudes is then mostly high-speed flow. In addition, well-developed magnetohydrodynamic turbulence is observed in the high-speed wind, mainly turbulence evolved from Alfvén waves, hence called Alfvén turbulence; it has a spectrum similar to classical Kolmogorov turbulence, showing that the Alfvén waves undergo strong nonlinear interaction before developing into a classical power-law spectrum. These two pieces of observational evidence naturally suggest that as the solar wind expands outward from the corona, Alfvén waves propagate outward along the open magnetic field of the coronal holes. Could the dissipation of these Alfvén waves not only heat the coronal plasma but also accelerate the solar wind, the Alfvén turbulence observed in the wind being just the undissipated remnant of those waves? However, the ideal magnetohydrodynamic Alfvén wave is very inert: it is a pure shear wave, with neither dissipation nor nonlinear interaction. Without a dissipation mechanism the wave energy cannot heat and accelerate particles; without nonlinear interaction the wave cannot evolve into turbulence. Fortunately the kinetic Alfvén wave makes up for these shortcomings: it is both dissipative and nonlinear, and the nonlinear interaction of the waves themselves can evolve into turbulence. In fact, the ideal MHD Alfvén wave is rare in a plasma; it is the degenerate limit of the kinetic Alfvén wave under ideal conditions. In general the kinetic Alfvén wave is a coupled longitudinal-transverse mode, which decouples into a pure transverse wave (the Alfvén wave) and a longitudinal wave (the ion acoustic wave) only in the zeroth-order approximation. In the 1990s many scientists turned their attention to the powering and acceleration of the high-speed solar wind by kinetic Alfvén waves, treating coronal plasma heating as part of the same physical problem [4,6]. Observations have now confirmed the presence of kinetic Alfvén waves in the solar wind [6]. Two problems remain: ① the dissipation of kinetic Alfvén waves, that is, how the wave energy is delivered to the particles; ② the excitation of kinetic Alfvén waves, that is, where the wave energy is fed in. For the former, the kinetic Alfvén wave possesses several theoretical dissipation mechanisms: Landau damping, magnetic-mirror trapping dissipation, cascaded cyclotron resonance dissipation, and others. Early theory showed that Alfvén turbulence cascades up to the ion cyclotron frequency, where resonant dissipation feeds energy to the solar wind, and numerical simulations show that Alfvén-wave cyclotron resonance can accelerate the wind; specific applications, however, still require fine theoretical models built against observations. For the latter, among the many possible excitation mechanisms the first choice in the solar corona is magnetic reconnection. The sun has a very complex magnetic field in which reconnection on all scales abounds, and every reconnection event is an excitation source of kinetic Alfvén waves: magnetic energy is converted into wave energy, the wave energy is dissipated, and the particles are thereby heated and accelerated, rather like a "microwave oven" in space. The difficult question is at what frequencies and on what spatial scales the kinetic Alfvén waves are excited; we need to know how the wave frequency and wavenumber relate to the characteristic temporal and spatial scales of magnetic reconnection, and good theory and observational data are still lacking. Regarding the height of the solar wind's initial acceleration region and of the reconnection region on the sun, new solar observations at the beginning of this century brought a new understanding [5,7]: it was previously believed that acceleration begins in the corona, but the new theory holds that mesoscale reconnection mostly occurs in the chromosphere, where the solar wind has already begun to accelerate (Fig. 1).
Figure 1: schematic diagram of the origin of the solar wind [7]. The red sphere at lower left is a solar image taken by the SOHO satellite with an extreme-ultraviolet camera at a wavelength of 19.5 nm; the planes show the photospheric magnetic field (observed values) and the magnetic field at a height of 2000 km (extrapolated values). The purple curves are open magnetic field structures and the gray-black arched lines are closed field lines; the lower right panel is a partial enlargement of the upper part, showing the magnetic funnel structure of the solar wind origin region. In fact the solar wind acceleration mechanism is still more complicated and difficult. The solar wind does not expand and flow out uniformly from the sun: it has many structures, forming multiple streams in different channels that may correspond to different acceleration mechanisms. Moreover, there is as yet little theory for the low-speed wind flowing out of active regions; in recent decades attention has gone mainly to the origin of the high-speed solar wind.", "1. What is solar wind turbulence? The solar wind is the outwardly expanding, magnetic-field-carrying plasma that flows out from the sun. It exhibits both the characteristics of magnetohydrodynamic turbulence and the characteristics of Alfvénic fluctuations: MHD turbulence shows itself in a magnetic-field disturbance power spectrum of a form similar to the Kolmogorov power-law spectrum, while Alfvénic fluctuations show themselves in the correlation between the magnetic field disturbance and the velocity disturbance. Solar wind turbulence is therefore turbulence with Alfvénic fluctuations, also called Alfvénic turbulence [1]. (Figure 1: the left panel shows that solar wind disturbances have a turbulence-like spectrum, the right panel that they have the characteristics of Alfvén waves, together revealing solar wind disturbances to be Alfvénic turbulence.) 2. The important scientific significance of studying solar wind turbulence. Solar wind turbulence is a huge natural plasma turbulence system, with a range of relevant scales unattainable in turbulence laboratories on the earth; its study advances human understanding of turbulence in general, of magnetohydrodynamic turbulence, and of collisionless plasma turbulence. The origin and transport of the solar wind are closely tied to turbulence, which plays an important role in heating and accelerating the solar wind's components and is one of its important energy sources. The study of solar wind turbulence is therefore an important scientific frontier of space physics. 3. The formation of solar wind turbulence. Phenomenologically, ordinary fluid turbulence forms as large-scale eddies interact and evolve into small-scale eddies; in terms of the fluid equations, it is the result of the nonlinear momentum-transport term in the momentum equation of the Navier-Stokes equations. The formation of solar wind MHD turbulence, in terms of the MHD equations, is the result of the nonlinear interaction of two Alfvén waves propagating in opposite directions [4]. The origin and propagation of Alfvén waves are thus the primary conditions for the formation of solar wind turbulence.
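The spectral diagnostic mentioned under point 1, checking the magnetic-field power spectrum for a Kolmogorov-like power law, is straightforward to illustrate. The sketch below runs it on synthetic noise shaped to f^(-5/3), standing in for measured solar wind magnetic field data, and recovers the slope:

```python
# Diagnostic used to identify Kolmogorov-like turbulence: estimate the
# power spectral density of a fluctuating field and fit its log-log slope.
# The input here is synthetic noise shaped to f^(-5/3), standing in for
# measured solar wind magnetic field data.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
n, fs = 2**16, 1.0                     # samples, sampling frequency (Hz)
freqs = np.fft.rfftfreq(n, d=1/fs)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-5/6)          # PSD ~ |amp|^2 ~ f^(-5/3)
phase = rng.uniform(0, 2*np.pi, len(freqs))
b = np.fft.irfft(amp * np.exp(1j * phase), n)

f, pxx = welch(b, fs=fs, nperseg=4096)
band = (f > 1e-3) & (f < 1e-1)         # mock "inertial range"
slope = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)[0]
print(f"fitted spectral slope ~ {slope:.2f} (Kolmogorov: -5/3 ~ -1.67)")
```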
In situ measurements in the inner heliosphere find both outward-propagating and inward-propagating Alfvén waves in interplanetary space. It is conjectured that the outward-propagating Alfvén waves originate in the solar atmosphere, while the inward-propagating waves may be produced somewhere far from the sun by scattering, reflection or instability-driven excitation of Alfvén waves [1]. (Figure 2: (top) schematic of the propagation and reflection of fluctuations in the solar coronal atmosphere and the formation of turbulence [5]; (bottom) three-dimensional simulated turbulence in the corona, the background showing the distribution of current density in the radial direction and the vector plot the distribution of the transverse flow field [6].) The origin of the outward-propagating Alfvén waves has not been clarified and will be an important scientific topic for future space physics and solar physics research. Evidence of Alfvén wave propagation has been found in the solar corona and chromosphere [7], but direct observations of the origin of Alfvén waves are still lacking. Small-scale magnetic reconnection at the junctions of the network structure in the sun's chromosphere may excite Alfvén waves, and the footpoints of magnetic field lines in the intergranular lanes of the solar photosphere may also excite Alfvén waves when buffeted by neighboring granules; but these theoretical conjectures require observational confirmation. The SOT instrument aboard the Hinode spacecraft can observe the photospheric and chromospheric atmosphere at high temporal and spatial resolution, and we expect it to establish a direct link between the small-scale activity of the photosphere and chromosphere and the transverse oscillations of spicules at the solar limb. The inward-propagating Alfvén waves have likewise not been observed in the solar atmosphere and need to be confirmed observationally; theoretically they are considered to be generated when upward-traveling Alfvén waves are reflected in the transition region, where the Alfvén speed gradient is large [8]. We expect to use SOT to analyze the dynamics of the spicules extending from the chromosphere into the corona and to identify the inward Alfvén waves, so as to provide an observational basis for the formation of Alfvénic turbulence in the solar wind source region. The relative proportion of outward and inward Alfvén waves in solar wind turbulence is expressed by the normalized cross helicity. The observed cross helicity decreases gradually with radial heliocentric distance, and the reasons for this are not entirely clear. Velocity shear between the fast and slow solar wind is considered a possible source of inward Alfvénic modes and hence of the decrease in cross helicity; however, the mechanism by which shear generates inward-propagating modes is not very efficient, and it applies only to the solar wind near the ecliptic plane.
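For reference, the normalized cross helicity just mentioned is computed from joint velocity and magnetic field measurements, with the magnetic fluctuation converted to Alfvén (velocity) units. A minimal sketch on synthetic inputs (the sign convention depends on the background field polarity; all values here are invented):

```python
# Normalized cross helicity sigma_c = 2<dv.db> / (<|dv|^2> + <|db|^2>),
# with the magnetic fluctuation db expressed in Alfven (velocity) units.
# |sigma_c| near 1 means one dominant propagation sense; values nearer 0
# mean comparable outward and inward Alfvenic power. Inputs are synthetic.
import numpy as np

MU0, M_P = 4e-7 * np.pi, 1.673e-27

def cross_helicity(dv, dB, n_cc):
    """dv in km/s (N,3), dB in nT (N,3), n_cc in cm^-3."""
    rho = n_cc * 1e6 * M_P                       # kg/m^3
    db = dB * 1e-9 / np.sqrt(MU0 * rho) / 1e3    # Alfven units, km/s
    num = 2 * np.mean(np.sum(dv * db, axis=1))
    den = np.mean(np.sum(dv**2, axis=1)) + np.mean(np.sum(db**2, axis=1))
    return num / den

rng = np.random.default_rng(1)
dv = rng.normal(0, 20, (5000, 3))                 # km/s fluctuations
dB = -dv * np.sqrt(MU0 * 5e6 * M_P) * 1e3 * 1e9   # pure Alfvenic correlation
dB += rng.normal(0, 1.0, dB.shape)                # small incoherent part
print(f"sigma_c ~ {cross_helicity(dv, dB, n_cc=5.0):.2f}")
```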
Parametric decay of the outward Alfvén waves in the compressible, inhomogeneous interplanetary medium can produce compressive modes and inward Alfvén waves, which may account for the decrease of the cross helicity and for the oscillations of the number density and the magnetic field magnitude. However, the parametric decay mechanism depends on the plasma β value and cannot explain the formation of turbulence in the fast solar wind, where β is relatively high. A complete future theory of solar wind turbulence therefore needs to explain the observed radial evolution of the outward and inward Alfvén waves [4]. (Figure 3: the power spectrum of magnetic field disturbances over a wide frequency range observed by the Cluster satellites. The blue shaded area lies above the ion cyclotron frequency but below the electron cyclotron frequency; it is not yet possible to determine whether the power-law spectrum in this area is produced by a whistler-wave turbulence cascade or by kinetic Alfvén wave dissipation. Modified from [9].) 4. Cascade and dissipation of solar wind turbulence. In solar wind turbulence the energy of large-scale (low-frequency) disturbances cascades through the inertial range to small-scale (high-frequency) disturbances and is dissipated in the dissipation range by viscosity or by wave-particle interactions. WKB-like Alfvén turbulence theory describes well the radial evolution of the magnetic fluctuation power spectrum below the ion cyclotron frequency: the low-frequency part of the spectrum consists of approximately WKB-propagating fluctuations that have not yet developed into turbulence, the high-frequency part is the power-law spectrum of developed turbulence, and the correlation scale of the turbulence grows with radial distance so that ever more low-frequency disturbances join the turbulent cascade and dissipate [1]. Near the ion cyclotron frequency there are two ways for solar wind turbulence to dissipate: ion cyclotron resonance dissipation of quasi-parallel-propagating ion cyclotron waves, and ion Landau damping of quasi-perpendicular-propagating kinetic Alfvén waves. The high perpendicular temperatures observed in the solar atmosphere and in the heliosphere confirm the role of ion cyclotron resonance dissipation; the dissipation of kinetic Alfvén waves near the ion cyclotron frequency has not yet been confirmed observationally. How solar wind turbulence is dissipated above the ion cyclotron frequency is a current and future research hotspot. One view holds that above the ion cyclotron frequency the turbulence is dominated by quasi-perpendicular-propagating kinetic Alfvén waves, which are finally dissipated through electron Landau damping; another holds that the turbulent fluctuations there are dominated by electron cyclotron (whistler) waves, dissipated through electron cyclotron resonance or electron Landau damping. The former view is supported by kinetic simulation results, the latter by particle simulation results [10]. In the future it will therefore be necessary to analyze theoretically the physical differences behind these two numerical approaches and to judge their strengths and weaknesses in describing solar wind turbulence.
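To put numbers on the frequency bands discussed here and shaded in Figure 3: for an assumed, typical interplanetary field strength, the two cyclotron frequencies bracketing the disputed range are easily computed.

```python
# Locate the dissipation-range boundaries discussed above: ion and electron
# cyclotron frequencies f = qB / (2*pi*m) for a typical 1 AU field strength.
import numpy as np

Q, M_P, M_E = 1.602e-19, 1.673e-27, 9.109e-31
B = 5e-9                      # T, assumed interplanetary field ~5 nT
f_ci = Q * B / (2 * np.pi * M_P)
f_ce = Q * B / (2 * np.pi * M_E)
print(f"ion cyclotron frequency      ~ {f_ci:.3f} Hz")
print(f"electron cyclotron frequency ~ {f_ce:.0f} Hz")
# The blue shaded band of Figure 3 lies between these two frequencies.
```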
We also still need to analyze and diagnose observationally the fluctuation modes of solar wind turbulence in the high-frequency range.", "On July 22, 2009, residents of the middle and lower reaches of the Yangtze River in China witnessed a total solar eclipse of a kind seen once in centuries, the most spectacular total solar eclipse of the 21st century. In fact, records of solar eclipse observations in ancient China go back more than 2,000 years. When a total solar eclipse occurs we can see the faint scattered light radiated from the atmosphere outside the sun's photosphere; this light has a crown-like structure, and the outer atmosphere is therefore called the corona (Figure 1). The first successful photograph of the corona was taken during the solar eclipse of 1851; in 1931 the coronagraph came into use, allowing the corona to be observed without an eclipse; the corona's extremely high temperature was first observed by sounding rocket in 1946. Since the 1970s, high-resolution instruments on the ground and on satellites such as Skylab have been used to observe the corona and to study its activity in soft and hard X-rays, opening a new era of coronal exploration [1]. (Figure 1: the corona of the sun photographed during a total solar eclipse.) The sun is a hot ball of gas which, from the center to the limb, can be divided into four layers: the core (nuclear reaction zone), the radiative zone, the convection zone and the solar atmosphere. The solar atmosphere is the outermost and only visible layer of the sun; its temperature, density and magnetic field vary greatly, and according to the variation of temperature with height it is usually divided into the photosphere, chromosphere, transition region and corona (Figure 2). The corona, the outermost layer of the solar atmosphere, consists of very tenuous, fully ionized, high-temperature plasma, mainly protons, highly ionized ions and fast free electrons [1]. As early as the 19th century, emission lines of Fe X and Fe XIV were discovered in the corona, which can occur only if the coronal temperature is as high as a million degrees. The temperature of the sun's central nuclear reaction zone is as high as 15 million ℃ and decreases rapidly outward, so that the temperature at the sun's surface (photosphere, chromosphere) is only about 6000 ℃; yet on reaching the corona the temperature, far from falling, rises by two orders of magnitude to millions of degrees (Figure 2), contrary to the natural rule that heat flows only from hot regions to cold ones. There must be some heating mechanism that balances the energy losses from the corona's various dissipation processes (mainly convective, radiative and solar wind losses) to maintain so high a coronal temperature. The mechanism of coronal heating, one of the most challenging problems in solar physics, has puzzled scientists for many years. Figure 2: the solar atmosphere divided into photosphere, chromosphere, transition region and corona. The photosphere extends about 500 km below the temperature minimum, while the chromosphere extends in height H from about 200 to 2200 km, with its base at the temperature minimum.
The lower chromosphere is about 400 km thick, and there the temperature begins to rise; the middle chromosphere spans about 1200 km, over which the temperature climbs slowly from 5500 K to 8500 K; the upper chromosphere is about 500 km thick, with the temperature rising rapidly to 50 000 K. The base of the corona lies at about H = 3000 km. The region from the top of the chromosphere to the bottom of the corona is the transition region, where the temperature jumps sharply from tens of thousands of kelvins to 10^6 K [2]. Many different theoretical mechanisms have been proposed in research on the coronal heating problem. It is believed that convective motion of the plasma below the photosphere drives the motion of the coronal-loop field lines rooted in the photosphere (the root points are the footpoints; Figure 3 is an image of coronal loops), changing the topological configuration of the magnetic field and accumulating magnetic energy, which is then converted by some mechanism into heat that warms the coronal plasma. The mechanical motion of the photosphere and of the matter within it is the ultimate energy source of coronal heating. Based on a comparison between the characteristic timescale of the motion of coronal-loop footpoints rooted in the photosphere and the characteristic propagation time of local shear Alfvén waves, coronal heating mechanisms can be divided roughly into two categories: DC heating and AC heating. DC heating, also called ohmic heating, includes simple Joule heating and current-sheet heating in the magnetic reconnection process, and corresponds to slow footpoint motion; AC heating arises from plasma wave-particle interactions, such as acoustic wave heating, fast and slow magnetosonic wave heating, Alfvén wave heating and ion cyclotron resonance heating, and corresponds to faster footpoint motion. In ohmic heating theory, magnetic energy is dissipated through classical ohmic (current) dissipation while the footpoints move slowly. Because the resistivity of the corona is very small, ohmic heating alone is clearly insufficient to reach coronal temperatures (Figure 3: coronal loop image [3]). Small-scale structures are therefore required, within which there are large magnetic field gradients, so that large current densities arise and the dissipation of magnetic energy through ohmic heating is fast enough to meet the coronal temperature requirement. In fact, the convective motion of the field-line footpoints can indeed promote the formation of small-scale current structures, such as small-scale current sheets. Within these small-scale current sheets the frozen-in condition of ideal magnetohydrodynamics no longer holds, and the convergence and reconnection of field lines generate an electric field parallel to the magnetic field and a singular current layer, in which the energy stored in the field lines is released in the form of ohmic heating. Much work has studied the corresponding heating mechanisms [4-9]; but the conversion of magnetic energy into thermal energy remains a great challenge before us.
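The DC/AC criterion above, comparing the footpoint driving time with the Alfvén crossing time of a loop, can be made concrete. In the sketch below the loop length, field strength, density and driving time are all assumed, representative values, not measurements of any particular loop.

```python
# The DC/AC distinction compares the photospheric driving timescale with
# the Alfven crossing time of a coronal loop. Loop parameters are assumed.
import numpy as np

MU0, M_P = 4e-7 * np.pi, 1.673e-27

def alfven_crossing_time(L_m, B_t, n_m3):
    v_a = B_t / np.sqrt(MU0 * n_m3 * M_P)   # Alfven speed
    return L_m / v_a

L = 5e7            # m, ~50 Mm loop length
B = 1e-2           # T (100 G), assumed loop field
n = 1e15           # m^-3, assumed loop density
tau_a = alfven_crossing_time(L, B, n)
tau_drive = 300.0  # s, assumed granular foot-point driving time

print(f"Alfven crossing time ~ {tau_a:.0f} s, driving time ~ {tau_drive:.0f} s")
print("slow driving (tau_drive >> tau_A): DC / quasi-static stressing"
      if tau_drive > 10 * tau_a else "fast driving: AC / wave heating")
```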
AC heating, also called wave heating theory, holds that the footpoint motions excite waves of various modes in the coronal plasma, whose energy is dissipated into plasma energy through wave-particle interactions to heat the corona. Footpoint motion can generate a broad spectrum of fluctuations, such as acoustic waves, Alfvén waves, and fast and slow magnetosonic waves. Acoustic waves and slow waves steepen into shocks as they propagate upward and are so strongly dissipated that they can hardly reach the corona, while fast waves suffer severe refraction and reflection; Alfvén waves are the most likely to enter the coronal plasma, and observations indeed find Alfvén waves in the corona. Being a transverse wave, the Alfvén wave does not form shocks during propagation, and its energy travels along the magnetic field lines; however, because of its weak damping it is difficult for its energy to dissipate in the coronal plasma, so it cannot supply sufficient heating power for the corona. In general, because of our limited understanding of coronal fluctuations (their types, energy fluxes and wave spectra), the AC heating mechanism of the corona is still not well understood [10]. Various other theoretical mechanisms exist, but their common point is that coronal heating is caused by the convective motion of plasma at the solar surface, with the twisting of the coronal field lines at their photospheric footpoints driving changes in the topological configuration of the coronal-loop magnetic field. Observations of the solar magnetic field, analysis of those observational data and comparison with existing theoretical models are therefore the key to studying the coronal heating mechanism. Owing to the limitations of present observing conditions, it is difficult to judge which heating mechanism plays the decisive role in coronal heating. Fortunately, the Hinode ("Sunrise", 2006- ) satellite, which counts determining the coronal heating mechanism among its scientific goals, has been launched, and the Solar Probe Plus program is in preparation; these will provide an unprecedented opportunity to solve the riddle of coronal heating.", "Most geophysical phenomena are invisible and intangible; people observe them with scientific instruments and infer and deduce them from physical principles. The aurora is the only spectacular geophysical phenomenon that can be seen with the naked eye: on clear winter nights in suitable regions near the earth's north and south poles, anyone can watch this beautiful atmospheric glow. Auroras occur mostly in the belt-shaped zones at latitudes of about 65°~70° around each pole, called the auroral belts, and the luminous region lies at a ground height of about 100 km. As shown in Figure 1, the common color is yellow-green, occasionally red. Auroras occur mostly over the polar regions on the earth's night side; on the dayside they are few and can be observed only with instruments. (Figure 1: the aurora on the left is a discrete aurora, the picture on the right a diffuse aurora.) ① The discrete aurora, viewed from the ground, consists of rays shining along the direction of the magnetic field lines: suddenly a huge band of yellow light hangs right overhead, and the luminous region looks like an undulating curtain.
It drifts and sways across the sky, constantly moving and changing shape; seen from a particular direction it is a light arc of very narrow thickness. Discrete auroras mostly appear before midnight, with the bottom edge of the luminous region at about 100 km. ② The diffuse aurora appears as an inactive band of light tens of kilometers wide. Its brightness is weaker than the discrete aurora's, it has no fixed shape or boundary in the sky, it can cover a large area of the sky, and the luminous region has no complex structure. Diffuse auroras often appear after midnight. Polar-orbiting satellites observe that auroras occur mainly between 65° and 70° geomagnetic latitude over the polar regions, forming an oval luminous belt called the auroral oval. Its size is not fixed: when geomagnetic disturbance intensifies it widens and expands toward lower latitudes, which is why auroras can be seen at middle and low latitudes during very large magnetic storms. The aurora is produced when charged particles from the magnetosphere (mainly electrons) precipitate along the magnetic field lines into the polar atmosphere and collide with the neutral atmosphere, exciting neutral molecules and atoms, which then emit light. The strongest emission in the aurora is the 5577 Å line of the neutral oxygen atom, radiated at a height of about 100 km; it is yellow-green and is the main component of the naked-eye aurora. Another prominent feature is the oxygen doublet (6300 Å and 6364 Å) in the red part of the spectrum; it is usually weak, but in a few cases the doublet becomes very strong and a red auroral arc can be seen, its luminous height above 200 km. Another stronger emission is the 3914 Å band of nitrogen molecules, blended with the yellow oxygen light. The two types of aurora described above are produced by different kinds of precipitating particles. The luminous region of the diffuse aurora usually lies on the equatorward side of the auroral oval, at lower latitude than the discrete auroral arcs; this position is exactly the projection along the field lines, at high latitude, of the center of the magnetospheric plasma sheet, indicating that the particles producing the diffuse aurora come from the plasma sheet and precipitate into the polar regions as plasma-sheet particles are transported earthward toward the radiation belts. These particles are not appreciably accelerated as they fall from the plasma sheet toward the earth. The discrete aurora is different: it has structure, its violently moving arcs are sheet-like, and its luminosity is strong. It has been found that discrete auroras are produced mainly by precipitating electrons of 1-10 keV, which are significantly accelerated, their energy increased 10-100 times, on their way from the plasma sheet to the earth's polar regions, and which show a distinct mono-energetic peak. Auroral physics faces two difficult problems. One is the acceleration of the precipitating electrons: observations show that the electrons producing discrete auroras are accelerated by electric fields; satellites crossing the discrete auroral region measure increases in the electron energy flux below 10 keV of as much as 2~3 orders of magnitude.
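A quick order-of-magnitude check connects these electron energies to the parallel electric fields discussed next: a keV electron requires a kV-scale potential drop, and an assumed field of 100 mV/m (fields of this order are reported below) sustains that drop over only tens to hundreds of kilometers.

```python
# Order-of-magnitude check on auroral electron acceleration: the parallel
# potential drop needed for keV electrons, and the path length an assumed
# parallel electric field of 100 mV/m would need to sustain it.
E_par = 0.1        # V/m, assumed parallel field
for energy_keV in (1, 10):
    potential_V = energy_keV * 1e3          # eV gained per electron charge
    path_km = potential_V / E_par / 1e3
    print(f"{energy_keV:>2} keV electrons <- {potential_V/1e3:.0f} kV drop "
          f"over ~{path_km:.0f} km at 100 mV/m")
```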
On the other hand, the discrete aurora is associated with upward field-aligned currents: it is produced where upward field-aligned current flows in the polar ionosphere, corresponding to electron precipitation along the magnetic field and upward drift of positive ions. These field lines connect to the inner boundary of the plasma sheet, where the plasma energy is lower. Evidently electrons travel from the plasma sheet boundary to the polar ionosphere and undergo acceleration along a not-very-long path, their energy increasing by more than two orders of magnitude. It is now believed that the most likely effective acceleration mechanism is a localized electric field parallel to the magnetic field (E//). Observations also show that E// exists and can reach 100 mV/m at its strongest. The hard question is how such an electric field is generated and maintained, since any electrostatic field produced by charge accumulation would quickly disappear through current discharge. The earliest theories of auroral particle acceleration included the electric double layer and the electrostatic shock: the double-layer picture assumes that when a discharge occurs between the magnetosphere and the ionosphere, a perturbed charge layer can confine the potential drop to a narrow region of space. Various electrostatic-wave and solitary-wave acceleration theories followed, and recently kinetic Alfvén wave acceleration theories have been put forward [1-4]. The acceleration of charged particles is a pervasive problem for cosmic plasmas throughout astrophysics. Moreover, the aurora and its accompanying processes are not an isolated event: they are part of the global process in which the magnetosphere and ionosphere couple, and part of the series of eruptive processes associated with magnetospheric substorms, hence also called auroral substorms. At the onset of an auroral substorm, a violent disturbance of the geomagnetic field occurs simultaneously within the auroral oval in the polar region, called a magnetospheric substorm; the two are the luminous and magnetic-field manifestations of the same underlying cause. Auroral substorms were discovered through satellite observation of the auroral oval. At onset, the aurora in the part of the oval on the equatorward side of the discrete auroral region and on the poleward edge of the diffuse region suddenly brightens, and the brightening region then expands to higher latitudes; viewed from above the polar cap, a brightened region bulging poleward forms, called the auroral bulge. This expanding bulge actually consists of continually forming new auroral arcs, all of them moving curtain-shaped auroras. As the bulge expands in longitude and toward the equator, large folded ray arcs travel westward at about 1 km/s like an ocean wave; this is the westward traveling surge, which can run for thousands of kilometers. After the bulge reaches its maximum extent the expanded aurora begins to contract, the brightened active auroras gradually fade, and the auroral oval returns to its pre-substorm state. Recent studies show that there are various other forms of aurora in geospace, such as the shock aurora related to large-scale interplanetary shocks and the auroras associated with the earth's polar cusps [5].
In addition, in connection with the aurora there are enormous currents in the polar ionosphere; these currents flow into and out of the polar regions along the magnetic field lines from the magnetosphere and connect with the magnetosphere's overall current system. Thus another hard problem in the riddle of the aurora is the cause and effect, origin and details of the coupling process between the earth's magnetosphere and ionosphere; this is a topic of concern to many space physicists.", "1. The importance of understanding geospace currents. Speculation about and understanding of currents in geospace can be traced back to research in the 1880s on the origin of the daily variation of the geomagnetic field, when currents in the upper atmosphere were inferred from magnetic field variations on the ground. Since then, through the work of K. Birkeland, S. Chapman and H. Alfvén, the understanding of space currents has steadily deepened. The earth's space current systems are the source of all variations of the geomagnetic field. As the solar wind compresses the geomagnetic field to form the magnetosphere, currents are generated at the magnetospheric boundary (the magnetopause) and in various regions within the magnetosphere, and the spatial distribution and temporal variation of these currents carry rich information about solar-terrestrial energy coupling; they are one of the important data sources for the study of space physical phenomena and an important basis for space weather forecasting. These varying currents also generate induced currents in the conducting earth, which depend on the conductivity distribution of the earth's interior; space currents are therefore also widely used to probe the electromagnetic properties of the earth's interior, and thence to infer its material properties, structures and processes. 2. The current state of understanding of geospace currents. On the basis of electromagnetic theory, ground observation and satellite observation, people have already reached a deep understanding of the earth's space current systems. The large-scale currents of geospace are formed by the motion of charged particles in the solar wind, magnetosphere and ionosphere, and it is these currents that generate the varying magnetic fields observed in the various regions of space and on the ground. For example, the heliospheric current accompanying the dynamics of the solar wind is the source of the interplanetary magnetic field; the ionospheric tidal-wind dynamo current is the source of the solar quiet daily variation Sq and the lunar daily variation L of the geomagnetic field; the auroral electrojets produced by auroral particle precipitation are the source of geomagnetic substorms; the equatorial ring current produced by the azimuthal drift of charged particles in the magnetosphere is the source of the storm-time variation field Dst; and geomagnetic pulsations are generated by magnetohydrodynamic waves in the magnetosphere. Space currents are commonly divided, according to their physical processes and regions of distribution, into magnetopause currents, equatorial ring currents, ionospheric currents, field-aligned currents and magnetotail currents.
Among them, the equatorial ring current can be divided into the symmetric ring current and the partial ring current, and the field-aligned current into region-1 and region-2 currents. A current must form a closed loop, so the partial ring current, the region-2 field-aligned current and their closure current in the ionosphere are often combined into a single current system, and the magnetotail neutral-sheet current and the tail magnetopause current are likewise treated as one current system. In this way the magnetosphere-ionosphere current can be divided into six major current systems: the Chapman-Ferraro current on the dayside magnetopause (CF current); the symmetric ring current (SRC); the partial ring current together with the region-2 field-aligned current and its closure current in the ionosphere (PRFI current); the magnetotail current (MTL current); the region-1 field-aligned current (FAC1); and the ionospheric dynamo current (IDC). Figure 1 is a schematic diagram of these current systems. It should be pointed out that although these six major current systems summarize the main parts and characteristics of the current in the magnetosphere-ionosphere system, the actual structure and time variation of space currents are much more complicated than this model. (Figure 1: schematic diagram of the large-scale current systems of the magnetosphere-ionosphere.) The geometric parameters and current intensities of the space current systems can be determined from satellite and ground magnetic field observations by means of the Biot-Savart law, the parameter values in the calculation varying with the level of geomagnetic activity. Figure 2 shows the horizontal magnetic field distribution on the ground calculated from typical parameters of the six current systems: (a) the Chapman-Ferraro current on the dayside magnetopause (CF current); (b) the symmetric ring current (SRC); (c) the partial ring current with the region-2 field-aligned current and its ionospheric closure (PRFI current); (d) the magnetotail current (MTL current); (e) the region-1 field-aligned current (FAC1); (f) the ionospheric current (IDC). To study the energetics of the magnetosphere-ionosphere system, a more simplified "equivalent circuit" is sometimes used: the specific geometry of the magnetic field and the currents is not considered; instead the electrical properties of the space each part of the current traverses are integrated into lumped parameters to form a circuit network, and the relationships between current and energy in each part of the circuit are then examined. Figure 3 is an example of such an "equivalent circuit" (the "equivalent circuit" structure of the magnetosphere-ionosphere system).
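Ground-field estimates of the kind shown in Figure 2 rest on Biot-Savart integration over model current geometries. The simplest possible instance, a circular westward ring current evaluated at the earth's centre, already gives the right order of magnitude; the current and radius below are assumed storm-time values, not fitted parameters.

```python
# Minimal Biot-Savart estimate of the kind mentioned above: the magnetic
# depression at Earth's centre produced by a circular westward ring current.
# For a loop of radius R carrying current I, the field at the centre is
# B = mu0 * I / (2 R); current and radius are assumed storm values.
import numpy as np

MU0, R_E = 4e-7 * np.pi, 6.371e6
I = 3e6            # A, assumed storm-time ring current intensity
R = 4 * R_E        # assumed ring current radius
dB = MU0 * I / (2 * R)
print(f"ring current field at Earth's centre ~ {dB*1e9:.0f} nT (southward)")
```

A few million amperes at a few earth radii thus yields the roughly 100 nT depressions discussed below.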
Magnetopause current. The magnetopause current was first studied by Chapman and Ferraro in 1931 and is therefore also called the Chapman-Ferraro current, or CF current. It consists of two current vortices north and south of the equator on the dayside magnetopause, with the vortex centers located in the cusp regions of the magnetopause. Looking from the sun toward the earth, the northern-hemisphere vortex flows counterclockwise and the southern one clockwise, forming an eastward current belt flowing from dawn to dusk along the equatorial belt of the magnetopause. Figure 4 is a schematic diagram of the CF current; point Q in the figure is the neutral point of the magnetic field and the center of the current vortex, corresponding to the position of the cusp region. The generation of the CF current can be explained simply with the single-particle theory of plasma physics. Figure 5 shows solar wind particles (electrons and protons) penetrating the magnetopause boundary layer, with the earth's magnetic field directed perpendicular to the page (the z direction). Deflected in opposite senses by the v × B force, the solar wind particles gyrate through the boundary layer and return to the solar wind, and in doing so they form a current in the dawn-to-dusk (y) direction in the magnetopause boundary layer. From the discussion of currents in magnetized fluids in the previous section, the CF current in the magnetopause is a kind of diamagnetic current produced by the solar wind plasma. (Figure 4: the boundary-surface current formed as the solar wind compresses the geomagnetic field; (a) with the sun on the right; (b) looking from the sun to the earth, dawn on the left and dusk on the right. Figure 5: the physical mechanism of CF current generation.) The magnetopause lies about 10 earth radii from the earth, and the ground magnetic field generated by the CF current is only a few nT. Since the structure of the CF current is similar to that of the ionospheric Sq current system, its field at the earth's surface resembles the solar quiet daily variation. When a solar wind shock suddenly compresses the magnetopause, the CF current can approach to within 5~6 earth radii, and the resulting sudden increase of the geomagnetic field is the cause of the storm sudden commencement. Equatorial ring current. The ring current of the magnetosphere is a ring-shaped current belt encircling the earth near the equator, roughly symmetric about the geomagnetic equator and spread over a certain range of latitude. Its position and width vary with geomagnetic activity; it is generally distributed between 2 and 10 earth radii from the center of the earth. The main part of the ring current flows westward, and during magnetic storms its total intensity can reach several million amperes, causing a marked decrease in the horizontal component of the earth's magnetic field, as shown in Figure 6. (Figure 6: the magnetospheric ring current and the magnetic field it generates; W and E mark the westward and eastward ring currents respectively, and the arrows indicate the ring current's field.) During storms, charged particles from the plasma sheet are continually injected into the ring current, greatly increasing its intensity and causing a large decrease in the horizontal component of the ground field. Each injection of particles causes a drop in the ground field, called a substorm, and the main phase of a magnetic storm can be regarded as the result of a series of successive substorms. The magnitude of the main phase of a magnetic storm is proportional to the total energy of the ring current particles: for a storm magnitude of 100 nT the energy of the ring current particles can reach 4 × 10^15 J, slightly larger than the total energy of the main magnetic field beyond 3 earth radii, which shows how intense the disturbance of the magnetosphere is during a magnetic storm.
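That proportionality between the main-phase depression and the ring current particle energy is the Dessler-Parker-Sckopke relation, and the 4 × 10^15 J figure quoted above can be checked in a few lines (the values below are standard dipole parameters; treating the relation as exact is itself an idealization):

```python
# Dessler-Parker-Sckopke-type energy estimate relating ring current particle
# energy W to the storm-time depression: Delta B / B0 = 2 W / (3 W_m), where
# W_m is the energy of the dipole field above the Earth's surface.
import numpy as np

MU0, R_E, B0 = 4e-7 * np.pi, 6.371e6, 3.1e-5    # SI; B0 = equatorial field
W_m = (4 * np.pi / (3 * MU0)) * B0**2 * R_E**3  # dipole energy above surface
dB = 100e-9                                     # T, a 100 nT main phase
W = 1.5 * (dB / B0) * W_m
print(f"W_m ~ {W_m:.2e} J, ring current particle energy W ~ {W:.1e} J")
```

With these numbers W comes out at about 4 × 10^15 J, matching the figure in the text.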
The intensity of the magnetospheric ring current is not the same in every longitude plane, so it can be divided into two parts, the symmetric ring current and the partial ring current: the former is distributed essentially symmetrically around the earth, while the latter occupies only a certain range of longitude without closing into a ring, and the changes in the geomagnetic field produced by the two parts also differ. The formation mechanism of the ring current is the core problem of magnetic storm theory. As early as the beginning of the 20th century, Chapman proposed a ring current formation mechanism quite close to the modern theory, shown in Figure 7 as the equatorial plane viewed from above the north pole. Compressed by the stream of solar particles, the geomagnetic field is confined within a cavity; a layer of positive charge forms on the morning-side cavity wall and a layer of negative charge on the evening-side wall, so that a roughly uniform dawn-dusk electric field, with field lines approximately parallel to the equatorial plane, is set up in the cavity behind the earth. This field repels the charges on the cavity walls, giving them a tendency to leave the wall and cross to the opposite wall; but a particle can only make the crossing if its gyroradius is comparable to the cavity width. Chapman estimated the cavity width at several earth radii. The electron gyroradius is far too small for electrons to cross to the opposite wall, so they can only drift under the magnetic and electric fields; the proton gyroradius is much larger (about 11 earth radii by his estimate), so after a long orbital excursion protons can reach the opposite cavity wall, forming a current across the tail of the cavity and finally a ring current around the earth that reduces the magnetic field near the earth.
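The mass asymmetry at the heart of Chapman's argument, protons gyrating on far larger orbits than electrons of the same energy, is easy to quantify; the field strength and particle energy below are assumed, illustrative values rather than Chapman's own numbers.

```python
# Gyroradius comparison behind Chapman's cavity argument: at the same
# energy, a proton's gyroradius exceeds an electron's by sqrt(m_p/m_e) ~ 43.
# Field strength and particle energy are assumed, illustrative values.
import numpy as np

Q, M_P, M_E = 1.602e-19, 1.673e-27, 9.109e-31

def gyroradius_km(energy_keV, mass, B_t):
    v = np.sqrt(2 * energy_keV * 1e3 * Q / mass)   # non-relativistic speed
    return mass * v / (Q * B_t) / 1e3

B = 1e-8   # T (10 nT), assumed weak field near the cavity wall
for name, m in (("proton", M_P), ("electron", M_E)):
    print(f"{name:8s} r_g ~ {gyroradius_km(10.0, m, B):.0f} km at 10 keV, 10 nT")
```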
Magnetotail current
The magnetotail current is the current system with the largest spatial scale in the magnetosphere, and it is closely connected with the formation of the extremely long magnetotail. Since the mass, momentum and energy of the solar wind are transferred to the magnetosphere mainly through the magnetotail, the study of the magnetotail current is very important for understanding this transfer process. The magnetotail current is also the main source of the currents in the auroral belt: during substorms the magnetotail current flows into the ionosphere along the magnetic field lines, forming a complex current system in the polar ionosphere and generating severe geomagnetic disturbances. The magnetotail current consists of two parts, the neutral-sheet current and the magnetopause current of the magnetotail. The former flows from the dawn side to the dusk side within the tail neutral sheet; on reaching the magnetospheric boundary on the dusk side, the current returns to the dawn side along the magnetopause of the magnetotail, thus forming two semi-cylindrical current tubes that wrap around the northern and southern tail lobes respectively, as shown in Figure 8. At the earthward end, the two parts of the magnetotail current system adjoin the ring current system and the CF current system respectively (see Fig. 8), and they can extend far down the distant magnetotail. The magnetic field generated at the earth's surface by the magnetotail current is relatively uniform, and because the current is far from the earth its ground field is small, generally on the order of 10 nT. The physical mechanism of magnetotail current formation can be illustrated vividly by particle orbit theory.
Figure 8  The magnetotail current viewed looking along the magnetotail toward the sun
Field-aligned current
The field-aligned current, also called the Birkeland current, flows along the magnetic field lines and is an important current system in magnetospheric space. It connects the ionosphere and the magnetosphere, making them a unified, mutually coupled dynamic and electrodynamic system. In this system the ionosphere acts like a huge television screen that reflects and displays the processes and phenomena of the magnetosphere for ground instruments to observe, making ground monitoring of the distant magnetosphere possible. The ionosphere is also a huge load: about 1/3 of the solar wind energy entering the magnetosphere-ionosphere system is dissipated there. The ionosphere is an important plasma source as well; during magnetic storms large numbers of oxygen ions flow out of it into the current regions of the magnetosphere, greatly affecting the course of the storm. Fig. 9 shows the mean field-aligned current pattern during a typical substorm, in local time-invariant geomagnetic latitude coordinates. The large-scale field-aligned current lies in the 65°~80° latitude band, distributed essentially along the auroral oval, with its center displaced about 4° toward the night side relative to the latitude circle. The field-aligned current can be divided roughly into three regions: the cusp region, region 1 and region 2. The cusp-region current is concentrated in the magnetospheric cusp at about 80° latitude near noon; it generally flows into the ionosphere in the afternoon and out of the ionosphere before noon. The region 1 current flows into the ionosphere on the dawn side and out on the dusk side, with the strongest currents at 0700~0900 and 1300~1500 local time. The region 2 current flows in the opposite sense, into the ionosphere on the dusk side and out on the dawn side. The field-aligned current is the root cause of geomagnetic substorms: after it is injected into the ionosphere, a complex electric field and current system forms there, causing drastic changes of the geomagnetic field. During geomagnetically quiet periods the field-aligned current is weak: the maximum current density in region 1 is about 2~3 μA/m², and the region 2 current almost disappears. During disturbed periods the field-aligned current is greatly enhanced, with a total intensity that can reach several million amperes; the maximum current density in region 1 can exceed 5 μA/m², while that in region 2 is approximately equal to the region 1 value at night and about 1/3~1/4 of it in daytime. At the same time the whole current system moves to lower latitudes, its latitudinal spread widens, and the current distribution becomes very complicated.
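The quoted total intensity of several million amperes is consistent with these current densities, as a rough area estimate shows (the latitudinal width and central latitude of the current sheet below are assumed values chosen only for illustration):

```python
import math

RE_ION = 6.371e6 + 110e3   # geocentric radius at ionospheric height (~110 km), m

j = 2e-6      # field-aligned current density from the text, A/m^2
width = 3.0   # assumed latitudinal width of the current sheet, degrees
lat = 70.0    # assumed central latitude of the sheet, degrees

# Sheet area ~ (latitudinal width) x (half the circumference of the 70-degree circle)
w_m = math.radians(width) * RE_ION
l_m = math.pi * RE_ION * math.cos(math.radians(lat))
I = j * w_m * l_m
print(f"I ~ {I/1e6:.1f} MA")   # a few megaamperes, as quoted in the text
```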
The three regions of field-aligned current define an oval band surrounding the polar cap, called the auroral oval. In this band auroras occur frequently, and a strong auroral electrojet flows along it.
Ionospheric current
Fig. 9  Distribution of the field-aligned currents
The ionospheric currents are concentrated in the height range 90~150 km. They form the current system closest to the ground and are therefore the direct cause of most ground geomagnetic variations: the quiet variations of the geomagnetic field, such as Sq and L, are generated by the dynamo process of ionospheric plasma moving in the geomagnetic field; the violent field disturbances during substorms are caused by field-aligned currents injected into the ionosphere, which drive the westward and eastward auroral electrojets; and the "Huancayo phenomenon" of the equatorial belt originates from the equatorial electrojet produced by the special magnetic field configuration there together with the east-west ionospheric electric field. When discussing ionospheric currents we must consider how the ionosphere differs from the magnetospheric plasma. The ionosphere has three notable features. First, the densities of neutral and charged particles are very high, so collisions between particles (especially between charged and neutral particles) cannot be neglected; collisions transfer momentum, so electrical conductivity and Joule heating become meaningful. Second, neutral particles are the dominant component of the ionosphere, especially in its lower part, so the motion of the neutral component (the neutral wind) plays a non-negligible role in the motion of the ionospheric plasma. Third, the ionospheric plasma is anisotropic because of the geomagnetic field, and the degree and character of this anisotropy vary with height, so the current systems driven by the same electric field at different heights are very different. Because of these features, the electrodynamics of the ionosphere must be treated with the generalized Ohm's law and the anisotropic conductivity tensor. The quiet solar daily variation Sq of the geomagnetic field is produced by the dynamo action of the solar tidal wind. Of the many tidal wind components, the (1, −1) mode contributes the most to Sq. Figure 10 shows the Sq dynamo current system calculated with a particular conductivity model. At middle and low latitudes this theoretical current system reproduces the main features of the equivalent current system derived from geomagnetic observations; at high latitudes there are some differences, because the field-aligned current was not included in the calculation.
Fig. 10  Sq dynamo current system (a), current vectors (b), electrostatic field (c) and total electric field (d)
The lunar daily variation L of the geomagnetic field is produced by the dynamo action of the lunar gravitational tidal wind; the tidal component that contributes most to L is the (2, 2) mode. Figure 11 shows the L dynamo current system calculated with a particular conductivity model. It reproduces not only the main characteristics of the L equivalent current system at middle and low latitudes but also its basic characteristics at high latitudes.
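These dynamo calculations rest on the anisotropic conductivity tensor mentioned above. A minimal sketch of the standard per-species expressions for the parallel, Pedersen and Hall conductivities, with one electron and one mean-ion species (all parameter values are assumed, illustrative E-region numbers, not values from this text):

```python
import math

Q = 1.602e-19
ME, MI = 9.109e-31, 5.0e-26   # electron mass; mean ion mass (~30 amu, e.g. NO+)
B = 5e-5                      # geomagnetic field, T (assumed)

def conductivities(n, nu_e, nu_i):
    """Parallel, Pedersen and Hall conductivities (S/m).
    Per species: sigma_P = (n q^2/m) * nu/(nu^2 + Omega^2)
                 sigma_H = (n q^2/m) * Omega/(nu^2 + Omega^2),
    with electron and ion Hall terms of opposite sign."""
    oe, oi = Q * B / ME, Q * B / MI    # unsigned gyrofrequencies
    sig0 = n * Q**2 * (1/(ME*nu_e) + 1/(MI*nu_i))
    sigP = n * Q**2 * (nu_e/(ME*(nu_e**2 + oe**2)) + nu_i/(MI*(nu_i**2 + oi**2)))
    sigH = n * Q**2 * (oe/(ME*(nu_e**2 + oe**2)) - oi/(MI*(nu_i**2 + oi**2)))
    return sig0, sigP, sigH

# Assumed E-region values: n = 1e11 m^-3, nu_e = 1e4 s^-1, nu_i = 2.5e3 s^-1
for name, s in zip(("parallel", "Pedersen", "Hall"), conductivities(1e11, 1e4, 2.5e3)):
    print(f"{name}: {s:.2e} S/m")
```

The strong height dependence of the collision frequencies is what makes the same electric field drive very different currents at different heights, as stated above.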
3. Problems in the study of geospace currents
Although space currents have been studied for hundreds of years, important problems of theory, of measurement technique, and of experimental verification of the currents remain to be solved. Theoretically, the physical processes of solar-terrestrial energy coupling and the current systems they generate still need to be understood more deeply. The two proposed mechanisms, magnetic field reconnection and quasi-viscous interaction, have been on the table for half a century; observations have made important progress, but the subject is basically still at an exploratory stage. All currents must close in loops, yet there are still great differences of opinion about how the field-aligned currents close and about the structure of the partial ring current. In terms of observation technique, most of our present knowledge of space currents is inferred from geomagnetic field observations, with a small part coming from radar observations. Direct measurement of the current is very difficult: one must observe all charged particles over all energy ranges, obtain the particle distribution functions, and then integrate them to obtain the current. At present no ideal means exists to meet this requirement.
4. Difficulties in the study of geospace currents
The biggest difficulty in the study of geospace currents is that we cannot observe the currents in situ throughout the whole of the earth's space at the same time, so a complete picture of the instantaneous current system is hard to obtain. The current patterns derived from data taken by different satellites, in different periods, along different orbits and with different instruments are "montage" patterns in an average sense. For a time-varying current system this statistically averaged pattern is far from the real one and must be used with caution. The earth's space is vast, but the number of satellites is very limited and their orbital coverage extremely incomplete. Developing in situ observation of the earth's space current system is therefore both very important and very difficult.",
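The "integrate the distribution function" step described above is simply the taking of velocity moments: density n = ∫f d³v and current J = Σ q∫v f d³v. A one-dimensional toy version (a drifting Maxwellian with assumed parameters), showing the numerical moment agreeing with the analytic J = q n v_d:

```python
import numpy as np

Q = 1.602e-19   # elementary charge, C

# Drifting Maxwellian, all values assumed: n = 1e6 m^-3, vth = 1e5 m/s, drift 2e4 m/s
n, vth, vd = 1e6, 1e5, 2e4
v = np.linspace(-1e6, 1e6, 20001)        # velocity grid, m/s
dv = v[1] - v[0]
f = n / (np.sqrt(2*np.pi)*vth) * np.exp(-(v - vd)**2 / (2*vth**2))

J_numeric = Q * np.sum(v * f) * dv       # first moment: J = q * integral(v f dv)
print(J_numeric, Q * n * vd)             # both ~3.2e-9 A/m^2
```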
"The plasma in the Earth's magnetosphere above a few thousand kilometers altitude is generally collisionless. Various electromagnetic processes allow energy to be exchanged between charged particles and electromagnetic field fluctuations, and energy exchange between particles is likewise mediated by electromagnetic disturbances of various frequencies. Generally speaking, energy exchange in the magnetosphere can be regarded as mediated by wave processes, that is, by wave-particle interactions. Wave-particle interaction plays an important role not only in the Earth's magnetosphere but also in studies of the heliosphere and cosmic plasmas. An important feature of magnetospheric plasmas is that they are usually not in thermodynamic equilibrium: their distribution functions often exhibit beam or ring distributions, pitch-angle anisotropy, or directed drifts such as currents. A plasma that deviates from the equilibrium distribution possesses free energy, and under certain conditions this free energy excites various instabilities and triggers the growth of various fluctuations [6]. In general, quasi-linear diffusion gradually consumes this free energy (reducing the asymmetry of the distribution function, reducing the drift speed, or flattening the distribution into a plateau, and so on). At the same time, if other particles are present (in other parts of the distribution function, or of other species), those satisfying specific conditions can, on the time scale of the quasi-linear diffusion, absorb energy from the fluctuations and be accelerated and heated. Particles in specific energy ranges also interact with different fluctuations in various ways, the most common being cyclotron resonance, bounce and drift resonance, and Landau resonance. Important regions for the study of wave-particle interaction are the polar regions and the inner magnetosphere. It is generally believed that wave-particle interaction is closely related, on the one hand, to the formation of the conic distribution functions of upflowing ionospheric particles; on the other hand, it can diffuse particles into the loss cone, ultimately causing ring current and radiation belt particles to precipitate into the polar atmosphere. Wave-particle interaction is also considered one of the main causes of the formation and loss of radiation belt electrons [3]. In the Earth's inner magnetosphere, for example, "cold" electrons and protons (plasmaspheric particles), "hotter" energetic electrons and protons (ring current particles), and still higher-energy radiation belt electrons and protons (trapped particles) coexist [4]. Abundant plasma waves can be generated inside the plasmasphere and in its boundary layer region; MHD waves in the magnetosheath can also enter the inner magnetosphere through field-line resonance and other channels; atmospheric lightning can excite various whistler wave modes; and hiss can be formed through the cyclotron instability generated when high-energy electrons are injected into the cool plasma of the plasmasphere, and so on [6]. Electrostatic and electromagnetic fluctuations of all kinds (with frequencies from a few millihertz to megahertz) can be excited and propagate in the inner magnetosphere, making it a region rich in fluctuations and in complex wave-particle interactions. An important applied problem of wave-particle interaction is the acceleration and loss of radiation belt electrons. Theoretical models of the origin of radiation belt electrons fall roughly into two categories: radial transport and local acceleration. The radial transport mechanism holds that when electromagnetic fluctuations close to the electron drift frequency exist in the magnetosphere, wave-particle interaction can transport electrons from large L values inward with the adiabatic invariants μ and J conserved, thereby increasing their energy (Figure 1 is a schematic diagram of the main wave-particle interactions in the inner magnetosphere). The local acceleration mechanism holds that electrons can become radiation belt electrons through local resonant heating by VLF chorus waves [2, 4].
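For the cyclotron resonance just mentioned, the resonant electron energy can be estimated from the first-order resonance condition ω − k∥v∥ = Ωe together with the parallel whistler dispersion relation. A sketch with assumed inner-magnetosphere parameters (all numbers are illustrative, not taken from this text):

```python
import math

C = 2.998e8; Q = 1.602e-19; ME = 9.109e-31; EPS0 = 8.854e-12

# Assumed plasma parameters in the inner magnetosphere
B = 200e-9   # magnetic field, T
n = 1e7      # electron density, m^-3 (10 cm^-3)

wce = Q * B / ME                           # electron gyrofrequency, rad/s
wpe = math.sqrt(n * Q**2 / (EPS0 * ME))    # electron plasma frequency, rad/s

w = 0.3 * wce   # chorus-band whistler frequency (assumed)

# Parallel whistler dispersion (cold, dense plasma): k^2 c^2 = wpe^2 * w/(wce - w)
k = math.sqrt(wpe**2 * w / (wce - w)) / C

# First-order cyclotron resonance: w - k*v = wce  =>  v = (w - wce)/k
v = (w - wce) / k                          # negative: counter-streaming electrons
E_keV = 0.5 * ME * v**2 / Q / 1e3          # nonrelativistic kinetic energy
print(f"resonant electron energy ~ {E_keV:.0f} keV")   # ~10 keV for these values
```

For these assumed values the resonant electrons fall in the tens-of-keV range, the population whose injection and scattering the text discusses.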
The loss of relativistic electrons also has many causes; one important process is still thought to be the scattering loss of high-energy electrons by electromagnetic ion cyclotron (EMIC) waves and plasmaspheric hiss enhanced during storms. Although observations and studies of waves and particles in the magnetosphere have been carried out for many years and much progress has been made [1, 5], the study of wave-particle interactions in this region, and especially of the excitation and damping of the waves, remains difficult. The main reasons are these: because the parameters of the magnetospheric plasma span a wide range, many kinds of waves can be excited, and their specific properties, such as frequency, polarization and group velocity, differ greatly; the excitation process is often closely tied to the dynamics of the magnetosphere; and the locations of excitation and the subsequent propagation and evolution are complicated. Studying wave-particle interaction requires simultaneous observations of waves and particles in multiple bands, but comprehensive and complete space observations are currently lacking (the space-time coverage and the accuracy of space detection are both limited), which also hampers a deeper understanding of the interaction process. The phenomena seen in observations are often the joint result of several waves and particle populations interacting, so the focus of research is often not to propose an interaction between one type of wave and one type of particle, but to distinguish the relative importance of the various processes; quantitative study is all the more important. At the same time, quantitative study of the interaction of different wave modes with particles is of great significance for space weather modeling. In addition, most existing wave-particle interaction theories are linear or quasi-linear, and the characteristics of the interaction in the nonlinear stage still need further study. The magnetospheric plasma contains not only electrons and protons but also helium and other heavy ions; these components change the excitation conditions and dispersion relations of some waves, which may further complicate the wave-particle interaction process.", "Magnetic field reconnection is the process by which magnetic field lines in a plasma current sheet break and reconnect, spontaneously or under forcing, accompanied by a sudden release of magnetic energy and its conversion into plasma kinetic and thermal energy. Magnetic reconnection is associated with many eruptive phenomena in space and laboratory plasmas, such as flares in the solar atmosphere, coronal mass ejections, flux transfer events at the Earth's magnetopause, and magnetospheric substorms; its study helps us understand these burst phenomena better. The concept of magnetic reconnection was first proposed by Giovanelli, who argued that discharges would occur near the neutral point or neutral line where the field strength is zero and might have an important bearing on the occurrence of solar flares. In 1958 Dungey first introduced the term reconnection, then applied it to the Earth's magnetosphere and established the first open magnetosphere model.
Later, a variety of magnetic reconnection models were established on this basis, the most famous being the Sweet-Parker model and the Petschek model [1]. In the Sweet-Parker model the plasma carries the magnetic field from both sides of the current sheet into the sheet, where reconnection takes place: the field lines are cut and rejoined, magnetic energy is converted into plasma kinetic and thermal energy, and the plasma finally flows out at the two ends of the current sheet. Figure 1 is a schematic diagram of the Sweet-Parker reconnection geometry. The reconnection predicted by the Sweet-Parker model is too slow to explain the bursts actually observed: in the solar corona the reconnection time scale estimated from the Sweet-Parker model is several decades, whereas actual eruptions generally last several minutes. Petschek improved the Sweet-Parker model by proposing that reconnection occurs in a configuration composed of two pairs of slow shock waves and an X-type diffusion region. Although the reconnection rate of the Petschek model is significantly higher than that of the Sweet-Parker model, it is still too slow to explain the observed burst phenomena in space plasmas. These early reconnection models are all based on the magnetohydrodynamic equations, with energy dissipation achieved through resistivity. They also hold that Alfvén waves play an important role: Alfvén waves accelerate the plasma and make it flow out of the two ends of the current sheet at the Alfvén speed.
Figure 1  Schematic diagram of the geometry of Sweet-Parker reconnection
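The "decades versus minutes" contrast can be reproduced with an order-of-magnitude estimate: the Sweet-Parker time is τ_SP ≈ √S · τ_A, where S = L·v_A/η is the Lundquist number. A sketch with assumed coronal parameters and a commonly used approximation for the Spitzer magnetic diffusivity (all values illustrative):

```python
import math

MU0 = 4e-7 * math.pi
MP = 1.673e-27

# Assumed coronal current-sheet parameters
L = 1e8    # sheet length, m
B = 1e-2   # magnetic field, T (100 G)
n = 1e15   # number density, m^-3
T = 1e6    # temperature, K

vA = B / math.sqrt(MU0 * n * MP)   # Alfven speed
eta = 1e9 * T**-1.5                # approximate Spitzer magnetic diffusivity, m^2/s
S = L * vA / eta                   # Lundquist number
tau_A = L / vA                     # Alfven transit time
tau_SP = math.sqrt(S) * tau_A      # Sweet-Parker reconnection time

print(f"vA = {vA:.1e} m/s, S = {S:.1e}")
print(f"tau_SP ~ {tau_SP/3.15e7:.0f} years")   # years to decades, versus minutes observed
```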
However, most space plasmas are very tenuous. For example, near 1 AU (one astronomical unit, the mean sun-earth distance, about 150 million km) the mean free path of particles is about 1 billion km, and the characteristic time for significant changes of the plasma parameters is much shorter than the mean collision time between charged particles. Space plasma is therefore generally collisionless and has no resistivity in the classical sense; that is to say, magnetic reconnection in space plasmas is essentially collisionless. Recent studies show that the Hall effect① plays a decisive role in such reconnection. The two-dimensional reconnection problem has been studied under identical conditions with different numerical methods (resistive MHD, MHD with the Hall term, hybrid simulation and full particle simulation), with reconnection triggered by a prescribed disturbance at the initial time [2]. The results show that the reconnection rates obtained with the methods that include the Hall term, namely Hall MHD, hybrid and particle simulations, are almost equal. The reconnection region has a multi-level structure. Outside the ion inertial length the plasma is frozen to the magnetic field lines and flows out of the two ends of the current sheet at the Alfvén speed. Within the ion inertial length electrons remain frozen to the field lines but ions do not, so the motions of electrons and ions separate, and the resulting Hall effect determines the reconnection rate. This separation of electron and ion motion gives the out-of-plane component of the magnetic field a quadrupole distribution; the fluctuations that dominate in this region are probably whistler waves, and the electron outflow velocity can be much greater than the Alfvén speed [3]. On the still smaller scale of the electron inertial length even the electrons are no longer frozen to the field lines, and the electron inertial term and the electron pressure anisotropy play the major role. Since the electron inertial length is much smaller than the ion inertial length, the overall reconnection rate is determined by the Hall term. Figure 2 shows the configuration of collisionless magnetic reconnection.
Figure 2  Schematic diagram of the geometric configuration of collisionless magnetic reconnection
In the studies above, however, reconnection was triggered by an artificially imposed disturbance at the initial moment; that is to say, we do not know the actual triggering mechanism of magnetic reconnection. At present lower hybrid fluctuations are considered the most likely trigger, and observations have indeed found evidence of lower hybrid fluctuations in the current sheet [4]. According to linear theory, however, lower hybrid waves can be excited only at the edges of the sheet where the density gradient is large; whether in the nonlinear evolution stage they can reach the middle of the current sheet, or trigger reconnection through other mechanisms, is not yet clear [5]. The acceleration mechanism of electrons in magnetic reconnection is also a matter of general concern. It is usually thought that electron acceleration is related to the reconnection electric field generated during reconnection [6]. Studies have shown that electrons can be trapped in the reconnection region and accelerated to very high energies, and these energetic particles may generate an electron pressure anisotropy near the X point [7], but the specific processes of this electron acceleration are not clear. In addition, the structure of three-dimensional reconnection and its electron acceleration, whether the reconnection process is quasi-steady or explosive, the scale of reconnection, and how boundary conditions affect its temporal and spatial scales are all questions deserving attention.
① Hall effect: in a plasma or a conductor, when the current is perpendicular to an external magnetic field, a potential difference appears between the two faces perpendicular to both the field and the current.",
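The ion and electron inertial lengths that set these scales are d = c/ω_p for each species. A quick evaluation (the plasma-sheet density below is an assumed, illustrative value):

```python
import math

C = 2.998e8; Q = 1.602e-19; EPS0 = 8.854e-12
ME, MP = 9.109e-31, 1.673e-27

def inertial_length(n, m):
    """d = c/omega_p, with omega_p = sqrt(n q^2 / (eps0 m))."""
    return C / math.sqrt(n * Q**2 / (EPS0 * m))

n = 3e5   # assumed plasma-sheet density, m^-3 (0.3 cm^-3)
print(f"d_i = {inertial_length(n, MP)/1e3:.0f} km")   # ion scale, ~hundreds of km
print(f"d_e = {inertial_length(n, ME)/1e3:.1f} km")   # electron scale, ~43x smaller
```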
"In the early morning of March 13, 1989, Quebec, known as a North American city that never sleeps, went completely dark. The culprit was the strongest geomagnetic storm in 50 years: the storm produced strong induced currents that paralyzed the power grid. Residents spent 9 hours of winter without electricity; the power loss was 2000 kW, and the direct economic loss amounted to 500 million US dollars. Yet such disturbances of the earth's magnetic field can sometimes also delight us: residents at higher geomagnetic latitudes, for example, are often fortunate enough to enjoy the colorful auroral displays triggered by auroral substorms. With the rapid development of space-based and ground-based observation technology, human understanding of the earth's space environment is constantly deepening. Disturbances of the magnetosphere have a huge impact on human production and life, and among them geomagnetic storms and substorms are long-standing hot topics of space weather research. A magnetic storm is a phenomenon in which the horizontal component of the earth's magnetic field drops sharply within one to several hours and recovers over the following days; it is the most violent form of geomagnetic disturbance. Ordinarily, the geomagnetic disturbances caused by solar wind fluctuations striking the earth are confined to the high polar regions. But when the interplanetary magnetic field carries a long-lasting (several hours or more) southward component of very large amplitude (greater than 10~15 nT), the magnetosphere is kept under sustained stress, the field disturbance reaches the equatorial region, and the equatorial field deviates severely from its normal level, producing a magnetic storm. Some magnetic storms, especially large ones, begin with a sudden impulse that signals the arrival of an interplanetary shock structure; this sudden impulse before the storm is called the storm sudden commencement. There are also magnetic storms of another type, with a 27-day recurrence, driven by high-speed solar wind streams from coronal holes; these storms are moderate [1]. The development of a magnetic storm can usually be divided into three stages: the initial phase, the main phase and the recovery phase. After the sudden commencement, the horizontal component of the field generally maintains an increased value for one to several hours and then drops suddenly; the interval from the commencement to the sudden drop is called the initial phase. Typically the horizontal component falls to −100 nT to −300 nT over a period of hours, and the period during which it is below the normal value is called the main phase of the storm. After remaining near its minimum for a time, the horizontal component slowly returns to its pre-storm state, which generally takes 1 to 3 days; this period is called the recovery phase [2]. The development of a magnetic storm is usually characterized by the storm-time variation index Dst: the horizontal component of the field at five stations uniformly distributed in longitude is corrected by subtracting the baseline field and the quiet-day variation Sq, the residuals are averaged, and the average is normalized to the equator to give the Dst index. The local-time variation of a magnetic storm is usually described by the Asy index: when the variations of the H components of the five stations are plotted together, the distance between the upper and lower envelopes gives the asymmetry index Asy. Both the Dst and the Asy index are measured in nT.
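The construction just described is easy to mirror in code. A toy sketch (array shapes and the normalization step follow the description above; this is an illustration of the recipe, not the operational algorithm of any data center):

```python
import numpy as np

def dst_index(H, Sq, baseline, maglat_deg):
    """Toy Dst: subtract each station's baseline field and quiet-day Sq from
    its H component, normalize to the equator by 1/cos(magnetic latitude),
    then average over stations.
    H, Sq, baseline: arrays of shape (n_stations, n_hours)."""
    maglat = np.radians(np.asarray(maglat_deg))[:, None]
    dist = (H - baseline - Sq) / np.cos(maglat)   # per-station disturbance
    return dist.mean(axis=0)

def asy_index(H_dist):
    """Toy Asy: spread between the upper and lower envelopes of the
    station curves, H_dist of shape (n_stations, n_hours)."""
    return H_dist.max(axis=0) - H_dist.min(axis=0)
```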
Dst = 0 indicates a quiet day; Dst < −100 nT indicates a large magnetic storm, −100 nT < Dst < −50 nT a moderate storm, and −50 nT < Dst < −30 nT a weak storm [2]. The other kind of sudden geomagnetic change, the magnetospheric substorm, arises mainly from the interaction between the solar wind and the magnetosphere. Solar wind energy is continuously transported into the magnetosphere, where it is often dissipated abruptly, leading to a series of active phenomena in the upper atmosphere of the auroral oval, in the magnetotail and in the inner magnetosphere, and finally releasing a large amount of energy in the magnetosphere. A magnetospheric substorm is a systematic change of the whole magnetosphere, including the magnetotail, with a time scale of about three hours. In fact the magnetic field change during a substorm resembles the quiet-time Sqp field, and the magnetic substorm can be regarded as the effect of an enhanced Sqp current system. To characterize the intensity of the polar electrojets during substorms, the following magnetic activity indices are defined. The time variations of the horizontal component observed at stations uniformly distributed in longitude along the auroral belt (with quiet-day observations as baseline) are plotted on one graph, and the envelopes of the curve cluster are taken: the upper envelope gives the AU index, the lower envelope the AL index, and the distance between the two envelopes is the AE index, all in nT. The AU index represents the maximum field disturbance produced by the eastward electrojet, AL the maximum disturbance produced by the westward electrojet, and AE the total disturbance intensity; these are called the auroral electrojet indices. Since the AE index is obtained from stations distributed along the auroral belt, it reflects only the geomagnetic activity of the electrojets at auroral latitudes [2].
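The envelope construction of AU, AL and AE translates directly into code. A toy sketch of the definitions above (input layout assumed; AO, the midline of the envelopes, is an additional index sometimes used alongside them):

```python
import numpy as np

def auroral_indices(H_dist):
    """H_dist: (n_stations, n_times) deviations of the H component from the
    quiet-day baseline at auroral-belt stations, in nT."""
    AU = H_dist.max(axis=0)   # upper envelope: eastward electrojet effect
    AL = H_dist.min(axis=0)   # lower envelope: westward electrojet effect
    AE = AU - AL              # total disturbance intensity
    AO = 0.5 * (AU + AL)      # envelope midline
    return AU, AL, AE, AO
```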
Substorms were first identified in ground-based auroral studies and are divided, on the basis of the auroral electrojet index AL, into three phases: the growth phase, the expansion phase and the recovery phase. Substorms are large-scale dynamical events of the magnetosphere that inject high-energy (tens to hundreds of keV) electrons and ions into the inner magnetosphere and globally reconfigure the magnetospheric field. Substorms correlate well with the orientation of the IMF, occurring when the IMF is southward; on average several isolated substorms occur per day. Although geomagnetic storms and substorms take place in space far from the earth's surface, they can do fatal damage to spacecraft and astronauts, and when their subsequent development reaches the ionosphere they can inflict immeasurable economic and personal losses on daily human life [1]. Research on magnetic storms and substorms is therefore of great significance. Yet even though human space exploration technology and theoretical knowledge have advanced greatly compared with the early stages of space science, many problems in the theory of magnetic storms and substorms remain unresolved. From the standpoint of space science research, four main problems have been puzzling researchers.
The formation and development of substorms in the earth's magnetosphere
At present scientists hold two main views of the formation and development of substorms and have built corresponding theoretical models, as shown in Figure 1. One is the near-earth neutral line model, originally proposed by McPherron et al. [3]. The model attempts to give a self-consistent explanation of most magnetospheric phenomena. McPherron et al. proposed that when the interplanetary magnetic field (IMF) turns southward, magnetic reconnection occurs on the dayside of the earth, and the reconnected magnetic flux is then transported by the solar wind over the polar caps into the magnetotail. During the expansion phase a plasmoid forms at the near-earth neutral line of the magnetotail, and when the field lines sever at the more distant neutral line, these particles move earthward along the newly closed field lines; after the particles precipitate, they interact with the particles over the polar regions and excite brilliant auroras [4].
Figure 1  Two competing models of magnetospheric substorms: the near-earth neutral line model and the near-earth current disruption model
In the NENL model of Hones, on the other hand, a southward turning of the IMF is not a necessary element; he held that under more general IMF conditions reconnection occurs directly in the magnetotail [4]. The near-earth neutral line model is consistent with many observed phenomena and theoretical models: ① during the expansion phase of substorms, satellites have observed clear magnetic reconnection at about 25 earth radii down the magnetotail; ② the tailward-moving plasmoid during substorms was actually observed, a phenomenon the model had predicted several years earlier; ③ the concept of the near-earth neutral line is grounded in the global MHD model under southward IMF conditions (the model commonly used in magnetospheric research), coupling naturally to the substorm triggering mechanism; ④ under the Vasyliunas equation, the generation of the substorm current wedge is automatically satisfied. However, the near-earth neutral line model also leaves some observations unexplained: ① it cannot explain the sudden brightening, at auroral substorm onset, of the most equatorward auroral arc (which maps to only about 8 earth radii); ② it cannot explain the dramatic changes that occur at substorm onset in the plasma sheet at about 8 earth radii. The model itself proposes that these are effects on the strong near-earth magnetic field of the earthward particle flows released from the reconnection region. The other model is the near-earth current disruption model, first proposed by Lui et al. on the basis of satellite-observed events [5].
Unlike the previous theory, the current disruption model holds that during the growth phase of a substorm a current sheet develops in the inner magnetosphere; the sheet then becomes thinner and thinner, and the local ions become non-adiabatic and begin to cross it. These ion streams interact with adiabatic electrons drifting in the opposite direction, generating lower hybrid waves; at the same time, the density gradient at the boundary of the plasma sheet also excites lower hybrid waves. These two kinds of waves give rise to anomalous resistivity in the plasma sheet, so that the cross-tail current is disrupted; auroral brightening is then observed at high latitudes, and finally field-line reconnection occurs farther down the magnetotail, while a tailward-propagating rarefaction wave induces a bulk earthward flow of particles [4]. Compared with the near-earth neutral line model, the advantage of the near-earth current disruption model is that it can explain the satellite observations made near the earth (at about 8 earth radii as well as at 25 earth radii). The model, however, still cannot account quantitatively for the observed phenomena. From the above it can be seen that the two main models of substorm formation and development each explain only part of the observations, and neither theory can overturn the other to establish a definitive and complete substorm model. Even though substorms occur more often than magnetic storms, it is still very difficult at the present technical stage to have multiple satellites in the right positions during substorm development; and with the added limitation of instrument accuracy, it is hard to establish the sequence of the related phenomena during the rapidly evolving growth and expansion phases. The deadlock in substorm theory thus stems from two major technical limitations: too few satellites and insufficient measurement accuracy.
Mechanism of the rapid formation of the ring current during strong magnetic storms
Generally speaking, the energy of the earth's magnetic storms comes from the solar wind, while the earth's own ionosphere is the main source of the energetic ring current particles during storms. Yet the energy of ionospheric particles, oxygen ions for example, is very low (< 5 eV), whereas the ring current particles reach 30~500 keV. Moreover, during periods of intense geomagnetic activity large numbers of particles flow up into the magnetosphere from the ionosphere and even the thermosphere in the polar regions. One of the main differences between O+ and H+ outflow is that O+ is strongly correlated with geomagnetic activity: the O+ content of the magnetosphere is greatly enhanced during magnetic storms, and during large storms O+ can become the main component of the storm-time ring current. Evidently the earth's thermosphere and the ionosphere of the auroral and polar cap regions act as a whole, jointly governing the supply of particles from the ionosphere to the magnetosphere, while the particles of ionospheric origin significantly change the particle composition and energy distribution in the ring current region.
However, how the particles of the earth's ionosphere are rapidly accelerated and injected into the ring current region; how the upflowing ionospheric particles are related to solar and interplanetary conditions and to geomagnetic activity; how the electric fields of the auroral and polar cap regions are formed and distributed; how the regions of wave-particle interaction, that is, of particle heating, are distributed in altitude; and which region contributes most to the magnetospheric ion population: to all of these there is still no clear answer.
The relationship between geomagnetic storms and substorms
The relationship between geomagnetic storms and substorms has always been a hot topic among space scientists. A large body of historical satellite statistics shows that the two phenomena have their own characteristics and that neither necessarily implies the other; but whether a magnetic storm may be triggered by a succession of substorms is still not settled. Strong substorms are often observed during the main phase of a magnetic storm, and many researchers hold that the development of a magnetic storm is the result of frequent substorms. Chapman remarked in 1962: "Polar substorms consist of discrete, intermittent polar disturbances, usually with a lifetime of an hour or several hours. Although polar substorms often occur during magnetic storms, they also occur in the absence of obvious magnetic storms." There are, however, some grounds for believing that magnetic storms are built up from strong substorms. During substorm activity, energy can be stored in the inner magnetosphere, leading to the formation of a partial ring current. If strong substorms occur one after another, the partial ring current effect of each previous substorm persists, expands outward from where it arose, and evolves into the symmetric ring current of the storm. As far as we know at present, the main criteria for distinguishing the two are: ① substorms last a shorter time than magnetic storms, usually only 1~2 hours; ② magnetic storms usually have obvious driving sources, such as coronal mass ejections (CME), corotating interaction regions (CIR) and interplanetary shocks; ③ magnetic storms are global disturbances of the magnetosphere, whereas substorm disturbances are only regional, occurring mainly in the high-latitude auroral oval; ④ as noted above, magnetic storms are usually accompanied by a storm sudden commencement, that is, an enhancement of the magnetopause current.
Injection into the radiation belt slot and the formation of new radiation belts during magnetic storms
The earth's radiation belts are regions of the near-earth magnetosphere where high-intensity, high-energy charged particles are trapped; they consist mainly of the inner and outer belts. The inner belt is relatively stable, lying within 2.5 Re with its center at L ≈ 1.5 (about 3000 km above the earth's surface); its main component is 1~100 MeV protons produced by cosmic ray albedo neutron decay. The outer belt lies in the range 3~7 Re and consists mainly of electrons from ~100 keV to MeV energies. The outer belt is very unstable, and during strong magnetic storms its inner boundary can be eroded inward to 2 Re.
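The radial-transport energization invoked earlier for outer-belt electrons follows from conservation of the first adiabatic invariant μ = E⊥/B in a dipole field, whose equatorial strength scales as L⁻³. A sketch for an equatorially mirroring, nonrelativistic particle (the starting energy and L values are illustrative assumptions):

```python
B0 = 3.1e-5   # equatorial surface field, T

def b_eq(L):
    """Dipole equatorial field magnitude at L Earth radii."""
    return B0 / L**3

def transported_energy(E_keV, L_from, L_to):
    """Energy after moving an equatorially mirroring particle from L_from to
    L_to with mu = E_perp/B conserved (nonrelativistic approximation)."""
    return E_keV * b_eq(L_to) / b_eq(L_from)

# e.g. a 100 keV electron transported inward from L=6 to L=4 (assumed numbers):
print(transported_energy(100.0, 6, 4))   # ~337 keV, since (6/4)^3 = 3.375
```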
Between the inner and outer radiation belts there is a slot region of very low particle radiation flux, located near 2.5 Re in quiet conditions; this region is often regarded as a safe zone for spacecraft in orbit, as shown in Fig. 2 and Fig. 3. In addition, it was recently discovered that within the inner belt region there is an "anomalous cosmic ray trapping zone" composed of heavy nuclei (mainly oxygen, with some nitrogen, helium and carbon) ions. The radiation belts themselves are generally considered a severe radiation environment in near-earth space.
Figure 2  Schematic diagram of the structure of the quiet-time radiation belts
Figure 3  Fluxes of high-energy electrons observed by the SAMPEX satellite: the temporal and spatial evolution of high-energy electrons in the inner belt, the outer belt and the radiation belt slot. Panels (a) and (b) show the formation of two different types of new radiation belts (slot injection) in the normally safe region.
The slot exists because whistler-mode waves generated near the plasmapause cause strong pitch-angle scattering of the particles there, so that particles are not easily trapped, leaving a region of low radiation flux. The slot was once regarded as a safe zone well suited to spacecraft operation, known as the safety island. Transient radiation belt events, however, have shown that this safe zone can cease to exist. The inner and outer belts change dynamically with the solar wind, interplanetary conditions and geomagnetic activity. During solar activity and geomagnetic disturbances there can be disastrous space weather: high-energy electron storm events, solar proton events, injections of energetic particles into the slot, transient new radiation belt events (including relativistic electron belts and a second proton belt), and strong electron and ion injection events. New satellite observations show that the real radiation belt environment is far more complex than the static description: after every major magnetic storm the extent of the belts, their central positions and their particle radiation fluxes change. Although the large body of satellite data has furthered our understanding of the radiation belts, how the particles in the slot are accelerated and how new radiation belts form so rapidly still puzzle scientists worldwide. As stated above, geomagnetic storms and substorms are strong disturbances of the earth's magnetic field. Auroral substorms can bring us a brilliant visual spectacle, but the space disasters brought by storms can cause incalculable losses of life and property; moreover, research into the mechanisms of magnetic storms and substorms provides a basis for many theories of space science. From every point of view, therefore, the problems of geomagnetic storms and substorms will always be one of the focal points of the space science community.", "The earth's atmosphere consists of the troposphere, stratosphere, mesosphere, thermosphere and exosphere (Figure 1), extending from the ground to a height of several hundred kilometers. The middle and lower atmosphere is governed mainly by the earth's gravitational field. The ionosphere is composed of ionized atoms and molecules produced by photoionization under solar ultraviolet radiation.
It is distributed over the altitude range from 50 km to about 1000 km, so the mesosphere and thermosphere of the earth's atmosphere overlap with the ionosphere (Figure 1). The ionosphere is governed by both the gravitational field and the earth's magnetic field. In earth space the ionosphere is the inner boundary of the magnetosphere; the vast region above it is called the magnetosphere, which is controlled mainly by the earth's magnetic field, the solar wind and the interplanetary magnetic field. On the dayside the magnetosphere extends to 10~12 earth radii; on the nightside it presents a cylindrical structure with a radius of 20~25 earth radii, and the magnetotail can reach 200 earth radii or even farther. Just how long the earth's magnetotail is remains an unsolved mystery (Figure 2).
Figure 1  Height distribution of the earth's atmosphere and ionosphere
The discovery and study of the ionosphere began more than a century ago [1], but the discovery of the magnetosphere has a history of only half a century. After humanity entered the space age in the 1950s, the magnetosphere was first discovered in 1958, during the International Geophysical Year, by the United States' first artificial satellite, Explorer 1 [2].
Figure 2  The solar wind and the earth's magnetosphere
Through nearly half a century of space exploration and aerospace practice, humans have gradually realized that the earth's atmosphere, ionosphere and magnetosphere are not independent of one another but form a tightly coupled complex system, and that activity in them, especially magnetic storms, ionospheric storms and solar activity, can affect communications and other aspects of daily life, seriously affect human spaceflight, and even produce disastrous space weather events. To this day, magnetosphere-ionosphere-atmosphere coupling, and especially the coupling of the magnetosphere, ionosphere and thermosphere during magnetic storms, has remained the most challenging scientific question in geospace and heliospheric physics, and the physical mechanisms underlying the coupling of this complex system are still poorly understood. At present four scientific problems in the coupling of this system await answers [3~5]. The first is the energy coupling process: during a magnetic storm the magnetosphere can inject an enormous amount of energy into the ionosphere/thermosphere in a very short time; but how much energy does the magnetosphere inject during a storm, and what is the complex response of the ionosphere/thermosphere system to that injection? The second is the dynamical coupling process: after a magnetic storm the dynamics of the thermosphere are greatly altered; how does the neutral wind field feed back on the magnetosphere, and how do changes in the atmospheric circulation affect the ionosphere? The third is the electrodynamic coupling process: how does the global distribution of the complex ionospheric current systems change between solar and geomagnetic quiet periods and magnetic storms, and what is its effect on the ionosphere? How is the potential drop along the magnetic field distributed, and how does it affect particle acceleration?
The fourth is the mass coupling process: what are the escape rates and acceleration mechanisms of H+ and O+ ions during magnetically quiet periods and during magnetic storms? What mechanisms control the plasma density of the plasmasphere? How do upflowing particles affect the onset, development and recovery of magnetic storms and magnetospheric substorms? At present these important scientific questions are far from fully understood. Besides the unsolved problem of magnetosphere-ionosphere-thermosphere coupling, recent ground-based and more systematic satellite observations have found that the ionosphere and thermosphere are also strongly affected by disturbances from the lower atmosphere. Solid-earth events such as earthquakes, and disturbances of the middle and lower atmosphere such as typhoons, thunderstorms, atmospheric gravity waves, planetary waves and tides, can all have a significant impact on the ionosphere [6~10]. Figure 3 presents our current preliminary understanding of the response of the ionosphere/thermosphere to the lower atmosphere and to the various processes from above. But exactly how the lower atmosphere exerts its strong influence on the ionosphere and thermosphere has not been clarified, and the most effective coupling mechanism between them remains an unsolved mystery.
Figure 3  Schematic diagram of the coupling of the ionosphere/thermosphere with the layers above and below
At present the main difficulty in studying these scientific questions lies in observation. The study of this complex system requires detailed measurement of solar radiation and the solar wind, together with multi-parameter measurements covering the earth's various layers from the polar regions to the equator; some of these are quite difficult, for example measuring the global distribution of the electric field, particles over all energy ranges, the wind and temperature fields of the upper atmosphere, and the global distribution of chemical composition. Second, it is very difficult to build physical models spanning the many scales from the sun to the earth's surface, which also hinders the solution of the problems above. Their solution therefore depends on more systematic and comprehensive multi-parameter observations and on the construction of more complete physical models in the future. Solving these problems will greatly advance research on the space physical environment and provide a theoretical basis for accurate space weather prediction.", "In the earth's ionosphere, the variation with a period of one tropical year is a very obvious and important ionospheric climatological variation. According to classical ionospheric theory, for a given place on earth the solar zenith angle is smaller in summer than in winter, so the ionizing solar ultraviolet and other radiation received by the upper atmosphere there is stronger in summer and weaker in winter, and the annual variation of ionospheric ionization intensity should likewise be strong in summer and weak in winter.
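The classical expectation ties peak ion production to the noon solar zenith angle χ, with production roughly proportional to cos χ (Chapman theory). A quick check for an assumed mid-latitude site (the latitude is an illustrative choice; the declination values are the solstice extremes):

```python
import math

def noon_cos_chi(lat_deg, decl_deg):
    """cos(solar zenith angle) at local noon: chi = |latitude - declination|."""
    return math.cos(math.radians(lat_deg - decl_deg))

lat = 40.0   # assumed mid-latitude site
for season, decl in [("summer solstice", 23.4), ("winter solstice", -23.4)]:
    # Chapman theory: peak production q_max is roughly proportional to cos(chi)
    print(season, f"cos(chi) = {noon_cos_chi(lat, decl):.2f}")
# summer: cos(16.6 deg) ~ 0.96; winter: cos(63.4 deg) ~ 0.45, i.e. classical
# theory predicts roughly twice the peak production in summer.
```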
A large number of observations show, however, that the annual variation of the ionosphere is often not like this: superimposed on the background of its normal seasonal variation are "anomalous variations" comparable to or even larger than the normal ones (Fig. 1).
Figure 1  The various annual anomalies of the ionosphere in terms of global electron content (GEC), converted from GIM (global ionospheric map) data; the dashed and dotted lines are the fitted annual and semiannual variations (each including the annual mean). The abscissa marks the northern hemisphere vernal equinox, summer solstice, autumnal equinox and winter solstice; the ordinate unit is GECu (10^30 electrons)
Seasonal anomaly (or winter anomaly): the ionization intensity of the ionosphere is greater in winter than in summer, exactly the opposite of what classical theory predicts. Seasonal anomalies occur only in the daytime at middle latitudes, and tend to occur in the longitude sectors closer to the north and south magnetic poles, namely North America and Australia. Non-seasonal anomaly (or annual anomaly): for the global ionosphere as a whole, the ionization intensity around December (northern winter) is greater than around June (northern summer). An equivalent statement is that the seasonal anomaly is stronger in the northern hemisphere than in the southern: the southern ionosphere tends to follow the normal seasonal variation of classical theory, while the northern ionosphere tends to show the seasonal anomaly that violates it. Semiannual anomaly: the ionization intensity around the spring and autumn equinoxes is greater than around the winter and summer solstices; semiannual anomalies appear mainly at middle and low latitudes and in the equatorial region. The analysis of the various anomalous variations of ionospheric climatology has a long history. Early on, data from observing stations at geomagnetically conjugate points in the two hemispheres were used to separate the seasonal and non-seasonal anomaly components; more recently, analyses of satellite observations and of global GPS network data have yielded more detailed characteristics of the annual anomalies. The causes of the various annual anomalies have always been a central concern of ionospheric physics. Among the proposed causes, the change of upper atmospheric composition driven by the upper atmospheric circulation, which affects the degree of ionization (Fig. 2), can explain fairly well the formation of the seasonal anomaly and of the semiannual anomaly in some regions. The main basis of this theory is that, from classical theory, the ionization intensity of the ionosphere is proportional to the ratio of atomic to molecular concentration at ionospheric heights, for example the ratio of atomic oxygen to molecular nitrogen [O/N2], and the atmospheric circulation can change the concentration ratio of the neutral components and thereby affect the degree of ionization.
Figure 2  The upper atmospheric circulation in winter and summer
The circulation system drives the change of the neutral composition at ionospheric heights (that is, the change of [O/N2]), which can produce the semiannual and seasonal anomalies of the ionosphere (after Yu Tao et al., 2006). Around the equinoxes, solar radiation is balanced between the two hemispheres and there is no trans-equatorial flow in the thermosphere. The lighter atomic species then have higher concentrations at ionospheric heights while the heavier molecular species lie mainly at lower altitudes, so [O/N2] at ionospheric heights is large and the ionization intensity is large as well. Around the winter and summer solstices, the imbalance of solar radiation between the two hemispheres produces a trans-equatorial flow in the thermosphere. This flow acts like a spoon stirring a pot: the gas components originally stratified in height at the equator and at middle and low latitudes (the atomic species at greater heights and the molecular species lower down) are mixed, with the result that [O/N2] at ionospheric heights decreases and the ionization intensity of the ionosphere decreases with it. In this way the ionization intensity at middle-low latitudes and the equator is smaller in the solstice seasons than in spring and autumn, which is the semiannual anomaly of the ionosphere. In addition, the upper atmospheric circulation associated with this trans-equatorial flow causes daytime upwelling at the middle and high latitudes of the summer hemisphere and downwelling outside the auroral oval of the winter hemisphere (especially in the near-polar regions), so that [O/N2] at ionospheric heights decreases in the summer hemisphere, lowering the ionization intensity, and increases in the winter hemisphere, raising it; the ionization intensity is thus lower in summer than in winter, which is the seasonal anomaly of the ionosphere. The non-seasonal anomaly is an annual variation of the ionosphere that has not yet been satisfactorily explained, and a variety of possible mechanisms have been proposed over the years. Among them, the annual variation of the sun-earth distance (smallest in December, largest in June) can qualitatively explain the non-seasonal anomaly: the expected non-seasonal anomaly can reach 7%, but that is less than half of what is actually observed. In recent years, possible mechanisms such as the hemispheric asymmetry of the geomagnetic field, lower atmospheric waves (planetary waves, tides, gravity waves, etc.) and surface meteorological activity have been proposed, and simulations have been carried out with large-scale theoretical models of the ionosphere; but much work remains before the causes of the non-seasonal anomaly of the ionosphere are fully explained.", "In the ionospheric E region, at heights of 90~160 km, a thin ionized layer of high density will occasionally appear suddenly over some local areas of the earth; it is called the sporadic E layer of the ionosphere, or Es for short. Es is usually detected with the HF-band echoes of an ionosonde (Fig. 1); in addition, VHF radar and radio occultation are effective means of detecting Es.
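Detection by reflection works because a radio wave is turned back where its frequency equals the local plasma frequency, f_p ≈ 8.98√n_e (f in Hz, n_e in m⁻³), so an ionogram trace directly bounds the layer density. A quick conversion, using the 8.5 MHz upper edge of the Es trace described in the figure caption below (the reading is illustrative):

```python
def critical_density(f_hz):
    """Electron density at which the plasma frequency equals f:
    f_p[Hz] ~ 8.98 * sqrt(n_e[m^-3])  =>  n_e = (f/8.98)^2."""
    return (f_hz / 8.98)**2

print(f"n_e ~ {critical_density(8.5e6):.1e} m^-3")   # ~9e11 m^-3 for 8.5 MHz
```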
"In the ionospheric E region, at altitudes of 90~160 km, a thin ionized layer of high density occasionally appears suddenly over some local region of the Earth; this is called the sporadic E layer of the ionosphere, Es for short. Es is usually detected with ionosonde echoes in the HF band (Fig. 1); VHF radar and radio occultation are also effective means of detecting Es. Es has a high ionization density and can reflect HF and even VHF or UHF radio waves back to the ground, a property exploited for trans-horizon radio communication and long-distance television broadcasting. Relying on such anomalous Es propagation, Japanese amateur radio operators have directly received television images broadcast from Beijing, and FM broadcasts from southern regions have occasionally been received in central China. The observation and study of Es therefore has important practical value and scientific significance. Figure 1: ionogram observed by the ionosonde at the Wuhan Ionospheric Observatory. The ionosonde transmits radio waves of different frequencies (the abscissa) upward and estimates the height of the reflection point in the ionosphere (the ordinate) from the arrival time of the reflected echo; the traces near 110 km over the frequency range 3.5~8.5 MHz are echoes from the ionospheric Es (figure provided by the Wuhan Ionospheric Observatory, Institute of Geology and Geophysics, Chinese Academy of Sciences). Research on the ionospheric Es has a history of more than 50 years, but its morphology is diverse, its appearance is random, and its formation time and duration are unpredictable; the explanation of its morphological and distribution characteristics remains an unresolved scientific problem and one of the important topics of ionospheric research. In different latitude regions Es has different formation mechanisms and exhibits different regional characteristics. Es in the auroral region is produced mainly by energetic particles (electrons) precipitating along the magnetic field lines and is thus associated with the aurora; this kind of Es appears mostly at night and is usually thick (similar to the ordinary E layer), so it is also called the night E layer. Equatorial Es, which appears in a narrow band close to the magnetic equator, is basically a daytime phenomenon; it is closely related to the equatorial electrojet and is produced by the associated plasma instabilities and nonlinear effects. Apart from the equatorial and polar types, the Es appearing over the vast mid-latitude regions is called mid-latitude Es; it usually appears as a dense thin layer only one or two kilometers thick, whose reflection efficiency for radio waves is very high and whose absorption is very small. Mid-latitude Es has a statistically significant spatio-temporal distribution. In the global distribution of Es, the "Far East anomaly" of the ionosphere has long been noted: at the middle and low latitudes of the Far East (that is, East Asia), the occurrence rate of Es in summer is particularly high and its intensity particularly strong, far exceeding other areas at the same latitude in the same season (especially South America, the geomagnetically conjugate region). The seasonal variation of mid-latitude Es appears as the "summer anomaly": from May to September in the northern hemisphere, and from November to February in the southern hemisphere, the observed Es is most frequent and strongest. Both the Far East anomaly and the summer anomaly of Es can be seen in Fig. 2. Besides these two anomalous patterns, Es also shows less prominent diurnal and long-term variations.
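The height estimate described in the ionogram caption above is the standard virtual-height calculation: assuming the pulse travels at the speed of light, a round-trip echo delay Δt gives h' = cΔt/2 (a slight overestimate of the true height, since the pulse is slowed inside the ionization). A minimal sketch; the example delay is an invented value:

```python
# Virtual reflection height from the echo delay of an ionosonde pulse,
# assuming free-space propagation at the speed of light throughout.

C = 299_792_458.0  # speed of light, m/s

def virtual_height_km(echo_delay_s: float) -> float:
    """Round-trip echo delay -> one-way virtual reflection height, km."""
    return C * echo_delay_s / 2.0 / 1000.0

# Example: a ~0.73 ms delay corresponds to an Es-like height near 110 km.
print(f"{virtual_height_km(0.73e-3):.1f} km")  # ~109.4 km
```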
The global distribution of Es and its diurnal and seasonal variations have not yet been clearly explained, and the correlation between Es and solar activity is still controversial. In addition, in the mesopause region lidar often observes Ns (sudden metal atom layers) similar to Es and finds that the occurrence of Ns resembles that of Es; this too has no satisfactory explanation at present. Figure 2: global distribution of the occurrence of Es. The four panels correspond to four northern-hemisphere seasons: autumn (September-November 2006, upper left), winter (December 2006-February 2007, upper right), spring (March-May 2007, lower left) and summer (June-August 2007, lower right). An obvious summer anomaly can be seen (Es occurs mainly in the summer hemisphere), as well as an obvious Far East anomaly (the summer occurrence rate of Es in East Asia is far higher than in South America) [1]. Since the 1960s the most influential theory of mid-latitude Es formation has been the "wind shear theory". It holds that the ionization produced by solar radiation or energetic particles generally forms layers much thicker than Es, so the most likely way to create a thin Es layer is to compress the existing ionization, and the most effective compression mechanism is wind shear: at the heights where Es appears, owing to the Lorentz force, a horizontal neutral wind that changes with height (a shear wind) makes the E-region plasma converge into a thin layer of higher density, forming Es. Calculations show that in the northern hemisphere a meridional shear wind blowing southward below and northward above (the reverse in the southern hemisphere) satisfies the convergence condition, while a zonal shear wind blowing eastward below and westward above generates Es more efficiently. A problem encountered as soon as the wind shear theory was proposed is that the lifetime of the ordinary E-layer ions (non-metallic ions such as those of oxygen and nitrogen) is very short (about 10 s on average), so an Es layer formed by wind shear should disappear quickly. It was then pointed out that metal ions live long enough to maintain an Es layer formed by wind shear, so metal ions might be the main ion component of Es; later rocket observations indeed detected metal ions such as magnesium and iron in Es, and the wind shear theory came to be accepted by most scholars. Although the wind shear-metal ion theory succeeds in explaining the formation of mid-latitude Es, it cannot satisfactorily explain some basic distribution characteristics of the Es layer; for example, it cannot explain the "summer anomaly" and "Far East anomaly" of mid-latitude Es described above: shear winds vary little over the year, and the metal ions, which likely have an extraterrestrial origin and enter the upper atmosphere with meteors and meteorites, do not appear to be significantly more abundant in summer than in winter. As for the geographical anomalies, although some scholars have proposed that the Far East anomaly of the horizontal component of the geomagnetic field can explain the geographical distribution of Es to some extent, it is not sufficient to explain why Es occurs so frequently in the Far East.
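The compression at the heart of the wind shear theory can be illustrated with a one-dimensional toy continuity model (not the full ion momentum balance with the Lorentz force): wherever the wind-driven vertical ion drift converges toward a null, plasma piles up into a thin layer. All parameter values below are invented for illustration:

```python
import numpy as np

# Toy 1-D illustration of wind-shear compression: a vertical ion drift
# w(z) converging toward a null at z0 sweeps plasma into a thin layer.

nz = 400
z = np.linspace(90e3, 130e3, nz)      # height grid, m
dz = z[1] - z[0]
z0 = 110e3                            # drift null: the layer-forming height
k = 2e-4                              # drift shear, 1/s (invented value)
w = -k * (z - z0)                     # upward below z0, downward above it

n = np.ones(nz)                       # initially uniform plasma density
dt = 0.4 * dz / np.abs(w).max()       # CFL-limited time step

for _ in range(2000):
    flux = n * w
    dndt = np.zeros(nz)
    # first-order upwind divergence of the vertical flux
    dndt[1:-1] = -np.where(w[1:-1] > 0,
                           (flux[1:-1] - flux[:-2]) / dz,
                           (flux[2:] - flux[1:-1]) / dz)
    n += dt * dndt
    n[0] = n[-1] = 1.0                # plasma keeps flowing in at the edges

print(f"peak enhancement: {n.max():.0f}x at {z[n.argmax()] / 1e3:.1f} km")
```

The density peaks sharply at the null near 110 km, the qualitative behavior the theory needs; with the ~10 s lifetime of ordinary ions such a layer would still decay quickly, which is why the long-lived metal ions discussed above are essential.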
In addition, the wind shear theory is also powerless to explain such details as the thickness of Es and its motion. Until now, therefore, a complete understanding of the most basic physical mechanism of mid-latitude Es has been lacking. In recent years, studies of the relationship between planetary waves and Es have pointed out that planetary waves are also an important factor affecting mid-latitude Es and have offered a plausible explanation of the summer anomaly of Es, which remains to be confirmed by further theoretical and observational work. It has also been pointed out that the seasonal variation of sudden meteor deposition into the upper atmosphere correlates well with the seasonal variation of Es. Moreover, planetary waves are closely related to regional factors such as surface weather systems and topography, and whether these regional characteristics can explain the "Far East anomaly" in the occurrence rate of Es is likewise worth looking forward to.", "The mesosphere and lower thermosphere at 80~120 km are considered the transition region between the Earth's atmosphere and space. In the atmosphere above this region the degree of ionization becomes increasingly high, forcing from outside the Earth (solar electromagnetic and particle radiation) generally controls the behavior of the atmosphere, and the atmospheric constituents (neutral and ionized) often show high-speed motion and drastic change; below this region the atmosphere is governed by neutral fluid dynamics, and motion and change are relatively gentle. Many exotic atmospheric phenomena occur in this region. For example, the mesopause region has the lowest temperature in the Earth's atmosphere and shows an anomalous seasonal variation (warm in winter, cold in summer) that existing theory cannot yet convincingly explain. The mesopause region also exhibits many layering phenomena, such as noctilucent clouds, mesospheric summer radar echoes, and dust and metal layers; it is generally believed that the material producing them comes mainly from the ablation of meteors (including cosmic dust). However, some exotic behaviors of the mesopause layering phenomena cannot be understood with existing theories and models. For example, some studies find no clear relationship between meteor input and the variations of the mesopause metal atom layers [1]. Meanwhile, iron lidar observations and rocket soundings have found that although the dust layer lies several kilometers below the iron atom layer, their cross-sectional structures are similar and their motions are consistent [2]. This observational fact is surprising, because the structure and variation of the dust layer are thought to be governed by local dynamical factors (eddy diffusion and the neutral wind) and by the properties of the injected meteors (velocity, mass, etc.), whereas gas-phase chemistry is considered to dominate the structure and variation of the lower part of the iron layer. In addition, simultaneous iron and sodium lidar observations show that the lower boundaries of the iron and sodium layers exhibit a pervasive fine layering with identical motion [3]. This observation challenges existing chemical models of the metal layers as well as gravity wave theory.
Clearly, the formation mechanism and variability of these layering phenomena are not well understood. The transition region between atmosphere and space is too high for sounding balloons and too low for orbiting satellites, and sounding rockets able to reach it have been launched at only a limited number of locations worldwide, so direct in-situ measurements of the region's environmental parameters have so far been very scarce. Ground-based and satellite remote sensing provide the main sources of data on the environmental characteristics of the region; the main measured parameters include density, temperature, wind and some constituents. Most ground-based remote sensing instruments, however, can only measure the state parameters of various tracers (charged particles, sodium atoms, iron atoms, etc.); assuming the tracer is in equilibrium with the neutral atmosphere, the measured tracer parameters are taken as atmospheric parameters (such as temperature and wind). In the transition region between atmosphere and space, the tracer and the neutral atmosphere are not necessarily in equilibrium. For example, the seasonal variation of atmospheric temperature at 85~90 km measured by a large-aperture Rayleigh lidar is 10~15 K lower than that measured by a sodium temperature lidar at the same latitude [4]. Moreover, the charged particles may be decoupled from the neutral atmosphere, so the winds observed by radio radars may represent only charged-particle drifts. Existing satellite remote sensing can give global atmospheric density, temperature and wind fields, but its temporal resolution is low and it often cannot separate the spatial (e.g., longitudinal) and temporal (local-time) variations of atmospheric parameters. In short, understanding of the basic physical processes of this region is still at a very preliminary stage, and some key scientific questions cannot yet be answered. For example: does the main forcing of this region come from outside the Earth or from the Earth itself? Do atmospheric waves and tides exert an important influence on the mesopause temperature [5]? Are the metal atoms of meteoric origin in thermal equilibrium with the surrounding atmosphere? Understanding of the production and loss mechanisms of metal atoms, of the interconversion of metal elements between atomic and ionic states, and of the influence of dynamical transport on the metal atom layers is still very insufficient, and much new observational and modeling work is needed.", "1. Space weather and disastrous space weather. In modern life people generally pay attention to the weather. The weather we speak of in daily life occurs in the troposphere, mainly within about 30 km of the Earth's surface, and refers to the states of the neutral atmosphere that affect human life and production: cloudy, sunny, rain, snow, cold, warm, dry, wet, wind, and so on. Space weather concerns mainly the region above the troposphere: the "wind" it cares about is the solar wind, and the "rain" it cares about is the rain of high-energy particles from the sun.
It does not care much about "cold and warm" but pays special attention to changes in the sun's ultraviolet and electromagnetic radiation; it does not care much about "cloudy and sunny" but has a soft spot for disturbances of electric and magnetic fields [1-2]. Severe weather such as violent storms and thunder and lightning brings disasters to people's everyday production and life. Disastrous space weather can cause satellites to fail or even fall, interrupt communications, introduce navigation errors, collapse power systems, and threaten human health and life, causing major losses to the economy, society and national security. For this reason, in today's era of high technology, space weather has attracted wide attention. The environment we live in includes, besides the Earth's solid, ocean and atmospheric environments, a solar-terrestrial space environment closely tied to human survival and development. The solar-terrestrial space environment from the sun to the Earth can be divided into several levels: the solar atmosphere and interplanetary space, the magnetosphere, the ionosphere and the atmosphere, each coupled and connected to the others (Fig. 1) [2]. Since the launch of the first artificial satellite in 1957, human spaceflight, communication, navigation and military activities have expanded from the Earth's surface to altitudes of hundreds or thousands of kilometers, and the solar-terrestrial space environment has become an important arena for human survival and development. In this environment the sun is the source: violent solar activities such as flares and coronal mass ejections often threaten, seriously affect and harm the Earth's space (the magnetosphere, ionosphere and middle and upper atmosphere), the operation and safety of satellites, and human activities [3]. These short-term changes caused by solar activity are called space weather. The sun, the source of space weather events, is sometimes quiet and sometimes active, and it wields enormous power; its every move and instantaneous change are enough to affect the Earth revolving around it. Local regions of the sun often release huge amounts of energy and matter in a very short time; this phenomenon is called solar activity, and includes solar flares (Fig. 2(a)), prominence eruptions and coronal mass ejections (Fig. 2(b)). When solar activity erupts, the sun radiates electromagnetic waves, ejects particles, and "blows" out the solar wind, often accompanied by X-rays, enhanced ultraviolet radiation, bursts of high-energy particle flux and coronal mass ejections, generating powerful shock waves and disturbances of many kinds. X-rays and ultraviolet radiation take about 8 minutes to reach the Earth, where they affect the ionosphere and upper atmosphere. Streams of high-energy particles, as in a solar proton event, reach the Earth within a few hours and can increase the proton flux at an altitude of about 10,000 m by factors of tens of millions, endangering space safety.
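The arrival times quoted here follow from the Sun-Earth distance of about 150 million km (quoted later in the text). A quick check in Python; the particle speeds are representative bulk-plasma values and are illustrative assumptions, while the relativistic protons of a proton event are far faster and arrive within hours:

```python
# Travel time from the Sun to the Earth for light and for bulk solar plasma.

AU_KM = 1.496e8    # Sun-Earth distance, km
C_KM_S = 2.998e5   # speed of light, km/s

print(f"light / X-rays / UV: {AU_KM / C_KM_S / 60:.1f} min")  # ~8.3 min

# Representative bulk speeds: slow solar wind .. fast coronal mass ejection.
for v_km_s in (500, 1000, 2000):
    print(f"plasma at {v_km_s:>4} km/s: {AU_KM / v_km_s / 3600:.0f} h")
```

So the electromagnetic burst is the first warning, energetic particles follow within hours, and the bulk ejecta described in the next paragraph arrive one to a few days later.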
Figure 1: schematic diagram of the solar-terrestrial space region, which can be divided into the solar atmosphere and interplanetary space, the magnetosphere, the ionosphere and the atmosphere [2]. Figure 2: (a) the largest recorded solar flare event; (b) a coronal mass ejection event (image source: NASA). In addition, the sun continuously blows out the solar wind, and coronal mass ejections travel at speeds of hundreds to thousands of kilometers per second, sweeping across the Earth like a shock wave. The high-speed solar wind can break through the protection of the Earth's magnetosphere, intrude into the Earth's space, and harm production and life. It causes effects throughout geospace, such as aurorae, global ionospheric disturbances, ionospheric storms, sudden geomagnetic disturbances, geomagnetic storms, substorms and high-energy particle storms, which can do enormous damage to aerospace, communication, navigation and power systems; such an event is called a disastrous space weather event. 2. The hazards of disastrous space weather. Before the space age the losses caused by space weather disasters were in fact very small; with the development of aerospace technology, the modernization of production and life, and the expanding scale of space development, the effects of space disasters have become more and more evident. Effects on spacecraft: dramatic changes on the sun directly drive changes in space weather. Geomagnetic storms and enhanced solar ultraviolet radiation heat the Earth's upper atmosphere and significantly increase the atmospheric density at satellite altitudes. As a satellite moves through the upper atmosphere, atmospheric drag reduces its kinetic energy; its orbital altitude decreases and the orbit shrinks, bringing it into denser air, which increases the drag further and accelerates the descent; unless the satellite is boosted a little higher, it will slowly fall [2,4]. For example, on the first flight of the U.S. space shuttle Columbia, solar activity sharply increased the density of the upper atmosphere and the drag on the shuttle was 15% greater than before; fortunately it carried enough fuel to avoid the tragedy of a crash [5]. The U.S. Skylab space station fell into the sea near Australia in July 1979, about two years ahead of schedule, because the atmospheric drag associated with the approaching solar maximum had not been fully estimated [2]. High-energy charged particles deliver large radiation doses that damage aerospace materials and degrade the performance of structural materials. The surface potential of a spacecraft varies with the state of the space plasma, and low-energy particles can charge the satellite surface: during a substorm the high-density low-energy plasma is replaced by a low-density plasma cloud with energies of 1~50 keV, which can charge the spacecraft surface to a high potential and even produce electrostatic discharge breakdown. Electrons of still higher energy cause internal charging of the spacecraft, shorten component lifetimes, and can even produce single-event effects that scramble programs and lead to spacecraft failure [2,4].
There are many examples of satellite failures caused by strong particle radiation. In January 1994 a high-energy electron storm caused the Canadian communication satellite Anik to lose control; the backup system had to be activated, recovery took 6 months, and the loss reached 200 million US dollars [5]. In November 1990 the main control computer of China's Fengyun-1 meteorological satellite suffered a single-event upset caused by high-energy charged-particle radiation; the satellite's attitude could no longer be controlled and the satellite failed irrecoverably. Space debris and meteoroids can mechanically damage spacecraft: they carry extremely high kinetic energy, and a collision can deform or even penetrate the surface; the magnetic field can also change a spacecraft's attitude [2,6]. According to statistics of the US space agencies, about 40% of satellite failures are related to space weather conditions, and in the aerospace field alone the annual losses from disastrous space weather run to tens of millions of dollars. Impact on communication, navigation and positioning: any system that transmits signals as electromagnetic waves is affected by ionospheric variability when the signal passes through the ionosphere or propagates beneath it, for example long-distance LF, MF and HF communication, over-the-horizon radar, and even low-frequency navigation systems [1]. Ionospheric disturbances significantly affect the propagation of radio communication and radar signals, causing communication interruptions and errors in radar echoes; satellite microwave communication signals crossing the ionosphere suffer degraded quality when ionospheric disturbances cause scintillation; the space radiation background and space debris affect the detection, identification and tracking of targets by reconnaissance and early-warning satellites, long-range early-warning radars and over-the-horizon radars; and ionospheric refraction and scintillation degrade the accuracy of radar measurements of target azimuth, speed and range, and near the geomagnetic equator can even cause system failure. There are many examples of ionospheric disturbances seriously affecting communications: during the great magnetic storm of March 1989, radio communication at low latitudes failed almost completely, and the navigation systems of ships and aircraft failed. In the Iraq war there was a series of friendly-fire incidents by the U.S. military; some experts have pointed out that besides human causes these had a certain relationship with space weather, as on March 28, 2003 solar flares and geomagnetic storms caused ionospheric storms [6]. Impact on ground power transmission systems: the destructive power of magnetic storms on power grids is considerable. Solar eruptions cause strong disturbances of the geomagnetic field, namely magnetic storms and substorms. The dramatic variation of the geomagnetic field can induce a potential difference of up to 20 V/km at the ground surface; applied to a power system as a voltage source, it drives a strong geomagnetically induced current, which is extremely dangerous and harmful to power transmission equipment.
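The scale of the hazard follows from Ohm's law: the induced geoelectric field, integrated along a long transmission line, acts as a quasi-DC voltage source driving current through the line and the grounded transformer neutrals. A back-of-envelope sketch in which the line length and loop resistance are purely illustrative assumptions:

```python
# Back-of-envelope geomagnetically induced current (GIC) estimate.
# The 20 V/km geoelectric field is the figure quoted in the text;
# line length and loop resistance are illustrative assumptions.

e_field_v_per_km = 20.0    # storm-time induced geoelectric field
line_length_km = 100.0     # assumed transmission-line length
loop_resistance_ohm = 5.0  # assumed DC resistance of line + windings + ground

voltage_v = e_field_v_per_km * line_length_km   # ~2000 V along the line
gic_a = voltage_v / loop_resistance_ohm         # quasi-DC current

print(f"driving voltage: {voltage_v:.0f} V, GIC: {gic_a:.0f} A")
# Hundreds of amperes of quasi-DC through transformer neutrals can
# saturate the cores, overheating and damaging large transformers.
```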
The influence of geomagnetically induced currents on power grids was first discovered in the United States, and since the 1940s many power grids in North America have been affected by magnetic storms. The most notable recent case of strong magnetic-storm damage to a power system occurred in March 1989, when a great magnetic storm collapsed the entire power grid of Quebec, Canada, blacking out 6 million residents of Montreal and the surrounding region for 9 hours; the lost load reached 19,400 MW and the direct economic loss amounted to 500 million US dollars. The same storm also burned out a giant transformer at a nuclear power plant in New Jersey, USA, and tripped or damaged a large amount of grid equipment such as transmission lines, transformers and static compensators [1]. Impact on the Earth's weather, climate and ecosystems: studies show that solar activity has a certain correlation with changes in the Earth's weather and climate. For example, the variation period of the sunspot number, about 11 years, is basically consistent with that of annual precipitation, and the incidence of flood disasters and the variation of mean temperature also show 11-year cycles similar to the solar cycle [2]. Strong particle radiation can affect the Earth's atmosphere. Two major proton events occurred in February 1965 and August 1972: the former increased the neutron count at the ground by about 90 times and raised the carbon-14 isotope content of the atmosphere by 10%, and the latter reduced atmospheric ozone by 15% for an extended period. Images of high-energy electrons penetrating the atmosphere, captured by American satellites in 1994, show that the intensity of high-energy electrons is very high even in the middle- and low-latitude atmosphere. High-energy electrons produce nitrogen compounds in the atmosphere, which directly affect the global distribution of ozone. Ozone strongly absorbs ultraviolet radiation; the ozone layer keeps excessive solar ultraviolet radiation from reaching the ground and thus plays an important role in protecting humans and other organisms, and a reduction of atmospheric ozone would cause serious imbalance and harmful change in marine and terrestrial ecosystems [1]. Impact on human health: some researchers have pointed out that the incidence of certain infectious diseases, cardiovascular diseases and eye diseases is positively correlated with the intensity of solar activity, although these effects are not caused by solar electromagnetic waves and particles striking the ground directly. The great solar eruptions of August 4-10, 1972 sent a powerful stream of charged particles to the Earth, causing an intense geomagnetic storm; Indian scholars, compiling statistics for two cities of a million people, found that hospital admissions for heart disease doubled during this period. Why? Some scientists and physicians believe that the human body carries bioelectricity, the bioelectricity of countless cells together forming the electromagnetic field of the human body, and that under normal circumstances the human body's electromagnetic field and the Earth's electromagnetic field are in mutual harmony.
When the Earth's magnetic field is strongly disturbed, this balance between the body's electromagnetic field and the Earth's is broken, some bodily functions become disordered, moods are affected and diseases may be induced [7]. The high-energy particles released by super flares can harm human beings much as nuclear radiation does. The Earth's atmosphere and magnetosphere provide sufficient protection for people on the ground, but astronauts in space lack this protective barrier and face potential radiation hazards: an astronaut performing extravehicular activity during the peak of a space radiation event may be injured or even killed by the particle bombardment, and solar proton events can also inflict serious radiation exposure on crews flying over the polar regions. To minimize this risk, the US Federal Aviation Administration (FAA) regularly issues routine forecasts and warnings, and potentially endangered flights can be rerouted or flown at lower altitude. During the half-year operation of the orbital module of China's Shenzhou-4 spacecraft, the space environment of the spacecraft's orbit was surveyed for the first time, successfully drawing a "safety road map" for the subsequent safe flights of China's manned spacecraft [6]. 3. How to reduce or even avoid space weather disasters. Space weather forecasting: the losses caused by disastrous space weather can be avoided or reduced if accurate forecasts are made in advance. For example, reducing the load on a power system in advance can prevent magnetic-storm damage to the transmission system; choosing an appropriate launch time and orbital parameters can protect a satellite launch from the harm of solar eruptions; and for satellites already in orbit, if the time of a space storm is known in advance, all satellite commands can be closely monitored from the ground control system and spurious commands caused by single-event upsets eliminated in time. The most effective measure for avoiding and mitigating space weather disasters is therefore accurate space weather forecasting [1]. Space weather forecasting must predict changes of the solar-terrestrial space environment on short time scales. By monitoring solar activity, changes in solar activity and in the space environment can be predicted from ground observation networks and satellite data: when solar activity is calm the weather is "good"; when solar activity is frequent and may affect communication, navigation and power systems on the Earth or the operation of satellites and spacecraft, the weather is "bad" [8]. Space weather forecasts cover several main aspects: first, solar activity forecasts, including periodic and eruptive activity such as sunspot number, flares and high-speed solar wind; second, interplanetary space weather forecasts, such as the magnitude and direction of the interplanetary magnetic field and the state of the solar wind; third, geospace weather forecasts, including magnetic storms, geomagnetic activity, auroral phenomena and ionospheric storms. By lead time, space weather forecasting divides into long-term forecasts, which mainly predict changes in the level of solar activity over the next year or even the next few decades;
medium-term forecasts, issued from half a solar rotation or one rotation up to several months in advance, which mainly predict the overall level of solar activity over the next month or next 27 days; and short-term forecasts, 1 to 3 days in advance, which predict whether there will be a solar X-ray burst and its level, and whether the solar proton flux near the Earth will suddenly increase, that is, whether a solar proton event will occur [9]. Challenges faced by space weather forecasting: with scientific, technological and social development, space disasters increasingly affect human activities, but mitigating and avoiding them through good forecasting is not easy; it requires both stronger space weather monitoring and solid theoretical and modeling research. Space weather monitoring is the foundation of space weather research, and observational data are the essential basis for continuously watching the sun and the space environment; without continuous, real-time monitoring data, space weather forecasting is empty talk. Although humans have launched a large number of scientific satellites, the domain of solar-terrestrial space weather is vast, the Sun being 150 million km from the Earth; within such a vast region, how many satellites would be needed to monitor solar-terrestrial space? From the standpoint of purpose and demand, existing monitoring methods and technologies are still quite scarce, far from meeting the needs of space weather research. The physical processes controlling space weather events are still not well understood, and many major scientific problems remain unsolved, such as the trigger mechanism of coronal mass ejections, the physical processes and generation mechanisms of magnetic storms and substorms, magnetosphere-ionosphere-thermosphere coupling, and the propagation and dissipation of waves from the lower atmosphere. Solving these problems is the scientific basis for accurate forecasting of disastrous space weather. Whether analyzing the state of space weather or forecasting it, modeling is needed. Modeling means using existing observations as input to a theoretical model in order to obtain the overall picture of the particles and electromagnetic fields in the region of interest and to predict possible effects on technical systems. Space weather modeling rests on physical research and on observation; to raise its level we must study deeply the physical mechanisms of the problems of concern and obtain as much observational data as possible in key regions. Moreover, completing a space weather forecast requires studying the entire solar-terrestrial space as a whole: although many models already exist for the sun, the solar wind, the magnetosphere, the ionosphere and the upper atmosphere, and many of them emphasize the coupling between regions, an overall joint model linking all these regions is currently still out of reach [1]. Many approaches to accurate space weather prediction have been adopted, but without sufficient understanding of the basic physical processes of space weather, and lacking monitoring data from the solar source to the Earth's space response, accurate space weather prediction remains quite difficult.
In recent years, with the successful implementation of China's Double Star geospace exploration program and the use of satellites equipped with space environment detectors, space-based monitoring of the space environment has taken a solid step forward. Ground-based observation of the space environment also has a certain foundation; in particular, the construction of the national major scientific infrastructure project "Eastern Hemisphere Space Environment Ground-Based Monitoring Meridian Chain" (the "Meridian Project") will effectively promote the establishment of an integrated space- and ground-based three-dimensional monitoring system for China's space environment. Over the years, with state support through multiple channels, China has accumulated technologies and infrastructure for space environment support and has provided space environment services for a number of space activities. In space weather research, the first State Key Laboratory of Space Weather Science was established; the Chinese Academy of Sciences united and integrated the space-environment application service forces of the whole academy and set up the Space Environment Research and Forecast Center of the Chinese Academy of Sciences; the 22nd Institute of the Ministry of Information Industry has carried out ionospheric environment forecasting; the China Meteorological Administration established the National Space Weather Monitoring and Early Warning Center; and the Bureau of Meteorology and Hydrology of the General Staff is beginning to provide space weather support services. Solar-terrestrial space weather forecasting services are moving from scientific research into application and will become an important undertaking for national economic and social development. We have reason to believe that in the near future space weather forecasting will provide timely and accurate early-warning and forecast services for the aerospace, communications, electric power and energy sectors, reducing or even avoiding the impact of space weather disasters on human beings.", "The geomagnetic field formed about 4 billion years ago. Like a giant umbrella, it protects life on Earth from cosmic rays and solar particles; at the same time it "guides" the travel courses of humans and of some creatures. As early as the Jin Dynasty (about A.D. 300-400), China had produced the nautical compass. With the development of navigation, the compass became the most widely used direction-finding tool in China and Europe, enabling feats in the history of human seafaring that greatly advanced civilization: Columbus of Italy discovered the New World of America, Vasco da Gama of Portugal made the first voyage to India, and Magellan of Portugal completed the first circumnavigation of the globe. In today's highly modernized world, people depend ever more on precise global positioning, for example for the precision guidance and positioning of aircraft, automobiles and submersibles. The Global Positioning System (GPS) of the United States covers 98% of the globe with a positioning accuracy reaching 1 m. Can the geomagnetic field, present everywhere at the Earth's surface and in near-Earth space, be used to achieve precise navigation?
Especially in areas without GPS coverage, or when GPS is unavailable, geomagnetic navigation would clearly be very valuable, but it is extremely difficult to achieve. At present the orbit determination of small satellites already makes use of geomagnetic field models, but the accuracy of current geomagnetic navigation is far from comparable to that of GPS and can hardly meet the needs of precise navigation and positioning. Simply put, a modern geomagnetic navigation system consists of three parts: ① a geomagnetic reference map, composed of a geomagnetic field model and a system for removing geomagnetic time variations; ② a real-time geomagnetic measurement system, composed of magnetometers and a magnetic compensation system; ③ a geomagnetic matching system, composed of a navigation matching algorithm and a route-planning system. The development of near-surface geomagnetic navigation is restricted by several factors. The first major constraint is the geomagnetic field model. The geomagnetic field consists mainly of four parts: the main field, originating in the magnetohydrodynamic dynamo of the Earth's core; the external varying field, originating in space current systems; the field induced by the external field in the conducting Earth; and the magnetic anomalies produced by the remanent magnetization of the lithosphere [1]. A geomagnetic field model accordingly includes a main-field model and a magnetic-anomaly model. Because the source region of the main field lies in the outer core at a depth of 2900 km, the main field appears at the surface as a smooth, large-scale field. The International Geomagnetic Reference Field, which describes the main field, is a spherical harmonic expansion to degree 13, corresponding to a shortest spatial wavelength of about 3000 km; it captures the main features of the main field and provides the large-scale reference for navigation [2]. For precise positioning and orientation at small scales the spatial resolution of such a main-field map is clearly far too coarse. Geomagnetic field models have recently developed rapidly: the spherical harmonic expansion of the comprehensive geomagnetic field model NGDC-EMM7.0 extends to degree 720; this model includes the lithospheric magnetic anomalies, integrates satellite, aeromagnetic and ground observation data, and resolves spatial scales of about 100 km. To achieve precise navigation, still higher-degree geomagnetic field models (higher-resolution geomagnetic maps) must be obtained, which depends on the quality of the magnetic survey data and on the geomagnetic field modeling methods.
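The resolution figures above follow from the standard rule of thumb that a spherical harmonic expansion of degree n resolves surface wavelengths down to about λ ≈ 2πR_E/n; the sketch below assumes only that rule:

```python
import math

# Shortest wavelength resolvable at the surface by a spherical harmonic
# geomagnetic field model of degree n, via lambda ~ 2 * pi * R_E / n.

R_E_KM = 6371.0  # mean Earth radius

def min_wavelength_km(degree: int) -> float:
    return 2.0 * math.pi * R_E_KM / degree

for n in (13, 720):
    print(f"degree {n:>3}: ~{min_wavelength_km(n):,.0f} km")

# degree  13: ~3,079 km, matching the ~3000 km quoted for the IGRF;
# degree 720: ~56 km, the same order as the ~100 km quoted for EMM7.0.
```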
Geomagnetic surveys include ground measurements, airborne and shipborne measurements, and satellite measurements. Ground measurement is time-consuming and cannot achieve global high-density coverage. Because magnetometers are precision instruments, easily disturbed by platform vibration and by ferromagnetic materials, the accuracy of the geomagnetic field vector currently measured from aircraft and ships cannot meet the needs of geomagnetic navigation. Satellites provide a good measurement platform, with an accuracy reaching 0.1 nT, but many small-scale magnetic anomalies have already decayed away at satellite altitude, so the spatial resolution of surface geomagnetic field models based on satellite magnetic data is strongly limited. The second major constraint is the suppression and elimination of interference in real-time measurement of the geomagnetic field. Magnetometers are installed on moving carriers such as aircraft, ships and missiles to measure the local magnetic field in real time, and the measurements are compared with the geomagnetic field model to obtain position information. The interference in actual measurements includes: ① magnetic disturbance from the carrier itself, such as the airframe, internal components, engines and circuits, which can be partially removed by magnetic compensation; ② magnetic disturbance from space, that is, the varying magnetic field. The varying field originates in the space current systems and is controlled by the solar wind-magnetosphere-ionosphere interaction; the space current systems mainly include the Sq current, the magnetopause current, the neutral-sheet current, the ring current, the auroral electrojets and the field-aligned currents. It is these magnetosphere-ionosphere current systems that together generate the varying geomagnetic field recorded both on the ground and aloft. Their structure and strength change with the state of the solar wind and the magnetosphere, forming complex and varied patterns of the changing field; superimposed on the readings of a navigation magnetometer, these varying fields degrade the accuracy of navigation. They may be eliminated by modeling the currents and their geomagnetic effects [3]. In short, the problems currently restricting geomagnetic navigation can be summarized as accurate measurement of the geomagnetic field and high-resolution geomagnetic field modeling. Biology offers much inspiration here. Many organisms are capable of precisely directed migrations over thousands of kilometers. In 1968, studying European robins, Wiltschko et al. first proposed the concept of a magnetic compass in birds, and behavioral experiments soon confirmed that pigeons can orient magnetically. More recently researchers have found that bats, migratory sea turtles, lobsters and microorganisms (magnetotactic bacteria) can sense and use the geomagnetic field for orientation and navigation. How do animals "measure" the Earth's magnetic field? The most thoroughly studied case is the homing pigeon. One possible mechanism is that homing pigeons transmit magnetic signals to the nervous system using magnetite particles aligned along a particular axis in the dendrites of trigeminal nerve cells in the skin of the upper beak [4]. Another possible mechanism is the transmission of magnetic-field information to the nervous system through free-radical reactions in photoreceptor cells (for example, through the rates of the reactions and the yields of free-radical products). Mora et al.
found in conditioned-choice experiments that homing pigeons can distinguish the presence or absence of an anomalous magnetic field, but that this ability is lost when a magnet is attached to the pigeon's beak, when the upper-beak area is locally anesthetized, or when the ophthalmic branch of the trigeminal nerve is severed bilaterally [5]. This demonstrates that the magnetic sense of the homing pigeon resides in the upper-beak region and that magnetite particles may serve as the magnetoreceptors. With the application of transmission electron microscopy, magnetite particles aligned along a particular axis were found in the dendrites of trigeminal nerve cells in the upper-beak skin of homing pigeons, providing direct evidence for the magnetite-based magnetoreceptor hypothesis; experiments with transient strong pulsed magnetic fields also support it [6]. Experimental studies have found that birds orient accurately under white, blue and green light but lose their orientation ability under red light, indicating that birds can orient only under light within a certain range of wavelengths [7]. The latest research suggests that some photopigment may participate in the reaction: chlorophyll and riboflavin molecules can use light energy to transfer electrons to surrounding molecules, generating radical pairs, and these reactive radical pairs can trigger further reactions that form biochemical signals. This hypothesis emphasizes photoreception mediated by biological macromolecules; however, how organisms transmit geomagnetic information to the nervous system is still unclear. Microorganisms (magnetotactic bacteria) rely on chains called magnetosomes to sense the direction of the magnetic field and can quickly locate the most suitable ecological interface in water or sediment; these bacteria synthesize within the cell 30~120 nm particles of magnetite (Fe3O4) or greigite (Fe3S4), arranged in one or more chains. The dependence of these organisms on nano-magnetite particles, and the magnetoreception mechanism relying on photosensitive molecules, provide unique ideas for developing biomimetic magnetic field measurement and matching. Future research will focus on breakthroughs in new high-precision magnetic field measurement methods and on the establishment of high-resolution geomagnetic field models.", "The distribution and variation of water vapor in the stratosphere are extremely important for understanding long-term climate change. Water vapor not only has a strong radiative effect but also has a major influence on microphysical processes (involving cirrus clouds and aerosols) and chemical processes in the stratosphere. Under solar ultraviolet radiation, the OH radicals produced by the reaction of water vapor with ozone control the lifetime of methane and the production and loss of ozone in the atmosphere; through the formation of stratospheric clouds, water vapor plays an important role in heterogeneous atmospheric chemistry and affects the climatic effect of aerosols. The water vapor content of the atmosphere varies greatly and is controlled mainly by temperature.
In the tropics the near-surface temperature is high and the water vapor volume mixing ratio can reach about 4%, while near the tropical tropopause the temperature is so low that the water vapor concentration is only a few ppmv (parts per million by volume), too little for conventional hygrometers to measure accurately. Stratospheric water vapor has two main sources: upward transport of tropospheric water vapor, and the oxidation of stratospheric methane. The air in the stratosphere is extremely dry; the mean water vapor concentration of the lower stratosphere is only 3.5~4 ppmv. 1. The tropical tropopause: the gate to the stratosphere. As early as the 1940s, Brewer (1949), in explaining why the stratosphere is so dry, argued that stratospheric water vapor is controlled by the temperature of the tropical tropopause and that the tropical tropopause is the gate through which air passes from the troposphere into the stratosphere: because the tropical tropopause is very cold, air passing through it is freeze-dried, its water vapor mixing ratio being reduced to the local saturation mixing ratio of the tropopause before the air enters the stratosphere. Mote et al. [1] further confirmed the controlling effect of the tropical tropopause temperature on stratospheric water vapor with UARS satellite observations: their analysis found that the seasonal variation of the tropical water vapor mixing ratio is in phase with that of the tropical tropopause temperature, and that this seasonal signal is carried slowly upward by the ascending branch of the Brewer-Dobson circulation in the tropics, like the signal on a tape recorder. 2. Differing views. Although Brewer's qualitative model is generally accepted, the temporal and spatial scales of the transport and the associated dynamical, radiative and microphysical processes remain highly controversial. Newell and Gould-Stewart [2] pointed out that if air crossed the tropopause uniformly over the whole tropics, the annual zonal-mean temperature of the tropical tropopause would be too high to explain the observed stratospheric water vapor mixing ratios. Other studies likewise noted that the mean water vapor mixing ratio at the entrance to the stratosphere, about 3.8 ppmv, is lower than the ice saturation mixing ratio corresponding to the mean temperature of the tropical tropopause (about 4.5 ppmv).
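Brewer's freeze-drying argument can be made quantitative with a saturation-vapor-pressure formula over ice. The sketch below uses the Murphy and Koop (2005) expression; the 100 hPa pressure and the sample temperatures are illustrative tropical-tropopause values, and the resulting few-ppmv mixing ratios are of the same order as the 3.8~4.5 ppmv figures quoted above:

```python
import math

# Ice-saturation water-vapor mixing ratio at the tropical tropopause,
# using the Murphy & Koop (2005) saturation vapor pressure over ice.

def e_sat_ice_pa(T_k: float) -> float:
    """Saturation vapor pressure over ice, Pa (Murphy & Koop, 2005)."""
    return math.exp(9.550426 - 5723.265 / T_k
                    + 3.53068 * math.log(T_k) - 0.00728332 * T_k)

P_TROPOPAUSE_PA = 100e2  # assumed tropopause pressure: 100 hPa

for T in (188.0, 190.0, 192.0):  # representative tropopause temperatures, K
    ppmv = e_sat_ice_pa(T) / P_TROPOPAUSE_PA * 1e6
    print(f"T = {T:.0f} K -> saturation mixing ratio ~ {ppmv:.1f} ppmv")
# ~2.3, 3.2 and 4.5 ppmv: a few kelvin of tropopause temperature decide
# how dry the air entering the stratosphere can become.
```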
To explain this difference, scientists have proposed a number of mechanisms. Holton and Gettelman [3] classified them into two categories, vertical transport and horizontal transport, the vertical-transport mechanisms dividing further into two views, large-scale adiabatic ascent and convective injection. The "stratospheric fountain" hypothesis: Newell and Gould-Stewart [2] proposed that air enters the stratosphere through the tropopause at specific times and in specific regions of very low temperature, mainly the tropical western Pacific in northern winter and the Bay of Bengal and India in northern summer, the so-called "fountain" regions. However, observations and model results show net subsidence at the tropopause over the tropical western Pacific despite the low temperatures and the frequently observed thin cirrus there. The "overshooting convection" hypothesis: other studies [4,5] argued that dehydration is controlled by convective-scale motions: convective overshooting produces very cold air with a very low ice saturation mixing ratio, which then spreads out of the convective cloud and mixes with the surrounding stratospheric air. However, efficient dehydration requires air to remain near tropopause temperatures long enough for cooling to form ice crystals that strip moisture from the air, a condition that the collapse of overshooting convective turrets evidently cannot satisfy. Other studies have shown that penetration of tropical convection into the tropical tropopause layer (TTL) is relatively rare, that the contribution of convective fluxes to TTL characteristics is relatively small, and that convective detrainment occurs mainly in the lower half of the TTL; the overshooting-convection hypothesis is consistent neither with the vertical structure of water vapor in the upper troposphere and tropopause region observed by the UARS/MLS satellite instrument nor with the variability of water vapor isotopes. The "cold trap" theory: Holton and Gettelman [3] proposed a cold-trap dehydration mechanism based on large-scale horizontal transport. Their study shows that when thin cirrus occurs below the tropopause and above thicker anvil clouds (which can reach 14 km), radiative cooling of the tropopause cirrus produces extremely low temperatures and subsidence of the tropopause, and they proposed that the horizontal movement of air through such a cold trap causes cooling and dehydration: for a typical horizontal velocity of 5 m/s and a cooling rate of 0.5 K/d, the air cools about 5 K on its way from the edge of the cold trap (roughly 5000 km away) to its center. Because an air parcel spends a relatively long time (several days) in the cold trap, the ice crystals formed can sediment appreciably, producing irreversible dehydration; the dehydrated air is then heated radiatively and enters the stratosphere irreversibly. Models incorporating large-scale horizontal transport through cold traps reproduce well the observed seasonal variation of the water vapor entering the tropical stratosphere. One problem, however, the cold-trap hypothesis cannot explain: lower-stratospheric water vapor increased from the 1970s to the 1990s while temperatures near the tropical tropopause showed a downward trend; although stratospheric methane has also increased over the past 50 years, the rate of water vapor increase attributable to methane oxidation is only half the observed increase of stratospheric water vapor. In addition, observations of the HDO isotope in the tropical tropopause region show concentrations significantly higher than a purely temperature-controlled cooling-dehydration process would produce, which means that the high HDO content of the tropopause and lower stratosphere cannot be explained by slow dehydration.
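The ~5 K figure in the cold-trap estimate is simple kinematics: transit time is distance over speed, and the cooling is the radiative rate times that time. A one-step check with the numbers from the text:

```python
# Cooling of an air parcel crossing a "cold trap" (numbers from the text):
# ~5000 km from edge to center, horizontal speed 5 m/s, cooling 0.5 K/day.

distance_m = 5.0e6
speed_m_s = 5.0
cooling_k_per_day = 0.5

transit_days = distance_m / speed_m_s / 86400.0
print(f"transit: {transit_days:.1f} d, "
      f"cooling: {transit_days * cooling_k_per_day:.1f} K")  # ~11.6 d, ~5.8 K
```

This is indeed of the order of the 5 K quoted, and the multi-day transit is exactly what lets the ice crystals sediment and make the dehydration irreversible.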
The "side door" theory: besides the debate over dehydration at the tropical tropopause, the exchange of water vapor between the extratropical lower stratosphere and the tropical upper troposphere is another important topic in studies of water vapor transport, and the Asian summer monsoon region is among those attracting the most attention [6]. Satellite data analysis shows that in summer the water vapor content of the upper troposphere-lower stratosphere over the Asian monsoon region is relatively high, and the maximum lower-stratospheric water vapor occurs over the northern part of the South Asian monsoon region, mainly over the Qinghai-Tibet Plateau and its southern slope. Analyzing TRMM satellite data, Fu et al. [7] found that deep convective cloud clusters occur with higher probability over the Qinghai-Tibet Plateau than over the South Asian monsoon region to its south, and on this basis speculated that strong convective transport over the plateau provides a "short circuit" for water vapor transport from the troposphere to the stratosphere. But there is also the view that the large-scale circulation of the lower stratosphere plays an important role that cannot be ignored: from analysis of OLR data, Randel and Park [8] showed that the strongest convection lies on the southeastern side of the Tibetan Plateau, not over the plateau itself. They argue that the strong anticyclone of the South Asian High in the upper troposphere and lower stratosphere keeps the air of the region under its control for a long time, unable to spread beyond the anticyclone; the center of the anticyclone lies over the Qinghai-Tibet Plateau and the Iranian Plateau, exactly matching the center of the water vapor maximum. 3. Controversy: the role of convective transport. Although tropical stratospheric water vapor shows no obvious convective signal, and convection can explain neither the seasonal oscillation of water vapor nor the concentration of stratospheric water vapor, other tracer observations show evidence of convective influence. Clear-sky radiative heating calculations show a transport barrier 1~2 km below the cold-point tropopause (CPT), corresponding to the level of zero radiative heating (LZH): air above it rises and air below it sinks. The presence of the LZH below the CPT makes the region between the two special, a region with both tropospheric and stratospheric behavior, usually called the tropical tropopause layer (TTL). Observations show that CO2 and CO in the TTL carry the seasonal cycles of the boundary layer, and Sherwood and Dessler [9] explained that convection must exist to carry boundary-layer air across the clear-sky LZH into the TTL. HDO isotope observations show concentrations significantly greater than a purely temperature-controlled cooling-dehydration process would produce, and it is speculated that evaporation of convectively lofted ice crystals supplies the excess HDO [10]; in addition, aircraft observations have found evidence of ice-crystal lofting and in-situ cooling-dehydration in subtropical convection [11]. Appendix: water isotopes have three important effects, two of which shape their distribution in the UT/LS region while the third provides the principle by which they are detected [12]. The first and most notable is that when an atom in the water molecule is replaced by a heavier isotope the vapor pressure decreases, known as the vapor pressure isotope effect (VPIE); compared with replacing 16O by the heavier 17O or 18O, the VPIE produced by replacing the light 1H by 2H (=D) is 8~9 times larger.
The lowering of vapor pressure by heavy isotopes has three consequences: ① when liquid water evaporates, H evaporates faster than D, which raises the heavy-isotope ratio of the remaining liquid; ② when water vapor condenses into liquid, the heavy isotopes condense preferentially, which raises the heavy-isotope ratio of the condensate; ③ likewise, when water vapor is deposited as ice, the heavy isotopes freeze preferentially, raising the heavy-isotope ratio of the ice particles. The second important isotope effect is the difference in the rates of the key reactions that create and destroy water; compared with the VPIE this effect is small and can be neglected. The third effect is that the infrared absorption spectrum of the heavy isotopologues is shifted relative to that of the main isotopologue of water, which is what allows the water isotopes to be observed.
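To see how the VPIE translates into isotope signatures during dehydration, the sketch below applies the standard Rayleigh distillation relation R/R0 = f^(α-1), where f is the fraction of vapor remaining and α is the condensate-vapor fractionation factor. The α value used here is a round illustrative number, not a measurement; the point is only that slow equilibrium condensation strongly depletes HDO in the residual vapor, so vapor observed to be less depleted (as in the TTL measurements mentioned above) suggests an extra source such as evaporating convectively lofted ice.

# Rayleigh distillation sketch: depletion of HDO in vapor as condensate
# is removed. R is the D/H ratio of the vapor relative to its start value.
ALPHA = 1.15   # illustrative ice-vapor fractionation factor (assumed)

def rayleigh_ratio(f_remaining: float, alpha: float = ALPHA) -> float:
    """Vapor isotope ratio R/R0 after (1 - f_remaining) has condensed out."""
    return f_remaining ** (alpha - 1.0)

def delta_permil(ratio: float) -> float:
    """Express a ratio change in the usual delta notation (per mil)."""
    return (ratio - 1.0) * 1000.0

for f in (1.0, 0.5, 0.1, 0.01):
    r = rayleigh_ratio(f)
    print(f"vapor fraction left {f:5.2f} -> delta-D change {delta_permil(r):7.1f} per mil")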
At present, the global three-dimensional meteorological observation system coordinated by the World Meteorological Organization (WMO) and jointly established by all countries includes a network of surface meteorological observation stations, an upper-air (radiosonde) network, and a series of geostationary and polar-orbiting meteorological satellites. Weather reports from civil aircraft and merchant ships also provide useful information. The Global Climate Observing System (GCOS) for climate and climate change research is being established. Satellite series such as the Earth Observing System (EOS) monitor, on a global scale, important parameters that reflect or affect climate change, such as the radiation budget of the Earth-atmosphere system, atmospheric aerosol and cloud parameter profiles, ocean surface temperature and height, snow and ice coverage and thickness, and terrestrial primary productivity. The current meteorological observation system was established mainly to monitor weather systems at the mesoscale and above, so it provides good data on the development and evolution of large-scale circulation, extratropical cyclones, tropical cyclones, typhoons (hurricanes) and fronts; but it has very few stations in uninhabited areas such as the vast oceans and deserts, and its temporal and spatial resolution is insufficient to capture and monitor small and medium-scale (several kilometers to hundreds of kilometers) weather systems such as short-lived thunderstorms and heavy rains. To improve the accuracy of numerical weather prediction and reduce the uncertainty in climate change research, more and better observational data are needed. The question is how existing meteorological and climate observing systems can be improved effectively and cost-efficiently so as to raise both operational skill and scientific understanding. This requires a fully organic combination of observations, numerical models, and theory. We must actively participate in international research programs such as THORPEX (The Observing System Research and Predictability Experiment) and COPES (Coordinated Observation and Prediction of the Earth System), strengthen observations over sensitive (target) areas that affect weather and climate change, and establish super comprehensive observation stations for the main sources of uncertainty in climate change (aerosol-cloud-radiation interaction, etc.).

In the future, on the basis of numerical experiments (data impact tests) demonstrating sufficient benefit from added data, the existing conventional observation network should be densified and its observing frequency increased, raising the temporal and spatial resolution of observations; atmospheric observation items should be expanded (such as radar wind profiles and water vapor retrieved from GPS or microwave radiometry), with appropriate data interpretation and assimilation technologies developed alongside; a new generation of atmospheric remote sensing satellites should be developed, in particular strengthening active and passive remote sensing of the three-dimensional distribution of global atmospheric water vapor, cloud and precipitation and of other climate change factors; and comprehensive detection technology should be developed for research on atmospheric dynamic-chemical-radiative processes, including high-precision measurement of parameters such as turbulence spectra (flux exchange), tracer gases, radiation spectra and cloud microphysics. The impact of observation data of different temporal and spatial resolutions on model forecasts should be studied quantitatively, and a super integrated system established that fully fuses the observation system with the model system. Based on physical and chemical principles and effects, a variety of detection and analysis techniques have been developed to monitor global atmospheric state parameters and atmospheric phenomena. Conventional meteorological elements (air temperature, pressure, humidity, wind speed and direction, long- and short-wave radiation, etc.) and various atmospheric phenomena (clouds, precipitation, lightning, etc.) have been observed and recorded by a standardized network of ground-based stations for over a hundred years; the pressure, temperature, humidity and wind of the atmosphere from the ground to about 30 km altitude are measured by a worldwide network of radiosonde stations; a monitoring network for atmospheric composition (greenhouse gases such as carbon dioxide, polluting gases and aerosols) has largely taken shape; nearby precipitating weather systems are monitored mainly by the Doppler weather radar network, and thunderstorm and lightning location networks have been established and refined in many countries; polar-orbiting and geostationary meteorological satellites and the Earth observation satellite series play an ever more important role in three-dimensional monitoring of the global atmosphere; since meteorological satellites were first launched, not a single typhoon/hurricane forming and developing over the ocean has been missed; and meteorological reports from civil aircraft and merchant ships provide information along their routes, aircraft being the most important platform for in-situ three-dimensional survey data (both in situ and remote sensing). Atmospheric dynamic, radiative, and chemical processes interact in complex ways, and many parameters are needed to describe their states and controlling factors; the development of high-precision, all-element comprehensive detection technology can meet the needs of research on these various atmospheric processes. Atmospheric processes and phenomena span very wide temporal and spatial scales: a tropical or extratropical cyclone contains large-, medium- and small-scale motions, and the spatial and temporal distribution of aerosols, clouds, and precipitation is extremely uneven.
It is a great technical challenge to establish a global three-dimensional monitoring network of the major atmospheric elements with high temporal and spatial resolution. The current ground-based and satellite observation networks must be strengthened and optimized to improve the temporal and spatial resolution of multi-element atmospheric monitoring, and a large number of instruments currently used only for research should be converted into operational instruments, including sophisticated lidars, millimeter-wave radars, microwave radiometers and interferometers. These instruments can monitor key parameters that conventional meteorological observations cannot provide, such as cloud-base and cloud-top height, aerosol profiles, the surface radiation budget (from the mesoscale to the global scale), atmospheric radiative heating profiles, hydrometeor size distributions, ice and liquid water content, and water vapor. The atmosphere is a three-dimensional flow, so a variety of aircraft (including unmanned aircraft) platforms must be developed, equipped with different atmospheric detection and remote sensing instruments, to carry out regular and ad hoc flight observations according to operational and research needs; for example, high-altitude aircraft can deploy dropsondes and radar remote sensing over tropical cyclones and typhoons, and conduct atmospheric and surface flight observations upstream of weather systems. Within the next 12 to 20 years it may become possible to establish an atmospheric and Earth observation network based on stratospheric airship platforms; the advantages of such a network are that it is more economical than satellite systems, that important equipment can be recovered and reused, and that very high spatial resolution can be obtained. The heat and moisture fluxes at the land and ocean surfaces are the basic energy sources of atmospheric motion on all length and time scales, and many atmospheric trace components have important climate effects whose atmospheric distribution cannot be understood without surface flux measurements. However, measuring land-air exchange over complex terrain and sea-air exchange over high winds and waves, as well as accurately measuring large-scale land surface temperature and soil moisture, remain great technical problems. Besides new technologies for direct single-point flux measurement with high sensitivity and fast time response, new technologies capable of measuring fluxes over an area must be developed, especially technologies that work reliably under complex weather and marine environmental conditions. Harmless tracers can be released artificially in the boundary layer or in small-scale weather systems for research on atmospheric diffusion and entrainment or on convective cloud dynamics. At present, atmospheric observation techniques are developed almost independently of numerical weather and air-quality models.
In the future, detection technology and numerical models must be combined more closely to realize a two-way relationship between observations and models: on the one hand, observations provide initial values or boundary conditions for models; on the other hand, models guide observations and technology development, for example when model (data denial) experiment results call for intensified or supplementary observation of sensitive or target areas. Instrument simulators for atmospheric sounding and remote sensing need to be developed; they have important applications in developing and improving large instruments and equipment, and are also indispensable for coupling detection technology with forecast models. Current technology still struggles to handle the massive data generated by satellite remote sensing and ground-based sensors, and this problem will become more acute over the next decade or so. Automated systems and new techniques are needed to routinely analyze the remote sensing data produced by these satellite and ground-based systems, producing quantitative results that lie within a stated margin of error yet improve on our current ability to retrieve geophysical parameters.

The quasi-biennial oscillation (QBO) of the equatorial stratosphere is an atmospheric phenomenon of the tropical lower stratosphere. It is manifested mainly as an alternation of easterly and westerly zonal winds with an average period of about 28 months, hence the name quasi-biennial oscillation. Its most notable features are: ① quasi-biennial periodicity; ② easterly or westerly winds symmetric about the equator; ③ easterly or westerly regimes that propagate continuously downward, with amplitude that does not diminish despite the increase of air density; ④ a zonal-wind QBO confined to a narrow latitude band of roughly 12°S-12°N, varying little with longitude. After this peculiar phenomenon was revealed in the early 1960s [1,2], it immediately aroused great interest in the scientific community. Research has since focused on: ① observation and characterization of the QBO; ② the mechanism of QBO formation; ③ the global influence of the QBO, with the last two points being the main focus. On the question of why the QBO occurs, Lindzen and Holton [3] proposed in 1968 that the oscillation is driven by the breaking of gravity waves propagating upward from the troposphere; in 1972 they updated the theory [4], arguing that the upward-propagating waves are mainly large-scale Kelvin waves and mixed Rossby-gravity waves, with breaking Kelvin waves accelerating the westerlies and breaking mixed Rossby-gravity waves accelerating the easterlies. Over the following 20 years this dynamical model became an almost universally accepted theory; in the mid-to-late 1990s, however, it was challenged by studies showing [5] that these two wave types alone cannot drive a QBO of the observed amplitude, so upward-propagating gravity waves over a broader frequency range must be included, a view actually close to that of Holton and Lindzen in 1968. Research on the formation mechanism of the QBO continues, and the role of small and mesoscale gravity waves is one of the key points.
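As a purely synthetic illustration of features ① and ③ (quasi-biennial periodicity with downward phase propagation), the sketch below builds an idealized zonal-wind field u(z, t) whose phase surfaces descend at an assumed 1 km per month, then recovers the period from westerly onsets at a fixed level. All numbers are invented for illustration; this is not a simulation of QBO dynamics.

import numpy as np

# Idealized QBO-like wind field. The phase increases with both time and
# height, so surfaces of constant phase descend with time.
PERIOD = 28.0    # assumed mean QBO period, months
DESCENT = 1.0    # assumed phase descent rate, km per month
U0 = 20.0        # wind amplitude, m/s (illustrative)

t = np.arange(0, 240)            # months
z = np.linspace(16, 35, 40)      # km, lower stratosphere

phase = 2 * np.pi * (t[None, :] / PERIOD + z[:, None] / (DESCENT * PERIOD))
u = U0 * np.cos(phase)           # shape (levels, months)

# Estimate the period at ~24 km from the spacing of westerly onsets
level = np.argmin(np.abs(z - 24.0))
series = u[level]
onsets = np.where((series[:-1] < 0) & (series[1:] >= 0))[0]
print(f"mean period at {z[level]:.0f} km: {np.diff(onsets).mean():.1f} months")  # ~28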
One of the most important reasons the QBO matters is its non-negligible impact on global weather and climate. Many studies have shown that although the QBO is an equatorial stratospheric phenomenon, its influence reaches the middle and high latitudes of both hemispheres: when the QBO is in its easterly phase, the stratospheric polar vortex is often anomalously weak and the circumpolar westerlies weak, favoring sudden stratospheric warmings; conversely, when the QBO is in its westerly phase, the stratospheric polar vortex is generally anomalously strong, the circumpolar westerlies are strong, and temperatures in the polar stratosphere are often anomalously low. Recent studies have also found that circulation anomalies in the mid-to-high-latitude stratosphere can affect the troposphere through stratosphere-troposphere interaction, and thus have some impact on tropospheric weather and climate [6,7]. The relationship between the stratospheric QBO and hurricane frequency has been used to predict hurricane activity in the Atlantic: when the QBO near 50 hPa is in its westerly phase, or the wind is turning westerly, there are generally more strong hurricanes in the tropical Atlantic, while the easterly phase brings the opposite. In addition, the stratospheric QBO bears some relationship to typhoon activity in the Northwest Pacific, to the monsoons, to Meiyu precipitation in East Asia, to tropical convection, and to precipitation in the Sahel region of West Africa. (A figure here shows the QBO in the equatorial zonal-mean wind: its evolution with time from September 1957 to August 2002 [8].) Because of its periodicity, the QBO can serve as an important predictor of the above anomalies. It is worth emphasizing that these relationships are basically statistical; the physical processes and mechanisms of influence are not yet clear, and further research is needed. The QBO also has an important impact, through atmospheric dynamical processes, on the distribution of trace species such as ozone, water vapor, methane, volcanic aerosols, and nitrogen oxides. Atmospheric models for studying climate change and stratospheric chemical processes should therefore include the QBO phenomenon, but so far its numerical simulation remains a serious challenge for general circulation models (GCMs) [8]. In fact, until recently only a few scattered GCMs worldwide could successfully simulate the main features of the QBO, and most GCMs cannot even simulate Kelvin waves and mixed Rossby-gravity waves with amplitudes close to those observed. Scientists also do not fully know how to simulate the QBO successfully, because the result depends on subtle interrelationships among several factors. Since the QBO is driven mainly by waves excited by cumulus convection, a reasonable convective parameterization scheme is needed in order to simulate the QBO of the real atmosphere. Representing gravity waves and other resolvable waves will be a primary problem in future numerical simulation research on the QBO.
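Because the relationships above are statistical, a typical analysis step is to classify each month by QBO phase from the 50 hPa equatorial zonal-mean wind and then composite a quantity of interest (hurricane counts, say) by phase. The sketch below shows that bookkeeping on synthetic, randomly generated data; the variable names and numbers are invented and carry no observational content.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly data: a QBO-like 50 hPa wind (m/s) and a made-up storm
# count. Real analyses would use radiosonde-based QBO winds and best-track
# storm records instead.
n_months = 480
u50 = 15 * np.sin(2 * np.pi * np.arange(n_months) / 28) + rng.normal(0, 3, n_months)
storm_count = rng.poisson(3, n_months)

# Phase classification: westerly if u50 > 0, easterly otherwise.
westerly = u50 > 0

print(f"mean count, westerly phase: {storm_count[westerly].mean():.2f}")
print(f"mean count, easterly phase: {storm_count[~westerly].mean():.2f}")
# With purely random counts the two composites differ only by sampling
# noise; a real signal shows up as a robust, testable difference.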
A quasi-biennial oscillation also exists in the troposphere (the Tropospheric Quasi-Biennial Oscillation, TBO); because tropospheric weather phenomena are complex and the troposphere interacts with other components (ocean, land, cryosphere, etc.), its periodic variation is more complex than that of the stratosphere. This quasi-two-year cycle has been found in the variations of monsoon intensity and monsoon precipitation in the Asian-Australian monsoon region, tropical sea surface temperature and tropical zonal wind, Northern Hemisphere surface pressure, Northwest Pacific subtropical high activity, Atlantic storm numbers, and Western Pacific typhoon activity [9]. There is no settled theory of the TBO mechanism. The earliest research held that it was related to the stratospheric QBO, but further work showed that most TBO signals do not correspond to the stratospheric QBO and are more likely the result of interactions among different components of the troposphere [10,11]. For example, studies of the interaction between the monsoon and ENSO (El Niño-Southern Oscillation) have shown that convective activity excited by the East Asian monsoon can interact with the ENSO cycle, giving the coupled monsoon-ENSO air-sea system its TBO characteristics [12]. On the other hand, although TBO phenomena appear in many fields of physical quantities, it is not clear which element or elements carry the TBO changes that are most important and fundamental to the TBO itself. Compared with the great progress on the mechanism of the QBO, research on the mechanism of the TBO is still largely at the descriptive stage, and more in-depth research and exploration are needed.

The global climate system is a highly complex system consisting of the atmosphere, hydrosphere, cryosphere, lithosphere (land surface) and biosphere, with clear interactions between these parts (Figure 1). The climate system evolves continually in time (both gradually and abruptly), producing climate change and variability on different temporal and spatial scales (variability and oscillations on monthly, seasonal, interannual, interdecadal, and centennial scales).

Figure 1. Schematic diagram of the climate system and the interaction processes between its spheres. After IPCC (2007).

1. The climate system and cross-disciplinary scientific issues

The climate system is one of the main parts of the Earth system, which also includes human and living systems, socio-economic aspects, and so on; it is a complete, interrelated system with complex metabolic and self-regulatory mechanisms, whose biological processes interact strongly with physical and chemical processes to form the Earth's complex life-support system. Therefore, not only must the characteristics and circulation of each part of the system be studied separately, but the integrated behavior of the whole system and the interactions of its subsystems must be studied as well. This requires breaking with traditional disciplinary boundaries and studying the interdisciplinary scientific issues that lie between them.

The atmosphere

The atmosphere is the most unstable and fastest-changing part of the climate system. It is not only directly affected by the other four spheres, and affects them in turn, but also has the closest relationship with human activities.
Human beings live mainly within the atmosphere, so its state and changes directly affect human living conditions and activities, and the impacts of changes in the other components of the climate system are ultimately reflected in the atmosphere; the atmosphere is thus the center of the climate system. The atmosphere affects the Earth's climate mainly through changes in its composition and in the radiation budget. Changes in atmospheric composition are closely related to climate change. The atmosphere is composed of various gases plus water vapor, solid and liquid particles (aerosols), and clouds. Among the gases, nitrogen (N2) accounts for 78.1% (volume mixing ratio), oxygen (O2) for 20.9%, and argon (Ar) for 0.93%. But these gases are radiatively nearly inert: they generally interact little with incoming solar radiation and do not interact with the long-wave infrared radiation emitted by the Earth, that is, they neither absorb nor emit thermal radiation. What has a major impact on the Earth's climate are the many trace gases in the atmosphere, such as carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O) and ozone (O3). Although these gases account for less than 0.1% of the total volume of the atmosphere, they absorb and emit radiation and play a fundamental role in the Earth's energy budget, and so are called greenhouse gases. Water vapor (H2O) is also a natural greenhouse gas, and the strongest one; because it transforms through phase changes into water droplets, cloud droplets and ice crystals, it has a great influence on the Earth's climate. Its volume mixing ratio varies greatly with time and place, and on average it accounts for about 1% of the total volume of the atmosphere. Ozone (O3) likewise plays an important role in the Earth's energy budget. O3 in the lower atmosphere (the troposphere and lower stratosphere) is a greenhouse gas, but it is short-lived; the high O3 concentration in the upper stratosphere forms the natural ozone layer, which absorbs solar ultraviolet radiation and plays an important role in the radiation balance. Suspended solid and liquid particles (aerosols) and clouds interact with incoming solar radiation and outgoing long-wave radiation in extremely complex ways, thereby affecting the Earth's climate; this has attracted increasing attention in recent years.

The hydrosphere and its circulation

The hydrosphere consists of all liquid surface water and groundwater, both fresh water (as in rivers, lakes, and rock formations) and the salt water of the oceans, all connected through the complex hydrological cycle. Water at the ocean and land surface enters the atmosphere as vapor through evaporation or transpiration; in particular, large amounts of water vapor from the ocean are transported by the atmospheric circulation to the air above land, where clouds and rain form. Part of the precipitation returns to the ocean as surface runoff (mainly in rivers), affecting the ocean's salinity and circulation. The other part seeps into the ground and becomes subsurface runoff and groundwater; the former can flow back to the ocean, while the latter is stored underground, replenishing the groundwater continually withdrawn there.
This hydrological cycle repeats itself, providing the necessary water sources for the Earth's various systems. Within the hydrosphere, the ocean has the greatest impact on climate. The ocean covers about 70% of the Earth's surface; it can store and transport large amounts of energy, and at the same time dissolve and store large amounts of CO2, making it a very important part of the global carbon cycle. According to recent estimates, the oceans absorb about 1.7 billion tonnes of carbon per year, some 27% of the total emissions from fossil fuel combustion and industrial production. The ocean circulates much more slowly than the atmosphere, driven by density differences arising from salinity and temperature gradients (the thermohaline circulation). The North Atlantic thermohaline circulation has the most significant impact on climate; sudden climate changes are associated with sudden weakening or shutdown of this circulation. The ocean has a large thermal inertia, owing mainly to the large heat capacity of seawater. On the one hand it damps or slows large and rapid temperature changes, acting as the regulator of the Earth's climate; on the other hand, because of its long memory, especially in the tropical oceans, it can affect the atmosphere over long periods through air-sea interaction, becoming a source of natural climate variability. This is why the ocean must be included in the complex climate models and carbon cycle models now being designed. The El Niño and La Niña phenomena of the equatorial eastern Pacific (rapid rises or falls of sea surface temperature in that region) are the most significant natural variability produced by the ocean, and they have become the strongest climate signal in current interannual and seasonal forecasts in many countries.
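A quick consistency check on the ocean-uptake figures quoted above (1.7 GtC per year as 27% of emissions) is sketched below; the implied total is an inference from those two numbers, not an independent estimate.

# Back-of-envelope check: if ocean uptake of 1.7 GtC/yr is 27% of total
# emissions from fossil fuels and industry, what total does that imply?
ocean_uptake_gtc = 1.7   # GtC per year (value quoted in the text)
ocean_fraction = 0.27    # fraction of total emissions (quoted in the text)

total_emissions = ocean_uptake_gtc / ocean_fraction
print(f"implied total emissions: {total_emissions:.1f} GtC/yr")  # ~6.3 GtC/yr

# For scale: 1 GtC corresponds to about 3.67 Gt of CO2 (44/12 mass ratio).
print(f"in CO2 terms: {total_emissions * 44 / 12:.0f} Gt CO2/yr")  # ~23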
Lithosphere and land surface processes

The lithosphere is the upper part of the solid Earth, under both land and ocean, consisting of all the surface rocks of the crust plus a cold, elastic part of the upper mantle. Although volcanic activity belongs to the lithosphere, it is not counted as part of the climate system; instead it affects the Earth's climate as a natural external forcing factor. The parts of the lithosphere most relevant to climate change are the structure of the crust, the mantle, and the land surface. They can affect the Earth's climate by changing the chemical composition of the atmosphere, a phenomenon known as the tectonic cause of climate change. The crust is broken into different plates, which float on the mantle; when two plates collide, a series of physical changes can follow. Volcanic eruptions and orogeny are physical processes that can alter the Earth's climate, and they are one source of the present composition of the Earth's atmosphere. In climate change on geological time scales, such tectonic causes are very important, and they will continue to be an important climate change factor in the future. The structure of the land surface, that is, its roughness, can also affect the atmosphere dynamically as wind blows over it. Roughness is generally determined by terrain and vegetation. Wind also lifts dust from the surface into the atmosphere, affecting the regional atmospheric radiation budget; sandstorms are the most obvious example.

The biosphere

The biosphere includes all ecosystems and living things on land and in the sea. Through the biosphere, biological processes interact strongly with physical and chemical processes to create the environment that sustains living systems on Earth. The biosphere strongly influences atmospheric composition, since biological processes control long-term atmospheric CO2 concentrations through the large quantities of CO2 absorbed via the oceans. Photosynthesis by phytoplankton reduces the CO2 content of the surface ocean, allowing more atmospheric CO2 to dissolve into the ocean (Figure 2). About 25% of the carbon taken up by phytoplankton in the upper ocean sinks into the ocean interior, where it is no longer in contact with the atmosphere and is stored in the deep sea for hundreds or thousands of years. This so-called biological pump works together with the dissolution process described above to control the distribution of CO2 exchange between air and sea; the biosphere thus plays a central role in the carbon cycle. Terrestrial biota are also an important, multi-functional part of the climate system. For example, terrestrial vegetation type affects the evaporation of water to the atmosphere and the absorption or reflection of solar radiation, and the condition and activity of vegetation roots play an important role in carbon and water storage and in land-air fluxes. The leaf area index is an important descriptor of canopy function and is closely related to global and regional climate change. Biodiversity in terrestrial ecosystems affects the magnitude of key ecosystem processes (such as productivity) and plays an important role in the long-term stability of ecosystems.

Figure 2. The interaction between the sea surface and the lower atmosphere. SOLAS, 2002, quoted from reference [1].

The cryosphere

The cryosphere includes continental glaciers, snow cover, sea ice and permafrost, and the ice sheets of Greenland and Antarctica. Currently, glaciers cover about 3% of the global surface and store 75% of the Earth's non-marine (fresh) water; sea ice covers about 7% of the ocean area, and permafrost underlies 20% to 25% of the land surface. The cryosphere is important to the climate system for its high reflectivity to solar radiation (albedo), low thermal conductivity, large thermal inertia, and its key role in driving the deep ocean circulation. It affects surface energy and water vapor fluxes, clouds, precipitation, the hydrological cycle, and the atmospheric and oceanic circulations, but its most obvious effect is on sea level. Because the ice sheets store large amounts of water, changes in their volume change sea level. If the Antarctic ice sheet, which holds nearly 90% of the world's glacial ice, were to melt, global sea level could rise by about 70 m. If only the West Antarctic ice sheet melted (a more plausible event; the Larsen B ice shelf broke up and melted in March 2002), that alone would raise sea level by about 6 m. The mass balance of the glaciers thus acts directly on sea level. At present, glaciers affect sea level most directly through the melting of alpine glaciers and of ice-shelf margins; it is estimated that about half of the sea level rise of the past century is the result of melting glaciers.
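The sea-level numbers above can be sanity-checked with simple volume arithmetic: convert an ice volume to its water equivalent and spread it over the ocean area. The ice volumes and the ocean area below are rough, round values assumed for illustration, not figures from this article, and the calculation ignores seawater density and ocean-area feedbacks.

# Rough sea-level-equivalent estimate: melt an ice volume, convert to
# water volume, and spread it over the global ocean area.
RHO_ICE_OVER_WATER = 0.917   # density ratio, ice to fresh water
OCEAN_AREA_KM2 = 3.61e8      # global ocean area, km^2 (assumed round value)

def sea_level_rise_m(ice_volume_km3: float) -> float:
    """Sea-level equivalent (m) of melting a given ice volume (km^3)."""
    water_km3 = ice_volume_km3 * RHO_ICE_OVER_WATER
    return water_km3 / OCEAN_AREA_KM2 * 1000.0   # km -> m

# Assumed illustrative volumes: ~26.5e6 km^3 for all Antarctic ice,
# ~2.2e6 km^3 for the West Antarctic ice sheet.
print(f"Antarctica     : {sea_level_rise_m(26.5e6):.0f} m")  # ~67 m
print(f"West Antarctica: {sea_level_rise_m(2.2e6):.1f} m")   # ~5.6 m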
2. Interactions between the spheres of the climate system

The spheres of the climate system do not exist independently; there are obvious interactions between them, not only physical but also chemical and biological, and on different time and space scales. This makes the climate system a very complex system. Although the spheres differ markedly in composition, physical and chemical characteristics, structure and state, they are all connected by fluxes of mass, heat and momentum, so they form an open, interconnected system. The interactions among the spheres are carried mainly by the exchange and cycling of water, carbon and oxygen. The most important interactions are air-sea interaction, land-atmosphere interaction, and land-sea interaction.

Air-sea interaction

The ocean and the atmosphere are strongly coupled, exchanging heat, water vapor and momentum through sensible heat transport, momentum transport, and evaporation, thereby affecting both the atmosphere and the ocean (hydrosphere) itself. The heat and water vapor are part of the hydrological cycle, leading to condensation, cloud formation, precipitation and runoff, and powering the motion of weather systems. In turn, precipitation affects the salinity of the ocean, its distribution, and the thermohaline circulation. The atmosphere and ocean also exchange CO2, an important part of the global carbon cycle: CO2 dissolves in cold polar waters that sink into the deep ocean and is released in warmer upwelling waters near the equator, maintaining a balance. Although the importance of air-sea interaction has long been recognized and long-term observations and research have been carried out, many important issues remain poorly understood and are rarely treated quantitatively, especially the chemical and physical air-sea interactions and feedbacks; how these interactions affect the climate system, or are affected by climate, is also not well understood. Further questions worth investigating include: How do physical, chemical, and biological processes in the upper ocean affect air-sea fluxes and climate? How does the climate system affect the structure and productivity of marine ecosystems? How are substances, especially carbon compounds, transported or stored in the deep ocean? What are the key physical, chemical and biological processes linking the oceans and the continental margins?

Land-atmosphere interaction

This is one of the most basic interactions in the climate system, including the interactions of snow, glaciers and permafrost in the cryosphere and of the lithosphere with the atmosphere, the transport and transformation of various substances, heat and water vapor, and land use change. The key questions are: How does the exchange of water and energy between land and air change the climate and the emission and deposition of trace gases on Earth? How do the multitude of mesoscale and small-scale processes at the land surface jointly affect large-scale weather processes? What role do human-induced changes in land cover play in land-air interface processes and in the climate system as a whole? How are the ecosystems that provide humans with food and fiber affected by climate change and human use?
Land-sea interaction

The most critical issues in land-sea interaction are changes in the coastal zone and transboundary transport, including: material transport across the land-sea interface and the impact of coastal ecosystems on climate change; the impact of land-sea interaction on regional material transfer and on filtering or storage capacity; the impact of systematic changes in the climate system on the coastal zone, especially its most vulnerable areas; and the impact of the air-sea interface on the heating field and the atmospheric circulation. Climate involves very complex physical, chemical and biological processes and interactions between the spheres. A change in any component of the climate system, whether anthropogenic or natural, internally generated or externally forced, will produce changes in the climate system, or climate variability, through these interactions. The largest reservoir of water in the climate system is in the mantle (see Table 1); when volcanoes erupt, mantle water is ejected outward. It is estimated that only about 5% of this water has been released over the lifetime of the Earth.

Table 1. Residence times of different water reservoirs [2].

The carbon cycle is shown in Figure 3; it involves the interaction of the four spheres other than the cryosphere. Carbon is stored in the atmosphere, the oceans, the biosphere and the Earth's crust; these are the carbon pools. Obvious and complex exchanges (fluxes) between them finally adjust and determine the concentration of carbon in the atmosphere, that is, the concentrations of the two greenhouse gases CO2 and CH4. The cryosphere cannot be excluded from the carbon cycle, however: as the climate continues to warm, methane hydrates and similar deposits stored in the permafrost and on the seabed may thaw and become a considerable carbon source.

Figure 3. The carbon cycle among the different carbon pools of the Earth's climate system [2].

The change of oxygen in the atmosphere is also worth noting. In the Earth's early days the atmosphere was dominated by hydrogen; later oxygen increased while hydrogen was lost in large amounts. This was the period of rising oxygen, from which life on Earth arose. Oxygen is produced mainly through photosynthesis by the biota and is consumed through oxidation. At present atmospheric oxygen is beginning to decline, and ocean acidification can reduce the oxygen in the ocean, threatening the safety of marine life and ecosystems. Averaged over the long term, the Earth's climate system is in a balanced or quasi-balanced state, but once this balance (that is, the radiation balance of the Earth-atmosphere system) is destroyed, climate change occurs. Climate change refers to a statistically significant change in the mean state of the climate, or one that persists for an extended period (typically a decade or longer). Its causes may be natural internal processes, external forcings, or persistent anthropogenic changes in the composition of the atmosphere and in land use. Article 1 of the United Nations Framework Convention on Climate Change (UNFCCC) defines climate change as "a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods." The UNFCCC thus distinguishes "climate change", attributable to human activities altering the composition of the atmosphere, from "climate variability", attributable to natural causes.
Climate change refers to a statistically significant change in the mean state of the climate, in its dispersion (anomalies), or in both. The larger the deviation, the greater the range of climate change, the greater the climate instability, and the greater the sensitivity to climate change. Put simply, it represents a change of climate state that persists for a long period, such as a shift from a colder state to a warmer one, or from a period with fewer rainstorms to one with more; some therefore also call it climatic transition. It can be caused by natural causes, by human activities (the UNFCCC definition), or by both natural causes and human activities (the IPCC definition). Climate variability refers to variations of the mean state of the climate or of its other statistics (such as standard deviations or the frequency of extreme events) on all time and space scales, and can also be understood as climate fluctuations or anomalies on various time scales superimposed on a long-term trend or mean state. It has interdecadal, interannual, annual, seasonal, intraseasonal and higher-frequency components, and occurs on local, regional, continental and global scales. Climate variability often produces anomalous weather and climate over time. It can be caused by variability internal to the atmosphere (dynamically induced) or by external forcing from natural causes and human activities. The distinction between climate variability and climate change is primarily semantic: if the change of interest occurs within a specific period (such as the 20th century), it is called climate variability within that period; if it concerns the difference in climatic state between two consecutive eras (such as the first and second halves of the 20th century), it is called climate change from one era to the next, as with glacial and interglacial periods. It should be emphasized that although climate change can be caused by internal processes and/or external forcings, a key purpose of understanding it is to understand the changes produced by human activities and by natural external forcings, and how to distinguish them from the changes and variability generated by processes internal to the climate system. Internal variability manifests itself on many time scales: atmospheric processes generate internal variability on time scales from almost instantaneous to years, while other components of the climate system, such as the oceans and the large ice sheets, generate variability on much longer time scales, both through their own evolution and mixed with the variability imposed by the rapidly changing atmosphere. In addition, the coupled interaction of the spheres of the Earth's climate system also produces internal variability, of which ENSO is an obvious example. It is not easy, however, to distinguish the role of external influences from internal climate variability; this requires analysis based on observations and on a physical understanding of the climate system. This is the detection and attribution of climate change, which mainly uses objective statistical methods to test whether observations contain evidence of the expected response to external forcing and to evaluate how it differs from the changes generated internally by the climate system (internal variability).

The Earth's climate is constantly changing.
From the alternation of great ice ages ("snowball" Earth) and hothouse periods ("waterball" Earth) hundreds of millions of years ago, to the glacial-interglacial cycles of the past 2.5 million years, and on to the rapid global warming of the modern climate, temperature, greenhouse gas levels, ice and snow cover, sea level, and other ecological and environmental conditions have undergone significant, even drastic, oscillations and changes. Although the global average surface temperature has long been used as the main parameter or standard for measuring the Earth's climate change, what is actually involved is change of the whole global climate system: the highly complex system comprising the atmosphere, the hydrosphere (including the oceans), the cryosphere, the lithosphere (including the land surface) and the biosphere. These spheres not only undergo obvious changes and evolution, but also interact clearly with one another.

Figure 1. (a) Top left: the linear trend distribution of global temperature from 1979 to 2005; right: the linear trend of tropospheric global temperature measured by satellites. Bottom: the change in global mean surface temperature since 1850, relative to the 1961-1990 climate average, with the linear trends over the period; the smooth (blue) curve shows change on the decadal scale. The total temperature increase from 1850-1899 to 2001-2005 was 0.76±0.19°C. (b) Changes in atmospheric CO2 concentration (ppm, vertical axis) over the past 10,000 years, from ice core data and instrumental measurements. Source: IPCC, 2007.

Under the influence of the system's own dynamics and of external forcings (such as changes in geological structure, volcanic eruptions, solar variations, and the changes in atmospheric composition and land use caused by human activities), the climate system evolves continually in time (both gradually and abruptly), producing climate change on different temporal and spatial scales (such as cold and warm periods, dry and wet periods) and climate variability and variation (on monthly, seasonal, interannual, interdecadal and centennial scales). Modern climate change refers mainly to the changes since the industrialization of human society (1750), and especially since 1850, when accurate meteorological instrument records became available worldwide. It is characterized by rapid rises in global average temperature, greenhouse gas concentrations and sea level, and an obvious reduction of snow cover (Figure 1). Compared with changes over long geological times, some of these changes do not entirely exceed the natural fluctuations, but their rates are unprecedented. For CO2 concentration, for example, the variability over geological time is about 100 ppm/Ma, that is, 0.0001 ppm/a (or at most 0.001 ppm/a), while the growth rate of CO2 over the past century is 1-1.9 ppm/a, roughly 10,000 times (at least 1,000 times) the geological rate. Likewise, the geological rate of temperature change is about 0.0001°C/a or 0.00001°C/a, while the warming rate of the past hundred years is about 1°C/100a ≈ 0.01°C/a, 100 to 1000 times the geological rate!
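The rate comparisons in this paragraph are easy to verify; the sketch below just redoes the unit conversions and ratios from the quoted numbers.

# Verify the quoted rate ratios: geological vs. modern CO2 and temperature.
MA = 1.0e6  # years per million years

geo_co2_rate = 100.0 / MA          # 100 ppm/Ma -> ppm per year (1e-4)
modern_co2_rates = (1.0, 1.9)      # ppm per year over the past century

for r in modern_co2_rates:
    print(f"CO2: modern {r} ppm/a is {r / geo_co2_rate:,.0f}x the geological rate")

geo_temp_rates = (1.0e-4, 1.0e-5)  # degC per year, the two quoted estimates
modern_temp_rate = 1.0 / 100.0     # ~1 degC per 100 years
for g in geo_temp_rates:
    print(f"T  : modern 0.01 degC/a is {modern_temp_rate / g:,.0f}x geological")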
Moreover, modern climate change is not determined entirely by natural factors (solar activity, volcanic eruptions, internal variability of the climate system, etc.); human activities also play an important role, and may even have played the dominant role over the past fifty years. The effects of human activities on climate change are manifested mainly in three ways: ① greenhouse gases such as CO2 emitted from fossil fuel combustion affect the climate through the greenhouse effect, the main driver of human-induced warming; ② greenhouse gases emitted by agricultural and industrial activities, such as CH4, CO2, N2O, PFCs, HFCs and SF6, also enhance warming through the greenhouse effect; ③ changes in greenhouse gas sources and sinks and in surface albedo caused by land use change, including deforestation, urbanization, and vegetation change and destruction, further affect climate change. It should be pointed out that throughout the long evolution of the Earth's climate, greenhouse gases (causing warming) and aerosols (causing cooling) have always been the two main influencing factors, but in the early stages of climate history, that is, in geological time, both factors were of natural rather than human origin. In the geological ages tens of millions of years ago, the atmospheric content of carbon dioxide was much higher than today; the highest value may have reached 700 ppm, roughly twice today's level. The Earth's average temperature was then much higher than today, and the planet's ice caps disappeared entirely. Over the past 2.5 million years, the CO2 content has generally cycled between 180 and 280 ppm; over this time the Earth's average temperature continued to decline, and the glacial-interglacial cycles settled into a period of about 100,000 years. Geological data show that CO2 concentration and temperature always evolve with the same trend; although the CO2 change sometimes lags the temperature change for a time, the CO2 feedback pulls it toward the temperature evolution. Thus CO2 is a key driver of climate change, and human-emitted CO2 is the new driver of modern climate change. In recent decades, with the rapid development of climate science and the actual evolution of the Earth's climate, scientific understanding of the influence of human activities on climate change has deepened continually, the evidence has accumulated, and the scientific community is now more certain than ever of the impact of human activities on the Earth's climate. The scientific debates that arose along the way greatly advanced research on climate change, and the result has significantly changed our understanding of its nature. These new scientific achievements attracted wide attention from governments and the scientific community, and eventually led to international consensus and joint action to deal with global climate change (the United Nations Framework Convention on Climate Change and the Kyoto Protocol). Modern climate change, characterized by global warming, is therefore not only a scientific issue; it has evolved into a global political, diplomatic, environmental and energy issue.
Climate change is a highly complex process, involving not only physical processes but also geo-bio-chemical processes, most of which interact markedly: one must consider not only the effect of one process A on another process B, but also the feedback of process B on process A. Climate change is, moreover, essentially nonlinear, so treating such a complex process mathematically is itself a difficult problem. It should be noted that however complicated the processes involved, the physical processes are the most important and fundamental, and most of the mathematical problems are solved around the physics. Solar radiation is the energy source that drives all weather and climate phenomena on Earth. As Figure 2 shows, in the global annual mean, 343 W/m2 of solar (short-wave) radiation is incident at the top of the atmosphere, but about one-third of it (103 W/m2) is reflected back to space by clouds and the surface, leaving only 240 W/m2 to be absorbed by the Earth's climate system. The atmosphere itself absorbs little of this solar radiation directly; most is absorbed by the land surface, the ocean, and the ice surfaces, warming them. For the Earth's climate to remain steady over the long term, the principle of radiation balance requires that the solar radiation absorbed by the surface and atmosphere be balanced at the top of the atmosphere by the Earth's own outgoing infrared (long-wave) radiation, which must therefore also amount to 240 W/m2.

Figure 2. Simplified diagram of the global radiation balance at the top of the atmosphere. The net input of solar radiation must be balanced by the Earth's net infrared output (240 W/m2). One-third of the incident solar radiation (103 W/m2) is reflected back to space; the rest is absorbed mainly by the Earth's surface. The outgoing long-wave radiation is absorbed by greenhouse gases and clouds, making the Earth about 33°C warmer than it would be without the greenhouse effect. Source: IPCC, 1996.

In this balanced state there is no net energy input, so the Earth system can keep the Earth's climate (characterized mainly by the global mean surface temperature) unchanged. The global radiation balance is thus the basic principle that maintains a stable Earth climate. Whatever the reason, if this balance is destroyed and cannot be restored, the Earth system gains or loses energy, and the Earth's climate changes.
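The numbers above fix the Earth's effective emission temperature through the Stefan-Boltzmann law, σT⁴ = 240 W/m²; a minimal sketch of that calculation follows, and the ~33°C gap to the observed ~288 K surface temperature is the greenhouse effect mentioned in the figure caption.

# Effective emission temperature implied by the global radiation balance:
# absorbed solar = outgoing long-wave = sigma * T_e**4.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
ABSORBED = 240.0         # W/m^2, from the balance described in the text

T_e = (ABSORBED / SIGMA) ** 0.25
T_surface_obs = 288.0    # K, approximate observed global mean

print(f"effective temperature : {T_e:.0f} K")                 # ~255 K
print(f"greenhouse difference : {T_surface_obs - T_e:.0f} K")  # ~33 K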
There are two ways the global radiation balance can be destroyed. The first is a change in the solar short-wave radiation incident at the top of the atmosphere. This can be caused by changes in solar activity itself (the solar constant); by changes in the orbital parameters of the Earth's revolution around the Sun (eccentricity, precession and obliquity, the Milankovitch cycles); or by changes in cloud cover or in the content of atmospheric aerosol particles, which alter the amount of reflected solar radiation (expressed as albedo). These changes are among the main causes of natural climate variability and can affect climate change on many different time scales. The second is a change in the outgoing long-wave radiation. The main factors affecting the transmission of the Earth's long-wave radiation to space are water vapor, O3 and the other greenhouse gases in the atmosphere; they capture or intercept the long-wave radiation emitted by the Earth and the atmosphere, reducing the outgoing long-wave radiation and thereby destroying the global radiation balance. From the above we know that any factor that can change the net radiation at the top of the atmosphere, disturbing or destroying the radiation balance, can cause global climate change; such factors are called radiative forcings. Global climate change is, in fact, a response to radiative forcing: through this response the Earth system changes its own climatic state so as to restore the original balance or establish a new one. In this process, because the spheres of the climate system respond at different speeds, the climate changes they exhibit differ. The troposphere and the oceans respond slowly, so obvious climate change there may not appear until decades later, while the stratosphere responds quickly, generally showing obvious changes within a month or so. Positive radiative forcing raises the surface temperature and leads to global warming; negative radiative forcing (such as from volcanic eruptions) causes global cooling. It should be noted that calculating radiative forcing is a key step in studying the causes of climate change and in predicting it; what matters is the change in solar and long-wave radiation, not their absolute values. In this sense the incident solar radiation is not itself a radiative forcing; only its variation is. As mentioned above, modern climate change research focuses mainly on how human activities, especially the growing greenhouse gas emissions since industrialization, have affected global climate over the past century. The following discussion therefore focuses on the physical processes and mechanisms by which an increase in greenhouse gases affects global climate change: in short, the so-called greenhouse effect. Since the greenhouse effect is the key physical principle behind modern climate change and the cornerstone of the explanation of global warming, its origin and its warming effect are explained in more detail below; over the past 200 years many physicists have made important contributions to it. Human understanding of the greenhouse effect has passed through roughly three stages. As early as 1681, Edme Mariotte pointed out that although sunlight and its heat pass easily through glass and other transparent materials, heat from other sources cannot pass through glass. In the 1760s, Horace Benedict de Saussure used a heliothermometer (a thermometer placed in a blackened box covered with glass) in a simple greenhouse-effect experiment, which showed for the first time that air could be warmed artificially; the realization that the air itself could intercept thermal radiation was a conceptual leap. In 1824 the French scientist Joseph Fourier cited these results of Saussure and went further: the temperature of the Earth can be raised by the influence of the air, the atmosphere producing a warming similar to that of the glass of a greenhouse. This is where the name "greenhouse effect" comes from.
In 1836, Pouillet, building on Fourier's idea, pointed out that the stratified state of the atmosphere lets the air absorb more of the rays (radiation) emitted by the Earth than of the solar rays. This view illustrated for the first time the important role of atmospheric temperature stratification (the fall of temperature with altitude in the troposphere) in producing the greenhouse effect. In 1859 the British scientist John Tyndall showed through laboratory experiments that complex molecules such as water vapor and CO2 absorb thermal radiation quite differently from the diatomic O2 and N2 that make up most of the atmosphere. He measured the absorption of infrared radiation by water vapor and CO2, further clarifying the special effect of the atmosphere's trace greenhouse gases on the Earth's temperature, and pointed out that any change in the amount of radiatively active atmospheric constituents such as water vapor and CO2 could produce the climate changes that geologists' studies reveal. The second stage consisted mainly of quantitative calculation and prediction of the warming effect of greenhouse gases; many people made such calculations in the late 19th century and over the following 50 years. In 1895 the Swedish chemist Svante Arrhenius calculated that the global average temperature would rise by 5-6°C if the atmospheric concentration of CO2 were doubled by the burning of coal, a result quite close to what we now obtain from complex climate models. He also pointed out that if the trace amount of CO2 in the atmosphere increased or decreased by 40%, it might trigger the advance and retreat of ice ages; a hundred years later it was found that CO2 did change by about this amount between glacial and interglacial periods. It is now recognized, however, that the initial climate change appears to have preceded the change in CO2, and was then further amplified by the CO2 greenhouse feedback. During this period S. Langley and R. Wood also pointed out the difference between an actual greenhouse and the greenhouse effect in the atmosphere. In 1938 G.S. Callendar solved a set of equations linking greenhouse gases and climate change, finding that a doubling of CO2 could raise the global average temperature by 2°C, with markedly stronger warming at the poles, and he linked increased fossil fuel burning to increased CO2 and its greenhouse effect. He pointed out that, since humans are now changing the composition of the atmosphere at a rate quite unusual by geological standards, it is natural to seek the probable consequences of this change; from the best laboratory observations, the main result of increased atmospheric CO2 would be a gradual increase in the mean temperature of the Earth's colder regions. In 1947 Ahlmann reported a warming of 1.3°C in the Arctic since the 19th century, though his interpretation, that such climate fluctuations could be caused entirely by greenhouse warming, was not wholly correct. In 1956 G.N. Plass obtained a similar model prediction: if by the end of the (20th) century measurements showed a significant worldwide increase in CO2 together with continued global warming, it could be confirmed that CO2 is an important factor in climate change.
During this period, people tried to further understand how fossil fuel emissions changed the concentration of atmospheric CO2, which involved the carbon cycle and thus started a new interdisciplinary line of research. The first consideration in this problem is the air-sea exchange of CO2. In 1957, R. Revelle and H. Suess explained why emitted CO2 can accumulate in the atmosphere instead of being absorbed by the ocean. They pointed out that CO2 mixes rapidly into the upper mixed layer of the ocean but exchanges with the deep ocean only over hundreds of years; because of this, the atmospheric concentration of CO2 will increase significantly and decay only very slowly. The scientific community has predicted that the interaction of climate change with ocean circulation and biogeochemical processes will further alter the ocean's uptake of human-emitted CO2, potentially leading to further increases in global temperature. In 1957, CO2 measuring stations were established at Mauna Loa, Hawaii and in Antarctica, beginning the stage of actual measurement of greenhouse gases (Figure 3); the precise measurements showed that the concentration of CO2 in the atmosphere is indeed continuously increasing, which opened the prelude to modern global climate change research. Special mention should be made of C. D. Keeling's systematic measurement and analysis of CO2 beginning in the 1950s. The main gases measured during this period were CO2 and H2O, the two greenhouse gases identified by Tyndall roughly 100 years earlier. It was not until the 1970s that other greenhouse gases such as CH4, N2O and CFCs were recognized as additional important contributors to climate change. Current measurements show that CO2 has increased from 280 ppm pre-industrial (1750) to 379 ppm in 2005. The atmospheric concentration of CO2 in 2005 far exceeded the natural range of variation (180~280 ppm) over the past 650,000 years recorded in ice cores. Moreover, the growth rate of the atmospheric CO2 concentration over the decade 1995~2005 (1.9 ppm per year) was much higher than the average growth rate since continuous direct atmospheric measurements began (1960~2005: 1.4 ppm per year). Atmospheric concentrations of other greenhouse gases have also increased significantly, beyond their ranges of natural variation over the past 650,000 years. The resulting greenhouse effect, measured as radiative forcing, has a global average of 1.6 W/m² (range 0.6~2.4 W/m²) relative to 1750. Among the components, the radiative forcing of CO2 increased by 20% from 1995 to 2005, possibly the fastest growth in any decade of the past 200 years. Figure 3: (a) atmospheric CO2 concentration measured at the Mauna Loa Observatory; (b) atmospheric CO2 concentration measured at China's Waliguan Global Atmosphere Baseline Station (Chinese Academy of Meteorological Sciences, 2005). Why can the greenhouse effect caused by greenhouse gases drive global climate change? This is one of the most important physical foundations for understanding the causes of global climate change. There are two types of greenhouse effect: the natural greenhouse effect, and the enhanced greenhouse effect, that is, the additional greenhouse effect caused by human activities.
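As a rough check on these numbers, the widely used simplified logarithmic expression for CO2 radiative forcing of Myhre et al. (1998), ΔF = 5.35 ln(C/C0) W/m², can be evaluated directly. The minimal Python sketch below uses only the concentrations quoted above and is illustrative, not a radiative-transfer calculation:

```python
import math

def co2_radiative_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Simplified CO2 radiative forcing (W/m^2) relative to the pre-industrial
    concentration c0: dF = 5.35 * ln(C / C0) (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Concentrations quoted in the text: 280 ppm (1750) -> 379 ppm (2005).
print(f"1750 -> 2005: {co2_radiative_forcing(379.0):.2f} W/m^2")   # ~1.62
# A hypothetical doubling of the pre-industrial concentration:
print(f"doubling CO2: {co2_radiative_forcing(560.0):.2f} W/m^2")   # ~3.71
```

The ~1.6 W/m² obtained for 379 ppm is consistent in magnitude with the forcing figures cited above; note that the quoted total net anthropogenic forcing also includes other gases and aerosols.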
Let us first examine the natural greenhouse effect, which operates not only in the climate evolution of the earth but also in the formation of the climates of other planets; as far as current knowledge goes, it operates at least on Venus (closer to the sun than the earth) and on Mars (farther from the sun than the earth). In the earth's atmosphere, in addition to nitrogen and oxygen, which together account for about 99%, there are small amounts of trace gases such as CO2 and CH4, as well as clouds, water vapor and dust. Although the latter occupy a small volume and amount, they can absorb part of the infrared heat radiation from the surface. According to Kirchhoff's law, if a gas layer in the atmosphere absorbs radiation at some frequency, it also emits radiation at that frequency, in proportion to its absorption, at its own temperature. Therefore, the trace gases, water vapor, clouds, etc. in the atmosphere absorb the long-wave heat radiation emitted by the atmosphere and the surface, while at the same time emitting heat radiation toward outer space at their own temperatures. The temperature of these gases, water vapor and clouds in the upper atmosphere is much lower than that of the surface (from the surface to the tropopause, at about 12 km on average, the temperature drops at an average rate of 6°C/km, so the atmosphere at 5~10 km is 30~50°C colder than the surface), and they therefore emit relatively little thermal radiation. These high-level greenhouse gases absorb a large amount, or all (as a black body), of the long-wave radiation emitted by the surface and the lower atmosphere, but emit outward much less long-wave radiation than they absorb, far less than the thermal radiation that would be lost to outer space without these water vapor and greenhouse gases. The effect of these greenhouse gases is thus like a quilt covering the earth's surface (a blanket effect): the outer surface of the quilt is colder than the inner surface, so surface heat radiation does not escape unobstructed to outer space, making the earth's surface warmer than it would be without these greenhouse gases. From the perspective of radiative transfer, one can also say that the greenhouse gases, water vapor and clouds of the middle and upper atmosphere increase the downward emission of long-wave radiation, which raises the temperature of the surface and near-surface air. It can be seen from the above that if the earth had no vertical temperature distribution in which temperature decreases with height, there would be no greenhouse effect. The absorption of infrared radiation by greenhouse gases and water vapor occurs in different spectral bands, and the overall absorption spectrum is quite complex. For example, carbon dioxide (CO2), the most important greenhouse gas, has absorption bands at 15 μm, 10 μm, 5.2 μm, 4.3 μm, 2.7 μm and 2.0 μm, the strongest being the two bands at 15 μm and 4.3 μm. In the course of climate change research, some argued that the absorption bands of atmospheric CO2 were already saturated, so that the greenhouse effect had reached saturation and further increases in CO2 concentration would produce no significant additional greenhouse effect. But this is not the case.
Many studies of infrared spectroscopy and atmospheric radiation have shown that the absorption by CO2, and hence its greenhouse effect, has indeed reached saturation in the central band at 15 μm, but across the full absorption range of CO2 (14~18 μm, especially the wings of the central peak), as well as in other absorption bands (such as 10 μm and 5.2 μm), absorption is far from saturated, nor will it be in the near future. It should be noted that no matter how complex the physical processes at the surface and within the atmosphere are, as stated at the beginning of this entry, the radiant energy entering and leaving the top of the atmosphere must balance. It was pointed out above that in an atmosphere with clouds, the net solar radiation entering the atmosphere is 240 W/m², so the outgoing long-wave radiation must also have this value. Once this balance is disrupted, it can be restored by an increase in the temperature of the earth's surface. Owing to the naturally occurring greenhouse gases, clouds and water vapor in the atmosphere, and through their positive radiative forcing and greenhouse effect, the natural greenhouse effect warms the surface by 33°C above what it would be without these constituents, that is, from −19°C (a "snowball" earth) to 14~15°C. This is a temperature suitable for the existence of life on earth; it can be said that without the natural greenhouse effect, life would be difficult to maintain. Mars and Venus have similar natural greenhouse effects, but because their CO2 contents and temperatures differ from the earth's, they reach equilibrium planetary temperatures that are either too low or too high for life to exist. The main disruption of the global radiation balance can also come from the increase of greenhouse gases in the atmosphere due to human activities; the resulting further increase in surface temperature is known as the enhanced greenhouse effect. This enhanced greenhouse effect is thus a greenhouse effect caused by human activities, added on top of the natural greenhouse effect. Although its magnitude is much smaller than that of the natural greenhouse effect, its warming is highly significant. Through this man-made greenhouse effect, the blanket effect that prevents long-wave radiation from escaping is further enhanced, meaning that the emission of long-wave radiation from the upper atmosphere to outer space is further reduced. From the perspective of radiative transfer and radiation balance, this is equivalent to an increase in the net radiation flux density at the top of the atmosphere, leaving the top-of-atmosphere radiation unbalanced (the net outgoing long wave decreases). The surface temperature therefore increases further in response to this imbalance (radiative forcing) until the net long-wave radiation emitted from the top of the atmosphere again equals the net incoming solar radiation. When the earth system has fully adjusted to this anthropogenic radiative forcing, the average temperature of the earth will have increased by an amount corresponding to the enhanced, or anthropogenic, greenhouse forcing. Figure 4 illustrates the natural greenhouse effect (Fig. 4a) versus the enhanced greenhouse effect (Fig. 4c).
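The balance argument above can be made concrete with the Stefan-Boltzmann law. The following minimal sketch uses only the 240 W/m² figure from the text and standard constants; it recovers the −19°C effective emission temperature, the ~33°C natural greenhouse warming, and the order of magnitude of the no-feedback response to a small forcing discussed in the next paragraph:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def emission_temperature(flux_w_m2: float) -> float:
    """Blackbody temperature (K) whose emission balances the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

t_eff = emission_temperature(240.0)                 # effective emission temperature
print(f"T_eff = {t_eff:.1f} K ({t_eff - 273.15:.1f} C)")      # ~255 K, ~ -18 C
print(f"natural greenhouse warming ~ {288.0 - t_eff:.0f} C")  # vs ~15 C surface

# Linearizing E = sigma*T^4, a top-of-atmosphere forcing dF produces a
# no-feedback warming dT = dF / (4*sigma*T^3):
dF = 4.0                                            # ~ 240 - 236 W/m2 (see below)
dT = dF / (4.0 * SIGMA * t_eff**3)
print(f"no-feedback warming for dF = {dF} W/m2: {dT:.1f} C")  # ~1 C
```

The roughly 1°C obtained from this crude linearization is of the same order as the 1.2°C no-feedback warming quoted below for a CO2 doubling; the difference reflects the simplifications made here.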
Figure 4: The natural and enhanced greenhouse effects of the earth: (a) the natural greenhouse effect; (b) CO2 concentration doubled; (c) the enhanced greenhouse effect; (d) the feedback effect (source: based on an original drawing by Houghton, 1998). The presence of water vapor and greenhouse gases in the atmosphere raises the earth's temperature from −19°C to 15°C (Fig. 4a). If the concentration of CO2 in the atmosphere doubles because of human emissions (Fig. 4b), the radiation balance at the top of the atmosphere is destroyed: since the increased CO2 intercepts long-wave radiation emitted by the earth and the atmosphere, the long-wave radiation leaving the atmosphere is only 236 W/m², and the climate system must adjust internally to restore the original balance. According to the Stefan-Boltzmann formula $E = \sigma T_g^4$ (where $T_g$ is the mean temperature of the earth's surface), the earth's surface must warm by about 1.2°C. After the temperature rises, according to the Clausius-Clapeyron equation the water vapor in the atmosphere increases, which further strengthens the greenhouse effect; through this positive feedback the surface warming is not 1.2°C but about 2.5°C, so the feedback effect is very pronounced. It can be seen from the above that, physically speaking, the causes of modern global climate change include not only the natural external forcing factors that operated through historical periods and geological ages and continue to affect the earth's climate, but also the added impact of human activities, and this anthropogenic forcing is likely to grow as activity-induced greenhouse gas emissions increase. Simulations and predictions with climate models show that its effect may exceed that of natural factors in the future, making the earth's climate warm continuously; at least in the 21st century, human beings will live in a warming world. The scientific basis for this conclusion is the observed increase of greenhouse gases in the atmosphere and of the earth's temperature over the past 200 years, together with the physical understanding of the greenhouse effect.", "Bioaerosol is a general term for living aerosol particles (including microbial particles such as bacteria, fungi and viruses), biologically active particles (pollen, spores, etc.), and the various plasmids released into the air by living organisms. Because of its important impact on human life and health, research on the health effects of bioaerosols, especially their toxicology and pathogenicity, remains one of the research foci of epidemiology, medicine and even environmental science. Since the Dutchman Leeuwenhoek used his microscope in 1676 to open up the world of microbiology, and, nearly two centuries later, the French scientist Pasteur collected and cultivated microorganisms from the air for the first time and thereby founded air microbiology, scientists have used various technologies and methods to study the distribution, transmission and pollution of these special aerosol species in the atmosphere, and have taken various measures to eliminate and prevent the threats they pose to human health. It was not until the end of the 1980s that scientists formally proposed a definition of bioaerosol.
At the 1989 annual conference of the American Conference of Governmental Industrial Hygienists, bioaerosol was first defined as airborne particles released by living organisms: macromolecules and variable mixtures with sizes ranging from 0.1 to 100 µm. Atmospheric aerosols, as small solid or liquid particles suspended in the atmosphere, have always been an important research field of atmospheric science because of their unique radiative forcing and their physical and chemical pollution characteristics. The IPCC 2007 assessment report clearly pointed out that, as an important factor driving climate change, aerosols require more in-depth research. As an important type of aerosol, bioaerosols have attracted more and more attention for their impact on climate change. Since the 1950s, because bioaerosols are widespread in the air, scientists have observed bacteria not only in indoor air, the atmospheric boundary layer and the troposphere [1], but even in the stratosphere at an altitude of 41 km [2]. Some scientists became interested in their physical and chemical properties in ice nucleation, in cloud droplet activation, and in their role in atmospheric physical and chemical processes generally [3]. Schnell and Vali (1972) suggested that microorganisms may also be an important source of ice nuclei in the atmosphere [4], and microorganisms have since indeed been found in ice crystals, cloud water and rainwater. However, owing to the lack of comprehensive interdisciplinary research, progress was slow. Only in the past decade have the characteristics of bioaerosols and their influence on atmospheric processes begun to attract extensive attention as a distinct scientific issue in the biogeoscience community [5]. The latest research has shown that certain bacteria, fungi and even pollen particles can act as efficient ice nuclei and cloud condensation nuclei and so affect atmospheric nucleation processes [6]. Ice formation in the atmosphere has wide regional and global significance, yet homogeneous ice nucleation requires supercooling and supersaturation, conditions difficult to reach in the real atmosphere except at extremely low temperatures at high altitude. The formation of natural clouds and precipitation is therefore dominated by heterogeneous nucleation. Heterogeneous ice nuclei in the real atmospheric environment are considered catalysts of the nucleation process: they can initiate freezing under relatively warm conditions and are often invoked to explain atmospheric ice nucleation. Although the magnitude and extent of the contribution of bioaerosols to ice nucleation have not yet been clarified, literature reports have confirmed that biological matter can play an important role in it, and atmospheric scientists now generally recognize the important impact of bioaerosols on ecological and climate effects.
Studies have found that several species of bacteria can nucleate ice, among them Pseudomonas fluorescens, Pseudomonas syringae, Pseudomonas viridiflava, Erwinia herbicola, Erwinia ananas, Erwinia uredovora and Xanthomonas campestris. The ability of these bacteria to act as ice nuclei depends largely on a single gene, inaZ, homologues of which are found in different bacterial species; the encoded lipoprotein is anchored to the outer cell membrane through a glycosylphosphatidylinositol linkage and serves as a template for ice formation. More and more scientists in the United States, Canada and Europe are studying the role of bioaerosols in the nucleation of atmospheric ice nuclei and cloud condensation nuclei, and researchers in China are now participating in this research as well. Current work in atmospheric science on the role of bioaerosols in these nucleation processes focuses mainly on how their physical and chemical properties influence atmospheric nucleation. Because this field requires interdisciplinary research across many disciplines, many unknown scientific problems remain for future discovery and exploration, mainly in the following respects [7]: There is an urgent need for advanced detection technology to monitor, in a timely and accurate manner, the various bioaerosols present in the atmosphere, distinguishing the species they belong to, the concentrations of different bioaerosol particles, and their exchange fluxes. Simulation experiments and modeling methods suited to studying the participation of bioaerosol particles in ice nucleation must be established, so as to identify whether bioaerosol particles participate effectively in the four heterogeneous ice-nucleation mechanisms (deposition, condensation freezing, immersion freezing and contact nucleation). How can the complex physical and chemical properties of atmospheric bioaerosol particles, and the interactions between different particle phases and interfaces, be simulated and studied effectively? How can laboratory methods be established to simulate and study the chemical transport kinetics and mechanisms of bioaerosols across different interfaces in the natural environment? And in the future, how can effective models be developed to study and predict the mechanisms of emission, transport and removal of atmospheric bioaerosol particles?", "Affected by monsoons, the subtropical high, topography, and mid-to-high-latitude weather systems, heavy rains occur frequently in China, often causing floods. At the same time, rainstorms in arid areas also bring benefits such as mitigating drought and increasing water supply for industry and agriculture. Therefore, the formation mechanism of rainstorms and rainstorm forecasting have always been among the issues of greatest concern to Chinese meteorologists. Internationally, heavy rain is also a major research topic; the MJO, driven by tropical convection, is closely related to heavy rain, and many meteorologists have studied the relationship between the active position of the MJO and rainstorm intensity and location, which shows that rainstorm research is an international concern.
Its research results can directly or indirectly reveal the formation mechanism of heavy rain and inform its forecasting, playing a positive role in flood control and disaster reduction and thus in the national economy and people's livelihood. Heavy rain is markedly sudden, its formation mechanism is complicated, and its forecasting is extremely difficult; rainstorm forecasting has long been an international problem. Although various remote-sensing observations such as satellites and radars are available, high-resolution numerical forecasting models of various types have been developed, and data assimilation techniques have been added, the accuracy of rainstorm forecasting is still very low. The reason is that the key scientific issues in the formation of heavy rain have not really been resolved; as a result, rainstorm forecasting remains a major problem worldwide. Most closely related to rainstorms is the accurate calculation of the vertical velocity of the atmosphere. Because the vertical velocity is calculated inaccurately in numerical models, there are often obvious errors in forecast rainstorm intensity and location. The importance of atmospheric vertical motion lies in its close relationship with the height of cloud-system development in a rainstorm system. If the vertical velocity is strong, it transports water vapor from the bottom layer of the atmosphere to the upper layers, where convective clouds appear; because upper-level temperatures are lower, more ice crystals form. That is, vertical motion directly affects the microphysical processes in convective clouds, and hence the nature and intensity of the precipitation; the calculation of atmospheric vertical velocity is therefore very important, and it too is an international problem that has long remained unsolved. To calculate the vertical velocity, meteorologists have devised various methods over the years. At first the continuity equation was mainly used. For example, in the p-coordinate the continuity equation can be expressed as $\partial\omega/\partial p = -D$, where $D$ is the horizontal divergence; integrating gives $\omega(p) = \omega_0 + \int_p^{p_0} D\,dp'$, where $p_0$ is the surface air pressure and $\omega_0$ is the surface vertical velocity. The vertical velocity calculated in this way has a large error, because the horizontal divergence $D$ is computed from the spatial variation of the horizontal wind. The horizontal wind not only has large observation errors, but the errors introduced by computing its horizontal gradient are even larger; the subsequent vertical integration then accumulates these errors, so the calculated vertical velocity is unreliable. Later, the thermodynamic equation was used to calculate the vertical velocity, namely $\omega = \frac{1}{S}\left(\frac{\partial T}{\partial t} + \mathbf{V}_H\cdot\nabla T\right)$, where $\mathbf{V}_H$ is the horizontal wind, $\nabla$ is the two-dimensional horizontal gradient operator, and $S = -(T/\theta)(\partial\theta/\partial p)$ is the static stability parameter. The advantage of this method is that it avoids the accumulated error caused by the vertical integration above.
At the same time, the error caused by using the gradient of the horizontal wind to compute the divergence is avoided, but this method still has a large error, because the observation error of the horizontal wind itself is very large, possibly larger than the vertical velocity itself; the error in the local time change of temperature is also considerable, and the horizontal temperature gradient carries errors too, so the accumulated error remains large. Its performance is clearly better than the divergence method (calculating the vertical velocity from the continuity equation), but the error is still so large that the result is unreliable. To improve the calculation of vertical velocity for large-scale weather systems, meteorologists used the quasi-geostrophic approximation to propose the quasi-geostrophic diagnostic (omega) equation for atmospheric vertical velocity [1], in which the difference in vorticity advection between upper and lower isobaric levels (such as 850 hPa and 700 hPa) together with the temperature advection is used to calculate the magnitude of the vertical motion. Because the vertical motion calculated by this method is obtained within the quasi-geostrophic framework, the main problem is that the geostrophic wind replaces the real wind, which introduces large errors, especially for rainstorm weather systems, whose wind fields are highly non-geostrophic; so the calculation of the vertical velocity remained unresolved. To go further, the Q-vector method was proposed, in which the divergence of the Q vector is used as the single forcing term for diagnosing the vertical motion. But it still uses the geostrophic wind instead of the real wind, so the problem of calculating the vertical motion remained unsolved. In recent years, Chinese meteorologists have paid close attention to this calculation and have proposed a new expression for the vertical velocity in which the real wind rather than the geostrophic wind is used, with the calculation carried out in the z-coordinate, avoiding the error of converting the vertical velocity from the p-coordinate to the z-coordinate [2]. However, since this equation is relatively complicated and is a linear diagnostic equation, the error introduced by finite differencing in the calculation is still very large; in essence, the problem of accurately calculating the vertical motion has still not been solved. Although meteorologists have long been trying to improve the calculation of vertical velocity in torrential-rain systems, the problem is made harder by the fact that the vertical velocity cannot at present be observed directly and accurately, and it remains unsolved. This obviously limits the accuracy of rainstorm forecasting, making it a major international problem.
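To make the kinematic (continuity-equation) method above concrete, the sketch below integrates the horizontal divergence upward from the surface on a purely synthetic wind field; the grid, wind field and surface boundary condition are invented for illustration only:

```python
import numpy as np

# Kinematic method: omega(p) = omega_0 + integral_p^{p0} D dp'.
nx, ny = 50, 50
dx = dy = 50e3                                    # 50 km grid spacing
p_levels = np.arange(100000.0, 19999.0, -5000.0)  # 1000 -> 200 hPa, in Pa

x = np.arange(nx) * dx
y = np.arange(ny) * dy
X, Y = np.meshgrid(x, y, indexing="ij")

# Synthetic horizontal wind (m/s) with uniform weak divergence D = 1e-6 s^-1,
# repeated on every pressure level:
u = (10.0 + 2e-6 * (X - X.mean()))[None, :, :] * np.ones((len(p_levels), 1, 1))
v = (-1e-6 * (Y - Y.mean()))[None, :, :] * np.ones((len(p_levels), 1, 1))

# D = du/dx + dv/dy from centered finite differences:
D = np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=2)

# Integrate the continuity equation upward, taking omega_0 = 0 at the surface:
omega = np.zeros_like(D)
for k in range(1, len(p_levels)):
    dp = p_levels[k - 1] - p_levels[k]                       # positive, Pa
    omega[k] = omega[k - 1] + 0.5 * (D[k] + D[k - 1]) * dp   # trapezoidal rule

k500 = np.argmin(np.abs(p_levels - 50000.0))
print(f"omega(500 hPa) at domain centre: {omega[k500, nx//2, ny//2]:.3f} Pa/s")
```

Because the divergence here is spatially uniform by construction, the integration is exact; with real analyses, observation errors in u and v, and the differencing of noisy fields, make the integrated omega unreliable, which is exactly the difficulty described above.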
It is hoped that meteorologists will further refine and improve the calculation of vertical velocity on the basis of previous studies, and make new contributions to the forecasting of rainstorm intensity and location.", "Aerosol acts as a "nucleus" in the formation of cloud particles [1-2] and is an indispensable part of their formation. In meteorology, an aerosol particle that becomes the nucleus of cloud-droplet formation is called a cloud condensation nucleus (CCN), and one that becomes the nucleus of ice-crystal formation is called an ice nucleus (IN). The process by which water vapor in the atmosphere is converted into liquid cloud droplets or solid ice crystals is called the nucleation of cloud particles. It is a very complicated process; under normal atmospheric conditions, without the participation of aerosols, it will hardly happen. The change of water vapor into liquid water is condensation, and the condition for it is that the relative humidity of the air reaches 100%, that is, the water vapor pressure equals the saturated vapor pressure. In fact, however, without the participation of aerosol, condensation of pure water vapor requires the saturation to reach about 120% (20% supersaturation), because the spherical surface of a tiny water droplet is unstable: there is a resistance called surface tension, and only when the supersaturation reaches a certain level can this resistance be overcome and water droplets form. If aerosols (CCN) are involved, the supersaturation required for water vapor to convert into droplets is much lower, and since large numbers of aerosol particles are generally present in the actual atmosphere, cloud droplet formation becomes much easier. After cloud droplets form, they grow to raindrop size through collision and coalescence. The nucleation of ice crystals is similar to that of cloud droplets. In the absence of ice nuclei (IN), water vapor molecules begin to form ice embryos only at very low temperature: an ice-like lattice forms on the embryo, and this lattice structure is easily destroyed by thermal perturbations, especially in small droplets. Therefore the smaller the droplet, the lower the temperature required for it to freeze into ice. Experiments have shown that pure water droplets 5 μm in diameter freeze at −40°C, while large drops of 25 μm freeze at −36°C, so the conditions for ice-crystal formation without aerosol participation are very harsh and do not match the conditions under which ice crystals are actually observed to form in the atmosphere. With ice nuclei present, water vapor can deposit directly as ice on the surface of the nuclei, or the nuclei can contact or collide with cloud droplets and trigger freezing, and the required temperature conditions are much less severe. Like CCN, IN are present in the atmosphere, but in very low concentrations. The findings suggest that IN should have a hexagonal lattice structure similar to that of natural ice, and that only at temperatures sufficiently below the freezing point are they activated and able to participate in ice-crystal formation. For example, the activation temperature of the AgI used in artificial seeding is −4°C, that of NaCl is −8°C, and that of pozzolan is −13°C.
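The supersaturation barrier described earlier in this entry can be quantified with the Kelvin equation for the equilibrium vapor pressure over a curved pure-water surface. The sketch below uses standard constants and is illustrative only, not a cloud-microphysics calculation:

```python
import math

SIGMA_W = 0.0728   # surface tension of water near 20 C, N/m
RHO_W = 1000.0     # density of water, kg/m^3
R_V = 461.5        # specific gas constant of water vapor, J/(kg K)

def kelvin_saturation_ratio(radius_m: float, temp_k: float = 293.15) -> float:
    """Equilibrium saturation ratio e_s(r)/e_s(inf) over a pure-water droplet
    of radius r (Kelvin equation)."""
    return math.exp(2.0 * SIGMA_W / (RHO_W * R_V * temp_k * radius_m))

for r_nm in (2, 5, 10, 100):
    s = kelvin_saturation_ratio(r_nm * 1e-9)
    print(f"r = {r_nm:>3} nm: saturation ratio {s:.3f} "
          f"({(s - 1.0) * 100.0:.0f}% supersaturation)")
```

An embryo of a few nanometres thus requires supersaturations of order tens of percent, consistent with the ~20% figure quoted above, while a particle of ~0.1 μm needs only about 1%, which is why droplets form so much more readily on aerosol.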
The lower the temperature, the higher the concentration of activated IN. Modern weather modification is based on this theory: by artificially increasing CCN or IN, the formation of clouds and precipitation is altered so as to increase rainfall, prevent hail, or disperse fog [3]; this is deliberate weather modification. There is also unintentional weather modification caused by human activities. With the development of modern industry, human activities increase pollution aerosols in the environment; whether this causes significant changes in cloud and precipitation processes, and in turn changes in regional and global climate and environment, has become a new research topic [4-6]. Several observational facts point to the potential modification of cloud formation by anthropogenic aerosols, for example aircraft and satellite observations of the clouds along ship tracks. Differences in the distribution of aerosol particles below clouds under clean and polluted conditions can lead to differences in the relationship between cloud optical depth and liquid water content. However, not all observations agree. Some results show that increasing the ambient aerosol concentration can increase cloud reflectivity, while others show that heavily polluted aerosols entering clouds can reduce cloud water content and cloud reflectivity. The interaction of aerosols and clouds is a complex nonlinear process. Aerosols produced by human activities, such as sulfate, nitrate, organic matter and black carbon, differ markedly, because of their chemical composition and physical properties, in the atmospheric temperature and humidity conditions they require to nucleate into CCN and IN. In particular, differences in the solubility and mixing state of these aerosols also produce obvious differences in their influence on cloud droplet formation and surface tension. These differences lead to large uncertainties in cloud formation processes (such as the cloud droplet spectral distribution, and cloud ice concentration and shape) and the associated optical properties. The main reason for the discrepancies among the above research results is that our current understanding of the physical and chemical properties of man-made aerosols is not deep enough. Some studies have shown that the presence of organic aerosols can reduce surface tension, leading to an increase in cloud droplet number concentration. Mixtures of sea salt and organic matter have different optical properties after nucleation compared with pure sea-salt particles. Aircraft observations indicate that ambient aerosols are primarily mixtures of biomass-burning particles, organic matter, smoke and other aerosols. However, such observational studies are often limited to a particular area and period, and their results cannot fully explain the complex relationship between aerosol composition and the formation of cloud droplets and ice crystals. Nevertheless, the facts that sea salt, dust and meteoritic particles are found in small ice crystals, and that sulfate, nitrate and organic matter are found in haze and cloud droplets, point to the need for research on the interaction of natural and polluting aerosols in cloud and precipitation formation,
and on revealing to what extent and in what manner these aerosol mixtures participate in the different processes of cloud particle formation (e.g., nucleation and aggregation). The differing chemical compositions of aerosols can also significantly affect the formation of cloud particles. Laboratory studies have found that organic compounds are not equally distributed between the ice and liquid phases when water droplets freeze homogeneously: particles rich in organic matter remain in an unfrozen state, which has potential implications for the development of mixed-phase clouds under the influence of anthropogenic aerosols. Insoluble aerosol particles within ice crystals affect the radiative transfer of the crystals, and different insoluble inclusions have different, even completely opposite, effects. Therefore, the goal of future research is to further reveal the role of aerosols in the formation of cloud and precipitation and the associated changes in regional climate and environment, so as to effectively reduce the uncertainties on some key issues.", "China possesses abundant climate resources; in particular, the reserves and richness of its wind energy are among the greatest in the world. Wind energy is an inexhaustible clean energy source, and among renewable energies it is the most suitable for large-scale development and utilization. Wind power not only helps to relieve energy shortages but also helps to address the climate change crisis brought about by the greenhouse effect; it will be one of the main energy sources for human survival in the 21st century. The international community attaches great importance to this and is competing to develop wind power generation technology. China's wind power is also developing rapidly, and the energy issue has been elevated to an unprecedentedly important strategic position in the sustainable development of China's economy and society [1]. Scientifically speaking, wind power generation mainly exploits the wind resources of the atmospheric boundary layer (at present most wind turbine hub heights are below 120 m), converting the kinetic energy of the wind into electrical energy through generating equipment [2-3]. Wind power generation is closely related to regional climate, atmospheric boundary layer physics, and electric power and electrotechnical science; understanding and mastering the characteristics and variations of the wind field in the atmospheric boundary layer is one of the keys to wind power generation. The so-called atmospheric boundary layer is the lower atmosphere within about 1~2 km of the earth's surface; as the interface and key region of exchange between the surface and the atmosphere, it has extremely important effects on changes in weather, climate and environment. Human daily activities, industrial and agricultural production, and engineering activities (such as aerospace engineering, large-scale construction projects, and large-scale wind and solar resource development) are all closely related to the atmospheric boundary layer [4-5]. For more than half a century, because of its great practical application value, the atmospheric boundary layer has been a very active frontier branch of atmospheric science worldwide.
Because the air flow in the atmospheric boundary layer is often turbulent, the wind field is random, intermittent and uncertain, and is strongly affected by climate conditions, weather conditions, local terrain and the nature of the underlying surface, it has a great influence on the stability and efficiency of wind power generation (the available wind energy is proportional to the cube of the wind speed); the problem is especially prominent when large-scale wind power is connected to the grid. Internationally, the theory of the boundary layer over a uniform underlying surface has been well developed, but understanding of the atmospheric boundary layer over non-uniform surfaces is still insufficient. Compared with small countries such as the Netherlands and Denmark, which are Europe's advanced wind power nations, China has a vast territory, and the terrain, underlying surface conditions and climatic backgrounds differ from region to region; the situation it faces is therefore more complicated, a boundary layer problem over strongly non-uniform surfaces. The existing atmospheric boundary layer observations in China still fall far short of the needs of wind energy resource assessment and forecasting, and the existing weather and climate numerical models at home and abroad are basically unable to simulate and forecast in detail the wind speed distribution at the scale of a wind farm (or a wind turbine). Based on the current situation and development trend of wind power generation at home and abroad, and on the needs of the atmospheric boundary layer discipline, this paper proposes ten difficult problems closely related to the atmospheric boundary layer in the development of wind energy resources. These are not only practical problems that must be faced in developing wind energy resources, but also challenging topics for basic and applied research on the atmospheric boundary layer. Climate-average characteristics of wind speed in the atmospheric boundary layer: the long-term variation of wind speed at any place has a certain regularity, that is, climate-average characteristics. Wind power generation is mainly concerned with the climatic characteristics of the boundary layer wind on seasonal, interannual, interdecadal and centennial time scales. The climatological average of the atmospheric boundary layer wind observed at a given location is determined by the macroclimate background and the topographic characteristics of the measurement site. Obtaining wind climate data requires long-term continuous observations, yet many regions of China rich in wind energy resources (such as the western plateau and the southeast coast) still lack such long-term observations, especially wind profile observations, which makes the study and analysis of long-term wind characteristics difficult. How to infer, from the existing historical data, the variation of the average wind speed in different regions and seasons and its probability distribution (not all of which follow the Weibull distribution) is one difficult problem [2]; how to deploy modern observation methods scientifically and optimally to obtain climatically representative field observation data is another.
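Since the available wind energy scales with the cube of the wind speed, the shape of the speed distribution matters as much as the mean. The sketch below estimates the mean wind power density for a hypothetical site whose speeds follow a Weibull distribution; the shape and scale values are invented for illustration:

```python
import numpy as np

RHO_AIR = 1.225   # air density near sea level, kg/m^3

def mean_power_density(k: float, c: float, n: int = 200_000, seed: int = 0) -> float:
    """Mean wind power density 0.5*rho*<v^3> (W/m^2) for Weibull-distributed
    wind speeds with shape k and scale c (m/s), by Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    v = c * rng.weibull(k, n)
    return 0.5 * RHO_AIR * float(np.mean(v**3))

k, c = 2.0, 7.0                         # hypothetical Rayleigh-like site
pd = mean_power_density(k, c)
v_mean = c * 0.8862                     # Weibull mean for k = 2: c * Gamma(1.5)
pd_naive = 0.5 * RHO_AIR * v_mean**3    # cube of the mean, for comparison
print(f"mean power density: {pd:6.0f} W/m^2")      # ~280
print(f"cube-of-mean value: {pd_naive:6.0f} W/m^2")  # ~146: an underestimate
```

The cube-of-the-mean value underestimates the resource by roughly half here, which is why assessments need the full probability distribution rather than only the average wind speed, and why deviations from the Weibull form matter.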
On the other hand, if numerical simulation is used, accurately simulating the boundary layer wind field (a vector field) in a climate numerical model is much more difficult than simulating the temperature and precipitation fields (scalar fields). Short-term variation of wind speed in the atmospheric boundary layer: a remarkable feature of the atmospheric boundary layer structure is its strong diurnal variation. Over flat terrain, the diurnal variation of wind speed is well understood; over complex terrain, however, the diurnal variation of wind is affected not only by solar radiation but also, to a large degree, by the local terrain. For example, wind fields can become very complex under the influence of sea and land breezes, valley winds, and even urban heat island circulations and their nonlinear interactions. In addition, weather processes have a great impact on boundary layer structure. In fact, short-term changes in wind speed are more uncertain and harder to predict than seasonal changes. These variations, which generally peak over a few days, are caused by so-called weather variability and are associated with a wide range of weather types and processes. Much about the boundary layer structure and wind field characteristics under the influence of topography and weather processes remains poorly understood. Variation of wind with height in the atmospheric boundary layer: to use wind energy effectively, it is important to know the distribution of wind speed with height (the wind profile) from the ground to turbine hub height. Even over flat terrain, the wind speed is affected by the roughness of the underlying surface, the stability of the atmospheric stratification and the Coriolis force, and its variation is complicated (the commonly used logarithmic law and power law are only approximations). Over complex terrain or non-uniform surfaces, affected by the aforementioned mesoscale circulations such as valley winds and sea and land breezes, it is even harder to calculate the distribution of wind speed with height (especially the wind speed at hub height) from near-surface wind speeds. Here, in-depth research combining observational experiments with computer simulation is needed to obtain reasonable results; if possible, it is best to distill convenient, practical semi-empirical formulas for engineering use, which are also very helpful for numerical simulation. Of course, because wind power generation operates at relatively high wind speeds, the influence of temperature stratification, i.e., atmospheric stability, can be ignored, and only the purely aerodynamic flow under neutral stratification need be considered. Turbulence and gusts: the motion of air in the atmospheric boundary layer (especially the surface layer) is almost always turbulent. Although the time and space scales of turbulent eddies are relatively small, their dynamic effects on a single turbine, and the wake effects generated among multiple turbines, are very strong; turbulence reduces the average wind speed and induces turbine vibration. Gusts, moreover, are also common in the boundary layer, especially under strong weather conditions.
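For the profile laws mentioned in the preceding paragraph, the following sketch compares the neutral-stratification logarithmic law with a power-law extrapolation from a 10 m measurement to a hypothetical 80 m hub; the reference speed, roughness lengths and exponent are all illustrative assumptions:

```python
import math

def log_law(v_ref: float, z_ref: float, z: float, z0: float) -> float:
    """Neutral logarithmic profile: v(z) = v_ref * ln(z/z0) / ln(z_ref/z0),
    with z0 the surface roughness length (m)."""
    return v_ref * math.log(z / z0) / math.log(z_ref / z0)

def power_law(v_ref: float, z_ref: float, z: float, alpha: float = 0.14) -> float:
    """Power-law profile: v(z) = v_ref * (z / z_ref) ** alpha."""
    return v_ref * (z / z_ref) ** alpha

v10, z_hub = 6.0, 80.0   # 6 m/s measured at 10 m; 80 m hub height (assumed)
for z0 in (0.01, 0.1, 0.5):   # smooth surface, grassland, rough terrain
    print(f"z0 = {z0:4.2f} m: log-law hub wind {log_law(v10, 10.0, z_hub, z0):.1f} m/s")
print(f"power law (alpha = 0.14): {power_law(v10, 10.0, z_hub):.1f} m/s")
```

The spread of roughly 8~10 m/s across plausible roughness values translates, through the cubic dependence of power on speed, into a factor-of-two spread in estimated energy, illustrating why these approximate laws are inadequate over complex terrain.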
In some cases, stronger turbulent pulses are defined as gusts, but gusts formed by self-organization differ in mechanism from gusts excited by external weather conditions. Gusts are highly destructive. Both turbulence and gusts reduce the effective use of wind energy in power generation and accelerate the damage or wear of wind turbines by imposing fatigue loads, which is why turbine towers are usually built high enough to avoid the strong turbulence created by winds near the ground. The generation mechanism, statistical laws and parametric models of turbulence are well-known scientific problems [5-6]; the gust generation mechanism, gust coefficient estimation, theoretical gust models and the probabilistic description of extreme wind speeds are all very important and difficult issues [7]. Fine-grained assessment of wind energy resources: in the past, the investigation and assessment of wind energy resources in China mainly used routine wind measurements at a height of 10 m from meteorological stations plus a small amount of wind-measurement-tower data. Constrained by the lack of an assessment theory suited to China's monsoon climate and terrain, it is still impossible to give accurately and in fine detail (for example, at a horizontal resolution of 1 km and a vertical resolution of 10 m) the total wind energy reserves at turbine height and their regional distribution [8]. As for the distribution of China's offshore wind energy resources, the earliest estimates were simple extrapolations from the statistics of wind measurements at land meteorological stations. Over the past ten years, China's coastal provinces have carried out a new round of offshore wind energy resource assessment, mainly based on statistical analysis of historical wind measurements from coastal meteorological stations, ocean stations and island stations, combined with a small number of short-term island observations. Nonetheless, what is the total wind energy reserve within the height range of wind turbines in China? Where are the macro-scale resource-rich areas? These remain difficult questions without clear answers, and solving them requires modern meteorological observation methods, including satellite remote sensing, wind profiler radar, automatic weather stations, dedicated wind-survey towers and offshore observation platforms, together with refined numerical models. Micro-siting of wind farms: how can the optimal siting of wind farms within macro-scale resource-rich areas be guided scientifically? An optimal theoretical model for wind farm micro-siting is needed so that refined wind resource assessments can be better applied to wind farm design and construction. To this end, micro-scale numerical models of wind farms and wind turbines must be developed. Since this involves turbulent flow under various complex geometric conditions (local terrain, turbine structures and surrounding buildings), it is an interdisciplinary study of boundary layer meteorology, environmental science and aerodynamics. Mesoscale meteorological models or computational fluid dynamics (CFD) models alone, for example, are not enough.
It is also necessary to develop a model system that couples CFD with mesoscale meteorological models or atmospheric boundary layer models in order to truly realize multi-scale wind field simulation and forecasting; this is a key technical problem that needs to be overcome. Short-term forecasting of wind power for large-scale wind farms: it has been pointed out above that the wind field in the atmospheric boundary layer is intermittent and uncertain, especially over complex underlying surfaces. This greatly affects grid operation and increases the difficulty of power grid planning and dispatching, so accurate and effective wind power forecasting has become a bottleneck restricting the development of large-scale wind power. The objects of wind power forecasting include single wind farms and groups of wind farms (regional wind farms), and the forecast periods include long-term, medium-term, short-term and ultra-short-term, of which short-term forecasting is currently the most urgent need [9]. Wind power companies require short-term forecasts that issue, 36~72 hours in advance, the hourly power generation together with the credibility of the forecast. China has a large land area and very complex topography. Foreign numerical models, especially the European small-scale models, use turbulence closure parameters determined from local near-surface turbulence observations and tests, which differ from China's topographic and surface conditions; the calculated results therefore differ considerably from reality, and in most cases are too large. The actual power generation of most domestic wind farms is smaller than predicted, which fully proves this point. Therefore China must independently research and develop a short-term wind power forecasting system suited to its own national conditions. How can a short-term wind power forecasting model system for large-scale wind farms be established to provide scientific parameters for optimal grid dispatching, so as to overcome the unpredictability of wind power and realize "smart power generation" for large-scale wind farms? This is a very difficult and challenging subject [10]. At present, an effective solution appears to be a multi-scale fine wind-field forecasting model system suited to the climate and terrain characteristics of the wind farm area. Such a numerical model system can consist of an advanced nested mesoscale meteorological model (including data assimilation) combined with a computational fluid dynamics (CFD) model or a fine atmospheric boundary layer model; dynamic or statistical downscaling methods can also be used as appropriate to meet the requirement of high spatial resolution, and neural network and nonlinear time-series forecasting methods can be integrated into the forecasting system. It should be emphasized that the model system must be compared with actual observations, continually tested and improved, and compared with the simulation results of popular foreign models (such as the KAMM-WAsP software of the Risø National Laboratory in Denmark).
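The last step of any such forecasting chain is converting the predicted hub-height wind speed into power. The sketch below uses an idealized turbine power curve; the cut-in, rated and cut-out speeds and the rated power are invented for illustration, and real systems use manufacturer-supplied curves:

```python
import numpy as np

def power_curve(v: np.ndarray, v_in: float = 3.0, v_rated: float = 12.0,
                v_out: float = 25.0, p_rated_kw: float = 2000.0) -> np.ndarray:
    """Idealized turbine power curve: zero below cut-in and above cut-out,
    a cubic ramp between cut-in and rated speed, flat at rated power."""
    p = np.zeros_like(v, dtype=float)
    ramp = (v >= v_in) & (v < v_rated)
    p[ramp] = p_rated_kw * (v[ramp]**3 - v_in**3) / (v_rated**3 - v_in**3)
    p[(v >= v_rated) & (v <= v_out)] = p_rated_kw
    return p

# Hypothetical hourly hub-height forecast (m/s) -> power (kW):
v_forecast = np.array([5.0, 7.5, 11.0, 13.5, 18.0, 26.0])
print(power_curve(v_forecast))   # final hour is zero: cut-out shutdown
```

Because of the steep cubic ramp, small wind-speed forecast errors translate into large power errors, and an error across the cut-out threshold flips the output between full power and zero, which is part of why short-term wind power forecasting is so demanding.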
Extreme weather and climate events and the safety of wind power development: sandstorms, cold waves and strong winds, lightning and thunderstorms, tropical cyclones, tornadoes, explosively deepening extratropical cyclones, storm surges and other severe weather and climate conditions can seriously affect the safe operation of inland and coastal wind farms, with tropical cyclones having the most severe impact [11-12]. At present, the characteristics of tropical cyclones within the height range of wind turbines are still poorly understood, reliable aerodynamic parameters cannot be provided for the design of coastal and offshore wind farms, and the planning and construction of wind farms lack a scientific basis for climate risk analysis. How should the macro layout and key areas of wind farm construction (such as coastal and offshore wind farms) be guided according to extreme climate and environmental conditions? How can the climate risk of wind farm construction in different regions (the probability of damage to wind farms from extreme climate and environmental conditions) be assessed scientifically? These are relatively difficult topics. Other countries pay close attention to this work, which bears on the cost and price of wind power. Climate and environmental effects of large-scale wind power development: according to the experience of countries where wind energy development is relatively advanced, large-scale development may have negative impacts on local and even regional climate and environment, for example by changing the surface albedo, the roughness of the underlying surface and the dynamic drag coefficient; it can also affect local precipitation, cause noise, and harm birds. These problems are not yet very prominent in China, but as the scale of wind power generation grows, the corresponding climate and environmental questions will certainly be raised. For example, China is strongly affected by the monsoon climate, and the airflow whose energy offshore wind power extracts is exactly the flow that transports water vapor inland: will large-scale offshore wind power generation affect precipitation over land? Forward-looking scientific research is needed. The climatic and environmental effects of large-scale wind power generation are a very difficult subject involving a wide range of disciplines. Development and utilization of wind energy resources under climate change: how did China's wind energy resources change in the past? What is their future trend? Can they be developed and utilized sustainably? Against the background of global climate change, the large-scale sustainable development and utilization of China's wind energy resources can be realized only on the basis of a scientific understanding of past changes in those resources and their causes, and a more accurate prediction of possible future trends; this can then provide a scientific basis for the macro layout and micro-siting of wind farms. Over the past 50 years, against the background of global change, China's surface wind speed has changed significantly, with the annual average wind speed showing an overall downward trend.
For example, the areas of inland China rich in wind energy resources, such as Xinjiang, Inner Mongolia, northern Hebei and most of Northeast China, are precisely the areas where the temperature has risen most significantly and the annual average wind speed has decreased the most. To reveal the historical variation and future trend of China's wind energy resources under climate change, and to provide a scientific basis for macro decisions on their large-scale development and utilization, heavy reliance on computer numerical simulation is necessary; but using climate models to simulate and predict the turbulence and wind fields of the atmospheric boundary layer is much more difficult than simulating and predicting the temperature and precipitation fields.", "The formation of new particles is considered to proceed by the condensation of supersaturated vapors (such as sulfuric acid vapor) into molecular clusters, which form new particles through condensation, collision and related processes [1]. The size of newly formed particles may be 1~2 nm. Aitken reported evidence of new particle formation in the atmosphere as early as 1897, but it was with the improvement of particle measurement technology in the 1990s that particles as small as 3 nm were widely observed and reported; the new particle formation events reported in the current literature involve particle sizes in the range of 3~20 nm. In terms of observational coverage, new particle formation events occur all over the world; in terms of spatial extent, they can occur simultaneously over hundreds of kilometers; and in terms of frequency, they can be observed on 5%~50% of the days of the year. New particle formation events are ubiquitous on the global scale and are one of the important sources of atmospheric particulate matter. Once formed, new particles grow rapidly and participate in many important atmospheric processes. Their possible effects include the following. They affect global climate change. On the one hand, they change the number-size distribution of atmospheric particles; the new particles themselves and the particles they grow into absorb and scatter solar radiation, reduce atmospheric visibility, degrade air quality, and affect the climate. On the other hand, large numbers of new particles grow into cloud condensation nuclei (CCN); under certain humidity conditions the CCN concentration increases with the concentration of fine particles, cloud droplets become smaller, and the reflectivity of clouds is strengthened. A single day's production of new particles in coastal areas can triple the local supply of cloud condensation nuclei [2]. New particles can thus affect the global radiation balance and global climate through cloud physics and precipitation. They affect atmospheric chemical processes. New particles, and the particles generated by their rapid growth, participate in many important heterogeneous reactions in the atmosphere and thereby affect the atmospheric environment, for example through the loss of OH radicals, indirectly changing the oxidizing capacity of the atmosphere [3]. And they have negative health effects.
The formation of new particles is generally understood as the condensation of a supersaturated vapor (such as sulfuric acid vapor) into molecular clusters, which then grow into particles through condensation, collision and related processes [1]. Newly formed particles may be only 1~2 nm in size. Aitken reported evidence of new particle formation in the atmosphere as early as 1897, but it was the improvement of particle measurement techniques in the 1990s that allowed particles as small as 3 nm to be widely observed and reported; the particle sizes in currently reported new particle formation events lie in the range of 3~20 nm. In terms of observation coverage, such events occur all over the world; in terms of spatial extent, a single event can occur simultaneously over several hundred kilometers; in terms of frequency, they can be observed on 5%~50% of the days in a year. New particle formation events are therefore ubiquitous on the global scale and are one of the important sources of atmospheric particulate matter. Once formed, new particles grow rapidly and participate in many important atmospheric processes. Their possible effects include the following. They affect global climate change: on the one hand, they change the number size distribution of atmospheric particles, and both the fresh particles and the particles they grow into absorb and scatter solar radiation, reduce atmospheric visibility, degrade air quality and influence climate; on the other hand, many new particles grow into cloud condensation nuclei (CCN). Under given humidity conditions, the CCN concentration increases with the fine-particle concentration, cloud droplets become smaller, and the reflectivity of clouds is strengthened; a single day of new particle formation in coastal areas can triple the local availability of CCN [2]. Through cloud physics and precipitation, new particles can thus affect the global radiation balance and global climate. They affect atmospheric chemical processes: the new particles, and the particles formed by their rapid growth, take part in many important heterogeneous reactions in the atmosphere, such as the loss of OH radicals, thereby indirectly changing the oxidizing capacity of the atmospheric environment [3]. They have negative health effects: because new particles diffuse strongly, they deposit efficiently in the lungs after entering the respiratory tract, with a large fraction deposited in the alveoli. Large numbers of ultrafine particles can weaken or even inactivate alveolar macrophages, and the longer the ultrafine particles are in contact with alveolar epithelial cells, the more severely that activity is weakened, which may lead to pulmonary inflammation and other lung diseases [4]. Ultrafine particles can pass through the lung epithelium into lung tissue, and can even enter the systemic circulation through blood and lymphatic capillaries; they also act synergistically with O3, NOx and other pollutants, aggravating the adverse health effects [5]. For these reasons, new particle formation has received wide attention and has become a research hotspot and frontier in atmospheric environmental science, and its impact must be considered when building global and regional atmospheric models. After nearly 20 years of observation and research, however, there is still no definitive answer as to the mechanisms and precursors by which new particles form and grow in the atmosphere. Sulfuric acid, water, ammonia and organic matter are all likely to take part in the nucleation of new particles, and kinetic models of nucleation, collisional coagulation and condensational growth are widely used to infer nucleation mechanisms and to study the evolution of particles after nucleation. The scientific issues of current concern, and progress on them, are as follows. 1. Formation mechanisms of new particles. Sulfuric acid-water binary nucleation. Binary nucleation theory has been widely applied to observed new particle formation events, and the development of techniques for measuring gaseous H2SO4 concentrations in the atmosphere provides a basis for further testing it. Although binary nucleation successfully explains the nucleation rates of many events, for some events, especially at coastal stations and some continental stations, it cannot explain the observed fast nucleation rates. Binary theory therefore has its conditions of applicability: low temperature, high humidity, relatively few pre-existing particles, and a relatively high concentration of sulfuric acid vapor [6]. Sulfuric acid-ammonia-water ternary nucleation. Because the presence of ammonia greatly reduces the vapor pressure of sulfuric acid, ammonia may nucleate with sulfuric acid vapor, or with sulfuric acid and water together; this is the sulfuric acid-ammonia-water ternary nucleation theory. Model calculations show that for typical atmospheric sulfuric acid vapor concentrations (10⁵~10⁷ cm⁻³), ternary theory yields sufficiently high new particle formation rates and can reasonably explain the high-rate events observed in coastal regions; for the same nucleation rate, the sulfuric acid vapor concentration required by ternary nucleation is much lower than that required by binary theory [6]. Organic matter participating in nucleation.
What role organic matter plays in atmospheric nucleation is currently a hot topic. During the photooxidation of VOCs, the production of non-volatile or semi-volatile organics contributes to the formation of secondary organic aerosol. When the H2SO4 concentration is too low for the initial molecular clusters to grow, and gaseous low-volatility organic compounds are abundant, organic matter takes part in nucleation and may even play a controlling role [7]. Laboratory simulations likewise show that organic matter can significantly increase the nucleation rate of sulfuric acid and promote new particle formation [8]. Ion-induced nucleation. Ions in the atmosphere are continuously generated and ubiquitous; they can be produced by cosmic rays or by local sources. These simple but highly reactive ions (such as N2+, O2+, N+ and O+) rapidly capture and interact with trace gases such as H2O, H2SO4, HNO3 and NH3 and with various organic species, quickly forming charged molecular clusters. Ion-induced nucleation has successfully predicted the evolution of aerosols in aircraft exhaust. Iodine participating in nucleation. In coastal areas and over the sea surface, iodine released by algae can take part in new particle formation in the atmosphere; some studies hold that the large amounts of CH2I2 emitted over ocean regions are an important precursor through which iodine participates in local nucleation [9,10]. CH2I2 is strongly photochemically active and is rapidly photolyzed by ultraviolet radiation to form iodine oxides, and these oxides, mainly iodine dioxide (OIO), participate in nucleation to form new particles. 2. Growth mechanisms of new particles. Some of the literature holds that the growth of freshly nucleated particles proceeds in two steps: initial growth followed by condensational growth. Initial growth is the process by which newly formed, extremely small particles (1~2 nm) grow to the size (~3 nm) detectable by current instruments. Because these particles are so small, the curvature (Kelvin) effect changes the equilibrium vapor pressure of low-volatility substances at the particle surface, so initial growth may differ from the growth of larger particles. Possible pathways for initial growth include condensation of the nucleating vapors, uptake of soluble vapors, heterogeneous nucleation, ion-mediated growth of molecular clusters, self-coagulation of molecular clusters, and heterogeneous chemical reactions. The subsequent growth process is comparatively simple, proceeding mainly by condensation and coagulation; both gaseous sulfuric acid and organic vapors contribute to this condensational growth [11]. The overall process of nucleation and growth of new particles is shown in Figure 1.

Figure 1. Nucleation and growth of new particles [12].
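The curvature effect invoked above can be made concrete with the Kelvin equation, S = exp(4σM/(ρRTd)), which gives the equilibrium saturation ratio over a droplet of diameter d. A minimal sketch, using water properties as illustrative values (the article concerns low-volatility vapors, for which the same qualitative behavior holds):

```python
import numpy as np

# Kelvin (curvature) effect: equilibrium saturation ratio over a droplet of
# diameter d is S = exp(4*sigma*M / (rho*R*T*d)). Illustrative water values.
R = 8.314          # gas constant, J mol^-1 K^-1
T = 293.0          # temperature, K
sigma = 0.072      # surface tension, N m^-1
M = 0.018          # molar mass, kg mol^-1
rho = 1000.0       # liquid density, kg m^-3

for d_nm in (1, 2, 3, 10, 100):
    d = d_nm * 1e-9
    S = np.exp(4 * sigma * M / (rho * R * T * d))
    print(f"d = {d_nm:>3} nm  ->  equilibrium saturation ratio S = {S:6.2f}")
```

The steep rise of S below ~3 nm (roughly 2 to 8 here, versus near 1 for 100 nm particles) illustrates why the initial growth step from 1~2 nm requires extremely low-volatility vapors and can be rate-limiting.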
According to the vertical distribution of atmospheric temperature, the atmosphere is usually divided into layers with different thermodynamic properties. As shown in Figure 1, the layer from the ground up to about 10 km is called the troposphere, and its mass accounts for about 85% of the total mass of the atmosphere; the layer above it, extending to roughly 50 km, is the stratosphere, which accounts for about 15% of the total mass. Atmospheric observations show that weather phenomena such as thunderstorms, typhoons and tornadoes occur mainly in the troposphere, and because of the large difference in mass between the stratosphere and the troposphere, it was traditionally believed that the stratosphere has little influence on tropospheric weather and climate and only passively receives the influence of the troposphere. Accordingly, in forecasting weather and climate change, attention used to focus on changes in tropospheric weather systems and their impact on the stratosphere, and rarely on the impact of the stratosphere on tropospheric weather and climate.

Figure 1. Schematic diagram of the vertical structure and temperature profile of the atmosphere, also showing the altitudes reached by human sounding activities and the altitudes of middle and upper atmospheric phenomena.

In the past 10 years, this traditional view that the stratosphere only passively accepts the influence of tropospheric changes has been revised, owing to research progress on two major problems. The first is stratospheric ozone depletion. Stratospheric ozone has decreased systematically since the late 1970s, and observational analyses have found that ozone depletion over the two poles has caused changes in tropospheric circulation and surface warming at middle and high latitudes of both hemispheres in late winter and early spring [1]. The second is the proposal and study of the Arctic Oscillation (AO) [2]. Observational analyses show that AO anomalies, whether in the positive or the negative phase, always appear first in the upper stratosphere and then propagate downward, reaching the tropopause after about 2-3 weeks, changing the tropospheric circulation and affecting tropospheric weather systems. Some scholars have therefore proposed that stratospheric AO anomalies can serve as a leading indicator for forecasting tropospheric weather changes [3]. The stratosphere and troposphere are closely coupled through atmospheric waves. As early as the 1960s, Charney and Drazin [4] studied the vertical propagation of atmospheric waves theoretically. Their results contain three main points: ① only planetary-scale waves (zonal wavenumbers 1~3) can propagate upward from the troposphere into the stratosphere, while smaller, synoptic-scale (baroclinic) waves exist only in the troposphere and cannot enter the stratosphere, as if the stratosphere acted as a filter that passes planetary waves of large spatial scale and removes smaller, synoptic-scale waves; ② even planetary waves can propagate only in westerly flow, not in easterly flow, consistent with the observed fact that in the summer half-year the stratosphere of the summer hemisphere is dominated by smooth easterlies with essentially no wave activity, whereas in the winter half-year westerlies prevail in the stratosphere and planetary waves can propagate up from the troposphere; ③ very strong stratospheric westerlies are unfavorable for the upward propagation of planetary waves, whereas when the stratospheric westerlies are weak, planetary waves tend to propagate into the stratosphere.
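The filtering in points ①~③ can be made quantitative. For stationary waves, the Charney-Drazin criterion permits vertical propagation only where 0 < U < Uc, with Uc = β/(k² + l² + f0²/(4N²H²)). A minimal sketch with illustrative stratospheric parameters, setting the meridional wavenumber l to zero for simplicity:

```python
import numpy as np

# Sketch of the Charney-Drazin criterion: stationary waves propagate vertically
# only where 0 < U < Uc, Uc = beta / (k^2 + l^2 + f0^2/(4 N^2 H^2)).
# Parameter values are illustrative; l is set to zero.
a = 6.371e6                   # Earth radius, m
omega = 7.292e-5              # Earth rotation rate, s^-1
lat = np.deg2rad(60.0)
beta = 2 * omega * np.cos(lat) / a
f0 = 2 * omega * np.sin(lat)
N = 2e-2                      # buoyancy frequency (stratospheric value), s^-1
H = 7.0e3                     # scale height, m

for s in range(1, 6):         # zonal wavenumber s
    k = s / (a * np.cos(lat))                 # dimensional zonal wavenumber, m^-1
    Uc = beta / (k**2 + f0**2 / (4 * N**2 * H**2))
    print(f"wavenumber {s}: critical westerly Uc = {Uc:5.1f} m/s")
```

With these values Uc is roughly 38, 19 and 10 m/s for wavenumbers 1~3 and falls to a few m/s for higher wavenumbers, well below typical winter stratospheric westerlies; this is the filtering effect, and in easterlies (U < 0) no stationary wave propagates at all.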
According to Charney and Drazin's theory, the stratosphere can indeed exert an important influence on tropospheric weather systems. First, the Arctic stratosphere experiences a sudden (explosive) warming roughly once every two years on average. During such an event the polar vortex collapses, the temperature of the polar stratosphere can rise by 30~40 °C within a few days, the westerly flow reverses to easterly, and the easterlies can extend from the pole to areas south of 60°N. The sudden warming of the Arctic stratosphere is itself caused by strong planetary waves breaking in the stratosphere, but once the high-latitude stratospheric flow has turned easterly, it limits the upward propagation and vertical development of tropospheric planetary waves; tropospheric planetary-scale waves therefore usually remain weak until the stratospheric flow returns to westerly. Observations show that after a sudden warming of the Arctic stratosphere, the westerlies at middle and high latitudes of the troposphere are indeed relatively smooth and blocking-high events are relatively few; in particular, the blocking highs over the North Atlantic and the Ural Mountains are inactive, which is unfavorable for the development of strong planetary waves. Secondly, it was traditionally believed that the strength of the stratospheric westerlies is controlled by the strength of tropospheric planetary waves: when the tropospheric planetary waves are strong, wave breaking in the stratosphere weakens the westerlies; otherwise the stratospheric westerlies are strong. However, observations in recent years have shown that the stratospheric circulation system has a degree of independence and is not completely determined by the planetary-wave flux from the troposphere. A typical observational fact is that there was no sudden warming in the Arctic stratosphere for eight consecutive winters from 1991 to 1997, and it is difficult to attribute this entirely to weak tropospheric planetary waves in those eight winters. In addition, there is no observational evidence that, when the stratospheric polar night jet (polar vortex) is established in early winter each year, its strength bears a definite relationship to the strength of the tropospheric planetary waves. All of this indicates that the stratospheric circulation system has its own variability, independent of the troposphere. Then, if in a given winter the stratospheric polar night jet is stronger for reasons of its own, planetary waves will, according to Charney and Drazin's theory, not readily propagate and develop upward; the tropospheric planetary waves will therefore be weaker, the westerly flow smoother, and blocking events fewer, while small disturbances in the westerly belt become more frequent but generally weak. The difference in time scale between the stratospheric circulation system and tropospheric weather systems also provides an effective avenue for extended-range forecasting of the evolution of tropospheric weather systems.
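The wind reversal during a sudden warming is commonly identified with a simple criterion: the zonal-mean zonal wind at 10 hPa and 60°N turning from westerly to easterly in winter. A minimal sketch on a synthetic wind series (real studies use reanalysis winds and impose further conditions, for example excluding the springtime final warming):

```python
import numpy as np

# Sketch of a common sudden-stratospheric-warming (SSW) criterion: the
# zonal-mean zonal wind at 10 hPa, 60 N reverses to easterly in winter.
# u10 is a synthetic daily series (m/s) standing in for reanalysis data.
rng = np.random.default_rng(1)
days = np.arange(0, 150)                       # one extended winter season
u10 = 30 - 0.1 * days + 8 * np.sin(days / 12) + rng.normal(0, 4, days.size)
u10[80:95] -= 45                               # impose a warming-like wind reversal

westerly = u10 > 0
# candidate central dates: first day of each easterly spell following westerlies
onsets = np.where((~westerly[1:]) & westerly[:-1])[0] + 1
for d in onsets:
    print(f"wind reversal (candidate SSW central date) on day {d}, u = {u10[d]:.1f} m/s")
```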
Observational analyses and theoretical studies show that the time scale of the stratospheric circulation (that is, the process from westerlies weakened by planetary-wave breaking to westerlies restored by radiative processes, as in a sudden warming of the Arctic stratosphere) is relatively long, about 1~2 months [5,6], whereas the time scale of mid- and high-latitude tropospheric weather systems is about 7 days, and the effective range of numerical weather prediction is likewise about 1 week. This means that signals of change in the stratospheric circulation can indeed extend the effective time range of tropospheric weather forecasting. Figure 2 shows time-height profiles of the AO index composited for weak polar vortex events (sudden warmings) and strong polar vortex events (strong polar night jets). Whether the vortex is weak or strong, the AO signal always appears first in the upper stratosphere, then propagates downward and reaches the ground after about 3 weeks. Figure 2 also shows the difference in time scale between stratospheric and tropospheric weather systems: a stratospheric "weather process" lasts about 1~2 months, while a tropospheric weather process lasts about 7~10 days. On average, the signal of change in the stratospheric circulation leads the troposphere by about 2~3 weeks, which suggests that using stratospheric signals could extend the lead time of tropospheric weather forecasts to more than 3 weeks. It should be noted, however, that Figure 2 is a composite result: not all strong and weak polar vortex events propagate downward into the troposphere.

Figure 2. Time-height profiles of the AO index composited from 18 weak polar vortex (sudden warming) events and 30 strong polar vortex events. Red denotes negative AO index, blue positive; white regions correspond to weak AO index (between −0.25 and 0.25), and the contours indicate the magnitude of the AO index [3].

Although both theory and observation show that changes in the stratospheric circulation are closely related to tropospheric weather systems and have the potential to influence their evolution significantly, there is so far no quantitative estimate of the extent to which the stratosphere can influence tropospheric weather, and the physical mechanisms by which it does so are not yet fully understood. Changes in the stratospheric circulation mainly affect planetary waves, whereas tropospheric synoptic systems are mainly controlled by baroclinic waves; the influence of the stratosphere on tropospheric synoptic systems is therefore probably not direct but indirect, acting through the interaction between planetary waves and synoptic-scale waves [7]. Clarifying the physical mechanism of the stratosphere's influence on tropospheric weather systems, and quantifying its strength, is thus one of the important problems in this field.
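Composites like those in Figure 2 are built by averaging an index over many events aligned on each event's central date. A minimal sketch of the procedure on synthetic data (a real analysis would compute the index at each pressure level to obtain the full time-height section; the event dates and imposed response here are hypothetical):

```python
import numpy as np

# Sketch of event compositing: average an AO-like daily index over many events,
# aligned on each event's central date (lag 0). Data are synthetic.
rng = np.random.default_rng(2)
ao = rng.normal(0, 1, 5000)                    # synthetic daily AO-like index
onsets = rng.integers(100, 4800, size=18)      # e.g. 18 weak-vortex event dates
for d in onsets:                               # impose a decaying negative response
    ao[d:d+40] -= np.linspace(1.5, 0, 40)

lags = np.arange(-30, 61)                      # days relative to the central date
composite = np.array([np.mean(ao[onsets + L]) for L in lags])
print("composite AO at lag 0, +20, +40 days:",
      [f"{composite[lags == L][0]:+.2f}" for L in (0, 20, 40)])
```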
In addition, although studies have shown that anomalous signals in the stratosphere may have precursory value for forecasting winter weather changes at middle and high latitudes of the troposphere, there is so far no successful experience of using such signals in operational weather forecasting, and long-term efforts are still needed. None of the current numerical weather prediction models includes the complete stratosphere together with its physical and chemical processes. So, is it possible to significantly extend the lead time of weather forecasts by adding the stratosphere and the associated processes to numerical forecast models? The problem is probably not that simple. Using stratospheric circulation signals in combination with effective statistical methods and more complete numerical forecast models may be an effective way to extend tropospheric weather forecasting. As far as current knowledge goes, anomalous signals of the stratospheric circulation are probably more useful for predicting tropospheric extreme weather events of longer time scale and larger spatial scale, and relatively less significant for forecasting weather events of smaller temporal and spatial scales. More than 90% of the ozone in the atmosphere resides in the stratosphere. Ozone absorbs solar ultraviolet radiation and plays an important protective role for the biosphere at the Earth's surface, so changes in the ozone layer are of great significance for the global climate and environment. In the last 20 years of the 20th century, stratospheric ozone decreased continuously, a depletion caused by human-produced chlorofluorocarbons (CFCs). Observations over the past 10 years, however, show that stratospheric ozone has not continued to decline since 1998 and has even shown a slight upward trend (Figure 3). Most scholars consider this trend a signal that the ozone layer has begun to recover, because the weak recovery of ozone is consistent with the observed decreasing trend of the equivalent effective chlorine content (including Cl and Br) in the stratosphere [8].

Figure 3. Observed change in total-column ozone over time [8].

According to projections by coupled chemistry-climate models, stratospheric ozone may return to its pre-1980 level around 2050 and may exceed that level by the end of the 21st century. This is because, while the CFC content of the atmosphere decreases, the stratospheric temperature decreases as greenhouse gases (mainly CO2) increase; lower temperatures slow the chemical reactions that destroy ozone, so the ozone content increases. The stratospheric temperature is largely determined by the ozone concentration, and changes in ozone concentration directly change the stratospheric temperature. From the late 1970s to the late 1990s, as ozone decreased and greenhouse gases increased, the stratosphere showed a cooling trend; in the 21st century the recovery of the ozone layer will tend to warm the stratosphere. On the other hand, the most important driver of global climate change is the increase of greenhouse gases, a trend that will continue through the 21st century, and increasing greenhouse gases warm the surface and troposphere but cool the stratosphere.
This is because, in the stratosphere, greenhouse gases emit more longwave radiation than the infrared longwave radiation they absorb from the tropospheric atmosphere; the radiative effect of greenhouse gases in the stratosphere is therefore cooling, not warming. How the stratospheric temperature will change in the 21st century under these two opposing factors is an important question to be studied. How the recovery of stratospheric ozone, and the resulting changes in stratospheric climate, will affect the tropospheric climate is likewise a hot topic. A large number of studies have shown that the ozone depletion from the late 1970s to the late 1990s had a significant impact on the tropospheric climate and is an important cause of the observed trends of the northern and southern annular modes. By this reasoning, the recovery of stratospheric ozone in the 21st century should tend to drive a negative trend of the annular modes; whether this will be the case still needs to be verified by future observations and numerical simulations.

The monsoon is an important circulation system in the global climate system. Classically, the monsoon refers to a prevailing wind that reverses direction with the seasons, driven by the thermal contrast between sea and land, and accompanied by distinct seasonal changes in precipitation. With the expansion of research, however, the understanding of the monsoon has become richer. Some scholars emphasize that the monsoon is manifested mainly in changes of precipitation, while others emphasize changes of the wind field. Indeed, the monsoon manifests itself differently in different regions: some regions show mainly seasonal changes in precipitation, while others show mainly a seasonal reversal of the prevailing wind. The East Asian monsoon system is manifested both in the seasonal change of the prevailing wind direction and in the seasonal change of precipitation. Southerly winds prevail over East Asia in summer and, because they carry large amounts of water vapor from the ocean, bring abundant precipitation; northerly winds prevail in winter and, because they bring dry, cold air from high latitudes, precipitation is scarce. The climate of East Asia is therefore a typical monsoon climate. China lies in the East Asian monsoon region, and the interannual and interdecadal variations and anomalies of the East Asian monsoon cause frequent and severe climate disasters in China, such as droughts, floods, extreme heat, and low-temperature rain, snow and freezing, resulting in huge economic losses and heavy casualties. The variations and anomalies of the East Asian monsoon system, and their mechanisms, are therefore important research topics not only in China's atmospheric science but also in the World Climate Research Programme (WCRP). 1. Climatic characteristics of the East Asian monsoon system. In view of the important impact of monsoon variations and anomalies on China's climate disasters, as early as 70 years ago the eminent Chinese climatologist Zhu Kezhen [1] first proposed the influence of the East Asian summer monsoon on China's precipitation; later, Tu Changwang and Huang Shisong [2] studied the influence of the advance and retreat of the East Asian summer monsoon on the intraseasonal variation of China's rain belt.
These studies opened the way for research on the variation of the East Asian summer monsoon and its impact on the East Asian climate. Following them, Tao Shiyan, Chen Longxun and others, as well as Ding Yihui, made systematic studies of the structure and characteristics of the East Asian summer monsoon circulation [3-5]. Structural features of the East Asian monsoon system. The East Asian monsoon system, the South Asian monsoon system and the northern Australian monsoon system all belong to the Asian-Australian monsoon system; the three are distinct yet related. Owing to the influence of the western Pacific subtropical high and of midlatitude disturbances, the East Asian summer monsoon circulation has the character not only of a tropical monsoon but also of the subtropical circulation, whereas the South Asian and northern Australian summer monsoons are purely tropical monsoon systems. The East Asian monsoon system is not simply prevailing southerly flow in summer and prevailing northerly flow in winter; it is composed of several circulation subsystems. According to the research of Tao Shiyan and Chen Longxun [3], as shown in Figure 1, the East Asian summer monsoon system includes the southwest monsoon flow from India and the Bay of Bengal, the Australian cold high, the cross-equatorial flow along 100°E, the South China Sea monsoon trough, the western Pacific subtropical high and the easterly flow to its south, the Meiyu, and midlatitude westerly disturbances; the East Asian winter monsoon includes the Siberian high, the northerly flow over East Asia, and the East Asian trough. From the composition of the main horizontal circulation systems of the East Asian summer monsoon, Tao Shiyan and Chen Longxun first proposed that the East Asian summer monsoon is a relatively independent monsoon system, related to but distinct from the South Asian monsoon system.

Figure 1. Schematic diagram of the East Asian summer monsoon circulation system [3].

Recently, Chen Jilong and Huang Ronghui analyzed the vertical structure and annual cycle of the wind field of the East Asian monsoon system and its differences from the South Asian and northern Australian monsoon systems [6,7], revealing how the vertical wind-field structure of the East Asian monsoon system differs from that of the other two. The vertical structure of the zonal wind of the East Asian summer monsoon is very complex: south of 25°N the lower layer is westerly and the upper layer easterly, giving easterly vertical shear, whereas north of 25°N the lower layer is westerly and the upper layer strongly westerly, giving westerly vertical shear. In the other two monsoon systems, the summer monsoon zonal wind is westerly in the lower layer and easterly in the upper layer, with easterly shear throughout. Moreover, the vertical structure of the meridional wind of the East Asian summer monsoon also differs clearly from that of the South Asian and northern Australian monsoon systems.
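The shear contrast just described can be expressed as a simple classification by the sign of the zonal-wind difference between an upper and a lower level. A minimal sketch with illustrative values (the numbers are assumptions for illustration, not observed winds):

```python
import numpy as np

# Sketch of the vertical-shear contrast described above: classify a monsoon
# column by the sign of the zonal-wind shear between 850 hPa and 200 hPa.
columns = {
    "East Asia, 20N (south of 25N)": {"u850": 6.0,  "u200": -8.0},
    "East Asia, 35N (north of 25N)": {"u850": 4.0,  "u200": 25.0},
    "South Asia, 15N":               {"u850": 10.0, "u200": -15.0},
}
for name, c in columns.items():
    shear = c["u200"] - c["u850"]
    kind = "westerly shear" if shear > 0 else "easterly shear"
    print(f"{name}: u200 - u850 = {shear:+.1f} m/s -> {kind}")
```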
The East Asian summer monsoon has northerly winds in the upper troposphere and stronger southerly winds in the lower troposphere, while the vertical structure of the winter monsoon wind field is just the opposite, with northerlies in the lower troposphere and strong southerlies in the upper troposphere. The vertical difference between the upper- and lower-level meridional winds of the East Asian monsoon system is significantly larger than that of the South Asian and northern Australian monsoon systems. The vertical wind-field structure of the East Asian monsoon system is therefore different from that of the other two monsoon systems. Annual cycle of the wind field of the East Asian monsoon system. The winter-summer annual cycle of the East Asian monsoon system is manifested first in the meridional northward advance of the lower-tropospheric southerlies in early summer and their southward retreat in mid-to-late August. Tao Shiyan, Chen Longxun, Huang Ronghui and others pointed out that the Asian monsoon breaks out first over the South China Sea, usually in mid-May. After its onset over the South China Sea, the East Asian summer monsoon goes through two stepwise northward advances and three stagnations, and the southerly monsoon finally reaches North China, Northeast China and the northern Korean Peninsula in mid-to-late July [3,8]. The southward withdrawal of the summer southerly monsoon in East Asia, by contrast, is very rapid: generally from mid-to-late August the southerly monsoon quickly withdraws from North China and Northeast China, retreats to South China in less than two weeks, reaches the South China Sea, and stays there until mid-October. After mid-October, the northerly winter monsoon blows southward along East Asia through the East China Sea to the South China Sea, then turns southwestward toward the Indochina Peninsula and Southeast Asia; it can persist until April of the following year and causes strong convection and heavy precipitation in Southeast Asia. The annual cycle of the East Asian monsoon system is manifested not only in the northward advance and southward retreat of the southerlies but also in changes of the vertical structure of the wind field. Unlike the South Asian and northern Australian monsoon systems, however, the annual cycle of the East Asian monsoon wind field is less obvious in the zonal wind and more obvious in the meridional wind. From early summer, southerlies prevail in the lower troposphere over East Asia while northerlies prevail in the upper troposphere; in mid-September the meridional circulation over East Asia reverses, with northerlies prevailing in the lower troposphere and southerlies in the upper troposphere. In the South Asian and northern Australian monsoon systems, by contrast, the annual cycle of the winter and summer monsoons is more obvious in the zonal wind. Annual cycle of the East Asian monsoon rain belt. The research of Huang Ronghui et al. shows that the winter-summer annual cycle of the East Asian monsoon system is manifested even more clearly in the seasonal variation of the rain belt [9]. In April and May, the monsoon rain belt lies over the South China Sea region.
From late May to early June, the rain belt moves northward to South China and the Jiangnan region; after that, it jumps abruptly northward to the Yangtze-Huaihe (Jianghuai) Basin of China, Japan and South Korea, marking the beginning of the Meiyu in China's Jianghuai Basin, the Baiu in Japan and the Changma in South Korea. Then, in early or mid-July, the monsoon rain belt moves northward again to North China, Northeast China and the northern Korean Peninsula, marking the end of the Meiyu in the Jianghuai region and the beginning of the rainy season in North China and Northeast China. After mid-August the summer rain belt retreats quickly to South China; thereafter East Asia comes gradually under the control of the winter monsoon, and precipitation depends mainly on frontal systems. The northward advance and southward retreat of the East Asian monsoon rain belt is thus consistent with the annual cycle of the East Asian winter and summer monsoons. Water vapor transport characteristics of the East Asian monsoon system. The research of Huang Ronghui et al. shows that the water vapor transport characteristics of the East Asian monsoon system differ significantly from those of the South Asian and northern Australian monsoon systems [10]. In the East Asian monsoon system, the south-to-north meridional water vapor transport in summer is larger than the zonal transport, whereas in the other two monsoon systems the transport is mainly zonal. The difference between them is even more obvious in the divergence of water vapor transport, which is directly related to precipitation: in the East Asian monsoon system the monsoon flow blows from regions of high humidity toward regions of low humidity, so the divergence and convergence of water vapor transport depend not only on the divergence and convergence of the wind field but also on moisture advection. In the South Asian monsoon region, by contrast, zonal differences in humidity are small, so the divergence and convergence of water vapor transport depend mainly on the wind field, with only a small contribution from moisture advection. These results show that the wind-field structure, the annual cycle of the winter and summer monsoons, and the water vapor transport characteristics of the East Asian monsoon system all differ clearly from those of the South Asian and northern Australian monsoon systems. From the viewpoint of its climatic characteristics, the East Asian monsoon system is therefore a relatively independent monsoon system within the greater Asian-Australian monsoon system.
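The moisture-transport argument above rests on a standard identity: the divergence of the moisture flux splits into a wind-convergence term and a moisture-advection term, div(qV) = q·div(V) + V·grad(q). A minimal sketch on synthetic two-dimensional fields (the fields and grid are assumptions for illustration; over East Asia the advection term matters because the monsoon flow crosses strong humidity gradients):

```python
import numpy as np

# Decomposition behind the moisture-transport argument:
# div(q*V) = q*div(V) + V . grad(q). Synthetic fields on a coarse grid.
ny, nx, dx = 40, 60, 2.0e5                      # grid size; spacing in metres
y, x = np.mgrid[0:ny, 0:nx]
q = 0.016 * np.exp(-((y - 5) / 25) ** 2)        # specific humidity, high in the south
u = 2.0 + 0.02 * y                              # zonal wind, m/s
v = 6.0 * np.exp(-((y - 20) / 15) ** 2)         # southerly monsoon flow, m/s

ddx = lambda f: np.gradient(f, dx, axis=1)
ddy = lambda f: np.gradient(f, dx, axis=0)

div_qv = ddx(q * u) + ddy(q * v)                # total moisture-flux divergence
wind_term = q * (ddx(u) + ddy(v))               # contribution of wind convergence
advection = u * ddx(q) + v * ddy(q)             # contribution of moisture advection
print("max |residual| of the identity:",
      np.abs(div_qv - wind_term - advection).max())   # ~0 up to truncation error
```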
2. Temporal and spatial variations and anomalies of the East Asian summer monsoon system. The summer and winter monsoons of the East Asian monsoon system are affected not only by the global atmospheric circulation but also by the ocean, the land, ice and snow, and the Qinghai-Tibet Plateau, and they show significant interannual and interdecadal variations. Studies of the intraseasonal characteristics of the northward advance of the East Asian summer monsoon show that after its onset over the South China Sea, the Asian summer monsoon advances northward and, during this advance, brings prevailing southerlies and abundant rainfall to eastern China, Japan and the Korean Peninsula [1,2]. As early as the 1950s, Ye Duzheng et al. [11] first pointed out that the planetary-scale circulation over East Asia undergoes an abrupt seasonal change in early-to-mid June, leading to the onset of the East Asian summer monsoon rains in the Jianghuai Basin. However, the research of Huang Ronghui and Sun Fengying shows that whether the seasonal transition of the East Asian summer monsoon circulation is abrupt depends on the thermal state of the tropical western Pacific (the warm pool) and the convective activity over it [12].

Figure 2. Relationships among the thermal state of the tropical western Pacific (warm pool), convective activity around the Philippines, the onset of the South China Sea monsoon, the position and northward advance of the western Pacific subtropical high, and summer monsoon precipitation in the Yangtze and Huaihe basins: (a) the warm pool in a warm state; (b) the warm pool in a cold state [12].

As shown in Figure 2, when the tropical western Pacific is in a warm state and convective activity around the Philippines is strong, the atmospheric circulation over East Asia undergoes an abrupt seasonal change in early-to-mid June; conversely, when the tropical western Pacific is in a cold state and convective activity around the Philippines is weak, the circulation undergoes no abrupt change in early-to-mid June but only a gradual seasonal transition. Their research further shows that the northward march of the East Asian summer rain belt is strongly controlled by the northward advance of the East Asian summer monsoon. When the tropical western Pacific is warm and convection around the Philippines is strong, the summer monsoon advances quickly northward to the Jianghuai Basin in early-to-mid June and then moves north from the Jianghuai Basin to the Yellow River basin, North China and Northeast China in early July; the Meiyu in the Jianghuai Basin ends, and the rainy season begins in North China and Northeast China. In such years, the summer monsoon precipitation in the Yangtze or Jianghuai basins is weak, and drought often occurs. Conversely, when the tropical western Pacific is in a cold state and convection around the Philippines is weak, the East Asian summer monsoon does not undergo the two stepwise northward advances and three stagnations but advances gradually, lingering over the Yangtze and Jianghuai basins, and the weakened monsoon moves northward to North China and Northeast China only in mid-July. In such years, the summer monsoon precipitation in the Yangtze or Jianghuai basins is strong and severe floods often occur, while precipitation in North China is low and drought is likely.
The quasi-biennial oscillation and the meridional tripole pattern of the interannual variability of the East Asian summer monsoon system. Huang Ronghui et al. pointed out that the interannual variability of monsoon precipitation, convective activity, water vapor transport and the lower-tropospheric monsoon circulation over East Asia and the western Pacific not only exhibits a quasi-biennial oscillation in time, but also shows a clear "−, +, −" or "+, −, +" meridional tripole pattern in space, the so-called tripole mode [7,9]. They further pointed out that the interannual variation of this tripole pattern is well reflected in the meridional tripole distribution of drought and flood disasters in China. In typical flood and drought years of the Jianghuai Basin, China's summer monsoon precipitation anomalies clearly show a meridional tripole distribution. For example, in the summers of 1980, 1983, 1987 and 1998, summer monsoon precipitation in the Jianghuai Basin was above normal and floods occurred, while precipitation in South China was below normal with droughts of varying severity, and precipitation in North China was clearly below normal with drought. In the opposite phase, when summer monsoon precipitation in the Jianghuai Basin is below normal, precipitation in South China is above normal and floods occur there, and precipitation in North China is also above normal. Many years show anomalous summer precipitation distributions in China similar to this meridional tripole; by contrast, years of nationwide flood or nationwide drought are rare. Interdecadal variation of the East Asian summer monsoon system. The East Asian summer monsoon system shows not only obvious interannual variation but also large interdecadal variation. Huang Ronghui et al. proposed that the East Asian summer monsoon system weakened significantly around 1976, a change especially evident in the summer monsoon precipitation of North China [5,7,13]. Recent studies further show that the East Asian summer monsoon has gone through four interdecadal stages since the 1950s: ① from 1958 to 1977 the summer monsoon was relatively strong and the southerlies reaching North China were stronger, so monsoon precipitation in North China was relatively abundant; ② from 1978 to 1992 the summer monsoon began to weaken and the southerlies reaching North China weakened markedly, leading to a significant decrease of monsoon precipitation and persistent drought in North China, while summer monsoon precipitation in the Yangtze basin was significantly above normal and floods occurred frequently; ③ from 1992 to 1998 the summer monsoon strengthened and the southerlies reaching North China strengthened, so that summer precipitation in North China increased and the drought there eased after 1992; ④ from 1999 to 2009 the summer monsoon weakened significantly again, the flow reaching North China was clearly weaker, and summer monsoon precipitation in North China and the southern part of Northeast China was clearly below normal, while that in South China and the Huaihe Basin was significantly above normal, forming a pattern of "southern flood and northern drought" (except along the Yangtze River).
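A meridional tripole mode of the kind discussed above is typically extracted by EOF analysis of precipitation anomalies. A minimal sketch using SVD on a synthetic year-by-latitude anomaly field (the tripole pattern and the year-to-year alternation are built in by construction, purely to illustrate the method):

```python
import numpy as np

# Sketch of extracting a meridional tripole mode: EOF analysis (via SVD) of
# year-by-latitude precipitation anomalies. Data are synthetic.
rng = np.random.default_rng(3)
lats = np.linspace(18, 50, 33)                      # South China .. Northeast China
pattern = -np.cos(2 * np.pi * (lats - 18) / 32)     # tripole: -, +, - with latitude
years = np.arange(1958, 2010)
qbo_like = np.cos(np.pi * years)                    # alternates sign year to year
anom = np.outer(qbo_like, pattern) + 0.4 * rng.normal(0, 1, (years.size, lats.size))

anom -= anom.mean(axis=0)                           # remove the climatology
u, s, vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s[0] ** 2 / np.sum(s ** 2)
print(f"leading EOF explains {100*var_frac:.0f}% of variance")
print("sign of EOF1 at 20N, 33N, 47N:",
      np.sign(vt[0][np.searchsorted(lats, [20, 33, 47])]))
```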
3. Spatiotemporal variations of the East Asian winter monsoon system. East Asia is a region not only of strong summer monsoon but also of strong winter monsoon. The East Asian winter monsoon is characterized by strong northwest winds over the Mongolian Plateau, Northeast China and North China, the Korean Peninsula and Japan, and strong northeast winds over the East China Sea, the South China Sea and the southeast coastal areas; at the surface the circulation is dominated by the Siberian high and the Aleutian low, and aloft by the East Asian trough. The strong winter monsoon brings not only cold waves, snow disasters and freezing damage to the Mongolian Plateau, Northwest China, North China, Northeast China, the Korean Peninsula and Japan, but also spring dust storms or blowing sand to these areas; moreover, the East Asian winter monsoon brings strong convective activity and heavy rainfall to Southeast Asia. Interannual variation of the East Asian winter monsoon system and its impact on the summer monsoon. Chen et al. [14] defined an East Asian winter monsoon index (EAWM index) from the wind field along the East Asian coast. Their results show that this index describes well the interannual variation of the intensity of the East Asian winter monsoon, and that the winter monsoon exhibits large interannual variability. Changes in the East Asian winter monsoon bring warm or cold winters to East Asia, and in cold winters cold waves and snowstorms occur frequently there. Recently, the research of Huang Ronghui et al. [5] showed that the interannual variation of the East Asian winter monsoon exhibits a quasi-four-year oscillation, which may be related to the influence of the ENSO cycle on the winter monsoon. The results of Chen Wen et al. show that in the summer following a strong East Asian winter monsoon, summer monsoon precipitation in China's Yangtze and Huaihe basins tends to be low and drought may occur; conversely, in the summer following a weak winter monsoon, those basins receive more summer monsoon rainfall and floods may occur [14]. Interdecadal variation of the winter monsoon system. The East Asian winter monsoon system shows significant interdecadal as well as interannual variation. Studies show that the East Asian winter monsoon was stronger from the 1950s to the 1960s, weaker from the mid-1960s to the mid-1970s, stronger from the mid-to-late 1970s to the mid-to-late 1980s, and seriously weak after the mid-to-late 1980s, bringing East Asia many warm winters [5]. The studies of Chen Wen, Huang Ronghui and others show that the interannual and interdecadal variations of the East Asian winter monsoon are closely related to the Siberian high and the Aleutian low [7,15]. They further pointed out that interannual and interdecadal oscillations of the propagation waveguides of quasi-stationary planetary waves in the three-dimensional spherical atmosphere of the Northern Hemisphere strongly affect the Arctic Oscillation (AO), and thereby the interannual and interdecadal variations of the East Asian winter monsoon.
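A winter monsoon index of the kind just described can be sketched as a normalized, sign-reversed area mean of the coastal meridional wind, so that stronger northerlies give a larger index. This is a schematic version only, not the exact definition of Chen et al. [14], and the winter-mean winds below are synthetic:

```python
import numpy as np

# Schematic East Asian winter monsoon index: average the winter-mean meridional
# wind over a coastal box, normalize, and flip the sign so that anomalous
# northerlies (strong winter monsoon) give a positive index. Synthetic data.
rng = np.random.default_rng(4)
nyears = 40
v_box = rng.normal(-3.0, 1.2, nyears)           # DJF-mean v over the box, m/s

index = -(v_box - v_box.mean()) / v_box.std()   # northerly anomaly -> index > 0
strong = np.where(index > 1.0)[0]
print("strong winter-monsoon years (index > 1):", strong)
```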
4. The influence of sea-land-atmosphere interaction on variations of the East Asian monsoon system, and the East Asian monsoon climate system. As Webster [16] pointed out, a monsoon system is not merely an atmospheric circulation system but a coupled system of sea-land-atmosphere interaction. Similarly, the East Asian monsoon system is not just a circulation system over East Asia that changes markedly with the seasons; it is also a regional climate system affected by the oceans, the land surface, ice and snow, and the plateau [7,9]. As shown in Figure 3, this system includes the following components: ① in the atmosphere, the Asian monsoon circulation system (including the winter and summer monsoons), the western Pacific subtropical high, and midlatitude disturbances; ② in the ocean, the thermal effects of the tropical western Pacific warm pool and the Indian Ocean on the monsoon, and the ENSO cycle in the tropical Pacific; ③ in the lithosphere, the dynamic and thermal effects of the Qinghai-Tibet Plateau on the monsoon, Eurasian snow cover (especially snow on the Qinghai-Tibet Plateau), the land-air temperature difference in arid and semi-arid regions, and polar ice. Changes in the East Asian monsoon system are closely linked with changes in this coupled sea-land-atmosphere system; its members are interrelated and interact as a whole. We therefore also call this coupled sea-land-atmosphere system affecting the variation of the East Asian monsoon system the East Asian monsoon climate system.

Figure 3. Schematic diagram of the East Asian monsoon climate system.

The thermal effect of the tropical western Pacific and of convective activity around the Philippines on variations of the East Asian monsoon system. As early as the 1980s, Nitta [17], Huang and Li [18], Huang Ronghui and Li Weijing [19] and Kurihara [20] pointed out that the heat state of the tropical western Pacific and convective activity around the Philippines play an important role in the interannual variation of the East Asian monsoon system through the East Asia/Pacific (EAP) teleconnection. In particular, the studies of Huang Ronghui, Sun Fengying and Lu Riyu showed that the thermal state of the western Pacific and changes in convective activity around the Philippines strongly affect the north-south and east-west oscillations of the position of the western Pacific subtropical high [12,21]. As shown in Figure 2, when the tropical western Pacific is in a warm state and convection around the Philippines is strong, the western Pacific subtropical high is displaced eastward and northward; conversely, when the tropical western Pacific is in a cold state and convection around the Philippines is weak, the subtropical high is displaced westward and southward. Changes in the western Pacific subtropical high in turn lead to changes in the East Asian summer monsoon. The effect of the tropical Pacific ENSO cycle on the East Asian monsoon system. The tropical Pacific El Niño/Southern Oscillation (ENSO) is not only an important member of the global climate system but also an important system affecting the interannual variation of the East Asian monsoon system. Huang Ronghui and Wu Yifang [22] found that different stages of the ENSO cycle have different effects on the East Asian summer monsoon and on China's summer monsoon precipitation: when an El Niño event is in its developing stage, summer monsoon precipitation in the Jianghuai Basin tends to be above normal.
On the contrary, when an El Niño event is in its decaying stage, summer monsoon precipitation in China's Jianghuai Basin tends to be below normal, while the Poyang Lake and Dongting Lake basins, the basins of the Xiangjiang, Zishui, Yuanjiang and Lishui rivers, and the Songhua and Nenjiang basins in Northeast China receive above-normal summer monsoon precipitation, often causing severe floods. Moreover, the research of Zhang Renhe et al. showed that when an El Niño event reaches its mature stage, an anomalous anticyclonic circulation prevails over the tropical western Pacific, and the southwesterly flow on the northwest side of this anticyclone strengthens the water vapor transport; precipitation therefore increases in South China and the Jiangnan region, and summer precipitation in the Hetao area of North China also tends to increase [23]. According to the research of Huang Ronghui et al., sea surface temperature in the central and eastern tropical Pacific has warmed markedly since 1976, showing a clear "El Niño-like" interdecadal SST anomaly pattern, that is, an "interdecadal El Niño event" [7,13]. This SST anomaly pattern not only weakens the East Asian summer monsoon but also has an important impact on the tropical Walker circulation. The "interdecadal El Niño phenomenon" in the equatorial eastern Pacific is an important cause of the persistent severe drought in North China: it can affect the East Asian monsoon circulation directly through the circulation over the tropical western Pacific, and it can also affect the East Asian summer monsoon circulation indirectly by way of the African monsoon. Changes in the land-air temperature difference in the arid and semi-arid regions of western China and their impact on the East Asian monsoon system. Studies of the influence of the land-air temperature difference on East Asian summer monsoon precipitation have pointed out that the spring land-air temperature difference in the arid/semi-arid regions of Northwest China is positively correlated with summer precipitation in China's Yangtze and Huaihe basins and negatively correlated with summer precipitation in North China. These analyses also show clear interdecadal changes in the land-air temperature difference of the arid/semi-arid regions of Northwest China: the spring land-air temperature difference there has increased significantly since the mid-to-late 1970s. The increased land-air temperature difference strengthens the ascending motion over the arid regions of Northwest China, which in turn strengthens the descending motion over North China, leading to reduced summer precipitation in the Yellow River basin and North China and an interdecadal, persistent drought there, while summer precipitation in Northwest China is significantly enhanced. Changes in snow cover on the Qinghai-Tibet Plateau and their impact on the East Asian monsoon system. The land-surface thermal state of the Qinghai-Tibet Plateau has an important influence on the East Asian monsoon system, and Ye Duzheng and Gao Youxi [26] first pointed out the thermal effect of the Qinghai-Tibet Plateau on the Asian monsoon.
However, because the thermal effect of the Qinghai-Tibet Plateau is modulated by snow cover on the plateau, plateau snow plays an important role in the variation of the Asian monsoon. Wei Zhigang, Luo Siwei, Huang Ronghui and others pointed out that winter and spring snow cover on the Qinghai-Tibet Plateau is significantly positively correlated with flood-season precipitation in the region south of the Yangtze, and negatively correlated with summer precipitation in North China [7,27]. That is, when the plateau has more snow-cover days and deeper snow in winter and spring, summer rains around Dongting Lake, Poyang Lake and the Jiangnan region are strong, while summer precipitation in North China is weak. Moreover, the number of snow-cover days and the snow depth on the plateau in winter and spring show clear interdecadal changes: compared with the period before 1976, both have increased from 1976 to the present. This interdecadal change of winter-spring snow cover on the Qinghai-Tibet Plateau has contributed to the decrease of summer precipitation in North China and the increase of precipitation in the Yangtze and Huaihe basins. 5. Scientific issues concerning the East Asian monsoon system that urgently need study. As described above, great progress has been made in research on the spatiotemporal variation of the East Asian monsoon system and its impact on China's climate disasters. Nevertheless, many scientific issues concerning the system still need urgent study. Global warming may already have affected the East Asian monsoon system: for example, the East Asian summer and winter monsoons have weakened significantly since the late 1970s and have brought severe climate disasters such as droughts, floods, extreme heat, and low-temperature rain and snow. Whether this is a consequence of global warming or a natural oscillation of the climate system is still unclear and understudied, and should be investigated further in the future. In terms of spatial distribution, the East Asian monsoon system has a relatively independent horizontal and vertical wind-field structure, different from those of the South Asian and northern Australian monsoon systems; yet these monsoon systems interact and are spatially linked, so the relationships among them should be studied further. In terms of temporal variation, the interdecadal variation of the East Asian monsoon system has an important influence on its interannual variation, and the interannual variation in turn affects the intraseasonal variation; however, the physical processes through which the different time scales of this system interact remain unclear, and they are an important topic for future research. The internal dynamic and thermodynamic processes, and the external thermal and dynamic forcings, that affect the interannual and interdecadal variations of the East Asian monsoon system are quite complex, and some recent studies have emphasized only the thermal forcing of tropical heating and of the Qinghai-Tibet Plateau on the system.
In the future, the influence of internal dynamic processes and of external thermal and dynamic forcings on the East Asian monsoon system should be studied in depth. The East Asian summer rain belt is a mixture of cumulus and stratiform cloud, and its precipitation processes are quite complicated; at present it is difficult to propose a cumulus convection parameterization scheme suited to the East Asian summer rain belt, and a large gap remains between numerical simulation and prediction on the one hand and reality on the other. Numerical modeling, simulation and prediction of the distribution of the East Asian summer rain belt is therefore also a research topic deserving attention. It can be seen from these problems that, in the context of global warming, the intraseasonal, interannual and interdecadal variations of the East Asian monsoon system and their mechanisms remain important research topics. It is therefore necessary to analyze and study further, from observational data, dynamical theory and numerical simulation, the interactions among the members of the coupled sea-land-atmosphere system affecting the East Asian monsoon system and the internal dynamic processes of the system, especially the processes and mechanisms by which variations on different time and space scales interact, so as to reveal further the spatiotemporal variation of the East Asian monsoon system and the mechanisms of its impact on China's drought and flood disasters, and to improve the understanding of East Asian monsoon anomalies and the level of prediction of China's drought and flood disasters. We believe that through the implementation of a series of research projects on climate system change and global warming, the intraseasonal, interannual and interdecadal variability of the East Asian monsoon system and its mechanisms, as well as the impact of global warming on the system, can be understood more deeply.

The intensification of global warming, the increasingly fragile ecological environment, and the frequent occurrence of extreme weather and climate events seriously affect the sustainable development of the economy and society; climate change is therefore a major global issue of concern to the international community. The polar regions are among the regions most sensitive and vulnerable to global climate change, and their changes can indicate and amplify global climate change. In the context of global warming, the polar regions have experienced more obvious climate change than other regions over the past 30 years; the rapid retreat of Arctic sea ice and the rapid melting of the Greenland and West Antarctic ice sheets are the most striking examples. Polar climate change and its impacts have become a scientific issue of international concern in recent years. Arctic sea ice coverage has been declining since the 1950s. In the annual mean, Arctic sea ice coverage has decreased by about 3% per decade over the past 30 years, most dramatically in summer; since the beginning of the 21st century, the melting of summer sea ice has accelerated greatly, with a reduction of 18% per decade [1]. In September 2007, the Arctic sea ice extent was the smallest recorded since satellite observations began, about 39% below the 1979-2000 climatological average (Figure 1). The latest climate projections suggest that Arctic summer sea ice may disappear completely in the 2030s and 2040s [2].
The rapid loss of Arctic sea ice means that animals that depend on sea ice for survival (polar bears, walruses, etc.) will face serious threats. The formation of the atmospheric circulation is fundamentally the result of the joint action of the polar cold sources and the equatorial heat source. The high albedo of sea ice greatly reduces the absorption of solar radiation in the polar regions, making them a heat sink of the global climate system. Sea ice also isolates the ocean from the atmosphere, hindering the exchange of heat, water vapor, momentum and CO2 between them. Freezing of seawater and the accompanying brine rejection increase the salinity of the ocean surface, while the melting of sea ice decreases it; this directly affects the formation and intensity of the vertical and horizontal ocean circulation, and in turn the global ocean thermohaline circulation. Rapid reductions in Arctic sea ice will therefore have important implications for the global climate. In addition, China is located in the mid-latitudes, and cold air from the Arctic directly affects weather and climate disasters such as snowstorms in winter, sandstorms in spring, and droughts and floods in summer. Whether Arctic summer sea ice will disappear completely in the next 20-30 years, and what impact its disappearance would have on the global climate, are therefore important problems to be solved. In response to the rapid changes in Arctic sea ice over the past 30 years, some scholars have proposed that changes in the Arctic atmospheric circulation (i.e., dynamic processes) associated with the Arctic Oscillation (the leading mode of climate variability in the middle and high latitudes of the northern hemisphere) are the main cause of modern Arctic sea ice change [3]. This view rests mainly on the fact that since the end of the 1970s the Arctic Oscillation has been in a strongly positive phase (i.e., sea level pressure in the Arctic has decreased significantly and the circumpolar vortex has strengthened significantly), which favors the export of Arctic sea ice through the Fram Strait into the North Atlantic. However, analyses of observational data in recent years indicate that changes in the Arctic Oscillation index are inconsistent with the trend of Arctic sea ice and cannot explain its rapid decrease in recent years. (Fig. 1: September sea ice edge lines, defined by 50% sea ice concentration retrieved by satellite: orange, 1979-1983 from SMMR; red, 2002-2005 from AMSR-E; green, 2007 from AMSR-E [4].) With global warming, Arctic sea ice is changing from multi-year ice to first-year ice, and warm North Atlantic water is also expanding into the Arctic Ocean. This makes the thermodynamic processes of sea ice particularly important, especially the radiative interactions among the atmosphere, sea ice and ocean, and the heat-flux exchange between sea ice and the ocean. Determining to what extent the rapid decrease of Arctic sea ice is caused by thermodynamic processes and to what extent by dynamic processes is the key to predicting future changes in Arctic summer sea ice.
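To make the ice-albedo contrast described above concrete, here is a minimal sketch comparing shortwave absorption over open water and over sea ice; the albedo and insolation values are illustrative assumptions, not measurements.

```python
# Minimal sketch: shortwave energy absorbed by open water vs. sea ice.
# Albedo and insolation values are illustrative assumptions.

def absorbed_shortwave(insolation_w_m2: float, albedo: float) -> float:
    """Absorbed shortwave flux = incident flux * (1 - albedo)."""
    return insolation_w_m2 * (1.0 - albedo)

summer_insolation = 300.0  # W/m^2, assumed mean summer value
open_water = absorbed_shortwave(summer_insolation, albedo=0.10)
sea_ice = absorbed_shortwave(summer_insolation, albedo=0.60)

print(f"open water absorbs {open_water:.0f} W/m^2")  # 270 W/m^2
print(f"sea ice absorbs   {sea_ice:.0f} W/m^2")      # 120 W/m^2
# Replacing ice with open water more than doubles the absorbed energy,
# which melts more ice -- the positive ice-albedo feedback.
```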
Global warming has also rapidly reduced the ice mass of the Greenland and West Antarctic ice sheets [5]. The latest satellite observations show that from 2002 to 2007 Greenland lost 150-250 km3 of ice every year, and summer melting on the surface of the Greenland ice sheet reached unprecedented levels (Fig. 2). Like the Greenland ice sheet, the West Antarctic ice sheet has also been shrinking in recent years. If the Greenland ice sheet disappeared completely, sea level would rise by about 7.3 m; if the West Antarctic ice sheet melted completely, sea level would rise by about 5.8 m. Satellite and ground-based observations show that sea level has continued to rise at a rate of 3 mm per year or faster since 1993, a rate well above the 20th-century average. Greenland's contribution to sea-level rise is about 0.25-0.55 mm per year, while that of Antarctica is about 0.14 mm per year [6]. Rapid melting of the Greenland and West Antarctic ice sheets means that low-lying coastal areas would suffer devastating effects. (Fig. 2: changes in the melt extent of the Greenland ice sheet from 1979 to 2008.) The ice sheet is complex not only at its surface; its internal and basal structure is also very complex. There are water channels inside the ice sheet, and they play an important role in its melting. In summer, meltwater from the surface infiltrates through crevasses and channels, carrying summer heat to the bottom of the ice sheet, where it mixes with the basal sediment; the lubricating effect of this mud can accelerate the flow of ice to the sea, shrinking the Greenland ice sheet. We still do not know exactly how far the water seeps down (whether from the top all the way to the bottom of the ice sheet), but figuring out how the water flows will help answer a more critical question: does water reaching the bottom of the ice sheet in turn accelerate the movement and melting of the ice sheet? Climate warming has raised sea temperatures, which melts the bottom of the Antarctic ice shelves, thinning them and even causing them to collapse. When an ice shelf disintegrates, the massive ice streams behind it lose their natural barrier: land-based ice previously held back at the bottleneck slides off rapidly under gravity, accelerating its movement toward the ocean and ultimately contributing to sea-level rise. After the Larsen B ice shelf disintegrated in 2002, the velocity of the ice streams behind it increased significantly; in April 2009 the Wilkins Ice Shelf began to collapse, breaking into many icebergs. These factors introduce considerable uncertainty into projections of sea-level rise. The extent to which the Greenland and West Antarctic ice sheets will melt this century, and their contribution to sea-level rise, are therefore important unresolved questions. Owing to the harsh climate and sparse human activity, the polar regions are among the most data-poor regions for Earth-system observation. The lack of observational data, especially long-term records, greatly limits the development of sea ice models, ice sheet models and climate system models. In recent years polar scientific expeditions at home and abroad have increased significantly (notably the International Polar Year programme implemented by the International Council for Science and the World Meteorological Organization in 2007-2008), which has improved our understanding of sea ice and ice sheets to a certain extent.
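As a rough consistency check on the ice-loss and sea-level figures quoted above, the sketch below converts an annual ice loss of 150-250 km3 into millimetres of sea-level rise, assuming a global ocean area of about 3.61×10^8 km2 and standard approximate densities for ice and water.

```python
# Rough check: convert Greenland's annual ice loss to sea-level rise.
# Ocean area and densities are standard approximate values.

OCEAN_AREA_KM2 = 3.61e8   # global ocean area, km^2
RHO_ICE = 917.0           # kg/m^3
RHO_WATER = 1000.0        # kg/m^3

def ice_loss_to_sea_level_mm(ice_km3: float) -> float:
    water_km3 = ice_km3 * RHO_ICE / RHO_WATER   # volume as liquid water
    rise_km = water_km3 / OCEAN_AREA_KM2        # spread over the ocean
    return rise_km * 1e6                        # km -> mm

for loss in (150.0, 250.0):
    print(f"{loss:.0f} km^3/yr -> {ice_loss_to_sea_level_mm(loss):.2f} mm/yr")
# ~0.38-0.64 mm/yr, broadly consistent with the 0.25-0.55 mm/yr quoted above.
```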
These investigations, however, are still greatly limited in time and space. It is necessary to continue strengthening polar scientific expeditions, especially multidisciplinary comprehensive observations, and to deepen understanding of the dynamic and thermodynamic processes of sea ice and ice sheets. On this basis, the forecasting capabilities of sea ice models, ice sheet models and climate system models can be improved, so as to resolve the scientific puzzles of whether Arctic summer sea ice will disappear completely in the next 20 to 30 years, how rapidly the Greenland and West Antarctic ice sheets will ablate this century, and what their impact on the global climate will be.", "The earth's weather and climate are ultimately driven by the sun's radiant energy. The earth's elliptical orbit around the sun, its rotation, and the tilt of its rotation axis relative to the orbital plane give rise both to the relatively stable seasonal cycle of the earth's climate and to unstable weather events. The energy delivered by the sun to the earth is on the whole rather stable; within the limits of measurement accuracy it long appeared constant, and the total radiant energy injected by the sun at the top of the earth's atmosphere is therefore called the \"solar constant\". At the same time, however, humans have observed that the surface of the sun is not calm: there are \"sunspot groups\" distinguishable by the naked eye, and long-term observations of the area of the solar surface covered by sunspot groups reveal an 11-year cycle. Since the 20th century, with the development of physics and detection technology, richer phenomena of solar surface activity have been discovered. Besides the optically observed changes of sunspot groups, the microwave radiation emitted by the sun (the radio flux) presents the same long-term variations, and transient outbursts on the sun's surface also deliver strong magnetic fields and streams of particles into the earth's atmosphere. These eruptions likewise show the 11-year quasi-period as well as longer-period variations (such as the 22-year magnetic cycle and an approximately 80-year cycle). Given that solar radiation is the most important driver of the earth's weather and climate, it is natural to ask whether such quasi-periodic changes in solar activity also cause weather and climate changes. This is a probing question that has long fascinated the atmospheric science and applications communities. It first came to public attention in the 19th century, when grain market prices in England exhibited a periodicity similar to that of sunspot activity, which was attributed to grain harvests being affected by climate cycles driven by solar activity. Obviously this was a rough short-period statistical correlation, far from a reliable and stable relationship. The question has nevertheless drawn sustained attention and long-term exploratory research: so far, scholars at home and abroad have published no fewer than a thousand papers on the relationship between solar activity and weather and climate (hereinafter, the sun-earth relationship), focusing roughly on the following three aspects. The first is correlation analysis using historical solar activity data, including long-term sunspot and solar radio flux records and solar eruption data, together with climate data (temperature, pressure, precipitation, cloud, weather, lightning).
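A minimal sketch of such a lag-correlation analysis is given below; the two series are synthetic stand-ins for, say, an annual sunspot number and a station temperature anomaly, with a known lag built in for illustration.

```python
# Minimal sketch of a lag-correlation analysis between a solar-activity
# series and a meteorological series. Both series are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
sunspots = 80 + 60 * np.sin(2 * np.pi * years / 11.0)          # ~11-yr cycle
temperature = 0.01 * np.roll(sunspots, 2) + rng.normal(0, 1, years.size)

def lag_correlation(x: np.ndarray, y: np.ndarray, lag: int) -> float:
    """Pearson correlation of x(t) with y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

for lag in range(0, 5):
    print(f"lag {lag} yr: r = {lag_correlation(sunspots, temperature, lag):+.2f}")
# The correlation peaks at the lag built into the synthetic data (2 yr);
# real analyses must also test statistical significance.
```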
This kind of empirical analysis has gone on for more than 200 years; most of the early work analyzed the correlation (in period or lag) between solar activity and meteorological elements at single or multiple stations. Although such analyses produced some positive results, for example correlations between single-station drought and flood indices and the 11-year or 22-year (magnetic) cycles of solar activity, in general the statistical significance was poor. Since the 1980s, scientists have examined the sun-earth connection separately under different atmospheric modes and regional basic states. For example, when the solar activity cycle is analyzed separately for years in the westerly and easterly phases of the equatorial stratospheric quasi-biennial oscillation (QBO), the correlations are high and pass significance tests. Another example is analysis of the relationship between solar activity and the North Atlantic Oscillation index (NAO) and the stratospheric northern annular mode (NAM); the NAM signal can propagate down from the stratosphere to the troposphere and modulate tropospheric weather and climate. The basic idea of this type of research is that changes in total solar irradiance directly affect the dynamic and thermal structure of the atmosphere and ocean, thereby modulating weather and climate. One hypothesis in the physical process chain of the sun-weather-climate relationship is that the Schumann-Runge bands (175-200 nm) of the solar ultraviolet spectrum vary with large amplitude over the 11-year solar cycle, and that this ultraviolet radiation, through ozone photochemistry, changes the radiative heating of the upper equatorial stratosphere. This initial heating perturbation then alters temperature and wind in the stratosphere, and the disturbance may propagate downward to affect the tropospheric circulation through mechanisms such as wave-mean flow interaction. In the link where the stratosphere is perturbed by the solar cycle, the QBO of the equatorial lower stratosphere affects the propagation of planetary waves in the extratropics, so many correlation analyses have considered the joint effect of the 11-year solar cycle and the different phases of the QBO. In analyses of the correlation between solar activity and atmospheric temperature, the combined effects of multiple aspects of solar activity, such as changes in solar irradiance and solar magnetic activity (characterized by the geomagnetic activity index Ap), have begun to be investigated in recent years. The influence of solar activity on the distribution and concentration of stratospheric ozone has always been one of the hot topics in sun-earth research. Stratospheric ozone is the product of photochemical reactions driven by solar ultraviolet radiation, and the quantitative theory of these reactions is relatively mature; studying this causal chain helps us understand an important aspect of the sun-earth relationship.
Analysis of the many years of data already obtained by satellite and ground-based observations shows that, in addition to photochemical control by solar activity, dynamical transport also exhibits a solar-activity modulation of stratospheric ozone. This dynamical transport may include both vertical transport by convective activity and horizontal transport toward the extratropics of the northern and southern hemispheres. Such highly nonlinear transport must also be related to characteristic changes in the basic modes of the atmospheric circulation, such as planetary wave activity and the equatorial QBO, so analysis in this area is also a hot topic. The third aspect is that cosmic rays ionize the atmosphere, which in turn affects atmospheric clouds and electricity and thereby changes weather and climate; this is the most controversial line of exploration. At the heart of the controversy is not the physics of the individual effects but the chain of processes. It is generally accepted that the intensity of cosmic rays reaching the earth is modulated by solar activity, so cosmic ray particles entering the atmosphere fluctuate with the solar activity cycle. Showers of cosmic ray particles produce ions and ion-cluster aerosols from the upper to the lower atmosphere, which can act as condensation nuclei for ice and water clouds and change the atmospheric electrical environment, thereby possibly modulating the formation and dissipation of clouds and affecting climate. Using the global cloud distribution from the International Satellite Cloud Climatology Project (ISCCP) to analyze the relationship between solar activity and cloud distribution, different authors have obtained contradictory results; the core reason is dispute over the credibility of the long-term ISCCP record. Moreover, for the process chain of cosmic ray particles entering the atmosphere, how they combine with suitable conditions for cloud formation and thunderstorms to produce sufficient triggering and amplification effects is the key to the problem. All in all, the relationship between solar activity and weather and climate has been a very attractive and controversial research direction for more than a hundred years, drawing wide attention from the space and meteorological science communities and from global change scholars. It is multidisciplinary research requiring strong support from experiment and theory. The core of the problem is to understand the process chain fully in physical terms, and to explain and confirm it with complete multi-factor observational data. An important aspect of the solar activity-weather-climate relationship is to understand it on longer time scales (decades, centuries or even longer), for which historical data on the sun and climate are needed; this requires proxy indicators of solar activity and climate, such as isotope concentration ratios, obtained from tree rings, ice cores and sedimentary records in paleoenvironmental archives.
Understanding gained on this basis will be very helpful for improving the ability of climate prediction.", "Data assimilation is the use of forecasts and observations from numerical models, together with their respective uncertainties, to estimate the state of the atmosphere and its uncertainty as accurately as possible. Among the various kinds of information to be fused, the estimation of model uncertainty is a very difficult problem, and it significantly affects data assimilation and numerical prediction results. Data assimilation methods take many forms, but they can be roughly divided into two categories: variational methods [1] and the ensemble Kalman filter (EnKF) [2]. The variational method obtains the analysis field by minimizing the distance between the analysis on the one hand and the forecast and observations on the other. Its background error is a statistic of long-term forecast errors and does not change with the flow; it is difficult to add a model error term in this framework, so the model is generally assumed to be error-free. The ensemble Kalman filter uses a Monte Carlo approach, approximating the background error covariance by the sample covariance of a short-term ensemble forecast, so the background error covariance changes with the system under study (Fig. 1). Since the EnKF appeared in 1994, it has been widely used in models of different complexity and scale [3], and the meteorological services of Canada, France and Italy have put the EnKF into operation. At the end of 2008 the WMO organized an international workshop in Argentina comparing the EnKF with four-dimensional variational methods; the EnKF and 4D variational assimilation have become the two main candidates for future operational data assimilation. (Fig. 1: flow diagram of the EnKF; the black dots denote ensemble members and the overbar the ensemble mean. After Aksoy (2003, private communication).) As a developing technology, the EnKF still has many problems that have not been solved well, including the estimation of model error, the generation of initial perturbations, and the balance between physical quantities. The most immature and most difficult of these is the estimation of model error. Since the EnKF uses model ensemble forecasts to estimate the background error covariance, model error estimation is a problem the EnKF must face. Studies have shown that ignoring model error can underestimate the spread of the background ensemble by 50% [4], which causes the analysis to drift away from the observed, true atmosphere and the assimilation system to collapse. The estimation of model error is therefore directly related to the success or failure of EnKF assimilation of real data. Model errors come mainly from truncation error, errors in the parameterization of sub-grid physical processes, boundary errors of limited-area models, and the growth with time of initially small-scale errors due to the randomness and instability of the atmosphere. At present, model error is generally estimated by randomly perturbing the background field or the model itself with some empirical method. Perturbations of the background field mainly include multiplicative inflation, additive inflation, and relaxation of the background perturbations. Random perturbation of the background field generally changes the ensemble spread but has difficulty correcting systematic bias; perturbing the model itself can do better in this respect.
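Before turning to the model perturbation strategies, it may help to see the basic update that all these methods feed into. Below is a minimal sketch of the stochastic (perturbed-observation) EnKF analysis step for a tiny state vector; it illustrates the textbook formula, not any operational implementation, and all sizes and values are made up.

```python
# Minimal sketch of a stochastic (perturbed-observation) EnKF analysis step.
# Illustrative only: tiny state, linear observation operator, made-up values.
import numpy as np

rng = np.random.default_rng(42)
n_state, n_obs, n_ens = 3, 2, 20

# Background ensemble (n_state x n_ens): short-term ensemble forecasts.
xb = rng.normal(0.0, 1.0, (n_state, n_ens))
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])          # observe the first two components
R = 0.5 * np.eye(n_obs)                  # observation error covariance
y = np.array([1.0, -0.5])                # observations

# Sample background covariance from the ensemble perturbations.
Xp = xb - xb.mean(axis=1, keepdims=True)
Pb = Xp @ Xp.T / (n_ens - 1)

# Kalman gain K = Pb H^T (H Pb H^T + R)^-1.
K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)

# Update each member with independently perturbed observations.
xa = np.empty_like(xb)
for i in range(n_ens):
    y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R)
    xa[:, i] = xb[:, i] + K @ (y_pert - H @ xb[:, i])

print("analysis mean:", xa.mean(axis=1))
```

Because Pb is computed from the forecast ensemble, the background error covariance here is flow-dependent, which is exactly the property that distinguishes the EnKF from the static background error of variational methods.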
At present, perturbations of the model mainly include perturbing the model forcing fields, superimposing random perturbations directly on the right-hand side of the prognostic equations, and multi-parameterization-scheme ensembles. Although perturbing the model forcing fields has some effect in ensemble forecasting, its effect in data assimilation systems is not obvious. Superimposing random perturbations directly on the right-hand side of the prognostic equations is better suited to unforced models, such as coupled air-sea models, and is harder to apply to stand-alone atmospheric models. Multi-parameterization ensembles simulate the error of the model physics by using different physical parameterization schemes for different members. Previous studies [5,6] compared different cumulus convection parameterization schemes for several weather cases and found that, because each scheme has its own strengths and weaknesses, no cumulus convection scheme is statistically superior to the others. This suggests that different schemes may represent different trajectories in phase space, and that multi-scheme ensembles, by describing the phase space of the model atmosphere more comprehensively, may simulate model error effectively. Multi-parameterization ensembles were first used for ensemble forecasting at different scales [7-8]. Meng and Zhang [9] applied the multi-scheme ensemble method to model error estimation in ensemble data assimilation and showed that it can significantly improve the performance of the EnKF. However, because the parameterization schemes available for a given physical process are limited, the EnKF will always have multiple members using the same scheme, which may make the ensemble spread too small. The methods above are all at the experimental stage and involve considerable arbitrariness. Reasonably estimating model error requires high-resolution observations to evaluate the schemes for different sub-grid physical processes. If the problem of model error estimation can be solved properly, it will certainly help improve data assimilation and hence the accuracy of numerical prediction, which is of great scientific significance and application value for improving the simulation and forecasting of mesoscale disastrous weather such as rainstorms and typhoons.", "Atmospheric radiation is a branch of atmospheric physics that studies the transport and conversion of radiant energy in the earth's atmosphere. After radiation (electromagnetic waves) enters the atmosphere, it undergoes a series of interactions with the atmospheric medium, as a result of which the states of both the atmosphere and the electromagnetic waves change. The study of this interaction and its effects constitutes the main content of atmospheric radiation research. We now know that the characteristics of radiation (electromagnetic waves) can be expressed by a series of wave parameters such as frequency, amplitude, phase, polarization state and energy, whose distribution in space constitutes the radiation field. The atmosphere likewise has parameters characterizing its state, such as temperature, humidity, pressure and wind speed, and these parameters also constitute fields.
The interaction and mutual change of these two sets of fields occur through a series of processes such as absorption, emission, scattering, refraction and phase delay, and this is the subject of atmospheric radiation research. From the standpoint of effects on the atmosphere, the energy transferred by radiation is of greatest concern, because the energy budget affects the temperature of an air mass and thereby causes changes in the pressure and wind fields; the motion of the entire atmosphere is controlled by this factor. The energy entering and leaving an air parcel through radiative processes is related not only to the state of the atmosphere itself but also to the characteristics of the radiation field. Atmospheric radiation research must therefore study how the radiation field changes after entering the atmospheric medium. If we focus mainly on the energy exchange between the radiation field and the atmosphere, the theory of radiative transfer provides the basis for a quantitative description of their interaction: given the atmospheric properties and the incident radiation field, it can quantitatively estimate the distribution of the radiation field and hence the radiation budget of the atmospheric medium. From radiative transfer theory we can derive radiative transfer equations suited to different problems and develop many mathematical and physical methods of solution. Of course, the equations can also be dispensed with, and the required radiation field distribution and its laws of variation obtained directly from the physical picture of radiative transfer. The study of atmospheric radiation also concerns changes in the properties of the electromagnetic waves themselves. Since the characteristics of electromagnetic waves change after interacting with the atmospheric medium, they carry information about the medium. If we can establish that the change of a certain characteristic of the electromagnetic wave is related to a certain atmospheric parameter, then by measuring that change we can in principle retrieve that parameter along the path of the wave; this is atmospheric remote sensing. Both aspects ultimately converge on one problem: obtaining the distribution of the radiation field under given atmospheric conditions (including boundaries) according to the principles of radiative transfer. The basic principles of radiative transfer were proposed in the 1930s. In 1950 the Indian-American astrophysicist S. Chandrasekhar published the book \"Radiative Transfer\", summarizing his theory of radiative transfer in stellar and planetary atmospheres and making important contributions to the theory and methods of the field. In the 1960s, R. Goody of the United Kingdom and K. Ya. Kondratiev of the Soviet Union also did much work on radiative transfer in planetary atmospheres, which advanced the study of radiative transfer and its effects to a certain extent. The theory of radiative transfer has made great progress for simple atmospheric conditions; at present there are very good software packages that can solve the radiative transfer problem in a horizontally stratified atmosphere or even a spherically stratified atmosphere.
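As a toy illustration of the plane-parallel (horizontally stratified) case just mentioned, the sketch below computes direct-beam transmission through a stack of layers with Beer's law; the layer optical depths are made-up values, and scattering and emission are ignored.

```python
# Toy plane-parallel calculation: direct-beam transmission through a
# horizontally stratified atmosphere using Beer's law. Optical depths
# are illustrative; scattering and emission are ignored.
import math

layer_optical_depths = [0.05, 0.10, 0.20, 0.15]  # top to bottom, assumed
solar_zenith_deg = 30.0
mu = math.cos(math.radians(solar_zenith_deg))    # slant-path factor

flux = 1361.0  # W/m^2, roughly the solar constant at the top of atmosphere
for k, tau in enumerate(layer_optical_depths, start=1):
    flux *= math.exp(-tau / mu)                  # attenuation in layer k
    print(f"below layer {k}: {flux:.1f} W/m^2")
# In this 1-D setting the field varies only with height (and beam angle);
# a broken cloud field would make it vary in all three dimensions.
```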
Using such software packages and data sets, we can obtain the distribution of the radiation field in the atmospheric medium under horizontally or spherically stratified conditions, including the radiance field, the radiant fluxes, and even the polarization state of the radiation. The problem is that horizontal or spherical stratification is only a simplified model; the actual atmosphere is not so simple. For example, an atmosphere with scattered clouds is the kind of non-uniform atmosphere we often see, and for such an atmosphere the radiative transfer problem is much more complicated than for a horizontally stratified one. In a horizontally stratified atmosphere, the spatial distribution of the radiance field, apart from its dependence on the directions of the incident and outgoing light, is a one-dimensional problem: it varies only with height. In a non-uniform atmosphere, however, the radiance field varies with spatial position, making it a three-dimensional problem. Although the existing radiative transfer equation can still describe the behavior of such a radiation field, the computational difficulty is much greater than in the one-dimensional case. Before considering specific methods of solution, we first need to consider, from the standpoint of the physical model, what changes the non-uniformity of the atmosphere brings to the radiation field, and what impact it has on the energy exchange due to radiative transfer and on remote sensing retrievals. We can start with a simple example: comparing the radiance fields associated with stratiform and cumuliform clouds. Stratiform clouds can be treated as a horizontally uniform atmosphere, while cumuliform clouds are horizontally non-uniform. In a stratiform cloud, only the top and bottom are external surfaces; all other directions face the interior of the cloud. The top is irradiated by sunlight, while the other parts receive no external illumination. When photons are scattered in the cloud, only those scattered toward the vertical can leave the cloud through the upper and lower boundaries and cease to participate in multiple scattering; photons scattered sideways remain in the cloud and continue to be scattered. The situation of a cumuliform cloud is different: besides being illuminated by sunlight at the top, its sides may also be illuminated by sunlight or other external sources, although some faces may lie in shadow. The scattering of photons is also different from that in stratiform clouds, because photons may leave the cloud through its side boundaries as well. These differences change the reflection and absorption of solar radiation by clouds of the same thickness, thereby affecting the radiation budget of the space between clouds and even of the ground under cloudy conditions. In remote sensing retrieval problems, uniform and non-uniform cloud layers likewise introduce complex differences.
Besides the difference in outgoing radiation produced by different cloud layers, if part of a pixel is cloudy and part is clear, its radiance can be regarded as a linear superposition of the contributions of the different parts; but since the retrieval process is often nonlinear, the retrieval result cannot be obtained by simple linear superposition. This introduces additional retrieval errors, which need to be estimated theoretically and handled with appropriate methods. At present, the problem of radiative transfer in a non-uniform atmosphere can only be solved by the Monte Carlo method; although this method places no restrictions on atmospheric conditions, its computational efficiency is very low, which limits its range of application. Another difficult problem is the variability of atmospheric conditions: the state of clouds in the atmosphere is ever-changing and hard to describe in mathematical language. What laws govern the transfer of radiation in a heterogeneous atmosphere, how much it affects the energy balance of the earth-atmosphere system, and how much it affects remote sensing retrievals are all questions worthy of further study.", "In the past 30 years or so, China has gone through a development process that took the developed countries a century. In this process, urban and regional air quality deteriorated rapidly. The country has invested heavily in air pollution control, and although the rise of traditional air pollutants such as SO2, NO2 and inhalable particulate matter (PM10) has been curbed to a certain extent, pollution by oxidants, represented by ozone, has reached a very serious level. At the same time, atmospheric visibility is decreasing rapidly and haze occurs frequently, forming regional compound air pollution [1]. An important reason for this complex pollution phenomenon is that the oxidation capacity of the atmosphere is constantly increasing. The concentration of free radicals in the atmosphere (such as the OH and HO2 radicals) is an indicator of its oxidation capacity. However, because these radicals occur at extremely low concentrations and have short lifetimes, there are very few field measurements of atmospheric OH and HO2 radicals internationally. Field observations in a suburban area of the Pearl River Delta in China show that the concentration of OH radicals in the atmosphere is 3 to 5 times higher than expected. Moreover, comparison of the observations with air quality model simulations revealed a new mechanism for the generation of OH radicals (Fig. 1). This discovery not only reveals the existence of high atmospheric oxidation capacity in China, but also calls for further in-depth exploration of the formation of free radicals. (Fig. 1: schematic of the measured atmospheric OH radical chemistry in the Pearl River Delta [2]; P denotes the production rate and L the removal rate; red arrows indicate chemical processes that are essentially understood, and blue arrows processes that remain unexplained.) Another important reason is the complex coupling between the generation of oxidants and of particulate matter in the atmosphere [3,4]; quantitative research on this mechanism is still very scarce.
Studies in China's urban agglomerations (such as Beijing-Tianjin and the Pearl River Delta) have revealed some important phenomena; for example, atmospheric HONO contributes significantly to radical chemistry, and the formation of HONO may be related to heterogeneous processes on particle surfaces [5]. In addition, the transformation of gas-phase species into the particle phase is a key factor affecting the chemical composition of particulate matter, and the observed formation of new particles provides direct field evidence of it [6]. Current research shows that secondary organic aerosol (SOA) formed by secondary conversion accounts for a very important share of atmospheric particulate matter. Although a large number of atmospheric chemistry chamber experiments have been carried out (Fig. 2), simulation studies to date may have underestimated SOA formation under field conditions by about an order of magnitude. Quantitative research on SOA formation has a significant bearing not only on air pollution but also on climate change. (Fig. 2: mass percentages of gaseous intermediate products and secondary organic matter (SOA) after chemical transformation of different atmospheric volatile organic compounds (VOCs); unID denotes the fraction not yet quantitatively identified [7].) The related science and decision-making are of special importance to China, whose economy and society will continue to grow at high speed. Atmospheric compound pollution manifests itself as increased concentrations of atmospheric oxidizing species and fine particles, a marked decrease in atmospheric visibility, and a trend of environmental deterioration spreading across entire regions; in essence, it manifests as interactions between species, interactions between sources and sinks, and the coupling of the multiple processes by which species are transformed in the atmosphere [1]. This new type of pollution challenges current air pollution control. From the perspective of pollution hazards, the harm of these pollutants to ecosystems and human health far exceeds that of conventional SO2, NO2 and PM10; under conditions where multiple pollutants coexist, it is necessary on the one hand to understand more precisely the health and ecological hazards of secondary pollutants (O3 and fine particles), and at the same time to pay close attention to synergy or antagonism among their effects. From the perspective of scale, the damage caused by compound air pollution is regional, especially obvious in urban agglomerations, and far exceeds administrative boundaries in China; from the perspective of control, the geographic areas in which pollutant emissions occur and in which they do harm also cross the administrative boundaries of cities and even provinces. These essential changes in air pollution impose new requirements on air pollution control strategies.
According to the occurrence, development and scale of impact of compound air pollution itself, establishing and implementing regional joint prevention and control and the corresponding management mechanisms will be a major scientific and technological issue for the sustainable development of China's cities and urban agglomerations in the future.", "Climate sensitivity refers to the steady increase of the global annual mean temperature caused by a given global radiative forcing, that is, the response of the global mean surface temperature ΔTs to the radiative forcing ΔF. A climate sensitivity parameter λ can be defined to represent this linear relationship: ΔTs = λΔF. This equation also represents the transition of the surface-troposphere system from one equilibrium state to another under the forcing of an external radiative perturbation. The concept of climate sensitivity was first proposed in studies with one-dimensional radiative-convective models. In such models λ is approximately constant across different radiative forcings, generally about 0.5 K/(W·m−2), so there may be a universal relationship between forcing and response. Because of this feature, radiative forcing is considered a useful tool for approximating the relative climate impacts of different applied radiative perturbations. For more complex climate models, the value of λ may differ from model to model: when the impact of CO2 doubling on climate is considered, the climate changes simulated by different models are not the same, and this difference in climate response is thought to be largely the result of differing climate sensitivities among the models. Better understanding the differences in climate sensitivity between models, and better defining how this parameter changes, is a prerequisite for improving climate models and making better projections of climate change. Generally speaking, climate sensitivity refers to the global mean temperature change caused by the radiative forcing arising when the atmospheric CO2 concentration doubles. At present there are three main measures. Equilibrium climate sensitivity [1]: the global mean temperature change, once the climate system or climate model has reached equilibrium, due to the radiative forcing caused by a doubling of atmospheric CO2 concentration (using an atmospheric model coupled to a full ocean model or to a mixed-layer upper-ocean model). In the simple thermodynamic budget equation, energy input and output balance when the new equilibrium state is reached. Equilibrium climate sensitivity provides a direct measure of a system's response to a given change in forcing and can be used to compare the responses of different models, calibrate climate models, and quantify temperature changes in other scenarios. In earlier IPCC assessments, climate sensitivity was mainly derived from calculations with AGCMs coupled to mixed-layer ocean models; in this case, since there is no heat exchange with the deep ocean, the model integration reaches equilibrium after a few decades. For a fully coupled air-sea model, however, heat exchange with the deep ocean delays equilibrium, and the model often needs to be integrated for thousands of years, rather than decades, to reach it. This greatly increases the required computation time.
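As a worked example of the linear relation ΔTs = λΔF above: the forcing of about 3.7 W/m2 for doubled CO2 is a commonly cited value, while the sensitivity parameter used here is an assumed illustrative value, since models differ.

```python
# Worked example of dT = lambda * dF.
# dF for doubled CO2 (~3.7 W/m^2) is a commonly cited value; the
# sensitivity parameter is an assumed illustrative value (models differ).

forcing_2xco2 = 3.7          # W/m^2
sensitivity_param = 0.8      # K/(W m^-2), assumed

delta_t = sensitivity_param * forcing_2xco2
print(f"equilibrium warming ~ {delta_t:.1f} K")  # ~3.0 K
# A lambda of ~0.5 K/(W m^-2), as in early 1-D radiative-convective
# models, would instead give ~1.8 K for the same forcing.
```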
Effective climate sensitivity: a measure of the strength of the feedbacks, and hence of the sensitivity, at a particular time; it may change with the forcing and with the climate state. As a coupled model integration approaches the new equilibrium state, the effective climate sensitivity increases and approaches the equilibrium climate sensitivity. Although the definition of equilibrium climate sensitivity is straightforward and applies to the specific case of equilibrium climate change after CO2 doubling, the required integration time is very long for coupled models; effective climate sensitivity is one way around this difficulty. At equilibrium, the effective climate sensitivity becomes the equilibrium climate sensitivity. Transient climate response (TCR): in a climate change integration, the temperature change at any time depends on the interplay of all processes affecting energy input, output and ocean heat storage. For the special case of a 1% per year increase in atmospheric CO2 concentration, the change in global mean temperature at the time the CO2 concentration doubles is called the transient climate response of the system. This value can be used to describe and calibrate the differences in the responses of different models to the same standard forcing, and forcing scenarios similar to the TCR case can likewise be used to compare models. In the late 1970s, based on simulations with two models, the equilibrium sensitivity for CO2 doubling was estimated to lie in the range 1.5-4.5°C. Since then the models have been greatly improved and compared more comprehensively with observations, yet the range of model-calculated climate sensitivity has not narrowed significantly: the first, second and third IPCC assessment reports all retained the range 1.5-4.5°C. The equilibrium climate sensitivity given in the IPCC Fourth Assessment Report is 2.1-4.4°C, with a mean of 3.2°C, similar to the sensitivity in the Third Assessment Report (2001) [2]. The values in the second, third and fourth IPCC assessment reports are 3.8±0.78°C (from 17 models), 3.5±0.92°C (from 15 models) and 3.26±0.69°C (from 18 models), respectively. For the transient climate response, the second (1995) and third (2001) assessment reports obtained 1.1-3.1°C (mean 1.8°C) and 1.3-2.6°C (median 1.6°C), while the IPCC Fourth Assessment Report published in 2007 gives 1.5-2.8°C (median 2.1°C), a somewhat narrowed range. The mean TCR is generally lower than the equilibrium sensitivity. Based on these results, the equilibrium climate sensitivity, or the global mean equilibrium warming under doubled CO2, is likely in the range 2-4.5°C, with a most likely value of about 3°C, and is very likely greater than 1.5°C. Values significantly greater than 4.5°C cannot be ruled out for fundamental physical reasons and because of data constraints, but their agreement with observations and proxy data is generally worse than for values in the 2-4.5°C range. Climate sensitivity depends both on the type of forcing exerted on the climate system and its geographic and vertical distribution, and on the strength of the feedback processes; since the feedback processes are related to the mean climate state, it also depends on that state.
The key physical processes involved in climate sensitivity are water vapor, the temperature lapse rate, surface albedo (mainly through changes in ice and snow extent), and cloud feedback. Climate models have improved significantly in recent decades, especially in the parameterization of clouds, the boundary layer and convection, and on this basis many experiments on equilibrium climate sensitivity have been carried out. Some models show changes in climate sensitivity due to improved cloud parameterization or cloud-radiation characterization. In most models, however, changes in climate sensitivity cannot be attributed to the change in the treatment of any specific physical factor, because the parameterized physical factors interact nonlinearly (the effect of changing factors A and B together is not the sum of the effects of changing each separately), and the global effects of the individual changes roughly cancel. The result is that the parameterizations of key physical processes in climate models have improved markedly without producing large changes in climate sensitivity. To better understand climate sensitivity and reduce its uncertainty, the various climate feedback processes must be understood. The importance of the climate feedback processes can be explained simply. Assume that the radiative forcing at the top of the atmosphere when CO2 is doubled is 4.0-4.5 W/m2. Stratospheric adjustment (taking about one month, a fast process) reduces this by about 0.5 W/m2, and the remaining 3.5-4.0 W/m2 of radiative forcing, equivalent to a forcing exerted at the tropopause, adjusts the surface-troposphere temperature (because of the ocean and other factors, this adjustment takes several decades and is a slow process). Suppose first that temperature is the only climate variable that changes in response to this radiative forcing; then the air temperature would rise by about 1.2°C to restore the radiation balance. But rising temperature, that is, a warming climate, causes changes in other atmospheric and surface variables or properties, and these changes in turn alter the energy balance through feedback processes, further modifying the temperature. So the final temperature increase is not 1.2°C but a higher or lower value, depending on whether the feedback is positive or negative. Suppose a variable A changes first for some reason, and this initial change leads to a change in another variable B; the change in B is the response to the change in A, and the magnitude of the response is measured by the climate sensitivity. If the change in B further drives A in the direction of its original change, the feedback of B on the initial change of A is positive and tends to reinforce it, while negative feedback is the opposite and reduces the initial change. Let Ts be the surface temperature responding to the radiative forcing, and let yi be a variable affected by climate change, such as water vapor content, ice and snow cover, or low cloud amount. The dimensionless feedback factor of that variable can then be written f_i = λ·(∂F/∂y_i)·(dy_i/dT_s) (2). If the two derivative terms on the right-hand side of equation (2) have the same sign, the feedback factor is positive, and vice versa.
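A minimal numerical sketch of how such feedback factors amplify the no-feedback warming is given below, using the standard gain relation ΔT = ΔT0/(1 − f); the 1.2°C no-feedback value is from the text, while the individual feedback factors are illustrative assumptions.

```python
# Minimal sketch: total feedback factor f = sum of f_i, and the amplified
# warming dT = dT0 / (1 - f). dT0 = 1.2 C is the no-feedback warming from
# the text; the individual feedback factors are illustrative assumptions.

dT0 = 1.2  # C, no-feedback warming for doubled CO2

feedbacks = {
    "water vapor + lapse rate": 0.30,  # assumed net positive
    "ice-snow albedo": 0.10,           # assumed positive
    "clouds": 0.15,                    # assumed positive, most uncertain
}

f_total = sum(feedbacks.values())
dT = dT0 / (1.0 - f_total)
print(f"total feedback factor f = {f_total:.2f}")
print(f"final warming ~ {dT:.1f} C")   # ~2.7 C with these assumptions
# Positive f amplifies the initial warming; f approaching 1 would imply
# a runaway response, while negative f damps the initial change.
```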
For example, when Ts increases, the planetary albedo yi decreases, which further increases Ts: this is a positive feedback (both terms are negative). The total feedback factor is obtained by linearly adding the feedback factors of the individual variables, f = Σ f_i, where the signs of the individual factors are taken into account. It should be pointed out that many processes and interactions in the climate system are nonlinear, that is, there is no simple proportional relationship between cause and effect, and small perturbations may eventually cause significant changes in the climate system. But that does not mean the future state of the climate system is completely unpredictable. For highly nonlinear climate system changes, some aspects must be predicted by statistical and empirical methods, but in most cases the climate system can be treated approximately as responding quasi-linearly to external radiative forcing, so that forecasts of the outcome can be made. The main feedback mechanisms in the climate system include the following. Water vapor and temperature lapse rate feedback. With wind speed constant, an increase in temperature increases evaporation, raising the amount of water vapor in the atmosphere; water vapor is a greenhouse gas, which further raises the temperature, so water vapor is a positive feedback. Calculations show that it would increase the global mean warming due to doubled CO2 by about 60%, raising the equilibrium climate sensitivity to about 2°C. The feedback from increased water vapor in the upper troposphere is most pronounced, because most of the longwave radiation emitted to space originates in that layer. As the climate warms, the saturation specific humidity (the water-holding capacity of the atmosphere) increases, and according to the Clausius-Clapeyron equation the actual specific humidity of water vapor in the atmosphere also increases, generally keeping the relative humidity roughly constant; both observations and models support this near-constancy of relative humidity under warming. As the water vapor content of the atmosphere changes, the temperature structure, or lapse rate, changes through a feedback that enhances warming in the tropical upper troposphere and produces a negative feedback on surface temperature. The combined water vapor/lapse rate feedback increases the warming by around 50%. Snow and ice albedo feedback. Ice and snow surfaces are strong reflectors of solar radiation, and albedo is the measure of this reflectivity. If a low-albedo sea surface (albedo 0.1) or land surface (albedo 0.3) is covered by high-albedo sea ice (albedo ≥ 0.6), the solar radiation absorbed by the surface falls to less than half of its original value, further cooling the surface, and vice versa: this is the positive ice-albedo feedback. With climate warming, high-albedo ice and snow cover will decrease significantly, lowering the albedo and increasing the absorbed solar radiation, which increases the warming produced by doubled CO2 by another 20% or so. Cloud feedback. Clouds strongly absorb, reflect and emit radiation, and their resulting effect on the energy budget as climate changes is called cloud feedback. Cloud feedback is very complex; its strength and sign depend on the specific type, height and optical properties of the clouds, but it can basically be divided into two kinds.
Clouds reflect sunlight, returning part of the solar radiation incident on their tops to space and reducing the total energy input to the climate system, and thus have a cooling effect. On the other hand, clouds absorb the longwave radiation emitted by the surface and the atmosphere below them while emitting thermal radiation themselves; like greenhouse gases, they reduce the heat lost from the ground to space and thereby warm the layers below the cloud. Generally speaking, low clouds are mainly reflective and tend to cool the ground, while high clouds mainly act as a blanket and tend to warm it, so whether the total cloud feedback is positive or negative depends on which of these two effects dominates. In the modern climate, clouds cool the climate (the globally averaged cloud radiative forcing is negative). Under global warming, the cooling effect of clouds can strengthen or weaken, thereby producing a radiative feedback on the warming: if reflective cloud increases, the global mean surface air temperature falls, a negative feedback; if reflective cloud decreases, the temperature rises, a positive feedback. Climate change is very sensitive to changes in clouds (cloud amount, extent and structure), which also strongly affects the sensitivity of climate models. A change of cloud amount of only a few percent (say 3%) has a definite impact on climate, and the net warming or cooling it causes can equal or even exceed the warming caused by greenhouse gases. The treatment of cloud feedback therefore clearly affects the calculation and prediction of global climate change. Cloud feedback is one of the most uncertain factors in climate change and its prediction, and differences in cloud feedback among climate models are the main reason for the obvious differences in their climate sensitivities; representing the cloud feedback process faithfully in climate models is thus an important route to improving predictions of future climate change. Ocean feedback. The feedback effect of the ocean operates in three ways. First, the ocean is the main source of water vapor in the atmosphere: temperature changes alter evaporation from the ocean and thus the atmospheric water vapor content. (Fig. 1: schematic of the main feedback processes in the climate system; solid lines indicate positive feedbacks, dotted lines negative feedbacks.) Second, the ocean has a large heat capacity, meaning that raising the temperature of the ocean takes far more heat than raising the atmosphere by the same amount; in climate system changes the ocean therefore warms much more slowly than the atmosphere, and its large thermal inertia plays a major role in controlling and regulating the pace of atmospheric change. Third, circulation through the interior of the ocean (such as the Atlantic thermohaline circulation) transports heat, redistributing it throughout the climate system.
The heat transported by this ocean circulation is very large in the Atlantic region; between northwestern Europe and Iceland, for example, the heat input is comparable to the solar radiation received at the sea surface there. This is the main reason winters in the Nordic region are relatively warm. It is estimated that if this circulation stopped, the temperature in northern Europe would be about 10°C lower than now, that is, a significant cooling of the climate would occur. Land feedback. The net radiation absorbed by the land surface (the sum of net solar and longwave radiation) is returned to the atmosphere mainly through the fluxes of sensible and latent heat (evapotranspiration), which directly affect regional temperature and humidity and subsequently other variables of the climate system. As mentioned earlier, soil moisture and the state of vegetation largely determine the net radiation received by the surface. The interaction between the land surface and the atmosphere must therefore be properly represented in climate models, with special attention to the links between vegetation and the land energy, water and carbon cycles, and to land use change. Carbon cycle feedback. Climate can alter the sources and sinks of CO2 and CH4 through its effects on the terrestrial biosphere and the oceans, changing their atmospheric concentrations, which in turn changes temperature further. For CO2 this radiative feedback through the carbon cycle is generally positive: not only does the atmospheric CO2 concentration rise faster, but some model calculations show the temperature rise to be about 1°C higher than when the carbon cycle feedback is not considered. Among the feedbacks above, the water vapor and cloud feedbacks respond to climate warming essentially simultaneously, while the response of sea ice and snow takes several years; these may be called fast feedback processes. The time scale of the vegetation and carbon cycle feedbacks is decades, and other feedbacks, such as the shrinkage of the continental ice sheets, the dissolution of carbonate sediments in the ocean and the enhancement of chemical weathering on land (the latter two processes can reduce the atmospheric CO2 concentration), take hundreds or thousands of years to complete; these are collectively referred to as slow feedback processes.", "The Qinghai-Tibet Plateau is the highest and largest plateau in the world, and its dynamic and thermal forcing have an important impact on the atmospheric circulation and on weather and climate in the northern hemisphere. The dynamic forcing of the plateau splits the westerlies into two branches, south and north, and the India-Myanmar trough associated with the southern branch has an important influence on the onset of the Asian monsoon. The plateau is a cold source in winter and a heat source in summer; during the seasonal march from winter to summer, changes in the meridional thermal contrast between the plateau and the Indian Ocean and in the zonal thermal contrast between the plateau and the Pacific also have an important impact on the onset and advance of the Asian monsoon.
The atmospheric circulation over the plateau and its surroundings also adjusts to the plateau's thermal forcing through \"thermal adaptation\" [1]. The Qinghai-Tibet Plateau is a key region influencing the Asian monsoon and climate anomalies in East Asia. The sea-land-air interaction in this region is complex and is one of the key factors behind large-scale, persistent or explosive meteorological disasters in China; it has long been a frontier issue in international climate dynamics. The interaction between the plateau's special topography and the circulation also produces the southwest vortex, which, after developing and moving eastward, often triggers severe weather in eastern China; the southwest vortex has played an important role in many major rainstorms and floods affecting China [2]. In terms of interannual variability, when the plateau heat source is strong (or weak) in summer, the lower troposphere over the plateau and its adjacent areas shows anomalous cyclonic (or anticyclonic) circulation and the lower layer over China's Yangtze River Basin shows anomalous southwesterly (or northeasterly) wind, corresponding to a strong (or weak) East Asian summer monsoon; precipitation increases (or decreases) in the upper reaches of the Yangtze River and in the Huaihe River Basin, while that in South China decreases (or increases) [3-4] (Fig. 1). The thermal state of the plateau in spring also bears on the anomalous summer climate of East Asia: stronger (or weaker) plateau sensible heating in spring often corresponds to more (or less) precipitation in the middle and lower reaches of the Yangtze River in the following summer [5]. Against the background of global warming, China's climate in the second half of the 20th century showed many distinctive features of change. A noteworthy phenomenon is the unusually pronounced warming of the Qinghai-Tibet Plateau. At the same time, the diurnal temperature range on the plateau (the difference between the daily maximum and minimum temperatures) has decreased significantly, showing that the warming occurs mainly at night, which makes it easy to associate with an intensified greenhouse effect [6]. (Fig. 1: precipitation anomalies, in mm/d, at 160 stations in China in July corresponding to abnormally strong (a) and weak (b) sensible heating over the Qinghai-Tibet Plateau.) Owing to the continuous weakening of the surface wind speed and the reduction of the surface-air temperature difference, the plateau sensible heat flux has weakened steadily since the mid-1980s (Fig. 2); a rough bulk-formula sketch of this reasoning is given below. Together with an intensified radiative cooling effect, and although the latent heat of condensation has increased, the atmospheric heat source over the plateau (the vertically integrated sum of the diabatic heating components of the air column) has tended to weaken in spring [7]. (Fig. 2: relative change rates, in %, of surface air temperature Ta (a), surface temperature Ts (b), 10 m wind speed V0 (c) and surface sensible heat flux (d) on the Qinghai-Tibet Plateau from 1980 to 2003.) Apart from these changes, the climate change phenomenon in China that has attracted the most attention in recent decades is the persistent summer pattern of \"flooding in the south and drought in the north\" in the precipitation of the east.
That is, China went through a rainy period in the 1950s; in the 1980s the rain belt was located in North China, and it then gradually moved south to the middle and lower reaches of the Yangtze River, forming the \"flooding in the south and drought in the north\" pattern [8]. This decadal-scale climate transition has had a huge impact on production and people's lives, and it receives close attention in national and regional strategic planning. There are many views on the causes of the interdecadal climate transition in East Asia, invoking anomalous thermal conditions over the Qinghai-Tibet Plateau, changes in the middle and upper tropospheric circulation, tropical ocean forcing, the Pacific Decadal Oscillation, interdecadal changes in ENSO, anthropogenic aerosol emissions, and so on, but a firm conclusion remains elusive. There appears to be a correspondence between the weakening of the plateau's spring heat source and the weakening of the East Asian summer monsoon. Recent research progress shows, however, that the interdecadal transition of the East Asian climate is a regional manifestation of an interdecadal transition of the northern hemisphere climate, closely related to the overall change of the Asian monsoon, and that it is not confined to summer but appears in all seasons. It is therefore necessary to study the Asian monsoon as a whole, which is more conducive to a fundamental understanding of the mechanism of the interdecadal climate transition in East Asia. We now urgently need answers to the following questions. Is human activity the main cause of the change in the heat source over the Qinghai-Tibet Plateau and of climate change in East Asia? How are the two changes related? To what extent will the warming of the Qinghai-Tibet Plateau affect the \"southern flood and northern drought\" in eastern China? The answers are limited by the current level of climate dynamics research, by the available observations and by the conditions for numerical simulation. Because of the harsh natural environment, the Qinghai-Tibet Plateau, especially its central and western parts, lacks long-term observations. Satellite data can effectively compensate for the lack of observations in remote areas, but their relatively short time series is undoubtedly a bottleneck for applying them to climate change studies. Numerical simulation is receiving more and more attention as a means of understanding the physical mechanisms of climate change, but many uncertainties remain in the simulation and prediction of climate change, mainly because of the present level of model development and because some factors affecting climate cannot yet be described in the models. Even under realistic external forcing including aerosols and greenhouse gases, current coupled models find it difficult to reproduce the interdecadal climate change in East Asia that occurred in the late 1970s [9]. It is therefore necessary to combine numerical simulation, data diagnosis and dynamical analysis in in-depth research on the roles of natural variability and human factors in the interdecadal climate change of East Asia.
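As an aside, the sensible-heat reasoning above (weaker surface winds and a smaller surface-air temperature difference imply a weaker flux) follows directly from the standard bulk aerodynamic formula. The sketch below is illustrative only: the density, transfer coefficient and input values are assumptions, not the plateau observations cited in the text.

```python
# Bulk-aerodynamic sensible heat flux: SH = rho * cp * C_H * V * (Ts - Ta).
# All numerical values below are illustrative assumptions.

RHO = 0.75   # kg/m^3, rough air density at plateau altitude (~4000 m)
CP = 1004.0  # J/(kg K), specific heat of dry air
C_H = 4e-3   # dimensionless bulk transfer coefficient (assumed)

def sensible_heat(v, ts_minus_ta):
    """Sensible heat flux (W/m^2) for wind speed v (m/s) and Ts - Ta (K)."""
    return RHO * CP * C_H * v * ts_minus_ta

# A weaker wind and a smaller surface-air temperature difference both act to
# reduce the flux, which is the mechanism invoked in the text.
print(sensible_heat(4.0, 8.0))  # ~96 W/m^2
print(sensible_heat(3.5, 7.0))  # ~74 W/m^2: both factors slightly reduced
```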
The key scientific issues involved include the long-term trend of the heat source over the Qinghai-Tibet Plateau, the radiative forcing of local atmospheric constituents, the evolution of the East Asian monsoon system, stratosphere-troposphere interaction, the relationship between tropospheric cooling over East Asia and interdecadal changes in the Pacific, the effects of mid- and high-latitude processes, and so on. In a word, how the climate and environment of the Tibetan Plateau are affected by the Asian monsoon and the global climate on different time scales, and how the Tibetan Plateau in turn affects the Asian monsoon and global climate, is an important scientific problem we face, involving the intersection of multiple disciplines. An accurate answer will help us understand the formation and change of climate, improve the level of climate prediction and reduce its uncertainty.", "In the earth's atmosphere, the air above about 60 km is fully ionized by solar ultraviolet radiation and the solar wind, while the air below 60 km is partially ionized. The ground can be regarded as an equipotential body, so the whole earth system can be treated as a spherical capacitor formed by the surface and the lower boundary of the ionosphere. Global thunderstorm activity acts as a generator connected upward to the ionosphere and downward to the conducting ground. The charge distribution in a thunderstorm maintains a vertical current from the earth to the cloud base and also drives a current from the cloud top to the ionosphere. Thunderstorms thus continuously charge the ionosphere, while in fair-weather regions far from thunderstorms a continuous steady-state current flows from the ionosphere through the conducting atmosphere to the ground, maintaining the balance of the ionospheric potential. This is the concept of the global atmospheric circuit, shown schematically in Figure 1. On average the earth's atmosphere carries a net positive charge; the global fair-weather atmosphere carries a total positive charge of about 5×10^5 C. Although the concept of the global atmospheric circuit was proposed decades ago, it could not be measured reliably until the last decade or so; ground-based and satellite detection of global lightning has greatly promoted the application of lightning observations in research on global climate change [1]. (Figure 1: conceptual schematic of the equivalent global atmospheric circuit.) The electrical behavior of the earth's atmosphere and its response to temperature change and climate change is an important scientific issue of current international concern. The electrified atmosphere can be described by a global DC circuit, and global lightning can be unified through the Schumann resonance. Schumann resonance refers to the global electrical oscillation of the resonant cavity formed by the earth and the ionosphere, excited by global lightning; its oscillation frequency is determined mainly by the size of the earth, about 7.5 Hz (a rough calculation of the idealized cavity frequencies is sketched below). Long-term monitoring of electromagnetic waves at this frequency can provide insight into global changes in lightning activity. Williams et al.
[3] successfully correlated the global lightning frequency obtained from Schumann resonance measurements with tropical wet-bulb temperature, and found that the sensitivity of lightning frequency to a 1°C surface warming can be as high as 30% or more. Correlation studies of ionospheric potential and global temperature have likewise found very good positive correlations, as have studies of ionospheric potential and global lightning (deep convective clouds): warmer temperatures lead to deeper convection and a higher ionospheric potential [4]. Although many studies have revealed correlations between the electrical parameters of the global atmospheric circuit and climate change [5], the mechanisms and physical processes behind these correlations are still unclear. Thunderstorms and electrified clouds are an important source maintaining the global circuit, and their electrification depends on temperature. On shorter time scales (hours, days, months and years), tropical lightning activity correlates clearly and positively with surface temperature, tropopause water vapor, cloud cover and ice crystal content; on longer time scales the relationship between lightning activity and these factors is uncertain, although climate models support a positive correlation between lightning activity and temperature. The internal mechanism linking the electrical parameters of the global atmospheric circuit to climate change still requires more observation and in-depth research, and the mechanisms and processes by which solar activity and climate shape the temporal and spatial distribution of lightning and electrified clouds are urgent scientific problems. The influence of atmospheric aerosols on fair-weather electricity has long attracted attention. In recent years the development of satellite-based cloud microphysics diagnostics has greatly expanded the study of how aerosols influence cloud microphysics, precipitation, cloud electrification and lightning activity. It is known that increased aerosol loading can reduce droplet size, suppress warm-rain coalescence and increase the cloud water reaching the mixed-phase region, thereby affecting in-cloud electrification and lightning activity. To reveal the relationship between aerosols, electrification and lightning, a series of observational experiments has been carried out, but the results differ from region to region; the main difficulty is how to separate the contribution of aerosols to lightning activity from that of other meteorological factors. Strong atmospheric discharges were once thought to occur only in the troposphere, and the stratospheric and mesospheric atmosphere was long considered electrically calm, but since the late 1980s more and more studies and observations have shown that tropospheric lightning can induce strong transient electric fields and a variety of discharge phenomena in the stratosphere and mesosphere above thunderstorm clouds. These events are associated with thunderstorms, so the contribution of thunderstorms to the global circuit may be much larger than previously expected; quantitative knowledge of their frequency and intensity will help to assess that contribution.
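Returning briefly to the Schumann resonance introduced above: in an idealized lossless earth-ionosphere cavity the resonant frequencies are f_n = (c/2πa)·sqrt(n(n+1)), where a is the earth's radius. The prefactor c/2πa is about 7.5 Hz, the figure quoted in the text, while the observed fundamental of the real, lossy cavity is about 7.8 Hz. A minimal sketch:

```python
import math

# Idealized (lossless-cavity) Schumann resonance frequencies,
# f_n = (c / (2*pi*a)) * sqrt(n*(n+1)). Real, lossy-cavity values are lower
# (the observed fundamental is about 7.8 Hz).

C = 2.998e8  # m/s, speed of light
A = 6.371e6  # m, mean radius of the earth

print(f"c/(2*pi*a) = {C / (2 * math.pi * A):.2f} Hz")  # ~7.5 Hz
for n in range(1, 4):
    f_n = C / (2 * math.pi * A) * math.sqrt(n * (n + 1))
    print(f"n = {n}: {f_n:.1f} Hz")  # ~10.6, 18.3, 25.9 Hz (idealized)
```

The dependence on the earth's radius alone is what makes long-term monitoring at these frequencies a proxy for global lightning activity rather than for any single storm.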
Because of the difficulty of observing these middle and upper atmospheric discharge events and the lag in understanding their generation mechanisms, their impact on the characteristics of the ionosphere and on the properties and composition of the middle and upper neutral atmosphere (for example the effects on nitrogen oxides, ozone and other constituents, and the associated chemical effects) remains an urgent scientific problem for the international community, and its solution may challenge the traditional concept of the global atmospheric circuit. The main research goal for the future is to resolve the long-term behavior of the key parameters of the global atmospheric circuit and their influence on, and response to, climate change. With advances in detection methods, the accumulation of long-term observational data and deepening research, especially deep interdisciplinary crossing, this problem will be solved step by step.", "A typical electrical signature of a thunderstorm cloud is the separation of charge within the cloud and the generation of lightning. In addition, thunderstorms are often accompanied by severe weather such as strong winds, showers, hail and tornadoes, which frequently inflict serious damage and economic losses on industry, agriculture and daily life. Research on electrification and lightning mechanisms in thunderstorm clouds is therefore an important part of thunderstorm physics and of the study and prevention of severe convective disasters. Electrification within thunderstorm clouds is an important basic problem in thunderstorm electricity, but because of the complexity of the problem itself and the limitations of observational methods, the electrification mechanisms and the lightning behavior of different types of thunderstorms are still not clearly understood; understanding is especially insufficient regarding why thunderstorms with different precipitation characteristics differ in electrical activity. Over the past decade or so, a growing number of in-cloud electric field soundings have shown that the actual charge structure in thunderstorm clouds is much more complex than the traditional dipole or tripole: four charge regions of alternating polarity can appear in the updraft area and as many as six outside it [1], charge structures of polarity opposite to that of normal thunderstorms can also occur [2], and plateau thunderstorms present a special tripolar structure with a strong lower positive charge region [3]. How, then, can a thunderstorm electrify so strongly in so short a time and finally reach the discharge stage? Why do the charge structure and lightning characteristics differ between thunderstorm clouds? To explain these questions, many hypotheses about in-cloud electrification have been proposed. There is a convective electrification mechanism not directly related to precipitation. In the inductive electrification mechanism, the environmental electric field polarizes precipitation particles, and charge is transferred when small ice crystals or small water droplets collide with the polarized precipitation particles as they fall.
Besides inductive electrification, non-inductive electrification is considered the most effective charging mechanism; it includes charging by temperature difference, by riming, by the breakup of large water drops and ice crystals, and by freezing and melting. Takahashi [4] proposed the well-known riming electrification mechanism on the basis of experiments: in the region where ice and water coexist, the surface of graupel is covered by a liquid layer formed from accreted supercooled droplets, and when ice crystals collide with graupel, the temperature difference between the warm rimed surface of the graupel and the cold surface of the ice crystals drives a transfer of charge. The relative diffusional growth rates of rimed graupel and ice crystals, and the interaction between them, are the key factors determining the charge transfer; the growth rate depends on temperature, local supersaturation, liquid water content and ice crystal size [5]. Why different configurations of these factors produce charge transfer of different polarities is an important problem in the study of atmospheric electricity. According to the traditional theory based on laboratory experiments and gas discharge physics, air breaks down when the ambient electric field reaches about 300 kV/m, thereby initiating lightning; yet electric field soundings in large numbers of thunderstorm clouds have so far never detected such high fields in clouds. Since the discovery in the 1980s of middle and upper atmospheric discharges above thunderstorm clouds, a new discharge mechanism, runaway breakdown, has been introduced into the discussion and numerical simulation of tropospheric discharge; the breakdown voltage it requires is much smaller than that of conventional air breakdown. However, although physical models of runaway breakdown exist, what direct evidence there is for this mechanism in the tropospheric atmosphere, and how and how much it contributes to lightning initiation and development, remain to be resolved. Lightning can be divided by location of occurrence mainly into cloud flashes and ground flashes. A ground flash is lightning between the thunderstorm and the ground, while cloud flashes are defined as all lightning that does not reach the ground; cloud flashes generally account for two-thirds of all lightning. There are also two special discharge phenomena, ball lightning and spider lightning. Ball lightning refers specifically to a moving luminous sphere that appears near the ground during a thunderstorm; spider lightning occurs below the cloud but close to the cloud base and, resembling the crawl of a spider, is a discharge with large horizontal extent and many branched channels. Because cloud-to-ground lightning is the most harmful to human beings and relatively easy to observe, it is currently the best understood. A cloud-to-ground flash usually includes the preliminary breakdown process in the cloud, the stepped-leader process from the cloud to the ground, and
the attachment process between the approaching leader head and objects on the ground, as well as the high-current return strokes and the processes between strokes. (Figure 1: discharge characteristics of a cloud flash obtained with a three-dimensional time-of-flight mapping system [7].) The attachment process is also the process by which a ground object is actually struck; it takes place in less than a millionth of a second, and because of the limited resolution of detection methods there is still no clear picture of how a ground object is struck, which is an important factor restricting the development of lightning protection technology. The peak current of the return stroke can reach tens of thousands of amperes, with a rise time from zero to peak of only a few microseconds, and it is the main source of damage to ground objects and electronic equipment. A single flash may include several high-current return strokes, separated by only a few milliseconds to tens of milliseconds. How a thunderstorm accumulates the electrical energy required for a return stroke in so short a time, a quantitative description of the large-current return stroke process, and an analytical description of the plasma channel formed by the discharge have so far not been given satisfactory theoretical treatment. Studies using high-time-resolution mapping of VHF radiation pulses show that a cloud flash usually presents a double-layer structure connected by an upward-developing channel [6,7], the upper and lower layers corresponding respectively to the upper positive charge region and the middle negative charge region of the thunderstorm cloud. Although the structure and morphology of cloud flashes are thus fairly well understood, the quantitative description of the physical processes and discharge intensity of cloud flashes, and especially ways to verify cloud-flash theory, remain important open problems. Moreover, why does lightning take so many different forms? The mechanism behind each is also a question the scientific community finds hard to answer. A complete description of thunderstorm electrification and discharge involves many factors; in particular, high-temporal-and-spatial-resolution lightning detection in real thunderstorm clouds, simultaneous observation of cloud microphysical and dynamical processes, and the development of numerical simulation methods and the related physical theory will make it possible to solve this difficult problem step by step, with applications in lightning protection and in the prevention and control of severe convective disasters.", "Lightning is a strong transient discharge occurring under thunderstorm conditions. The rapid ionization of air during the discharge produces a series of physical and chemical effects, yet even now these effects are far from clearly understood. On average, about 100 lightning flashes occur on earth every second. In recent years, with the widespread use of microelectronic devices, the damage caused by lightning current and its strong electromagnetic radiation has become more and more serious.
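The scale of that damage can be illustrated with a rough induction estimate. The peak current (\"tens of thousands of amperes\") and rise time (\"a few microseconds\") come from the text above; the mutual inductance of the victim circuit is an assumed illustrative value.

```python
# Order-of-magnitude estimate of the voltage induced in nearby wiring by the
# fast current rise of a return stroke, V = M * dI/dt.

I_PEAK = 3.0e4   # A, peak return-stroke current (text: tens of kA)
T_RISE = 3.0e-6  # s, zero-to-peak rise time (text: a few microseconds)
M = 1.0e-6       # H, assumed mutual inductance of a nearby cable loop

di_dt = I_PEAK / T_RISE  # ~1e10 A/s
v_induced = M * di_dt    # ~1e4 V

print(f"dI/dt = {di_dt:.1e} A/s, induced voltage = {v_induced:.1e} V")
```

A voltage on the order of ten kilovolts coupled into a small wiring loop is far beyond what microelectronic devices tolerate, which is why the close-range electromagnetic field is treated as the main damage mechanism.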
Scientific lightning protection technology and measures depend on a scientific understanding of lightning, but the randomness and brevity of natural lightning on any given time-space scale mean that the probability of its striking a fixed target is very low, and direct measurement of the lightning current and its close-range electromagnetic field is very difficult; this restricts the acquisition of characteristic parameters and the accumulation of data, and greatly increases the difficulty of studying the physical and chemical effects of lightning. How does the electromagnetic radiation of lightning depend on the discharge current in the channel? How can the time evolution of the discharge current be analyzed and expressed in different frequency bands? How does close-range lightning electromagnetic radiation attenuate with distance? How do strong lightning electromagnetic fields damage electronic equipment? Such questions are the core issues in the study of lightning electromagnetic effects. Artificial lightning triggering makes lightning occur in a controllable time and place, providing the conditions for direct measurement of the lightning current and its close-range electromagnetic field and for research on these problems. As early as the early 1960s, American scientists first triggered lightning at sea by launching a small rocket trailing a grounded metal wire. In the following decades France, China, Japan and Brazil all carried out artificially triggered lightning experiments, and the technique has gradually been applied in lightning physics and lightning protection research [1]. The main problem with rocket-triggered lightning is its low success rate, which limits the development and application of the technique. The main difficulties in raising the success rate and applying the technique are how to identify, quantitatively and from the ground, the charged regions and charging intensity in thunderstorm clouds, and how to measure lightning under the severe electromagnetic conditions close to the triggering point. Another way to raise the success rate is to develop new triggering techniques such as laser-guided lightning, which uses a laser to ionize air and create a discharge channel that guides the formation and development of lightning along a predetermined path. Although there has been much theoretical discussion of this technique, generating a continuously ionized laser channel that can work in the harsh field environment of a thunderstorm is still an unsolved problem; once laser-guided lightning is successfully tested in a thunderstorm environment, it will have important scientific significance and practical value. Lightning is one of the important natural sources of NOx in the atmosphere and therefore an important source of tropospheric ozone. At the moment of discharge, the air near the channel is rapidly heated to about 30,000 K and the pressure reaches several atmospheres; the N2 and O2 in the channel are completely ionized, and through high-temperature chemical reactions NOx is generated near the channel after the discharge ends.
NOx produced by lightning plays a very important role in atmospheric photochemistry and in global biogeochemical cycles, and through photochemical reactions it can directly affect chemical species such as OH and O3. (Figure 1: lightning triggered by the rocket-and-wire technique [2]; the left panel shows a classically triggered flash photographed at 60 m, the right panel an altitude-triggered flash photographed at 550 m.) Quantitative evaluation of the NOx produced by lightning will help to make more accurate predictions of global climate change and O3 concentrations, but the main processes and mechanisms of lightning NOx production are still not clearly understood [3]. For example, is the NOx produced by the return stroke, by the continuing current, or by streamer discharge processes? Which process contributes more? This bears directly on the parameterization of NOx production by a single flash and on the quantitative assessment of NOx produced by global lightning and its effects. In fact, because of these uncertainties and the individual differences between flashes, the global total of nitrogen oxides produced by lightning and thunderstorms remains a highly uncertain quantity. In addition to the physical and chemical effects of the lightning discharge itself, strong lightning can induce discharges from thunderstorm clouds into the middle and upper atmosphere above the storm, collectively referred to as transient luminous events (TLEs). According to their morphology and location, the TLEs discovered so far fall into four categories [4]: red sprites, blue jets, ELVEs and gigantic jets (Fig. 2). Although a large number of observational facts and theoretical studies of TLEs have accumulated since the first photograph of a red sprite was taken in 1989, observation remains difficult, research remains hard, and many basic issues still need to be clarified. (Figure 2: morphological characteristics of the four types of transient luminous events in the middle and upper atmosphere [3].) Beyond further revealing their phenomenology and morphology, their physical mechanisms and theory need further study; understanding the streamer-to-leader transition and developing parameterizations of lightning leader regions of different polarity at low pressure are particularly important, and the further accumulation of synchronized ground-based and space-based experiments and observations is also essential to understanding the mechanism of TLEs. In addition to the usual electromagnetic, acoustic and optical effects, the lightning process is accompanied by high-energy radiation such as X-rays and γ-rays. X-rays are believed to be induced by the strong electric field at the head of the developing lightning channel, while the γ-ray events, called \"terrestrial γ-ray flashes\" (TGFs), were first discovered by the BATSE detector on the CGRO satellite in 1994 [5] and attributed to bremsstrahlung from high-energy electrons. Previously, such intense transient γ-ray bursts were thought to exist only in the field of astrophysics.
Since the discovery of this peculiar physical phenomenon in 1994, it has gradually become a very active field of study, arousing great interest among researchers in atmospheric electricity, space physics and astrophysics; the generation mechanism of terrestrial γ-ray flashes and their relationship to the lightning process is a current hot and difficult issue, whose solution not only has great scientific significance in itself but may also provide important insights for astrophysics. The main research goals for the future are to resolve the characteristics of close-range electromagnetic radiation and its relationship to the channel discharge current, and the mechanisms and impacts of the physical and chemical processes induced by intense lightning discharges. With the development of high-spatio-temporal-resolution lightning detection and satellite detection technologies, and especially of the relevant physical and chemical theory, these problems will be solved step by step, and the physics and chemistry of the lightning discharge process and the related theory will also find application in other fields.", "Since the Industrial Revolution, human activities have released large amounts of greenhouse gases into the atmosphere, leading to global warming. Among these anthropogenic greenhouse gases, carbon dioxide (CO2) plays the leading role; its concentration has increased from 280 ppmv before the industrial revolution to the current 380 ppmv. The Intergovernmental Panel on Climate Change (IPCC) pointed out clearly in its Third Assessment Report [1] that the current increase in atmospheric CO2 is caused by anthropogenic emissions, three-quarters of which come from fossil fuel combustion and the rest from land-use change. The Fourth Assessment Report [2], released in 2007, noted that the atmospheric CO2 concentration in 2005 far exceeded the natural range (180-300 ppmv) recorded in ice cores over the past 650,000 years. The report also noted that the global temperature rose by about 0.74°C over the 100 years from 1906 to 2005, and that 11 of the 12 years from 1995 to 2006 were among the 12 warmest since 1850. To evaluate and predict future climate change accurately, it is therefore very important to understand the carbon cycle and its interaction with climate correctly. There are three main carbon pools on earth: the atmosphere, the oceans and the land. The carbon cycle is the biogeochemical cycle of carbon, that is, the conversion and exchange of carbon within and between these pools through physical, chemical and biological processes. Leaving aside the impact of human activities, the income and expenditure of each pool are roughly balanced and its carbon content roughly stable. Human activities raise the CO2 concentration of the atmosphere, which in turn affects the carbon cycles of the ocean and the land. Figure 1 shows the global carbon cycle in the 1990s [2]. The ocean carbon cycle mainly concerns how the ocean absorbs atmospheric CO2, how the carbon entering the ocean is transferred and transported within it, the main factors controlling these processes, and their interaction with climate change.
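Before turning to those transport pathways, the concentration figures quoted above can be anchored in mass units. The conversion factor of about 2.13 GtC per ppmv of CO2 is a widely used standard value, assumed here rather than taken from this article.

```python
# Convert the rise in atmospheric CO2 concentration to added carbon mass,
# using the commonly quoted factor of ~2.13 GtC per ppmv of CO2 (assumed).

GTC_PER_PPMV = 2.13     # GtC per ppmv of atmospheric CO2

pre_industrial = 280.0  # ppmv (cited above)
present = 380.0         # ppmv (cited above)

delta_carbon = (present - pre_industrial) * GTC_PER_PPMV
print(f"Added atmospheric carbon: about {delta_carbon:.0f} GtC")  # ~213 GtC
```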
Large-scale ocean circulation carries carbon that enters the seawater of some regions down into deep water and returns it to the upper ocean elsewhere, where it is released back to the atmosphere. In addition, in the photic layer of the ocean, phytoplankton absorb CO2 from seawater through photosynthesis and convert it into organic carbon; part of this organic carbon sinks toward the deep ocean, being oxidized and decomposed on the way and converted back into inorganic carbon. Together these constitute the marine biogeochemical cycle of carbon. Since the 1970s a large number of ocean observation programs have accumulated data that roughly outline the distribution of the elements of the carbon cycle. As the key to filling observational gaps and making the carbon cycle predictable, ocean carbon cycle models have been developed for more than 50 years and can be divided simply into inorganic carbon models and carbon cycle models with biological processes. A current hot and difficult issue in the ocean carbon cycle is that, whether estimated from observations or from models, the size of the ocean carbon sink remains highly uncertain. The terrestrial carbon cycle mainly concerns what happens to carbon in terrestrial ecosystems after vegetation absorbs CO2 from the atmosphere through photosynthesis and converts it into vegetation carbon. Animals obtain vegetation carbon by eating plants; all plants respire, consuming organic matter and releasing CO2 back to the atmosphere; plant litter, animal remains and excrement fall to the ground, where part of the organic matter is decomposed directly by microorganisms into CO2 and returned to the atmosphere, while the organic carbon that enters the soil is further decomposed and transformed, with CO2 returned to the atmosphere through the heterotrophic respiration of microorganisms. Since the 1990s there has been a great deal of work on land-atmosphere carbon exchange, mainly through direct observation and model estimation. Over the past few decades terrestrial carbon cycle models have made great progress; there are now dozens of relatively influential models, and international intercomparison programs have been carried out. Because terrestrial ecosystems include such different systems as forests, grasslands and farmland, the cycling of carbon within them is very complicated and many processes are still unclear; moreover, the terrestrial ecosystem is highly heterogeneous, so estimates of the net land-atmosphere CO2 flux carry large uncertainty, and the fluxes due to land-use change are even harder to estimate. Currently estimated terrestrial carbon sinks therefore have very large uncertainties. The biogeochemical cycle of carbon is both constrained by climate change and exerts an important feedback on it, so the interaction between the global carbon cycle and climate is a very important research direction. Studies of global carbon cycle-climate interaction with coupled models that include ocean, land and atmospheric physical and biogeochemical processes began only in recent years. At the beginning of the 21st century, Cox et al.
[3] published in Nature in 2000 a prediction of future climate made with the Hadley Centre's coupled global carbon cycle-climate model. The results showed a large positive feedback from the carbon cycle (the positive feedback here being: rising atmospheric CO2 warms the climate; the warming reduces the land and ocean carbon sinks, further raising the CO2 concentration and making the climate warmer still). The main reason is that in this simulation the terrestrial ecosystem, as a sink for CO2 emitted by human activities, saturates by the middle of the 21st century, after which it not only ceases to absorb atmospheric CO2 but releases carbon to the atmosphere, while the ocean's capacity to take up CO2 also weakens. However, the results of Friedlingstein et al. [4] showed a very weak feedback. After further analysis, Friedlingstein et al. [5] concluded that the difference between the two models arises mainly from differences in the Southern Ocean circulation and in the response of the terrestrial biosphere to global warming, the latter being the main cause of the models' diverging 21st-century results. In addition, some researchers have used climate-carbon cycle models to show that the positive feedback between terrestrial carbon and climate significantly affects simulations of future climate change, with the response of vegetation carbon productivity to climate change the main controlling factor in carbon cycle-climate feedback simulations. In short, many researchers have pointed out that the carbon cycle feeds back on climate change, but estimates of the strength of this feedback vary greatly. The uncertainty of carbon sink estimates, and the further uncertainty of the interaction between the carbon cycle and climate change, are not only an academic problem but also a social one, with potentially huge consequences for the world's environment, economy and society that bear directly on human survival; the problem therefore urgently needs to be solved. Facing carbon pools of very different and extremely complex character, the main research goals for the future are to improve the accuracy of the estimates of each pool in global carbon cycle research, to reveal the laws of carbon biogeochemistry in different regions and different ecosystems, and to clarify the mechanisms of carbon cycle-climate interaction and the size of the feedback. With the improvement of observation and model simulation techniques, this difficult problem will be solved step by step, helping us to predict future climate change accurately and to serve people's lives better.", "1. The origin and importance of the problem. Climate extremes have become a hot topic, thanks in part to research over the past century on global warming and its impacts. Since the 1980s people have gradually come to realize clearly that the world is warming significantly because large-scale human activity since the industrial revolution has released greenhouse gases such as carbon dioxide into the atmosphere.
One of the most definite conclusions of the Intergovernmental Panel on Climate Change assessment report [1], based on a large body of climate research from around the world, is that the current average global warming rate is about 0.7°C/100a. A warming of less than 1°C over a century is hard for ordinary people to appreciate, and hard even for scientists to use directly in assessing specific impacts. Since the 1990s, therefore, it has been increasingly recognized that impact studies must pay attention to regional extreme weather, because the impact of climate change on ecosystems and human society is realized most directly through local extreme weather phenomena. In climatology, extreme weather phenomena are called climate extremes. Their impacts are huge, and recent years offer vivid examples: in the summer of 2003 a heat wave lasting about ten days in Western Europe caused thousands of deaths; in 2005 Hurricane Katrina destroyed the American city of New Orleans overnight; China, lying in the East Asian monsoon region, suffers either flood or drought almost every year; and the prolonged freezing rain in southern China in early 2008 was likewise a typical extreme climate event. A question of general concern is: as the globe warms (or cools), will China's regional disastrous weather become more frequent or more intense? The study of climate extremes is the necessary scientific basis for answering such questions. 2. Current status of climate extremes research. Unlike traditional climate research, which mostly uses monthly and seasonal mean data, research on climate extremes requires data of daily or higher resolution. Since the late 1990s, daily observations and simulation data have been widely used in climate change research to identify the changing behavior of regional extreme weather phenomena (such as abnormally heavy precipitation and prolonged drought; high-temperature heat waves, low-temperature cold waves and freezing weather; typhoons, severe storms and sandstorms). In a sense, daily data may be the best data for reflecting large-scale changes in climate extremes. Extreme phenomena seen in higher-resolution data, such as squall lines and tornadoes, also deserve attention, but large-scale, high-resolution weather data series suitable for studying climate change are still hard to obtain. Analysis of day-to-day extreme weather records will undoubtedly deepen our insight into climate change. Yan Zhongwei and Yang Chi [2] used Chinese daily meteorological data to analyze the changing patterns of various extreme climate indicators; although only the daily data of slightly more than 60 stations were used at the time, many of the results remain of general significance. For example, the frequency of \"drizzle\" in China has generally decreased, an important feature of the current drought in northern China; this conclusion has since been verified with more recent data [3]. The finding that annual and seasonal changes in China's extreme elements are 5-10 times larger than the changes in the means likewise underlines the necessity of climate extremes research.
Because high-resolution meteorological data are difficult to obtain directly from many countries, climate extreme indices arose, such as the longest run of dry days, the maximum consecutive five-day precipitation, the number of frost days, and the frequency and intensity of extreme high/low temperature events defined by percentiles of the probability distribution. From the late 1990s to the early 21st century, with the cooperation of national meteorological services, multiple climate extreme indices and their trends were obtained for many regions (e.g., [4]-[6]), and in recent years some global-scale analyses have been carried out (e.g., [7]). Given the limited accuracy of current climate models, many simulated quantities are difficult to compare directly with observations; climate extreme indices also help to compare observed and simulated climate change properly (see, e.g., [8]). As for analysis methods, generalized extreme value (GEV) theory has found wider application in the climate community in recent years. In principle GEV does not depend on the probability distribution of the original data and samples only its extreme part, so it is the most direct fitted description of the extreme-value information contained in climate observations [9]. Tu et al. [10] performed a GEV fit to daily precipitation in North China, which was then experiencing severe drought, and found that although total precipitation and most precipitation events were decreasing, the number of heavy rain events had increased since the 1970s. This shows that climate extremes cannot be defined arbitrarily, or key climate change information may be confounded. Because the period of modern meteorological observation is limited, the extreme-value samples it can provide are even more limited, which restricts the application of GEV theory to some extent. Generalized linear modeling (GLM) has also played a role in studying changes in climate extremes. For climate change and extreme-value problems involving non-normal variables such as precipitation, many traditional climate analysis methods do not apply. GLM treats each \"weather\" value as a sample drawn from some climate distribution and determines, through regression, the distribution that best fits all the samples (including the autoregressive behavior and the variation of the distribution parameters with time, location and various possible climate factors); a large number of simulated samples is then generated by the Monte Carlo method, from which changes in the climate distribution and its extremes, along with their possible causes, can be judged [11-13]. Because all the data are incorporated simultaneously into a single framework describing the distribution (both mean and extremes), the results have superior statistical stability. Yan et al. [14] first applied GLM to analyze changes in the regional daily wind-speed climate distribution and their relationship to large-scale climate factors, showing the unique value of GLM in revealing regional climate change and analyzing its causes. Wang et al.
[15] used GLM to analyze the evolution of the daily precipitation occurrence probability in China and found that the \"southern flood and northern drought\" pattern of recent decades in China's summer monsoon region is closely related to large-scale warming. How do regional climate extremes change against the background of global warming? Considering that climate extremes arise mainly from extreme weather fluctuations, Yan et al. [16] used wavelet methods to analyze changes in the weather-scale fluctuations of century-long daily temperature series in Europe and China. The results show that with global warming the seasonal cycle has weakened, and weather fluctuations in mid- and high-latitude regions (especially in the cold season) have also generally weakened, corresponding to the weakening of cold-season cold waves in China; meanwhile warm-season weather fluctuations in mid- and low-latitude regions have become shorter and stronger, a trend possibly related to enhanced local convective weather in the warm season under warming, and corresponding to the increasing frequency of summer drought and flood disasters in China. Goswami et al. [17] pointed out that with global warming over the past century the total annual precipitation in the Indian monsoon region has not changed significantly but extreme precipitation has increased. China lies in the East Asian monsoon region and its climate change shows very large regional differences, but some studies show that its precipitation climate has indeed changed in ways similar to those noted by Goswami [3,10]. However, because the scope of climate extremes is so broad, the existing studies are far from able to answer the relevant questions fully. 3. A clear reference point for climate extremes, and a necessary explanation. To understand climate extremes more precisely, the concept of climate itself must be clarified. The traditional concept of climate is often taken simply as the average weather; even quite well-known climate change journals define climate this way. Strictly speaking, climate is a comprehensive expression of all weather phenomena. What is a \"comprehensive expression\"? The average state is obviously only the simplest one. Mathematically, a probability distribution can describe a large number of events comprehensively: different kinds of weather occur with different probabilities, some common (higher probability), some rare (lower probability), and together they constitute a probability distribution. For a particular meteorological element (such as temperature), the probability distribution of all its possible weather values is the climate with respect to that element. The mean is the most important parameter of many probability distributions, and for a normal distribution the mean also represents the most frequently occurring value, which is why traditional climate research focuses on the mean climate. But the mean climate is only one characteristic parameter of the climate distribution. Climate extremes are the extreme weather values far from the average state in the climate distribution; they represent abnormal weather phenomena of small probability. Note that the meaning of \"abnormal\" can vary greatly with season and location.
For example, in the European heat wave of summer 2003 mentioned above, the highest temperatures were around 30°C, indeed a rare high for northwestern Europe but merely low-temperature weather by Indian standards. Likewise, a daily precipitation of 20 mm counts as anomalously heavy in much of Northwest China but is far from anomalous along the southern coast. Climate distributions must therefore be calculated for a given location and time. As an example, Fig. 1 shows the mean of the climate distribution and the 3rd/97th percentile thresholds of the daily mean temperature at Beijing station from 1915 to 1997 (see [18] and [19] for the algorithms). (Figure 1: the climate distribution mean (thick black) and abnormal high/low temperature thresholds (red/blue lines) of Beijing's daily temperature, 1915-1997; the abscissa is the day number n, with n = 1-366 corresponding to January 1-December 31.) It can be seen that the climate variability in Beijing around August 8 is small: a daily mean temperature below 21.5°C or above 29.5°C counts as an abnormally cold or warm weather event. For January 1, an extreme cold/warm event requires a daily mean temperature more than ten degrees below zero or above 2°C. By examining the daily temperatures of any period (a given year or season) against the corresponding threshold range, the frequency and intensity of extreme weather in that year (or season) can be calculated. Many current studies define climate extremes in similar ways. But the percentile is set artificially: engineering design generally takes a more extreme percentile (such as 1/99), while climate change research often uses less extreme thresholds (such as 5/95 or 10/90) to ensure the statistical stability of the results [18,20,21]. Such subjective factors can be avoided with advanced statistical methods such as GEV and GLM, though the method of Figure 1 is still widely used for its simplicity and intuitiveness. There have been many analyses showing that China's climate extremes have changed in recent decades; Zhai Panmao, Yan Zhongwei and others have reviewed them [22]. Further factual analysis and mechanism research on the basis of better data, together with the corresponding simulation and impact studies, are the way to advance this field. 4. Main difficulties in the study of climate extremes. Difficulties in observational research. Because extreme values have low probability and extreme weather has small spatial scale, observed samples of extremes are very limited, and the various errors and inhomogeneities in observation series seriously affect judgments about changes in extremes. In time, modern meteorological observation in many areas spans only decades: how can it reflect changes in \"once in a century\" extreme weather? In space, the conventional climate observing network misses many small-scale extreme phenomena, while high-resolution data such as satellite remote sensing often lack long-term series. Distribution-fitting analysis methods such as GEV and GLM are helpful here.
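As a minimal illustration of the GEV approach just mentioned, the sketch below fits a generalized extreme value distribution to the annual maxima of a synthetic daily precipitation series and reads off a return level. The data and parameter values are invented for illustration, not taken from the studies cited.

```python
import numpy as np
from scipy import stats

# Block-maxima GEV fit: keep only each "year's" maximum daily value and fit
# a generalized extreme value distribution to those maxima. Synthetic data.

rng = np.random.default_rng(0)
daily = rng.gamma(shape=0.8, scale=8.0, size=(50, 365))  # 50 years of daily precipitation (mm)
annual_max = daily.max(axis=1)                           # block (annual) maxima

c, loc, scale = stats.genextreme.fit(annual_max)

# 50-year return level: the daily amount exceeded with probability 1/50 per year.
rl_50 = stats.genextreme.ppf(1 - 1 / 50, c, loc=loc, scale=scale)
print(f"GEV fit: shape={c:.2f}, loc={loc:.1f} mm, scale={scale:.1f} mm")
print(f"Estimated 50-year daily precipitation: {rl_50:.1f} mm")
```

Only the block maxima enter the fit, which is exactly why GEV extracts the extreme-value information directly, and also why the limited observation period limits the usable sample size.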
However, for mathematical convenience these methods usually assume that the object of study follows a specific distribution, which the actual climate distribution follows only approximately at best. Revising and exploiting the various observational data, and developing distribution-analysis methods that describe climate change, and especially changes in extremes, more accurately, is not easy. Difficulties in mechanism studies. The occurrence and development of local extreme weather is often related to changes in the larger-scale weather and climate background. For example, existing studies suggest that the once-in-a-century freezing rain event in southern China in early 2008 was related not only to anomalous mid-high-latitude atmospheric circulation [23] but possibly also to the La Nina event in the low-latitude Pacific and to anomalously warm sea temperatures in the North Atlantic. But because extreme-event samples are so few, it is hard to reach firm conclusions through case analysis alone; exploring the evolution of extreme weather from the perspective of climate change requires new thinking. Difficulties in modeling studies. Most existing climate models still pursue more reasonable simulation of the annual and seasonal mean climate. Because extreme phenomena occur locally, they are difficult to compare directly with gridded model output, and testing and applying the extreme-value simulation capability of climate models is harder still. Making information of different scales comparable through climate extreme indices or distribution fitting is the common approach at present, and downscaling methods such as stochastic weather generators are also an avenue worth exploring. Difficulties in impact assessment. As noted above, climate extremes research stems from the need to assess the impact of climate change on various sectors. Yet mean climate conditions have long remained the basic input in impact assessment, and many assessment models cannot represent extreme-value effects; new methods are needed. Here the study of climate extremes will become an important cornerstone of new interdisciplinary collaboration.", "With human development, industrial activities of many kinds are increasing, such as waste recycling, pharmaceutical production, applications of biotechnology, and the agricultural use of biological particulate waste as fertilizer, which inevitably release a variety of pathogens into the air [1-3]; humanity thus faces an unprecedented risk of disease. A typical example is SARS, which occurred in China in 2003 and caused huge losses, panic and inconvenience to society [4-6]. Recently, avian influenza cases have appeared in human populations one after another, for example in Nanjing in 2008 and in Pakistan in 2006 [7]. Studies have shown that the H5N1 influenza virus can infect human lungs and brains, and even unborn fetuses [8]. The possibility of human-to-human transmission of avian influenza raises the specter of another global pandemic on the scale of the 1918 Spanish flu (about 20 million to 50 million deaths) [9]. In 2009, influenza A (H1N1) swept most countries of the world, and the World Health Organization raised its alert to Level 6.
During an influenza outbreak, airborne transmission is an important route [10], so research on bioaerosols, including the detection of airborne viruses, bacteria and other biological material, is of great significance. In 1900, the effect of temperature on bacteria in moist air was reported in the Lancet. Subsequent publications covered the germination of Penicillium spores in humid air, the first discovery of Bacillus thuringiensis in the air, and the airborne transmission of smallpox in hospitals. As early as 1904, some scholars made useful attempts to relate airborne bacteria to conventional chemical pollutants, and in 1908 Science reported the first new method for detecting bacteria in air. These studies opened the prelude to human research on bioaerosols. The detection of bioaerosols mainly involves the sampling and identification of microorganisms. Bioaerosol samplers are usually of the filter, impactor or liquid-impinger type, each with advantages and disadvantages. Filters collect efficiently but usually at low flow rates, and they damage bacteria and viruses. Impactors rely on the inertia of biological particles; because virus particles are small (generally below about 100 nm), collection efficiency is relatively low, and impaction also damages bacteria and viruses. Samples collected with liquid impingers can be analyzed directly, but the collected biological particles may be re-released as sampling time increases. An efficient bioaerosol sampler must generally meet three requirements: high sampling flow rate, high collection efficiency, and little damage to bacteria and viruses. However, high flow is often accompanied by greater damage to the organisms, and collection efficiency for small virus particles is relatively low. In a bioterrorism event or other emergency, a high-flow sampler is needed so that enough bacteria and viruses can be collected in a short time for reliable detection. Another requirement is portability, which conflicts with high flow, because portable samplers usually have lower flow rates for reasons of power. The identification of microorganisms was initially done with conventional microscopy, but this usually requires high concentrations of bacteria or viruses, which makes low-concentration pathogens hard to find and introduces many manual identification errors. The polymerase chain reaction (PCR), which emerged in the late 1980s and is also described as a cell-free molecular cloning system or primer-directed in vitro enzymatic amplification of specific DNA sequences, was a major innovation in gene amplification technology. PCR can specifically amplify a minute amount of target DNA millions of times, greatly improving the ability to analyze and detect DNA molecules; it can detect single DNA molecules, or samples containing only one molecule of target DNA per 100,000 cells [11]. Because of its high sensitivity, strong specificity, speed and simplicity, PCR has shown great application value and broad development prospects in microbiology.
In recent years, PCR and quantitative PCR (which quantifies sample DNA against standard DNA) have been used increasingly to detect bacteria and viruses in the air, a great impetus to the bioaerosol field. However, the chain from air sampling through DNA or RNA extraction to gene amplification takes about 3–4 hours; although this is a major advance in microbial detection, it is difficult to automate and even harder to make real-time. With unrest in some parts of the world, the possibility of highly pathogenic microorganisms being used as weapons of mass destruction is increasing, and the requirements on detection time and accuracy for bioaerosols are correspondingly higher [12,13]. Likewise, preventing large-scale outbreaks of infectious disease places stringent time requirements on detecting biological components in exhaled breath. Gene amplification is still far from the time requirement for early warning (generally within 1 minute) [14]. After the September 11 attacks, the United States invested heavily in financial, material and human resources for biosensor technology, but there has still been no major breakthrough in the online detection of bioaerosols. Technologies such as bioaerosol mass spectrometry [15] and aptamers [16] have achieved some success on response time, but suffer from many false positives and false negatives. Recently, small transistors based on nanowires have built a bridge between physics and biology, a breakthrough that provides good technical support for online biological alarm systems [17,18]. Nanowire field-effect transistors have been applied successfully to the detection of influenza virus in water [18]. The principle is to biomodify the silicon nanowires of the field-effect transistor with antibodies specific to the target species; when the target binds to the antibodies on the nanowire, the conductance of the nanowire changes slightly, and this tiny change signals the presence of the detected species. Biosensors based on small nanowire transistors are sensitive, their response time (a few seconds) is unmatched by traditional detection techniques, and their accuracy is greatly improved, so nanowire-based biosensing is expected to be applied to the online detection of bioaerosols. There are, however, technical difficulties in realizing such a system: antibody activity is greatly challenged under natural environmental conditions, which hinders continuous monitoring of air, and non-specific binding of viruses to antibodies can also cause false positives. Figure 1. Online detection system for bioaerosols. In summary, developing online bioaerosol detection technology has become an urgent problem for the scientific community: through real-time detection of bacteria and viruses in the air, measures such as isolation, monitoring and evacuation of dangerous areas can be taken to reduce the risk of disease.
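As a minimal sketch of the standard-curve idea behind the quantitative PCR mentioned above (the dilution series and Ct values below are hypothetical, not data from any cited study), one fits a line to the cycle threshold (Ct) versus log10 copy number of standards and inverts it for an unknown sample:

```python
import numpy as np

# Hypothetical qPCR standard curve: Ct values for a 10-fold dilution series.
log10_copies = np.array([7, 6, 5, 4, 3, 2])          # log10 target copies
ct = np.array([14.9, 18.3, 21.6, 25.0, 28.4, 31.7])  # measured cycle thresholds

# Linear fit: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(log10_copies, ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1                # amplification efficiency

# Quantify an unknown sample from its measured Ct.
ct_unknown = 23.1
copies = 10 ** ((ct_unknown - intercept) / slope)
print(f"slope={slope:.2f}, efficiency={efficiency:.1%}, "
      f"estimated copies={copies:.2e}")
```

A slope near −3.3 corresponds to roughly 100% amplification efficiency, which is why dilution-series slopes are routinely checked in qPCR work.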
The main challenges for the future are to develop aerosol-to-hydrosol conversion technology with high flow rate and high collection efficiency, to develop fast and stable biosensor systems, and to efficiently integrate these technologies, together with signal amplification and network transmission, into the system shown in Figure 1.", "1. Heavy rain and flood disasters run through the history of the Chinese nation. China is one of the countries most frequently struck by natural disasters, especially meteorological disasters, which pose a major threat to people's lives and property. Records of flood disasters in China are endless. The Mencius (Teng Wen Gong II) says: "In ancient times, Yu controlled the flood and brought peace to the world." According to statistics, in the 2046 years from the founding of the Han Dynasty in 206 BC to 1840 in the late Qing Dynasty, there were 984 major flood disasters, roughly one every two years. The Yellow River, mother river of the Chinese nation, has been especially disaster-ridden: in the roughly 2600 years from the fifth year of King Ding of Zhou (602 BC) to the present, it has changed course 26 times, most recently in the fifth year of the Xianfeng reign of the Qing Dynasty, that is, 1855 [1]. Will it happen again? This is by no means alarmist talk, but something we should take seriously. In recent years, with rapid economic development, the impact of floods has become ever more serious. Since the founding of New China we have experienced the great floods of the Jianghuai basin in 1954, 1991, 1998 and 2007, the Haihe flood of 1963, and the catastrophic Henan flood of 1975. The study of rainstorms is thus both an old problem and a new research topic. For this reason, since the founding of New China, and especially during the Seventh through Tenth Five-Year Plans, the Ministry of Science and Technology has listed major projects and organized special research. Nevertheless, because rainstorm disasters remain a major problem, further breakthroughs are still needed. 2. Effects of the tropical monsoon and its variation. Floods are mostly related to rainstorms, and the Asian monsoon has a great influence on rainstorms in China, which lies in the world's largest Asian-Australian monsoon region. The Asian monsoon has long attracted attention [2]; it is generally held to comprise the Indian monsoon and the East Asian monsoon, the former already familiar [3]. Later work revealed the existence of the East Asian monsoon, which can extend northward to about 45°N. However, owing to the complexity of this monsoon and the scarcity of data over the western Pacific, it is not yet fully understood by scholars internationally, especially in Western countries; some meteorologists even regard the East Asian monsoon as a simple extension of the Indian monsoon. More in-depth research on the East Asian monsoon is therefore needed [4]. In fact, since China lies in the East Asian monsoon region, the frontal models and theories proposed by the European and American schools cannot be applied in full.
Therefore, research is needed on the monsoon circulation and its variation, and on the systematic characteristics, structure and mechanisms of East Asian precipitation. 3. China's main rain belts and the influence of the western Pacific subtropical high. Chinese scholars have a long history of research on monsoons and rainstorms. It is known that in summer the heavy rains in China are concentrated mainly in the east, in three main rain belts: the pre-flood-season rains of South China from May to mid-June (which scholars in Taiwan call the "plum rain" of that region); the Meiyu of the Yangtze River basin from mid-June to mid-July; and the northern rains from mid-July to mid-August. The onset of the summer monsoon marks the beginning of the pre-flood-season rains in South China, and its northward advance marks the successive onsets of the Yangtze Meiyu and the North China rains (this paper mainly takes the Yangtze Meiyu as its example and does not discuss typhoon rainstorms). However, the timing of monsoon onset and advance varies from year to year. Chinese scholars have shown that the atmospheric circulation changes abruptly in early summer, at a time that differs between years, so that the start and end of the rainy season in different regions show marked interannual variation. Moreover, the circulation systems are stable and quasi-stationary in some periods and move quickly in others, producing differences in the persistence and evolution of rainstorms and adding to the difficulty of forecasting. The main reason is that the evolution and mechanisms of the low-latitude monsoon systems and the mid-latitude westerly circulation are not yet fully understood, including the establishment and collapse of mid-tropospheric blocking highs at mid-high latitudes, and the westward extension and northward jumps of the western Pacific subtropical high and their mechanisms. In addition, the cooperation of cold air from mid-high latitudes must be considered, which involves the establishment and stability of the Lake Baikal low-pressure trough. Research on the evolution of the mid-latitude westerly systems is currently rather weak. As for variability on longer time scales, such as whether monsoon precipitation has cycles of about 30 or 60 years, more data are needed for further discussion. 4. Mesoscale systems are the direct culprits of floods and rainstorms. The above conditions describe only the environment of a rainstorm; the occurrence of heavy rain is not determined by environmental conditions alone. It also requires the concentration of water vapor: large amounts of water vapor must converge on the precipitation area from the surrounding region, so what matters is not only the transport of water vapor through the area (the water vapor flux) but also its convergence (the water vapor flux divergence), that is, the net gain of water vapor integrated over the whole air column above the area.
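In standard meteorological notation (not specific to this paper), the two quantities just mentioned can be written as the vertically integrated water vapor flux and its divergence, with q the specific humidity, V the horizontal wind, g gravity, and the integral taken from the surface pressure p_s to the pressure at the top of the moist layer p_t:

$$\mathbf{Q} \;=\; \frac{1}{g}\int_{p_t}^{p_s} q\,\mathbf{V}\,dp, \qquad C \;=\; -\nabla\cdot\mathbf{Q},$$

so that C > 0 denotes a net column gain of water vapor (convergence) available for precipitation, which is the "net income" referred to above.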
Another condition is strong vertical upward motion, which lifts the water vapor to high altitude, where it condenses into cloud and is converted from cloud water to rainwater that falls to the ground. Heavy rainfall requires a continuous supply of water vapor and continuous strong ascent, and large-scale weather systems (such as fronts and cyclones) alone are not enough to provide them. Military radar in World War II picked up a great deal of "noise" that interfered with the observation of military targets; this "noise" was exactly what later weather radars set out to capture, namely clouds and rain. Radar reveals systems too small to be analyzed on weather maps, and so the concept of the "mesoscale" appeared in the 1950s. Research shows that in heavy rain, and especially in torrential rain, mesoscale systems (horizontal scale 20~2000 km) are essentially the direct influencing systems. Mesoscale systems are classified in various ways; one division is into meso-α (200~2000 km), meso-β (20~200 km) and meso-γ (2~20 km) systems. The awkward problems are, first, that the dynamics of mesoscale systems differ in many respects from those of large-scale systems; and second, that these mesoscale categories are defined only synoptically and are hard to describe with a unified set of dynamical equations. For example, the meso-α scale satisfies hydrostatic balance and the meso-γ scale is non-hydrostatic, while the meso-β scale lies in between, and which balance suits it is not yet agreed. The meso-β scale corresponds exactly to convective cloud clusters and is the direct producer of heavy rain. If the hydrostatic assumption is used, useful information is inevitably lost and the picture of the rainstorm process is distorted; if the non-hydrostatic equations are used, the retained but meteorologically unimportant sound waves complicate the solution, increase the computation, and place higher demands on the initial values. These are questions for further research. 5. Main progress and problems in the study of the Meiyu rainstorm in China. Of the three summer rain belts in China, meteorologists have done the most research on the East Asian Meiyu (especially rainstorms in the Yangtze River basin). The ancients noticed the Meiyu phenomenon long ago; Su Dongpo wrote of the yellow-plum rains ceasing and a fair wind newly speeding boats a thousand li. Yet its scientific study spans less than a century, that is, the period in which meteorology has been a modern science; through the hard work of several generations of Chinese scholars, understanding of the Meiyu has deepened continuously. The most important feature of this period is that each major advance in observation technology has pushed research on Meiyu-front rainstorms greatly forward. It can be divided roughly into three stages: first, with the acquisition of surface observation data, the stage of air-mass analysis.
Zhu Kezhen, founder of modern meteorology in China, pointed out as early as 1934 that summer monsoon precipitation in the Yangtze basin results from the joint action of a southwesterly flow (tropical air mass), a southeasterly flow (tropical-subtropical air mass) and northerly flow (cold air mass) [5], drawing the blueprint for precipitation research in China. Second, with the establishment of the upper-air sounding network, Chinese scientists from the 1950s onward studied the Meiyu from the perspective of atmospheric circulation and weather systems, revealing how the circulation pattern of the Meiyu period is established and maintained: Tao Shiyan [6] revealed the abrupt change of the circulation and the role of the three airflows, and Xie Yibing [7] studied the major precipitation systems of China. Third, since the 1970s, history has pushed Chinese scientists to the mesoscale research stage: with the development of satellites and Doppler radar, Chinese scholars have done much research on mesoscale systems on the Meiyu front [8,9]. Generally, the Meiyu front appears as a shear line in a nearly east-west wind field (that is, a zone of positive vorticity). During the summer monsoon, and especially in the Meiyu season, heavy rain does not fall everywhere or at all times within the banded rain area. Analysis shows that the key issue for rainstorm forecasting is when, where and with what intensity a rainstorm may occur; this is the "bottleneck" of current rainstorm research. In other words, from the viewpoint of mesoscale dynamics, when and where mesoscale disturbances (or lows) will form on the Meiyu front is a difficult problem of mesoscale meteorology that has not yet been solved. The reason is that the study of mesoscale systems requires unconventional high-density data, the solution of nonlinear equations, and the treatment of the atmosphere as non-hydrostatic and non-geostrophic [10]. Because mesoscale lows on the Meiyu front are small, they differ in character from mid-latitude weather systems, so existing European and American models of cyclones (lows) cannot be applied in full. It has also been noted that the baroclinicity of the eastern section of the Meiyu front (the Baiu of Japan) differs somewhat from that of its western section (the Meiyu of China) [11]. Studies further show at least two types of mesoscale low-pressure disturbance on the Meiyu front in the Yangtze basin. One type has relatively small temporal and spatial scales and is closely tied to the rainstorm centers on the front. The other type begins as a small disturbance on the front and, under favorable conditions, grows into a cyclone (low) that can exceed 1000 km in scale and last several days; this latter type can trigger heavy rainfall over a larger area. Although much research has been done on the structure, stability and energy conversion of these vortices, research on the formation and development of the two types of low vortices (disturbances) is on the whole insufficient, and the mechanism of their formation remains unclear, which
often leads to errors in forecast results. At present there is still no three-dimensional mesoscale physical model of heavy rainfall, based on observed facts, that is applicable to China; building one will be an important long-term task. 6. More precise quantitative precipitation forecasts. With the development of high-speed computers it has become possible to simulate and predict rainstorm processes using fluid-dynamical equations with relatively complete physics. However, the forecasts for some heavy-rain cases are unsatisfactory: rain-belt forecasts are generally of reference value, but forecasts of the rainstorm center are poor, and the same is true internationally. Shapiro et al. [12] noted in the THORPEX international science plan that, taking the United States as an example, the threat score (TS) of its precipitation forecasts (note: not the more difficult rainstorm forecasts) is roughly 0.2, and the score for summer precipitation is even lower than in other seasons. The main reason is that the physics of current models can hardly represent the real atmosphere fully, including the parameterization of "implicit" cumulus convection, the treatment of microphysics in explicit cloud schemes, and uncertain factors in the parameterization of the atmospheric planetary boundary layer; large amounts of observational data are needed for improvement, which is especially true for China. Only in this way can a rainstorm numerical forecast model truly suitable for East Asia and China be developed. In addition, the horizontal resolution of conventional observations is low, making it difficult to capture the mesoscale systems that directly produce rainstorms. It is therefore necessary to make full use of the special high-temporal-resolution data that may be obtained, including Doppler radar, satellite and automatic surface-station data. Simply feeding such data directly into a numerical model as initial values does not achieve the desired effect, and may even be counterproductive; corresponding data assimilation techniques must be developed. With rapid economic development and urbanization, precise numerical weather prediction has been put on the agenda for the 21st century and requires serious research and development. It demands not only higher horizontal and vertical resolution, which present computing resources make possible, but also a more detailed description of atmospheric dynamics and physics, from the synoptic scale down to micron-scale cloud droplets, a span of at least 10^9 in magnitude, comparable to the ratio of the Earth to a ping-pong ball; the complexity of describing all this is self-evident. Completing the task will take time, and there is a long way to go. Moreover, although a mesoscale system is "small as a sparrow, it has all the organs": because such models need high horizontal and vertical resolution and very short time steps, their computational cost is no lower than that of a global model.
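For reference, the threat score quoted above is conventionally computed from a contingency table of forecast versus observed events as TS = hits / (hits + misses + false alarms); the following Python sketch, with made-up yes/no fields, illustrates the definition:

```python
import numpy as np

def threat_score(forecast: np.ndarray, observed: np.ndarray) -> float:
    """Threat score (critical success index) for binary event fields."""
    hits = np.sum(forecast & observed)           # event forecast and observed
    misses = np.sum(~forecast & observed)        # event observed but missed
    false_alarms = np.sum(forecast & ~observed)  # event forecast, not observed
    return hits / (hits + misses + false_alarms)

# Hypothetical yes/no fields, e.g. "24 h precipitation above a threshold".
fcst = np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=bool)
obs  = np.array([1, 0, 0, 1, 1, 0, 0, 0], dtype=bool)
print(f"TS = {threat_score(fcst, obs):.2f}")  # 2/(2+1+2) = 0.40
```

A TS of about 0.2, as cited for precipitation forecasts, thus means the correctly forecast events are only about a fifth of the union of forecast and observed events.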
7. The influence of systems moving eastward off the Qinghai-Tibet Plateau. It should also be noted that the Qinghai-Tibet Plateau, the world's highest, stands in western China. Mesoscale systems (disturbances) generated over the plateau often move eastward out of it along the westerlies in summer and superimpose on the mesoscale systems on the Meiyu front in the middle and lower troposphere of eastern China, triggering the development of low-level disturbances and causing heavy rain. Such westerly disturbances, however, are often weakened while crossing the plateau and are easily overlooked; meteorological satellites and other monitoring means are needed to track them, which could lengthen the lead time of rainstorm forecasts. Properly treating the influence of plateau topography on heavy rainfall in numerical forecast models remains a problem faced by the scientific community worldwide. 8. Multi-scale systems and their interactions. In terms of the time and space over which they act, heavy rains can be divided into persistent and sudden rainstorms. The former are mostly related to the stable maintenance of large-scale circulations such as blocking highs and the subtropical high, the latter mostly to local strong convection. Large-scale, long-lasting torrential rain may also involve interactions between middle- and low-latitude systems and between the hemispheres, and the possible influence of interannual and interdecadal monsoon variability, ENSO, low-frequency oscillations, Rossby waves and so on, which complicates the issues we must attend to. Another question worth noting is how the intensity, frequency and regional distribution of rainstorm disasters will change under global warming; the information now available cannot answer this accurately, and more research is needed. To sum up, besides the difficult problem of the formation and development of mesoscale systems, rainstorm forecasting involves multiple scales and their interactions and is very difficult. Research on rainstorm disasters in China clearly has a long way to go, and continued hard work is needed to achieve breakthroughs in the many aspects above.", "The impact of urbanization on climate change first attracted attention through the urban heat island effect: with the growth of urban population, society and economy, temperatures in urban areas become significantly higher than in the suburbs, affecting climate records over time. It was later noticed that urbanization and the proliferation of high-rise buildings also cause obvious changes in wind direction and speed in different districts. With rapid urban economic development, rising industrial emissions and energy consumption, and the growth of modern transportation, air pollution in urban areas has increased markedly: visibility has deteriorated, haze and acid rain have increased, affecting not only temperature and precipitation but also sunshine and the solar radiation received at the ground, and harming human health by increasing disease.
In recent years some studies have also noticed a "weekend effect" of urbanization, that is, systematic differences in temperature and precipitation between weekdays and weekends. In addition, as urbanization develops, observation stations originally built in the suburbs come to lie within urban areas, or the surroundings of the observation field change markedly, which can introduce changes and inhomogeneities into the records. On the other hand, urban greening such as tree planting and the construction of artificial lakes and canals may exert a "cold island effect" on urban temperature. Urbanization is advancing continuously in China and worldwide, and the extent of its impact on climate change and the possible physical mechanisms are open questions before scientists; it is therefore necessary to pay attention to and study the impact of urbanization on climate change [1~9]. One way to quantify the urban heat island effect is to classify observation stations by the population of the city in which they lie, tracking population changes over the years. Another is to take the temperature difference between an urban station and a nearby rural station (or the adjacent sea surface temperature) under the same general climatic background to represent the urban heat island influence. Taking China as an example: because many observatories lie on the outskirts of cities, they are increasingly affected by urbanization, and the heat island effect inflates the observed warming. Several studies have estimated this contribution. Earlier work gave a mean annual warming in China of 0.06°C/10a for 1951–1989, with the heat island effect accounting for about 83% of it, indicating that the urban heat island effect is very noteworthy in China [2]. Recent research comparing national stations for 1961–2004 with reference stations "free of urbanization influence" found a clear urbanization warming rate averaging 0.06~0.09°C/10a nationwide, reaching 0.10°C/10a in some strongly urbanized areas; the calculated contribution of urbanization to observed warming averages 27% nationwide, and 18%~38% across the seasons. Urbanization warming may thus account for one-fifth to one-third of the observed warming, and it has been emphasized that the contribution of the urban heat island effect to China's warming cannot be ignored [3,10]. Another recent study estimated China's urban heat island effect using the adjacent sea surface temperature, assumed free of urban influence, as the reference: it put China's total warming trend for 1951–2004 at 0.22°C/10a and the sea surface warming trend at 0.14°C/10a, so the urban heat island warming is about 0.08°C/10a, roughly 36% of the total warming.
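A minimal sketch of the urban-minus-reference trend comparison just described (the two series below are synthetic illustrations, not the data of references [2–4]): linear trends are fitted to the annual mean temperatures of an urban and a reference series, and the urbanization contribution is taken as the trend difference relative to the urban trend.

```python
import numpy as np

years = np.arange(1951, 2005)
rng = np.random.default_rng(0)

# Synthetic annual mean temperatures (°C): a shared background warming,
# with extra heat-island warming added to the urban series.
background = 0.014 * (years - years[0])            # ~0.14 °C per decade
reference = 13.0 + background + rng.normal(0, 0.2, years.size)
urban = (13.5 + background + 0.008 * (years - years[0])
         + rng.normal(0, 0.2, years.size))         # ~0.08 °C/10a extra

def trend_per_decade(t, x):
    return np.polyfit(t, x, 1)[0] * 10             # slope in °C per decade

tr_urban = trend_per_decade(years, urban)
tr_ref = trend_per_decade(years, reference)
contribution = (tr_urban - tr_ref) / tr_urban      # urbanization share
print(f"urban {tr_urban:.2f}, reference {tr_ref:.2f} °C/10a; "
      f"urbanization contribution ~ {contribution:.0%}")
```

With the numbers above, the urban trend is about 0.22°C/10a against a reference of 0.14°C/10a, giving a contribution near 36%, matching the magnitude of the study cited.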
It is held that China's urban population has grown rapidly in recent decades, with urban expansion and social and economic development, so there is an obvious urban heat island effect, but it does not alter the mainstream warming trend in China over the past 25 years [4]. The several groups of results are broadly consistent: urbanization has contributed roughly one-fifth to one-third of China's warming in recent decades, a contribution that is obvious, cannot be ignored, and is significantly higher than in developed countries [4]. Table 1. Contribution of China's urban heat island effect to observed warming. The urbanization effect can also be seen in studies of the urban "weekend effect". Analysis of the intra-week variation of daily precipitation frequency in summer (June to August) at 194 Chinese stations for 1979–2002 found a clear weekend effect: precipitation frequency rises at weekends and falls during the week, with the minimum on Wednesday, and the effect is most obvious for the frequency of light rain. Calculations of the diurnal temperature range in eastern China in winter (December to February) and summer (June to August) for 1955–2000 found the mid-week signal most obvious on Wednesday, when the daily maximum temperature and diurnal range are highest. Studies suggest that this "weekend effect" may be related to anthropogenic aerosol emissions [9]. Another effect of urbanization is to reduce wind speed. For wind speed changes over 1956–2004, 174 stations strongly affected by urbanization and 180 stations only slightly affected were selected and their annual mean wind speeds compared. The average difference between the two groups is about 0.1~0.3 m/s, and their declining trends over the past 50 years are 0.12 m/(s·10a) and 0.13 m/(s·10a) respectively, a difference of only 0.01 m/s per decade (Fig. 1) [6]. This shows that urbanization does reduce wind speed, but has little effect on the declining trend of wind speed in China in recent decades. Figure 1. Annual mean wind speed at 174 stations strongly affected by urbanization (crosses) and 180 stations slightly affected (circles) in China, 1956–2004 [6]. Observational records at stations around the world show that the solar radiation received at the ground decreased (dimming) by 1.3%/10a during 1961–1990, about 7 W/m2, and in China by 2%~5% over this period, about 4~9 W/m2; since 1991, however, there has been a slight increase (brightening). Correspondingly, global land annual mean temperature behaved differently on either side of about 1990: during the dimming it was markedly lower, and with the brightening it rose significantly (see Figure 2); China's annual mean temperature changed similarly. Studies indicate that the decrease in solar radiation received at the ground is related to increased anthropogenic aerosol emissions, that is, to the pollution accompanying urbanization [5,7,8].
\tThe annual average temperature anomaly of the global land from 1958 to 2002 [1] Although the impact of urbanization on climate change has received attention and some researches have been carried out, the main problems are: first, how to calculate urbanization reasonably and quantitatively The impact of climate change is usually compared with stations not affected by urbanization, but with the rapid development of urbanization, there are fewer and fewer observation stations that are not affected by urbanization, so the reference stations are in \" There are doubts about the method used for quantitative calculation and the credibility of the calculation results. Some recent studies use sea surface temperature as a reference, and believe that sea surface temperature is not affected by the urbanization heat island effect, which proposes a new method that can be tested [4], at least in the urbanization heat island effect of land closer to the ocean can be considered in the estimate. The second is whether the global warming of the past 50 years is an important influencing factor. One view is that global warming and urbanization have played an important role; effect, but this is only at a certain moment or period of time for individual stations in local areas, and has little effect on the average trend of annual average temperature changes at a large number of observation stations on a global scale of several decades, about 0.006\u00b0 C/10a, so the impact of urbanization\u2019s heat island effect on global warming is negligible [1,4,5]. Third, the urbanization effect not only has an impact on temperature, but also affects changes in precipitation, cloud and temperature diurnal variation, and weekend effects, but its urbanization effect needs to be estimated quantitatively. Fourth, the mechanism of urbanization\u2019s impact on climate change is still unclear. As mentioned above, whether urbanization has an impact on precipitation and changes in solar radiation received by the ground, and through what mechanism, there are great uncertainties. Fifth, there are obvious disputes about whether the solar radiation received by the ground in some areas changed from darkening to slightly brightening around 1990, and whether the corresponding changes in global annual average temperature are global and why. One view is that it is global Another point of view is that it is only a local feature [1,5,7,8]. In terms of analyzing the cause, one view is that it may be the natural variability of climate, and the other view is that it may be caused by air pollution caused by urbanization. Sixth, whether the impact of urbanization on climate change is positive or negative, and how to prevent negative effects. Today, with the rapid development of urbanization, these key issues need to be further studied and resolved.", "Climate change includes natural climate change and climate change caused by human activities, and the climate change caused by human activities has attracted more and more attention from scientists, policy makers and the public in various countries. Since the publication of the first scientific assessment report on climate change by the Intergovernmental Panel on Climate Change (IPCC) in 1990 to 2007, four scientific assessment reports have been published. 
These reports have assessed the large body of research of the past 20 years on the focal question of whether human activities have caused, are causing and will continue to cause global climate change and impacts, and what countermeasures and strategies should be adopted [1–3]. As humanity develops, population grows and society and economy advance, energy use and consumption increase on the one hand, emitting large amounts of greenhouse gases and anthropogenic aerosols into the atmosphere, changing its composition, raising the concentrations of carbon dioxide and other greenhouse gases and worsening air pollution; on the other hand, human land-use changes, such as deforestation, farmland development, desertification and salinization, significantly alter the land, vegetation and water. If these human activities cause significant and destructive changes in climate, they will in turn threaten the human habitat. It is therefore necessary for countries to put emission reduction, the development of clean energy and the development of a low-carbon economy on the agenda, which involves economic development, carbon trading, the responsibilities of each country and many other issues. The relationship between human activities and climate change is thus not only a very important scientific question but one that touches politics, economics, military affairs, society and security, and it has attracted great attention from all walks of life. The first aspect of studying the role of human activities in global climate change is to analyze their role in the global warming of the 20th century. Over the past 20 years the four IPCC assessment reports have progressively deepened understanding of this role. The first report held that the global climate change of the past century might be caused by natural fluctuations, by human activities, or by both; the second pointed out that more and more evidence showed the influence of human activities had been detected; the third concluded that new and stronger evidence indicated the observed warming of the past century was likely (66%~90%) caused by the increase in greenhouse gas concentrations due to human activities, though uncertainties might remain; and the fourth stated further and more clearly that the observed warming of global average temperature since the mid-20th century is very likely (>90%) due to the observed increase in the concentrations of greenhouse gases emitted by humans into the atmosphere. Moreover, the human influence is manifested not only in global (land and ocean) annual mean warming, but also in land warming on hemispheric and continental scales and at all latitudes, and in warming in all seasons, with especially significant warming in winter at middle and high latitudes [1–3].
Evidence from many studies also shows that warming appears not only at the surface but in the upper and middle ocean and in the troposphere; maximum and minimum temperatures are rising while the diurnal range shrinks; frost days are fewer, heat waves stronger and more frequent, glacier melt intensified, land ice and sea ice at high latitudes and in the polar regions melting faster, mountain snow lines retreating, permafrost thawing faster, sea level rising and the winter monsoon weakening; this evidence is clearest over the past 50 years [3]. As an example, Fig. 1 shows the global annual mean surface temperature anomalies for 1906–2005, comparing observations with the global and continental mean changes simulated by climate models under natural forcing alone and under combined natural and anthropogenic forcing. The studies show that models considering only natural forcings such as solar and volcanic activity cannot reproduce the global and continental warming of the 20th century; only when anthropogenic forcing is included can they simulate the significant warming since the middle of the century. By the fingerprint method, the increase in anthropogenic greenhouse gases is likely the main cause of the evident warming of the globe, land, ocean and continents over the past 50 years [3]. Research on China likewise confirms that the warming of the past 50 years is likely caused by the increase in anthropogenic greenhouse gases [5–10]. The second aspect is projection of the role of human activities in future climate change, over the next century or longer: considering population growth, social and economic development, energy emissions and land use, a variety of scenarios of future human activities are designed artificially, and large numbers of climate models are used to project future global climate change. The first IPCC assessment report used a scenario in which human carbon dioxide emissions keep growing as usual, the "business as usual" (BAU) scenario; being a no-mitigation, high-emission scenario, it reaches a doubling of carbon dioxide around 2030. The second assessment report assumed carbon-dioxide-equivalent emissions growing at roughly 1% per year, more or less, giving a doubling time between about 2030 (high-emission scenarios) and around 2070 (medium emissions), or never reached (low-emission scenarios), with six emission scenarios in all (IS92a–f). The third assessment report continued to use the IS92 scenarios while also designing 40 emission scenarios (SRES) that consider increases in greenhouse gases and anthropogenic sulfate aerosols, including high-emission scenarios such as SRES A2, medium scenarios such as SRES A1, A1B and B2, and low scenarios such as SRES B1. The fourth assessment report continued with the SRES scenarios, mostly using A2, A1B and B1 [1–3]. Recently, in preparation for the fifth assessment report, new representative concentration pathways (RCPs) were issued, considering radiative forcings of 2.6, 4.5, 6.0 and 8.5 W/m2 [4].
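The doubling times quoted above follow from simple compound growth. For example, under the roughly 1% per year growth of CO2-equivalent concentration assumed in the second assessment report, the doubling time is

$$t_{2\times} \;=\; \frac{\ln 2}{\ln(1.01)} \;\approx\; 69.7\ \text{years},$$

so a doubling starting around 1990 lands near 2060, consistent with the report's "around 2070" for medium emissions once growth somewhat slower than 1% ("more or less") is allowed for; faster growth shortens the doubling time toward the 2030 of the high-emission scenarios.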
According to the IPCC Fourth Assessment Report, 23 global climate models run under 7 future emission scenarios have been used to project the change of global annual mean temperature through the 21st century. Warming continues in all scenarios; early on the differences between scenarios are small (for example, a warming of 0.4~0.9°C by 2020), but later they are large (for example, roughly 0.5~4.0°C by 2100) [3] (see Figure 2). Projections of 21st-century climate change under the various anthropogenic emission scenarios indicate that human activities will likely continue to cause global warming, ocean warming, continental warming and warming of temperature extremes, continued sea level rise, and accelerated melting of ice and snow. However, the projected effects of human activities on precipitation, on climate phenomena such as ENSO and the monsoons, and on extreme events such as extratropical and tropical cyclones and anomalous droughts and floods differ greatly between climate models [3]. The model projections of future temperature change under the various human activity scenarios begin in 1990 for the first three assessment reports and in 2000 for the fourth. Figure 1. Annual global (land and ocean, respectively) and continental mean surface temperature anomalies, 1906–2005: black lines are observations (10-year means); blue shading shows 19 simulations by 5 global climate models with natural forcing only (solar and volcanic activity); pink shading shows the mean temperature anomalies from 58 simulations by 14 models with combined natural and anthropogenic forcing (greenhouse gases and anthropogenic aerosols) (relative to the 1901–1950 mean; shading spans the 5%~95% range of the simulations) [3]. By now there are respectively 19 or 9 years of observed facts against which these projections can be compared, to test the confidence with which climate models account for human activity. The tests show that, under the various scenarios, all the models consistently predicted the warming trend of this period, but anomalous warmth in individual years (such as 1998 and 2005~2008) was not predicted, the projected warming being too small (see Figure 3) [3,8]. The test of 88 climate-model projections for China under a variety of human activity scenarios reached a similar conclusion: the warming trend was consistently predicted, but neither the anomalously warm years 1998 and 2007 nor the weak warming of 1996, 2000 and 2005 was captured (figure omitted) [8,10]. This suggests that besides the warming trend caused by human activities there are other forcing factors at work. Although research on the role of human activities in global climate change has made significant progress over the past two decades, many problems, confusions, doubts and dissenting views remain [3–10]. The first concerns the past century of global climate change: is the warming caused by human activities, or by natural forcing and quasi-periodic or multi-decadal variability within the climate system?
Figure 2. Projected global annual mean temperature changes for the 21st century (relative to 1980~1999); the black line is the observed 20th-century temperature anomaly, and the shading spans ±1 standard deviation of the projections [3]. Figure 3. Predicted and observed global annual mean warming [3,8]; the colored lines are the predictions of the first, second, third and fourth assessment reports, and the thick black line is the observations (Jones, personal communication). As an example, Figure 4 shows the evolution of the northern hemisphere annual mean temperature anomaly from 1880 to 2008. In the past hundred-plus years there is, besides the obvious warming trend, an obvious quasi-periodicity: in addition to warming caused by human activities, there are other forcings and interactions and feedback processes within the climate system; the marked warm period of the 1940s, for instance, may be connected in part with the scarcity of volcanic activity. China's annual mean temperature shows similar features (figure omitted) [8,10]. The second doubt is that greenhouse gas emissions caused by human activities are overestimated, so their climatic effect is exaggerated: comparing the simulated distribution of 50-year zonal mean temperature trends with height against observations reveals a spurious warm center over the equator in the middle and upper tropical troposphere, prompting the question of whether the greenhouse effect is overstated (see Figure 5); some go further and hold that climate change is mainly natural, the human contribution being small or even negligible [9]. The third is that the future human activity scenarios are artificial assumptions rather than actual future emissions, so the credibility of future climate projections is very low, or even nil. The fourth is that current projections assume only future human activity scenarios and do not attempt to predict future natural forcing such as solar and volcanic activity, adding to the unreliability of the projections (see Figure 3). The fifth is that all attribution analyses and future projections rest on climate models, and both global and regional climate models carry uncertainties in describing the interactions and feedback mechanisms among the spheres of the climate system, uncertainties that are even larger at regional scales. The sixth is that urbanization has been marked over the past 50 years, so the growing urban heat island effect cannot be ignored. The seventh is that the solar radiation received at the ground decreased (dimming) from the 1950s to the 1980s and has increased (brightening) since 1990, and this could itself produce the evident warming of the past 50 years [10]; why the ground-received radiation changed in this way is still much debated.
Figure 4. Northern hemisphere annual mean temperature anomaly, 1880–2008 (black), with trend line (red) and 21-year running mean (blue) (Latif, personal communication). Figure 5. Latitude-height distribution of 50-year zonal mean temperature trends: panels (A) CCSM, (B) PCM, (C) GFDL and (D) GISS are 20th-century full-forcing simulations by four US global climate models, and (E) is the radiosonde observation. Scientists have studied the scientific problem of the role of human activities in global climate change for decades, and more in-depth research is needed to answer and resolve it.", "The root causes of climate change lie in the forcing and driving of natural factors, so studying the role of natural factors in global climate change is extremely important. Global climate change is affected by different natural factors on different time scales: crustal movements such as continental drift and polar wander, isostasy and orogeny; the Earth's orbital parameters (axial tilt, precession, eccentricity, etc.); volcanic activity; solar activity (sunspots, solar irradiance, the magnetic field, galactic cosmic rays, ultraviolet radiation, etc.); galactic dust; the galactic spiral arms; and so on. On millennial and centennial time scales, solar activity and volcanic activity are the two main natural factors affecting the global climate system (including its five spheres), and long-term research has concentrated on their influence on climate. Within the global climate system as a whole, interactions and feedbacks within and among its spheres exist on all time and space scales, and form intricate interaction processes with the external forcing factors, involving physical, chemical and biological processes. Solar activity has many aspects; sunspot numbers have nearly 400 years of observational records, although there are gaps and different observation series differ slightly. Sunspot numbers vary with clear 11-year, annual and century-scale cycles, and the sunspot maxima tended to increase from the early 19th century to the 1960s (Fig. 1) [1–4]. Studies show a certain relationship between sunspot number and global annual mean temperature: in general, temperatures in sunspot-peak years are higher than in sunspot-minimum years. During the Maunder Minimum, records of severe cold appeared: the three freezings of the Thames in Britain (1684, 1694 and 1709) all fell in this period, and in the same years Taihu Lake, the Hanshui River, the Huaihe River and Dongting Lake froze three to four times. The weakening of solar activity may therefore be an important cause of the formation of the Little Ice Age. A comparison of solar activity and climate over the past millennium, synthesizing records from East Asia, the former Soviet Union, Europe, North America, the Arctic and the southern hemisphere, finds five cold periods, in the 1100s~1150s, 1300s~1390s, 1450s~1510s, 1560s~1690s and 1790s~1890s; these cold periods roughly correspond to the Maunder Minimum and the Spörer Minimum (1420~1570).
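A minimal sketch of how the quasi-11-year cycle mentioned above can be picked out of an annual sunspot series with a periodogram (the series here is synthetic; real annual sunspot numbers can be obtained from observatory archives):

```python
import numpy as np

# Synthetic annual "sunspot number": an 11-year cycle plus noise.
years = np.arange(1749, 2010)
n = years.size
rng = np.random.default_rng(1)
ssn = 80 + 60 * np.sin(2 * np.pi * years / 11.0) + rng.normal(0, 15, n)

# Periodogram of the mean-removed series.
anom = ssn - ssn.mean()
power = np.abs(np.fft.rfft(anom)) ** 2
freq = np.fft.rfftfreq(n, d=1.0)                 # cycles per year

# Dominant period, excluding the zero-frequency term.
k = 1 + np.argmax(power[1:])
print(f"dominant period ~ {1.0 / freq[k]:.1f} years")  # close to 11 years
```

The same analysis applied to a temperature or precipitation series is one simple way to check for the quasi-periodic correspondences discussed in the text, within the frequency resolution that a 261-year record allows.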
It is worth noting that sunspot numbers in the early and late 19th century and in 2009 were anomalously low, and temperatures in parts of the world were correspondingly low. On the other hand, the Medieval Warm Period (900~1300) coincides considerably with the medieval sunspot maximum (1140~1340). The strengthening and weakening of solar activity thus corresponds well with global temperature. Solar activity has clear quasi-11-year, 22-year and century-scale cycles, and many meteorological elements show similar quasi-periodic variations: the intensity of the North Atlantic westerlies, the water level of Lake Victoria in Africa, precipitation in central Europe, East Asia and parts of China, the Meiyu, typhoons over the northwest Pacific, sea level along the Atlantic coast, and Bering Sea ice extent. It has therefore been proposed that these may be related to the quasi-periodicity of solar activity [1,2]. Beyond such diagnostic analyses, some studies have used climate models to test the sensitivity of climate to solar activity. Most simulations show that if the solar constant increased by 1%, global temperature might rise by about 1.5°C, and by about 3°C for a 2% increase. The response is asymmetric: a 2% decrease of the solar constant could lower global mean temperature by 4°C or more. These simulations suggest that changes in the solar constant and the greenhouse effect contribute comparably to global temperature, so the role of solar activity in the temperature changes of recent decades may have been underestimated [1,3,4]. Figure 1. Sunspot (Wolf) numbers from the Zurich Observatory, 1749–2009 (261 years) (data from the Zurich Observatory network, Switzerland, provided by Ren Guoyu, 2010). Volcanic ash and aerosols spread by eruptions evidently reduce the direct solar radiation reaching the ground and thereby perturb the heat balance of the climate system. Studies show that 3 to 5 months after a strong eruption, the direct solar radiation received at the ground can be reduced by 20%~30%; the cooling is pronounced, low temperatures can persist for 10~15 months, temperature is significantly affected within 1~2 years, and recovery takes about 4~5 years. The effects of volcanic activity differ markedly by region and season, and eruptions in different hemispheres have different impacts. For example, after the 1982 eruption of El Chichón in Mexico, systematic surface radiation and satellite observations clearly showed the volcanic cloud spreading from east to west around the earth within a matter of weeks; direct radiation was measured to decrease by 33%, scattered radiation to increase by 77%, and total radiation to decrease by 6%. As an example, the relationship between global and Chinese annual mean temperature anomalies and volcanic activity over the past century is given below.
As an example, consider the relationship between global and Chinese annual mean temperature anomalies and volcanic activity over the past century. With six eruptions noted in 1902, 1907, 1912, 1956, 1982 and 1991, volcanic activity contributed to the cool period of 1900-1915, while the 1920s-1940s were a period of volcanic quiescence and were relatively warm globally and a warm period in China; the 1982 and 1991 eruptions caused cooling, or interrupted warming, in the following 1-3 years (Figure 2). Volcanic activity may also be linked to particular climate phenomena: low summer temperatures in Japan are closely related to volcanoes, and the four famous cold-damage years in Japanese history (1695, 1755, 1783 and 1837) were all associated with strong eruptions. China's century-long temperature record likewise shows that within two years after an eruption, summer and autumn temperatures over a wide area are clearly low, while the East Asian midsummer monsoon rain belt tends to shift southward, easily producing drought in the north and flooding in the south. Studies have also pointed out that since the 17th century, 67% of cold summers in East Asia had a strong volcanic eruption in the same year or the year before, and that East Asian cold summers recur on a roughly 70-year cycle that coincides with a 70-year cycle of volcanic activity [1,2,5,7]. Some studies have used climate models that include volcanic eruptions to simulate the subsequent climate change: after the eruption of Mount Pinatubo, the lower-troposphere temperature fell markedly within 10 to 15 months, with a simulated global mean cooling of about 0.5°C, similar to observations, and the influence then tended to disappear. Figure 2 Observed global (top) and China (bottom) annual mean temperature anomalies over the past century [1,3]; the black triangles mark volcanic eruptions, and the curves of different colors show the series of different authors. In recent years 13 climate models have been used to simulate northern-hemisphere temperature change over the past millennium (see Figure 3). Driven by both natural external forcing (solar and volcanic activity) and anthropogenic forcing (anthropogenic emissions of greenhouse gases and tropospheric sulfate aerosols), the models consistently reproduce the main features of northern-hemisphere temperature change over the past 1100 years, namely the Medieval Warm Period (about 1200~1400) and the Little Ice Age (the mid-15th century, the 17th century and the early 19th century). These studies show that natural external forcing played a significant role in temperature change over the first 900-1000 years, while anthropogenic forcing may have played a significant role in the warming of the last century. Although the 13 studies differ in their external forcings and in the climate models used, the simulations broadly resemble the proxy-based reconstructions of northern-hemisphere temperature over the past 1100 years. This shows that natural external forcing factors such as solar and volcanic activity play an important role in climate change and cannot be ignored in simulation and prediction [2].
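The time course of post-eruption cooling described above (a marked drop within 1~2 years, recovery over several years) can be caricatured with a one-box relaxation model. This is a hedged sketch: the forcing amplitude, decay time and relaxation time below are assumptions chosen for illustration, not values fitted to any particular eruption.

```python
import math

# Toy relaxation sketch of post-eruption cooling: an aerosol forcing
# pulse decaying over ~1 year drives a one-box climate with a ~3-year
# relaxation time. All parameter values are illustrative assumptions.
C_TAU = 3.0     # climate relaxation time, years (assumed)
F_TAU = 0.8     # aerosol forcing decay time, years (assumed)
LAMBDA = 0.8    # sensitivity, K per (W m^-2) (assumed)
F0 = -4.0       # peak volcanic radiative forcing, W m^-2 (assumed)

dt, t, T = 0.01, 0.0, 0.0
coolest = (0.0, 0.0)  # (time, temperature anomaly)
while t < 8.0:
    F = F0 * math.exp(-t / F_TAU)        # decaying aerosol forcing
    T += dt * (LAMBDA * F - T) / C_TAU   # dT/dt = (LAMBDA*F - T)/C_TAU
    t += dt
    if T < coolest[1]:
        coolest = (t, T)
print(f"peak cooling {coolest[1]:.2f} K at t = {coolest[0]:.1f} yr")
```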
Figure 3 (a) Volcanic forcing, (b) solar forcing, (c) all other forcings (including anthropogenic emissions of greenhouse gases and tropospheric sulfate aerosols), and (d) the annual mean northern-hemisphere temperature change over the past 1100 years (relative to 1500-1899) simulated by climate models driven by (a)~(c) [2]. The internal interactions of the global climate system are intricate and span many time and space scales. The most studied include: ocean-atmosphere interaction (in the low latitudes, the mid-high latitudes, the Indian and East Asian monsoon regions, ENSO, and ocean-current versus non-current regions); the interaction between ocean circulation and climate (wind-driven currents, the thermohaline and meridional overturning circulations, and the heat and moisture transport of the ocean circulation); the influence of the cryosphere on climate (snow cover, Arctic sea ice, Antarctic ice and snow, and Qinghai-Tibet Plateau snow); the influence of the terrestrial lithosphere and biosphere on climate (the terrestrial water and heat cycles, land-use change, and vegetation change); and quasi-periodic and decadal-to-centennial climate variability (the North Atlantic Oscillation, North Pacific Oscillation, Southern Oscillation, Antarctic and Arctic Oscillations, monsoon variability, the tropical Atlantic dipole, the Indian Ocean dipole, and temperature and precipitation variability). As an example, Table 1 compares the North Atlantic thermohaline circulation (THC) with North Atlantic sea surface temperature (SST), sea level pressure (SLP), the North Atlantic Oscillation (NAO) and other climate elements [1]. Note that the interdecadal variability of the THC may be one of the causes of the interdecadal variability of climate elements in the North Atlantic region; of course, this relationship needs verification against more observational data, and its physical mechanism requires further study. Table 1 Comparison of variation trends of the North Atlantic thermohaline circulation (THC) and climate elements [1]. In summary, the causes of modern climate change include both natural and anthropogenic forcing: it is the intricate interplay of solar activity, volcanic activity, interactions and feedbacks within the global climate system, and external forcing that constitutes the cause of modern climate change. The role of natural factors such as solar and volcanic activity in global climate change on the centennial time scale has always been controversial. First, detailed long-term observational records of solar and volcanic activity are lacking. Second, the physical mechanisms by which they affect global or regional climate remain unclear. Third, predicting future climate change is hampered because future solar activity, and especially volcanic activity, is extremely difficult to forecast. Fourth, climate models show clear uncertainties when natural factors are included in simulation and prediction, and deeper research is needed to reduce them. The interactions and feedbacks within the climate system are likewise intricate and require much further study [8].
On the other hand, if the influence of human activities is added, the problem becomes still more complicated: considering simultaneously the interactions of solar activity, volcanic activity, human activities and the various spheres of the global climate system [9,10] makes the attribution of climate change an extremely difficult scientific problem.", "Climate change is closely related to long-lived greenhouse gases in the atmosphere such as carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), as well as to reactive carbon and nitrogen gases that affect radiative forcing indirectly, such as nitrogen oxides (a general term for nitric oxide and nitrogen dioxide, written NOx), ammonia (NH3) and volatile organic carbon (VOC). These long-lived and reactive carbon and nitrogen gases are reactants or products of the carbon and nitrogen biogeochemical cycles [1,2]. The global carbon cycle is a process that starts and ends with atmospheric CO2 and is closed through intermediate links such as photosynthesis and respiration. A process that removes CO2 from the atmosphere and keeps it from returning for a long period is called a sink of atmospheric CO2, or carbon sink for short; conversely, a process that releases long-sequestered CO2 back to the atmosphere, or converts carbon existing in other forms into atmospheric CO2, is called a source of atmospheric CO2, or carbon source. For various reasons the sources of atmospheric CO2 exceed the sinks, so atmospheric CO2 gradually accumulates and the atmospheric greenhouse effect strengthens, which is considered one of the important causes of global warming. The global nitrogen cycle is a process that starts and ends with nitrogen gas (N2) and is closed through intermediate links such as nitrogen fixation, nitrification and denitrification. All forms of nitrogen other than N2 are collectively called reactive nitrogen, written Nr [3]. Processes converting N2 into Nr are sources of Nr, and the reverse are sinks of Nr [4]. For various reasons the sources of Nr exceed the sinks, so Nr gradually accumulates in soils, water bodies, the atmosphere and other systems, and the resulting nitrogen enrichment not only threatens ecological security and human health but also directly or indirectly affects the global climate [3-5]. The global nitrogen and carbon cycles are joined where photosynthesis uses CO2 and Nr to synthesize the organic carbon and nitrogen compounds of biological tissue. Carbon and nitrogen, as major elements of biological tissue and of the substances that carry life, have biogeochemical cycles that mesh like two tightly linked gears: changes in the carbon cycle always drive changes in the nitrogen cycle, and vice versa. Hence the global nitrogen cycle affects climate not only by enriching the atmosphere in Nr components such as N2O, NH3 and NOx, but also by regulating the CO2 and CH4 source-sink processes of the global carbon cycle and the emission of VOC.
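The source/sink bookkeeping defined above can be made concrete with a minimal mass-balance sketch: when sources of atmospheric CO2 exceed sinks, the difference accumulates in the atmosphere. The fluxes and the GtC-per-ppm conversion below are illustrative assumptions, not values from the text.

```python
# Minimal mass-balance sketch of the carbon source/sink bookkeeping
# described above. All numbers are illustrative assumptions.
GTC_PER_PPM = 2.12  # approx. GtC of carbon per ppm of atmospheric CO2

def co2_trajectory(c0_ppm, source_gtc, sink_gtc, years):
    """Integrate atmospheric CO2 (ppm) with constant source/sink fluxes."""
    c, out = c0_ppm, []
    for _ in range(years):
        c += (source_gtc - sink_gtc) / GTC_PER_PPM  # net flux -> ppm/yr
        out.append(c)
    return out

# Illustrative run: ~10 GtC/yr emitted, ~5.5 GtC/yr taken up by sinks.
traj = co2_trajectory(c0_ppm=400.0, source_gtc=10.0, sink_gtc=5.5, years=10)
print(f"after 10 years: {traj[-1]:.1f} ppm")  # ~ +2.1 ppm per year
```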
Because the carbon and nitrogen biogeochemical cycles of ecosystems are governed by environmental conditions such as moisture, temperature and light, climate change will in turn affect the emission or uptake of the long-lived greenhouse gases and reactive carbon and nitrogen gases mentioned above. Human activities, in producing food, fiber and fossil energy, generate large amounts of Nr, raising the global anthropogenic Nr source from about 15 million tons per year before the industrial revolution (equivalent to about 6% and 13% of the then global total natural source and terrestrial natural source, respectively) to 187 million tons in 2005 (roughly 80% and 180% of the global total natural source and terrestrial natural source in that year); if the present growth rate continues it will reach 267 million tons in 2050 (roughly 120% and 270% of the global total natural source and terrestrial natural source at that time) [3,5]. We know that the terrestrial soil organic carbon pool is 2~3 times the atmospheric carbon pool [6], so a slight change in the soil carbon pool may cause a significant change in atmospheric CO2 concentration and thereby affect climate. We also know that carbon and nitrogen in terrestrial vegetation and soil organic matter keep a stoichiometric balance, that is, the carbon-to-nitrogen ratio stays near a certain level under given conditions. This means that increasing anthropogenic Nr in terrestrial ecosystems may increase the soil carbon sink, which would help mitigate climate warming. At the same time, increasing anthropogenic Nr will increase N2O emissions, a change that affects climate not only by directly enhancing the greenhouse effect but also indirectly by destroying the atmospheric ozone layer [7]. In addition, increasing anthropogenic Nr will increase the emission of other Nr gases, which may counteract climate warming under some conditions and reinforce it under others [3,5]. Furthermore, increasing anthropogenic Nr can also affect climate by altering the carbon cycling of soil and aquatic ecosystems and hence the source-sink processes of CO2 and CH4 [8]. However, what net climate effect, direct or indirect, such a large increase in anthropogenic Nr sources will have on the global or regional scale is still unclear [5]. To solve this problem, a series of scientific questions must first be answered: How can the sinks of anthropogenic Nr be accurately quantified [4]? How do those sinks change as anthropogenic Nr sources increase? How do these changes affect the enrichment of anthropogenic Nr in soils, waters and the wider environment? Where does anthropogenic Nr go in the different systems (soil, water, atmosphere)? How do its individual fates change as sources increase? How is this anthropogenic Nr finally reduced back to N2, and how long does the whole process take [5]? How do these fates affect the carbon cycle and the CO2 source-sink balance during reduction? And how will emerging human activities such as biomass-fuel development affect the global or regional nitrogen cycle [5]?
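As a rough illustration of the stoichiometric coupling running through these questions, the sketch below converts an assumed amount of retained reactive nitrogen into an implied extra carbon sink via a fixed C:N ratio. Every number in it is an assumption for illustration only.

```python
# Back-of-envelope sketch of the carbon-nitrogen stoichiometric coupling
# described above: if vegetation and soil organic matter hold carbon and
# nitrogen near a fixed C:N ratio, added reactive nitrogen (Nr) retained
# in the ecosystem can sequester extra carbon in proportion.
# All numbers are illustrative assumptions, not values from the text.
def extra_carbon_sink(nr_added_tg, retention, c_to_n):
    """Extra C sequestered (Tg C) for Nr added (Tg N), given the fraction
    retained in vegetation/soil and an effective C:N mass ratio."""
    return nr_added_tg * retention * c_to_n

# Illustrative: 100 Tg N/yr of anthropogenic Nr, 30% retained on land,
# effective ecosystem C:N mass ratio of ~25.
print(extra_carbon_sink(100.0, 0.30, 25.0), "Tg C per year")  # 750.0
```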
Each of the questions above is a frontier topic and a hard problem now facing global change science; finding their answers and finally elucidating the net climate effect of anthropogenic Nr is a great challenge for Chinese and world scientists.", "The characteristics and causes of climate change in China against the background of global warming are a matter of public concern. Monitoring climate change requires accurate observations; China's relatively systematic network of meteorological stations was established around 1950. Chinese scholars have therefore done a great deal of work on the characteristics of climate change over the past 60 years. Taken together, these studies find that climate change in eastern China shows many special characteristics [1-3]. In terms of the temperature and precipitation changes of widest concern, they can be summarized in two points: a contrast of \"cold versus warm\", and a contrast of \"flood versus drought\". The \"cold versus warm\" contrast, shown in Figure 1, means that temperature change in China is not uniform warming: a cooling zone centered on the Sichuan Basin stretches in an east-west belt, with warming on either side; the cold center over the Sichuan Basin persists year-round, and the cooling in the Yangtze River basin is strongest in summer. The \"flood versus drought\" contrast, shown in Figure 2, means that although cold-season precipitation in China shows an overall increasing trend, the main summer rain belt has, since the late 1970s, shifted southward from North China to the middle and lower reaches of the Yangtze, forming the pattern of \"flooding in the south and drought in the north\"; after entering the 21st century the rain belt has recently moved northward again to the Huaihe River basin. Figure 1 Trends in China's surface air temperature averaged over three consecutive months, 1951-2000 (unit: °C/50a); from Zhou Tianjun et al., 2008. Figure 2 Trends in China's precipitation averaged over three consecutive months, 1951-2000 [unit: mm/(d·50a)]; from Zhou Tianjun et al., 2008. The observed facts of climate change in East Asia are relatively clear; what remains unresolved is the cause, the mechanism. Scholars at home and abroad have done much work here. In summary, the mechanisms proposed so far for East Asian climate change involve the effects of tropical ocean warming, of warming over the Qinghai-Tibet Plateau, of anthropogenic aerosol emissions, of global warming, and of the natural variability of the climate system [3,4]. The problem is complicated by China's unique geographical location and climate: China lies at the southeastern corner of the Eurasian continent, spans the high, middle and low latitude zones, and has the world's highest plateau to its west and the world's largest ocean to its east; the climate is changeable and the controlling systems are complex. Because so many factors affect climate anomalies in China, it is still difficult under present conditions to estimate the relative contributions of the various factors accurately and quantitatively.
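For concreteness, the linear trends quoted in the figure captions above (units of °C/50a) are typically obtained by an ordinary least-squares fit against time, scaled to a 50-year change; the sketch below shows this on synthetic data (the input series itself is an assumption).

```python
import numpy as np

# Minimal sketch of estimating a linear trend in units of degC per 50
# years, as in the figure captions above. The synthetic series below is
# an assumption for illustration, not observed station data.
def trend_per_50yr(years, values):
    """Least-squares linear trend of `values`, expressed per 50 years."""
    slope, _intercept = np.polyfit(years, values, deg=1)
    return slope * 50.0

rng = np.random.default_rng(0)
years = np.arange(1951, 2001)
temp = 0.01 * (years - 1951) + rng.normal(0.0, 0.3, years.size)
print(f"trend: {trend_per_50yr(years, temp):+.2f} degC / 50a")
```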
To study the causes of climate change in East Asia we must rely not only on multi-angle analysis of observations to reveal new facts, but also on numerical experiments with climate models, assigning different forcing factors and examining their effects so as to assess the relative contribution of each factor. In both respects Chinese scholars have made remarkable progress in recent years. Analyses of observations point out that climate change in East Asia is not an independent phenomenon but the local manifestation of the interdecadal climate shift of the northern hemisphere in the late 1970s, and a component of global land monsoon change [5]. East Asian climate change has a specific three-dimensional structure corresponding to the changes in surface temperature and precipitation: the middle and upper troposphere over East Asia shows decadal-scale cooling, with the cold center most pronounced in spring and summer. The temperature anomaly drives an anomalous cyclonic circulation aloft that strengthens the westerlies south of the East Asian jet axis, while the anomalous anticyclonic circulation below weakens the East Asian summer monsoon. The enhanced westerly jet changes the divergence in the middle and upper troposphere and triggers a distinctive cloud-radiation feedback that plays an important role in cooling surface air temperature downstream of the Qinghai-Tibet Plateau, producing the \"southern flood, northern drought\" pattern of precipitation anomalies [1,6]. Regarding the spring cooling of the upper troposphere over East Asia, a significant link has been found with the strengthening trend of the North Atlantic Oscillation (NAO) in recent decades: from March to May the cold center gradually moves southward and strengthens, reaching south of 35°N in May and leading to the drought trend in southeastern China over the past 50 years [7]. In midsummer (July-August) the tropospheric cold center lies near 40°N, 110°E, which produces the \"flood in the south, drought in the north\" pattern of precipitation anomalies in eastern China [8]. Why the middle and upper troposphere over East Asia cools in summer is still hard to answer conclusively. Numerical experiments with atmospheric circulation models show that when the models are driven by observed SST changes, especially tropical SST changes, the weakening trend of the monsoon circulation can be partially reproduced, but the simulated East Asian tropospheric temperature change is very weak [9-11]. In these experiments the indirect effect of changing ocean-land thermal contrast and the direct effect of tropospheric temperature change jointly weaken the monsoon circulation. Constrained by model performance, however, it is still difficult to simulate and reproduce the actual precipitation changes.
Moreover, the results of current climate simulation experiments do not support this point of view [4].", "The prediction and projection of climate change is a common concern of the scientific community, the public and policy makers, and bears closely on social and economic development. Climate system models are the essential tool for climate prediction and projection. Climate models grew out of atmospheric general circulation models, and climate system models are their extension. The climate system comprises the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere; correspondingly, a complete climate system model contains four basic component models, for the atmospheric circulation, the ocean circulation, land-surface processes and sea ice. Because it comprehensively treats the interactions among the \"sea-land-air-ice\" spheres, it is usually called a \"physical climate system model\", or simply a \"climate system model\" [1-3]. Adopting a \"pluggable\" modular framework is the international trend in climate system model development: through a flexible \"coupler\", the four component models of atmosphere, ocean, land surface and sea ice are coupled into a climate system model, which aids the sustainable development of the model [4]. Climate change involves complex interactions among the spheres of the Earth's climate system, and the physical climate system model has become an important tool for understanding the mechanisms of past climate change and for predicting future climate anomalies and changes. For example, since the early 1990s the Intergovernmental Panel on Climate Change (IPCC), through the Working Group on Coupled Modelling (WGCM) of the World Climate Research Programme (WCRP), has organized the relevant climate modeling centers to conduct global climate change simulation experiments with climate system models, aimed at the causes and mechanisms of past climate change and at scenarios of future change, forming a scientific assessment report roughly every five years, the IPCC report. By 2007 the IPCC had published four scientific assessments of climate change. Among the series of experiments organized by WGCM for the IPCC assessments, the \"20th Century Climate\" simulations (20C3M) and the future climate change projection experiments have had the greatest impact. Simulations of the 20th-century climate provide modeling evidence for understanding the mechanisms of climate change over the past century. The projection experiments drive the climate system models with the greenhouse-gas emission scenarios of the IPCC Special Report on Emissions Scenarios (SRES) to project potential climate change over the next century, and the results often serve as one of the important bases for the long-term social and economic development plans of countries and regions. It must be pointed out, however, that owing to the limits of current model capability, the reliable skill of these simulation and projection experiments, whether of past or of future climate change, is confined to the global, hemispheric and continental scales; at the regional scale the uncertainty of the results is very large and they are largely unreliable [5].
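The \"pluggable\" coupler architecture described above can be sketched schematically as components that advance in turn and exchange fluxes only through a coupler. This is a structural illustration under assumed, highly simplified interfaces, not the API of any real model or coupler.

```python
# Schematic sketch of a modular "coupler" design: atmosphere, ocean,
# land and sea-ice components step in turn, exchanging fluxes only
# through the coupler. A structural illustration, not a real model.
class Component:
    def __init__(self, name):
        self.name = name
        self.state = {}

    def step(self, forcings, dt):
        # A real component would integrate its own dynamics here.
        self.state["last_forcings"] = forcings
        return {f"{self.name}_flux": 0.0}  # placeholder exported fluxes

class Coupler:
    def __init__(self, components):
        self.components = components
        self.fluxes = {}

    def run(self, n_steps, dt):
        for _ in range(n_steps):
            for comp in self.components:          # sequential coupling
                exported = comp.step(dict(self.fluxes), dt)
                self.fluxes.update(exported)      # regrid/merge in reality

model = Coupler([Component("atm"), Component("ocn"),
                 Component("lnd"), Component("ice")])
model.run(n_steps=4, dt=1800.0)
print(sorted(model.fluxes))
```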
At present the climate system model is developing toward an Earth (climate) system model that treats physical processes, biogeochemical processes and the impacts of human activities together [6]. A complete Earth (climate) system model contains three key components: the physical climate system, the biogeochemical system, and the human (social) system associated with human activities. The purpose of developing the (physical) climate system model is to study the multi-sphere interactions of atmosphere, ocean, land, ice and snow, and vegetation, and to predict their future changes. The purpose of developing the Earth (climate) system model is to understand the laws governing energy, ecological and metabolic processes in the Earth system; to understand the climate response to land-cover and land-use change; and in particular to understand the role of the biogeochemically coupled carbon, nitrogen and iron cycles in the climate system and how human influence on these cycles changes climate. In recent years the international community has begun, on the basis of the Earth climate system model framework, to consider complex processes such as the solid Earth and space weather, moving toward a true Earth system model [7]. Figure 1 is a schematic of the Earth system model; its five basic functional blocks are the physical climate system (sky blue), the biogeochemical system (dark yellow), the human (social science) system related to human activities (magenta), the solid Earth (blue), and space weather related to solar activity (red) [7]. Note that in the physical climate system model and the Earth climate system model the influences of the solid Earth and space weather are treated simply through prescribed parameters; future Earth system models will describe them objectively and in detail. It should be stressed that the core component of the Earth system model is its physical climate system, because it concerns the spheres in which human beings directly live, and the other subsystems are considered fundamentally in order to simulate and predict changes in the atmosphere and ocean better. The atmospheric and oceanic circulation models, based on the laws of geophysical fluid dynamics, are the key parts determining the energy exchange and behavior of the Earth system. For quite a long time to come, the Earth climate system model with the physical climate system at its core will remain the research focus of the international Earth science community.", "Ocean circulation and its associated transport processes, like the blood circulation of the human body, affect and regulate the distribution, cycling and variation of the ocean's mass, heat, salinity, biogenic elements and pollutants, and thereby affect and regulate the evolution and variability of the climate, ecosystems and living resources. The ocean's western boundary circulation system plays a key role in these processes.
Although ocean circulation has been observed and theorized for hundreds of years, and theories of wind-driven circulation and thermohaline circulation and a wide variety of numerical models have been established, the existing models still cannot describe the three-dimensional circulation that has been discovered, and a clear understanding of the intricate three-dimensional structure of the western boundary circulation system is still lacking, let alone a theory describing that structure and its laws of variation. This is one of the outstanding problems of physical oceanography now and for some time to come. The three-dimensional structure of the oceanic western boundary circulation system is characterized by the coexistence of an upper-layer circulation and subsurface countercurrents (undercurrents) beneath it. While the upper-layer western boundary circulation was known before the 1950s, the underlying subsurface countercurrents, apart from the Equatorial Undercurrent (EUC), were found only gradually from the late 1980s onward. Their discovery benefited from large survey programs such as the US-Australia Western Equatorial Pacific Ocean Circulation Study (WEPOCS, 1985~1988), the Sino-US joint investigation of air-sea interaction in the equatorial western Pacific (PRC-US, 1985~1990), and the Chinese Academy of Sciences program on tropical western Pacific air-sea interaction and interannual climate variability (CAS, 1985~1990). These surveys found that the New Guinea Coastal Undercurrent (NGCUC) exists beneath the seasonally reversing New Guinea Coastal Current (NGCC) and flows northwestward year-round, and that the Mindanao Undercurrent (MUC) exists beneath the southward-flowing Mindanao Current (MC) and flows northward year-round. In addition, in the mid-to-late 1990s, repeated section surveys and regional intensive observations organized by the World Ocean Circulation Experiment (WOCE) revealed the Agulhas Undercurrent (AUC) in the Indian Ocean, the North Brazil Undercurrent (NBUC) in the Atlantic, and an East Australian undercurrent (not yet formally named; here called EAUC) in the South Pacific. These results show that subsurface countercurrents are a fairly common feature of the three-dimensional structure of western boundary currents. Their existence reveals the complexity of that structure and poses a serious challenge to classical ocean circulation theory, which cannot explain a three-dimensional circulation that includes undercurrents. Yet so far there is still no adequate understanding of this intricate three-dimensional structure, let alone a theory describing it and its variations. Basic understanding of the undercurrents: among the undercurrents above, the NGCUC has been observed and studied relatively systematically and deeply.
Since American and Australian scholars discovered the NGCUC using temperature, salinity and velocity data from the three WEPOCS cruises of 1985~1988, its existence and its seasonal and interannual variability have been fully demonstrated using historical hydrographic data, shipboard Acoustic Doppler Current Profiler (ADCP) observations and TRITON buoy observations. The MUC was discovered by Chinese scholars using temperature and salinity data from the three CAS cruises of 1986~1988 together with an inverse method, and was later confirmed by dynamic computation, ADCP current measurements and moored current-meter data; however, on the basis of multi-year observations it has also been argued that the MUC is not necessarily a permanent current. To date there has been no direct long-term observation of the main body of the MUC, and understanding of its structure and variability is less direct and less deep than for the NGCUC. The limited current understanding of the individual undercurrents and of the relations among them is usually tied to their role in meridional mass transport as a function of depth. Hydrographic analyses show that after the NGCUC carries South Pacific thermocline water and Antarctic Intermediate Water (AAIW) across the equator, part of the water feeds the EUC while part continues to spread northward into the region of the MUC. There are different views on the fate of the AAIW after it enters the MUC: one view is that it continues northward along the coast to the Luzon coast; another, based on measured data, is that the AAIW carried by the MUC turns eastward between 10°N and 12°N into the NEUC. Notably, the southern hemisphere's share of the mass transport from the subtropical to the tropical Pacific rises with depth. The questions of the water sources and dynamical relations linking the NGCUC and MUC with the EUC and NEUC, and of their role in the exchanges of mass and even heat between the hemispheres and between middle and low latitudes, therefore attract much attention; yet what role the MUC plays between the NGCUC and the NEUC is not very clear. Compared with knowledge of the dynamic and thermal characteristics of the western boundary subsurface countercurrents, understanding of their formation mechanism lags behind. In fact there is still no dynamical theory that reasonably explains the formation of the western boundary subsurface countercurrents. Within existing circulation theory, the traditional barotropic models cannot explain the vertical variation of the western boundary current, while the baroclinic ventilated and unventilated thermocline models cannot yield a western boundary solution that matches the interior solution. In 1990 Nof used a simple model to analyze the dynamics by which the NGCUC turns eastward into the EUC after crossing the equator. Based on the observed association of the MUC with thermocline tilt, Chinese scholars solved a two-dimensional geostrophic model to obtain a criterion for subsurface geostrophic flow reversal, and on this basis preliminarily attributed the formation of the MUC to geostrophic reversal caused by thermocline tilt within the tropical gyre. These theoretical results, however, are not sufficient to explain its formation mechanism.
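The geostrophic reasoning mentioned above, that a tilted thermocline implies vertical shear which can reverse the flow at depth, can be illustrated with the thermal-wind relation. The latitude, density gradient and surface current below are assumed illustrative values, not observations of the MC/MUC system.

```python
import numpy as np

# Minimal thermal-wind sketch: a tilted density (thermocline) surface
# implies vertical shear that can reverse a boundary current at depth.
# All values are illustrative assumptions, not MUC observations.
g, rho0 = 9.81, 1025.0
f = 2 * 7.292e-5 * np.sin(np.deg2rad(8.0))   # Coriolis parameter at 8N

drho_dx = -2.0e-6  # zonal density gradient, kg m^-4 (assumed sign/size)
dvdz = g / (f * rho0) * drho_dx              # thermal-wind shear, 1/s

z = np.linspace(0.0, -1000.0, 101)           # depth, m (negative down)
v = -0.5 + dvdz * z                          # southward 0.5 m/s at surface

idx = np.argmax(v > 0)                       # first depth where v reverses
print(f"shear {dvdz:.2e} 1/s; reversal near z = {z[idx]:.0f} m"
      if v.max() > 0 else "no reversal in upper 1000 m")
```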
Because each branch of the subsurface flow has essentially been observed and studied in isolation, the relations among the branches are mostly inferred from similarities in water-mass properties, and dynamical links are hard to establish; this restricts understanding of the overall structure, mass transport and variability of the subsurface current system. Moreover, although previous studies have gained some knowledge of the upper-layer and subsurface components separately, there has been no integrated study of the three-dimensional circulation system comprising the upper-layer circulation and the subsurface flow, and the dynamical relations between them are poorly understood. Research significance: the western boundary circulation system has a complex three-dimensional structure, highlighted by subsurface undercurrents near and below the thermocline that flow counter to the upper circulation. These subsurface currents occupy similar depth ranges, are closely related dynamically, and are the key hubs and main passages for circulation and for mass and heat exchange between the hemispheres and across latitudes. The complex three-dimensional structure of the western boundary reflected in these undercurrents cannot be explained by the classical two-dimensional theory of ocean circulation [1-6]. The three-dimensional structure of the western boundary circulation system and its formation and variability are therefore an important but weak link in the study of circulation dynamics. Targeted investigation to clarify the formation and variability of this three-dimensional structure and its mass and heat transport, advancing western boundary circulation dynamics from two dimensions to three and illuminating from multiple perspectives the role of ocean dynamical processes in climate change, is of great scientific importance. Without sufficient investigation of the structural characteristics and variability of the undercurrents, understanding of the dynamical processes that govern their formation and variation is bound to be severely limited. To establish an appropriate three-dimensional dynamical model and clarify theoretically the formation mechanism of the subsurface countercurrents and their intrinsic relations with the basin-scale circulation and the external forcing is a huge and unavoidable challenge, and will remain one of the open problems for quite a long time to come.", "Interdecadal climate change greatly affects the orderly development of human society, and the ocean plays a very important role in the interdecadal variability of the climate system: the Pacific Decadal Oscillation (PDO), the Atlantic Multidecadal Oscillation (AMO), the interdecadal variability of the Asian monsoon, and Sahel rainfall are all closely related to changes in ocean physical processes. So far, however, although there are many conjectures about the causes of interdecadal climate change, observational data are limited and the mechanisms remain unclear, which restricts improvement of its prediction.
In addition, climate prediction requires accurate determination of the initial states of the ocean, atmosphere, land surface and cryosphere; how to determine an optimal initialization scheme when both observations and models contain errors is at present an unresolved problem in ocean and climate research. The phenomenon of interdecadal climate change and its impact: interdecadal climate variability can be classed into three basic types: periodic or quasi-periodic change, such as change tied to periodic variations in solar radiation; the persistent intensification or weakening in certain years of high-frequency weather or climate phenomena, such as El Niño, tropical cyclones and heavy rainfall; and shifts of the climate from one state to another within 5 to 10 years. Interdecadal change manifests differently in different regions, for example as the PDO in the Pacific and the AMO in the Atlantic. The spatial pattern of the PDO resembles that of ENSO: sea surface temperature (SST) in the central North Pacific shows a cold anomaly, surrounded by warm SST anomalies extending from the Gulf of Alaska along the west coast of North America to the tropical central-eastern Pacific; in the atmosphere the Aleutian low strengthens and shifts eastward while the tropical Pacific trade winds relax. But the locations of the strongest signals differ: the strongest PDO signal is in the mid-latitude North Pacific (Fig. 1a), whereas the strongest ENSO signal is in the tropical central-eastern Pacific (Fig. 1b). The durations also differ: analysis of 20th-century observations shows that a warm or cold phase of the PDO can last 20-30 years (Fig. 1c), whereas a cold or warm phase of ENSO lasts only 6-18 months (Fig. 1d). The different PDO phases and their transitions strongly affect not only the local climate and ecosystems of the North Pacific [1] but also the climate of Asia and North America. The Atlantic Multidecadal Oscillation (AMO) is the leading mode of interdecadal climate variability in the Atlantic (Fig. 2), with a typical time scale of 50-70 years; spatially it reflects a \"seesaw\" between SST anomalies in the North and South Atlantic. AMO variations are related to the Atlantic thermohaline circulation and strongly influence the climate of the North Atlantic, Europe and even the globe [2]; examples include the heat wave that struck western Europe in the summer of 2003 [3], the hurricanes that struck the United States in 2005, and Sahel precipitation. The mechanism of interdecadal climate variability: because of the long time scales of the North Pacific PDO and the Atlantic AMO, the instrumental record covers only a few complete oscillation cycles, so understanding the mechanisms of interdecadal variability remains very difficult. With observational records so limited, numerical model simulation has become the main means of studying interdecadal climate change.
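The kind of low-frequency index discussed here (for instance the AMO index defined in the Figure 2 caption below: area-mean North Atlantic SST, detrended and low-pass filtered) can be sketched as follows. A simple centered moving average stands in for the 37-point Henderson filter of the original, and the input series is synthetic; both are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of constructing an interdecadal SST index: detrend an
# annual area-mean SST series, then low-pass filter it. A centered
# moving average stands in for the Henderson filter used in practice;
# the synthetic data are assumptions for illustration.
def lowfreq_index(annual_sst):
    """Detrend an annual area-mean SST series and low-pass filter it."""
    t = np.arange(annual_sst.size)
    trend = np.polyval(np.polyfit(t, annual_sst, 1), t)
    anom = annual_sst - trend
    window = 21                                    # years (assumed width)
    kernel = np.ones(window) / window
    return np.convolve(anom, kernel, mode="same")  # centered running mean

rng = np.random.default_rng(1)
years = np.arange(1871, 2004)
sst = (0.005 * (years - 1871)                      # warming trend
       + 0.2 * np.sin(2 * np.pi * (years - 1871) / 65)  # ~65-yr cycle
       + rng.normal(0, 0.15, years.size))          # weather noise
amo_like = lowfreq_index(sst)
print(f"index range: {amo_like.min():+.2f} to {amo_like.max():+.2f} degC")
```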
At present the scientific community holds several conjectures about the mechanism of interdecadal climate change. These conjectures are highly model-dependent, and the results of different numerical models are not only inconsistent but sometimes contradictory; for a long time to come, therefore, the cause of interdecadal climate change will remain a major problem in ocean and climate research. The proposed mechanisms of the North Pacific PDO mainly include the following. ① Stochastic atmospheric forcing [4]: random variations in the atmospheric forcing field drive the slow oceanic changes. ② Unstable mid-latitude air-sea interaction [5]: an initial SST anomaly in the Kuroshio Extension is amplified through air-sea interaction, producing an atmospheric circulation anomaly and a change in the wind-stress curl; the wind-stress change excites oceanic Rossby waves, which reach the western boundary after several years and alter the poleward heat transport of the subtropical western boundary current, reversing the sign of the SST anomaly in the Kuroshio Extension; the whole cycle takes about 10 years. Later observational and modeling studies, however, found that the oceanic adjustment provides positive rather than negative feedback to the Kuroshio Extension, with a lag of less than 5 years [6]; Wu et al., using a coupled model, unified the positive and negative feedbacks of the circulation adjustment on the Kuroshio Extension, finding that both processes operate [7]. ③ Tropical-extratropical interaction [8]: a warm tropical SST anomaly drives cold SST anomalies in the mid-latitude South and North Pacific through the atmospheric circulation; the mid-latitude anomalies are subducted and advected toward the tropical Pacific, where upwelling turns tropical SST toward a cold anomaly, and the tropical cold anomaly then drives the opposite sequence through the atmospheric circulation. Deser et al. tested this view with XBT observations and found that the mid-latitude SST anomaly signal subducts into the thermocline but does not reach the tropical Pacific with the mean thermocline circulation [9]. Figure 1 Comparison of the temporal and spatial characteristics of the PDO (a, c) and ENSO (b, d): in a and b the colors are SST anomalies, the arrows wind-stress anomalies and the contours sea-level-pressure anomalies; c and d are the PDO and ENSO indices respectively. Figure 2 The Atlantic Multidecadal Oscillation (AMO) [3]: a is the AMO index for 1871~2003, defined as the annual mean SST averaged over the North Atlantic (0°~60°N, 75°~7.5°W) after 37-point Henderson low-pass filtering and removal of the trend, in °C; b is the SST spatial pattern obtained by regressing SST on the standardized AMO index, in °C per unit standard deviation. The proposed mechanisms of the Atlantic AMO mainly include the following conjectures. ① Stochastic atmospheric forcing: the input of atmospheric white noise, combined with the ocean's own low-frequency selectivity, produces the AMO variations, the ocean responding only passively to atmospheric \"noise\" [4], chiefly the multidecadal forcing of the North Atlantic Oscillation.
② Ocean-atmosphere interaction: the ocean's feedback to the atmosphere plays a very important role in AMO variability [5], but the dynamics of that feedback, such as its strength and structure, are unclear. ③ Tropical air-sea interaction: air-sea interaction in the tropical Atlantic, and in the tropical Pacific, also plays an important role in AMO variability. In addition, some work emphasizes that the AMO results from Arctic sea-ice forcing or from ice-sea-air interaction. In short, present understanding of the mechanism of AMO variability is very limited. Prediction of interdecadal climate change: over the past 20 years the scientific community has made major breakthroughs in seasonal-to-interannual climate prediction, especially ENSO prediction in the tropical Pacific, and attempts are now being made to carry the methods of seasonal prediction over to the interdecadal problem. This requires not only understanding the physical processes and mechanisms of interdecadal variability but also an efficient coupled ocean-atmosphere model. Although the mechanisms of decadal variability are not fully clear, slowly evolving systems such as the ocean circulation possess some predictability. To make predictions we first need an accurate state of the ocean, including surface and subsurface temperature and salinity. The Argo float array provides a certain basis for this, but because of observational errors and limited spatial coverage, producing an accurate initial ocean state is still a challenge. Even given a reasonable initial state, assimilating it properly into the forecast model is another challenge, because all models have systematic errors and climate drift. Furthermore, which quantities are predictable on the interdecadal scale, and how such predictions can be verified, are also open challenges. It may be said that research on predicting interdecadal climate change is still at the stage of the blind men and the elephant. Yet the sustainable development of society and the economy makes an urgent demand for such prediction, so the prediction of interdecadal climate change will be an important issue in ocean and climate research for decades to come.", "Introduction: the ocean is the largest heat reservoir and the largest heat conveyor on the Earth's surface and has a great impact on climate change, and ocean mixing is a key factor in the ocean's large-scale motion. The ancient Greek physicist and mathematician Archimedes said, \"Give me a fulcrum and I can move the Earth\"; ocean mixing is such a fulcrum for studying the processes and mechanisms of ocean motion. What is ocean mixing? Ocean mixing is a micro-scale process occurring in the ocean interior, generally on scales of millimeters to centimeters. It is not only a key factor controlling the marine environment but also plays an important role in the ocean's mass, momentum and energy transport and in global climate change, and it is the power source driving the ocean's thermohaline circulation [1]. According to the inducing mechanism, ocean mixing falls into three main categories: turbulent mixing, salt-finger mixing and biological mixing.
Turbulent mixing is triggered by dynamical processes such as the sea-surface wind field, bottom friction, and internal-wave breaking. Salt-finger mixing arises because the molecular diffusivity of heat is two orders of magnitude larger than that of salt, so that when salt fingers are perturbed, vertical mixing of seawater is readily promoted. Biological mixing is mixing induced by the collective movement of plankton and larger animals in the ocean. In recent years ocean mixing has become one of the core research directions of physical oceanography and has received ample attention; a series of major discoveries has deepened understanding of the driving mechanism of the thermohaline circulation. Research status and challenges of turbulent mixing: since Munk and Wunsch showed in 1998 that ocean mixing is an important factor controlling the strength of the thermohaline circulation, research on ocean mixing has entered a new stage. A series of mixing experiments has been carried out at home and abroad at the Mid-Atlantic Ridge, around Hawaii, in the Southern Ocean, and in the Yellow Sea, the East China Sea and the South China Sea; several high-mixing regions have been discovered, and the effects of mixing on ocean circulation and climate change have been investigated. The scientific aim of the Mid-Atlantic Ridge mixing experiment was to explore the distribution of mixing over a mid-ocean ridge and its inducing mechanisms. It found that the mixing rate is weak, about 10⁻⁵ m²/s, over the flat abyssal floor of the Brazil Basin in the South Atlantic and off the South American continental rise, whereas over the rough Mid-Atlantic Ridge the mixing rate is greatly enhanced and can reach 10⁻³ m²/s [2] (Figure 1). The experiment revealed the relationship between the ocean's mixing distribution and rough seafloor topography, and hinted at a connection between the complex spatial structure of the deep circulation and the distribution of mixing. The main purpose of the Hawaiian ocean mixing experiment was to reveal the internal-tide energy flux of the Hawaiian island chain [3] and the spatial distribution of the mixing rate over the Hawaiian Ridge. This experiment identified for the first time the energy cascade of ocean turbulent mixing: energy is converted from the interaction of the ~1000 km scale barotropic tide with topography into internal tides; the internal tides transfer energy to smaller-scale internal waves; nonlinear internal-wave interactions lead to wave breaking; and breaking finally triggers centimeter-scale turbulent mixing. The experiment also determined that the internal-tide energy flux of the eight major tidal constituents in the survey area totals 26 GW, of which the M2 internal tide contributes 19 GW, indicating that the Hawaiian island chain is a region of high global M2 internal-tide generation.
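Mixing rates like the 10⁻⁵ versus 10⁻³ m²/s values quoted above are commonly estimated from microstructure measurements through the Osborn relation K = Γε/N², with mixing efficiency Γ ≈ 0.2. The dissipation and stratification values below are assumed, chosen only to reproduce the quoted orders of magnitude.

```python
# Sketch of the Osborn relation used to turn microstructure data into a
# diapycnal diffusivity: K = GAMMA * eps / N^2, with mixing efficiency
# GAMMA ~= 0.2. The eps and N^2 values below are assumed illustrations.
GAMMA = 0.2  # canonical mixing efficiency

def osborn_diffusivity(eps, n2):
    """Diapycnal diffusivity K (m^2/s) from TKE dissipation eps (W/kg)
    and buoyancy frequency squared N^2 (s^-2)."""
    return GAMMA * eps / n2

n2 = 1.0e-6  # stratification typical of the deep ocean (assumed)
for label, eps in [("smooth abyssal plain", 5.0e-11),
                   ("rough mid-ocean ridge", 5.0e-9)]:
    print(f"{label}: K ~ {osborn_diffusivity(eps, n2):.1e} m^2/s")
# Prints ~1e-5 and ~1e-3 m^2/s, the orders of magnitude quoted above.
```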
At the same time, Chinese scientists have carried out mixing experiments in the South China Sea and found, for the first time, that the mixing rate there is two orders of magnitude higher than in the adjacent Northwest Pacific; they identified the energy source of this strong mixing and analyzed its relation to the deep-water cascade through the Luzon Strait. This finding will aid deeper study of the driving mechanism of the South China Sea thermohaline circulation. Figure 1 Distribution of the mixing rate in the Brazil Basin [2] and along 21°N [4]. Today's research on turbulent mixing faces three challenging questions. ① Munk and Wunsch argued in 1998 that to maintain the present strength of the thermohaline circulation the ocean's average mixing rate must be at least 10⁻⁴ m²/s, yet many observations show that the mixing rate in the main thermocline of most ocean regions is only 10⁻⁵ m²/s, making it difficult to close the energy budget of the present thermohaline circulation. This implies that some as-yet-unknown zones of intense mixing must exist in the ocean interior to sustain it. ② The accepted route to turbulent mixing is that barotropic motion is \"transformed\" into baroclinic waves, which then \"induce\" turbulence; but over rough seafloor topography, especially mid-ocean ridges (which resemble fractal terrain), can the interaction of barotropic motion with the bottom induce turbulent mixing directly, without the intermediate internal-wave-breaking step? ③ Typhoon-induced ocean mixing plays a significant role in the ecological environment and in ocean dynamics, but because in-situ observations under typhoons are scarce, research on the mechanism of typhoon mixing has been slow. Research status and challenges of salt-finger mixing: in 1985 the United States carried out a salt-finger mixing experiment in the Caribbean Sea that directly observed the dissipation rates of heat and of turbulent kinetic energy; however, because rapid salinity measurement technology was not yet available, the salt diffusivity could only be estimated indirectly, at about 1×10⁻⁴~2×10⁻⁴ m²/s [5], and the vertical salt flux was found to be 3~4 times that of other regions. With the development of tracer techniques, Schmitt et al. used the tracer sulfur hexafluoride (SF6) and a high-resolution microstructure profiler in an integrated salt-finger observation experiment in the western tropical Atlantic [6]. Direct observation gave a salt diffusivity of about 0.8×10⁻⁴~0.9×10⁻⁴ m²/s and a thermal diffusivity of about (0.45±0.2)×10⁻⁴ m²/s (Fig. 2), and showed that the salt-finger mixing coefficient in the western Atlantic is 5 times that in the east, so that salt-finger mixing is quantitatively more prominent there. Figure 2 The SF6 tracer experiment in the western tropical Atlantic [6]: the tracer was released at 53°45′W; two weeks after release SF6 was found within the red rectangle; the color map shows the vertically integrated SF6 intensity 10 months after release; the black dots mark the observation stations, and the black dotted line is the 35.1 isohaline on the density surface where the SF6 was released.
Salt-finger mixing thus makes an important contribution to various processes in the ocean. Because tracer experiments are technically difficult to implement and yield salinity mixing coefficients that are large-scale space-time averages, rapid salinity measurement technology should be developed to measure salt-finger mixing directly and quickly and so break through the observational bottleneck. Biological mixing: biological mixing is a recently proposed concept; it reconsiders the stirring of seawater by migrating marine plankton, previously thought negligible. Observations by Kunze et al. in a coastal inlet in 2006 showed that mixing caused by dense swarms of krill moving at dusk was far stronger than during the day, raising the daily mean mixing intensity a hundredfold; they pointed out that biological mixing matters for nutrient transport and for CO2 exchange at the air-sea interface [8]. Katija and Dabiri (2009) showed that near aggregations of plankton there is strong biological mixing comparable to wind and tidal mixing (Fig. 3), and proposed its mechanism, the long-neglected Darwinian mechanism [9]. Research on biological mixing abroad has only just begun, and progress is slow owing to the limits of observation technology and equipment; relevant observation techniques should therefore be developed to study biological mixing in depth and reveal its role in ocean processes. Summary. Figure 3 Field observation of biological mixing [9]. The ocean is a highly nonlinear open system, and mixing is both the end point of the ocean's multi-scale interactions and the power source sustaining its large-scale motion. Developing mixing observation technology, implementing mixing experiment programs, clarifying the laws of mixing processes, revealing their mechanisms, and quantifying mixing parameterizations are therefore of great scientific significance and practical value for the study of ocean circulation and the prediction of climate change.", "As the largest-scale motion in the ocean, the thermohaline circulation is known as the ocean's conveyor belt, carrying a great amount of heat from the warm tropical ocean to the cold high latitudes and making Europe and North America, at high latitudes, continents fit for human habitation. The thermohaline circulation is also an important channel connecting the ocean basins and is one of the most important controlling factors in global climate change. Although the field of physical oceanography has no exact definition of the thermohaline circulation [1], as the name implies it involves two essential elements: the ocean's thermohaline (temperature-salinity) structure and the corresponding circulation. Human knowledge of the ocean's thermohaline structure dates from the middle of the 18th century, when Henry Ellis, the captain of a slave-trading ship, first reported the existence of low-temperature deep water in the subtropical Atlantic.
In the ensuing 200 years, measurement methods improved and systematic observations of temperature and salinity were made in sea areas around the world, so that the three-dimensional temperature-salinity structure of the global ocean is by now relatively well known (Fig. 1). Given that the main heat and freshwater (salt) fluxes for the ocean enter through the sea surface, there is broad agreement that uneven solar heating of the ocean and the processes of evaporation and precipitation are the main factors shaping the present three-dimensional thermohaline structure. Figure 1  Global ocean temperature and salinity distribution: a, horizontal distribution of sea surface temperature; b, horizontal distribution of sea surface salinity; c, vertical temperature structure along the 180° meridian; d, vertical salinity structure along the 180° meridian. The spatially uneven temperature and salinity distribution is the result of a dynamic balance, so a corresponding circulation field must exist; this is the thermohaline circulation that attracts so much attention. Understanding of the circulation, however, came later than understanding of the thermohaline structure. It was not until the middle of the 19th century that Lenz first sketched the rudiments of the meridional overturning circulation familiar today [2]. Over the following 100 years the picture of this meridional overturning circulation was continuously refined; in the Atlantic in particular, the overturning circulation penetrates all the way to the deep sea and crosses the equator, forming an important link of the global conveyor belt [3], as shown in Figure 2. Figure 2  The structure of the Atlantic meridional overturning circulation [1] and the horizontal distribution of the global thermohaline circulation [3]. On the question of what drives the thermohaline circulation, however, opinions in physical oceanography differ greatly. Early on it was believed that since the thermohaline structure of the ocean is forced by buoyancy at the sea surface (thermal forcing and evaporation-minus-precipitation forcing), the corresponding circulation should likewise be the result of surface buoyancy forcing. Considering only the temperature structure, the circulation would be driven by the continual sinking of seawater cooled at cold high latitudes, making the thermohaline circulation the product of a classic heat engine. This explanation accords well with intuition and so was believed by many physical oceanographers for a long time; we call it the heat-engine theory. The first challenge to this view came from Sandström [4], who proposed in 1908 that when the heat source lies higher than the cold source no circulation can be sustained in the fluid, so the thermohaline circulation is not caused by surface heating and cooling and the ocean is not a heat engine. Over the past 100 years, with continually improving measurement methods, many laboratory simulation experiments have addressed this problem, from the early experiments of Rossby [5] to the recent experiments of Wang and Huang [6].
Simple horizontal temperature-difference forcing can indeed drive an overturning circulation, but different experiments reach different conclusions on whether the ocean is a heat engine. Almost all laboratory experiments support the view that the ocean is a heat engine, yet the results of Wang and Huang show that, from an energy standpoint, the thermohaline circulation observed in the ocean cannot be regarded as the product of a heat engine; in this sense the ocean is not a heat engine. If surface buoyancy forcing is not the driving force of the thermohaline circulation, what physical process does drive it? Failure to pin down the origin of the thermohaline circulation not only hinders our understanding of this important ocean process but also makes it impossible to measure and predict its possible variations and their impact on global climate. In the past 20 years many physical oceanographers have come to believe that the mixing of seawater in the ocean interior is the fundamental driver of the thermohaline circulation [7], giving rise to a new theory, the mixing-driven theory, which is now the mainstream view. Figuratively speaking, the heat-engine theory relies on the sinking of cold water to drive the thermohaline circulation, while the mixing-driven theory relies on continuously "lifting" the cold water of the deep ocean. The biggest difference between the two is that the heat-engine theory cannot satisfy the energy balance, whereas the mixing-driven theory takes the energy balance as its basic starting point. Whitehead and Wang [8] used laboratory experiments to demonstrate that the mixing-driven theory is feasible. The theory nevertheless faces two major problems. First, the tide, the energy source that can act effectively in the deep sea, cannot supply enough energy, so new energy sources able to reach the deep sea, such as the energy of surface winds and waves, must be sought. Second, even if such a source were found, the mixing rate the ocean would require is much larger than what is actually observed. Therefore the theory that deep-sea mixing drives the thermohaline circulation cannot be the final, complete answer. Another driving mechanism was proposed by Toggweiler and Samuels [9] more than 10 years ago. Using a numerical model with nearly zero mixing rate, they found that as long as the wind field over the Antarctic Circumpolar Current exists, the observed thermohaline circulation can be simulated well: the Southern Ocean wind field and the Antarctic Circumpolar Current act like a huge pump, drawing the cold deep water formed in the North Atlantic up to the surface and pushing it northward to form the overturning circulation. Owing to parameterization problems in numerical models, however, this mechanism has yet to be further verified. Although each of the above theories explains the cause of the thermohaline overturning circulation to some degree, no rigorous theoretical solution has yet been obtained, so the coexistence of different theories will continue for a long time. Moreover, our understanding of the thermohaline circulation is still limited to the overturning circulation.
Most of the global thermohaline circulation distribution shown in Figure 2 is schematic and cannot be confirmed to represent the real process. In addition, our latest experiments found that when rotation is taken into account, the three-dimensional structure of the circulation driven by a horizontal temperature difference is far richer than the familiar overturning circulation and contains many phenomena that occur in the real ocean. The oversimplified models of the past may thus not only be of no help with the problem but may actually mislead our understanding of the thermohaline circulation. The mechanism of the thermohaline circulation has puzzled physical oceanography for more than 100 years. If people once sought the answer purely out of curiosity about natural phenomena, today's concern over global climate change makes resolving the thermohaline circulation all the more urgent. The good news is that over the past decade or so the systematic ocean observing network Argo has been established, providing valuable first-hand data for a comprehensive understanding of the thermohaline circulation, and the implementation of dedicated circulation observation programs offers hope of resolving the formation and evolution mechanism of the thermohaline circulation.", "Background and significance: El Niño refers to the large-scale anomalous warming of seawater that occurs every few years in the eastern and central equatorial Pacific. Because the phenomenon is always most pronounced around Christmas, Peruvian fishermen have for centuries called it the "Christ Child" (the original meaning of El Niño in Spanish). With awareness of its global impact, El Niño has in recent decades received a great deal of attention from scientists and society. In particular, the so-called "event of the century" of 1997-1998 was the most comprehensively observed and reported El Niño in history, pushing interest in the phenomenon to a climax. Since then El Niño has become almost a household word, often cited as the culprit behind erratic weather and climate anomalies around the world. Although the media and the public have exaggerated its role and influence unfairly, it is undeniable that El Niño plays a very important part in the earth's ocean and climate systems. To understand the dynamics of El Niño, one must recognize that it is part of an unstable oscillation of the coupled air-sea system in the tropical Pacific. The cold phase of the oscillation is called "La Niña", and its atmospheric component is the "Southern Oscillation". As shown in Figure 1, in the La Niña state the easterly wind at the equatorial surface strengthens the east-west sea surface temperature gradient, and the latter further strengthens the easterly wind through the pressure field; in the El Niño state the equatorial easterlies weaken, the warm water and atmospheric deep convection shift eastward, and the sea surface temperature gradient decreases, further weakening the easterlies. Two basic physical processes control the El Niño-Southern Oscillation (ENSO) cycle: one is the positive feedback between the zonal surface wind and the temperature gradient described above [1]; the other is the delayed negative feedback provided by equatorial ocean dynamics (especially Kelvin and Rossby waves) [2], which together make the coupled air-sea system oscillate between cold and warm phases.
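The interplay of these two feedbacks is often caricatured by the delayed-oscillator equation of Suarez and Schopf (1988), dT/dt = T − T³ − αT(t − δ), in which the cubic term caps the local positive feedback and the lagged term represents the delayed negative feedback carried by reflected equatorial waves. Below is a minimal numerical sketch in nondimensional units; the parameter values are illustrative choices, not fitted to observations.

```python
import numpy as np

# Delayed-oscillator caricature of ENSO (after Suarez and Schopf, 1988):
#   dT/dt = T - T^3 - alpha * T(t - delta)
# (T - T^3): capped local positive air-sea feedback;
# -alpha*T(t - delta): delayed negative feedback from reflected waves.
alpha, delta = 0.75, 6.0
dt, nsteps = 0.01, 60000
lag = int(delta / dt)

T = np.zeros(nsteps)
T[:lag + 1] = 0.1                  # small warm perturbation as initial history
for n in range(lag, nsteps - 1):
    T[n + 1] = T[n] + dt * (T[n] - T[n] ** 3 - alpha * T[n - lag])

late = T[nsteps // 2:]             # discard spin-up
print(f"self-sustained oscillation, amplitude ~ {late.max():.2f}")
# The oscillation period is set by the wave transit delay; in the real
# system, nonlinearity and noise superimpose the observed irregularity.
```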
Because of the system's nonlinearity and the influence of stochastic processes, the period and amplitude of the ENSO oscillation are irregular, and it is therefore difficult to predict. Figure 1  Schematic of the tropical Pacific ocean-atmosphere coupled system: a, La Niña; b, El Niño. El Niño disturbs the global atmospheric circulation by changing the heating of the tropical atmosphere, thereby causing short-term climate anomalies worldwide, including floods and droughts, with great impacts on socio-economic and ecological systems. Predicting El Niño one to several seasons in advance is therefore of great significance for disaster prevention and mitigation and for sustainable social development. Indeed, the study of tropical air-sea interaction and the short-term climate prediction built on it is one of the current hot topics of the international scientific community and has been among the most fruitful fields of marine and atmospheric science in the past two decades. Within it, El Niño research and prediction is the most active area and a focus of many large international research programs, such as the Tropical Ocean Global Atmosphere (TOGA) program and the Climate Variability and Predictability (CLIVAR) program. After systematic observational, theoretical and modeling research, El Niño prediction has become a primary component of operational climate prediction in many countries, but considerable controversy remains about its predictability. Current status and problems: The earliest air-sea coupled model used for El Niño prediction was the intermediate-complexity model built by Mark Cane and Steve Zebiak in the mid-1980s [3]. It successfully predicted the El Niño of 1986-1987, demonstrating for the first time the possibility of short-term climate forecasting. This model (now known as the LDEO model) holds an extremely important historical position in ENSO theory and operational forecasting and is still in active use today. Inspired by it, a large number of short-term climate prediction models of various types and complexities have emerged over the past 20 years. They fall broadly into three categories: purely statistical models, hybrid models coupling a physical ocean to a statistical atmosphere, and fully physical coupled air-sea models; the last category divides further, by complexity, into intermediate coupled models and coupled general circulation models. In principle the fully physical models should be better than the other two types and have the greatest development potential, but at present they show no obvious advantage in predictive skill. Latif et al. [4] reviewed the early work on El Niño prediction and pointed out that all three types of models possess some predictive skill; the advanced physical models seem able to make useful forecasts at longer lead times, but at leads of 1-2 seasons the various models perform at about the same level. Kirtman et al. [5] assessed the state of the art in more detail and reached roughly the same conclusion, but found that an ensemble of multiple models predicts better than any single model. In addition, model skill varies between decades (the 1980s, for example, were better predicted than the 1970s and 1990s), and skill for large El Niño events is significantly higher than for small ones.
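Forecast skill of the kind reviewed above is normally scored against an SST index such as Niño3.4, the box-average anomaly over 5°S~5°N, 170°W~120°W. The sketch below shows one common way to compute it from a gridded monthly SST product; the array names are hypothetical, and cosine-latitude area weighting is omitted since the box is near-equatorial.

```python
import numpy as np

def nino34_index(sst, lat, lon, months):
    """Monthly Nino3.4 SST anomaly from a gridded product (hypothetical arrays).

    sst    : (time, lat, lon) sea surface temperature [deg C]
    lat    : latitudes [deg N]; lon : longitudes, 0-360 convention [deg E]
    months : (time,) calendar month of each field, 1..12
    """
    lat, lon, months = map(np.asarray, (lat, lon, months))
    box = (np.abs(lat)[:, None] <= 5.0) \
        & (lon[None, :] >= 190.0) & (lon[None, :] <= 240.0)   # 170W-120W
    series = np.array([np.nanmean(field[box]) for field in sst])
    anom = np.empty_like(series)
    for m in range(1, 13):                    # subtract the monthly climatology
        sel = months == m
        anom[sel] = series[sel] - series[sel].mean()
    return anom
```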
Puzzlingly, while the models have become more complex, their predictive skill has not improved much, and real-time predictions made today are not necessarily more reliable than those made many years ago. A pessimistic explanation is that current skill has already reached the limit of what can be predicted, but the evidence suggests this is unlikely; the reason should instead be sought in the models themselves. El Niño and its associated tropical anomalies are by far the most predictable perturbations in the earth's climate system. Because of El Niño's strong influence, predictions of tropical Pacific SST have become the basis of global seasonal forecasts of temperature and rainfall; for example, the operational seasonal forecasting system of the International Research Institute for Climate and Society (IRI) relies on an ensemble of El Niño models to provide its lower boundary conditions. It is mainly the predictability of El Niño, and the quantification of its global impacts, that has turned short-term climate prediction in the tropics and beyond from a dream into reality. For specific El Niño events, however, actual model skill is still not satisfactory. Fig. 2 shows predictions of eastern Pacific sea surface temperature from October 2007 to July 2009 by 22 different models: although the ensemble of model predictions basically brackets the observed curve, the spread among models is large, indicating that El Niño prediction still carries considerable uncertainty. Fig. 2  Observed (black line) and model-predicted (colored lines) average sea surface temperature anomaly in the Nino3.4 region (5°S~5°N, 120°W~170°W). Controversy and prospects: El Niño is undoubtedly predictable; the questions are how high the degree of predictability is and how much room remains for improving predictive skill. To answer them we first need to know the physical basis of prediction. The predictability of El Niño stems from the air-sea interaction of the tropical Pacific, the dominance of the slowly varying ocean in that interaction, and the low-dimensional nature of the coupling. The debate over El Niño predictability has therefore focused on the strength of air-sea coupling in the tropical Pacific. Traditional theory holds that ENSO is a self-sustaining interannual mode relying on strong tropical Pacific air-sea coupling, and that its predictability is limited mainly by the growth of initial errors, so the potential forecast lead should be of the order of several years. Another theory emphasizes the triggering role of atmospheric "noise", especially westerly wind bursts in the equatorial western Pacific, in setting off ENSO events. On this view ENSO is a damped oscillation maintained by stochastic external forcing, and its predictability is dominated by noise rather than by initial conditions; El Niño could then not be predicted long in advance, since all El Niño events are accompanied by high-frequency external disturbances. The dilemma of the "noise" theory is that it cannot explain why, although atmospheric noise such as westerly bursts is present at all times, El Niño occurs on a specific time scale of 2 to 8 years; the effect of the noise is thus more likely to reinforce than to trigger El Niño. Fedorov et al.
[6] proposed a compromise, treating ENSO as a weakly damped oscillation modulated by noise: its time scale is determined by the dynamics of air-sea coupling, while stochastic external forcing sustains the oscillation and renders it irregular. Predictability is then governed by both initial conditions and random disturbances, the former determining the phase of ENSO and the latter affecting its subsequent development. However, the hindcast experiments of Chen et al. [7] showed that all significant El Niños of the past century and a half could be predicted about two years in advance (Fig. 3), with no stochastic forcing at all in their model. This suggests that the predictability of El Niño depends more on initial conditions than on high-frequency atmospheric disturbances. It is worth pointing out that they used only an intermediate-complexity model initialized with sea surface temperature observations alone, so their results should be a conservative estimate of the potential predictability of El Niño. Figure 3  The six largest El Niño events since 1856: thick red lines are observed average sea surface temperature anomalies in the NINO3.4 region (5°S~5°N, 120°W~170°W); the green, blue, purple, and light blue curves are LDEO model forecasts at leads of 24, 21, 18, and 15 months. In general, four main factors currently limit the level of El Niño prediction: the inherent limits of predictability; insufficient observational data; defects in the forecast models; and improper use of data [8]. As noted above, although the intrinsic limit of El Niño predictability is still debated, more and more evidence shows that short-term climate variations in the tropics, El Niño above all, are quite predictable, with an upper limit well above the level reached so far, so existing forecast systems still have much room for improvement. To further improve El Niño prediction, the main tasks are to improve the observing system, the prediction models, and the data assimilation methods. Specifically, developing coupled data assimilation and model initialization schemes, improving the simulation and parameterization of surface heat and freshwater fluxes, and accounting for influences from outside the tropical Pacific, especially the Indian Ocean, are promising avenues [8]. These are also the current hotspots of El Niño research, and we hope for breakthroughs in the near future that will bring predictive skill close to the upper limit of theoretical predictability.", "The sea-air interaction process and its importance: The essence of sea-air interaction is the exchange of energy and matter between ocean and atmosphere and their mutual constraint and adaptation. The transfer of energy and matter across the ocean surface is the link between the ocean and atmosphere spheres and one of the key scientific issues in the study of global climate change. The interaction between ocean and atmosphere spans many temporal and spatial scales and is usually divided into small-scale, mesoscale (weather-scale), and large-scale (planetary-scale) sea-air interaction processes.
In terms of small-scale sea-air interaction, a typical example is the ocean's direct control, through air-sea exchange, of the growth and decay of typhoon intensity. At large scales the most typical process is El Niño, and large-scale sea-air interaction also directly controls global climate change. The ocean affects the global climate and its variability through the sea-air interaction process. The ocean covers about 71% of the earth's surface and holds about 97% of the global water volume. It is an important part of the global water cycle and shapes the global distribution of precipitation; ocean circulation and atmospheric circulation together play the key role in the global redistribution of heat, salt and fresh water and determine the formation and variation of the major climate features; the huge heat capacity and inertia of the ocean are the physical basis of climate prediction; and the ocean's uptake of greenhouse gases such as carbon dioxide effectively slows global warming. The ocean thus exerts an important influence on global climate change through sea-air interaction. The effect of the atmosphere on the ocean and of the ocean on the atmosphere: The influence of the atmosphere on the ocean is mostly dynamic and divides into inputs of energy (momentum and heat) and of matter (rainfall, CO2, etc.). Under the action of the wind field, wind-driven currents, waves, and coastal set-up and set-down are generated directly, regulating the horizontal and vertical distributions of temperature, salinity and density in the ocean. Ocean circulation maintains the climate system by transferring heat between low and mid-high latitudes. The atmosphere affects not only the upper ocean but also, through processes such as the overturning circulation, the middle, lower and even bottom layers, and anomalous atmospheric circulation can likewise produce anomalous ocean circulation, storing the anomalous signals of atmospheric processes in the ocean. Generally speaking, the ocean is a carbon sink for the atmosphere: greenhouse gases such as the CO2 emitted by humans are transferred into the ocean through air-sea exchange, causing a certain degree of ocean acidification. The influence of the ocean on the atmosphere is mostly thermodynamic. A typical example is the east-west contrast of equatorial Pacific sea surface temperature, which drives the atmospheric Walker circulation. Air-sea interaction also directly shapes the occurrence, development and decay of El Niño. In addition, typhoons/hurricanes draw energy from warm ocean water to grow and decay over colder water. Most striking of all, the ocean controls global climate change through air-sea interaction; this controlling role has become a consensus in the ocean and atmosphere communities. However, because the ocean-air interaction process is still insufficiently understood, the accuracy of climate prediction remains limited to a certain extent.
Key scientific issues: Current problems in the observation of air-sea interaction include weak observing capability, few targeted scientific experiments, and a lack of observations under high sea states. The scarcity of in-situ data not only limits understanding of the physics of air-sea interaction but also directly constrains the forecasting of ocean and atmospheric disasters and the accuracy of climate prediction. From the 1980s onward a series of scientific programs, including the Tropical Ocean Global Atmosphere program (TOGA), the TOGA Coupled Ocean-Atmosphere Response Experiment (TOGA-COARE), the World Ocean Circulation Experiment (WOCE), the Global Ocean Observing System (GOOS), Climate Variability and Predictability (CLIVAR), and the Global Earth Observation System of Systems (GEOSS), has greatly promoted the development of the global air-sea observation network and deepened understanding of the physics of air-sea interaction. The observations of air-sea interaction now under way in the western Pacific, the Indian Ocean and the South China Sea will surely greatly improve China's understanding of air-sea interaction. At the sea-air interface, complex exchanges of momentum, heat, water vapor, CO2 and other gases, and aerosols take place. How to express the physics of air-sea interaction mathematically is the core scientific problem of the field. Bulk parameterization schemes are generally used to handle these processes, and the air-sea momentum flux expression given by Charnock is currently the most common parameterization [1]. Although the core issue of air-sea exchange is the flux exchange within the wave-affected layer, most work so far has dealt with the constant-flux layer above the influence of sea-surface processes [2], and a deep understanding of how momentum, heat and matter cross the air-sea interface is still lacking. The latest in-situ and microwave radiometer observations reveal that the sea surface drag coefficient changes significantly during typhoons [3] (Fig. 1), indicating that a strong thermodynamic-dynamic air-sea coupling process is involved in typhoon development. This coupling does not occur only at the ocean surface; the upper ocean affects it directly. Recent observations by the US National Oceanic and Atmospheric Administration (NOAA) show that the growth and decay of typhoons depend not on the pre-storm sea surface temperature but on the heat content of the upper ocean, indicating that air-sea interaction is influenced not only by the interface itself but also, importantly, by the upper ocean. Numerical models are an important tool for studying the air-sea interaction process and the main route by which its research results are applied. At present, coupled air-sea models share common problems such as the tropical bias and the spring predictability barrier, which are directly related to the limited understanding of air-sea interaction.
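For reference, the Charnock parameterization mentioned above closes the neutral momentum flux by setting the roughness length to z0 = αu*²/g and combining it with the logarithmic wind profile. The sketch below solves this closure by fixed-point iteration; the Charnock constant α and the first guess are illustrative assumptions. Notably, the resulting drag coefficient rises monotonically with wind speed, which is precisely the behavior that the typhoon observations of Fig. 1 call into question at high winds.

```python
import numpy as np

def neutral_drag_coefficient(U10, alpha=0.011, kappa=0.4, g=9.81, nit=30):
    """10 m neutral drag coefficient from the Charnock closure.

    Roughness length z0 = alpha * ustar^2 / g, combined with the neutral
    log law U10 = (ustar/kappa) * ln(10/z0), solved by fixed-point
    iteration. alpha ~ 0.011-0.018 over the open ocean (assumed here).
    """
    ustar = 0.035 * U10                       # crude first guess
    for _ in range(nit):
        z0 = alpha * ustar ** 2 / g
        ustar = kappa * U10 / np.log(10.0 / z0)
    return (ustar / U10) ** 2

for U in (5.0, 10.0, 20.0, 30.0):
    print(f"U10 = {U:4.1f} m/s  Cd = {neutral_drag_coefficient(U):.2e}")
# Cd grows monotonically with wind speed under this closure, with no
# leveling-off or decrease at typhoon-strength winds.
```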
Air-sea interaction takes place at the sea surface, and the wave processes that carry most of the mechanical energy there have an important effect on air-sea exchange. Hasselmann proposed in 1991 that ocean-wave processes should be added to coupled air-sea models; this would improve not only the momentum, heat and mass exchange at the sea-air interface but also the simulation of biogeochemical processes [4]. Figure 1  Sea surface drag coefficient versus wind speed under strong winds: the relationship indicates the complexity of air-sea interaction, which current theory and parameterization schemes cannot yet describe accurately. Through simulation experiments with an atmospheric circulation-wave coupled model, Yu et al. showed that such coupling improves the calculation of the sea surface drag coefficient and yields a more reasonable air-sea momentum flux, a significant improvement [5]. Song et al. improved the ocean mixing process in 2007, thereby also changing the air-sea fluxes of the coupled model and significantly reducing the tropical bias [6]. These developments show that adding ocean-wave physics to the treatment of air-sea interaction can deepen understanding and improve the forecasting ability of coupled air-sea models.", "With the establishment of the Northern Hemisphere ice sheets 2.5 million years ago, global climate fluctuations continued the Paleogene-Neogene pattern, but their amplitude increased markedly. For the past 900,000 years in particular, the linear driving role of the Milankovitch orbital rhythms in the long-period variations of the earth's climate has been widely accepted. However, paleoclimate records from ice cores and deep-sea sediments show that the last glacial period contained a series of abrupt climate events on millennial time scales. Atmospheric temperatures calculated from Greenland ice-core δ18O records show 24 rapid warming events between 115,000 and 14,000 years ago, with an average amplitude of 5~8°C; each warm period was followed by a cold period, with a cycle of about 1470 years, the so-called Dansgaard-Oeschger cycle [1]. The Holocene is the period of geological history in which humans now live; it began with the termination of the Younger Dryas event and has lasted about 11,700 calendar years. Although the warm Holocene climate has sustained the development and progress of modern society, humans have not yet formed a systematic understanding of the processes and mechanisms of climate evolution during this period. In the 1990s, on the basis of the relatively stable oxygen-isotope temperature records of Greenland ice cores, the Holocene was once considered a climatically stable period, in stark contrast to the large millennial-scale fluctuations of the last glacial period. But ocean records from the North Atlantic then revealed a series of postglacial ice-rafted debris (IRD) events, with peaks about 400, 1,400, 2,800, 4,300, 5,900, 8,100, 9,400, 10,300, and 11,100 years ago (the Bond cycle), during which surface seawater temperature varied by as much as 2°C, indicating substantial climate change (Fig. 1) [2].
Even more interesting, these events recur at nearly the same pacing as the Dansgaard-Oeschger cycle, about 1500 years, and appear to be the postglacial continuation of the climate fluctuations of the last glacial period. Clearly, the Holocene of the North Atlantic region, together with the millennial-scale rapid fluctuations of the last glacial period, cannot be explained simply by changes in orbital parameters. A number of international cooperative programs and projects are now being implemented that take high-resolution paleoclimate records as their main research content, using them to achieve high-precision global correlations and to study further the occurrence characteristics, generating mechanisms, and feedback effects of these events. Global records of millennial-scale Holocene climate change: Since the "1500-year" cycle of Holocene climate change was discovered in the North Atlantic, similar quasi-periodic records from other regions of the world have steadily accumulated, suggesting that the millennial-scale climate fluctuations of the Holocene may be of global significance. In the circum-North Atlantic region, variations in sediment grain size in the southern Iceland Basin show that the Iceland-Scotland Overflow Water (ISOW), an important component of the North Atlantic thermohaline circulation (THC), strengthened during warm events and weakened during cold events with a period of about 1500 years [3], although there is no direct one-to-one correspondence between ISOW weakening events and the ice-rafted debris events. The δ18O variations of stalagmites on land in northern Europe agree strongly with the North Atlantic IRD events, and their 1450-year cycle is very close to the 1470-year cycle of the IRD events [4]. Eastern North America likewise experienced a series of cooling events of 0.2~2°C, each lasting about 300~500 years, at intervals of about 1400 years [5]. The Holocene climate of the circum-North Pacific region resembles that of the North Atlantic: when North Atlantic ice-rafting events occurred, temperature and moisture in Alaska and northwestern Canada also decreased, while moisture increased significantly during warm events and during transitions from cold to warm events [6]. The sea surface temperature of the Kuroshio Extension in the northwest Pacific shows centennial- and millennial-scale Holocene variations with a period of 1470 years, and some events correlate well with the North Atlantic IRD events; this reflects a 1500-year cycle in the north-south displacement of the Kuroshio Extension and indicates a close climatic connection between the North Pacific and the North Atlantic [7]. Figure 1  The percentage of hematite-stained grains in Holocene North Atlantic sediments and the ice-rafting events it represents. Off the coast of West Africa, owing to enhanced southward convection and local upwelling, a series of millennial-scale cooling events occurred in sea surface temperature (SST), at intervals of 1500±500 years [8].
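Periodicities like the "1500-year" cycle discussed above are usually extracted from unevenly dated proxy series by spectral methods suited to irregular sampling, such as the Lomb-Scargle periodogram. The toy example below uses a synthetic record rather than real Bond-event data and simply shows the procedure.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly dated proxy series carrying a 1470-year cycle plus
# noise: an illustrative stand-in for a real ice-rafting or SST record.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 12000.0, 300))        # ages, years BP
y = np.sin(2 * np.pi * t / 1470.0) + 0.8 * rng.standard_normal(t.size)

periods = np.linspace(500.0, 3000.0, 2000)         # candidate periods, years
power = lombscargle(t, y - y.mean(), 2 * np.pi / periods, normalize=True)
print(f"spectral peak near {periods[np.argmax(power)]:.0f} years")
```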
In the Indian monsoon region, Oman stalagmite records show that monsoon rainfall has weakened gradually since 8,000 years ago, indicating that the ITCZ moved southward as Northern Hemisphere summer insolation weakened. Superimposed on this long-term trend are some decadal-scale rainfall events, but no counterparts of the Bond events have been found [9]. Stalagmite records from southern China show that the Asian monsoon has weakened gradually over the past 9,000 years, consistent with the Oman records; the difference is that the southern China records contain millennial-scale weak-monsoon events that correlate with the North Atlantic ice-rafting events, each lasting about 100~500 years [10]. Possible mechanisms of the millennial-scale Holocene climate fluctuations: Although human activities such as increased greenhouse-gas emissions and the overuse of land have strongly influenced global climate in the roughly 200 years since the Industrial Revolution, over the long Holocene, about 11,000 years, changes in the earth's climate and environment have been driven mainly by natural processes, some of which originate in extraterrestrial forcing, such as changes in the earth's orbital parameters and in solar activity. The sun is the most important driver of the earth's climate system, in two main respects. On the one hand, changes in the earth's orbital parameters affect the total solar radiation the earth receives and the distribution of this heat over the planet. Holocene changes in received solar radiation are controlled mainly by the 19,000-year and 23,000-year precession cycles. Under their influence, Northern Hemisphere summer insolation decreased gradually through the Holocene while winter insolation increased, so the seasonal contrast of climate diminished; this provides the large-scale background for the Holocene millennial-scale fluctuations. On the other hand, the sun's own radiative output varies on a range of time scales. From direct comparison of solar-activity proxies (14C and 10Be) with the North Atlantic ice-rafted debris events, it has been inferred that solar variability is the driver of the Holocene "1500-year" climate cycle [1]. Through its effect on sea ice, a relatively small change in solar output could affect deep-water formation in the North Atlantic, thereby amplifying the solar signal and transmitting its influence globally; this mechanism has been supported by coupled climate model simulations. However, solar activity itself has only 900~1000-year and 400~500-year cycles and no "1500-year" cycle, and it has therefore been argued that the North Atlantic climate is not linearly related to solar activity. Interestingly, simulations show that modulation by the orbital parameters of centennial-scale solar variability can produce Holocene millennial-scale climate events. Various kinds of particulate matter and gases emitted by volcanic activity also have an important impact on climate.
In large volcanic eruptions, volcanic aerosols can enter the troposphere and even the stratosphere, reducing incoming solar radiation and lowering surface temperature; the effect can last up to three years and lowers the global mean temperature by 0.1~0.2°C [11]. If the effects of a large eruption project regionally onto the Arctic Oscillation/North Atlantic Oscillation climate mode, they can lead to warmer winters and cooler summers in the Northern Hemisphere. Beyond these short-term effects, a series of volcanic events can play an important role in long-term cooling, as in the Little Ice Age. Changes in land-ocean-atmosphere interactions on different time and space scales may also be an important cause of Holocene climate change. The land-ocean-atmosphere interaction processes of the earth's climate system include the El Niño-Southern Oscillation, the Arctic Oscillation/North Atlantic Oscillation, the Atlantic Multidecadal Oscillation, the Pacific Decadal Oscillation, and the North Atlantic thermohaline circulation. Long-term changes in these large-scale air-sea interaction processes may profoundly influence Holocene climate change. Records show that the activity of the El Niño-Southern Oscillation, the large-scale air-sea interaction phenomenon of the Pacific, increased from the early to the late Holocene. Recent wavelet analyses indicate that the Holocene "1500-year" climate cycle is closely related to changes in ocean circulation but unrelated to solar radiative output [12]. From these current studies and controversies over the possible mechanisms of Holocene climate fluctuations, it is clear that the cause of the millennial-scale periodicity of Holocene climate change, which bears profoundly on our present reality, remains a mystery.", "Carbon dioxide is the most abundant greenhouse gas in the atmosphere and one of the culprits behind the global change of worldwide concern today. The ocean is the most important sink of carbon dioxide after the atmosphere: the more the ocean absorbs, the slower global warming proceeds, giving humans and the earth system more time to adapt. For decades, however, the global marine community has concentrated on the open ocean in its carbon dioxide studies, and the coastal zone remains little known. When the Land-Ocean Interaction in the Coastal Zone (LOICZ) program was first planned, the carbon dioxide flux of the coastal zone was listed as one of its core priorities; the program's first report, published in 1995, was titled "Coastal Seas: a net source or sink of atmospheric carbon dioxide?" [1]. Because of the complex interactions of land, rivers, ocean, atmosphere, sediments and organisms in the coastal zone, even after the program's first 10 years of study it was still unable to provide the data needed to determine whether the coastal zone is a source or a sink of atmospheric carbon dioxide. Currently, under the framework of the International Geosphere-Biosphere Programme (IGBP), besides LOICZ there are the Surface Ocean-Lower Atmosphere Study (SOLAS), the Integrated Marine
Biogeochemistry and Ecosystem Research (IMBER) program, and the Global Carbon Project (GCP), formed jointly by IGBP and the three other global change research programs: the World Climate Research Programme (WCRP), the International Human Dimensions Programme (IHDP), and DIVERSITAS. These programs all seek to address this issue. Preliminary data show that the current global ocean uptake of carbon dioxide may be underestimated by 20% because the coastal zone is not included [2-4]. The crux of the problem is the lack of data for the tropics and subtropics and the unclear control mechanisms of the carbon cycle there, which urgently need clarification. Taking the most complete statistics currently available as an example, the continental shelves and marginal seas of high and mid latitudes appear to be sinks of atmospheric carbon dioxide, while those of low latitudes are sources (Fig. 1) [5]. Figure 1  Air-sea exchange fluxes of carbon dioxide at different latitudes: the ordinate is the number of reported studies and the abscissa is the CO2 flux [unit: (mmol/m²)/d]; positive values represent uptake of carbon dioxide by seawater (sink), negative values release (source). The data above are extremely limited: for the entire low-latitude region there are only 11 data sets worldwide, which is seriously insufficient. Compounding the problem, data for estuaries are scarcer still. According to the latest statistics [5], only 32 estuaries in the world have had their air-sea CO2 flux surveyed, often without coverage of the different seasons. Therefore, although estuaries are currently considered sources of carbon dioxide, the exact total release cannot be known for want of representative data. To solve these problems it is necessary, on the one hand, to increase the amount of data and, on the other, to study the regularities in the data, so that a small amount of representative data can be extrapolated to the coastal zones of the world. In increasing the amount of data, the most important task is to obtain carbon dioxide flux data for low-latitude coastal areas and marginal seas, such as the South China Sea, the Sulu Sea and the Bay of Bengal, and for estuaries around the world; the amount of coastal data should be raised to at least 2~3 times the existing amount, and estuarine data should be increased at least 10-fold. As for the regularities in the data, it is necessary first to understand the seasonal and interannual variations of coastal seawater, the mechanisms and effects of its exchange with offshore water, the transport fluxes of rivers (including particulate and dissolved organic carbon, inorganic carbon and nutrients), and primary productivity, so that CO2 fluxes can be generalized and estimated for similar sea areas lacking survey data. The biogeochemical mechanisms of large and small river estuaries may differ and must first be understood; only then will it be possible to estimate the carbon dioxide flux of the hundreds of thousands of estuaries around the world from a small amount of data [6].",
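The fluxes compiled in Fig. 1 are, in essence, bulk estimates of the form F = k·K0·ΔpCO2, where k is a wind-speed-dependent gas transfer velocity and K0 the CO2 solubility. A minimal sketch follows, using the Wanninkhof (2014) transfer velocity; the solubility and Schmidt-number values are illustrative, and the sign convention matches Fig. 1 (positive = sink).

```python
def co2_flux(u10, pco2_sea, pco2_air, sc=660.0, k0=0.034):
    """Bulk air-sea CO2 flux in mmol m^-2 d^-1, positive = uptake (sink),
    matching the sign convention of Fig. 1.

    u10      : 10 m wind speed [m/s]
    pco2_*   : CO2 partial pressures [uatm]
    sc       : Schmidt number of CO2 (~660 at 20 degC, S = 35; assumed)
    k0       : CO2 solubility [mol/(L atm)], ~0.034 at 20 degC (assumed)
    Transfer velocity (Wanninkhof, 2014):
    k [cm/h] = 0.251 * u10^2 * (sc/660)^-0.5
    """
    k_m_per_day = 0.251 * u10 ** 2 * (sc / 660.0) ** -0.5 * 24.0 / 100.0
    dpco2_atm = (pco2_air - pco2_sea) * 1.0e-6      # uatm -> atm
    return k_m_per_day * (k0 * 1.0e3) * dpco2_atm * 1.0e3

print(co2_flux(7.0, 330.0, 400.0))   # undersaturated shelf water: sink (+)
print(co2_flux(7.0, 800.0, 400.0))   # CO2-rich estuary: source (-)
```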
"The vast ocean covers about 71% of the global surface and plays a very important role in regulating the earth system, global environmental change, and global climate change. Marine science, including marine remote sensing science and technology, has received extensive attention since the second half of the 20th century and has developed rapidly; it is recognized that the 21st century is, in a sense, the century of the ocean. Observing and studying the ocean from space has incomparable advantages; ocean remote sensing can be called the clairvoyance of marine science. In the past 20 years, ocean remote sensing has acquired a large amount of high-precision data on sea surface height, the sea surface wind field, sea surface roughness, temperature, and water pigment concentration. We can now study the fluid ocean and its changes on a truly global basis, on time scales from years down to days and on spatial scales from the globe down to meters. The measurement methods are accurate and mature enough that, with the help of a variety of new remote sensing processing techniques, a large number of studies of a wide variety of phenomena are now possible. However, owing to sensor limitations, most ocean phenomena observed by remote sensing are confined to the sea surface or the upper mixed layer. The physical, biological, chemical and geological changes of the ocean are three-dimensional processes, and the corresponding phenomena in the deep ocean are complex and important but hard to detect and study by conventional means. Because remote sensing sensors penetrate seawater only to a limited degree, space-based observation of the deep ocean has been the most difficult and most important topic since marine remote sensing began. Can ocean remote sensing overcome the selective absorption, emission and scattering of electromagnetic waves by the seawater surface layer and detect deep-ocean phenomena? Over the past 20 years, attempts to "break through the ocean surface" have become a hot topic in oceanography, with results widely reported in the media. Yan et al. [1-3] and Yan and Okubo [4] developed a method for inferring the depth of the upper-ocean mixed layer from multi-sensor satellite data. Bobanovic and Thompson [5] developed a method for inferring the density structure of the ocean subsurface from satellite data. Klemas et al. [6] developed a method for studying internal waves beneath the ocean surface from satellite data. Ali et al. [7] used a neural-network remote sensing algorithm to infer the thermal structure below the ocean surface. In addition, Yan et al. [8] developed a method for studying changes in the Mediterranean outflow water and its eddies (Meddies) at a depth of 1000 m. Although these methods have successfully broken through the sea-surface view from space, many important interior ocean processes still need to be observed and studied from space and remain out of reach because of sensor limitations and methodological difficulties. For example, deep-sea processes closely tied to global climate change, such as the Meridional Overturning Circulation (MOC), Deep Ocean Convection (DOC), mid-ocean ridges, seabed sediments, deep-sea bottom topography, and some deep-sea biogeochemical processes, remain unresolved.
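Satellite inferences of the mixed layer, such as those of Yan et al. cited above, are ultimately validated against in-situ profiles, where the mixed layer depth is commonly defined by a small density offset from a near-surface reference value (for example, 0.03 kg/m³ below the value at 10 m, following de Boyer Montégut et al.). A minimal sketch of that in-situ criterion, assuming a profile sampled at increasing depths:

```python
import numpy as np

def mixed_layer_depth(depth, sigma_theta, dsigma=0.03, zref=10.0):
    """Mixed layer depth by the density-threshold criterion
    (after de Boyer Montegut et al., 2004).

    depth       : increasing depths [m]
    sigma_theta : potential density anomaly at those depths [kg/m^3]
    dsigma      : threshold above the reference value [kg/m^3]
    zref        : reference depth [m]
    """
    depth, sigma_theta = np.asarray(depth), np.asarray(sigma_theta)
    target = np.interp(zref, depth, sigma_theta) + dsigma
    below = np.nonzero(sigma_theta > target)[0]
    if below.size == 0:
        return depth[-1]           # mixed down to the deepest level sampled
    i = below[0]
    if i == 0:
        return depth[0]
    # linear interpolation between the two bracketing levels
    return np.interp(target, sigma_theta[i - 1:i + 1], depth[i - 1:i + 1])
```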
We still need to develop new methods, continuing recent tentative studies that combine satellite altimeter, scatterometer, infrared, ocean color, synthetic aperture radar and other observations and techniques (including in-situ observations), integrated with and supported by numerical circulation models, to infer three-dimensional ocean interior processes, temporal variations of the ocean circulation, air-sea interactions, and global and regional deep-ocean processes. Using remote sensing to study the meridional overturning circulation (MOC) and deep ocean convection (DOC) has become an active frontier of space-based deep-ocean research, recently identified by the US National Academy of Sciences' decadal survey as one of the most difficult and most urgently needed research topics. Take the Atlantic deep-sea circulation as an example. The Gulf Stream transports heat from low to high latitudes in the Atlantic. Air-sea interaction and deep convection make the surface water cooled at high latitudes denser, so that it sinks into the deep sea and then overturns and flows along the bottom toward the low-latitude seas; this is called the "Atlantic heat conveyor belt". The melting of Arctic ice and snow caused by global climate change and global warming has lowered the density of surface seawater at high latitudes (melting reduces sea surface salinity and hence density), slowing deep ocean convection (DOC); the meridional overturning circulation (MOC) then also slows, which in turn cools the winter climate of high-latitude Europe. The MOC and DOC are important deep-sea dynamical processes that play a very important regulatory role in the earth system, global environmental change, and global climate change. Studying these deep-sea phenomena by remote sensing is very difficult, but of great significance.", "Dissolved organic carbon (DOC) in seawater is the largest reservoir of exchangeable organic carbon in the ocean [1,2]. As such a large organic carbon pool, DOC not only plays a vital role in the carbon cycle of the whole ocean and indeed of the globe; as an essential food and energy source for microorganisms, it also has a function that cannot be ignored in maintaining the marine microbial loop [3]. Early understanding of marine dissolved organic carbon was usually limited to determining its concentration and its distribution in the ocean. With the wide use of new analytical instruments, marine chemists have gradually come to know the chemical composition of marine dissolved organic carbon since the 1980s. In particular, the development and spread of accelerator mass spectrometry in the early 1990s, which permits accurate measurement of 14C content and hence of the age of marine dissolved organic carbon and of its main constituent compounds, has greatly advanced understanding of ocean carbon sources and their cycling [4]. Thousands of organic compounds of different origins exist in seawater, and one of the major sources is the organic matter synthesized and secreted by producers such as marine phytoplankton.
This marine autochthonous organic matter consists mainly of carbohydrates (sugars), proteins (amino acids) and lipids, which together account for more than 90% of marine autochthonous organic compounds [5]. These compounds dissolve into seawater as particulate organic matter breaks down. Natural radiocarbon (14C) dating shows that these newly synthesized marine organic compounds are quite young (20~50 years old), and once dissolved in seawater they are easily consumed as food by bacteria. Strangely, however, 14C dating of the three compound classes enriched from surface and deep seawater shows completely different 14C ages [6,7]. As shown in Figure 1, 14C dating of the high-molecular-weight total dissolved organic matter (bulk HMW-DOM) and of the proteins, carbohydrates and lipids isolated from it shows that the lipids have very old 14C ages in surface and deep ocean waters and in estuarine waters alike, thousands to tens of thousands of years older than the corresponding ages of the proteins and carbohydrates. Moreover, the ages of HMW-DOM and of its constituent compounds differ by thousands of years between the deep ocean and the surface, and 14C dating of the organic components of marine particulate matter and sediments shows this age difference even more clearly [8,9]. Marine organic geochemists currently find it difficult to explain these differences in 14C age among the compound classes. If the compounds derive mainly from marine autochthonous production, their 14C ages while circulating in the ocean or buried in sediments should be the same; there should not be such a large gap. Moreover, newly formed compounds such as amino acids and sugars are easily consumed by microorganisms in seawater. Why can they circulate in the ocean for thousands of years without being consumed by bacteria? These questions have become unsolved mysteries of marine organic geochemistry. Tentative explanations hold that adsorption onto particle surfaces or polymerization into macromolecules in seawater protects these compounds from microbial consumption and decomposition, allowing them to circulate for so long; in addition, compounds from other sources, such as oil seeps on the seabed and certain biological processes, may supply large amounts of lipids and thus give that class its very old 14C ages [10]. None of these hypotheses has yet been fully tested, and answers can only be expected from future research. Fig. 1  Differences in the 14C ages of proteins, sugars and lipids in dissolved macromolecular compounds from surface and deep waters of different estuaries and of the Atlantic and Pacific oceans [6,7]",
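The compound-class ages discussed above follow from the conventional radiocarbon age equation, t = −8033·ln(F14C), where 8033 years is the Libby mean life and F14C the measured fraction modern. The sketch below shows the arithmetic; the F14C values are invented to span the "young sugars, old lipids" contrast of Fig. 1.

```python
import numpy as np

LIBBY_MEAN_LIFE = 8033.0                  # years, = 5568 / ln 2

def c14_age(f14c):
    """Conventional radiocarbon age, t = -8033 * ln(F14C)."""
    return -LIBBY_MEAN_LIFE * np.log(f14c)

# Invented F14C values spanning the contrast seen in Fig. 1:
for label, f in (("fresh plankton product", 0.995),
                 ("deep-ocean carbohydrate", 0.70),
                 ("old lipid fraction", 0.30)):
    print(f"{label:24s} F14C = {f:.3f} -> {c14_age(f):7.0f} 14C years")
# ~40, ~2900, and ~9700 14C years: decades for fresh material,
# millennia for the old lipid fraction.
```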
"Since the Industrial Revolution, more than 1/3 of the CO2 released by human activities has been absorbed by the ocean, which plays an important role in mitigating global warming. However, the continuing rise of atmospheric CO2 concentration increases the amount of CO2 absorbed by the ocean, making surface seawater less alkaline. This process of increasing seawater acidity driven by rising atmospheric CO2 is called ocean acidification. Over the past hundred years the pH of surface seawater has dropped by 0.1 (the H+ concentration of seawater has increased by about 30%). If the structure of energy use does not change greatly, the atmospheric CO2 partial pressure will rise to about 81.0~101.3 Pa (roughly 800~1000 ppm) by around 2100, lowering surface seawater pH by a further 0.3~0.4 [1], which means the H+ concentration will increase by 100%~150%. Extrapolating from available fossil fuel reserves, human CO2 emissions will peak around 2150 and then decline, but high CO2 concentrations will persist in the atmosphere for thousands of years. Throughout this period the ocean will continue to absorb atmospheric CO2 and surface pH will keep falling rapidly; at the same time, the CO2 absorbed at the surface will be transported slowly to depth, so the pH of deep seawater will also gradually fall, with the effect reaching depths of thousands of meters. Ocean acidification is changing the concentrations and proportions of the different forms of inorganic carbon in seawater (CO2, HCO3⁻, CO3²⁻) and the CaCO3 saturation state of seawater [Ω = [Ca2+]×[CO3²⁻]/Kc, where Kc is the solubility product of Ca2+ and CO3²⁻ at CaCO3 saturation and depends on the CaCO3 polymorph (e.g., calcite or aragonite)]. Generally, HCO3⁻ accounts for more than 90% of the dissolved inorganic carbon (DIC) in seawater, CO3²⁻ for about 9%, and CO2 for less than 1%. A rise in atmospheric CO2 increases the concentrations of dissolved CO2, HCO3⁻ and H+, while the CO3²⁻ concentration and the CaCO3 saturation state decrease. Since 1880 the surface seawater CO3²⁻ concentration has decreased by about 10%. When the atmospheric CO2 concentration doubles, the CO2 concentration in surface seawater will increase by nearly 200%, HCO3⁻ by 11% and DIC by 9%, while CO3²⁻ will decrease by 45%, and the calcium carbonate saturation state will fall accordingly. Ocean acidification is therefore bound to threaten the health of marine ecosystems. The skeletons or shells of many marine animals and plants, such as shellfish, corals, coralline algae, coccolithophores and foraminifera, are composed of calcium carbonate. Why these organisms calcify is a mystery that has long puzzled marine scientists; but although that question remains open, the importance of their calcified structures, or skeletons, to their survival is indisputable. The calcification of marine calcifying organisms depends on the stability of the seawater carbonate system, and decreasing pH and calcium carbonate saturation affect their calcified skeletons or structures [2-6]. However, the extent to which ocean acidification affects the growth of calcifying organisms, and how they respond or adapt to the intensifying environmental pressure, are urgent problems facing the scientific community. The CO2 concentration, dissolved inorganic carbon, pH and CaCO3 saturation state of seawater are interrelated and may all affect calcification in calcifying organisms; which parameter plays the decisive role in the physiology of biological calcification is still unknown.
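Two of the quantities in this discussion reduce to one-line formulas: the H+ increase implied by a pH drop (pH = −log10[H+]) and the saturation state Ω = [Ca2+][CO3²⁻]/Kc. The sketch below checks the figures quoted above; the aragonite example uses rounded literature values and is illustrative only.

```python
def h_increase(ph_old, ph_new):
    """Fractional increase of [H+] for a pH drop (pH = -log10[H+])."""
    return 10.0 ** (ph_old - ph_new) - 1.0

def omega(ca, co3, kc):
    """CaCO3 saturation state Omega = [Ca2+][CO3 2-]/Kc, mol/kg units;
    Kc is the solubility product of the relevant polymorph."""
    return ca * co3 / kc

print(f"pH -0.1 : [H+] up {100 * h_increase(8.2, 8.1):.0f}%")     # ~26-30%
print(f"pH -0.4 : [H+] up {100 * h_increase(8.1, 7.7):.0f}%")     # ~150%
# Rounded values for present-day surface seawater and aragonite,
# for illustration only:
print(f"Omega_aragonite = {omega(0.0103, 2.0e-4, 6.7e-7):.1f}")   # ~3
```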
Controlled culture experiments have shown that when the DIC supply is limited, the calcification rate of coccolithophores decreases and their calcified plates flake off the cell surface, whereas raising the DIC concentration at constant pH promotes both photosynthesis and calcification. The reduction in CO3^2- concentration caused by seawater acidification will affect the calcification rate of calcifying algae [4, 5, 7]. However, it has also been reported that the acidification caused by increased CO2 promotes the calcification of coccolithophores, owing to the rise in HCO3^- concentration. This result caused great controversy, and scientists holding different views debated it in Science in 2008 [8]. Evidently, such indoor controlled experimental systems can yield completely different results depending on the methods and handling techniques used. The responses of calcifying algae to ocean acidification can also be quite different under the high light intensities and UV radiation found outdoors. Recent studies have shown that the calcareous shells of coccolithophores act as a shield against harmful UV radiation: when ocean acidification thins these shells, coccolithophores become vulnerable to physiological damage from strong ultraviolet radiation, which further reduces photosynthesis and calcification [5]. In addition, the calcification of shellfish, corals and coralline algae all decline under ocean acidification. When corals were cultured for one month in seawater at pH 7.4, their calcified "skeleton" disappeared completely (the corals remained alive); when normal conditions were restored, the calcified structure could be rebuilt [3]. This result demonstrates how strongly coral calcification depends on an alkaline seawater chemical environment. Marine calcifying organisms are of many kinds, and their calcification mechanisms are correspondingly diverse: coccolithophores calcify intracellularly and then extrude the calcified plates, arranging them regularly outside the cell; coralline algae calcify between cells; and coral calcification depends on their symbiotic algae. How will the calcification and related physiological metabolism of calcifying organisms be affected as ocean acidification worsens? How will they adapt to the chemical changes caused by acidification? Will species mutate or disappear? These questions remain to be answered. Beyond this, ocean acidification may have even more profound effects on marine biological production processes. Scientific thinking about the complex chemical impacts of ocean acidification and their consequences is only beginning. The primary producers that feed marine food chains will respond differently to these environmental changes and may alter or affect the entire food chains that depend on them. How will seawater chemistry change in the future? What impact will this have on marine ecosystems and on the benefits humans derive from the global environment? Clearly, a great deal of research is needed before we can face this change with confidence.", "Nitrogen is a major element of the components of living organisms (nucleic acids, amino acids, etc.) and an important participant in many biogeochemical cycles.
In the marine environment, the fixed-nitrogen inventory is one of the main limiting factors of primary productivity (the biological pump); it matters greatly for the ocean's uptake of the ever-increasing anthropogenic CO2 in the atmosphere [1], and it also plays a regulatory role in the global carbon cycle and in the effects of greenhouse gases such as N2O [2]. The marine nitrogen cycle is therefore one of the key topics in studies of marine biogeochemistry and climate change. Nitrogen occurs in many valence states, which gives it a rich variety of forms and lets it participate in many biogeochemical processes. The size of the biologically available nitrogen (fixed nitrogen, mainly nitrate) reservoir in the ocean is controlled chiefly by nitrogen fixation (input) and denitrification (output, including the recently discovered anammox process) [3]. Nitrogen fixation is the main input of fixed nitrogen to the open ocean: nitrogen-fixing organisms fix dissolved N2 from the water into organic nitrogen in their biomass, and when the organisms die this organic nitrogen is mineralized to dissolved fixed nitrogen and released into the water. Denitrification is the main removal pathway for biologically available nitrogen in the marine environment [4]; it generally occurs in anoxic settings (usually where dissolved oxygen is below 5 μmol/L) and is the process by which denitrifying bacteria reduce nitrate to N2. In the ocean, denitrification occurs mainly in anoxic intermediate waters (eastern equatorial Pacific, Indian Ocean) and in sediments (chiefly on continental shelves and slopes). In recent years, studies have found that anammox contributes substantially to the removal of fixed nitrogen in some sea areas [5, 6]. Across the four major glacial-interglacial transitions of the past 600,000 years, marine sediment cores record changes in the intensity of water-column denitrification (inferred from stable nitrogen isotope variations; sediment denitrification cannot be detected from isotopic changes). The trend is consistent with changes in atmospheric N2O content recorded in ice cores, implying that the available-nitrogen reservoir may have varied greatly in the past [7], driving changes in atmospheric CO2 by affecting the efficiency of the biological pump. However, the role played in global denitrification fluxes by the continental shelves, the regions controlled by sea-level rise and fall between glacials and interglacials, is hotly debated, because we know too little about denitrification on modern continental shelves, let alone during ice ages. Worse, apart from direct observation of sediment denitrification, there is currently no tracer with which to retrieve the history of its intensity. Recently, nitrogen fixation and denitrification have been found to be well coupled in space [8], suggesting that when denitrification intensifies, nitrogen fixation is stimulated and enhanced in adjacent non-denitrifying waters; this is called the homeostasis of the ocean. Such a regulatory mechanism can suppress excessive variation of the marine biologically-available-nitrogen stock (self-maintenance), and it challenges the earlier inference that changes in the ocean's available-nitrogen stock indirectly control atmospheric CO2 through the biological pump.
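The "homeostasis" idea can be made concrete with a toy box model: if nitrogen fixation is stimulated when the fixed-nitrogen inventory falls, while denitrification scales with the inventory, the reservoir relaxes back toward a steady state after a perturbation. This is only a caricature of the coupling reported in [8]; every rate constant below is invented for illustration:

```python
# Toy model: dN/dt = F(N) - D(N), with stabilizing feedback in both terms.
# N: fixed-nitrogen inventory (arbitrary units). All constants are invented.
F0, D0 = 1.0, 1.0      # baseline fluxes
N_REF = 100.0          # reference (steady-state) inventory
ALPHA = 0.5            # sensitivity of N2 fixation to an N deficit
BETA = 1.0             # sensitivity of denitrification to the N stock
DT, STEPS = 0.1, 2000

def fixation(n):        # stimulated when fixed N is scarce
    return F0 * (1 + ALPHA * (N_REF - n) / N_REF)

def denitrification(n): # scales with the available nitrate stock
    return D0 * (n / N_REF) ** BETA

n = 80.0  # start 20% below steady state (e.g., after strong denitrification)
for _ in range(STEPS):
    n += DT * (fixation(n) - denitrification(n))
print(f"inventory relaxes to ~{n:.1f} (steady state near {N_REF:.0f})")
```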
To understand changes in the global ocean's available-nitrogen inventory, the following questions must be addressed. How do the relative intensities of nitrogen fixation and denitrification change between glacials and interglacials [9], and does their coupling in time and space always hold? Can information be obtained from nitrogen-isotope differences along isochrons between cores in the eastern and western equatorial Pacific? How can the mechanism driving the coupling of denitrification and nitrogen fixation, and the speed of the coupled response, be detected? What tracers can be used to infer the history of the anoxia of intermediate waters and the spatial extent of anoxic zones? How can global sediment denitrification fluxes be assessed from direct observations of denitrification intensity in shelf and slope sediments? And how should the role and importance of the anammox process in removing biologically available nitrogen from the global ocean be assessed?", "Marine sediments play an important role in the biogeochemical cycles of many elements, and the main driving force of these chemical cycles is the metabolic activity of microorganisms. For a long time, owing to the limitations of sampling methods, biotechnology and received wisdom, research on sediment microorganisms was concentrated in the uppermost few centimeters to few meters of the sediment column. But is it true that there is no microbial metabolic activity in deep seabed sediments? By surface standards the environment of deep seabed sediments seems lethal: temperatures far from those at the surface (the highest exceeding the boiling point of water at atmospheric pressure, 100°C, and the lowest near its freezing point, 0°C), high pressure (up to hundreds of atmospheres), no oxygen, and scant food and energy supply. The negative or skeptical attitude toward the existence of deep life did not change fundamentally until the 1980s. The Deep Sea Drilling Project (DSDP) and the subsequent Ocean Drilling Program (ODP) provided in situ samples of deep seabed sediments, revealing that microorganisms exist hundreds or even thousands of meters below the seafloor [1]; moreover, extrapolation from the cell counts led to a surprising conclusion: this biomass accounts for at least 55% of Earth's prokaryotic biomass, about one-third of the planet's total biomass [2]. At the same time, advances in gene technology and in microbial cultivation confirmed the existence of these deep-seabed microorganisms on the one hand, and provided their classification at the genetic level on the other. These discoveries expand the scope of the biosphere and offer a possible basis for exploring whether life exists in similarly extreme environments on other planets. There has therefore been an upsurge of research on life in deep seabed sediments across global oceanography, and a new term, the "deep biosphere", was coined for it. In 2002, ODP launched a voyage dedicated to the study of the deep biosphere, ODP Leg 201. Drilling reached hundreds of meters below the seafloor at water depths of several thousand meters, and recovered sediment columns tens of millions of years old [3] (Fig. 1).
Studies have found cell abundances of at least 10^6 cells/cm^3, and as high as 10^10 cells/cm^3; RNA (ribonucleic acid) techniques have confirmed that these cells are actively metabolizing rather than dead or dormant [4], and DNA (deoxyribonucleic acid) data show that the vast majority of these organisms bear no relation to cultivable surface organisms [3]. In addition, scientists have found that the results obtained with DNA, RNA, biomarkers and other approaches do not fully agree [5]; for example, whether bacteria or archaea dominate the microbial communities of the deep biosphere is still controversial [5, 6]. Little is known about the physiological characteristics and functional gene expression of these microorganisms. Existing studies show that in the anaerobic environment of the deep seabed, microorganisms can sustain themselves on the energy released by oxidation-reduction reactions between electron acceptors such as nitrate, tetravalent manganese, ferric iron and sulfate, and electron donors such as organic matter [3]. Microbially mediated methanogenesis is also widespread. (Fig. 1: During ODP Leg 201, deep sediment samples freshly recovered from the eastern equatorial Pacific at a water depth of 5000 m; the masks guard against inhaling toxic hydrogen sulfide and explosive gas from hydrates, both products of microbial metabolism [5] and important components of metabolic activity in the deep biosphere.) In shallow, organic-rich sediments, these microbial metabolic reactions typically show a layered distribution ordered from high to low by the standard Gibbs energy each reaction can yield. In deep sediments with insufficient energy supply, however, these reactions may occur simultaneously, and the microorganisms (bacteria and archaea) mediating them cooperate and compete, jointly keeping the whole deep sedimentary ecosystem running for millions of years. Because metabolic reaction rates in deep sediments are very low, the usual isotope-tracer techniques cannot detect them. The common method is to measure the concentrations of dissolved substances consumed or produced by microorganisms in sediment pore water and to calculate reaction rates with a transport-reaction model. What this yields, however, is the net reaction rate of the substance considered; it cannot clearly quantify the other substances participating in the reaction, that is, it cannot pin down the exact metabolic reaction on which the microorganisms depend. When measuring reaction products such as ferrous iron, manganese and hydrogen sulfide, a strictly anaerobic environment is required, otherwise the sample is easily contaminated, and currently available data generally have to allow for that possibility. From the standpoint of the deep biosphere's energy supply, one possibility is that organic matter settling from seawater meets the needs of this sedimentary ecosystem; in that case the deep biosphere still depends on photosynthesis, that is, on solar energy.
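As a methodological aside, the transport-reaction calculation mentioned above reduces, in its simplest steady-state form, to Fick's second law with a reaction term, D·d²C/dz² + R = 0, so the net rate follows from the curvature of a measured pore-water profile. A minimal sketch with an invented sulfate profile (all numbers illustrative, not data):

```python
import numpy as np

# Synthetic pore-water sulfate profile: a straight line would mean pure
# diffusion; curvature implies net consumption or production.
z = np.linspace(0.0, 10.0, 101)      # depth below seafloor, m
D = 1e-2                             # effective diffusivity, m^2/yr (assumed)
C = 28.0 * np.exp(-0.3 * z)          # mmol/L, invented "measured" profile

# Steady state: R_net = -D * d2C/dz2 (negative values = net consumption).
d2C = np.gradient(np.gradient(C, z), z)
R_net = -D * d2C                     # mmol/(L*yr)

print(f"net rate at 1 m depth: {R_net[10]:.2e} mmol/(L*yr)")
```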
Alternatively, water in the deep seabed is split by the radioactive decay energy of isotopes such as uranium and thorium, generating hydrogen (H2) that can serve as an electron donor in microbial metabolic reactions; in that case the deep biosphere can exist independently of photosynthesis. How strongly the deep biosphere depends on the surface biosphere remains unclear.", "As early as 1899, Tolman clearly pointed out that the ocean plays a key role in regulating the global distribution of CO2 [1]. By the end of the 20th century the ocean had absorbed about 48% of the CO2 emitted by humans since the industrial revolution from fossil fuel use and cement production [2], making it the largest sink of anthropogenic CO2. However, some findings suggest that the ocean's efficiency in absorbing anthropogenic CO2 from the atmosphere has already begun to decline. Clearly, if the ocean's capacity to absorb anthropogenic CO2 approaches saturation, the global carbon cycle will change fundamentally, affecting in turn the basic conditions for human survival; this issue has naturally attracted great attention from scientists. The ocean can absorb large amounts of anthropogenic CO2 through three main mechanisms: the biological pump, the solubility pump, and the chemical buffering of seawater. In most mid- and low-latitude seas, pronounced stratification confines the anthropogenic CO2 absorbed by the ocean to the rapidly overturning upper layer, and it usually has to be carried into the long-residence-time deep sea, or into sediments, by the settling of biogenic particles; this is the biological pump. In the North Atlantic and the waters around Antarctica, by contrast, surface seawater density increases through two processes: one is strong cooling by the high-latitude atmosphere, the other an increase in salinity (for example through evaporation and brine rejection during sea-ice formation). The process by which this dense seawater carries a large load of absorbed anthropogenic CO2 into the deep sea is the solubility pump, and the anthropogenic CO2 entering the deep sea thereby joins the millennial circulation dominated by the oceanic conveyor belt (Fig. 1). The operating efficiency of the solubility pump depends largely on the chemical buffering capacity of seawater, and the amount of anthropogenic CO2 the solubility pump can absorb at different buffering capacities can be illustrated as follows. Assume that all seawater (1.3×10^18 m^3) participates directly in this millennial cycle, so that on average 1.3×10^15 m^3 of surface seawater sinks into the deep sea each year. Assume further that seawater had no chemical buffering effect on CO2, i.e., that human-emitted CO2 simply dissolved into seawater. Then, from the difference between the current mean atmospheric CO2 level of 3.8×10^-4 and the pre-industrial level of 2.80×10^-4 (atm), and the CO2 solubility (about 60 mol/(m^3·atm)), the excess inorganic carbon carried in gaseous form by each cubic meter of sinking seawater would be only about 0.07 g. Without chemical buffering, the ocean solubility pump could thus absorb only about 9.0×10^7 t of carbon per year as anthropogenic CO2, less than 5% of the anthropogenic CO2 the ocean actually absorbs from the atmosphere each year, which is obviously unreasonable.
The reason this result is unreasonable is that the second assumption is wrong. In reality, as soon as anthropogenic CO2 dissolves into seawater it interacts with chemical buffer systems such as the carbonate system: most of it is immediately converted into bicarbonate ions, and only a small proportion remains as free CO2. This is the chemical buffering of CO2 by seawater. The paper published by Revelle and Suess in 1957 first analyzed the effect of the seawater chemical buffer system quantitatively [3]: when the concentration of atmospheric CO2 in equilibrium with seawater increases by 10%, the concentration of dissolved inorganic carbon in seawater increases by only about 1%. This ratio was later called the Revelle coefficient, and was subsequently defined strictly as the homogeneous buffer coefficient of seawater at given temperature, salinity and alkalinity, (∂pCO2/pCO2)/(∂DIC/DIC) [5]. The higher the Revelle coefficient, the larger the share of absorbed CO2 that remains as free CO2, whose rising partial pressure hinders further CO2 uptake by that sea area; that is, the sea area buffers rising atmospheric CO2 weakly. Conversely, a low coefficient indicates strong buffering. According to research results from the late 20th century, the Revelle coefficient of the ocean surface layer lies between 8 and 13, relatively low near the equator and relatively high in high-latitude seas [2]. (Fig. 1: Schematic diagram of the oceanic conveyor-belt circulation [4].) On this basis we can re-estimate the magnitude of the ocean solubility pump. Assume again that all seawater (1.3×10^18 m^3) participates directly in the millennial cycle, with an average of 1.3×10^15 m^3 of surface seawater sinking into the deep sea each year. Using the difference between the current mean atmospheric CO2 level and the pre-industrial level, the current mean Revelle coefficient of the North Atlantic (about 11) [2], and the dissolved-inorganic-carbon concentration of high-latitude surface seawater (about 24 gC/m^3), each cubic meter of sinking seawater can be calculated to carry about 0.78 g of excess inorganic carbon. The cooling and sinking of seawater in high-latitude seas alone can therefore absorb anthropogenic CO2 equivalent to 1.0×10^9 t of carbon per year, roughly half of the anthropogenic CO2 the ocean absorbs from the atmosphere each year on average. These estimates demonstrate the chemical buffering capacity of seawater vividly. However, continued high-intensity emission of anthropogenic CO2 is consuming the seawater chemical buffer rapidly, and perhaps one-third of the ocean's potential to absorb anthropogenic CO2 has already been used up.
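Both back-of-the-envelope estimates above can be reproduced in a few lines; the inputs are the values quoted in the text, and the only chemistry is the Revelle relation ΔDIC/DIC ≈ (ΔpCO2/pCO2)/R:

```python
SINK_M3_PER_YR = 1.3e15               # surface seawater sinking per year, m^3
PCO2_NOW, PCO2_PRE = 3.8e-4, 2.8e-4   # atm (i.e., 380 vs 280 ppm)

# (1) No chemical buffering: excess CO2 simply dissolves.
K0 = 60.0                             # CO2 solubility, mol/(m^3*atm), from the text
excess_c = K0 * (PCO2_NOW - PCO2_PRE) * 12.0          # gC per m^3
print(f"unbuffered: {excess_c:.2f} gC/m^3 -> "
      f"{excess_c * SINK_M3_PER_YR / 1e12:.0f} Mt C/yr")   # ~0.07 -> ~90 Mt

# (2) With the carbonate buffer, via the Revelle coefficient.
R, DIC = 11.0, 24.0                   # Revelle factor; surface DIC, gC/m^3
delta_dic = DIC * ((PCO2_NOW - PCO2_PRE) / PCO2_PRE) / R  # gC per m^3
print(f"buffered:   {delta_dic:.2f} gC/m^3 -> "
      f"{delta_dic * SINK_M3_PER_YR / 1e12:.0f} Mt C/yr")  # ~0.78 -> ~1000 Mt
```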
At present, scientists have directly observed evidence of a decrease in surface-ocean pH [6], showing that alkalinity levels critical to seawater's chemical buffering have declined; data from the North Sea, a major marginal sea of the North Atlantic, likewise show that its Revelle coefficient in September 2005 was significantly higher than in September 2001, indicating that this marginal sea's buffering of rising atmospheric CO2 is weakening rapidly [7]. The reality and seriousness of the issue are still debated. For example, rising atmospheric CO2 concentration and temperature will both intensify continental weathering and increase the flux of material delivered to the sea by rivers, dust and other routes [8, 9], which may largely offset the negative effects of elevated atmospheric CO2 on local ocean carbonate buffer systems. In addition, some anaerobic processes in coastal sediments release alkaline substances that replenish the consumed chemical buffer of local seawater [10]. In short, this issue fully reflects the complex feedbacks among the atmospheric, terrestrial and oceanic carbon cycles, and a comprehensive analysis of it will help us understand the laws and trends of the carbon cycle and global change and so develop far-sighted response plans.", "CO2 is an important greenhouse gas, closely tied to primary production and respiration in the Earth system. Before the industrial revolution the atmospheric CO2 concentration was about 0.28‰; since then, the large amounts of CO2 released by fossil fuel combustion and other human activities have raised it to about 0.39‰ at present, and in recent years it has been rising at an ever higher rate. In recent decades, global climate change has made the release (source) and uptake (sink) of CO2 a focus of marine scientific research. The ocean is an important sink of atmospheric CO2: without oceanic uptake, the current partial pressure of atmospheric CO2 would be 44.6 Pa instead of 39.0 Pa. Overall, the open ocean absorbs about 2.0 Gt C of CO2 from the atmosphere per year [1] (1 Gt C = 10^15 g C). Although coastal seas cover only 7% of the global ocean area, they account for 14%-30% of its primary production, some 80% of its total organic carbon burial, and as much as 90% of its sediment mineralization. Coastal waters may therefore play an important role in the global pattern of oceanic CO2 uptake and release, and research on coastal carbon uptake/release has accordingly attracted the attention of the international oceanographic community over the past decade. The difficulty of the coastal carbon-cycle problem is that, compared with the open ocean, the coastal ocean is a more complex and dynamic system, with high productivity and diverse ecosystems, influenced by both land (rivers) and ocean and by frequent mesoscale processes such as coastal upwelling. The uptake/release (production/consumption) pattern of CO2 in coastal waters, and the processes controlling it, are therefore far more complex than in the open ocean.
In the first report of the international research program Land-Ocean Interactions in the Coastal Zone (LOICZ), Kempe posed the question: do coastal seas absorb CO2 from the atmosphere or release it to the atmosphere [2]? The question has to be considered in terms of the input fluxes of terrestrial organic carbon (particulate and dissolved) and of nutrients (elements such as nitrogen, phosphorus and silicon needed for phytoplankton growth), and their ecological effects. On the one hand, land delivers large amounts of organic carbon to the ocean through rivers and estuaries, and a considerable part of it degrades in coastal waters, releasing CO2. On the other hand, coastal waters also receive river-borne nutrients, which stimulate coastal phytoplankton growth and draw CO2 from the atmosphere. Simply put, the ebb and flow of these two processes controls whether the coastal ocean takes up or releases CO2. In addition, the rising concentration of atmospheric CO2 enters the ocean through physical processes that are often overlooked. Early studies identified the coastal ocean as a source of atmospheric CO2 (releasing CO2 to the atmosphere). For example, Smith et al. held that the coastal system is heterotrophic, i.e., that the CO2 produced by organic-carbon degradation in coastal waters exceeds the CO2 drawn from the atmosphere by phytoplankton, so the coastal system as a whole releases CO2 to the atmosphere. From the mass balance of terrestrial organic-carbon input, burial, mineralization and net primary productivity, they estimated that the coastal system releases 0.22 Gt C of CO2 to the atmosphere each year through the degradation of terrigenous organic carbon [3]. In recent years, however, more and more studies have found coastal systems that absorb CO2 from the atmosphere; the North Sea, the Atlantic, and the eastern coast of the United States are examples of areas that take up CO2 on an annual average [4, 5]. Reviewing previous studies, Ducklow and McCallister concluded that the coastal zone is a strong sink of atmospheric CO2, absorbing 2.1 Gt C of CO2 per year [6], comparable to the uptake of the entire global open ocean. Their result is the complete opposite of that of Smith et al.; one important reason is that the shelf productivity used by Ducklow and McCallister is much higher than that used by Smith et al., which makes the organic carbon synthesized by coastal phytoplankton far exceed the organic carbon degraded there. These two diametrically opposed results also show how the enormous spatial and temporal variability of coastal productivity complicates any accurate assessment of coastal CO2 uptake/release. In situ observations show that some coastal areas do release CO2 to the atmosphere, for example the northern South China Sea [7]. Because different coastal systems differ greatly in their CO2 uptake/release patterns and in the mechanisms controlling them, Cai et al. divided the coastal ocean into seven systems: the mid-latitude eutrophic seas, the mid-latitude mesotrophic seas, the low-latitude western-boundary-current shelves, the Arctic Ocean,
the Antarctic seas, the mid-latitude eastern-boundary-current shelves, and the low-latitude eastern-boundary-current shelves. Among these, the mid-latitude seas are a sink of atmospheric CO2, absorbing 0.33 Gt C of CO2 per year, while the low-latitude seas are a source, releasing 0.11 Gt C of CO2 to the atmosphere per year; the global coastal system is on the whole a sink of atmospheric CO2, absorbing 0.22 Gt C of CO2 per year. Cai et al. held that terrestrial organic-carbon input and sea-surface temperature are the main mechanisms controlling CO2 uptake in mid- and high-latitude coastal systems and CO2 release in low-latitude systems [8]. However, we still lack a quantitative assessment of how terrigenous inputs shape the coastal CO2 source-sink pattern. On the basis of existing studies it can be assumed that, except in large river estuaries and plume regions, the net effect of terrigenous inputs on coastal systems is to release CO2 to the atmosphere; it may likewise be conjectured that coastal systems generally absorb CO2 from the atmosphere because the steadily rising atmospheric CO2 concentration drives CO2 into seawater through physical processes. Both natural environmental change and human activity may alter coastal CO2 sources and sinks. For example, as populations grow and nutrient fluxes to the sea increase, coastal productivity may rise, leading coastal systems to take up more CO2 from the atmosphere; intensifying human activity may also change the flux of terrigenous material to the sea, which may in turn change coastal phytoplankton productivity and the balance between production and mineralization of organic matter, thereby shifting the coastal CO2 source-sink pattern. In short, the impact of terrigenous inputs on coastal CO2 sources and sinks is a very complicated issue; solving it will require not only extensive field investigation but also a synthesis of the different processes and settings associated with terrigenous inputs, together with numerical models for study and prediction.", "Introduction: The biggest environmental problem in the world today is climate change, that is, global warming, caused mainly by the intensified greenhouse effect of man-made CO2 emissions to the atmosphere. The ocean covers 71% of the global surface and plays an important role in regulating climate change. The biological mechanism of this regulation is photosynthetic carbon fixation: organic carbon fixed by photosynthesis passes step by step along the food chain from primary producers to higher trophic levels, generating various forms of particulate organic carbon (POC) that settle out, forming a vertical carbon flux from the ocean surface to the deep sea and even to the sediments. A portion of carbon is thereby sequestered in the ocean, kept out of the atmospheric CO2 cycle for a long time and serving as an "ocean carbon store". This process starts with photosynthesis and develops along the food chain.
The whole process is driven by organisms, as if they were "pumping" atmospheric CO2 into the sea, hence the name "biological pump" [1]. The biological pump just described relies on settling POC. In fact, marine organisms also produce large amounts of non-settling dissolved organic carbon (DOC) during their life activities, and the ocean's DOC pool is much larger than its POC pool: DOC accounts for as much as 90% of the total organic carbon pool in the ocean. What, then, is the relationship between this vast DOC pool and the atmospheric CO2 pool? What role does it play in the ocean carbon cycle? How is it connected with global change? Activity and function of marine dissolved organic carbon. The total amount of marine dissolved organic carbon (DOC) is as high as 700 Gt C [2]. By bioavailability, DOC can be divided into three categories: labile DOC (LDOC); semi-labile DOC (SLDOC), which degrades slowly; and recalcitrant DOC (RDOC), which resists biodegradation. LDOC concentrations in the ocean are usually at the nmol/L level, with residence times of only minutes to days; SLDOC can persist in surface seawater for months to years; while RDOC can be stored in the ocean for a long time, with a turnover time of about 5,000 years in the modern ocean and tens of thousands of years in some periods of Earth history. A large amount of RDOC has accumulated in the ocean: the modern-ocean RDOC stock is about 650 Gt of carbon, comparable to the total carbon in atmospheric CO2, and it constitutes a sink of atmospheric CO2. The basic principle of the "marine microbial carbon pump". How, then, is RDOC formed in the ocean? This is the key to understanding the mechanism of ocean carbon storage. Recent studies have shown that the ocean's vast numbers of microorganisms are the main source of marine RDOC [3-7]. Long-term laboratory culture experiments have confirmed that bacteria can efficiently convert LDOC into RDOC; bacterial RDOC includes porins, peptidoglycan, liposome-like substances, methylated and N-acetylated amino sugars, lipopolysaccharides, and certain D-amino acids. The predation of microorganisms, and their lysis by viruses, can also produce large amounts of RDOC. In this way microorganisms effectively take up low-concentration LDOC and SLDOC and transform and accumulate it into high-concentration RDOC, much as a pump lifts water from a low level to a high one, storing RDOC in the ocean; the process is therefore called the "microbial carbon pump" [3, 8]. Unlike the classic biological pump, which relies on the settling of particulate organic carbon, the microbial carbon pump depends not on sedimentation but on the ecological processes of microorganisms [3, 9].
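The contrast between the pools can be illustrated with a minimal steady-state budget: a pool's standing stock is its input flux times its residence time, so even a small leak of microbially processed carbon into a 5,000-year pool builds a large inventory. The production flux and conversion efficiency below are invented for illustration; only the turnover times come from the text:

```python
# Two-pool caricature of the microbial carbon pump (rates invented).
PROD = 2.0            # LDOC production routed through microbes, Gt C/yr (assumed)
TAU_L = 10 / 365.0    # LDOC turnover ~10 days, in years
TAU_R = 5000.0        # RDOC turnover ~5000 yr (from the text)
EFF = 0.065           # assumed fraction of LDOC throughput ending up as RDOC

# At steady state, each pool's standing stock = input flux * residence time:
ldoc_stock = PROD * TAU_L
rdoc_stock = (EFF * PROD) * TAU_R
print(f"LDOC standing stock: {ldoc_stock:.3f} Gt C")  # tiny despite a large flux
print(f"RDOC standing stock: {rdoc_stock:.0f} Gt C")  # ~650 Gt, cf. the text
```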
Ecological advantages of the microbial carbon pump. The microbial carbon pump not only takes up LDOC quickly and converts part of it into RDOC, maintaining the ocean's huge organic carbon pool, but also transforms the composition of marine dissolved organic matter in the process: the carbon:nitrogen:phosphorus ratio of labile dissolved organic matter is 199:20:1, whereas that of recalcitrant dissolved organic matter is 3511:202:1 [7]. In other words, the microbial carbon pump retains relatively more carbon in organic form and releases relatively more nitrogen and phosphorus to inorganic form. Inorganic nitrogen and phosphorus are usually scarce in the marine environment and are needed by primary producers such as phytoplankton. The microbial carbon pump therefore not only stores carbon but also promotes nutrient recycling, boosting the ocean's primary productivity and thereby reinforcing the classic biological pump [3, 9]. Besides these biological mechanisms, the other, physical mechanism of ocean carbon storage is the "solubility pump", by which atmospheric CO2 enters the ocean through dissolution. Although this mechanism can slow the rise of atmospheric CO2, once CO2 enters seawater the shift of chemical equilibria causes ocean acidification, which in severe cases leads to ecological disasters and a series of environmental problems. Compared with the solubility pump, the RDOC produced by the microbial carbon pump carries no such problem. (Figure 1: Schematic diagram of the marine microbial carbon pump.) Because RDOC turns over slowly in the ocean and is stored for a long time, it does not cause the drastic ecosystem changes that the solubility pump can. The microbial carbon pump is a demanding interdisciplinary proposition: it is a theoretical framework covering complex biogeochemical processes, and research on it involves biology, ecology, geochemistry, physical oceanography, marine sedimentology and other disciplines, requiring advanced techniques and joint multidisciplinary study. For example, studying microbial use of carbon sources requires not only in situ ecological surveys but also laboratory physiological experiments; not only their physiological and ecological processes but also the underlying molecular biology; from the cell level to the community level, and from functional genes to environmental genomics and proteomics. Again, analyzing RDOC requires not only measuring its amount but also understanding its composition, drawing both on advanced separation and measurement techniques and on cheminformatics methods to resolve its thousands of components. Furthermore, RDOC is a concept relative to environmental and biological context: when the environment and the organisms change, the meaning of RDOC changes too. In the euphotic zone, for example, functional groups such as aerobic anoxygenic phototrophic bacteria (AAPB) are highly selective in their use of organic carbon, implying that more of the DOC is RDOC with respect to AAPB. In the deep sea, by contrast, the proportion of Archaea is relatively high, and RDOC is the main DOC component there.
Archaea may be able to use carbon sources that other microbes cannot, so the RDOC pool as seen by Archaea may be relatively small. Studying species-specific and functional-group-specific RDOC can deepen our understanding of how organic carbon exists, transfers and transforms in the ocean. Furthermore, climate change is difficult to verify experimentally but can be approached through historical inversion. Isotopic evidence shows that a huge RDOC pool existed in Earth history, and fluctuations of this ocean carbon pool are closely linked to paleoclimate change; understanding its mechanisms will greatly benefit paleoclimate reconstruction and the prediction of future change. It is therefore essential to connect past and present, learning from the past to interpret the present and vice versa. In short, under the theoretical framework of the marine microbial carbon pump, deep and systematic study of microbial ecological processes, combined with interdisciplinary cross-fertilization and inversion across time and space, promises breakthroughs in understanding how the marine carbon cycle, and even global change, is regulated. Conclusion: the microbial carbon pump complements the classic biological pump, covering both "sedimentation" and "non-sedimentation" processes and yielding a more complete picture of the biological mechanisms of ocean carbon storage. Research on the microbial carbon pump will provide unprecedented parameters and evidence for understanding the ocean carbon cycle and its role in global climate change. In view of the importance and difficulty of this research, the Scientific Committee on Oceanic Research (SCOR) has set up working group WG 134 (The Microbial Carbon Pump in the Ocean) to guide and promote work in this field.", "Although viruses were found in seawater long ago, it was not until 1989, with the development of microscopy, that the ocean was found to contain an astonishing number of them. People soon realized that this "small thing", visible only under magnifications of tens to hundreds of thousands, plays a pivotal role in the ocean. With the application of many new technologies in recent years, the role of marine viruses in the ocean and in global ecosystems is gradually being understood. Abundance and diversity of marine viruses. Viruses are the most abundant life forms in the marine environment: their average concentration in seawater is about 3×10^9/L, and their total number reaches an astonishing 4×10^30. The vast majority are bacterial viruses (bacteriophages) [1, 2]. The number of viruses in the ocean is generally 10 to 15 times that of bacteria. On average each marine virus contains about 0.2 fg of carbon, so all marine viruses together hold some 200 Mt, similar to the carbon content of 750,000 blue whales; the average virus is about 100 nm long, and placed end to end they would stretch more than 100 times the width of the Milky Way we live in [3].
Because methods of counting marine viruses differ (scanning electron microscopy, fluorescence microscopy, flow cytometry, etc.), there is as yet no unified picture of their global distribution, their temporal and spatial variability, or the factors controlling them. Although viruses show only a few morphologies under the microscope (long-tailed, short-tailed, tailless, etc.), their genetic diversity is very high. By type of genetic material, viruses fall into four categories: double-stranded and single-stranded DNA viruses, and double-stranded and single-stranded RNA viruses. Since every marine bacterium or other organism hosts at least one, and often more than one, virus, one can imagine how high viral diversity must be. Recent marine metagenomic studies bear this out: only about 10% of bacterial metagenomic sequences are completely unknown new sequences, whereas for viruses the figure is 60% to 80%. This shows both that the diversity of marine viruses far exceeds that of their hosts and that our understanding of it is still at an early stage. The impact of marine viruses on bacterial diversity. Marine viruses act on bacterial diversity and population structure in two main ways, horizontal gene transfer and the lysis of specific hosts, of which host-specific lysis has the greater ecological impact on bacterial community structure [1, 2]. Of the two main causes of bacterial death, non-specific predation mainly affects bacterial numbers, while viral lysis, because of its relatively specific host range, regulates bacterial diversity and community structure. A well-known theory of how viruses regulate bacterial community structure is "Kill the Winner" [1]: the dominant group (the winner) in a bacterial community, by its sheer numbers, greatly raises the chance of infection by its specific virus; that virus multiplies massively and crashes the winner's population, freeing living space and nutrients for weaker bacterial groups and thereby maintaining bacterial diversity and ecosystem stability (Figure 1); a minimal simulation of this mechanism is sketched below. Molecular-biological analyses of changes in viral and bacterial community composition in representative sea areas provide strong evidence for this view [2]. Another example is the rapid collapse of red tides, a striking feature of which is their sudden extinction at the height of a bloom. "Kill the Winner" implies that when one species becomes overabundant, its chance of viral infection rises dramatically; a red-tide outbreak is a perfect opportunity for mass viral infection, and viral lysis brings about the swift demise of the bloom organisms. During one outbreak of ecologically important coccolithophores, for example, almost every cell was infected by viruses [2]. (Figure 1: Schematic diagram of the Kill-the-Winner theory, modified from reference [1].) In addition, viruses are vehicles of horizontal gene transfer in prokaryotes: one of the three known pathways of prokaryotic horizontal gene transfer is virus-mediated transduction, the exchange of genes between donor and recipient bacteria accomplished by viral delivery [2, 3].
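Here is the sketch promised above: a minimal Lotka-Volterra-style system in which each bacterial group has its own specific virus. Whichever host grows fastest is precisely the one whose virus is amplified, so no single group can take over. All parameters are invented for illustration:

```python
import numpy as np

# Two bacterial groups B, each with a specific virus V (arbitrary units).
mu = np.array([1.0, 0.6])       # growth rates: group 0 is the would-be "winner"
phi, burst, decay = 0.02, 10.0, 0.5   # adsorption rate, burst size, viral decay
K = 50.0                        # shared carrying capacity
B = np.array([1.0, 1.0])
V = np.array([0.1, 0.1])
dt = 0.01

for _ in range(10_000):
    infection = phi * B * V                       # host-specific lysis
    B = B + dt * (mu * B * (1 - B.sum() / K) - infection)
    V = V + dt * (burst * infection - decay * V)

print("bacteria:", B.round(2), " viruses:", V.round(2))
# Both groups persist at similar abundances, while the faster-growing
# "winner" simply ends up supporting a larger population of its virus.
```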
The photosynthetic genes in Synechococcus virus genomes that encode key proteins of the photosynthetic light-reaction center are strong evidence for virus-mediated horizontal gene transfer, and expression of photosynthetic proteins has indeed been observed during viral infection. Other studies have shown that 88% of isolated cyanophages carry photosynthetic genes, underscoring the importance of viruses as agents of prokaryotic horizontal gene transfer [4, 5]. The role of viruses in the biogeochemical cycles of marine ecosystems. It has been established, at least preliminarily, that the vast viral population plays an important role in the energy and material cycles of marine ecosystems [6]. Like predation by flagellates and other grazers, viral lysis is a main cause of marine bacterial death: in surface seawater, viruses account for 10% to 50% of bacterial mortality, and in marine environments hostile to protozoa this proportion can reach 50% to 100% [2]. But whereas predation transfers bacterial production and biomass up the marine food web, viral lysis returns bacterial production and nutrients to forms usable by other bacteria, creating a "viral loop" of material and energy flow beyond the "microbial loop" (Fig. 2). Modern marine ecology holds that marine bacteria drive the material and energy cycles of marine ecosystems; viruses, which kill bacteria in vast numbers and redirect nutrient flows, therefore exert a huge influence on the whole marine ecosystem [3, 7]. Studies show that 6%-26% of photosynthetically fixed carbon flows back to the marine dissolved organic matter (DOM) pool through the viral loop, with consequences for the global carbon cycle [4]. The direct effect of the viral loop is that a considerable part of material and energy is recycled within the microbial food web and consumed by respiration, while the energy exported to higher trophic levels shrinks: compared with a virus-free ecosystem, in one where viruses cause 50% of bacterial mortality, bacterial respiration rises by 27% and export to protozoa falls by 37%, ultimately reducing microzooplankton productivity by 20% [6]. Another effect of the viral loop is the production of large amounts of dissolved organic matter, such as monomers, oligomers and polymers, colloids and cell debris. This matters greatly for retaining limiting nutrients (such as N, P and Fe) in the euphotic zone for use by microorganisms, which is especially important in the oligotrophic ocean. The viral loop also plays an important role in producing the biogenic climate gas DMS [5]. (Figure 2: The viral loop in relation to the microbial loop and the main food chain [5]; the red portion marks the main links of the viral loop and its carbon-flow paths.) The unsolved mysteries of marine viruses. Despite rapid progress, our understanding of marine viruses is still at an initial stage, and many mysteries remain. Understanding of the basic ecological characteristics of marine viruses is still not settled.
Owing to the limitations of research methods, the abundance, diversity, and spatiotemporal distribution of marine viral populations, and the factors controlling them, have yet to be established for the global ocean system. The place of viruses in the marine microbial community is unclear: the relations between viruses and the ecological characteristics of their hosts (the effects of viruses on bacterial abundance, productivity, diversity, and so on), and the processes by which they co-evolve, still require deeper study. The influence of viruses on marine material and energy cycles is not well understood: virus-mediated recycling of carbon, nitrogen, sulfur, phosphorus and other elements, and its impact on and response to global change, are only beginning to be explored, and many hypotheses and theories await verification and clarification. The study of these questions is of great significance for a complete understanding of Earth's ecosystems and their changes.", "Introduction: Archaea constitute one of the three domains of life and include two main branches, Crenarchaeota and Euryarchaeota. Archaea were originally thought to exist only in extreme environments such as submarine hydrothermal vents and terrestrial hot springs. Existing data clearly show that non-thermophilic Crenarchaeota are widely distributed in all kinds of normal-temperature marine and terrestrial environments and play important roles in carbon fixation and nitrification. Researchers have also discovered a new class of Euryarchaeota capable of anaerobic methane oxidation, with profound implications for the methane cycle and global climate change. Meanwhile, the rapid development of liquid chromatography-mass spectrometry (LC-MS) has enabled rapid detection of archaeal lipids, especially macromolecular compounds such as the glycerol dialkyl glycerol tetraethers (GDGTs). The study of archaeal biomarkers provides new insight into the distribution, function and evolution of archaea and has advanced their application to paleoclimate; the integration of molecular microbiology and lipid studies is one of the driving forces of the current round of archaeal ecological and geochemical research. Molecular ecology of marine archaea. Using culture-independent molecular techniques, two seminal papers in the early 1990s found that non-thermophilic archaea (Crenarchaeota and Euryarchaeota) exist in the open sea and in coastal waters [1, 2]. In general, Group I Crenarchaeota and Group II Euryarchaeota dominate the oceanic water column. Marine sediments harbor, in addition to these two groups, a variety of other archaeal communities, including marine benthic Crenarchaeota, the Miscellaneous Crenarchaeotal Group, and an unidentified Euryarchaeota group. Anaerobic methane-oxidizing archaea (ANME) are also widely distributed in environments associated with methane hydrates, cold seeps and organic-rich sediments, as well as in anoxic water bodies such as the Black Sea and the Cariaco Basin. Discovery and diversity of non-thermophilic archaea in the water column and seafloor sediments. The first 16S rRNA gene surveys of marine planktonic archaea detected some species closely related to extremely thermophilic archaea [1, 2]. At first this could not be confirmed, because the archaea could have come from hydrothermal vents.
But DeLong and collaborators soon found high abundances of archaea in icy Arctic waters, and they also found symbiotic Crenarchaeota in cold-water sponges. These studies unequivocally established that the newly discovered archaea are native to low-temperature marine environments. Since then, a large number of new archaeal types have been detected in almost every setting of the marine biosphere, from surface waters of the Pacific and Atlantic to the deep sea and deep seafloor sediments. As far as is known, the vast majority of planktonic archaea belong to Group I.1A Crenarchaeota and Group II Euryarchaeota: Group I.1A Crenarchaeota occur mainly in deep seawater (>200 m), while Group II Euryarchaeota appear mainly in the photic zone (<200 m) but are also found at depth. Non-thermophilic Euryarchaeota in seafloor sediments are found mainly in cold seeps and methane hydrate-associated environments, where methane serves as the energy source sustaining anaerobic oxidation of methane (AOM) and sulfate reduction. Under diffusive conditions in deep-sea sediments, where methane and sulfate fluxes are low, AOM occurs in the sulfate-methane transition zone (SMTZ). In cold seep and methane hydrate environments the archaeal community is dominated by the still-uncultured ANME groups of Euryarchaeota, which oxidize methane anaerobically. Specifically, ANME-1 and ANME-2 are the dominant anaerobic methanotroph groups, although their relative distributions vary widely among sites: ANME-1 dominates in the Black Sea, ANME-2 at Hydrate Ridge, and both are prevalent in the Gulf of Mexico. In the diffusive setting beneath the highly productive surface waters off the Peruvian coast, sediment archaeal communities consist mainly of Marine Benthic Group B and the Miscellaneous Crenarchaeotal Group, with no ANME-1 or ANME-2 detected. In some other deep-sea sediments without cold seeps or methane hydrates, non-thermophilic Crenarchaeota and non-ANME Euryarchaeota likewise dominate. Abundance of non-thermophilic archaea in the water column and seafloor sediments. The discovery of large numbers of non-thermophilic archaea in the open sea further underlines their importance in marine ecosystems. An early assessment of planktonic archaea in temperate coastal and polar seawater showed that non-thermophilic archaea make up 10%-30% of total prokaryotes. In 2001, a large-scale study in the Pacific found that archaea occur mainly in mesopelagic waters of that region [3]: monthly sampling over 12 months showed that planktonic Crenarchaeota constitute most of the picoplankton below 150 m, their relative abundance increasing with depth (Figure 1) to about 40% of total picoplankton [3]. That study estimated the global ocean to hold about 1.3×10^28 archaeal cells, of the same order as the total number of bacterial cells (3.1×10^28) [3]. Another comprehensive study, in the Atlantic, used an improved catalyzed reporter deposition fluorescence in situ hybridization (CARD-FISH) method with specific oligonucleotide probes, and found archaea to be abundant below 100 m depth.
(Figure 1: Annual average abundances of Crenarchaeota, Euryarchaeota and bacteria as a function of depth [3].) Below this depth the archaea were consistently more abundant than the bacteria [4]; archaea account for 13%-27% of total prokaryotic production in the oxygen-minimum layer of the North Atlantic, 41%-84% in the Labrador Sea, and 10%-20% in North Atlantic deep water [4]. Assessing archaea in marine sediments is often complicated by their non-uniform distribution in porous media. Geochemical indicators show that 90% of the methane produced in seafloor sediments is consumed before it reaches the water column, implying that methane-oxidizing archaea are abundant in seafloor sediments and play an important role in limiting the flux of methane to the atmosphere. Other studies estimate that archaea make up between 0.01% and 30% of total biomass in seafloor sediments, with the higher proportions typically found near the sediment surface. Metabolic pathways of non-thermophilic archaea in the water column and seafloor sediments. Thanks to environmental genomics and to genome analysis of the sponge symbiont Cenarchaeum symbiosum, researchers have begun to probe the physiology and biochemistry of non-thermophilic archaea, and the newly isolated Nitrosopumilus maritimus has opened a new chapter in the physiology of non-thermophilic Crenarchaeota [5]. A large body of evidence supports the view that some non-thermophilic Crenarchaeota obtain energy for autotrophic growth by oxidizing ammonia. Several molecular and geochemical studies indicate that non-thermophilic Crenarchaeota can use amino acids and organic carbon, implying that some are heterotrophic or facultative. The ANME Euryarchaeota, on the other hand, are thought to oxidize methane by a reverse methanogenesis pathway; ANME possess almost all the genes associated with methanogenesis, strongly supporting this hypothesis. Contribution of non-thermophilic archaea to the deep-sea carbon and nitrogen cycles. Studies show that deep-sea non-thermophilic archaea are mainly chemoautotrophic, probably using ammonia as their main energy source. Evidence from leucine incorporation reported by Herndl et al. [4] showed that active archaea in the oxygenated water column use bicarbonate or CO2 as a carbon source and dominate the prokaryotic community of North Atlantic waters at 100-2790 m. Ingalls et al. [6] used natural radiocarbon (14C) to quantify the autotrophic archaeal community in the North Pacific subtropical gyre: an isotopic mass-balance model indicates that 83% of the archaea autotrophically fix 14C-depleted dissolved inorganic carbon in the deep sea, while the remainder heterotrophically consume 14C-rich modern organic carbon. From the depth-integrated average carbon fixation rate (0.014 fmol of carbon per archaeal cell per day) and the global number of marine archaeal cells (1.3×10^28), Herndl et al. [4] calculated a global inorganic carbon fixation rate of 6.55×10^13 mol C per year.
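The scaling behind this global number, and the nitrification-based cross-check quoted next, is simple enough to verify directly (values taken from the text):

```python
# Global archaeal inorganic carbon fixation, scaled up from per-cell rates.
CELLS = 1.3e28            # global marine archaeal cells (from the text)
RATE = 0.014e-15          # mol C fixed per cell per day (0.014 fmol)

per_year = CELLS * RATE * 365.0
print(f"direct scaling:      {per_year:.2e} mol C/yr")   # ~6.6e13, cf. 6.55e13

# Nitrification-based cross-check: assuming ~10 mol NH3 oxidized per mol C
# fixed, the quoted estimate of 3.3e13 mol C/yr implies this ammonia flux:
NH3_OXIDIZED = 3.3e14     # mol NH3/yr implied by the quoted estimate
print(f"nitrification-based: {NH3_OXIDIZED / 10:.1e} mol C/yr")
```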
This global carbon fixation rate is consistent with an estimate based on archaeal nitrification (3.3×10¹³ mol C/yr, assuming that ammonia is oxidized entirely by crenarchaea and that 10 molecules of ammonia are oxidized for each carbon atom fixed). The consistency of these estimates indicates that archaeal nitrification coupled with inorganic carbon fixation can significantly affect the biogeochemical cycling of carbon and nitrogen in the global ocean. Methane production by methanogens in the ocean is also a notable CO2 sink, but their carbon fixation is two orders of magnitude lower than that of global marine crenarchaea (8×10¹³ mol C/yr) [4]. In seafloor sediments, methanogens are the only biological methane source below the sulfate-reducing zone; within the sulfate-reducing zone, methane-oxidizing archaea cooperate with sulfate-reducing bacteria to consume methane. Although heterotrophic crenarchaea can also metabolize methane, their relative contribution to methane consumption is currently unknown. In both cases, most of the CO2 produced by the anaerobic oxidation of methane is converted into carbonate minerals, which either make up a major part of the seafloor sediments or form carbonate mounds on the ocean floor; this phenomenon is common in the Gulf of Mexico and other methane-rich seafloor settings. Globally, very large quantities of methane (>10 trillion tons) are stored in seafloor sediments or hydrate-bearing mounds, making methane hydrate a very attractive alternative energy source after oil and coal. Archaeal lipids. Ether-bonded lipids (diethers and tetraethers) with isoprenoid chains are considered the most characteristic biomarkers of the domain Archaea. Detailed studies of the lipids of methanogens have shown that lipid characteristics can reflect archaeal phylogenetic relationships and can be applied to taxonomic and ecological studies. Methanogens and halophiles among the Euryarchaeota mainly synthesize diethers, such as archaeol and sn-2-hydroxyarchaeol, while crenarchaea (such as Desulfurococcus and Sulfolobus) mainly synthesize glycerol dialkyl glycerol tetraethers (GDGTs). Early studies of archaeal lipids were somewhat limited by the lengthy chemical preparation steps required for gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). The recently developed liquid chromatography-mass spectrometry (LC-MS) method can rapidly detect archaeal lipids, especially GDGT macromolecules, from a variety of environmental samples [7]. The distribution of GDGTs in environmental samples is diverse: the most important are GDGT molecules with 0-4 five-membered rings, and GDGT molecules with 4-6 five-membered rings sometimes occur (Figure 2). Crenarchaeol is a unique GDGT with four five-membered rings and one six-membered ring (Figure 2); it is considered a marker of planktonic crenarchaea in the open ocean and in seafloor sediments. Recently, relatively high levels of crenarchaeol were found for the first time in samples from terrestrial hot springs, and crenarchaeol was subsequently found in environmental samples and enrichment cultures of thermophilic ammonia-oxidizing archaea over a wide temperature range [8]. Existing crenarchaeol data come from biomes spanning 10°C to 87°C, a broad distribution. These data suggest that crenarchaeol has an evolutionary history longer and more complex than its distribution in the modern ocean would imply.
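Returning briefly to the nitrification-based cross-check at the start of this passage, the stated assumption (10 ammonia molecules oxidized per carbon atom fixed) can be made explicit with one line of arithmetic (a sketch only; all values are those quoted above):

    # Implied ammonia oxidation behind the nitrification-based estimate.
    carbon_fixed = 3.3e13          # mol C/yr fixed via archaeal nitrification
    nh3_per_c = 10                 # NH3 molecules oxidized per C atom fixed
    print(f"{carbon_fixed * nh3_per_c:.1e} mol NH3 oxidized per year")  # 3.3e14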
Figure 2 Representative GDGT structures. GDGT compounds are also used to construct paleotemperature proxies, such as TEX86 (the tetraether index of tetraethers with 86 carbon atoms), which is applicable to ocean and lake surface waters [9]. In addition, the GDGT composition of methane hydrate samples differs significantly from that of non-hydrate samples, suggesting that changes in the archaeal community may be driven by hydrates [10]. Specifically, GDGT-1, GDGT-2 and GDGT-3, bearing one to three five-membered rings, are significantly enriched in hydrate-associated or methane-rich samples, and these lipid differences between methane hydrate samples and normal marine samples are consistent with the changes in 16S rRNA genes [10]. Gene libraries from methane hydrate-associated samples show that ANME-1 is the main branch of anaerobic methane-oxidizing archaea able to synthesize tetraethers, so they likely contributed to the high content of archaeal lipids in hydrate samples. These studies show that archaeal phenotype and phylotype are consistent, reflecting the response of the archaeal community to methane hydrates in the marine environment. The study of archaeal lipids therefore has biogeochemical, ecological and paleoclimatic significance. Prospects for archaea research. It is now known that archaea are ubiquitous in nearly every imaginable niche on Earth and are significantly involved in the global carbon cycle and energy metabolism. These advances have largely benefited from culture-independent molecular techniques. Archaea have unique and stable lipid markers that can serve as molecular fossils in paleoecological and paleoclimate studies. Despite exciting discoveries and growing recognition of the importance of archaea, we still lack a fundamental understanding of their lineage history, physiology, biochemistry, and ecological function. From a geochemical perspective, little has been done to investigate how archaeal lipids are preserved over geological history. These are productive areas of research, and more effort is needed to better understand carbon fixation and energy metabolism of archaea in natural environments. In particular, research on thermophilic archaea can help us understand the coevolution of archaeal functions, such as crenarchaeol production and archaeal ammonia oxidation.", "The marine environment is rich in microbial populations with astonishing diversity and evolutionary rates, and our understanding of their genetic diversity is only at an exploratory stage. At the same time, because of their rapid growth, short generation times and evolutionary diversity, gene exchange within and among microbial populations makes the ocean the largest natural genetic engineering laboratory in the world. Microbial gene transfer. The main modes of horizontal gene transfer among microorganisms are transformation, conjugation and transduction [1]. Transformation was the first recognized mode of microbial gene transfer, and it led directly to a major discovery that pioneered modern molecular biology: DNA is the genetic material. Transformation refers to the direct uptake by a recipient bacterium of DNA fragments from a donor bacterium, without the intervention of any vector, so that new traits are expressed; it is considered to have been the main way of acquiring exogenous genes in the early stages of biological evolution. Transformation is also one of the most commonly used techniques for gene manipulation (cloning and expression) in modern molecular biology.
Conjugation uses bacterial plasmids or transposons as an intermediate medium to transfer genetic material from donor to recipient bacteria through cell-to-cell contact. Precisely because of the nature of this intermediate medium, most conjugation-mediated gene transfers involve genes encoding auxiliary functions, such as antibiotic resistance, ultraviolet resistance, and heavy metal resistance. Transduction uses a phage (bacterial virus) as a carrier to transfer a piece of DNA from a donor bacterium to a recipient bacterium, so that the recipient acquires new traits [2]. Microbial gene transfer in the marine environment. Compared with terrestrial and sediment environments, the ocean contains large amounts of dissolved high-molecular-weight DNA and vast numbers of microbial cells, which makes transformation one of the important modes of gene transfer in marine ecosystems. Experiments in the early 1990s confirmed this, and subsequent research found that nutrient levels and temperature in seawater can affect transformation efficiency. Compared with transformation and transduction, conjugation places the fewest restrictions on the relatedness of donor and recipient bacteria and is therefore considered the most promiscuous mode of microbial gene transfer. In natural environments, most studies of conjugation-mediated gene transfer have concentrated on soils, animals, and the like, with few comparable studies in the marine environment. Virus particles are extremely numerous in the marine environment, far outnumbering bacteria and other hosts, so transduction is also considered one of the main modes of gene transfer in the sea [3]. Among marine viruses, the myoviruses have a wide host range and are the most common viral type in the marine environment, and they undoubtedly play an important role in marine microbial gene transfer. In addition to these main modes, other mechanisms have been discovered, such as gene transfer agents secreted by microorganisms to accomplish gene transfer, which appear to be widespread at least among some marine microbial taxa [4]. Research prospects for microbial gene transfer in the marine environment. At present, most studies of microbial gene transfer have been carried out in pure cultures or model systems of terrestrial microorganisms; although they provide a research framework and preliminary information, they are not fully applicable to marine ecosystems [2], and research from the perspective of marine ecology remains almost blank. The complexity of marine ecosystems entails complexity in the mechanisms of marine microbial gene transfer; in marine environments, for example, viruses intertwine the three main modes of gene transfer by lysing hosts that carry conjugative plasmids. Therefore, by studying the diversity of the various microorganisms in the marine environment (bacteria, archaea, and viruses) and the gene transfer between microbial groups (its modes, frequencies, mechanisms, and controlling factors), we can greatly deepen our understanding of
the nature of global genetic diversity and of global matter and energy exchange mechanisms, and help to solve major scientific problems such as the origin and evolution of life and the interaction between life processes and the environment.", "The so-called deep sea mainly refers to the water below 1000 m depth, characterized by the absence of light, high pressure and low temperature. For a long time, the dark deep ocean was generally considered to harbor almost negligible biological metabolic activity because of its harsh environmental conditions [1]. However, later studies showed that this huge space covering two-thirds of our planet is filled with diverse microorganisms carrying out all kinds of metabolic processes [2], and is the main site of organic matter mineralization in the ocean. Understanding the function of deep-sea microbial communities is therefore critical to understanding global biogeochemical cycles. The deep-sea microbial loop and the characteristics of deep-sea microorganisms. Lacking photosynthetic phytoplankton as the primary food source, the deep ocean has a food web whose structure and function differ completely from those of the upper, sunlit ocean. The deep-sea food web is relatively simple, and the function normally carried by phytoplankton is compensated by the metabolism of deep-sea prokaryotic autotrophs. At the same time, because organic carbon availability declines in the deep sea, the whole microbial community is regulated mainly by the availability of the organic carbon delivered from above. Adapted to the deep-sea environment and to this microbial loop, deep-sea prokaryotic communities exhibit characteristics different from those of the euphotic zone: ① single cells have a higher nucleic acid content, implying that deep-sea cells have much larger genomes than surface cells, which in turn suggests that deep-sea microorganisms are adapted to an "opportunistic" mode of life [3]; ② most deep-sea prokaryotes lack genes encoding photolyase; ③ compared with the upper water column, more genes associated with a surface-attached lifestyle are detected in the deep sea, indicating that the metabolic activity of deep-sea prokaryotes may be tied to particulate matter, an important carrier on which their metabolism depends [4]. Characteristics of deep-sea organic matter and microbial metabolism. The deep-sea dissolved organic carbon (DOC) pool derives mainly from biological processes in the upper ocean; organic carbon is passed downward through the vertical transport of particulate matter, the migration of plankton, and the sinking and diffusion of water masses [5]. As depth increases, microbial remineralization of organic matter continuously depletes the nitrogen (DON) and phosphorus (DOP) in dissolved organic matter (DOM), leading to a marked increase in the deep-sea DOC:DON:DOP ratio [6]. Thus the products of microbial remineralization and degradation of organic matter, carbon-rich dissolved organic molecules of low molecular weight, constitute the deep-sea DOM pool. This characteristic makes deep-sea organic carbon of low lability, so deep-sea prokaryotes grow relatively slowly compared with surface organisms [7].
However, the unique deep-sea environment has fostered unique adaptations in deep-sea microorganisms. Alongside the low growth rates of deep-sea prokaryotes, higher single-cell extracellular enzyme activities have been detected [8], just the opposite of the pattern in surface waters. For example, single-cell alkaline phosphatase activity in the deep sea is significantly higher than in surface water. This seems puzzling, because deep water has high phosphate concentrations and the metabolism of deep-sea microorganisms should not be phosphate-limited; studies suggest, however, that the high alkaline phosphatase activity in the deep sea is precisely a strategy for obtaining organic carbon from low-lability organic matter, not for obtaining phosphate [9]. Evidently the ability of deep-sea microorganisms to utilize low-lability organic matter is markedly higher than that of microorganisms in the upper ocean. Existing studies have also confirmed that the uptake ratio of D-amino acids (low lability) to L-amino acids (high lability) by prokaryotic communities increases significantly with depth [10, 11], and deep-sea crenarchaeal taxa are thought to be the main contributors. Compared with the low lability of deep-sea dissolved organic matter, the colloidal and particulate organic matter (POM) that sinks from the euphotic zone into the deep-sea interior at various rates is relatively labile. Deep-sea POM is therefore crucial to the metabolism and distribution patterns of deep-sea microorganisms. Existing studies have likewise confirmed that deep-sea prokaryotes favor a surface-attached mode of life and are distributed heterogeneously around "hotspots" in the deep sea [5]. However, because detrital particles decompose readily and are distributed patchily, they remain difficult to study accurately with current sampling techniques. Metabolic pathways of deep-sea crenarchaea. Prokaryotic communities are markedly stratified in the vertical oceanic water column. As the contribution of Bacteria to total prokaryotic abundance gradually decreases with depth, the abundance of Archaea, especially Crenarchaeota, gradually increases, making them among the most abundant microorganisms in the deep sea; their total may account for one-third of all prokaryotes in the global ocean [12]. They therefore play an important role in the biogeochemical cycling of marine biogenic elements. However, the metabolism and functional roles of deep-sea crenarchaea are still debated. Experimental studies of a cultivated crenarchaeal strain revealed that a major group of crenarchaea plays a central role in the oceanic nitrogen cycle, performing the critical first step in the conversion of ammonia-nitrogen to nitrate-nitrogen (ammonia oxidation). This finding was quite surprising, since ammonia oxidation had been attributed to specific groups of bacteria for over a century. These crenarchaea also use the energy provided by ammonia oxidation to fix inorganic carbon for autotrophic growth, so they likewise occupy a non-negligible position in the ocean carbon cycle. However, further studies of functional genes in natural waters have revealed the diversity and complexity of the processes involved.
Because ammonia concentrations in the deep sea are relatively low, ammonia oxidation there may not be an important energy source [13], suggesting that the metabolism of deep-sea Crenarchaeota may be dominated by heterotrophy. Nevertheless, the ability of crenarchaea to fix CO2 in oxygenated deep-sea water still highlights their important position in the deep-sea carbon cycle. Studies have estimated "dark CO2 fixation" in deep seawater at about 1 mmol C/(m²·d) [10]. This is clearly a non-negligible supplement to the deep-sea organic carbon pool and a source of new deep-sea organic matter production; this "dark CO2 fixation" is therefore also regarded as the "primary productivity" of the dark ocean. But the ability and extent of this "primary productivity" to drive the deep-sea food web remain to be studied. Prospects. Recent studies have shown that deep-sea prokaryotic biomass accounts for 75% of the global ocean total, and deep-sea prokaryotic productivity for 50% (Figure 1). Understanding the function of deep-sea microbial communities is therefore crucial to understanding marine biogeochemical cycles, resource and environmental issues, and even global change. Figure 1 (a) Prokaryotic biomass and (b) prokaryotic heterotrophic productivity integrated over three vertical water-column sections (±SE), and the percentage of each section's integrated value relative to the whole water column [5,14]. More extensive deep-sea sampling surveys and the application of new technologies, such as metagenomics, metatranscriptomics and gene-chip methods, will reveal more of the unique features and metabolic pathways of prokaryotes, giving us a new understanding of the function of deep-sea planktonic prokaryotic communities and their interactions (for example with protists and viruses). Finally, the contradiction between geochemical evidence and measured rates of organic matter mineralization in the ocean should be resolved [8]; the biogeographic distribution of microbial communities should be better connected with the cycling of major elements in the ocean, so as to reach a mechanistic understanding of the microbial loop in different ocean regions; and the impact of deep-sea microbial processes on the ocean carbon cycle and global change should be better understood.", "Introduction. Marine microorganisms are not only the most abundant natural resource on Earth that has not yet been effectively exploited, but also a key component of global change and material cycles [1]. Over the past 25 years, scientists have devoted great manpower and material resources to studying the species, abundance and distribution of marine microorganisms. Marine microorganisms are extremely diverse and their biomass is enormous; a single milliliter of seawater can contain more than a million microbial cells. They play an important role in marine food chains and biogeochemical cycles. As research on marine microorganisms deepens, new questions keep arising: under different environmental gradients, which microbial groups are dominant?
How do the metabolic characteristics of microorganisms respond to changes in external environmental conditions and in community composition? What roles do microorganisms of different functional groups play in the cycling of marine biogenic elements? The first approach and most direct means of understanding microorganisms is to isolate and cultivate pure strains and conduct experimental research on that basis. However, owing to limited methods, less than 1% of marine microorganisms can currently be cultured, and isolation and culture technology has become a bottleneck for basic research in marine science and for resource development [1]. Therefore, adopting new research and analysis methods, such as new isolation and culture techniques, culture-independent molecular biological analyses, and the currently very active field of environmental genomics, will be the key to unraveling the mysteries of marine microbial diversity and understanding the structure and function of marine ecosystems. Research on marine microorganisms based on isolation and culture. Direct isolation and culture enriches marine microorganisms using media combining different nutrient components (carbon, nitrogen, sulfur, phosphorus, vitamins, trace elements, etc.), and is an important means of studying and describing the characteristics of marine microorganisms. For example, comparative studies of different ecotype isolates of Prochlorococcus, one of the most important primary producers in the ocean, established how Prochlorococcus responds to environmental gradients [2]. However, because our knowledge of marine microorganisms is limited, we are not yet able to formulate suitable media and simulate all the environmental factors of natural seawater for isolating and cultivating every marine microorganism. Traditional plate-based isolation can cultivate only 0.001%-0.1% of the total bacteria in seawater [3], so the limitations of culture technology are a bottleneck for basic marine science and resource development. In recent years, newly developed isolation and culture methods have played an important role in isolating several important groups of marine microorganisms. One example is a high-throughput isolation and culture method based on dilution to extinction: guided by cell counts in the source seawater, the sample is diluted with sterilized in-situ seawater to 1-5 cells per milliliter, cultured in multiwell plates for a period of time, and the wells showing positive growth are then detected microscopically. With this method, scientists have obtained about 2,500 bacterial strains, including members of phylogenetic groups such as SAR11, OM43, SAR92 and OM60/OM241 that cannot be obtained by traditional isolation and culture. The proportion of microorganisms recovered by this method can reach 14%, roughly 14 to 1,400 times higher than traditional plate-based isolation [4]. In addition, high-throughput screening combining single-cell encapsulation with flow cytometric sorting has also achieved good results in the isolation and culture of marine microorganisms.
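Before turning to the encapsulation approach in detail, the dilution-to-extinction arithmetic described above can be sketched in a few lines (a minimal sketch; the source density of one million cells per milliliter is the typical seawater value cited elsewhere in this article, and the aliquot volume is an assumed illustrative number):

    # Dilution-to-extinction: dilute source seawater with sterilized in-situ
    # seawater until each well of the multiwell plate receives ~1 cell.
    source_density = 1e6      # cells per mL, typical seawater
    target_density = 1.0      # cells per mL (the method uses 1-5 cells/mL)
    aliquot_volume = 1.0      # mL dispensed per well (assumed)

    dilution_factor = source_density / target_density
    cells_per_well = target_density * aliquot_volume
    print(f"dilute {dilution_factor:.0e}-fold -> ~{cells_per_well:.0f} cell per well")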
In this screening method, seawater with a known cell concentration (about one million cells per milliliter) is mixed with agarose and an emulsion, and emulsification produces gel microdroplets, of which roughly 10% encapsulate a single cell; unencapsulated cells are removed, and the encapsulated cells are cultivated. Gel microdroplets containing proliferating cells are then sorted into 96-well plates by flow cytometry to obtain pure cultures. By this method, various groups of marine bacteria have been isolated, including α-, β-, γ- and δ-Proteobacteria, Planctomycetes and the Cytophaga-Flavobacterium-Bacteroides (CFB) group, among other important marine microbial taxa [5]. The isolation and cultivation of pure microbial strains provides model organisms for marine microbiology, but it also has many shortcomings: isolation and culture cannot describe the overall role of microorganisms in ecological processes; artificial culture conditions cannot reproduce the subtle variations of the many parameters of natural seawater; the mode of action of microorganisms in geochemical cycles cannot be studied in pure culture; and, most importantly, most marine microorganisms cannot currently be cultivated at all. Therefore, the development of culture-independent studies of microbial genetic diversity and metagenomics provides a reliable technical basis for deeply revealing the community dynamics of marine microorganisms and their roles in marine biogeochemical cycles. Studies of microbial community diversity based on 16S rRNA gene sequences. Microbial communities are complex and highly diverse in composition, and their composition changes with environmental factors, so traditional isolation and culture methods have serious shortcomings for studying community diversity. At present, marine microbial ecology commonly relies on sequence analysis of the gene encoding the small ribosomal subunit, the 16S rRNA gene. The 16S rRNA gene sequence is highly conserved across microbial species; it is one of the most commonly used genetic markers for prokaryotes and the biological basis for understanding the temporal and spatial distribution of microorganisms. After the total community DNA is extracted, the 16S rRNA genes carrying the genetic information of all microorganisms in the sample are amplified with specific primers and then analyzed by different methods according to the purpose of the study, for example: ① cloning and sequencing of the amplified 16S rRNA genes [6]; ② fingerprinting techniques, such as restriction fragment length polymorphism (RFLP) or terminal restriction fragment length polymorphism (T-RFLP) analysis and denaturing gradient gel electrophoresis (DGGE) or temperature gradient gel electrophoresis (TGGE), to compare and identify the amplified 16S rRNA gene fragments [1] (Figure 1). DNA sequence-based studies of community diversity make it possible to identify uncultivable and low-abundance groups and can reflect community composition comprehensively, providing a basis for exploring the patterns of community composition under different environmental conditions
and the interactions between different microbial taxa. Figure 1 The main technical roadmap for studying the diversity of marine microbial communities. Analysis based on 16S rRNA gene sequences provides a powerful tool for revealing "unculturable" microbial groups, but it cannot by itself provide further information such as physiological and metabolic mechanisms, environmental adaptation mechanisms, or evolutionary mechanisms. Metagenomics research. Metagenomics is a concept first proposed by Handelsman et al. in 1998, referring to the study of all the genes recovered from an environment with methods analogous to the analysis of individual genomes [7]. Microbial metagenomics applies modern genetic analysis techniques directly to microbial populations in the environment, without relying on isolation and culture. Specifically, metagenomics is a new approach to microbial research that takes the collective genomes of the microbial populations in environmental samples as its object of study, uses functional gene screening and sequencing analysis as its methods, and takes the cooperative relationships among microorganisms and their connections with the environment as its goal. The workflow of a metagenomics study generally includes: ① extracting microbial genomic DNA from environmental samples; ② cloning the DNA into a suitable vector, introducing it into host cells, and constructing a genomic library; ③ screening for target transformants and performing gene sequencing and functional analysis (Figure 2). Figure 2 Schematic diagram of the research route of environmental metagenomics. Through metagenomic analysis, not only can the composition of the microbial communities in environmental samples be obtained, but probes can also be used to find functional genes of interest or to explore unknown genes. This technology has greatly advanced our understanding of marine microbial diversity, interactions among microbial populations, microbial evolutionary history, and mechanisms of environmental adaptation. The application of metagenomics keeps bringing researchers new "surprises". One example is the discovery of proteorhodopsin, a light-driven proton pump, in seawater [8]: by studying microorganisms in different water bodies, researchers found that proteorhodopsins are widely distributed and confirmed that they are an important driving mechanism of energy flow in the ocean's photic layer [9]. Another important discovery is archaeal ammonia oxidation. Bacteria were long considered the main agents of aerobic ammonia oxidation, yet in many habitats the biomass of ammonia-oxidizing bacteria does not match the magnitude of ammonia oxidation observed. A gene encoding ammonia monooxygenase was found adjacent to an archaeal 16S rRNA gene in a metagenomic study [10], leading to the conclusion that archaea are the main performers of ammonia oxidation in marine ecosystems [11]. Metagenomics can provide a true and accurate analysis of the structure of environmental microbial communities and information on their potential functions. In principle, any environmental sample from which DNA can be extracted can be analyzed by metagenomics (Fig. 3) [12].
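As one concrete example of how the community data produced by the 16S rRNA gene surveys described above are commonly summarized, the sketch below computes the Shannon diversity index H' = -Σ p_i ln(p_i) from OTU counts (the index is a standard ecological measure; the counts here are invented purely for illustration):

    import math

    # Hypothetical OTU (operational taxonomic unit) counts from a 16S rRNA
    # gene clone library or fingerprinting survey; values are illustrative only.
    otu_counts = [120, 45, 30, 8, 3, 2, 1, 1]
    total = sum(otu_counts)

    # Shannon index: H' = -sum(p_i * ln(p_i)) over relative abundances p_i.
    shannon = -sum((n / total) * math.log(n / total) for n in otu_counts)
    print(f"H' = {shannon:.2f}")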
Figure 3 Metagenomic projects from 2002 to 2008; font colors indicate the sequencing technology used: shotgun sequencing (black), fosmid library sequencing (red), and pyrosequencing (green). The marine virus metagenomes (August 2006) are from samples from the Sargasso Sea, the Gulf of Mexico, and the Arctic Ocean off the coast of British Columbia; the nine microbiome metagenomes include fish gut, fish pond, mosquito virus, human pneumovirus, chick gut and marine virus projects. Metagenomics does not depend on culture techniques, and its application can give a blueprint of the composition and function of microorganisms in different habitats. Analysis of the large amount of genetic information obtained from metagenomic studies of different environmental samples can yield a comprehensive understanding of the distribution and function of marine microorganisms. Metagenomics is therefore considered an effective means of comprehensively understanding marine microorganisms and will play an increasingly important role in future research. Looking forward, humanity is in a period of severe and rapid climate change. Climate change driven by human activities since the Industrial Revolution has caused effects such as rising temperatures, seawater acidification, and intensified ocean stratification; on the other hand, the ocean redistributes heat, affecting the global climate. Microorganisms are key agents of the energy and carbon flows of the ocean, so studying the impact of climate change on marine microbial metabolism across spatial scales (μm to km), and the resulting feedbacks, is a major challenge for scientific research [13]. Metagenomic approaches can comprehensively reflect the responses of functional microorganisms to changes in the marine environment; by building models of the response of marine microorganisms to climate change, they can provide sound support for predicting changes in the marine environment.", "Background. Throughout the history of using the ocean and developing marine resources, human beings have faced the problem of marine fouling organisms. Marine fouling organisms are the marine microorganisms, plants and animals that attach to and grow on the surfaces of ships and man-made marine structures and adversely affect human economic activities. Common marine fouling groups include marine benthic bacteria, benthic diatoms, macroalgae, sponges, coelenterates, bryozoans, snails, bivalves, barnacles, and sea squirts. Massive attachment and growth of these organisms increases sailing resistance, reduces speed and increases fuel consumption; blocks aquaculture cages, net pens, purse seines and fixed fishing nets as well as seawater intake pipelines; causes failures of instruments and rotating machinery at sea; impairs the normal use of acoustic instruments, buoys, nets, valves and other marine facilities; increases the burden on oil and gas platforms; and accelerates the corrosion of the metal of ships and offshore structures [1]. Marine biofouling thus causes serious harm to marine engineering, marine transportation, mariculture and naval equipment, and the resulting economic losses are enormous.
It is estimated that at the beginning of the 20th century the annual loss caused by marine biofouling in the US shipping industry alone reached 100 million US dollars, and by the end of the 20th century this figure had grown by an order of magnitude [2]. The US Navy alone loses as much as 1 billion dollars a year to marine biofouling [3]. Human beings have been fighting marine fouling organisms for more than 2,000 years and have developed a variety of antifouling methods, including manual or mechanical removal, seawater filtration, ultrasound, impressed current, radioactive materials, ultraviolet radiation, chlorine gas, electrolysis of seawater, and marine antifouling coatings [1]. However, many of these methods have limited antifouling effect or a limited range of application and are difficult to popularize. So far, the most economical, effective and widely used antifouling method is to apply marine antifouling paint to the hull or to man-made marine structures. Marine antifouling coatings rely on antifouling agents that leach from the coating, or on a specially formed surface, to kill or repel marine fouling organisms, preventing their attachment and growth and thereby achieving the antifouling purpose; the antifouling agent is the core component of the coating. Before the 1970s, marine antifouling agents were mainly heavy-metal compounds based on copper, lead, zinc, mercury, arsenic and the like; from the 1970s onward, organotin self-polishing antifouling coatings were introduced [1,4]. Organotin compounds were widely used because of their efficiency and broad-spectrum activity against marine fouling organisms. However, with the extensive use of organotin in marine antifouling coatings, its pollution of the marine environment gradually attracted attention. Organotin proved not only highly toxic to marine organisms but also prone to bioaccumulation and slow to degrade, seriously affecting marine ecosystems; entering the human body through the food chain, it adversely affects human sex hormones and lymphocytes. It is one of the most toxic substances ever introduced into the marine environment. For this reason, in 2001 the International Maritime Organization adopted a resolution setting deadlines for the use of organotin antifouling agents, requiring that the application of paints containing organotin compounds to ships be banned globally from January 1, 2003, and that from January 1, 2008, ships in operation no longer bear such paints [1]. Subsequently, cuprous oxide became the dominant marine antifouling agent on the market. However, cuprous oxide can also seriously endanger marine ecology, and some European countries have begun to prohibit or restrict the entry of ships painted with cuprous oxide antifouling paint. "China's Ocean Agenda 21", formulated by the State Oceanic Administration of China, also explicitly calls for the development of pollution-free marine anticorrosion and antifouling technologies.
Therefore, growing environmental awareness and market demand have prompted the world's major coastal countries to invest heavily in the search for low-toxicity or non-toxic, environmentally friendly technologies for controlling marine fouling organisms, and their research and development has become one of the major technical problems urgently awaiting solution in marine science and technology today. Environmentally friendly marine antifouling technology and its difficulties. With the implementation of the ban on organotin antifouling agents and increasing international concern over environmental issues, there has been a worldwide surge in research and development of environmentally friendly antifouling technologies. Current research focuses mainly on low-surface-energy antifouling coatings, conductive antifouling coatings, and antifouling coatings containing non-toxic or low-toxicity antifouling agents [5], among which the development of non-toxic or low-toxicity marine antifouling agents is especially active. Low-surface-energy antifouling coatings. These mainly refer to coatings based on fluorocarbon resins and silicone resins. Marine fouling organisms attach to a surface by first secreting mucus, wetting the surface with the mucus, spreading on it, and then adhering. The antifouling mechanism of low-surface-energy coatings is that the low surface free energy makes it difficult for mucus to wet, spread and disperse on the surface, so fouling organisms attach with difficulty; even when they do attach, the adhesion force is low, and they can easily be removed by their own weight, by the shear of water flow during navigation, or by auxiliary cleaning equipment. The low-surface-energy approach is currently the most advanced physical antifouling method, but it faces many difficulties, including poor solvent resistance, high cost, poor effectiveness against algae, and problems of compatibility, drying and recoatability. Conductive antifouling coatings. The basic principle of this technology is to use the conductive coating as an anode and the other parts of the ship bottom in contact with seawater as the cathode, passing a small current to electrolyze seawater and generate hypochlorite ions, which prevent the attachment of marine fouling organisms. This method does not pollute the environment, but it is technically demanding and construction is complicated: the ship bottom must first be coated with an insulating layer, and because the coating contains no active ingredient, the antifouling effect is lost as soon as the power is cut or the coating film is damaged. Antifouling coatings containing non-toxic or low-toxicity antifouling agents release the agents from the coating to kill or repel fouling organisms chemically; this is the most effective antifouling approach, with mature technology, simple processes and wide applicability. Given that toxic antifouling agents such as organotin and cuprous oxide have been banned or restricted one after another, the world's major marine coating companies and research institutes have in recent years devoted themselves to finding new marine antifouling agents.
Some herbicides and fungicides have been developed as new antifouling agents, such as the isothiazolinone Sea-Nine 211, the triazine Irgarol 1051, diuron, chlorothalonil, zinc pyrithione and copper pyrithione. However, studies have shown that these new antifouling agents have significant toxic effects on a variety of non-target marine organisms, and concern is growing about their fate and potential hazards in the marine environment. In the UK, Sea-Nine 211 and Irgarol 1051 have been restricted as marine antifouling agents, and diuron has been banned from such use [6]. To better protect the marine environment, scientists in many countries are working to screen new antifouling agents from natural products. Marine chemical ecology has shown that although marine benthic organisms face the same risk of surface fouling as man-made structures in the sea, many species keep smooth, unfouled surfaces, and their bodies contain substances with antifouling activity that prevent the attachment of fouling organisms. These antifouling actives are natural products that degrade readily in the marine environment and do not endanger marine life, helping to maintain ecological balance, and they are expected to be developed into environmentally friendly marine antifouling agents. This field is very active, and a large number of papers are published every year. A variety of antifouling actives have been extracted from marine benthic organisms such as corals, sponges, sea squirts and seaweeds, including natural compounds such as terpenes, steroids, polyphenols, fatty acids, amino acids, alkaloids, pyrimidines and heterocycles [7]. Screening antifouling agents from natural products is currently considered an important route to pollution-free antifouling technology, but it is still some distance from practical application. The main problems are that natural antifouling products are poorly stable in seawater, are released quickly, and give only a short antifouling period; their structures are usually complex and hard to synthesize artificially; and their content in the source organisms is very low, so supply is limited and costs are high. In addition, the ocean harbors more than 4,000 highly diverse species of fouling organisms, while many of the natural antifouling products screened so far lack broad-spectrum antifouling performance. Prospects. Under the pressure of environmental awareness, regulations and international conventions, marine biofouling control technology has been developing in an environmentally friendly direction for nearly 30 years. The latest research results at home and abroad show that difficulties remain in the development of each of the new antifouling technologies; the trend in this field is toward antifouling technologies that are environmentally friendly, broad-spectrum and efficient, long-lasting, easy to apply, and inexpensive. At present, research on natural-product antifouling agents is receiving more and more attention: studying the relationship between their structure and activity guides further structural modification and chemical synthesis, enabling low-cost production of antifouling agents.
Meanwhile, the use of microencapsulation and related technologies to achieve controlled release of antifouling agents in seawater, improving their stability and effectiveness and prolonging their service life, has become a research hotspot. In general, the research and development of environmentally friendly marine antifouling technology requires the cooperation of experts from many disciplines, including marine biology, organic chemistry, polymer chemistry, materials science and environmental science. With deeper understanding of the adhesion mechanisms of marine fouling organisms, continuing progress in organic chemistry and materials technology, the integration of multiple disciplines, and cooperation between research institutions and industry, research on environmentally friendly marine antifouling technology should gradually achieve breakthroughs, attaining the goal of antifouling without polluting the environment.", "Observations of deep-sea hydrothermal systems. Seafloor hydrothermal vents are a very peculiar geological phenomenon on planet Earth. They are mainly distributed along mid-ocean ridges where submarine volcanism is active, typically the spreading centers of the eastern Pacific, Atlantic, Arctic and Indian oceans, and in the back-arc basins of the western Pacific. These are regions where deep magma upwells and new crust forms. At mid-ocean ridges, deep magma rises to within a few kilometers of the ocean floor, heating infiltrating seawater and accelerating the interaction of hot water with oceanic basalts to produce chemical-rich hydrothermal fluids. Fluids from these vents can reach temperatures as high as 400°C and are geochemically anoxic, strongly acidic, and rich in sulfides, methane, and various metals. Since the 1970s, more than 200 deep-sea hydrothermal vents have been discovered and documented [1]. They differ greatly in age of formation, degree of evolution, size, stability, water depth, hydrothermal composition, temperature, biodiversity, and many other respects. The biotas of these deep-sea hydrothermal vents contain some so-called "living fossil" organisms, and the biological communities of these deep-sea chemosynthetic environments show rich diversity reflecting differences in physical-chemical conditions. At vents in fast-spreading regions, such as the East Pacific Rise, the main source of chemical energy is sulfide, while at the slow-spreading Mid-Atlantic Ridge the main chemical energy sources are hydrogen and methane [2]. Hydrothermal organisms have unique physiologies that allow them to adapt to and survive the extreme high-temperature, high-pressure, low-pH environment; this environment supports high biomass but low biodiversity. The dominant taxa are tubeworms, clams, mussels, various gastropod molluscs, polychaetes and shrimps, with about 500 species recorded. Although different vents harbor similar biological taxa at higher taxonomic levels (family, genus), there are significant differences between vents at the species level, forming distinct biogeographic units. The chemoautotrophic microorganisms at the base of the food chain are the primary producers of the hydrothermal ecosystem, and archaea account for a large proportion of them.
Significance of deep-sea hydrothermal ecosystem research. Deep-sea hydrothermal ecosystems are a unique treasure house of biological and genetic resources. The green chemical industry has a huge demand for new enzyme preparations, and deep-sea hydrothermal areas, which combine extreme conditions of high temperature, high pressure and high levels of toxic chemicals with flourishing ecosystems, are a good place to screen for extremophiles [3]. The thermophilic archaea Pyrolobus fumarii and strain 121 hold the records for the highest growth temperatures ever reported (113°C and 121°C) [4,5]; this temperature range is considered close to the upper limit for life. Pyrococcus sp. CH1 was recently found to reproduce at 98°C and 1200 atm, with a generation time of approximately 5 hours at 1000 atm [6]. The enzymes of these hyperthermophilic, high-pressure-adapted microorganisms have unrivaled application value in industrial fields such as biocatalysis. Biological densities in hydrothermal areas are extremely high, and the microorganisms inhabiting them have presumably evolved strategies to adapt rapidly to these environments, forming mechanisms of growth, metabolism and regulation different from those in conventional environments and relying more on chemical defenses. Research on the biosynthesis of the metabolites involved will provide a theoretical basis for establishing new biotransformation pathways and combinatorial biosynthesis. Moreover, the unique metabolites synthesized by these microorganisms embody rich chemical structural diversity and are an important future source of new natural products. The study of deep-sea hydrothermal ecosystems also plays an important role in advancing life science and earth science. How does life change the Earth, and how does the Earth change life? Studying deep-sea microorganisms at the intersection of the life and earth sciences will not only promote the development of the life sciences but may also help break through the traditional framework of the earth sciences to address the mysteries of the origin and evolution of life on Earth. Studies of the physiology and ecology of deep-sea organisms help expand our understanding of the environmental factors most important to the origin, development and evolution of life. The high-temperature, anoxic deep-sea environment represented by mid-ocean ridges resembles the environment of the early Earth. Although the earliest fossil evidence of life dates to about 3.5 billion years ago, there is reason to believe that the earliest cells may have appeared within 200 million years, or even as little as 20 million years, of the Earth acquiring a liquid ocean. The ocean temperature was above 90°C about 3.9 billion years ago and around 70°C about 3.5 billion years ago, so all life in the early oceans should have been thermophilic, like modern hydrothermal organisms. Microbial populations living in the extremely high-temperature environments of hydrothermal vents may retain "housekeeping" genes from early evolutionary history, making them an important complement to fossils and biomarkers.
Unresolved scientific questions. Are there "new" metabolic pathways? Constrained by extreme environments of high temperature, high pressure and anoxia, extremely thermophilic and piezophilic microorganisms may retain housekeeping gene "clusters" from early life on Earth; these metabolic pathways, unknown under normal temperature and pressure, are important material for studying environmental change on the early Earth and an important reference in the search for life in outer space. For example, the sulfide mineralization mechanisms of sulfate-reducing bacteria under high temperature and pressure, the redox chemistry and kinetics of sulfur under the joint action of high temperature and microorganisms, and the thermodynamic and kinetic fractionation of sulfur isotopes at high temperature caused by metabolic processes remain unresolved. Analysis of sulfide mineralization under the growth conditions of deep-sea extremely thermophilic-piezophilic bacteria is of great significance for interpreting the mineral and isotope records in ancient marine sediments that may be linked to sulfate-reducing bacteria. Environmental genomics, which does not depend on pure culture, will also provide new technical means for seeking special metabolic pathways in the deep sea and the deep biosphere. However, modern deep-sea hydrothermal areas are a mixture of the seawater ecosystem and the hydrothermal "original ecosystem"; scientists are trying to distinguish the two by means of deep-sea in-situ observation. Another approach is to choose as research objects microorganisms that can live only in the deep biosphere. Are there still "new" life forms deep in the Earth? Using deep-sea drilling technology, a large number of core samples have been recovered from the ocean floor. Their ages do not exceed 180 million years, showing that the ocean floor is relatively young; older oceanic crust has been subducted deep into the Earth. Seafloor ages are distributed symmetrically about the mid-ocean ridge: the youngest seafloor lies at the ridge axis, and the farther from the ridge, the older the crust. If the earliest life on Earth originated in an environment similar to a mid-ocean ridge, does primitive life still persist in the deep-sea hydrothermal environments of mid-ocean ridges? We can consider this from the two perspectives of energy metabolism and carbon fixation and assimilation. Early life may have been heterotrophic, as experimental and observational evidence suggests that prebiotic chemical evolution could have accumulated large amounts of organic matter; studies show that secondary minerals from hydrothermally altered basalts at present-day mid-ocean ridges can still catalyze the formation of simple organic matter. What chemicals, then, did the earliest life use for energy metabolism? This is a very important question in the evolution of early life. Current research proceeds along two lines of constraint: one uses geological methods to reconstruct the geochemical composition of the ocean and seafloor sediments during the early stages of life's evolution; the other uses modern microbiological methods to reveal the characteristics of early life at the genetic level. Sulfate, ferric iron, nitrate, and tetravalent or trivalent manganese may all have served as electron acceptors for these early microorganisms.
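Because crust ages linearly with distance from the ridge axis at a steady spreading rate, the age-distance relation mentioned above can be made concrete with a minimal sketch (the 180 Ma maximum age is from the text; the half-spreading rates are rough illustrative values for slow and fast ridges, not measurements from this article):

    # Seafloor age from distance to the ridge axis: age = distance / half_rate.
    MAX_AGE_MA = 180                          # oldest in-situ ocean floor (see text)

    for name, half_rate_mm_per_yr in [("slow ridge (e.g. Mid-Atlantic)", 12),
                                      ("fast ridge (e.g. East Pacific Rise)", 60)]:
        # distance (km) at which crust reaches 180 Ma: mm/yr * yr -> mm -> km
        km = MAX_AGE_MA * 1e6 * half_rate_mm_per_yr * 1e-6
        print(f"{name}: 180 Ma crust lies ~{km:.0f} km from the axis")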
In addition to detailed biogeochemical studies of the deep-sea hydrothermal environments of mid-ocean ridges, examining at the molecular level why the microbes there appear so ancient will help resolve the puzzle of why such young crust hosts such ancient-looking microorganisms.", "The tectonic uplift of the Himalayas and the Qinghai-Tibet Plateau is one of the most important geological events to have occurred on Earth during the Cenozoic (since about 65 million years ago) [1,2]. East Asia's change from an early west-dipping topography to the present east-dipping topography is considered one of the causes of the establishment of the Asian monsoon system and of the overall change in the climatic and environmental pattern of Asia and indeed the world, making it a classic case of the "tectonics-climate" relationship [1-13]. At the same time, the plate collision that deformed Asia also drove the opening, development and evolution of the marginal seas of the western Pacific [14-16]. The classic view is that rapid uplift of the plateau intensified the weathering and denudation of continental silicate and carbonate rocks, consuming atmospheric CO2 and affecting global climate change [17,18]. Meanwhile, the major Asian rivers originating on the Qinghai-Tibet Plateau carried huge amounts of terrigenous clastic material into the marginal seas and the open ocean, playing an important role in controlling the sedimentary evolution of the marginal seas and the chemical composition of the global ocean [14-20]. When the Himalayas and the Qinghai-Tibet Plateau began to rise, at what rate and by what process, and what impact plateau uplift has had on the global climate and environment and on the evolution of Asian geomorphology have long been hot scientific questions of special concern to the scientific community [6,7]. The uplift of the Himalaya-Tibetan Plateau strengthened the Asian monsoon circulation and blocked the transport of warm, humid air from the Indian Ocean into the Asian interior; at the same time, the massive plateau generated descending airflow on its northern side, promoting the development of aridification in inland Asia [9-12]. Over the past 20 years, scientists around the world have studied the timing, course and magnitude of the uplift of the Himalaya-Tibetan Plateau, as well as its environmental effects, through multidisciplinary approaches including geophysics, geochemistry, structural geology, physical geography, paleontology, and marine geology, and much significant progress has been made, but views still differ on some key scientific issues. It is generally believed that the modern East Asian monsoon is the product of two factors acting together, the thermal contrast between land and sea and the uplift of the Qinghai-Tibet Plateau, though different understandings remain. Chinese scholars have obtained many internationally noted academic results through systematic research on typical loess-paleosol profiles, sedimentary basins around the plateau, and deep-sea sediments of the South China Sea [1-4,8-16]. At present, global change research places special emphasis on combining land and sea and on studying the interactions among the spheres of the Earth system from a multidisciplinary perspective.
Carrying out \"source-to-sink\" research from the inland to the deep sea through sediments will help us understand the Cenozoic topographic inversion of East Asia and its climatic and environmental consequences. Over the past 40 years, the Ocean Drilling Program (ODP) and the Integrated Ocean Drilling Program (IODP), the most important international cooperative research programs in the earth sciences, have used deep-sea drilling to study the uplift history of the Himalayas and the Qinghai-Tibet Plateau, the formation of the Asian monsoon, and the development of large rivers. By identifying the provenance of the deep-sea fan deposits of the Bay of Bengal and the deep-sea deposits of the South China Sea, some scholars have reconstructed, from the Asian continent to the marginal seas of the western Pacific and the Indian Ocean, the history of uplift and denudation of the Himalayas and the Qinghai-Tibet Plateau since the Cenozoic, revealing the coupled control of the Asian monsoon and neotectonic movement on the transport of terrigenous material from land to sea [19,20]. However, judging from the main research results of the past 10 years, owing to the lack of long, high-resolution cores and the difficulty of finding suitable and reliable provenance indicators, reconstructing the uplift and denudation history of the Himalayas and the Qinghai-Tibet Plateau and its environmental effects from the sediments of the Asian marginal seas will remain a hot and difficult topic in Asian marine geology for the next few years.", "Ge Hong of the Jin Dynasty recorded in \"The Legend of the Immortal Magu\": \"Magu said that since she began receiving guests, she had already seen the East China Sea turn into mulberry fields three times.\" The Chinese idiom about seas turning into mulberry fields thus actually describes the history of sea-level change and land-sea interaction along the eastern coastal zone of China. Interestingly, English has the corresponding term \"a sea change\"; as early as 1610 the literary giant William Shakespeare wrote, \"Nothing of him that doth fade, But doth suffer a sea-change\". Sea-level change has always been a key scientific issue for the international marine geology community, and against the background of global warming it has become a hot topic of great concern to both scientists and the public. In 2007 the Intergovernmental Panel on Climate Change (IPCC) released its Fourth Assessment Report [1], which stated that observations since 1961 show that the average temperature of the global ocean has risen to depths of at least 3000 m, and that the ocean has absorbed more than 80% of the additional heat added to the global climate system. This heat causes seawater to expand, which raises sea level; at the same time, rising global temperatures accelerate the melting of continental glaciers and polar ice caps, further accelerating global sea-level rise. From 1961 to 2003 the average global rate of sea-level rise was 1.8 mm (1.3~2.3 mm) per year; from 1993 to 2003 the rate was faster, about 3.1 mm (2.4~3.8 mm) per year.
The total rise of global sea level in the 20th century is estimated at 0.17 m (0.12~0.22 m). Depending on the scenario model, sea-level rise will reach 0.18~0.59 m by the end of the 21st century, excluding any faster melting of glaciers [1-3]. In recent years more and more studies have confirmed that the ice sheets of the Arctic, the Antarctic, and Greenland are melting faster, that ocean surface temperature has increased significantly, and that surface seawater has freshened [4-13]. Some scholars predict that by the end of the 21st century the global absolute sea level will rise by about 0.8 m [3]. The IPCC report also pointed out that, after the polar regions, sub-Saharan Africa, and small islands, Asia will be among the biggest victims of global warming and sea-level rise [1,2,14]. Sea-level rise will cause coastal erosion, loss of land area, increased flood disasters, salinization of land and of terrestrial water sources, and severe salt-water intrusion. At present about 1/3 of the world's population lives in low-lying coastal areas or on islands; low-lying island countries such as the Maldives and the Seychelles may disappear altogether, while coastal megacities such as Shanghai, Venice, Hong Kong, Rio de Janeiro, Tokyo, Bangkok, and New York, and countries such as Bangladesh, the Netherlands, and Egypt, will also be deeply affected by rapid sea-level rise. Sea-level rise will significantly affect the densely populated mega-delta areas of Asia, including the Yangtze, Yellow River, and Pearl River deltas of China. These three deltas are also fragile and sensitive areas, with China's most developed economies, densest populations, and closest human-land relationships [15]. Therefore the history, present state, and future trend of global sea-level rise are not only an important research task for the scientific community but also of growing public concern. Since the 1980s Chinese scholars have carried out extensive research on the history of Quaternary sea-level change in the eastern marginal seas of China and obtained many important insights, but many debates remain on some key scientific issues. For example, how many high sea-level (transgression) periods existed in eastern China during the Quaternary (since about 2.5 million years ago)? Why was the transgression about 120,000 years ago, during the global highstand, not strong in eastern China, while the stadials of the last glacial period (about 40,000 to 22,000 years ago) left more significant transgression records? In the mid-Holocene (about 7000~6000 years ago), was there a sea level higher than today's? Against the background of accelerating global absolute sea-level rise, what is the rising trend of sea level along the eastern coast of China? What is the impact of rapid sea-level rise on China's vulnerable regions such as the Yangtze, Yellow River, and Pearl River deltas? Answering these key scientific questions requires not only high-resolution core records from coastal areas but also high-precision dating methods; at the same time there is an urgent need to develop more reliable sea-level prediction models suited to China's situation.
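As a quick plausibility check on the rates quoted above, the following minimal Python sketch reproduces the arithmetic; the constant-rate extrapolation is our own simplifying assumption, not an IPCC scenario.

```python
# Back-of-envelope check of the sea-level figures quoted above.
# The rates come from the text; extrapolating them at a constant rate
# is a simplifying assumption for illustration only.
rate_1961_2003 = 1.8   # mm/yr, mean rate of rise, 1961-2003
rate_1993_2003 = 3.1   # mm/yr, mean rate of rise, 1993-2003

# Rise accumulated over 1961-2003 at the slower mean rate:
print(rate_1961_2003 * (2003 - 1961) / 1000, "m")   # ~0.08 m, consistent with ~0.17 m over the whole 20th century

# Naive constant-rate extrapolation of the 1993-2003 rate over one century:
print(rate_1993_2003 * 100 / 1000, "m")             # ~0.31 m, within the projected 0.18~0.59 m range
```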
The complex land-sea interactions and human activities of the Late Quaternary coastal zone greatly restricted the preservation of high-quality sedimentary records there, and also make it very difficult to reconstruct the history of past sea levels. With interdisciplinary research methods and modern monitoring and analysis techniques, and through close cooperation with the international scientific community, Chinese scholars can be expected to make further contributions to the study of sea-level change.", "The ocean, which covers about 2/3 of the earth's surface, is not only floored almost entirely by sediments but is also a very active site of modern sedimentation. The sources of marine sediments are complex, including terrigenous material carried by rivers or ice rafting, dust, volcanic ash, and biogenic material, but rivers are the main source. After river sediments enter the sea they accumulate mainly in estuaries and on continental shelves, where sediment transport and deposition mechanisms have been studied in detail; but so far little is known about deep-sea sediments at water depths exceeding 200 m. Although deep-sea deposition is slow, the deep sea accounts for about 65% of the earth's surface area and the total amount of deposition is huge; it is an important collecting area for the global material cycle. With the deep exploitation and rapid consumption of onshore and shallow-water oil and gas resources, the exploration of deep-water resources has become a hot spot. Early research on deep-sea sediments can be traced back to the global expedition of the \"Challenger\" in the 1870s. Since then a large number of deep-sea surface samples and gravity cores have been collected, giving a relatively comprehensive picture of the composition, types, and distribution of seabed sediments. In particular, beginning in 1968 the \"Glomar Challenger\" carried out the Deep Sea Drilling Project (DSDP, 1968~1983), followed by the Ocean Drilling Program (ODP, 1985~2003) and the ongoing Integrated Ocean Drilling Program (IODP, 2003~2013); the many cores obtained in the world's oceans provide valuable material for the study of deep-sea sediments. However, the study of deep-sea sediment dynamics lags far behind the study of deep-sea cores, and this has produced many misunderstandings: for example, that the deep-sea environment is very calm and sediments settle slowly and vertically as \"pelagic rain\"; that mudstone and shale are facies indicators of the deep sea while conglomerate and sandstone characterize shallow-water and continental environments, so that the sandstone-shale interbeds of flysch sequences were inferred to record vertical tectonic oscillations; and that turbidity-current deposits follow the Bouma sequence. The root of these misunderstandings is that many ideas came from speculation based on deep-sea sediments and rocks, together with a small number of flume experiments, with no field observation of deep-sea sediment transport and depositional processes. The turbidity-current theory itself, born in the late 1940s and early 1950s, was based on a series of flume experiments.
The introduction of the turbidity-current theory is considered to be of epoch-making significance: it reasonably explained the sandstone-shale interbeds of flysch sequences as shallow-water coarse debris redeposited into deep water by turbidity currents, rather than as the result of frequent vertical tectonic movements. In the following decades the theory was continuously refined by flume experiments, numerical simulation, and theoretical study, and was widely applied to the study of ancient deep-sea sedimentary facies and to deep-water oil and gas exploration. Recently, however, the classic turbidity-current facies model, the Bouma sequence, has been strongly questioned [1,2]: it is argued that the high-density turbidity currents simulated in flume experiments are actually grain flows, that the parallel lamination and ripple lamination (Tb and Tc) are products of traction currents rather than turbidity currents, and that coarse-grained layers in the deep sea may mostly be formed by grain flows and bottom currents. In recent decades, with the development of deep-sea observation technology, understanding of deep-sea depositional processes and mechanisms has greatly improved. From the 1950s and 1960s, the development and application of seabed photography and echo sounding promoted research on the morphology and internal structure of the deep seabed, and it was realized that the ocean bottom is not quiet and flat but shows clear sedimentary features of reworking by currents. The flow above the deep seabed can be measured directly with current meters, and sediment transport can be estimated by measuring turbidity with optical or acoustic instruments such as the optical transmissometer, the Optical Backscatter Sensor (OBS), the Acoustic Backscatter Sensor (ABS), and the Acoustic Doppler Current Profiler (ADCP). Instruments deployed on tripods or moorings at different levels of the deep water column, in programs such as the High Energy Benthic Boundary Layer Experiment (HEBBLE), Sediment Transport Events on Shelves and Slopes (STRESS), Strata Formation on Margins (STRATAFORM), and the Source-to-Sink Program (S2S), have yielded many important insights into transport and depositional mechanisms [3-5]. In addition to the first measured profiles of velocity, temperature, salinity, and turbidity of turbidity currents in some submarine canyons [6-9], widespread wave- and current-supported sediment gravity flows, internal tides, fluid mud layers, and intermediate nepheloid layers have been documented. Slow bottom currents are observed in the deep ocean, with velocities mostly of 5~20 cm/s and core velocities exceeding 32 cm/s, sufficient to erode the substrate, transport silt and fine sand, and generate structures such as ripples, as well as bottom nepheloid layers formed by resuspension [3-5].
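To see why the \"pelagic rain\" picture implies extremely slow settling for individual fine grains, and hence why the floc settling discussed next matters, here is a minimal Stokes-law sketch. The density and viscosity values are our own illustrative assumptions, and Stokes' law applies only to small particles at low Reynolds number.

```python
# Stokes settling velocity w_s = (rho_s - rho_w) * g * d^2 / (18 * mu)
# for a small sphere in still water. All parameter values are assumed,
# illustrative numbers, not data from the text.
g = 9.8          # m/s^2
rho_s = 2650.0   # kg/m^3, quartz grain (assumed)
rho_w = 1025.0   # kg/m^3, seawater (assumed)
mu = 1.4e-3      # Pa*s, viscosity of cold seawater (assumed)

def stokes_ws(d):
    """Settling velocity (m/s) of a sphere of diameter d (m), low-Reynolds-number regime."""
    return (rho_s - rho_w) * g * d ** 2 / (18 * mu)

for d_um in (4, 10, 100):          # clay, fine silt, fine sand
    ws = stokes_ws(d_um * 1e-6)
    days = 4000.0 / ws / 86400.0   # time to settle through 4000 m of water
    print(f"d = {d_um:3d} um: w_s = {ws:.2e} m/s, ~{days:.0f} days to fall 4000 m")
```

On these assumptions a single 4 μm clay grain would take over a decade to reach an abyssal seabed, which is why aggregation into faster-sinking flocs, confirmed by the observations described below, is so important.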
Field observations and flume experiments have also confirmed that deep-sea fine-grained sediments settle as flocs, and that flocs can form structures such as ripples and micro-cross-bedding under the action of currents [10,11]. More and more deep-sea observational data are updating the traditional understanding of deep-sea sediments, and some of the revisions are subversive. With the rise of the idea of \"building laboratories on the seabed\", developed countries have planned and established seabed observation networks one after another [12]; this revolution in observation will further promote the development of deep-sea sedimentology.", "For nearly half a century a series of large water-conservancy projects have been built in China's large river basins, such as the Three Gorges, Danjiangkou, and Xiluodu (upper Yangtze) reservoirs in the Yangtze basin; the Longyangxia, Liujiaxia, Sanmenxia, and Xiaolangdi reservoirs in the Yellow River basin; and the Yantan and Longtan reservoirs on the Hongshui River and the Feilaixia reservoir in the Pearl River basin. According to the 2003 statistics of the World Commission on Dams (WCD), China's dams over 15 m high accounted for half of the world total, and its dams over 30 m for 37%, ranking first in the world [1]. The quantity, composition, and spatiotemporal distribution of material fluxes such as water, sediment, biogenic elements, and pollutants at the estuaries of China's large rivers have changed greatly over the past half century [2,3]. In recent years the mean annual sediment fluxes to the estuaries of the Yangtze, Yellow, and Pearl rivers have been only about 2/5, 1/10, and 1/2, respectively, of their mean annual values in the 1950s and 1960s, and large water-conservancy projects are one of the main influencing factors [3,4]. Over the same period China's estuarine and offshore ecosystems have declined: estuarine water quality has deteriorated, wetlands have shrunk, offshore red tides and green tides (Enteromorpha) occur frequently, deltas and coasts are eroding, the distribution of biological resources has changed significantly, total biomass and total catch have decreased markedly, the spawning and feeding grounds of fishery resources have shifted or even disappeared, and fishery resources show a declining trend [5,6]. The decline of estuarine and coastal ecosystems has already affected social and economic development and has received great attention. However, many factors influence changes in estuarine and offshore ecosystems: besides large water-conservancy projects in the basins, global climate change and other human activities also play major roles. How much of the change in estuarine and offshore ecosystems is attributable to large water-conservancy projects in the basins? Apart from a preliminary understanding of water and sediment fluxes, a clear answer is still lacking. The report \"Ecosystem Impacts of Large Dams\" provided by IUCN/UNEP/WCD in 2001 [7] contains no discussion of the impact of dams on the ecosystems of estuaries and adjacent seas.
Therefore, how to identify and scientifically and objectively evaluate, among many factors, the impact of large water-conservancy projects on estuarine and offshore ecosystems is an urgent problem. Solving it not only has important scientific significance but will also provide a scientific basis for China to adopt appropriate policies and countermeasures.", "Deep geophysical studies of continental-margin subduction zones have found that many subducted slabs of oceanic or continental lithosphere are torn along weak zones. The distribution of natural earthquake hypocenters, seismic tomography, and gravity and magnetic forward and inverse modeling show that this tectonic phenomenon exists in the Nankai Trough subduction zone of Japan [1-3], the Manila subduction zone [4,5], the subduction zone west of Taiwan [6], the Mariana subduction zone [7], the Cascadia subduction zone of North America [8], and other regions of the world [9-14]. Seismicity along the rupture surface of a subducting slab is strong, and at the same time the two sides of the tear differ greatly in slab dip, temperature field, dehydration and melting of the subducting lithosphere, melting of the mantle wedge, volcanic activity, back-arc spreading, and many other respects. In recent years, with improved geophysical and geochemical observation methods and capabilities, understanding of slab tearing has advanced greatly, and more and more examples have been discovered [8,14]. However, we still do not fully understand the causes of this lithospheric tearing, let alone its formation and evolution. It is generally believed that tearing proceeds along pre-existing weak planes in the subducting slab and is related to differences in the slab's internal structure and its rollback rate. Yang et al. analyzed in detail the hypocenter distribution of the Manila subduction zone [4] and found obvious changes of subduction angle along the trench; they attributed these changes to arc-continent collision at the northern and southern ends of the Manila subduction zone and to subduction of the South China Sea paleo-ridge with accompanying slab tearing, and suggested that the South China Sea slab was probably torn along the ocean-continent transition boundary, producing an abrupt change in subduction angle. However, the position of the paleo-ridge marked by Yang et al. [4] is clearly wrong, and they did not seriously consider the effect of the many young seamounts, 3~4 km high, in the eastern South China Sea basin, especially along the paleo-ridge, on the morphology and seismicity of the subduction zone. Bautista et al. [5] also compiled seismic data for this area from 1963 to 1997, studied the seismicity and focal mechanisms of the Manila subduction zone, and modified the subduction-zone model of Yang et al. [4].
They concluded that although the South China Sea slab is indeed torn, the tear does not follow the ocean-continent transition boundary but the trend of the subducted paleo-ridge, and they used the subduction of the paleo-ridge to explain the absence of earthquakes in the 65~300 km depth range in the central Manila subduction zone (about 17°N). At the same time, Bautista et al. [5] speculated that the sudden change in the trend of the Manila subduction zone near 20°N may be related to the collision and subduction of a submarine plateau; but this speculated plateau and the tearing of the South China Sea slab lack support from deep seismic reflection and deep seismic velocity data, at least judging from the free-air and Bouguer gravity anomaly data [15], so these speculations need further verification. In the Cascadia subduction zone, some studies hold that the Yellowstone mantle plume tore the Juan de Fuca plate and then reached the surface, forming the Yellowstone volcanic track [8]. From this point of view, although the causes of slab tearing may vary greatly, tearing always produces large differences and segmentation within a subduction zone; it affects accelerated uplift or subsidence of the regional crust and may eventually lead to the formation of new plate boundaries, thus changing the course of regional lithospheric evolution. Research on this subject must be comprehensive, because it involves geological processes from the deep mantle to the surface, and it is a new topic that goes beyond the framework of classical plate-tectonic theory. Detailed study of slab tearing will be the basis for further understanding of seismicity in global subduction zones and of lithospheric evolution and recycling.", "75% of the earth's surface is covered by sedimentary rocks and unconsolidated sediments, and the ocean margins, where terrigenous and authigenic sediments accumulate, are an important zone for the formation of geological records. Geological records are the basic material for studying climate change, environmental evolution, and ecosystem succession in earth history [1-4]. To use the geological record correctly requires an understanding of the form in which sedimentary processes are preserved in the record, of its continuity (i.e., the completeness of the record at a given scale), and of how the original information and its spatial position change after the record forms. Since the early days of earth science, people have used geological records to infer past environmental conditions, and some sedimentary processes can also be inferred from them; but sedimentary dynamic processes, such as the resuspension of fine particles, are difficult to judge directly. At present some international research programs on geological records, such as STRATAFORM, address questions like \"what kind of record does a given sedimentary dynamic process produce\" and \"how can the accumulation process be deduced from the geological record\" [5].
Although the stratigraphic sequence of some sedimentary basins can be reconstructed without reference to sedimentary dynamics (e.g., through geometric models [6]), assessing the continuity and resolution of the geological record must rely on knowledge of the dynamic processes. Discontinuities in a sedimentary sequence result from intermittent accumulation or from periodic erosion; in the latter case a record was produced but was partially erased during cyclic erosion and deposition. Whether parts of the sequence are missing, how much is missing, which layers are missing, and which processes caused the loss are the keys to interpreting the record correctly [7]. The resolution of the geological record depends not only on analytical technique but also on the character of the sequence itself. In the modern environment, resolution can be evaluated by observing the preservation potential of sequences [8,9], but geological records do not directly display preservation-potential information and must be analyzed with knowledge of the dynamic processes [10]. Progress has been made on these questions by combining chronology with sedimentary dynamics, but further research is needed. There have been many studies of diagenetic alteration of geological records after their formation [11,12], but in general quantitative knowledge of diagenetic change in many records is still insufficient. Research on the vertical and horizontal displacement of records is still weak. Suppose P = P(z) is a parameter measured on a core; using chronological data, P(z) can be transformed into a time series P(t) (a sketch of this conversion follows below). In general, however, this time series may not represent the environmental history of the sampling site, because the record may have changed spatial position after it formed. The effect of displacement is easy to see for long-term geological processes: for example, sedimentary layers formed near a mid-ocean ridge eventually reach a continental margin or trench through seafloor spreading, and interpretation of such layers must be based on restoring their original position. The effect should also be considered for relatively recent geological periods: for example, crustal subsidence in the Bohai Sea area can place shallow-water deposits at depth in boreholes, so only by restoring the water depth of the time can sedimentary records be compared reasonably. In summary, an important issue in sedimentology and sedimentary geology is to understand the correspondence between sedimentary dynamic processes and the geological record so as to analyze the record in the best way, and at the same time to improve the extraction and interpretation of depositional information according to diagenetic changes and displacement characteristics.",
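As a concrete illustration of the P(z) → P(t) conversion mentioned above, the sketch below interpolates a hypothetical age-depth model onto proxy measurement depths. All numbers are invented for illustration, and the linear interpolation assumes constant accumulation between dated levels, ignoring exactly the hiatuses, erosion, and displacement discussed in this section.

```python
import numpy as np

# Hypothetical age-depth control points for a core (illustrative values only),
# e.g. from radiocarbon dates: depth (m) -> age (years BP).
control_depth = np.array([0.0, 1.2, 2.8, 4.5])        # m
control_age   = np.array([0.0, 3.0e3, 7.5e3, 1.2e4])  # yr BP

# A measured proxy P(z): sample depths and values (illustrative).
z = np.array([0.3, 1.0, 2.0, 3.5, 4.2])
P = np.array([5.1, 4.8, 4.2, 3.9, 3.5])

# Linear interpolation of the age model converts P(z) into a time series P(t).
t = np.interp(z, control_depth, control_age)
for ti, Pi in zip(t, P):
    print(f"t = {ti:7.0f} yr BP, P = {Pi:.1f}")
```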
"Obtaining seismic anisotropy data is the most effective (and perhaps at present the only) observational approach to the spatial distribution and mechanism of the deformation field (mantle flow) in the Earth's interior. Field seismic observations and laboratory studies have confirmed that the deformation-induced preferred orientation of mineral crystals can generate seismic anisotropy in the mantle, with the fast propagation direction approximately parallel to the direction of mantle flow [1,2]. Therefore the polarization direction of the fast S (shear) wave and the fast propagation direction of the P wave can be used to infer the flow direction of the upper mantle (the asthenosphere). The spatial distribution and strength of mantle seismic anisotropy are usually described by two shear-wave splitting parameters: the polarization direction of the fast shear wave, and the travel-time difference between the fast and slow waves arriving at a station. Measurements of shear-wave splitting in the mantle wedge above the subducting plate in several subduction zones, including Japan, have revealed trench-parallel anisotropy, that is, the polarization direction of the fast shear wave is parallel to the trend of the trench, while the fast-slow delay time (the strength of the anisotropy) varies greatly in space (Figure 1: shear-wave splitting observations in the Japan subduction zone). Puzzlingly, the delay time is only 0.1~0.2 s in northeast Japan [3] but 1~2 s in the Ryukyu Islands to the southwest [4]. Trench-parallel fast directions have also been observed in the Tonga-Fiji region of the South Pacific [5], while far behind the arc the fast directions are perpendicular to the volcanic island arc (Figure 2: shear-wave splitting observations in the Tonga-Lau Basin subduction zone of the western Pacific). A similar change, from trench-parallel fast directions near the volcanic arc to arc-perpendicular fast directions far behind the arc, has also been observed in the Mariana subduction zone [6]. In studies of shear-wave splitting in the mantle wedge, usually only earthquakes within the local subducting plate are selected, so that the influence of anisotropy in the mantle below the slab can be ignored. Asthenospheric flow in the mantle wedge is driven mainly by the motion of the subducting plate, and most dynamical models of the wedge assume a two-dimensional corner flow field (2-D corner flow). This flow pattern explains the splitting observations far behind the volcanic arc, where most fast directions are perpendicular to the arc, but it contradicts the peculiar trench-parallel anisotropy observed near the arc. Some scholars have therefore proposed more complicated three-dimensional flow-field models of the mantle wedge [5]. Several mechanisms have been proposed to explain the trench-parallel anisotropy observed in subduction zones, including rollback of the subducting plate [7], three-dimensional flow in the mantle wedge [8], local convection caused by crustal delamination [9], and the influence of water on the rheology of the mantle wedge [10].
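Before examining these models, a rough scaling argument shows why the observed 1~2 s delay times are so demanding. Using the approximation δt ≈ L·k/Vs, where L is the thickness of the anisotropic layer and k the fractional velocity anisotropy (the relation and all parameter values below are our own assumptions):

```python
# Required anisotropic-layer thickness for a given splitting delay,
# using the rough relation delta_t ~ L * k / Vs. All values are assumptions.
Vs = 4.5  # km/s, average shear-wave speed in the mantle wedge (assumed)

def required_thickness_km(delta_t_s, k):
    """Layer thickness (km) needed to produce delay delta_t_s (s)
    at fractional velocity anisotropy k (dimensionless)."""
    return delta_t_s * Vs / k

print(required_thickness_km(1.5, 0.04))  # ~169 km at ~4% olivine-type anisotropy (assumed)
print(required_thickness_km(1.5, 0.40))  # ~17 km if serpentine is ~10x more anisotropic, as noted below
```

A ~170 km anisotropic path is implausible in the thin wedge corner beneath an arc, whereas ~17 km of aligned serpentine is not; this anticipates the serpentine hypothesis discussed next.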
In the above models it is assumed that mantle anisotropy is produced by the deformation-induced preferred orientation of olivine crystals. However, even if the entire mantle wedge were anisotropic olivine, it would be difficult to explain the strength of the trench-parallel anisotropy observed in several subduction zones (1~2 s of delay time). Faccenda et al. attributed the trench-parallel anisotropy to faults filled with serpentine that may exist inside the subducting plate [11]. The latest progress is a paper recently published in \"Nature\" by Katayama et al. [12], who propose that the strong trench-parallel anisotropy observed near volcanic island arcs may arise from the deformation-induced alignment of serpentine crystals. Serpentine is the main hydrous mineral in the hydrated mantle wedge. High-pressure experiments in their laboratory showed that the c-axis of serpentine tends to rotate perpendicular to the shear plane during deformation; that is, seismic waves traveling perpendicular to the shear plane (the surface of the subducting plate) travel much more slowly than in other directions. In addition, the seismic anisotropy of serpentine is estimated to be an order of magnitude greater than that of olivine. Therefore, where the subduction angle is steep (as at the Ryukyu Trench), the alignment of serpentine in the hydrated mantle wedge can generate strong trench-parallel anisotropy near the volcanic arc. Although serpentine alignment in the hydrated wedge is currently a promising explanation for the strong trench-parallel anisotropy observed near volcanic arcs, more laboratory high-pressure deformation experiments on serpentine are needed to verify and refine the few experimental results obtained so far. In addition, more and finer shear-wave splitting observations in multiple subduction zones are needed to better characterize the spatial distribution of mantle anisotropy across the trench-arc-back-arc region.", "It has been known for decades that deep earthquakes in the earth's interior (in subduction zones) do not occur below the 660 km seismic velocity discontinuity, which is usually taken as the boundary between the upper and lower mantle. The scientific community generally holds that the 660 km discontinuity corresponds to the phase transition from ringwoodite to perovskite + magnesiowustite [1]. This phase transition has a negative Clapeyron slope, so the phase boundary becomes deeper (>660 km) in subducting slabs that are colder than the surrounding mantle. The currently popular view is that the depth of the deepest earthquakes in a subducting slab is controlled by the 660 km phase-transition interface [2]. Scientists have proposed several possible explanations for why deep earthquakes do not occur below the 660 km boundary. The explanation of Green and Zhou is based on their \"transformational faulting\" model for the mechanism of deep earthquakes: in their model, earthquakes cannot occur below an endothermic phase transition, and the 660 km phase transition is endothermic [3].
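Before turning to the other explanations, note that the depression of the boundary in a cold slab follows directly from the negative Clapeyron slope mentioned above: the boundary shifts by Δz = γΔT/(ρg). The sketch below uses assumed, order-of-magnitude values for γ, ρ, and the slab temperature anomaly:

```python
# Order-of-magnitude deflection of the 660 km boundary in a cold slab:
# delta_z = gamma * delta_T / (rho * g). All parameter values are assumptions.
gamma = -2.5e6   # Pa/K, Clapeyron slope of the post-spinel transition (assumed)
rho   = 4000.0   # kg/m^3, mantle density near 660 km (assumed)
g     = 9.8      # m/s^2
dT    = -800.0   # K, slab colder than ambient mantle (assumed)

dz = gamma * dT / (rho * g)      # positive -> boundary deepens
print(f"boundary deflected ~{dz / 1000:.0f} km deeper, i.e. to ~{660 + dz / 1000:.0f} km")
```

On these assumptions the boundary deepens by roughly 50 km, broadly consistent with the ~700~730 km depths reported for cold slabs below.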
Karato et al. held that the phase transition at 660 km depth may form very fine-grained material, which is unlikely to produce deep-focus earthquakes within the framework of the \"synthetic shear zone model\" they proposed [4]. If actual observation proves that the deepest earthquakes occur near the 660 km interface, that observation is itself the best observational evidence for these seismogenic models of deep earthquakes in subducting slabs. A recent paper reported a very interesting result [5]: Tibi et al. stacked high-quality records from a dense ocean-bottom seismograph (OBS) array deployed in the Mariana island arc and back-arc, searched for high-frequency reflected and converted wave signals (P660p, S660p), and accurately determined the depth of the 660 km boundary within the Mariana subducting slab in this area (Fig. 1a). Their results (Fig. 1b) show that near 18°N the 660 km interface within the Mariana slab actually lies at 710~730 km (with an error of ±14 km). Earlier seismic tomography had shown that the Mariana slab penetrates the 660 km interface into the lower mantle [6], and deep earthquakes in this area terminate near 620 km [7]. The significance of the result of Tibi et al. is therefore that deep seismicity in this region stops about 100 km above the 660 km phase-transition interface. Worldwide, the number of deep earthquakes drops off sharply below 650 km [8], and observations show that in colder subducting slabs the 660 km boundary occurs at around 700 km. On the basis of these facts and their new precise results, Tibi et al. argue that the maximum depth of deep earthquakes in subducting slabs is unlikely to be governed by the phase transition at the base of the upper mantle. What mechanism does control the maximum depth of slab earthquakes, or why earthquakes do not occur below the 660 km interface (in the lower mantle), remains an unanswered mystery. Figure 1 a. Schematic diagram of the wave paths investigated at different stages; solid and dashed lines represent P and S waves, respectively. b. Vertical cross-section of earthquake locations in the Mariana subduction zone.", "There are three types of earthquakes in a subduction zone. The first two are intraplate earthquakes: shallow earthquakes in the upper plate, and deep or shallow earthquakes within the subducting slab. The third type is interplate earthquakes, which occur on the contact interface between the two plates, the subduction megathrust, generally at depths from a few kilometers to about 50 km. The largest earthquakes in the world are of this third type, such as the magnitude 9.5 Chile earthquake of 1960, the magnitude 9.2 Alaska earthquake of 1964, and the magnitude 9.2 Sumatra earthquake of 2004. When the literature refers to large subduction zone earthquakes, it means these interplate earthquakes unless otherwise specified. Such earthquakes also cause tsunamis: the massive tsunami triggered by the 2004 Sumatra earthquake killed more than 240,000 people around the Indian Ocean. The Pacific rim is essentially ringed by subduction zones except in a few places, and subduction zones also border parts of the other oceans. For the residents of these places, a great subduction zone earthquake and its associated tsunami are the most dangerous of natural disasters.
Large subduction zone earthquakes occur by sudden displacement on the megathrust between two converging plates (Figure 1: the internal structure of the fault zone strongly influences the magnitude of subduction zone earthquakes). The overall relative motion of the two plates is generally a few centimeters to more than ten centimeters per year, but the contact between them generally does not slide steadily at this rate. The undulating surface of the subducting plate and the fracturing of surrounding rock during subduction both increase the roughness of the fault, while the sediments carried into the subduction zone and the wearing down of uneven geometry during fault motion increase its smoothness. Large earthquakes occur because the large, smooth, unstably slipping patches of the fault do not creep at the constant plate rate but show \"stick-slip\" behavior. \"Stick\" means the fault plane is locked while stress gradually increases and the surrounding rock accumulates strain energy; \"slip\" is the sudden sliding, that is, the earthquake, which releases a large amount of elastic strain energy within seconds to minutes and radiates it as seismic waves. The local slip in a giant earthquake can reach 30~40 m. Depending on the size and distribution of the fault slip, the resulting sudden vertical deformation of the seabed can reach several meters and become the source of a tsunami. Of course, not all subduction faults have only the two states of locking and earthquake; actual fault motion is often more complicated. All large subduction faults generate many small and medium earthquakes, but according to paleoseismic research, written records, and modern instrumental monitoring, not all of them have ever produced great earthquakes of magnitude 8.5 or above. What factors control the magnitude of these earthquakes? A few scholars believe that the size of subduction zone earthquakes is random: observe long enough and any subduction zone will sooner or later produce a giant earthquake. But the vast majority of scholars believe that great earthquakes require particular geological conditions and physical settings. Another popular claim, found widely in the literature, is that large earthquakes result from high fault yield strength and high stress. This claim is unfounded: regardless of earthquake size, the average stress drop of an earthquake is generally a few MPa, and there is no evidence that it bears any relation to the background stress; stress analyses have never shown that the megathrusts producing great earthquakes carry higher stress. Earthquakes occur because fault slip becomes unstable [1]. If the faster a fault slips, the lower the friction resisting it, then slip can run away and quickly develop into an earthquake; this behavior is called velocity weakening. Many mechanisms can cause weakening, and which one dominates depends on the frictional properties of the fault-zone material and the slip rate [2]. The opposite of velocity weakening is velocity strengthening, in which frictional resistance increases with slip rate; this behavior stabilizes fault slip.
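The velocity-weakening/strengthening distinction just described is conventionally captured by the steady-state rate-and-state friction law μss = μ0 + (a−b)·ln(V/V0), where the sign of (a−b) decides stability. The sketch below, with assumed parameter values, also includes the simple slip-budget arithmetic implied by the numbers in the text (30~40 m of coseismic slip accumulated at a few cm/yr of convergence):

```python
import math

# Steady-state rate-and-state friction (standard form; parameter values assumed):
# mu_ss = mu0 + (a - b) * ln(V / V0); a - b < 0 -> velocity weakening (unstable).
mu0, V0 = 0.6, 1e-6          # reference friction and slip rate (m/s), assumed

def mu_ss(V, a_minus_b):
    return mu0 + a_minus_b * math.log(V / V0)

for a_minus_b in (-0.004, +0.004):
    trend = "weakening" if mu_ss(1e-3, a_minus_b) < mu_ss(1e-6, a_minus_b) else "strengthening"
    print(f"a - b = {a_minus_b:+.3f}: velocity {trend}")

# Slip-budget estimate from the numbers in the text: time to accumulate
# ~35 m of slip deficit at ~5 cm/yr of plate convergence.
print(35.0 / 0.05, "years between the very largest events")   # ~700 yr
```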
A focus of research on subduction megathrust earthquakes over the last 20 years has been the frictional properties of fault materials and the factors that affect them, such as temperature, pressure, and pore fluid (typically water). Friction experiments on real or simulated fault gouge can determine which mineral compositions exhibit velocity weakening or strengthening under which conditions. Experimental results, combined with knowledge of the rocks within and on both sides of the fault and with mathematical simulation, help us understand the seismic behavior of faults. A major application of such studies is estimating the shallow and deep limits of the megathrust seismogenic zone. For example, Hyndman and Wang (1993) proposed, on the basis of rock-friction results and other hypotheses [3], that the depth extent of the megathrust seismogenic zone is controlled by temperature. They proposed that certain mineral components in the shallowest part of the fault zone prevent earthquakes there, but when temperature increases with depth to 100~150°C, dehydration reactions of these minerals change the rock from velocity strengthening to velocity weakening, so that earthquakes can occur. When temperature increases further with depth to about 350°C, the fault returns to velocity strengthening and earthquakes no longer occur; deeper still, the rock deforms viscously and earthquakes are even harder to generate. Later, Hyndman et al. (1997) proposed [4] that the interface between the subducting plate and the mantle of the upper plate (the mantle wedge below the Moho) should not generate earthquakes even where the temperature is below 350°C, because the corner of the mantle wedge, especially the part nearest the megathrust, should be hydrated by water released from the dehydrating slab, and hydrous minerals in the hydrated mantle rocks, such as talc and serpentine, should produce velocity strengthening. These hypotheses roughly explain the observed depth distribution of large subduction zone earthquakes, but with advances in seismic monitoring methods and accuracy their universality is increasingly challenged. It now appears certain that temperature and rock metamorphism play key roles in earthquake occurrence, but exactly at what temperatures the frictional properties of faults change, and which minerals cause velocity weakening or strengthening under which conditions, are affected by many factors and still need study. Many scholars have offered new discussions of the shallow limit of the seismogenic zone [5,6].
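The Hyndman-Wang thermal-control hypothesis described above can be paraphrased as a simple decision rule. The sketch below is our own simplified rendering (not their code), using the threshold temperatures quoted in the text and collapsing the 100~150°C onset to a single 100°C value for definiteness:

```python
# Simplified paraphrase of the thermally controlled seismogenic-zone hypothesis
# described above. Thresholds (100~150 C onset, ~350 C cutoff) are from the text;
# using a single 100 C onset value is our own simplification.
def megathrust_behavior(T_celsius, below_forearc_moho=False):
    """Expected frictional regime of the subduction megathrust."""
    if below_forearc_moho:
        # Hydrated mantle-wedge corner (talc, serpentine): stable even below 350 C.
        return "velocity strengthening (aseismic)"
    if T_celsius < 100:
        return "velocity strengthening (aseismic): hydrous, unconsolidated material"
    if T_celsius <= 350:
        return "velocity weakening (seismogenic zone)"
    return "velocity strengthening / viscous (aseismic)"

for T in (60, 200, 400):
    print(f"{T} C: {megathrust_behavior(T)}")
print("250 C below forearc Moho:", megathrust_behavior(250, below_forearc_moho=True))
```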
In addition to the frictional properties of the fault-zone material, another important class of factors controlling earthquake magnitude is the internal structure and geometry of the fault zone. Treating a fault as a frictional interface is only an approximation made to simplify mathematical models; actual fault zones, especially large ones like subduction megathrusts, have complex internal structures and geometric variations [7]. Large fault zones generally consist of a wide \"damage zone\" within which a relatively narrow \"core zone\" is sandwiched. The core zone is the main zone of shear deformation and often contains seismic slip surfaces only a few millimeters thick [7]. Damage zones include fissures, small faults, blocks of parent rock, breccias, and crushed cataclastic material. On the whole, a large fault is a shear zone that is constantly being reworked: the closer to the center, the more concentrated the shear deformation and the higher the strain rate. Earthquakes on those thin, flat seismic slip surfaces are extreme manifestations of high-speed, concentrated shear deformation. For an earthquake to be large, the slip surface must have sufficient smoothness and continuity to favor the propagation of rupture; both structural discontinuities and geometric irregularities hinder rupture propagation. Much of our understanding of how fault-zone structure and geometry control earthquake size comes from studies of faults on land, especially large strike-slip faults like the San Andreas in California. We know much less about subduction zones, because megathrusts are hard to observe directly: where they emerge at the \"surface\" is the trench, thousands of meters below the sea. Our understanding of their structure comes mainly from interpreting geophysical measurements or from inferences based on rock experiments and borehole observations [8,9]. Exhumed rocks of ancient subduction faults also provide much valuable information [10-12], but these rocks experienced various tectonic movements over the long geological time after the subduction zone ceased activity, and identifying the effects of this later tectonism adds difficulty to their study. Some of the major geological and mechanical processes in large strike-slip faults must also operate in subduction megathrusts. It is now common knowledge that rocks at the base of the upper plate close to the fault often break away from their parent rock and are carried deeper by the lower plate, while rocks at the top of the lower plate close to the fault often break away and are captured by the upper plate. This destruction of the parent rocks on both sides of the slip zone is similar to the evolution of strike-slip fault damage zones, possibly on a larger scale; the influence of this structural evolution on great earthquakes has only recently attracted attention [11,13]. The greater the cumulative slip of a strike-slip fault, the smoother it tends to become, which favors the occurrence of large earthquakes [14]. Other things being equal, subduction megathrusts should also tend to become smoother and flatter. Yet subduction faults differ from all other faults, and understanding the uniqueness of their structural evolution helps us understand the factors controlling the size of subduction zone earthquakes [6]. What are the characteristics of their structural evolution? First, a large amount of sediment is carried in as the plate subducts (Fig. 1). These deposits include fine-grained deep-sea sediments and coarse-grained continental-margin sediments. The sediments continuously entering the subduction zone from the trench greatly affect the fault's seismicity: their mineral composition, grain size, and pore-water content, and the changes in these properties as the fault zone shears and slides, all affect the frictional properties of the fault.
Sediments also increase the smoothness of the fault, helping earthquake rupture to propagate [15]; the mega-earthquakes mentioned in the first paragraph of this article all occurred in subduction zones unusually rich in trench sediments. Second, various seabed landforms are carried in as the plate subducts (Fig. 1). The oceanic crust is rough and uneven when formed at the mid-ocean ridge, acquires new faults during cooling and during flexure before subduction, and sometimes undergoes later magmatic activity. An uneven oceanic-crust surface increases the roughness of the megathrust after subduction, which hinders rupture propagation; the effect of sediments is just the opposite. The most spectacular examples are subducted seamounts: during their subduction the structure and geometry of the fault zone change greatly, and they cause stress concentrations while impeding rupture propagation, so they tend to produce moderate and small earthquakes [16] but to hinder great ones [17]. Third, the subducting slab undergoes a series of mechanical and metamorphic changes during its descent. Not only the sedimentary layer but the basaltic oceanic crust itself undergoes a series of mineral dehydration reactions as it descends to depths of tens of kilometers [18]. The shrinkage of the rock mass accompanying these phase transitions fractures the subducted oceanic crust, and the water released reduces the strength of the rock [19]. As the slab descends, flexure and stretching generate new faults within it or increase the offsets of existing ones (Fig. 1). All of this increases the roughness of the megathrust and makes it less favorable to seismic rupture propagation. To sum up, the factors that control the size of subduction zone earthquakes are of two types: the frictional properties of the fault-zone material, and the structural and geometric characteristics of the fault zone. Both are affected by regional tectonics, temperature, pressure, and pore water. Working out how the various factors operate will require extensive geophysical observation, laboratory rock-failure and friction experiments, and theoretical simulation; coseismic and interseismic displacements along fault zones also need fine monitoring, and studies of ancient subduction faults and comparisons with other types of faults are also essential.", "The main rock-forming minerals of igneous rocks, such as olivine, pyroxene, hornblende, biotite, and basic plagioclase, are very scarce in sedimentary rocks, while minerals abundant in igneous rocks, such as potassium feldspar, acid plagioclase, and quartz, also occur in large amounts in sedimentary rocks. However, feldspar is more abundant in igneous rocks than in sedimentary rocks, while quartz is more abundant in sedimentary rocks than in igneous rocks. New minerals formed during diagenesis include clay minerals, carbonate minerals, and salt minerals, which are abundant in sedimentary rocks but scarce or absent in igneous rocks; organic carbon formed by biological action during sedimentation and diagenesis is unique to sedimentary rocks.", "Omitted (6 points)", "Briefly describe the evidence for seafloor spreading. (7 points) Answer: 1.
Symmetry of geological phenomena: from the mid-ocean ridge toward both sides, the degree of weathering of the bedrock deepens gradually, and the seabed strata thicken from thin to thick. (2 points) 2. The symmetric distribution of magnetic stripes on both sides of the mid-ocean ridge, with a regular pattern of normal and reversed magnetic anomalies. (2 points) 3. The symmetric distribution of strata on both sides of the mid-ocean ridge, becoming older from the ridge outward, with the oldest strata no older than about 200 million years. (3 points)", "The fundamental cause of earthquakes is plate movement. The relative motion of the plates causes geostress to accumulate at plate margins, and the sudden release of the strain energy causes earthquakes. The distribution of earthquakes is therefore controlled by plate boundaries, and the world's earthquakes are mainly distributed in four regions: the circum-Pacific seismic belt; the Mediterranean-Indonesia seismic belt; the mid-ocean ridge seismic belts; and the continental rift seismic belts.", "Weathering is the destruction of surface rocks under various geological agents. It includes three types: physical (mechanical) weathering, chemical weathering, and biological weathering. Mechanical weathering is caused mainly by temperature changes, changes in the state of water (freezing and thawing, and growth of salt crystals), unloading of rock overburden, and the action of growing plant roots. Chemical weathering is the chemical decomposition of rocks, mainly through oxidation, dissolution, hydrolysis, hydration, and other important chemical reactions. Biological weathering is the process in which, during the life activities of organisms, biochemical reactions between metabolic and decay products and the chemical elements of rock-forming minerals destroy the original minerals or rocks.", "The weathering products of the parent rock are of three types. First, terrigenous clastic material: debris formed by mechanical weathering of the parent rock and subjected to mechanical transport and deposition, such as quartz and feldspar. Second, clay material: clay minerals decomposed mainly from feldspar during chemical weathering of the parent rock. Third, chemical and biochemical material: sediments derived from the chemical decomposition of the parent rock, mainly Al2O3, Fe2O3, FeO, SiO2, CaO, Na2O, K2O, MgO, etc.", "Mechanical deposition occurs when the gravity of debris exceeds the carrying capacity of the flow. Because the velocity and discharge of flowing water vary, and the size, shape, and specific gravity of the debris differ, deposition follows a certain order. By grain size, coarse debris is deposited first, grading to the finest; by specific gravity, heavier particles are deposited before lighter ones. Thus during deposition the originally mixed coarse, fine, light, and heavy materials are laid down in a definite order; this is called mechanical sedimentary differentiation. As a result the sediments form a regular zoned distribution along the transport direction in the order gravel - sand - silt - clay.
Therefore, after consolidation, these sediments form conglomerate, sandstone, siltstone, and claystone respectively.", "Diagenesis is the process by which loose sediment becomes consolidated rock after deposition. It includes the following three aspects:", "Groundwater contains abundant CO2 and organic acids. In areas where soluble carbonate rocks are widely distributed, groundwater flows along bedding planes and pores and continuously dissolves the rocks along its path. Dissolution by groundwater is the main agent, and together with surface water it produces special landforms at and below the surface; these landforms and the processes forming them are called karst. The basic conditions for karst development are thick soluble rocks with gentle attitudes and well-developed joints and other fissures, together with abundant flowing groundwater.", "Clastic rock containing more than 50% gravel (grain size > 2 mm) is called conglomerate. Clastic rock with more than 50% terrigenous clasts of grain size 2~0.1 mm is called sandstone. Clastic rock with more than 50% silt-grade clasts (grain size 0.05~0.005 mm) is called siltstone. Claystone refers mainly to loose or consolidated rock composed of fine particles of grain size < 0.005 mm and containing large amounts of clay minerals (kaolinite, montmorillonite, hydromica, etc.).", "In a vertical section of fluvial facies, the lower part is the channel subfacies, usually the main body of fluvial deposition: it is generally thick and consists mainly of channel lag conglomerate and sandstone (point bars or mid-channel bars), and because it lies at the base of the fluvial section it is also called the bottom deposit. The upper part of the section comprises the bank subfacies and the overbank subfacies, called the top deposits of the fluvial facies and composed mainly of fine-grained sediments such as siltstone and claystone. The upward-fining cycle formed in the vertical section by the combination of bottom and top deposits is called the binary structure of river deposits.", "The contact relationship of a set of continuously deposited strata of different ages is called conformable contact: the upper and lower strata are continuous without a break, their lithology and contained fossils are consistent or gradational, and their attitudes are essentially the same. Where there is a depositional break between the upper and lower strata, that is, part of the succession is missing between the two sets of strata, the contact is called unconformable; it reflects that the region was exposed above the surface for a long time and subjected to weathering and denudation, and the depositional break between the two sets of strata is the unconformity. According to the attitudes of the strata above and below the unconformity surface and the tectonic movement they reflect, unconformities are divided into: parallel unconformity, in which the strata above and below the surface have essentially the same attitude, reflecting uplift of the region as a whole; and angular unconformity, in which the strata above and below the surface have different attitudes and meet at an angle, reflecting uneven vertical movement or horizontal movement that folded or tilted the strata.
"In the vertical section of fluvial facies, the lower part is the riverbed subfacies, usually the main body of fluvial deposition: it is generally thick and consists mainly of channel-lag conglomerate and sandstone (point bars or mid-channel bars), and because it lies at the base of the fluvial section it is also called the bottom sediment. The upper part of the section comprises the bank subfacies and the overbank subfacies, called the top sediments, composed mainly of fine-grained deposits such as siltstone and claystone. The positive cycle, coarse below and fine above, formed in the vertical section by the combination of bottom and top sediments is called the binary structure of river sediments.",
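The "coarse below, fine above" test can be sketched as a simple check on a measured section. This is only an illustration of the positive-cycle criterion; the section data are hypothetical, with grain sizes in mm listed from bottom to top.

```python
# Minimal sketch of the binary-structure criterion: a fluvial vertical
# section should fine upward from bottom sediments to top sediments.

def has_binary_structure(grain_sizes_bottom_to_top):
    """True if the section is a positive cycle: coarse below, fine above."""
    s = grain_sizes_bottom_to_top
    return all(a >= b for a, b in zip(s, s[1:]))

channel_section = [30.0, 2.0, 0.5, 0.05, 0.003]  # lag gravel -> sand -> silt -> clay
print(has_binary_structure(channel_section))      # True
```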
"The contact relationship between strata of different ages deposited continuously is called conformable contact: the upper and lower strata are continuous without a break, the lithology and contained fossils are consistent or gradational, and the attitudes are essentially the same. Where there is a depositional break between the upper and lower strata, i.e. part of the succession is missing between the two sets of strata, the contact is called unconformable; it records a region being raised above the surface for a long time and subjected to weathering and denudation. The depositional break between the two sets of strata is called an unconformity. According to the attitude relationship between the strata above and below it and the tectonic movement it reflects, an unconformity is divided into: parallel unconformity, where the strata above and below the unconformity surface have essentially the same attitude, reflecting uplift of the region as a whole; and angular unconformity, where the strata above and below the surface have different attitudes and meet at an angle, reflecting uneven uplift and subsidence or horizontal movement that folded or tilted the strata. In an angular unconformity the younger strata are parallel to the unconformity surface while the older strata are oblique to it.",
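The classification just stated reduces to two questions: is part of the succession missing, and do the attitudes above and below the contact agree? A minimal sketch under those simplifications follows; reducing each set of strata to a single dip angle, and the tolerance value, are hypothetical.

```python
# Minimal sketch of the contact-classification rule above.

def classify_contact(dip_below_deg, dip_above_deg,
                     strata_missing, tol_deg=5.0):
    """Classify a stratigraphic contact by the rules in the text."""
    if not strata_missing:
        return "conformable contact"
    if abs(dip_below_deg - dip_above_deg) <= tol_deg:
        # attitudes essentially the same: uplift of the region as a whole
        return "parallel unconformity"
    # attitudes differ: folding or tilting preceded renewed deposition
    return "angular unconformity"

print(classify_contact(3, 2, strata_missing=False))  # conformable contact
print(classify_contact(4, 3, strata_missing=True))   # parallel unconformity
print(classify_contact(45, 5, strata_missing=True))  # angular unconformity
```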
"Omitted (8 points)",
"The crystallization of magma from high to low temperature comprises two parallel evolutionary series. One is the continuous solid-solution reaction series of the light-coloured (silica-alumina) minerals, the plagioclases: from calcium-rich to sodium-rich plagioclase, i.e. from basic to acidic plagioclase. In this series the crystal lattice changes little while the composition changes continuously; it is essentially a continuous isomorphous process. The other is the discontinuous reaction series of the dark (ferromagnesian) minerals, crystallizing in the order olivine, pyroxene, hornblende, biotite. In this series there is no continuous compositional transition between successive minerals; instead the magma reacts with the earlier-formed mineral to produce a new one, and the crystal framework changes markedly from one mineral to the next. As the temperature falls, the two series merge in the late stage of magma evolution into a single reaction series that crystallizes potassium feldspar, then muscovite, and finally precipitates quartz. The Bowen reaction series explains, to a certain extent, the order of crystallization and the paragenetic associations of minerals in magma, and offers a simple key to the classification of igneous rocks. The vertical direction gives the order of crystallization from high to low temperature; minerals at the same horizontal position crystallize at roughly the same time and, following the paragenetic rules, form particular rock types. For example, pyroxene and calcium-rich plagioclase form basic rocks and cannot be associated with quartz, while the potassium feldspar, sodium-rich plagioclase, quartz and biotite of acidic rocks cannot coexist with olivine. The farther apart two minerals are in the vertical direction, the smaller their chance of intergrowth.",
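The series can be written down as a small data structure, which also lets the qualitative rule at the end ("the farther apart vertically, the less chance of intergrowth") be checked mechanically. The branch contents follow the text; the numeric cutoff in the compatibility check is a hypothetical stand-in for that qualitative statement.

```python
# Minimal sketch of the Bowen reaction series as a data structure.

DISCONTINUOUS = ["olivine", "pyroxene", "hornblende", "biotite"]   # dark minerals
CONTINUOUS = ["Ca-rich plagioclase", "Ca-Na plagioclase",
              "Na-rich plagioclase"]                               # plagioclases
COMMON_TAIL = ["potassium feldspar", "muscovite", "quartz"]        # late, merged

def level(mineral):
    """Vertical (temperature) position: 0 = highest temperature."""
    for branch in (DISCONTINUOUS, CONTINUOUS):
        if mineral in branch:
            return branch.index(mineral)
    n = max(len(DISCONTINUOUS), len(CONTINUOUS))
    return n + COMMON_TAIL.index(mineral)

def likely_associated(m1, m2, max_gap=2):
    """Crude paragenesis check: minerals far apart vertically have
    little chance of intergrowth (the cutoff is hypothetical)."""
    return abs(level(m1) - level(m2)) <= max_gap

print(likely_associated("pyroxene", "Ca-rich plagioclase"))  # True: basic rocks
print(likely_associated("olivine", "quartz"))                # False: never together
```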
"The main factors affecting and controlling metamorphism are temperature, pressure and chemically active fluids. These factors do not act in isolation; they usually operate together, coordinated with and constraining one another, with different factors dominant in different circumstances, giving rise to metamorphism of different character. Temperature is generally the most important factor: as it rises, the mobility of molecules and atoms in the rock increases, creating the precondition for metamorphism and causing mainly recrystallization and the formation of new minerals. Pressure acts in two ways. Static pressure, due to the weight of the overlying material, increases with depth; it raises the temperature of metamorphic reactions and favours minerals of smaller molecular volume and higher specific gravity. Stress is directed pressure related to tectonic movement, stronger in the shallow crust and weaker at depth. In the shallow crust, where the stress of crustal movement is most concentrated, it mainly produces changes in rock structure (mechanical transformation); deep in the crust, where temperature is high, chemical reactions between minerals occur readily: material dissolves in the direction of maximum stress (pressure solution) and precipitates in the direction of minimum stress, so columnar and flaky minerals form under directed pressure. The fluids underground consist mainly of volatiles such as H2O, CO2, F, Cl and B; they occur in intergranular pores and fractures of minerals and may come from the pores of the protolith, from dehydration of protolith minerals, or from magma and the deep crust. Fluids act as solvents, promoting the dissolution of components and increasing diffusion rates, and thereby promote recrystallization and metamorphic reactions; they can also take part in metamorphic reactions as components, forming hydrous or anhydrous minerals. Aqueous solutions are also the indispensable medium by which material is introduced or removed during metasomatism.",
"Bedding is a layered structure formed by changes in mineral composition, colour, texture and other characteristics along the direction perpendicular to the original depositional surface. It is not only the basic structural feature of sedimentary rocks but also a good indicator for studying sedimentary environments or sedimentary facies. By morphology, bedding is generally divided into the following types. A. Horizontal bedding: the fine layers, and the interfaces between layers and sets, are parallel to one another; it forms mainly in fine silty and argillaceous rocks and is found mostly in sediments of slow or tranquil flow, such as floodplain, oxbow-lake, lagoon, swamp and closed-bay deposits. B. Wavy bedding: the fine layers are wavy, but their overall trend is mutually parallel and parallel to the bedding plane. It has two origins: oscillating waves produce symmetric wavy layers, seen mainly in the shallow-water zones of lakes, bays and lagoons; weak unidirectional flow produces asymmetric wavy layers, seen mostly in floodplain deposits. C. Oblique (cross) bedding: the fine layers are oblique to the set boundaries, and sets may overlap and cut one another. It is the structure seen in profile after sand ripples or sand waves formed in a current (or the wind) are buried. The dip direction of the fine layers reflects the flow direction (or wind direction) of the medium, and their thickness (equivalent to the height of the ripples or sand waves) reflects its velocity. D. Graded bedding: there are no distinct fine-layer boundaries; the bedding is expressed mainly by a change of grain size, grading from coarse at the bottom to fine at the top. It is characteristic of turbidity-current deposits and is fairly common. E. Massive bedding: the lithology is uniform from bottom to top and no internal bedding can be seen with the naked eye; the thickness is generally greater than 1 m. It is the product of rapid accumulation of sediment, and can also be caused by bioturbation.",
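For quick reference, the five types, their diagnostic geometry and their typical environments can be condensed into a lookup table. This is only a sketch restating the text; no information beyond the paragraph above is intended.

```python
# Minimal lookup table for the bedding types described above.

BEDDING_TYPES = {
    "horizontal": {
        "geometry": "fine layers parallel to each other and to the bed surface",
        "environment": "slow or tranquil flow: floodplain, oxbow lake, lagoon, swamp, closed bay",
    },
    "wavy": {
        "geometry": "wavy fine layers, overall parallel to the bed surface",
        "environment": "oscillating waves (symmetric) or weak unidirectional flow (asymmetric)",
    },
    "oblique (cross)": {
        "geometry": "fine layers oblique to set boundaries; sets may overlap and cut one another",
        "environment": "migrating ripples or sand waves in currents or wind",
    },
    "graded": {
        "geometry": "no distinct fine-layer boundaries; grain size fines upward",
        "environment": "turbidity currents",
    },
    "massive": {
        "geometry": "uniform lithology, no visible internal bedding; usually > 1 m thick",
        "environment": "rapid accumulation, or bioturbation",
    },
}
```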
"Omitted.",
"Briefly describe the classification of sedimentary rocks and their main rock types (7 points) A: By genesis and composition, sedimentary rocks fall into two groups: clastic rocks, and chemical and biochemical rocks; in addition there are some sedimentary rocks formed under special conditions. (3') 1. Clastic rocks comprise sedimentary clastic rocks and pyroclastic rocks. Sedimentary clastic rocks are subdivided by grain size into conglomerate, sandstone, siltstone and claystone; pyroclastic rocks are subdivided by grain size into volcanic agglomerate, volcanic breccia and tuff. 2. Chemical and biochemical rocks mainly include aluminous, ferruginous and manganiferous rocks, siliceous and phosphatic rocks, carbonate rocks, evaporites and combustible organic rocks. 3. Special sedimentary rocks include tempestite and turbidite. (4')"
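The classification in this last answer is a small taxonomy, sketched below as a nested mapping. Group and member names follow the text; the structure itself is purely illustrative.

```python
# Minimal sketch of the sedimentary-rock classification above.

SEDIMENTARY_ROCKS = {
    "clastic rocks": {
        "sedimentary clastic rocks": ["conglomerate", "sandstone",
                                      "siltstone", "claystone"],
        "pyroclastic rocks": ["volcanic agglomerate", "volcanic breccia",
                              "tuff"],
    },
    "chemical and biochemical rocks": [
        "aluminous, ferruginous and manganiferous rocks",
        "siliceous and phosphatic rocks", "carbonate rocks",
        "evaporites", "combustible organic rocks",
    ],
    "special sedimentary rocks": ["tempestite", "turbidite"],
}
```
 ] } }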