Columns:
URL - string, 15 to 1.68k characters
text_list - sequence, 1 to 199 items
image_list - sequence, 1 to 199 items
metadata - string, 1.19k to 3.08k characters
https://www.turito.com/learn/physics/sound-grade-6
[ "#### Need Help?\n\nGet in touch with us\n\n# Sound – Definition and Types\n\n###", null, "Key Concepts\n\n• Introduction to sound energy\n• Sound wave\n• Parts of sound wave\n\n## Introduction to Sound\n\n### Introduction:\n\nSound in the form of energy travels with decreasing amplitude. Sound is a mechanical wave; hence, it requires a medium (solids, liquids and gases) to propagate.\n\nUnlike light, sound energy cannot travel through a vacuum.\n\n### Important points of sound energy:\n\n1. A sound is a form of energy.\n1. It is a mechanical wave.\n1. Sound needs a medium for propagation.\n1. Sound cannot travel through a vacuum.\n1. It travels with decreasing amplitude.\n\n### Sound wave:\n\nSound waves are of two types:\n\n1. Longitudinal wave\n1. Transverse wave\n\n### Longitudinal wave:\n\nThe particles in the wave oscillate parallel to the direction of propagation of the wave; such a type of wave is called a longitudinal wave.\n\nIn longitudinal waves, there are alternate compressions and rarefactions.\n\n### Compression:\n\nIf the particles in the wave are closer to each other in the longitudinal wave, then it is called a compression.\n\n### Rarefaction:\n\nIf the particles in the wave are far from each other in the longitudinal wave, then it is called a rarefaction.\n\n### Transverse wave:\n\nThe particles in the wave oscillate perpendicular to the direction of propagation of the wave; such a type of wave is called a transverse wave.\n\nIn transverse waves, there are alternate crests and troughs.\n\n#### Crest:\n\nThe maximum displacement in a transverse wave is called a crest.\n\n#### Trough:\n\nThe minimum displacement in a transverse wave is called a trough.\n\n### Sound wave:\n\n#### Amplitude:\n\nThe maximum displacement of the wave from the mean position or equilibrium position is called the amplitude in transverse waves.\n\nThe maximum displacement of the wave when a string at rest is plucked is called the amplitude in longitudinal waves.\n\nAmplitude is denoted by A, and units of amplitude are m(meter).\n\nNote: Sound always travels with decreasing amplitude.\n\n#### Wavelength:\n\nThe distance between two consecutive crests or troughs, or the distance between two consecutive compressions or rarefactions, is called the wavelength.\n\nThe distance between a crest and the adjacent trough is half of the wavelength, and the distance between a compression and the adjacent rarefaction is also half of the wavelength.\n\nWavelength is denoted by λ(lambda), and units of the wavelength are m(meters) in the SI system.\n\n#### Frequency:\n\nThe number of cycles per second is called the frequency; one complete wave passing through a specific point is called a cycle.\n\nFrequency is denoted by ν(nu), and frequency units are Hz(hertz).\n\n#### Wave speed:\n\nThe distance travelled by the wave in a given amount of time is called the wave speed.\n\nSpeed of wave = wavelength x frequency\n\nWave speed is measured in meters per second (m/s), since meter × hertz = m/s.\n\n### Summary:\n\nSound in the form of energy travels with decreasing amplitude. Sound is a mechanical wave; hence, it requires a medium (solids, liquids, and gases) to propagate.\n\nUnlike light, sound energy cannot travel through a vacuum.\n\n#### Sound wave:\n\nSound waves are of two types:\n\n1. Longitudinal wave\n1. 
Transverse wave\n\n#### Longitudinal wave:\n\nThe particles in the wave oscillate parallel to the direction of propagation of the wave; such a type of wave is called a longitudinal wave.\n\nIn longitudinal waves, there are alternate compressions and rarefactions.\n\n#### Transverse wave:\n\nThe particles in the wave oscillate perpendicular to the direction of propagation of the wave; such a type of wave is called a transverse wave.\n\nIn transverse waves, there are alternate crests and troughs.\n\n#### Amplitude:\n\nThe maximum displacement of the wave from the mean position or equilibrium position is called the amplitude in transverse waves.\n\n#### Wavelength:\n\nThe wavelength is the distance between two consecutive crests or troughs, or between two consecutive compressions or rarefactions.\n\nWavelength is denoted by λ(lambda), and units of the wavelength are m(meters) in the SI system.\n\n#### Frequency:\n\nThe number of cycles per second is called the frequency; one complete wave passing through a specific point is called a cycle.\n\nFrequency is denoted by ν(nu), and units of the frequency are hertz.\n\n#### Wave speed:\n\nThe distance travelled by the wave in a given amount of time is called the wave speed.\n\nSpeed of wave = wavelength x frequency\n\nWave speed is measured in meters per second (m/s), since meter × hertz = m/s.\n\n#### Define Position Time Graph and its Types\n\nKey Concepts • Slope of a graph • Position time graph • Slope of s-t graph = Velocity • Types of position time graphs Introduction An object in a uniform motion covers equal distances in equal intervals of time. This also indicates that it moves at a constant velocity. When its position at different instants […]\n\n#### Magnetic Field Lines: Definition, Explanation and Q&A\n\nKey Concepts Magnetic Field Magnetic Field Lines properties of magnetic field lines Uniform and non uniform magnetic lines Introduction Two magnets when placed close to each other attract and stick to each other. However, if we go on increasing the distance between them, the attraction between them reduces gradually to such an extent that they […]\n\n#### The Life Cycles of Stars: Meaning and Example\n\nKey Concepts Stars Analysis of starlight Composition of stars Stars’ temperature Size and mass of stars Stages of life cycle of a star Introduction Stars are huge, shining balls of extremely hot gas (known as plasma) in space. The Sun is our nearest star. During the nighttime, many other stars are visible to the naked […]", null, "", null, "" ]
[ null, "https://www.turito.com/learn-internal/wp-content/uploads/2022/09/image-165.png", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9105348,"math_prob":0.96857816,"size":5784,"snap":"2022-40-2023-06","text_gpt3_token_len":1180,"char_repetition_ratio":0.14792387,"word_repetition_ratio":0.4226254,"special_character_ratio":0.19363762,"punctuation_ratio":0.10769231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99025726,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T10:55:23Z\",\"WARC-Record-ID\":\"<urn:uuid:5edd6bce-7cb9-4fc6-b133-3ae2fde6c972>\",\"Content-Length\":\"109954\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b45d22ed-17e0-48e6-b514-79662b6c46ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:f45c1005-5a05-4489-a1fe-bfdae3ea30f3>\",\"WARC-IP-Address\":\"65.1.150.125\",\"WARC-Target-URI\":\"https://www.turito.com/learn/physics/sound-grade-6\",\"WARC-Payload-Digest\":\"sha1:HUTPSOP2K2DNCPI2GMAHGOGN4WOEAWIZ\",\"WARC-Block-Digest\":\"sha1:EG2WJD35VCUXMEMV77ZAGXEK7XP32WF2\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500456.61_warc_CC-MAIN-20230207102930-20230207132930-00237.warc.gz\"}"}
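The lesson above gives the relation speed of wave = wavelength × frequency, with speed in metres per second. A minimal Python sketch of that arithmetic; the example wavelength and frequency are illustrative values, not taken from the page:

```python
def wave_speed(wavelength_m, frequency_hz):
    """Wave speed v = wavelength * frequency, in metres per second (m * Hz = m/s)."""
    return wavelength_m * frequency_hz

# Illustrative values: a 0.78 m sound wave at 440 Hz travels at about 343 m/s,
# which is roughly the speed of sound in air at room temperature.
print(wave_speed(0.78, 440))  # 343.2
```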
https://www.zbmath.org/?q=an%3A1300.76014
[ "# zbMATH — the first resource for mathematics\n\nCapillary drops on a rough surface. (English) Zbl 1300.76014\nSummary: We study liquid drops lying on a rough planar surface. The drops are minimizers of an energy functional that includes a random adhesion energy. We prove the existence of minimizers and the regularity of the free boundary. When the length scale of the randomly varying surface is small, we show that minimizers are close to spherical caps which are minimizers of an averaged energy functional. In particular, we give an error estimate that is algebraic in the scale parameter and holds with high probability.\n\n##### MSC:\n 76D45 Capillarity (surface tension) for incompressible viscous fluids 35B27 Homogenization in context of PDEs; PDEs in media with periodic structure 35R60 PDEs with randomness, stochastic partial differential equations 35R35 Free boundary problems for PDEs 49Q10 Optimization of shapes other than minimal surfaces\nFull Text:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8034844,"math_prob":0.91474545,"size":1215,"snap":"2021-31-2021-39","text_gpt3_token_len":306,"char_repetition_ratio":0.098265894,"word_repetition_ratio":0.02247191,"special_character_ratio":0.2452675,"punctuation_ratio":0.15625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95012134,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T23:50:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6cbce653-e18b-4690-b517-18938dfdeb28>\",\"Content-Length\":\"46696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:80ba70b1-d1c4-41ca-8d34-022f1e44b4bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:83b54671-dd3f-4373-bead-8134cbe7b856>\",\"WARC-IP-Address\":\"141.66.194.3\",\"WARC-Target-URI\":\"https://www.zbmath.org/?q=an%3A1300.76014\",\"WARC-Payload-Digest\":\"sha1:WKEO6NN26MDIZZRYFQZ2JMZY4XTM2BCQ\",\"WARC-Block-Digest\":\"sha1:2OJ5B6SSFYVVAKAF467EULXMPVWR5JAT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152156.49_warc_CC-MAIN-20210726215020-20210727005020-00613.warc.gz\"}"}
https://findthefactors.com/2018/02/17/1033-and-level-3/
[ "# 1033 and Level 3\n\nTo solve a level 3 puzzle, begin with 80, the clue at the very top of the puzzle. Clue 48 goes with it. What are the factor pairs of those numbers in which both factors are between 1 and 12 inclusive? 80 can be 8×10, and 48 can be 4×12 or 6×8. What is the only number that is listed for both 80 and 48? Put that number in the top row over the 80. Put the corresponding factors where they go starting at the top of the first column.\n\nWork down that first column cell by cell, finding factors and writing them as you go. Three of the factors have been highlighted because you have to at least look at the 55 and the 5 to deal with the 20 in the puzzle. Have fun!", null, "Print the puzzles or type the solution in this excel file: 12 factors 1028-1034\n\nHere are a few facts about the number 1033:\n\nIt is a twin prime with 1031.\n\n32² + 3² = 1033, so it is the hypotenuse of a Pythagorean triple:\n192-1015-1033 calculated from 2(32)(3), 32² – 3², 32² + 3²\n\n1033 is a palindrome in two other bases:\nIt’s 616 in BASE 13 because 6(13²) + 1(13) + 6(1) = 1033\n1J1 in BASE 24 (J is 19 base 10) because 24² + 19(24) + 1 = 1033\n\n8¹ + 8⁰ + 8³ + 8³ = 1033 Thanks to OEIS.org for that fun fact!\n\n• 1033 is a prime number.\n• Prime factorization: 1033 is prime.\n• The exponent of prime number 1033 is 1. Adding 1 to that exponent we get (1 + 1) = 2. Therefore 1033 has exactly 2 factors.\n• Factors of 1033: 1, 1033\n• Factor pairs: 1033 = 1 × 1033\n• 1033 has no square factors that allow its square root to be simplified. √1033 ≈ 32.1403\n\nHow do we know that 1033 is a prime number? If 1033 were not a prime number, then it would be divisible by at least one prime number less than or equal to √1033 ≈ 32.1. Since 1033 cannot be divided evenly by 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 or 31, we know that 1033 is a prime number.", null, "Here’s another way we know that 1033 is a prime number: Since its last two digits divided by 4 leave a remainder of 1, and 32² + 3² = 1033 with 32 and 3 having no common prime factors, 1033 will be prime unless it is divisible by a prime number Pythagorean triple hypotenuse less than or equal to √1033 ≈ 32.1. Since 1033 is not divisible by 5, 13, 17, or 29, we know that 1033 is a prime number.\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null, "https://i2.wp.com/findthefactors.com/wp-content/uploads/2018/02/1033-puzzle.jpg", null, "https://i1.wp.com/findthefactors.com/wp-content/uploads/2018/02/1033-factor-pairs.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92689854,"math_prob":0.9968378,"size":2182,"snap":"2021-43-2021-49","text_gpt3_token_len":719,"char_repetition_ratio":0.16345271,"word_repetition_ratio":0.059602648,"special_character_ratio":0.3877177,"punctuation_ratio":0.115079366,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996355,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T17:36:58Z\",\"WARC-Record-ID\":\"<urn:uuid:cc15ea93-1f18-4db8-a79b-20ef0ff7c24e>\",\"Content-Length\":\"53016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f208b555-5153-410a-960b-111d2d81deb6>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe2e6e6f-7972-4917-8614-97230d8a12eb>\",\"WARC-IP-Address\":\"192.0.78.153\",\"WARC-Target-URI\":\"https://findthefactors.com/2018/02/17/1033-and-level-3/\",\"WARC-Payload-Digest\":\"sha1:GQPZRGXANBZ2OR3KQFVC6566HODCS23I\",\"WARC-Block-Digest\":\"sha1:NRXQ3GFR66QW57RBLOHO2K5263EFPVEJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587915.41_warc_CC-MAIN-20211026165817-20211026195817-00307.warc.gz\"}"}
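The post above argues that 1033 is prime by trial division up to √1033 and lists its factor pairs and the Pythagorean triple generated from 32² + 3². A short Python sketch reproducing those checks:

```python
from math import isqrt

def is_prime(n):
    """Trial division by every integer up to sqrt(n), mirroring the post's argument."""
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

n = 1033
print(is_prime(n))                                                  # True
print([(d, n // d) for d in range(1, isqrt(n) + 1) if n % d == 0])  # [(1, 1033)] -- only one factor pair

# Pythagorean triple from 32^2 + 3^2 = 1033: legs 2ab and a^2 - b^2, hypotenuse a^2 + b^2
a, b = 32, 3
print(2 * a * b, a * a - b * b, a * a + b * b)                      # 192 1015 1033
```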
https://urchin.earth.li/~twic/Predictors_in_Image_Coding.html
[ "# Predictors in Image Coding\n\nThis is a short one. I'm talking about heuristics for predicting the value of a pixel based on its neighbours, for the purposes of generating residuals with less entropy than the input pixels, and which are thus more compressible. Read the PNG specification or an introductory text on image coding, or the prediction + residual coding strategy more generally, if you have no idea what i'm on about.\n\nI'm going to define the predictors in terms of pixels called N, W and NW, being the pixels lying to the north, west and northwest of the target, respectively. I'll call the predicted value P.\n\nPNG defines five predictors:\n\n• None: P = 0\n• Sub, aka West: P = W\n• Up, aka North: P = N\n• Average: P = (N + W) / 2\n• Paeth: Q = N + W - NW; dN = |N - Q|; dW = |W - Q|; dNW = |NW - Q|; P = (if dN < dW, dNW) N, (if dW < dN, dNW) W, (if dNW < dN, dW) NW. In English, compute a meta-predictor, Q, as N + W - NW (which corresponds to the value P would have if there was a constant gradient of colour in this part of the image), then pick the neighbouring pixel whose value is closest to the meta-predictor to use as a predictor.\n\nPaeth is a weird one; i would have thought the meta-predictor would itself make quite a good predictor! I think the point is that sticking to an already-used value makes more sense in images like drawings, where there are big blocks of constant colour.\n\nA popular new predictor is:\n\n• Median Adaptive: P = (if NW > max(N, W)) min(N, W), (if NW < min(N, W)) max(N, W), (else) N + W - NW\n\nBasically, this is applying the Paeth meta-predictor, but clamping the value to the range [min(N, W), max(N, W)]." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92517895,"math_prob":0.97278315,"size":1014,"snap":"2021-21-2021-25","text_gpt3_token_len":219,"char_repetition_ratio":0.13564357,"word_repetition_ratio":0.0,"special_character_ratio":0.20907298,"punctuation_ratio":0.12315271,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9947241,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T11:29:07Z\",\"WARC-Record-ID\":\"<urn:uuid:75c12b67-ff99-4b0c-a33d-91c4214b7148>\",\"Content-Length\":\"2333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:84a48432-3d38-4745-a172-6fdc903fc761>\",\"WARC-Concurrent-To\":\"<urn:uuid:a9494302-fbca-491b-ac37-2a31421a8b39>\",\"WARC-IP-Address\":\"185.73.44.122\",\"WARC-Target-URI\":\"https://urchin.earth.li/~twic/Predictors_in_Image_Coding.html\",\"WARC-Payload-Digest\":\"sha1:IWOMVXIXID3SQJBPYHD6QTFWBOPRENDT\",\"WARC-Block-Digest\":\"sha1:JRUSZG3PQBZ2ROSF5JPGNRCDUUPA2ZKK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487608702.10_warc_CC-MAIN-20210613100830-20210613130830-00398.warc.gz\"}"}
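The note above defines the PNG Paeth predictor and the median-adaptive predictor in prose. A small Python sketch of both, operating on the three neighbours N, W and NW; the tie-breaking order in `paeth` follows the PNG specification (left, then above, then upper-left), and the demo pixel values are made up:

```python
def paeth(n, w, nw):
    """PNG Paeth predictor: pick the neighbour closest to the gradient estimate q = N + W - NW."""
    q = n + w - nw
    dw, dn, dnw = abs(q - w), abs(q - n), abs(q - nw)
    if dw <= dn and dw <= dnw:   # the PNG spec breaks ties in favour of the left neighbour
        return w
    if dn <= dnw:
        return n
    return nw

def median_adaptive(n, w, nw):
    """Median-adaptive predictor: N + W - NW clamped to the range [min(N, W), max(N, W)]."""
    if nw > max(n, w):
        return min(n, w)
    if nw < min(n, w):
        return max(n, w)
    return n + w - nw

# Residual coding: store pixel - predictor instead of the pixel itself.
pixel, n, w, nw = 103, 100, 101, 99
print(paeth(n, w, nw), median_adaptive(n, w, nw))  # both predict 101 here
print(pixel - paeth(n, w, nw))                     # residual 2, smaller than the raw sample
```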
https://socratic.org/questions/what-is-hess-s-law-of-heat-summation
[ "# What is Hess's law of heat summation?\n\nJun 23, 2014\n\nHess's law of heat summation states that the total enthalpy change during a reaction is the same whether the reaction takes place in one step or in several steps.", null, "For example, in the above diagram,\n\nΔH_1 = ΔH_2 + ΔH_3 = ΔH_4 + ΔH_5 + ΔH_6.\n\nIn Hess's Law calculations, you write equations to make unwanted substances cancel out.\n\nSometimes you have to reverse an equation to do this, and you reverse the sign of ΔH.\n\nSometimes you have to multiply or divide a given equation, and you do the same thing to the ΔH.\n\nEXAMPLE\n\nDetermine the heat of combustion, ΔH_\"c\", of CS₂, given the following equations.\n\n1. C(s) + O₂(g) → CO₂(g); ΔH_\"c\" = -393.5 kJ\n2. S(s) + O₂(g) → SO₂(g); ΔH_\"c\" = -296.8 kJ\n3. C(s) + 2S(s) → CS₂(l); ΔH_\"f\" = 87.9 kJ\n\nSolution\n\nWrite down the target equation, the one you are trying to get.\n\nCS₂(l) + 3O₂(g) → CO₂(g) + 2SO₂(g)\n\nStart with equation 3. It contains the first compound in the target (CS₂).\n\nWe have to reverse equation 3 and its ΔH to put the CS₂ on the left. We get equation A below.\n\nA. CS₂(l) → C(s) + 2S(s); -ΔH_\"f\" = -87.9 kJ\n\nNow we eliminate C(s) and S(s) one at a time. Equation 1 contains C(s), so we write it as Equation B below.\n\nB. C(s) + O₂(g) → CO₂(g); ΔH_\"c\" = -393.5 kJ\n\nWe use Equation 2 to eliminate the S(s), but we have to double it to get 2S(s). We also double its ΔH. We then get equation C below.\n\nC. 2S(s) + 2O₂(g) → 2SO₂(g); ΔH_\"c\" = -593.6 kJ\n\nFinally, we add equations A, B, and C to get the target equation. We cancel things that appear on opposite sides of the reaction arrows.\n\nA. CS₂(l) → C(s) + 2S(s); -ΔH_\"f\" = -87.9 kJ\nB. C(s) + O₂(g) → CO₂(g); ΔH_\"c\" = -393.5 kJ\nC. 2S(s) + 2O₂(g) → 2SO₂(g); ΔH_\"c\" = -593.6 kJ\n\nCS₂(l) + 3O₂(g) → CO₂(g) + 2SO₂(g); ΔH_\"c\" = -1075.0 kJ\n\nHope this helps." ]
[ null, "http://www.docbrown.info/page03/3_51energy/HessLaw1.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8401527,"math_prob":0.9994248,"size":1572,"snap":"2020-45-2020-50","text_gpt3_token_len":521,"char_repetition_ratio":0.16262755,"word_repetition_ratio":0.05,"special_character_ratio":0.32569975,"punctuation_ratio":0.12396694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999393,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T17:04:01Z\",\"WARC-Record-ID\":\"<urn:uuid:03a685a2-c2b8-4f85-be45-6c5b31ea8749>\",\"Content-Length\":\"37955\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4487dcf9-3411-406d-8b3a-1796fbde68bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ea20064-93a6-4d1b-b1ee-2feca44166cd>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/what-is-hess-s-law-of-heat-summation\",\"WARC-Payload-Digest\":\"sha1:VANCGSQZI7EA32WOTIJGLBKDA5CPB6OJ\",\"WARC-Block-Digest\":\"sha1:4AY7EV3WGOXRF2PP5HWUNPK2NYEMG2JE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107904834.82_warc_CC-MAIN-20201029154446-20201029184446-00340.warc.gz\"}"}
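The worked example above reverses equation 3, keeps equation 1 and doubles equation 2, then sums the ΔH values. A tiny Python sketch of that bookkeeping:

```python
# Hess's law bookkeeping for the worked example: (multiplier, tabulated ΔH in kJ).
steps = [
    (-1, +87.9),   # equation 3 reversed: CS2(l) -> C(s) + 2 S(s)
    (+1, -393.5),  # equation 1 as given: C(s) + O2(g) -> CO2(g)
    (+2, -296.8),  # equation 2 doubled:  2 S(s) + 2 O2(g) -> 2 SO2(g)
]
delta_h = sum(mult * dh for mult, dh in steps)
print(round(delta_h, 1))  # -1075.0 kJ for CS2(l) + 3 O2(g) -> CO2(g) + 2 SO2(g)
```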
http://hejunhao.me/archives/1237
[ "# RNN/LSTM - Recurrent Neural Networks\n\n### 1. RNN vs LSTM\n\n#### RNN logical structure diagram", null, "#### LSTM logical structure diagram", null, "LSTM is a variant of RNN. Their principles are almost identical; the only difference is how the output, i.e. the hidden state, is computed.\n\n#### How does an RNN compute the output at a time step t?\n\n$h_t = tanh(W*[h_{t-1}, x_t]+b)$\n\n#### How does an LSTM compute the output at a time step t?\n\n$f_t = sigmoid(W_f*[h_{t-1}, x_t] + b_f)$\n\n${C}^{*} = tanh(W_c*[h_{t-1}, x_t]+b_c)$\n\n$i_t = sigmoid(W_i*[h_{t-1}, x_t] + b_i)$\n\n$C_t = f_t*C_{t-1} + i_t*{C}^{*}$\n\n$o_t = sigmoid(W_o*[h_{t-1}, x_t] + b_o)$\n\n$h_t = o_t*tanh(C_t)$\n\n#### Why is LSTM better than RNN?\n\nAn RNN updates its state multiplicatively, so when the sequence is long, gradients easily vanish or explode because of the chain of products in backpropagation. An LSTM updates its state additively through gates (see the formula for C_t), which effectively prevents the gradient problem; of course, for extremely long sequences an LSTM can still suffer vanishing or exploding gradients.\n\n### 2. What does num_units mean in an RNN/LSTM?", null, "num_units is the number of hidden neurons of the network. For example, the figure above shows one LSTM cell containing four neural-network layers (the yellow boxes); num_units is the number of hidden (fully connected) units in each of these layers. It is also the dimensionality of the LSTM output vector, so h_t is a num_units-dimensional vector.\n\n### 3. How do you count the parameters of a Keras LSTM layer?\n\n$(num\\_units + input\\_dims + 1) * num\\_units * 4 = 150600$\n\n1. num_units + input_dims, because the previous output is first concatenated with the input, i.e. [h_t-1, x_t],\n2. + 1 accounts for the bias,\n3. * 4 because there are four neural-network layers (the yellow boxes),\n4. Why is there no factor of time_steps, i.e. the number of cells?\n\n### 5. Why do the LSTM input and output use tanh rather than sigmoid as the activation function?\n\nThe LSTM maintains an internal state vector whose values should be able to both increase and decrease. The output of a sigmoid is non-negative, so the state could only grow, which is clearly unsuitable; tanh, in contrast, outputs both positive and negative values and therefore allows the state to increase or decrease.\n\n[1,2,3,4] => 5\n[2,3,4,5] => 6\n\n### 7. return_sequence in Keras", null, "### 8. The stateful concept of a Keras LSTM", null, "### 9. When is the state of a Keras LSTM reset?\n\nThe LSTM state is kept per sequence, so for a batch with batch_size = 10 it creates 10 parallel states for the 10 sequences at the same time.\n\n### 10. Common LSTM model structures\n\n#### Stacked (multi-layer) LSTM", null, "#### Bidirectional LSTM", null, "" ]
[ null, "http://hejunhao.me/wordpress/wp-content/uploads/2018/08/1.png", null, "http://hejunhao.me/wordpress/wp-content/uploads/2018/08/2.png", null, "http://hejunhao.me/wordpress/wp-content/uploads/2018/08/3.png", null, "http://hejunhao.me/wordpress/wp-content/uploads/2018/08/4-1-300x144.png", null, "http://hejunhao.me/wordpress/wp-content/uploads/2018/08/5.png", null, "http://hejunhao.me/wordpress/wp-content/uploads/2018/08/6.png", null, "http://hejunhao.me/wordpress/wp-content/uploads/2018/08/7.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.91729164,"math_prob":0.99907464,"size":3132,"snap":"2019-51-2020-05","text_gpt3_token_len":1967,"char_repetition_ratio":0.16719949,"word_repetition_ratio":0.01055409,"special_character_ratio":0.25510857,"punctuation_ratio":0.06095238,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98283964,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,2,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T12:07:21Z\",\"WARC-Record-ID\":\"<urn:uuid:9b480bf0-aa60-465f-9fc7-7c9151859620>\",\"Content-Length\":\"43461\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9e0eae27-1301-48d7-9547-b8eabe94b766>\",\"WARC-Concurrent-To\":\"<urn:uuid:04c70186-63de-4890-b023-1cd12662ea44>\",\"WARC-IP-Address\":\"119.29.22.179\",\"WARC-Target-URI\":\"http://hejunhao.me/archives/1237\",\"WARC-Payload-Digest\":\"sha1:IULEOUOBTKL2YZKTMMYVJ62K2JHU7Q52\",\"WARC-Block-Digest\":\"sha1:NOETLWGCLU2BTIITS4XLV4XNHP2SPL44\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250606975.49_warc_CC-MAIN-20200122101729-20200122130729-00185.warc.gz\"}"}
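The post quotes the parameter-count formula (num_units + input_dims + 1) * num_units * 4 = 150600. A quick Python check; num_units = 150 and input_dim = 100 are assumptions made here because they reproduce the quoted figure, the extracted text does not restate them:

```python
# Keras-style LSTM parameter count: 4 * num_units * (num_units + input_dim + 1).
num_units, input_dim = 150, 100   # assumed values; they reproduce the post's 150,600
params = (num_units + input_dim + 1) * num_units * 4
print(params)  # 150600
# There is no factor of time_steps: the same four weight matrices are shared across all steps.
```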
https://pythoninformer.com/generative-art/generativepy/triangle/
[ "# Triangle\n\nBy Martin McBride, 2020-08-26\nTags: geometry triangle\nCategories: generativepy generative art", null, "The Triangle class draws a triangle.\n\nThere is also a triangle function that just creates a triangle as a new path.\n\n## Triangle class methods\n\nThe Triangle class inherits add, fill, stroke, fill_stroke, path, clip and other methods from Shape.\n\n• of_corners\n\n### of_corners\n\nCreates a triangle based on a set of 3 points.\n\nof_corners(a, b, c)\n\nParameter Type Description\na (number, number) (x, y) tuple, giving the position of corner a of the triangle.\nb (number, number) (x, y) tuple, giving the position of corner b of the triangle.\nc (number, number) (x, y) tuple, giving the position of corner c of the triangle.\n\n## triangle function\n\nAdds a triangle as a new path, without the need to create a Triangle object in code.\n\ntriangle(ctx, a, b, c)\n\nParameter Type Description\nctx Context The Pycairo Context to draw to\na (number, number) (x, y) tuple, giving the position of corner a of the triangle.\nb (number, number) (x, y) tuple, giving the position of corner b of the triangle.\nc (number, number) (x, y) tuple, giving the position of corner c of the triangle.\n\n## Example\n\nHere is some example code that draws triangles using the class and the utility function. The full code can be found on github.\n\nfrom generativepy.drawing import make_image, setup\nfrom generativepy.color import Color\nfrom generativepy.geometry import triangle, Triangle\n\n'''\nCreate triangles using the geometry module.\n'''\n\ndef draw(ctx, width, height, frame_no, frame_count):\n    setup(ctx, width, height, width=500, background=Color(0.8))\n\n    # The triangle function is a convenience function that adds a triangle as a new path.\n    # You can fill or stroke it as you wish.\n    triangle(ctx, (100, 100), (150, 50), (200, 150))\n    ctx.set_source_rgba(*Color(1, 0, 0))\n    ctx.fill()\n\n    Triangle(ctx).of_corners((300, 100), (300, 150), (400, 200)).stroke(Color('orange'), 10)\n\nmake_image(\"/tmp/geometry-triangle.png\", draw, 500, 500)" ]
[ null, "https://pythoninformer.com/img/ads/numpybook-body-ad.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5715899,"math_prob":0.96923566,"size":3935,"snap":"2023-40-2023-50","text_gpt3_token_len":825,"char_repetition_ratio":0.13482575,"word_repetition_ratio":0.15732369,"special_character_ratio":0.20736976,"punctuation_ratio":0.125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9945351,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T05:37:07Z\",\"WARC-Record-ID\":\"<urn:uuid:f5f288ae-da96-4e76-af5e-02a44f56b758>\",\"Content-Length\":\"34004\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b9a0e0f-145b-44a2-bf18-f39eea6b0e65>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0b6c535-3e3a-41ff-9aa1-ec6a08a0a473>\",\"WARC-IP-Address\":\"89.117.139.101\",\"WARC-Target-URI\":\"https://pythoninformer.com/generative-art/generativepy/triangle/\",\"WARC-Payload-Digest\":\"sha1:Q5GM5W33BC63HSRI56VCQVOL4FPX2NNS\",\"WARC-Block-Digest\":\"sha1:VN2ASLNNKKDYPZDNI6VRLBQRPMHHHVDK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506686.80_warc_CC-MAIN-20230925051501-20230925081501-00888.warc.gz\"}"}
https://splice-bio.com/parametric-or-non-parametric-tests-for-qpcr-data/
[ "", null, "", null, "# Parametric or Non-Parametric Tests for qPCR Data?\n\nResearchers often wonder which statistical tests they should use with their qPCR data. Most frequently these questions arise when analysing gene expression data.\n\nThere are two broad groups of statistical tests: parametric and non-parametric. Parametric tests assume the underlying data have a normal distribution, whereas non-parametric tests do not.\n\nParametric tests are far more powerful and sensitive, so it is better to use them whenever possible.\n\nThis means that before doing statistical tests, we should analyse the data distribution (e.g. with a histogram). If the data distribution is obviously non-normal (in at least one of the groups you are comparing), or if the data were collected in the form of rankings rather than scores (“first”, “second”, “third”, ...), we have to use non-parametric tests.\n\nWith a small sample size (<30) it is difficult to check the normality of the distribution. Non-parametric tests are a good solution for small sample sizes.\nIf you are comparing two independent groups of samples (e.g. healthy and treatment) you can use a parametric test like the t-test or its non-parametric counterpart, the Mann-Whitney test (for repeated measurements use the Wilcoxon test).\n\nIf you are comparing more than two groups, you should use analysis of variance, ANOVA (a parametric test), or the Kruskal-Wallis test (a non-parametric test).\nTo sum up a few tips: use t-tests (or ANOVA) unless the data are obviously non-normally distributed, are in the form of ranks, or you have small sample groups.\n\nBy Matjaz Hren, PhD, COO, Head of Research and Development BioSistemika LLC", null, "### Leave us a comment:\n\nWe also recommend you to read:" ]
[ null, "https://splice-bio.com/wp-content/uploads/2015/03/Statistic-tests-870x250.jpg", null, "https://splice-bio.com/wp-content/themes/creative/assets/img/overlay-zoom.png", null, "https://googleads.g.doubleclick.net/pagead/viewthroughconversion/872304814/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9092767,"math_prob":0.83632535,"size":1647,"snap":"2020-45-2020-50","text_gpt3_token_len":354,"char_repetition_ratio":0.14911747,"word_repetition_ratio":0.0,"special_character_ratio":0.19914997,"punctuation_ratio":0.102649,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.951748,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T22:46:20Z\",\"WARC-Record-ID\":\"<urn:uuid:4615b11f-4c66-473f-a7d4-d42c56c0113f>\",\"Content-Length\":\"83639\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:923bdd8d-7273-4e0d-91ed-4e8ba91fde43>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc2186a8-aa0c-446a-a667-e03cf228e23c>\",\"WARC-IP-Address\":\"35.208.9.33\",\"WARC-Target-URI\":\"https://splice-bio.com/parametric-or-non-parametric-tests-for-qpcr-data/\",\"WARC-Payload-Digest\":\"sha1:6FOHO7WVFDKXHRHRYNGQHZGZEG3AOSVN\",\"WARC-Block-Digest\":\"sha1:GL6LBXCB3VUROH5FTBQYAVQJ7TA7TNQP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107890028.58_warc_CC-MAIN-20201025212948-20201026002948-00017.warc.gz\"}"}
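A minimal scipy.stats sketch of the decision rule described above (check normality, then pick a parametric or non-parametric test); the expression values are simulated, and Shapiro-Wilk is used here as one possible normality check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated expression values for two independent groups (e.g. healthy vs. treated).
healthy = rng.normal(1.0, 0.3, size=12)
treated = rng.normal(1.6, 0.3, size=12)

# Normality check; with n < 30 such tests have limited power, which is why the
# article leans toward non-parametric tests for small samples.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (healthy, treated))

res = stats.ttest_ind(healthy, treated) if normal else stats.mannwhitneyu(healthy, treated)
print(normal, res.pvalue)

# More than two groups: stats.f_oneway (ANOVA) or stats.kruskal (Kruskal-Wallis);
# repeated measurements: stats.ttest_rel or stats.wilcoxon.
```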
https://brilliant.org/practice/conic-sections-level-1-2-challenges/
[ "", null, "Geometry\n\n# Conic Sections: Level 2 Challenges\n\nWhat is the number of intersections between $y=4$ and $x^2+y^2=9$?\n\nFor any real number $\\alpha$, the parabola $f_\\alpha (x) = 2x^2 + \\alpha x + 3 \\alpha$ passes through the common point $(a, b)$. What is the value of $a+b?$\n\nConsider the point $P(-1, 0)$ on the ellipse given by the equation $4x^2 + y^2 = 4$. There are two points $(a, b)$ and $(a, c)$ on the ellipse whose distance from $P$ is a maximum. What is the value of $a$?", null, "Point $O$ is the center of the ellipse with major axis $AB$ and minor axis $CD$. Point $F$ is one of the foci of this ellipse.\n\nIf $OF=6$, and the diameter of the inscribed circle of triangle $\\triangle{OCF}$ is $2$, then find $(AB)\\cdot (CD)$.\n\nThere are four lines that are tangent to both circles\n$x^{2}+y^{2}=1 \\quad \\text{ and } \\quad (x-6)^{2}+y^{2}=4.$\n\nWhat is the sum of the slopes of these four lines?" ]
[ null, "https://ds055uzetaobb.cloudfront.net/brioche/chapter/Conic%20Sections-TrcyNO.png", null, "https://ds055uzetaobb.cloudfront.net/brioche/uploads/xx7QA0n9v7-37516.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93977475,"math_prob":1.000008,"size":961,"snap":"2019-43-2019-47","text_gpt3_token_len":214,"char_repetition_ratio":0.15673982,"word_repetition_ratio":0.22404371,"special_character_ratio":0.22580644,"punctuation_ratio":0.14354067,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000094,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T02:45:57Z\",\"WARC-Record-ID\":\"<urn:uuid:765d4ff9-30e3-4bb0-9ddb-363d880bff6c>\",\"Content-Length\":\"92570\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61cdd104-dfd5-4611-97d2-947734f8c420>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5a7fcfb-3ca4-4663-93ac-a2304c9f5109>\",\"WARC-IP-Address\":\"104.20.35.242\",\"WARC-Target-URI\":\"https://brilliant.org/practice/conic-sections-level-1-2-challenges/\",\"WARC-Payload-Digest\":\"sha1:F36BFRQUJ54UJLT7YXLXANB6GLVPQBGT\",\"WARC-Block-Digest\":\"sha1:367C7YUKFKQLYMDGTQMNMB5RVGKKWBYO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667767.6_warc_CC-MAIN-20191114002636-20191114030636-00131.warc.gz\"}"}
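A small sympy check of the first two problems above: the horizontal line y = 4 misses the radius-3 circle, and the parabola family 2x² + αx + 3α passes through a single common point obtained by making the α-dependence vanish:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha', real=True)

# Problem 1: y = 4 meets x^2 + y^2 = 9 where x^2 + 16 = 9, which has no real solution.
print(sp.solve(sp.Eq(x**2 + 16, 9), x))   # []

# Problem 2: f_alpha(x) = 2x^2 + alpha*x + 3*alpha = 2x^2 + alpha*(x + 3).
f = 2*x**2 + alpha*x + 3*alpha
a_val = sp.solve(sp.Eq(sp.diff(f, alpha), 0), x)[0]   # alpha-coefficient x + 3 must vanish
b_val = f.subs(x, a_val)                              # y-value at the common point
print(a_val, b_val, a_val + b_val)                    # -3 18 15
```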
https://prioritizr.net/reference/category_layer.html
[ "Convert a RasterStack-class object where each layer corresponds to a different identifier and values indicate the presence/absence of that category into a RasterLayer-class object containing categorical identifiers.\n\ncategory_layer(x)\n\n## Arguments\n\nx Raster-class object containing multiple layers. Note that pixels must be 0, 1 or NA values.\n\n## Value\n\nRasterLayer-class object.\n\n## Details\n\nThis function is provided to help manage data that encompass multiple management zones. For instance, this function may be helpful for interpreting solutions for problems associated with multiple zones that have binary decisions.\n\n## See also\n\nbinary_stack.\n\n## Examples\n\n# create a binary raster stack\nx <- stack(raster(matrix(c(1, 0, 0, 1, NA, 0), nrow = 3)),\n           raster(matrix(c(0, 1, 0, 0, NA, 0), nrow = 3)),\n           raster(matrix(c(0, 0, 1, 0, NA, 1), nrow = 3)))\n\n# convert to category layer\ny <- category_layer(x)\n\n# plot categorical raster and binary stack representation\nplot(stack(x, y), main = c(\"x[]\", \"x[]\", \"x[]\", \"y\"), nr = 1)", null, "" ]
[ null, "https://prioritizr.net/reference/category_layer-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.602677,"math_prob":0.9877336,"size":823,"snap":"2019-51-2020-05","text_gpt3_token_len":208,"char_repetition_ratio":0.13553114,"word_repetition_ratio":0.03539823,"special_character_ratio":0.25030378,"punctuation_ratio":0.17419355,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99214214,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T06:00:39Z\",\"WARC-Record-ID\":\"<urn:uuid:3b67ae62-7a80-4b99-8ece-20d280914367>\",\"Content-Length\":\"13171\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb78d17f-7ae2-4cc7-bc02-46853378d51d>\",\"WARC-Concurrent-To\":\"<urn:uuid:cbb4ae1e-cb88-4676-bb1b-fdfb901cde7f>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://prioritizr.net/reference/category_layer.html\",\"WARC-Payload-Digest\":\"sha1:AIA7XGZ4OUMO26J5TOSRCHOJJYOQDHSN\",\"WARC-Block-Digest\":\"sha1:3R5OWZKQITDGS2YDZMLWAWS3PPCHUL3T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540496492.8_warc_CC-MAIN-20191207055244-20191207083244-00449.warc.gz\"}"}
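The reference page above documents the R function; the following NumPy sketch is only an analogue of the documented behaviour (not the package's implementation): each input layer is a 0/1 mask for one category, and the output holds 1-based category ids with NA (here NaN) propagated. The toy data mirror the R example:

```python
import numpy as np

def category_layer(stack):
    """stack: array of shape (n_layers, rows, cols) with 0/1/NaN values.
    Returns one layer of 1-based category ids; cells that are NaN in any layer stay NaN."""
    cat = np.argmax(stack, axis=0).astype(float) + 1   # index of the layer holding the 1
    cat[np.isnan(stack).any(axis=0)] = np.nan          # propagate missing cells
    return cat

x = np.array([
    [[1, 1], [0, np.nan], [0, 0]],   # layer 1
    [[0, 0], [1, np.nan], [0, 0]],   # layer 2
    [[0, 0], [0, np.nan], [1, 1]],   # layer 3
])
print(category_layer(x))
# [[ 1.  1.]
#  [ 2. nan]
#  [ 3.  3.]]
```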
http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/node12.html
[ "", null, "", null, "", null, "", null, "Inverse B-spline interpolation", null, "", null, "Next: Conclusions Up: Inverse Interpolation and Data Previous: Test example\n\n## Application to 3-D seismic data regularization\n\nIn this subsection, I demonstrate an application of B-spline inverse interpolation for regularizing three-dimensional seismic reflection data. The dataset of this example comes from the North Sea and was used before for testing AMO (Biondi et al., 1998) and common-azimuth migration (Biondi, 1996). Figure 33 shows the midpoint geometry and the corresponding bin fold for a selected range of offsets and azimuths. The goal of data regularization is to create a regular data cube at the specified bins from the irregular input data, preprocessed by NMO. As typical of marine acquisition, the fold distribution is fairly regular but has occasional gaps caused by the cable feathering effect.", null, "cmpfold\nFigure 33.\nMidpoint geometry (left) and fold distribution (right) for the 3-D data test", null, "", null, "", null, "The data cube after normalized binning (inverse nearest neighbor interpolation) is shown in Figure 34. Binning works reasonably well in the areas of large fold but fails to fill the zero fold gaps and has an overall limited accuracy.", null, "bin1\nFigure 34.\n3-D data after normalized binning", null, "", null, "", null, "Inverse interpolation using bi-linear interpolants significantly improves the result (Figure 35), and inverse B-spline interpolation improves the accuracy even further (Figure 36). In both cases, I regularized the data in constant time slices, using recursive filter preconditioning with plane-wave destructor filters analogous to those in Figure 28. The plane wave slope was estimated from the binned data with the method of Fomel (2000a). The inverse interpolation results preserve both flat reflection events in the data and steeply-dipping diffractions. When data regularization is used as a preprocessing step for common-azimuth migration (Biondi and Palacharla, 1996), preserving diffractions is important for correct imaging of sharp edges in the subsurface structure.", null, "int2\nFigure 35.\n3-D data after inverse interpolation with bi-linear interpolants", null, "", null, "", null, "", null, "int4\nFigure 36.\n3-D data after inverse interpolation with third-order B-spline interpolants", null, "", null, "", null, "", null, "", null, "", null, "", null, "Inverse B-spline interpolation", null, "", null, "Next: Conclusions Up: Inverse Interpolation and Data Previous: Test example\n\n2014-02-15" ]
[ null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/next.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/up.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/previous.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/left.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/right.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/pdf.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/sei3d/Fig/cmpfold.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/pdf.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/viewmag.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/configure.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/sei3d/Fig/bin1.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/pdf.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/viewmag.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/configure.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/sei3d/Fig/int2.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/pdf.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/viewmag.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/configure.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/sei3d/Fig/int4.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/pdf.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/viewmag.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/configure.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/next.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/up.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/previous.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/left.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/right.png", null, "http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/icons/pdf.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86803365,"math_prob":0.87477285,"size":2090,"snap":"2019-13-2019-22","text_gpt3_token_len":444,"char_repetition_ratio":0.1581975,"word_repetition_ratio":0.013071896,"special_character_ratio":0.2,"punctuation_ratio":0.07714286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9639551,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-23T18:45:26Z\",\"WARC-Record-ID\":\"<urn:uuid:7870daae-ca5d-42c9-97c8-5fd02a92ded9>\",\"Content-Length\":\"8805\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ae769fb-5635-4aa5-94cd-7cc4978c7c66>\",\"WARC-Concurrent-To\":\"<urn:uuid:24b19c41-ddde-415b-b471-f531b3aec333>\",\"WARC-IP-Address\":\"66.33.210.126\",\"WARC-Target-URI\":\"http://www.reproducibility.org/RSF/book/sep/bspl/paper_html/node12.html\",\"WARC-Payload-Digest\":\"sha1:QQGSNVSZMIHWQJIL3B3V3L56SHRVPURV\",\"WARC-Block-Digest\":\"sha1:FVBO34AVCFMWF3UUHUSPRWF53YGHSU7B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257361.12_warc_CC-MAIN-20190523184048-20190523210048-00449.warc.gz\"}"}
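The section above treats data regularization as inverse interpolation: a forward operator maps a regular grid to the irregular acquisition positions, and the gridded values are estimated by regularized least squares. The 1-D toy below illustrates only that idea; it uses linear interpolation and an explicit second-difference penalty in place of the B-splines and recursive filter preconditioning used in the paper, and all numbers are made up:

```python
import numpy as np

def linear_interp_operator(xs, grid):
    """Dense forward operator L: (L m)[i] linearly interpolates the grid values m at xs[i]."""
    L = np.zeros((len(xs), len(grid)))
    dx = grid[1] - grid[0]
    for i, xv in enumerate(xs):
        j = min(int((xv - grid[0]) / dx), len(grid) - 2)
        w = (xv - grid[j]) / dx
        L[i, j], L[i, j + 1] = 1 - w, w
    return L

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 50)
xs = np.sort(rng.uniform(0.0, 1.0, 30))      # irregular "acquisition" positions
d = np.sin(2 * np.pi * xs)                   # observed values at those positions

L = linear_interp_operator(xs, grid)
D = np.diff(np.eye(len(grid)), 2, axis=0)    # second-difference roughness penalty
eps = 0.1
A = np.vstack([L, eps * D])
b = np.concatenate([d, np.zeros(D.shape[0])])
m = np.linalg.lstsq(A, b, rcond=None)[0]     # regularized estimate of the gridded data
print(float(np.abs(m - np.sin(2 * np.pi * grid)).max()))
```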
https://itprospt.com/num/15247616/evaluate-the-integral4rzdswhen-s-is-the-part-of-the-planet
[ "5\n\n# Evaluate the integral4rzdSwhen S is the part of the planeT + 29 + 2-shown inenclosed by the cylinder2+y? = 4....\n\n## Question\n\n###### Evaluate the integral4rzdSwhen S is the part of the planeT + 29 + 2-shown inenclosed by the cylinder2+y? = 4.\n\nEvaluate the integral 4rzdS when S is the part of the plane T + 29 + 2- shown in enclosed by the cylinder 2+y? = 4.", null, "", null, "#### Similar Solved Questions\n\n##### Let F be the outward flux of the vector field F (4x, 2y.0) over the part of the sphere Of radius 2 centered at the origin between the planes < = - 1.0712 and < = 0.5396. The value of sin F is (A) ~0.790653 (B) 0.263694 0.628161 (D) -0.664158 -0.985565 -0.348672 0.26484 (H) 0.937302\nLet F be the outward flux of the vector field F (4x, 2y.0) over the part of the sphere Of radius 2 centered at the origin between the planes < = - 1.0712 and < = 0.5396. The value of sin F is (A) ~0.790653 (B) 0.263694 0.628161 (D) -0.664158 -0.985565 -0.348672 0.26484 (H) 0.937302...\n##### #3 Gaed faiule: From Wucceilui Dterdt N Waae the mean Bregnancy term wJs 274 davs hundard Heuantion otlOrcrmingRNITKnoxn (romn prviau} dutadayieCortrruct 45 Zconflidence interyal for the Mejn prcanangy torm;(b ) Wkt Sxle 3iz2 Wovll b2 Needed_ have Mard 'N of erroc of a F MosF 2 Jays CsEll wil 951 Covfi devce\n#3 Gaed faiule: From Wucceilui Dterdt N Waae the mean Bregnancy term wJs 274 davs hundard Heuantion otl OrcrmingRNIT Knoxn (romn prviau} duta dayie Cortrruct 45 Zconflidence interyal for the Mejn prcanangy torm; (b ) Wkt Sxle 3iz2 Wovll b2 Needed_ have Mard 'N of erroc of a F MosF 2 Jays CsEll...\n##### 21. [0/3.7 Points]DETAILSPREVIOUS ANSWERSLARCALCETFind the limit: (If an answer does not exist, enter DNE:) lim 4x)2 3(x 4x) + 2 3x +2) 4x-0\n21. [0/3.7 Points] DETAILS PREVIOUS ANSWERS LARCALCET Find the limit: (If an answer does not exist, enter DNE:) lim 4x)2 3(x 4x) + 2 3x +2) 4x-0...\n##### Drat tnt Atktal ttnctnt ot th condenied Wructurltoa Ch =CHCR CICHJ](antBettunOlnecHa Ooly petiaidat BLue Ina Deh Katp Youf pape notl Jnd clan_ Wonn \"parnfi Durnm Totzt: Z0 pointsGlvan Ihe {ormula: (CH;eC(ch;CH;h LIryc (utal oamnulaOlol(LTQILAIl, MlidlyTocarntntn enormuThnt comnound enimalnonone Dleeenmin mqulalino the intp cycle Lhucturt Ehown on Ina nohtGlvenIlne-anato lommuli:Cegimotec ulyt laabWtntity chethtr tnt truatlonuuon Btior Invoheran Incnezze CrCte482 0 Do Enana k numbe o maroatn\nDrat tnt Atktal ttnctnt ot th condenied Wructurltoa Ch =CHCR CICHJ] (ant Bettun OlnecHa Ooly petiaidat BLue Ina Deh Katp Youf pape notl Jnd clan_ Wonn \"parnfi Durnm Totzt: Z0 points Glvan Ihe {ormula: (CH;eC(ch;CH;h LIryc (utal oamnula Olol (LTQILAIl, Mlidly Tocar ntntn enormu Thnt comnound eni...\n##### Points) 3. Find the tangent line to the given curve at the point specified by the value of the parameter. x=t+4t, y=2'_ -5 t=l\npoints) 3. Find the tangent line to the given curve at the point specified by the value of the parameter. x=t+4t, y=2'_ -5 t=l...\n##### (1 point) Consider the initial value problem y' + 3y = 27t, y(0) = 2a. Take the Laplace transform of both sides of the given differential equation to create the corresponding algebraic equation: Denote the Laplace transform of y(t) by Y(s) . Do not move any terms from one side of the equation to the other (until you get to part (b) below)help (formulas)b. Solve your equation for Y (s).Y(s) = L{y(t)}C Take the inverse Laplace\n(1 point) Consider the initial value problem y' + 3y = 27t, y(0) = 2 a. 
Take the Laplace transform of both sides of the given differential equation to create the corresponding algebraic equation: Denote the Laplace transform of y(t) by Y(s) . Do not move any terms from one side of the equation ...\n##### Given pulse rate for males 18-25 follow normal distribution with mean 72 and standard deviation 9.7 beats per minute: What percentage of males have pulse rate between 60 and 842 What is the 68th percentile for male pulse rates? What would be the IQR for pulse rates?\nGiven pulse rate for males 18-25 follow normal distribution with mean 72 and standard deviation 9.7 beats per minute: What percentage of males have pulse rate between 60 and 842 What is the 68th percentile for male pulse rates? What would be the IQR for pulse rates?...\n##### Problem4 Fntond412puints IaltJon cotpCe Fn4 SojuaAataed andptCuyonc.DauutaidT GutometMntTV Ad Relerred Total Walk-InDissatislied Neutral Satisficd ery Satislicd TTotalCalredeenbFnd #ha probacilay tt Cuttont enllie Boolflnc cualomc?DarbsieoOreenedandaval-nDnatooMttedgren releriedsalistrd & suw 8 Tv a0Aaeuci\nProblem 4 Fntond 412puints IaltJon cotpCe Fn4 Sojua Aataed andpt Cuyonc. Dauutaid T Gutomet Mnt TV Ad Relerred Total Walk-In Dissatislied Neutral Satisficd ery Satislicd TTotal Cal redeenb Fnd #ha probacilay tt Cuttont enllie Boolflnc cualomc? Darbsieo Oreenedandaval-n Dnatoo Mttedgren releried sal...\n##### Question 8 (1 point) For a first-order reaction after 230 5 33% of the reactants remain: Calculate the rate constant for the reaction;207 $-10.00174 5\"10.002090.000756$ 0.00482\nQuestion 8 (1 point) For a first-order reaction after 230 5 33% of the reactants remain: Calculate the rate constant for the reaction; 207 $-1 0.00174 5\"1 0.00209 0.000756$ 0.00482...\n##### In the graph below, determine the degree of each vertexv6v5v3\nIn the graph below, determine the degree of each vertex v6 v5 v3...\n##### Find the horizontal and vertical components of the vector with given length and direction, and write the vector in terms of the vectors i and $\\mathbf{j}$. $$|\\mathbf{v}|=\\sqrt{3}, \\quad \\theta=300^{\\circ}$$\nFind the horizontal and vertical components of the vector with given length and direction, and write the vector in terms of the vectors i and $\\mathbf{j}$. $$|\\mathbf{v}|=\\sqrt{3}, \\quad \\theta=300^{\\circ}$$...\n##### Kyle baards Ferris wheel from Ihe bottom und rides around several tites hefore gelting ofF: The following graph of the funetion [epresents height above the ground (in fet} respcetto the DInnf of' time (in secondsh. since the Fetrs begun moving for One complete rotation of the #hecl:264)Evaluate 9(6explain Is meaning-Ihe #TOuna 20 Ieel Seconds ;ler Ihe Feutis wheel 75D Mois 09l6) 20. Kyle's height 4buv erund lcel 20 #ccOnds alle the Fetris Whee' hcenn abor moving 0g(6) 20: Kyle\u0002\nKyle baards Ferris wheel from Ihe bottom und rides around several tites hefore gelting ofF: The following graph of the funetion [epresents height above the ground (in fet} respcetto the DInnf of' time (in secondsh. since the Fetrs begun moving for One complete rotation of the #hecl: 264) Evalua...\n##### Which statement correct one? Choose every oneRadical the same the anion but not the cationRadical VcN [Caclive eeneralFormation ol a radica being cbserved, since it is stableFrce radical not pclar at all\nWhich statement correct one? 
Choose every one Radical the same the anion but not the cation Radical VcN [Caclive eeneral Formation ol a radica being cbserved, since it is stable Frce radical not pclar at all...\n##### Acid and Base WorksheetUsing your knowledge of the Bronsted-Lowry theory of acids and bases, write equations for the following acid-base reactions and indicate each conjugate acid-base pair:HNO, + OH >CH;NHz Ho 7OH: HPOs- vName the following compounds as acids and circle which are weak acids: a) HNO; b) HzSO c) HFd) HzCO,e) HC_H;Ozf) H;POs\nAcid and Base Worksheet Using your knowledge of the Bronsted-Lowry theory of acids and bases, write equations for the following acid-base reactions and indicate each conjugate acid-base pair: HNO, + OH > CH;NHz Ho 7 OH: HPOs- v Name the following compounds as acids and circle which are weak acids...\n##### Random variables, {Sk : k ≥ 1} are independent with commonexponential distribution having E [Sk] = 20 minutesIntroduce variables {Wk : k ≥ 1} as follows: Wk = X k j=1 Sk andW0 = 01. Find expectation of a ratio, Q = 2. Derive expected value for a ratio, T =\nRandom variables, {Sk : k ≥ 1} are independent with common exponential distribution having E [Sk] = 20 minutes Introduce variables {Wk : k ≥ 1} as follows: Wk = X k j=1 Sk and W0 = 0 1. Find expectation of a ratio, Q = 2. Derive expected value for a ratio, T =..." ]
[ null, "https://cdn.numerade.com/ask_images/a9109feacb3244269f05d94dd85e5d95.jpg ", null, "https://cdn.numerade.com/previews/90899943-ff90-4ea4-9c9c-4608d3ca84b1_large.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71812934,"math_prob":0.9781942,"size":10143,"snap":"2022-27-2022-33","text_gpt3_token_len":3205,"char_repetition_ratio":0.10168656,"word_repetition_ratio":0.5724299,"special_character_ratio":0.27723554,"punctuation_ratio":0.116748765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9883612,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T15:54:08Z\",\"WARC-Record-ID\":\"<urn:uuid:7b7ae3c3-03a4-4003-af55-54113632a208>\",\"Content-Length\":\"88081\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:813fed1e-b230-49eb-a57d-739cc9f3cf9f>\",\"WARC-Concurrent-To\":\"<urn:uuid:17b600dc-14b6-4481-bbdd-2827318f25f2>\",\"WARC-IP-Address\":\"104.26.7.163\",\"WARC-Target-URI\":\"https://itprospt.com/num/15247616/evaluate-the-integral4rzdswhen-s-is-the-part-of-the-planet\",\"WARC-Payload-Digest\":\"sha1:Q5LJEEWSCRYQRYBCLITIDE3HP342QEYZ\",\"WARC-Block-Digest\":\"sha1:MATOTJQAENSVBRDYRZA7BPZX6PYWZ7A7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572192.79_warc_CC-MAIN-20220815145459-20220815175459-00118.warc.gz\"}"}
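Most of the OCR text above is too garbled to repair, but the first-order kinetics question is recoverable. Assuming it reads "33% of the reactants remain after 230 s", the rate constant matches one of the listed options:

```python
from math import log

t, fraction_remaining = 230.0, 0.33   # assumed reading of the garbled problem statement
k = -log(fraction_remaining) / t      # first-order decay: [A]/[A]0 = exp(-k t)
print(round(k, 5))                    # 0.00482 s^-1, one of the listed answer choices
```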
https://www.edouniversity.edu.ng/oer/courseware/electricalelectronic-engineering
[ "Department Of Electrical/Electronic Engineering Courseware\n\nCOURSE DETAILS:\nWeek 1-2: Introduction to current electricity, DC circuits, measurement of voltage, current, resistance in a circuit and Ohm's law.\nWeek 3-4: Basic circuit laws and theorems.\nWeek 5-6: Analysis of power in AC circuits, resonance in AC circuits and power factor.\nWeek 7-8: Analysis of semi-conductor materials, diode and its application.\nWeek 9-10: Transistor characteristics, device and circuits.\nWeek 11: Electrical power measurements.\nWeek 12: Revision\n\nComplex analysis – Elements of complex algebra, trigonometry, exponential and logarithmic\nfunctions. Real numbers, sequences and series. Vectors – Elements, differentiation and\nintegration. Elements of linear algebra. Pre-requisite MTH 111.\n\nCalculus – Elementary differentiation, relevant theorems. Differential equations – Exact\nequations. Methods for second order equations. Partial differential equations. Simple cases –\nApplications. Numerical Analysis – linear equations, non-linear equations. Finite difference\noperators: Introduction to linear programming.\n\nGeneral Overview of Lecture: The course introduces the classifications of electrical machines, reviews the concept of electromechanical energy conversion, and covers the theory of electromagnetic induction as it applies to the static electrical machine (the transformer) and the rotating magnetic field in the case of electric motors and generators. The principle of operation, analysis, testing and areas of application of transformers, DC motors and DC generators will be studied. Parallel operation of power transformers and generators will be discussed. The performance and methods of speed control of…\n\nGeneral overview of lecture: The purpose of this course is to give students a basic understanding of the transistor and operational amplifiers and their applications. Areas covered by this course include the parameter analysis of the equivalent circuit of single-stage and multistage transistor amplifiers using BJTs and FETs; operational amplifier analysis such as feedback, broadband and narrow-band amplifiers; power amplifier analysis; voltage amplifiers; and\nvoltage and current stabilizing circuits.\n\nGeneral overview of lecture: The purpose of this course is to give students a basic understanding of the science behind electronic materials and phenomena and their applications. It is centered mainly on the particles responsible for current flow in materials. Areas covered by this course include electrons and the electronic structure of matter, conductivity in crystalline solids and semiconductors, the theory of energy bands, electron emission, elementary discrete device fabrication techniques and IC technology." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8709487,"math_prob":0.94631314,"size":1141,"snap":"2019-51-2020-05","text_gpt3_token_len":198,"char_repetition_ratio":0.123131044,"word_repetition_ratio":0.1986755,"special_character_ratio":0.15074496,"punctuation_ratio":0.09248555,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9775978,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T04:58:29Z\",\"WARC-Record-ID\":\"<urn:uuid:ba8a3bd3-1dfa-4615-a6e8-bae83c9bb30f>\",\"Content-Length\":\"103664\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7edd6661-dfa2-4977-bb92-09367282a15c>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ca9953d-220d-4929-a156-d8b33ac5e41e>\",\"WARC-IP-Address\":\"107.180.25.164\",\"WARC-Target-URI\":\"https://www.edouniversity.edu.ng/oer/courseware/electricalelectronic-engineering\",\"WARC-Payload-Digest\":\"sha1:ADK7JVC5NULT5MD3ROLJBMEZFC352GOT\",\"WARC-Block-Digest\":\"sha1:5A63VLO5AKB7VRR4A2GWHN4IWYZULAVQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250597230.18_warc_CC-MAIN-20200120023523-20200120051523-00103.warc.gz\"}"}
https://byjusexamprep.com/study-notes-for-general-aptitude-part-a-Mirror-WaterImage-i
[ "# Study Notes For General Aptitude (Part A): Mirror & Water Image, Embedded figures etc\n\nBy Astha Singh|Updated : March 22nd, 2022\n\nThe upcoming CSIR NET 2022 exam is not so far away and it is the right time for the candidates to roll their sleeves devote all their time to prepare themselves for it. While preparing for the examination, it is very important to revise the concepts and topics that they have covered so far. They must also practice as many mocks as they can to ensure the time and speed they take to complete the questions. The candidates should focus on practising the questions that have been repeated often in previous year's question papers. Mirror & Water Image, Completion of figures, Embedded figures is one such topic that can help you score well within the time limitation of the important and most scoring topics of the General Aptitude (Part A) through which one can score full marks in this section.\n\nTime management, proper exam strategies, and being updated on the news regarding the exam can be the key to success for the student. The study notes can be the best way to revise the topics simply in no time. We at BYJU'S EXAM PREP Prep have formulated study notes for Mirror & Water Image, Completion of figures, Embedded figures under the General Aptitude to help their CSIR NET 2022 preparations easily.\n\nWe've come up with the study notes on Mirror & Water Image, Completion of figures, Embedded figures for the upcoming exam CSIR-NET 2022. Scroll down the article below and learn this concept to crack the exam.\n\n## Study Notes On Mirror & Water Image, Completion of figures, Embedded figures\n\nMirror image - A mirror image is a reflected duplication of an object that appears almost identical, but is reversed in the direction perpendicular to the mirror surface. As an optical effect, it results from the reflection of substances such as a mirror or water.\nWhen the mirror is placed at top of the image.\nIf a mirror is placed on the line MN, then which of the answer figures is the right image of the given figure?", null, "When the mirror is placed at the right-hand side of the image.\nIf a mirror is placed on the line MN, then which of the answer figure is the right of the given figure?", null, "", null, "Hence, the correct option is D.Hence, the correct option is D.\n\nWhen the mirror is placed on the left-hand side of the image. If a mirror is placed on the line MN, then which of the answer figures is the right image of the given figure?", null, "", null, "Solution-\nIn a plane mirror, a mirror image is a reflected duplication of an object that appears almost identical, but it is reversed in the direction perpendicular to the mirror surface. As an optical effect, it results from the reflection of substances such as a mirror or water.", null, "Clock based mirror image question:\nIf a mirror is placed on line AB, then which of the answer figure is the right image of the given figure?", null, "", null, "Letter based mirror image question: If we place a mirror on the right side of the given figure, find the correct mirror image of the figure.E8t4g9C", null, "Solution – After observing the given diagram carefully, option B is the correct mirror image of the given question.", null, "", null, "", null, "", null, "Embedded figures are the figures which are hidden or embedded in the question figure. They can be complex, but by looking at all the options we may be able to deduce the final answer. 
Embedded figures without Rotation –⦁ From the given answer figures, select the one in which the question figure is hidden/embedded.", null, "", null, "Ans. D Solution –\nThe question figure is hidden in option D.", null, "⦁ From the given answer figures, select the one in which the question figure is hidden/embedded.", null, "", null, "⦁ From the given answer figures, select the one in which the question figure is hidden/embedded.", null, "", null, "Hence, the correct option is C.\n\n⦁ From the given answer figures, select the one in which the question figure is hidden/embedded.", null, "Ans. B Solution –\nOn close observation, we find that the problem figure is embedded in figure (B) as shown below.", null, "Pattern completion – these questions are figures in which a certain part of the figure is blank or not visible. They can be complex, but with the help of the options, we may be able to deduce the final answer. We can solve this by analysing the lines or patterns, continuing them in the direction of the blank space, and cross-verifying with the options to get the final answer. Some common types of pattern completion are:\n\n⦁ The upper half portion is the mirror image of the lower half portion\n⦁ The left half portion is the mirror image of the right half portion\n\n⦁ The upper left portion is the mirror image of the lower right portion and vice versa\n⦁ Rotating the diagram in the clockwise or in the anti-clockwise direction\n\n⦁ Which answer figure will complete the pattern in the question figure?", null, "", null, "Hence, the correct answer is option B.\n\n⦁ Which answer figure will complete the pattern in the question figure?", null, "", null, "Ans. C Solution –\nOn observing the options we can see that the figure given under option (C) completes the pattern when placed in the blank space of the question figure, as shown below.", null, "", null, "", null, "Ans. C Solution –\nAfter observation, it is clear that answer figure (C) will complete the question figure after placing it at the question-marked place.", null, "Paper cutting and folding\n\nIn this type of question, some cuts or folds are made on a sheet of paper and then it is opened. We have to choose the correct option from the given options. It is classified under 2 categories –\n\n⦁ Cutting\n\n⦁ Folding\n\nBelow we are going to explain the categories mentioned above with examples.\n\n⦁ Select the option that depicts how the given transparent sheet of paper would appear if it is folded at the dotted line.", null, "", null, "Ans. C\n\nSolution – When the transparent sheet is folded along the dotted line, the bigger mark will cover the small one and the small one will be inside the bigger one, as shown in the figure given in option ‘C’. Hence, option (C) is the correct answer.\n\n⦁ A piece of paper is folded and punched as shown below in the question figures. From the given answer figures, indicate how it will appear when opened.", null, "", null, "→ If you have any questions feel free to ask in the comments section below.\n\n### BYJU'S Exam Prep Team\n\nThe Most Comprehensive Exam Prep App.\n\n#DreamStriveSucceed", null, "GradeStack Learning Pvt. Ltd.Windsor IT Park, Tower - A, 2nd Floor, Sector 125, Noida, Uttar Pradesh 201303 [email protected]" ]
[ null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2021/1/artboard-img1611906121672-22.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2018/12/screenshot-2018-12-21-at-3-img1545387500435-75.png-rs-high-webp.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9125795,"math_prob":0.7285391,"size":5308,"snap":"2023-40-2023-50","text_gpt3_token_len":1172,"char_repetition_ratio":0.191365,"word_repetition_ratio":0.29605964,"special_character_ratio":0.206104,"punctuation_ratio":0.085545726,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9802574,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T23:29:12Z\",\"WARC-Record-ID\":\"<urn:uuid:138598ca-4ed6-40e4-84cd-a402300a673e>\",\"Content-Length\":\"521629\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:33c462d8-53ff-4b00-8453-9b9b2547480e>\",\"WARC-Concurrent-To\":\"<urn:uuid:b69c11f9-cf7a-49b7-a900-d3be327d90a9>\",\"WARC-IP-Address\":\"104.18.21.173\",\"WARC-Target-URI\":\"https://byjusexamprep.com/study-notes-for-general-aptitude-part-a-Mirror-WaterImage-i\",\"WARC-Payload-Digest\":\"sha1:QYA6CHBR3N63ICDXQ4I23G233LGSC4II\",\"WARC-Block-Digest\":\"sha1:QKUDVHPDSTDJNYDHSXTK7BQ7AHQ2CCOM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510100.47_warc_CC-MAIN-20230925215547-20230926005547-00283.warc.gz\"}"}
https://socratic.org/questions/how-do-you-solve-x-2-8x-2-0-by-completing-the-square-1
[ "# How do you solve x^2 + 8x + 2 = 0 by completing the square?\n\n##### 2 Answers\nJun 28, 2017\n\n$x = - 4 \\pm \\sqrt{14}$\n\n#### Explanation:\n\n$\\text{express as } {x}^{2} + 8 x = - 2$\n\n$\\text{to \"color(blue)\"complete the square}$\n\nadd (1/2\"coefficient of x-term\")^2\" to both sides\"\n\n$\\text{that is add \" (8/2)^2=16\" to both sides}$\n\n$\\Rightarrow {x}^{2} + 8 x \\textcolor{red}{+ 16} = - 2 \\textcolor{red}{+ 16}$\n\n$\\Rightarrow {\\left(x + 4\\right)}^{2} = 14$\n\n$\\textcolor{b l u e}{\\text{take the square root of both sides}}$\n\n$\\sqrt{{\\left(x + 4\\right)}^{2}} = \\pm \\sqrt{14} \\leftarrow \\text{ note plus or minus}$\n\n$\\Rightarrow x + 4 = \\pm \\sqrt{14}$\n\n$\\text{subtract 4 from both sides}$\n\n$x \\cancel{+ 4} \\cancel{- 4} = \\pm \\sqrt{14} - 4$\n\n$\\Rightarrow x = - 4 \\pm \\sqrt{14}$\n\nJun 28, 2017\n\nMove +2 to the right side of the equation.\n${x}^{2} + 8 x = - 2$\n\nThen halve the coefficient of x.\n${x}^{2} + \\left(\\frac{8}{2}\\right) x = - 2$\n\nThen square that same coefficient.\n${x}^{2} + {\\left(\\frac{8}{2}\\right)}^{2} x = - 2$\n\nSince ${\\left(\\frac{8}{2}\\right)}^{2}$ = $16$ we can put that number into the equation.\nSo, ${x}^{2} + 8 x + 16 = - 2$\n\nWhen you find the number to complete the square you must add it to both sides of the equation.\nSo, ${x}^{2} + 8 x + 16 = - 2 + 16$\n= ${x}^{2} + 8 x + 16 = 14$\n\nThen factorise ${x}^{2} + 8 x + 16$ = $\\left(x + 4\\right) \\left(x + 4\\right)$\n\nTherefore, the answer is: $\\left(x + 4\\right) \\left(x + 4\\right) = 14$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7072158,"math_prob":1.0000099,"size":826,"snap":"2021-21-2021-25","text_gpt3_token_len":280,"char_repetition_ratio":0.13868614,"word_repetition_ratio":0.0,"special_character_ratio":0.34866828,"punctuation_ratio":0.061728396,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000067,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-12T14:23:34Z\",\"WARC-Record-ID\":\"<urn:uuid:10802578-7498-405e-b9fb-4080341c8401>\",\"Content-Length\":\"35927\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38a9d163-2103-45fa-9fa6-f4a27d22256b>\",\"WARC-Concurrent-To\":\"<urn:uuid:5611f8da-6cae-4d70-8bbe-47051b974640>\",\"WARC-IP-Address\":\"216.239.32.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-solve-x-2-8x-2-0-by-completing-the-square-1\",\"WARC-Payload-Digest\":\"sha1:AZQNPJGQEH772W4W7ALZDWMHSN6UQRUK\",\"WARC-Block-Digest\":\"sha1:LSOAE2K2JRT3TOKOFWP574NBZOUO66EP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487584018.1_warc_CC-MAIN-20210612132637-20210612162637-00389.warc.gz\"}"}
https://www.colorhexa.com/032d3d
[ "# #032d3d Color Information\n\nIn a RGB color space, hex #032d3d is composed of 1.2% red, 17.6% green and 23.9% blue. Whereas in a CMYK color space, it is composed of 95.1% cyan, 26.2% magenta, 0% yellow and 76.1% black. It has a hue angle of 196.6 degrees, a saturation of 90.6% and a lightness of 12.5%. #032d3d color hex could be obtained by blending #065a7a with #000000. Closest websafe color is: #003333.\n\n• R 1\n• G 18\n• B 24\nRGB color chart\n• C 95\n• M 26\n• Y 0\n• K 76\nCMYK color chart\n\n#032d3d color description : Very dark blue.\n\n# #032d3d Color Conversion\n\nThe hexadecimal color #032d3d has RGB values of R:3, G:45, B:61 and CMYK values of C:0.95, M:0.26, Y:0, K:0.76. Its decimal value is 208189.\n\nHex triplet RGB Decimal 032d3d `#032d3d` 3, 45, 61 `rgb(3,45,61)` 1.2, 17.6, 23.9 `rgb(1.2%,17.6%,23.9%)` 95, 26, 0, 76 196.6°, 90.6, 12.5 `hsl(196.6,90.6%,12.5%)` 196.6°, 95.1, 23.9 003333 `#003333`\nCIE-LAB 16.665, -7.078, -14.086 1.818, 2.233, 4.75 0.207, 0.254, 2.233 16.665, 15.765, 243.32 16.665, -11.072, -13.617 14.943, -4.433, -8.386 00000011, 00101101, 00111101\n\n# Color Schemes with #032d3d\n\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #3d1303\n``#3d1303` `rgb(61,19,3)``\nComplementary Color\n• #033d30\n``#033d30` `rgb(3,61,48)``\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #03103d\n``#03103d` `rgb(3,16,61)``\nAnalogous Color\n• #3d3003\n``#3d3003` `rgb(61,48,3)``\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #3d0310\n``#3d0310` `rgb(61,3,16)``\nSplit Complementary Color\n• #2d3d03\n``#2d3d03` `rgb(45,61,3)``\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #3d032d\n``#3d032d` `rgb(61,3,45)``\n• #033d13\n``#033d13` `rgb(3,61,19)``\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #3d032d\n``#3d032d` `rgb(61,3,45)``\n• #3d1303\n``#3d1303` `rgb(61,19,3)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #01090c\n``#01090c` `rgb(1,9,12)``\n• #021b25\n``#021b25` `rgb(2,27,37)``\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #043f55\n``#043f55` `rgb(4,63,85)``\n• #05516e\n``#05516e` `rgb(5,81,110)``\n• #076386\n``#076386` `rgb(7,99,134)``\nMonochromatic Color\n\n# Alternatives to #032d3d\n\nBelow, you can see some colors close to #032d3d. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #033c3d\n``#033c3d` `rgb(3,60,61)``\n• #03373d\n``#03373d` `rgb(3,55,61)``\n• #03323d\n``#03323d` `rgb(3,50,61)``\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #03283d\n``#03283d` `rgb(3,40,61)``\n• #03233d\n``#03233d` `rgb(3,35,61)``\n• #031f3d\n``#031f3d` `rgb(3,31,61)``\nSimilar Colors\n\n# #032d3d Preview\n\nText with hexadecimal color #032d3d\n\nThis text has a font color of #032d3d.\n\n``<span style=\"color:#032d3d;\">Text here</span>``\n#032d3d background color\n\nThis paragraph has a background color of #032d3d.\n\n``<p style=\"background-color:#032d3d;\">Content here</p>``\n#032d3d border color\n\nThis element has a border color of #032d3d.\n\n``<div style=\"border:1px solid #032d3d;\">Content here</div>``\nCSS codes\n``.text {color:#032d3d;}``\n``.background {background-color:#032d3d;}``\n``.border {border:1px solid #032d3d;}``\n\n# Shades and Tints of #032d3d\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000405 is the darkest color, while #f1fbfe is the lightest one.\n\n• #000405\n``#000405` `rgb(0,4,5)``\n• #011118\n``#011118` `rgb(1,17,24)``\n• #021f2a\n``#021f2a` `rgb(2,31,42)``\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #043b50\n``#043b50` `rgb(4,59,80)``\n• #054962\n``#054962` `rgb(5,73,98)``\n• #065675\n``#065675` `rgb(6,86,117)``\n• #076488\n``#076488` `rgb(7,100,136)``\n• #08729a\n``#08729a` `rgb(8,114,154)``\n``#0980ad` `rgb(9,128,173)``\n• #098ec0\n``#098ec0` `rgb(9,142,192)``\n• #0a9bd3\n``#0a9bd3` `rgb(10,155,211)``\n• #0ba9e5\n``#0ba9e5` `rgb(11,169,229)``\n• #11b5f3\n``#11b5f3` `rgb(17,181,243)``\n• #24bbf4\n``#24bbf4` `rgb(36,187,244)``\n• #36c0f5\n``#36c0f5` `rgb(54,192,245)``\n• #49c6f6\n``#49c6f6` `rgb(73,198,246)``\n• #5cccf7\n``#5cccf7` `rgb(92,204,247)``\n• #6ed2f8\n``#6ed2f8` `rgb(110,210,248)``\n• #81d8f9\n``#81d8f9` `rgb(129,216,249)``\n• #94defa\n``#94defa` `rgb(148,222,250)``\n• #a6e3fb\n``#a6e3fb` `rgb(166,227,251)``\n• #b9e9fc\n``#b9e9fc` `rgb(185,233,252)``\n• #cceffc\n``#cceffc` `rgb(204,239,252)``\n• #dff5fd\n``#dff5fd` `rgb(223,245,253)``\n• #f1fbfe\n``#f1fbfe` `rgb(241,251,254)``\nTint Color Variation\n\n# Tones of #032d3d\n\nA tone is produced by adding gray to any pure hue. In this case, #1e2122 is the less saturated color, while #012e3f is the most saturated one.\n\n• #1e2122\n``#1e2122` `rgb(30,33,34)``\n• #1c2224\n``#1c2224` `rgb(28,34,36)``\n• #192327\n``#192327` `rgb(25,35,39)``\n• #172429\n``#172429` `rgb(23,36,41)``\n• #14252c\n``#14252c` `rgb(20,37,44)``\n• #12262e\n``#12262e` `rgb(18,38,46)``\n• #0f2731\n``#0f2731` `rgb(15,39,49)``\n• #0d2933\n``#0d2933` `rgb(13,41,51)``\n• #0a2a36\n``#0a2a36` `rgb(10,42,54)``\n• #082b38\n``#082b38` `rgb(8,43,56)``\n• #052c3b\n``#052c3b` `rgb(5,44,59)``\n• #032d3d\n``#032d3d` `rgb(3,45,61)``\n• #012e3f\n``#012e3f` `rgb(1,46,63)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #032d3d is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5573923,"math_prob":0.86678344,"size":3658,"snap":"2019-13-2019-22","text_gpt3_token_len":1665,"char_repetition_ratio":0.12616311,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5587753,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99050206,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T18:45:42Z\",\"WARC-Record-ID\":\"<urn:uuid:de4d3a57-fe87-441c-8e32-5e4108f121af>\",\"Content-Length\":\"36310\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:91bd50ae-58b6-4a02-bca1-74445fce3030>\",\"WARC-Concurrent-To\":\"<urn:uuid:abbbd4c0-e224-4e38-9bc1-7041fe46b2fc>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/032d3d\",\"WARC-Payload-Digest\":\"sha1:NIR76OTF5K55IZETO2OI3T3X5HWGSY5M\",\"WARC-Block-Digest\":\"sha1:ZN3IZFYXJR5MAVMQE3TMBQ2QNYFQ2Z4U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202924.93_warc_CC-MAIN-20190323181713-20190323203713-00064.warc.gz\"}"}
https://pocketsense.com/calculate-investment-rates-return-5095445.html
[ "# How to Calculate Investment Rates of Return\n\nShare It\n\nLooking at a year-end statement that shows the \"rate of return\" on investments only tells part of the story on return. Whether it is an annual percentage yield (APY) or compound annual growth rate (CAGR), it is important for the investor to understand how these are calculated to make sure the numbers they are reading are what they expect. These are two of many rates of return and are most often used with personal finance investment calculations.\n\nAssume, for example, a \\$10,000 investment with a 6 percent return that compounds quarterly to be held for three years.\n\nConvert the interest rate into a decimal point by dividing it by 100. In our example, 6%/100 = .06.\n\nDetermine the number of compounds in the year. Since our example is quarterly, this number is 4. Divide this into the number derived in Step 2. Example: .06/4 = .015.\n\nAdd one (1) to the number determined in Step 3. Example: 1 + .015 = 1.015.\n\nMultiply the number of compounds per year with the number of years the investment will be held. Example: 4 x 3 = 12. Raise the number from Step 4 by this exponent. Example: 1.015 to the 12th power = 1.19561817.\n\nMultiply the number in Step 5 with the amount invested. Example: \\$10,000 x 1.19561817 = \\$11,956.18. This is the amount the original investment will earn over the three years of compounding quarterly interest.\n\n## How to Calculate CAGR\n\nAssume in this example an investment of \\$10,000 is worth \\$12,000 at the end of year 1, \\$13,000 at the end of year 2 and \\$15,000 at the end of year 3.\n\nDivide the initial amount by the ending value. Example: \\$15,000 / \\$10,000 = 1.5.\n\nDivide the number of years invested by 1. Then use this number as the exponent to raise the value derived in Step 2. In the example: 1/3 years of investment = .3333. Then 1.5 raised .3333 = 1.14469877.\n\nSubtract 1 from this number and then multiply this by 100. Example: 1.14469877 – 1 = .1446. The compound growth rate is 14.46 percent." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93214667,"math_prob":0.99642366,"size":2130,"snap":"2023-40-2023-50","text_gpt3_token_len":555,"char_repetition_ratio":0.15098777,"word_repetition_ratio":0.007978723,"special_character_ratio":0.30234742,"punctuation_ratio":0.15550756,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99977535,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T10:23:03Z\",\"WARC-Record-ID\":\"<urn:uuid:3c6b4c0d-b54e-469e-a8ea-204e1cf5169e>\",\"Content-Length\":\"178286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7cf9a6bf-0144-4aca-91da-30fb5df9145d>\",\"WARC-Concurrent-To\":\"<urn:uuid:548e7baa-423d-4e77-9baa-d167b9296139>\",\"WARC-IP-Address\":\"23.212.250.213\",\"WARC-Target-URI\":\"https://pocketsense.com/calculate-investment-rates-return-5095445.html\",\"WARC-Payload-Digest\":\"sha1:6DQNTT25XH3OCLJCZFJUZI5QMZCZTGFB\",\"WARC-Block-Digest\":\"sha1:ABWYE6UEIGSRGAEU67Y3OC6Q2QX4NMDH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100593.71_warc_CC-MAIN-20231206095331-20231206125331-00267.warc.gz\"}"}
https://examradar.com/operators/
[ "An operator is a symbol that tells the compiler to perform specific mathematical or logical manipulations. C language is rich in built-in operators and provides the following types of operators:\n\n• Arithmetic Operators\n\n• Relational Operators\n\n• Logical Operators\n\n• Bitwise Operators\n\n• Assignment Operators\n\n• Increment and decrement operators\n\n• Conditional operators\n\n• Misc Operators\n\nArithmetic operator:\n\nThese are used to perform mathematical calculations like addition, subtraction, multiplication, division and modulus.\n\nFollowing table shows all the arithmetic operators supported by C language. Assume variable A holds 10 and variable B holds 20 then:", null, "", null, "Relational Operators:\n\nThese operators are used to compare the value of two variables.\n\nFollowing table shows all the relational operators supported by C language. Assume variable A holds 10 and variable B holds 20, then:", null, "Logical Operators:\n\nThese operators are used to perform logical operations on the given two variables.\n\nFollowing table shows all the logical operators supported by C language. Assume variable A holds 1 and variable B holds 0, then:", null, "Bitwise Operators\n\nBitwise operator works on bits and performs bit-by-bit operation. Bitwise operators are used in bit level programming. These operators can operate upon int and char but not on float and double.\n\nShowbits( ) function can be used to display the binary representation of any integer or character value\n\nBit wise operators in C language are; & (bitwise AND), | (bitwise OR), ~ (bitwise OR), ^ (XOR), << (left shift) and >> (right shift).\n\nThe truth tables for &, |, and ^ are as follows:", null, "The Bitwise operators supported by C language are explained in the following table. Assume variable A holds 60 (00111100) and variable B holds 13 (00001101), then:", null, "Assignment Operators:\n\nIn C programs, values for the variables are assigned using assignment operators. There are following assignment operators supported by C language:", null, "", null, "You may be interested in:\nProgramming In C MCQs\nProgramming In C++ MCQs\nObject Oriented Programming Using C++ Short Questions Answers" ]
[ null, "https://examradar.com/wp-content/uploads/2016/10/Arithmetic-operator.png", null, "https://examradar.com/wp-content/uploads/2016/10/Arithmetic-operator-2.png", null, "https://examradar.com/wp-content/uploads/2016/10/Relational-Operators.png", null, "https://examradar.com/wp-content/uploads/2016/10/Logical-Operators.png", null, "https://examradar.com/wp-content/uploads/2016/10/truth-tables.png", null, "https://examradar.com/wp-content/uploads/2016/10/Bitwise-operators.png", null, "https://examradar.com/wp-content/uploads/2016/10/Assignment-Operators.png", null, "https://examradar.com/wp-content/uploads/2016/10/Assignment-Operators-2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84749496,"math_prob":0.8883671,"size":2069,"snap":"2021-31-2021-39","text_gpt3_token_len":419,"char_repetition_ratio":0.19225182,"word_repetition_ratio":0.11320755,"special_character_ratio":0.20976317,"punctuation_ratio":0.112716764,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9953411,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T00:11:28Z\",\"WARC-Record-ID\":\"<urn:uuid:652993b2-d327-4281-8516-ee10c0e2246f>\",\"Content-Length\":\"52780\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d74cdbd-ed06-49d7-9e6b-0cd80860f298>\",\"WARC-Concurrent-To\":\"<urn:uuid:fda72eae-c897-4920-9955-72ea8dad32ca>\",\"WARC-IP-Address\":\"172.67.153.41\",\"WARC-Target-URI\":\"https://examradar.com/operators/\",\"WARC-Payload-Digest\":\"sha1:WOIZZF57UNJK7KUDB7FTGAKWYAGFYXE6\",\"WARC-Block-Digest\":\"sha1:2KBGS77M5IV3EXNFNBAHV5IXCST5QAR3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057787.63_warc_CC-MAIN-20210925232725-20210926022725-00197.warc.gz\"}"}
https://convertoctopus.com/394-cubic-inches-to-quarts
[ "## Conversion formula\n\nThe conversion factor from cubic inches to quarts is 0.017316017316055, which means that 1 cubic inch is equal to 0.017316017316055 quarts:\n\n1 in3 = 0.017316017316055 qt\n\nTo convert 394 cubic inches into quarts we have to multiply 394 by the conversion factor in order to get the volume amount from cubic inches to quarts. We can also form a simple proportion to calculate the result:\n\n1 in3 → 0.017316017316055 qt\n\n394 in3 → V(qt)\n\nSolve the above proportion to obtain the volume V in quarts:\n\nV(qt) = 394 in3 × 0.017316017316055 qt\n\nV(qt) = 6.8225108225258 qt\n\nThe final result is:\n\n394 in3 → 6.8225108225258 qt\n\nWe conclude that 394 cubic inches is equivalent to 6.8225108225258 quarts:\n\n394 cubic inches = 6.8225108225258 quarts\n\n## Alternative conversion\n\nWe can also convert by utilizing the inverse value of the conversion factor. In this case 1 quart is equal to 0.14657360406059 × 394 cubic inches.\n\nAnother way is saying that 394 cubic inches is equal to 1 ÷ 0.14657360406059 quarts.\n\n## Approximate result\n\nFor practical purposes we can round our final result to an approximate numerical value. We can say that three hundred ninety-four cubic inches is approximately six point eight two three quarts:\n\n394 in3 ≅ 6.823 qt\n\nAn alternative is also that one quart is approximately zero point one four seven times three hundred ninety-four cubic inches.\n\n## Conversion table\n\n### cubic inches to quarts chart\n\nFor quick reference purposes, below is the conversion table you can use to convert from cubic inches to quarts\n\ncubic inches (in3) quarts (qt)\n395 cubic inches 6.84 quarts\n396 cubic inches 6.857 quarts\n397 cubic inches 6.874 quarts\n398 cubic inches 6.892 quarts\n399 cubic inches 6.909 quarts\n400 cubic inches 6.926 quarts\n401 cubic inches 6.944 quarts\n402 cubic inches 6.961 quarts\n403 cubic inches 6.978 quarts\n404 cubic inches 6.996 quarts" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76193297,"math_prob":0.99819916,"size":1878,"snap":"2020-34-2020-40","text_gpt3_token_len":517,"char_repetition_ratio":0.2353255,"word_repetition_ratio":0.013157895,"special_character_ratio":0.35569754,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99806756,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-19T13:10:04Z\",\"WARC-Record-ID\":\"<urn:uuid:bd6a0903-17c1-45d1-bbb5-5df14d39ce52>\",\"Content-Length\":\"29177\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5f66695-1c69-42ba-b921-86cada6865a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:fbdcd346-c48e-4e5d-a933-3a8c0701442f>\",\"WARC-IP-Address\":\"172.67.208.237\",\"WARC-Target-URI\":\"https://convertoctopus.com/394-cubic-inches-to-quarts\",\"WARC-Payload-Digest\":\"sha1:TSZ72HJ5TLRV3PWY6OACKKYXZ2OSYAZY\",\"WARC-Block-Digest\":\"sha1:AJHEYK7XFCAPXRFUF6D5L3MJAWUQU3JX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400191780.21_warc_CC-MAIN-20200919110805-20200919140805-00211.warc.gz\"}"}
https://zh.wikipedia.org/wiki/%E5%A4%9A%E5%85%83%E6%AD%A3%E6%80%81%E5%88%86%E5%B8%83
[ "# 多元正态分布\n\n參數", null, "Many samples from a multivariate (bivariate) Gaussian distribution centered at (1,3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction (longer vector) and of 1 in the second direction (shorter vector, orthogonal to the longer vector).概率多变量函數 μ ∈ RN — 位置Σ ∈ RN×N — 协方差矩阵 (半正定) x ∈ μ+span(Σ) ⊆ RN $(2\\pi )^{-{\\frac {N}{2}}}|{\\boldsymbol {\\Sigma }}|^{-{\\frac {1}{2}}}\\,e^{-{\\frac {1}{2}}(\\mathbf {x} -{\\boldsymbol {\\mu }})'{\\boldsymbol {\\Sigma }}^{-1}(\\mathbf {x} -{\\boldsymbol {\\mu }})},$", null, "(仅当 Σ 为正定矩阵时) 解析表达式不存在 μ μ Σ ${\\frac {1}{2}}\\ln((2\\pi e)^{N}|{\\boldsymbol {\\Sigma }}|)$", null, "$\\exp \\!{\\Big (}{\\boldsymbol {\\mu }}'\\mathbf {t} +{\\tfrac {1}{2}}\\mathbf {t} '{\\boldsymbol {\\Sigma }}\\mathbf {t} {\\Big )}$", null, "$\\exp \\!{\\Big (}i{\\boldsymbol {\\mu }}'\\mathbf {t} -{\\tfrac {1}{2}}\\mathbf {t} '{\\boldsymbol {\\Sigma }}\\mathbf {t} {\\Big )}$", null, "## 一般形式\n\nN维随机向量$\\ X=[X_{1},\\dots ,X_{N}]^{T}$", null, "如果服从多变量正态分布,必须满足下面的三个等價条件:\n\n• 任何线性组合$\\ Y=a_{1}X_{1}+\\cdots +a_{N}X_{N}$", null, "服从正态分布\n• 存在随机向量$\\ Z=[Z_{1},\\dots ,Z_{M}]^{T}$", null, "( 它的每个元素服从独立标准正态分布),向量$\\ \\mu =[\\mu _{1},\\dots ,\\mu _{N}]^{T}$", null, "$N\\times M$", null, "矩阵$\\ A$", null, "满足$\\ X=AZ+\\mu$", null, ".\n• 存在$\\mu$", null, "和一个对称半正定阵$\\ \\Sigma$", null, "满足$\\ X$", null, "特征函数\n$\\phi _{X}\\left(u;\\mu ,\\Sigma \\right)=\\exp \\left(i\\mu ^{T}u-{\\frac {1}{2}}u^{T}\\Sigma u\\right)$", null, "$f_{\\mathbf {x} }(x_{1},\\ldots ,x_{k})={\\frac {1}{\\sqrt {(2\\pi )^{k}|{\\boldsymbol {\\Sigma }}|}}}\\exp \\left(-{\\frac {1}{2}}({\\mathbf {x} }-{\\boldsymbol {\\mu }})^{\\mathrm {T} }{\\boldsymbol {\\Sigma }}^{-1}({\\mathbf {x} }-{\\boldsymbol {\\mu }})\\right),$", null, "{\\begin{aligned}f(x,y)&={\\frac {1}{2\\pi \\sigma _{X}\\sigma _{Y}{\\sqrt {1-\\rho ^{2}}}}}\\exp \\left(-{\\frac {1}{2(1-\\rho ^{2})}}\\left[{\\frac {(x-\\mu _{X})^{2}}{\\sigma _{X}^{2}}}+{\\frac {(y-\\mu _{Y})^{2}}{\\sigma _{Y}^{2}}}-{\\frac {2\\rho (x-\\mu _{X})(y-\\mu _{Y})}{\\sigma _{X}\\sigma _{Y}}}\\right]\\right)\\\\\\end{aligned}}", null, "${\\boldsymbol {\\mu }}={\\begin{pmatrix}\\mu _{X}\\\\\\mu _{Y}\\end{pmatrix}},\\quad {\\boldsymbol {\\Sigma }}={\\begin{pmatrix}\\sigma _{X}^{2}&\\rho \\sigma _{X}\\sigma _{Y}\\\\\\rho \\sigma _{X}\\sigma _{Y}&\\sigma _{Y}^{2}\\end{pmatrix}}.$", null, "## 參考資料\n\n1. ^ UIUC, Lecture 21. The Multivariate Normal Distribution, 21.5:\"Finding the Density\"." ]
[ null, "https://upload.wikimedia.org/wikipedia/commons/thumb/1/15/GaussianScatterPCA.png/220px-GaussianScatterPCA.png", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f0ef6c1b64abf712ce8f7cf633e902112e8af904", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9d09a1aa8a9924ad9e8f7c794e5429331bb7c715", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/495536fd02e177f1d3ddf6c7b784ed669756e89f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4de993eb7b7ced5280b3cd5da989352df78d00cc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4b4ddba80bd912ad089215569a53f5600705adeb", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7f46f7863eacdca7c845f98487df682ab112e403", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1f0f71ea15130a02696ad1a1ae1065a2c1fe33fa", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/60a0bdecee9234af91dca992b6e70362011f1eeb", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5f4c0be393d026f1ee4c2baa3e84a774013914a2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9a29b7d7604962065930f413ee72574d1f7a6e81", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9acd2c02592b1b5aeaf3a84edcee882b5e792bb4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4178b57a136fb73410257c8b91b62a993bfe2767", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a96dd62d4cca19aa212ae1216891f4388ca4be24", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6094cde301feba9c7ea9116851d2fe017d4d6cee", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/999bd54845bdbed1807db7d4ead36499c0bdd0d8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c6fc534bfde62d6d2b3b743b0c3fa2fb7fc3174a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1d6238c86bf561c952e0560e6f6ad3591278fb82", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.63881344,"math_prob":1.0000008,"size":780,"snap":"2019-51-2020-05","text_gpt3_token_len":518,"char_repetition_ratio":0.083762884,"word_repetition_ratio":0.0,"special_character_ratio":0.29102564,"punctuation_ratio":0.09701493,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000072,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,null,null,null,null,4,null,null,null,8,null,8,null,2,null,3,null,2,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T05:28:17Z\",\"WARC-Record-ID\":\"<urn:uuid:954f948e-b071-46d3-b25f-6f27678d34df>\",\"Content-Length\":\"168622\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8200513e-73f1-4aba-8af8-e4439627465b>\",\"WARC-Concurrent-To\":\"<urn:uuid:2092e940-1c15-42e7-b234-a93066aae557>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://zh.wikipedia.org/wiki/%E5%A4%9A%E5%85%83%E6%AD%A3%E6%80%81%E5%88%86%E5%B8%83\",\"WARC-Payload-Digest\":\"sha1:U5TGLMEJCOIUUDYV5OXLPGXQ7BKRRU4N\",\"WARC-Block-Digest\":\"sha1:KTHUI4FJ4TR3S7VPK4EOAHCGHP4LERAL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541301598.62_warc_CC-MAIN-20191215042926-20191215070926-00287.warc.gz\"}"}
https://www.arxiv-vanity.com/papers/hep-ph/9810322/
[ "# Lattice calculation of αs in momentum scheme\n\nPh. Boucaud, J.P. Leroy, J. Micheli, O. Pène, and C. Roiesnel\n###### Abstract\n\nWe compute the flavorless running coupling constant of QCD from the three gluon vertex in the (regularisation independent) momentum subtraction renormalisation scheme. This is performed on the lattice with high statistics. The expected color dependence of the Green functions is verified. There are significant effects which can be consistently controlled. Scaling is demonstrated when the renormalisation scale is varied between 2.1 GeV and 3.85 GeV. Scaling when the lattice spacing is varied is very well satisfied. The resulting flavorless conventional two loop is estimated to be, respectively for the MOM and scheme, MeV and MeV , while the three loop results are, depending on : and . A preliminary computation of in the scheme leads to MeV.\n\nLaboratoire de Physique Théorique et Hautes Energies111Laboratoire associé au Centre National de la Recherche Scientifique - URA D00063\n\nUniversité de Paris XI, Bâtiment 211, 91405 Orsay Cedex, France\n\nCentre de Physique Théorique222 Unité Mixte de Recherche C7644 du Centre National de la Recherche Scientifique\n\ne-mail: Philippe.B,\nde l’Ecole Polytechnique\n\nF91128 Palaiseau cedex, France\n\nLPTHE Orsay-98/49\n\nhep-ph/9810322\n\nThe non-perturbative calculation of the running coupling constant of QCD is certainly one very important problem. This program has been performed using the Schrödinger functional , the heavy quark potential -, the Wilson loop , the Polyakov loop and the three gluon coupling . The latter method is the one we will follow in the present letter. The principle of the method is quite simple since it consists in following the steps which are standard in perturbative QCD in the momentum subtraction scheme. One usually uses as a subtraction point for the three gluon vertex function an Euclidean point with symmetric momenta: .\n\nIn this non-perturbative minimum subtraction calculation has been performed, but using an asymmetric subtraction point . The presence of a vanishing momentum induces some subtleties which will be discussed later. The running coupling constant computed in shows a signal of perturbative scaling at large scale.\n\nIn this letter we perform the same program with symmetric subtraction points, which is the genuine non-perturbative momentum subtraction scheme. We also repeat the work done in . The whole calculation is achieved with a larger statistics, a check of finite volume effects, and a check of scaling when the lattice spacing is varied.\n\nIn section 1 the general principle of the method is recalled and the systematic construction of the symmetric momentum points is summarized. In section 2 the lattice calculation is described including the checks of color behaviour, of finite volume effects, and the discussion of and three loop effects. Both the scaling in and in are demonstrated. In section 3 we compare our result for with other lattice approaches and conclude.\n\n## 1 Computing αs from non-perturbative Green functions\n\nIn this section we describe the general method used to compute and in the continuum, assuming one is able to compute the Euclidean Green functions of QCD in the Landau gauge. The lattice aspect will be treated in the next section. 
The principle of the method is exactly the standard textbook one, generalized to non-perturbative QCD .\n\nThe Euclidean two point Green function in momentum space writes in the Landau Gauge:\n\n G(2)a1a2μ1μ2(p,−p)=G(2)(p2)δa1a2(δμ1μ2−pμ1pμ2p2) (1)\n\nwhere are the color indices ranging from 1 to 8.\n\nThe three-gluon Green function is equal to the color tensor333In general schemes and gauges the tensor should also be considered, but not in our case as we shall see. times a momentum dependent function which may be expressed as a sum of scalar functions multiplied by tensors: , the being built up from and momenta. In general there is some arbitrariness in choosing the tensor basis. One of these may be taken to be the tree level three-gluon vertex, projected transversally to the momenta (Landau gauge):\n\n Ttreeμ1μ2μ3=[δμ′1μ′2(p1−p2)μ′3+cycl. perm.]∏i=1,3(δμ′iμi−piμ′ipiμip2i) (2)\n\nwhile the choice of the other tensors in the tensor basis will be explained below in the particular cases. Calling the scalar function which multiplies the tensor (2), the renormalised coupling constant in the considered scheme is given by \n\n gR(μ2)=G(3)(p21,p22,p23)Z3/23(μ2)G(2)(p21)G(2)(p22)G(2)(p23) (3)\n\nwhere\n\n Z3(μ2)=G(2)(μ2)μ2 (4)\n\nand is the renormalisation scale which will be specified in each scheme.\n\nThe justification of eq. (3) is standard: the momentum scheme fixes the renormalisation constants so that the two-point and three-point renormalised Green functions at the renormalisation point take their tree value with the only substitution of the bare coupling by the renormalised one. In particular the renormalised takes its tree value, , at , which fixes the field renormalisation constant (4). The renormalised coupling constant is then defined so that the three-point Green function is equal to the bare tree level one (with the substitution of by ) at the symmetric Euclidean point .\n\nAt the symmetric point only two independent tensors exist in Landau gauge444Notice that in the Landau gauge the transversality condition with respect to the external momenta, reduces the number of independant tensors as compared to a general covariant gauge., which we choose to be:\n\n G(3)a1a2a3μ1μ2μ3(p1,p2,p3)=fa1a2a3[G(3)(μ2,μ2,μ2)Ttreeμ1μ2μ3+\n H(3)(μ2,μ2,μ2)(p1−p2)μ3(p2−p3)μ1(p3−p1)μ2μ2] (5)\n\nwith defined in (2). To project out we contract with the appropriate tensor:\n\n G(3)(μ2,μ2,μ2)fa1a2a3=118μ2G(3)a1a2a3μ1μ2μ3(p1,p2,p3)\n [Ttreeμ1μ2μ3+(p1−p2)μ3(p2−p3)μ1(p3−p1)μ22μ2] (6)\n\nIn the following we will call this momentum configuration the “symmetric” one, and this defines the MOM scheme.\n\nWe have also considered the scheme defined by subtracting the vertex function at the asymmetric Euclidean point , . In Landau gauge there remains only one tensor, the one in (2), which simplifies:\n\n G(3)a1a2a3μ1μ2μ3(p,0,−p)=2fa1a2a3pμ2[δμ1μ3−pμ1pμ3μ2]G(3)(μ2,0,μ2) (7)\n\nand the scalar factor is extracted via\n\n G(3)(μ2,0,μ2)fa1a2a3=16μ2G(3)a1a2a3μ1μ2μ3(p,0,−p)δμ1μ3pμ2 (8)\n\nIn the following we will call this momentum configuration the “asymmetric” one.\n\nSome caution is in order for the latter asymmetric configuration. From (1) it is clear that\n\n G(2)(p2)δa1a2=13∑μG(2)a1a2μμ(p,−p) (9)\n\nfor any non vanishing value of the momentum. But when the momentum vanishes, the term is undetermined. It could seem quite natural to follow by continuity formula (9). On the other hand since only the tensor is defined for zero momentum, leads to replacing the factor 1/3 by 1/4. 
Indeed, the Landau gauge condition does not eliminate global gauge transformations, and one additional degree of freedom is left at zero momentum. This theoretical issue is delicate but it is perfectly obvious that the numerical results favor in a dramatic way the factor 1/4. When using the factor 1/3 no sign of perturbative scaling can be seen. The factor 1/4 was used in and we will follow the same recipe.\n\n### 1.1 Computing ΛQCD\n\nThe conventional two-loop is obtained in any scheme from by\n\n Λ(c)≡μexp(−2πβ0α(μ2))×(β0α(μ2)4π)−β1β20 (10)\n\nwhere\n\n μ∂α∂μ=−β02πα2−β14π2α3−β264π3α4−... (11)\n\n, and is scheme dependent (). Integrating exactly eq. (11) expanded up to order , and imposing the asymptotic limit when , leads to the definition\n\n (12)\n\nat two loop and to the three loop :\n\n Λ(3)≡Λ(c)(α)(1+β1α2πβ0+β2α232π2β0)β12β20×\n exp{β0β2−4β212β20√Δ[arctan(√Δ2β1+β2α/4π)−arctan(√Δ2β1)]} (13)\n\nwhen .\n\nOne simple criterium has been proposed in to exhibit perturbative scaling: when plotting (10) as a function of perturbative scaling implies that should become constant for large enough . We will thus try to fit each of the formulae (10), (12), (13), expressed in terms of our measured , as a constant. All these formulae converge to the same when . But since our fits are for in the range of 0.3 - 0.5, and since they do not have the same dependence on they should not all fit our data. However, as we shall see, within our errors, acceptable fits are possible with (10), and (13) varying on a wide range. This is due to the fact that, even with our rather large statistics, the three loop effect, being only logarithmic, does not modify strongly enough the variation of in our fitting range although the resulting fitted depends significantly on the formula used and on . In other words we have different acceptable fits, with slightly different slopes in , which lead asymptotically, when , to significantly different ’s. There results a systematic error which cannot be fully eliminated until is really computed.\n\nNotwithstanding this problem, we believe that the possibility to fit several of these formulae by a constant on a large range of and with small statistical errors, is an indication that perturbative scaling has been reached. In other words, our data show that the uncertainty is of logarithmic type (higher loops), but there is no room for significant power corrections. To study power corrections, one has to consider with care lower scales , and we plan to do that in a forthcoming publication .\n\nIn order to compare different schemes, it is standard to translate the results into the scheme. Once known in any scheme, a one loop computation is enough to yield in any other scheme , . From and we get for zero flavors\n\n ΛMOM=3.334Λ¯¯¯¯¯¯¯MS,Λ˜MOM=exp(70/66)≃2.888Λ¯¯¯¯¯¯¯MS. (14)\n\n### 1.2 Momenta\n\nIn a finite hypercubic volume the momenta are the discrete set of vectors\n\n pμ=2πnμ/L (15)\n\nwhere are integer and is the lattice size. The isometry group for momenta is generated by the four reflections for and the permutations between the four directions such as . Altogether this group has elements. We use fully this symmetry in order to increase the statistics. The functions and in eqs (1) and (7) are systematically symmetrized over the momenta lying on one given group orbit. The number of distinct momenta in an orbit is 384 or a divisor of 384 (when the momentum is invariant by some subgroup).\n\nIn the case of in eq. 
(5) we furthermore symmetrize over the 6 permutations of external legs (Bose symmetry). The number of elements in an orbit will be or one of its divisors.\n\n### 1.3 Triplets of external momenta\n\nWe build all triplets of momenta up to some maximum value of the momentum to be specified later.\n\nIn the asymmetric case, there are as many triplets as momenta. For every integer number there exists at least one orbit of the isometry group with equal to that integer. The number of elements is often a much smaller number than 384: 1, 8, 16, 24, 64, etc.\n\nIn the symmetric case, has to be an even number: , where the subindices label the external particles, and consequently, being an integer, is even. It happens that for every even integer, we have found at least one orbit. The number of elements in one orbit is often a large number, 2304 and 1152 are frequent, 576 and 192 are less. Notwithstanding these larger sets, the statistical noise will turn out to be larger in the symmetric case than in the asymmetric one. Let us give some examples of symmetric triplets: for : and its 192 tranformed by the isometry-Bose group; for : and its 192 transformed. For : its 192 transformed. For there are 6 orbits, for example and its 2304 transformed, and its 1152 transformed, etc.\n\n## 2 Lattice calculation of αs and ΛQCD.\n\n### 2.1 The calculation.\n\nThe calculation has been performed on a QUADRICS QH1, with hypercubic lattices of and sites at , at ( all three with 1000 configurations) and at (100 configurations), combining the Metropolis and the overrelaxation algorithms.\n\nThe configurations have been transformed to the Landau gauge by a combination of overrelaxation algorithm and Fourier acceleration. We end when and when the spatial integral of is constant in time to better than .\n\nWe define\n\n Aμ(x+^μ/2)=Uμ(x)−U†μ(x)2iag0−13Tr⎛⎝Uμ(x)−U†μ(x)2iag0⎞⎠ (16)\n\nwhere indicates the unit lattice vector in the direction and is the bare coupling constant, and compute the n-point Green functions in momentum space from\n\n G(n)a1a2⋯anμ1μ2⋯μn(p1,p2,⋯pn)= (17)\n\nwhere , indicates the Monte-Carlo average and where\n\n Aaμ(p)=12Tr[∑xAμ(x+^μ/2)exp(ip(x+^μ/2))λa] (18)\n\nbeing the Gell-Mann matrices and the trace being taken in the color space.\n\nWe have computed the Fourier transforms up to a maximum momentum of GeV at , and GeV at and . These maxima correspond to for and for all the other cases.\n\n### 2.2 Check of the color dependence\n\nFrom the color structure of QCD we expect the two point Green functions to be proportional to the color tensor . This is indeed the case to an accuracy of the order of 1 %. Furthermore one can prove from gauge symmetry (global and local) and Bose symmetry that the three point Green functions have to be proportional to in the MOM and schemes. This is indeed the case, but the errors now depend on the momentum. For small values of the agreement is of a few percent. The errors increase with and when reaches the ’s the error reaches 100 %. This is an indication that the large momenta are grieved by noise. Luckily this caution is necessary only for the very few largest values of that we have considered. Indeed we will exclude the points from our fits. In order to reduce the noise we work from now on with color averaged Green functions: and .", null, "Figure 1: Comparison between the volumes of 164 and 244 at β=6 for the coupling α(μ) (figs. a and b), Λ(c)MOM (fig. c) and Λ(c)˜MOM (fig. d). 
No “sinus improvement” has been applied here.\n\n### 2.3 Finite volume effects\n\nThe finite volume effects can be checked by a comparison of the two calculations at , Fig 1. For relatively small , close to the maximum of , there is a visible decrease of when the volume is increased.\n\nFor larger , the volume dependence is still visible, but reduced to a few percent. Comparing the values of fitted in the asymptotic region given in table 1, one finds\n\n Λsym(24)/Λsym(16)=0.96±0.02,Λasym(24)/Λasym(16)=0.97±0.02 (19)\n\nwhich indicates that the finite volume effect affects moderately the asymptotic estimate of . More study is needed to quantify precisely this effect. Still, from gross estimates, our largest physical volume, , lies presumably within 5% above the infinite volume limit.\n\n### 2.4 Scaling in μ and O(a2p2) effects", null, "Figure 2: The effect of the “sinus improvement” on Λ(c)¯¯¯¯¯¯¯MS is illustrated in the case of the 1000 configurations at (β,V)=(6.2,244). A similar improvement can be seen in all cases.\n\nFigs. 1(a,b) show the shape of . The same shape is seen for the other ’s. In fact scales in to a very good accuracy. We keep this study for another publication .\n\nTurning to the scaling in , we see from figs 1(c,d) that both ’s do not really show plateaus at large momentum: they go through a maximum around 2 GeV and fall down later on. Our study shows that this feature cannot be cured simply by a three loop effect. Using eq. (13) with different values for cannot lead to acceptable plateaus for all lattice spacings. Since the fall at large is observed systematically, beyond statistical errors, but decreases when increases, we conjecture that we deal with an effect.\n\nWe have successfully tried a correction which will be described now. It starts from the remark that in the lattice Landau gauge, obtained by minimizing , does not vanish while does, when is defined from eq (18) and where\n\n ~pμ=2asin(apμ2). (20)\n\nThe latter momentum differs from the one in (15) by : . It results that the lattice two point Green function is not really proportional to the tensor in (1) but to the tensor deduced from (1) with substituted by , .\n\nWe perform a similar change in the tensors used to extract . The projectors in (6) and (8) have been normalized to give 1 when contracted to the tensors which multiply in (5) and (7) respectively. Assuming that the lattice calculations is such as to produce the tensors in (5) and (7) with substituted by , there would be a bias in our formulae (6) and (8). Indeed the contraction of the “tilded” tensors in (5) and (7) with the tensors in (6) and (8) is smaller than one and decreases with increasing . We tentatively correct the bias by dividing the result in (3) by this factor smaller than one. We shall refer to this as the “sinus improvement”.\n\nFor brevity we only show in fig. 2. The improvement of the plateaus is dramatic. The large fall has been considerably reduced. The improvement is confirmed by a reduction of the per degree of freedom from exceedingly large values to acceptable ones, see tables 2 and 3.\n\nOf course, this is only an ad hoc improvement, by no way rigorous and systematic. Fitting directly a corrective term of the form leads also to drastically improved with best values of in reasonable agreement with the “sinus improvement” (). It should be stressed that the sinus improvement, and the fits yield very similar values of . 
We may thus conclude that the systematic error on is moderate after “sinus improvement”.\n\n### 2.5 Three loop effect.\n\nA final source of systematic uncertainty comes from our ignorance of in the MOM scheme. From a preliminary perturbative calculation in the scheme we get . On the other hand we unsuccessfully tried to fix non-perturbatively from our asymptotic fits. The ratio , eqs (10)-(13), drops from 1 when increases, the drop increasing with . As a result, the fitted value for will decrease as increases. Simultaneously the shape of the plateaus are modified. In principle, the requirement of an acceptable might have restricted the admissible domain for . Unhappily our preliminary analysis did not turn out to be so restrictive. Only for below does the become prohibitive, see for example the case () in tables 2 and 3. It might look strange that fits well while does not, both being two-loop formulae. In fact is only an approximate two-loop formula which can be proven to be very close to with .\n\nWe therefore cannot do better in the MOM scheme, at present, than to provide fits of as a function of . For comparison we also provide the same analysis in the scheme. The maximum value for which we consider is since, for such a large value, the term in the function (11) is of the same order as the term for our range of . If was larger than that, the perturbative expansion would be dubious, and the evidence for perturbative scaling shown by our data would appear as a miraculous fake.\n\n### 2.6 Scaling in a\n\nThe “sinus improved” ’s exhibit a very clear scaling when varies above 2.1 GeV, as can be seen from the quality of the plateaus in figs 2 and 3 and from the per d.o.f. In this subsection we want to study further the scaling when , i.e. , is varied.\n\nSince depends linearly on , the consistency of our fits can only be checked through a spacing independent ratio. We use the ratios , see table 1, where is the central value of string tension computed in .\n\nIn order to write in physical units we then multiply all ratios by one global scale factor: MeV tuned to the central value of a very recent fit from the mass: GeV. We take the central value: GeV, whence GeV and GeV.\n\nThis leads to the plots in fig. 3. The presence of nice plateaus is striking. We fit the average on these plateaus, for scales never smaller than 2.1 GeV, and as high as allowed by lattice effects. The results are presented in tables 1, 2 and 3. The fits for and for a large range of yield a per degree of freedom smaller than 1.5. Scaling in the lattice spacing is striking, especially for those lattice parameters which correspond to a similar physical volume of , i.e. and . They average to:\n\n Λ(c)¯¯¯¯¯¯¯MS=378(6)MeV (symmetric)Λ(c)¯¯¯¯¯¯¯MS=355(4)MeV (asymmetric)\n Λ(3)¯¯¯¯¯¯¯MS(β2=1.69β2¯¯¯¯¯¯¯MS=4824)={327(5)MeV (symmetric)313(3)MeV (asymmetric) (21)\n\nwhere the errors are only statistical. results from our preliminary calculation in the asymmetric scheme. For comparison we provide the result with the same in the symmetric case. The result at the larger volume of , , presumably close to the infinite volume limit (section 2.3), is:\n\n Λ(c)¯¯¯¯¯¯¯MS=361(6)MeV (symmetric)Λ(c)¯¯¯¯¯¯¯MS=345(6)MeV (asymmetric)\n Λ(3)¯¯¯¯¯¯¯MS(β2=1.69β2¯¯¯¯¯¯¯MS=4824)={311(5)MeV (symmetric)303(5)MeV (asymmetric) (22)\n\nVarying we find acceptable ’s from up to beyond which we take as the maximum perturbatively consistent value, see section 2.5. In this range of the fitted have, to a surprisingly good approximation, a linear dependence on . 
We provide the result in the next section.\n\nFinally it is worth mentioning that we have also checked scaling of in over the whole range in , including the small values. We leave this point for a forthcoming publication", null, "Figure 3: The fits for aΛ(c)¯¯¯¯¯¯¯MS/(a√σ)√σ0 (with √σ0=445 MeV), including the “sinus improvement” are shown for all studied β’s and volumes.\n\n## 3 Discussions and conclusions\n\nThere is scaling, as can be seen first from the plateaus of as a function of the momentum scale , and second from the striking agreement of the runs for different ’s. We now quote our final results from our largest physical volume, , which we estimate to give values of less than 5% from the infinite volume limit.\n\nThe analysis for symmetric momentum configurations is better grounded theoretically since it avoids the delicate problem of zero momentum. On the other hand, this analysis is noisier than the asymmetric one which exhibits beautiful plateaus. The good agreement of these two analyses allows a sort of reciprocal support.\n\nSeveral other lattice estimates of have been performed. The ALPHA collaboration, , quotes MeV. Other results are 244(8) MeV , MeV , 340(50) .\n\nOur results for happen to be very sensitive to the three loop effect but cannot be fitted non perturbatively from our data. A wide range, is allowed, in which the three loop can be approximated by the following formulae:\n\n Λ(3)¯¯¯¯¯¯¯MS=[(412−59β2β2,¯¯¯¯¯¯¯MS±6)MeV]a−1(β=6.0)1.97GeV (symmetric)\n Λ(3)¯¯¯¯¯¯¯MS=[(382−46β2β2,¯¯¯¯¯¯¯MS±5)MeV]a−1(β=6.0)1.97GeV (asymmetric) (23)\n\nComparing the results in both schemes seems to indicate that the ’s in MOM and schemes are not too different. A calculation of in the MOM scheme would be most welcome.\n\nOur preliminary computation of in the scheme, , uses the results of and yields a value of . Our final result is then\n\n Λ(3)¯¯¯¯¯¯¯MS=(303±5MeV)a−1(β=6.0)1.97GeV(asymmetric) (24)\n\n## Acknowledgements.\n\nThese calculations were performed on the QUADRICS QH1 located in the Centre de Ressources Informatiques (Paris-sud, Orsay) and purchased thanks to a funding from the Ministère de l’Education Nationale and the CNRS. We are specially indebted to Francesco Di Renzo, Claudio Parrinello and Carlotta Pittori for thorough discussions which helped initiating this work We acknowledge Damir Becirevic, Konstantin Chetyrkin, Yuri Dokshitzer, Ulrich Ellwanger, Gregory Korchemsky and Alfred Mueller for several inspiring comments." ]
[ null, "https://media.arxiv-vanity.com/render-output/4807112/x1.png", null, "https://media.arxiv-vanity.com/render-output/4807112/x5.png", null, "https://media.arxiv-vanity.com/render-output/4807112/x7.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8943184,"math_prob":0.96950954,"size":22877,"snap":"2021-31-2021-39","text_gpt3_token_len":5776,"char_repetition_ratio":0.12276483,"word_repetition_ratio":0.028100025,"special_character_ratio":0.2643266,"punctuation_ratio":0.13230501,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9881765,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T03:02:53Z\",\"WARC-Record-ID\":\"<urn:uuid:3c351f5d-1a90-4807-83a4-e268547f4f6d>\",\"Content-Length\":\"859386\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2cae82e0-5379-4e16-9bbe-a1f2d78b77d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:53569d59-0102-4624-b57a-84f4613dd5a3>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/hep-ph/9810322/\",\"WARC-Payload-Digest\":\"sha1:3XS3Y2K4SIGD6DWML7RMRBGFX6PRMJWS\",\"WARC-Block-Digest\":\"sha1:6PVHH2VE6QHMHFX3MSNYC7OUNRZXMUJT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057131.88_warc_CC-MAIN-20210921011047-20210921041047-00576.warc.gz\"}"}
https://www.studentstutorial.com/java/java-fibonacci-series.php
[ "# Java program for fibonacci series\n\nIn the Fibonacci series, each number is the sum of the previous two numbers, for example 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, etc. By default, the first two numbers of the Fibonacci series are 0 and 1.\n\n#### fibonacci.java\n\nclass FibonacciSeries{\npublic static void main(String args[])\n{\nint n1=0,n2=1,n3,i,count=10;\nSystem.out.print(n1+\" \"+n2); //printing 0 and 1\nfor(i=2;i<count;i++)\n{\nn3=n1+n2;\nSystem.out.print(\" \"+n3);\nn1=n2;\nn2=n3;\n}\n}\n}\n\n# Java Program to Display Fibonacci Series using recursion\n\n#### fibonacci.java\n\nclass FibonacciSeries1{\nstatic int n1=0,n2=1,n3=0;\nstatic void printFibonacci(int count)\n{\nif(count>0)\n{\nn3=n1+n2;\nn1=n2;\nn2=n3;\nSystem.out.print(\" \"+n3);\nprintFibonacci(count-1); //recursive call\n}\n}\npublic static void main(String args[])\n{\nint count=10;\nSystem.out.print(n1+\" \"+n2); //printing 0 and 1\nprintFibonacci(count-2); //first two numbers are already printed\n}\n}\n\n# Java Program to Display Fibonacci Series using while loop\n\n#### fibonacci.java\n\npublic class FibonacciSeries2{\npublic static void main(String[] args) {\nint i = 1, n = 10, t1 = 0, t2 = 1;\nSystem.out.print(\"First \" + n + \" terms: \");\nwhile (i <= n)\n{\nSystem.out.print(t1 + \" + \");\nint sum = t1 + t2;\nt1 = t2;\nt2 = sum;\ni++;\n}\n}\n}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5445951,"math_prob":0.98314613,"size":852,"snap":"2019-13-2019-22","text_gpt3_token_len":319,"char_repetition_ratio":0.14740565,"word_repetition_ratio":0.1971831,"special_character_ratio":0.44248825,"punctuation_ratio":0.2648402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999484,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T21:56:12Z\",\"WARC-Record-ID\":\"<urn:uuid:46eabca8-50c6-4eb6-9292-856da9c66fdf>\",\"Content-Length\":\"44819\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47e5f28f-192f-4b02-a25e-0c991aa9156c>\",\"WARC-Concurrent-To\":\"<urn:uuid:513f47ca-976d-4934-bffb-42bbdf87931c>\",\"WARC-IP-Address\":\"50.16.132.243\",\"WARC-Target-URI\":\"https://www.studentstutorial.com/java/java-fibonacci-series.php\",\"WARC-Payload-Digest\":\"sha1:H5LHZOKU2JOG5LYMHAWHJ5LMLQWN2B4Z\",\"WARC-Block-Digest\":\"sha1:TO6UHO344MQKVDUQ4WVCX6B2ERM6XYRC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202131.54_warc_CC-MAIN-20190319203912-20190319225912-00229.warc.gz\"}"}
https://www.specularium.org/3d-time/item/246-mach-s-principle
[ "## 3 Dimensional TimeKeys to Quanta\n\nThursday, 25 October 2018 20:56\n\n## Mach's Principle Featured\n\nAbstract.\n\nThis paper presents a quantitative equation for Mach’s Principle that appears to satisfy the requirements of most of the qualitative expressions of Mach’s Principle, and it suggests that the universe has spatial closure and a Transactional gravitational exchange mechanism.\n\nMach’s Principle.\n\nWe recognise mass as having two familiar components, firstly inertial mass – a resistance to acceleration as quantified by Newton’s second law that force equals mass times acceleration, shown here by denoting the mass involved as the inertial mass.", null, "Secondly mass also exhibits a gravitational effect, any two masses appear to attract each other with a force described by Newton’s equation of gravity. In this, the force between a mass m (usually a large mass like a planet or a star) and the mass of interest depends on both masses, the gravitational constant G, and the square of the distance between them. He we denote the mass of interest (which usually means the smaller one that falls or moves most) by its gravitational mass", null, "Now in Newton’s theory, inertial mass  miraculously seems to exactly equal gravitational mass  and this equivalence explains Galileo’s observation that all masses fall at the same rate.  Thus although a heavier mass has more inertia and needs more force to move it, its greater mass gives gravity an exactly proportional extra amount of gravitational mass to work on, so to speak, so all masses fall at the same speed.\n\nOn the human scale, inertial mass often seems intuitively ‘stronger’ than gravitational mass, we can feel the resistance a cannon ball offers to our pushing it around, but we cannot feel the gravitational attraction between it and ourselves. This arises because the ‘ratio’ between gravitational mass and inertial mass has a very small value represented by G, the gravitational constant. Nevertheless, the ratio remains constant and an amount of material that has a certain amount of inertia will always have a certain corresponding amount of gravity.\n\nNow Einstein saw something deeper and beyond mere convenient coincidence in the equivalence of inertial and gravitational mass (or more precisely in the equivalence of acceleration and gravity), and he reformulated our ideas about gravity.\n\nHe described gravity not as some sort of force at a distance but as an effect arising from the curvature of spacetime by mass. Thus, a massive object like a planet curves spacetime about itself and freely moving objects all follow this curvature by falling towards it at the same rate, and they do not actually feel any forces at all whilst in free fall.\n\n‘Spacetime tells matter how to move; matter tells spacetime how to curve.’ As John Archibald Wheeler so elegantly summarised the idea of General Relativity.\n\nThe description of gravity as spacetime curvature gives far more accurate predictions than the basic Newtonian model but it does not say much about the inertial component of mass. In the Newtonian model, inertial mass just appears as somehow intrinsic to an object and gravity appears as a mysterious ‘force at a distance’. 
Neither of these ideas sits comfortably with the relativistic perspective in which properties arise on a relational basis.\n\nMach's Principle has hung around on the fringes of cosmology and relativity for 100 years, it has influenced a number of theorists but nobody seems to have managed to formulate it concisely or to develop some maths for it. Roughly speaking it suggests that the inertial mass of any object arises because of the effect of all the stuff in the entire universe, so as Mach put it, if the subway jerks, the far stars and galaxies throw you to the ground\n\nThis idea has lurked within physics since Einstein named it after some remarks by Mach (who devised the Mach numbers for multiples of the speed of sound). Nobody appears to have found a way to quantify it and various qualitative versions of it exist. It broadly states that some relationship should exist between the inertia of any body and ALL of the rest of the material in the entire universe. This would seem to imply a spatially closed universe, otherwise inertial mass, and many of the laws of physics, would vary with time.\n\nDebate rages about whether General Relativity really incorporates Mach’s Principle or not. The currently most popular interpretation of cosmological observations asserts that the universe expands from some sort of a big bang, in which case Mach’s Principle seems invalid unless inertial masses either vary with time or remain invariant to the distribution of matter within the universe. As it appears that the distribution of matter in the universe always remains homogenous on a large scale, this distribution invariance comes down to an invariance to the size of the universe which seems highly unlikely as  the spacetime curvature we recognise as gravity remains very much distance dependent.\n\nNow if the inertia of any object depends on the entire rest of the universe then it must depend on the gravitational mass of the body in question, the gravitational mass of the entire universe, the Gravitational constant that relates them, and the size of the universe, and the speed of light. 
Moreover, the ratio of inertial to gravitational mass must remain constant as we have every indication from astronomy that it has not varied over the observable history of the universe.\n\nNow in Hypersphere Cosmology1 , which posits a non-expanding universe: -", null, "Where G = The Gravitational constant\n\nM = The Mass of the universe\n\nL = The antipode distance in a hyperspherical universe\n\nc = The speed of light.\n\nThis strongly suggests that the following equation quantitatively fulfils the major qualitative requirements of most expressions of Mach’s Principle.", null, "If the gravitational constant of the universe or the mass of the universe were to increase so would all inertial masses, conversely any increment in the size of the universe or lightspeed would decrease all inertial masses.\n\nGM/Lc^2 thus appears as a sort of scalar field, omnipresent and apparently non-local* in the hypersphere of the universe, and giving rise to an acceleration A = GM/L^2 which has detectable effects such as the Pioneer anomaly and the anomalous galactic rotation curves.\n\n(See Hypersphere Cosmology1 )\n\nAt present the equation has no practical use except to complete hypersphere cosmology because there seems no way we could currently manipulate the effects of G,M,L, or c, in the vicinity of a spacecraft, but if we could reduce inertial mass somehow then we could easily move matter around the universe.\n\n*The apparently instantaneous effects of both inertia and ‘static’ gravitational spacetime curvatures raises interesting questions. Gravitational waves undoubtedly propagate at light-speed from accelerated masses as Einstein predicted and as recent experiment has confirmed.\n\nIf we accept that nothing can travel faster than light then the apparently instantaneous effects of inertia and static spacetime curvatures, (gravity ‘fields’) may arise from a Transactional Exchange between curvatures. The laws of gravity seem completely time symmetric and capable of supporting advanced negative curvatures propagating down retarded positive curvature paths back into the past to create apparently instantaneous effects.\n\nThe time-symmetric nature of all physical laws except the Second Law of Thermodynamics (which states that in an isolated system Entropy will increase only) at first appears very mysterious and physicists usually quietly ignore the time reversed solutions to their equations and calculations.\n\nHawkin remarked that ‘Entropy increases with Time because we measure Time in the direction in which Entropy increases’. We have no way of telling in which ‘direction’ ‘time goes’ or even if it actually ‘goes’ anywhere.\n\nIf however, we adopt the Transactional Interpretation and extend it beyond quantum physics to gravitation and to model gravitational effects as arising from positive spacetime curvatures propagating forward in time and negative spatial curvatures propagating backwards from the future to create effects in the present then we can explain the apparently instantaneous effects of inertia and static gravitational ‘fields’.\n\nOf course, the same argument applies to electrostatic fields that would then correspond to a special class of spacetime curvatures.\n\nAccelerations of masses and electrostatic charges will of course create disturbances in spacetime curvatures that propagate at light speed as bosons with the wave –particle duality characteristic of quanta. 
However, the concept of virtual bosons mediating static gravitational and electrostatic interactions becomes redundant.\n\nReference1.\n\nHypersphere Cosmology 2. Author: Peter J Carroll. http://vixra.org/abs/1601.0026" ]
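As a rough numerical illustration of the two quantities discussed in the essay above (not taken from the article itself), the sketch below plugs commonly quoted order-of-magnitude values for the mass and the antipode distance of the universe into GM/(Lc²) and A = GM/L². The values of M and L are assumptions for illustration only; the article does not supply them, so the printed numbers should be read as order-of-magnitude checks, nothing more.

```python
# Illustrative numbers only; M and L below are assumed order-of-magnitude inputs,
# not values taken from the article.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 1e53        # assumed mass of the universe, kg (order of magnitude)
L = 1.3e26      # assumed antipode distance, m (order of magnitude)

dimensionless = G * M / (L * c**2)   # the GM/(Lc^2) factor discussed in the text
A = G * M / L**2                     # the acceleration A = GM/L^2

print(f"GM/(L c^2) ~ {dimensionless:.2f}")            # of order 1 for these inputs
print(f"A = GM/L^2 ~ {A:.1e} m/s^2")                  # ~4e-10 m/s^2 for these inputs,
# the same order of magnitude as the Pioneer anomaly (~8.7e-10 m/s^2)
```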
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9309041,"math_prob":0.93104094,"size":8793,"snap":"2020-45-2020-50","text_gpt3_token_len":1702,"char_repetition_ratio":0.15269086,"word_repetition_ratio":0.011577424,"special_character_ratio":0.1788923,"punctuation_ratio":0.06781915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9802641,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T09:21:04Z\",\"WARC-Record-ID\":\"<urn:uuid:4a668e84-adbd-40a6-88eb-810c9e0ae1db>\",\"Content-Length\":\"103287\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a947e04-0c16-47b1-8fdb-b89b3cfcee0a>\",\"WARC-Concurrent-To\":\"<urn:uuid:c8aea239-f03d-4d12-b264-076575608eeb>\",\"WARC-IP-Address\":\"162.13.79.108\",\"WARC-Target-URI\":\"https://www.specularium.org/3d-time/item/246-mach-s-principle\",\"WARC-Payload-Digest\":\"sha1:EAS2H3GJ7DLMOWPSKO66PIQ6WATPA7AB\",\"WARC-Block-Digest\":\"sha1:Q2RYHCWZALJBXFVMDKZIOWSRSLCMXOZ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107891203.69_warc_CC-MAIN-20201026090458-20201026120458-00411.warc.gz\"}"}
http://surveillance.r-forge.r-project.org/pkgdown/reference/makeControl.html
[ "Generate control Settings for an hhh4 Model\n\n## Usage\n\nmakeControl(f = list(~1), S = list(0, 0, 1), period = 52, offset = 1, ...)\n\n## Arguments\n\nf, S, period\n\narguments for addSeason2formula defining each of the three model formulae in the order (ar, ne, end). Recycled if necessary within mapply.\n\noffset\n\nmultiplicative component offsets in the order (ar, ne, end).\n\n...\n\nfurther elements for the hhh4 control list. The family parameter is set to \"NegBin1\" by default.\n\n## Value\n\na list for use as the control argument in hhh4.\n\n## Examples\n\nmakeControl()\n\n## a simplistic model for the fluBYBW data\n## (first-order transmission only, no district-specific intercepts)\ndata(\"fluBYBW\")\nmycontrol <- makeControl(\nf = list(~1, ~1, ~t), S = c(1, 1, 3),\noffset = list(population(fluBYBW)), # recycled -> in all components\nne = list(normalize = TRUE),\nverbose = TRUE)\nstr(mycontrol)\nif (surveillance.options(\"allExamples\"))\n## fit this model\nfit <- hhh4(fluBYBW, mycontrol)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50526786,"math_prob":0.94740593,"size":924,"snap":"2023-40-2023-50","text_gpt3_token_len":260,"char_repetition_ratio":0.1173913,"word_repetition_ratio":0.08955224,"special_character_ratio":0.25865802,"punctuation_ratio":0.16875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9666825,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T10:33:29Z\",\"WARC-Record-ID\":\"<urn:uuid:fbc00839-271d-46e6-8254-aea368e9b3f3>\",\"Content-Length\":\"10805\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1843f0c8-8dcf-4d71-a5d3-96159440807e>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9af175b-5637-41d8-bf5e-96f1eac66964>\",\"WARC-IP-Address\":\"137.208.57.38\",\"WARC-Target-URI\":\"http://surveillance.r-forge.r-project.org/pkgdown/reference/makeControl.html\",\"WARC-Payload-Digest\":\"sha1:D4OFXHH3YFW6TNO5VHBHMX3YVJSCFVTL\",\"WARC-Block-Digest\":\"sha1:H2GEPI7SGX3WWHGKALHZINQATCJQWOTA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100184.3_warc_CC-MAIN-20231130094531-20231130124531-00072.warc.gz\"}"}
https://math.libretexts.org/Bookshelves/Combinatorics_and_Discrete_Mathematics/Yet_Another_Introductory_Number_Theory_Textbook_-_Cryptology_Emphasis_(Poritz)/02%3A_Congruences/2.02%3A_Linear_Congruences
[ "# 2.2: Linear Congruences\n\n$$\newcommand{\NN}{\mathbb N}$$\n$$\newcommand{\RR}{\mathbb R}$$\n$$\newcommand{\QQ}{\mathbb Q}$$\n$$\newcommand{\ZZ}{\mathbb Z}$$\n$$\newcommand{\Cc}{\mathcal C}$$\n$$\newcommand{\Dd}{\mathcal D}$$\n$$\newcommand{\Ee}{\mathcal E}$$\n$$\newcommand{\Ff}{\mathcal F}$$\n$$\newcommand{\Kk}{\mathcal K}$$\n$$\newcommand{\Mm}{\mathcal M}$$\n$$\newcommand{\Pp}{\mathcal P}$$\n$$\newcommand{\ind}{\operatorname{ind}}$$\n$$\newcommand{\ord}{\operatorname{ord}}$$\n\nBecause congruence is analogous to equality, it is natural to ask about the analogues of linear equations, the simplest equations one can solve in algebra, but using congruence rather than equality. In this section, we discuss linear congruences of one variable and their solutions.\n\nDefinition: Linear Congruence in One Variable\n\nGiven constants $$a,b\in\ZZ$$ and $$n\in\ZZ$$, a congruence of the form $$ax\equiv b\pmod n$$ where $$x\in\ZZ$$ is unknown is called a linear congruence in one variable.\n\nIf a linear congruence has one solution, then it has infinitely many:\n\nTheorem $$\PageIndex{1}$$\n\nGiven constants $$a,b\in\ZZ$$, $$n\in\ZZ$$, and a solution $$x\in\ZZ$$ to the linear congruence $$ax\equiv b\pmod n$$, any other $$x^\prime\in\ZZ$$ satisfying $$x^\prime\equiv x\pmod n$$ is also a solution of the same congruence.\n\n[Note: in the early history of number theory, before Gauss, one talked about:\n\nDefinition: Diophantine Equation\n\nAn algebraic equation whose constants and variables are all integers is called a Diophantine equation.\n\nThen, the modern linear congruence $$ax\equiv b\pmod n$$, for $$a,b\in\ZZ$$ and $$n\in\NN$$ is equivalent to the linear Diophantine equation $$ax-ny=b$$ in the two unknowns $$x$$ and $$y$$.]\n\nThe following gives a fairly complete characterization of solutions of linear congruences:\n\nTheorem $$\PageIndex{2}$$\n\nLet $$a,b\in\ZZ$$ and $$n\in\NN$$ and consider the linear congruence $ax\equiv b\pmod n\ .$ Setting $$d=\gcd(a,n)$$, we have\n\n1. 
If $$d\\nmid b$$, then the congruence has no solutions.\n2. If $$d\\mid b$$, then the congruence has exactly $$d$$ solutions which are distinct modulo $$n$$.\nProof\n\nFor Part 1, we prove its contrapositive. So assume the congruence has solutions, meaning $$\\exists x,k\\in\\ZZ$$ such that $$ax-b=kn$$, or $$ax-kn=b$$. But since $$d=\\gcd(a,n)$$ is a common divisor of $$a$$ and $$n$$, it divides the linear combination $$ax-kn=b$$. Hence $$d\\mid b$$.\n\nNow for Part 2, assume $$d\\mid b$$, so $$\\exists k\\in\\ZZ$$ such that $$kd=b$$. But from Theorem 1.5.3 we know $$\\exists p,q\\in\\ZZ$$ such that $$d=pa+qn$$. This means that $$kpa+kqn=kd=b$$ or, rearranging, $$a(kp)-b=(-kq)n$$. Hence $$n\\mid a(kp)=b$$, i.e., $$a(kp)\\equiv b\\pmod n$$ and thus $$x=kp$$ is one solution to the linear congruence $$ax\\equiv b\\pmod n$$.\n\nFinally, let us prove that there are the correct number of solutions, mod $$n$$, of the congruence equation. We have just seen that there is at least one $$x\\in\\ZZ$$ satisfying $$ax\\equiv b\\pmod n$$. Let $$y\\in\\ZZ$$ be any other solution. Then $$ax\\equiv b\\equiv ay\\pmod n$$. By part (2) of Theorem 2.1.3, $$x\\equiv y\\pmod{n/d}$$, or $$\\delta = y-x\\equiv0\\pmod{n/d}$$. Now by Theorem 2.1.4, there are exactly $$d$$ possibilities, modulo $$n$$, for this $$\\delta$$. Thus there are $$d$$ solutions of $$ax\\equiv b\\pmod n$$ of the form $$x+\\delta$$.\n\nNote\n\nNotice that if $$a\\in\\ZZ$$ and $$n\\in\\NN$$ are relatively prime, then $$\\forall b\\in\\ZZ$$ there is a unique solution modulo $$n$$ to the equation $$ax\\equiv b\\pmod n$$.\n\nExample $$\\PageIndex{1}$$\n\nLet us find all the solutions of the congruence $$3x\\equiv 12\\pmod6$$. Notice that $$\\gcd(3,6)=3$$ and $$3\\mid 12$$. Thus there are three incongruent solutions modulo $$6$$. Using the Euclidean Algorithm to find the solution of the equation $$3x-6y=12$$ we get a solution $$x_0=6$$. Thus the three incongruent (modulo $$6$$) solutions are given by $$x_1=6\\pmod6$$, $$x_1=6+2=2\\pmod6$$ and $$x_2=6+4=4\\pmod6$$.\n\nAs we mentioned in the note above, the congruence $$ax\\equiv b\\pmod n$$ for $$a,b\\in\\ZZ$$ and $$n\\in\\NN$$ has a unique (modulo $$n$$) solution if $$\\gcd(a,n)=1$$. This will allow us to talk about modular inverses.\n\nGiven $$a\\in\\ZZ$$ and $$n\\in\\NN$$, a solution to the congruence $$ax\\equiv 1\\pmod n$$ for $$(a,n)=1$$ is called the inverse of $$a$$ modulo n. We denote such an inverse by $$a^{-1}$$, with the $$n$$ to be understood from context.\n\nStating formally what was just recalled from the above note, we have:\n\nCorollary $$\\PageIndex{1}$$\n\nGiven $$a\\in\\ZZ$$ and $$n\\in\\NN$$ which are relatively prime, the modular inverse $$a^{-1}$$ exists and is unique modulo $$n$$.\n\nExample $$\\PageIndex{2}$$\n\nThe modular inverse $$7^{-1}$$of $$7$$ modulo $$48$$ is $$7$$. Notice that a solution of $$7x\\equiv 1\\pmod{48}$$ is $$x\\equiv 7\\pmod{48}$$.\n\nExercise $$\\PageIndex{1}$$\n\n1. Find all solutions of $$3x\\equiv 6\\pmod9$$.\n2. Find all solutions of $$3x\\equiv 2\\pmod7$$.\n3. Find inverses modulo $$13$$ of $$2$$ and of $$11$$.\n4. Given $$a\\in\\ZZ$$ and $$n\\in\\NN$$, show that if $$a^{-1}$$ is the inverse of $$a$$ modulo $$n$$ and $$b^{-1}$$ is the inverse of $$b$$ modulo $$n$$, then $$a^{-1}b^{-1}$$ is the inverse of $$ab$$ modulo $$n$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7431723,"math_prob":1.0000097,"size":5143,"snap":"2021-43-2021-49","text_gpt3_token_len":1707,"char_repetition_ratio":0.16890445,"word_repetition_ratio":0.01690141,"special_character_ratio":0.34337935,"punctuation_ratio":0.11046512,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000093,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T19:46:05Z\",\"WARC-Record-ID\":\"<urn:uuid:543da3b2-8720-410a-b4fd-a1b794ef4672>\",\"Content-Length\":\"106544\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cfaba38d-02b5-4acc-8f2e-f8a4db5a4679>\",\"WARC-Concurrent-To\":\"<urn:uuid:20af61ac-6439-4eb2-8b6b-3060a2baa775>\",\"WARC-IP-Address\":\"13.249.42.110\",\"WARC-Target-URI\":\"https://math.libretexts.org/Bookshelves/Combinatorics_and_Discrete_Mathematics/Yet_Another_Introductory_Number_Theory_Textbook_-_Cryptology_Emphasis_(Poritz)/02%3A_Congruences/2.02%3A_Linear_Congruences\",\"WARC-Payload-Digest\":\"sha1:VY3V5HEZBAYLRA47YM7UCL3D3KSOUVOV\",\"WARC-Block-Digest\":\"sha1:OLP4HWRZAQLZ2ELTT5RHD7XKAXLTTQGD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362918.89_warc_CC-MAIN-20211203182358-20211203212358-00506.warc.gz\"}"}
https://rd.springer.com/chapter/10.1057%2F978-1-137-59900-1_19
[ "# Correlation and Simple Linear Regression in Applied Linguistics\n\n## Abstract\n\nThis chapter provides an applied description of two key methods to evaluate the association between two research variables. First, we provide a conceptual view of the notion of non-directional linear correlation. Using small datasets, we discuss the various behaviors of the correlation statistic, Pearson’s r, under different scenarios. Then, we turn our attention to a neighboring but practically different concept to evaluate the directional association between two research variables: the simple linear regression. Particularly, we shed light on one of the most useful purposes of simple linear regression and prediction. By end of the chapter, we present a conceptually overarching view that links the regression methods to all other methods that applied linguists often use to find important patterns in their data.\n\n## Keywords\n\nQuantitative methods Statistics Correlation Regression\n\n## References\n\n1. Cohen, J. (1968). Multiple regression as a general data-analytic system. Psychological Bulletin, 70, 426–443.\n2. Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. New York, NY: Routledge.Google Scholar\n3. DeKeyser, R. (2000). The robustness of critical period effects in second language acquisition. Studies in Second Language Acquisition, 22, 499–533.Google Scholar\n4. De Winter, J. C., Gosling, S. D., & Potter, J. (2016). Comparing the Pearson and spearman correlation coefficients across distributions and sample sizes: A tutorial using simulations and empirical data. Psychological methods, 21, 273–290.\n5. Draper, N. R., & Smith, H. (1998). Applied regression analysis. New York, NY: Wiley.Google Scholar\n6. Egbert, J., & Plonsky, L. (2015). Success in the abstract: Exploring linguistic and stylistic predictors of conference abstract ratings. Corpora, 10, 291–313.\n7. Field, A. (2013). Discovering statistics using IBM SPSS statistics. Thousand Oaks, CA: SAGE.Google Scholar\n8. Graybill, F. A., & Iyer, H. K. (1994). Regression analysis. New York, NY: Duxbury Press.Google Scholar\n9. Howell, D. C. (2013). Statistical methods for psychology. Belmont, CA: Cengage Learning.Google Scholar\n10. Johnson, J. S., & Newport, E. L. (1989). Critical period effects in second language learning: The influence of maturational state on the acquisition of English as a second language. Cognitive Psychology, 21(1), 60–99.\n11. Kline, R. B. (2015). Principles and practice of structural equation modeling. New York, NY: Guilford.Google Scholar\n12. Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied linear statistical models. New York, NY: McGraw-Hill.Google Scholar\n13. Norouzian, R., & Plonsky, L. (2018). Eta- and partial eta-squared in L2 research: A cautionary review and guide to more appropriate usage. Second Language Research, 34, 257–271.\n14. Norris, J. M. (2015). Statistical significance testing in second language research: Basic problems and suggestions for reform. Language Learning, 65(Supp. 1), 97–126.\n15. Norris, J. M., Ross, S., & Schoonen, R. (2015). Improving second language quantitative research. Language Learning, 65(Supp. 1), 1–8.\n16. Pearson, K. (1896). Mathematical contributions to the theory of evolution. III. Regression, heredity, and panmixia. Philosophical Transactions A, 373, 253–318.\n17. Pituch, K. A., & Stevens, J. P. (2016). 
Applied multivariate statistics for the social sciences: Analyses with SAS and IBM’s SPSS. New York, NY: Routledge.Google Scholar\n18. Plonsky, L. (2013). Study quality in SLA: An assessment of designs, analyses, and reporting practices in quantitative L2 research. Studies in Second Language Acquisition, 35, 655–687.\n19. Plonsky, L. (Ed.). (2015a). Advancing quantitative methods in second language research. New York, NY: Routledge.Google Scholar\n20. Plonsky, L. (2015b). Statistical power, p values, descriptive statistics, and effect sizes: A “back-to-basics” approach to advancing quantitative methods in L2 research. In L. Plonsky (Ed.), Advancing quantitative methods in second language research (pp. 23–45). New York, NY: Routledge.\n21. Plonsky, L., & Derrick, D. J. (2016). A Meta-Analysis of Reliability Coefficients in Second Language Research. Modern Language Journal, 100, 538–553.\n22. Plonsky, L., & Ghanbar, H. (in press). Multiple regression in L2 research: A methodological synthesis and guide to interpreting R values. Modern Language Journal.Google Scholar\n23. Plonsky, L., & Oswald, F. L. (2014). How big is ‘big’? Interpreting effect sizes in L2 research. Language Learning, 64, 878–912.\n24. Plonsky, L., & Oswald, F. L. (2017). Multiple regression as a flexible alternative to ANOVA in L2 research. Studies in Second Language Acquisition, 39, 579–592.\n25. Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods. Thousand Oaks, CA: SAGE.Google Scholar\n26. Roever, C., & Phakiti, A. (2017). Quantitative methods for second language research: A problem-solving approach. New York, NY: Routledge.\n27. Rosnow, R. L., & Rosenthal, R. (2008). Essentials of behavioral research: Methods and data analysis. New York, NY: McGraw-Hill.Google Scholar\n28. Schoonen, R. (2015). Structural equation modeling in L2 research. In L. Plonsky (Ed.), Advancing quantitative methods in second language research (pp. 213–242). New York, NY: Routledge.\n29. Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding concepts and applications. Washington, DC: American Psychological Association.\n30. Thompson, B. (Ed.). (2003). Score reliability: Contemporary thinking on reliability issues. Newbury Park, CA: SAGE.Google Scholar\n31. Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.Google Scholar\n32. Wilcox, R. (2016). Understanding and applying basic statistical methods using R. New York, NY: Wiley.Google Scholar" ]
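The chapter abstract above contrasts the non-directional correlation statistic, Pearson's r, with simple linear regression used for directional prediction. As a minimal illustration of that distinction (not drawn from the chapter itself; the tiny dataset and variable names are invented for the example), the sketch below computes r for two variables and then fits the least-squares line that would be used to predict one from the other.

```python
# Minimal illustration of the two analyses contrasted in the abstract:
# a non-directional correlation (Pearson's r) versus a directional
# simple linear regression used for prediction. The data are made up.
import numpy as np

hours_study = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)          # hypothetical predictor
test_score = np.array([52, 55, 61, 60, 68, 70, 75, 79], dtype=float)   # hypothetical outcome

# Pearson's r is symmetric in the two variables.
r = np.corrcoef(hours_study, test_score)[0, 1]

# Simple linear regression: slope and intercept of the least-squares line
# predicting test_score from hours_study.
slope, intercept = np.polyfit(hours_study, test_score, deg=1)

print(f"Pearson r = {r:.3f}")
print(f"prediction line: score = {intercept:.1f} + {slope:.2f} * hours")
print(f"r^2 (variance shared by the two variables) = {r**2:.3f}")
```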
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66087013,"math_prob":0.6137265,"size":6249,"snap":"2019-13-2019-22","text_gpt3_token_len":1576,"char_repetition_ratio":0.17566054,"word_repetition_ratio":0.08275058,"special_character_ratio":0.2619619,"punctuation_ratio":0.29638556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9622756,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-18T22:48:26Z\",\"WARC-Record-ID\":\"<urn:uuid:d5760710-0c76-4f09-a3b9-14e8d7b455e0>\",\"Content-Length\":\"87722\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5fe1ed7e-664a-4221-a90f-e2d4980458c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:25eb69d4-2d3d-46c9-97ab-b52eb890842b>\",\"WARC-IP-Address\":\"151.101.248.95\",\"WARC-Target-URI\":\"https://rd.springer.com/chapter/10.1057%2F978-1-137-59900-1_19\",\"WARC-Payload-Digest\":\"sha1:NCSYXR6QWVOWEMAVKWNH3DYB2M3OLZ7N\",\"WARC-Block-Digest\":\"sha1:4FUP467PDRPUIVMGDTURQEJWGU2Q3TP7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201707.53_warc_CC-MAIN-20190318211849-20190318233849-00470.warc.gz\"}"}
https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/
[ "", null, "# Gottlob Frege\n\nFirst published Thu Sep 14, 1995; substantive revision Sat Oct 29, 2016\n\nFriedrich Ludwig Gottlob Frege (b. 1848, d. 1925) was a German mathematician, logician, and philosopher who worked at the University of Jena. Frege essentially reconceived the discipline of logic by constructing a formal system which, in effect, constituted the first ‘predicate calculus’. In this formal system, Frege developed an analysis of quantified statements and formalized the notion of a ‘proof’ in terms that are still accepted today. Frege then demonstrated that one could use his system to resolve theoretical mathematical statements in terms of simpler logical and mathematical notions. One of the axioms that Frege later added to his system, in the attempt to derive significant parts of mathematics from logic, proved to be inconsistent. Nevertheless, his definitions (e.g., of the predecessor relation and of the concept of natural number) and methods (e.g., for deriving the axioms of number theory) constituted a significant advance. To ground his views about the relationship of logic and mathematics, Frege conceived a comprehensive philosophy of language that many philosophers still find insightful. However, his lifelong project, of showing that mathematics was reducible to logic, was not successful.\n\n## 1. Frege's Life and Influences\n\nAccording to the curriculum vitae that the 26-year old Frege filed in 1874 with his Habilitationsschrift, he was born on November 8, 1848 in Wismar, a town then in Mecklenburg-Schwerin but now in Mecklenburg-Vorpommern. His father, Alexander, a headmaster of a secondary school for girls, and his mother, Auguste (nee Bialloblotzky), brought him up in the Lutheran faith. Frege attended the local Gymnasium for 15 years, and after graduation in 1869, entered the University of Jena (see Frege 1874, translation in McGuinness (ed.) 1984, 92).\n\nAt Jena, Frege attended lectures by Ernst Karl Abbe, who subsequently became Frege's mentor and who had a significant intellectual and personal influence on Frege's life. Frege transferred to the University of Göttingen in 1871, and two years later, in 1873, was awarded a Ph.D. in mathematics, having written a dissertation under Ernst Schering titled Über eine geometrische Darstellung der imaginären Gebilde in der Ebene (“On a Geometrical Representation of Imaginary Forms in the Plane”). Frege explains the project in his thesis as follows: “By a geometrical representation of imaginary forms in the plane we understand accordingly a kind of correlation in virtue of which every real or imaginary element of the plane has a real, intuitive element corresponding to it” (Frege 1873, translation in McGuinness (ed.) 1984, 3). Here, by ‘imaginary forms’, Frege is referring to imaginary points, imaginary curves and lines, etc. Interestingly, one section of the thesis concerns the representation of complex numbers by magnitudes of angles in the plane.\n\nIn 1874, Frege completed his Habilitationsschrift, entitled Rechnungsmethoden, die sich auf eine Erweiterung des Grössenbegriffes gründen (“Methods of Calculation Based on an Extension of the Concept of Quantity”). Immediately after submitting this thesis, the good offices of Abbe led Frege to become a Privatdozent (Lecturer) at the University of Jena. Library records from the University of Jena establish that, over the next 5 years, Frege checked out texts in mechanics, analysis, geometry, Abelian functions, and elliptical functions (Kreiser 1984, 21). 
No doubt, many of these texts helped him to prepare the lectures he is listed as giving by the University of Jena course bulletin, for these lectures are on topics that often match the texts, i.e., analytic geometry, elliptical and Abelian functions, algebraic analysis, functions of complex variables, etc. (Kratzsch 1979).\n\nThis course of Frege's reading and lectures during the period of 1874–1879 dovetailed quite naturally with the interests he displayed in his Habilitationsschrift. The ‘extension of the concept of quantity’ referred to in the title concerns the fact that our understanding of quantities (e.g., lengths, surfaces, etc.) has to be extended in the context of complex numbers. He says, right at the beginning of this work:\n\nAccording to the old conception, length appears as something material which fills the straight line between its end points and at the same time prevents another thing from penetrating into its space by its rigidity. In adding quantities, we are therefore forced to place one quantity against another. Something similar holds for surfaces and solid contents. The introduction of negative quantities made a dent in this conception, and imaginary quantities made it completely impossible. Now all that matters is the point of origin and the end point – the idea of filling the space has been completely lost. All that has remained is certain general properties of addition, which now emerge as the essential characteristic marks of quantity. The concept has thus gradually freed itself from intuition and made itself independent. This is quite unobjectionable, especially since its earlier intuitive character was at bottom mere appearance. Bounded straight lines and planes enclosed by curves can certainly be intuited, but what is quantitative about them, what is common to lengths and surfaces, escapes our intuition. … There is accordingly a noteworthy difference between geometry and arithmetic in the way in which their fundamental principles are grounded. The elements of all geometrical constructions are intuitions, and geometry refers to intuition as the source of its axioms. Since the object of arithmetic does not have an intuitive character, its fundamental propositions cannot stem from intuition… (Frege 1874, translation in McGuinness (ed.) 1984, 56)\n\nHere we can see the beginning of two lifelong interests of Frege, namely, (1) in how concepts and definitions developed for one domain fare when applied in a wider domain, and (2) in the contrast between legitimate appeals to intuition in geometry and illegitimate appeals to intuition in the development of pure number theory. Indeed, some recent scholars have (a) shown how Frege's work in logic was informed in part by his understanding of the analogies and disanalogies between geometry and number theory (Wilson 1992), and (b) shown that Frege was intimately familiar with the division among late 19th century mathematicians doing complex analysis who split over whether it is better to use the analytic methods of Weierstrass or the intuitive geometric methods of Riemann (Tappenden 2006). Weierstrass's 1872 paper, describing a real-valued function that is continuous everywhere but differentiable nowhere, was well known and provided an example of an ungraphable functions that places limits on intuition. 
Yet, at the same time, Frege clearly accepted Riemann's practice and methods derived from taking functions as fundamental, as opposed to Weierstrass's focus on functions that can be represented or analyzed in terms of other mathematical objects (e.g., complex power series).\n\nIn 1879, Frege published his first book Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens (Concept Notation: A formula language of pure thought, modelled upon that of arithmetic) and was promoted to außerordentlicher Professor (Extraordinarius Professor) at Jena. Although the Begriffsschrift constituted a major advance in logic, it was neither widely understood nor well-received. Some scholars have suggested that this was due to the facts that the notation was 2-dimensional instead of linear and that he didn't build upon the work of others but rather presented something radically new (e.g., Mendelsohn 2005, 2). Though our discussion below is framed primarily with an eye towards the system Frege developed in his two-volume work of 1893/1903 (Grundgesetze der Arithmetik), the principal elements of this later system can be found already in the Begriffsschrift of 1879.\n\nFrege's next really significant work was his second book, Die Grundlagen der Arithmetik: eine logisch mathematische Untersuchung über den Begriff der Zahl, published in 1884. Frege begins this work with criticisms of previous attempts to define the concept of number, and then offers his own analysis. The Grundlagen contains a variety of insights still discussed today, such as: (a) the claim that a statement of number (e.g., ‘there are eight planets’) is a higher-order assertion about a concept (see Section 2.5 below); (b) his famous Context Principle (“never ask for the meaning of a word in isolation, but only in the context of a proposition”), and (c) the formulation of a principle (now called ‘Hume's Principle’ in the secondary literature) that asserts the equivalence of the claim “the number of Fs is equal to the number of Gs” with the claim that “there is a one-to-one correspondence between the objects falling under F and the objects falling under G” (see Section 2.5 below). More generally, Frege provides in the Grundlagen a non-technical philosophical justification and outline of the ideas that he was to develop technically in his two-volume work Grundgesetze der Arithmetik (1893/1903).\n\nIn the years 1891–1892, Frege published three of his most well-known papers, ‘Function and Concept’ (1891), ‘On Sense and Reference’ (1892a), and ‘On Concept and Object’ (1892b). Immediately after that, in 1893, he published the first volume of the technical work previously mentioned, Grundgesetze der Arithmetik. In 1896, he was promoted to ordentlicher Honorarprofessor (regular honorary professor). Six years later (on June 16, 1902), as he was preparing the proofs of the second volume of the Grundgesetze, he received a letter from Bertrand Russell, informing him that one could derive a contradiction in the system he had developed in the first volume. Russell's letter frames the paradox first in terms of the predicate P = ‘being a predicate which cannot be predicated of itself’, and then in terms of the class of all those classes that are not members of themselves. Frege, in the Appendix to the second volume, rephrased the paradox in terms of his own system.\n\nFrege never fully recovered from the fatal flaw discovered in the foundations of his Grundgesetze. 
His attempts at salvaging the work by restricting Basic Law V were not successful. However, he continued teaching at Jena, and from 1903–1917, he published six papers, including ‘What is a Function?’ (1904) and ‘On the Foundations of Geometry’ (First and Second Series, Frege 1903b and 1906). In the latter, Frege criticized Hilbert's understanding and use of the axiomatic method (see the entry on the Frege-Hilbert controversy). From this time period, we have the lecture notes that Rudolf Carnap took as a student in two of his courses (see Reck and Awodey 2004). In 1917, he retired from the University of Jena.\n\nIn the last phase of Frege's life, from 1917–1925, Frege published three philosophical papers, in a series, with the titles ‘The Thought’, ‘Negation’, and ‘Compound Thoughts’ (Frege 1918a, 1918b, and 1923, respectively). After that, however, we have only fragments of philosophical works. Unfortunately, his last years saw him become more than just politically conservative and right-wing – his diary for a brief period in 1924 show sympathies for fascism and anti-Semitism (see Frege 1924 , translated by R. Mendelsohn). He died on July 26, 1925, in Bad Kleinen (now in Mecklenburg-Vorpommern).\n\n## 2. Frege's Logic and Philosophy of Mathematics\n\nFrege provided a foundations for the modern discipline of logic by developing a more perspicuous method of formally representing the logic of thoughts and inferences. He did this by developing: (a) a system allowing one to study inferences formally, (b) an analysis of complex sentences and quantifier phrases that showed an underlying unity to certain classes of inferences, (c) an analysis of proof and definition, (d) a theory of extensions which, though seriously flawed, offered an intriguing picture of the foundations of mathematics, (e) an analysis of statements about number (i.e., of answers to the question ‘How many?’), (f) definitions and proofs of some of the basic axioms of number theory from a limited set of logically primitive concepts and axioms, and (g) a conception of logic as a discipline which has some compelling features. We discuss these developments in the following subsections.\n\n### 2.1 The Basis of Frege's Term Logic and Predicate Calculus\n\nIn an attempt to realize Leibniz's ideas for a language of thought and a rational calculus, Frege developed a formal notation for regimenting thought and reasoning. Though this notation was first outlined in his Begriffsschrift (1879), the most mature statement of Frege's system was in his 2-volume Grundgesetze der Arithmetik (1893/1903). Frege's two systems are best characterized as term logics, since all of the complete expressions are denoting terms. Frege analyzed ordinary predication in these systems, and so they can also be conceived as predicate calculi. A predicate calculus is a formal system (a formal language and a method of proof) in which one can represent valid inferences among predications, i.e., among statements in which properties are predicated of objects.\n\nIn this subsection, we shall examine the most basic elements of Frege's 1893/1903 term logic and predicate calculus. These are the statements involving function applications and the simple predications which fall out as a special case.\n\n#### 2.1.1 The Basis of Frege's Term Logic\n\nIn Frege's term logic, all of the terms and well-formed formulas are denoting expressions. 
These include: (a) simple names of objects, like ‘2’ and ‘π’, (b) complex terms which denote objects, like ‘22’ and ‘3 + 1’, and (c) sentences (which are also complex terms). The complex terms in (b) and (c) are formed with the help of ‘incomplete expressions’ which signify functions, such as the unary squaring function ‘( )2’ and the binary addition function ‘( )+( )’. In these functional expressions, ‘( )’ is used as a placeholder for what Frege called the arguments of the function; the placeholder reveals that the expressions signifying function are, on Frege's view, incomplete and stand in contrast to complete expressions such as those in (a), (b), and (c). (Though Frege thought it inappropriate to call the incomplete expressions that signify functions ‘names’, we shall sometimes do so in what follows, though the reader should be warned that Frege had reasons for not following this practice.) Thus, a mathematical expression such as ‘22’ denotes the result of applying the function ( )2 to the number 2 as argument, namely, the number 4. Similarly, the expression ‘7 + 1’ denotes the result of applying the binary function +(( ),( )) to the numbers 7 and 1 as arguments, in that order.\n\nEven the sentences of Frege's mature logical system are (complex) denoting terms; they are terms that denote truth-values. Frege distinguished two truth-values, The True and The False, which he took to be objects. The basic sentences of Frege's system are constructed using the expression ‘( ) = ( )’, which signifies a binary function that maps a pair of objects x and y to The True if x is identical to y and maps x and y to The False otherwise. A sentence such as ‘22 = 4’ therefore denotes the truth-value The True, while the sentence ‘22 = 6’ denotes The False.\n\nAn important class of these identity statements are statements of the form ‘ƒ(x) = y’, where ƒ( ) is any unary function (i.e., function of a single variable), x is the argument of the function, and ƒ(x) is the value of the function for the argument x. Similarly, ƒ(x,y) = z is an identity statement involving a ‘binary’ function of two variables. And so on, for functions of more than two variables.\n\nIf we replace a complete name appearing in a sentence by a placeholder, the result is an incomplete expression that signifies a special kind of function which Frege called a concept. Concepts are functions which map every argument to one of the truth-values. Thus, ‘( )>2’ denotes the concept being greater than 2, which maps every object greater than 2 to The True and maps every other object to The False. Similarly, ‘( )2 = 4’ denotes the concept that which when squared is identical to 4. Frege would say that any object that a concept maps to The True falls under the concept. Thus, the number 2 falls under the concept that which when squared is identical to 4. In what follows, we use lower-case expressions like ƒ( ) to talk generally about functions, and upper-case expressions like F( ) to talk more specifically about those functions which are concepts.\n\nFrege supposed that a mathematical claim such as ‘2 is prime’ should be formally represented as ‘P(2)’. The verb phrase ‘is prime’ is thereby analyzed as denoting the concept P( ) which maps primes to The True and everything else to The False. 
Thus, a simple predication like ‘2 is prime’ becomes analyzed in Frege's system as a special case of functional application.\n\n#### 2.1.2 The Predicate Calculus Within Frege's Term Logic\n\nThe preceding analysis of simple mathematical predications led Frege to extend the applicability of this system to the representation of non-mathematical thoughts and predications. This move formed the basis of the modern predicate calculus. Frege analyzed a non-mathematical predicate like ‘is happy’ as signifying a function of one variable which maps its arguments to a truth-value. Thus, ‘is happy’ denotes a concept which can be represented in the formal system as ‘H( )’. H( ) maps those arguments which are happy to The True, and maps everything else to The False. The sentence ‘John is happy’ (‘H(j)’) is thereby analyzed as: the object denoted by ‘John’ falls under the concept signified by ‘( ) is happy’. Thus, a simple predication is analyzed in terms of falling under a concept, which in turn, is analyzed in terms of functions which map their arguments to truth values. By contrast, in the modern predicate calculus, this last step of analyzing predication in terms of functions is not assumed; predication is seen as more fundamental than functional application. The sentence ‘John is happy’ is formally represented as ‘Hj’, where this is a basic form of predication (‘the object j instantiates or exemplifies the property H’). In the modern predicate calculus, functional application is analyzable in terms of predication, as we shall soon see.\n\nIn Frege's analysis, the verb phrase ‘loves’ signifies a binary function of two variables: L(( ),( )). This function takes a pair of arguments x and y and maps them to The True if x loves y and maps all other pairs of arguments to The False. Although it is a descendent of Frege's system, the modern predicate calculus analyzes loves as a two-place relation (Lxy) rather than a function; some objects stand in the relation and others do not. The difference between Frege's understanding of predication and the one manifested by the modern predicate calculus is simply this: in the modern predicate calculus, relations are taken as basic, and functions are defined as a special case of relation, namely, those relations R such that for any objects x, y, and z, if Rxy and Rxz, then y=z. By contrast, Frege took functions to be more basic than relations. His logic is based on functional application rather than predication; so, a binary relation is analyzed as a binary function that maps a pair of arguments to a truth-value. Thus, a 3-place relation like gives would be analyzed in Frege's logic as a function that maps arguments x, y, and z to an appropriate truth-value depending on whether x gives y to z; the 4-place relation buys would be analyzed as a function that maps the arguments x, y, z, and u to an appropriate truth-value depending on whether x buys y from z for amount u; etc.\n\n### 2.2 Complex Statements and Generality\n\nSo far, we have been discussing Frege's analysis of ‘atomic’ statements. To complete the basic logical representation of thoughts, Frege added notation for representing more complex statements (such as negated and conditional statements) and statements of generality (those involving the expressions ‘every’ and ‘some’). 
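Before turning to that notation, one way to make the function-based analysis of predication just described vivid is to model it in a programming language, where "concepts" literally are functions from arguments to truth-values. The sketch below is only an illustrative analogy (the encoding and names are choices made for the example, not Frege's notation): predication such as P(2) or L(j, m) becomes ordinary function application.

```python
# Illustrative analogy only: Frege-style concepts modelled as functions that map
# arguments to one of the two truth-values (here Python's True / False).

def is_prime(x):
    """The concept P( ): maps primes to The True, everything else to The False."""
    return x > 1 and all(x % k for k in range(2, int(x ** 0.5) + 1))

def loves(x, y):
    """A binary relation treated, Frege-style, as a function from pairs to truth-values."""
    return (x, y) in {("John", "Mary")}

# Predication is just functional application:
print(is_prime(2))             # '2 is prime'      -> True  (The True)
print(loves("John", "Mary"))   # 'John loves Mary' -> True
print(loves("Mary", "John"))   #                   -> False (The False)
```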
Though we no longer use his notation for representing complex and general statements, it is important to see how the notation in Frege's term logic already contained all the expressive power of the modern predicate calculus.\n\nThere are four special functional expressions which are used in Frege's system to express complex and general statements:\n\n Intuitive Significance Functional Expression The Function It Signifies Statement", null, "The function which maps The True to The True and maps all other objects to The False; used to express the thought that the argument of the function is a true statement. Negation", null, "The function which maps The True to The False and maps all other objects to The True Conditional", null, "The function which maps a pair of objects to The False if the first (i.e., named in the bottom branch) is The True and the second isn't The True, and maps all other pairs of objects to The True Generality", null, "The second-level function which maps a first-level concept Φ to The True if Φ maps every object to The True; otherwise it maps Φ to The False.\n\nThe best way to understand this notation is by way of some tables, which show some specific examples of statements and how those are rendered in Frege's notation and in the modern predicate calculus.\n\n#### 2.2.1 Truth-functional Connectives\n\nThe first table shows how Frege's logic can express the truth-functional connectives such as not, if-then, and, or, and if-and-only-if.\n\n Example Frege'sNotation ModernNotation John is happy", null, "Hj It is not the case that John is happy", null, "¬Hj If the sun is shining, then John is happy", null, "Ss → Hj The sun is shining and John is happy", null, "Ss & Hj Either the sun is shining or John is happy", null, "Ss ∨ Hj The sun is shining if and only if John is happy", null, "Ss ≡ Hj\n\nAs one can see, Frege didn't use the primitive connectives ‘and’, ‘or’, or ‘if and only if’, but always used canonical equivalent forms defined in terms of negations and conditionals. Note the last row of the table — when Frege wants to assert that two conditions are materially equivalent, he uses the identity sign, since this says that they denote the same truth-value. In the modern sentential calculus, the biconditional does something equivalent, for a statement of the form φ≡ψ is true whenever φ and ψ are both true or both false. The only difference is, in the modern sentential calculus φ and ψ are not construed as terms denoting truth-values, but rather as sentences having truth conditions. Of course, Frege could, in his notation, use the sentence ‘(φ→ψ) & (ψ→φ)’ to assert φ≡ψ.\n\n#### 2.2.2 Quantified Statements\n\nThe table below compares statements of generality in Frege's notation and in the modern predicate calculus. Frege used a special typeface (Gothic) for variables in general statements.\n\n Example FregeNotation ModernNotation Everything is mortal", null, "∀xMx Something is mortal", null, "¬∀x¬Mx i.e., ∃xMx Nothing is mortal", null, "∀x¬Mx i.e., ¬∃xMx Every person is mortal", null, "∀x(Px → Mx) Some person is mortal", null, "¬∀x(Px → ¬Mx) i.e., ∃x(Px & Mx) No person is mortal", null, "∀x(Px → ¬Mx) i.e., ¬∃x(Px & Mx) All and only persons are mortal", null, "∀x(Px ≡ Mx)\n\nNote the last line. Here again, Frege uses the identity sign to help state the material equivalence of two concepts. 
In the modern predicate calculus, the symbols ‘∀’ (‘every’) and ‘∃’ (‘some’) are called the ‘universal’ and ‘existential’ quantifier, respectively, and the variable ‘x’ in the sentence ‘∀xMx’ is called a ‘quantified variable’, or ‘variable bound by the quantifier’. We will follow this practice of calling statements involving one of these quantifier phrases ‘quantified statements’. As one can see from the table above, Frege didn't use an existential quantifier. He was aware that a statement of the form ‘∃x(…)’ could always be defined as ‘¬∀x¬(…)’.

It is important to mention here that the predicate calculus formulable in Frege's logic is a ‘second-order’ predicate calculus. This means it allows quantification over functions as well as quantification over objects; i.e., statements of the form ‘Every function ƒ is such that …’ and ‘Some function ƒ is such that …’ are allowed. Thus, the statement ‘objects a and b fall under the same concepts’ can be written out in Frege's notation (his two-dimensional rendering is not reproduced here), and in the modern second-order predicate calculus, we write this as:

∀F(Fa ≡ Fb)

Readers interested in learning more about Frege's notation can consult Beaney (1997, Appendix 2), Furth (1967), and Reck & Awodey (2004, 26–34). In what follows, however, we shall continue to use the notation of the modern predicate calculus instead of Frege's notation. In particular, we adopt the following conventions. (1) We shall often use ‘Fx’ instead of ‘F(x)’ to represent the fact that x falls under the concept F; we use ‘Rxy’ instead of ‘R(x,y)’ to represent the fact that x stands in the relation R to y; etc. (2) Instead of using expressions with placeholders, such as ‘( ) = ( )’ and ‘P( )’, to signify functions and concepts, we shall simply use ‘=’ and ‘P’. (3) When one replaces one of the complete names in a sentence by a variable, the resulting expression will be called an open sentence or an open formula. Thus, whereas ‘3<2’ is a sentence, ‘3<x’ is an open sentence; and whereas ‘Hj’ is a formal sentence that might be used to represent ‘John is happy’, the expression ‘Hx’ is an open formula which might be rendered ‘x is happy’ in natural language. (4) Finally, we shall on occasion employ the Greek symbol φ as a metavariable ranging over formal sentences, which may or may not be open. Thus, ‘φ(a)’ will be used to indicate any sentence (simple or complex) in which the name ‘a’ appears; ‘φ(a)’ is not to be understood as Frege-notation for a function φ applied to argument a. Similarly, ‘φ(x)’ will be used to indicate an open sentence in which the variable x may or may not be free, not a function of x.

#### 2.2.3 Frege's Logic of Quantification

Frege's functional analysis of predication, coupled with his understanding of generality, freed him from the limitations of the ‘subject-predicate’ analysis of ordinary language sentences that formed the basis of Aristotelian logic, and made it possible for him to develop a more general treatment of inferences involving ‘every’ and ‘some’. In traditional Aristotelian logic, the subject of a sentence and the direct object of a verb are not on a logical par.
The rules governing the inferences between statements with different but related subject terms are different from the rules governing the inferences between statements with different but related verb complements. For example, in Aristotelian logic, the rule which permits the valid inference from ‘John loves Mary’ to ‘Something loves Mary’ is different from the rule which permits the valid inference from ‘John loves Mary’ to ‘John loves something’. The rule governing the first inference is a rule which applies only to subject terms whereas the rule governing the second inference governs reasoning within the predicate, and thus applies only to the transitive verb complements (i.e., direct objects). In Aristotelian logic, these inferences have nothing in common.

In Frege's logic, however, a single rule governs both the inference from ‘John loves Mary’ to ‘Something loves Mary’ and the inference from ‘John loves Mary’ to ‘John loves something’. That's because the subject John and the direct object Mary are both considered on a logical par, as arguments of the function loves. In effect, Frege saw no logical difference between the subject ‘John’ and the direct object ‘Mary’. What is logically important is that ‘loves’ denotes a function of 2 arguments. No matter whether the quantified expression ‘something’ appears as subject (‘Something loves Mary’) or within a predicate (‘John loves something’), it is to be resolved in the same way. In effect, Frege treated these quantified expressions as variable-binding operators. The variable-binding operator ‘some x is such that’ can bind the variable ‘x’ in the open sentence ‘x loves Mary’ as well as the variable ‘x’ in the open sentence ‘John loves x’. Thus, Frege analyzed the above inferences in the following general way:

• John loves Mary. Therefore, some x is such that x loves Mary.
• John loves Mary. Therefore, some x is such that John loves x.

Both inferences are instances of a single valid inference rule. To see this more clearly, here are the formal representations of the above informal arguments:

• Ljm ∴ ∃x(Lxm)
• Ljm ∴ ∃x(Ljx)

The logical axiom which licenses both inferences has the form:

Ra₁…aᵢ…aₙ → ∃x(Ra₁…x…aₙ),

where R is a relation that can take n arguments, a₁,…,aₙ are any constants (names), and the variable x replaces the constant aᵢ, for any i such that 1≤i≤n. This logical axiom tells us that from a simple predication involving an n-place relation, one can existentially generalize on any argument, and validly derive an existential statement.

Indeed, this axiom can be made even more general. If φ(a) is any statement (formula) in which a constant (name) a appears, and φ(x) is the result of replacing one or more occurrences of a by x, then the following is a logical axiom:

φ(a) → ∃xφ(x)

The inferences which start with the premise ‘John loves Mary’, displayed above, both appeal to this axiom for justification. This axiom is actually derivable as a theorem from Frege's Basic Law IIa (1893, §47). Basic Law IIa asserts ∀xφ(x) → φ(a), and the above axiom for the existential quantifier can be derived from IIa using the rules governing conditionals, negation, and the definition of ∃x(…) discussed above.
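In outline, and in modern notation rather than Frege's own, one way to spell out that derivation is this:

• ∀x¬φ(x) → ¬φ(a)  (an instance of IIa, with ¬φ in place of φ)
• φ(a) → ¬∀x¬φ(x)  (from the previous line by contraposition, using the rules for conditionals and negation)
• φ(a) → ∃xφ(x)  (rewriting ¬∀x¬φ(x) as ∃xφ(x), by the definition of the existential quantifier given above)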
There is one other consequence of Frege's logic of quantification that should be mentioned. Frege took claims of the form ∃x(…) to be existence claims. He suggested that existence is not a concept under which objects fall but rather a second-level concept under which first-level concepts fall. A concept F falls under this second-level concept just in case F maps at least one object to The True. So the claim ‘Martians don't exist’ is analyzed as an assertion about the concept martian, namely, that nothing falls under it. Frege therefore took existence to be that second-level concept which maps a first-level concept F to The True just in case ∃xFx and maps all other concepts to The False. Many philosophers have thought that this analysis validates Kant's view that existence is not a (real) predicate.

### 2.3 Proof and Definition

#### 2.3.1 Proof

Frege's system (i.e., his term logic/predicate calculus) consisted of a language and an apparatus for proving statements. The latter consisted of a set of logical axioms (statements considered to be truths of logic) and a set of rules of inference that lay out the conditions under which certain statements of the language may be correctly inferred from others. Frege made a point of showing how every step in a proof of a proposition was justified either in terms of one of the axioms or in terms of one of the rules of inference or justified by a theorem or derived rule that had already been proved.

Thus, as part of his formal system, Frege developed a strict understanding of a ‘proof’. In essence, he defined a proof to be any finite sequence of statements such that each statement in the sequence either is an axiom or follows from previous members by a valid rule of inference. Thus, a proof of a theorem of logic, say φ, is any finite sequence of statements (with φ the final statement in the sequence) such that each member of the sequence: (a) is one of the logical axioms of the formal system, or (b) follows from previous members of the sequence by a rule of inference. These are essentially the definitions that logicians still use today.

#### 2.3.2 Definition

Frege was extremely careful about the proper description and definition of logical and mathematical concepts. He developed powerful and insightful criticisms of mathematical work which did not meet his standards for clarity. For example, he criticized mathematicians who defined a variable to be a number that varies rather than an expression of language which can vary as to which determinate number it may take as a value.

More importantly, however, Frege was the first to claim that a properly formed definition had to have two important metatheoretical properties. Let us call the new, defined symbol introduced in a definition the definiendum, and the term that is used to define the new term the definiens. Then Frege was the first to suggest that proper definitions have to be both eliminable (a definiendum must always be replaceable by its definiens in any formula in which the former occurs) and conservative (a definition should not make it possible to prove new relationships among formulas that were formerly unprovable). Concerning one of his definitions in the Begriffsschrift (§24), Frege writes:

We can do without the notation introduced by this sentence, and hence without the sentence itself as its definition; nothing follows from the sentence that could not also be inferred without it. Our sole purpose in introducing such definitions is to bring about an extrinsic simplification by stipulating an abbreviation.

Frege later criticized those mathematicians who developed ‘piecemeal’ definitions or ‘creative’ definitions.
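Before turning to those criticisms, a simple illustration of the two criteria may help; the example is ours, not one Frege gives at this point. Suppose the sign ‘∨’ is introduced by the stipulation

φ ∨ ψ  =df  ¬φ → ψ

This definition is eliminable, since any formula containing ‘∨’ can be rewritten, occurrence by occurrence, using only ‘¬’ and ‘→’; and it is conservative, since no statement in the original ‘¬’/‘→’ vocabulary becomes provable that was not already provable before the abbreviation was introduced.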
In the Grundgesetze der Arithmetik, II (1903, Sections 56–67), Frege criticized the practice of defining a concept on a given range of objects and later redefining the concept on a wider, more inclusive range of objects. Frequently, this ‘piecemeal’ style of definition led to conflict, since the redefined concept did not always reduce to the original concept when the range is restricted to the original class of objects. In that same work (1903, Sections 139–147), Frege criticized the mathematical practice of introducing notation to name (unique) entities without first proving that there exist (unique) such entities. He pointed out that such ‘creative definitions’ were simply unjustified. Creative definitions fail to be conservative, in the sense explained above.

### 2.4 Courses-of-Values, Extensions, and Proposed Mathematical Foundations

#### 2.4.1 Courses-of-Values and Extensions

Frege's ontology consisted of two fundamentally different types of entities, namely, functions and objects (1891, 1892b, 1904). Functions are in some sense ‘unsaturated’; i.e., they are the kind of thing which take objects as arguments and map those arguments to a value. This distinguishes them from objects. As we've seen, the domain of objects included two special objects, namely, the truth-values The True and The False.

In his work of 1893/1903, Frege attempted to expand the domain of objects by systematically associating, with each function ƒ, an object which he called the course-of-values of ƒ. The course-of-values of a function is a record of the value of the function for each argument. The principle Frege used to systematize courses-of-values is Basic Law V (1893, §20):

The course-of-values of the concept ƒ is identical to the course-of-values of the concept g if and only if ƒ and g agree on the value of every argument (i.e., if and only if for every object x, ƒ(x) = g(x)).

Frege used a Greek epsilon with a smooth breathing mark above it as part of the notation for signifying the course-of-values of the function ƒ:

εƒ(ε)

where the first occurrence of the Greek ε (with the smooth breathing mark above it) is a ‘variable-binding operator’ which we might read as ‘the course-of-values of’. To avoid the appearance of variable clash, we may also use a Greek α (with a line above) as a variable-binding operator. Using this notation, Frege formally represented Basic Law V in his system as:

Basic Law V
εƒ(ε) = αg(α) ≡ ∀x[ƒ(x) = g(x)]

(Actually, Frege used an identity sign instead of the biconditional as the main connective of this principle, for reasons described above.)

Frege called the course-of-values of a concept F its extension. The extension of a concept F records just those objects which F maps to The True. Thus Basic Law V applies equally well to the extensions of concepts. Let ‘φ(x)’ be an open sentence of any complexity with the free variable x (the variable x may have more than one occurrence in φ(x), but for simplicity, assume it has only one occurrence). Then, using the variable-binding operator ε, Frege would use the expression ‘εφ(ε)’ (where the second epsilon replaces x in φ(x)) to denote the extension of the concept φ (recall, though, that in Frege's notation, a smooth-breathing mark would be used instead of the overline on the first epsilon). Where ‘n’ is the name of an object, Frege could define ‘object n is an element of the extension of the concept φ’ in the following simple terms: ‘the concept φ maps n to The True’ (i.e., φ(n)).
For example, the number 3 is an element of the extension of the concept odd number greater than 2 if and only if this concept maps 3 to The True.\n\nUnfortunately, Basic Law V implies a contradiction, and this was pointed out to Frege by Bertrand Russell just as the second volume of the Grundgesetze was going to press. Russell recognized that some extensions are elements of themselves and some are not; the extension of the concept extension is an element of itself, since that concept would map its own extension to The True. The extension of the concept spoon is not an element of itself, because that concept would map its own extension to The False (since extensions aren't spoons). But now what about the concept extension which is not an element of itself? Let E represent this concept and let e name the extension of E. Is e an element of itself? Well, e is an element of itself if and only if E maps e to The True (by the definition of ‘element of’ given at the end of the previous paragraph, where e is the extension of the concept E). But E maps e to The True if and only if e is an extension which is not an element of itself, i.e., if and only if e is not an element of itself. We have thus reasoned that e is an element of itself if and only if it is not, showing the incoherency in Frege's conception of an extension.\n\nFurther discussion of this problem can be found in the entry on Russell's Paradox, and a more complete explanation of how the paradox arises in Frege's system is presented in the entry on Frege's theorem and foundations for arithmetic.\n\n#### 2.4.2 Proposed Foundation for Mathematics\n\nBefore he became aware of Russell's paradox, Frege attempted to construct a logical foundation for mathematics. Using the logical system containing Basic Law V (1893/1903), he attempted to demonstrate the truth of the philosophical thesis known as logicism, i.e., the idea not only that mathematical concepts can be defined in terms of purely logical concepts but also that mathematical principles can be derived from the laws of logic alone. But given that the crucial definitions of mathematical concepts were stated in terms of extensions, the inconsistency in Basic Law V undermined Frege's attempt to establish the thesis of logicism. Few philosophers today believe that mathematics can be reduced to logic in the way Frege had in mind. Mathematical theories such as set theory seem to require some non-logical concepts (such as set membership) which cannot be defined in terms of logical concepts, at least when axiomatized by certain powerful non-logical axioms (such as the proper axioms of Zermelo-Fraenkel set theory). Despite the fact that a contradiction invalidated a part of his system, the intricate theoretical web of definitions and proofs developed in the Grundgesetze nevertheless offered philosophical logicians an intriguing conceptual framework. The ideas of Bertrand Russell and Alfred North Whitehead in Principia Mathematica owe a huge debt to the work found in Frege's Grundgesetze.\n\nDespite Frege's failure to provide a coherent systematization of the notion of an extension, we shall make use of the notion in what follows to explain Frege's theory of numbers and analysis of number statements. Though the discussion will involve the notion of an extension, we shall not require Basic Law V; thus, we can use our informal understanding of the notion. 
In addition, extensions can be rehabilitated in various ways, either axiomatically as in modern set theory (which appears to be consistent) or as in various consistent reconstructions of Frege's system.

### 2.5 The Analysis of Statements of Number

In what has come to be regarded as a seminal treatise, Die Grundlagen der Arithmetik (1884), Frege began work on the idea of deriving some of the basic principles of arithmetic from what he thought were more fundamental logical principles and logical concepts. Philosophers today still find that work insightful. The leading idea is that a statement of number, such as ‘There are eight planets’ and ‘There are two authors of Principia Mathematica’, is really a statement about a concept. Frege realized that one and the same physical phenomenon could be conceptualized in different ways, and that answers to the question ‘How many?’ only make sense once a concept F is supplied. Thus, one and the same physical entity might be conceptualized as consisting of 1 army, 5 divisions, 20 regiments, 100 companies, etc., and so the question ‘How many?’ only becomes legitimate once one supplies the concept being counted, such as army, division, regiment, or company (1884, §46).

Using this insight, Frege took true statements like ‘There are eight planets’ and ‘There are two authors of Principia Mathematica’ to be second-level claims about the concepts planet and author of Principia Mathematica, respectively. In the second case, the second-level claim asserts that the first-level concept being an author of Principia Mathematica falls under the second-level concept being a concept under which two objects fall. This sounds circular, since it looks like we have analyzed

There are two authors of Principia Mathematica,

which involves the concept two, as

The concept being an author of Principia Mathematica falls under the concept being a concept under which two objects fall,

which also involves the concept two. But despite appearances, there is no circularity, since Frege analyzes the second-order concept being a concept under which two objects fall without appealing to the concept two. He did this by defining ‘F is a concept under which two objects fall’, in purely logical terms, as any concept F that satisfies the following condition:

There are distinct things x and y that fall under the concept F and anything else that falls under the concept F is identical to either x or y.

In the notation of the modern predicate calculus, this is formalized as:

∃x∃y(x≠y & Fx & Fy & ∀z(Fz → z=x ∨ z=y))

Note that the concept being an author of Principia Mathematica satisfies this condition, since there are distinct objects x and y, namely, Bertrand Russell and Alfred North Whitehead, who authored Principia Mathematica and who are such that anything else authoring Principia Mathematica is identical to one of them. In this way, Frege analyzed a statement of number (‘there are two authors of Principia Mathematica’) as a higher-order logical statement about concepts.

Frege then took his analysis one step further. He noticed that each of the conditions in the following sequence of conditions defined a class of ‘equinumerous’ concepts, where ‘F’ in each case is a variable ranging over concepts:

Condition (0): Nothing falls under F.
¬∃xFx
Condition (1): Exactly one thing falls under F.
∃x(Fx & ∀y(Fy → y=x))
Condition (2): Exactly two things fall under F.
∃x∃y(x≠y & Fx & Fy & ∀z(Fz → z=x ∨ z=y))
Condition (3): Exactly three things fall under F.
∃x∃y∃z(x≠y & x≠z & y≠z & Fx & Fy & Fz & ∀w(Fw → w=x ∨ w=y ∨ w=z)) etc.\n\nNotice that if concepts P and Q are both concepts which satisfy one of these conditions, then there is a one-to-one correspondence between the objects which fall under P and the objects which fall under Q. That is, if any of the above conditions accurately describes both P and Q, then every object falling under P can be paired with a unique and distinct object falling under Q and, under this pairing, every object falling under Q gets paired with some unique and distinct object falling under P. (By the logician's understanding of the phrase ‘every’, this last claim even applies to those concepts P and Q which satisfy Condition (0).) Frege would call such P and Q equinumerous concepts (1884, §72). Indeed, for each condition defined above, the concepts that satisfy the condition are all pairwise equinumerous to one another.\n\nWith this notion of equinumerosity, Frege defined ‘the number of the concept F’ (or, more informally, ‘the number of Fs’) to be the extension or set of all concepts that are equinumerous with F (1884, §68). For example, the number of the concept author of Principia Mathematica is the extension of all concepts that are equinumerous to that concept. This number is therefore identified with the class of all concepts under which two objects fall, as this is defined by Condition (2) above. Frege specifically identified the number 0 as the number of the concept not being self-identical (1884, §74). It is a theorem of logic that nothing falls under this concept. Thus, it is a concept that satisfies Condition (0) above. Frege thereby identified the number 0 as the class of all concepts under which nothing falls, since that is the class of concepts equinumerous with the concept not being self-identical. Essentially, Frege identified the number 1 as the class of all concepts which satisfy Condition (1). And so forth. But though this defines a sequence of entities which are numbers, this procedure doesn't actually define the concept natural number (finite number). Frege, however, had an even deeper idea about how to do this.\n\n### 2.6 Natural Numbers\n\nIn order to define the concept of natural number, Frege first defined, for every 2-place relation R, the general concept ‘x is an ancestor of y in the R-series’. This new relation is called ‘the ancestral of the relation R’. The ancestral of the relation R was first defined in Frege's Begriffsschrift (1879, §26, Proposition 76; 1884, §79). The intuitive idea is easily grasped if we consider the relation x is the father of y. Suppose that a is the father of b, that b is the father of c, and that c is the father of d. Then Frege's definition of ‘x is an ancestor of y in the fatherhood-series’ ensured that a is an ancestor of b, c, and d, that b is an ancestor of c and d, and that c is an ancestor of d.\n\nMore generally, if given a series of facts of the form aRb, bRc, cRd, and so on, Frege showed how to define the relation x is an ancestor of y in the R-series (Frege referred to this as: y follows x in the R-series). To exploit this definition in the case of natural numbers, Frege had to define both the relation x precedes y and the ancestral of this relation, namely, x is an ancestor of y in the predecessor-series. 
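Before looking at Frege's formal definitions, the intuitive content of the ancestral (reachability in one or more R-steps) can be pictured with a small programmatic sketch. This is just the informal transitive-closure reading of the fatherhood example above, not Frege's own second-order definition.

```python
# Informal illustration: the ancestral of a relation R, read as "x is an
# ancestor of y in the R-series", i.e., y is reachable from x by one or
# more R-steps.  This is the intuitive transitive-closure picture only.

def ancestral(R):
    """Given R as a set of (x, y) pairs, return all pairs (x, y) such that
    x is an ancestor of y in the R-series."""
    closure = set(R)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

# The fatherhood example from the text: a is the father of b, b of c, c of d.
fatherhood = {("a", "b"), ("b", "c"), ("c", "d")}
print(("a", "d") in ancestral(fatherhood))   # True: a is an ancestor of d
```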
He first defined the relational concept x precedes y as follows (1884, §76):

x precedes y iff there is a concept F and an object z such that:
• z falls under F,
• y is the (cardinal) number of the concept F, and
• x is the (cardinal) number of the concept object other than z falling under F

In the notation of the second-order predicate calculus, augmented by the functional notation ‘#F’ to denote the number of Fs and by the λ-notation ‘[λu φ]’ to name the complex concept being an object u such that φ, Frege's definition becomes:

Precedes(x,y)  =df  ∃F∃z(Fz & y=#F & x=#[λu Fu & u≠z])

To see the intuitive idea behind this definition, consider how the definition is satisfied in the case of the number 1 preceding the number 2: there is a concept F (e.g., let F = being an author of Principia Mathematica) and an object z (e.g., let z = Alfred North Whitehead) such that:

• Whitehead falls under the concept author of Principia Mathematica,
• 2 is the (cardinal) number of the concept author of Principia Mathematica, and
• 1 is the (cardinal) number of the concept author of Principia Mathematica other than Whitehead.

Note that the last conjunct is true because there is exactly 1 object (namely, Bertrand Russell) which falls under the concept object other than Whitehead which falls under the concept of being an author of Principia Mathematica.

Thus, Frege has a definition of precedes which applies to the pairs <0,1>, <1,2>, <2,3>,…. Frege then defined the ancestral of this relation, namely, x is an ancestor of y in the predecessor-series. Though the exact definition will not be given here, we note that it has the following consequence: if 10 precedes 11 and 11 precedes 12, it follows that 10 is an ancestor of 12 in the predecessor-series. Note, however, that although 10 is an ancestor of 12, 10 does not precede 12, for the notion of precedes is that of immediately precedes. Note also that by defining the ancestral of the precedence relation, Frege had in effect defined x < y.

Recall that Frege defined the number 0 as the number of the concept not being self-identical, and that 0 thereby becomes identified with the extension of all concepts which fail to be exemplified. Using this definition, Frege defined (1884, §83):

x is a natural number iff either x=0 or 0 is an ancestor of x in the predecessor-series

In other words, a natural number is any member of the predecessor-series beginning with 0.

Using this definition as a basis, Frege later derived many important theorems of number theory. Philosophers only recently appreciated the importance of this work (C. Parsons 1965, Smiley 1981, Wright 1983, and Boolos 1987, 1990, 1995). Wright 1983 in particular showed how the Dedekind/Peano axioms for number might be derived from one of the consistent principles that Frege discussed in 1884, now known as Hume's Principle (‘The number of Fs is equal to the number of Gs if and only if there is a one-to-one correspondence between the Fs and the Gs’). It was recently shown by R. Heck that, despite the logical inconsistency in the system of Frege 1893/1903, Frege himself validly derived the Dedekind/Peano axioms from Hume's Principle. Although Frege used his inconsistent axiom, Basic Law V, to establish Hume's Principle, once Hume's Principle was established, the subsequent derivations of the Dedekind/Peano axioms make no further essential appeals to Basic Law V.
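Stated in the ‘#F’ notation introduced above, Hume's Principle is often given the following modern second-order rendering (one standard formalization, not Frege's own symbolism), where ‘∃!’ abbreviates ‘there is exactly one’:

#F = #G  ≡  ∃R[∀x(Fx → ∃!y(Gy & Rxy)) & ∀y(Gy → ∃!x(Fx & Rxy))]

The embedded condition on R simply spells out, in purely logical terms, that R is a one-to-one correspondence between the Fs and the Gs.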
Following the lead of George Boolos, philosophers today call the derivation of the Dedekind/Peano Axioms from Hume's Principle ‘Frege's Theorem’. The proof of Frege's Theorem was a tour de force which involved some of the most beautiful, subtle, and complex logical reasoning that had ever been devised. For a comprehensive introduction to the logic of Frege's Theorem, see the entry Frege's theorem and foundations for arithmetic.\n\n### 2.7 Frege's Conception of Logic\n\nBefore receiving the famous letter from Bertrand Russell informing him of the inconsistency in his system, Frege thought that he had shown that arithmetic is reducible to the analytic truths of logic (i.e., statements which are true solely in virtue of the meanings of the logical words appearing in those statements). It is recognized today, however, that at best Frege showed that arithmetic is reducible to second-order logic extended only by Hume's Principle. Some philosophers think Hume's Principle is analytically true (i.e., true in virtue of the very meanings of its words), while others resist the claim, and there is an interesting debate on this issue in the literature. However, for the purposes of this introductory essay, there are prior questions on which it is more important to focus, concerning the nature of Frege's logic, namely, ‘Did Frege's 1879 or 1893/1903 system (excluding Basic Law V) contain any extralogical resources?’, and ‘How did Frege's conception of logic differ from that of his predecessors, and in particular, Kant's?’ For even if Frege had been right in thinking that arithmetic is reducible to truths of logic, it is well known that Kant thought that arithmetic consisted of synthetic (a priori) truths and that it was not reducible to analytic logical truths. But, of course, Frege's view and Kant's view contradict each other only if they have the same conception of logic. Do they?\n\nMacFarlane 2002 addresses this question, and points out that their conceptions differ in various ways:\n\n… the resources Frege recognizes as logical far outstrip those of Kant's logic (Aristotelian term logic with a simple theory of disjunctive and hypothetical propositions added on). The most dramatic difference is that Frege's logic allows us to define concepts using nested quantifiers, while Kant's is limited to representing inclusion relations.\n\nMacFarlane goes on to point out that Frege's logic also contains higher-order quantifiers (i.e., quantifiers ranging over concepts), and a logical functor for forming singular terms from open sentences (i.e., the expression ‘the extension of’ takes the open sentence φ(x) to yield the singular term, ‘the extension of the concept φ(x)’). MacFarlane notes that if we were to try to express such resources in Kant's system, we would have to appeal to non-logical constructions which make sense only with respect to a faculty of ‘intuition’, that is, an extralogical source which presents our minds with (sensible) phenomena about which judgments can be formed. Frege denies Kant's dictum (A51/B75), ‘Without sensibility, no object would be given to us’, claiming that 0 and 1 are objects but that they ‘can't be given to us in sensation’ (1884, 101). 
(Frege's view is that our understanding can grasp them as objects if their definitions can be grounded in analytic propositions governing extensions of concepts.)

The debate over which resources require an appeal to intuition and which do not is an important one, since Frege dedicated himself to the idea of eliminating appeals to intuition in the proofs of the basic propositions of arithmetic. Frege thus continued a trend started by Bolzano (1817), who eliminated the appeal to intuition in the proof of the intermediate value theorem in the calculus by proving this theorem from the definition of continuity, which had recently been defined in terms of the definition of a limit (see Coffa 1991, 27). A Kantian might very well simply draw a graph of a continuous function which takes both positive and negative values, and thereby ‘demonstrate’ that such a function must cross the x-axis. But both Bolzano and Frege saw such appeals to intuition as potentially introducing logical gaps into proofs. There are good reasons to be suspicious about such appeals: (1) there are examples of functions which we can't graph or otherwise construct for presentation to our intuitive faculty — consider the function ƒ which maps rational numbers to 0 and irrational numbers to 1, or consider those functions which are everywhere continuous but nowhere differentiable; (2) once we take certain intuitive notions and formalize them in terms of explicit definitions, the formal definition might imply counterintuitive results; and (3) the rules of inference from statements to constructions and back are not always clear. Frege explicitly remarked upon the fact that he labored to avoid constructions and appeals to intuition in the proofs of basic propositions of arithmetic (1879, Preface/5, Part III/§23; 1884, §62, 87; 1893, §0; and 1903, Appendix).

This brings us to one of the most important differences between Frege's logic and Kant's. Frege's second-order logic included a Rule of Substitution (Grundgesetze I, 1893, §48, item 9), which allows one to substitute complex open formulas into logical theorems to produce new logical theorems. This rule is equivalent to a very powerful existence condition governing concepts known as the Comprehension Principle for Concepts. This principle asserts the existence of a concept corresponding to every open formula of the form φ(x) with free variable x, no matter how complex φ is. From Kant's point of view, existence claims were thought to be synthetic and in need of justification by the faculty of intuition. So, although it was one of Frege's goals to avoid appeals to the faculty of intuition, there is a real question as to whether his system, which involves an inference rule equivalent to a principle asserting the existence of a wide range of concepts, really is limited in its scope to purely logical laws of an analytic nature.

One final important difference between Frege's conception of logic and Kant's concerns the question of whether logic has any content unique to itself. As MacFarlane 2002 points out, one of Kant's most central views about logic is that its axioms and theorems are purely formal in nature, i.e., abstracted from all semantic content and concerned only with the forms of judgments, which are applicable across all the physical and mathematical sciences (1781/1787, A55/B79, A56/B80, A70/B95).
By contrast, Frege took logic to have its own unique subject matter, which included not only facts about concepts (concerning negation, subsumption, etc.), identity, etc. (Frege 1906, 428), but also facts about ancestrals of relations and natural numbers (1879, 1893). Logic is not purely formal, from Frege's point of view, but rather can provide substantive knowledge of objects and concepts.\n\nDespite these fundamental differences in their conceptions of logic, Kant and Frege may have agreed that the most important defining characteristic of logic is its generality, i.e., the fact that it provides norms (rules, prescriptions) that are constitutive of thought. This rapprochement between Kant and Frege is developed in some detail in MacFarlane 2002. The reader will find there reasons for thinking that Kant and Frege may have shared enough of a common conception about logic for us to believe that equivocation doesn't undermine the apparent inconsistency between their views on the reducibility of arithmetic to logic. It is by no means settled as to how we should think of the relationship between arithmetic and logic, since logicians have not yet come to agreement about the proper conception of logic. Many modern logicians have a conception of logic that is yet different from both Kant and Frege. It is one which evolves out of the ideas that (1) certain concepts and laws remain invariant under permutations of the domain of quantification, and (2) that logic ought not to dictate the size of the domain of quantification. But this conception has not yet been articulated in a widely accepted way, and so elements common to Frege's and Kant's conception may yet play a role in our understanding of what logic is. (For other good discussions of Frege's conception of logic, see Goldfarb 2001 and Linnebo 2003.)\n\n## 3. Frege's Philosophy of Language\n\nWhile pursuing his investigations into mathematics and logic (and quite possibly, in order to ground those investigations), Frege was led to develop a philosophy of language. His philosophy of language has had just as much, if not more, impact than his contributions to logic and mathematics. Frege's seminal paper in this field ‘Über Sinn und Bedeutung’ (‘On Sense and Reference’, 1892a) is now a classic. In this paper, Frege considered two puzzles about language and noticed, in each case, that one cannot account for the meaningfulness or logical behavior of certain sentences simply on the basis of the denotations of the terms (names and descriptions) in the sentence. One puzzle concerned identity statements and the other concerned sentences with subordinate clauses such as propositional attitude reports. To solve these puzzles, Frege suggested that the terms of a language have both a sense and a denotation, i.e., that at least two semantic relations are required to explain the significance or meaning of the terms of a language. This idea has inspired research in the field for over a century and we discuss it in what follows. (See Heck and May 2006 for further discussion of Frege's contribution to the philosophy of language.)\n\n### 3.1 Frege's Puzzles\n\n#### 3.1.1 Frege's Puzzle About Identity Statements\n\nHere are some examples of identity statements:\n\n117+136 = 253.\nThe morning star is identical to the evening star.\nMark Twain is Samuel Clemens.\nBill is Debbie's father.\n\nFrege believed that these statements all have the form ‘a=b’, where ‘a’ and ‘b’ are either names or descriptions that denote individuals. 
He naturally assumed that a sentence of the form ‘a=b’ is true if and only if the object a just is (identical to) the object b. For example, the sentence ‘117+136 = 253’ is true if and only if the number 117+136 just is the number 253. And the statement ‘Mark Twain is Samuel Clemens’ is true if and only if the person Mark Twain just is the person Samuel Clemens.\n\nBut Frege noticed (1892) that this account of truth can't be all there is to the meaning of identity statements. The statement ‘a=a’ has a cognitive significance (or meaning) that must be different from the cognitive significance of ‘a=b’. We can learn that ‘Mark Twain=Mark Twain’ is true simply by inspecting it; but we can't learn the truth of ‘Mark Twain=Samuel Clemens’ simply by inspecting it — you have to examine the world to see whether the two persons are the same. Similarly, whereas you can learn that ‘117+136 = 117+136’ and ‘the morning star is identical to the morning star’ are true simply by inspection, you can't learn the truth of ‘117+136 = 253’ and ‘the morning star is identical to the evening star’ simply by inspection. In the latter cases, you have to do some arithmetical work or astronomical investigation to learn the truth of these identity claims. Now the problem becomes clear: the meaning of ‘a=a’ clearly differs from the meaning of ‘a=b’, but given the account of the truth described in the previous paragraph, these two identity statements appear to have the same meaning whenever they are true! For example, ‘Mark Twain=Mark Twain’ is true just in case: the person Mark Twain is identical with the person Mark Twain. And ‘Mark Twain=Samuel Clemens’ is true just in case: the person Mark Twain is identical with the person Samuel Clemens. But given that Mark Twain just is Samuel Clemens, these two cases are the same case, and that doesn't explain the difference in meaning between the two identity sentences. And something similar applies to all the other examples of identity statements having the forms ‘a=a’ and ‘a=b’.\n\nSo the puzzle Frege discovered is: how do we account for the difference in cognitive significance between ‘a=b’ and ‘a=a’ when they are true?\n\n#### 3.1.2 Frege's Puzzle About Propositional Attitude Reports\n\nFrege is generally credited with identifying the following puzzle about propositional attitude reports, even though he didn't quite describe the puzzle in the terms used below. A propositional attitude is a psychological relation between a person and a proposition. Belief, desire, intention, discovery, knowledge, etc., are all psychological relationships between persons, on the one hand, and propositions, on the other. When we report the propositional attitudes of others, these reports all have a similar logical form:\n\nx believes that p\nx desires that p\nx intends that p\nx discovered that p\nx knows that p\n\nIf we replace the variable ‘x ’ by the name of a person and replace the variable ‘p ’ with a sentence that describes the propositional object of their attitude, we get specific attitude reports. So by replacing ‘x ’ by ‘John’ and ‘p ’ by ‘Mark Twain wrote Huckleberry Finn’ in the first example, the result would be the following specific belief report:\n\nJohn believes that Mark Twain wrote Huckleberry Finn.\n\nTo see the problem posed by the analysis of propositional attitude reports, consider what appears to be a simple principle of reasoning, namely, the Principle of Identity Substitution (this is not to be confused with the Rule of Substitution discussed earlier). 
If a name, say n, appears in a true sentence S, and the identity sentence n=m is true, then the Principle of Identity Substitution tells us that the substitution of the name m for the name n in S does not affect the truth of S. For example, let S be the true sentence ‘Mark Twain was an author’, let n be the name ‘Mark Twain’, and let m be the name ‘Samuel Clemens’. Then since the identity sentence ‘Mark Twain=Samuel Clemens’ is true, we can substitute ‘Samuel Clemens’ for ‘Mark Twain’ without affecting the truth of the sentence. And indeed, the resulting sentence ‘Samuel Clemens was an author’ is true. In other words, the following argument is valid:\n\nMark Twain was an author.\nMark Twain=Samuel Clemens.\nTherefore, Samuel Clemens was an author.\n\nSimilarly, the following argument is valid.\n\n4 > 3\n4=8/2\nTherefore, 8/2 > 3\n\nIn general, then, the Principle of Identity Substitution seems to take the following form, where S is a sentence, n and m are names, and S(n) differs from S(m) only by the fact that at least one occurrence of m replaces n:\n\nFrom S(n) and n=m, infer S(m)\n\nThis principle seems to capture the idea that if we say something true about an object, then even if we change the name by which we refer to that object, we should still be saying something true about that object.\n\nBut Frege, in effect, noticed the following counterexample to the Principle of Identity Substitution. Consider the following argument:\n\nJohn believes that Mark Twain wrote Huckleberry Finn.\nMark Twain=Samuel Clemens.\nTherefore, John believes that Samuel Clemens wrote Huckleberry Finn.\n\nThis argument is not valid. There are circumstances in which the premises are true and the conclusion false. We have already described such circumstances, namely, one in which John learns the name ‘Mark Twain’ by reading Huckleberry Finn but learns the name ‘Samuel Clemens’ in the context of learning about 19th century American authors (without learning that the name ‘Mark Twain’ was a pseudonym for Samuel Clemens). John may not believe that Samuel Clemens wrote Huckleberry Finn. The premises of the above argument, therefore, do not logically entail the conclusion. So the Principle of Identity Substitution appears to break down in the context of propositional attitude reports. The puzzle, then, is to say what causes the principle to fail in these contexts. Why aren't we still saying something true about the man in question if all we have done is changed the name by which we refer to him?\n\n### 3.2 Frege's Theory of Sense and Denotation\n\nTo explain these puzzles, Frege suggested (1892a) that in addition to having a denotation, names and descriptions also express a sense. The sense of an expression accounts for its cognitive significance—it is the way by which one conceives of the denotation of the term. The expressions ‘4’ and ‘8/2’ have the same denotation but express different senses, different ways of conceiving the same number. The descriptions ‘the morning star’ and ‘the evening star’ denote the same planet, namely Venus, but express different ways of conceiving of Venus and so have different senses. The name ‘Pegasus’ and the description ‘the most powerful Greek god’ both have a sense (and their senses are distinct), but neither has a denotation. However, even though the names ‘Mark Twain’ and ‘Samuel Clemens’ denote the same individual, they express different senses. (See May 2006b for a nice discussion of the question of whether Frege believed that the sense of a name varies from person to person.) 
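This division of labor between sense and denotation can be pictured with a small programmatic sketch. It is only a toy model: senses are crudely represented here by descriptive strings, and nothing in it is meant to capture Frege's own account of what senses are.

```python
# Toy model (not Frege's theory): each term carries both a sense and a denotation.

from collections import namedtuple

Term = namedtuple("Term", ["sense", "denotation"])

four         = Term(sense="the number four",               denotation=4)
eight_halves = Term(sense="the result of dividing 8 by 2", denotation=8 / 2)

def identity_statement_is_true(a, b):
    """An identity statement is true iff the two terms share a denotation."""
    return a.denotation == b.denotation

def same_cognitive_significance(a, b):
    """A crude stand-in: cognitive significance tracks sense, not denotation."""
    return a.sense == b.sense

print(identity_statement_is_true(four, eight_halves))   # True:  '4 = 8/2' is true
print(same_cognitive_significance(four, eight_halves))  # False: the identity is informative
```

On this picture, truth is settled at the level of denotation, while informativeness tracks sense; the next paragraphs spell out how Frege puts the distinction to work.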
Using the distinction between sense and denotation, Frege can account for the difference in cognitive significance between identity statements of the form ‘a=a’ and those of the form ‘a=b’. Since the sense of ‘a’ differs from the sense of ‘b’, the components of the sense of ‘a=a’ and the sense of ‘a=b’ are different. Frege can claim that the sense of the whole expression is different in the two cases. Since the sense of an expression accounts for its cognitive significance, Frege has an explanation of the difference in cognitive significance between ‘a=a’ and ‘a=b’, and thus a solution to the first puzzle.\n\nMoreover, Frege proposed that when a term (name or description) follows a propositional attitude verb, it no longer denotes what it ordinarily denotes. Instead, Frege claims that in such contexts, a term denotes its ordinary sense. This explains why the Principle of Identity Substitution fails for terms following the propositional attitude verbs in propositional attitude reports. The Principle asserts that truth is preserved when we substitute one name for another having the same denotation. But, according to Frege's theory, the names ‘Mark Twain’ and ‘Samuel Clemens’ denote different senses when they occur in the following sentences:\n\nJohn believes that Mark Twain wrote Huckleberry Finn.\nJohn believes that Samuel Clemens wrote Huckleberry Finn.\n\nIf they don't denote the same object, then there is no reason to think that substitution of one name for another would preserve truth.\n\nFrege developed the theory of sense and denotation into a thoroughgoing philosophy of language. This philosophy can be explained, at least in outline, by considering a simple sentence such as ‘John loves Mary’. In Frege's view, the words ‘John’ and ‘Mary’ in this sentence are names, the expression ‘loves’ signifies a function, and, moreover, the sentence as a whole is a complex name. Each of these expressions has both a sense and a denotation. The sense and denotation of the names are basic; but sense and denotation of the sentence as a whole can be described in terms of the sense and denotation of the names and the way in which those words are arranged in the sentence alongside the expression ‘loves’. Let us refer to the denotation and sense of the words as follows:\n\nd[j] refers to the denotation of the name ‘John’.\nd[m] refers to the denotation of the name ‘Mary’.\nd[L] refers to the denotation of the expression ‘loves’.\ns[j] refers to the sense of the name ‘John’.\ns[m] refers to the sense of the name ‘Mary’.\ns[L] refers to the sense of the expression ‘loves’.\n\nWe now work toward a theoretical description of the denotation of the sentence as a whole. On Frege's view, d[j] and d[m] are the real individuals John and Mary, respectively. d[L] is a function that maps d[m] (i.e., Mary) to the function ( ) loves Mary. This latter function serves as the denotation of the predicate ‘loves Mary’ and we can use the notation d[Lm] to refer to it semantically. Now the function d[Lm] maps d[j] (i.e., John) to the denotation of the sentence ‘John loves Mary’. Let us refer to the denotation of the sentence as d[jLm]. Frege identifies the denotation of a sentence as one of the two truth values. Because d[Lm] maps objects to truth values, it is a concept. Thus, d[jLm] is the truth value The True if John falls under the concept d[Lm]; otherwise it is the truth value The False. So, on Frege's view, the sentence ‘John loves Mary’ names a truth value.\n\nThe sentence ‘John loves Mary’ also expresses a sense. 
Its sense may be described as follows. Although Frege doesn't appear to have explicitly said so, his work suggests that s[L] (the sense of the expression ‘loves’) is a function. This function would map s[m] (the sense of the name ‘Mary’) to the sense of the predicate ‘loves Mary’. Let us refer to the sense of ‘loves Mary’ as s[Lm]. Now again, Frege's work seems to imply that we should regard s[Lm] as a function which maps s[j] (the sense of the name ‘John’) to the sense of the whole sentence. Let us call the sense of the entire sentence s[jLm]. Frege calls the sense of a sentence a thought, and whereas there are only two truth values, he supposes that there are an infinite number of thoughts.

With this description of language, Frege can give a general account of the difference in cognitive significance between identity statements of the form ‘a=a’ and ‘a=b’. The cognitive significance is not accounted for at the level of denotation. On Frege's view, the sentences ‘4=8/2’ and ‘4=4’ both denote the same truth value. The function ( )=( ) maps 4 and 8/2 to The True, i.e., maps 4 and 4 to The True. So d[4=8/2] is identical to d[4=4]; they are both The True. However, the two sentences in question express different thoughts. That is because s[4] is different from s[8/2]. So the thought s[4=8/2] is distinct from the thought s[4=4]. Similarly, ‘Mark Twain=Mark Twain’ and ‘Mark Twain=Samuel Clemens’ denote the same truth value. However, given that s[Mark Twain] is distinct from s[Samuel Clemens], Frege would claim that the thought s[Mark Twain=Mark Twain] is distinct from the thought s[Mark Twain=Samuel Clemens].

Furthermore, recall that Frege proposed that terms following propositional attitude verbs denote not their ordinary denotations but rather the senses they ordinarily express. In fact, in the following propositional attitude report, not only do the words ‘Mark Twain’, ‘wrote’ and ‘Huckleberry Finn’ denote their ordinary senses, but the entire sentence ‘Mark Twain wrote Huckleberry Finn’ also denotes its ordinary sense (namely, a thought):

John believes that Mark Twain wrote Huckleberry Finn.

Frege, therefore, would analyze this attitude report as follows: ‘believes that’ denotes a function that maps the denotation of the sentence ‘Mark Twain wrote Huckleberry Finn’ to a concept. In this case, however, the denotation of the sentence ‘Mark Twain wrote Huckleberry Finn’ is not a truth value but rather a thought. The thought it denotes is different from the thought denoted by ‘Samuel Clemens wrote Huckleberry Finn’ in the following propositional attitude report:

John believes that Samuel Clemens wrote Huckleberry Finn.

Since the thought denoted by ‘Samuel Clemens wrote Huckleberry Finn’ in this context differs from the thought denoted by ‘Mark Twain wrote Huckleberry Finn’ in the same context, the concept denoted by ‘believes that Mark Twain wrote Huckleberry Finn’ is a different concept from the one denoted by ‘believes that Samuel Clemens wrote Huckleberry Finn’. One may consistently suppose that the concept denoted by the former predicate maps John to The True whereas the concept denoted by the latter predicate does not. Frege's analysis therefore preserves our intuition that John can believe that Mark Twain wrote Huckleberry Finn without believing that Samuel Clemens did.
It also preserves the Principle of Identity Substitution—the fact that one cannot substitute ‘Samuel Clemens’ for ‘Mark Twain’ when these names occur after propositional attitude verbs does not constitute evidence against the Principle. For if Frege is right, names do not have their usual denotation when they occur in these contexts." ]
[ null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/frege.jpg", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/judge.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/not.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/if-then.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/all.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/that-Hj.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/not-Hj.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/ifSs-thenHj.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/Ss-and-Hj.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/Ss-or-Hj.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/Ss-iff-Hj.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/AllMx.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/SomeMx.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/NoMx.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/AllPxMx.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/SomePxMx.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/NoPxMx.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/Px-iff-Mx.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/All-F-a-b.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/symbols/sepman-icon.jpg", null, "https://stanford.library.sydney.edu.au/archives/fall2019/symbols/sepman-icon.jpg", null, "https://stanford.library.sydney.edu.au/archives/fall2019/symbols/inpho.png", null, "https://stanford.library.sydney.edu.au/archives/fall2019/symbols/pp.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.908192,"math_prob":0.83687437,"size":88228,"snap":"2022-05-2022-21","text_gpt3_token_len":21022,"char_repetition_ratio":0.15985446,"word_repetition_ratio":0.067195065,"special_character_ratio":0.2291336,"punctuation_ratio":0.13021643,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9601204,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T02:17:47Z\",\"WARC-Record-ID\":\"<urn:uuid:bfffc0db-7a7f-4265-bdbb-3fa898d6b1f6>\",\"Content-Length\":\"125190\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2af3f785-3b16-408e-b3f5-10168c8a4ea2>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d3235d4-3983-494d-9b1c-a13018d88910>\",\"WARC-IP-Address\":\"54.153.194.40\",\"WARC-Target-URI\":\"https://stanford.library.sydney.edu.au/archives/fall2019/entries/frege/\",\"WARC-Payload-Digest\":\"sha1:U3NQCNDGVQX7FTWYN7DZGBD5XZOGOMEM\",\"WARC-Block-Digest\":\"sha1:2XOXYC7CTD7ZZKHCG6QDOE3MOTRVTJPI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662595559.80_warc_CC-MAIN-20220526004200-20220526034200-00296.warc.gz\"}"}
https://qb64.com/wiki/SQR
[ "# QB64.com\n\n## QB64 is a modern extended BASIC programming language that retains QBasic/QuickBASIC 4.5 compatibility and compiles native binaries for Windows, Linux, and macOS.\n\nThe SQR function returns the square root of a numerical value.\n\n## Syntax\n\nsquare_root = SQR(value)\n\n## Description\n\n• The square root returned is normally a SINGLE or DOUBLE numerical value.\n• The value parameter can be any positive numerical type. Negative parameter values will not work!\n• Other exponential root functions can use fractional exponents([^](^)) enclosed in parenthesis only. EX: root = c ^ (a / b)\n\n## Example(s)\n\nFinding the hypotenuse of a right triangle:\n\n``````\nA% = 3: B% = 4\nPRINT \"hypotenuse! =\"; SQR((A% ^ 2) + (B% ^ 2))\n\n``````\n``````\nhypotenuse = 5\n\n``````\n\nFinding the Cube root of a number.\n\n``````\nnumber = 8\ncuberoot = number ^ (1/3)\nPRINT cuberoot\n\n``````\n``````\n2\n\n``````\n\nNegative roots return fractional values of one.\n\n``````\nnumber = 8\nnegroot = number ^ -2\nPRINT negroot\n\n``````\n``````\n.015625\n\n``````\n\nExplanation: A negative root means that the exponent value is actually inverted to a fraction of 1. So x ^ -2 actually means the result will be: 1 / (x ^ 2).\n\nFast Prime number checker limits the numbers checked to the square root (half way).\n\n``````\nDEFLNG P\nDO\nPRIME = -1 'set PRIME as True\nINPUT \"Enter any number to check up to 2 million (Enter quits): \", guess\\$\nPR = VAL(guess\\$)\nIF PR MOD 2 THEN 'check for even number\nFOR P = 3 TO SQR(PR) STEP 2 'largest number that could be a multiple is the SQR\nIF PR MOD P = 0 THEN PRIME = 0: EXIT FOR 'MOD = 0 when evenly divisible by another\nNEXT\nELSE : PRIME = 0 'number to be checked is even so it cannot be a prime\nEND IF\nIF PR = 2 THEN PRIME = -1 '2 is the ONLY even prime\nIF PR = 1 THEN PRIME = 0 'MOD returns true but 1 is not a prime by definition\nIF PRIME THEN PRINT \"PRIME! How'd you find me? \" ELSE PRINT \"Not a prime, you lose!\"\nLOOP UNTIL PR = 0\n\n``````\n``````\nEnter any number to check up to 2 million (Enter quits): 12379\nPRIME! How'd you find me?\n\n``````\n\nNote: Prime numbers cannot be evenly divided by any other number except one." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71845937,"math_prob":0.9915206,"size":1918,"snap":"2022-40-2023-06","text_gpt3_token_len":545,"char_repetition_ratio":0.117032394,"word_repetition_ratio":0.032697547,"special_character_ratio":0.28988528,"punctuation_ratio":0.08493151,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99932754,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T23:45:58Z\",\"WARC-Record-ID\":\"<urn:uuid:3d2e3511-210a-4306-a4a7-6c2bf986899d>\",\"Content-Length\":\"10790\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db1a2d5b-d70b-45da-bbfd-f7b8fce125e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:458a6c0e-da69-490b-a659-ba65a9f750b9>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://qb64.com/wiki/SQR\",\"WARC-Payload-Digest\":\"sha1:4ROEJGAFM6EDORV2226R5S4OUKCRSCJQ\",\"WARC-Block-Digest\":\"sha1:SSWKQ75JKC2ARDNZQA27IZSBDOLUSZWS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499468.22_warc_CC-MAIN-20230127231443-20230128021443-00498.warc.gz\"}"}
https://exam-answers.com/2020/06/21/convert-decimal-ip-address-in-binary-and-binary-in-decimal/
[ "Categories\n\n# Convert Decimal IP address in Binary and Binary in Decimal\n\nThis tutorial explains how to convert a decimal IP address in binary IP address and a\nbinary IP address in a decimal IP address step by step with examples. Learn the easiest method of converting a decimal\nIP address and subnet mask in binary IP address and subnet mask respectively.\n\nAn IP address and a subnet mask both collectively provide a numeric identity to an interface. Both addresses are always used together. Without subnet mask, an IP address is an ambiguous address and without IP address a subnet mask is just a number.\n\nBoth addresses are 32 bits in length. These bits are divided in four parts. Each part is known as Octet and contains 8 bits.\nOctets are separated by periods and written in a sequence.", null, "Two popular notations are used for writing these addresses, binary and decimal.\n\nIn binary notation, all four octets are written in binary format.\n\nExamples of IP address in binary notation are following: –\n\n```00001010.00001010.00001010.00001010\n10101100.10101000.00000001.00000001\n11000000.10101000.00000001.00000001\n```\n\nExamples of subnet mask in binary notation are following: –\n\n```11111111.00000000.00000000.00000000\n11111111.11111111.00000000.00000000\n11111111.11111111.11111111.00000000\n```\n\nIn decimal notation, all four octets are written in decimal format. A decimal equivalent value of the bits is used in each octet.\n\nExamples of IP address in decimal notation are following: –\n\n```10.10.10.10\n172.168.1.1\n192.168.1.1\n```\n\nExamples of subnet mask in decimal notation are following: –\n\n```255.0.0.0\n255.255.0.0\n255.255.255.0\n```\n\nIn real life you rarely need to covert an IP address and subnet mask from decimal to binary format and vice versa. But if you are preparing for any Cisco exam, I highly recommend you to learn this conversion. Nearly all Cisco exams include questions about IP addresses. Learning this conversion will help you in solving IP addressing related questions more effectively.\n\n### Understanding base value and position\n\nExcept the base value, binary system works exactly same as decimal system works. Base value is the digits which are used to build the numbers in both systems.\nIn binary system, two digits (0 and 1) are used to build the numbers while in decimal system, ten digits (0,1,2,3,4,5,6,7,8,9) are used to build the numbers.\n\nIn order to convert a number from binary to decimal and vice versa, we have to change the base value. Once base value is changed, resulting number can be written in new system.\n\nSince IP address and subnet mask both are built from 32 bits and these bits are divided in 4 octets,\nin order to convert these addresses in binary from decimal and vice versa, we only need to understand the numbers which can be built from an octet or 8 bits.\n\nA bit can be either on or off. In binary system on bit is written as 1 and off bit is written as 0 in number.\nIn decimal system if bit is on, its position value is added in number and if bit is off, its position value is skipped in number.\n\nFollowing table lists the position value of each bit in an octet.\n\n Bit position 1 2 3 4 5 6 7 8 Position value 128 64 32 16 8 4 2 1\n###### Key points\n• Regardless which system we use to write the octet, it always contains all 8 bits. Bits are always written from left to right.\n• A number in which all 8 bits are off is written as 00000000 in binary system. 
Same number is written as 0 (0+0+0+0+0+0+0+0) in decimal system.\n• A number in which all 8 bits are on is written as 11111111 in binary system. Same number is written as 255 (128+64+32+16+8+4+2+1) in decimal system.\n\n## Converting decimal number in binary number\n\nTo convert a decimal number in binary number, follow these steps: –\n\n• Compare the position value of first bit with the given number. If given number is greater than the position value, write 0 in rough area of your worksheet. If given number is less than or equal to the position value, write the position value.\n• Add the position value of the second bit in whatever you written in first step and compare it with the position value of the second bit. If sum is greater than the position value, skip the position value. If sum is less than or equal to the position value, add the position value in sum.\n• Repeat this process until all 8 bits are compared. If sum becomes equal at any bit, write all reaming bits as 0.\n Operation In Decimal In Binary Add Use position value Set bit to 1 Skip Skip position value Set bit to 0\n\nLet’s take an example. Convert a decimal number 117 in binary.\n\n• Given decimal number is 117\n• Calculation direction is Left to Right\n Bit position position value Comparison Operation in decimal Value in decimal Operation in Binary Value in binary 1 128 128 is greater than 117 Skip 0 Off 0 2 64 0+64 = 64 is less than 117 Add 64 On 1 3 32 0+64+32 = 96 is less than 117 Add 32 On 1 4 16 0+64+32+16 = 112 is less than 117 Add 16 On 1 5 8 0+64+32+16+8 = 120 is greater than 117 Skip 0 Off 0 6 4 0+64+32+16+0+4 = 116 is less than 117 Add 4 On 1 7 2 0+64+32+16+0+4+2 = 118 is greater than 117 Skip 0 Off 0 8 1 0+64+32+16+0+4+0+1 = 117 is equivalent to 117 Add 1 On 1\n\nOnce above comparison is done in rough paper: –\n\n• To write the given number in decimal format, sum all the values of decimal field and write the result.\nIn this example, it would be 0+64+32+16+0+4+0+1 = 117.\n• To write the given number in binary format, write all the values of binary field from left to right. In this example, it would be 11110101.\n\n## Converting binary number in decimal number\n\nTo convert a binary number in decimal number, sum the values of all on bits. Let’s take an example. Convert a binary number 10101010 in decimal number.\n\n• Given binary number is 10101010\n• Calculation direction is Left to Right\n Bit position 1 2 3 4 5 6 7 8 position value 128 64 32 16 8 4 2 1 In binary 1 0 1 0 1 0 1 0 Bit status On Off On Off On Off On Off If bit status is on, use position value in decimal 128 0 32 0 8 0 2 0\n\nThe binary number 10101010 is equal to the number 170 (128+0+32+0+8+0+2+0) in decimal system.\n\nPractice for you\n\n• Pick any number from 0 – 255 and convert it in binary.\n• Pick any combination from 00000000 – 11111111 and convert it in decimal.\n\n#### Converting an IP address and subnet mask\n\nAs we know IP address and subnet mask both are built from 4 individual octets separated by periods. We can use above methods to convert all octets individually. Once all four octets are converted, we can merge them again separating by periods.", null, "That’s all for this tutorial. If you have any comment, suggestion and feedback about this tutorial, please mail me. If you like this tutorial, please don’t forget to share it through your favorite social network.\n\nPrerequisites for 200-301\n\n200-301 is a single exam, consisting of about 120 questions. 
It covers a wide range of topics, such as routing and switching, security, wireless networking, and even some programming concepts. As with other Cisco certifications, you can take it at any of the Pearson VUE certification centers.\n\nThe recommended training program that can be taken at a Cisco academy is called Implementing and Administering Cisco Solutions (CCNA). The successful completion of a training course will get you a training badge.\n\nFull Version 200-301 Dumps" ]
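The octet-by-octet procedure described in the tutorial above is easy to sanity-check in code. The sketch below is not part of the original article; the helper names (`octet_to_binary`, `binary_to_octet`, `ip_to_binary`) are my own, and the logic checks each position value from 128 down to 1, which is equivalent to the running-sum comparison in the worked tables.

```python
POSITION_VALUES = (128, 64, 32, 16, 8, 4, 2, 1)

def octet_to_binary(value: int) -> str:
    """Decimal octet (0-255) -> 8-bit string, e.g. 117 -> '01110101'."""
    bits = ""
    for pv in POSITION_VALUES:
        if pv <= value:          # "add": use the position value, set the bit to 1
            bits += "1"
            value -= pv
        else:                    # "skip": position value too large, set the bit to 0
            bits += "0"
    return bits

def binary_to_octet(bits: str) -> int:
    """8-bit string -> decimal octet, e.g. '10101010' -> 170 (sum of the on bits)."""
    return sum(pv for pv, b in zip(POSITION_VALUES, bits) if b == "1")

def ip_to_binary(address: str) -> str:
    """Dotted-decimal IP address or mask -> dotted-binary form."""
    return ".".join(octet_to_binary(int(octet)) for octet in address.split("."))

print(octet_to_binary(117))           # 01110101
print(binary_to_octet("10101010"))    # 170
print(ip_to_binary("192.168.1.1"))    # 11000000.10101000.00000001.00000001
print(ip_to_binary("255.255.255.0"))  # 11111111.11111111.11111111.00000000
```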
[ null, "https://www.computernetworkingnotes.org/images/cisco/ccna-study-guide/csg14-01-ip-address.png", null, "https://www.computernetworkingnotes.org/images/cisco/ccna-study-guide/csg14-02-convert-decimal-binary.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84942055,"math_prob":0.9862641,"size":7270,"snap":"2021-04-2021-17","text_gpt3_token_len":1921,"char_repetition_ratio":0.17437379,"word_repetition_ratio":0.11806098,"special_character_ratio":0.31526822,"punctuation_ratio":0.10046266,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964107,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T22:48:02Z\",\"WARC-Record-ID\":\"<urn:uuid:aa381f21-c168-458e-a4c3-56ce6c7aff06>\",\"Content-Length\":\"54444\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d768620-0dde-4a9a-84ab-54c469867cef>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff2133e6-5c38-473c-8f2e-063928a1ddc6>\",\"WARC-IP-Address\":\"198.252.98.69\",\"WARC-Target-URI\":\"https://exam-answers.com/2020/06/21/convert-decimal-ip-address-in-binary-and-binary-in-decimal/\",\"WARC-Payload-Digest\":\"sha1:C4FP2JSSJFI5EU56GALMHVDXUAO7TC5E\",\"WARC-Block-Digest\":\"sha1:MBDQLHOMLRIJ53GHBPK4J6W6GIRU5OYP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039491784.79_warc_CC-MAIN-20210420214346-20210421004346-00072.warc.gz\"}"}
https://answers.everydaycalculation.com/subtract-fractions/2-6-minus-2-30
[ "Solutions by everydaycalculation.com\n\nSubtract 2/30 from 2/6\n\n2/6 - 2/30 is 4/15.\n\nSteps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 6 and 30 is 30\n2. For the 1st fraction, since 6 × 5 = 30,\n2/6 = 2 × 5/6 × 5 = 10/30\n3. Likewise, for the 2nd fraction, since 30 × 1 = 30,\n2/30 = 2 × 1/30 × 1 = 2/30\n4. Subtract the two fractions:\n10/30 - 2/30 = 10 - 2/30 = 8/30\n5. After reducing the fraction, the answer is 4/15\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:\nAndroid and iPhone/ iPad\n\n-" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5208929,"math_prob":0.99769115,"size":338,"snap":"2019-43-2019-47","text_gpt3_token_len":155,"char_repetition_ratio":0.2005988,"word_repetition_ratio":0.0,"special_character_ratio":0.5147929,"punctuation_ratio":0.056179777,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99851274,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T15:08:17Z\",\"WARC-Record-ID\":\"<urn:uuid:63365a07-23e6-45db-baed-daef2e1857e9>\",\"Content-Length\":\"8324\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65539630-a2cb-4151-91c2-5d16ae63d407>\",\"WARC-Concurrent-To\":\"<urn:uuid:048931f5-3916-40bb-9153-06de5204ba35>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/2-6-minus-2-30\",\"WARC-Payload-Digest\":\"sha1:LT7YXYH5ERL4C2K5DVXSN22V2HXQG4TG\",\"WARC-Block-Digest\":\"sha1:RB3LVDWY5YOZ7CZ4CKJMNFGXSXYQGNXL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986675409.61_warc_CC-MAIN-20191017145741-20191017173241-00049.warc.gz\"}"}
https://metanumbers.com/351119
[ "## 351119\n\n351,119 (three hundred fifty-one thousand one hundred nineteen) is an odd six-digits composite number following 351118 and preceding 351120. In scientific notation, it is written as 3.51119 × 105. The sum of its digits is 20. It has a total of 2 prime factors and 4 positive divisors. There are 349,680 positive integers (up to 351119) that are relatively prime to 351119.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 6\n• Sum of Digits 20\n• Digital Root 2\n\n## Name\n\nShort name 351 thousand 119 three hundred fifty-one thousand one hundred nineteen\n\n## Notation\n\nScientific notation 3.51119 × 105 351.119 × 103\n\n## Prime Factorization of 351119\n\nPrime Factorization 311 × 1129\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 351119 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 351,119 is 311 × 1129. Since it has a total of 2 prime factors, 351,119 is a composite number.\n\n## Divisors of 351119\n\n4 divisors\n\n Even divisors 0 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 352560 Sum of all the positive divisors of n s(n) 1441 Sum of the proper positive divisors of n A(n) 88140 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 592.553 Returns the nth root of the product of n divisors H(n) 3.98365 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 351,119 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 351,119) is 352,560, the average is 88,140.\n\n## Other Arithmetic Functions (n = 351119)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 349680 Total number of positive integers not greater than n that are coprime to n λ(n) 174840 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 29972 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 349,680 positive integers (less than 351,119) that are coprime with 351,119. 
And there are approximately 29,972 prime numbers less than or equal to 351,119.\n\n## Divisibility of 351119\n\n m n mod m 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 2\n\n351,119 is not divisible by any number less than or equal to 9.\n\n## Classification of 351119\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (351119)\n\nBase System Value\n2 Binary 1010101101110001111\n3 Ternary 122211122102\n4 Quaternary 1111232033\n5 Quinary 42213434\n6 Senary 11305315\n8 Octal 1255617\n10 Decimal 351119\n12 Duodecimal 14b23b\n20 Vigesimal 23hfj\n36 Base36 7ixb\n\n## Basic calculations (n = 351119)\n\n### Multiplication\n\nn×i\n n×2 702238 1053357 1404476 1755595\n\n### Division\n\nni\n n⁄2 175560 117040 87779.8 70223.8\n\n### Exponentiation\n\nni\n n2 123284552161 43287548670218159 15199080801538329769921 5336686051955336810484891599\n\n### Nth Root\n\ni√n\n 2√n 592.553 70.548 24.3424 12.8556\n\n## 351119 as geometric shapes\n\n### Circle\n\n Diameter 702238 2.20615e+06 3.8731e+11\n\n### Sphere\n\n Volume 1.81322e+17 1.54924e+12 2.20615e+06\n\n### Square\n\nLength = n\n Perimeter 1.40448e+06 1.23285e+11 496557\n\n### Cube\n\nLength = n\n Surface area 7.39707e+11 4.32875e+16 608156\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 1.05336e+06 5.33838e+10 304078\n\n### Triangular Pyramid\n\nLength = n\n Surface area 2.13535e+11 5.10149e+15 286687" ]
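Several of the values quoted above (the divisor count, sum of divisors, and Euler totient) can be re-derived from the stated factorization 311 × 1129. The short sketch below is my own illustration, not part of the original page; it uses the standard formulas for a product of two distinct primes.

```python
n = 351119
p, q = 311, 1129                     # prime factorization quoted above
assert p * q == n

divisors = [1, p, q, n]              # tau(n) = 4 for a semiprime with distinct factors
sigma = (1 + p) * (1 + q)            # sum of divisors: (1 + p)(1 + q)
phi = (p - 1) * (q - 1)              # Euler totient: (p - 1)(q - 1)

print(divisors)                      # [1, 311, 1129, 351119]
print(sigma, sum(divisors))          # 352560 352560
print(phi)                           # 349680 integers up to n that are coprime to n
```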
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60007745,"math_prob":0.9908327,"size":4666,"snap":"2020-34-2020-40","text_gpt3_token_len":1628,"char_repetition_ratio":0.11947662,"word_repetition_ratio":0.038123168,"special_character_ratio":0.45820832,"punctuation_ratio":0.075351216,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986115,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T10:52:03Z\",\"WARC-Record-ID\":\"<urn:uuid:e072374e-1ca7-4d94-8b66-a275b4da3099>\",\"Content-Length\":\"48306\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78415cdb-138b-41be-a087-7d5ba5f7c5f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:dab311b2-06f6-4b54-8215-84b6e97ee4f1>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/351119\",\"WARC-Payload-Digest\":\"sha1:HYFYBBV4YZMCM32KQ43UZJSZP4KCYDP4\",\"WARC-Block-Digest\":\"sha1:5L7XIKHSMOFG2CF3N7VTKDO435FYZ7R3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400197946.27_warc_CC-MAIN-20200920094130-20200920124130-00145.warc.gz\"}"}
https://edurev.in/course/quiz/attempt/6414_Quantitative-Aptitude-Test-7/e5f51003-5222-4f20-a168-0524103f470d
[ "Courses\n\n# Quantitative Aptitude - Test 7\n\n## 25 Questions MCQ Test SSC CGL Tier 1 Mock Test Series | Quantitative Aptitude - Test 7\n\nDescription\nThis mock test of Quantitative Aptitude - Test 7 for SSC helps you for every SSC entrance exam. This contains 25 Multiple Choice Questions for SSC Quantitative Aptitude - Test 7 (mcq) to study with solutions a complete question bank. The solved questions answers in this Quantitative Aptitude - Test 7 quiz give you a good mix of easy questions and tough questions. SSC students definitely take this Quantitative Aptitude - Test 7 exercise for a better result in the exam. You can find other Quantitative Aptitude - Test 7 extra questions, long questions & short questions for SSC on EduRev as well by searching above.\nQUESTION: 1\n\n### A hollow sphere of internal and external radius 3 cm and 5 cm respectively is melted into a solid right circular cone of diameter 8 cm. The height of the cone is\n\nSolution:\n\nWe know that the formula of the volume of a hollow sphere is\n4π(R3 – r3)\nA hollow sphere of internal and external radius 3 cm and 5 cm respectively\nSo the volume of the hollow sphere = [4×π×(53 – 33)]/3 cc\nNow, after melting this sphere, we will get a right circular cone, which’s diameter is 8 cm\nSo, radius of that cone = 8/2 cm = 4 cm\nWe know that the formula of the volume of a right circular cone is πr2h/3\nHere, r is the radius of the cone and h is the height of the cone\nFrom the question,\nwe can make the equation,\nπ × 42 × h/3 =  [4×π×(53 – 33)]/3\n⇒ 4h = 98\n⇒ h = 24.5\nSo, the height of the cone is 24.5 cm\n\nQUESTION: 2\n\n### A and B can together do a piece of work in 28 days. If A, B and C can together finish the work in 14 days, how long will C take to do the work by himself?\n\nSolution:\n\nIf A and B can together do a piece of work in 28 days which means in 1 day, A and B will finish 1/28th of the work.", null, "If A, B and C can together finish the work in 14 days which means in 1 day, A, B and C will finish 1/14th of the work.", null, "Putting the first value in second", null, "QUESTION: 3\n\n### Two cars are moving with speeds v1, v2 towards a crossing along two roads. If their distances from the crossing be 40 metres and 50 metres at an instant of time then they do not collide if their speeds are such that\n\nSolution:", null, "", null, "then they will collide i.e. cars will reach at the same time.", null, "QUESTION: 4\n\n2/5 Part of a mixture of 3 l is water and the rest is sugar syrup. Another mixture of 8 l contains 3/5 part water and the rest is sugar syrup. It both mixture are mixed together then what will be the ratio of water & sugar syrup in the new mixture?\n\nSolution:\n\nAssume,\n1) Mixture 1: mixture of 2/5 part water & rest sugar syrup.\n2) Mixture 2: mixture of 3/5 part water & rest sugar syrup.\n3) Mixture 3: mixture of mixture 1 & mixture 2.\nAmount of water and sugar syrup in different mixtures are represented in tabulated form for better understanding.", null, "∴ The ratio of water & sugar syrup in the new mixture = 6:5\n\nQUESTION: 5\n\nA wire is bent in the form of an equilateral triangle encloses a region having area of 121√3 cm². 
If the same wire is rebent in the form of a circle, its radius will be:\n\nSolution:\n\nWe have area of an equilateral triangle = (√3/4) × side × side\n⇒ 121√3 = (√3/4) × side × side\n⇒ side × side = 121 × 4\n⇒ side = 22\nPerimeter = 3 × side\n= 3 × 22\n= 66 cm\nWe now have a circle with perimeter 66 cm.\n⇒ 2 × (22/7) × radius = 66 cm\n⇒ (44/7) × radius = 66 cm\n⇒ Radius = 7 × 1.5\n= 10.5 cm\n\nQUESTION: 6\n\nIn a class of 50 students, 20 students had an average score of 70 and remaining had an average score of 50. What is the average score of the class?\n\nSolution:\n\nTotal number of students = 50\n20 students had an average score of 70\n∴ Total score of these 20 students\n= Average score × Number of students\n= 70 × 20 = 1400\nRemaining number of students = 50 – 20 = 30\nRemainingstudents had an average score of 50\n∴ Total score of these 30 students\n= Average score × Number of students\n= 50 × 30 = 1500\n∴ Sum of scoresof these 50 students = 1400 + 1500 = 2900\n∴ Average score of these 50 students\n= Sum total of these 50 students/50 = 2900/50 = 58\n\nQUESTION: 7\n\nIn a factory 60% of the workers are above 30 years and of these 75% are males and the rest are females. If there are 1350 male workers above 30 years, the total number of workers in the factory is\n\nSolution:\n\nLet the total number of workers in the factory be X. Then,\nTotal number of workers above 30 years of age", null, "Also it is given that out of these 75% are male. Thus\nTotal number of male workers above 30 years of age", null, "It is given that total number of male workers above 30 years\n= 1350\nHence,\n⇒ 0.45X = 1350\n⇒ X = 3000\n∴ Total workers in the factory are 3000.\n\nQUESTION: 8\n\nThe difference between simple interest and compound interest for 2 years on the sum Rs. 2900 at a certain rate is Rs. 14.21. What is annual rate of interest?\n\nSolution:\n\nWe know that the difference between the simple interest (SI2) & compound interest (CI2) for 2 years [compounded annually] on a sum of Rs. P at a rate R is,", null, "Now the given information, P = Rs. 2900, CI2 – SI2 = Rs. 14.21\nPutting the values we get,", null, "The annual rate of interest is 7%\n\nQUESTION: 9\n\nThe third proportional to", null, "Solution:\n\nLet, the third proportional is ‘x’", null, "QUESTION: 10\n\nABC is a right angled triangle, B being the right angle. Midpoints of BC and AC are respectively B’ and A’. The ratio of the area of the quadrilateral AA’ B’B to the area of the triangle ABC is\n\nSolution:", null, "Formulas:\n\nArea of a triangle", null, "Area of a trapezium =", null, "Triangle ABC and A’B’C are similar by SAS", null, "By property of similar triangles,\n∠B = ∠A’B’C and 2A’B’ = AB\n∴ A’B’ is parallel to AB.\nThus quadrilateral AA’ B’B is a trapezium.\nHeight = BB’\n⇒ Height = BC/2\nSum of parallel sides = (A’B’ + AB) = 3AB/2\nArea of quadrilateral AA’ B’B", null, "⇒ Area of quadrilateral AA’ B’B", null, "Area of triangle ABC", null, "Ratio area of quadrilateral AA’ B’B to area of triangle ABC", null, "⇒ Ratio of area of quadrilateral AA’ B’B to area of triangle ABC = 3 : 4\n\nQUESTION: 11\n\nThe cost of making an article is divided between materials, labor and overheads in the ratio 3 : 4 : 2. 
If the cost of material is Rs.33.60 the cost of article is\n\nSolution:\n\nLet the multiplying factor be ‘x’.\nMaterials = 3x, Labor = 4x and Overheads = 2x\nGiven that, materials cost = Rs.33.60\nAs per sum,\n⇒ 33.60 = 3x\n⇒ x = 11.20\n⇒ Labour = 4x = 4(11.20) = 44.80\n⇒ Overheads = 2x = 2(11.20) = 22.40\n∴ Cost of article = 33.60 + 44.80 + 22.40 = 100.80.\n\nQUESTION: 12\n\nIf a = √2 + 1, b = √2 – 1, then the value of", null, "Solution:\n\nGiven,\na = √2 + 1\nb = √2 – 1\nGiven expression is,", null, "= 2/2\n= 1\n\nQUESTION: 13\n\nIf ‘a’ and ‘b’ are two odd positive integers, by which of the following is (a4 – b4) always divisible?\n\nSolution:\n\nWe know that, Summation and Subtraction between two odd integers gives an even integer.\nGiven, a and b are two odd positive integers.\n∴ a + b = 2k1 ……(1)\nAnd a – b = 2k........(2)\nWhere, k1 and k2are two integers.\n(1) + (2) ⇒ 2a = 2(k1 + k2) ⇒ a = k1 + k2\n(1) - (2) ⇒ 2b = 2(k1 - k2) ⇒ b = k1 - k2\n∴ a2 + b2\n= (k1 + k2)2 + (k1 - k2)2\n= k12 + k22 + 2k1k2 + k12 + k22 - 2k1k2\n= 2(k12 + k22) ………..(3)\nGiven expression is,\na4 – b4\n= (a2 – b2)(a2 + b2)\n= (a + b) (a - b)(a2 + b2)\nPutting values from (1), (2) and (3)\n= 2k× 2k2 × 2(k12 + k22)\n= 8× k× k2 × (k12 + k22)\n∴ a4 – b= 8× k× k2 × (k12 + k22) is always divisible by 8.\n\nQUESTION: 14\n\nIf (x - 2) and (x + 3) are the factors of x2 + k1x + k2 then\n\nSolution:\n\nIf (x - 2) and (x + 3) are the factors of the equation, then that means that\n⇒ x – 2 = 0\n⇒ x = 2\nSimilarly,\n⇒ x + 3 = 0\n⇒ x = -3\nPutting the value of one of the x in the equation such that the value is zero.\n⇒ x2 + k1x + k2\nPutting the value of ‘x’ as 2, we get\n⇒ (2)2 + k1(2) + k= 0\n⇒ 4 +2k1+ k= 0\nPutting the value of ‘x’ as -3, we get\n⇒ (-3)2 + k1(-3) + k= 0\n⇒ 9 -3k1+ k= 0\nSubtracting both the equations:\n⇒ 4 +2 k1+ k- (9 -3 k1+ k2) = 0\n⇒ 4 +2 k1+ k- 9 + 3 k1- k= 0\n⇒ 5 k- 5 = 0\n⇒ 5 k= 5\n⇒ k= 1\nPutting the value of k= 1,\n⇒ 4 +2 k1+ k= 0\n⇒ 4 +2 (1)+ k= 0\n⇒ 4 +2 + k= 0\n⇒ 6 + k= 0\n⇒ k= -6\nAlternate method:-\n(x – 2) and (x + 3) are factors of x2 + k1x + k2\n∴ (x – 2)(x + 3) = x2 + k1x + k2\n⇒ x2 + x – 6 = x2 + k1x + k2\nComparing, we get, k1 = 1 and k2 = - 6\n\nQUESTION: 15\n\nIf ab + bc + ca = 0, then the value of", null, "Solution:\n\nIn the given question let us add (ab + bc + ca) to all the denominators, such that the equation becomes:", null, "Taking L.C.M of the above equation", null, "⇒ 0\n\nQUESTION: 16\n\nThe heights of two poles are 180 m and 60 m respectively. If the angle of elevation of the top of the first pole form the foot of the second pole is 60°, what is the angle of elevation of the top of the second pole form the foot of the first?\n\nSolution:", null, "Given, heights of two poles are 180 m and 60 m.\nWe know that the distance between the two poles acts as base and will be same so,\nFor the first pole,\ntan 60° = height/base\n⇒ √3 = 180m / b\n⇒ b = 60√3\ntan θ = h / 60√3\n⇒ tan θ = 1/√3\n⇒ θ = 30°\n\nQUESTION: 17\n\nIf 3 sin θ + 5 cos θ = 5, then the value of 5 sin θ – 3 cos θ wil be\n\nSolution:\n\nFormula-\nsin2θ + cos2 θ = 1\nGiven,\n3 sin θ + 5 cos θ = 5 …….eq(1)\nLet, 5 sin θ – 3 cos θ = x …..eq(2)\nOn squaring and adding both equations\n(3 sin θ + 5 cos θ)2+ (5 sin θ – 3 cos θ)2 = 25 +x2\n⇒ 9sinθ + 25 cosθ +30sinθ cosθ + 25 sinθ + 9cos2θ – 30sinθ cosθ = 25 +x2\n⇒ 9sinθ + 9cosθ+ 25 cosθ +25 sinθ = 25 +x2\n⇒ 9(sinθ +cosθ) + 25(sinθ +cosθ) =25 +x2\n⇒ 9 + 25 = 25 +x2\n⇒ x2= 9\n⇒ x = ±3\n\nQUESTION: 18\n\nA shopkeeper sold an item for Rs. 
1,510 after giving a discount of", null, "and there by incurred a loss of 10%. Had he sold the item without discount, his net profit would have been\n\nSolution:\n\nGiven, selling price (SP) = 1510,", null, "Marked price – 49/2% of marked price = 1510\n⇒ Marked price = 1510/0.755 = Rs. 2000\nMarked price is the selling price without discount.\nGiven, there is a loss of 10%.", null, "QUESTION: 19\n\nIf A(1, 2), B(4, y), C(x, 6) and D(3, 5) are the vertices of a parallelogram taken in order, then find the values of x and y.\n\nSolution:", null, "Given, vertices of parallelogram are A(1, 2), B(4, y), C(x, 6) and D(3, 5)\nWe know that diagonals of a parallelogram bisect each other, so let the diagonals cross at point O.\nSince O divides AC and BD in 1:1 ratio so,\nCoordinate of O (w.r.t. AC) =", null, "Coordinate of O (w.r.t. BD)", null, "Since both the coordinate s are same so equating them we get,\n1 + x = 7\n⇒ x = 6\nAnd,\n8 = 5 + y\n⇒ y = 3\n\nQUESTION: 20\n\nIn a circle of radius 5 cm, AB and AC are two equal chords such that AB = AC = 6 cm. The length of BC (in cm) is\n\nSolution:", null, "Pythagoras theorem:-\nHypotenuse2 = perpendicular2 + base2\n(a - b)2 = a2 + b2 - 2ab\nLet O be the center of the circle.\nSince Δ ABC is isosceles therefore line AO will bisect BC perpendicularly.\nSo, PB = CP\nBy Pythagoras theorem,\nAB2 = AP2 + BP2\n⇒ 36 = AP2 + BP2 …(1)\nAnd,\nBO2 = BP2 + OP2\n⇒ 25 = BP2 + (5 – AP)2 [∵AP + OP = 5]\n⇒ 25 = BP2 + 25 + AP2 – 10AP      …(2)\nSubstituting value of AP+ BP2 in (2)\n⇒ 10AP = 36\n⇒ AP = 3.6 cm\n⇒ BP2 = 36 – 12.96\n⇒ BP2 = 23.04\n⇒ BP = 4.8 cm\n⇒ BC = 9.6 cm\n\nQUESTION: 21\n\nIn a Δ PQR, ∠RPQ = 90°, PR = 8 cm and PQ = 6 cm, then the radius of the circumcircle of Δ PQR is\n\nSolution:", null, "Pythagoras theorem,\nHypotenuse2 = perpendicular2 + base2\nArea of a triangle = abc/4R\nWhere, a, b and c are sides of the triangle and R is the radius of circumcircle.\nArea of the triangle PQR = ½ × PQ × PR\nWhere PR = 8cm, PQ = 6cm\nArea of triangle =12×8×6=24cm2=12×8×6=24cm2\nNow, using Pythagoras theorem, a+ b2 = c2\n⇒ c= (8)2 + (6)2 = 64 + 36 = 100 = (10)2\n⇒ c = 10\nRadius", null, "QUESTION: 22\n\nDirections: Study the following table carefully and answer the questions.", null, "Q. If farmer D and farmer E, both sell 240 kg of Bajara each, what would be the respective ratio of their earnings?\n\nSolution:\n\nFrom the table,\nPrice per kg of Bajra sold by farmer D = 28\nPrice per kg of Bajra sold by farmer E = 30\nEarning on 240 kg Bajra by D = 240 × 28 = Rs. 6720\nEarning on 240 kg Bajra by E = 240 × 30 = Rs. 7200\n∴ Required ratio\n= 6720 : 7200 = 14 : 15\n\nQUESTION: 23\n\nDirections: Study the following table carefully and answer the questions.", null, "Q.  What is the average price per kg of Bajra sold by all the farmers together?\n\nSolution:\n\nPrice per kg of Bajra sold by farmer A = 22\nPrice per kg of Bajra sold by farmer B = 24.5\nPrice per kg of Bajra sold by farmer C = 21\nPrice per kg of Bajra sold by farmer D = 28\nPrice per kg of Bajra sold by farmer E = 30\n∴ Average price per kg of Bajra sold by all the farmers together\n= (Sum of price per kg of Bajra)/number of farmer\n= (22 + 24.5 + 21 + 28 + 30)/5\n= 125.5/5 = 25.1\n\nQUESTION: 24\n\nDirections: Study the following table carefully and answer the questions.", null, "Q. 
If farmer A sells 350 kg of rice, 150 kg of corn and 250 kg of jowar, how much would he earn?\n\nSolution:\n\nFrom the table,\nPrice per kg of rice sold by farmer A = 30\nPrice per kg of corn sold by farmer A = 22.5\nPrice per kg of jowar sold by farmer A = 18\n∴ Earning on 350 kg of rice by farmer A = 350 × 30 = Rs. 10500\nEarning on 150 kg of corn by farmer A = 150 × 22.5 = Rs. 3375\nEarning on 250 kg of jowar by farmer A = 250 × 18 = Rs. 4500\n∴ Total earn = (10500 + 3375 + 4500) = Rs. 18375\n\nQUESTION: 25\n\nDirections: Study the following table carefully and answer the questions.", null, "Q. Earnings on 150 kg of paddy sold by farmer B are approximately what percent of the earnings on the same amount of rice sold by the same farmer?\n\nSolution:\n\nGiven,\nPrice per kg of paddy sold by farmer B = 25\nPrice per kg of rice sold by farmer B = 36\n∴ Earning on 150 kg of paddy by farmer B = 150 × 25 = Rs. 3750\nEarning on 150 kg of rice by farmer B = 150 × 36 = Rs. 5400\n∴ Required per cent\n= (3750/5400) × 100\n= 3750/54 = 69.44%" ]
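Two of the worked answers above are easy to confirm numerically. The sketch below re-does the arithmetic for Question 1 (the cone height) and Question 8 (the compound-versus-simple-interest difference). It is an added check, not part of the original test paper.

```python
from math import pi

# Q1: hollow sphere with radii R = 5 and r = 3 melted into a cone of radius 4
sphere_volume = 4 / 3 * pi * (5 ** 3 - 3 ** 3)    # (4/3)*pi*(R^3 - r^3)
cone_height = 3 * sphere_volume / (pi * 4 ** 2)   # from (1/3)*pi*r^2*h = sphere_volume
print(cone_height)                                # 24.5 (up to float rounding)

# Q8: for 2 years, CI - SI = P * (R/100)^2
P, R = 2900, 7
print(P * (R / 100) ** 2)                         # ~14.21 (up to float rounding)
```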
[ null, "https://cdn3.edurev.in/ApplicationImages/Temp/4ec0d0d8-a8ea-4a88-86c4-c75059efb630_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/02579477-f730-475a-86f3-74854ad54233_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/34e5faf7-0e87-4920-9db1-6291fce36862_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3096_820371fc-52de-4926-b97b-6f90d455ac3b_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3096_b8e6ae9e-edf5-45a7-b69f-217fd4a07f87_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3096_8d48d8f9-624e-42f6-be00-fe52c84c4e35_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/9e9e9241-b098-4aab-9bb7-21ba030a32bb_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/06121907-f8be-468a-8341-074e554c91b6_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/48b92e11-9861-4041-80df-03f4459e59ac_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/d385bb5e-880c-43da-90f7-43dd01bfdff1_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/08a713f3-1a3d-4f4f-a0cd-48044033c85b_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/1a6beb0e-2003-47d4-a386-7c3ea6909856_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/14c30ced-169f-4e65-8adb-c4f8e6c75e79_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3096_a7696762-15cd-4b27-8082-3bf11a256ceb_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/d780f1c7-a81a-4172-a813-65bc8bddc56c_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/b8c71edc-1ee9-4abe-902e-199fe47b7086_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/79a09551-5d43-4605-a0be-e37b490de286_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/aaaaeb89-a911-4af0-9975-9fcecb55f93a_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/c62164a2-cfae-4454-b45b-643461b0cbde_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/c90e9a4d-0ff4-4c19-a5a1-39d1a9b88fd1_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/94666cc3-5e32-4c37-8022-2b1e1448cc9c_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/b8991f3e-b9f6-4691-83f3-e47a6983ed30_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/8f8cd04b-0361-4dc4-88df-75370bcfdda7_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/884bcfa3-a60b-4257-b3e8-78b966287eae_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/a67c34cc-68e1-40af-92aa-3bd485bad262_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/1d611889-3905-46d7-a00a-b0830e1f6a20_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3096_38dc0900-100f-414f-9e82-71075c16f4c4_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/389ff4f3-c959-42ef-a79e-6d3c34c03a37_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/81702fea-c9cc-4ff2-b816-bb36af0ccb7f_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/ea2bd2e1-d0f6-4dd9-b590-152f6abdf0bf_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3096_628e4a33-c5ab-4ec3-9a87-3ce0a91b1564_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/c89e912e-9fb4-4597-9e13-4afccaff026d_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/4ca2bb50-ce09-46d4-a06b-a90f4f2701a6_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3096_41f8026b-eb41-4cf5-ac73-850ef3592206_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3096_2c11b930-2d12-478a-bdec-e382ccd26d83_lg.png", null, 
"https://cdn3.edurev.in/ApplicationImages/Temp/244de9f2-45f9-4b2a-98c0-5595743ecfc1_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/a420b196-5fcb-4e11-8838-5fbc0ee4b395_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/ce5d5aa9-dfe8-4a55-8aeb-162c550930c5_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/fdea6d99-b5be-402c-b00b-178a9f8b7879_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/8b06c58f-22ca-400c-9011-faf4148c723f_lg.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8638297,"math_prob":0.9992362,"size":12463,"snap":"2020-34-2020-40","text_gpt3_token_len":4614,"char_repetition_ratio":0.11895016,"word_repetition_ratio":0.1548455,"special_character_ratio":0.38826928,"punctuation_ratio":0.09276504,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998454,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T02:27:02Z\",\"WARC-Record-ID\":\"<urn:uuid:2836f239-f3ca-4e7d-839a-035aa0326e5b>\",\"Content-Length\":\"389374\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64c5c49f-53e9-4e8e-8d7c-9fee7e979263>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ca552f6-e5a0-437a-b841-ba9ddc931f91>\",\"WARC-IP-Address\":\"35.198.207.72\",\"WARC-Target-URI\":\"https://edurev.in/course/quiz/attempt/6414_Quantitative-Aptitude-Test-7/e5f51003-5222-4f20-a168-0524103f470d\",\"WARC-Payload-Digest\":\"sha1:7KN3YOPEZZ4OYXOMHZUPVDHIP5ZOXTJG\",\"WARC-Block-Digest\":\"sha1:DQQYRABEC3D44RQUOXDYRHRZKQD6NOEF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400193087.0_warc_CC-MAIN-20200920000137-20200920030137-00759.warc.gz\"}"}
https://se.mathworks.com/help/pde/ug/view-edit-and-delete-boundary-conditions.html
[ "## View, Edit, and Delete Boundary Conditions\n\n### View Boundary Conditions\n\nA PDE model stores boundary conditions in its `BoundaryConditions` property. To obtain the boundary conditions stored in the PDE model called `model`, use this syntax:\n\n`BCs = model.BoundaryConditions;`\n\nTo see the active boundary condition assignment for a region, call the `findBoundaryConditions` function.\n\nFor example, create a model and view the geometry.\n\n```model = createpde(3); importGeometry(model,\"Block.stl\"); pdegplot(model,\"FaceLabels\",\"on\",\"FaceAlpha\",0.5)```", null, "Set zero Dirichlet conditions for all equations and all regions in the model.\n\n`applyBoundaryCondition(model,\"dirichlet\",\"Face\",1:6,\"u\",[0,0,0]);`\n\nOn face 3, set the Neumann boundary condition for equation 1 and Dirichlet boundary condition for equations 2 and 3.\n\n```h = [0 0 0;0 1 0;0 0 1]; r = [0;3;3]; q = [1 0 0;0 0 0;0 0 0]; g = [10;0;0]; applyBoundaryCondition(model,\"mixed\",\"Face\",3,\"h\",h,\"r\",r,\"g\",g,\"q\",q);```\n\nView the boundary condition assignment for face 3. The result shows that the active boundary condition is the last assignment.\n\n```BCs = model.BoundaryConditions; findBoundaryConditions(BCs,\"Face\",3)```\n```ans = BoundaryCondition with properties: BCType: 'mixed' RegionType: 'Face' RegionID: 3 r: [3x1 double] h: [3x3 double] g: [3x1 double] q: [3x3 double] u: [] EquationIndex: [] Vectorized: 'off' ```\n\nView the boundary conditions assignment for face 1.\n\n`findBoundaryConditions(BCs,\"Face\",1)`\n```ans = BoundaryCondition with properties: BCType: 'dirichlet' RegionType: 'Face' RegionID: [1 2 3 4 5 6] r: [] h: [] g: [] q: [] u: [0 0 0] EquationIndex: [] Vectorized: 'off' ```\n\nThe active boundary conditions assignment for face 1 includes all six faces, though this assignment is no longer active for face 3.\n\n### Delete Existing Boundary Conditions\n\nTo remove all the boundary conditions in the PDE model called `pdem`, use `delete`.\n\n`delete(pdem.BoundaryConditions)`\n\nTo remove specific boundary conditions assignments from `pdem`, delete them from the `pdem.BoundaryConditions.BoundaryConditionAssignments` vector. For example,\n\n```BCv = pdem.BoundaryConditions.BoundaryConditionAssignments; delete(BCv(2))```\n\nTip\n\nYou do not need to delete boundary conditions; you can override them by calling `applyBoundaryCondition` again. However, removing unused assignments can make your model more concise.\n\n### Change a Boundary Conditions Assignment\n\nTo change a boundary conditions assignment, you need the boundary condition’s handle. To get the boundary condition’s handle:\n\n• Retain the handle when using `applyBoundaryCondition`. For example,\n\n```bc1 = applyBoundaryCondition(model,\"dirichlet\", ... \"Face\",1:6, ... \"u\",[0 0 0]);```\n• Obtain the handle using `findBoundaryConditions`. For example,\n\n```BCs = model.BoundaryConditions; bc1 = findBoundaryConditions(BCs,\"Face\",2)```\n```bc1 = BoundaryCondition with properties: BCType: 'dirichlet' RegionType: 'Face' RegionID: [1 2 3 4 5 6] r: [] h: [] g: [] q: [] u: [0 0 0] EquationIndex: [] Vectorized: 'off'```\n\nYou can change any property of the boundary conditions handle. 
For example,\n\n```bc1.BCType = \"neumann\"; bc1.u = []; bc1.g = [0 0 0]; bc1.q = [0 0 0]; bc1```\n```bc1 = BoundaryCondition with properties: BCType: 'neumann' RegionType: 'Face' RegionID: [1 2 3 4 5 6] r: [] h: [] g: [0 0 0] q: [0 0 0] u: [] EquationIndex: [] Vectorized: 'off'```\n\nNote\n\nEditing an existing assignment in this way does not change its priority. For example, if the active boundary condition was assigned after `bc1`, then editing `bc1` does not make `bc1` the active boundary condition." ]
[ null, "https://se.mathworks.com/help/examples/pde/win64/ViewBoundaryConditionsExample_01.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8195412,"math_prob":0.9871909,"size":3453,"snap":"2023-14-2023-23","text_gpt3_token_len":972,"char_repetition_ratio":0.24731806,"word_repetition_ratio":0.16082475,"special_character_ratio":0.27251664,"punctuation_ratio":0.23941606,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9704128,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T08:09:37Z\",\"WARC-Record-ID\":\"<urn:uuid:7bb7af8c-96e8-4bc5-a152-e71ae9d71d64>\",\"Content-Length\":\"81844\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:295a6a62-de7a-41aa-bb57-599d18db2ba2>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c19b4af-b607-4556-af02-ea92a861fe7e>\",\"WARC-IP-Address\":\"23.196.74.206\",\"WARC-Target-URI\":\"https://se.mathworks.com/help/pde/ug/view-edit-and-delete-boundary-conditions.html\",\"WARC-Payload-Digest\":\"sha1:S57TBPXMYQGRDYKDJHHGXJHQQEPZCSYM\",\"WARC-Block-Digest\":\"sha1:EGKS3K44NUGUPDIPXFS6LCOFFIMGQESC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657144.94_warc_CC-MAIN-20230610062920-20230610092920-00368.warc.gz\"}"}
https://artofproblemsolving.com/wiki/index.php/1951_AHSME_Problems/Problem_28
[ "# 1951 AHSME Problems/Problem 28\n\n## Problem\n\nThe pressure", null, "$(P)$ of wind on a sail varies jointly as the area", null, "$(A)$ of the sail and the square of the velocity", null, "$(V)$ of the wind. The pressure on a square foot is", null, "$1$ pound when the velocity is", null, "$16$ miles per hour. The velocity of the wind when the pressure on a square yard is", null, "$36$ pounds is:", null, "$\\textbf{(A)}\\ 10\\frac{2}{3}\\text{ mph}\\qquad\\textbf{(B)}\\ 96\\text{ mph}\\qquad\\textbf{(C)}\\ 32\\text{ mph}\\qquad\\textbf{(D)}\\ 1\\frac{2}{3}\\text{ mph}\\qquad\\textbf{(E)}\\ 16\\text{ mph}$\n\n## Solution\n\nBecause", null, "$P$ varies jointly as", null, "$A$ and", null, "$V^2$, that means that there is a number", null, "$k$ such that", null, "$P=kAV^2$. You are given that", null, "$P=1$ when", null, "$A=1$ and", null, "$V=16$. That means that", null, "$1=k(1)(16^2) \\rightarrow k=\\frac{1}{256}$. Then, substituting into the original equation with", null, "$P=36$ and", null, "$A=9$ (because a square yard is", null, "$9$ times a square foot), you get", null, "$4=\\frac{1}{256}(V^2)$. Solving for", null, "$V$, we get", null, "$V^2=1024$, so", null, "$V=32$. Hence, the answer is", null, "$\\boxed{C}$.\n\nThe problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.", null, "" ]
[ null, "https://latex.artofproblemsolving.com/6/3/7/6370e3110be7e01afa16fd13d44c5115cfde2a4a.png ", null, "https://latex.artofproblemsolving.com/5/6/5/565ab5e3715db52b9176b04dd9991b71f4e93a86.png ", null, "https://latex.artofproblemsolving.com/1/3/0/130d96c1df523aa2fcb2263aa935de9ad0237560.png ", null, "https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ", null, "https://latex.artofproblemsolving.com/9/a/5/9a5b4928c8fe50ce3c2428da3bee3505e891b788.png ", null, "https://latex.artofproblemsolving.com/c/e/5/ce5d3cbb9ab0a992ed503c2d4499e4f4af2f9d65.png ", null, "https://latex.artofproblemsolving.com/2/0/4/204d6a7cfcdddb11648cc0c97a96f968f1ad3730.png ", null, "https://latex.artofproblemsolving.com/4/b/4/4b4cade9ca8a2c8311fafcf040bc5b15ca507f52.png ", null, "https://latex.artofproblemsolving.com/0/1/9/019e9892786e493964e145e7c5cf7b700314e53b.png ", null, "https://latex.artofproblemsolving.com/c/1/6/c16855265ece631eba44effb646b72798d459683.png ", null, "https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ", null, "https://latex.artofproblemsolving.com/3/3/8/338cbf2809125223a361c17213cc74dfcd0615d6.png ", null, "https://latex.artofproblemsolving.com/6/d/9/6d99d5ffc20f99bef98a1861496b6758fa9541d2.png ", null, "https://latex.artofproblemsolving.com/f/1/8/f18428217dcd07944b2adbebcafd0978587a770a.png ", null, "https://latex.artofproblemsolving.com/e/f/6/ef6f8d2786d5c5bc670144947b277821e990628a.png ", null, "https://latex.artofproblemsolving.com/4/2/e/42e9e4a7fc3cc796c99b2fd09735692a5713bdb4.png ", null, "https://latex.artofproblemsolving.com/3/2/a/32a0d59a0ffcef2865a3490cf59d7dffc8f95218.png ", null, "https://latex.artofproblemsolving.com/8/2/6/826698ae674d37645f36277970439261b6e28d23.png ", null, "https://latex.artofproblemsolving.com/b/f/2/bf2c9074b396e3af0dea52d792660eea1c77f10f.png ", null, "https://latex.artofproblemsolving.com/e/4/c/e4ceb0aa4df61d250c3e257ed88497c03b99cb3d.png ", null, "https://latex.artofproblemsolving.com/1/2/d/12d58aa29201da09d8e620f8698e3a37547f6b4a.png ", null, "https://latex.artofproblemsolving.com/d/e/6/de65c001d428137c809284ccd786c5bdd11d7fde.png ", null, "https://latex.artofproblemsolving.com/0/6/5/065d2cf9376db017dc1cd67aef38959ff57f142c.png ", null, "https://latex.artofproblemsolving.com/3/f/5/3f58d2efaaa17d28cec9dd927d78708a63b43e24.png ", null, "https://wiki-images.artofproblemsolving.com//8/8b/AMC_logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7475247,"math_prob":1.0000006,"size":1101,"snap":"2023-40-2023-50","text_gpt3_token_len":352,"char_repetition_ratio":0.118505016,"word_repetition_ratio":0.0,"special_character_ratio":0.3996367,"punctuation_ratio":0.074418604,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000026,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,null,null,null,null,6,null,null,null,null,null,null,null,4,null,null,null,null,null,4,null,null,null,4,null,4,null,null,null,4,null,4,null,4,null,7,null,null,null,4,null,null,null,4,null,4,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T14:33:46Z\",\"WARC-Record-ID\":\"<urn:uuid:97ec93fe-98e5-45eb-9b5e-ce6eaa0fffcd>\",\"Content-Length\":\"43816\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58c9e3f4-e912-4b60-a3d7-c58cccb66294>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d50ef74-c99d-4531-bb7e-79479d3fd9b3>\",\"WARC-IP-Address\":\"172.67.69.208\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php/1951_AHSME_Problems/Problem_28\",\"WARC-Payload-Digest\":\"sha1:K6J66HIZWYZNYDBASBYI5D2HNBALSVAH\",\"WARC-Block-Digest\":\"sha1:KD5KJUVXHBZGBZIPQOHTCRDTST4NBEZC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510412.43_warc_CC-MAIN-20230928130936-20230928160936-00396.warc.gz\"}"}
https://bookboon.com/en/introductory-well-testing-ebook
[ "Categories Pricing Corporate", null, "", null, "Free Textbook\n\n# Introductory Well Testing\n\n20 Reviews\n(20 ratings)\n103\nLanguage:  English\nThe objective of this book is to provide an easy to read introduction to classical well test theory. No previous knowledge in well testing is required.\nDescription\nPreface\nContent\n\nWhile there are many excellent books on the subject of well testing, few provide an introduction at the very basic level. The objective is to provide an easy to read introduction to classical well test theory. No previous knowledge in well testing is required. The reader is expected to understand basic concepts of flow in porous media. Well test interpretation depends on mathematical models. Some calculus skill is required.\n\nWell testing is important in many disciplines: petroleum engineering, groundwater hydrology, geology and waste water disposal. The theory is the same, but different nomenclature and units are used. This book uses consistent units and petroleum engineering nomenclature. A consistent unit system leads to dimensionless constants in all equations. Equations in a consistent unit system are dimensionally transparent but inconvenient numerically. Hence, a myriad of practical unit systems have evolved. Conversion factors are easily available in the literature.\n\nWhile there are many excellent and important books on the subject of well testing, few provide an introduction to the topic at the very basic level. The objective is to provide an easy to read introduction to classical well test theory. No previous knowledge in well testing is required. The reader is expected to understand basic concepts of flow in porous media. Well test interpretation depends on mathematical models. Some calculus skill is required. Hopefully, this book will give the reader a useful introduction to a fascinating subject and stimulate further studies.\n\nWell testing is important in many disciplines: petroleum engineering, groundwater hydrology, geology and waste water disposal. The theory is the same, but different nomenclature and units are used. The present book use consistent units and petroleum engineering nomenclature. A consistent unit system leads to dimensionless constants in all equations. Equations in a consistent unit system are dimensional transparent but inconvenient numerically. Hence a myriad of practical unit systems have evolved. Many authors present equations first in consistent units and then convert them to the practical unit system of their choice. Conversion factors are easily available in the literature.\n\n1. Preface\n2. Introduction\n3. Productivity of Wells\n1. Introductory remarks\n2. Flow equations for boundary dominated flow\n3. Productivity index, PI\n4. Time dependency of the pseudo-steady solution\n5. Flow efficiency\n4. Skin Factor\n1. Introductory remarks\n3. Skin factor\n5. Hawkins’ Formula for Skin\n1. Introductory remarks\n2. Hawkins’ model\n1. Introductory remarks\n7. Drawdown Test\n1. Introductory remarks\n2. Drawdown test\n3. Determination of permeability\n4. Determination of skin factor\n8. Reservoir Limit Test\n1. Introductory remarks\n2. Reservoir limit test\n3. Determination of pore volume, circular drainage area\n9. Interference Test – Type Curve Matching\n1. Introductory remarks\n2. Interference test\n3. The line source solution\n4. Type curve matching\n10. Pressure Buildup Test\n1. Introductory remarks\n2. Pressure buildup test\n3. Infinite-acting reservoir\n4. Determination of permeability\n5. 
Determination of the initial reservoir pressure\n6. Determination of the skin factor\n7. Bounded reservoir\n8. Determination of the average pressure\n9. Average reservoir pressure\n10. Horner time\n11. Pressure Derivative\n1. Introductory remarks\n2. Drawdown\n3. Buildup\n4. Derivation algorithm\n12. Wellbore Storage\n1. Introductory remarks\n2. Drawdown\n3. Buildup\n13. Principle of Superposition\n1. Introductory remarks\n2. Several wells in an infinite reservoir\n3. Method of images\n4. Superposition in time\n14. Appendix A: Core Analysis\n1. Introduction\n2. Well testing\n3. Average porosity obtained by core analysis\n4. Average permeability obtained by core analysis\n5. Arithmetic average\n6. Harmonic average\n7. Probability distribution function\n8. Geometric average\n9. Powerlaw average\n10. Commingled reservoir\n15. Appendix B: A Note on Unit Systems\n1. Introductory remarks\n2. Consistent SI Units\n3. American Field Units\n4. Conclusion" ]
[ null, "https://bookboon.com/thumbnail/380/3b43a437-9e43-4c50-aaed-a1cd00bc8d6f/7e77721a-7fe8-44f3-925f-a58a00a450c4/introductory-well-testing.jpg", null, "https://bookboon.com/thumbnail/380/3b43a437-9e43-4c50-aaed-a1cd00bc8d6f/7e77721a-7fe8-44f3-925f-a58a00a450c4/introductory-well-testing.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9149445,"math_prob":0.72775126,"size":986,"snap":"2023-14-2023-23","text_gpt3_token_len":175,"char_repetition_ratio":0.098778,"word_repetition_ratio":0.0,"special_character_ratio":0.1663286,"punctuation_ratio":0.11515152,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98338276,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T17:29:11Z\",\"WARC-Record-ID\":\"<urn:uuid:4a2a5804-7a78-4d18-ba26-bd89960f7c2c>\",\"Content-Length\":\"81798\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06e8db37-0e77-41c5-b975-7efa3e1eb4d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:878919b6-6846-440f-8f7d-1f1395b43071>\",\"WARC-IP-Address\":\"81.7.185.32\",\"WARC-Target-URI\":\"https://bookboon.com/en/introductory-well-testing-ebook\",\"WARC-Payload-Digest\":\"sha1:W3OMOKYE7V2F5PMGIT7KQGXMCIXEMKLH\",\"WARC-Block-Digest\":\"sha1:O7MTG6CLLXLJV7QEIZYUAJYMVGKRGHRN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655092.36_warc_CC-MAIN-20230608172023-20230608202023-00328.warc.gz\"}"}
https://math.stackexchange.com/questions/2617901/when-are-left-fracab-rightc-and-fracacbc-equivalent/2617964
[ "# When are $\\left(\\frac{a}{b}\\right)^c$ and $\\frac{a^c}{b^c}$ equivalent?\n\nConsider the following two expressions\n\n\\begin{equation} \\left(\\frac{a}{b}\\right)^c \\end{equation} \\begin{equation} \\frac{a^c}{b^c} \\end{equation}\n\nwhere $a,b,c$ are complex numbers.\n\nI would like to know for what values of $a,b,c$ are the two expressions equivalent.\n\nOne example where they aren't equivalent is $b = c = 0$, then the first expression is undefined and the second expression is $1$. It's also clear that equivalence holds if $b \\neq 0$ and $c \\in \\mathbb{Z}$. Could somebody help me determine the complete set of values where equivalence holds?\n\nP.S. This question is derived from the SO question here.\n\n• $0^0$ is also undefined. What is equal to $1$ is the expression $\\lim_{x \\to 0^+} x^x = 1$. See wolframalpha.com/input/?i=0%5E0 – ArsenBerk Jan 23 '18 at 17:56\n• @ArsenBerk Oh I didn't know that. So it may be that these two expressions are always equivalent when defined? – jodag Jan 23 '18 at 18:01\n• This is really not about equivalence relations – amrsa Jan 23 '18 at 19:27\n\nCase 1 ($c \\in \\mathbb{Z}$): First, let $a = r_1e^{i\\theta_1}$ and $b = r_2e^{i\\theta_2}$. Then we have $$\\bigg(\\frac{a}{b}\\bigg)^c = \\bigg(\\frac{r_1e^{i\\theta_1}}{r_2e^{i\\theta_2}}\\bigg)^c = \\bigg(\\frac{r_1}{r_2}\\bigg)^c \\cdot e^{i(\\theta_1-\\theta_2)c} = \\frac{r_1^c}{r_2^c}\\cdot\\frac{e^{i\\theta_1c}}{e^{i \\theta_2c}} = \\frac{a^c}{b^c}$$ so whenever this expression is defined, they are equivalent.\nCase 2 ($c \\in \\mathbb{C}$): We will use the fact:\nLet $z,c \\in \\mathbb{C}$. Then $z^c = e^{c\\log(z)}$\nWe have $$\\frac{a^c}{b^c}= \\frac{e^{c \\log(a)}}{e^{c \\log(b)}} = e^{c(\\log(a)-\\log(b))} = e^{c \\log(\\frac{a}{b})} = \\bigg(\\frac{a}{b}\\bigg)^c$$ Therefore they are always equivalent when the expression is defined as you suggested." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84859234,"math_prob":1.0000043,"size":613,"snap":"2019-26-2019-30","text_gpt3_token_len":168,"char_repetition_ratio":0.13957307,"word_repetition_ratio":0.0,"special_character_ratio":0.25611746,"punctuation_ratio":0.10655738,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000094,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-15T22:58:44Z\",\"WARC-Record-ID\":\"<urn:uuid:1893106f-420e-43c7-8af9-700530cf52b9>\",\"Content-Length\":\"141838\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:593749f8-ea3d-4679-828f-35424bf271ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a2c3754-7306-45a5-8073-19c8c11faa5b>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2617901/when-are-left-fracab-rightc-and-fracacbc-equivalent/2617964\",\"WARC-Payload-Digest\":\"sha1:VX2DLDI5KN3O3RHCR4JDORMLJ5FNOYVK\",\"WARC-Block-Digest\":\"sha1:7SMUF46HB66ZSHY7VSEND7G6ZUR5TJ23\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524254.28_warc_CC-MAIN-20190715215144-20190716001144-00164.warc.gz\"}"}
https://www.statsmodels.org/stable/generated/statsmodels.distributions.empirical_distribution.StepFunction.html
[ "# statsmodels.distributions.empirical_distribution.StepFunction¶\n\nclass statsmodels.distributions.empirical_distribution.StepFunction(x, y, ival=`0.0`, sorted=`False`, side=`'left'`)[source]\n\nA basic step function.\n\nValues at the ends are handled in the simplest way possible: everything to the left of x is set to ival; everything to the right of x[-1] is set to y[-1].\n\nParameters:\nxarray_like\nyarray_like\nival`float`\n\nival is the value given to the values to the left of x. Default is 0.\n\nsortedbool\n\nDefault is False.\n\nside{‘left’, ‘right’}, `optional`\n\nDefault is ‘left’. Defines the shape of the intervals constituting the steps. ‘right’ correspond to [a, b) intervals and ‘left’ to (a, b].\n\nExamples\n\n``````>>> import numpy as np\n>>> from statsmodels.distributions.empirical_distribution import (\n>>> StepFunction)\n>>>\n>>> x = np.arange(20)\n>>> y = np.arange(20)\n>>> f = StepFunction(x, y)\n>>>\n>>> print(f(3.2))\n3.0\n>>> print(f([[3.2,4.5],[24,-3.1]]))\n[[ 3. 4.]\n[ 19. 0.]]\n>>> f2 = StepFunction(x, y, side='right')\n>>>\n>>> print(f(3.0))\n2.0\n>>> print(f2(3.0))\n3.0\n``````\n\nMethods\n\n `__call__`(time) Call self as a function.\n\nMethods\n\nLast update: May 05, 2023" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5905627,"math_prob":0.9767383,"size":1149,"snap":"2023-40-2023-50","text_gpt3_token_len":354,"char_repetition_ratio":0.13799126,"word_repetition_ratio":0.0,"special_character_ratio":0.3516101,"punctuation_ratio":0.22033899,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9729561,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T05:06:11Z\",\"WARC-Record-ID\":\"<urn:uuid:79afeb83-02b3-4305-b49a-7359bfad6e1c>\",\"Content-Length\":\"44313\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72e43ff6-6ec1-4fd2-b647-bdd84b71b420>\",\"WARC-Concurrent-To\":\"<urn:uuid:8067e60f-ea26-4d15-b92b-283ff8ec7dfc>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://www.statsmodels.org/stable/generated/statsmodels.distributions.empirical_distribution.StepFunction.html\",\"WARC-Payload-Digest\":\"sha1:7AUVHFOUSCPIVVY3KFXCST347LELXOOV\",\"WARC-Block-Digest\":\"sha1:NYGFTOZ2UYRDWGX3I23U26ZEJZWO36W4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510358.68_warc_CC-MAIN-20230928031105-20230928061105-00653.warc.gz\"}"}
https://www.akritinfo.com/statistical-mechanics-3/
[ "# QUANTUM STATISTICS\n\nThere are three types of statistics —\n\ni) Maxwell – Boltzmann statistics or MB statistics (classical statistics)\n\nii) Bose – Einstein statistics or BE statistics (quantum statistics)\n\nit) Fermi-Dirac statistics on FD statistics (quantum Statistics)\n\n## Maxwell – Boltzmann Statistics\n\nBasic postulates of MB-statistics :\n\ni) The particles are distinguishable and identifiable.\n\nii) The particles of the system are spinless.\n\niii) There is no restriction imposed on the number of particles having same energy (i.e. no priory restriction).\n\niv) The particles do not obey Pauli’s exclusion principle.\n\nv) Heisenberg’s uncertainty principle is not applicable for the particles .\n\nFurther, if the system is isolated\n\na) Total number of particles is constant i.e.\n\nN = ΣNi = Constant.\n\n𝛿N = Σ𝛿Ni = 0 —————– a\n\nNi being the no of particles in ith energy stage.\n\nb) Total energy of ine system is constant,if the particles are non interacting.\n\ni.e. E = ΣNiEi = constant.\n\n𝛿E = ΣEi𝛿Ni = 0 —————– b\n\nNote : The particles which obey MB statistics, called Maxwellian.\n\nExample : gas molecules.\n\n### Derivation of MB – distribution function\n\nTo determine MB distribution function we divide the problem into two parts –\n\ni) Calculate G\n\nii) Calculate G’\n\nThermodynamic probability  w = GG’\n\n### Derivation of MB distribution function\n\n#### (a) Thermodynamic probability\n\nLet N be the total number of distinguishable particles in the assembly ;  N1 , N2 , N3 , ……. Nn be the number of particles with energies E1 , E2 , E3 ,…….. En respectively and for the sake of generality gi be the no. of Q states for energy level Ei To determine the total number of ways G in which the total no of particle could be distributed among the quantum states.\n\ni) The number of ways in which the group of particles N1 , N2 , N3 , ——- Ni could be chosen from N particles is, from the rule of permutation,", null, "ii) Since there is no restriction on the number of particles with same energy, each of Ni particles can be distributed among gi sub shells (states) in gi ways.\n\nThus total number of ways of distributing Ni particles in gi  sub shells will be ,", null, "Considering all groups , Hence total number of ways of distributing all the N particle in different states ; i.e. Thermodynamic probability\n\nw = GG’", null, "#### (b) Most probable distribution function\n\nTaking logarithm on the both sides of equation 3 ,", null, "According to Stirling’s approximation ;\n\nIn (N!) = NInN – N", null, "For most probable distribution ; entropy of the system s = klnw must be maximum i.e.\n\n𝛿s = 0", null, "According to Lagrange’s method of undetermined multiplier , multiplying equation (a) by -α and equation (b) by -β and adding with equation 5 , we get ,", null, "Since 𝛿Ni is independent, and above equation holds for any value of i , thus", null, "The ratio of the number of particle Ni distributed in gi stales to the number of states gi, is called the distribution function .\n\nSo , MB distribution function ,", null, "#### Evaluation of e^α :\n\nTotal number of particles ,", null, "where ,", null, "is called partition function or sum of states .\n\n#### Evaluation of β :\n\nFrom equation 4 ,", null, "So , Entropy", null, "Introducing temperature T from thermodynamics ,", null, "Again ,", null, "MB distribution function :-", null, "### Calculation of G in details :\n\nLet N be the total number of distinguishable particles in the assembly ; N1 , N2, N3 , ………. 
be the number of particles with energies E1 , E2 , E3 , ………….. respectively .\n\nFor the first assembly ;\n\nnumber of ways choosing the 1st particle = N\n\nnumber of ways choosing the 2nd particle = N – 1\n\nand number of ways choosing the 3rd particle = N – 2\n\n& number of ways choosing the N1 th particle = N – N1 +1\n\nThus , number of ways to forming the group of N1, particles out of N particles", null, "Since the assembly or groups indistinguishable , hence number of ways , to form the 1st assembly\n\nSimilarly", null, "and so on\n\nHence , number of ways in which the group of particles N1 , N2, N3 , ………. Ni could be chosen from N particles ,", null, "### Limitations of MB Statistics\n\nMaxwell–Boltzmann statistics is valid only for the classical limit\n\n1)  This statistics is applicable only to an isolated gas- molecular system in equilibrium when the mean potential energy due to mutual interaction between the molecules is negligible compared to their mean kinetic energy.\n\n2) The expression for MB count does not lead to the correct expression for entropy of an ideal gas. It leads to the Gibbs paradox which can be resolved if the expression divided by N\n\n3) When the MB – statistics is applied to “electron gas” a number of discrepancies arise between the theory and the observation.\n\n4) When the MB -statistics is applied to “photon gas” i.e. a batch of electromagnetic radiant energy, it predicts a continuously increasing number of photons per unit range of frequency, as frequency increases.\n\nThe actual distribution however shown by Planck that shows a maximum , falling off asymptotically on either side.\n\nAll these difficulties with MB statistics have been satisfactory resolved by the quantum statistics.\n\n5) If we put T = 0 in expression for entropy of an ideal gas, s becomes a negative quantity which is at variance with the 3rd law of thermodynamics,\n\ns → 0 at T → 0.", null, "" ]
[ null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.24.33-PM-300x48.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.25.22-PM-300x40.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.26.03-PM-300x63.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.26.56-PM-300x40.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.28.04-PM-300x120.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.28.36-PM-300x124.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.29.17-PM-300x42.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.34.06-PM-300x111.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-2.34.49-PM-300x57.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.08.09-PM-300x164.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.08.35-PM-300x90.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.09.18-PM-300x195.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.10.11-PM-300x49.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.12.46-PM-300x64.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.18.21-PM-300x163.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.19.32-PM-300x51.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.31.47-PM-300x98.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.34.15-PM-300x177.jpeg", null, "https://www.akritinfo.com/wp-content/uploads/2022/02/WhatsApp-Image-2022-02-24-at-3.37.44-PM-300x128.jpeg", null, "https://secure.gravatar.com/avatar/c7c6ce97653ed025fcfe740e2d0382cf", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.822498,"math_prob":0.99024904,"size":5360,"snap":"2023-40-2023-50","text_gpt3_token_len":1258,"char_repetition_ratio":0.17979835,"word_repetition_ratio":0.087830685,"special_character_ratio":0.23041044,"punctuation_ratio":0.11566018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99834764,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T03:40:32Z\",\"WARC-Record-ID\":\"<urn:uuid:a10870b3-3c67-4dbc-85d1-05e994bfa3ef>\",\"Content-Length\":\"95926\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4bcced00-be2b-4720-95e0-54a70b9facc6>\",\"WARC-Concurrent-To\":\"<urn:uuid:74b86ec4-eb16-4430-bb8b-0bd1a279925f>\",\"WARC-IP-Address\":\"104.21.11.126\",\"WARC-Target-URI\":\"https://www.akritinfo.com/statistical-mechanics-3/\",\"WARC-Payload-Digest\":\"sha1:J2FWKO4SAS3IKVXW2OJ64IUUNP6ZQHYJ\",\"WARC-Block-Digest\":\"sha1:ECFYDHSLUMPYKVMDCKPXANVJ72DV7QRZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511351.18_warc_CC-MAIN-20231004020329-20231004050329-00674.warc.gz\"}"}
https://www.exceldemy.com/excel-formula-to-compare-two-cells-in-different-sheets/
[ "# Excel Formula to Compare Two Cells in Different Sheets\n\nGet FREE Advanced Excel Exercises with Solutions!\n\nThis article illustrates how to use Excel formula to compare two cells in different sheets. Now and then, you might need to compare data in different worksheets in Excel. This article will help you to do that. The following picture highlights the purpose of this article.", null, "Imagine you have the following dataset in Sheet1 and Sheet2 containing the top 10 baby girl names in the USA in 2020 and 2021 respectively. Now you want to compare the names from the two sheets to see if the ranking has changed. Then the following 3 formulas might be helpful to do that.", null, "## 1. Applying Excel Formula to Compare Two Cells in Different Sheets\n\nTo create a simple formula, execute the following steps.\n\n• Enter the following formula in cell D5 in Sheet1. Then use the fill handle icon to apply the formula to the cells below. This formula checks whether the respective cells from the two sheets are the same.\n`=C5=Sheet2!C5`\n• Alternatively, you can apply the following formula in cell D5 in Sheet2 to check whether the respective cells are different.\n`=C5<>Sheet1!C5`", null, "Read More: Compare Two Cells in Excel and Return TRUE or FALSE\n\n## 2. Inserting Excel IF Formula to Compare Two Cells in Different Sheets\n\nYou can use the formula with the IF function to compare two cells in different sheets.\n\n• Apply the following formula in cell D5 in Sheet1. It will check if the respective cells between the two sheets match each other.\n`=IF(C5=Sheet2!C5,\"Match\",\"No Match\")`\n• Alternatively, you can use the following formula in cell D5 in Sheet2 to get the same result.\n`=IF(C5<>Sheet1!C5,\"No Match\",\"Match\")`", null, "Read More: How to Compare Text in Excel and Highlight Differences\n\n## 3. Using VLOOKUP Formula to Compare Two Ranges in Different Excel Sheets\n\nYou can also use the VLOOKUP function to compare two ranges of cells in two different sheets.\n\n• Enter the following formula in cell D5 in Sheet1. This formula checks if the data in range C5:C14 are also present in the respective range in Sheet2. The ISNA function in the formula returns True if the VLOOKUP function returns #N/A. Otherwise, it returns False.\n`=IF(ISNA(VLOOKUP(C5,Sheet2!\\$C\\$5:\\$C\\$14,1,FALSE)),\"No Match\",\"Match\")`\n• Alternatively, you can use the following formula in cell D5 in Sheet2 to do the same.\n`=IF(ISNA(VLOOKUP(C5,Sheet1!\\$C\\$5:\\$C\\$14,1,FALSE)),\"No Match\",\"Match\")`", null, "## 4. Comparing & Marking Two Cells in Different Sheets with Conditional Formatting Tool\n\nYou can compare and highlight two cells in different sheets using conditional formatting. Follow the steps below to be able to do that.\n\n📌 Steps\n\n• First, select the desired cells (C5:C14) in Sheet1. Then select Conditional Formatting >> New Rule from the Home tab. This will open a new dialog box.", null, "• Now choose to Use a formula to determine which cells to format as the rule type. Then enter the following formula in the field for Format values where this formula is true:.\n`=C5=Sheet2!C5`\n• Next select Format to open the Format Cells dialog box.", null, "• Now choose the desired color from the Fill tab and select the OK button.", null, "• After that, you will see a preview of what the cells will look like. Then hit the OK button again.", null, "• Finally, you will see the matching cells highlighted as follows.", null, "Read More: Compare Two Cells Using Conditional Formatting in Excel\n\n## 5. 
Applying VBA to Compare & Highlight Two Cells in Different Sheets\n\nYou can also use Excel VBA to compare and highlight two cells in different sheets. Follow the steps below to be able to do that.\n\n📌 Steps\n\n• First, press ALT+F11 to open the Microsoft Visual Basic for Applications window. Then select Insert >> Module to open a new blank module as shown in the following picture.", null, "• After that copy the following code.\n``````Sub CompareCellsBetweenSheets()\nDim Names As Range\nFor Each Names In Worksheets(\"Sheet1\").Range(\"C5:C14\")\nIf Names = Worksheets(\"Sheet2\").Cells(Names.Row, Names.Column) Then\nNames.Interior.Color = vbGreen\nEnd If\nNext Names\nEnd Sub``````\n• Now paste the copied code on the blank module as shown below. Then press F5 to run the code.", null, "• Finally, you will see the matching cells in Sheet1 highlighted as follows.", null, "## Things to Remember\n\n• Here the formulas use cell references of cells in different sheets in the same workbook. For cells in different sheets of different workbooks, you need to open the workbooks and use the respective cell references.\n• Conditional formatting does not work for cells in different workbooks.\n• The VBA code is applicable for cells in sheets in the same workbook. You need to change the sheet names according to your worksheets.\n\n## Conclusion\n\nNow you know how to use Excel formula to compare two cells on different sheets.", null, "", null, "", null, "", null, "" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20593%20528'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20531%20435'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20677%20467'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20699%20495'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20699%20513'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20603%20606'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20514%20368'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20534%20520'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20514%20368'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20266%20442'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20537%20326'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20696%20431'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20268%20442'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2069%2069'%3E%3C/svg%3E", null, "https://www.exceldemy.com/excel-formula-to-compare-two-cells-in-different-sheets/", null, "https://www.exceldemy.com/excel-formula-to-compare-two-cells-in-different-sheets/", null, "https://www.exceldemy.com/wp-content/uploads/2018/05/exceldemy-logo-diamond.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.828819,"math_prob":0.59132457,"size":4854,"snap":"2023-40-2023-50","text_gpt3_token_len":1111,"char_repetition_ratio":0.16865979,"word_repetition_ratio":0.1502463,"special_character_ratio":0.22620519,"punctuation_ratio":0.11251315,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98227364,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T16:22:14Z\",\"WARC-Record-ID\":\"<urn:uuid:cc366d80-ec22-4688-aefe-282f2286dea0>\",\"Content-Length\":\"314471\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:222ac0bd-9560-4cfa-9b17-37915dce6cea>\",\"WARC-Concurrent-To\":\"<urn:uuid:b44c09b4-b903-4cc3-bc73-1f073f78dd6b>\",\"WARC-IP-Address\":\"104.21.6.27\",\"WARC-Target-URI\":\"https://www.exceldemy.com/excel-formula-to-compare-two-cells-in-different-sheets/\",\"WARC-Payload-Digest\":\"sha1:SIOSAJZP2VHORZYBSIJKJXKECCK3GOBN\",\"WARC-Block-Digest\":\"sha1:4BH77VMAIU5DAVG47BIBMLB54MMQB43H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100229.44_warc_CC-MAIN-20231130161920-20231130191920-00066.warc.gz\"}"}
https://testbook.com/question-answer/m-is-a-point-on-side-ab-of-rectangle-abcd-if-ad--5fa0f0517323f4cf430098f2
[ "# M is a point on side AB of rectangle ABCD. If AD = 3 cm, DM = 5 cm and CM = 3√5 cm, then find the length of AB.\n\n1. 10 cm\n2. 12 cm\n3. 8 cm\n4. 9 cm\n\nOption 1 : 10 cm\n\n## Detailed Solution\n\nGiven:\n\nABCD is a rectangle\n\nDM = 5 cm\n\nCM = 3√5 cm\n\nConcept used:\n\nUsing the concept of Pythagoras theorem, i.e.,\n\nBase2 + Height2 = Hypotenous2\n\nCalculation:", null, "In triangle AMD,\n\nBy using Pythagoras Theorem\n\n⇒ 32 + AM2 = 52\n\n⇒ AM2 = 16\n\n⇒ AM = 4 cm                    …..(1)\n\nIn triangle BCM,\n\nBy using Pythagoras Theorem\n\nBC2 + BM2 = CM2\n\n⇒ 32 + BM2 = (3√5)2            [BC = AD]\n\n⇒ BM2 = 36\n\n⇒ BM = 6 cm                  …..(2)\n\nAB = AM + BM\n\n⇒ AB = 4 + 6              [From using (1) and (2)]\n\n⇒ AB = 10 cm\n\nThe length of AB is 10 cm." ]
[ null, "https://storage.googleapis.com/tb-img/production/21/06/F3__07-06-21_Harshit_Savita_D1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63856703,"math_prob":0.99988174,"size":595,"snap":"2021-43-2021-49","text_gpt3_token_len":264,"char_repetition_ratio":0.11505922,"word_repetition_ratio":0.0,"special_character_ratio":0.43361345,"punctuation_ratio":0.13043478,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99891233,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T12:04:23Z\",\"WARC-Record-ID\":\"<urn:uuid:2ba55b34-71dc-4411-a0b5-dd7f88436881>\",\"Content-Length\":\"115705\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e3a49f1a-8312-4cbd-8eac-94a7bcd21719>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e952bb8-630c-42fc-9e04-429719ab6c2d>\",\"WARC-IP-Address\":\"104.22.44.238\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/m-is-a-point-on-side-ab-of-rectangle-abcd-if-ad--5fa0f0517323f4cf430098f2\",\"WARC-Payload-Digest\":\"sha1:6OBKGM4PY6RAYUHGGVE3RCNYB52HRPWR\",\"WARC-Block-Digest\":\"sha1:J7V3B7DUB4AAO73JS5TOR5UMDD7XNGZM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587877.85_warc_CC-MAIN-20211026103840-20211026133840-00439.warc.gz\"}"}
https://governmentadda.com/data-sufficiency-quiz-for-upcoming-exams-answers/
[ "# Data Sufficiency Quiz For Upcoming Exams Answers\n\nQ1. Option A\n\nIn the two statements given in I, the common words are ‘But’, ‘None’, ‘And’ and thecommon code words are ‘Ne’, ‘Pa’, ,’Lo’. So, ‘Ne’, ‘Pa’ and ‘Lo’ are codes for ‘But’, ‘None’ and ‘And’. Thus, in the first statement, ‘Sic’ is the code for ‘No’.\n\nQ2. Option E\n\nFrom I, we have: M > V > Q.From II, we have: T > Q, T > M, P > T.Combining the above two, we have: P>T>M>V>Q i.e. Q<v<m<t<p.< p=””>Clearly, M is in the middle.\n\nQ3. Option C\n\nIn the given statement and I, the common word is ‘good’ and the common code word is ‘co’. So, ‘co’ is the code for ‘good’.In the given statement and II, the common words are ‘He’ and ‘is’ and the common code words are ‘sin’ and ‘bye’. So ‘sin’ and ‘bye’ are the codes for ‘He’ and ‘is’. Thus, in the given statement, ‘co’ is the code for ‘good’.\n\nQ4. Option D\n\nIn I and II, the common words are ‘me’ and ‘water’ and the common code numbers are ‘7’ and ‘1’. So, the code for ‘water’ is either ‘7’ or ‘1’.\n\nQ5. Option D\n\nFrom I and II, we find that maximum (243 x 3) i.e. 729 visitors saw the exhibition. But the exact number cannot be determined.\n\nQ6. Option C\n\nFrom I, we conclude that in a class of 47 students, Gaurav ranks 18th from the top and hence 30th from the last. From II, we conclude that there are 9 students above and 37 students below Jatin in rank. Thus, there are (9 + 1 + 37) = 47 students in the class.So, Gaurav who ranks 18th from the top, is 30th from the last.\n\nQ7. Option C\n\nFrom I, we conclude that P is 9th from the top. Thus, in a class of 30 students, P ranks 22nd from the bottom. From II, we conclude that P is 22nd from the bottom.\n\nQ8. Option E\n\nFrom I, we know that L is T’s brother and J’s husband. Since L is the only son of his parents, T is L’s sister. From II, we know that K is L’s daughter. Thus, from I and II, we conclude that T is the sister of K’s father i.e. T is K’s aunt.\n\nQ9. Option B\n\nFrom II, we know that P’s mother is married to J’s husband, which means that J is P’s mother.\n\nQ10. Option C\n\nFrom I, we conclude that Y is the child of D who is wife of X i.e. X is Y’s father.From II, X is married to Y’s father. This implies that X is Y’s mother." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9512227,"math_prob":0.91712815,"size":2127,"snap":"2022-05-2022-21","text_gpt3_token_len":676,"char_repetition_ratio":0.16768724,"word_repetition_ratio":0.05263158,"special_character_ratio":0.3248707,"punctuation_ratio":0.17173524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98439026,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T13:38:26Z\",\"WARC-Record-ID\":\"<urn:uuid:349a3661-4512-4c19-9ad6-d304fbf01220>\",\"Content-Length\":\"153992\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3fc9b22d-f817-49d2-a35a-800dc47e2096>\",\"WARC-Concurrent-To\":\"<urn:uuid:8fa99f30-f5b0-4570-8faf-7b219dd1ba4a>\",\"WARC-IP-Address\":\"167.86.85.249\",\"WARC-Target-URI\":\"https://governmentadda.com/data-sufficiency-quiz-for-upcoming-exams-answers/\",\"WARC-Payload-Digest\":\"sha1:CHONIBOCNELN6EPN2KVAKOFBBBHB5OKZ\",\"WARC-Block-Digest\":\"sha1:Y4BP2TKWEQ3QTVRSU4BZLM67WU3IN5S7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662558030.43_warc_CC-MAIN-20220523132100-20220523162100-00401.warc.gz\"}"}
https://gmatclub.com/forum/in-circle-o-above-if-poq-is-a-right-triangle-and-radius-op-2-what-261479.html
[ "GMAT Question of the Day - Daily to your Mailbox; hard ones only\n\n It is currently 21 Oct 2019, 08:23", null, "GMAT Club Daily Prep\n\nThank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.\n\nCustomized\nfor You\n\nwe will pick new questions that match your level based on your Timer History\n\nTrack\n\nevery week, we’ll send you an estimated GMAT score based on your performance\n\nPractice\nPays\n\nwe will pick new questions that match your level based on your Timer History\n\nNot interested in getting valuable practice questions and articles delivered to your email? No problem, unsubscribe here.", null, "", null, "In circle O above, if POQ is a right triangle and radius OP = 2, what\n\n new topic post reply Question banks Downloads My Bookmarks Reviews Important topics\nAuthor Message\nTAGS:\n\nHide Tags\n\nMath Expert", null, "V\nJoined: 02 Sep 2009\nPosts: 58415\nIn circle O above, if POQ is a right triangle and radius OP = 2, what  [#permalink]\n\nShow Tags", null, "00:00\n\nDifficulty:", null, "", null, "", null, "25% (medium)\n\nQuestion Stats:", null, "85% (01:12) correct", null, "15% (02:01) wrong", null, "based on 43 sessions\n\nHideShow timer Statistics", null, "In circle O above, if POQ is a right triangle and radius OP = 2, what is the area of the shaded region?\n\n(A) $$4\\pi – 2$$\n\n(B) $$4\\pi — 4$$\n\n(C) $$2\\pi – 2$$\n\n(D) $$2\\pi — 4$$\n\n(E) $$\\pi - 2$$\n\nAttachment:", null, "2018-03-16_1011.png [ 9.71 KiB | Viewed 807 times ]\n\n_________________\nMath Expert", null, "V\nJoined: 02 Sep 2009\nPosts: 58415\nRe: In circle O above, if POQ is a right triangle and radius OP = 2, what  [#permalink]\n\nShow Tags\n\nBunuel wrote:", null, "In circle O above, if POQ is a right triangle and radius OP = 2, what is the area of the shaded region?\n\n(A) $$4\\pi – 2$$\n\n(B) $$4\\pi — 4$$\n\n(C) $$2\\pi – 2$$\n\n(D) $$2\\pi — 4$$\n\n(E) $$\\pi - 2$$\n\nAttachment:\n2018-03-16_1011.png\n\nFor other subjects:\nALL YOU NEED FOR QUANT ! ! 
!\nUltimate GMAT Quantitative Megathread\n_________________\nCEO", null, "", null, "D\nStatus: GMATINSIGHT Tutor\nJoined: 08 Jul 2010\nPosts: 2978\nLocation: India\nGMAT: INSIGHT\nSchools: Darden '21\nWE: Education (Education)\nIn circle O above, if POQ is a right triangle and radius OP = 2, what  [#permalink]\n\nShow Tags\n\nBunuel wrote:", null, "In circle O above, if POQ is a right triangle and radius OP = 2, what is the area of the shaded region?\n\n(A) $$4\\pi – 2$$\n\n(B) $$4\\pi — 4$$\n\n(C) $$2\\pi – 2$$\n\n(D) $$2\\pi — 4$$\n\n(E) $$\\pi - 2$$\n\nAttachment:\n2018-03-16_1011.png\n\nArea of shaded region = Area of Quarter Circle - Area of triangle POQ\n\nArea of shaded region = (1/4)π2^2 - (1/2)*2*2 = π-2\n\n_________________\nProsper!!!\nGMATinsight\nBhoopendra Singh and Dr.Sushma Jha\ne-mail: [email protected] I Call us : +91-9999687183 / 9891333772\nOnline One-on-One Skype based classes and Classroom Coaching in South and West Delhi\nhttp://www.GMATinsight.com/testimonials.html\n\nACCESS FREE GMAT TESTS HERE:22 ONLINE FREE (FULL LENGTH) GMAT CAT (PRACTICE TESTS) LINK COLLECTION\nSenior SC Moderator", null, "V\nJoined: 22 May 2016\nPosts: 3565\nIn circle O above, if POQ is a right triangle and radius OP = 2, what  [#permalink]\n\nShow Tags\n\nBunuel wrote:", null, "In circle O above, if POQ is a right triangle and radius OP = 2, what is the area of the shaded region?\n\n(A) $$4\\pi – 2$$\n\n(B) $$4\\pi — 4$$\n\n(C) $$2\\pi – 2$$\n\n(D) $$2\\pi — 4$$\n\n(E) $$\\pi - 2$$\n\nArea of shaded region = (Sector area) - (Triangle Area)\n\n• Sector area as fraction of the circle's area\n\nThe key to these kinds of problems often is:\n\"The sector is what fraction of the circle?\"\n\nUse the sector's central angle (given, = 90°) to find that fraction\n\n$$\\frac{Part}{Whole}=\\frac{SectorAngle}{360°}=\\frac{90}{360}=\\frac{1}{4}=\\frac{SectorArea}{CircleArea}$$\n\nArea of sector = $$\\frac{1}{4}$$ area of circle\n\n• Area of circle and area of sector\n\nCircle area, r = 2: $$πr^2 = 4π$$\nSector area: $$\\frac{4π}{4}$$= $$π$$\n\n• Area of triangle\n\nArea of triangle with radii as sides (s = b and h):\n$$\\frac{s*s}{2} = \\frac{4}{2}$$ = $$2$$\n\n• Area of shaded region\n= (Area of sector) - (Area of triangle)\n\nArea of shaded region = $$π - 2$$\n\n_________________\nSC Butler has resumed! Get two SC questions to practice, whose links you can find by date, here.\n\nInstructions for living a life. Pay attention. Be astonished. Tell about it. -- Mary Oliver", null, "In circle O above, if POQ is a right triangle and radius OP = 2, what   [#permalink] 16 Mar 2018, 07:56\nDisplay posts from previous: Sort by\n\nIn circle O above, if POQ is a right triangle and radius OP = 2, what\n\n new topic post reply Question banks Downloads My Bookmarks Reviews Important topics\n\n Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne", null, "", null, "" ]
[ null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/profile/close.png", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/profile/close.png", null, "https://gmatclub.com/forum/styles/gmatclub_light/theme/images/search/close.png", null, "https://cdn.gmatclub.com/cdn/files/forum/images/avatars/upload/avatar_73391.jpg", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/viewtopic/timer_play.png", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/viewtopic/timer_difficult_blue.png", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/viewtopic/timer_difficult_blue.png", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/viewtopic/timer_difficult_grey.png", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/viewtopic/timer_separator.png", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/viewtopic/timer_separator.png", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/viewtopic/timer_separator.png", null, "https://gmatclub.com/forum/download/file.php", null, "https://gmatclub.com/forum/download/file.php", null, "https://cdn.gmatclub.com/cdn/files/forum/images/avatars/upload/avatar_73391.jpg", null, "https://gmatclub.com/forum/download/file.php", null, "https://cdn.gmatclub.com/cdn/files/forum/images/ranks/rank_phpbb_7.svg", null, "https://cdn.gmatclub.com/cdn/files/forum/images/avatars/upload/avatar_102046.jpg", null, "https://gmatclub.com/forum/download/file.php", null, "https://cdn.gmatclub.com/cdn/files/forum/images/no_avatar.svg", null, "https://gmatclub.com/forum/download/file.php", null, "https://cdn.gmatclub.com/cdn/files/forum/styles/gmatclub_light/theme/images/viewtopic/posts_bot.png", null, "https://www.facebook.com/tr", null, "https://www.googleadservices.com/pagead/conversion/1071875456/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87414014,"math_prob":0.9877972,"size":721,"snap":"2019-43-2019-47","text_gpt3_token_len":189,"char_repetition_ratio":0.09762901,"word_repetition_ratio":0.2,"special_character_ratio":0.23578364,"punctuation_ratio":0.16083916,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99478674,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T15:23:30Z\",\"WARC-Record-ID\":\"<urn:uuid:a4105596-a957-48cc-9e60-fc06d57613dd>\",\"Content-Length\":\"835949\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cac1618-1cff-457e-a309-1d93171a72b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c80d329-576a-492c-afee-fdfcb8f9d07b>\",\"WARC-IP-Address\":\"198.11.238.98\",\"WARC-Target-URI\":\"https://gmatclub.com/forum/in-circle-o-above-if-poq-is-a-right-triangle-and-radius-op-2-what-261479.html\",\"WARC-Payload-Digest\":\"sha1:KCKFOMP552ARDLQG3NN5N3IPR7YLFEKB\",\"WARC-Block-Digest\":\"sha1:BTHYLVZHII3E7P5QDYBUTGPFBGFLXGVJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987779528.82_warc_CC-MAIN-20191021143945-20191021171445-00299.warc.gz\"}"}
https://carlbrannen.wordpress.com/2008/06/29/
[ "# Daily Archives: June 29, 2008\n\n## The MNS Matrix as Magic Square\n\nRecently, Kea observed that the CKM matrix can be written as the sum of 1-circulant and 2-circulant matrices. The CKM matrix defines the relationship between flavor eigenstates and mass eigenstates for the quarks. Her observation naturally suggests one should look also at the MNS matrix, which defines the relationship between flavor and mass eigenstates for the leptons.\n\nCurrent experimental measurements show that the MNS matrix is approximately tribimaximal:", null, "The form of the matrix is chosen so that the rows and columns are orthonormal. This is the mathematical definition of unitary, it means that when the matrix is multiplied by its conjugate transpose, it will give the identity matrix. The physics definition of unitary is a little looser, it only requires that probabilities sum to 1.\n\nFor the case of a mixing matrix, probabilities are given by the squares of the absolute values of the entries. Conservation of probabilities means that the sum of the probabilities for any row or column is 1. Since in quantum mechanics, the phase of any quantum state is arbitrary, we can consider modifications to the tribimaximal matrix by multiplying any row or column by an arbitrary phase. The result will be a new matrix but its probabilities will be unchanged (and therefore consistent with experiment).\nContinue reading\n\n5 Comments\n\nFiled under physics" ]
[ null, "https://carlbrannen.files.wordpress.com/2008/06/eqn2141.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92250615,"math_prob":0.9878759,"size":1323,"snap":"2021-04-2021-17","text_gpt3_token_len":262,"char_repetition_ratio":0.13495073,"word_repetition_ratio":0.01923077,"special_character_ratio":0.17989418,"punctuation_ratio":0.08154506,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971672,"pos_list":[0,1,2],"im_url_duplicate_count":[null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-22T20:18:15Z\",\"WARC-Record-ID\":\"<urn:uuid:d05d6f9a-e387-4b81-b6d0-c77e19c7d78b>\",\"Content-Length\":\"53797\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf812135-f46a-4f13-894c-6fa96ad7d413>\",\"WARC-Concurrent-To\":\"<urn:uuid:62f42e55-bd00-4794-ab14-23505e0276e3>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://carlbrannen.wordpress.com/2008/06/29/\",\"WARC-Payload-Digest\":\"sha1:7SWQ25UQLJ5NJHPL6EAVJEYEGAOI3KSN\",\"WARC-Block-Digest\":\"sha1:LZICH4HNZKY3F7J6Z5VPVYTZWKV4KRFJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039604430.92_warc_CC-MAIN-20210422191215-20210422221215-00329.warc.gz\"}"}
https://arxiv-export-lb.library.cornell.edu/abs/2103.01068
[ "math.AG\n\n# Title: Bridgeland stability conditions and the tangent bundle of surfaces of general type\n\nAuthors: Igor Reider\nAbstract: Let $X$ be a smooth compact complex surface with the canonical divisor $K_X$ ample and let $\\Theta_X$ be its holomorphic tangent bundle. Bridgeland stability conditions are used to study the space $H^1 (\\Theta_X)$ of infinitesimal deformations of complex structures of $X$ and its relation to the geometry/topology of $X$. The main observation is that for $X$ with $H^1 (\\Theta_X)$ nonzero and the Chern numbers $(c_2 (X), K^2_X)$ subject to $$\\tau_X :=2ch_2 (\\Theta_X)=K^2_X -2c_2(X) >0$$ the object $\\Theta_X $ of the derived category of bounded complexes of coherent sheaves on $X$ is Bridgeland unstable in a certain part of the space of Bridgeland stability conditions. The Harder-Narasimhan filtrations of $\\Theta_X $ for those stability conditions are expected to provide new insights into geometry of surfaces of general type and the study of their moduli. The paper provides a certain body of evidence that this is indeed the case.\n Comments: 182 pages, 3 figures Subjects: Algebraic Geometry (math.AG) MSC classes: 14J29, 14J60, 14F08 Cite as: arXiv:2103.01068 [math.AG] (or arXiv:2103.01068v1 [math.AG] for this version)\n\n## Submission history\n\nFrom: Igor Reider [view email]\n[v1] Mon, 1 Mar 2021 15:22:10 GMT (159kb)\n\nLink back to: arXiv, form interface, contact." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82525194,"math_prob":0.9678232,"size":1318,"snap":"2021-43-2021-49","text_gpt3_token_len":338,"char_repetition_ratio":0.111872144,"word_repetition_ratio":0.01010101,"special_character_ratio":0.2579666,"punctuation_ratio":0.08658009,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98552847,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T04:14:58Z\",\"WARC-Record-ID\":\"<urn:uuid:c3c25048-8e49-4dfd-97cc-715800fa8440>\",\"Content-Length\":\"16414\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d629184-7271-430f-ab98-8f86d013f82d>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f422f89-277c-422e-a5cb-2351014fc228>\",\"WARC-IP-Address\":\"128.84.21.203\",\"WARC-Target-URI\":\"https://arxiv-export-lb.library.cornell.edu/abs/2103.01068\",\"WARC-Payload-Digest\":\"sha1:RUDQW2TGB4F74LQTKHBQ2SBZF3M4VOOO\",\"WARC-Block-Digest\":\"sha1:USN5XGTA7P3CRFOZIUKIDUGOOX264NK5\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587623.1_warc_CC-MAIN-20211025030510-20211025060510-00151.warc.gz\"}"}
https://proofwiki.org/wiki/Definition:Big_Model
[ "# Definition:Big Model\n\n## Definition\n\nLet $\\MM$ be an $\\LL$-structure with universe $M$.\n\nLet $\\kappa$ be a cardinal.\n\n$\\MM$ is $\\kappa$-big if for every subset $A \\subset M$ with cardinality $\\card A < \\kappa$, the following holds:\n\nif $\\LL_A$ is the language obtained from $\\LL$ by adding new constant symbols for each $a \\in A$, then\nfor every language $\\LL_A^*$ obtained by adding a new relation symbol $R$ to $\\LL_A$, and\nfor every $\\LL_A^*$-structure $\\NN$ such that $\\MM$ and $\\NN$ are elementary equivalent as $\\LL_A$-structures,\nthere is a relation $R^\\MM$ on $M$ such that $\\struct {\\MM, R^\\MM}$ is elementary equivalent to $\\NN$ as an $\\LL_A^*$-structure.\n\n## Note\n\nNote that any function symbol or constant symbol can be replaced by a relation symbol along with suitable sentences mentioning only that symbol. So, the focus on relation symbols in the definition is just for convenience.\n\nInformally, being $\\kappa$-big means that $\\MM$ already has all of the structural features that are consistent with the behavior of $\\MM$ and the parameters in $A$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8934132,"math_prob":0.99995184,"size":1091,"snap":"2022-40-2023-06","text_gpt3_token_len":285,"char_repetition_ratio":0.13431463,"word_repetition_ratio":0.0,"special_character_ratio":0.2749771,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999837,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T02:35:10Z\",\"WARC-Record-ID\":\"<urn:uuid:51b13aee-9a78-48d0-b4e5-f065a3c3ce40>\",\"Content-Length\":\"35939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62bd7ad8-a262-4946-a73d-fb757c534487>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d41b48b-4c18-4003-b4d7-4ff4ff5a5287>\",\"WARC-IP-Address\":\"104.21.84.229\",\"WARC-Target-URI\":\"https://proofwiki.org/wiki/Definition:Big_Model\",\"WARC-Payload-Digest\":\"sha1:3MVXIQKGNIF4YTXIZUPXRZ2BNY4NONX5\",\"WARC-Block-Digest\":\"sha1:QCOQ2UXMLZHBJJ3JXJTU7GQKIZ7RB4Z4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499470.19_warc_CC-MAIN-20230128023233-20230128053233-00797.warc.gz\"}"}
https://math.stackexchange.com/questions/1702676/how-to-find-eigenvalues-for-t-without-given-a-matrix
[ "# How to find eigenvalues for T without given a matrix\n\nI found it a bit vague on how to find eigenvalues and vectors of $T$ if we do not have a matrix to represent it. Suppose if: $$T: V \\rightarrow W, V = W$$ I thought perhaps that the eignevalues might just be from the $det([T]_{\\alpha}^{\\beta} - \\lambda{}I) = 0$ where $\\alpha$ is a basis for $V$ and $\\beta$ is a basis for $W$. However, I have consistently got these answers wrong in my book and my book does not specify any obvious answers on how to approach problems such as these. In general, given known bases (or just selecting the standard ones, since the change of basis matrix are similar thus having the same characteristic polynomial), how can I find such eigenvalues and vectors?\n\nAn example would be, find the eigenvalues and vectors of $T: P_3 \\rightarrow P_3$ given that $T(p) = xp' -4p$\n\n• You could always find the answer by finding the matrix with respect to some fixed basis. For example, with $T:P_3 \\to P_3$, we can find the eigenvalues of $T$ by looking at the matrix of $T$ with respect to the basis $\\{1,x,x^2,x^3\\}$. Note: it is important that we use the same basis for both the starting space and the target space. – Omnomnomnom Mar 18 '16 at 3:06\n• Sometimes, however, it's easier to find/solve for the eigenvectors directly. For example, in your case we could note that $$xp' - 4p = \\lambda p$$ is a differential equation on $p(x)$. We could find a solution $p$ in terms of $x$ and $\\lambda$, and then select the $\\lambda$ for which the solution might be a polynomial in $P_3$. – Omnomnomnom Mar 18 '16 at 3:09\n\nFor your example, you can find the matrix of the transformation with respect to a standard basis, such as $\\alpha = \\{1,x,x^2,x^3\\}$. We then find that $$[T]_{\\alpha}^{\\alpha} = \\pmatrix{ -4 & 0&0&0\\\\ 0& -3&0&0\\\\ 0&0&-2&0\\\\ 0&0&0&-1 }$$ You may notice that it is particularly easy to find the eigenvalues of this matrix." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9387679,"math_prob":0.9997738,"size":800,"snap":"2020-10-2020-16","text_gpt3_token_len":208,"char_repetition_ratio":0.10427136,"word_repetition_ratio":0.0,"special_character_ratio":0.265,"punctuation_ratio":0.08125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999198,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-07T14:54:14Z\",\"WARC-Record-ID\":\"<urn:uuid:5e9d9f54-1e16-4a4f-b7e8-7973d086ced7>\",\"Content-Length\":\"140998\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54d11231-d115-429e-97ea-8a02c081e956>\",\"WARC-Concurrent-To\":\"<urn:uuid:3f5ee6f0-bbef-469b-a448-ef5524f95412>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1702676/how-to-find-eigenvalues-for-t-without-given-a-matrix\",\"WARC-Payload-Digest\":\"sha1:K3KRZFPTAWXGYRDUK5XVAKEABKFA5RVG\",\"WARC-Block-Digest\":\"sha1:YWJ3LZ7L6KW2OYII53DZ7HILEFN2R3QG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371799447.70_warc_CC-MAIN-20200407121105-20200407151605-00264.warc.gz\"}"}
https://books.google.com/books/about/Elementary_Algebra.html?id=Y902AAAAMAAJ
[ "# Elementary Algebra: Embracing the First Principles of the Science\n\nA. S. Barnes & Company, 1848 - Algebra - 279 pages\n0 Reviews\nReviews aren't verified, but Google checks for and removes fake content when it's identified\n\n### What people are saying -Write a review\n\nWe haven't found any reviews in the usual places.\n\n### Popular passages\n\nPage 230 - To express that the ratio of A to B is equal to the ratio of C to D, we write the quantities thus : A : B : : C : D ; and read, A is to B as C to D.\nPage 136 - The result of this operation, 1184, contains twice the product of the tens by the units, plus the square of the units.\nPage 231 - Quantities are said to be in proportion by alternation, or alternately, when antecedent is compared- with antecedent and consequent with consequent. Thus, if we have the proportion 3 : 6 : : 8 : 16, the alternate proportion would be 3 : 8 : : 6 : 16. QUEST. — 147. When are three quantities proportional ? What is the middle one called ? — 148. When are quantities said to be in proportion by inversion, or inversely?\nPage 155 - Divide the coefficient of the dividend by the coefficient of the divisor.\nPage 233 - AC and by clearing the equation of fractions we have BO=AD; that is, Of four proportional quantities, the product of the two extremes is equal to the product of the two means.\nPage 234 - If the product of two quantities is equal to the product of two other quantities, two of them may be made the extremes, and the other two the means of a proportion.\nPage 138 - Multiply the divisor, thus augmented, by the last figure of the root, and subtract the product from the dividend, and to the remainder bring down the next period for a new dividend.\nPage 273 - A person has four casks, the second of which being filled from the first, leaves the first four-sevenths full. The third being filled from the second, leaves it one-fourth full, and when the third is emptied into the fourth, it is found to fill only nine-sixteenths of it. But the first will fill the third and' fourth, and leave 15 quarts remaining.\nPage 79 - Ibs., his head weighed as much as his tail and half his body, and his body weighed as much as his head and t.ail together : what was the weight of the fish ? Let 2x = the weight of the body, in pounds.\nPage 116 - If A and B together can perform a piece of work in 8 days, A and C together in 9 days, and B and C in 10 days : how many days would it take each person to perform the same work alone ? Ans. A 14JA days, B 17fa, and C 23JT." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9228429,"math_prob":0.8647079,"size":3555,"snap":"2023-14-2023-23","text_gpt3_token_len":810,"char_repetition_ratio":0.11883976,"word_repetition_ratio":0.034591194,"special_character_ratio":0.23628692,"punctuation_ratio":0.11420205,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96158457,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T20:49:46Z\",\"WARC-Record-ID\":\"<urn:uuid:388e448c-953c-436c-a71c-4ec5e2e54a75>\",\"Content-Length\":\"76039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40743ec0-0aa5-4ffb-948e-e62b0f2485c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:968cf8c8-90a1-4077-b40d-6b0a46c9bf39>\",\"WARC-IP-Address\":\"142.251.163.139\",\"WARC-Target-URI\":\"https://books.google.com/books/about/Elementary_Algebra.html?id=Y902AAAAMAAJ\",\"WARC-Payload-Digest\":\"sha1:GP42JZEQ44SSWYWHER4COKOR3QQXOZUD\",\"WARC-Block-Digest\":\"sha1:BAMOSPTBXGM73WGBMU442DCYOB5QWVSY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647409.17_warc_CC-MAIN-20230531182033-20230531212033-00752.warc.gz\"}"}
https://www.mocaxintelligence.com/
[ "# Machine Learning for Risk Calculations\n\nWe love the challenge of seemingly impossible calculations\n\n#### Applications\n\n• Acceleration of Counterparty Credit Risk simulations – XVAs, PFE, IMM capital\n• Acceleration of ES calculation in  IMA FRTB\n• Reduction of computational cost in Risk Calculations\n• Simulation of sensitivities inside a Monte Carlo engine\n• Dynamic Initial Margin (DIM) simulation\n• Portfolio optimisation algorithms\n• Balance Sheet optimisation\n• Pricing function cloning from pricing libraries to separated risk engines\n\n#### Quant\n\nThe solutions we have designed are grounded on a number of Machine Learing techniques, Deep Neural Nets and Chebyshev Tensors\n\n#### Implementation\n\nThere are a number of resources that we share so you can run your own research and investigate what you need", null, "In this book, I. Ruiz and M. Zeron share the line of research they have taken for several years on the topic of optimising the computation of risk calculations.\n\n##### Part I - Fundamental Approximation Methods\n\nChapter 1. Machine Learning\nChapter 2. Deep Neural Networks\nChapter 3. Chebyshev Tensors\n\n##### Part II - The toolkit, plugging in approximation methods\n\nChapter 4. Introduction, why a toolkit is needed\nChapter 5. Composition techniques\nChapter 6. Tensors in TT format and tensor extension algorithms\nChapter 7. Sliding technique\nChapter 8. The Jacobian projection technique\n\n##### Part III - Hybrid solutions, approximations methods and the toolkit\n\nChapter 9. Introduction to hybrid solutions\nChapter 10. The toolkit and Deep Neural Nets\nChapter 11. The toolkit and Chebyshev Tensors\nChapter 12. Hybrid Deep Neural Nets and Chebyshev Tensors frameworks\n\n##### Part IV - Applications\n\nChapter 13. The aim\nChapter 14. When to use Deep Neural Networks and when to use Chebyshev Tensors\nChapter 15. Counterparty credit risk\nChapter 16. Market risk\nChapter 17. Dynamic sensitivities\nChapter 18. Pricing model calibration\nChapter 19. Approximation of the implied volatility function\nChapter 20. Optimisation problems\nChapter 21. Pricing cloning\nChapter 22. XVA sensitivities\nChapter 23. Sensitivities of exotic derivatives\nChapter 24. Software libraries relevant to the book" ]
[ null, "https://i0.wp.com/www.mocaxintelligence.com/wp-content/uploads/2022/05/book_cover_withPK.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6864356,"math_prob":0.6780325,"size":2212,"snap":"2022-40-2023-06","text_gpt3_token_len":500,"char_repetition_ratio":0.17119566,"word_repetition_ratio":0.012232416,"special_character_ratio":0.19077757,"punctuation_ratio":0.10393258,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9879502,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T18:16:57Z\",\"WARC-Record-ID\":\"<urn:uuid:e77c1ee8-c28a-4b62-82ad-f39504f9ea18>\",\"Content-Length\":\"185824\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32513ec0-4177-445d-9c77-ed2e63790edb>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca9dcd6b-4a28-4c79-9d16-1514ac1075f9>\",\"WARC-IP-Address\":\"52.56.235.142\",\"WARC-Target-URI\":\"https://www.mocaxintelligence.com/\",\"WARC-Payload-Digest\":\"sha1:6CSHSE7EVQ5OWIM4RAF6LLOGD4QTZO7J\",\"WARC-Block-Digest\":\"sha1:GPXMVYAIOOLOIYQSR3ZG3ONSXHM7YA3I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337339.70_warc_CC-MAIN-20221002181356-20221002211356-00108.warc.gz\"}"}
https://math.stackexchange.com/questions/2036790/on-quadratic-residues
[ "If $w^2 \\equiv p\\pmod q$ holds where $q\\equiv p\\equiv1\\pmod 4$ primes, is there explicit reasonably succinct expressions $f,g$ such that $f(p,q,w)\\equiv \\pmod p$ and $g(p,q,w)\\equiv q\\pmod p$ holds corresponding to the two roots of $x^2\\equiv p\\bmod p$?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64644796,"math_prob":1.0000092,"size":281,"snap":"2019-26-2019-30","text_gpt3_token_len":98,"char_repetition_ratio":0.1768953,"word_repetition_ratio":0.0,"special_character_ratio":0.30604982,"punctuation_ratio":0.10769231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999248,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T00:48:55Z\",\"WARC-Record-ID\":\"<urn:uuid:fcd12aa2-d93f-4df3-97ea-b78da4fe10a4>\",\"Content-Length\":\"130767\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0df04d9-cf8c-4169-812f-3f6cda9afef4>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ce88339-2b29-4699-9781-ab2b7469790c>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2036790/on-quadratic-residues\",\"WARC-Payload-Digest\":\"sha1:4JOQATSJMPCGE6NG5RFQGSYO6JAMQ3WN\",\"WARC-Block-Digest\":\"sha1:6F7YP4RAVB4OPUWFZXU2NMCP5TEHIC5I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526401.41_warc_CC-MAIN-20190720004131-20190720030131-00352.warc.gz\"}"}
http://www.regional.org.au/au/allelopathy/2005/2/3/2650_deli-liu.htm
[ "CARD: curve-fitting allelochemical response data\n\n1Wagga Wagga Agricultural Institute, NSW DPI, PMB, Wagga Wagga, NSW 2650, Australia, [email protected], [email protected]\n2\nWagga Wagga Agricultural Innovation Park, (CSU & NSW DPI), Wagga Wagga, NSW 2650, Australia\n3\nEnvironmental and Analytical Laboratories, Charles University, Wagga Wagga, NSW 2678, [email protected]\n\nAbstract\n\nBioassay techniques are often used to study the effects of allelochemicals on plant processes and it is generally observed that the processes are stimulated at low allelochemical concentrations and inhibited as the concentrations increase. Liu et al. (2003) developed a simple model to fit this type of allelochemical response data. Based on this model, CARD is developed as a Microsoft Windows® based program that can be easily used to fit the stimulation-inhibition response data. The fitted parameters and statistical properties are output in text file or on the screen and the comparison between the fitted and observed values can be viewed graphically.\n\nKey Words\n\nAllelopathy, CARD, modelling, computer software, stimulation-inhibition response\n\n# Introduction\n\nBioassay techniques are widely used for quantitative determination of biological responses to allelochemicals. Leather and Einhellig (1986, 1988) have extensively reviewed the nature and types of bioassay techniques used in studies of allelopathy. It is generally found that allelochemicals exhibit stimulation at low concentrations and inhibition at high concentrations (Lovett et al. 1989). The allelopathy dose-response relationship has, usually, an inverted U-shape but other kinds of response are also often found, such as absence of stimulation.\n\nSeveral models have been proposed to describe allelochemical dose-response relationships. A log-logistic equation (Finney 1979) was used in studying the allelopathic potential of wheat (Triticum aestivum L.) to fit the root length of annual ryegrass (Lolium rigidum) to wheat sowing density (Wu et al. 2000). The log-logistic equation is widely used in herbicide dose response, but it does not feature stimulation at low doses. Brain and Cousens (1989) modified the log-logistic equation and presented a model that can account for the stimulative responses. An et al. (1993) presented a model, based on enzyme kinetics, which includes the feature of stimulation, but it is not possible to fit the observed data statistically. Dias (2001) used a Weibull function to fit allelochemical effects on germination process, but the Weibull function, like many other equations, does not possess the feature of stimulation. Liu et al. (2003) developed a highly flexible but simple equation for describing the general pattern of stimulation-inhibition in dose-responses of allelochemicals. Even though the equation is simple, the calculation is quite time-consuming as it involves the determination of the number of ln-transformation that gives the best fit of the model to observed data. This paper introduces the Windows® based programm, CARD, that can be used to fit the stimulation-inhibition response curves, based on the model described by Liu et al. (2003).\n\n# The Software\n\nCARD is written in Visual Basic 6. The input data file is a two-column text-format file. The first column (X-axis) is “dose” and the second column (Y-axis) is the corresponding observation for the response to the dose. 
The data files are named with an extension of *.ARD which can be selected by press button “Get Data File”, shown in Figure 1.\n\nThe statistical results are shown on screen and are written into a file named as *.TXT. The labels for axis can be input from text boxes, Graphical labels. CARD runs and determines the best number of ln-transformation. The best fitted coefficients, α and β are reported with standard errors and t-test for the statistical tests of the fitted parameters. Coefficient of determination (R2), F-test, root mean square error (RMSE) (Janssen and Heuberger 1995) and model efficiency (ME) (Nash and Sutcliffe 1970) are calculated. R2 and F-test are based on a multiple linear regression with the transformed data, while RMSE and ME are, respectively, defined as:", null, "and,", null, "where O is the observed response value, P is the predicted response value. Ō is the mean of observed response values. It should be noted that the ME and R2 have identical values.", null, "Figure 1. The input data is in a two-column text formatted file (top right) and the files are named as *.ARD (top left).\n\nThe maximum value of stimulation and the dose at which the maximum stimulation is researched are reported (Figure 2). The doses for p% reduction are also calculated. While p = 0, 50 and 100 are reported, user can select a p% from a dropdown menu for calculating the corresponding dose. The dose for 50% reduction is suggested as a measure of the inhibition potency of an allelochemical or the sensitivity of the testing organism to the allelochemical (Liu et al. 2003).\n\nThe user can view the graphics showing the observed and predicted values with both actual doses and transformed doses (g(D), see Liu et al. 2003). While at the default, the predictions at the best fitted ln-transformation are plotted and the user can view the predicted and observed values with a given number of ln-transformations.\n\nThe predictions with various values of doses are also calculated and listed on screen and in the output file.\n\n# Results and Discussion\n\nCARD is a simple but useful program, for example, if one wants to fit dose-response data to the model described by Liu et al. (2003). The data of Selander et al. (1974) are used for an example (Figure 2). R2 and ME reached the highest value of 0.99 at the 4th ln-transformations. CARD calculates additional 4 ln-transformations after the best number of transformations is determined. At the best number of transformations the RMSE is the smallest, while the value of R2, ME and F-test are the highest. Further increase in ln-transformation will increase RMSE and decrease R2, ME and F-test. The criterion for determination of the best number of ln-transformations was detailed by Liu et al. (2003).\n\nWhile allelochemicals possess the nature of stimulation and inhibition, depending on concentrations, the shape of curves for stimulation-inhibition behaviour varies, depending on the nature of allelochemicals and the biological activities of receivers. The most important feature of the model described by Liu et al. (2003) is that the ln(D+1) cumulative transformations gave various sharps and at any number of cumulative ln-transformations the control remains the value of zero, where D is doses. In addition, after each ln-transformation, the model remains a simple quadratic equation, which can be fitted by a standard multiple linear regression. 
Because of this feature, CARD can be developed to fit the nonlinear relationship by a linear least squares regression.", null, "Figure 2. The observed and fitted values are compared, statistical results are shown and options can be selected in CARD.\n\nCARD on a CD Rom is available from “CARD Request, NSW DPI, Wagga Wagga Agricultural Institute, PMB, Wagga Wagga, NSW 2650, Australia” at the cost of \\$15 for material, postaage and handing, or email to the author for a free electronic copy. The setup for installation is simple and easy to use.\n\n# References\n\nAn M, Johnson IR, and Lovett JV. 1993. Mathematical modelling of allelopathy: Biological response to allelochemicals and its interpretation. Journal of Chemical Ecology 19, 2379-2388.\n\nBrain P, and Cousens R. 1989. An equation to describe dose responses where there is stimulation of growth at low doses. Weed Res 29:93-96.\n\nDias L. 2001. Describing phytotoxic effects on cumulative germination. Journal of Chemical Ecology 27, 411-418.\n\nFinney Y. 1979. Bioassay and the practice of statistical inference. Int. Statistical Review 47, 1-12.\n\nJanssen, P.H.M. and Heuberger, P.S.C., 1995. Calibration of process-oriented models. Ecological Modelling 83, 55-66.\n\nLeather G.R. and Einhellig F.A. 1986. Bioassay in the study of allelopathy. In: Putnam AR and Tang CS (ed), The Science of Allelopathy, pp 133-145. John Wiley and Sons, New York.\n\nLeather GR, and Einhellig FA. 1988. Bioassay of naturally occurring allelochemicals for phytotoxicity. Journal of Chemical Ecology 14, 1821-1828.\n\nLiu, D.L., M. An, I.R. Johnson and J.V. Lovett. 2003. Mathematical modelling of allelopathy. III. A model for curve-fitting allelochemical dose responses. Nonlinearity in Biology, Toxicology, and medicine: 1(1), 37-50.\n\nLovett JV, Ryuntyu MY, and Liu DL. 1989. Allelopathy, chemical communication, and plant defense. Journal of Chemical Ecology 15, 1193-1201.\n\nNash, J.E., and Sutcliffe, J.V., 1970. Rever flow forecasting through conceptual models. Part I. A discussion of principles. Journal of Hydrology 10, 282-290.\n\nSelander, J. Kalo, P. Kangas, E. and Pertunnen, V. 1974. Olfactory behaviours of Hylobium abietis L. (Col., Curculionidae). I. Response to several terpenoid fractions isolated from Scots pine phloem. Ann. Entom. Fenn. 40, 108-115.\n\nWu H, Pratley J, Lemerle D, and Haig T. 2000. Laboratory screening for allelopathic potential of wheat (Triticum aestivum) accessions against annual ryegrass (Lolium rigidum). Australian Journal of Agricultural Research 51, 259-266.", null, "", null, "", null, "" ]
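The goodness-of-fit statistics and the cumulative ln(D+1) fitting strategy described above can be sketched in a few lines. This is an illustrative reconstruction, not the CARD source code: the dose-response data below are made up, and the exact parameterisation of the quadratic in Liu et al. (2003) may differ (for instance it may be constrained through the control), so the plain quadratic-with-intercept fit used here is an assumption.

```python
import numpy as np

def g(D, m):
    """Apply the cumulative ln(D + 1) transformation m times; g(0) stays 0."""
    x = np.asarray(D, dtype=float)
    for _ in range(m):
        x = np.log(x + 1.0)
    return x

def fit_once(D, y, m):
    """Least-squares fit of y on [1, g, g^2] after m transformations."""
    x = g(D, m)
    X = np.column_stack([np.ones_like(x), x, x**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ coef
    rmse = np.sqrt(np.mean((pred - y) ** 2))                          # root mean square error
    me = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)  # Nash-Sutcliffe model efficiency
    return coef, rmse, me

# Made-up stimulation-inhibition data: the response rises at low doses, then falls.
D = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
y = np.array([10.0, 12.5, 13.0, 12.0, 9.0, 6.0, 3.0, 1.0])

# Pick the number of ln-transformations with the smallest RMSE, as described above.
results = [(m,) + fit_once(D, y, m) for m in range(1, 9)]
best_m, coef, rmse, me = min(results, key=lambda r: r[2])
print(best_m, coef, rmse, me)
```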
[ null, "http://www.regional.org.au/au/allelopathy/2005/2/3/2650_deli-liu-1.gif", null, "http://www.regional.org.au/au/allelopathy/2005/2/3/2650_deli-liu-2.gif", null, "http://www.regional.org.au/au/allelopathy/2005/2/3/2650_deli-liu-3.gif", null, "http://www.regional.org.au/au/allelopathy/2005/2/3/2650_deli-liu-4.gif", null, "http://www.regional.org.au/au/allelopathy/2005/images/previous.gif", null, "http://www.regional.org.au/au/allelopathy/2005/images/top.gif", null, "http://www.regional.org.au/au/allelopathy/2005/images/next.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86150956,"math_prob":0.8138464,"size":9206,"snap":"2022-27-2022-33","text_gpt3_token_len":2192,"char_repetition_ratio":0.13540535,"word_repetition_ratio":0.019815994,"special_character_ratio":0.2281121,"punctuation_ratio":0.15715094,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95769906,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T17:42:58Z\",\"WARC-Record-ID\":\"<urn:uuid:2330a55c-4fe6-44f5-9441-7b05ca0a9f93>\",\"Content-Length\":\"56600\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aac5a11b-b361-4241-9242-45f5429db480>\",\"WARC-Concurrent-To\":\"<urn:uuid:1916ea2f-8620-49d1-bb97-5aaad10931ad>\",\"WARC-IP-Address\":\"54.252.201.116\",\"WARC-Target-URI\":\"http://www.regional.org.au/au/allelopathy/2005/2/3/2650_deli-liu.htm\",\"WARC-Payload-Digest\":\"sha1:WK5WMSJAAHXCFTHAB2AMOZXQCFKT2H2A\",\"WARC-Block-Digest\":\"sha1:L3PR3NQ4BROQNDAMCZZ2AOMZ6YBB6LZL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104248623.69_warc_CC-MAIN-20220703164826-20220703194826-00234.warc.gz\"}"}
https://www.johndcook.com/blog/2017/05/13/grobner-bases/
[ "# Solving systems of polynomial equations\n\nIn a high school algebra class, you learn how to solve polynomial equations in one variable, and systems of linear equations. You might reasonably ask “So when do we combine these and learn to solve systems of polynomial equations?” The answer would be “Maybe years from now, but most likely never.” There are systematic ways to solve systems of polynomial equations, but you’re unlikely to ever see them unless you study algebraic geometry.\n\nHere’s an example from . Suppose you want to find the extreme values of", null, "on the unit sphere using Lagrange multipliers. This leads to the following system of polynomial equations where λ is the Lagrange multiplier.", null, "There’s no obvious way to go about solving this system of equations. However, there is a systematic way to approach this problem using a “lexicographic Gröbner basis.” This transforms the problem from into something that looks much worse but that is actually easier to work with. And most importantly, the transformation is algorithmic. It requires some computation—there are numerous software packages for doing this—but doesn’t require a flash of insight.\n\nThe transformed system looks intimidating compared to the original:", null, "We’ve gone from four equations to eight, from small integer coefficients to large fraction coefficients, from squares to seventh powers. And yet we’ve made progress because the four variables are less entangled in the new system.\n\nThe last equation involves only z and factors nicely:", null, "This cracks the problem wide open. We can easily find all the possible values of z, and once we substitute values for z, the rest of the equations simplify greatly and can be solved easily.\n\nThe key is that Gröbner bases transform our problem into a form that, although it appears more difficult, is easier to work with because the variables are somewhat separated. Solving one variable, z, is like pulling out a thread that then makes the rest of the threads easier to separate.\n\n* * *\n\n David Cox et al. Applications of Computational Algebraic Geometry: American Mathematical Society Short Course January 6-7, 1997 San Diego, California (Proceedings of Symposia in Applied Mathematics)\n\n## 7 thoughts on “Solving systems of polynomial equations”\n\n1. Ryan Dwyer\n\nVery interesting! I think a term is missing in the factored equation in z.\n\n2. So *that’s* what it was!\n\nIn the late ’80’s and early ’90’s I wrote software for R&D instrumentation, where my main job was algorithm extraction from scientists and their FORTRAN, then porting/re-implementing it in C running on bare metal.\n\nOne algorithm required a system of polynomials that I couldn’t solve in the minimal C domain, as it relied on a special FORTRAN library from a US national lab. One of the scientists transformed it into the above basis and simplified it, and after recovering from my initial disbelief that it was equivalent to the original, I was off and running.\n\nThat was about 30 years ago, and I’m amazed I recognized it at after all this time.\n\nBetter late than never, I suppose…\n\n3. John S.\n\nSets of equations like this arise often in engineering contexts. 
When I was in college we learned to solve them using an iterative method that is so simple it’s a wonder it even works.\n\nBasically, given a set on n algebraic equations in n unknowns, you solve each equation for a single variable, so that you end up with a transformed set of equations x1 = f1(x2,x3,…xn); x2 = f2(x1,x3,…,xn); etc.\n\nYou begin with an initial guess for the value of each variable (something that’s usually easy to do since there are physical constraints on most probles) and use the equations to calculate new values for each variable; iterate this way until the values converge (to within some predefined tolerance).\n\nThere are other approximate methods, such as Gauss-Jordan.\n\n4. One advantage of algebraic methods over numerical methods is that the former can tell you how many solutions there are. And of course it doesn’t have to be either-or.\n\nMaybe algebraic methods tell you how many solutions there are, and give you some idea where they are, then you compute them numerically. Maybe Grobner bases partially untangle your equations, but you still need to solve the transformed equations numerically.\n\n5. Jack M\n\nNice. But it’s interesting to append to the Groebner Basis successively each of the 4 factors of the z polynomial and then apply the algorithm\nto that set. The answers are very simple. For example, appending 128 z^2 -11 yields basis {128 z^2 -11, y + 3 z, 3 + 8 x, 8 (lambda) – 1.\n\n6. Hrishikesh\n\nExtremely interesting! I love your writing , I can see a programming analogy regarding your last statement:\nSolving one variable, z, is like pulling out a thread that then makes the rest of the threads easier to separate.\n\nSimilar to how I think of isolating concerns so it’s easier to deal with one thing at a time when keeping a program in the head." ]
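The computation in the post is easy to reproduce with a computer algebra system. The sketch below is my addition; it assumes the objective is f = x^3 + 2xyz - z^2 on the unit sphere, the classic Lagrange-multiplier example from Cox, Little and O'Shea that this post appears to use (the rendered equations above are images, so the exact system here is an assumption).

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda')

# Lagrange conditions grad(f) = lam * grad(g) for the assumed objective
# f = x**3 + 2*x*y*z - z**2 on the unit sphere g = x**2 + y**2 + z**2 - 1 = 0.
system = [
    3*x**2 + 2*y*z - 2*lam*x,
    2*x*z - 2*lam*y,
    2*x*y - 2*z - 2*lam*z,
    x**2 + y**2 + z**2 - 1,
]

# Lexicographic Groebner basis with lambda > x > y > z.
G = sp.groebner(system, lam, x, y, z, order='lex')

print(len(G.exprs))            # number of polynomials in the transformed system
print(sp.factor(G.exprs[-1]))  # the last one involves only z and factors nicely
```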
[ null, "https://www.johndcook.com/grobner0.svg", null, "https://www.johndcook.com/Lagrange_system.svg", null, "https://www.johndcook.com/grobner.svg", null, "https://www.johndcook.com/z_only2.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.919048,"math_prob":0.9759454,"size":5179,"snap":"2020-45-2020-50","text_gpt3_token_len":1116,"char_repetition_ratio":0.10879227,"word_repetition_ratio":0.0472255,"special_character_ratio":0.21181695,"punctuation_ratio":0.103792414,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99359035,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,7,null,8,null,8,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T18:21:18Z\",\"WARC-Record-ID\":\"<urn:uuid:d513d76b-fe6c-4d4a-b840-a6e2868a5b39>\",\"Content-Length\":\"52285\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95a27917-6443-4ac5-8956-6011b128cc8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d28c9d34-7e92-4b02-8804-7f0c6da63225>\",\"WARC-IP-Address\":\"74.208.236.113\",\"WARC-Target-URI\":\"https://www.johndcook.com/blog/2017/05/13/grobner-bases/\",\"WARC-Payload-Digest\":\"sha1:TFFTN4ICJSKAZI4AD6NXGHHLIVXY2MSF\",\"WARC-Block-Digest\":\"sha1:4LMTQX4NQRUDUQU4PD5GE2ETTHJZ4IZJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141681209.60_warc_CC-MAIN-20201201170219-20201201200219-00336.warc.gz\"}"}
http://www.sfzd5.com/doc/20064.html
[ "# 20064 平均年龄\n\n##### 输入样例\n\n``````3\n18\n17\n19\n``````\n##### 输出样例\n\n``````18.00\n``````\n\n[循环] [语法基础]\n\nC++题解代码\n\n``````#include <bits/stdc++.h>\nusing namespace std;\n\nint n;\nint i;\ndouble a;\ndouble h;\n\n// The main procedure\nint main() {\ncin>>n;\nh = 0;\ni = 1;\nwhile (i <= n) {\ncin>>a;\nh += a;\ni++;\n}\ncout<<fixed<<setprecision(2);\ncout<<(h/n);\nreturn 0;\n}\n``````\n\nBlockly题解代码图片", null, "" ]
[ null, "http://www.sfzd5.com/doc/pic/20064.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.75454384,"math_prob":0.7542531,"size":447,"snap":"2023-14-2023-23","text_gpt3_token_len":279,"char_repetition_ratio":0.09255079,"word_repetition_ratio":0.0,"special_character_ratio":0.36465323,"punctuation_ratio":0.17021276,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96481526,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-28T18:57:50Z\",\"WARC-Record-ID\":\"<urn:uuid:77ef409d-8615-4867-9dd0-5b40ae97fda8>\",\"Content-Length\":\"2216\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:742fa86a-64e3-48d8-b7d4-410d9549068c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad529d89-9160-4155-abbc-acac43595c08>\",\"WARC-IP-Address\":\"116.255.151.139\",\"WARC-Target-URI\":\"http://www.sfzd5.com/doc/20064.html\",\"WARC-Payload-Digest\":\"sha1:XBJ2JGGV67YVNTXGL3ZXIF6A2OMIVR47\",\"WARC-Block-Digest\":\"sha1:73MYE67T4SM26SZXV6TCI3JOWX6XNHL6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644506.21_warc_CC-MAIN-20230528182446-20230528212446-00044.warc.gz\"}"}
https://mcqslearn.com/engg/advance-engineering-mathematics/de-model-multiple-choice-questions.php
[ "Engineering Online Courses\n\nEngineering Mathematics MCQs\n\nEngineering Mathematics MCQ PDF - Topics\n\n# DE Model MCQ Quiz Online\n\nLearn DE Model Multiple Choice Questions (MCQ), DE Model quiz answers PDF to study engineering mathematics online course for engineering mathematics classes. First Order Ordinary Differential Equations Multiple Choice Questions and Answers (MCQs), DE Model quiz questions for online high school college acceptance. \"DE Model MCQ\" PDF Book: seperation of variables, concepts of solution, homogeneous and inhomogeneous differential equations test prep for global knowledge quiz.\n\n\"Ordinary differential equations model\" MCQ PDF: de model with choices no dimensional system, random dimensional system, one dimensional system, and two dimensional system for online high school college acceptance. Study de model quiz questions for merit scholarship test and certificate programs to enroll in online colleges.\n\n## MCQs on DE Model Quiz\n\nMCQ: Ordinary differential equations model\n\nno dimensional system\nrandom dimensional system\none dimensional system\ntwo dimensional system\n\nMCQ: A function is a mapping from one set, known as a domain to another set, known as the\n\nDistant\nProper\nBound\nRange\n\nMCQ: A function that assigns a real number to each member of its domain is\n\nreal value function\nreal transfer function\nreal discrete function\nreal domain function\n\n### More Topics from Engineering Mathematics Course", null, "", null, "", null, "", null, "" ]
[ null, "https://mcqslearn.com/images/appicons/engineeringmath.png", null, "https://mcqslearn.com/images/appicons/bbabusinessmath.png", null, "https://mcqslearn.com/images/appicons/allinone.png", null, "https://mcqslearn.com/images/appicons/allinone.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8659348,"math_prob":0.9032717,"size":1081,"snap":"2022-40-2023-06","text_gpt3_token_len":204,"char_repetition_ratio":0.10770659,"word_repetition_ratio":0.025806451,"special_character_ratio":0.1720629,"punctuation_ratio":0.107344635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99234307,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T14:52:00Z\",\"WARC-Record-ID\":\"<urn:uuid:3e67e677-8c7f-457c-a82b-979c59de9c33>\",\"Content-Length\":\"109983\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6acf32fb-01c9-4686-af6a-6b225e3a1fbf>\",\"WARC-Concurrent-To\":\"<urn:uuid:11fd1e35-edcf-4c2f-b665-d8b68f348918>\",\"WARC-IP-Address\":\"68.178.239.9\",\"WARC-Target-URI\":\"https://mcqslearn.com/engg/advance-engineering-mathematics/de-model-multiple-choice-questions.php\",\"WARC-Payload-Digest\":\"sha1:NTNI5W37GPOD5RN7IP7EV5PXJKH52TUE\",\"WARC-Block-Digest\":\"sha1:XFE4PBSHCHRT5NMPSNNLCTC2ELQY3YB6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500140.36_warc_CC-MAIN-20230204142302-20230204172302-00252.warc.gz\"}"}
https://www.mathlearnit.com/what-is-the-square-root-of-40
[ "# What is the square root of 40?\n\n## Solution to the √40\n\n√40 = 6.325\n\nA square root is the 2nd root of a number (a root of degree two). We recommend reading about this on our square root section.\n\nCalculating the square root is challenging to do by hand, and many people tend to memorize the first few perfect squares. As a reminder, a perfect square is a number that has an integer as a square root. Examples of perfect squares are 9, 49, and 144.\n\nFun fact: despite being incredibly difficult to calculate by hand, mathematicians four thousand years ago were able to calculate 9 decimal places of the square root of two! The square root of two is a notoriously irrational number.\n\n### Best way to calculate square root?\n\nWhile it is possible to calculate square roots by hand, the vast majority of numbers will have complicated roots (lots of decimal places, many irrational numbers). Due to this, the best way to calculate a square root is using a calculator or computer." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94604045,"math_prob":0.9986513,"size":1214,"snap":"2023-40-2023-50","text_gpt3_token_len":284,"char_repetition_ratio":0.30826446,"word_repetition_ratio":0.084821425,"special_character_ratio":0.24876441,"punctuation_ratio":0.112840466,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99787074,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T17:55:02Z\",\"WARC-Record-ID\":\"<urn:uuid:7aaa9828-24b0-4477-ade0-6b4c1f9b6afc>\",\"Content-Length\":\"14308\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d296d64-d7ed-4941-9570-9478e5c776de>\",\"WARC-Concurrent-To\":\"<urn:uuid:9fae9a6d-b807-449e-84f5-445a21ad3a7d>\",\"WARC-IP-Address\":\"159.65.170.170\",\"WARC-Target-URI\":\"https://www.mathlearnit.com/what-is-the-square-root-of-40\",\"WARC-Payload-Digest\":\"sha1:FO7F6ELVSQ32SARV7NVM22BPBBQBUPUQ\",\"WARC-Block-Digest\":\"sha1:DDM7FUNG5DTEVWTS76ML7KQOHDQY3D6A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511002.91_warc_CC-MAIN-20231002164819-20231002194819-00185.warc.gz\"}"}
https://blog.sakurapuare.com/archives/2023/02/how-many-plants-can-hit-a-drought-one-or-two-more/
[ "## Abstract\n\n\"Global temperatures are warming, glaciers are melting, sea levels are rising …\" All parts of the world are facing huge environmental disasters. In recent years, unusual weather and climate events have become more and more intense, and the number of \"record-breaking\" extreme weather events has increased. Trees are also becoming thin and weak under drought conditions. How many plants can survive drought conditions without being defeated and become the last survivors?\n\nIn this paper, we investigate the change of species biomass over time under irregular weather conditions. For this we developed the Plant Community Interaction Model and an Improved Logistic Growth Competition Model. By using these two models, we investigated the tolerance of plant species and populations to the environment under drought conditions.\n\nFirst, we obtained a series of time-series images through the US nasa and worldview websites, and preprocessed the data through data visualization and cubic spline interpolation. In order to predict the changes of anomalous weather cycles and species populations, we developed the Lotka-Volterra competition model, further introduced the associated competition and growth coefficients, and derived a plot of actual species biomass versus time by analyzing the effects of their interspecific relationships on environmental biomass. In addition, we also proposed the climatic factors affecting environmental biomass, and selected the three most influential factors, temperature, precipitation and light, to derive the predicted species biomass versus time, and then determined the weight of each factor by difference fitting and entropy weight-multiple linear regression, and finally we obtained the predictive model of species biomass over time during the drought cycle.\n\nSecond, to explore the relationship of how species species benefit the community. We selected three species with the same climate in the ecosystem as the study object and proposed an Improved Logistic Growth Competition Model. Based on this, we introduced the drought factor and species contact coefficients by considering environmental effects and interspecies effects, and the species contact coefficients were calculated separately. In the study of community benefit, we considered two parameters, Fraction of Photosynthetically Active Radiation and Gross Primary Productivity, to evaluate the impact brought by the ecosystem, and finally obtained the functional relationship between the species contact coefficient and the benefit coefficient by fitting the function Finally, the functional relationship between the species exposure coefficient and the benefit coefficient was obtained by fitting the function. The minimum number of species to benefit the community was determined to be 460 when the species contact coefficient was 0.00937.\n\nFinally, we tested the robustness and sensitivity of the Improved Logistic Growth Competition Model and found that the model had better sensitivity by varying the images obtained from the species contact coefficients. We also evaluated the strengths and weaknesses of the model.\n\nKeywords: Lotka-Volterra competition model, improved logistic growth competition model, data visualization\n\n## Introduction\n\n### Problem Background\n\nSince plants can only grow in their pristine environment, they are vulnerable to various abiotic environmental impacts of stress such as sunlight, temperature, humidity, and drought throughout their growth and development. 
As one of the major abiotic stresses that affect normal plant growth and development and limit plant community diversity, drought affects plant life activities such as growth, development, and reproduction. Different species respond differently to their environment, while drought environments vary in their alteration of species traits. What is the minimum number of species required for a plant community to benefit from this local biodiversity? How does this phenomenon expand as the number of species increases? What does this imply for the long-term viability of plant communities?", null, "", null, "", null, "### Restatement of Problem\n\nConsidering the background information of plant and limiting conditions identified in the Problem Background , the main tasks of this paper are as follows:\n\n1. A mathematical model is developed for predicting trends in plant communities over time in the face of irregular weather cycles. For example, in periods of drought when rain is supposed to fall, we will consider the interspecific relationships of plants to better predict changes in plant community biodiversity.\n\n2. With environmental variations, the sensitivity of different plant species to these changes may also vary. When the number of species increases, the ecosystem of the plant community may change, including species competition, interactions, etc., which in turn affects the ecological balance of the plant community.\n\n3. Evaluate and predict the role and importance of species diversity for the ecosystems in which the species lives.\n\n4. Future droughts occurring with greater frequency and variability will have widespread impacts on ecosystems and human societies. This could lead to water scarcity, poor crop yields, and increased desertification, thereby threatening biodiversity and human well-being.\n\n5. Pollution factors and habitat destruction are objectively present problems in the environment, which may significantly affect the conclusions. These problems may lead to loss of biodiversity, loss of ecosystem function, or even to the destruction of entire ecological chains.\n\n6. It is important to evaluate and predict the role and importance of species diversity to the ecosystem in which it is found. Species diversity is a key feature of ecosystems and has important implications for ecosystem health and stability.\n\n### Literature Review\n\nIn recent years, the frequency of abnormal weather has increased significantly with global warming and human activities, and these abnormal weather events have had a dramatic impact on plant communities. It may affect the distribution range of species, and the composition of species and lead to changes in ecosystems. Thus, the analysis of the effects of the relationship between abnormal weather and plant populations has far-reaching implications for the succession of plant populations.\n\nClark, James S., et al. suggested that in the eastern United States, the effects of increased drought are better understood at the level of individual trees. Grossiord, Charlotte, et al. modeled the predicted climate impacts on biodiversity across the continental United States.\n\nTilman, D., Reich, P. B., et al. discussed the high climate variability during the growing season over a decade, resulting in year-to-year changes in plant species abundance and ecosystem productivity. The greater the number of plant species, the greater the temporal stability of the annual production of above-ground plants in the ecosystem. Huang, W., Wang, W., et al. 
studied the stability of grassland ecosystems under drought. concluded that alpine grasslands and alpine meadows were the most resistant but the least resilient. Meadow steppe and typical grassland were the least resistant but the most resilient.\n\nPrugh, Laura R., et al. responded that plants are most responsive to a year of water deficit under extreme drought conditions. Spinoni, Jonathan, et al. quantified the effects of extreme climatic events on contemporary ecological community composition.\n\nZvereva, Elena L., Eija Toivonen, and Mikhail V. Kozlov. showed for the first time that there are geographical differences in plant community responses to air emissions. By investigating the effects of IAP on Italian plant communities and Natura 2000 habitats, Lazzaro, Lorenzo, et al. concluded that competition was the main mechanism of influence. \n\n## Our work\n\nWe need to analyze the laws behind the irregular weather and the effects between different species, and then build mathematical models based on them. In this paper, our work is focused on the following points:\n\n• Establishing Lotka-Volterra model based on multiple regression entropy weighting method, which can predict the results more accurately by the given input parameter values within a certain range.\n\n• Predicting the effects of abnormal weather cycles and plant community biomass by analyzing the interspecific relationships of communities and the three factors affecting climate variables: temperature, precipitation, and light.\n\n• Based on the above Lotka-Volterra model, a variety of factors including the environment is considered and parameters are added to make the model more responsive to the parameters and have better sensitivity.\n\n• Successfully predicted the minimum number of species that would benefit itself from an ecosystem perspective.\n\n• Discusses how to face drought in the future and makes recommendations from an ecosystem perspective\n\n## Assumptions and Justifications\n\n• Assumption 1: There is no significant effect on the community except for the influencing factors proposed in the text.\n\nJustification: Nature is a chaotic system with many external changing conditions, such as soil moisture, rainfall rate, and ozone layer hole. 
In order to simplify our model, other than the factors mentioned in the text, other factors did not have significant effects on the community.\n\n• Assumption 2: The variation of species biomass is determined by its internal and external factors, and the internal and external factors do not influence or interfere with each other.\n\nJustification: Changes in species biomass are determined by both internal and external factors, and in order to make our model results strongly correlated with the factors that appear in the model, we consider that internal and external factors do not interfere with each other and only the factors that appear in the model need to be considered.\n\n• Assumption 3: The competitiveness index of each species is assumed to be fixed for each period of biological competition.\n\nJustification: Since the change in the number of species does not change its original competitive index at each stage of the biological competition, the competitive index can remain approximately constant at a fixed stage.\n\n• Assumption 4: The data at the data collection sites represent the data of the whole region.\n\nJustification: Since the climatic conditions of a region are roughly comparable, there may be a few points of difference, so we can assume that the climatic conditions of a region are roughly comparable.\n\n• Assumption 5: The growth rate of species growth is assumed to be continuous and stable.\n\nJustification: The growth rate of a species is continuous and stable according to its survival years and also the environment will affect its growth rate.\n\n## Notation\n\nAll symbols used in this paper are shown in Table \n\nSymbols Definitions\nN Number of individuals of the species\nr Species intrinsic growth rate\nK Environmental capacity\n\\alpha Drought coefficient scale factor\nD Drought coefficient\n\\beta Species contact factor\n\\delta Standard deviation of the linear fit\n\n## The Data\n\n### Data Collection\n\nData collection plays a role in mathematical modeling. the data we use in the paper comes from two main sources, one of which is NASA’s worldview. worldview provides data from several satellites, including Terra, Aqua, Suomi NPP, and NOAA. a worldview can be used to browse and analyze natural and anthropogenic activities on the Earth’s surface, such as meteorological phenomena, fires, climate change, land, and ocean surface temperatures, etc. We use Python to capture and visualize all data from 2010-2020 into images, the visualized image obtained from the search is shown in Figure", null, "", null, "We then used drought index data from Drought Monitor, a system developed by various U.S. government agencies to monitor and report on drought conditions. The system is updated weekly and provides nationwide drought data, analysis, and monitoring, as well as warnings and recommendations for different geographic areas. We collected drought levels from 2010-2020 for each U.S. state in the Drought Levels and DSCI data for use in our model tests, as shown in the Figure .", null, "", null, "", null, "Name URL\nNASA Worldview https://worldview.earthdata.nasa.gov/\nU.S. 
Drought Monitor https://droughtmonitor.unl.edu/\n\n### Data Visualization\n\nWe represent each data item in the database as a single graph element through data visualization techniques and use the data set collected in Table to form a data image while representing the individual attribute values of the data as multidimensional data, which allows for a deeper observation and analysis of the data by observing the data in different dimensions, as shown in Figure .", null, "Finally, we also used some data from the literature, which we have attached at the end of the paper.\n\n## Model 1: Plant Community Interaction Model\n\n### Model Prepraration\n\nPlant growth is influenced by multiple factors, which can be mainly divided into external and internal factors. For external factors, the environment is usually the most dominant, while for internal factors, the widespread internal influences among species are also important. Therefore, we will study the changes in species communities in terms of interactions between species and the three factors that mainly influence climate. From this, we consider the analysis of the curvilinear pattern of changes in the three factors over time on species growth and modify the Lotka-Volterra model by weighting and multivariate linear fitting the weights obtained by the entropy weighting method with the three factors.\n\n### Model Introduction\n\nA competition model for plant populations is a mathematical model used to describe the changes in numbers between plant populations due to contact. Common plant competition models include the Lotka-Volterra competition equation and the Tilman competition model.\n\nThis model uses the Lotka-Volterra competition equation, a common two-species competition model that is often used to describe predator-prey interactions, to model population changes due to contact between species. The model is based on the simple assumption that the population size of each species grows at a rate proportional to the number of individuals of that species, but that the growth rate does not decrease while the two species compete with each other to maintain a stable value. The mathematical form of the Lotka-Volterra competition equation is as follows.\n\n\\frac{dN_1}{dt} = r_1 N_1(1-\\frac{N_1}{K_1} - \\frac{a_{12}N_2}{K_1})\n\\frac{dN_2}{dt} = r_2 N_2(1-\\frac{N_2}{K_2} - \\frac{a_{21}N_1}{K_2})\n\nWhere N_1 and N_2 are the numbers of individuals of the two competing species; r_1 and r_2 are the endogenous growth rates of the two species, respectively; K_1 and K_2 are the environmental capacities of the two species, respectively; and a_{12} and a_{21} are the competition coefficients between the two species.\n\nThe Lotka-Volterra competition model can be used to simulate population changes due to competition between different species of plants and interactions between the same species, on the basis of which future changes in plant communities can be predicted at the same time.\n\nAt the same time, the Lotka-Volterra competition model can likewise be easily extended to use in the case of n species. 
On the basis of the above equation, we can describe the competition among n species by the following set of differential equations:\n\n\\frac{dN_i}{dt} = r N_i(1-\\frac{N_i}{K_i} - \\sum_{j=1}^n a_{ij} \\frac{N_j}{K_j})\n\nWhere N_i is the number of individuals of the i th species, r_i is the intrinsic growth rate of the i th species, K_i is the environmental capacity of the i th species, and a_{ij} is the competition coefficient of the j th species against the i th species. The Lotka-Volterra competition model takes into account both the environmental capacity and species relationship influences on community abundance, \\frac{N_i}{K_1} indicates that the specific growth rate is influenced by the environmental holding capacity K and also by the relative number and competitive ability between two species. In this model, we do not consider the effects caused by the same species to grab resources and space, i.e., the survival environment and space are sufficient. Therefore, the value of a_{ii} is zero here.\n\nFrom the theoretical model described above, we know that there is competition between two different species for food and habitat without considering the influence caused by the external environment. In general, the difference in competitiveness between two different species is not large, and eventually, the two populations reach a relative equilibrium and maintain it stably. For two species with a large difference in species competitiveness, the usual outcome is that the more competitive one competes the relatively less competitive one to extinction, and the autochthonous one reaches the environmental capacity allowed by the ecosystem. Therefore, we can predict the subsequent course of the final two species by giving the initial values between two different species, the environmental holding capacity, and their mutual competition coefficients.\n\n#### Solution to the Problem 1\n\nThe use of the Lotka-Volterra competition model is predicated on the so-called need to give initial values between two different species, the environmental accommodation, and their competition coefficients with each other in order to predict the subsequent course of the final two species. For Problem 1, using the present model does not give us the option to give these values. Instead, we innovatively propose an ecosystem-based competition model theory, in other words, instead of giving the number between two different species, we cleverly solve the problem that the series parameters between two different species are not easy to determine by considering the larger concept of ecosystem. By considering 2-3 unique species within the ecosystem, we successfully solve the problem of predicting the number of organisms when considering both species competition and environmental factors.\n\nSpecifically, it is assumed that only three different species exist in the ecosystem and that no other species exist in the ecosystem within a certain range, i.e., there is no interference with the model data from species other than these three. We use the observable \"Vegetation Index\" of the satellite species to predict the subsequent changes in biomass.\n\nIn the introduction, we have given the Lotka-Volterra competition model for N species, as in Eq. 
Therefore, the Lotka-Volterra competition equation based on three different species can be given as follows.\n\n\frac{dp}{dt} = r_1p (1- \frac{p - \alpha (q + \mu)}{K_1})\n\frac{dq}{dt} = r_2q (1- \frac{q - \beta (p + \mu)}{K_2})\n\frac{d\mu}{dt} = r_3\mu (1- \frac{\mu - \gamma (p + q)}{K_3})\n\nWhere p, q, and \mu are the biomasses of the three different species, and K_1, K_2 and K_3 denote the environmental capacity of the three species, i.e., the maximum amount that can support the survival of the species when only species competition is considered. r_1, r_2, and r_3 denote the growth rates of species 1, 2, and 3, respectively, and \alpha, \beta and \gamma denote the competition coefficients between the three different species. Meanwhile, the growth inhibition of the p, q, and \mu populations by their own populations is \frac{1}{k_1}, \frac{1}{k_2}, \frac{1}{k_3} and the influence of the p, q, \mu populations by other populations is \frac{a}{k_1}, \frac{b}{k_2}, and \frac{c}{k_3}, respectively. The specific values of each parameter are shown in the table below.\n\n|   | r | \frac{1}{k} | p, q, \mu |\n| --- | --- | --- | --- |\n| \alpha | 0.4 | 0.005 | 0.03 |\n| \beta | 0.7 | 0.008 | 0.02 |\n| \gamma | 0.5 | 0.012 | 0.01 |\n\nWe can change these coefficients to alter the different growth curve relationships of species, make different species grow at different rates by adjusting the growth rate of each species, and adjust the competition coefficients between different species to show the interspecific competition relationships of different species. On the basis of the table, we can draw the biomass of the three species as a function of time, as shown in Figure .", null, "This figure shows the change curves of the biological population of three different species with respect to time. It can be noted that the three different species have different growth rates and upper bounds for their three curves due to their different initial numbers of species, different growth rates of species, and different competition coefficients of species. In addition, we also plotted an image of the total ecosystem biomass on top of this graph, which is the red curve in the Figure . This is because we have used the concept of \"Vegetation index\" to measure the total biomass of species, as described earlier, by summing the three curves.\n\nWe collected all biomass data of Yellowstone Park from 2010 to 2020, covering the changes in biomass in the park over ten years, and plotted them by month. On top of this, we also superimposed the theoretical total biomass obtained from the graph above and plotted the total biomass versus time for the theoretical model compared with the observed values, as in Figure .", null, "Without considering other factors, although there are more than three species in Yellowstone, the growth trend of the total biomass should be consistent with the image obtained from the Lotka-Volterra competition model, but the fact is that the two curves do not overlap and the observed values are significantly lower than those calculated by the theory, so the model also has errors, as shown in Figure .", null, "### A Little Correction\n\nAlthough the above model can describe the species population change curve with time to some extent, it still cannot fit the data values we observed well, and there is an error value, thus indicating that there are some non-negligible factors that we accidentally ignored. 
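For readers who want to experiment with the three-species system before the correction introduced next, here is a minimal numerical sketch (my addition; the paper reports using MATLAB). It uses the growth rates, 1/k values and starting biomasses from the table above; the competition coefficients a, b, c are not listed there, so the values below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# r, 1/K and initial biomass per species, taken from the table above.
r = np.array([0.4, 0.7, 0.5])
inv_K = np.array([0.005, 0.008, 0.012])
y0 = np.array([0.03, 0.02, 0.01])
a, b, c = 0.1, 0.1, 0.1          # competition coefficients: assumed, not given in the paper

def rhs(t, y):
    """Right-hand side of the three-species equations exactly as written above."""
    p, q, u = y                  # u stands for the third biomass (mu)
    dp = r[0] * p * (1 - (p - a * (q + u)) * inv_K[0])
    dq = r[1] * q * (1 - (q - b * (p + u)) * inv_K[1])
    du = r[2] * u * (1 - (u - c * (p + q)) * inv_K[2])
    return [dp, dq, du]

sol = solve_ivp(rhs, (0, 60), y0, dense_output=True)
t = np.linspace(0, 60, 200)
p, q, u = sol.sol(t)
total = p + q + u                # "Vegetation index"-style total biomass
print(total[-1])
```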
Therefore, the model obtained above still needs to be corrected in order to arrive at a usable predictive model. Let the theoretical value of the total biomass be f(t) and the actually observed value be g(t); there is an error \delta between the two:

f(t) = g(t) + \delta

Having considered the internal factors between species, the external factors imposed by the environment should not be neglected either. The impact of the variable climate on organisms is therefore absorbed into \delta.

Many environmental factors affect the growth of organisms, such as CO_2 concentration, microorganisms in the soil, and the pH of rainwater, but the three most important factors for plant growth are temperature, precipitation, and sunlight, for the following reasons.

1. Temperature: The life activities of organisms are based on aerobic respiration, and the metabolic reactions of aerobic respiration are catalyzed by enzymes that only work properly within a specific temperature range. An ambient temperature that is too high or too low can impair the growth and development of an organism or even kill it.

2. Precipitation: The life activities of organisms use water as a carrier. Organisms need water not only to maintain cellular functions and life activities but also as a medium for information transfer and energy transport. Too much or too little water may impair growth and development.

3. Sunlight: For plants, light is necessary for photosynthesis; only with adequate sunlight can plants carry out photosynthesis to release oxygen and produce organic matter.

Therefore, we consider the influence of the environment on organisms through these three factors.

Temperature effects on changes in total biomass

Temperature plays a key role in plant growth and development because many of the physiological processes in plants require a certain temperature. In general, plants grow faster at higher temperatures than at lower ones because the respiration rate is higher. Temperature affects photosynthesis and in turn the total biomass; this is modeled by the following equation.

\frac{dN}{dt} = rN\left(1-\frac{N}{K}\right) - c(T-T_0)N

where N denotes the population size of the plant, r is the growth rate of the plant population, K is the environmental holding capacity, c denotes the influence factor of temperature on plant growth, T is the current ambient temperature, and T_0 is the baseline temperature for plant growth. The equation is solved with MATLAB and the resulting curve is shown in the figure.

Impact of precipitation on changes in total biomass

Precipitation is one of the most fundamental meteorological phenomena in nature, and it has a profound impact on the environment. Precipitation is a necessary condition for vegetation growth: plants need water to grow, photosynthesize, and metabolize. Suitable precipitation promotes plant growth and increases yield.
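(Looking back briefly at the temperature term just introduced: a quick numerical illustration in Python is given below; all parameter values are assumptions, not the paper's calibration. The precipitation and light terms are treated next.)

```python
# Temperature-modified logistic growth, dN/dt = r*N*(1 - N/K) - c*(T - T0)*N,
# compared at a baseline temperature and a warmer one.  Parameters are assumed.
import numpy as np
from scipy.integrate import odeint

r, K, c, T0 = 0.5, 100.0, 0.05, 15.0

def rhs(N, t, T):
    return r * N * (1 - N / K) - c * (T - T0) * N

t = np.linspace(0, 40, 400)
for T in (15.0, 20.0):                        # baseline vs. +5 degrees
    N = odeint(rhs, 5.0, t, args=(T,)).ravel()
    print(f"T = {T:4.1f}: biomass after 40 time units ~ {N[-1]:.1f}")
```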
The change in the total biomass of a plant community due to precipitation can be expressed as follows.

\frac{dN}{dt} = rN \left(1 - \frac{N}{K} \right) \frac{W}{W_0}

where N is the number of individuals in the plant community, r is the specific growth rate, K is the environmental holding capacity, W is the amount of precipitation, and W_0 is the optimum rainfall level for plants; the resulting curve is shown in the figure.

The figure shows that, within a certain range, the total amount of plants increases as precipitation increases. Beyond the optimum precipitation W_0, however, the total amount of plants decreases as precipitation increases further.

Effect of sunlight on changes in total biomass

Light is also one of the key factors for plant growth: sunlight provides the energy for photosynthesis, enabling plants to increase their total biomass by converting CO_2 and water into organic matter. The relationship between light and total plant biomass is given by the differential equation

\frac{dN}{dt} = rN\left(1-\frac{N}{K}\right)f(t)

where N denotes the size of the plant community, r denotes the growth rate of plants, K denotes the capacity of the ecosystem, and f(t) denotes the variation of light intensity at time t. In reality, light varies in small cycles of one day and large cycles of one year. To simplify the model, we replace the variation of light intensity by a simple periodic function, such as a sine function:

f(t) = \sin(\omega t)

The Entropy Weight Method

From the above description we know that, in the actual environment, sunlight, precipitation, and temperature all influence the growth of total biomass by affecting the photosynthesis of plants, but they do so in different proportions. Since we need to determine the weights of multiple indicators, we evaluate the influence of each index on the correction term using the entropy weight method.

The entropy weight method is a common way to determine the weights of multiple indicators. By computing the relative entropy values of the indicators, we can quickly and accurately obtain the weighting relationships between them without prior information, which improves the accuracy of the results. For each indicator we first calculate its entropy value E_i:

E_i = - \frac{1}{\ln(m)} \sum_{j=1}^m p_{ij}\ln(p_{ij})

where m denotes the number of samples, p_{ij} denotes the proportion of the j-th sample under the i-th indicator, and \ln denotes the natural logarithm. Then, the weight w_i of each indicator is calculated from its entropy value:

w_i = \frac{1 - E_i}{n - \sum^n_{j=1} E_j}

where n is the number of indicators. Finally, the weights are normalized so that they sum to 1:

\hat{w_i} = \frac{w_i}{\sum^n_{i=1}w_i}

Based on the above, we analyzed the sunlight, precipitation, and temperature data for Yellowstone Park during 2010-2020 using the entropy weight method to determine the weights. The calculation results are shown in the figure.

Solution

We have obtained the relationships between temperature, precipitation, and light and the total plant biomass.
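(For concreteness, the entropy-weight computation just described can be sketched on a small synthetic table; the numbers below are invented purely for illustration and are not the Yellowstone data.)

```python
# Entropy weight method on a synthetic table:
# rows = monthly samples, columns = temperature, precipitation, sunlight.
import numpy as np

X = np.array([[12.0, 30.0, 180.0],
              [15.0, 45.0, 210.0],
              [18.0, 60.0, 240.0],
              [21.0, 20.0, 260.0],
              [17.0, 55.0, 200.0]])

P = X / X.sum(axis=0)                          # p_ij: proportion of each sample per indicator
m = X.shape[0]                                 # number of samples
E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each indicator
w = (1 - E) / (1 - E).sum()                    # weights, normalized to sum to 1
print("entropies:", np.round(E, 4))
print("weights  :", np.round(w, 4))
```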
Then, we obtained the corresponding weights of the three variables by analyzing these three data sets. Next, we need to combine the three factors and use them to correct our model.

Multiple linear fitting is a statistical method for analyzing the relationship between several independent variables and a dependent variable. Through multiple linear fitting we can determine how strongly each independent variable influences the dependent variable and use the fitted result for prediction or interpretation. The general multiple linear regression equation is

y_i = b_1 x_{i1} + b_2 x_{i2} + \cdots + b_n x_{in} + e_i

The three weights obtained above were then used in a multivariate linear fit, and the actual error values were approximated by least squares, minimizing the sum of squared residuals. For the three factors of temperature, precipitation, and light, the linear regression equation is

\delta = \zeta x_1 + \eta x_2 + \xi x_3 + \epsilon

Solving and fitting gives the parameter values of \zeta, \eta, \xi, and \epsilon listed in the table below.

| \zeta   | \eta    | \xi    | \epsilon |
|---------|---------|--------|----------|
| 0.34119 | 1.70569 | 0.5320 | 1.0962   |

\delta = 0.34119 x_1 + 1.70569 x_2 + 0.5320 x_3 + 1.0962

As a result, we obtain a Lotka-Volterra competition model corrected by entropy-weighted multiple regression. Given sunlight, precipitation, and temperature parameters in advance, together with the species numbers over a period of time, it can provide a good prediction of species numbers over time, since the correction accounts for both interspecific effects and environmental factors.

## Model 2: Plant Community Interaction Environment Model

Unlike Model 1, this model takes as its subject three different ecosystems under the same climatic conditions (here, a temperate monsoon climate). Because the ecosystems share the same climate, we do not consider the climatic effects of sunlight, precipitation, and temperature in this model. For a single species in an ecosystem, the number of individuals is continuous over time and, in the absolutely ideal case, grows at a constant rate. The one-dimensional differential equation for the number of individuals with respect to time is

\frac{dN(t)}{dt} = cN(t)

where c denotes the growth rate of the species. However, this holds only under ideal conditions; the number of organisms the environment can bear, i.e., the environmental holding capacity, is limited within the community's range. According to the logistic growth principle, the growth rate of a population slowly decreases with time because of the limited carrying capacity. We therefore include the blocking factor \left(1-\frac{N(t)}{K}\right) and correct the equation to

\frac{dN(t)}{dt} = cN\left(1 - \frac{N}{K}\right)

### Solution to Problem 4

The subjects we selected are all in the same environment, so the effects of climate on the three ecosystems are equivalent on the same time scale; in other words, their effects on species numbers are equivalent. However, in the case of extreme drought, the growth rate of organisms is likely to decrease or even stop because of the lack of water.
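(Before the drought term is added below, the plain logistic form above can be checked numerically in a few lines; all parameter values are arbitrary illustrations.)

```python
# Quick check of the logistic baseline: numerical integration versus the
# closed form N(t) = K / (1 + ((K - N0)/N0) * exp(-c t)).  Parameters assumed.
import numpy as np
from scipy.integrate import odeint

c, K, N0 = 0.3, 500.0, 10.0
t = np.linspace(0, 50, 200)
numeric = odeint(lambda N, t: c * N * (1 - N / K), N0, t).ravel()
closed = K / (1 + (K - N0) / N0 * np.exp(-c * t))
print("max |difference| =", np.max(np.abs(numeric - closed)))   # should be very small
```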
Therefore, taking into account the influence of irregular weather conditions, and under the assumption that environmental factors and interspecific relationships affect community biomass linearly, we derive the following equation.

\frac{dN(t)}{dt} = rN(t)\left(1-\frac{N(t)}{K} - \alpha D\right)

Compared with the previous equation, this one contains an additional term \alpha D, where \alpha measures the effect of environmental factors (here, drought) on community biomass; since \alpha is only a coefficient controlling the degree of drought impact, we temporarily ignore changes in its value. D is the drought index. The number of individuals is now limited not only by the environmental holding capacity K but is also affected by irregular drought conditions. We use the Drought Monitor's drought index to measure the drought conditions of the area. This index is obtained from cumulative drought monitoring data, i.e., the percentages of the D_0 to D_4 categories for a given week are summed to obtain the drought severity and coverage index for that week.

Thus we can answer question four for more frequent and more variable droughts: an increase in drought frequency increases the effect of drought on plant community biomass by increasing \alpha, while greater variability changes plant community biomass through the value of the drought index D.

### Solution to Problem 3

Although the term \alpha D introduced for question four accounts for a more frequent and variable drought climate, the biomass-versus-time curves differ among our chosen study ecosystems, even though the effects of drought on the three ecosystems are simultaneous and equivalent. The figure shows the total biomass of three ecosystems, namely forest, grassland, and desert, obtained after fitting.

The curves grow at different rates even without considering climate and other factors, and the speed of growth is determined by the inherent properties of each ecosystem. Each curve grows fastest in the initial period when biomass is low and then slows down under the influence of the environmental capacity. As can be seen from the figure, the three curves do not overlap and keep a certain distance from each other.

Therefore, we add a coefficient \beta to the above equation and define it as the "species contact coefficient". The species contact coefficient is essentially a coefficient for the difference in species composition between ecosystems; specifically, it captures the effect of interspecific relationships on the biomass of the community, summing the harmful and beneficial contacts between species.

\frac{dN(t)}{dt} = rN(t)\left(1-\frac{N(t)}{K} - \alpha D + \beta\right)

The answer to question 3 is that, under the same environmental factors, differences in species composition affect the growth of total biomass through what we define as the species contact coefficient.
As the number of species increases, the biomass grows faster over time and the growth gradient becomes larger and larger.

### Solution to Problem 2

The differential equation for forest community biomass under given environmental factors and interspecific relationships can be written analogously for the grassland and desert regions, as follows.

\frac{dN_1(t)}{dt} = r_1N_1(t)\left(1-\frac{N_1(t)}{K_1} - \alpha D_1 + \beta_1\right)
\frac{dN_2(t)}{dt} = r_2N_2(t)\left(1-\frac{N_2(t)}{K_2} - \alpha D_2 + \beta_2\right)
\frac{dN_3(t)}{dt} = r_3N_3(t)\left(1-\frac{N_3(t)}{K_3} - \alpha D_3 + \beta_3\right)

where N_1, N_2, and N_3 represent the biomass of forest, grassland, and desert, respectively; r_1, r_2, and r_3 are their growth rates; \alpha and D_1, D_2, D_3 are the drought-impact coefficient and drought indices; \beta_1, \beta_2, and \beta_3 are the species contact coefficients; and K_1, K_2, and K_3 are the maximum environmental holding capacities.

We chose the grassland ecosystem as a benchmark, made a scatter plot of its biomass with respect to time, and fitted it with a function, as shown in the figure.

By comparing the fitted function with our theoretical model, we can derive the difference \delta, which is the value of the "species contact coefficient" mentioned in the previous section; it depends only on the species composition of the community ecosystem, not on the environment or other factors.

Similarly, for the other two ecosystems, we obtain the species contact coefficients of the three ecosystems, as shown in the table below.

| Forest | Grassland | Desert |
|--------|-----------|--------|
| 0.025  | 0.017     | 0.008  |

We next focus on the issue of ecosystem benefit; a plant community benefits its ecosystem in several ways.

1. Photosynthesis: Through photosynthesis, plants absorb carbon dioxide from the air and release oxygen back to the atmosphere, maintaining the carbon cycle and oxygen balance of the ecosphere.

2. Provision of food: As the producers at the bottom of the biosphere, plants transfer energy and provide food to the upper layers of the food chain. Through photosynthesis, plants obtain energy from sunlight and pass it on to higher consumers in the food chain.

3. Habitat: Some large plants also provide habitat and protection for small animals. Large trees shade the sunlight over a wide area, giving animals shelter.

4. Maintaining soil structure and preventing erosion: Plants have well-developed root systems that penetrate deep into the ground, maintaining the soil structure and preventing the erosion caused by rainfall.

We used two parameters, the Fraction of Photosynthetically Active Radiation and Gross Primary Productivity, to evaluate the impact on the ecosystem.

The Fraction of Photosynthetically Active Radiation is the proportion of radiant energy in the visible range of 400-700 nm relative to the radiant energy over the full wavelength range; this energy can be absorbed by pigments such as chlorophyll and converted into chemical energy, promoting photosynthesis and thus oxygen production. Gross Primary Productivity is the rate at which organisms convert inorganic substances into organic substances through photosynthesis or chemosynthesis in an ecosystem over a certain period of time.
These organic substances supply the organism's own energy and nutrient needs and support the survival of other organisms in the ecosystem.

Therefore, based on the above, it is reasonable to use these two quantities to determine the impact of the community on the ecosystem.

Since both variables show clear periodicity, we first group them by month for the years 2010-2020 and perform a linear fit to each, using the slope k of the fitted function as the benefit coefficient.

Similarly, the species contact coefficients of the three ecosystems and their benefit coefficients can be calculated as in the table below.

| Species contact coefficient | Benefit coefficient |
|-----------------------------|---------------------|
| 0.025                       | 0.06704             |
| 0.017                       | 0.04451             |
| 0.008                       | 0.01524             |

A linear fit of the benefit coefficient against the species contact coefficient gives

y = 3.051x - 0.02858

When y = 0, we have x = 0.00937.

That is, when the species contact coefficient is 0.00937, the benefit coefficient of the community to the ecosystem is 0. At this point the species contact coefficient corresponds to 1.17125 times that of the grassland, and since the grassland has an average of 393 different species, we conclude that the minimum number of species needed for the community to benefit the ecosystem is about 460.

The equation derived here also reinforces the conclusion of Problem 4: when droughts occur more frequently and with wider variation, rainfall decreases, and since the independent variable is positively correlated with the response variable y, more frequent and more variable droughts also lead to a decrease in species biomass.

### Solution to Problem 5

The questions above explore the relationship between community biomass and time without considering human interference, but real life is not so kind. In addition to natural factors, human-caused impacts affect plants even more severely, for example air pollution, dust pollution, toxins, and crop diseases.

For less severe impacts, we consider that such effects differ from the environmental drought discussed earlier: although they also reduce total biomass, the rate of reduction should increase exponentially.

In the equation, P represents the negative effect of environmental pollution on total biomass. This indicates that organisms are very sensitive to pollution, implying that a very small amount of pollution can be devastating to an entire biome or even cause the extinction of a specific species.

## Sensitivity Analysis

Considering that communities with different numbers of species benefit the environment to different extents, we test the sensitivity of the model by performing a sensitivity analysis of Problem 2 using Model 2. Based on the above study, we increase the species contact coefficient by 5% to obtain a set of control curves, from which the results shown in the figure are calculated.

From the data in the figure, it can be concluded that the benefit coefficient responds sensitively as the species contact coefficient increases.

## Discussion

### Solution to Problem 6

According to the flowchart, abnormal weather cycles have a double impact on the diversity of plant communities.
On the one hand, if plant communities become more diverse, the competition between different plant species becomes more intense, so the number of each species decreases, which has a negative impact on plant diversity. On the other hand, biodiversity increases the stability of the ecosystem: when unusual weather cycles occur, ecosystems with high biodiversity are more likely to maintain reproductive efficiency and avoid collapse.

The importance of biodiversity differs between environments. Arid environments favor low plant diversity because the environment there is stable, so maintaining efficient reproduction is the priority. As community diversity increases, maintaining the stability of the system becomes more important; therefore, in forest environments, plant community diversity is high.

### Strengths

• Our model rests on a solid theoretical foundation. The corresponding parameters were selected by reviewing a large amount of literature. The model solution fits the real values well, which shows that our model can predict the changes in plant populations under abnormal weather conditions fairly accurately.

• Since nature is full of unknowns, we made a reasonable simplification by selecting, among the climatic factors, those with the greatest influence on plants: temperature, precipitation, and light. We used the forest scenario for prediction, then calculated the grassland and desert cases and tested the accuracy of the model, which reflects the generality of our model to some extent.

• We verified the stability of the model through reasonable assumptions and a sensitivity analysis.

### Weaknesses

• The growth rate of a given species is not constant, owing to environmental changes and human factors, and this paper does not analyze such changes in growth rate.

• Nature is a chaotic system with many uncontrollable factors, such as severe weather conditions that we cannot predict.

### Further Work

In this paper we only studied the United States, and drought areas around the world do not necessarily share the same climate, so a larger study scope should be selected. In the future, better models or more advanced algorithms could extend the model to more objects and a wider range of applications.

## Reference

• Clark, J., Iverson, L., Woodall, C., Allen, C., Bell, D., Bragg, D., D'Amato, A., Davis, F., Hersh, M., Ibanez, I., & others (2016). The impacts of increasing drought on forest dynamics, structure, and biodiversity in the United States. Global Change Biology, 22(7), 2329–2352.
• Grossiord, C., Granier, A., Gessler, A., Jucker, T., & Bonal, D. (2014). Does drought influence the relationship between biodiversity and ecosystem functioning in boreal forests?. Ecosystems, 17, 394–404.
• Tilman, D., Reich, P., & Knops, J. (2006). Biodiversity and ecosystem stability in a decade-long grassland experiment. Nature, 441(7093), 629–632.
• Huang, W., Wang, W., Cao, M., Fu, G., Xia, J., Wang, Z., & Li, J. (2021). Local climate and biodiversity affect the stability of China's grasslands in response to drought. Science of the Total Environment, 768, 145482.
• Prugh, L., Deguines, N., Grinath, J., Suding, K., Bean, W., Stafford, R., & Brashares, J. (2018). Ecological winners and losers of extreme drought in California.
Nature Climate Change, 8(9), 819–824.
• Spinoni, J., Naumann, G., Carrao, H., Barbosa, P., & Vogt, J. (2014). World drought frequency, duration, and severity for 1951–2010. International Journal of Climatology, 34(8), 2792–2804.
• Zvereva, E., Toivonen, E., & Kozlov, M. (2008). Changes in species richness of vascular plants under the impact of air pollution: a global perspective. Global Ecology and Biogeography, 17(3), 305–319.
• Lazzaro, L., Bolpagni, R., Buffa, G., Gentili, R., Lonati, M., Stinca, A., Acosta, A., Adorni, M., Aleffi, M., Allegrezza, M., & others (2020). Impact of invasive alien plants on native plant communities and Natura 2000 habitats: State of the art, gap analysis and perspectives in Italy. Journal of Environmental Management, 274, 111140.
• Wang, J., Wang, W., Li, J., Feng, Y., Wu, B., & Lu, Q. (2017). Biogeographic patterns and environmental interpretation of plant species richness in desert regions of Northwest China. Biodiversity Science, 25(11), 1192.
[ null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, 
"data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null, "data:image/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9156927,"math_prob":0.9601541,"size":46753,"snap":"2023-40-2023-50","text_gpt3_token_len":9784,"char_repetition_ratio":0.16860254,"word_repetition_ratio":0.036479205,"special_character_ratio":0.20740914,"punctuation_ratio":0.118962646,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97161835,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T13:10:43Z\",\"WARC-Record-ID\":\"<urn:uuid:bdab857c-d698-4826-9bff-67b95aaf1240>\",\"Content-Length\":\"172403\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e79ee508-1ef9-4c8f-badf-a19ab0e3ae5f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d7db284-0fea-4bbc-8e7b-9681e7f2e2ab>\",\"WARC-IP-Address\":\"42.194.158.93\",\"WARC-Target-URI\":\"https://blog.sakurapuare.com/archives/2023/02/how-many-plants-can-hit-a-drought-one-or-two-more/\",\"WARC-Payload-Digest\":\"sha1:OPAZGOLUY2EL44ZSJRQHNKPP7LCPVTTI\",\"WARC-Block-Digest\":\"sha1:4OQGAP7K2I7OM2XMK6AL4SHJ7KZKBEGA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100227.61_warc_CC-MAIN-20231130130218-20231130160218-00562.warc.gz\"}"}
https://www.logicbig.com/tutorials/core-java-tutorial/java-language/ternary-numeric-rules.html
[ "# Java Ternary ?: operator - Numeric operands conversion rules\n\n[Last Updated: Jun 3, 2017]\n\nJava ternary expressions `(<boolean expression>? expr1: expr2)` are considered to be shorthand for if/else statement but they are not always equivalent and developers may have unexpected results in some cases. In this tutorial, we will go through numeric ternary expression type conversion rules (JLS 15.25.2) along with examples. We will also be quoting the statements from JLS 15.25.2.\n\n## Rule 1: Same Type Rule\n\nIf the second and third operands have the same type, then that is the type of the conditional expression.\n\nThis is very straight forward rule:\n\n` boolean b = true; int i = 5; int j = 7; int r = b ? j : i; System.out.println(r);`\n``` 7\n```\n` boolean b2 = true; Double d1 = Double.valueOf(3); Double d2 = Double.valueOf(4); System.out.println(b2 ? d1 : d2);`\n``` 3.0\n```\n` boolean b3 = true; Double d3 = null; Double d4 = Double.valueOf(4); System.out.println(b3 ? d3 : d4);`\n``` null\n```\n\n## Rule 2: A primitive with it's boxed operand rule\n\nIf one of the second and third operands is of primitive type T, and the type of the other is the result of applying boxing conversion (§5.1.7) to T, then the type of the conditional expression is T.\n\nThat means, the ternary expressions evaluates to the primitive one, given that one operands is primitive and other one is it's boxed version. The unboxing of boxed type may be performed, depending whether the boolean part of ternary expression is true or false.\n\n` boolean b = true; double op1 = 5d; Double op2 = 7d;//boxed double r = b ? op2 : op1; System.out.println(r);`\n``` 7.0\n```\n\nBut how do we know, in above example, that double r is the correct assigned type and we are not doing an unboxing on wrapper Double. Let's see another example to confirm that.\n\n```public class Rule2Example {\npublic static void main(String[] args) {\nboolean b = true;\ndouble op1 = 5d;\nDouble op2 = 7d;//boxed\nprint(b ? op2 : op1);\n}\n\nprivate static void print(double d) {\nSystem.out.println(\"primitive double\");\nSystem.out.println(d);\n}\n\nprivate static void print(Double d) {\nSystem.out.println(\"boxed double\");\nSystem.out.println(d);\n}\n}```\n\n#### Output\n\n`primitive double7.0`\n\nIn above example the overloaded method with primitive double gets called (most specific method is selected), that confirms our ternary expression evaluates to primitive double and not to it's wrapper Double.\n\n Be careful of the NullPointerException:` boolean b = true; Integer i = null; int j = 5; System.out.println(b ? i : j);```` java.lang.NullPointerException at com.logicbig.example.Test.convert(Test.java:17) at com.logicbig.Common.feature.tasks.dynamic.CodeParser.lambda\\$getHtmlOutput\\$0(CodeParser.java:85) at com.logicbig.Common.code.CmdUtils.captureStdOutput(CmdUtils.java:19) ...... ```\n Above NPE happens because null value of Integer i, cannot be unboxed to primitive int. One way to fix above example is: ` boolean b = true; Integer i = null; int j = 5; System.out.println(b ? 
i : Integer.valueOf(j));```` null ```\n After above change, Rule 2 does not apply anymore, because both operands are of same type Integer now (rule 1 applies here).\n\n## Rule 3: byte/Byte with short/Short rule\n\nIf one of the operands is of type byte or Byte and the other is of type short or Short, then the type of the conditional expression is short.\n\nIn this case result is always primitive short.\n\n```public class Rule3Example {\npublic static void main(String[] args) {\nboolean b = true;\nshort s = 2;\nbyte t = 3;\nprint(b ? t : s);\n\nShort s2 = 4;\nbyte t2 = 5;\nprint(b ? t2 : s2);\n\nshort s3 = 6;\nByte t3 = 7;\nprint(b ? t3 : s3);\n\nShort s4 = 8;\nByte t4 = 9;\nprint(b ? t4 : s4);\n}\n\nprivate static void print(byte b) {\nSystem.out.println(\"primitive byte\");\nSystem.out.println(b);\n}\n\nprivate static void print(Byte b) {\nSystem.out.println(\"Byte\");\nSystem.out.println(b);\n}\n\nprivate static void print(short s) {\nSystem.out.println(\"primitive short\");\nSystem.out.println(s);\n}\n\nprivate static void print(Short s) {\nSystem.out.println(\"Short\");\nSystem.out.println(s);\n}\n}```\n\n#### Output\n\n`primitive short3primitive short5primitive short7primitive short9`\n\n## Rule 4: byte/short/char with constant int rule\n\nIf one of the operands is of type T where T is byte, short, or char, and the other operand is a constant expression (§15.28) of type int whose value is representable in type T, then the type of the conditional expression is T.\n\nIn this case result is same as of the primitive operand type (byte/short/char).\n\n` boolean b = true; char c = 'a'; System.out.println(b ? 100 : c);`\n``` d\n```\nIn above example, the constant 100 is converted to char.\n\nOther examples:\n\n```public class Rule4Example {\npublic static void main(String[] args) {\nboolean b = true;\nbyte t = 2;\nshort s = 3;\nchar c = 'z';\nprint(b ? 120 : t);\nprint(b ? 120 : s);\nprint(b ? 120 : c);\n}\n\nprivate static void print(byte b) {\nSystem.out.print(\"byte: \");\nSystem.out.println(b);\n}\n\nprivate static void print(short s) {\nSystem.out.print(\"short: \");\nSystem.out.println(s);\n}\n\nprivate static void print(char c) {\nSystem.out.print(\"char: \");\nSystem.out.println(c);\n}\n\nprivate static void print(int i) {\nSystem.out.print(\"int: \");\nSystem.out.println(i);\n}\n}```\n\n#### Output\n\n`byte: 120short: 120char: x`\n\nWe should also understand that the target type of an assignment can change the final type:\n\n` boolean b = true; char c = 'a'; int i = b ? 100 : c; System.out.println(i);`\n``` 100\n```\n Generally, a final left side conversion of the assignment, may be performed during runtime. In above example, during compile time, the right side is decided to be converted to 'char' (according the rule we are discussing). During runtime, real conversion happens and the ternary expression returns a 'char' and then the char is converted (one more time) to left side int. Same rule is applied at other places like the method return type or the method argument. Let's see what happens, if we assign it to an Object. In that case right side, which evaluates to char, will be boxed to Character. ` boolean b = true; char c = 'a'; Object o = b ? 
100 : c; System.out.println(o); System.out.println(o.getClass());```` d class java.lang.Character ```\n\n## Rule 5: Byte/Short/Character with constant int rule\n\nIf one of the operands is of type T, where T is Byte, Short, or Character, and the other operand is a constant expression of type int whose value is representable in the type U which is the result of applying unboxing conversion to T, then the type of the conditional expression is U.\n\nThat simply means, the result will be of the primitive int type. The unboxing of the Byte/Short/Character operand to an int value may be performed, depending whether boolean part of the ternary expression is true or false.\n\n```public class Rule5Example {\npublic static void main(String[] args) {\nboolean b = false;\nByte t = Byte.valueOf((byte) 2);\nShort s = Short.valueOf((short) 3);\nCharacter c = Character.valueOf('z');\nprint(b ? 120 : t);\nprint(b ? 120 : s);\nprint(b ? 120 : c);\n}\n\nprivate static void print(Byte b) {\nSystem.out.print(\"Byte: \");\nSystem.out.println(b);\n}\n\nprivate static void print(Short s) {\nSystem.out.print(\"Short: \");\nSystem.out.println(s);\n}\n\nprivate static void print(Character c) {\nSystem.out.print(\"Character: \");\nSystem.out.println(c);\n}\n\nprivate static void print(int i) {\nSystem.out.print(\"int: \");\nSystem.out.println(i);\n}\n}```\n\n#### Output\n\n`int: 2int: 3int: 122`\n\n## Rule 6: The binary promotion rule\n\nOtherwise, binary numeric promotion (§5.6.2) is applied to the operand types, and the type of the conditional expression is the promoted type of the second and third operands.\n\nThat means, if none of the above Rule 1 to Rule 5 applies then ternary expression will be evaluated to a type which can be assignable to both numeric operands. A widening primitive conversion may also be applied.\n\n` boolean b = true; int i = 4; double d = 5; System.out.println(b ? i : d);`\n``` 4.0\n```\n In above example the int value of 4 is promoted to double, that's because 'double' type is the one which is assignable to both 'int' and 'double'. More examples: ` boolean b = true; char c = 't'; double d = 5; System.out.println(b ? c : d);```` 116.0 ```\n An unboxing of a Number wrapper may be performed` boolean b = true; Short s = 2; float f = 5; System.out.println(b ? s : f);```` 2.0 ```\n\nAll primitives:\n\n```public class Rule6Example {\n\npublic static void main(String[] args) {\nboolean b = true;\nbyte t = 1;\nShort s = 2;\nint i = 3;\nlong l = 4;\nfloat f = 6;\ndouble d = 7;\n\nprint(b ? t : l);\nprint(b ? t : f);\nprint(b ? s : l);\nprint(b ? i : f);\nprint(b ? l : f);\nprint(b ? s : d);\nprint(b ? 
i : l);\n}\n\nprivate static void print(byte v) {\nSystem.out.print(\"primitive byte, \");\nSystem.out.println(v);\n}\n\nprivate static void print(short s) {\nSystem.out.print(\"primitive short: \");\nSystem.out.println(s);\n}\n\nprivate static void print(int v) {\nSystem.out.print(\"primitive int: \");\nSystem.out.println(v);\n}\n\nprivate static void print(long v) {\nSystem.out.print(\"primitive long: \");\nSystem.out.println(v);\n}\n\nprivate static void print(float v) {\nSystem.out.print(\"primitive float: \");\nSystem.out.println(v);\n}\n\nprivate static void print(double v) {\nSystem.out.print(\"primitive double: \");\nSystem.out.println(v);\n}\n\nprivate static void print(Object v) {\nSystem.out.printf(\"wrapper : %s, \", v.getClass());\nSystem.out.println(v);\n}\n}```\n\n#### Output\n\n`primitive long: 1primitive float: 1.0primitive long: 2primitive float: 3.0primitive float: 4.0primitive double: 2.0primitive long: 3`\n\nPrimitives and boxed type mixed (the result is always of primitive type):\n\n```public class Rule6Example2 {\n\npublic static void main(String[] args) {\nboolean b = true;\nByte t = 1;\nShort s = 2;\nint i = 3;\nInteger i2 = 4;\nLong l = 5L;\nFloat f = 6F;\nDouble d = 7d;\nDouble d2 = 8d;\n\nprint(b ? t : l);\nprint(b ? t : f);\nprint(b ? s : l);\nprint(b ? i : f);\nprint(b ? l : f);\nprint(b ? s : d);\nprint(b ? i : l);\nprint(b ? i2 : l);\nprint(b ? f : l);\nprint(b ? f : d);\nprint(b ? d2 : d);//it's rule 1\n}\n.............\n}```\n\n#### Output\n\n`primitive long: 1primitive float: 1.0primitive long: 2primitive float: 3.0primitive float: 5.0primitive double: 2.0primitive long: 3primitive long: 4primitive float: 6.0primitive double: 6.0wrapper : class java.lang.Double, 8.0`\n\n## Example Project\n\nDependencies and Technologies Used:\n\n• JDK 1.8\n• Maven 3.3.9\n\n Ternary Numeric Expression Examples", null, "Select All", null, "Download", null, "• ternary-examples\n• src\n• main\n• java\n• com\n• logicbig\n• example\n• Rule2Example.java\n Share", null, "" ]
[ null, "https://www.logicbig.com/images/view_fullscreen.png", null, "https://www.logicbig.com/images/select_all.png", null, "https://www.logicbig.com/images/download.png", null, "https://www.logicbig.com/images/share-blue.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6264903,"math_prob":0.9568067,"size":9983,"snap":"2023-40-2023-50","text_gpt3_token_len":2657,"char_repetition_ratio":0.22206634,"word_repetition_ratio":0.24939613,"special_character_ratio":0.29660422,"punctuation_ratio":0.24409449,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9923041,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T02:37:48Z\",\"WARC-Record-ID\":\"<urn:uuid:bbec6819-25ea-47a2-ab93-fc8133dc0809>\",\"Content-Length\":\"69816\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1c8f9f9-00ff-4995-b714-8549eb4bb896>\",\"WARC-Concurrent-To\":\"<urn:uuid:e647c482-083e-4014-b0ce-bcb99f60ff8f>\",\"WARC-IP-Address\":\"74.208.236.238\",\"WARC-Target-URI\":\"https://www.logicbig.com/tutorials/core-java-tutorial/java-language/ternary-numeric-rules.html\",\"WARC-Payload-Digest\":\"sha1:R73AC2IOY7PBYZ4STFQ4ZL25BYMMTIN5\",\"WARC-Block-Digest\":\"sha1:ZZDB7WFSKMUF4VVLNJ42NV353RYMXKEN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100264.9_warc_CC-MAIN-20231201021234-20231201051234-00787.warc.gz\"}"}
https://www.systutorials.com/docs/linux/man/docs/linux/man/3-std%3A%3Aldexp%2Cstd%3A%3Aldexpf%2Cstd%3A%3Aldexpl/
[ "# std::ldexp,std::ldexpf,std::ldexpl (3) - Linux Man Pages\n\n## NAME\n\nstd::ldexp,std::ldexpf,std::ldexpl - std::ldexp,std::ldexpf,std::ldexpl\n\n## Synopsis\n\nfloat ldexp ( float x, int exp );\nfloat ldexpf( float x, int exp ); (since C++11)\ndouble ldexp ( double x, int exp ); (1) (2)\nlong double ldexp ( long double x, int exp );\nlong double ldexpl( long double x, int exp ); (3) (since C++11)\ndouble ldexp ( IntegralType x, int exp ); (4) (since C++11)\n\n1-3) Multiplies a floating point value x by the number 2 raised to the exp power.\n4) A set of overloads or a function template accepting an argument of any integral_type. Equivalent to (2) (the argument is cast to double).\n\n## Parameters\n\nx - floating point value\nexp - integer value\n\n## Return value\n\nIf no errors occur, x multiplied by 2 to the power of exp (x×2exp\n) is returned.\nIf a range error due to overflow occurs, ±HUGE_VAL, ±HUGE_VALF, or ±HUGE_VALL is returned.\nIf a range error due to underflow occurs, the correct result (after rounding) is returned.\n\n## Error handling\n\nErrors are reported as specified in math_errhandling.\nIf the implementation supports IEEE floating-point arithmetic (IEC 60559),\n\n* Unless a range error occurs, FE_INEXACT is never raised (the result is exact)\n* Unless a range error occurs, the_current_rounding_mode is ignored\n* If x is ±0, it is returned, unmodified\n* If x is ±∞, it is returned, unmodified\n* If exp is 0, then x is returned, unmodified\n* If x is NaN, NaN is returned\n\n## Notes\n\nOn binary systems (where FLT_RADIX is 2), std::ldexp is equivalent to std::scalbn.\nThe function std::ldexp (\"load exponent\"), together with its dual, std::frexp, can be used to manipulate the representation of a floating-point number without direct bit manipulations.\nOn many implementations, std::ldexp is less efficient than multiplication or division by a power of two using arithmetic operators.\n\n## Example\n\n// Run this code\n\n#include <iostream>\n#include <cmath>\n#include <cerrno>\n#include <cstring>\n#include <cfenv>\n\n#pragma STDC FENV_ACCESS ON\nint main()\n{\nstd::cout << \"ldexp(7, -4) = \" << std::ldexp(7, -4) << '\\n'\n<< \"ldexp(1, -1074) = \" << std::ldexp(1, -1074)\n<< \" (minimum positive subnormal double)\\n\"\n<< \"ldexp(nextafter(1,0), 1024) = \"\n<< std::ldexp(std::nextafter(1,0), 1024)\n<< \" (largest finite double)\\n\";\n// special values\nstd::cout << \"ldexp(-0, 10) = \" << std::ldexp(-0.0, 10) << '\\n'\n<< \"ldexp(-Inf, -1) = \" << std::ldexp(-INFINITY, -1) << '\\n';\n// error handling\nerrno = 0;\nstd::feclearexcept(FE_ALL_EXCEPT);\nstd::cout << \"ldexp(1, 1024) = \" << std::ldexp(1, 1024) << '\\n';\nif (errno == ERANGE)\nstd::cout << \" errno == ERANGE: \" << std::strerror(errno) << '\\n';\nif (std::fetestexcept(FE_OVERFLOW))\nstd::cout << \" FE_OVERFLOW raised\\n\";\n}\n\n## Output:\n\nldexp(7, -4) = 0.4375\nldexp(1, -1074) = 4.94066e-324 (minimum positive subnormal double)\nldexp(nextafter(1,0), 1024) = 1.79769e+308 (largest finite double)\nldexp(-0, 10) = -0\nldexp(-Inf, -1) = -inf\nldexp(1, 1024) = inf\nerrno == ERANGE: Numerical result out of range\nFE_OVERFLOW raised\n\nfrexp\nfrexpf\nfrexpl decomposes a number into significand and a power of 2\n(function)\n\n(C++11)\n(C++11)\n\nscalbn\nscalbnf\nscalbnl\nscalbln\nscalblnf\nscalblnl multiplies a number by FLT_RADIX raised to a power\n(function)\n(C++11)\n(C++11)\n(C++11)\n(C++11)\n(C++11)\n(C++11)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67928284,"math_prob":0.9781343,"size":2227,"snap":"2020-45-2020-50","text_gpt3_token_len":635,"char_repetition_ratio":0.15294647,"word_repetition_ratio":0.09169055,"special_character_ratio":0.2739111,"punctuation_ratio":0.20042644,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992995,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T06:04:04Z\",\"WARC-Record-ID\":\"<urn:uuid:e6dd239f-ef5e-4cd2-a3c4-30fe9bc4bae8>\",\"Content-Length\":\"16978\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:493e8c87-b03d-4666-8717-8e3ab57e5c8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7af7034-17bd-4b01-bbc3-b7a75a7a622b>\",\"WARC-IP-Address\":\"104.18.35.215\",\"WARC-Target-URI\":\"https://www.systutorials.com/docs/linux/man/docs/linux/man/3-std%3A%3Aldexp%2Cstd%3A%3Aldexpf%2Cstd%3A%3Aldexpl/\",\"WARC-Payload-Digest\":\"sha1:B6S27QF6G3AJICBL3HZXCQCBDPAV6OGD\",\"WARC-Block-Digest\":\"sha1:KRKKJCYZGVL3HRZ45PLV3BXXD27363HL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141189141.23_warc_CC-MAIN-20201127044624-20201127074624-00228.warc.gz\"}"}
https://www.physicsforums.com/threads/discrete-math-question.504862/
[ "# Discrete math question?\n\n## Homework Statement\n\nHow many positive divisors does each of the following have?\n\n$$2^n$$ where n is a positive integer.\nand 30\n\n## The Attempt at a Solution\n\nfor 30 i get 2 , 5 , 3 , 10\nbut my book says 2 ,3 ,5 I dont understand why 10 isn't a divisor.\nand for 2^n im trying to look for a pattern if n=1 i get no divisors\nand n=2 i get 1 divisor and n=3 i get 2 divisors so would it be\n2^n has n-1 divisors?\n\ntiny-tim\nHomework Helper\nhi cragar!", null, "(try using the X2 tag just above the Reply box, and write \"itex\" rather than \"tex\", and it won't keep starting a new line", null, ")\nHow many positive divisors does each of the following have?\n\nfor 30 i get 2 , 5 , 3 , 10\nbut my book says 2 ,3 ,5\n\ni suspect that that's just a hint, and they're telling you those are the prime divisors, and leaving you to carry on from there\n\n(btw, you've missed out two more)\nand for 2^n im trying to look for a pattern if n=1 i get no divisors\nand n=2 i get 1 divisor and n=3 i get 2 divisors so would it be\n2^n has n-1 divisors?\n\nyes", null, "(though you should be able to prove it more rigorously than that!", null, ")\n\nok thanks for your post. so would all the divisors of 30 be 1 , 2 ,5,6,10 ,15 . is one a divisor. for $2^n$ to have divisors it has to be a multiple of 2 so would I divide it by 2 and then i would get $2^{n-1}$\nthen could i say it has n-1 divisors\n\ntiny-tim\nHomework Helper\nhi cragar!", null, "would all the divisors of 30 be 1 , 2 ,5,6,10 ,15 .\n\nyes", null, "(except i don't know whether 1 counts as a divisor", null, ")\nfor $2^n$ to have divisors it has to be a multiple of 2 so would I divide it by 2 and then i would get $2^{n-1}$\nthen could i say it has n-1 divisors\n\nbetter would be …\n\n2n has only one prime divisor, 2 …\n\nso its only divisors are 2k for 0 < k < n, of which there are n - 1", null, "(and now try a similar proof for 30", null, ")" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://www.physicsforums.com/styles/physicsforums/xenforo/smilies/oldschool/redface.gif", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8387721,"math_prob":0.94547874,"size":428,"snap":"2021-31-2021-39","text_gpt3_token_len":148,"char_repetition_ratio":0.1485849,"word_repetition_ratio":0.0,"special_character_ratio":0.35747662,"punctuation_ratio":0.09259259,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99483263,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-23T23:48:02Z\",\"WARC-Record-ID\":\"<urn:uuid:b2608046-2787-4526-bc65-f84f0c9909ea>\",\"Content-Length\":\"69711\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:57f5b7ac-812f-43e7-b1a7-32d4db55f42d>\",\"WARC-Concurrent-To\":\"<urn:uuid:78e1ad8e-3317-4880-9baa-43b7f4f34141>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/discrete-math-question.504862/\",\"WARC-Payload-Digest\":\"sha1:7T4CILJCDAHSR6GM3JNYJRFUMLKT5TAC\",\"WARC-Block-Digest\":\"sha1:3G74EAOZVU24EVLAYKSZ5URIZH6R3GPL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150067.51_warc_CC-MAIN-20210723210216-20210724000216-00557.warc.gz\"}"}
https://encyclopediaofmath.org/wiki/Likelihood-ratio_test
[ "# Likelihood-ratio test\n\nA statistical test based on the ratio of the greatest values of the likelihood functions under the hypothesis being tested and under all possible states of nature. Let a random variable $X$ have values in the sample space $\\{ \\mathfrak X , {\\mathcal B} , {\\mathsf P} _ \\theta \\}$, $\\theta \\in \\Theta$, let the family of measures ${\\mathcal P} = \\{ { {\\mathsf P} _ \\theta } : {\\theta \\in \\Theta } \\}$ be absolutely continuous with respect to a $\\sigma$- finite measure $\\mu$ and let $p _ \\theta ( x) = d {\\mathsf P} _ \\theta ( x)/d \\mu ( x)$. Suppose it is necessary, via a realization of the random variable $X$, to test the composite hypothesis $H _ {0}$ according to which the unknown true value $\\theta _ {0}$ of the parameter $\\theta$ belongs to the set $\\Theta _ {0} \\subset \\Theta$, against the composite alternative $H _ {1} : \\theta _ {0} \\in \\Theta _ {1} = \\Theta \\setminus \\Theta _ {0}$. According to the likelihood-ratio test with significance level $\\alpha$, $0 < \\alpha < 1/2$, the hypothesis $H _ {0}$ has to be rejected if as a result of the experiment it turns out that $\\lambda ( x) \\leq \\lambda _ \\alpha$, where $\\lambda ( X)$ is the statistic of the likelihood-ratio test, defined by:\n\n$$\\lambda ( X) = \\frac{\\sup _ {\\theta \\in \\Theta _ {0} } \\ p _ \\theta ( X) }{\\sup _ {\\theta \\in \\Theta } p _ \\theta ( X) } ,$$\n\nwhile $\\lambda _ \\alpha$ is the critical level determined by the condition that the size of the test,\n\n$$\\sup _ {\\theta \\in \\Theta _ {0} } {\\mathsf P} _ \\theta \\{ \\lambda ( x) \\leq \\lambda _ \\alpha \\} = \\ \\sup _ {\\theta \\in \\Theta _ {0} } \\ \\int\\limits _ {\\{ {x } : {\\lambda ( x) \\leq \\lambda _ \\alpha } \\} } p _ \\theta ( x) \\mu ( dx) ,$$\n\nis equal to $\\alpha$. In particular, if the set $\\Theta$ contains only two points $\\Theta = \\{ {\\mathsf P} _ {0} , {\\mathsf P} _ {1} \\}$, with densities $p _ {0} ( \\cdot )$ and $p _ {1} ( \\cdot )$ respectively, corresponding to the concurrent hypotheses which, in this case, are simple, then the statistic of the likelihood-ratio test is simply\n\n$$\\lambda ( X) = \\ \\frac{p _ {0} ( X) }{\\max \\{ p _ {0} ( X), p _ {1} ( X) \\} } = \\ \\min \\left \\{ 1, \\frac{p _ {0} ( X) }{p _ {1} ( X) } \\right \\} .$$\n\nAccording to the likelihood-ratio test with significance level $\\alpha$, the hypothesis $H _ {0}$ has to be rejected if $p _ {0} ( X)/p _ {1} ( X) \\leq \\lambda _ \\alpha$, where the number $\\lambda _ \\alpha$, $0 < \\lambda _ \\alpha < 1$, is determined by the condition\n\n$${\\mathsf P} \\{ \\lambda ( X) < \\lambda _ \\alpha \\mid H _ {0} \\} =$$\n\n$$= \\ \\int\\limits _ {\\{ x: p _ {0} ( x) \\leq p _ {1} ( x) \\lambda _ \\alpha \\} } p _ {0} ( x) \\mu ( dx) = \\alpha .$$\n\nThe (generalized) likelihood-ratio test was proposed by J. Neyman and E.S. Pearson in 1928. They also proved (1933) that of all level- $\\alpha$ tests for testing one simple hypothesis against another, the likelihood-ratio test is the most powerful (see Neyman–Pearson lemma).\n\nHow to Cite This Entry:\nLikelihood-ratio test. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Likelihood-ratio_test&oldid=47635\nThis article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69077456,"math_prob":0.99960417,"size":3380,"snap":"2023-40-2023-50","text_gpt3_token_len":1073,"char_repetition_ratio":0.17387441,"word_repetition_ratio":0.11516035,"special_character_ratio":0.38698226,"punctuation_ratio":0.10339257,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000086,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T05:22:57Z\",\"WARC-Record-ID\":\"<urn:uuid:163fb435-6c8a-46ff-9202-770d1ad6223a>\",\"Content-Length\":\"17581\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:523f466f-5af1-4e9b-8456-26d9bd443af5>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1f09615-3393-4a1a-a857-1dba9ad25620>\",\"WARC-IP-Address\":\"34.96.94.55\",\"WARC-Target-URI\":\"https://encyclopediaofmath.org/wiki/Likelihood-ratio_test\",\"WARC-Payload-Digest\":\"sha1:3BFCABTQWJ5BCINVBW6AW6FSQNJSGLOD\",\"WARC-Block-Digest\":\"sha1:ULECO7IFJJU2QKWNIP7XEL6WNZB4ZYNK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506479.32_warc_CC-MAIN-20230923030601-20230923060601-00258.warc.gz\"}"}
http://applet-magic.com/curveturning.htm
[ " Theorems Concerning the Total Turning Angle of a Curve\nSan José State University\n\napplet-magic.com\nThayer Watkins\nSilicon Valley\nU.S.A.\n\nTheorems Concerning the\nTotal Turning Angle of a Curve\n\nWhen a curve such as the following one is traversed the direction of travel completes a full circle of 2π radians.", null, "In the following it is assumed that a curve is transversed in a counterclockwise direction; i.e., movement along the curve is with the interior of the curve on the left.\n\nLet A be the accumulated change in the direction angle for the curve at the beginning and end of its traversing.", null, "If the curve is simple; i.e., non-self-intersecting then A is equal to 2π. The starting point on the curve for the traversing does not matter.", null, "For a figure 8 curve A is equal to zero.", null, "For the curve below A is equal to 2(2π)=4π.", null, "## The Net Turning Angle for a Polygon\n\nConsider a polygon with n sides (and corners). For now its interior will be assumed to be convex and later the consequences of any nonconvexity will be examined.", null, "Choose a point P in the interior of the polygon and draw straight lines from P to each of the corner points.", null, "This creates n triangles. Let these triangles be labeled sequentially from 1 to n and their angles designated (ai, bi, ci) with ai being the angle impinging upon P.", null, "The interior angles di of the polygon are of the form\n\n#### di = ci-1 + biexcept for d1 = cn + b1\n\nThe exterior angles ei of the polygon are\n\nTherefore\n\n#### Σei = nπ − Σdi = nπ − (Σci + Σbi)\n\nBut for every triangle\n\nHence\n\n#### Σai + Σbi + Σci = nπ\n\nHowever Σai specifically is equal to 2π. Therefore\n\n#### Σbi + Σci = (n−2)π and hence Σei = nπ − (n−2)π = 2π\n\n(To be continued.)\n\nAppendix:\n\nLet X(t) be the vector of the coordinates x(t) and y(t) of a plane curve. The functions x(t) and y(t) are continuous and at least one of the right and left derivatives X'(t- and X'(t+, exist at every point. These derivatives define unit tangent vectors, T(t- and T(t+, at each point, where possibly the right and left unit tangents may be different. LIkewise there are unit normal vectors, N(t- and N(t+, defined at each point.\n\nThe angle α between a unit tangent vector and the unit vector in the x direction is given by\n\n#### T(t)·(1, 0) = Tx(t) = cos(α) and hence α = cos-1(Tx(t)) where Tx(t) = x'(t)/[(x'(t))²+(y'(t))²]½\n\nThis turn angle α may also be constructed by\n\n#### dα/dt = k(t) if T(t-)=T(t+) and otherwise Δα(t) = cos-1(T(t-)·T(t+))\n\nThe angle A after the completion of a circuit C can be represented as\n\n#### A = ∫Cdα(s)\n\nIf α(s) is differentiable then\n\n#### dα/ds = k, the curvature\n\notherwise Δα is the exterior angle of the curve at that point, the discontinuity." ]
[ null, "http://applet-magic.com/curveturn1.gif", null, "http://applet-magic.com/curveturn1a.gif", null, "http://applet-magic.com/curveturn1.gif", null, "http://applet-magic.com/curveturn2.gif", null, "http://applet-magic.com/curveturn3.gif", null, "http://applet-magic.com/curveturn4.gif", null, "http://applet-magic.com/curveturn5.gif", null, "http://applet-magic.com/curveturn5a.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9003355,"math_prob":0.9977358,"size":2208,"snap":"2022-05-2022-21","text_gpt3_token_len":533,"char_repetition_ratio":0.12976407,"word_repetition_ratio":0.0,"special_character_ratio":0.22327898,"punctuation_ratio":0.08971553,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988854,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,6,null,3,null,6,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T13:44:55Z\",\"WARC-Record-ID\":\"<urn:uuid:007c80be-2f89-45c3-a73c-e72284c4bc7d>\",\"Content-Length\":\"7369\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:adca47e2-665d-47fe-ac51-7f995abeba60>\",\"WARC-Concurrent-To\":\"<urn:uuid:35ed1c4a-0e29-42d2-a22e-db2c7dc8578b>\",\"WARC-IP-Address\":\"67.195.197.25\",\"WARC-Target-URI\":\"http://applet-magic.com/curveturning.htm\",\"WARC-Payload-Digest\":\"sha1:BN45BGCDPZLQXJ4TWNBYNNCAYRDHZAG7\",\"WARC-Block-Digest\":\"sha1:FN2OEOTIOEYCN4SBKG4M5RPNX2FIXWX7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663016853.88_warc_CC-MAIN-20220528123744-20220528153744-00090.warc.gz\"}"}
https://gitlab.mpi-sws.org/iris/iris/-/commit/e181f73749d2fcabd38bb9d60a12fd7cf616a28d
[ "### The types of propositions for monPred lemma need to be [monPred I PROP] and...\n\n`The types of propositions for monPred lemma need to be [monPred I PROP] and not [bi_car (monPredI I PROP)], otherwise iIntoValid fails in a very weird way. Seems to be related to a Coq bug.`\nparent c78c8b7a\nPipeline #6266 passed with stages\nin 3 minutes and 27 seconds\n ... ... @@ -395,11 +395,12 @@ End canonical_sbi. Section bi_facts. Context {I : biIndex} {PROP : bi}. Local Notation monPred := (monPred I PROP). Local Notation monPredI := (monPredI I PROP). Local Notation monPred_at := (@monPred_at I PROP). Local Notation BiIndexBottom := (@BiIndexBottom I). Implicit Types i : I. Implicit Types P Q : monPredI. Implicit Types P Q : monPred. (** Instances *) ... ... @@ -565,9 +566,9 @@ Lemma monPred_at_or i P Q : (P ∨ Q) i ⊣⊢ P i ∨ Q i. Proof. by unseal. Qed. Lemma monPred_at_impl i P Q : (P → Q) i ⊣⊢ ∀ j, ⌜i ⊑ j⌝ → P j → Q j. Proof. by unseal. Qed. Lemma monPred_at_forall {A} i (Φ : A → monPredI) : (∀ x, Φ x) i ⊣⊢ ∀ x, Φ x i. Lemma monPred_at_forall {A} i (Φ : A → monPred) : (∀ x, Φ x) i ⊣⊢ ∀ x, Φ x i. Proof. by unseal. Qed. Lemma monPred_at_exist {A} i (Φ : A → monPredI) : (∃ x, Φ x) i ⊣⊢ ∃ x, Φ x i. Lemma monPred_at_exist {A} i (Φ : A → monPred) : (∃ x, Φ x) i ⊣⊢ ∃ x, Φ x i. Proof. by unseal. Qed. Lemma monPred_at_sep i P Q : (P ∗ Q) i ⊣⊢ P i ∗ Q i. Proof. by unseal. Qed. ... ... @@ -743,9 +744,10 @@ End bi_facts. Section sbi_facts. Context {I : biIndex} {PROP : sbi}. Local Notation monPred := (monPred I PROP). Local Notation monPredSI := (monPredSI I PROP). Implicit Types i : I. Implicit Types P Q : monPredSI. Implicit Types P Q : monPred. Global Instance monPred_at_timeless P i : Timeless P → Timeless (P i). Proof. move => [] /(_ i). unfold Timeless. by unseal. Qed. ... ... @@ -810,7 +812,7 @@ Lemma monPred_at_except_0 i P : (◇ P) i ⊣⊢ ◇ P i. Proof. by unseal. Qed. Lemma monPred_fupd_embed `{FUpdFacts PROP} E1 E2 (P : PROP) : ⎡|={E1,E2}=> P⎤ ⊣⊢ fupd E1 E2 (PROP:=monPred I PROP) ⎡P⎤. ⎡|={E1,E2}=> P⎤ ⊣⊢ fupd E1 E2 (PROP:=monPred) ⎡P⎤. Proof. unseal. split=>i /=. setoid_rewrite bi.pure_impl_forall. apply bi.equiv_spec; split. - by do 2 apply bi.forall_intro=>?. ... ...\n ... ... @@ -50,4 +50,9 @@ Section tests. iStartProof PROP. iIntros (i) \"HW\". iIntros (j ->) \"HP\". iSpecialize (\"HW\" with \"HP\"). done. Qed. Lemma test_apply_in_elim (P : monPredI) (i : I) : monPred_in i ∧ ⎡ P i ⎤ -∗ P. Proof. iIntros. by iApply monPred_in_elim. Qed. End tests.\nSupports Markdown\n0% or .\nYou are about to add 0 people to the discussion. Proceed with caution.\nFinish editing this message first!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8548025,"math_prob":0.6012676,"size":372,"snap":"2022-40-2023-06","text_gpt3_token_len":92,"char_repetition_ratio":0.13858695,"word_repetition_ratio":0.29032257,"special_character_ratio":0.23387097,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96368647,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T20:07:26Z\",\"WARC-Record-ID\":\"<urn:uuid:ab714e22-d680-4f7d-b090-2bf1a4b85ca7>\",\"Content-Length\":\"202906\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:330bb38c-eeaa-4dce-a7d5-abdc88325c9f>\",\"WARC-Concurrent-To\":\"<urn:uuid:68ef8bc0-4590-42bc-8288-89de0d40d3c2>\",\"WARC-IP-Address\":\"139.19.205.205\",\"WARC-Target-URI\":\"https://gitlab.mpi-sws.org/iris/iris/-/commit/e181f73749d2fcabd38bb9d60a12fd7cf616a28d\",\"WARC-Payload-Digest\":\"sha1:PFLLUYQOAK37EAJ6TRW4JF55IMQMJTBC\",\"WARC-Block-Digest\":\"sha1:VMQOSV2VT2ETVXW56PDCFMEEL7BZXVCO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500074.73_warc_CC-MAIN-20230203185547-20230203215547-00557.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/14-30-plus-15-8
[ "Solutions by everydaycalculation.com\n\n1st number: 14/30, 2nd number: 1 7/8\n\n14/30 + 15/8 is 281/120.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 30 and 8 is 120\n2. For the 1st fraction, since 30 × 4 = 120,\n14/30 = 14 × 4/30 × 4 = 56/120\n3. Likewise, for the 2nd fraction, since 8 × 15 = 120,\n15/8 = 15 × 15/8 × 15 = 225/120", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5235893,"math_prob":0.998472,"size":402,"snap":"2019-43-2019-47","text_gpt3_token_len":178,"char_repetition_ratio":0.24874371,"word_repetition_ratio":0.0,"special_character_ratio":0.5522388,"punctuation_ratio":0.07920792,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99436015,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-17T04:24:07Z\",\"WARC-Record-ID\":\"<urn:uuid:7fa31107-0a72-4914-bd44-406e634bc1ee>\",\"Content-Length\":\"8300\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14bb0c91-42b0-4f3c-a32a-45301ed01572>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab4f0fd1-e10d-43ee-98c9-4a39e812283c>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/14-30-plus-15-8\",\"WARC-Payload-Digest\":\"sha1:GSFFKOTVYPQIUP7LYIFDC2MTTECC7LU3\",\"WARC-Block-Digest\":\"sha1:ETCMKKPYJ223LDRYS72ZUUEMHKBEU27Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668787.19_warc_CC-MAIN-20191117041351-20191117065351-00197.warc.gz\"}"}
https://en.m.wikipedia.org/wiki/Bernstein%27s_theorem_on_monotone_functions
[ "# Bernstein's theorem on monotone functions\n\nIn real analysis, a branch of mathematics, Bernstein's theorem states that every real-valued function on the half-line [0, ∞) that is totally monotone is a mixture of exponential functions. In one important special case the mixture is a weighted average, or expected value.\n\nTotal monotonicity (sometimes also complete monotonicity) of a function f means that f is continuous on [0, ∞), infinitely differentiable on (0, ∞), and satisfies\n\n$(-1)^{n}{\\frac {d^{n}}{dt^{n}}}f(t)\\geq 0$", null, "for all nonnegative integers n and for all t > 0. Another convention puts the opposite inequality in the above definition.\n\nThe \"weighted average\" statement can be characterized thus: there is a non-negative finite Borel measure on [0, ∞) with cumulative distribution function g such that\n\n$f(t)=\\int _{0}^{\\infty }e^{-tx}\\,dg(x),$", null, "the integral being a Riemann–Stieltjes integral.\n\nIn more abstract language, the theorem characterises Laplace transforms of positive Borel measures on [0, ∞). In this form it is known as the Bernstein–Widder theorem, or Hausdorff–Bernstein–Widder theorem. Felix Hausdorff had earlier characterised completely monotone sequences. These are the sequences occurring in the Hausdorff moment problem.\n\n## Bernstein functions\n\nNonnegative functions whose derivative is completely monotone are called Bernstein functions. Every Bernstein function has the Lévy–Khintchine representation:\n\n$f(t)=a+bt+\\int _{0}^{\\infty }(1-e^{-tx})\\,\\mu (dx),$\n\nwhere $a,b\\geq 0$  and $\\mu$  is a measure on the positive real half-line such that\n\n$\\int _{0}^{\\infty }(1\\wedge x)\\,\\mu (dx)<\\infty .$" ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a9518d6034172f05f3604cce016fc83e0fa45964", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/50480f8f80aa9433930aed779edd714cc3df9b37", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8371901,"math_prob":0.99748904,"size":1707,"snap":"2021-04-2021-17","text_gpt3_token_len":406,"char_repetition_ratio":0.13857898,"word_repetition_ratio":0.0,"special_character_ratio":0.21382542,"punctuation_ratio":0.14532872,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998961,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T13:51:01Z\",\"WARC-Record-ID\":\"<urn:uuid:ca3855cc-6fca-465b-982e-10de121d6a06>\",\"Content-Length\":\"37481\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e1244c5-324e-433a-836b-25b29ab0d2ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:e64f1ca1-088e-4e3a-82bd-9d0fea0230c6>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.m.wikipedia.org/wiki/Bernstein%27s_theorem_on_monotone_functions\",\"WARC-Payload-Digest\":\"sha1:7OEERY6M262TFEL42QWR2YROHVBVCLNW\",\"WARC-Block-Digest\":\"sha1:ZVE65NVEA2QY63HTRBBOJ255O3SVQQ2J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703548716.53_warc_CC-MAIN-20210124111006-20210124141006-00795.warc.gz\"}"}
http://sandiegoarborist.com/read/an-introduction-to-celestial-mechanics
[ "# Download An Introduction to Celestial Mechanics by Professor Richard Fitzpatrick PDF", null, "By Professor Richard Fitzpatrick\n\nBest astronomy & astrophysics books\n\nThe Geology of Mars: Evidence from Earth-Based Analogs\n\nStudy into the geological methods working on Mars is dependent upon interpretation of pictures and different facts again by means of unmanned orbiters, probes and landers. Such interpretations are in keeping with our wisdom of tactics taking place in the world. Terrestrial analog stories accordingly play a big position in figuring out the geological positive aspects saw on Mars.\n\nExtra info for An Introduction to Celestial Mechanics\n\nExample text\n\nConsider a system consisting of N point particles. Let ri be the position vector of the ith particle, and let Fi be the external force acting on this particle. Any internal forces are assumed to be central in nature. The resultant force and torque Exercises 19 (about the origin) acting on the system are F= Fi i=1,N and τ= ri × Fi , i=1,N respectively. A point of action of the resultant force is defined as a point whose position vector r satisfies r × F = τ. 6 F F×τ +λ , F F2 where λ is arbitrary.\n\nBecause gravitational fields and gravitational potentials are superposable, the work done while moving the third mass from infinity to r3 is simply the sum of the works done against the gravitational fields generated by masses 1 and 2 taken in isolation: U3 = − G m3 m1 G m3 m2 − . 16) Thus, the total work done in assembling the arrangement of three masses is given by U=− G m2 m1 G m3 m1 G m3 m2 − − . 4 Axially symmetric mass distributions 25 This result can easily be generalized to an arrangement of N point masses, giving j\n\n50) The satellite will now be in a circular orbit at the aphelion distance, r2 . 4. Obviously, we can transfer our satellite from a larger to a smaller circular orbit by performing the preceding process in reverse. 46) that if we increase the √ tangential velocity of a satellite in a circular orbit about the Sun by a factor greater than 2, then we will transfer it into a hyperbolic orbit (e > 1), and it will eventually escape from the Sun’s gravitational field. 11 Elliptical orbits Let us determine the radial and angular coordinates, r and θ, respectively, of a planet in an elliptical orbit about the Sun as a function of time." ]
[ null, "https://images-na.ssl-images-amazon.com/images/I/51fMj9gmIwL._SX313_BO1,204,203,200_.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8712854,"math_prob":0.93924844,"size":3627,"snap":"2019-13-2019-22","text_gpt3_token_len":753,"char_repetition_ratio":0.09688104,"word_repetition_ratio":0.013468013,"special_character_ratio":0.20099255,"punctuation_ratio":0.093023255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9888142,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-18T18:17:39Z\",\"WARC-Record-ID\":\"<urn:uuid:73fac130-32ae-48dd-9bed-b781278ae858>\",\"Content-Length\":\"27860\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a10d272-4a0f-459a-829d-74298a3c2f70>\",\"WARC-Concurrent-To\":\"<urn:uuid:dad66af7-d48f-4670-a16d-0c6daee5ba35>\",\"WARC-IP-Address\":\"184.168.204.1\",\"WARC-Target-URI\":\"http://sandiegoarborist.com/read/an-introduction-to-celestial-mechanics\",\"WARC-Payload-Digest\":\"sha1:44RMPXRQOVDUAIDIUIF5IY4MDNAT44WH\",\"WARC-Block-Digest\":\"sha1:IZBYYQUOMLPOPEH5FGD2OORERF3HGIY6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201521.60_warc_CC-MAIN-20190318172016-20190318194016-00055.warc.gz\"}"}
https://patents.google.com/patent/KR101621244B1/en
[ "# KR101621244B1 - Counter Circuit, Device Including the Same, and Counting Method - Google Patents\n\n## Info\n\nPublication number\nKR101621244B1\nKR101621244B1 KR1020090011692A KR20090011692A KR101621244B1 KR 101621244 B1 KR101621244 B1 KR 101621244B1 KR 1020090011692 A KR1020090011692 A KR 1020090011692A KR 20090011692 A KR20090011692 A KR 20090011692A KR 101621244 B1 KR101621244 B1 KR 101621244B1\nAuthority\nKR\nSouth Korea\nPrior art keywords\nsignal\ncounting\nresponse\nbit\ninput clock\nPrior art date\nApplication number\nKR1020090011692A\nOther languages\nKorean (ko)\nOther versions\nKR20100092542A (en\nInventor\n임용\n고경민\n김경민\nOriginal Assignee\n삼성전자주식회사\nPriority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)\nFiling date\nPublication date\nApplication filed by 삼성전자주식회사 filed Critical 삼성전자주식회사\nPriority to KR1020090011692A priority Critical patent/KR101621244B1/en\nPublication of KR20100092542A publication Critical patent/KR20100092542A/en\nApplication granted granted Critical\nPublication of KR101621244B1 publication Critical patent/KR101621244B1/en\n\n## Images\n\n•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "## Classifications\n\n• HELECTRICITY\n• H03BASIC ELECTRONIC CIRCUITRY\n• H03KPULSE TECHNIQUE\n• H03K21/00Details of pulse counters or frequency dividers\n• H03K21/02Input circuits\n• H03K21/023Input circuits comprising pulse shaping or differentiating circuits\n• HELECTRICITY\n• H03BASIC ELECTRONIC CIRCUITRY\n• H03KPULSE TECHNIQUE\n• H03K21/00Details of pulse counters or frequency dividers\n• H03K21/38Starting, stopping or resetting the counter\n• HELECTRICITY\n• H03BASIC ELECTRONIC CIRCUITRY\n• H03KPULSE TECHNIQUE\n• H03K23/00Pulse counters comprising counting chains; Frequency dividers comprising counting chains\n• H03K23/40Gating or clocking signals applied to all stages, i.e. synchronous counters\n• H03K23/50Gating or clocking signals applied to all stages, i.e. synchronous counters using bi-stable regenerative trigger circuits\n• H03K23/54Ring counters, i.e. feedback shift register counters\n• H03K23/548Reversible counters\n\n## Abstract\n\nThe counter circuit includes a first counting unit and a second counting unit. The first counting unit generates a first bit signal that toggles in response to a first one of a rising edge and a falling edge of the input clock signal and the second counting unit generates a rising edge and a falling edge of the input clock signal And generates a second bit signal that toggles in response to the second edge. The counter circuit performs counting twice every cycle period of the input clock signal to have an improved operating speed and operation margin and reduce the number of toggling of bit signals to reduce power consumption.\n\n## Description\n\nBACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a counter circuit,\n\nBACKGROUND OF THE INVENTION 1. 
Field of the Invention The present invention relates to counting using a clock signal, and more particularly, to a double data rate (DDR) counter circuit capable of performing a counting operation efficiently, an apparatus including the counter circuit, and a counting method.\n\nA counter circuit can be used in various electronic devices to convert effective physical quantities such as intensity of light, intensity of sound, time, etc. to digital signals.\n\nFor example, an image sensor is an apparatus for acquiring an image using the property of a semiconductor that reacts with incident light, and includes an analog-to-digital converter for converting an analog signal output from the pixel array into a digital signal. The analog-to-digital converter may be implemented using a counter circuit that performs a counting operation using a clock signal.\n\nThe operating speed and power consumption of the counter circuit have a direct impact on the performance of the device or system comprising it. In particular, the CMOS image sensor may include a plurality of counter circuits for converting the analog signals output from the active pixel sensor array into digital signals according to the configuration thereof. The number of the counter circuits increases according to the resolution of the image sensor. As the number of the counter circuits increases, the operation speed and power consumption of the counter circuit can be an important factor for determining the overall performance of the image sensor.\n\nAn object of the present invention is to provide a counter circuit and a counting method capable of reducing power consumption and increasing an operating speed.\n\nIt is an object of the present invention to provide an analog-to-digital converter and an analog-to-digital conversion method which can reduce the consumed power and increase the operation speed by using the counter circuit.\n\nIt is an object of the present invention to provide an apparatus and a correlated double sampling method which can reduce consumption power and increase operation speed by using the counter circuit.\n\nTo achieve the above object, a counter circuit according to an embodiment of the present invention includes a first counting unit and a second counting unit. The first counting unit generates a first bit signal that toggles in response to a first one of a rising edge and a falling edge of the input clock signal. 
The second counting unit generates a second bit signal that toggles in response to a second one of a rising edge and a falling edge of the input clock signal.

In one embodiment, the counter circuit may further include a ripple counter for generating upper bit signals that toggle in response to the second bit signal or the inverted signal of the second bit signal.

The first counting unit may generate the first bit signal in response to a first one of the rising edge and the falling edge of the input clock signal, and the second counting unit may generate the second bit signal in response to the second one of the rising edge and the falling edge of the input clock signal.

The counter circuit may perform a counting operation twice per cycle period of the input clock signal.

In one embodiment, the counter circuit may further comprise a code converter for generating a least significant bit signal of the binary code based on the first bit signal and the second bit signal.

The first counting unit, the second counting unit and the ripple counter may comprise a plurality of D-flip flops respectively outputting the first bit signal, the second bit signal and the upper bit signals, and the first D-flip flop of the first counting unit and the second D-flip flop of the second counting unit may toggle complementarily with respect to the rising or falling edge of the input clock signal.

In one embodiment, the first counting unit includes a rising edge triggered D-flip flop, the second counting unit includes a falling edge triggered D-flip flop, and the counter circuit may perform an up-counting operation.

In another embodiment, the first counting unit includes a falling edge triggered D-flip flop, the second counting unit includes a rising edge triggered D-flip flop, and the counter circuit may perform a down-counting operation.

In one embodiment, the second counting unit may include a feedback switch for interrupting the toggling of the second bit signal in response to a comparison signal indicative of an end time of the counting operation.

In one embodiment, the counter circuit comprises: a clock control circuit for generating a clock control signal based on the first bit signal and the second bit signal; and a clock input circuit for inverting the input clock signal in response to the clock control signal.
Wherein the clock control circuit comprises: a logic gate for performing a logic operation on the first bit signal and the second bit signal and outputting the result; and a D-flip-flop, having a data terminal to which the output of the logic gate is applied and a clock terminal to which a control signal is applied, for outputting the clock control signal.

Wherein the clock input circuit comprises: a multiplexer for selecting and outputting a clock signal or an inverted clock signal in response to the clock control signal; and a logical product (AND) gate for combining the output signal of the multiplexer with a comparison signal indicating the ending time of the counting operation and outputting the result as the input clock signal.

In one embodiment, the counter circuit may further include an inversion control section for inverting the second bit signal and the higher bit signals in response to the inversion control signal.

The inversion control unit may include a plurality of multiplexers for selecting and outputting one of the output signal of the previous stage and a second inversion control signal in response to the first inversion control signal.

In one embodiment, the counter circuit may further include a mode switching control section for controlling an up-counting operation or a down-counting operation of the counter circuit in response to a mode control signal.

The mode switching control unit may include a plurality of multiplexers for selecting either one of the non-inverted output terminal of the previous stage or the inverted output terminal of the previous stage in response to the mode control signal and outputting the selected signal to the subsequent stage.

To achieve the above object, an analog-to-digital converter includes a comparator, a clock input circuit, and a counter circuit. The comparator compares an analog signal representing a physical quantity and a reference signal to generate a comparison signal. The clock input circuit generates an input clock signal based on the clock signal and the comparison signal. The counter circuit counts the input clock signal to generate a digital signal corresponding to the analog signal. The counter circuit includes a first counting unit for generating a first bit signal that toggles in response to a first one of a rising edge and a falling edge of the input clock signal, and a second counting unit for generating a second bit signal that toggles in response to a second one of the rising edge and the falling edge of the input clock signal.

In one embodiment, the counter circuit may further comprise a ripple counter for generating upper bit signals that toggle in response to the second bit signal or the inverted signal of the second bit signal.

The second counting unit may further include a feedback switch for selectively connecting the inverted output terminal or the non-inverted output terminal of the second D flip-flop to the data terminal of the second D flip-flop in response to the comparison signal.

To achieve the above object, an apparatus includes a sensing unit, an analog-to-digital converter, and a control circuit. The sensing unit senses a physical quantity and generates an analog signal corresponding to the physical quantity. The analog-to-digital converter compares the analog signal with a reference signal and uses at least one counter circuit to generate a digital signal corresponding to the analog signal.
The control circuit controls operation of the sensing unit and the analog-to-digital converter. The counter circuit includes a first counting unit for generating a first bit signal that toggles in response to a first one of a rising edge and a falling edge of an input clock signal and a second counting unit for generating a first bit signal toggling in response to a first one of a rising edge and a falling edge of the input clock signal, And a second counting unit for generating a second bit signal toggling in response to the edge.\n\nIn one embodiment, the sensing unit includes a pixel array for sensing the incident light to generate the analog signal, and the apparatus may be an image sensor.\n\nWherein the pixel array sequentially outputs a first analog signal representative of a reset component for correlated double sampling and a second analog signal representative of an image signal component, the counter circuit sequentially counting the first analog signal The input clock signal can be inverted based on the first bit signal and the second bit signal before the start of counting for the second analog signal.\n\nAnd generates a first bit signal that toggles in response to a first one of a rising edge and a falling edge of the input clock signal in accordance with the counting method to accomplish the above object. And generates a second bit signal that toggles in response to a second one of a rising edge and a falling edge of the input clock signal.\n\nIn one embodiment, the first bit signal may generate a least significant bit signal of the binary code based on the second bit signal.\n\nAccording to the analog-to-digital conversion method for achieving the above object, a comparison signal is generated by comparing an analog signal representing a physical quantity and a reference signal. And generates an input clock signal based on the clock signal and the comparison signal. Generating a first bit signal that toggles in response to a first one of a rising edge and a falling edge of the input clock signal and generating a first bit signal that toggles in response to a second one of a rising edge and a falling edge of the input clock signal, Bit signal.\n\nTo achieve the above object, a correlated double sampling method includes a first counting step of counting one analog signal representing a reset component, a second counting step of counting a second analog signal representing a signal component, And generating a digital signal corresponding to a difference between the first analog signal and the second analog signal based on the second counting result. 
Wherein each of the first counting step and the second counting step comprises generating a first bit signal toggling in response to a first one of a rising edge and a falling edge of an input clock signal, And generating a second bit signal toggling in response to a second one of the edge and the falling edge.\n\nThe input clock signal may be inverted based on the first bit signal and the second bit signal before the start of the second counting step after the first counting step is completed.\n\nThe counter circuit and counting method according to embodiments of the present invention can reduce power consumption by reducing the number of toggling of the output signal and can perform a counting operation twice every clock cycle period to increase the operation speed have.\n\nThe analog-to-digital converter and the analog-to-digital conversion method according to embodiments of the present invention can efficiently perform data conversion using the counter circuit and the counting method having reduced power consumption and increased operation speed have.\n\nThe apparatus including the counter circuit according to the embodiments of the present invention as described above has improved performance as the power consumption decreases and the operation speed increases. In particular, in the case of an image sensor including a plurality of count circuits, the consumed power can be remarkably reduced, and the operation margin of the image sensor can be increased by the fast operation speed of the counter circuit.\n\nThe image sensor and the correlated double sampling method including the counter circuit having the inversion function or the mode switching function according to the embodiments of the present invention can reduce the power consumption and increase the operation speed, It is possible to digitally perform correlated double sampling in the circuit and prevent errors in the correlated double sampling process to provide a more precise image signal.\n\nFor the embodiments of the invention disclosed herein, specific structural and functional descriptions are set forth for the purpose of describing an embodiment of the invention only, and it is to be understood that the embodiments of the invention may be practiced in various forms, The present invention should not be construed as limited to the embodiments described in Figs.\n\nThe present invention is capable of various modifications and various forms, and specific embodiments are illustrated in the drawings and described in detail in the text. It is to be understood, however, that the invention is not intended to be limited to the particular forms disclosed, but on the contrary, is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.\n\nThe terms first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms may be used for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.\n\nIt is to be understood that when an element is referred to as being \"connected\" or \"connected\" to another element, it may be directly connected or connected to the other element, . 
On the other hand, when an element is referred to as being \"directly connected\" or \"directly connected\" to another element, it should be understood that there are no other elements in between. Other expressions that describe the relationship between components, such as \"between\" and \"between\" or \"neighboring to\" and \"directly adjacent to\" should be interpreted as well.\n\nOn the other hand, if an embodiment is otherwise feasible, the functions or operations specified in a particular block may occur differently from the order specified in the flowchart. For example, two consecutive blocks may actually be performed at substantially the same time, and depending on the associated function or operation, the blocks may be performed backwards.\n\nThe terminology used in this application is used only to describe a specific embodiment and is not intended to limit the invention. The singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, the terms \"comprise\", \"having\", and the like are intended to specify the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, , Steps, operations, components, parts, or combinations thereof, as a matter of principle.\n\nUnless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries should be construed as meaning consistent with meaning in the context of the relevant art and are to be construed as either ideal or overly formal in meaning unless expressly defined in the present application Do not.\n\nHereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The same reference numerals are used for the same constituent elements in the drawings and redundant explanations for the same constituent elements are omitted.\n\nFIG. 1 is a block diagram illustrating a counter circuit according to an embodiment of the present invention, and FIG. 34 is a flowchart illustrating a counting method according to an embodiment of the present invention.\n\nReferring to Figures 1 and 34, the counter circuit 100 includes a first counting unit 110 and a second counting unit 120. [ The counter circuit 100 may further include a ripple counter 10 in accordance with the number of bits of the digital signal corresponding to the counting result.\n\nThe first counting unit 110 generates a first bit signal D0 that toggles in response to a first edge (e.g., a falling edge) of the rising and falling edges of the input clock signal CLKi (Step S110). The second counting unit 120 generates a second bit signal D that toggles in response to a rising edge of the input clock signal CLKi and a second one of the falling edges (e.g., the rising edge) (Step S120). The ripple counter 10 may generate the upper bit signals D , D that toggle in response to the output signal OUT2 of the second counting unit 120. 
The output signal OUT2 of the second counting unit 120 corresponds to the second bit signal D[1] or to the inverted signal /D[1] of the second bit signal, according to the configuration of the counter circuit 100.

In one embodiment, the first counting unit 110 may generate the first bit signal D0 in response to a first one of the rising edge and the falling edge of the input clock signal CLKi, and the second counting unit 120 may generate the second bit signal D[1] in response to the second one of the rising edge and the falling edge of the input clock signal CLKi. By performing the toggling operation complementarily in response to different edges, the first bit signal D0 and the second bit signal D[1] can have a phase difference of 90 degrees.

Although FIG. 1 shows only two counting units included in the ripple counter 10, that is, a third counting unit 130 and a fourth counting unit 140, for convenience of explanation, the number of counting units 130 and 140 included in the ripple counter 10 may be changed according to the number of bits of the digital signal, that is, the binary code D[0:n]. Hereinafter, it is assumed that the counter circuit 100 generates the 4-bit digital signals D0, D[1], D[2] and D[3], that is, the binary code D[0:3], and the configuration and operation of the counter circuit 100 will be described under this assumption.

The ripple counter 10 has a cascaded configuration in which a plurality of counting units 130 and 140 sequentially toggle in response to the output signal of the previous stage. The third counting unit 130 toggles in response to the output signal OUT2 of the second counting unit 120, and the fourth counting unit 140 toggles in response to the output signal OUT3 of the third counting unit 130, so that the third bit signal D[2] and the fourth bit signal D[3] have periods that are sequentially doubled.

The counter circuit 100 may further include a code converter 50 for generating the least significant bit signal D[0] of the binary code based on the first bit signal D0 and the second bit signal D[1]. For example, the code converter 50 may be implemented with an exclusive-OR gate. The first through fourth bit signals D0, D[1], D[2] and D[3] represent an intermediate-form code rather than a complete binary code, but they represent a valid counting value by themselves, and the binary code D[0:3] can be obtained by generating the least significant bit signal D[0]. The least significant bit signal D[0] is not a signal that toggles during the counting operation; it is a signal provided by a logic operation on the first bit signal D0 and the second bit signal D[1] after the counting operation is completed and the logic states of the first through fourth bit signals D0, D[1], D[2] and D[3] are determined. Therefore, the code converter 50 is not necessarily included in the counter circuit 100, but may be implemented outside the counter circuit 100, and further outside the chip on which the counter circuit 100 is mounted.

The counter circuit 100 of the present invention may perform an up-counting operation or a down-counting operation according to the configuration thereof. Embodiments of the counter circuit for performing the up-counting operation will be described below with reference to FIGS.
2 to 6, and embodiments of the counter circuit for performing the down-counting operation will be described with reference to FIGS.\n\n2 is a timing chart showing an up-counting operation of the counter circuit of FIG.\n\n1 and 2, the first bit signal D0 generated in the first counting unit 110 is toggled in response to the rising edge of the input clock signal CLKi and is supplied to the second counting unit 120 The first bit signal D0 and the second bit signal D are shifted by 90 degrees in response to the falling edge of the input clock signal CLKi, Phase difference. The upper bit signals D , D generated in the ripple counter 10 all toggle in response to the output signal of the previous stage, for example, the falling edge of the near lower bit. That is, the third bit signal D toggles in response to the falling edge of the second bit signal D , and the fourth bit signal D ). As a result, the second bit signal D and the upper bit signals D , D are sequentially multiplied and the upper three bits of the binary code D [0: 3] . The least significant bit signal D of the binary code D [0: 3] is not the signal to be actually toggled as described above, but the first bit signal D0 and the second bit D0 [ Is a signal generated by logic operation of the signal D . FIG. 2 shows a result (D ) obtained by performing an exclusive OR operation on the first bit signal D0 and the second bit signal D at the respective end points of the counting operation. 2, the values of the binary codes D [0: 3] are displayed at the respective end points of the counting operation with the passage of time, and the binary codes D [0: 3] , 0010, 0011, and as a result, the up-counting operation is performed.\n\nAs shown in FIG. 2, the counter circuit 100 according to the embodiment of the present invention counts twice every cycle period of the input clock signal CLKi. Therefore, . Hereinafter, such double-speed counting will be referred to as DDR (Double Data Rate) counting, and the counter circuit for performing this will be referred to as a DDR counter circuit. Since the DDR counter circuit 100 according to an embodiment of the present invention has twice the operation speed as the conventional counter, it is possible to provide a clock signal of the same cycle and a binary code of 1 bit increased for the same counting time , It is possible to provide a more precise count value (for example, by adjusting the slope of the ramp signal). On the other hand, even when the clock signal whose frequency is halved (i.e., the cycle period is doubled) is used, the counter circuit 100 according to the embodiment of the present invention can provide the same- May reduce the consumption power as the frequency of the clock signal decreases and increase the operating margin of the counter circuit 100, the apparatus including it and the system.\n\nFigures 3 and 4 are circuit diagrams illustrating a counter circuit for performing an up-counting operation in accordance with embodiments of the present invention.\n\n3 and 4, the first counting unit 110, the second counting unit 120 and the ripple counter 10 receive the first bit signal D0, the second bit signal D ) And the upper bit signals D , D , respectively. The first D-flip-flops 110a and 110b of the first counting unit 110 and the second D-flip-flops 120a and 120b of the second counting unit 120 receive the rising edge of the input clock signal CLKi Toggling is performed complementarily with respect to the falling edge. 
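As a supplement to the timing chart just described, and before the circuit diagrams of FIGS. 3 and 4 are discussed, a behavioral Python sketch of the up-counting sequence may help. It models the scheme given in this description (D0 toggling on rising edges, D[1] on falling edges, ripple bits toggling on the falling edge of the previous bit, and the least significant bit obtained as D0 XOR D[1]); it is not the patented circuit itself, and the 4-bit width is the same assumption used above.

```python
def ddr_up_count(num_edges):
    """Behavioral model of the 4-bit DDR up-counter described above.

    d0 toggles on every rising edge of CLKi, d1 on every falling edge,
    and the ripple bits d2, d3 toggle on the falling edge of the previous bit.
    The binary code is (d3 d2 d1 (d0 XOR d1)), i.e. the exclusive-OR code
    converter supplies the least significant bit D[0].
    """
    d0 = d1 = d2 = d3 = 0
    codes = []
    for edge in range(num_edges):
        if edge % 2 == 0:                        # rising edge of CLKi
            d0 ^= 1
        else:                                    # falling edge of CLKi
            d1 ^= 1
            if d1 == 0:                          # falling edge of D[1]
                d2 ^= 1
                if d2 == 0:                      # falling edge of D[2]
                    d3 ^= 1
        codes.append((d3 << 3) | (d2 << 2) | (d1 << 1) | (d0 ^ d1))
    return codes

# Sixteen edges = eight clock cycles: the code advances 1, 2, ..., 15 and wraps to 0,
# i.e. two counts per cycle period of CLKi, as in the timing chart of FIG. 2.
print(ddr_up_count(16))
```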
In the embodiments of Figures 3 and 4, the first D-flip-flops 110a and 110b toggle in response to the rising edge of the input clock signal CLKi and the second D-flip-flops 120a and 120b And toggles in response to the falling edge of input clock signal CLKi.\n\n3, the first counting unit 110 includes a positive-edge triggered D-flip flop 110a and the second counting unit 120 includes a negative-edge triggered D-flip-flop 120a. The ripple counter 10 includes a plurality of falling edge triggered D-flip flops 130a and 140a for generating upper bit signals D , D .\n\n3, the first counting unit 110 includes a rising edge triggered D-flip flop 110b and the second counting unit 120 includes a falling edge triggered D- Flop 120b. The ripple counter 10 includes a plurality of rising edge triggered D-flip flops 130a and 140a for generating upper bit signals D , D .\n\nThe ripple counter 10 of FIG. 3 is implemented by falling edge triggered D-flip flops 130a and 140a, and the non-inverted output terminal Q of the previous stage is connected to the clock terminal CK of the following stage. In this case, the output signal OUTk of the k-th (k is an integer of 2 or more) counting unit corresponds to the k-bit signal D [k]. The ripple counter 10 of FIG. 4 is implemented by the rising edge triggered D-flip flops 130b and 140b unlike the ripple counter of FIG. 3, whereas the inverted output terminal / Q of the previous stage To a clock terminal (CK). In this case, the output signal OUTk of the k-th counting unit corresponds to the inverted signal of the k-bit signal D [k]. As a result, the counter circuits 100a, 100b of FIGS. 3 and 4 all perform the up-counting operation as shown in FIG.\n\nFigures 5 and 6 are circuit diagrams illustrating flip-flops that perform a toggling operation.\n\nFIG. 5 shows an example of a rising edge triggered D-flip flop, and FIG. 6 shows an example of a falling edge triggered D-flip flop. 5 and 6 illustrate the operation of the D-flip-flop included in the counter circuit 100 of the present invention, and are not limited to the configuration shown in FIG. 5, The configuration of the included flip-flops may be varied according to the embodiment.\n\nReferring to FIG. 5, the rising edge triggered D-flip flop 110a includes a first inverter 111, a second inverter 112, a first switch 113, and a second switch 114.\n\nThe output of the first inverter 111 is connected to the input of the second inverter 112 and the output of the second inverter 112 is connected to the input of the first inverter 111 via the second switch 114 Latch structure. In the example of FIG. 5, the output of the first inverter 111 corresponds to the inverted output terminal / Q and the output of the second inverter 112 corresponds to the non-inverted output terminal Q. The first switch 113 is connected between the data terminal D and the input of the first inverter 111 and the control terminal CK of the one switch 113 corresponds to the clock terminal. The input clock signal CLKi is applied to the control terminal CK of the first switch 113 and the inverted signal / CLKi of the input clock signal CLKi is applied to the control terminal / CK of the second switch 114 .\n\nThe rising edge triggered D-flip-flop 110a may further include a reset switch 115 for initializing the stored state. 
When the reset switch 115 is turned on in response to the reset signal RST, the logic state of the inverted output terminal /Q and the non-inverted output terminal Q becomes a predetermined state, for example, logic low or logic high.

When the input clock signal CLKi is logic low, the D-flip-flop 110a is in the memory, i.e., storage, state, and the state of the flip-flop does not change even if the logic state of the data terminal D is changed. When the input clock signal CLKi transits to logic high, i.e., at the rising edge of the input clock signal CLKi, the non-inverted output terminal Q stores the logic state of the data terminal D. A flip-flop whose logic state changes in synchronization with an edge of the signal applied to the control terminal CK is referred to as edge-triggered, and the D-flip-flop 110a of FIG. 5 corresponds to a rising edge triggered flip-flop.

In the rising edge triggered D-flip flop 110a, the data terminal D is connected to the inverted output terminal /Q to perform a toggling operation. When the input clock signal CLKi falls and becomes a logic low, the second switch 114 is turned on and the logic state of the inverted output terminal /Q, which is opposite to that of the non-inverted output terminal Q, is applied to the data terminal D; however, the state of the flip-flop does not change. As a result, when the input clock signal CLKi rises to a logic high level, the logic state of the inverted output terminal /Q is applied to the input of the first inverter 111 to reverse the logic state of the non-inverted output terminal Q. As such, the rising edge triggered D-flip flop 110a performs a toggling operation in which the storage state is reversed from logic high to logic low or from logic low to logic high at every rising edge of the input clock signal CLKi.

Referring to FIG. 6, the falling edge triggered D-flip flop 120a includes a first inverter 121, a second inverter 122, a first switch 123 and a second switch 124, and may further include a reset switch 125 according to the embodiment.

The falling edge triggered D-flip flop 120a of FIG. 6 has a configuration similar to that of the rising edge triggered D-flip flop 110a of FIG. 5, except that the inverted signal /CLKi of the input clock signal CLKi is applied to the control terminal /CK of the first switch 123 and the input clock signal CLKi is applied to the control terminal CK of the second switch 124. That is, the flip flops of FIGS. 5 and 6 have a structure in which the clock terminals CK and /CK are inverted from each other.

Contrary to the rising edge triggered flip flop 110a of FIG. 5, which performs the toggling operation in response to the rising edge of the input clock signal CLKi, the falling edge triggered flip flop 120a of FIG. 6 performs a toggling operation in response to the falling edge of the input clock signal CLKi. When the input clock signal CLKi rises and becomes a logic high, the second switch 124 is turned on and the logic state of the inverted output terminal /Q, which is opposite to that of the non-inverted output terminal Q, is applied to the data terminal D; however, the state of the flip-flop does not change. When the input clock signal CLKi falls and becomes a logic low, the logic state of the inverted output terminal /Q is applied to the input of the first inverter 121 so that the logic state of the non-inverted output terminal Q is reversed.
Thus, the falling edge triggered D-flip flop 120a performs a toggling operation in which the storage state is reversed every falling edge of the input clock signal CLKi.\n\nThe counter circuit 100 for performing the up-counting operation or the down-counting operation described above using the flip-flops performing the toggling operation may be implemented.\n\n7 is a timing chart showing a down-counting operation of the counter circuit of FIG.\n\n1 and 7, the first bit signal D0 generated in the first counting unit 110 toggles in response to the falling edge of the input clock signal CLKi, and the second counting unit 120, The first bit signal D0 and the second bit signal D are toggled in response to the rising edge of the input clock signal CLKi so that the first bit signal D0 and the second bit signal D [ Phase difference. The upper bit signals D , D generated in the ripple counter 10 all toggle in response to the output signal of the previous stage, for example, the rising edge of the near lower bit. That is, the third bit signal D toggles in response to the rising edge of the second bit signal D , and the fourth bit signal D ). As a result, the second bit signal D and the upper bit signals D , D are sequentially multiplied and the upper three bits of the binary code D [0: 3] . The least significant bit signal D of the binary code D [0: 3] is not the signal to be actually toggled as described above, but the first bit signal D0 and the second bit D0 [ Is a signal generated by logic operation of the signal D . 7 shows a result (D ) obtained by performing an exclusive OR operation on the first bit signal D0 and the second bit signal D at the respective end points of the counting operation. 7, values of the binary codes D [0: 3] are displayed at the respective end points of the counting operation with the passage of time, and the binary codes D [0: 3] , 1110, and 1101, and as a result, the down-counting operation is performed.\n\nAs shown in FIG. 7, the counter circuit 100 according to the embodiment of the present invention counts twice every cycle period of the input clock signal CLKi as in the case of performing the up-counting operation, It can be seen that the operating speed is twice as high as that of the first embodiment.\n\nFigures 8 and 9 are circuit diagrams illustrating a counter circuit that performs a down-counting operation in accordance with embodiments of the present invention.\n\n8 and 9, the first counting unit 110, the second counting unit 120 and the ripple counter 10 receive the first bit signal D0, the second bit signal D [1 ] And the upper bit signals D , D , respectively. The first D-flip-flops 110c and 110d of the first counting unit 110 and the second D-flip-flops 120d and 120d of the second counting unit 120 receive the rising edge of the input clock signal CLKi Toggling is performed complementarily with respect to the falling edge. In the embodiments of Figures 8 and 9, the first D-flip flop 110c, 110d toggles in response to the falling edge of the input clock signal CLKi and the second D-flip flop 120c, And toggles in response to the rising edge of the input clock signal CLKi.\n\n8, the first counting unit 110 includes a falling edge triggered D-flip flop 110c and the second counting unit 120 includes a rising edge triggered D-flip flop 120c . 
The ripple counter 10 includes a plurality of rising edge triggered D-flip flops 130c and 140c for generating upper bit signals D , D .\n\n8, the first counting unit 110 includes a rising edge triggered D-flip flop 110d and the second counting unit 120 includes a falling edge triggered D- Flop 120d. The ripple counter 10 includes a plurality of falling edge triggered D-flip-flops 130d and 140d for generating upper bit signals D , D .\n\nThe ripple counter 10 of FIG. 8 is implemented as rising edge triggered D-flip flops 130c and 140c, and the non-inverting output terminal Q of the previous stage is connected to the clock terminal CK of the following stage. In this case, the output signal OUTk of the k-th (k is an integer of 2 or more) counting unit corresponds to the k-bit signal D [k]. The ripple counter 10 of FIG. 9 is implemented by falling edge triggered D-flip flops 130d and 140d different from the ripple counter of FIG. 8, whereas the inverted output terminal / Q of the previous stage To a clock terminal (CK). In this case, the output signal OUTk of the k-th counting unit corresponds to the inverted signal of the k-bit signal D [k]. As a result, the counter circuits 100c and 100d of FIGS. 8 and 9 all perform down counting operations as shown in FIG.\n\nAs described above, the rising edge triggered D-flip-flop and the falling edge triggered D-flip flop can be implemented in the same or similar configuration as in Figs. 5 and 6.\n\n10 is a timing chart showing the counting operation of the conventional counter circuit and the counter circuit according to the embodiments of the present invention.\n\nReferring to FIG. 10, the conventional counter includes the bit signals CD , CD , and CD that count values from 0000 to 1111 over 16 cycles of the input clock signal CLKi. , CD ). On the other hand, since the DDR counter 100 according to the embodiments of the present invention counts twice every cycle period of the input clock signal CLKi, the DDR counter 100 counts from 0000 to 1111 over eight cycles of the input clock signal CLKi The value can be counted. Therefore, even though the DDR counter circuit 100 according to the embodiments of the present invention has an operation speed twice that of the conventional counter and uses the input clock signal CLKi whose clock frequency is halved, Lt; / RTI &gt; The DDR counter circuit 100 according to the embodiment of the present invention can reduce the consumption power according to the decrease of the frequency of the clock signal and improve the operation margin of the counter circuit 100 and the apparatus and system including the same.\n\n11 shows the number of toggling of a conventional counter circuit and a counter circuit according to embodiments of the present invention.\n\n11, the number of toggling of each bit signal of the counter circuit 100 according to the conventional counter circuit and the embodiments of the present invention is described in the case of performing the counting operation from 0000 to 1111 shown in Fig. 10 have.\n\n11, the number of toggling of the first bit signal D0 of the DDR counter 100 according to an embodiment of the present invention is eight, and the number of toggling of the least significant bit signal CD It is found that the number of times of toggling is half that of 15 times. 
As described above, the DDR counter 100 according to the embodiment of the present invention not only can reduce the consumed power according to the decrease of the frequency of the clock signal but also has the highest toggling frequency even when the clock signal of the same frequency is used The consumption power can be further reduced by halving the number of toggling of the first bit signal D0.\n\nFIG. 12 is a circuit diagram showing an analog-to-digital converter including a counter circuit according to an embodiment of the present invention, and FIG. 35 is a flowchart showing an analog-to-digital conversion method according to an embodiment of the present invention.\n\n12, an analog-to-digital converter 200 performing an analog-to-digital conversion method according to an embodiment of the present invention includes a comparator 210, a clock input circuit 220, and a counter circuit 100e .\n\nThe comparator 210 compares the analog signal ANLG representing the physical quantity and the reference signal REF and generates a comparison signal CMP (step S210). The analog signal ANLG may represent any effective physical quantity such as intensity of light, intensity of sound, time, etc. For example, the physical quantity may correspond to the voltage level of the analog signal ANLG. In this case, in order to compare the voltage level of the analog signal ANLG, the reference signal REF may be provided as a ramp signal rising or falling with a constant slope. The comparator 210 compares the voltage level of the analog signal ANLG with the voltage level of the reference signal REF, that is, the ramp signal, to generate a comparison signal CMP that transits at a time when the voltage levels become equal. As a result, the physical quantity indicated by the voltage level of the analog signal ANLG is represented by the transition point of the comparison signal CMP, that is, the amount of time.\n\nThe clock input circuit 220 generates the input clock signal CLKi based on the clock signal CLK and the comparison signal CMP (step S220). For example, the clock input circuit 220 may be implemented as an AND gate as shown in FIG. In this case, the clock input circuit 220 outputs the clock signal CLK as the input clock signal CLKi while the comparison signal CMP maintains the logic high, and at the time when the comparison signal CMP transits to logic low The input clock signal CLKi is inactivated to terminate the counting operation of the counter circuit 100e.\n\nThe counter circuit 100e counts the input clock signal CLKi and generates digital signals D0 and D , D , and D corresponding to the analog signal ANLG. Although the counter circuit 100e for performing the up-counting operation is illustrated in FIG. 12, as described with reference to FIGS. 1 to 11, the counter circuit may be implemented to perform the down-counting operation. The counter circuit 100e includes a first counting unit 110e and a second counting unit 120e and may further include ripple counters 130e and 140e according to the number of bits of the digital signal corresponding to the counting result . The first counting unit 110e generates a first bit signal D0 that toggles in response to a first one of a rising edge and a falling edge of the input clock signal CLKi (step S230), and a second counting unit 120e generate a second bit signal D that toggles in response to the second of the rising and falling edges of the input clock signal CLKi (step S240). 
The ripple counters 130e and 140e generate the upper bit signals D[2] and D[3] that toggle in response to the second bit signal D[1] or the inverted signal of the second bit signal D[1].\n\nAlthough an example of the configuration of the counter circuit 100e for performing the up-counting operation is shown in FIG. 12 for convenience of explanation, as described above, the counter circuit 100e may be variously modified to perform the up-counting operation or the down-counting operation. The first counting unit 110e, the second counting unit 120e and the ripple counters 130e and 140e include D-flip-flops for outputting the first bit signal D0, the second bit signal D[1], and the upper bit signals D[2] and D[3], respectively. The first D-flip-flop of the first counting unit 110e and the second D-flip-flop of the second counting unit 120e toggle complementarily with respect to the rising edge and the falling edge of the input clock signal CLKi.\n\nFIG. 14 is a timing chart showing the operation of the counter circuit of FIG. 12.\n\nReferring to FIG. 14, the basic operation of the counter circuit 100e is the same as that of the counter circuit 100 of FIG. 1. For example, as shown in FIG. 14, the first bit signal D0 generated in the first counting unit 110e toggles in response to the rising edge of the input clock signal CLKi, and the second bit signal D[1] generated in the second counting unit 120e toggles in response to the falling edge of the input clock signal CLKi. The upper bit signals D[2] and D[3] generated in the ripple counters 130e and 140e are not shown for convenience.\n\nIn the configuration shown in FIG. 12, in which the input clock signal CLKi is inactivated on the basis of the comparison signal CMP, the falling edge of the comparison signal CMP indicating the ending point te of the counting operation causes a falling edge of the input clock signal CLKi, so that an error ERROR may occur in which the second bit signal D[1] is unnecessarily toggled at the ending point te of the counting operation. The second counting unit 120e of the counter circuit 100e may therefore be configured to stop toggling the second bit signal D[1] in response to the comparison signal CMP indicating the end timing te of the counting operation. To interrupt the toggling at the end timing te of the counting operation, as shown in FIG. 12, the second counting unit 120e includes a D-flip-flop 121e that performs a toggling operation in response to the input clock signal CLKi, and a feedback switch 122e. Hereinafter, the configuration and operation of the second counting unit will be described with reference to FIG. 13.\n\nFIG. 13 is a circuit diagram showing the second counting unit included in the counter circuit of FIG. 12.\n\nReferring to FIG. 13, the second counting unit 120e includes a second D-flip-flop 121e and a feedback switch 122e. The second D-flip-flop 121e includes a first inverter 121, a second inverter 122, and a first switch 123. In response to the comparison signal CMP, the feedback switch 122e selectively connects the inverted output terminal /Q or the non-inverted output terminal Q of the second D-flip-flop 121e to the data terminal D of the second D-flip-flop 121e.\n\nThe feedback switch 122e connects the inverted output terminal /Q, which is the output of the first inverter 121, to the data terminal D when the comparison signal CMP is in the first logic state (for example, logic high), in which case the second counting unit 120e performs a toggling operation.
The inverted signal /CLKi of the input clock signal CLKi is applied to the control terminal /CK of the first switch 123, so that even if the input clock signal CLKi rises to logic high, the output of the D-flip-flop 121e does not change. When the input clock signal CLKi falls to logic low, the logic state of the inverted output terminal /Q is applied to the input of the first inverter 121 so that the logic state of the non-inverted output terminal Q is reversed. Accordingly, while the inverted output terminal /Q is fed back to the data terminal D, the falling edge triggered D-flip-flop 121e performs a toggling operation in which the stored state is reversed at every falling edge of the input clock signal CLKi.\n\nThe feedback switch 122e connects the non-inverted output terminal Q, which is the output of the second inverter 122, to the data terminal D when the comparison signal CMP is in the second logic state (e.g., logic low), in which case the second counting unit 120e stops the toggling operation and maintains the stored state. As shown in FIG. 14, even if the input clock signal CLKi transitions at the falling edge of the comparison signal CMP indicating the ending point te of the counting operation, the data terminal D and the non-inverted output terminal Q have the same logic state, so the second counting unit 120e stops the toggling operation. Thus, the second bit signal D[1] remains in the logic state to which it was toggled at the last falling edge of the input clock signal CLKi, without toggling at the end timing te of the counting operation.\n\nFIG. 15 is a block diagram illustrating an apparatus including an analog-to-digital converter in accordance with an embodiment of the present invention.\n\nReferring to FIG. 15, the apparatus 300 includes a sensing unit 310, an analog-to-digital converter 200, and a control circuit 320.\n\nThe sensing unit 310 senses a physical quantity and generates an analog signal ANLG corresponding to the physical quantity. The analog-to-digital converter 200 compares the analog signal ANLG with the reference signal using at least one counter circuit to generate a digital signal DGT corresponding to the analog signal ANLG. The control circuit 320 controls the operation timing of the sensing unit 310 and the analog-to-digital converter 200.\n\nAs described above, the analog-to-digital converter 200 according to an embodiment of the present invention includes at least one DDR counter circuit that includes a first counting unit and a second counting unit, and that may further include a ripple counter depending on the number of bits of the digital signal corresponding to the counting result. For a DDR counting operation, the first counting unit generates a first bit signal that toggles in response to a first one of the rising edge and the falling edge of the input clock signal, and the second counting unit generates a second bit signal that toggles in response to the second one of the rising edge and the falling edge of the input clock signal. The ripple counter generates upper bit signals that toggle in response to the second bit signal or the inverted signal of the second bit signal.\n\nThe sensing unit 310 senses an arbitrary effective physical quantity such as intensity of light, intensity of sound, or time, converts it into an analog signal ANLG, which is an electrical signal, and outputs the analog signal ANLG.
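How the comparator, the gated input clock and the DDR counter of an analog-to-digital converter such as the converter 200 work together can be pictured with a small behavioral model. The sketch below assumes, purely for illustration, a falling ramp reference and integer steps; the function name and parameter values are not taken from the described circuits.

```python
# Minimal behavioral sketch of the counting-type (single-slope) conversion
# described above: the comparator keeps CMP high until the ramp reference
# crosses the analog level, the clock is gated by CMP, and the DDR counter
# advances twice per clock cycle. All names and values are illustrative.

def single_slope_convert(analog_level, ramp_start=15.0, ramp_step=1.0, max_cycles=16):
    count = 0
    ramp = ramp_start
    for _ in range(max_cycles):
        cmp_high = ramp > analog_level   # comparator output CMP
        if not cmp_high:                 # CMP fell: the gated clock stops, counting ends
            break
        count += 2                       # DDR counting: one count per clock edge
        ramp -= ramp_step                # ramp reference falling with a constant slope
    return count                         # digital code corresponding to the analog level

print(single_slope_convert(11.5))        # a higher level is crossed sooner -> smaller code
```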
The apparatus 300 may be implemented in various electronic devices and systems, such as image sensors including CCD (charge coupled device) image sensors and CMOS (complementary metal oxide semiconductor) image sensors, digital cameras including them, noise meters, and the like. The apparatus 300 may further include a digital signal processor (DSP) 330 for receiving and processing the digital signal DGT, and the digital signal processor 330 may be coupled to the apparatus 300.\n\nThe apparatus 300 including the analog-to-digital converter 200 according to an embodiment of the present invention may use at least one DDR counter circuit to increase the operating speed and reduce the power consumption.\n\nHereinafter, an image sensor and a correlated double sampling method, among the various electronic devices that can be implemented using a counter circuit according to an embodiment of the present invention, will be described in more detail.\n\nFIGS. 16 and 17 are block diagrams illustrating an image sensor including a common counter circuit in accordance with an embodiment of the present invention.\n\nReferring to FIG. 16, the image sensor 400 includes a pixel array 410, a driver/address decoder 420, a control circuit 430, a ramp signal generator 440, a correlated double sampling unit 450, a comparison unit 460, and a latch unit 470.\n\nIn the field of imaging devices, a CCD type or CMOS type image sensor that detects incident light as a physical quantity is used as an imaging device, and the image sensor 400 of FIG. 16 may be such a CCD image sensor or a CMOS image sensor.\n\nThe pixel array 410 includes a plurality of pixels arranged to convert the incident light into an electrical analog signal and output it by a unit element (for example, a unit pixel). In an image sensor called an active pixel sensor (APS) or a gain cell, an address is controlled with respect to a pixel portion including an array of unit pixels, and signals are read from individual, arbitrarily selected unit pixels. The APS is an example of an address-controlled imaging device, and the driver/address decoder 420 is provided for controlling the operation of the pixel array in row and/or column units. The control circuit 430 generates control signals for controlling the operation timing of each component of the image sensor 400.\n\nThe analog pixel signals read from the pixel array 410 are converted into digital signals by an analog-to-digital converter implemented by the comparison unit 460, the latch unit 470, the counter circuit 100, and the like. The correlated double sampling unit 450, the comparison unit 460, and the latch unit 470 include a plurality of CDS circuits 451, comparators 461 and latches 471, respectively, provided on a column-by-column basis.\n\nSince the analog signal output from the pixel array 410 has a variation in the reset component (or offset component) for each pixel, a valid signal component needs to be extracted by taking the difference between the signal voltage corresponding to the reset component and the signal voltage corresponding to the signal component.
Correlated double sampling (CDS) is a method of obtaining a reset component and a signal component (that is, an image signal component) when a pixel is initialized and extracting the difference as a valid signal component.\n\nThe correlated double sampling unit 450 obtains a difference between an analog voltage representing a reset component and an analog voltage representing a signal component detected through a photodiode or the like using a capacitor or a switch to perform analog double sampling (ADS) And outputs an analog voltage corresponding to a valid signal component. The comparison unit 460 compares the analog voltage output from the correlated double sampling unit 450 in units of columns and the ramp signal generated from the ramp signal generator 440 and outputs comparison signals having respective transition points according to valid signal components Output in column units. The bit signals D0, D , D , and D output from the counter circuit 100 are provided in common to the respective latches 471, Latches the bit signals D0, D , D , D output from the counter circuit 100 in response to the transition timing of the signal, and outputs the latched digital signal in units of columns .\n\nThe counter circuit 100 is implemented as a counter circuit that performs a DDR operation according to an embodiment of the present invention. As described above, the counter circuit 100 includes a first counting unit and a second counting unit. For a DDR counting operation, a first counting unit generates a first bit signal (D0) that toggles in response to a first one of a rising edge and a falling edge of an input clock signal, and the second counting unit generates a rising And a second bit signal D that toggles in response to a second one of the edge and the falling edge. The ripple counter which can be added in accordance with the number of bits of the digital signal corresponding to the counting result is toggled in response to the inverted signal of the second bit signal D or the second bit signal D And generates upper bit signals D , D .\n\nBy performing the analog-to-digital conversion operation using the DDR counter circuit 100 having twice the operation speed as compared with the conventional counter, the image sensor 400 can reduce the consumed power with the improved operation speed and operation margin .\n\n16, a DDR counter circuit according to an embodiment of the present invention is used in an image sensor 400 that performs analog double sampling. However, as described later with reference to FIGS. 17 and 18, May also be used for image sensors that perform digital double sampling (DDS). Digital double sampling refers to extracting a difference between two digital signals as valid signal components after converting an analog signal for a reset component and an analog signal for a signal component to a digital signal when the pixel is initialized.\n\nCompared with the image sensor 400 of FIG. 16, the latch unit 570 of the image sensor 500 of FIG. 17 has a configuration for performing digital double sampling. Each of the latches 571 provided on a column-by-column basis includes a first latch 572 and a second latch 573. The pixel array 510 sequentially outputs a first analog signal representing a reset component for correlated double sampling and a second analog signal representing an image signal component. 
In the first sampling process, the comparator 560 compares the first analog voltage representing the reset component with the ramp signal generated from the ramp signal generator 440, and outputs the comparison signals having the respective transition points according to the reset component, Output. The bit signals D0, D , D , D output from the counter circuit 100 are provided in common to the respective latches 571, D , D , D ) output from the counter circuit 100 in response to the transition timing of the comparison signal and outputs the digital signal related to the reset component to the first latch (572). In the second sampling process, the comparator 560 compares the second analog voltage representing the image signal component with the ramp signal generated from the ramp signal generator 440, and outputs the comparison signals having the respective transition points according to the image signal components, . The latch unit 570 latches the bit signals D0, D , D , D output from the counter circuit 100 in response to the transition timing of each comparison signal, In the second latch 573. The digital signals stored in the first latch 572 and the second latch 573 are provided to an internal circuit for performing a logical operation so that values indicating valid image signal components are calculated and digital double sampling can be performed in this manner have.\n\nThe counter circuit 100 is implemented as a counter circuit that performs a DDR operation according to an embodiment of the present invention. As described above, the counter circuit 100 includes a first counting unit and a second counting unit. For a DDR counting operation, a first counting unit generates a first bit signal (D0) that toggles in response to a first one of a rising edge and a falling edge of an input clock signal, and the second counting unit generates a rising And a second bit signal D that toggles in response to a second one of the edge and the falling edge. The ripple counter, which can be added according to the number of bits of the digital signal corresponding to the counting result, is a toggling signal in response to the inverted signal of the second bit signal D or the second bit signal D Bit signals D , D .\n\nBy performing the analog-to-digital conversion operation using the DDR counter circuit 100 having twice the operation speed as compared with the conventional counter, the image sensor 400 can reduce the consumed power with an improved operation margin. Compared to the image sensor 400 of FIG. 16, which performs analog double sampling, the image sensor 500 of FIG. 17 performs digital double sampling and therefore performs two counting operations to obtain one valid image signal component The performance improvement of the image sensor 500 exerted from the DDR counter circuit 100 is further enhanced.\n\n16 and 17, the image sensors 400 and 500 that perform correlated double sampling using a common counter circuit have been described. However, the image sensor may include a plurality of counter circuits provided on a column-by-column basis for high- May be implemented. 
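In digital double sampling as described for FIG. 17, both the reset level and the image level are converted to digital codes and the valid image component is obtained as their difference. A minimal sketch of that bookkeeping is shown below; the function, the numeric levels and the ramp parameters are illustrative assumptions only.

```python
# Sketch of digital double sampling (DDS): convert the reset level and the image
# level separately, then subtract the two codes. Values are illustrative only.

def convert(analog_level, ramp_start=15.0, ramp_step=1.0, max_count=16):
    """Single-slope conversion: count until the falling ramp crosses the level."""
    count = 0
    ramp = ramp_start
    while ramp > analog_level and count < max_count:
        count += 1
        ramp -= ramp_step
    return count

reset_code  = convert(12.0)    # first sampling: reset component, stored in the first latch
signal_code = convert(8.0)     # second sampling: reset plus image component, second latch
valid_image = signal_code - reset_code
print(reset_code, signal_code, valid_image)   # e.g. 3 7 4
```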
Hereinafter, a DDR counter circuit having an inverting function or a mode switching function according to embodiments of the present invention, which is suitable for performing digital double sampling, and an image sensor including a plurality of counter circuits provided on a column-by-column basis will be described.\n\nFIG. 18 is a block diagram illustrating an image sensor including a plurality of counter circuits according to one embodiment of the present invention.\n\nReferring to FIG. 18, the image sensor 600 includes a pixel array 610, a driver/address decoder 620, a control circuit 630, a ramp signal generator 640, a comparison unit 660, and a counting unit 680.\n\nThe pixel array 610 includes a plurality of pixels arranged to convert incident light into an electrical analog signal by a unit element (for example, a unit pixel) and output the converted electrical analog signal. The driver/address decoder 620 is provided for controlling the operation of the pixel array in row and/or column units. The control circuit 630 generates a control signal CTRL for controlling the operation timing of each component of the image sensor 600. As will be described later, the control signal CTRL generated in the control circuit 630 includes the signals INV1 and INV2 for controlling the inversion operation, or the signals HD and U/D for controlling the up/down mode switching operation.\n\nThe analog pixel signals read out from the pixel array 610 are converted into digital signals by the analog-to-digital converter implemented by the comparison unit 660 and the counting unit 680. The pixel signals are output and processed in units of columns; for this purpose, the comparison unit 660 and the counting unit 680 may include a plurality of comparators 661 and a plurality of counter circuits 700, respectively, provided on a column-by-column basis. By simultaneously processing the pixel signals of one row in parallel using the plurality of signal processing means provided for each column, the image sensor 600 can perform a high-speed operation with improved performance in terms of bandwidth and noise.\n\nThe pixel array 610 sequentially outputs a first analog signal representing a reset component for correlated double sampling and a second analog signal representing an image signal component, and the analog-to-digital converter implemented by the comparison unit 660 and the counting unit 680 performs correlated double sampling digitally, that is, digital double sampling, based on the first analog signal and the second analog signal.\n\nFIG. 36 is a flowchart illustrating a correlated double sampling method according to an embodiment of the present invention.\n\nReferring to FIGS. 18 and 36, the analog-to-digital converter implemented by the comparison unit 660 and the counting unit 680 of FIG. 18 counts the first analog signal representing the reset component (first counting step S310) and counts the second analog signal representing the signal component (second counting step S320). A digital signal corresponding to the difference between the first analog signal and the second analog signal is generated based on the first counting result and the second counting result (step S330). Each of the first counting step and the second counting step is performed in the DDR counting manner described above.
That is, in the first and second counting steps, each counter circuit 700 generates a first bit signal that toggles in response to a first one of a rising edge and a falling edge of the input clock signal (step S110) And generates a second bit signal that toggles in response to a second one of a rising edge and a falling edge of the input clock signal (step S120). Depending on the embodiment, it may generate upper bit signals that toggle in response to the inverse of the second bit signal or the second bit signal.\n\nEach counter circuit 700 stores the first counting result, performs an inversion operation or an up / down mode switching operation as described later, and then performs a second counting based on the result. Thus, the digital signal finally output in the counting unit 680 corresponds to a valid image signal compensated by correlated double sampling.\n\nBy performing correlated double sampling using a DDR counter circuit 700 having twice the operating speed as the conventional counter, the image sensor 600 can reduce the consumed power with an improved operating speed and operation margin .\n\nEach of the counter circuits 700 not only performs a DDR counting operation but also has an inversion function or a mode switching function in order to perform the digital double sampling described above. Hereinafter, a counter circuit 700 having an inversion function or a mode switching function according to embodiments of the present invention will be described.\n\n19 is a block diagram showing a counter circuit according to an embodiment of the present invention.\n\n19, the counter circuit 700 includes a first counting unit 710, a second counting unit 720, a ripple counter 70, a clock control circuit 750, and a clock input circuit 760 .\n\nAs described above, to perform the DDR counting operation, the first counting unit 710 and the second counting unit 720 receive a first bit signal (not shown) toggling complementarily with respect to the rising edge or the falling edge of the input clock signal, (D0) and a second bit signal D . The ripple counter 10 includes a plurality of cascaded counting units 730 and 740 and receives the upper bit signals D [2 ], D ). The output signal OUT2 of the second counting unit 720 is inverted by the inverted signal / D (1) of the second bit signal D or the second bit signal D ).\n\nCompared with the counter circuit 100 of Fig. 1, the counter circuit 700 of Fig. 19 further includes a clock control circuit 750 and a clock input circuit 760. Fig. The clock control circuit 750 generates the clock control signal ST based on the first bit signal D0 and the second bit signal D , and the clock input circuit 760 generates the clock control signal ST Inverts the input clock signal CLKi. The first and second bit signals D0 and D generated in the counter circuit 700 represent intermediate codes that are not final binary codes, and in a digital double sampling process including an inversion operation or a mode switching operation An error may occur. Therefore, in order to provide an accurate counting value in the digital double sampling process, it is necessary to determine whether the input clock signal CLKi is inverted according to the first counting result after the first counting step is completed and before the start of the second counting step. 
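As a concrete illustration of why D0 and D[1] are only an intermediate code: the binary least significant bit D[0] must be recovered from them by a logic operation, an exclusive OR as in the down-counting example of FIG. 7. The truth-table check below is illustrative only and is not part of the circuit description.

```python
# D0 and D[1] form an intermediate code; the binary least significant bit D[0]
# is recovered as their exclusive OR (the inverted value is obtained with an
# exclusive NOR, as in the clock control circuit described later).

for d0 in (0, 1):
    for d1 in (0, 1):
        lsb = d0 ^ d1              # D[0] = D0 XOR D[1]
        print(f"D0={d0} D[1]={d1} -> D[0]={lsb}")
```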
The clock control circuit 750 and the clock input circuit 760 are added to prevent errors in the digital double sampling process and to provide accurate counting values.\n\nDigital double sampling in the image sensor 600 of Fig. 18 is performed by using the counter circuit 700a having the inverting function shown in Fig. 20 or the counter circuit 700b having the up / down mode switching function shown in Fig. 27 .\n\n20 is a circuit diagram showing a counter circuit having an inverting function according to an embodiment of the present invention.\n\nReferring to Fig. 20, the counter circuit 700a includes a first counting unit 710a and a second counting unit 720a, and may further include ripple counters 730a and 740a, depending on the embodiment. The clock control circuit 750 and the clock input circuit 760 shown in FIG. 19 are not shown for the sake of convenience and will be described later with reference to FIG.\n\nThe first counting unit 710a includes a first D-flip flop 711 and a first multiplexer 712.\n\nThe first D-flip flop 711 generates a first bit signal D0 that toggles in response to a first edge (e.g., a rising edge) of a rising edge and a falling edge of the input clock signal CLKi . The first multiplexer 712 outputs the inverted output terminal / Q or the non-inverted output terminal Q of the first D-flip flop 711 to the first D-flip flop 711 in response to the first inverted control signal INV1. To the data terminal (D) of the data driver (711). Therefore, when the first inversion control signal INV1 is inactive (for example, logic low), the first D flip-flop 711 feeds back the inverted output terminal / Q to the data terminal D, The non-inverted output terminal Q is fed back to the data terminal D so that the toggle of the input clock signal CLKi is performed. In this case, when the first inverted control signal INV1 is activated (for example, Maintains storage regardless of ring.\n\nThe second counting unit 720a includes a second D-flip flop 721, an inverse multiplexer 722, and a feedback multiplexer 723.\n\nThe inversion multiplexer 722 selects and outputs one of the input clock signal CLKi and the second inversion control signal INV2 in response to the first inversion control signal INV1. The second inversion control signal INV2 is input to the first D flip-flop 721 after the first counting operation is completed. The second inversion control signal INV2 is used to invert the second bit signal D Is provided in such a manner that one edge is applied while the signal INV1 is activated (e.g., logic high) to indicate the inversion point. The first and second inverted signals INV1 and INV2 may be the control signal CTRL output from the control circuit 630 that controls the operation timing of the image sensor 600 of Fig. The second D-flip-flop 721 performs a toggling operation in response to the output of the inverse multiplexer 722. The second D flip-flop 721 receives a second bit signal (e.g., a falling edge) that toggles in response to a rising edge of the input clock signal CLKi and a second edge of the falling edge D ). As described above, one of the first D-flip-flop 711 and the second D-flip-flop 721 is implemented as a rising edge trigger type and the other is implemented as a falling edge trigger type so that the first bit signal D0 The second bit signal D may have a phase difference of 90 degrees. The feedback multiplexer 723 is for performing the same function as the feedback switch 122e of FIG. 
13, and redundant description will be omitted.\n\nThe ripple counter including the plurality of units 730a and 740a is configured to receive the upper bit signals D , D that toggle in response to the output signal OUT2 of the second counting unit 720a Occurs. A plurality of counting units such as a third counting unit 730a and a fourth counting unit 740a included in the ripple counter may be cascade-connected with the same configuration, and the third counting unit 730a Will be described.\n\n21 is a circuit diagram showing an example of the third counting unit in Fig.\n\n21, the third counting unit 730a includes a third D-flip flop 731 and an inverse multiplexer 732. The third D-\n\nThe inverting multiplexer 732 selects one of the output signals of the previous stage, that is, the output signal OUT2 of the second counting unit 720a and the second inverting control signal INV2 in response to the first inverting control signal INV1 Output. The inverting multiplexer 722 included in the second counting unit 720a and the inverting multiplexer 732 included in the third counting unit 730a are substantially identical to the D flip-flops 721 and 731 And the plurality of multiplexers invert the second bit signal D and the upper bit signals D , D in response to the inversion control signals INV1 and INV2 Thereby forming an inversion control section. The third D-flip flop 731 generates a third bit signal D that toggles in response to the output signal OUT2 of the previous stage in the normal counting operation. 21 shows an embodiment in which the third D flip-flop 731 is implemented as a falling edge trigger type and the output signal OUT3 corresponds to a third bit signal D . However, as described above, , The third D-flip flop 731 may be implemented as a rising edge trigger type and the output signal OUT3 may be implemented as an inverted signal of the third bit signal D , depending on the configuration of the counting units have.\n\n22 is a circuit diagram showing an example of a clock control circuit and a clock input circuit included in a counter circuit having an inverting function according to an embodiment of the present invention.\n\n22, the clock control circuit 750a generates the clock control signal ST based on the first bit signal D0 and the second bit signal D , and outputs the clock control signal ST to the clock input circuit 760a. Inverts the input clock signal CLKi in response to the clock control signal ST.\n\nThe clock control circuit 750a may be implemented including a logic gate 752 and a D-flip flop 751. The logic gate 752 logically calculates and outputs the first bit signal D0 and the second bit signal D . For example, the logic gate 752 may be an exclusive-NOR gate enabled in response to the first inverted signal INV1, where the output of the logic gate is the least significant bit signal D [ ]) / D of the input signal.\n\nThe D flip-flop 751 outputs the clock control signal ST in response to the first inverted control signal INV1 to which the output of the logic gate 752 is applied to the data terminal D and applied to the clock terminal CK Output. As a result, while the first inversion control signal INV1 is being activated, the inversion operation of the counter circuit 700a of FIG. 
20 is performed, and the logic level of the clock control signal ST is set to the first level after the first counting operation is completed Is determined according to the logic levels of the bit signal D0 and the second bit signal D .\n\nClock input circuit 760a may be implemented including multiplexer 761 and AND gate 762. The multiplexer 761 selects and outputs the clock signal CLKc or the inverted clock signal / CLKc in response to the clock control signal ST. The AND gate 762 performs logical operation on the output signal CLKm of the multiplexer 761 and the comparison signal CMP indicating the end time of the counting operation and outputs the input clock signal CLKi. As a result, either the clock signal CLKc or the inverted clock signal / CLKc is output as the input clock signal CLKi depending on the logic level of the clock control signal ST. The clock signal CLKc may be a signal activated by the count enable signal CNT_EN as described later. The AND gate 60 shown together in FIG. 22 may be included in the control circuit 630 of FIG. 18 and may be configured to receive the clock signal CLKc only when the count enable signal CNT_EN is activated, for example, Activate.\n\nFIG. 23 is a view for explaining the counting operation by the inverting function of the counter circuit of FIG. 20, and FIGS. 24 and 25 are timing charts showing the counting operation by the inverting function of the counter circuit of FIG.\n\nAs described above, Digital Double Sampling (DDS) converts a first analog signal for a reset component when a pixel is initialized and a second analog signal for a signal component (i.e., image component) into a digital signal And then extracts the difference between the two digital signals as a valid signal component.\n\nReferring to FIGS. 24 and 25, the counter circuit 700a having an inverting function of FIG. 20 includes a first counting operation (1ST COUNT) for counting a first analog signal for a reset component into a digital signal, Digital double sampling is performed by a second counting operation (2ND COUNT) for counting a second analog signal with respect to a signal component as a digital signal based on an inversion operation (INVERSION) for inverting the result, and a result of the inversion operation .\n\nThe first bit signal D0, the second bit signal D and the least significant bit signal D for each of the inversion result and the first counting in the second counting operation as a result of the first counting operation, ]) Are shown in Fig. The least significant bit signal D represents the least significant bit when the digital signal is converted into a binary code and the exclusive OR operation of the first bit signal D0 and the second bit signal D .\n\nThe first bit signal D0 and the second bit signal D generated in the DDR counter circuits 100 and 700 according to an embodiment of the present invention do not express the lower 2 bits of the binary code as they are An error may occur when the first counting operation is performed by simply inverting the result of the first counting operation. 
20, the first counting unit 710a does not include an inverting multiplexer, the first bit signal D0 remains the result of the first counting operation, and the clock control The input clock signal CLKi is inverted according to the result of the first counting operation using the circuit 750a and the clock input circuit 760a so that the second counting from the first edge of the input clock signal CLKi for all cases So that the operation is started.\n\n23, when the least significant bit signal D is logic low (i.e., '0') in the result of the first counting operation, the first counting in the second counting operation is the second bit signal D ) and if the least significant bit signal D in the result of the first counting operation is a logical high (i.e., '1'), then the first It can be seen that the counting should be to toggle the first bit signal D0.\n\nFIG. 24 shows a digital double sampling operation for the case where the least significant bit signal D is logic low in the result of the first counting operation. When the least significant bit signal D is logic low in the result of the first counting operation, the clock control signal ST output from the clock control circuit 750a of FIG. 22 is the same as that of the first inversion control signal INV1 And transitions from a logic low to a logic high in response to a rising edge. In the second counting operation, the inverted signal / CLKc of the clock signal CLKc is output as the input clock signal CLKi (that is, the input clock signal CLKi is inverted) by the clock input circuit 760a, The counting operation is initiated by toggling the second bit signal D at the first edge, or falling edge, of the input clock signal CLKi.\n\n25 shows a digital double sampling operation for the case where the least significant bit signal D is logic high in the result of the first counting operation. When the least significant bit signal D is logic high in the result of the first counting operation, the clock control signal ST output from the clock control circuit 750a of FIG. 22 is the same as that of the first inversion control signal INV1 A logic low is maintained even if the rising edge is applied. In the second counting operation, the clock signal CLKc is directly output as the input clock signal CLKi by the clock input circuit 760a, and the second counting operation is performed at the first edge of the input clock signal CLKi, Is started by toggling the first bit signal D0.\n\nAs described above, by using the clock control circuit 750a and the clock input circuit 760a, before the start of the second counting operation after the first counting operation is completed, the first bit signal D0 and the second bit signal D [ 1]), it is possible to prevent an error in the digital double sampling process.\n\n26 is a timing diagram showing a correlated double sampling operation of the image sensor including the counter circuit having the reversal function of FIG. 26 shows a correlated double sampling operation for one column.\n\nAt time t11, the count enable signal CNT_EN provided in the control circuit 630 of the image sensor 600 is activated to a logical high, and in response to the enable signal CNT_EN, the ramp signal generator 640 generates a ramp signal Lt; RTI ID = 0.0 &gt; (RAMP). &Lt; / RTI &gt; In this manner, in each of the counter circuits 700a included in the counting unit 680, the first counting operation is started on a column by column basis. 
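The rule of FIG. 23 can be restated as a clock-selection function: the value of D[0] after the first counting operation determines whether CLKc or its inverted version is used for the second counting operation, so that the proper bit toggles first. The sketch below only restates this rule with assumed names; it is not the gate-level circuit of FIG. 22.

```python
# Clock-selection rule for the inversion-based digital double sampling (FIG. 23):
# after the first count, D[0] = XOR(D0, D[1]) decides which clock phase feeds the
# second count. Names are assumptions for illustration only.

def second_count_clock(d0, d1):
    lsb = d0 ^ d1                  # least significant bit of the first counting result
    if lsb == 0:
        return "/CLKc"             # inverted clock: second count starts by toggling D[1]
    return "CLKc"                  # non-inverted clock: second count starts by toggling D0

print(second_count_clock(1, 1))    # D[0] = 0 -> '/CLKc'
print(second_count_clock(1, 0))    # D[0] = 1 -> 'CLKc'
```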
At this time, the pixel voltage signal Vpix is supplied to the comparator 661 as the first analog signal representing the reset component.\n\nAt time t12, the voltage levels of the ramp signal RAMP and the pixel voltage signal Vpix become equal, the comparison signal CMP output from the comparator 661 transitions to logic low, and the counting operation is ended. Although not shown in FIG. 26, the input clock signal CLKi provided on the basis of the clock signal CLKc is inactivated in response to the comparison signal CMP (see FIGS. 24 and 25), and the counter circuit 700a stores the result (3) of the first counting operation corresponding to the reset component Vrst.\n\nAt time t13, when the count enable signal CNT_EN is deactivated to logic low, the ramp signal generator 640 is disabled. The interval from time t11 to time t13 represents the maximum interval for counting the reset component and can be set to correspond to the number of clock cycles suitable for the characteristics of the image sensor.\n\nAt time t14, when the second inversion control signal INV2 transitions to logic low while the first inversion control signal INV1 is activated to logic high, the inversion control section including the plurality of inversion multiplexers 722 and 732 applies the falling edge of the second inversion control signal INV2 to the clock terminals of the D-flip-flops 721 and 731 included in the second counting unit 720a and the ripple counters 730a and 740a, so that the second bit signal D[1] and the upper bit signals D[2] and D[3] are inverted. The result (-4) of the inversion operation is stored in the counter circuit 700a. As described above, in response to the rising edge of the first inversion control signal INV1, the clock control circuit 750a and the clock input circuit 760a determine whether the input clock signal CLKi is to be inverted for the second counting operation.\n\nAt time t15, the count enable signal CNT_EN is activated again to logic high, and in response to the enable signal CNT_EN the ramp signal generator 640 begins to decrease the voltage level of the ramp signal RAMP, so that the second counting operation is started on a column-by-column basis in each of the counter circuits 700a. At this time, the pixel voltage signal Vpix is provided to the comparator 661 as the second analog signal representing the image signal component.\n\nAt time t16, the voltage levels of the ramp signal RAMP and the pixel voltage signal Vpix become equal, and the comparison signal CMP output from the comparator 661 transitions to logic low, thereby completing the second counting operation. The input clock signal CLKi provided on the basis of the clock signal CLKc is inactivated in response to the comparison signal CMP (see FIGS. 24 and 25), and the counter circuit 700a finally stores the value (Vsig - 1 = 3) corresponding to the difference between the first analog signal representing the reset component (Vrst = 3) and the second analog signal representing the image signal component (Vrst + Vsig = 7); this value is output as a digital signal represented by the first bit signal D0, the second bit signal D[1] and the upper bit signals D[2] and D[3].
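The arithmetic of this sequence (a first count of 3, inversion to -4, then a second count of 7) can be checked in a few lines. The 4-bit width below is an assumption made only for the illustration.

```python
# Two's-complement bookkeeping implied by the inversion-based digital double
# sampling above, assuming a 4-bit counter (values wrap modulo 16).

BITS = 4
MASK = (1 << BITS) - 1

first_count = 3                        # first counting result for the reset component Vrst = 3
inverted = (~first_count) & MASK       # bitwise inversion: 0011 -> 1100, i.e. -4 in two's complement
second_count = 7                       # second counting of the signal level Vrst + Vsig = 7
result = (inverted + second_count) & MASK

print(inverted - (1 << BITS))          # -4: the value stored after the inversion operation
print(result)                          # 3 = Vsig - 1, the off-by-one discussed next
```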
There is a difference of 1 due to the inversion operation between the effective image signal component Vsig and the final output value Vsig-1 of the counter circuit 700a, but this difference is common to all the columns, ) Can be canceled in the subsequent signal processing process.\n\nAt time t17, when the count enable signal CNT_EN is deactivated to a logic low, the ramp signal generator 640 is disabled. The interval from time t15 to time t17 represents a maximum interval for counting image signal components and may be set to correspond to the number of clock cycles suitable for the characteristics of the image sensor.\n\nIn this manner, the image sensor 600 can perform digital correlation double sampling using the counter circuit 700a having an inverting function. By using the DDR counter circuit 700a having the inverting function, the image sensor 600 can improve the operation margin and reduce the consumed power as the operation speed increases. In addition, the counter circuit 700a having an inverting function can improve the performance of the image sensor 600 including the structure in which an error in the digital double sampling process is prevented and a precise count value can be provided .\n\n27 is a circuit diagram showing a counter circuit having a mode switching function according to an embodiment of the present invention.\n\n27, the counter circuit 700b includes a first counting unit 710b, a second counting unit 720b, and ripple counters 730b and 740b. The clock control circuit 750 and the clock input circuit 760 shown in FIG. 19 are not shown for the sake of convenience, and will be described later with reference to FIG.\n\nThe first counting unit 710b includes a first D-flip flop 715 and a first multiplexer 716.\n\nThe first D-flip-flop 715 generates a first bit signal D0 that toggles in response to a first edge (e.g., a rising edge) of the rising and falling edges of the input clock signal CLKi . The first multiplexer 716 outputs the inverted output terminal / Q or the non-inverted output terminal Q of the first D-flip flop 715 to the first D-flip flop 715 in response to the first mode control signal HD. To the data terminal (D) of the data driver (715). For example, when the first mode control signal HD is logic low, the first D-flip flop 715 feeds back the inverted output terminal / Q to the data terminal D to perform a toggling operation Inverted output terminal Q is fed back to the data terminal D to maintain the stored state regardless of the toggling of the input clock signal CLKi when the first mode control signal HD is logic high.\n\nThe second counting unit 720b includes a second D-flip flop 725, an output multiplexer 726, a feedback multiplexer 727, and an OR gate 728.\n\nThe second D-flip-flop 725 receives a second bit signal D that toggles in response to a rising edge of the input clock signal CLKi and a second one of the falling edges (e.g., a falling edge) . 
As described above, one of the first D-flip-flop 715 and the second D-flip-flop 725 is implemented as a rising edge trigger type and the other as a falling edge trigger type so that the first bit signal D0 and The second bit signal D may have a phase difference of 90 degrees.\n\nThe output multiplexer 726 outputs one of the signal of the non-inverting output terminal Q of the second D-flip flop 725 or the signal of the inverted output terminal / Q in response to the second mode control signal U / As the output signal OUT2, and outputs it to the third counting unit 730b corresponding to the rear stage of the second counting unit 720b. For example, when the second mode control signal U / D is logic high, the signal of the inverted output terminal / Q of the second D flip-flop 725 becomes the output signal OUT2, As described, the second counting unit 720b and the ripple counters 730b and 740b perform down counting operations. When the second mode control signal U / D is logic low, the signal of the non-inverted output terminal Q of the second D flip-flop 725 becomes the output signal OUT2, and as described in FIG. 3, The second counting unit 720b and the ripple counters 730b and 740b perform an up-counting operation.\n\nThe OR gate 728 performs a logic operation on the inverted signal of the comparison circuit CMP indicating the ending point of the counting operation and the first mode control signal HD. The output of the OR gate 728 becomes logic high when the comparison signal CMP is logic low or the first mode signal HD is logic high and becomes logic low in the remaining cases. The feedback multiplexer 727 outputs the inverted output terminal / Q or the non-inverted output terminal Q of the second D-flip flop 725 to the second D-flip flop 725 in response to the output of the OR gate 728. [ To a data terminal (D) For example, the second D-flip-flop 725 performs a toggling operation by feeding back the inverted output terminal / Q to the data terminal D when the output of the OR gate 728 is logic low, When the output of the OR gate 728 is logic high, the non-inverted output terminal Q is fed back to the data terminal D to maintain the stored state regardless of the toggling of the input clock signal CLKi. The OR gate 728 is intended to implement both error prevention at the end of counting and error prevention at the mode inversion operation described with reference to FIG. Depending on the embodiment, the OR gate 728 may be omitted and a first mode signal HD may be provided for control of the feedback multiplexer 727. [\n\nThe ripple counter including the plurality of units 730b and 740b receives the upper bit signals D , D that toggle in response to the output signal OUT2 of the second counting unit 720b Occurs. A plurality of counting units such as the third counting unit 730b and the fourth counting unit 740b included in the ripple counter may be cascade-connected with the same configuration, and the third counting unit 730b Will be described.\n\n28 is a circuit diagram showing an example of the third counting unit in Fig.\n\n28, the third counting unit 730b includes a third D-flip-flop 735, an output multiplexer 736, and a feedback multiplexer 737. 
The third D-\n\nThe output multiplexer 736 outputs one of the signal of the non-inverting output terminal Q of the third D-flip flop 735 or the signal of the inverted output terminal / Q in response to the second mode control signal U / As the output signal OUT3 and outputs it to the fourth counting unit 740b corresponding to the succeeding stage of the third counting unit 730b. The output multiplexer 736 included in the third counting unit 730b of Fig. 28 is the same in configuration as the output multiplexer 726 included in the second counting unit 720b of Fig. 27, and a plurality of such output multiplexers 726 and 736 form a mode switching control section for controlling an up-counting operation or a down-counting operation of the counter circuit 730b. As a result, in response to the second mode control signal U / D, the mode switching control section selects one of the signal of the non-inverting output terminal Q of the previous stage or the signal of the inverting output terminal / Q of the previous stage, So as to control the up-counting operation or the down-counting operation of the counter circuit 730b.\n\nFor example, when the second mode control signal U / D is logic high, the signal of the inverted output terminal / Q of the second D flip-flop 725 becomes the output signal OUT2, As described, the second counting unit 720b and the ripple counters 730b and 740b perform down counting operations. When the second mode control signal U / D is logic low, the signal of the non-inverted output terminal Q of the second D flip-flop 725 becomes the output signal OUT2, and as described in FIG. 3, The second counting unit 720b and the ripple counters 730b and 740b perform an up-counting operation.\n\nThe feedback multiplexer 737 outputs the inverted output terminal / Q or the non-inverted output terminal Q of the third D flip-flop 735 to the third D flip-flop 735 in response to the first mode control signal HD 735 to the data terminal D of the memory cell array. For example, when the first mode control signal HD is logic low, the third D flip-flop 735 feeds back the inverted output terminal / Q to the data terminal D to perform a toggling operation Inverted output terminal Q is fed back to the data terminal D to maintain the stored state irrespective of the toggling of the output signal OUT2 of the previous stage when the first mode control signal HD is logic high.\n\nThe third D-flip-flop 735 generates a third bit signal D that toggles in response to the output signal OUT2 of the previous stage in the normal counting operation. 28 shows an embodiment in which the third D flip-flop 735 is implemented as a falling edge trigger type. However, as described above, according to the configuration of the counting units, the third D flip- And the output signal OUT3 is output in accordance with the inverted signal of the second mode control signal U / D, the up-counting operation or the down-counting operation can be performed in the same manner as in the configuration of Fig. 28 .\n\n29 is a circuit diagram showing an example of a clock control circuit and a clock input circuit included in a counter circuit having a mode switching function according to an embodiment of the present invention.\n\n29, the clock control circuit 750b generates the clock control signal ST based on the first bit signal D0 and the second bit signal D , and outputs the clock control signal ST to the clock input circuit 760b. 
Inverts the input clock signal CLKi in response to the clock control signal ST.\n\nThe clock control circuit 750b may be implemented including a logic gate 753 and a D-flip flop 751. The logic gate 753 logically calculates and outputs the first bit signal D0 and the second bit signal D . For example, the logic gate 752 may be an exclusive-OR gate enabled in response to the first mode control signal HD, in which case the output of the logic gate is the least significant bit signal D [ 0]).\n\nThe D-flip flop 751 outputs the clock control signal ST in response to the first mode control signal HD applied to the data terminal D and the output of the logic gate 753 is applied to the clock terminal CK Output. As a result, the mode switching operation of the counter circuit 700b of FIG. 27 is performed while the first mode control signal HD is activated, and the logic level of the clock control signal ST is set to the first level after the first counting is completed Is determined according to the logic levels of the bit signal D0 and the second bit signal D .\n\nThe clock input circuit 760b and the AND gate 60 are substantially the same as those of the clock input circuit 760a and the AND gate 60 of FIG. 22, and redundant description is omitted.\n\nFig. 30 is a view for explaining the counting operation by the mode switching function of the counter circuit of Fig. 27, and Figs. 31 and 32 are timing diagrams showing the counting operation by the mode switching function of the counter circuit of Fig.\n\nAs described above, Digital Double Sampling (DDS) converts a first analog signal for a reset component when a pixel is initialized and a second analog signal for a signal component (i.e., image component) into a digital signal And then extracts the difference between the two digital signals as a valid signal component.\n\nReferring to FIGS. 31 and 32, the counter circuit 700a having an inverting function of FIG. 27 includes a first counting operation (1ST COUNT) for counting a first analog signal with respect to a reset component into a digital signal, Digital double sampling is performed by a second counting operation (2ND COUNT) for counting the second analog signal for the signal component into a digital signal based on the result of the counting operation. For example, as shown in FIGS. 31 and 32, the first counting operation may be a down-counting operation and the second counting operation may be an up-counting operation.\n\nFor each of the result of the first counting operation and the first counting in the second counting operation, the first bit signal D0, the second bit signal D and the least significant bit signal D The values are shown in FIG. The least significant bit signal D represents the least significant bit when a digital signal is converted into a binary code. The least significant bit signal D .\n\nThe first bit signal D0 and the second bit signal D generated in the DDR counter circuits 100 and 700 according to an embodiment of the present invention do not express the lower 2 bits of the binary code as they are , An error may occur when merely performing the second counting operation from the result of the first counting operation. In order to prevent such an error, the clock control circuit 750b and the clock input circuit 760b are used to invert the input clock signal CLKi according to the result of the first counting operation so that the input clock signal CLKi To start the second counting operation.\n\nReferring to FIG. 
30, if the least significant bit signal D[0] is logic low (i.e., '0') in the result of the first counting operation, the first count in the second counting operation should toggle the first bit signal D0, and if the least significant bit signal D[0] is logic high (i.e., '1') in the result of the first counting operation, the first count in the second counting operation should toggle the second bit signal D[1].\n\nFIG. 31 shows the digital double sampling operation for the case where the least significant bit signal D[0] is logic low in the result of the first counting operation. In the first counting operation, down-counting is performed because the second mode control signal U/D is logic high, and in the second counting operation the second mode control signal U/D transitions to logic low so that up-counting is performed. In the first counting operation, the clock control signal ST output from the clock control circuit 750b of FIG. 29 is logic high, and therefore the inverted signal /CLKc of the clock signal CLKc is output as the input clock signal CLKi. When the least significant bit signal D[0] is logic low in the result of the first counting operation, the clock control signal ST transitions from logic high to logic low in response to the rising edge of the first mode control signal HD. Thus, in the second counting operation, in contrast to the first counting operation, the clock signal CLKc is output as the input clock signal CLKi, and the second counting operation is started at the first edge of the input clock signal CLKi by toggling the first bit signal D0.\n\nFIG. 32 shows the digital double sampling operation for the case where the least significant bit signal D[0] is logic high in the result of the first counting operation. When the least significant bit signal D[0] is logic high in the result of the first counting operation, the clock control signal ST output from the clock control circuit 750b of FIG. 29 remains logic high even when the rising edge of the first mode control signal HD is applied. Therefore, in the second counting operation, the inverted signal /CLKc of the clock signal CLKc is output as the input clock signal CLKi in the same manner as in the first counting operation, and the second counting operation is started at the first edge of the input clock signal CLKi, that is, the falling edge, by toggling the second bit signal D[1].\n\nAs described above, by using the clock control circuit 750b and the clock input circuit 760b to decide, after the first counting operation is completed and before the second counting operation starts, whether the input clock signal is inverted according to the first bit signal D0 and the second bit signal D[1], it is possible to prevent an error in the digital double sampling process.\n\nFIG. 33 is a timing diagram showing a correlated double sampling operation of the image sensor including the counter circuit having the mode switching function of FIG. 27. FIG. 33 shows the correlated double sampling operation for one column.\n\nAt time t21, the count enable signal CNT_EN provided from the control circuit 630 of the image sensor 600 is activated to logic high, and in response to the enable signal CNT_EN the ramp signal generator 640 begins to decrease the voltage level of the ramp signal RAMP. In this manner, the first counting operation, that is, the down-counting operation, is started on a column-by-column basis in each of the counter circuits 700b included in the counting unit 680.
At this time, the pixel voltage signal Vpix is supplied to the comparator 661 as a first analog signal indicating a reset component.\n\nAt time t22, the voltage levels of the ramp signal RAMP and the pixel voltage signal Vpix become equal, and the comparison signal CMP output from the comparator 661 transits to a logic low state and the counting operation is ended. 33, the input clock signal CLKi provided on the basis of the clock signal CLKc in response to the comparison signal CMP is inactivated (see FIGS. 31 and 32), and the counter circuit 700b is reset The result (-3) of the first counting operation corresponding to the component (Vrst) is stored.\n\nAt time t23, when the count enable signal CNT_EN is deactivated to a logic low, the ramp signal generator 640 is disabled. The interval from the time t21 to the time t23 represents the maximum interval for counting the reset component and can be set to correspond to the number of clock cycles suitable for the characteristics of the image sensor.\n\nAt time t24, when the second mode control signal U / D transitions from a logic high to a logic low, the mode switching control section including the plurality of output multiplexers 726 and 736 outputs the output signals of the respective counting units The mode switching operation is performed by setting the inverted output terminal (/ Q) or the non-inverted output terminal (Q) to the reverse of the first counting operation.\n\nAt time t25, the count enable signal CNT_EN is activated again to a logic high, and in response to the enable signal CNT_EN, the ramp signal generator 640 begins to decrease the voltage level of the ramp signal RAMP, The second counting operation, that is, the up counting operation, is started on a column by column basis in the counter circuit 700b. At this time, the pixel voltage signal Vpix is provided to the comparator 661 as a second analog signal representing an image signal component.\n\nAt time t26, the voltage levels of the ramp signal RAMP and the pixel voltage signal Vpix become equal, and the comparison signal CMP output from the comparator 661 transits to a logic low state, thereby terminating the second counting operation. The input clock signal CLKi provided on the basis of the clock signal CLKc in response to the comparison signal CMP is deactivated (see FIGS. 31 and 32), and finally the counter circuit 700b receives the reset component Vrst = 3 (Vsig = 4) corresponding to the difference between the first analog signal representing the image signal component (Vrst + Vsig = 7) and the second analog signal representing the image signal component (Vrst + Vsig = 7) Is expressed as a digital signal represented by the first bit signal D0, the second bit signal D and the upper bit signals D , D .\n\nAt time t27, when the count enable signal CNT_EN is deactivated to a logic low, the ramp signal generator 640 is disabled. The interval from the time t25 to the time t27 represents a maximum interval for counting image signal components and can be set to correspond to the number of clock cycles suitable for the characteristics of the image sensor.\n\nIn this manner, the image sensor 600 can perform digital correlation double sampling using the counter circuit 700b having a mode switching function. By using the DDR counter circuit 700b having the up / down mode switching function, the image sensor 600 can improve the operation margin and reduce the consumed power as the operation speed increases. 
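The net effect of the two counting phases walked through above (a down count to -3 for the reset level, then an up count across Vrst + Vsig = 7 ending at +4) is a subtraction carried out entirely inside the counter. A toy model makes this explicit; the counter is reduced to a plain signed integer and the numbers are taken from the FIG. 33 example, so this is not the hardware implementation, only the arithmetic it performs:

```
#include <cassert>

// Toy model of the digital correlated double sampling sequence of FIG. 33.
// The real counter keeps its value in the D0/D1/upper-bit format; here it is
// reduced to a signed integer, since only the arithmetic is of interest.
int digitalDoubleSample(int resetLevel, int resetPlusSignalLevel) {
    int counter = 0;
    counter -= resetLevel;            // 1st phase: down-count until RAMP meets Vrst
    counter += resetPlusSignalLevel;  // 2nd phase: up-count until RAMP meets Vrst + Vsig
    return counter;                   // the reset component cancels, leaving Vsig
}

int main() {
    // Numbers from the walkthrough: Vrst = 3, Vrst + Vsig = 7, so Vsig = 4.
    assert(digitalDoubleSample(3, 7) == 4);
}
```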
In addition, the counter circuit 700b having a mode switching function is provided with a configuration capable of preventing an error in the digital double sampling process and providing a precise count value, so that the performance of the image sensor 600 including the counter circuit can be improved have.\n\nThe present invention can be usefully used in an apparatus and a system including a counter circuit. Particularly, the present invention can be more effectively used for an image sensor requiring a high operating speed and low power consumption, a portable electronic device such as a camera including the same, and the like.\n\nWhile the present invention has been described with reference to the preferred embodiments thereof, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit and scope of the invention as defined in the appended claims. It will be understood.\n\n1 is a block diagram showing a counter circuit according to an embodiment of the present invention.\n\n2 is a timing chart showing an up-counting operation of the counter circuit of FIG.\n\nFigures 3 and 4 are circuit diagrams illustrating a counter circuit for performing an up-counting operation in accordance with embodiments of the present invention.\n\nFigures 5 and 6 are circuit diagrams illustrating flip-flops that perform a toggling operation.\n\n7 is a timing chart showing a down-counting operation of the counter circuit of FIG.\n\nFigures 8 and 9 are circuit diagrams illustrating a counter circuit that performs a down-counting operation in accordance with embodiments of the present invention.\n\n10 is a timing chart showing the counting operation of the conventional counter circuit and the counter circuit according to the embodiments of the present invention.\n\n11 shows the number of toggling of a conventional counter circuit and a counter circuit according to embodiments of the present invention.\n\n12 is a circuit diagram showing an analog-to-digital converter including a counter circuit according to an embodiment of the present invention.\n\n13 is a circuit diagram showing a second counting unit included in the counter circuit of Fig.\n\n14 is a timing chart showing the operation of the counter circuit of Fig.\n\n15 is a block diagram illustrating an apparatus including an analog-to-digital converter in accordance with an embodiment of the present invention.\n\nFigures 16 and 17 are block diagrams illustrating an image sensor including a common counter circuit in accordance with an embodiment of the present invention.\n\n18 is a block diagram illustrating an image sensor including a plurality of counter circuits according to one embodiment of the present invention.\n\n19 is a block diagram showing a counter circuit according to an embodiment of the present invention.\n\n20 is a circuit diagram showing a counter circuit having an inverting function according to an embodiment of the present invention.\n\n21 is a circuit diagram showing an example of the third counting unit in Fig.\n\n22 is a circuit diagram showing an example of a clock control circuit and a clock input circuit included in a counter circuit having an inverting function according to an embodiment of the present invention.\n\nFIG. 23 is a diagram for explaining the counting operation by the inverting function of the counter circuit of FIG. 
20; FIG.\n\n24 and 25 are timing charts showing the counting operation by the inverting function of the counter circuit of Fig.\n\n26 is a timing diagram showing a correlated double sampling operation of the image sensor including the counter circuit having the reversal function of FIG.\n\n27 is a circuit diagram showing a counter circuit having a mode switching function according to an embodiment of the present invention.\n\n28 is a circuit diagram showing an example of the third counting unit in Fig.\n\n29 is a circuit diagram showing an example of a clock control circuit and a clock input circuit included in a counter circuit having a mode switching function according to an embodiment of the present invention.\n\nFig. 30 is a diagram for explaining the counting operation by the mode switching function of the counter circuit of Fig. 27; Fig.\n\nFigs. 31 and 32 are timing charts showing the counting operation by the mode switching function of the counter circuit of Fig.\n\n33 is a timing diagram showing a correlated double sampling operation of the image sensor including the counter circuit having the mode switching function of FIG. 27;\n\n34 is a flowchart illustrating a counting method according to an embodiment of the present invention.\n\n35 is a flowchart showing an analog-digital conversion method according to an embodiment of the present invention.\n\n36 is a flowchart illustrating a correlated double sampling method according to an embodiment of the present invention.\n\nDescription of the Related Art\n\n100, 700: a counter circuit 110, 710: a first counting unit\n\n120, 720: second counting unit 10, 70: ripple counter\n\n122e: feedback switch 400, 500, 600: image sensor\n\n750: clock control circuit 760: clock input circuit\n\nD0: first bit signal D : second bit signal\n\nD : least significant bit signal D , D : upper bit signals\n\n## Claims (26)\n\n1. A first counting unit for generating a first bit signal toggling in response to a first one of a rising edge and a falling edge of an input clock signal; And\nAnd a second counting unit for generating a second bit signal toggling in response to a second one of a rising edge and a falling edge of the input clock signal,\nAnd the second counting unit includes a feedback switch for interrupting the toggling of the second bit signal in response to a comparison signal indicating an end time of the counting operation.\n2. The method according to claim 1,\nFurther comprising a ripple counter for generating upper bit signals toggling in response to the second bit signal or an inverted signal of the second bit signal.\n3. The method according to claim 1,\nAnd counting the input clock signal twice every cycle period of the input clock signal.\n4. The method according to claim 1,\nAnd a code converter for generating a least significant bit of the binary code based on the first bit signal and the second bit signal.\n5. 3. The method of claim 2,\nWherein the first counting unit, the second counting unit and the ripple counter comprise a plurality of D flip-flops each outputting the first bit signal, the second bit signal and the upper bit signals,\nWherein the first D flip-flop of the first counting unit and the second D flip-flop of the second counting unit complementarily toggle with respect to a rising edge or a falling edge of the input clock signal. Circuit.\n6. 6. 
The method of claim 5,\nWherein the first counting unit comprises a rising edge triggered D-flip flop,\nWherein the second counting unit comprises a falling edge triggered D-flip flop,\nWherein the counter circuit performs an up-counting operation.\n7. 6. The method of claim 5,\nWherein the first counting unit comprises a falling edge triggered D-flip flop,\nThe second counting unit includes a rising edge triggered D-flip flop,\nWherein the counter circuit performs a down-counting operation.\n8. delete\n9. A first counting unit for generating a first bit signal toggling in response to a first one of a rising edge and a falling edge of an input clock signal;\nA second counting unit for generating a second bit signal toggling in response to a second one of a rising edge and a falling edge of the input clock signal;\nA ripple counter for generating upper bit signals toggling in response to an inverted signal of the second bit signal or the second bit signal;\nA clock control circuit for generating a clock control signal based on the first bit signal and the second bit signal; And\nAnd a clock input circuit for inverting the input clock signal in response to the clock control signal.\n10. 10. The clock control circuit according to claim 9,\nA logic gate for logically calculating the first bit signal and the second bit signal and outputting the result; And\nAnd a D-flip flop for outputting the clock control signal in response to a control signal applied to an output of the logic gate and a clock terminal applied to a data terminal.\n11. 10. The semiconductor memory device according to claim 9,\nA multiplexer for selecting and outputting a clock signal or an inverted clock signal in response to the clock control signal; And\nAnd a logical product gate for logically calculating an output signal of the multiplexer and a comparison signal indicating an ending point of the counting operation and outputting the input clock signal.\n12. 10. The method of claim 9,\nFurther comprising an inversion control section for inverting the second bit signal and the upper bit signals in response to an inversion control signal.\n13. 13. The apparatus according to claim 12,\nAnd a plurality of multiplexers for selecting one of an output signal of the previous stage and a second inverted control signal indicating the inverted timing in response to the first inverted control signal.\n14. 10. The method of claim 9,\nFurther comprising a mode switching control section for controlling an up-counting operation or a down-counting operation of said counter circuit in response to a mode control signal.\n15. 15. The apparatus according to claim 14,\nAnd a plurality of multiplexers for selecting one of the signals of the non-inverted output terminal of the previous stage or the inverted output terminal of the previous stage in response to the mode control signal, and outputting the selected signal to the subsequent stage.\n16. 
A comparator that compares an analog signal representing a physical quantity and a reference signal to generate a comparison signal;\nA clock input circuit for generating an input clock signal based on the clock signal and the comparison signal; And\nAnd a counter circuit for counting the input clock signal to generate a digital signal corresponding to the analog signal,\nWherein the counter circuit comprises:\nA first counting unit for generating a first bit signal toggling in response to a first one of a rising edge and a falling edge of the input clock signal; And\nAnd a second counting unit for generating a second bit signal toggling in response to a second one of a rising edge and a falling edge of the input clock signal,\nWherein the second counting unit comprises a feedback switch for interrupting the toggling of the second bit signal in response to a comparison signal indicating an end time of the counting operation.\n17. delete\n18. delete\n19. A sensing unit for sensing a physical quantity and generating an analog signal corresponding to the physical quantity;\nAn analog-to-digital converter for comparing the analog signal with a reference signal and using at least one counter circuit to generate a digital signal corresponding to the analog signal; And\nAnd a control circuit for controlling operations of the sensing unit and the analog-to-digital converter,\nWherein the counter circuit comprises:\nA first counting unit for generating a first bit signal toggling in response to a first one of a rising edge and a falling edge of an input clock signal; And\nAnd a second counting unit for generating a second bit signal toggling in response to a second one of a rising edge and a falling edge of the input clock signal,\nAnd the second counting unit comprises a feedback switch for interrupting the toggling of the second bit signal in response to a comparison signal indicating the end of the counting operation.\n20. delete\n21. delete\n22. Generating a first bit signal that toggles in response to a first one of a rising edge and a falling edge of the input clock signal;\nGenerating a second bit signal toggling in response to a second one of a rising edge and a falling edge of the input clock signal; And\nAnd stopping the toggling of the second bit signal in response to a comparison signal indicating an end time of the counting operation.\n23. delete\n24. Comparing an analog signal representing a physical quantity and a reference signal to generate a comparison signal;\nGenerating an input clock signal based on the clock signal and the comparison signal;\nGenerating a first bit signal toggling in response to a first one of a rising edge and a falling edge of the input clock signal;\nGenerating a second bit signal toggling in response to a second one of a rising edge and a falling edge of the input clock signal; And\nAnd stopping toggling of the second bit signal in response to a comparison signal indicative of an end time of the counting operation.\n25. 
A first counting step of counting a first analog signal representing a reset component;\nA second counting step of counting a second analog signal representing a signal component; And\nGenerating a digital signal corresponding to a difference between the first analog signal and the second analog signal based on the first counting result and the second counting result,\nWherein each of the first counting step and the second counting step includes:\nGenerating a first bit signal that toggles in response to a first one of a rising edge and a falling edge of the input clock signal;\nGenerating a second bit signal toggling in response to a second one of a rising edge and a falling edge of the input clock signal; And\nAnd stopping toggling the second bit signal in response to a comparison signal indicative of an end time of the counting operation.\n26. delete\nKR1020090011692A 2009-02-13 2009-02-13 Counter Circuit, Device Including the Same, and Counting Method KR101621244B1 (en)\n\n## Priority Applications (1)\n\nApplication Number Priority Date Filing Date Title\nKR1020090011692A KR101621244B1 (en) 2009-02-13 2009-02-13 Counter Circuit, Device Including the Same, and Counting Method\n\n## Applications Claiming Priority (2)\n\nApplication Number Priority Date Filing Date Title\nKR1020090011692A KR101621244B1 (en) 2009-02-13 2009-02-13 Counter Circuit, Device Including the Same, and Counting Method\nUS12/590,830 US7990304B2 (en) 2009-02-13 2009-11-13 Double data rate (DDR) counter, analog-to-digital converter (ADC) using the same, CMOS image sensor using the same and methods in DDR counter, ADC and CMOS image sensor\n\n## Publications (2)\n\nPublication Number Publication Date\nKR20100092542A KR20100092542A (en) 2010-08-23\nKR101621244B1 true KR101621244B1 (en) 2016-05-16\n\n# Family\n\n## Family Applications (1)\n\nApplication Number Title Priority Date Filing Date\nKR1020090011692A KR101621244B1 (en) 2009-02-13 2009-02-13 Counter Circuit, Device Including the Same, and Counting Method\n\n## Country Status (2)\n\nUS (1) US7990304B2 (en)\nKR (1) KR101621244B1 (en)\n\n## Families Citing this family (23)\n\n* Cited by examiner, † Cited by third party\nPublication number Priority date Publication date Assignee Title\nJP4853445B2 (en) * 2007-09-28 2012-01-11 ソニー株式会社 A / D conversion circuit, solid-state imaging device, and camera system\nJP5254140B2 (en) * 2009-07-14 2013-08-07 株式会社東芝 A / D converter and solid-state imaging device including the same\nKR101647366B1 (en) * 2009-09-25 2016-08-11 삼성전자주식회사 Counter Circuit, Device Including the Same, and Counting Method\nJP5481221B2 (en) * 2010-02-08 2014-04-23 パナソニック株式会社 Solid-state imaging device and AD conversion method\nJP2012161061A (en) * 2011-02-03 2012-08-23 Toshiba Corp Digital filter circuit\nJP5784377B2 (en) * 2011-06-14 2015-09-24 オリンパス株式会社 AD conversion circuit and imaging apparatus\nUS8547267B2 (en) 2011-11-30 2013-10-01 Taiwan Semiconductor Manufacturing Co., Ltd. 
Idle tone suppression circuit\nJP6019793B2 (en) * 2012-06-20 2016-11-02 ソニー株式会社 Counter, counting method, ad converter, solid-state imaging element, and electronic device\nCN102932103B (en) * 2012-10-22 2016-04-20 武汉烽火富华电气有限责任公司 A kind of message transmission rate adaptive reception method based on digital transformer substation\nKR102009165B1 (en) * 2013-01-24 2019-10-21 삼성전자 주식회사 Image sensor, multi chip package, and electronic device\nCN103096003B (en) * 2013-02-07 2016-04-27 江苏思特威电子科技有限公司 Imaging device and formation method thereof\nKR102002466B1 (en) 2013-05-20 2019-07-23 에스케이하이닉스 주식회사 Digital counter\nKR101996491B1 (en) 2013-06-14 2019-07-05 에스케이하이닉스 주식회사 Double data rate counter, and analog-digital converting apparatus and cmos image sensor thereof using that\nKR20150020432A (en) * 2013-08-14 2015-02-26 삼성전자주식회사 Image sensor and analog to digital converter and analog to digital converting method tererof\nKR20150068599A (en) * 2013-12-12 2015-06-22 에스케이하이닉스 주식회사 Double data rate counter, and analog-digital converting apparatus and cmos image sensor thereof using that\nKR20160015607A (en) * 2014-07-31 2016-02-15 에스케이하이닉스 주식회사 Electronic device and electronic system with the same\nKR20160017500A (en) 2014-08-06 2016-02-16 에스케이하이닉스 주식회사 Double data rate counter, and analog-digital converting apparatus and cmos image sensor thereof using that\nKR20160024211A (en) 2014-08-25 2016-03-04 에스케이하이닉스 주식회사 digital counter\nUS20160309135A1 (en) 2015-04-20 2016-10-20 Ilia Ovsiannikov Concurrent rgbz sensor and system\nKR20160145217A (en) * 2015-06-09 2016-12-20 에스케이하이닉스 주식회사 Counting circuit, image sensing device with the counting circuit and read-out method of the mage sensing device\nKR20170053986A (en) 2015-11-09 2017-05-17 에스케이하이닉스 주식회사 Latch circuit, double data rate decoding apparatus based the latch\nKR20170053990A (en) 2015-11-09 2017-05-17 에스케이하이닉스 주식회사 Latch circuit, double data rate ring counter based the latch circuit, hybrid counting apparatus, analog-digital converting apparatus, and cmos image sensor\nKR20170065730A (en) * 2015-12-03 2017-06-14 삼성전자주식회사 Image sensor supportind various operating modes and operating method thereof\n\n## Family Cites Families (15)\n\n* Cited by examiner, † Cited by third party\nPublication number Priority date Publication date Assignee Title\nUS4521898A (en) * 1982-12-28 1985-06-04 Motorola, Inc. Ripple counter circuit having reduced propagation delay\nUS5771070A (en) 1985-11-15 1998-06-23 Canon Kabushiki Kaisha Solid state image pickup apparatus removing noise from the photoelectric converted signal\nUS4891827A (en) * 1988-03-07 1990-01-02 Digital Equipment Corporation Loadable ripple counter\nFR2698501B1 (en) * 1992-11-24 1995-02-17 Sgs Thomson Microelectronics Fast counter alternately for counting and counting pulse trains.\nJP3872144B2 (en) 1996-09-11 2007-01-24 オリンパス株式会社 Synchronous signal pulse generation circuit\nUS5877715A (en) * 1997-06-12 1999-03-02 International Business Machines Corporation Correlated double sampling with up/down counter\nDE69841968D1 (en) 1997-08-15 2010-12-09 Sony Corp Solid state imaging device and control method therefor\nUS7149275B1 (en) * 2004-01-29 2006-12-12 Xilinx, Inc. 
Integrated circuit and method of implementing a counter in an integrated circuit\nJP4107269B2 (en) 2004-02-23 2008-06-25 ソニー株式会社 Solid-state imaging device\nUS7129883B2 (en) 2004-02-23 2006-10-31 Sony Corporation Method and apparatus for AD conversion, semiconductor device for detecting distribution of physical quantity, and electronic apparatus\nJP4470700B2 (en) 2004-02-23 2010-06-02 ソニー株式会社 AD conversion method, AD converter, semiconductor device for detecting physical quantity distribution, and electronic apparatus\nJP4655500B2 (en) 2004-04-12 2011-03-23 ソニー株式会社 AD converter, semiconductor device for detecting physical quantity distribution, and electronic apparatus\nJP4289206B2 (en) 2004-04-26 2009-07-01 ソニー株式会社 Counter circuit\nKR100723517B1 (en) * 2005-12-14 2007-05-23 삼성전자주식회사 Counter keeping counting value and phase locked loop having the counter\nKR100830582B1 (en) 2006-11-13 2008-05-22 삼성전자주식회사 Digital double sampling method and cmos image senser performing thereof and digital camera including thereof\n\n## Also Published As\n\nPublication number Publication date\nUS7990304B2 (en) 2011-08-02\nUS20100207798A1 (en) 2010-08-19\nKR20100092542A (en) 2010-08-23\n\n## Similar Documents\n\nPublication Publication Date Title\nEP1655840B1 (en) Analog-to-digital conversion method, analog-to-digital converter, semiconductor device for detecting distribution of physical quantity, and electronic apparatus\nUS6885331B2 (en) Ramp generation with capacitors\nUS20150271404A1 (en) Solid-state image sensing apparatus\nTWI398101B (en) Counter circuit, ad conversion method, ad converter, semiconductor device for detecting distribution of physical quantities, and electronic apparatus\nCN101056364B (en) Solid state image pickup device, camera system and driving method thereof\nKR101194927B1 (en) Ad conversion method, ad converter, semiconductor device for detecting distribution of physical quantities, and electronic apparatus\nTWI392354B (en) A / D conversion circuit, solid-state imaging element, and camera system\nCN101365073B (en) Solid-state image capture device, analog/digital conversion method for solid state image capture device, and image capture device\nCN101512905B (en) Single slope analog-to-digital converter\nUS8456554B2 (en) Integrated AD converter, solid state imaging device, and camera system\nTWI381725B (en) Solid-state imaging device, imaging apparatus, and electronic apparatus\nTWI360952B (en) Data processing method, data processing apparatus,\nUS9479189B2 (en) A/D converter, solid-state imaging device and camera system\nJP4774064B2 (en) A / D conversion circuit and solid-state imaging device\nJP5375277B2 (en) Solid-state imaging device, imaging device, electronic device, AD conversion device, AD conversion method\nUS8039781B2 (en) Physical quantity detecting apparatus and method for driving the same\nTWI390854B (en) Analog-to-digital converter, analog-to-digital converting method, solid-state image pickup device, and camera system\nKR100517548B1 (en) Analog to didital converter for cmos image device\nJP2009038726A (en) Physical quantity detecting device, and method for driving the same\nCN101867687A (en) A/d converter, solid-state image sensing device, and camera system\nJP2011055196A (en) A/d converter, and solid-state imaging apparatus\nKR20080043141A (en) Digital double sampling method and cmos image senser performing thereof and digital camera including thereof\nCN101547318B (en) Image sensor and driving method therefor\nKR101614162B1 (en) Solid-state image sensor and camera 
system\nUS9473722B2 (en) Column A/D converter, column A/D conversion method, solid-state imaging element and camera system\n\n## Legal Events\n\nDate Code Title Description\nA201 Request for examination\nE902 Notification of reason for refusal\nE701 Decision to grant or registration of patent right\nGRNT Written decision to grant\nFPAY Annual fee payment\n\nPayment date: 20190429\n\nYear of fee payment: 4" ]
[ null, "https://patentimages.storage.googleapis.com/29/1f/74/cf72ba5c77f0c9/112009008905296-pat00001.png", null, "https://patentimages.storage.googleapis.com/a0/a2/d8/e583ad538e4457/112009008905296-pat00002.png", null, "https://patentimages.storage.googleapis.com/1f/47/da/560e346ba7cce6/112009008905296-pat00003.png", null, "https://patentimages.storage.googleapis.com/57/62/09/aa42014b0f2a52/112009008905296-pat00004.png", null, "https://patentimages.storage.googleapis.com/9d/c4/02/47813d3a830b64/112009008905296-pat00005.png", null, "https://patentimages.storage.googleapis.com/41/44/37/6699c1808f1498/112009008905296-pat00006.png", null, "https://patentimages.storage.googleapis.com/71/d3/2c/d8915b6d522f2b/112009008905296-pat00007.png", null, "https://patentimages.storage.googleapis.com/db/a5/43/24fc91c09ad87b/112009008905296-pat00008.png", null, "https://patentimages.storage.googleapis.com/3e/ed/b8/7016909e60f536/112009008905296-pat00009.png", null, "https://patentimages.storage.googleapis.com/dd/0e/a6/7b4bf41e0604d6/112009008905296-pat00010.png", null, "https://patentimages.storage.googleapis.com/a9/a1/5e/65d8b1754cc500/112009008905296-pat00011.png", null, "https://patentimages.storage.googleapis.com/77/da/cc/6e4c4f1f81344d/112009010683427-pat00037.png", null, "https://patentimages.storage.googleapis.com/42/3e/11/dc406a4ff926fc/112009010683427-pat00038.png", null, "https://patentimages.storage.googleapis.com/78/01/d1/6ba203935321b4/112009008905296-pat00014.png", null, "https://patentimages.storage.googleapis.com/7b/4f/2d/03098cd9169605/112009008905296-pat00015.png", null, "https://patentimages.storage.googleapis.com/7f/a5/2d/87c9f09fb18eac/112009008905296-pat00016.png", null, "https://patentimages.storage.googleapis.com/d4/e8/44/43329f3012b3f9/112009008905296-pat00017.png", null, "https://patentimages.storage.googleapis.com/65/42/82/ab6791f4646627/112009008905296-pat00018.png", null, "https://patentimages.storage.googleapis.com/2e/ea/67/e295af99ee5b19/112009008905296-pat00019.png", null, "https://patentimages.storage.googleapis.com/13/08/56/76e9da3212490d/112009010683427-pat00039.png", null, "https://patentimages.storage.googleapis.com/e6/9f/f5/ad2bddf42e805f/112009008905296-pat00021.png", null, "https://patentimages.storage.googleapis.com/38/56/f1/a371f1cdd0c5f4/112009008905296-pat00022.png", null, "https://patentimages.storage.googleapis.com/75/a1/7c/c701c47ee7e8cc/112009008905296-pat00023.png", null, "https://patentimages.storage.googleapis.com/4a/dc/d5/c9b67d88657870/112009008905296-pat00024.png", null, "https://patentimages.storage.googleapis.com/e0/10/22/cd569f0017c779/112009008905296-pat00025.png", null, "https://patentimages.storage.googleapis.com/e1/0b/67/818529deaf1504/112009008905296-pat00026.png", null, "https://patentimages.storage.googleapis.com/2d/13/b5/3a82062c698fbd/112009010683427-pat00040.png", null, "https://patentimages.storage.googleapis.com/e7/bc/c5/1c6018f86a08cd/112009008905296-pat00028.png", null, "https://patentimages.storage.googleapis.com/d0/3a/40/0c1ab138bc7ffe/112009008905296-pat00029.png", null, "https://patentimages.storage.googleapis.com/49/75/f7/dd968f6bbc1c92/112009008905296-pat00030.png", null, "https://patentimages.storage.googleapis.com/29/d5/23/37e8c43c4d45d3/112009010683427-pat00041.png", null, "https://patentimages.storage.googleapis.com/0f/b3/a6/6aca4fba9da5c2/112009008905296-pat00032.png", null, "https://patentimages.storage.googleapis.com/9f/d0/c4/0d3a4c492bac27/112009008905296-pat00033.png", null, 
"https://patentimages.storage.googleapis.com/dc/ed/41/5f5bb7bbd87fa5/112009008905296-pat00034.png", null, "https://patentimages.storage.googleapis.com/02/98/7d/7014a5b2e66bc6/112009008905296-pat00035.png", null, "https://patentimages.storage.googleapis.com/e5/da/2c/110e7811daa753/112009008905296-pat00036.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8678194,"math_prob":0.9175653,"size":124667,"snap":"2020-10-2020-16","text_gpt3_token_len":27376,"char_repetition_ratio":0.28078097,"word_repetition_ratio":0.48253983,"special_character_ratio":0.23221862,"punctuation_ratio":0.066354446,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9511638,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-20T14:22:23Z\",\"WARC-Record-ID\":\"<urn:uuid:576ea164-aad0-4f6c-ac6a-f43b31ce0186>\",\"Content-Length\":\"315545\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:94715c40-8ef7-417e-9a39-e6aa74d48d99>\",\"WARC-Concurrent-To\":\"<urn:uuid:3544125c-0194-42dc-bb94-ca2ad1a0e642>\",\"WARC-IP-Address\":\"172.217.7.142\",\"WARC-Target-URI\":\"https://patents.google.com/patent/KR101621244B1/en\",\"WARC-Payload-Digest\":\"sha1:BAXMLSD3G4H6PWKHAURJ5ONMIUI6I7NX\",\"WARC-Block-Digest\":\"sha1:I5XOMSBX2XY35T6KAEZ2FLOSUIUPN36X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144979.91_warc_CC-MAIN-20200220131529-20200220161529-00040.warc.gz\"}"}
https://ravecipivugoxivot.dirkbraeckmanvenice2017.com/general-one-dimensional-random-walk-with-absorbing-barriers-book-43457dl.php
[ "Last edited by Naramar\nFriday, May 8, 2020 | History\n\n2 edition of general one-dimensional random walk with absorbing barriers found in the catalog.\n\ngeneral one-dimensional random walk with absorbing barriers\n\nJohannes Henricus Bernardus Kemperman\n\n# general one-dimensional random walk with absorbing barriers\n\n## by Johannes Henricus Bernardus Kemperman\n\nPublished in \"s-Gravenhage .\nWritten in English\n\nSubjects:\n• Random walks (Mathematics)\n\n• Edition Notes\n\nClassifications The Physical Object Other titles One-dimensional random walk with absorbing barriers. LC Classifications QA273 .K34 Pagination 111 p. Number of Pages 111 Open Library OL6120433M LC Control Number 52030556\n\nConsider the Gamblers Ruin Problem (p Example for p = 1/2 and p Example for case of general p.) Let Dk denote the average number of plays it takes for the gambler to either go broke or win, given that he starts with k dollars. Compute Dk. In other words, for a . chain (this method has also been used by Schoning). In fact, we will be dealing with one dimensional random walks with two absorbing barriers, but we may also refer to them as Markov chains.. We are ready to state the theorem. Theorem 1. For all (including the hardest) ONE-IN-THREE SAT problems, the algorithm presented in section 2 will find.\n\nequivalent to a 2-dimensional walk with two reflecting barriers. (See also Cohen's book on random walks and boundary value problems for some related issues.) It does not seem that their techniques apply here because of the presence of a third absorbing barrier. However, it may be of interest to.   The mean first passage time for a random walk with reflecting and absorbing barriers is computed by assuming Onsager's reciprocal relation for the transition probabilities. The result, which is valid for an arbitrary dimensional random walk, appears as a quotient of determinants whose elements are the transition probabilities and the initial by:\n\ntween hxiwith an absorbing boundary and hxiwithout an absorbing boundary, which is also numerically con- rmed. From this relation, we also accurately nd the dependence of Con the bias probability. The random walk is de ned on a one-dimensional lat-tice with an absorbing boundary at x= 0 and a movable wall at the maximum position that can be. You can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. 
Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.,, Free ebooks since\n\nYou might also like\nJapanese haiku\n\nJapanese haiku\n\nPhilip Van Artevelde\n\nPhilip Van Artevelde\n\nThe August 14, 2003, Blackout: Effects on Small Business and Potential Solutions\n\nThe August 14, 2003, Blackout: Effects on Small Business and Potential Solutions\n\nold red bus.\n\nold red bus.\n\nNonlinear problems of engineering\n\nNonlinear problems of engineering\n\nHigh Speed Wireless Communications\n\nHigh Speed Wireless Communications\n\nCivil War brides & grooms at Davis Bend, Mississippi\n\nCivil War brides & grooms at Davis Bend, Mississippi\n\nDeath on the installment plan\n\nDeath on the installment plan\n\nsynthesis of asymmetric ligands for transition metal catalysis\n\nsynthesis of asymmetric ligands for transition metal catalysis\n\nWorkplace exchange of personnel between companies in Australia and in Japan\n\nWorkplace exchange of personnel between companies in Australia and in Japan\n\nThermal Refining of Low-Temperature Tar.\n\nThermal Refining of Low-Temperature Tar.\n\nCasting solutions for the automotive industry.\n\nCasting solutions for the automotive industry.\n\n### General one-dimensional random walk with absorbing barriers by Johannes Henricus Bernardus Kemperman Download PDF EPUB FB2\n\nThe general one-dimensional random walk with absorbing barriers, with applications to sequential analysis. Get this from a library. The general one-dimensional random walk with absorbing barriers with applications to sequential analysis. [Johannes Henricus Bernardus Kemperman; Universiteit van. 20 Random Walks Random Walks are used to model situations in which an object moves in a sequence Figure An unbiased, one-dimensional random walk with absorbing barriers at positions 0 and 3.\n\nThe walk begins at position 1. The tree diagram shows the general solution has the form: RnDa1nCbn1nDaCbn: Substituting in the boundary File Size: KB. A one-dimensional random walk is a Markov chain whose state space is a finite or infinite subset a, a + 1,b of the integers, in which the particle, if it is in state i, can in a single transition either stay in i or move to one of the neighboring states i − 1, i + 1.\n\nIf the state space is taken as the nonnegative integers, the transition matrix of a random walk has the form. A simple random walk is symmetric if the particle has the same probability for each of the neighbors. General random walks are treated in Chapter 7 in Ross’ book.\n\nHere we will only study simple random walks, mainly in one dimension. By studying a random walk with two absorbing barriers, one on each side of the staring point.\n\nA Bernoulli random walk is used in physics as a rough description of one-dimensional diffusion processes (cf. Diffusion process) and of the Brownian motion of material particles under collisions with molecules.\n\nImportant facts involved in a Bernoulli random walk will be described below. In so doing, it is assumed that. Probabilities of. 
One dimensional lattice random walks with absorption at a point/on a half line UCHIYAMA, Kôhei, Journal of the Mathematical Society of Japan, Weak convergence of the number of zero increments in the random walk with barrier Marynych, Alexander and Verovkin, Glib, Electronic Communications in Probability, Cited by: Lecture Simple Random Walk In William Feller published An Introduction to Probability Theory and Its Applications .\n\nAccording to Feller [11, p. vii], at the time “few mathematicians outside the Soviet Union recognized probability as a legitimate branch of mathemat-ics.”File Size: KB. A random walk is a mathematical object, known as a stochastic or random process, that describes a path that consists of a succession of random steps on some mathematical space such as the integers.\n\nAn elementary example of a random walk is the random walk on the integer number line, which starts at 0 and at each step moves +1 or −1 with equal probability. RANDOM WALKS IN EUCLIDEAN SPACE 5 10 15 20 25 30 35 2 4 6 8 10 Figure A random walk of length Theorem The probability of a return to the origin at time 2mis given by u 2m= µ 2m m 2¡2m: The probability of a return to the origin at an odd time is 0.\n\n2 A random walk is said to have a flrst return to the File Size: KB. Random walks in an inhomogeneous one-dimensional medium with reflecting and absorbing barriers Article (PDF Available) in Theoretical and Mathematical Physics (1) Author: Nikita Ratanov. ONE-DIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK Definition 1.\n\nA random walk on the integers Z with step distribution F and initial state x 2Z is a sequenceSn of random variables whose increments are independent, identically distributed random variables ˘i with common distribution F, that is, (1) Sn =x + Xn i=1 ˘i.\n\nThe definition extends in an obvious way to random walks on the d File Size: KB. considering finite-length random walks. The presentation in this chapter is based on unpublished notes of H.\n\nFöllmer. We use this chapter to illustrate a number of useful concepts for one-dimensional random walk. In later chapters we will consider d-dimensional random walk File Size: 1MB.\n\nRandom walks have been studied for decades on regular structures such as lattices. We now give a brief historical review of the use of barriers in a one-dimensional discrete random walk. Weesakul () discussed the classical problem of random walk restricted between a reflecting and an absorbing barrier.\n\nRandom walk with barriers. Consider first the most general case, We calibrated our results for a quasi-one-dimensional disorder (random parallel membranes), which reproduced the exact limit with about 1% accuracy.\n\nThe random walk simulator was developed in C++. Simulations were performed on the NYU General by: Section 2 is a review of the mapping of balls-in-boxes models without energy barrier onto random walks with an absorbing trap, introduced in .\n\nThis mapping applies both to the zero-temperature. A particle moving in inhomogeneous, one-dimensional media is considered. Its velocity changes direction at Poisson times.\n\nFor such a random process, the backward and forward Kolmogorov equations are derived. 
The explicit formulas for the probability distributions of this process are obtained, as well as the formulas for similar processes in the presence of reflecting and absorbing by:   () Tail estimates for one-dimensional random walk in random environment.\n\nCommunications in Mathematical Physics() Anomalous diffusion in asymmetric random walks with a quasi-geostrophic flow by: According to Feller the reflecting barrier in random walk is defined as a special case of an elastic barrier. An elastic barrier, situated at the point (−1/2) on the x-axis between the positions m=0 and m=−1, is defined by the rule that from position m=0 the particle moves with the probability p to position m=1; with probability δq it stays at m=0; and with probability (1−δ)q it moves Author: Marius Orlowski.\n\nBiased Random Walk and PDF of Time of First Return. Ask Question Asked 8 years, 8 months ago. Probability of a biased random walk hitting an absorbing barrier in some number of steps.\n\nExplanation on one-dimensional random walk in Feller's book. Abstract. In many applications, successive observations of a process, say X 1, X 2, have an inherent time component associated with example, the X i could be the state of the weather at a particular location on the i th day, counting from some fixed day.\n\nIn a simplistic model, the state of the weather could be “dry” or “wet,” quantified as, say, 0 and : Anirban DasGupta.\\$\\begingroup\\$ +1 on the great answer. I wish to ask for a clarification on the r values.\n\nAs I understand, we want to include all paths to x that started from zero but want to exclude those that hit the barrier.ciently fast, then the resulting Z2 in Z3 random walk in varying dimension is recurrent.\n\nProof: Denote by ˇ zprojection to the z-axis and by ˇ xythe projection map to the x-yplane. Since fˇ xy(S k)gis a recurrent planar random walk, we may select a ninductively to satisfy P[9k2(a n;a n+1]: ˇ xy(S k) = 0] 1=2: (3) The process fˇ z(S anCited by: 3." ]
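The expected-duration exercise quoted near the top of this excerpt (compute Dk, the average number of plays before the gambler either goes broke or wins, for the symmetric walk with absorbing barriers at 0 and n) has a standard closed form. The derivation below is the usual textbook argument, not text recovered from any of the excerpted sources; the string "RnDa1nCbn1nDaCbn" in the excerpt appears to be the general solution a + bn of the associated linear recurrence with "=" and "+" mis-encoded:

```
% Symmetric random walk on {0, 1, ..., n} with absorbing barriers at 0 and n;
% D_k = expected number of steps to absorption starting from position k.
D_0 = D_n = 0, \qquad
D_k = 1 + \tfrac{1}{2} D_{k-1} + \tfrac{1}{2} D_{k+1} \quad (0 < k < n).
% The homogeneous solution is a + b k; adding the particular solution -k^2
% and imposing the boundary conditions gives
D_k = k\,(n-k).
```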
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89482397,"math_prob":0.9028895,"size":9733,"snap":"2020-45-2020-50","text_gpt3_token_len":2194,"char_repetition_ratio":0.16692363,"word_repetition_ratio":0.01944793,"special_character_ratio":0.21021268,"punctuation_ratio":0.099616855,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96664494,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T23:49:08Z\",\"WARC-Record-ID\":\"<urn:uuid:938c48bf-0f03-4089-b22f-97cffff698a9>\",\"Content-Length\":\"24723\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e414bdc5-a5d6-4e4b-a387-0107849091e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:22fac43e-3ce7-41c5-b41f-f192eedbc459>\",\"WARC-IP-Address\":\"172.67.177.238\",\"WARC-Target-URI\":\"https://ravecipivugoxivot.dirkbraeckmanvenice2017.com/general-one-dimensional-random-walk-with-absorbing-barriers-book-43457dl.php\",\"WARC-Payload-Digest\":\"sha1:RZBPTW2AJWJQZVF6SWNFIKG45D3VGSQB\",\"WARC-Block-Digest\":\"sha1:WH7ISN3XK4SO7FT3OWDDR7U5IVTPTSDP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141685797.79_warc_CC-MAIN-20201201231155-20201202021155-00109.warc.gz\"}"}
https://toto-share.com/2012/12/qt-imagesc-simple-code/
[ "# Qt : imagesc simple code\n\nI am very interested with imagesc function in matlab. We can plot our data and plot image with user defined colormap. I have try to create a Qt: imagesc simple code using QPainter. This is a simple idea how to create Qt imagesc simple code. You can modified this code like your problem.\n\nThis code create a random data with size wdata(width) and hdata(height). I am create a winter colormap using colormap matlab function. You can get other colormap with looking at colormap matlab function. This is a code how to create winter colormap :\n\n```//create winter colormap\nint **drawingWidget::Winter()\n{\nint **cmap = array2int(m_colorMapLength, 4);\nfloat *winter = new float[m_colorMapLength];\nfor (int i = 0; i < m_colorMapLength; i++)\n{\nwinter[i] = 1.0f * i / (m_colorMapLength - 1);\ncmap[i] = 255;\ncmap[i] = 0;\ncmap[i] = (int)(255 * winter[i]);\ncmap[i] = (int)(255 * (1.0f - 0.5f * winter[i]));\n}\n\ndelete [] winter;\nreturn cmap;\n}```\n\nWe can convert our data to colormap using this function :\n\n```//convert our data to colormap color\nQRgb drawingWidget::GetColor(float f)\n{\nint r, g, b, a;\nfloat tmp1 = (m_colorMapLength * (f - minData) + (maxData - f));\nfloat tmp2 = (maxData - minData);\nint cindex = (int) round( tmp1/tmp2 );\n\nif (cindex < 1)\ncindex = 1;\nif (cindex > m_colorMapLength)\ncindex = m_colorMapLength;\n\n//int alpha = cmap[cindex - 1, 0];\nr = cmap[cindex - 1];\ng = cmap[cindex - 1];\nb = cmap[cindex - 1];\na = 255;\nQRgb value = qRgba(r, g, b, a);\n\nreturn value;\n}```\n\nYou will get more information after reading my code. This is an output when running this Qt imagesc simple code :", null, "1.", null, "" ]
[ null, "https://lh4.googleusercontent.com/-mpEs6IfwnPg/UL2qgztDs-I/AAAAAAAAAEk/bpvRtWqcTmQ/s400/imagesc_demo.png", null, "https://secure.gravatar.com/avatar/a2de8c758b4ee7fcfe5cc40a40899490", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51774704,"math_prob":0.99670756,"size":1757,"snap":"2023-40-2023-50","text_gpt3_token_len":519,"char_repetition_ratio":0.14945807,"word_repetition_ratio":0.0,"special_character_ratio":0.31246442,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99310267,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,8,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T05:59:43Z\",\"WARC-Record-ID\":\"<urn:uuid:7c867afa-11ce-4428-920f-f6d970844045>\",\"Content-Length\":\"47482\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8023ea3a-3f34-4008-b725-79032ad74d25>\",\"WARC-Concurrent-To\":\"<urn:uuid:6314f38a-cd28-472d-a477-391a74a58ac9>\",\"WARC-IP-Address\":\"207.148.64.203\",\"WARC-Target-URI\":\"https://toto-share.com/2012/12/qt-imagesc-simple-code/\",\"WARC-Payload-Digest\":\"sha1:WWFDZXC64V6SNAOEP3HJXO2NUHOUB7GC\",\"WARC-Block-Digest\":\"sha1:VYHBPK372GAVWVPNJDIELREFEN2COQ3R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510259.52_warc_CC-MAIN-20230927035329-20230927065329-00573.warc.gz\"}"}
https://medical-dictionary.thefreedictionary.com/Likelihood+ratio
[ "# likelihood ratio\n\nAlso found in: Dictionary, Thesaurus, Financial, Acronyms, Encyclopedia.\n\n## likelihood ratio\n\nusually preceded by \"maximum\" (that is, maximum likelihood ratio), this ratio maximizes the probability that the parameters in the ratio agree with the empirically observed data.\n\n## like·li·hood ra·ti·o\n\n(līk'lē-hud rā'shē-ō)\nThe ratio of the probability of a test result among patients with a certain disease or disorder to the probability of that same test result among patients who do not have the targeted disease or disorder.\n\n(līk′lē-hood″),\n\n## LR\n\nA statistical tool used to help determine the usefulness of a diagnostic test for including or excluding a particular disease. An LR = 1 suggests that the test ordered neither helps to diagnose the disease in question nor helps to rule it out. Higher LRs increase the probability that the disease will be present; LRs < 1.0 decrease the probability that the disease is present.\n\nA positive LR can be thought of as the probability that someone with a suspected condition will, accurately, have a positive test result, divided by the probability that a healthy person will, inaccurately, test positive for the disease. Mathematically this can be represented by the following equation: LR+ = sensitivity of the test/ (1− specificity of the test). A negative LR is the probability that a sick person will fail to be detected by the test, divided by the probability that a healthy person will be accurately shown by the test to have no sign of disease. Mathematically: LR− = (1 − sensitivity of the test) / specificity of the test.\n\n## likelihood ratio\n\nThe percentage of ill people with a given test result divided by the percentage of well people with the same result. Ratios near unity should not influence decisions. 
This useful guide to refining clinical diagnosis is little used mainly because of its complexity; The Fagan nomogram can simplify the matter." ]
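As a quick worked example of how the two ratios are applied (the numbers are invented for illustration and are not from the entry): a test with sensitivity 0.90 and specificity 0.80 gives

```
LR^{+} = \frac{\text{sensitivity}}{1-\text{specificity}} = \frac{0.90}{1-0.80} = 4.5,
\qquad
LR^{-} = \frac{1-\text{sensitivity}}{\text{specificity}} = \frac{1-0.90}{0.80} = 0.125.
```

Applied to a pre-test probability of 20% (odds 0.25), a positive result gives post-test odds of 0.25 x 4.5, about 1.13, i.e. a post-test probability of roughly 53%, which is the same calculation the Fagan nomogram performs graphically.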
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9134477,"math_prob":0.9131012,"size":1692,"snap":"2020-24-2020-29","text_gpt3_token_len":363,"char_repetition_ratio":0.17180094,"word_repetition_ratio":0.03787879,"special_character_ratio":0.19858156,"punctuation_ratio":0.10197368,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98344594,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-12T23:53:18Z\",\"WARC-Record-ID\":\"<urn:uuid:fb725b68-c5d7-4892-a18c-fef31a694dcd>\",\"Content-Length\":\"49137\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:778bbbf3-70f6-472b-a156-6c14a4fef812>\",\"WARC-Concurrent-To\":\"<urn:uuid:631fd8f9-6406-423f-8353-9c95e4b53958>\",\"WARC-IP-Address\":\"91.204.210.230\",\"WARC-Target-URI\":\"https://medical-dictionary.thefreedictionary.com/Likelihood+ratio\",\"WARC-Payload-Digest\":\"sha1:DGXHQFWX6OIHG6DPM27NQVKEHX2XAH2X\",\"WARC-Block-Digest\":\"sha1:BT6UCJ266I47GW4ULOAGFI66OX7DDJBL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657140337.79_warc_CC-MAIN-20200712211314-20200713001314-00509.warc.gz\"}"}
https://www.aanda.org/articles/aa/full_html/2013/08/aa20237-12/aa20237-12.html
[ "Subscriber Authentication Point\nFree Access\n Issue A&A Volume 556, August 2013 A110 16 Planets and planetary systems https://doi.org/10.1051/0004-6361/201220237 05 August 2013\n\n© ESO, 2013\n\n## 1. Introduction\n\nIn the past 15 years or so, we have witnessed impressive progress in radial-velocity measurements. One spectrograph in particular, the High Accuracy Radial velocity Planet Searcher (HARPS; Mayor et al. 2003), broke the former 3 m/s precision barrier and enabled the detection of exoplanets in a yet unknown mass-period domain.\n\nNotably, planets with masses below 10 M and equilibrium temperatures possibly between ~175–270 K (for plausible albedos) have started to be detected. That subset of detections includes GJ 581d (Udry et al. 2007; Mayor et al. 2009), HD 85512b (Pepe et al. 2011), and GJ 667Cc (Bonfils et al. 2013; Delfosse et al. 2012) which lie in the so-called habitable zone (HZ) of their host star. Depending on the nature of their atmospheres, liquid water may flow on their surface and, because liquid water is thought to be a prerequisite for the emergence of life as we know it, these planets constitute a prized sample for further characterization of their atmosphere and the search for possible biosignatures.\n\nThe present paper reports on the detection of at least three planets orbiting the nearby M dwarf GJ 163. One of them, GJ 163c, might be of particular interest in terms of habitability. Our report is structured as follows. Section 2 profiles the host star GJ 163. Section 3 briefly describes the collection of radial-velocity data. Section 4 presents our orbital analysis based on both Markov chain Monte Carlo (MCMC) and periodogram algorithms. Then, we investigate more closely which signal could result from stellar activity rather than from planets (Sect. 5) and retain a solution with three planets. We next investigate the role of planet-planet interactions in the system (Sect. 6) and in particular whether planets b and c participate in a resonance. Section 7 discusses GJ 163c in term of habitability before we present our conclusions in Sect. 8.\n\n## 2. The properties of GJ 163\n\nThe star GJ 163 (HIP 19394, LHS 188) is an M3.5 dwarf (Hawley et al. 1996), at a distance of 15.0 ± 0.4  pc (π = 66.69 ± 1.82 mas; van Leeuwen 2007), and is seen in the Doradus constellation (α = 04h09m16s, δ =  −53°22′23′′).\n\nIts photometry (V = 11.811 ± 0.012; K = 7.135 ± 0.021; Koen et al. 2010; Cutri et al. 2003) and parallax imply absolute magnitudes of MV = 10.93 ± 0.14 and MK = 6.26 ± 0.14. The J − K color of GJ 163’s (=0.813; Cutri et al. 2003) and the Leggett et al. (2001) color-bolometric relation result in a K-band bolometric correction of BCK = 2.59 ± 0.07, and in a L = 0.022 ± 0.002  L luminosity, in good agreement with the Casagrande et al. (2008) direct determination (Mbol = 8.956; L = 0.021). The K-band mass-luminosity relation of Delfosse et al. (2000) gives a 0.40  M mass with a ~10% uncertainty.\n\nIts UVW galactic velocities place GJ 163 between the old disk and halo populations (Leggett 1992). We refined GJ 163’s UVW velocities using both the systemic velocity we measured from HARPS spectra (Table 2) and proper motion from Hipparcos (van Leeuwen 2007). We obtained U = 69.7, V =  −76.0, and W = 1.2 km s-1, which confirmed a membership in an old dynamical population.\n\nStellar metallicity is known to be statistically related to dynamical populations. 
For the halo population, the metallicity peaks at [Fe/H] ~ − 1.5 (Ryan & Norris 1991) whereas that of the old disk peaks at [Fe/H] ~ − 0.7 (Gilmore et al. 1995). The widths of these distributions are wide, however, and both populations have a small fraction of stars with solar metallicity. Casagrande et al. (2008) attributes a metallicity close to that of the median of the solar neighborhood to GJ 163 ([Fe/H] = − 0.08). And the Schlaufman & Laughlin (2010) photometric relation (or its slight update by Neves et al. 2012) finds a quasi-solar metallicity of [Fe/H] = −0.01. It is therefore difficult to conclude whether GJ 163 belongs to the metal-rich tail of an old population or if it is a younger star accelerated to the typical galactic velocity of an old population.\n\nThe star GJ 163 is not detected in the ROSAT All-Sky Survey. We thus used the survey sensitivity limit (", null, "erg/s; Schmitt et al. 1995) to estimate log LX < 5.39 × 1027 erg/s that, given GJ 163’s bolometric luminosity, translates to RX = log LX/LBOL <  − 4.17. For an M dwarf of ~0.4 M, the RX versus rotation period of Kiraga & Stepien (2007) gives Prot > 40 days for this level of X flux. To obtain a better estimate of the rotation period we compared Ca H & K chromospheric emission lines of GJ 163 with those of three other M-dwarf planet hosts with comparable spectral types and known rotational periods: GJ 176 (M2V; Prot = 39 d; Forveille et al. 2009), GJ 674 (M2.5V; Prot = 35 d; Bonfils et al. 2007), and GJ 581 (M3V; Prot = 94 d; Vogt et al. 2010). In Fig. 1 we show Ca emission for each star. The star GJ 163 has an activity level close to that of GJ 581 which is a very quiet M dwarf; GJ 163 is much quieter than the 35–40 days rotational period M dwarfs (GJ 176 and GJ 674) and should have a rotational period close to that of GJ 581.", null, "Fig. 1 Emission reversal in the Ca ii H line of GJ 674 (red line; M2.5V; Prot = 35 d), GJ 176 (green dots; M2V; Prot = 39 d), GJ 163 (black line; M3.5), and GJ 581 (blue dashes; M3V; Prot = 94 d), ordered from the most prominent to the least prominent peaks. GJ 163 displays a lower activity level, which is a strong indication of slow rotation. Open with DEXTER\n\nTable 1\n\nObserved and inferred stellar parameters for GJ 163.\n\n## 3. Observations\n\nTable 2\n\nModeled and inferred parameters for GJ 163 system.\n\nWe observed GJ 163 with HARPS, a fiber-fed spectrograph at the ESO/3.6 m telescope of La Silla Observatory (Mayor et al. 2003; Pepe et al. 2004). Our settings and computation of radial velocities (RV) remained the same as for our guaranteed time observations (GTO) program and we refer the reader to Bonfils et al. (2013) for a detailed description. We gathered RVs for 154 epochs spread over 2988 days (8.2 years) between UT 30 October 2003 and 04 January 2012. Table 6 (available in electronic form) lists all RVs in the barycentric reference frame of the solar system. Four measurements have significantly higher uncertainties: the RVs taken at epochs BJD = 2 454 804.7, 2 455 056.9, 2 455 057.9, and, 2 455 136.8 have uncertainties greater than twice the median uncertainty. We removed them and perform our analysis with the remaining 150 RVs.\n\nThe proper motion of GJ 163 (μ = 1.194 ± 0.002 arcsec/yr) implies a secular change in the orientation of its velocity vector. This results in an apparent radial acceleration dv/dt = 0.491 ± 0.013 m s-1 yr-1 (e.g., Kürster et al. 2003), that we subtracted from the RVs listed in Table 6 prior to our analysis. 
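For readers who have not met this correction before, the quoted value is the standard perspective (secular) acceleration referenced above (e.g., Kürster et al. 2003) and follows from the proper motion and distance alone; the arithmetic below is only a consistency check of the number quoted in the text:

```
% 4.74 km s^{-1} per (arcsec yr^{-1} pc) is the usual unit-conversion factor
\frac{\mathrm{d}v}{\mathrm{d}t} = \frac{v_t^{2}}{d} = \mu^{2} d,
\qquad
v_t = \mu d \simeq 4.74 \times 1.194\ \mathrm{arcsec\,yr^{-1}} \times 15.0\ \mathrm{pc} \simeq 85\ \mathrm{km\,s^{-1}},
\]
\[
\frac{\mathrm{d}v}{\mathrm{d}t} \simeq \frac{(85\ \mathrm{km\,s^{-1}})^{2}}{15.0\ \mathrm{pc}} \simeq 0.49\ \mathrm{m\,s^{-1}\,yr^{-1}}.
```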
The RV time series is shown in Fig. 2.

Fig. 2 RV time series of GJ 163.

## 4. Radial-velocity analysis

The RV variability of GJ 163 (σe = 6.31 m/s) cannot be explained by photon noise and instrumental errors combined, which are expected to account for only a σi ~ 2.8 m/s dispersion (see Sect. 3 in Bonfils et al. 2013). We therefore analyzed the time series and found that this excess of variability results from up to five different superimposed signals. We describe our analysis below, performed in a Bayesian framework using an MCMC algorithm (Sect. 4.1). We also report that similar results are obtained with a classical periodogram analysis (Sect. 4.2).

### 4.1. MCMC modeling

We used an MCMC algorithm (Gregory 2005, 2007; Ford 2005), which starts with random values for all free parameters of a model, to sample the joint probability distribution of the model parameters. The initial solution then evolves in the manner of a random walk: each iteration attempts to change the solution randomly, subsequent iterations are accepted following a pseudo-random process, and all accepted solutions form the so-called chain of solutions.

More precisely, for each iteration we generated a new solution and computed its posterior probability. The posterior probability is the product of the likelihood (the probability of observing the data given the parameter values) and the prior probability of the parameter values. The new solution was accepted with a probability that is a function of the ratio between its posterior probability and the posterior probability of the previous solution, such that solutions with a higher posterior probability were accepted more often. Step by step, accepted solutions built a chain that reached a stationary state after enough iterations. We then discarded the first 10 000 iterations and kept only the stationary part of the chain. The distributions of parameter values over all the remaining chain links then correspond to the targeted joint probability distribution of the model parameters.

Our implementation closely follows that of Gregory (2007); however, we chose to run ten chains in parallel. Each chain was assigned a parameter β that scaled the likelihood, such that chains with a lower β value presented a higher acceptance probability. We also paused the MCMC iteration after every ten steps and proposed that chains permute their solutions (which was again accepted pseudo-randomly, according to the posterior likelihood ratio between solutions). This approach is reminiscent of simulated annealing algorithms; it permits escape from local minima and a better exploration of the wide parameter space. Only the chain with β = 1 corresponds to the targeted probability distribution. Eventually, we discarded all chains but the one with β = 1. We adopted the median of the posterior distributions for the optimal parameter values, and the 68% centered interval for their uncertainties.

We fitted the data with different models. We chose a model without planets where the sole free parameter was the systemic velocity. We also chose models composed of either one, two, three, four, five, or six planets on Keplerian orbits. We ran our MCMC algorithm to build chains of 500 000 links and eventually removed the first 10 000 iterations.
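For illustration, a schematic version of such a tempered, parallel-chain Metropolis sampler is sketched below on a toy two-parameter problem. It mirrors the ingredients described above (ten chains with likelihoods scaled by β, occasional swap proposals, a discarded burn-in, and medians/68% intervals from the β = 1 chain), but it is not the authors' implementation, and the toy likelihood merely stands in for the Keplerian RV model.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(theta):
    """Toy log-likelihood standing in for the RV Keplerian model."""
    return -0.5 * np.sum((theta / np.array([1.0, 3.0])) ** 2)

def log_prior(theta):
    """Flat prior inside a wide box."""
    return 0.0 if np.all(np.abs(theta) < 50) else -np.inf

n_chains, n_steps, ndim = 10, 20000, 2
betas = np.geomspace(1.0, 0.05, n_chains)          # chain 0 has beta = 1 (target)
states = rng.normal(size=(n_chains, ndim))
logls = np.array([log_likelihood(s) for s in states])
samples = []

for step in range(n_steps):
    for j in range(n_chains):
        prop = states[j] + 0.5 * rng.normal(size=ndim)
        if not np.isfinite(log_prior(prop)):
            continue                                # reject proposals outside the prior
        logl_prop = log_likelihood(prop)
        # Tempered Metropolis acceptance on beta * logL (flat prior cancels)
        if np.log(rng.random()) < betas[j] * (logl_prop - logls[j]):
            states[j], logls[j] = prop, logl_prop
    if step % 10 == 0:
        # every ten steps, propose a swap between two adjacent chains
        j = rng.integers(n_chains - 1)
        dlog = (betas[j] - betas[j + 1]) * (logls[j + 1] - logls[j])
        if np.log(rng.random()) < dlog:
            states[[j, j + 1]] = states[[j + 1, j]]
            logls[[j, j + 1]] = logls[[j + 1, j]]
    if step > 10000:
        samples.append(states[0].copy())            # keep only the beta = 1 chain

samples = np.array(samples)
print("posterior medians:", np.median(samples, axis=0))
print("68% intervals:", np.percentile(samples, [16, 84], axis=0))
```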
Table 2 reports the optimal parameter values and uncertainties for the model composed of three planets. The parameter values are the medians of the posterior distributions and the uncertainties are the 68.3% centered intervals (equivalent to 1σ for Gaussian distributions). Notably, the orbital periods of the three planets are Pb = 8.631 ± 0.002, Pc = 25.63 ± 0.03, and Pd = 604 ± 8 days. Assuming a mass M = 0.4 M⊙ for the primary, we estimated their minimum masses to be m sin i = 10.6 ± 0.6, 6.8 ± 0.9, and 29 ± 3 M⊕, respectively (Footnote 1). When we fitted the data with a model composed of only one planet we recovered planet b, and with a model composed of two planets we recovered planets b and d. When we tried a more complex model composed of four or five planets, we recovered the Keplerian orbits described in the three-planet model, as well as Keplerian orbits with periods P(e) = 19.4 and P(f) = 108 days. For the most complex model with six planets, the parameters never converged to a unique solution. The sixth orbit is found with orbital periods around 37, 42, 75, 85, and 134 days and, for a few thousand chain links, the 19.4-day period is not part of the solution but is replaced by one of the orbital periods found for the sixth planet.

More complex models include more free parameters and thus always lead to better fits (i.e., to higher likelihood). To decide whether the improvement in modeling the data justifies the additional complexity, we computed Bayes ratios between the different models. The posterior odds of the one-, two-, and three-planet models over the zero-, one-, and two-planet models are as high as 10^16, 10^11, and 10^7, respectively, whereas the odds of the models with four, five, and six planets over the models with three, four, and five planets are only 75, 62, and 5, respectively. We required a Bayes ratio >100 before accepting a more complex model and thus conclude that our data show strong evidence for at least three planetary signals, and perhaps some evidence for more planets.

### 4.2. Periodogram analysis

We now present an alternative analysis of the radial-velocity time series based on periodograms. We used floating-mean Lomb-Scargle periodograms (Lomb 1976; Scargle 1982; Cumming et al. 1999) and implemented the algorithm as described in Zechmeister et al. (2009). We chose a normalization such that 1 indicates a perfect fit of the data by a sine wave at a given period, whereas 0 indicates no improvement compared to a fit of the data by a constant. To evaluate the false-alarm probability of any peak, we generated fake data sets made of noise only. To make these virtual time series we used bootstrap randomization, i.e., we shuffled the original RVs and retained the dates. Shuffling the RVs ensures that no coherent signal is present in the virtual time series, and keeping the dates preserves the sampling. For each trial we computed a periodogram and measured the power of the highest peak. With 10 000 trials we obtained a distribution of power maxima, which we used as a statistical description of the highest power one can expect if the periodogram is computed on data made of noise only. We searched for the power values that encompass 68.3%, 95.4%, and 99.7% of the distribution of power maxima (equivalent to 1, 2, and 3σ). A peak found in a periodogram of the original time series with a power higher than those values is attributed a false-alarm probability (FAP) lower than 31.7, 4.6, or 0.3%, respectively.
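A minimal sketch of the floating-mean periodogram and bootstrap false-alarm thresholds described above is given below. The least-squares sine-plus-constant fit reproduces the stated normalization (1 for a perfect fit, 0 for no improvement over a constant); the synthetic data set, period grid, and number of shuffles are chosen for speed and are not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def floating_mean_power(t, y, periods):
    """Least-squares power of a sine + constant model, normalized so that
    1 is a perfect fit and 0 is no improvement over a constant."""
    chi2_const = np.sum((y - y.mean()) ** 2)
    power = np.empty(len(periods))
    for i, P in enumerate(periods):
        w = 2 * np.pi / P
        A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
        power[i] = 1.0 - np.sum((y - A @ coef) ** 2) / chi2_const
    return power

# Synthetic series with a time span similar to the one above (illustration only)
t = np.sort(rng.uniform(0, 2988, 150))
y = 3.0 * np.sin(2 * np.pi * t / 8.63) + rng.normal(0, 2.8, t.size)

periods = np.geomspace(1.05, 1500, 1500)           # coarse grid, for speed
power = floating_mean_power(t, y, periods)
print("highest peak at P ~ %.2f d" % periods[np.argmax(power)])

# Bootstrap false-alarm probability: shuffle the RVs, keep the dates
n_trials = 200                                     # the paper uses 10 000 trials
best = [floating_mean_power(t, rng.permutation(y), periods).max()
        for _ in range(n_trials)]
for level in (68.3, 95.4, 99.7):
    print("power threshold for FAP %.1f%%: %.2f"
          % (100 - level, np.percentile(best, level)))
```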
We started with a periodogram of the raw RVs. It shows sharp peaks around the periods P = 8.6 and 1.13 days (Fig. 3, top panel). They have powers p = 0.50 and 0.41, respectively, much above the power p = 0.21 of a 0.3% FAP. We noted that they are aliases of each other for our typical one-day sampling and thus tried both periods as starting values for a Keplerian fit. To perform the fit, we used a non-linear minimization with the Levenberg-Marquardt algorithm (Press et al. 1992). We converged on local solutions with reduced χ2 (and rms) of 2.52 ± 0.06 (4.53 m/s) and 3.02 ± 0.06 (5.02 m/s), respectively. We thus adopted Pb = 8.6 days for the orbital period of the first planet.

We continued by subtracting the Keplerian orbit of planet b from the raw RVs and computing a periodogram of the residuals (Fig. 3, second panel). We computed a power p = 0.21 for the 0.3% FAP threshold and located eight peaks with more power. They have periods of 0.996, 0.999, 1.002, 1.007, 1.038, 25.6, 227, and 625 days, and powers of 0.48, 0.30, 0.30, 0.24, 0.30, 0.28, 0.25, and 0.41, respectively. We identified that several candidate periods are aliases of each other and tried each as a starting value for a Keplerian fit to a model now composed of two planets. We converged on local solutions with reduced χ2 (and rms) of 2.01 (3.55 m/s), 2.10 (3.71 m/s), 1.98 (3.50 m/s), 2.21 (3.91 m/s), 2.13 (3.76 m/s), 2.14 (3.77 m/s), 2.19 (3.87 m/s), and 1.84 (3.24 m/s), respectively. Among the peaks with the highest significance, the one at P ~ 600 days provided the best fit and we thus adopted this solution.

Next, we pursued the procedure and looked at the residuals around the two-planet solution (Fig. 3, third panel). We recovered some of the previous peaks, with slightly higher power excesses (p = 0.30 and 0.28), at periods of 25.6 and 1.038 days. We noted again that both periods are probably aliases of each other for the typical one-day sampling. We performed a three-planet fit, trying both periods as initial guesses for the third planet. We converged on χ2 = 1.50 (rms = 2.59 m/s) and χ2 = 1.53 (rms = 2.66 m/s) for the guessed periods of 25.6 and 1.038 days, respectively. With the periodogram analysis, the solution with Pc = 25.6 days is only marginally favored over the solution with Pc = 1.038 days.

The fourth iteration unveiled one significant power excess around the period 1.006 days (p = 0.22), as well as two other peaks above the two-σ confidence threshold, with periods of 19.4 and 108 days (p = 0.16 and 0.14; Fig. 3, fourth panel). We noted that the periods 1.006 and 108 days are aliases under our typical one-day sampling. We tried all three periods (1.006, 19.4, and 108 days) as starting values and converged on χ2 = 1.26 (rms = 2.15 m/s), χ2 = 1.37 (rms = 2.32 m/s), and χ2 = 1.32 (rms = 2.26 m/s), respectively. Again, no period is significantly favored.

Fig. 3 Periodogram analysis of the GJ 163 RV time series, first four iterations. Horizontal lines mark powers corresponding to 31.7, 4.6, and 0.3% false-alarm probability, respectively (i.e., equivalent to 1-, 2-, and 3-σ detections).

We adopted the 108-day period for the fourth Keplerian and computed the periodogram of the residuals. The maximum power is then seen again around 19.4 days, now above the three-σ confidence level. Conversely, if we had adopted the 19.4-day period, the period around 108 days (and its 1.006-day alias) would now be the most significant, also above the three-σ threshold.
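The Keplerian model fitted at each of these iterations can be written compactly. The sketch below evaluates the radial velocity induced by a single Keplerian orbit, using Newton's method for Kepler's equation and the usual expression v_r = γ + K[cos(ν+ω) + e cos ω]; the numerical values are illustrative only (they are not the fitted parameters), and in practice a sum of such terms would be passed to a Levenberg-Marquardt least-squares routine such as scipy.optimize.least_squares.

```python
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation E - e sin E = M by Newton iteration."""
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def keplerian_rv(t, P, K, e, omega, T0, gamma=0.0):
    """Stellar radial velocity induced by one planet on a Keplerian orbit.
    P: period, K: semi-amplitude, e: eccentricity, omega: argument of
    periastron [rad], T0: time of periastron passage, gamma: systemic velocity."""
    M = np.mod(2.0 * np.pi * (t - T0) / P, 2.0 * np.pi)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

# Example call with GJ 163b-like, purely illustrative values
t = np.linspace(0.0, 30.0, 200)
print(keplerian_rv(t, P=8.63, K=6.0, e=0.1, omega=0.5, T0=0.0)[:3])
```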
Eventually, the sixth iteration unveiled no additional significant power excess. The final five-Keplerian fit has a reduced χ2 = 1.21, for an rms = 2.02 m/s. For reference, we give the orbital elements derived in this section in Table 5 (available in electronic form only).

Fig. 4 Radial velocity curves for planets b, c, and d, from top to bottom.

Fig. 5 Seasonal periodograms of the residual time series obtained after fitting the RV time series with four-planet models. From top to bottom, the rows are for seasons 2008, 2009, and 2010+2011, respectively. From left to right, the columns are periodograms to investigate signals b, (e), c, and (f), respectively. The periodicity of each signal is located with a vertical dashed red line. Power excesses are seen at all seasons for signals b and c, but not for signals (e) and (f).

## 5. Challenging the planetary interpretation

At this point, we have identified up to five significant signals entangled in the RV data. If not caused by planets orbiting GJ 163, some radial-velocity periodic variations could be caused by stellar surface inhomogeneities such as plages or spots. The periodicity is then similar to the rotation period Prot, or might be one of its harmonics Prot/2, Prot/3, etc. (Boisse et al. 2011). Considering the activity of GJ 163 (Sect. 2), we found that its rotation period is moderately long to long, probably longer than those of the two more active stars of our sample, GJ 176 and GJ 674 (i.e., Prot > 35 days), and possibly as long as the rotation period of GJ 581 (~94 d). Therefore, up to three out of the five periodicities identified above might be confused with an activity-induced modulation: the 19.4-, 25.6-, and 108-day periodicities. In this section, we investigate the time variability of these signals (Sect. 5.1) and search for their possible counterparts in various activity indicators (Sect. 5.2).

### 5.1. Search for changes in RV periodic signals

To explore the possible non-stationarity of one signal, we fitted the data with a model composed of the four other signals and looked at the residuals. In practice, we chose to start the minimization close to the five-planet solution. We used the solution with five planets (Sect. 4.2) and removed from the solution the planet corresponding to the signal we wanted to study. We then performed a local minimization and computed the residuals, which thus include the signal of interest. Next, we divided the residual time series into three observational seasons (2008, 2009, and 2010+2011). We did not include the observations taken before 2008 because they are too few, and we grouped the 2010 and 2011 data together.

We repeated the procedure for all signals except for the longest period (because the ~604-day signal cannot be recovered on the timescale of one season). This produced 4 × 3 = 12 periodograms, shown in Fig. 5. To help locate where the unfitted signal should appear, we marked its period with a vertical red dashed line.

For both signals b and c, we see clear power excesses at the right periods and for all seasons. This lends further credence to the interpretation that they are caused by orbiting planets. Conversely, the power excess expected for signal (e) is seen in season 2009 only, and no power excess is seen for signal (f) in season 2009. This casts doubt on the nature of both signals (e) and (f), and more data are needed before we can draw further conclusions.
### 5.2. Periodicities in activity indicators

Stellar activity can be diagnosed with spectral indices or by monitoring the shape of the spectral lines, both conveniently measured on the same spectra as those used to measure the radial velocities. We measured two spectral indices, based on the Ca II H & K lines and on the Hα line, as well as the full width at half maximum (FWHM) and the bisector span (BIS) of the cross-correlation function (CCF). Their values are given in Table 6 along with the radial-velocity measurements.

Among these indicators, we identified a significant periodicity for the FWHM only. Its periodogram indicates some power excess around a period of 30 days, with a false-alarm probability <0.3% (i.e., a confidence level >3σ). We also looked for non-stationarity in the FWHM and found it is only pseudo-periodic. For instance, in 2008, the maximum power is seen at 30 days, with significant power around 19 days, compatible with the period P(e) identified in the RV data. The possible link between this signal and the RV 19.4-day periodicity is, however, unclear since their strongest power is identified in periodograms of different seasons. We also show the periodogram of the FWHM for the 2009 season, where the strongest peak is seen around a period of 38 days (i.e., twice 19 days), albeit with modest significance.

It is also unclear whether this stellar activity can be linked to the stellar rotation, as a 19- to 38-day rotation period would be short compared to our estimate in Sect. 2.

Fig. 6 Periodogram of the full width at half maximum of the cross-correlation function for the whole data set (top panel), season 2008 only (middle panel), and season 2009 (bottom panel). For reference, the periods of the RV signals are shown with vertical red dashed lines.

## 6. Dynamical analysis

After analyzing the RV data with both an MCMC algorithm and iterative periodograms, we identified up to five superimposed coherent signals. In Sect. 5 we scrutinized several activity indicators and looked for non-stationarity of these signals, finally casting doubt on the planetary nature of two of them. We retained a nominal solution with three planets (Table 2) and now perform a dynamical analysis.

The orbital solution given in Table 2 shows a planetary system composed of three planets, two of them on very tight orbits (ab = 0.06 and ac = 0.13 AU), and another farther away but on an eccentric orbit, such that the minimum distance at pericenter is only 0.65 AU. The stability of this system is not straightforward to assess, in particular because the minimum masses of the planets are of the same order as Neptune's mass. As a consequence, mutual gravitational interactions between planets in the GJ 163 system cannot be neglected and may give rise to some instability.

### 6.1. Secular coupling

Table 3. Fundamental frequencies for the nominal orbital solution in Table 2.

Fig. 7 Evolution of the angle Δϖ = ϖb − ϖc (red line) that oscillates around 180° with a maximum amplitude of 28°. The black line also gives the Δϖ evolution, but obtained with the linear secular model (Eq. (2)).

The ratio between the orbital periods of the two innermost planets determined by the fitting process (Table 2) is Pc/Pb = 2.97, suggesting that the system may be trapped in a 3:1 mean motion resonance. To test the accuracy of this scenario, we performed a frequency analysis of the nominal orbital solution listed in Table 2, computed over 1 Myr.
The orbits of the planets are integrated with the symplectic integrator SABA4 of Laskar & Robutel (2001), using a step size of 0.01 yr and including general relativity corrections. We conclude that, in spite of the proximity of the 3:1 mean motion resonance, when the minimum values for the masses are adopted, the two inner planets in the GJ 163 system are not trapped in this resonance.

The fundamental frequencies of the system are then the mean motions nb, nc, and nd, and the three secular frequencies of the pericenters g1, g2, and g3 (Table 3). Because of the proximity of the two innermost orbits, there is a strong coupling within the secular system (see Laskar 1990). Both planets b and c precess with the same precession frequency g2, which has a period of 1480 yr. The two pericenters are thus locked and Δϖ = ϖc − ϖb oscillates around 180°, with a maximum amplitude of about 28° (Fig. 7). This behavior is not a dynamical resonance, but merely the result of the linear secular coupling.

Fig. 8 Evolution of the GJ 163 eccentricities with time, starting with the orbital solution from Table 2. The colored lines are the complete solutions for the various planets (b: red, c: green, d: blue), while the black curves are the associated values obtained with the linear secular model (Eq. (2)).

To present the solution more clearly, it is useful to make a linear change of variables into eccentricity proper modes (see Laskar 1990). In the present case, because of the proximity of the 3:1 mean motion resonance and because of the high value of the outer planet eccentricity, the linear transformation is obtained numerically by a frequency analysis of the solutions. Using the classical complex notation

zp = ep e^(i ϖp),   (1)

for p = b, c, d, we have for the linear Laplace-Lagrange solution

zp(t) = Σk Spk uk(t),   (2)

where the matrix S of constant complex coefficients is determined numerically by the frequency analysis of the solutions (Eq. (3)). The proper modes uk (with k = 1, 2, 3) are obtained from the zp by inverting the above linear relation. To a good approximation, we have uk ≈ e^(i(gk t + φk)), where gk and φk are given in Table 3.

From Eq. (2) it is then easy to understand the meaning of the observed libration between the pericenters ϖb and ϖc. Indeed, for both planets b and c, the dominant term is u2 with frequency g2, and they thus both precess with an average value of g2 (black line, Fig. 7).

It should also be noted that Eq. (2) provides good approximations of the long-term evolution of the eccentricities. In Fig. 8 we plot the eccentricity evolution with initial conditions from Table 2. Simultaneously, we plot the evolution of the same elements given by the above secular, linear approximation. The eccentricity variations are very limited and are described well by the secular approximation. The eccentricities of planets b and c are within the ranges 0.061 < eb < 0.101 and 0.067 < ec < 0.109, respectively. These variations are driven mostly by the secular frequency g2, of period approximately 1480 yr (Table 3). The eccentricity of planet d is nearly constant, with 0.372 < ed < 0.374 (Fig. 8).

### 6.2. Stability analysis

Fig. 9 Stability analysis of the nominal fit (Table 2) of the GJ 163 planetary system. For fixed initial conditions, the phase space of the system is explored by varying the semi-major axis ap and eccentricity ep of each planet, b, c, and d, respectively. The step size is 10^-5 AU in semi-major axis and 10^-2 in eccentricity.
For each initial condition, the system is integrated over 200 yr and a stability criterion is derived with the frequency analysis of the mean longitude (Laskar 1990, 1993). As in Correia et al. (2005, 2009, 2010), the chaotic diffusion is measured by the variation in the frequencies. The red zone corresponds to highly unstable orbits, while the dark blue region can be assumed to be stable on a billion-year timescale. The contour curves indicate the value of χ2 obtained for each choice of parameters.

In order to analyze the stability of the nominal solution (Table 2) and confirm that the inner subsystem is outside of the 3:1 mean motion resonance, we performed a global frequency analysis (Laskar 1993) in the vicinity of this solution, in the same way as for other planetary systems (e.g., Correia et al. 2005, 2009, 2010).

For each planet, the system is integrated on a regular 2D mesh of initial conditions, with varying semi-major axis and eccentricity, while the other parameters are retained at their nominal values (Table 2). The solution is integrated over 200 yr for each initial condition, and a stability indicator is computed as the variation in the measured mean motion over two consecutive 100 yr intervals of time (for more details see Correia et al. 2005). For regular motion, there is no significant variation in the mean motion along the trajectory, while it can vary significantly for chaotic trajectories. The result is reported in Fig. 9, where “red” represents the strongly chaotic trajectories and “dark blue” the extremely stable ones.

In Fig. 9 we show the vicinity of the best-fit solution, where the minima of the χ2 level curves correspond to the nominal parameters (Table 2). For the inner system (top and center panels) we observe the presence of the large 3:1 mean motion resonance. We confirm that the present system is outside the 3:1 resonance, in a more stable area at the bottom-right side (Fig. 9, top) or at the bottom-left side (Fig. 9, center). These results are somewhat surprising, because if the system had previously been captured inside the 3:1 mean motion resonance, we would expect the subsequent evolution to drive it to the opposite side, where the period ratio is above 3 instead of 2.97. Indeed, during the initial stages of planetary systems, capture in mean motion resonances can occur as a result of orbital migration due to interactions within a primordial disk of planetesimals (e.g., Papaloizou 2011). However, as the eccentricities of the planets are damped by tidal interactions with the star, this equilibrium becomes unstable. For first-order mean motion resonances it has been demonstrated that the system exits the resonance with a higher period ratio (Lithwick & Wu 2012; Delisle et al. 2012; Batygin & Morbidelli 2013), and this behavior should not differ much for higher-order resonances.

For the outer planet (Fig. 9, bottom), we observe that the planet lies in a very stable region. Nevertheless, since the contour curves of minimal χ2 vary smoothly in this zone (unlike those for the inner system), we conclude that the eccentricity of planet d may be overestimated. Additional observational data will help to solve this issue, since longer orbital periods become better determined as we acquire data over extended time spans (because we cover more revolutions of the planet around the star). A schematic version of the stability indicator used here is sketched below.
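This is a deliberately simplified illustration of the indicator described above: a plain Newtonian leapfrog integration of the star and the two inner planets (not the SABA4 integrator with general relativity corrections used here), followed by a comparison of the mean angular rate of planet b over the two halves of the run. Masses, semi-major axes, and integration settings are rough, illustrative values chosen to keep the example fast.

```python
import numpy as np

G = 4.0 * np.pi ** 2                 # AU^3 / (Msun yr^2)

def accelerations(pos, masses):
    """Newtonian accelerations for all bodies."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

def leapfrog(pos, vel, masses, dt, n_steps):
    """Plain kick-drift-kick leapfrog (a crude stand-in for SABA4)."""
    traj = np.empty((n_steps, len(masses), 2))
    acc = accelerations(pos, masses)
    for k in range(n_steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
        traj[k] = pos
    return traj

# Star + two inner planets, coplanar, circular start (illustrative values only)
masses = np.array([0.4, 10.6 * 3.0e-6, 6.8 * 3.0e-6])        # Msun
pos = np.array([[0.0, 0.0], [0.0607, 0.0], [-0.1254, 0.0]])
vel = np.array([[0.0, 0.0],
                [0.0, np.sqrt(G * masses[0] / 0.0607)],
                [0.0, -np.sqrt(G * masses[0] / 0.1254)]])

dt, years = 5e-4, 20.0               # much shorter than the 200 yr used above
traj = leapfrog(pos, vel, masses, dt, int(years / dt))

# Stability proxy: mean angular rate of planet b over the two halves of the run
rel = traj[:, 1] - traj[:, 0]
theta = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))
t_arr = dt * np.arange(1, len(theta) + 1)
half = len(theta) // 2
n1 = np.polyfit(t_arr[:half], theta[:half], 1)[0]
n2 = np.polyfit(t_arr[half:], theta[half:], 1)[0]
print("relative mean-motion variation:", abs(n2 - n1) / n1)
```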
Since the system is already stable with the nominal parameters from Table 2, we do not explore the possibility of a lower eccentricity for planet d in great depth in the present paper, but more detailed dynamical studies of this system should take it into account.

We also briefly tested the stability of the five-planet solution (Table 5) and found that it is not stable (even with the eccentricities of planets e and f fixed to zero), in particular because of planet e.

### 6.3. Long-term orbital evolution

From the previous stability analysis, it is clear that the GJ 163 planetary system listed in Table 2 is stable over Gyr timescales. Nevertheless, we also tested this directly by performing a numerical integration of the orbits.

In a first experiment, we integrated the system over 1 Gyr using the symplectic integrator SABA4 of Laskar & Robutel (2001) with a step size of 0.01 yr, including general relativity corrections but without tidal effects. The result displayed in Fig. 10 shows that the orbits evolve in a regular way and remain stable throughout the simulation, which is of the same order as the age of the star.

Fig. 10 Long-term evolution of the GJ 163 planetary system over 1 Gyr starting with the orbital solution from Table 2. We did not include tidal effects in this simulation. The panel shows a face-on view of the system invariant plane. x and y are spatial coordinates in a frame centered on the star. Present orbital solutions are traced with solid lines and each dot corresponds to the position of the planet every 0.1 Myr. The semi-major axes are almost constant, and the eccentricities present slight variations (0.061 < eb < 0.101, 0.067 < ec < 0.109, and 0.372 < ed < 0.374).

Fig. 11 Some possibilities for the long-term evolution of the GJ 163 planetary system over 1 Myr, including tidal effects with Δtp = 10^5 s. Time scales are inversely proportional to Δt (Eq. (4)), so 1 Myr of evolution roughly corresponds to 1 Gyr with Δtp = 100 s (Qp ~ 10^3) or 10 Gyr with Δtp = 10 s (Qp ~ 10^4). We show the ratio Pc/Pb of the orbital periods of the two inner planets (top) and their eccentricities eb (red) and ec (green) (bottom). We use three different sets of initial conditions: Table 2 (left); Table 2 with ac = 0.060679 AU and Δtc = 5 × 10^7 s (middle); Table 4 (right).

Since the two inner planets are very close to the star, in a second experiment we ran a numerical simulation that included tidal effects. Several tidal models have been developed so far, from the simplest ones to the most complex (for a review see Correia et al. 2003; Efroimsky & Williams 2009). The qualitative conclusions are more or less unaffected, so for simplicity we adopt here a linear model with constant Δt (Singer 1968), where Δt is the time delay between the initial perturbation of the planet and the maximal tidal deformation. The tidal force acting on each planet is then given by (Mignard 1979)

F = −3 k2 G M^2 Rp^5 Δtp / rp^10 [ 2 (rp · ṙp) rp + rp^2 (rp × ωp + ṙp) ],   (4)

where rp is the position of each planet relative to the star (ṙp its velocity and rp = |rp|), k2 is the potential Love number, G is the gravitational constant, M is the mass of the star, Rp is the planet radius, and ωp is the spin vector of the planet. Because the spin evolves on a much shorter timescale than the orbit (e.g., Correia 2009), we consider that the spin axis is normal to the orbit and that its norm is given by the equilibrium rotation for a given eccentricity (Eq. (48), Correia et al. 2011),

ωp = n [1 + (15/2)e^2 + (45/8)e^4 + (5/16)e^6] / [(1 + 3e^2 + (3/8)e^4)(1 − e^2)^(3/2)],   (5)

where n and e are the mean motion and eccentricity of the planet.
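The two ingredients just introduced, the constant time-lag force of Eq. (4) and the equilibrium spin of Eq. (5), can be transcribed directly into code. The sketch below is our own transcription of those expressions (any error is ours), evaluated for numbers close to the ones quoted in this section; it is not the ODEX-based simulation itself.

```python
import numpy as np

G = 6.674e-11                        # SI units throughout

def equilibrium_spin(n, e):
    """Pseudo-synchronous spin rate of the constant time-lag model (Eq. (5))."""
    num = 1 + 7.5 * e**2 + 5.625 * e**4 + 0.3125 * e**6
    den = (1 + 3 * e**2 + 0.375 * e**4) * (1 - e**2) ** 1.5
    return n * num / den

def mignard_force(r, v, omega, k2, dt, R_p, M_star):
    """Tidal force of Eq. (4) on the planet (constant time-lag model).
    r, v: position and velocity of the planet relative to the star (3-vectors);
    omega: planet spin vector."""
    rn = np.linalg.norm(r)
    pref = -3.0 * k2 * G * M_star**2 * R_p**5 * dt / rn**10
    return pref * (2.0 * np.dot(r, v) * r + rn**2 * (np.cross(r, omega) + v))

# Illustrative numbers close to those quoted in the text (not a full simulation)
M_star = 0.4 * 1.989e30              # kg
R_p = 0.25 * 7.149e7                 # 0.25 R_Jup in m
a, e = 0.0607 * 1.496e11, 0.1        # m
n = np.sqrt(G * M_star / a**3)       # mean motion [rad/s]
spin = np.array([0.0, 0.0, equilibrium_spin(n, e)])

r = np.array([a, 0.0, 0.0])
v = np.array([0.0, np.sqrt(G * M_star / a), 0.0])
print(mignard_force(r, v, spin, k2=0.5, dt=1e5, R_p=R_p, M_star=M_star))
```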
In this experiment we use the ODEX integrator (e.g., Hairer et al. 2011) for the numerical simulations. We adopt k2 = 0.5 and Rp = 0.25 RJup for all planets, and M = 0.4 M⊙ (Table 1). Typical dissipation times for gaseous planets are Δtp ~ 10 to 100 s, corresponding to dissipation factors Qp ~ 10^4 to 10^3, respectively (Qp ≈ 1/(n Δtp)). However, computations with such low Δtp values (or high Qp) become prohibitive on account of the long evolution times. In order to speed up the simulations, in this paper we have considered artificially high values for the tidal dissipation, about one thousand times the expected values (Δtp = 10^5 s, or Qp ~ 1). Time scales are inversely proportional to Δtp (Eq. (4)), so 1 Myr of evolution roughly corresponds to 1 Gyr with Δtp = 100 s (or 10 Gyr with Δtp = 10 s).

In Fig. 11 (left) we plot the evolution of the orbital period ratio of the two inner planets together with their orbital eccentricities. We observe that, although the system remains stable, the eccentricities are progressively damped, while the present period ratio increases towards the 3:1 mean motion resonance because of the inward migration of the semi-major axes. Around 0.35 Myr the system crosses the 3:1 resonance, but capture cannot occur because the migration is divergent (e.g., Henrard & Lemaitre 1983). With a more realistic tidal dissipation (Δtp = 10^2 s), this event would occur in less than 1 Gyr, so we may wonder why the present system is still evolving in such a dramatic way.

One possibility is that the system is already fully evolved by tidal effects, and the eccentricities of the two inner planets are overestimated (see next section). Another possibility is to suppose that planet c is terrestrial, since its minimum mass is 6.8 M⊕ (Table 2). Terrestrial planets usually dissipate much more energy than gaseous planets, with typical values Qp ~ 10^1–10^2 (e.g., Goldreich & Soter 1966). Thus, adopting Δtc = 5 × 10^7 s (that is, the dissipation for planet c becomes 500 times larger than for the gaseous planets), we repeated the previous simulation, keeping all the other parameters equal except the initial semi-major axis of this planet, ac = 0.060679 AU. In Fig. 11 (middle) we observe that in this case the orbital period ratio of the two inner planets decreases. Therefore, the system may have crossed the 3:1 resonance in the past and evolved to the present situation. We adopted ac above the value in the nominal solution (Table 2), so that we can see the resonance crossing from above. If we use the nominal value, the behavior of the orbital period ratio is the same, but it decreases to values below the initial 2.97 ratio.

Both the size of the planet and the dissipation rates (Δt) are poorly constrained. More generally, the evolution would be slower for a smaller planet and for lower dissipation rates (Eq. (4)). For an Earth composition, planet c's minimum mass converts to a radius of roughly 1.7 R⊕ (Valencia et al. 2007) and, for the same Δtc, the evolution would take 10 Gyr instead of 1 Gyr. Even for smaller planetary sizes, that scenario would remain possible if Δtc assumed higher values.

### 6.4. Dissipation constraints

Table 4. Orbital parameters for the planets orbiting GJ 163, obtained with a tidal constraint for the proper modes u1 and u2.

Table 5. Fitted orbital solution for the GJ 163 planetary system: 5 Keplerians.

In the previous section we saw that the present orbits of the two inner planets in the GJ 163 system are still evolving under tidal effects.
Unless the system started with much higher eccentricities, and depending on its age, the present eccentricities should already have been damped to lower values. In addition, dissipation within a primordial disk should also have contributed to circularizing the initial orbits (e.g., Papaloizou 2011). Thus, it is likely that the eccentricities given by the best-fit solution (Table 2) are overestimated, as is usual when insufficient or inaccurate data are used (e.g., Pont et al. 2011).

One can perform a fit fixing both eccentricities eb and ec at zero. This procedure has been adopted in many previous works but, as explained in Lovis et al. (2011), it is not a good approach. Indeed, if we do so in the case of the GJ 163 system, the subsequent evolution of the eccentricities shows a decoupled system (Δϖ is in circulation), where the eccentricities are mainly driven by angular momentum exchanges with the outer planet and show some irregular variations.

Over long times, the variations of the planetary eccentricities are usually well described by the secular equations (Eq. (2), Figs. 7 and 8). The best procedure for fitting the observational data while taking the eccentricity-damping constraint into account is then to make use of these equations. As for the Laplace-Lagrange linear system (Eq. (2)), we can linearize and average the tidal contribution of expression (4) to the eccentricity, and we obtain for each planet p an additional contribution (Correia et al. 2011)

żp = −γp zp,   (6)

where the damping rate γp grows with k2 Δtp and with the proximity of planet p to the star. Instead of each eccentricity being damped directly, it can be shown from the previous expression that tidal effects damp the proper modes uk as (Laskar et al. 2012)

uk(t) ≈ uk(0) e^(−γk t),   (7)

where the rates γk are combinations of the individual planetary rates γp weighted by the coefficients of the matrix S. For the present GJ 163 system, only γb is relevant. However, since the inner system is strongly coupled, both proper modes u1 and u2 are damped, on a timescale that, for Δtb = 100 s, is compatible with the age of the system. Dissipation in a primordial disk can add some extra contribution to γb, so we expect the proper modes u1 and u2 to be considerably damped today. The initial conditions for the GJ 163 planetary system should then take this extra information into account, as has been done for the HD 10180 system (Lovis et al. 2011). We have thus chosen to modify our fitting procedure to include a constraint for the tidal damping of the proper modes u1 and u2, namely

|u1| ≈ |u2| ≈ 0.   (8)

For that purpose, we added to the χ2 minimization an additional term corresponding to these proper modes (Eq. (9)), where R is a positive constant that is chosen arbitrarily to obtain small values of u1 and u2 simultaneously. Using R = 50 we get u1 ~ 0.03 and u2 ~ 0.12 and obtain a final reduced χ2 that is nearly identical to the one obtained without this additional constraint (R = 0).

The best-fit solution obtained by this method is listed in Table 4. We believe that this solution is a more realistic representation of the true system than the nominal solution (Table 2). Indeed, with this constraint, the eccentricity variations of the two innermost planets are regular and slowly damped, while the variation in the ratio of the orbital periods is almost imperceptible (Fig. 11, right). In addition, the inner system is still coupled, the two pericenters being locked (Δϖ = ϖc − ϖb oscillates around 180°, with a maximal amplitude of about 26°).
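Schematically, such a constraint simply adds a penalty to the quantity being minimized. In the sketch below the penalty is taken to be quadratic in the proper-mode amplitudes, with weight R; that specific functional form is our assumption for illustration (the text above only states that a term controlled by a positive constant R is added to the χ2).

```python
import numpy as np

def penalized_chi2(residuals, errors, u1, u2, R=50.0):
    """Standard chi^2 of the RV fit plus a penalty pushing the secular proper
    modes u1, u2 toward zero. The quadratic form of the penalty is an
    assumption made for this illustration."""
    chi2 = np.sum((residuals / errors) ** 2)
    return chi2 + R * (abs(u1) ** 2 + abs(u2) ** 2)

# Toy usage: same residuals, two hypothetical sets of proper-mode amplitudes
res = np.array([1.2, -0.8, 0.5, -2.0, 1.1])
err = np.full(5, 1.5)
print(penalized_chi2(res, err, u1=0.03, u2=0.12))   # damped modes: small penalty
print(penalized_chi2(res, err, u1=0.10, u2=0.30))   # larger modes: larger penalty
```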
### 6.5. Additional constraints

We can assume that the dynamics of the three known planets is not disturbed much by the presence of an additional small-mass planet close by. We can thus test the possibility of an additional fourth planet in the system by varying its semi-major axis, eccentricity, and longitude of pericenter over a wide range, and performing a stability analysis as in Fig. 9. The test was completed for a fixed value K = 0.2 m/s, corresponding to an Earth-mass object at approximately 1 AU, whose radial-velocity amplitude is at the edge of detection (Fig. 12).

From the analysis of the stable areas in Fig. 12, one can see that additional planets are possible beyond 2.5 AU (well outside the outer planet's apocenter), which corresponds to orbital periods longer than 6 yr. Because the eccentricity of the outer planet is high, there are some high-order mean motion resonances that destabilize several zones up to 4 AU. In addition, the same kind of resonances disturb the inner region between planet c and the pericenter of planet d (Fig. 10), although some stability appears to be possible in the range 0.3 < a < 0.5 AU. Stability can also be achieved for planets extremely close to the star, with orbital periods shorter than 8 days.

Fig. 12 Possible location of an additional fourth planet in the GJ 163 system. The stability of an Earth-size planet (K = 0.2 m/s) is analyzed for various semi-major axes versus eccentricity (top) or mean anomaly (bottom). All the angles of the putative planet are set to 0° (except the mean anomaly in the bottom panel), and in the bottom panel its eccentricity is set to 0. The stable zones where additional planets can be found are the dark blue regions.

We can also try to find constraints on the maximal masses of the current three-planet system if we assume co-planarity of the orbits. Indeed, up to now we have been assuming that the inclination of the system to the line of sight is 90°, which gives minimum values for the planetary masses (Table 2).

By decreasing the inclination of the orbital plane of the system, we increased the mass values of all planets and repeated a stability analysis of the orbits, as in Fig. 9. As we decrease the inclination, the stable dark-blue areas become narrower, up to the point where the minimum χ2 of the best-fit solution lies outside the stable zones; beyond that point, we conclude that the system cannot be stable anymore. It is not straightforward to find a transition inclination between the two regimes, but we can infer from our plots that the stability of the whole system is still possible for an inclination of 30°, but becomes impossible for an inclination of 5° or 10°. Therefore, we conclude that the maximum masses of the planets are most probably those computed for an inclination around 20°, corresponding to a scaling factor of about 3 for the possible masses.

Even when adopting an inclination of 20°, the two inner planets lie outside the 3:1 mean motion resonance, more or less at the same place as for 90° (Fig. 9). The reason the system becomes unstable for lower inclination values is that the mass of the outer planet d grows to a point such that high-order mean motion resonances between planet d and planets b and/or c destroy the whole system. In particular, the 3:1 resonant island also disappears completely for low inclination values.
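The mass-scaling factor quoted above follows directly from m = (m sin i)/sin i; the snippet below evaluates it for a few inclinations using the minimum masses of Table 2.

```python
import numpy as np

# Minimum masses from Table 2 (m sin i, in Earth masses) and trial inclinations
msini = {"b": 10.6, "c": 6.8, "d": 29.0}
for inc_deg in (90, 30, 20, 10, 5):
    factor = 1.0 / np.sin(np.radians(inc_deg))
    scaled = {k: round(v * factor, 1) for k, v in msini.items()}
    print(f"i = {inc_deg:2d} deg, 1/sin i = {factor:4.1f}, masses [M_Earth] =", scaled)
```

For i = 20° the factor is 1/sin 20° ≈ 2.9, i.e., the "scaling factor of about 3" mentioned above.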
## 7. GJ 163c in the habitable zone?

With a separation of 0.1254 AU, GJ 163c receives about 1.34 times the energy that Earth receives from the Sun. Considering the case where the whole planetary surface re-radiates the absorbed flux (i.e., a β factor of Kaltenegger & Sasselov (2011) equal to 1), the equilibrium temperature of GJ 163c is

Teq = 278.5 K (L/L⊙)^(1/4) [(1 − A)/β]^(1/4) (a/1 AU)^(−1/2) ≈ 300 (1 − A)^(1/4) K,

where A is the Bond albedo. Scaled to our solar system, its illumination is equivalent to that of a planet located midway between Venus and Earth.

To be located in the HZ, and thus potentially harbor liquid water, the equilibrium temperature of a planet with an atmosphere as dense as the Earth's should be between 175 K and 270 K (see Selsis et al. 2007, for a complete discussion). In the case of GJ 163c this condition is fulfilled for a large range of Bond albedos (A = 0.34−0.89), but not for an albedo similar to the Earth's. The albedo of the Earth is equal to 0.3 in the optical and is as low as 0.2 in the near-IR, where early M dwarfs radiate most of their energy. With these values GJ 163c would lie outside the HZ. An albedo greater than 0.34 is, however, possible if 40–50% of the atmosphere is covered by clouds (see, for example, Fig. 1 of Kaltenegger & Sasselov 2011). The precise location of GJ 163c with respect to the habitable zone may further depend on additional heating sources such as tidal (Barnes et al. 2012) or radiogenic (Ehrenreich et al. 2006) heating, and more detailed studies are thus welcome.

Two other conditions, besides hosting liquid water on its surface, are needed for a planet in the HZ to be truly habitable. First, the planet should not have accreted a massive H-He envelope; otherwise the surface pressure would be too high and could lead to a runaway greenhouse effect. In the 3–10 M⊕ range, planets can have very different structures for a given mass, and it is impossible to know without a radius measurement whether GJ 163c is embedded in a massive H-He envelope or not. Second, the planet should contain water among the components of its atmosphere.

Numerous discussions exist about two characteristics of planets inside the HZ around M dwarfs and their effect on habitability: their location inside the tidal-lock radius of their star and the high activity level of M dwarfs. In Delfosse et al. (2013) we summarized the results of recent works in this domain. The main conclusion is that tidal effects and atmospheric erosion in the neighborhood of active stars do not “preclude the habitability of terrestrial planets orbiting around cool stars” (Barnes et al. 2011). In particular, the thick atmosphere that may enshroud a planet of ~7 Earth masses seems stable even around very active M dwarfs (Tian 2009).
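The habitable-zone numbers used in this section can be reproduced with the equilibrium-temperature expression quoted above. The sketch below adopts L = 0.021 L⊙ (the Casagrande et al. 2008 value from Sect. 2) and β = 1, and recovers both the ~1.34 Earth insolation and the ~175–270 K range for A = 0.34–0.89.

```python
def t_eq(L_star, a_au, albedo, beta=1.0):
    """Equilibrium temperature for a planet re-radiating over the whole surface
    (beta = 1), following the expression quoted above. L_star in L_sun, a in AU."""
    return 278.5 * L_star ** 0.25 * ((1.0 - albedo) / beta) ** 0.25 / a_au ** 0.5

L_star, a_c = 0.021, 0.1254          # L_sun, AU (values quoted in the text)
print("flux relative to Earth:", round(L_star / a_c ** 2, 2))     # ~1.34
for A in (0.2, 0.3, 0.34, 0.89):
    print(f"A = {A:.2f}: T_eq = {t_eq(L_star, a_c, A):.0f} K")
```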
## 8. Conclusion

We have presented the analysis of 150 HARPS RVs of the nearby M dwarf GJ 163 and demonstrated that they encode at least three signals compatible with the gravitational pull of orbiting planets; we also identified two additional signals that require further observations before they can be counted as planets. Signals b and d have periodicities that seem incompatible with the possible rotation periods of the star. Signals b and c are also recovered when the data set is divided into observational seasons, lending credence to the conclusion that at least three planets orbit GJ 163. We derived their orbital periods (~8.6, 25.6, and 604 days) and their minimum masses (~10.6, 6.8, and 29 M⊕), which correspond to a hot, a temperate, and a cold planet in the super-Earth/Neptune mass regime. The super-Earth GJ 163c deserves particular attention for its potential habitability. It receives about 30% more energy than Earth does in our solar system and could qualify as a habitable-zone planet for a wide range of albedo values (175 ≤ Teq ≤ 270 K for 0.34 ≤ A ≤ 0.89).

We also performed a detailed dynamical analysis of the system and showed that, despite a period ratio Pc/Pb = 2.97, planets b and c do not participate in a 3:1 resonance. The system is found to be stable over a time comparable to its age and, as long as the orbital parameters of the first three planets remain unchanged, it also appears complete down to Earth-mass planets for a wide range of separations (0.1 ≲ a ≲ 2.2 AU).

The GJ 163 system is singular both for its potentially habitable planet GJ 163c and for its particular hierarchical structure and dynamical history. Even before its atmosphere can be characterized and searched for biomarkers with future observatories, it is already a unique system that connects the potential habitability of a planet with the dynamical history of a planetary system.

Footnote 1. An additional ~10% uncertainty should be added quadratically to the mass uncertainties to account for the ~10% stellar-mass uncertainty.

## Acknowledgments

Our first thanks go to the ESO La Silla staff, to whom we are grateful for their continuous support. We wish to thank the anonymous referee for thoughtful comments and suggestions. We also acknowledge support by PNP-CNRS, CS of Paris Observatory, the PICS05998 France-Portugal program, Fundação para a Ciência e a Tecnologia (FCT) through program Ciência 2007 funded by FCT/MCTES (Portugal) and POPH/FSE (EC) (grants PTDC/CTE-AST/098528/2008, PTDC/CTE-AST/098604/2008, PEst-C/CTM/LA0025/2011, and SFRH/BD/60688/2009), and by the European Research Council/European Community under the FP7 through a Starting Grant (grant agreement 239953). MG is FNRS Research Associate.

## References

1. Barnes, J. R., Jeffers, S. V., & Jones, H. R. A. 2011, MNRAS, 401, 445
2. Barnes, J. R., Jenkins, J. S., Jones, H. R. A., et al. 2012, MNRAS, 3165
3. Batygin, K., & Morbidelli, A. 2013, AJ, 145, 1
4. Boisse, I., Bouchy, F., Hébrard, G., et al. 2011, A&A, 528, A4
5. Bonfils, X., Delfosse, X., Forveille, T., Mayor, M., & Udry, S. 2007, Proc. Conf. In the Spirit of Bernard Lyot: The Direct Detection of Planets and Circumstellar Disks in the 21st Century, June 04–08, 21
6. Bonfils, X., Delfosse, X., Udry, S., et al. 2013, A&A, 549, A109
7. Casagrande, L., Flynn, C., & Bessell, M. 2008, MNRAS, 389, 585
8. Correia, A. C. M. 2009, ApJ, 704, L1
9. Correia, A. C. M., Laskar, J., & Néron de Surgy, O. 2003, Icarus, 163, 1
10. Correia, A. C. M., Udry, S., Mayor, M., et al. 2005, A&A, 440, 751
11. Correia, A. C. M., Udry, S., Mayor, M., et al. 2009, A&A, 496, 521
12. Correia, A. C. M., Couetdic, J., Laskar, J., et al. 2010, A&A, 511, A21
13. Correia, A. C. M., Laskar, J., Farago, F., & Boué, G. 2011, Celest. Mech. Dyn. Astron., 111, 105
14. Cumming, A., Marcy, G. W., & Butler, R. P. 1999, ApJ, 526, 890
15. Cutri, R. M., Skrutskie, M. F., van Dyk, S., et al. 2003, The IRSA 2MASS All-Sky Point Source Catalog
16. Delfosse, X., Forveille, T., Ségransan, D., et al. 2000, A&A, 364, 217
17. Delfosse, X., Bonfils, X., Forveille, T., et al. 2013, A&A, 553, A8
18. Delisle, J.-B., Laskar, J., Correia, A. C. M., & Boué, G. 2012, A&A, 546, A71
19. Efroimsky, M., & Williams, J. G. 2009, Celest. Mech. Dyn. Astron., 104, 257
20. Ehrenreich, D., Lecavelier Des Etangs, A., Beaulieu, J.-P., & Grasset, O. 2006, ApJ, 651, 535
21. Ford, E. B. 2005, AJ, 129, 1706
22. Forveille, T., Bonfils, X., Delfosse, X., et al. 2009, A&A, 493, 645
23. Gilmore, G., Wyse, R. F. G., & Jones, J. B. 1995, AJ, 109, 1095
24. Goldreich, P., & Soter, S. 1966, Icarus, 5, 375
25. Gregory, P. C. 2005, ApJ, 631, 1198
26. Gregory, P. C. 2007, MNRAS, 374, 1321
27. Hairer, E., Nørsett, S., & Wanner, G. 2011, Solving Ordinary Differential Equations I: Nonstiff Problems, Springer Series in Computational Mathematics (Springer)
28. Hawley, S. L., Gizis, J. E., & Reid, I. N. 1996, AJ, 112, 2799
29. Henrard, J., & Lemaitre, A. 1983, Celest. Mech., 30, 197
30. Kaltenegger, L., & Sasselov, D. 2011, ApJ, 736, L25
31. Kiraga, M., & Stepien, K. 2007, Acta Astron., 57, 149
32. Koen, C., Kilkenny, D., van Wyk, F., & Marang, F. 2010, MNRAS, 403, 1949
33. Kürster, M., Endl, M., Rouesnel, F., et al. 2003, A&A, 403, 1077
34. Laskar, J. 1990, Icarus, 88, 266
35. Laskar, J. 1993, Phys. D Nonlinear Phenomena, 67, 257
36. Laskar, J., & Correia, A. C. M. 2009, A&A, 496, L5
37. Laskar, J., & Robutel, P. 2001, Celest. Mech. Dyn. Astron., 80, 39
38. Laskar, J., Boué, G., & Correia, A. C. M. 2012, A&A, 538, A105
39. Leggett, S. K. 1992, ApJS, 82, 351
40. Leggett, S. K., Allard, F., Geballe, T. R., Hauschildt, P. H., & Schweitzer, A. 2001, ApJ, 548, 908
41. Lithwick, Y., & Wu, Y. 2012, ApJ, 756, L11
42. Lomb, N. R. 1976, Ap&SS, 39, 447
43. Lovis, C., Ségransan, D., Mayor, M., et al. 2011, A&A, 528, A112
44. Mardling, R. A. 2007, MNRAS, 382, 1768
45. Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20
46. Mayor, M., Bonfils, X., Forveille, T., et al. 2009, A&A, 507, 487
47. Mignard, F. 1979, Moon and Planets, 20, 301
48. Neves, V., Bonfils, X., Santos, N. C., et al. 2012, A&A, 538, A25
49. Papaloizou, J. C. B. 2011, Celest. Mech. Dyn. Astron., 111, 83
50. Papaloizou, J. C. B., & Terquem, C. 2010, MNRAS, 405, 573
51. Pepe, F., Mayor, M., Queloz, D., et al. 2004, A&A, 423, 385
52. Pepe, F., Lovis, C., Ségransan, D., et al. 2011, A&A, 534, A58
53. Pont, F., Husnoo, N., Mazeh, T., & Fabrycky, D. 2011, MNRAS, 414, 1278
54. Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes (Cambridge: Cambridge University Press)
55. Ryan, S. G., & Norris, J. E. 1991, AJ, 101, 1865
56. Scargle, J. D. 1982, ApJ, 263, 835
57. Schlaufman, K. C., & Laughlin, G. 2010, A&A, 519, A105
58. Schmitt, J. H. M. M., Fleming, T. A., & Giampapa, M. S. 1995, ApJ, 450, 392
59. Selsis, F., Kasting, J. F., Levrard, B., et al. 2007, A&A, 476, 1373
60. Singer, S. F. 1968, Geophys. J. R. Astron. Soc., 15, 205
61. Tian, F. 2009, ApJ, 703, 905
62. Udry, S., Bonfils, X., Delfosse, X., et al. 2007, A&A, 469, L43
63. Valencia, D., Sasselov, D. D., & O'Connell, R. J. 2007, ApJ, 665, 1413
64. van Leeuwen, F. 2007, A&A, 474, 653
65. Vogt, S. S., Butler, R. P., Rivera, E. J., et al. 2010, ApJ, 723, 954
66. Zechmeister, M., Kürster, M., & Endl, M. 2009, A&A, 505, 859

## Online material

Table 6. Radial velocity time series of GJ 163 given in the solar system barycentric reference frame (the secular acceleration due to GJ 163's proper motion is not removed), together with measurements of the full width at half maximum (FWHM) and bisector span (BIS) of the cross-correlation function, as well as Ca II H+K and Hα indices.
{"ft_lang_label":"__label__en","ft_lang_prob":0.86113006,"math_prob":0.9346073,"size":66535,"snap":"2021-04-2021-17","text_gpt3_token_len":18035,"char_repetition_ratio":0.15885827,"word_repetition_ratio":0.20031713,"special_character_ratio":0.27780867,"punctuation_ratio":0.164628,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9707799,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82],"im_url_duplicate_count":[null,2,null,4,null,2,null,4,null,4,null,4,null,4,null,4,null,4,null,2,null,2,null,2,null,2,null,2,null,4,null,4,null,4,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,4,null,2,null,4,null,2,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-11T08:06:45Z\",\"WARC-Record-ID\":\"<urn:uuid:d4cdc3e4-e653-471b-8fd7-dcf48714c28a>\",\"Content-Length\":\"263123\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f7886a66-92e1-47fa-9599-52cfda3c886b>\",\"WARC-Concurrent-To\":\"<urn:uuid:29c316ba-ad5c-41be-a730-8dd7e223d7dd>\",\"WARC-IP-Address\":\"167.114.155.65\",\"WARC-Target-URI\":\"https://www.aanda.org/articles/aa/full_html/2013/08/aa20237-12/aa20237-12.html\",\"WARC-Payload-Digest\":\"sha1:4QLOLE3TTXWZ5HXXNB5AAUCKLXRYHBCE\",\"WARC-Block-Digest\":\"sha1:D7MPQNTXKUDNZKEZ5CW2M3TRMWXCY3PX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038061562.11_warc_CC-MAIN-20210411055903-20210411085903-00392.warc.gz\"}"}
https://academy.binance.com/ph/glossary/compound-interest
[ "Home\nGlossary\nCompound Interest\n\n# Compound Interest\n\nBeginner\n\nCompound interest refers to the interest accumulated on the principal amount, in addition to the interest from previous periods; this allows you to maximize your earnings on the principal sum. Interest can be compounded on any frequency schedule, whether it’s daily, monthly, or annual. The formula for compound interest is as follows:\n\nA = P(1 + r/n)^nt\n\nWhere:\n\nA = the total amount of money at the end\n\nP = the principal amount invested or borrowed\n\nr = the annual interest rate\n\nn = the number of times interest is compounded within a specific time period\n\nt = the number of these time periods that have elapsed\n\nCompounding interest is a good way to make the most of your principal amount when it comes to saving and investing. For instance, holding \\$10,000 in an account with a 4% annual interest rate compounded for five years will eventually leave you with \\$12,166.53 — \\$166.53 more than if the interest does not compound.\nCompound interest can also apply to loans. For example, if you borrow \\$10,000 at an annual interest rate of 5% without compounding, you will be required to pay \\$500 in interest after a year. However, if you pay this loan off monthly on a compound interest basis, you would have paid \\$511.62 in interest payments by the end of the year.\n\nCompound interest can be an effective way to grow wealth over time, as the interest earned on the accumulated interest can compound and eventually grow exponentially. On the flip side, compound interest on debt can result in significant costs over time if the debt is not paid off quickly.\n\nShare Posts\nRelated Glossaries\nMagrehistro ng isang account\nGamitin ang iyong nalalaman sa pamamagitan ng pagbubukas ng account sa Binance ngayon." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9412755,"math_prob":0.915476,"size":898,"snap":"2023-40-2023-50","text_gpt3_token_len":181,"char_repetition_ratio":0.18232661,"word_repetition_ratio":0.0,"special_character_ratio":0.20155902,"punctuation_ratio":0.078313254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9703899,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T16:24:48Z\",\"WARC-Record-ID\":\"<urn:uuid:591eac45-1e75-4194-af4f-f2aaa59df0cb>\",\"Content-Length\":\"158272\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a24df778-ff86-4a7a-b78f-4e25fa5281d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:362093cb-b3db-493d-b1e2-24333f6ef2e4>\",\"WARC-IP-Address\":\"18.67.76.71\",\"WARC-Target-URI\":\"https://academy.binance.com/ph/glossary/compound-interest\",\"WARC-Payload-Digest\":\"sha1:WS7KZM7JCDIV7QMY6QQ32H3OUKKWGZJD\",\"WARC-Block-Digest\":\"sha1:L2YA7HDWY3EHP7EWJ7HWFOC5KKLBUOPI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510300.41_warc_CC-MAIN-20230927135227-20230927165227-00730.warc.gz\"}"}
https://www.colorhexa.com/0062da
[ "# #0062da Color Information\n\nIn a RGB color space, hex #0062da is composed of 0% red, 38.4% green and 85.5% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 55% magenta, 0% yellow and 14.5% black. It has a hue angle of 213 degrees, a saturation of 100% and a lightness of 42.7%. #0062da color hex could be obtained by blending #00c4ff with #0000b5. Closest websafe color is: #0066cc.\n\n• R 0\n• G 38\n• B 85\nRGB color chart\n• C 100\n• M 55\n• Y 0\n• K 15\nCMYK color chart\n\n#0062da color description : Pure (or mostly pure) blue.\n\n# #0062da Color Conversion\n\nThe hexadecimal color #0062da has RGB values of R:0, G:98, B:218 and CMYK values of C:1, M:0.55, Y:0, K:0.15. Its decimal value is 25306.\n\nHex triplet RGB Decimal 0062da `#0062da` 0, 98, 218 `rgb(0,98,218)` 0, 38.4, 85.5 `rgb(0%,38.4%,85.5%)` 100, 55, 0, 15 213°, 100, 42.7 `hsl(213,100%,42.7%)` 213°, 100, 85.5 0066cc `#0066cc`\nCIE-LAB 43.939, 23.466, -67.688 17.02, 13.796, 68.092 0.172, 0.139, 13.796 43.939, 71.641, 289.12 43.939, -22.198, -101.899 37.143, 16.794, -82.693 00000000, 01100010, 11011010\n\n# Color Schemes with #0062da\n\n• #0062da\n``#0062da` `rgb(0,98,218)``\n• #da7800\n``#da7800` `rgb(218,120,0)``\nComplementary Color\n• #00cfda\n``#00cfda` `rgb(0,207,218)``\n• #0062da\n``#0062da` `rgb(0,98,218)``\n• #0b00da\n``#0b00da` `rgb(11,0,218)``\nAnalogous Color\n• #cfda00\n``#cfda00` `rgb(207,218,0)``\n• #0062da\n``#0062da` `rgb(0,98,218)``\n• #da0b00\n``#da0b00` `rgb(218,11,0)``\nSplit Complementary Color\n• #62da00\n``#62da00` `rgb(98,218,0)``\n• #0062da\n``#0062da` `rgb(0,98,218)``\n• #da0062\n``#da0062` `rgb(218,0,98)``\n• #00da78\n``#00da78` `rgb(0,218,120)``\n• #0062da\n``#0062da` `rgb(0,98,218)``\n• #da0062\n``#da0062` `rgb(218,0,98)``\n• #da7800\n``#da7800` `rgb(218,120,0)``\n• #00408e\n``#00408e` `rgb(0,64,142)``\n• #004ba7\n``#004ba7` `rgb(0,75,167)``\n• #0057c1\n``#0057c1` `rgb(0,87,193)``\n• #0062da\n``#0062da` `rgb(0,98,218)``\n• #006df4\n``#006df4` `rgb(0,109,244)``\n• #0e7aff\n``#0e7aff` `rgb(14,122,255)``\n• #2888ff\n``#2888ff` `rgb(40,136,255)``\nMonochromatic Color\n\n# Alternatives to #0062da\n\nBelow, you can see some colors close to #0062da. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0099da\n``#0099da` `rgb(0,153,218)``\n• #0086da\n``#0086da` `rgb(0,134,218)``\n• #0074da\n``#0074da` `rgb(0,116,218)``\n• #0062da\n``#0062da` `rgb(0,98,218)``\n• #0050da\n``#0050da` `rgb(0,80,218)``\n• #003eda\n``#003eda` `rgb(0,62,218)``\n• #002cda\n``#002cda` `rgb(0,44,218)``\nSimilar Colors\n\n# #0062da Preview\n\nThis text has a font color of #0062da.\n\n``<span style=\"color:#0062da;\">Text here</span>``\n#0062da background color\n\nThis paragraph has a background color of #0062da.\n\n``<p style=\"background-color:#0062da;\">Content here</p>``\n#0062da border color\n\nThis element has a border color of #0062da.\n\n``<div style=\"border:1px solid #0062da;\">Content here</div>``\nCSS codes\n``.text {color:#0062da;}``\n``.background {background-color:#0062da;}``\n``.border {border:1px solid #0062da;}``\n\n# Shades and Tints of #0062da\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000102 is the darkest color, while #eef5ff is the lightest one.\n\n• #000102\n``#000102` `rgb(0,1,2)``\n• #000a16\n``#000a16` `rgb(0,10,22)``\n• #001329\n``#001329` `rgb(0,19,41)``\n• #001b3d\n``#001b3d` `rgb(0,27,61)``\n• #002451\n``#002451` `rgb(0,36,81)``\n• #002d64\n``#002d64` `rgb(0,45,100)``\n• #003678\n``#003678` `rgb(0,54,120)``\n• #003f8c\n``#003f8c` `rgb(0,63,140)``\n• #00489f\n``#00489f` `rgb(0,72,159)``\n• #0050b3\n``#0050b3` `rgb(0,80,179)``\n• #0059c6\n``#0059c6` `rgb(0,89,198)``\n• #0062da\n``#0062da` `rgb(0,98,218)``\n• #006bee\n``#006bee` `rgb(0,107,238)``\n• #0274ff\n``#0274ff` `rgb(2,116,255)``\n• #167fff\n``#167fff` `rgb(22,127,255)``\n• #2989ff\n``#2989ff` `rgb(41,137,255)``\n• #3d94ff\n``#3d94ff` `rgb(61,148,255)``\n• #519fff\n``#519fff` `rgb(81,159,255)``\n• #64aaff\n``#64aaff` `rgb(100,170,255)``\n• #78b5ff\n``#78b5ff` `rgb(120,181,255)``\n• #8cbfff\n``#8cbfff` `rgb(140,191,255)``\n• #9fcaff\n``#9fcaff` `rgb(159,202,255)``\n• #b3d5ff\n``#b3d5ff` `rgb(179,213,255)``\n• #c6e0ff\n``#c6e0ff` `rgb(198,224,255)``\n• #daebff\n``#daebff` `rgb(218,235,255)``\n• #eef5ff\n``#eef5ff` `rgb(238,245,255)``\nTint Color Variation\n\n# Tones of #0062da\n\nA tone is produced by adding gray to any pure hue. In this case, #656c75 is the less saturated color, while #0062da is the most saturated one.\n\n• #656c75\n``#656c75` `rgb(101,108,117)``\n• #5c6b7e\n``#5c6b7e` `rgb(92,107,126)``\n• #546a86\n``#546a86` `rgb(84,106,134)``\n• #4b6a8f\n``#4b6a8f` `rgb(75,106,143)``\n• #436997\n``#436997` `rgb(67,105,151)``\n• #3b689f\n``#3b689f` `rgb(59,104,159)``\n• #3267a8\n``#3267a8` `rgb(50,103,168)``\n• #2a66b0\n``#2a66b0` `rgb(42,102,176)``\n• #2265b8\n``#2265b8` `rgb(34,101,184)``\n• #1965c1\n``#1965c1` `rgb(25,101,193)``\n• #1164c9\n``#1164c9` `rgb(17,100,201)``\n• #0863d2\n``#0863d2` `rgb(8,99,210)``\n• #0062da\n``#0062da` `rgb(0,98,218)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0062da is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
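The RGB, CMYK, and percentage figures quoted for #0062da on this page follow from the standard hex-to-RGB and RGB-to-CMYK conversion formulas. The helper below is my own illustrative sketch (not ColorHexa's code) and reproduces the headline numbers:

```python
def hex_to_rgb(hex_color: str):
    """'#0062da' -> (0, 98, 218)"""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r: int, g: int, b: int):
    """8-bit RGB -> CMYK percentages."""
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)
    if k == 1:                      # pure black
        return (0.0, 0.0, 0.0, 100.0)
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return tuple(round(v * 100, 1) for v in (c, m, y, k))

r, g, b = hex_to_rgb("#0062da")
print((r, g, b))                                          # (0, 98, 218)
print(tuple(round(v / 255 * 100, 1) for v in (r, g, b)))  # (0.0, 38.4, 85.5) percent
print(rgb_to_cmyk(r, g, b))                               # (100.0, 55.0, 0.0, 14.5)
```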
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5154878,"math_prob":0.8226842,"size":3676,"snap":"2023-14-2023-23","text_gpt3_token_len":1633,"char_repetition_ratio":0.1459695,"word_repetition_ratio":0.0073664826,"special_character_ratio":0.55386287,"punctuation_ratio":0.22892939,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99002266,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T23:04:05Z\",\"WARC-Record-ID\":\"<urn:uuid:7ee5a5bb-4253-451a-93ca-104975166ec5>\",\"Content-Length\":\"36122\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83690d9d-12dc-408c-a4f3-663311d4d9e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:7fef6d2a-526c-48cb-a11f-0bd0f2181af8>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0062da\",\"WARC-Payload-Digest\":\"sha1:LL63R3TWCDSVE6MINTHVCSE537SIMOVN\",\"WARC-Block-Digest\":\"sha1:SPWJF23ABSLG3EXUG27SJTIU2327Y5MK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950363.89_warc_CC-MAIN-20230401221921-20230402011921-00051.warc.gz\"}"}
https://math.meta.stackexchange.com/questions/24995/can-we-have-a-tag-for-integer-base-b-representation/24998
[ "# Can we have a tag for integer base $b$ representation?\n\nI see a lot of interesting questions, broadly in the category of contest math and recreational math, about the base $b$ expansion of an integer, often $b = 10$. For example: the sum of the digits, when this expansion has repeated digits, how many $0$s at the end, and so on.\n\nI would like a tag to use for these questions other than just . Although these questions fall into a clear common category, I have found tagging them to be an awkward task -- no existing tag seems to fit the bill for a question about the sum of the base $b$ digits of an integer, for example.\n\nIn the answer to this old question it is suggested to use . However the tag wiki seems to have changed since then, to make it even more broad and less about integers than it already was. The tag wiki discusses not just base $b$ representation of integers, but also roman numerals, floating point numbers, factorial base, Fibonacci base, and so on.\n\nOther related tags include\n\n• , helpful and appropriate for base $2$ but not for other bases;\n\n• , possibly helpful for base $10$, but more appropriate for real number representation, and certainly not helpful for other bases;\n\n• , which is unfortunately just a synonym of .\n\nNone of these tags are about base $b$ representation for a general $b$, which is a very common and important topic in elementary number theory. It also strikes me as odd that we have decimal expansion and binary tags, but no tags for base $b$, of which these tags are just special cases. Therefore, I am suggesting we have a new tag for base $b$ representation of numbers. As a rough suggestion:\n\n: Questions about the base $b$ representation of an integer, where $b$ is an integer, including: arithmetic in base $b$, converting between bases, the number of digits, the sum of digits, and other features of such representations. Often appropriate with , but may also be used for base $b$ representation of rational or real numbers. For questions about base $2$ or base $10$ in particular, consider using or . For questions about some other representation, use .\n\nDoes the community agree this would be a useful tag, or have any suggestions for improving the scope and the above tag excerpt?\n\n• Upvoting not because I necessarily support the proposed tag but because it seems worth discussing. There may be other actions that could be taken. – David K Sep 15 '16 at 1:55\n\nLooking at the questions that are actually tagged number-systems, setting aside a few that clearly do not fit the wiki description at all (I have just untagged four of these), it seems to me the vast majority of them are about base-$b$ representation where $b$ is an integer.\n\nThere are a handful of questions regarding negative bases (which would be covered by the proposed tag, though that could be fixed by inserting the words \"greater than $1$\") or fractional bases.\n\nOut of 473 questions tagged [number-systems],\n\n• a search for [number-systems] float retrieved 2 questions;\n• a search for [number-systems] roman retrieved 11 questions, but only 6 of them were actually about Roman numerals.\n\nIt would appear that only a tiny fraction of the questions properly tagged [number-systems] are really about anything that isn't base-$b$ representations of numbers, where $b$ is an integer. 
There are quite a few questions about representation of rational numbers, which is not about \"about the base $b$ expansion of an integer,\" but would still be allowed under the proposed new tag.\n\nI just don't see how a new \"base $b$\" tag would narrow down the search results much better (other than not returning many questions that fit the description but were not tagged).\n\nI would rather discuss changing the tag description to emphasize the focus on base-$b$ numbers (for integer $b$) more and to encourage questions about such things as sums of digits and divisibility tests. (The current excerpt mentions algorithms only near the end, and neither the excerpt nor the full descriptions mentions the very interesting topic of digit patterns.) Frankly, I'm the one responsible for the current state of the tag description; it was (I think) my first attempt at a major edit of a tag, there was no existing full description to go by, and I may have gotten a little over-enthusiastic in some parts (especially in listing obscure subtopics in the excerpt).\n\nIn my own defense, the other things covered under the current tag description are mainly there because they are tangent to the main topic (floating-point number representations are strongly based on base-$b$, and fractional-radix or mixed-radix systems are extensions of the base-$b$ idea). Roman numerals are allowed because there are few such questions and they have no other obvious home (that I know of).\n\nIf there must be a new tag, I think the most reasonable thing is to convert the existing tag so it more specifically covers base-$b$ representation, create the new tag instead to cover the more \"unusual\" numeric systems, and move things that aren't really base-$b$ to the new tag. I'm just not sure there would be enough questions under the new tag to justify doing this." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9268027,"math_prob":0.93267274,"size":2328,"snap":"2019-35-2019-39","text_gpt3_token_len":494,"char_repetition_ratio":0.14672978,"word_repetition_ratio":0.005194805,"special_character_ratio":0.21434708,"punctuation_ratio":0.121412806,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9598588,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-15T06:33:23Z\",\"WARC-Record-ID\":\"<urn:uuid:95fec9ef-92b5-466a-92df-0eb1fde448ce>\",\"Content-Length\":\"116306\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed94a38e-44d6-4c85-bc10-730776cdab71>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a03a407-2757-4399-a048-63e56693ebf8>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.meta.stackexchange.com/questions/24995/can-we-have-a-tag-for-integer-base-b-representation/24998\",\"WARC-Payload-Digest\":\"sha1:IZ4U4ICGAI2IYKBEHLAAZ6QQA7HARUMI\",\"WARC-Block-Digest\":\"sha1:RMQDFFIMLTRP7JTFGT6YM44XXN7OARUB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514570740.10_warc_CC-MAIN-20190915052433-20190915074433-00340.warc.gz\"}"}
https://hpccsystems.com/bb/viewtopic.php?f=8&t=5373
[ "Wed Dec 08, 2021 9:45 am\n\n## generate unique random numbers\n\nComments and questions related to the Enterprise Control Language\nHi everyone,\n\nI am wondering is there a way to generate unique random numbers in ECL?\nI tried to use RANDOM() but the result contains same random numbers.\n\nThanks,\nLily\nlily\n\nPosts: 13\nJoined: Fri Nov 04, 2016 1:02 pm\n\nLily,\n\nIn my experience, it can be difficult to make RANDOM() generate the same numbers when you want it to, so I'm wondering how you used RANDOM()? In what context? Can you show me the code that produces the \"same\" non-unique numbers, please?\n\nHTH,\n\nRichard\nrtaylor", null, "Posts: 1606\nJoined: Wed Oct 26, 2011 7:40 pm\n\nHi Taylor,\n\nThanks for help!\n\nThe code that generated the repeated random numbers is as shown below:\n\nout := DATASET(5, TRANSFORM({INTEGER r}, SELF.r:= RANDOM()%10));\nOUTPUT(out);\n\nThe output result is as shown below:\n\n## r\n1 8\n2 0\n3 8\n4 7\n5 1\n\nThanks,\nLily\n\nrtaylor wrote:Lily,\n\nIn my experience, it can be difficult to make RANDOM() generate the same numbers when you want it to, so I'm wondering how you used RANDOM()? In what context? Can you show me the code that produces the \"same\" non-unique numbers, please?\n\nHTH,\n\nRichard\nlily\n\nPosts: 13\nJoined: Fri Nov 04, 2016 1:02 pm\n\nLily,\n\nThe appearance of duplication is due to your use of the modulus operator limiting your result to only 10 possibilities (the remainders of dividing the actual RANDOM() number by 10).\n\nThis example demonstrates that the RANDOM() function itself returns a very different value for each use:\nCode: Select all\n`out := DATASET(5, TRANSFORM({INTEGER r,INTEGER r1,INTEGER r2},                             SELF.r  := RANDOM(),                            SELF.r1 := SELF.r % 10,                            SELF.r2 := SELF.r % 100));OUTPUT(out);`\nI just ran this code and got this result:\nCode: Select all\n`1911583916   0   563647224352   8   21636695419   8   513689116099   3   61012783233   9   79`\nYou will note in the first record that the first RANDOM() result is 1911583916, the modulus 10 result is 0, and the modulus 100 result is 56. This doesn't make sense if the first random value (1911583916) is used for the two modulus calculations. But it does, because the RANDOM() function is actually called again each time an expression is calculated using it, producing this (correct) result.\n\nSo, as you see, RANDOM() actually DOES return unique values each time. If you really did want the modulus values to use the first RANDOM() in each record you would have to do it this way:\nCode: Select all\n`ds := DATASET(5, TRANSFORM({INTEGER r,INTEGER r1,INTEGER r2},                             SELF.r  := RANDOM(),                            SELF.r1 := 0,                            SELF.r2 := 0));                                          out := PROJECT(ds, TRANSFORM({INTEGER r,INTEGER r1,INTEGER r2},                             SELF.r  := LEFT.r,                            SELF.r1 := LEFT.r % 10,                            SELF.r2 := LEFT.r % 100));OUTPUT(out);`\nto produce this result:\nCode: Select all\n`3040707218   8   181727978997   7   973116981210   0   102703444385   5   854157152711   1   11`\nAnd now the modulus results definitely come from the generated RANDOM() values.\nHTH,\n\nRichard\nrtaylor", null, "Posts: 1606\nJoined: Wed Oct 26, 2011 7:40 pm\n\nThank you very much Taylor! 
It really helps!\n\nLily\n\nrtaylor wrote:Lily,\n\nThe appearance of duplication is due to your use of the modulus operator limiting your result to only 10 possibilities (the remainders of dividing the actual RANDOM() number by 10).\n\nThis example demonstrates that the RANDOM() function itself returns a very different value for each use:\nCode: Select all\n`out := DATASET(5, TRANSFORM({INTEGER r,INTEGER r1,INTEGER r2},                             SELF.r  := RANDOM(),                            SELF.r1 := SELF.r % 10,                            SELF.r2 := SELF.r % 100));OUTPUT(out);`\nI just ran this code and got this result:\nCode: Select all\n`1911583916   0   563647224352   8   21636695419   8   513689116099   3   61012783233   9   79`\nYou will note in the first record that the first RANDOM() result is 1911583916, the modulus 10 result is 0, and the modulus 100 result is 56. This doesn't make sense if the first random value (1911583916) is used for the two modulus calculations. But it does, because the RANDOM() function is actually called again each time an expression is calculated using it, producing this (correct) result.\n\nSo, as you see, RANDOM() actually DOES return unique values each time. If you really did want the modulus values to use the first RANDOM() in each record you would have to do it this way:\nCode: Select all\n`ds := DATASET(5, TRANSFORM({INTEGER r,INTEGER r1,INTEGER r2},                             SELF.r  := RANDOM(),                            SELF.r1 := 0,                            SELF.r2 := 0));                                          out := PROJECT(ds, TRANSFORM({INTEGER r,INTEGER r1,INTEGER r2},                             SELF.r  := LEFT.r,                            SELF.r1 := LEFT.r % 10,                            SELF.r2 := LEFT.r % 100));OUTPUT(out);`\nto produce this result:\nCode: Select all\n`3040707218   8   181727978997   7   973116981210   0   102703444385   5   854157152711   1   11`\nAnd now the modulus results definitely come from the generated RANDOM() values.\nHTH,\n\nRichard\nlily\n\nPosts: 13\nJoined: Fri Nov 04, 2016 1:02 pm\n\nHi,\n\nI tried the proposed solution to generate unique random numbers. However, I am still getting duplicates. 
Is there a way to get unique random numbers with 100% certainty using ECL?\n\nBest regards,\nVannel,\nvzeufack\n\nPosts: 41\nJoined: Tue Sep 25, 2018 3:52 pm\n\nVannel,\n\nPlease post your code that produced duplicate values and the result showing the duplicates so I can try to recreate the problem.\n\nHTH,\n\nRichard\nrtaylor", null, "Posts: 1606\nJoined: Wed Oct 26, 2011 7:40 pm\n\nThis is the code:\nCode: Select all\n`ds := DATASET(5, TRANSFORM({INTEGER r,INTEGER r1},                            SELF.r  := RANDOM(),                            SELF.r1 := 0));                                          out := PROJECT(ds, TRANSFORM({INTEGER r,INTEGER r1},                            SELF.r  := LEFT.r,                            SELF.r1 := LEFT.r % 10));OUTPUT(out);`\n\nThis is the output:\nCode: Select all\n`1   819659058    82   1733070309   93   535821437    74   3518949408   85   905533075    5`\nvzeufack\n\nPosts: 41\nJoined: Tue Sep 25, 2018 3:52 pm\n\nVannel,\n\nHere's an example of how to generate any number of guaranteed unique random numbers:\nCode: Select all\n`GenerateUniqueRandoms(UNSIGNED4 U) := FUNCTION  //generate 10% extra  ds := DATASET(U*1.1,                    TRANSFORM({UNSIGNED4 r},                           SELF.r  := RANDOM()));  //then dedup the result                                            out := DEDUP(SORT(ds,r),r);   //and limit to the desired number  RETURN out[1..U];END;COUNT(GenerateUniqueRandoms(1000000));  //I want a million uniquesCOUNT(GenerateUniqueRandoms(1000));     //now I want a thousand`\n\nHTH,\n\nRichard\nrtaylor", null, "Posts: 1606\nJoined: Wed Oct 26, 2011 7:40 pm\n\nVannel,\nThis is the code:\nCODE: SELECT ALL\nds := DATASET(5, TRANSFORM({INTEGER r,INTEGER r1},\nSELF.r := RANDOM(),\nSELF.r1 := 0));\n\nout := PROJECT(ds, TRANSFORM({INTEGER r,INTEGER r1},\nSELF.r := LEFT.r,\nSELF.r1 := LEFT.r % 10));\nOUTPUT(out);\n\nThis is the output:\nCODE: SELECT ALL\n1 819659058 8\n2 1733070309 9\n3 535821437 7\n4 3518949408 8\n5 905533075 5\nNot to put too fine a point on it, but none of those random numbers are duplicated. The modulus 10 numbers do have duplicates, but those are hardly \"random\" -- they are simply the remainders after division by 10 of the actual random numbers.\n\nHTH,\n\nRichard\nrtaylor", null, "Posts: 1606\nJoined: Wed Oct 26, 2011 7:40 pm\n\nNext" ]
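For readers outside HPCC, the same idea (oversample, deduplicate, then trim) can be sketched in Python; the loop below also keeps drawing until the requested count of distinct values is reached, which gives the 100% certainty asked about. This is an illustration only, not ECL; the upper bound of 2^32 simply mirrors the UNSIGNED4 values used in the ECL example above.

```python
import random

def unique_randoms(count: int, upper: int = 2**32) -> list:
    """Return `count` distinct pseudo-random integers drawn from [0, upper)."""
    # Same approach as the ECL answer: generate ~10% extra, then deduplicate...
    values = {random.randrange(upper) for _ in range(int(count * 1.1))}
    # ...but keep drawing until we really have `count` distinct values.
    while len(values) < count:
        values.add(random.randrange(upper))
    return list(values)[:count]

sample = unique_randoms(1_000_000)
print(len(sample), len(set(sample)))      # 1000000 1000000 -> all distinct
# Note: random.sample(range(2**32), 1_000_000) achieves the same in one call.
```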
[ null, "https://hpccsystems.com/bb/images/ranks/Icon-Advisor-Member.png", null, "https://hpccsystems.com/bb/images/ranks/Icon-Advisor-Member.png", null, "https://hpccsystems.com/bb/images/ranks/Icon-Advisor-Member.png", null, "https://hpccsystems.com/bb/images/ranks/Icon-Advisor-Member.png", null, "https://hpccsystems.com/bb/images/ranks/Icon-Advisor-Member.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76488984,"math_prob":0.9663553,"size":8145,"snap":"2021-43-2021-49","text_gpt3_token_len":2369,"char_repetition_ratio":0.13020514,"word_repetition_ratio":0.70293486,"special_character_ratio":0.33701658,"punctuation_ratio":0.19488636,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99618995,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T09:45:54Z\",\"WARC-Record-ID\":\"<urn:uuid:5e413f73-1827-40e2-be98-229adf8e80e4>\",\"Content-Length\":\"50895\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77949571-f831-42ef-a9a8-4db12577925c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a542b3af-91eb-4fe9-977a-8f9d84c220e9>\",\"WARC-IP-Address\":\"209.243.50.23\",\"WARC-Target-URI\":\"https://hpccsystems.com/bb/viewtopic.php?f=8&t=5373\",\"WARC-Payload-Digest\":\"sha1:5EZW5VFUEWJ4L3ZTEQQFSFPJKNHWSAXC\",\"WARC-Block-Digest\":\"sha1:5ZZ54XERAAQ2V47LIA4LIGFZZSAH5SXV\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363465.47_warc_CC-MAIN-20211208083545-20211208113545-00626.warc.gz\"}"}
https://openstax.org/books/statistics/pages/9-4-rare-events-the-sample-and-the-decision-and-conclusion
[ "Statistics\n\n# 9.4Rare Events, the Sample, and the Decision and Conclusion\n\nStatistics9.4 Rare Events, the Sample, and the Decision and Conclusion\n\nEstablishing the type of distribution, sample size, and known or unknown standard deviation can help you figure out how to go about a hypothesis test. However, there are several other factors you should consider when working out a hypothesis test.\n\n### Rare Events\n\nThe thinking process in hypothesis testing can be summarized as follows: You want to test whether or not a particular property of the population is true. You make an assumption about the true population mean for numerical data or the true population proportion for categorical data. This assumption is the null hypothesis. Then you gather sample data that is representative of the population. From this sample data you compute the sample mean (or the sample proportion). If the value that you observe is very unlikely to occur (a rare event) if the null hypothesis is true, then you wonder why this is happening. A plausible explanation is that the null hypothesis is false.\n\nFor example, Didi and Ali are at a birthday party of a very wealthy friend. They hurry to be first in line to grab a prize from a tall basket that they cannot see inside because they will be blindfolded. There are 200 plastic bubbles in the basket, and Didi and Ali have been told that there is only one with a $100 bill. Didi is the first person to reach into the basket and pull out a bubble. Her bubble contains a$100 bill. The probability of this happening is $12001200$ = 0.005. Because this is so unlikely, Ali is hoping that what the two of them were told is wrong and there are more $100 bills in the basket. A rare event has occurred (Didi getting the$100 bill) so Ali doubts the assumption about only one \\$100 bill being in the basket.\n\n### Using the Sample to Test the Null Hypothesis\n\nAfter you collect data and obtain the test statistic (the sample mean, sample proportion, or other test statistic), you can determine the probability of obtaining that test statistic when the null hypothesis is true. This probability is called the p-value.\n\nWhen the p-value is very small, it means that the observed test statistic is very unlikely to happen if the null hypothesis is true. This gives significant evidence to suggest that the null hypothesis is false, and to reject it in favor of the alternative hypothesis. In practice, to reject the null hypothesis we want the p-value to be smaller than 0.05 (5 percent) or sometimes even smaller than 0.01 (1 percent).\n\n### Example 9.9\n\nSuppose a baker claims that his bread height is more than 15 cm, on average. Several of his customers do not believe him. To persuade his customers that he is right, the baker decides to do a hypothesis test. He bakes 10 loaves of bread. The mean height of the sample loaves is 17 cm. The baker knows from baking hundreds of loaves of bread that the standard deviation for the height is 0.5 cm and the distribution of heights is normal.\n\nThe null hypothesis could be H0: μ ≤ 15. The alternate hypothesis is Ha: μ > 15.\n\nThe words is more than translates as a \">\" so \"μ > 15\" goes into the alternate hypothesis. 
The null hypothesis must contradict the alternate hypothesis.\n\nSince σ is known (σ = 0.5 cm), the distribution for the population is known to be normal with mean μ = 15 and standard deviation $σ n = 0.5 10 =0.16 σ n = 0.5 10 =0.16$.\n\nSuppose the null hypothesis is true (which is that the mean height of the loaves is no more than 15 cm). Then is the mean height (17 cm) calculated from the sample unexpectedly large? The hypothesis test works by asking the question how unlikely the sample mean would be if the null hypothesis were true. The graph shows how far out the sample mean is on the normal curve. The p-value is the probability that, if we were to take other samples, any other sample mean would fall at least as far out as 17 cm.\n\nThe p-value, then, is the probability that a sample mean is the same or greater than 17 cm when the population mean is, in fact, 15 cm. We can calculate this probability using the normal distribution for means. In Figure 9.2, the p-value is the area under the normal curve to the right of 17. Using a normal distribution table or a calculator, we can compute that this probability is practically zero.\n\nFigure 9.2\n\np-value = P($x ¯ x ¯$ > 17), which is approximately zero.\n\nBecause the p-value is almost 0, we conclude that obtaining a sample height of 17 cm or higher from 10 loaves of bread is very unlikely if the true mean height is 15 cm. We reject the null hypothesis and conclude that there is sufficient evidence to claim that the true population mean height of the baker’s loaves of bread is higher than 15 cm.\n\nTry It 9.9\n\nA normal distribution has a standard deviation of 1. We want to verify a claim that the mean is greater than 12. A sample of 36 is taken with a sample mean of 12.5.\n\nH0: μ ≤ 12\nHa: μ > 12\nThe p-value is 0.0013.\nDraw a graph that shows the p-value.\n\n### Decision and Conclusion\n\nA systematic way to make a decision of whether to reject or not reject the null hypothesis is to compare the p-value and a preset or preconceived α, also called the level of significance of the test. A preset α is the probability of a Type I error (rejecting the null hypothesis when the null hypothesis is true). It may or may not be given to you at the beginning of the problem.\n\nWhen you make a decision to reject or not reject H0, do as follows:\n\n• If p-value $<α <α$, reject H0. The results of the sample data are significant. There is sufficient evidence to conclude that H0 is an incorrect belief and that the alternative hypothesis, Ha, may be correct.\n• If p-value $≥α ≥α$, do not reject H0. The results of the sample data are not significant.There is not sufficient evidence to conclude that the alternative hypothesis, Ha, may be correct.\n• When you do not reject H0, it does not mean that you should believe that H0 is true. 
It simply means that the sample data have failed to provide sufficient evidence to cast serious doubt about the truthfulness of H0.\n\nConclusion: After you make your decision, write a thoughtful conclusion about the hypotheses in terms of the given problem.\n\n### Example 9.10\n\nWhen using the p-value to evaluate a hypothesis test, you might find it useful to use the following mnemonic device:\n\nIf the p-value is low, the null must go.\n\nIf the p-value is high, the null must fly.\n\nThis memory aid relates a p-value less than the established alpha (the p is low) as rejecting the null hypothesis and, likewise, relates a p-value higher than the established alpha (the p is high) as not rejecting the null hypothesis.\n\nFill in the blanks.\n\nReject the null hypothesis when ______________________________________.\n\nThe results of the sample data _____________________________________.\n\nDo not reject the null hypothesis when __________________________________________.\n\nThe results of the sample data ____________________________________________.\n\nTry It 9.10\n\nIt’s a Boy Genetics Labs, a genetics company, claims their procedures improve the chances of a boy being born. The results for a test of a single population proportion are as follows:\n\nH0: p = 0.50, Ha: p > 0.50\n\nα = 0.01\n\np-value = 0.025\n\nInterpret the results and state a conclusion in simple, nontechnical terms." ]
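Since Example 9.9 and Try It 9.9 both assume a known σ and a normal sampling distribution for the mean, their p-values can be checked with a right-tail normal probability. The snippet below is an illustrative check using only the Python standard library; it is not part of the OpenStax material.

```python
from math import erfc, sqrt

def upper_tail_p(sample_mean: float, mu0: float, sigma: float, n: int) -> float:
    """P(X-bar >= sample_mean) when X-bar ~ Normal(mu0, sigma / sqrt(n))."""
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    return 0.5 * erfc(z / sqrt(2))          # right-tail area of the standard normal

# Example 9.9: H0: mu <= 15, x-bar = 17 cm, sigma = 0.5 cm, n = 10 loaves
print(upper_tail_p(17, 15, 0.5, 10))        # effectively zero ("practically zero")

# Try It 9.9: H0: mu <= 12, x-bar = 12.5, sigma = 1, n = 36
print(round(upper_tail_p(12.5, 12, 1, 36), 4))   # 0.0013
```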
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93663156,"math_prob":0.9765786,"size":7038,"snap":"2020-45-2020-50","text_gpt3_token_len":1543,"char_repetition_ratio":0.1920671,"word_repetition_ratio":0.036829464,"special_character_ratio":0.24822393,"punctuation_ratio":0.10070922,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99616355,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-31T16:12:32Z\",\"WARC-Record-ID\":\"<urn:uuid:6f87275c-d5d9-4aae-9a34-9317cb15d958>\",\"Content-Length\":\"218286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f0ebdd32-597c-43f2-8b45-6bda1ec4ed76>\",\"WARC-Concurrent-To\":\"<urn:uuid:be66e762-21b6-478f-a44f-52265cdfd904>\",\"WARC-IP-Address\":\"99.84.191.125\",\"WARC-Target-URI\":\"https://openstax.org/books/statistics/pages/9-4-rare-events-the-sample-and-the-decision-and-conclusion\",\"WARC-Payload-Digest\":\"sha1:H2T2YVNTCQZS4AAYK7L52OS6J7IDL4LH\",\"WARC-Block-Digest\":\"sha1:DDFTQYGR6HVFJBSTIGTPXODTTK4BGC7V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107919459.92_warc_CC-MAIN-20201031151830-20201031181830-00174.warc.gz\"}"}
https://www.grapecity.com/spreadnet/docs/online-formula/functionPERCENTILE.INC.html
[ " PERCENTILE.INC | Spread.NET 15 Formula Reference\nFormula Functions / Functions M to Q / PERCENTILE.INC\nIn This Topic\nPERCENTILE.INC\nIn This Topic\n\n#### Summary\n\nThis function returns the kth percentile of values in a range where k is between 0..1, inclusive.\n\n#### Syntax\n\nPERCENTILE.INC(array,k)\n\n#### Arguments\n\nThis function has these arguments:\n\nArgument Description\narray Array of values representing the data\nk Value representing the percentile value between 0 and 1\n\n#### Remarks\n\nThis function returns the #NUM! error value if the array is empty. If k is nonnumeric, #VALUE! is returned. If k < 0 or > 1, #NUM! is returned. The function interpolates to determine the value at the kth percentile if k is not a multiple of 1/(n-1).\n\n#### Data Types\n\nAccepts numeric data for both arguments. Returns numeric data.\n\n#### Examples\n\nPERCENTILE.INC(A1:A12,0.95)\n\nPERCENTILE.INC(R1C1:R1C45,0.866)\n\nPERCENTILE.INC({5,15,25,50,65},0.45) gives the result 23\n\n#### Version Available\n\nThis function is available in Spread for Windows Forms 11.0 or later." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6099354,"math_prob":0.8928734,"size":786,"snap":"2022-05-2022-21","text_gpt3_token_len":226,"char_repetition_ratio":0.1342711,"word_repetition_ratio":0.0,"special_character_ratio":0.2760814,"punctuation_ratio":0.19886364,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97815025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T10:29:06Z\",\"WARC-Record-ID\":\"<urn:uuid:76cc54e4-6b09-4182-a10c-3ca99b9c9d49>\",\"Content-Length\":\"13363\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:815e2e07-8cc6-4ff5-aa2e-517abef91540>\",\"WARC-Concurrent-To\":\"<urn:uuid:80672fba-ac45-4c4a-be64-51355c470207>\",\"WARC-IP-Address\":\"20.36.236.120\",\"WARC-Target-URI\":\"https://www.grapecity.com/spreadnet/docs/online-formula/functionPERCENTILE.INC.html\",\"WARC-Payload-Digest\":\"sha1:7X6WJ3VLPSIF6IDWY4OC4WDVZ3ILVUP3\",\"WARC-Block-Digest\":\"sha1:WLRL2SMTETXE4AT4BMPJQELVDMEDECIU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662517245.1_warc_CC-MAIN-20220517095022-20220517125022-00563.warc.gz\"}"}
https://support.microsoft.com/zh-cn/office/tinv-%E5%87%BD%E6%95%B0-a7c85b9d-90f5-41fe-9ca5-1cd2f3e1ed7c
[ "# TINV 函数\n\n## 语法\n\nTINV(probability,deg_freedom)\n\nTINV 函数语法具有下列参数:\n\n• Probability     必需。 与双尾学生 t 分布相关的概率。\n\n• Deg_freedom     必需。 代表分布的自由度数。\n\n## 说明\n\n• 如果任一参数是非数字的,则 TINV 返回#VALUE! 错误值。\n\n• 如果 probability <= 0 或 probability > 1,则 TINV 返回 #NUM! 错误值。\n\n• 如果 deg_freedom 不是整数,则将被截尾取整。\n\n• 如果deg_freedom < 1,则 TINV 返回#NUM! 错误值。\n\n• TINV 返回 t 值,P(|X| > t) = probability,其中 X 为服从 t 分布的随机变量,且 P(|X| > t) = P(X < -t or X > t)。\n\n• 通过将 probability 替换为 2*probability,可以返回单尾 t 值。 对于概率为 0.05 以及自由度为 10 的情况,使用 TINV(0.05,10)(返回 2.28139)计算双尾值。 对于相同概率和自由度的情况,可以使用 TINV(2*0.05,10)(返回 1.812462)计算单尾值。\n\n注意:  在某些表格中,概率被描述为 (1-p)。\n\n如果已给定概率值,则 TINV 使用 TDIST(x, deg_freedom, 2) = probability 求解数值 x。 因此,TINV 的精度取决于 TDIST 的精度。 TINV 使用迭代搜索技术。 如果搜索在 100 次迭代之后没有收敛,则函数返回错误值 #N/A。\n\n## 示例\n\n 数据 说明 0.05464 对应于双尾学生 t 分布的概率 60 自由度 公式 说明 结果 =TINV(A2,A3) 基于 A2 和 A3 中的参数算出的学生 t 分布的 t 值。 1.96\n\n×" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.98271424,"math_prob":0.99158955,"size":1075,"snap":"2021-21-2021-25","text_gpt3_token_len":734,"char_repetition_ratio":0.14659198,"word_repetition_ratio":0.0,"special_character_ratio":0.33581394,"punctuation_ratio":0.086124405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9735886,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T01:19:28Z\",\"WARC-Record-ID\":\"<urn:uuid:5f6066aa-9b84-417e-a5a0-5d31de3c68e8>\",\"Content-Length\":\"99039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60482f3c-094a-4925-a15e-a9f1280f9323>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b446747-62ff-4af0-bdd2-4555a69c5147>\",\"WARC-IP-Address\":\"23.62.164.116\",\"WARC-Target-URI\":\"https://support.microsoft.com/zh-cn/office/tinv-%E5%87%BD%E6%95%B0-a7c85b9d-90f5-41fe-9ca5-1cd2f3e1ed7c\",\"WARC-Payload-Digest\":\"sha1:RX2Y2RF3N5S7CXAE2JMRLGEBQMGYGLTJ\",\"WARC-Block-Digest\":\"sha1:WWGFHCAATMEGE4B2SSUPIYMDKS43ORN6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487643354.47_warc_CC-MAIN-20210618230338-20210619020338-00029.warc.gz\"}"}
http://puzzlesite.nl/basic/ten_sentences_us.html
[ "## Solution to: Ten Sentences\n\nPossibility 1:\n\nThe number of times the digit 0 appears in this puzzle is 1.\nThe number of times the digit 1 appears in this puzzle is 7.\nThe number of times the digit 2 appears in this puzzle is 3.\nThe number of times the digit 3 appears in this puzzle is 2.\nThe number of times the digit 4 appears in this puzzle is 1.\nThe number of times the digit 5 appears in this puzzle is 1.\nThe number of times the digit 6 appears in this puzzle is 1.\nThe number of times the digit 7 appears in this puzzle is 2.\nThe number of times the digit 8 appears in this puzzle is 1.\nThe number of times the digit 9 appears in this puzzle is 1.\n\nPossibility 2:\n\nThe number of times the digit 0 appears in this puzzle is 1.\nThe number of times the digit 1 appears in this puzzle is 11.\nThe number of times the digit 2 appears in this puzzle is 2.\nThe number of times the digit 3 appears in this puzzle is 1.\nThe number of times the digit 4 appears in this puzzle is 1.\nThe number of times the digit 5 appears in this puzzle is 1.\nThe number of times the digit 6 appears in this puzzle is 1.\nThe number of times the digit 7 appears in this puzzle is 1.\nThe number of times the digit 8 appears in this puzzle is 1.\nThe number of times the digit 9 appears in this puzzle is 1.", null, "Back to the puzzle\nThis website uses cookies. By further use of this website, or by clicking on 'Continue', you give permission for the use of cookies. If you want more information, look at our cookie policy." ]
[ null, "http://puzzlesite.nl/images/navicon3.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93384326,"math_prob":0.9986457,"size":1296,"snap":"2022-40-2023-06","text_gpt3_token_len":341,"char_repetition_ratio":0.20433436,"word_repetition_ratio":0.88432837,"special_character_ratio":0.26003087,"punctuation_ratio":0.0779661,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99819845,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T09:59:24Z\",\"WARC-Record-ID\":\"<urn:uuid:12e9dd29-ed4f-4786-bbb6-f4523131b43d>\",\"Content-Length\":\"7096\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:91339089-1081-40bf-958f-e1ebc57e8810>\",\"WARC-Concurrent-To\":\"<urn:uuid:e149601e-d956-4b80-a901-3eb97385b457>\",\"WARC-IP-Address\":\"81.88.57.69\",\"WARC-Target-URI\":\"http://puzzlesite.nl/basic/ten_sentences_us.html\",\"WARC-Payload-Digest\":\"sha1:KTGCGWVUE3US57Y75GQVERGYUR3MXGIW\",\"WARC-Block-Digest\":\"sha1:D4N7I3GFLDS76HVO32FJNK6U6RAB74KE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337803.86_warc_CC-MAIN-20221006092601-20221006122601-00752.warc.gz\"}"}
https://c-cube.github.io/qcheck/0.5/QCheck.html
[ "# Module QCheck\n\n`module QCheck: `sig` .. `end``\n\n# Quickcheck inspired property-based testing\n\nThe library takes inspiration from Haskell's QuickCheck library. The rough idea is that the programer describes invariants that values of a certain type need to satisfy (\"properties\"), as functions from this type to bool. She also needs to desribe how to generate random values of the type, so that the property is tried and checked on a number of random instances.\n\nThis explains the organization of this module:\n\n• `QCheck.Test` is used to describe a single test, that is, a property of type `'a -> bool` combined with an `'a arbitrary` that is used to generate the test cases for this property. Optional parameters allow to specify the random generator state, number of instances to generate and test, etc.\nExamples:\n\n• List.rev is involutive:\n``````let test =\nQCheck.(Test.make ~count:1000\n(list int) (fun l -> List.rev (List.rev l) = l));;\n\nQCheck.Test.run_exn test;;\n``````\n\n• Not all lists are sorted (false property that will fail. The 15 smallest counter-example lists will be printed):\n``````let test = QCheck.(\nTest.make\n~count:10_000 ~max_fail:3\n(list small_int)\n(fun l -> l = List.sort compare l));;\nQCheck.Test.check_exn test;;\n``````\n\n• generate 20 random trees using `Arbitrary.fix` :\n``````type tree = Leaf of int | Node of tree * tree\n\nlet leaf x = Leaf x\nlet node x y = Node (x,y)\n\nlet g = QCheck.Gen.(sized @@ fix\n(fun self n -> match n with\n| 0 -> map leaf nat\n| n ->\nfrequency\n[1, map leaf nat;\n2, map2 node (self (n/2)) (self (n/2))]\n))\n\nGen.generate ~n:20 g;;\n``````\n\nMore complex and powerful combinators can be found in Gabriel Scherer's `Generator` module. Its documentation can be found here.\n\n`val (==>) : `bool -> bool -> bool``\n`b1 ==> b2` is the logical implication `b1 => b2` ie `not b1 || b2` (except that it is strict and will interact better with `QCheck.Test.check_exn` and the likes, because they will know the precondition was not satisfied.).\n`module Gen: `sig` .. `end``\nGenerate Random Values\n\n## Pretty printing\n\n`module Print: `sig` .. `end``\nShow Values\n`module Iter: `sig` .. `end``\nIterators\n`module Shrink: `sig` .. `end``\nShrink Values\n\n## Arbitrary\n\nA value of type `'a arbitrary` glues together a random generator, and optional functions for shrinking, printing, computing the size, etc. It is the \"normal\" way of describing how to generate values of a given type, to be then used in tests (see `QCheck.Test`)\n\n``type 'a arbitrary = {``\n `  ` `gen : 'a Gen.t;` `  ` `print : ('a -> string) option;` `(*` print values `*)` `  ` `small : ('a -> int) option;` `(*` size of example `*)` `  ` `shrink : 'a Shrink.t option;` `(*` shrink to smaller examples `*)` `  ` `collect : ('a -> string) option;` `(*` map value to tag, and group by tag `*)`\n}\na value of type `'a arbitrary` is an object with a method for generating random values of type `'a`, and additional methods to compute the size of values, print them, and possibly shrink them into smaller counterexamples\n\nNOTE the collect field is unstable and might be removed, or moved into `QCheck.Test`.\n\n`val make : `?print:'a Print.t -> ?small:('a -> int) -> ?shrink:'a Shrink.t -> ?collect:('a -> string) -> 'a Gen.t -> 'a arbitrary``\nBuilder for arbitrary. 
Default is to only have a generator, but other arguments can be added\n`val set_print : `'a Print.t -> 'a arbitrary -> 'a arbitrary``\n`val set_small : `('a -> int) -> 'a arbitrary -> 'a arbitrary``\n`val set_shrink : `'a Shrink.t -> 'a arbitrary -> 'a arbitrary``\n`val set_collect : `('a -> string) -> 'a arbitrary -> 'a arbitrary``\n`val choose : `'a arbitrary list -> 'a arbitrary``\nChoose among the given list of generators. The list must not be empty; if it is Invalid_argument is raised.\n`val unit : `unit arbitrary``\nalways generates `()`, obviously.\n`val bool : `bool arbitrary``\nuniform boolean generator\n`val float : `float arbitrary``\n\ngenerates regular floats (no nan and no infinities)\n`val pos_float : `float arbitrary``\npositive float generator (no nan and no infinities)\n`val neg_float : `float arbitrary``\nnegative float generator (no nan and no infinities)\n`val int : `int arbitrary``\nint generator. Uniformly distributed\n`val int_bound : `int -> int arbitrary``\n`int_bound n` is uniform between `0` and `n` included\n`val int_range : `int -> int -> int arbitrary``\n`int_range a b` is uniform between `a` and `b` included. `b` must be larger than `a`.\n`val (--) : `int -> int -> int arbitrary``\nSynonym to `QCheck.int_range`\n`val int32 : `int32 arbitrary``\nint32 generator. Uniformly distributed\n`val int64 : `int64 arbitrary``\nint generator. Uniformly distributed\n`val pos_int : `int arbitrary``\npositive int generator. Uniformly distributed\n`val small_int : `int arbitrary``\npositive int generator. The probability that a number is chosen is roughly an exponentially decreasing function of the number.\n`val small_int_corners : `unit -> int arbitrary``\nAs `small_int`, but each newly created generator starts with a list of corner cases before falling back on random generation.\n`val neg_int : `int arbitrary``\nnegative int generator. 
The distribution is similar to that of `small_int`, not of `pos_int`.\n`val char : `char arbitrary``\nUniformly distributed on all the chars (not just ascii or valid latin-1)\n`val printable_char : `char arbitrary``\n\nuniformly distributed over a subset of chars\n`val numeral_char : `char arbitrary``\nuniformy distributed over `'0'..'9'`\n`val string_gen_of_size : `int Gen.t -> char Gen.t -> string arbitrary``\n`val string_gen : `char Gen.t -> string arbitrary``\ngenerates strings with a distribution of length of `small_int`\n`val string : `string arbitrary``\ngenerates strings with a distribution of length of `small_int` and distribution of characters of `char`\n`val small_string : `string arbitrary``\nSame as `QCheck.string` but with a small length (that is, `0--10`)\n`val string_of_size : `int Gen.t -> string arbitrary``\ngenerates strings with distribution of characters if `char`\n`val printable_string : `string arbitrary``\ngenerates strings with a distribution of length of `small_int` and distribution of characters of `printable_char`\n`val printable_string_of_size : `int Gen.t -> string arbitrary``\ngenerates strings with distribution of characters of `printable_char`\n`val small_printable_string : `string arbitrary``\n`val numeral_string : `string arbitrary``\ngenerates strings with a distribution of length of `small_int` and distribution of characters of `numeral_char`\n`val numeral_string_of_size : `int Gen.t -> string arbitrary``\ngenerates strings with a distribution of characters of `numeral_char`\n`val list : `'a arbitrary -> 'a list arbitrary``\ngenerates lists with length generated by `small_int`\n`val list_of_size : `int Gen.t -> 'a arbitrary -> 'a list arbitrary``\ngenerates lists with length from the given distribution\n`val array : `'a arbitrary -> 'a array arbitrary``\ngenerates arrays with length generated by `small_int`\n`val array_of_size : `int Gen.t -> 'a arbitrary -> 'a array arbitrary``\ngenerates arrays with length from the given distribution\n`val pair : `'a arbitrary -> 'b arbitrary -> ('a * 'b) arbitrary``\ncombines two generators into a generator of pairs\n`val triple : `'a arbitrary -> 'b arbitrary -> 'c arbitrary -> ('a * 'b * 'c) arbitrary``\ncombines three generators into a generator of 3-uples\n`val option : `'a arbitrary -> 'a option arbitrary``\nchoose between returning Some random value, or None\n`val fun1 : `'a arbitrary -> 'b arbitrary -> ('a -> 'b) arbitrary``\ngenerator of functions of arity 1. The functions are always pure and total functions:\n• when given the same argument (as decided by Pervasives.(=)), it returns the same value\n• it never does side effects, like printing or never raise exceptions etc. The functions generated are really printable.\n\n`val fun2 : `'a arbitrary -> 'b arbitrary -> 'c arbitrary -> ('a -> 'b -> 'c) arbitrary``\ngenerator of functions of arity 2. 
The remark about `fun1` also apply here.\n`val oneofl : `?print:'a Print.t -> ?collect:('a -> string) -> 'a list -> 'a arbitrary``\nPick an element randomly in the list\n`val oneofa : `?print:'a Print.t -> ?collect:('a -> string) -> 'a array -> 'a arbitrary``\nPick an element randomly in the array\n`val oneof : `'a arbitrary list -> 'a arbitrary``\nPick a generator among the list, randomly\n`val always : `?print:'a Print.t -> 'a -> 'a arbitrary``\nAlways return the same element\n`val frequency : `?print:'a Print.t -> ?small:('a -> int) -> ?shrink:'a Shrink.t -> ?collect:('a -> string) -> (int * 'a arbitrary) list -> 'a arbitrary``\nSimilar to `QCheck.oneof` but with frequencies\n`val frequencyl : `?print:'a Print.t -> ?small:('a -> int) -> (int * 'a) list -> 'a arbitrary``\nSame as `QCheck.oneofl`, but each element is paired with its frequency in the probability distribution (the higher, the more likely)\n`val frequencya : `?print:'a Print.t -> ?small:('a -> int) -> (int * 'a) array -> 'a arbitrary``\nSame as `QCheck.frequencyl`, but with an array\n`val map : `?rev:('b -> 'a) -> ('a -> 'b) -> 'a arbitrary -> 'b arbitrary``\n`map f a` returns a new arbitrary instance that generates values using `a#gen` and then transforms them through `f`.\n`rev` : if provided, maps values back to type `'a` so that the printer, shrinker, etc. of `a` can be used. We assume `f` is monotonic in this case (that is, smaller inputs are transformed into smaller outputs).\n`val map_same_type : `('a -> 'a) -> 'a arbitrary -> 'a arbitrary``\nSpecialization of `map` when the transformation preserves the type, which makes shrinker, printer, etc. still relevant\n`val map_keep_input : `?print:'b Print.t -> ?small:('b -> int) -> ('a -> 'b) -> 'a arbitrary -> ('a * 'b) arbitrary``\n`map_keep_input f a` generates random values from `a`, and maps them into values of type `'b` using the function `f`, but it also keeps the original value. For shrinking, it is assumed that `f` is monotonic and that smaller input values will map into smaller values\n`print` : optional printer for the `f`'s output\n\n## Tests\n\n`module TestResult: `sig` .. `end``\n`module Test: `sig` .. `end``" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5508777,"math_prob":0.9050636,"size":6878,"snap":"2021-04-2021-17","text_gpt3_token_len":1878,"char_repetition_ratio":0.22985162,"word_repetition_ratio":0.12889637,"special_character_ratio":0.294417,"punctuation_ratio":0.19224924,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98887396,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-17T06:29:52Z\",\"WARC-Record-ID\":\"<urn:uuid:fff2d1ec-2159-4fa8-8600-5566b771ab05>\",\"Content-Length\":\"29009\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11d51b30-c631-4188-8cf8-b9de8a3df24e>\",\"WARC-Concurrent-To\":\"<urn:uuid:23f948b5-d20b-41c5-90ce-148f567d4b39>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://c-cube.github.io/qcheck/0.5/QCheck.html\",\"WARC-Payload-Digest\":\"sha1:LP22OV3FMWJNN6FI73DAY5FG37JK2VQY\",\"WARC-Block-Digest\":\"sha1:QSC3FVLJSHQYZIL4JQRCXT7GBZUB3LJH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703509973.34_warc_CC-MAIN-20210117051021-20210117081021-00623.warc.gz\"}"}
https://blueheart0621.github.io/2020/06/24/Technique/Math/Theory/%E5%8D%A1%E8%BF%88%E5%85%8B%E5%B0%94%E5%87%BD%E6%95%B0/
[ "# 卡迈克尔函数\n\n## 1. 定义\n\n\\begin{array}{c} \\lambda(n) = \\left\\{ \\begin{aligned} \\phi(n) \\,\\, & \\,\\, n = 1,2,3,4,5,6,7,9,10,\\cdots \\\\ {1 \\over 2}\\phi(n) \\,\\, & \\,\\, n = 8,16,32,64,128,256,\\cdots \\end{aligned} \\right. \\end{array}\n\n## 2. 性质\n\n• 对于任意整数 $n$,由算数基本定理(整数唯一分解定理):$n = p_1^{a_1}p_2^{a_2} \\cdots p_w^{a_w}$,则卡迈克尔函数满足:\n\n$\\begin{array}{c} \\lambda(n) = lcm[\\lambda(p_1^{a_1}),\\lambda(p_2^{a_2}),\\cdots,\\lambda(p_w^{a_w})] \\end{array}$" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.8967208,"math_prob":1.0000077,"size":280,"snap":"2023-40-2023-50","text_gpt3_token_len":277,"char_repetition_ratio":0.07971015,"word_repetition_ratio":0.0,"special_character_ratio":0.475,"punctuation_ratio":0.2173913,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994934,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T14:14:46Z\",\"WARC-Record-ID\":\"<urn:uuid:863edb03-5bca-4f4b-af8f-dc84df424032>\",\"Content-Length\":\"46178\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d50ddd34-64ec-4466-a35d-f28837c64a08>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f9237c9-7873-4fcf-8219-917f8d83c49f>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://blueheart0621.github.io/2020/06/24/Technique/Math/Theory/%E5%8D%A1%E8%BF%88%E5%85%8B%E5%B0%94%E5%87%BD%E6%95%B0/\",\"WARC-Payload-Digest\":\"sha1:RM3OGRK5LCPGEBTRPIEOYY5LL5PR5UAV\",\"WARC-Block-Digest\":\"sha1:4APWLS2YWGT2T3I7W2YNP5Y34II6ESQX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506481.17_warc_CC-MAIN-20230923130827-20230923160827-00850.warc.gz\"}"}
https://m.scirp.org/papers/97834
[ "An Examination of Greenhouse Gas Convergence in OECD Countries\nAbstract: Global warming has become one of the most critical factors affecting the world, especially in the last decade. Therefore, it is of great importance to analyze the impact of global warming and take measures. The main factor leading to global warming is considered to be people’s consumption and production behaviors. The primary indicator of this is greenhouse gases. Relevant policy changes need to be made to control greenhouse gases. In this context, it is necessary to determine the differences in greenhouse gas emissions at the national level. To identify these differences, this study applies the convergence hypothesis, which has been the subject of numerous researchers since the 1980s. In this study, we analyzed the greenhouse gas intensity convergence for countries in the Organization for Economic Cooperation and Development (OECD) using linear and nonlinear panel unit root tests. The results of this study show that the greenhouse gas emissions in the OECD countries do not converge to the OECD average.\n\n1. Introduction\n\nToday, global warming is one of the significant problems that affect the world and is projected to increase its impact in the upcoming years. The main factor leading to global warming is considered to be people’s consumption and production behaviors. The primary indicator of this is greenhouses gases. Relevant policy changes need to be made in order to control greenhouse gases. Global warming and the resulting carbon emissions have become one of the most controversial topics in the world. Tremendous efforts are spent on increasing environmental awareness, especially since the 1970s, and reducing greenhouse gas emissions has become the very first priority of international meetings. Global politics and global economic interests stand as essential obstacles to getting concrete results from these efforts.\n\nThe OECD countries generate a significant portion of the greenhouse gas emissions in the world. The consequences of the policies that these countries currently implement regarding production and consumption patterns will have a substantial impact on how global warming will take shape in the future. Figure 1 presents the percent change in total greenhouse gas emissions since 1990 in the OECD countries and the world.\n\nThis study aims to analyze whether there is a convergence between the OECD countries regarding greenhouse gas emissions using linear and nonlinear time series and panel unit root tests. The convergence hypothesis, which has been the subject of many studies since the 1980s, is one of the necessary inferences of the neoclassical growth theory and suggests that the relatively developing countries would converge with the more prosperous countries by eliminating income differences in general terms. The theory of convergence has become a controversial issue that attracts the attention of economists since it was first introduced. Studies investigated how these current differences of countries, regions, or international organizations with different natural resource distribution and different income levels will continue. 
In other words, the issue of how the inequalities between economies will change constitutes the basis of convergence discussions.\n\nConvergence is an attractive concept used in areas such as economic growth, finance, theoretical econometrics, European political and monetary union, regional planning as well as geography, entertainment, multimedia technology, and the software industry. The fact that the countries do not converge to the country group indicates that the applied policies differ. From this point of view, it possible to ensure that the country for which convergence findings cannot be obtained converges to the group of the country with a policy change. The convergence hypothesis can be empirically investigated using unit root tests. The rejection of the unit root hypothesis indicates the existence of convergence.\n\nUnlike many of the other studies which use linear methods, this study uses the tests that focus on nonlinearity recently introduced in literature and frequently seen in economic structures.\n\nThe article is organized as follows. The next section presents the literature review. Econometric methods are introduced in section three. Section four provides the data and empirical findings used in the study, and conclusions and suggestions for further research are covered in section five.\n\nFigure 1. Total greenhouse gas emissions (% charge from 1990).\n\n2. Literature Review\n\nEl-Montassera et al. investigated greenhouse gas emissions convergence among the G7 countries for the period between 1990 and 2011. They examined the convergence using the pairwise testing technique. The results obtained from this study do not confirm the hypothesis of convergence for the countries included in their study.\n\nStrazicich and List examined CO2 emission convergence among 21 industrialized countries. Their study concluded that convergence existed in the years between 1960 and 1997. Romero-Ávila, D. examined the existence of stochastic and deterministic convergence of carbon dioxide emissions in 23 countries over the period 1960-2002 by employing the recently developed panel stationarity test. The results obtained from this study provided strong evidence supporting both stochastic and deterministic convergence in carbon dioxide emissions.\n\nLee and Chang used the data for per capita carbon dioxide emissions relative to the average per capita emissions for 21 OECD countries covering the period 1960-2000. Empirical findings obtained from this paper provide evidence that relative per capita carbon dioxide emissions in OECD countries are a mixture of I(0) and I(1) processes, in which 14 out of 21 OECD countries exhibited divergence.\n\nBarassi et al. investigated the convergence of per capita carbon dioxide emissions in OECD countries for the period 1950-2002. This paper employed stationarity and unit root tests, including those that allow for cross-sectional dependencies within the panel. The results indicated that carbon dioxide emissions did not converge among OECD countries during the period under consideration. Barassi et al. examined the convergence of carbon dioxide emissions within the OECD over the period 1870-2004. Their results suggest that carbon dioxide emissions within 13 out of 18 OECD countries are fractionally integrated, implying that they converge over time.\n\nPanopoulou and Pantelidis examined convergence in carbon dioxide emissions among 128 countries for the period 1960-2003 utilizing a new methodology. 
Their results suggest convergence in per capita CO2 emissions among all the countries in the early years of the sample period. Li and Lin examined the topic (CO2 emissions) for 110 countries over the period 1971-2008. Their results showed that there was convergence within subgroups of countries with similar income levels, but no overall convergence was achieved.\n\nThis study is different from other studies in the sense that we use an empirical methodology and both linear and nonlinear panel unit tests, which have been recently introduced in the literature.\n\n3. Econometric Methods\n\nThe econometric method used in the empirical part of the study is the linear and nonlinear panel unit root tests. It is clear that theoretical and practical studies on the panel data econometrics have considerably increased recently and that many researchers are interested in this topic. The reason for the growing interest in the panel data is that it offers certain advantages over using only time-series data or only horizontal cross-section data.\n\nOne of the most significant studies in the studies on advanced panel data techniques is the panel unit root tests. It is essential to determine whether the series or the panel analyzed is stationary in the search for a theory of economy or finance. In the analysis of panel data, classical panel unit root, panel unit root with breaks, and nonlinear panel unit root tests with different properties have been developed in determining whether the panel contains a unit root. In this study, we used linear and nonlinear panel unit root tests, which have certain advantages over classical time series methods. The next section explains the nonlinear panel unit root tests by Ucar and Omay and Emirmahmutoğlu and Omay .\n\nIn the last quarter of a century, panel unit root applications have been expanded considerably in nonstationary panel data. Panel unit root tests are more powerful than standard time series unit root tests because they use both time-series and horizontal cross-section size . The panel unit root tests utilized in the empirical part of the study are introduced here, respectively.\n\nSeveral panel unit root tests are introduced in the panel data literature. One of these areas is the nonlinear panel unit root tests. These tests have a decade-old history and are limited. The use of nonlinear panel unit root tests gives more reliable results when the series to be employed in the analysis and the panel exhibit a nonlinear structure. In this context, Ucar and Omay and Emirmahmutoğlu and Omay tests are introduced.\n\nUcar and Omay propose the unit root test for nonlinear heterogeneous panels by using the nonlinear time series framework Kapetanios, Shin, and Snell test and the panel unit root testing framework of Im, Pesaran, and Shin test.\n\nAs suggested by Ucar and Omay test, ${y}_{i,t}$ be the panel exponential smooth transition autoregressive process of order one (PESTAR(1)) on the time domain $t=1,2,\\cdots ,T$ for the cross-section units $i=1,2,\\cdots ,N$ . 
This test data generating process with the fixed effect parameter $\\alpha_{i}$ is,\n\n$\\Delta y_{i,t}=\\alpha_{i}+\\varphi_{i}y_{i,t-1}+\\gamma_{i}y_{i,t-1}\\left[1-\\mathrm{exp}\\left(-\\theta_{i}y_{i,t-d}^{2}\\right)\\right]+\\epsilon_{i,t}$\n\nin which case $d\\ge 1$ is the delay parameter and $\\theta_{i}>0$ determines the speed of mean reversion for all i.\n\nUcar and Omay suggest that the panel unit root test statistic is computed by taking the average of the individual Kapetanios, Shin, and Snell test statistics. The Kapetanios, Shin, and Snell statistic for individual i is simply the t-ratio of $\\delta_{i}$ in the auxiliary regression, defined by\n\n$t_{i,NL}=\\frac{\\Delta y^{\\prime}_{i}M_{\\tau}y_{i,-1}^{3}}{\\hat{\\sigma}_{i,NL}\\left(y^{\\prime}_{i,-1}M_{\\tau}y_{i,-1}\\right)^{3/2}}$\n\nwhere $\\hat{\\sigma}_{i,NL}^{2}$ is the consistent estimator. Ucar and Omay propose the invariant average statistic $\\bar{t}_{NL}$ for fixed T,\n\n$\\bar{t}_{NL}=\\frac{1}{N}\\sum_{i=1}^{N}t_{i,NL}$\n\nwhere $t_{i,NL}$ is invariant with respect to the initial observations $y_{i,0}$ and the heterogeneous moments $\\sigma_{i}^{2}$ and $\\sigma_{i}^{4}$ if $y_{i,0}=0$ for all $i=1,2,\\cdots,N$. Since the individual statistics $t_{i,NL}$ are iid random variables with finite means and variances, the average statistic $\\bar{t}_{NL}$ as defined in the previous equation has a limiting standard normal distribution as $N\\to \\infty$:\n\n$\\bar{Z}_{NL}=\\frac{\\sqrt{N}\\left(\\bar{t}_{NL}-E\\left(t_{i,NL}\\right)\\right)}{\\sqrt{Var\\left(t_{i,NL}\\right)}}\\stackrel{d}{\\to }N\\left(0,1\\right)$\n\nwhere the values of $E\\left(t_{i,NL}\\right)$ and $Var\\left(t_{i,NL}\\right)$ for different values of T are tabulated in Table 1 by Ucar and Omay.\n\nTable 1. Linear panel unit root test results.\n\nNote: The symbol * means rejection of the null hypothesis of a unit root. Source: Authors’ calculation.\n\nEmirmahmutoglu and Omay propose the panel asymmetric nonlinear unit root test as an extended version of the asymmetric ESTAR unit root test by Sollis, which allows for symmetric or asymmetric nonlinear adjustment under the alternative hypothesis to a unit root.\n\nThe unit root test by Kapetanios et al. only assumes symmetric mean reversion behavior, but the unit root test by Sollis takes into account asymmetric behavior. The Sollis test can be extended to nonlinear asymmetric heterogeneous panels as follows:\n\n$\\Delta y_{it}=G_{it}\\left(\\gamma_{1i},y_{i,t-1}\\right)\\times \\left\\{S_{it}\\left(\\gamma_{2i},y_{i,t-1}\\right)\\rho_{1i}+\\left(1-S_{it}\\left(\\gamma_{2i},y_{i,t-1}\\right)\\right)\\rho_{2i}\\right\\}y_{i,t-1}+\\epsilon_{it}$\n\n$G_{it}\\left(\\gamma_{1i},y_{i,t-1}\\right)=1-\\mathrm{exp}\\left(-\\gamma_{1i}y_{i,t-1}^{2}\\right),\\quad \\gamma_{1i}\\ge 0\\ \\text{for all}\\ i,$\n\n$S_{it}\\left(\\gamma_{2i},y_{i,t-1}\\right)=\\left[1+\\mathrm{exp}\\left(-\\gamma_{2i}y_{i,t-1}\\right)\\right]^{-1},\\quad \\gamma_{2i}\\ge 0\\ \\text{for all}\\ i,$\n\nwhere $\\epsilon_{it}\\sim iid\\left(0,\\sigma_{i}^{2}\\right)$. In this case, when the deviation of the state variable is negative, the outer regime is $\\Delta y_{it}=\\rho_{i2}y_{i,t-1}+\\epsilon_{it}$, and when the deviation is in the positive direction, the outer regime is $\\Delta y_{it}=\\rho_{i1}y_{i,t-1}+\\epsilon_{it}$, where the transition function takes the extreme values 0 and 1, respectively, for these two cases.\n\nEmirmahmutoglu and Omay suggest replacing $G_{it}\\left(\\gamma_{1i},y_{i,t-1}\\right)$ in the first equation with a first-order Taylor expansion around $\\gamma_{1i}=0$, which gives:\n\n$\\Delta y_{it}=\\rho_{1i}\\gamma_{1i}y_{i,t-1}^{3}S_{it}\\left(\\gamma_{2i},y_{i,t-1}\\right)+\\rho_{2i}\\gamma_{1i}y_{i,t-1}^{3}\\left(1-S_{it}\\left(\\gamma_{2i},y_{i,t-1}\\right)\\right)+\\epsilon_{it}$\n\nThe augmented auxiliary equation is obtained as\n\n$\\Delta y_{it}=\\varphi_{1i}y_{i,t-1}^{3}+\\varphi_{2i}y_{i,t-1}^{4}+\\sum_{j=1}^{p_{i}}\\delta_{ij}\\Delta y_{i,t-j}+\\epsilon_{it}$\n\nThe proposed test statistic is computed by taking the average of the individual $F_{i,AE}$ statistics:\n\n$\\bar{F}_{AE}=N^{-1}\\sum_{i=1}^{N}F_{i,AE}$\n\nSollis proposed using the individual t statistic ($t_{i,AE}^{as}$) with the standard t distribution. Emirmahmutoglu and Omay compute $\\bar{t}_{AE}^{as}$ by taking the average of the individual statistics with the standard distribution. They suggest a sequential panel selection method (SPSM) by Chortareas and Kapetanios.\n\n4. Data and Empirical Results\n\nThe convergence of the intensity and the components of the greenhouse gas concentrations for OECD countries has been analyzed using linear and nonlinear time series and panel unit root tests. The data utilized in the study were obtained from the World Bank-World Development Indicators database. The variables are CO2, Methane, Nitrous, and total greenhouse gas. The data of 29 OECD countries for the period 1970-2012 were investigated in the study. 
Czech Republic, Germany, Estonia, Latvia, Slovakia, and Slovenia were not included in the study.\n\nIn this part of the study where per capita greenhouse gas convergence in OECD countries is investigated, linear and nonlinear panel methods have been used, which attract the attention of many researchers and have significant advantages in empirical studies. The linear panel unit root test results and the nonlinear panel unit root test results for greenhouse gas convergence are shown in Table 1 and Table 2, respectively.\n\nThe validity of the greenhouse gas convergence in Table 1 was investigated applying the tests by Maddala et al. , Harris and Tzavallis , Breitung , Im et al. and Pesaran . According to findings, the null hypothesis of a unit root was not rejected in all unit root tests for per capita methane gas, and stationarity null hypothesis was rejected in the Hadri stationarity test . According to these results, per capita, methane gas convergence in OECD countries does not converge. For the nitrous gas per person, only the null hypothesis of a unit root was rejected in the Choi test, and no stationarity was detected in cases other than this test. The unit root test hypothesis was rejected for CO2 gas per capita according to the results of Levin et al. , Im et al. , and Choi tests. In other words, per capita, CO2 gas convergence is applied in OECD countries. In the last column, the analysis results for total greenhouse gas per capita were given. According to this, Levin et al. and Choi tests showed that per capita greenhouse convergence was valid, while the other six tests were the opposite. In general, it is concluded that convergence is not the case mostly by looking at the results of per capita greenhouse gas linear panel unit root tests in the OECD countries.\n\nIn Table 2, the validity of per capita greenhouse gas convergence in OECD countries was investigated using Ucar and Omay and Emirmahmutoglu and Omay tests from nonlinear panel unit root tests. According to Ucar and Omay and Emirmahmutoğlu and Omay tests for methane gas per capita, the null hypothesis of a unit root is not rejected; in other words, the methane gas convergence is not valid. Both test results for nitrous gas per capita indicate the presence of a unit root. Similarly, both of the test results indicate the presence of a unit root for per capita CO2 gas. Finally, the presence of the null hypothesis of a unit root cannot be rejected according to the results of both tests for per capita total greenhouse gas; in other words, convergence is not valid. Looking at the results of greenhouse gas nonlinear panel unit root tests per capita in OECD countries in general, we conclude that convergence is not valid in all cases.\n\nTable 2. Nonlinear panel unit root test results.\n\nNote: The symbol * means rejection of the null hypothesis of unit root. Source: Authors’ calculation.\n\n5. Conclusions\n\nGlobal warming is on the rise in the world and is expected to have more impact in the upcoming years. Different measures have been taken in recent years to control global warming. In this context, the convergence of greenhouse gas, which is regarded as the core indicator of global warming, for OECD countries is necessary to guide policy makers.\n\nThe convergence analysis has been carried out in recent years using linear and nonlinear panel unit root tests in the literature. The conclusion is that the convergence of per capita greenhouse gas emissions in OECD countries is not valid. 
The findings indicate that the long-term greenhouse gas emissions in the OECD countries will not be close to the same long-term values. According to these results, the differences in the long-term greenhouse gas emissions in the OECD countries will not disappear. Various policy changes need to be made in order to take these gases under control.\n\nFuture studies can focus on a similar analysis for country groups. Different results could be obtained from homogenous country groups.\n\nCite this paper: Canel, C. , Güriş, S. , Öktem, R. , Güriş, B. , Öktem, B. , Yaşgül, Y. and Tıraşoğlu, M. (2020) An Examination of Greenhouse Gas Convergence in OECD Countries. Modern Economy, 11, 79-88. doi: 10.4236/me.2020.111008.\nReferences\n\n   El-Montasser, G., Inglesi-Lotz, R. and Gupta, R. (2015) Convergence of Greenhouse Gas Emissions among G7 Countries. Applied Economics, 47, 6543-6552.\nhttps://doi.org/10.1080/00036846.2015.1080809\n\n   Strazicich, M.C. and List, J.A. (2003) Are CO2 Emission Levels Converging among Industrial Countries? Environmental and Resource Economics, 24, 263-271.\nhttps://doi.org/10.1023/A:1022910701857\n\n   Romero-Avila, D. (2008) Convergence in Carbon Dioxide Emissions among Industrialised Countries Revisited. Energy Economics, 30, 2265-2282.\nhttps://doi.org/10.1016/j.eneco.2007.06.003\n\n   Lee, C.C. and Chang, C.P. (2008) New Evidence on the Convergence of Per Capita Carbon Dioxide Emissions from Panel Seemingly Unrelated Regressions Augmented Dickey-Fuller Tests. Energy, 33, 1468-1475.\nhttps://doi.org/10.1016/j.energy.2008.05.002\n\n   Barassi, M.R., Cole, M.A. and Elliott, R.J. (2008) Stochastic Divergence or Convergence of Per Capita Carbon Dioxide Emissions: Re-Examining the Evidence. Environmental and Resource Economics, 40, 121-137.\nhttps://doi.org/10.1007/s10640-007-9144-1\n\n   Barassi, M.R., Cole, M.A. and Elliott, R.J. (2011) The Stochastic Convergence of CO2 Emissions: A Long Memory Approach. Environmental and Resource Economics, 49, 367-385.\nhttps://doi.org/10.1007/s10640-010-9437-7\n\n   Panopoulou, E. and Pantelidis, T. (2009) Club Convergence in Carbon Dioxide Emissions. Environmental and Resource Economics, 44, 47-70.\nhttps://doi.org/10.1007/s10640-008-9260-6\n\n   Li, X. and Lin, B. (2013) Global Convergence in Per Capita CO2 Emissions. Renewable and Sustainable Energy Reviews, 24, 357-363.\nhttps://doi.org/10.1016/j.rser.2013.03.048\n\n   Ucar, N. and Tolga, O. (2009) Testing for Unit Root in Nonlinear Heterogeneous Panels. Economics Letters, 104, 5-8.\nhttps://doi.org/10.1016/j.econlet.2009.03.018\n\n   Emirmahmutoglu, F. and Tolga, O. (2014) Reexamining the PPP Hypothesis: A Nonlinear Asymmetric Heterogeneous Panel Unit Root Test. Economic Modelling, 40, 184-190.\nhttps://doi.org/10.1016/j.econmod.2014.03.028\n\n   Mario, C. and Nicholas, S. (2007) A Bootstrap Panel Unit Root Test under Cross-Sectional Dependence, with an Application to PPP. Computational Statistics & Data Analysis, 51, 4028-4037.\nhttps://doi.org/10.1016/j.csda.2006.12.025\n\n   Kapetanios, G., Shin, Y. and Snell, A. (2003) Testing for a Unit Root in the Nonlinear STAR Framework. Journal of Econometrics, 112, 359-379.\nhttps://doi.org/10.1016/S0304-4076(02)00202-6\n\n   Pesaran, M.H. (2007) A Simple Panel Unit Root Test in the Presence of Cross-Section Dependence. Journal of Applied Econometrics, 22, 265-312.\nhttps://doi.org/10.1002/jae.951\n\n   Karadagli, E.C. and Nazli, C.O. (2012) Testing Weak Form Market Efficiency of Emerging Markets: A Nonlinear Approach. 
Journal of Applied Economic Sciences, 7, 235-245.\n\n   Bozoklu, S. and Yilanci, V. (2014) Current Account Sustainability in Emerging Markets: An Analysis with Linear and Nonlinear Panel Unit Root Tests. Atatürk üniversitesi Iktisadi ve Idari Bilimler Dergisi, 28, 251-264.\n\n   Sollis, R. (2009) A Simple Unit Root Test against Asymmetrical STAR Nonlinearity with an Application to Real Exchange Rates in Nordic Countries. Economic Modelling, 26, 118-125.\nhttps://doi.org/10.1016/j.econmod.2008.06.002\n\n   Bahmani-Oskooee, M., Tsangyao, C. and Kuei-Chiu, L. (2016) Panel Asymmetric Nonlinear Unit Root Test and PPP in Africa. Applied Economics Letters, 23, 554-558.\nhttps://doi.org/10.1080/13504851.2015.1088132\n\n   Chortareas, G. and Kapetanios, G. (2009) Getting PPP Right: Identifying Mean-Reverting Real Exchange Rates in Panels. Journal of Banking and Finance, 33, 390-404.\nhttps://doi.org/10.1016/j.jbankfin.2008.08.010\n\n   Maddala, G.S. and Shaowen, W. (1999) A Comparative Study of Unit Root Tests with Panel Data and A New Simple Test. Oxford Bulletin of Economics and Statistics, 61, 631-652.\nhttps://doi.org/10.1111/1468-0084.61.s1.13\n\n   Harris, R.D.F. and Elias, T. (1999) Inference for Unit Roots in Dynamic Panels Where the Time Dimension Is Fixed. Journal of Econometrics, 91, 203-205.\nhttps://doi.org/10.1016/S0304-4076(98)00076-1\n\n   Breitung, J. (2001) The Local Power of Some Unit Root Tests for Panel Data. In: Nonstationary Panels, Panel Cointegration, and Dynamic Panels, Emerald Group Publishing Limited, Bingley, 161-167.\nhttps://doi.org/10.1016/S0731-9053(00)15006-6\n\n   Im, K.S., Hashem, P. and Yongcheol, S. (2003) Testing for Unit Roots in Heterogeneous Panels. Journal of Econometrics, 115, 53-74.\nhttps://doi.org/10.1016/S0304-4076(03)00092-7\n\n   Hadri, K. (2000) Testing for Stationarity in Heterogeneous Panel Data. Econometrics Journal, 3, 148-161.\nhttps://doi.org/10.1111/1368-423X.00043\n\n   Choi, I. (2001) Unit Root Tests for Panel Data. Journal of International Money and Finance, 20, 249-272.\nhttps://doi.org/10.1016/S0261-5606(00)00048-6\n\n   Levin, A., Chien-Fu, L. and Chia-Shang, J.C. (2002) Unit Root Tests in Panel Data: Asymptotic and Finite-Sample Properties. Journal of Econometrics, 108, 1-24.\nhttps://doi.org/10.1016/S0304-4076(01)00098-7\n\nTop" ]
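As a rough, hedged illustration of the panel unit root logic described in Section 3 above (this is not the authors' Ucar-Omay or Emirmahmutoglu-Omay procedure and it uses none of their data), the sketch below averages country-level ADF t-statistics computed on log emissions measured relative to the cross-country average, in the spirit of an IPS-type test. The DataFrame layout, the column names and the function name are assumptions made for the example; adfuller is taken from statsmodels.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def average_adf_t(emissions: pd.DataFrame) -> float:
    # emissions: hypothetical frame indexed by year, one column per OECD country.
    # Work with each country's log emissions measured against the cross-country average,
    # i.e. the "convergence gap" that the stationarity tests in the paper examine.
    gap = np.log(emissions).sub(np.log(emissions).mean(axis=1), axis=0)
    t_stats = []
    for country in gap.columns:
        # ADF regression with a constant; lag length chosen by AIC.
        t_stats.append(adfuller(gap[country].dropna(), regression="c", autolag="AIC")[0])
    # A strongly negative average t-statistic would point toward stationary gaps (convergence).
    return float(np.mean(t_stats))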
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8648956,"math_prob":0.8499273,"size":20606,"snap":"2021-21-2021-25","text_gpt3_token_len":4779,"char_repetition_ratio":0.15833414,"word_repetition_ratio":0.06944897,"special_character_ratio":0.2453169,"punctuation_ratio":0.14952794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98023766,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-13T05:44:14Z\",\"WARC-Record-ID\":\"<urn:uuid:27ed0bce-1805-433b-ae91-2296c9e70108>\",\"Content-Length\":\"96863\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03f9e6c2-7c1c-44ca-a627-113084c286e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa403e97-eaec-49ad-8c9d-5421d96b2b47>\",\"WARC-IP-Address\":\"192.111.37.22\",\"WARC-Target-URI\":\"https://m.scirp.org/papers/97834\",\"WARC-Payload-Digest\":\"sha1:G6RSSV5SVYNF4TAKI2WHLYI4WEDORTF6\",\"WARC-Block-Digest\":\"sha1:2MJXVFLQPYDW7VT7XDHZSDFUTQ7KXRWZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991537.32_warc_CC-MAIN-20210513045934-20210513075934-00582.warc.gz\"}"}
https://mathvideoprofessor.com/courses/seventh-grade/lessons/unit-6-expressions-equations-and-inequalities/topic/reasoning-about-equations-with-tape-diagrams/
[ "# Reasoning about Equations with Tape Diagrams\n\nWarmup\n\nSort the equations into categories of your choosing.\n\nActivity #1\n\n` Exercise Questions.`", null, "• Select all the equations that match the diagram below.\n\n• Select all the equations that match the diagram below.\n\n• Select all the equations that match the diagram below.\n\n• Select all the equations that match the diagram below.\n\n• Select all the equations that match the diagram below.\n\nActivity #2\n\n` Draw a Tape Diagram.`", null, "• Draw a tape diagram to match the equations below.\n• Use any method to find the values of x and y that make the equations true.\n\n(1.) 114=3x+18.\n\n(2.) 114=3(y+18).\n\n(3.) 5(x+1) = 20\n\n(4.) 5x+1 = 20\n\nActivity #3\n\n` Draw Tape Diagrams.`", null, "• Draw a tape diagram to match the equation 5(x + 1) = 20.\n\n• Draw a tape diagram to match the equation 5x + 1 = 20.\n\nSelect all the equations that match the tape diagram below.", null, "Challenge #1\n\nSort the equations into categories of your choosing.\n\nChallenge #2\n\nComplete the magic squares by typing in the empty boxes so that the sum of each row, each column, and each diagonal in a grid are all equal.​\n\nChallenge #3\n\nQuiz Time" ]
[ null, "https://i0.wp.com/mathvideoprofessor.com/wp-content/uploads/2021/08/Instructions-3.png", null, "https://i0.wp.com/mathvideoprofessor.com/wp-content/uploads/2021/08/Instructions-3.png", null, "https://i0.wp.com/mathvideoprofessor.com/wp-content/uploads/2021/08/Instructions-3.png", null, "https://i0.wp.com/mathvideoprofessor.com/wp-content/uploads/2021/08/Capture-197.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6721231,"math_prob":0.9997615,"size":611,"snap":"2023-40-2023-50","text_gpt3_token_len":172,"char_repetition_ratio":0.11532125,"word_repetition_ratio":0.09195402,"special_character_ratio":0.28968903,"punctuation_ratio":0.140625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999523,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T19:20:58Z\",\"WARC-Record-ID\":\"<urn:uuid:62c3565e-df48-450d-9f71-7278f364c6bd>\",\"Content-Length\":\"253311\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d74e4090-2d45-4fbd-802b-216d9892836e>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb5bd171-0034-4fc4-876f-2329ddbbded3>\",\"WARC-IP-Address\":\"35.208.47.198\",\"WARC-Target-URI\":\"https://mathvideoprofessor.com/courses/seventh-grade/lessons/unit-6-expressions-equations-and-inequalities/topic/reasoning-about-equations-with-tape-diagrams/\",\"WARC-Payload-Digest\":\"sha1:OUMJ4PTKEPTGYNMETEAZ3NP3ROP6TRVS\",\"WARC-Block-Digest\":\"sha1:C5GDGGP6FBYVH2W3DIHXJQ43B45E36AG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100304.52_warc_CC-MAIN-20231201183432-20231201213432-00103.warc.gz\"}"}
https://oa.journalfeeds.online/2021/12/30/document-level-medical-relation-extraction-via-edge-oriented-graph-neural-network-based-on-document-structure-and-external-knowledge-bmc-medical-informatics-and-decision-making/
[ "# Document-level medical relation extraction via edge-oriented graph neural network based on document structure and external knowledge – BMC Medical Informatics and Decision Making\n\n#### ByTao Li, Ying Xiong, Xiaolong Wang, Qingcai Chen and Buzhou Tang\n\nDec 30, 2021", null, "In this section, we first introduce RE based on document structure, and then RE based on external knowledge from two aspects: knowledge graph and entity description.\n\n### Relation extraction based on document structure\n\nA document usually has a hierarchical structure like an example, as shown in Fig. 2, where a document (d_{1}) consists of two chapters (c_{1}) and (c_{2}), and each chapter contains some sentences with many entity mentions. Suppose that a sentence (s = w_{1} w_{2} ldots w_{left| s right|}), it can be represented as (H_{s}^{local} = left[ {h_{1}^{local} ,h_{2}^{local} , ldots ,h_{left| s right|}^{local} } right]) via an encoding layer.\n\nIn a document with |d| sentences (d = s_{1} ,s_{2} , ldots ,s_{left| d right|}), there are five kinds of nodes corresponding to document structure as follows:\n\n• Mention Node (M). Each mention node (m) is represented as (n_{m} = { }left[ {avg_{{w_{i} in m}} left( {h_{i}^{local} } right);t_{m} } right]), where ‘;’ denotes concatenation operation, and (t_{m}) is an embedding to represent the node type of mention node.\n\n• Entity Node (E). An entity (e) is represented as (n_{e} = left[ {avg_{m in e} left( {n_{m} } right);t_{e} } right]), where (avg_{m in e} left( {n_{m} } right)) is the average representation of all mentions corresponding to (e), and (t_{e}) is an embedding to represent the node type of entity node.\n\n• Sentence Node (S). Each sentence node (s) is represented as (n_{s} = { }left[ {avg_{{w_{i} in s}} left( {h_{i}^{local} } right);t_{s} } right]), where (t_{s}) is an embedding to represent the node type of sentence.\n\n• Chapter Node (C). A chapter node c is represented by the average representation of all sentence nodes it contains and the embedding of the node type of chapter, that is, (n_{c} = left[ {avg_{s in c} left( {h_{s}^{global} } right);t_{c} } right]);\n\n• Document Node (D). A document node (d) is represented by the average representation of all chapter nodes and the embedding of the node type of document (n_{d} = left[ {avg_{{{text{c}} in d}} left( {n_{c} } right);t_{d} } right]).\n\nGiven the five kinds of nodes above, we connect them with the following six kinds of edges, as shown in Fig. 3:\n\n• Mention-Sentence (MS). When an entity mention (m) appears in a sentence, there is an edge between the corresponding entity mention node and the sentence node (s), and the edge is represented as (e_{MS} = left[ {n_{m} ;n_{s} } right]);\n\n• Mention-Mention (MM). When two entity mentions (m_{1}) and (m_{2}) appear in the same sentence (s), there is an edge between the two corresponding entity mention nodes (n_{{m_{1} }}) and (n_{{m_{2} }}). 
The edge can be represented as (e_{MM} = left[ {n_{{m_{1} }} ;n_{{m_{2} }} ;c_{{m_{1} m_{2} }} ;dleft( {s_{1} ,{text{s}}_{2} } right)} right]), where (dleft( {m_{1} ,{text{m}}_{2} } right)) is the representation of the relative distance between the two entity mentions in the sentence, and (c_{{m_{1} m_{2} }}) is the attention vector between the two entity mentions calculated by the following equations:\n\n$$begin{array}{*{20}c} {alpha_{k,i} = n_{{m_{k} }}^{T} w_{i} ,} \\ end{array}$$\n\n(1)\n\n$$begin{array}{*{20}c} {a_{k,i} = frac{{exp left( {alpha_{k,i} } right)}}{{mathop sum nolimits_{{j in left[ {1,n} right],j notin m_{k} }} exp left( {alpha_{k,j} } right)}},} \\ end{array}$$\n\n(2)\n\n$$begin{array}{*{20}c} {a_{i} = frac{{a_{1,i} + a_{2,i} }}{2},} \\ end{array}$$\n\n(3)\n\n$$begin{array}{*{20}c} {c_{{m_{1} ,m_{2} }} = H^{T} a,} \\ end{array}$$\n\n(4)\n\nwhere (k in left{ {1,2} right}), (a_{i}) is the attention weight of the ith word in the entity mention pair <(m_{1} ,m_{2})>, and (H in R^{hidden_dim times left| s right|}) is the representation of sentence (s);\n\n• Entity-Mention (ME). There is an edge between an entity mention node (m) and the corresponding entity node (e), that is, (e_{ME} = left[ {n_{m} ;n_{e} } right]);\n\n• Sentence-Sentence (SS). For all sentence nodes in a document, there are edges between any two sentence nodes. An SS edge is represented by (e_{SS} = left[ {n_{{S_{i} }} ;n_{{s_{j} }} ;dleft( {s_{i} ,{text{s}}_{j} } right);left| {n_{{S_{i} }} – n_{{s_{j} }} } right|} right](i ne j)), where (n_{{S_{i} }}) and (n_{{S_{j} }}) are the representation of (s_{{text{i}}}) and the representation of (s_{j}), and (dleft( {s_{i} ,{text{s}}_{j} } right)) is the representation of the relative distance between (s_{{text{i}}}) and (s_{j}) measured by the number of sentences between them;\n\n• Entity-Sentence (ES). When there is an entity mention node (m) corresponding to an entity node (e) in a sentence (s), there is an edge between (e) and (s). The edge is represented as (e_{ES} = left[ {n_{e} ;n_{s} } right]);\n\n• Sentence-Chapter (SC). There is an edge between a sentence node (s) and a chapter node (c), and it is represented as (e_{SC} = left[ {n_{s} ;n_{c} } right]);\n\n• Chapter-Chapter (CC). There is an edge between two chapter nodes (c_{1}) and (c_{2}) in a document, and it is represented as (e_{CC} = left[ {n_{{c_{1} }} ;n_{{c_{2} }} } right]);\n\n• Chapter-Document (CD). There is an edge between a chapter node (c) and a document node (d), and it is represented as (e_{DC} = left[ {n_{d} ;n_{c} } right]).\n\nWe further apply a linear transformation to all edge representations using the following equation:\n\n$$begin{array}{*{20}c} {v_{z}^{left( 1 right)} = {varvec{W}}_{z} e_{z} ,} \\ end{array}$$\n\n(5)\n\nwhere (z in left{ {MS,{ }MM,ME,SS,ES,SC,CC,CD} right}) and ({varvec{W}}_{z}) is a learnable parameter matrix.\n\n### Relation extraction based on external knowledge\n\nTo utilize external knowledge, we regard any entity in external knowledge that also appears in text as an additional node and connect it to the corresponding entity node in text. In this paper, we introduce two kinds of knowledge nodes according to the forms of external knowledge of entities: (1) entity description and (2) knowledge graph.\n\nSuppose that (e_{1}), (e_{2}) and (e_{3}) have their external description, (e_{1}) and (e_{3}) exist in an external knowledge graph, the graph based on document structure as shown in Fig. 3 can be extended to the graph as shown in Fig. 
4 after adding knowledge nodes, where (kd_{i}) and (ks_{j}) denote knowledge node based on entity description and knowledge node based on knowledge graph, respectively. In this way, we can obtain a graph that takes full advantage of external knowledge as much as possible.\n\n### Knowledge node representation based on knowledge graph\n\nWe deploy a translation distance model, a semantic matching model and a graph model, that is, TransE , RESCAL and GAT , to represent knowledge nodes based on knowledge graph respectively.\n\nTransE assumes that any triple (leftlangle {h, r, t} rightrangle), where (h) is a head entity node, (r) is a relation, and (t) is a tail entity node, satisfies the hypothesis of (h + r approx t), so as to ensure that the distance between two entity nodes is close to the representation of the relation between the two nodes. In this way, the multi-hop relation between two entities can be represented by additive transitivity, that is, if there is a relation (r_{1}) between (h_{1}) and (t_{1}), a relation (r_{2}) between (t_{1}) and (t_{2}), …, and a relation (r_{K}) between (t_{K – 1}) and (t_{K}), there is an implicit relation between (h_{1}) and (t_{K}) as follows:\n\n$$begin{array}{*{20}c} {h_{1} + r_{1} + r_{2} + cdots + r_{K} approx t_{K} ,} \\ end{array}$$\n\n(6)\n\nThe max-margin function of negative sampling is used as the objective function of TransE:\n\n$$begin{array}{*{20}c} {L = mathop sum limits_{{left( {h,r,t} right) in Delta }} mathop sum limits_{{left( {h^{prime},r^{prime},t^{prime}} right) in Delta^{prime}}} maxleft( {f_{r} left( {h,t} right) + gamma – f_{{r^{prime}}} left( {h^{prime},t^{prime}} right),0} right),} \\ end{array}$$\n\n(7)\n\nwhere (left( {h,r,t} right) in {Delta }) is a true triplet, while (left( {h^{prime},r^{prime},t^{prime}} right) in{Delta^{prime}}) is a negative triplet obtained by sampling, (f_{r} left( {h,t} right)) is the score of (left( {h,r,t} right)), and (gamma > 0) denotes the margin usually set to 1. Finally, the learned (h) is regarded as (h_{ks}), the knowledge node representation corresponding to node (ks) without considering its type.\n\nRESCAL captures the potential semantics between two entities through the bilinear function as follows:\n\n$$begin{array}{*{20}c} {f_{r} left( {h,t} right) = h^{T} M_{r} t,} \\ end{array}$$\n\n(8)\n\nAs shown in Fig. 5, RESCAL represents relation triples as a three-dimensional tensor ({mathcal{X}}), where ({mathcal{X}}_{ijk} = 1) indicates that there is a true triplet (leftlangle {e_{i} ,r_{k} ,e_{j} } rightrangle). 
The tensor decomposition model is used to model the relationship implicitly:\n\n$$begin{array}{*{20}c} {{mathcal{X}}_{k} approx AR_{k} A^{T} ,;{text{for }};k = 1, ldots ,m,} \\ end{array}$$\n\n(9)\n\nwhere ({mathcal{X}}_{{text{k}}}) is the (k)th component of ({mathcal{X}}), (A in R^{n times r}) contains the potential representations of entities, (R_{k} in R^{r times r}) is a symmetric matrix used to model the potential interactions in the (k)th relation:\n\n$$begin{array}{*{20}c} {fleft( {A,R_{k} } right) = frac{1}{2}mathop sum limits_{i,j,k} left( {{mathcal{X}}_{ijk} – {varvec{a}}_{i}^{T} R_{k} {varvec{a}}_{j} } right)^{2} ,} \\ end{array}$$\n\n(10)\n\nwhere (h_{ks}) is the component of (A) corresponding to node (ks).\n\nIn addition, we also represent the knowledge node (ks) by the subgraph centered on the node using GAT.\n\nBased on knowledge graph, a node (ks) is represented by (n_{ks} = left[ {h_{ks} ;t_{ks} } right]), where (h_{ks}) is the representation obtained from TransE, RESCAL or GAT, and (t_{ks}) is the embedding of the node type of knowledge graph node. The edge between an entity node (e) and the corresponding knowledge node (k{text{s}}) is represented as (e_{EKS} = left[ {n_{e} ;n_{ks} } right]), and it is also further transformed into (v_{EKS}^{left( 1 right)}) via a linear transformation function:\n\n$$begin{array}{*{20}c} {v_{EKS}^{left( 1 right)} = {varvec{W}}_{EKS} e_{EKS} ,} \\ end{array}$$\n\n(11)\n\nwhere ({varvec{W}}_{EKS}) is a learnable parameter matrix.\n\n### Knowledge node representation based on description\n\nIn this paper, we use the following two methods to obtain knowledge node representation based on the entity description:\n\n1. 1.\n\nDoc2vec (also called paragraph2vec), inspired by word2vec proposed by Tomas Mikolov, which can transform a sentence or a short text into a corresponding low dimensional vector representation of fixed length.\n\n2. 2.\n\nAn end-to-end neural network, as shown in Fig. 6, which are used to encode the description text of a given knowledge node, called EMB.\n\nSimilar to knowledge node (ks), knowledge node (kd) based on description is represented as (n_{kd} = left[ {h_{kd} ;t_{kd} } right]). 
The edge between (kd) and the corresponding entity node (e) is represented as (e_{EKD} = left[ {n_{e} ;n_{EKD} } right]) and is further transformed by\n\n$$begin{array}{*{20}c} {v_{EKD}^{left( 1 right)} = {varvec{W}}_{EKD} e_{EKD} ,} \\ end{array}$$\n\n(12)\n\nwhere ({varvec{W}}_{EKD}) is a learnable parameter matrix.\n\n### Inference\n\nFollowing KEoG, with the help of the walk aggregation layer , a path between two entity nodes (i) and (k) of length (2l) can be represented as\n\n$$begin{array}{*{20}c} {fleft( {v_{ik}^{left( l right)} ,v_{kj}^{left( l right)} } right) = sigma left( {v_{ik}^{left( l right)} odot left( {{varvec{W}}v_{kj}^{left( l right)} } right)} right),} \\ end{array}$$\n\n(13)\n\nwhere (sigma) is the sigmoid activation function, (odot) is the element-wise multiplication, and ({varvec{W}} in {mathbb{R}}^{{{varvec{d}}_{{varvec{z}}} times {varvec{d}}_{{varvec{z}}} }}) is a learnable parameter matrix used to combine two short paths of length (l) (path between (i) and (j), and path between (j) and (k)) to generate one long path of length (2l).\n\nAll paths from node (i) to node (k) are aggregated to form the representation of the edge from node (i) to node (j) of length (2l) as follows:\n\n$$begin{array}{*{20}c} {v_{ij}^{{left( {2l} right)}} = alpha v_{ij}^{left( l right)} + left( {1 – alpha } right)mathop sum limits_{k ne i,j} fleft( {v_{ik}^{left( l right)} ,v_{kj}^{left( l right)} } right),} \\ end{array}$$\n\n(14)\n\nwhere (alpha in left[ {0,1} right]) is a linear interpolation scalar to control the contribution of edges of length (l).\n\nAfter obtaining the path representation of any entity pair of interest, we adopt the softmax function as classifier. Like in KEoG, both cross-entropy loss function and soft F-measure loss function are used as a part of the total loss function." ]
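To make the TransE score and the max-margin objective of Eqs. (6)-(7) concrete, here is a minimal NumPy sketch (an illustration only, not the authors' implementation; the variable names and the choice of the Euclidean norm as the distance are assumptions):

import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    # TransE scores a triple by the distance between h + r and t (smaller means more plausible).
    return float(np.linalg.norm(h + r - t))

def transe_margin_loss(pos, neg, gamma: float = 1.0) -> float:
    # Max-margin loss for one true triple and one sampled negative triple, mirroring Eq. (7).
    h, r, t = pos
    h_neg, r_neg, t_neg = neg
    return max(transe_score(h, r, t) + gamma - transe_score(h_neg, r_neg, t_neg), 0.0)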
[ null, "https://media.springernature.com/w200/springer-static/cover/journal/12911.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8247469,"math_prob":0.99890757,"size":13772,"snap":"2022-27-2022-33","text_gpt3_token_len":4118,"char_repetition_ratio":0.17090355,"word_repetition_ratio":0.04524034,"special_character_ratio":0.3225385,"punctuation_ratio":0.105321504,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999012,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T18:50:58Z\",\"WARC-Record-ID\":\"<urn:uuid:3fcc45eb-7552-4a5e-b70e-f92e3200eb6d>\",\"Content-Length\":\"182791\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bfdc5738-99f2-4924-abff-538e54ee2b47>\",\"WARC-Concurrent-To\":\"<urn:uuid:3e00965f-c361-4ffb-8061-1e331570917c>\",\"WARC-IP-Address\":\"194.195.116.19\",\"WARC-Target-URI\":\"https://oa.journalfeeds.online/2021/12/30/document-level-medical-relation-extraction-via-edge-oriented-graph-neural-network-based-on-document-structure-and-external-knowledge-bmc-medical-informatics-and-decision-making/\",\"WARC-Payload-Digest\":\"sha1:OGFJYCGEMJJFFL6JV7YP4X52O5LEB7LV\",\"WARC-Block-Digest\":\"sha1:S6BJKHPILB2Z6XSZWEIGVCTBVYJE6OKE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572515.15_warc_CC-MAIN-20220816181215-20220816211215-00165.warc.gz\"}"}
https://www.clutchprep.com/chemistry/practice-problems/151321/what-is-the-average-mass-of-a-single-argon-in-gram
[ "# Problem: What is the average mass of a single argon in gram?\n\n###### FREE Expert Solution\n\nWe’re being asked to determine the mass of single argon in gram.\n\nWe can calculate the mass of the atom using the following steps:\n\nStep 1: Convert the number of atoms to moles (using the Avogadro’s number)\n\n89% (204 ratings)", null, "###### Problem Details\n\nWhat is the average mass of a single argon in gram?" ]
[ null, "https://cdn.clutchprep.com/assets/button-view-text-solution.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9101726,"math_prob":0.9538997,"size":608,"snap":"2021-21-2021-25","text_gpt3_token_len":133,"char_repetition_ratio":0.13741721,"word_repetition_ratio":0.0,"special_character_ratio":0.19901316,"punctuation_ratio":0.075630255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98599666,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-09T01:44:39Z\",\"WARC-Record-ID\":\"<urn:uuid:80746d1a-3bf8-4e1d-b459-19e66809b72f>\",\"Content-Length\":\"114022\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:185bed00-93ab-4408-9bc7-df3da875db6d>\",\"WARC-Concurrent-To\":\"<urn:uuid:e65a1aa9-4275-4d98-b97a-872447708620>\",\"WARC-IP-Address\":\"35.171.215.128\",\"WARC-Target-URI\":\"https://www.clutchprep.com/chemistry/practice-problems/151321/what-is-the-average-mass-of-a-single-argon-in-gram\",\"WARC-Payload-Digest\":\"sha1:GZF5E4F4WGKLIFIUCRFJSMAZQ2ITISJL\",\"WARC-Block-Digest\":\"sha1:4NYSBJVQ7MLRCSSXO4HEWUT36DRF63K7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988953.13_warc_CC-MAIN-20210509002206-20210509032206-00556.warc.gz\"}"}
https://topepo.github.io/caret/model-training-and-tuning.html
[ "# 5 Model Training and Tuning\n\nContents\n\n## 5.1 Model Training and Parameter Tuning\n\nThe caret package has several functions that attempt to streamline the model building and evaluation process.\n\nThe train function can be used to\n\n• evaluate, using resampling, the effect of model tuning parameters on performance\n• choose the “optimal” model across these parameters\n• estimate model performance from a training set\n\nFirst, a specific model must be chosen. Currently, 238 are available using caret; see train Model List or train Models By Tag for details. On these pages, there are lists of tuning parameters that can potentially be optimized. User-defined models can also be created.\n\nThe first step in tuning the model (line 1 in the algorithm below) is to choose a set of parameters to evaluate. For example, if fitting a Partial Least Squares (PLS) model, the number of PLS components to evaluate must be specified.", null, "Once the model and tuning parameter values have been defined, the type of resampling should be also be specified. Currently, k-fold cross-validation (once or repeated), leave-one-out cross-validation and bootstrap (simple estimation or the 632 rule) resampling methods can be used by train. After resampling, the process produces a profile of performance measures is available to guide the user as to which tuning parameter values should be chosen. By default, the function automatically chooses the tuning parameters associated with the best value, although different algorithms can be used (see details below).\n\n## 5.2 An Example\n\nThe Sonar data are available in the mlbench package. Here, we load the data:\n\nlibrary(mlbench)\ndata(Sonar)\nstr(Sonar[, 1:10])\n## 'data.frame': 208 obs. of 10 variables:\n## $V1 : num 0.02 0.0453 0.0262 0.01 0.0762 0.0286 0.0317 0.0519 0.0223 0.0164 ... ##$ V2 : num 0.0371 0.0523 0.0582 0.0171 0.0666 0.0453 0.0956 0.0548 0.0375 0.0173 ...\n## $V3 : num 0.0428 0.0843 0.1099 0.0623 0.0481 ... ##$ V4 : num 0.0207 0.0689 0.1083 0.0205 0.0394 ...\n## $V5 : num 0.0954 0.1183 0.0974 0.0205 0.059 ... ##$ V6 : num 0.0986 0.2583 0.228 0.0368 0.0649 ...\n## $V7 : num 0.154 0.216 0.243 0.11 0.121 ... ##$ V8 : num 0.16 0.348 0.377 0.128 0.247 ...\n## $V9 : num 0.3109 0.3337 0.5598 0.0598 0.3564 ... ##$ V10: num 0.211 0.287 0.619 0.126 0.446 ...\n\nThe function createDataPartition can be used to create a stratified random sample of the data into training and test sets:\n\nlibrary(caret)\nset.seed(998)\ninTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] We will use these data illustrate functionality on this (and other) pages. ## 5.3 Basic Parameter Tuning By default, simple bootstrap resampling is used for line 3 in the algorithm above. Others are available, such as repeated K-fold cross-validation, leave-one-out etc. The function trainControl can be used to specifiy the type of resampling: fitControl <- trainControl(## 10-fold CV method = \"repeatedcv\", number = 10, ## repeated ten times repeats = 10) More information about trainControl is given in a section below. The first two arguments to train are the predictor and outcome data objects, respectively. The third argument, method, specifies the type of model (see train Model List or train Models By Tag). To illustrate, we will fit a boosted tree model via the gbm package. 
The basic syntax for fitting this model using repeated cross-validation is shown below: set.seed(825) gbmFit1 <- train(Class ~ ., data = training, method = \"gbm\", trControl = fitControl, ## This last option is actually one ## for gbm() that passes through verbose = FALSE) gbmFit1 ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## interaction.depth n.trees Accuracy Kappa ## 1 50 0.7935784 0.5797839 ## 1 100 0.8171078 0.6290208 ## 1 150 0.8219608 0.6386184 ## 2 50 0.8041912 0.6027771 ## 2 100 0.8302059 0.6556940 ## 2 150 0.8283627 0.6520181 ## 3 50 0.8110343 0.6170317 ## 3 100 0.8301275 0.6551379 ## 3 150 0.8310343 0.6577252 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 10 ## Accuracy was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 150, ## interaction.depth = 3, shrinkage = 0.1 and n.minobsinnode = 10. For a gradient boosting machine (GBM) model, there are three main tuning parameters: • number of iterations, i.e. trees, (called n.trees in the gbm function) • complexity of the tree, called interaction.depth • learning rate: how quickly the algorithm adapts, called shrinkage • the minimum number of training set samples in a node to commence splitting (n.minobsinnode) The default values tested for this model are shown in the first two columns (shrinkage and n.minobsinnode are not shown beause the grid set of candidate models all use a single value for these tuning parameters). The column labeled “Accuracy” is the overall agreement rate averaged over cross-validation iterations. The agreement standard deviation is also calculated from the cross-validation results. The column “Kappa” is Cohen’s (unweighted) Kappa statistic averaged across the resampling results. train works with specific models (see train Model List or train Models By Tag). For these models, train can automatically create a grid of tuning parameters. By default, if p is the number of tuning parameters, the grid size is 3^p. As another example, regularized discriminant analysis (RDA) models have two parameters (gamma and lambda), both of which lie between zero and one. The default training grid would produce nine combinations in this two-dimensional space. There is additional functionality in train that is described in the next section. ## 5.4 Notes on Reproducibility Many models utilize random numbers during the phase where parameters are estimated. Also, the resampling indices are chosen using random numbers. There are two main ways to control the randomness in order to assure reproducible results. • There are two approaches to ensuring that the same resamples are used between calls to train. The first is to use set.seed just prior to calling train. The first use of random numbers is to create the resampling information. Alternatively, if you would like to use specific splits of the data, the index argument of the trainControl function can be used. This is briefly discussed below. • When the models are created inside of resampling, the seeds can also be set. 
While setting the seed prior to calling train may guarantee that the same random numbers are used, this is unlikely to be the case when parallel processing is used (depending which technology is utilized). To set the model fitting seeds, trainControl has an additional argument called seeds that can be used. The value for this argument is a list of integer vectors that are used as seeds. The help page for trainControl describes the appropriate format for this option. How random numbers are used is highly dependent on the package author. There are rare cases where the underlying model function does not control the random number seed, especially if the computations are conducted in C code. Also, please note that some packages load random numbers when loaded (directly or via namespace) and this may affect reproducibility. ## 5.5 Customizing the Tuning Process There are a few ways to customize the process of selecting tuning/complexity parameters and building the final model. ### 5.5.1 Pre-Processing Options As previously mentioned,train can pre-process the data in various ways prior to model fitting. The function preProcess is automatically used. This function can be used for centering and scaling, imputation (see details below), applying the spatial sign transformation and feature extraction via principal component analysis or independent component analysis. To specify what pre-processing should occur, the train function has an argument called preProcess. This argument takes a character string of methods that would normally be passed to the method argument of the preProcess function. Additional options to the preProcess function can be passed via the trainControl function. These processing steps would be applied during any predictions generated using predict.train, extractPrediction or extractProbs (see details later in this document). The pre-processing would not be applied to predictions that directly use the object$finalModel object.\n\nFor imputation, there are three methods currently implemented:\n\n• k-nearest neighbors takes a sample with missing values and finds the k closest samples in the training set. The average of the k training set values for that predictor are used as a substitute for the original data. When calculating the distances to the training set samples, the predictors used in the calculation are the ones with no missing values for that sample and no missing values in the training set.\n• another approach is to fit a bagged tree model for each predictor using the training set samples. This is usually a fairly accurate model and can handle missing values. When a predictor for a sample requires imputation, the values for the other predictors are fed through the bagged tree and the prediction is used as the new value. This model can have significant computational cost.\n• the median of the predictor’s training set values can be used to estimate the missing data.\n\nIf there are missing values in the training set, PCA and ICA models only use complete samples.\n\n### 5.5.2 Alternate Tuning Grids\n\nThe tuning parameter grid can be specified by the user. The argument tuneGrid can take a data frame with columns for each tuning parameter. The column names should be the same as the fitting function’s arguments. For the previously mentioned RDA example, the names would be gamma and lambda. 
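As a minimal sketch of such a grid (an editor's illustration, not from the original page; rdaGrid is a hypothetical name), the RDA grid could be built with expand.grid, using column names that match the gamma and lambda tuning parameters noted above:\n\nrdaGrid <- expand.grid(gamma = seq(0, 1, length = 4),\nlambda = seq(0, 1, length = 4))\n## 16 rows, one per gamma/lambda combination\nnrow(rdaGrid)\n\nPassing this data frame to train via tuneGrid = rdaGrid would evaluate exactly these candidate values instead of the default grid.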
train will tune the model over each combination of values in the rows.\n\nFor the boosted tree model, we can fix the learning rate and evaluate more than three values of n.trees:\n\ngbmGrid <- expand.grid(interaction.depth = c(1, 5, 9),\nn.trees = (1:30)*50,\nshrinkage = 0.1,\nn.minobsinnode = 20)\n\nnrow(gbmGrid)\n\nset.seed(825)\ngbmFit2 <- train(Class ~ ., data = training,\nmethod = \"gbm\",\ntrControl = fitControl,\nverbose = FALSE,\n## Now specify the exact models\n## to evaluate:\ntuneGrid = gbmGrid)\ngbmFit2\n## Stochastic Gradient Boosting\n##\n## 157 samples\n## 60 predictor\n## 2 classes: 'M', 'R'\n##\n## No pre-processing\n## Resampling: Cross-Validated (10 fold, repeated 10 times)\n## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ...\n## Resampling results across tuning parameters:\n##\n## interaction.depth n.trees Accuracy Kappa\n## 1 50 0.78 0.56\n## 1 100 0.81 0.61\n## 1 150 0.82 0.63\n## 1 200 0.83 0.65\n## 1 250 0.82 0.65\n## 1 300 0.83 0.65\n## : : : :\n## 9 1350 0.85 0.69\n## 9 1400 0.85 0.69\n## 9 1450 0.85 0.69\n## 9 1500 0.85 0.69\n##\n## Tuning parameter 'shrinkage' was held constant at a value of 0.1\n##\n## Tuning parameter 'n.minobsinnode' was held constant at a value of 20\n## Accuracy was used to select the optimal model using the largest value.\n## The final values used for the model were n.trees = 1200,\n## interaction.depth = 9, shrinkage = 0.1 and n.minobsinnode = 20.\n\nAnother option is to use a random sample of possible tuning parameter combinations, i.e. “random search”(pdf). This functionality is described on this page.\n\nTo use a random search, use the option search = \"random\" in the call to trainControl. In this situation, the tuneLength parameter defines the total number of parameter combinations that will be evaluated.\n\n### 5.5.3 Plotting the Resampling Profile\n\nThe plot function can be used to examine the relationship between the estimates of performance and the tuning parameters. For example, a simple invokation of the function shows the results for the first performance measure:\n\ntrellis.par.set(caretTheme())\nplot(gbmFit2)", null, "Other performance metrics can be shown using the metric option:\n\ntrellis.par.set(caretTheme())\nplot(gbmFit2, metric = \"Kappa\")", null, "Other types of plot are also available. See ?plot.train for more details. The code below shows a heatmap of the results:\n\ntrellis.par.set(caretTheme())\nplot(gbmFit2, metric = \"Kappa\", plotType = \"level\",\nscales = list(x = list(rot = 90)))", null, "A ggplot method can also be used:\n\nggplot(gbmFit2)", null, "There are also plot functions that show more detailed representations of the resampled estimates. See ?xyplot.train for more details.\n\nFrom these plots, a different set of tuning parameters may be desired. To change the final values without starting the whole process again, the update.train can be used to refit the final model. See ?update.train\n\n### 5.5.4 The trainControl Function\n\nThe function trainControl generates parameters that further control how models are created, with possible values:\n\n• method: The resampling method: \"boot\", \"cv\", \"LOOCV\", \"LGOCV\", \"repeatedcv\", \"timeslice\", \"none\" and \"oob\". The last value, out-of-bag estimates, can only be used by random forest, bagged trees, bagged earth, bagged flexible discriminant analysis, or conditional tree forest models. 
GBM models are not included (the gbm package maintainer has indicated that it would not be a good idea to choose tuning parameter values based on the model OOB error estimates with boosted trees). Also, for leave-one-out cross-validation, no uncertainty estimates are given for the resampled performance measures.\n• number and repeats: number controls the number of folds in K-fold cross-validation or the number of resampling iterations for bootstrapping and leave-group-out cross-validation. repeats applies only to repeated K-fold cross-validation. Suppose that method = \"repeatedcv\", number = 10 and repeats = 3; then three separate 10-fold cross-validations are used as the resampling scheme.\n• verboseIter: A logical for printing a training log.\n• returnData: A logical for saving the data into a slot called trainingData.\n• p: For leave-group-out cross-validation: the training percentage.\n• For method = \"timeslice\", trainControl has options initialWindow, horizon and fixedWindow that govern how cross-validation can be used for time series data.\n• classProbs: a logical value determining whether class probabilities should be computed for held-out samples during resampling.\n• index and indexOut: optional lists with elements for each resampling iteration. Each list element is the sample rows used for training at that iteration or should be held out. When these values are not specified, train will generate them.\n• summaryFunction: a function to compute alternate performance summaries.\n• selectionFunction: a function to choose the optimal tuning parameters.\n• PCAthresh, ICAcomp and k: these are all options to pass to the preProcess function (when used).\n• returnResamp: a character string containing one of the following values: \"all\", \"final\" or \"none\". This specifies how much of the resampled performance measures to save.\n• allowParallel: a logical that governs whether train should use parallel processing (if available).\n\nThere are several other options not discussed here.\n\n### 5.5.5 Alternate Performance Metrics\n\nThe user can change the metric used to determine the best settings. By default, RMSE, R2, and the mean absolute error (MAE) are computed for regression, while accuracy and Kappa are computed for classification. Also by default, the parameter values are chosen using RMSE and accuracy, respectively, for regression and classification. The metric argument of the train function allows the user to control which optimality criterion is used. For example, in problems where there is a low percentage of samples in one class, using metric = \"Kappa\" can improve the quality of the final model.\n\nIf none of these metrics is satisfactory, the user can also compute custom performance metrics. The trainControl function has an argument called summaryFunction that specifies a function for computing performance. The function should have these arguments:\n\n• data is a reference for a data frame or matrix with columns called obs and pred for the observed and predicted outcome values (either numeric data for regression or character values for classification). Currently, class probabilities are not passed to the function. The values in data are the held-out predictions (and their associated reference values) for a single combination of tuning parameters. If the classProbs argument of the trainControl object is set to TRUE, additional columns in data will be present that contain the class probabilities. The names of these columns are the same as the class levels.
Also, if weights were specified in the call to train, a column called weights will also be in the data set. Additionally, if the recipe method for train was used (see this section of documentation), other variables not used in the model will also be included. This can be accomplished by adding a role in the recipe of \"performance var\". An example is given in the recipe section of this site.\n• lev is a character string that has the outcome factor levels taken from the training data. For regression, a value of NULL is passed into the function.\n• model is a character string for the model being used (i.e. the value passed to the method argument of train).\n\nThe output to the function should be a vector of numeric summary metrics with non-null names. By default, train evaluate classification models in terms of the predicted classes. Optionally, class probabilities can also be used to measure performance. To obtain predicted class probabilities within the resampling process, the argument classProbs in trainControl must be set to TRUE. This merges columns of probabilities into the predictions generated from each resample (there is a column per class and the column names are the class names).\n\nAs shown in the last section, custom functions can be used to calculate performance scores that are averaged over the resamples. Another built-in function, twoClassSummary, will compute the sensitivity, specificity and area under the ROC curve:\n\nhead(twoClassSummary)\n##\n## 1 function (data, lev = NULL, model = NULL)\n## 2 {\n## 3 lvls <- levels(data$obs) ## 4 if (length(lvls) > 2) ## 5 stop(paste(\"Your outcome has\", length(lvls), \"levels. The twoClassSummary() function isn't appropriate.\")) ## 6 requireNamespaceQuietStop(\"ModelMetrics\") To rebuild the boosted tree model using this criterion, we can see the relationship between the tuning parameters and the area under the ROC curve using the following code: fitControl <- trainControl(method = \"repeatedcv\", number = 10, repeats = 10, ## Estimate class probabilities classProbs = TRUE, ## Evaluate performance using ## the following function summaryFunction = twoClassSummary) set.seed(825) gbmFit3 <- train(Class ~ ., data = training, method = \"gbm\", trControl = fitControl, verbose = FALSE, tuneGrid = gbmGrid, ## Specify which metric to optimize metric = \"ROC\") gbmFit3 ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## interaction.depth n.trees ROC Sens Spec ## 1 50 0.86 0.86 0.69 ## 1 100 0.88 0.85 0.75 ## 1 150 0.89 0.86 0.77 ## 1 200 0.90 0.87 0.78 ## 1 250 0.90 0.86 0.78 ## 1 300 0.90 0.87 0.78 ## : : : : : ## 9 1350 0.92 0.88 0.81 ## 9 1400 0.92 0.88 0.80 ## 9 1450 0.92 0.88 0.81 ## 9 1500 0.92 0.88 0.80 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 20 ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 1450, ## interaction.depth = 5, shrinkage = 0.1 and n.minobsinnode = 20. In this case, the average area under the ROC curve associated with the optimal tuning parameters was 0.922 across the 100 resamples. 
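Before moving on, here is a minimal sketch of a user-defined summary function (an editor's illustration, not from the original page; bigSummary is a hypothetical name). It follows the data/lev/model signature described above, assumes classProbs = TRUE so that the per-class probability columns are present, and returns a named numeric vector as required:\n\nbigSummary <- function(data, lev = NULL, model = NULL) {\n## overall agreement between observed and predicted classes\nacc <- mean(data$obs == data$pred)\n## average held-out probability assigned to the first class level\nmeanProb <- mean(data[, lev[1]])\nout <- c(acc, meanProb)\nnames(out) <- c(\"Accuracy\", paste0(\"Mean_\", lev[1], \"_Prob\"))\nout\n}\n\nSuch a function would be supplied through trainControl(summaryFunction = bigSummary, classProbs = TRUE), and one of its returned names (for example \"Accuracy\") passed to the metric argument of train.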
## 5.6 Choosing the Final Model Another method for customizing the tuning process is to modify the algorithm that is used to select the “best” parameter values, given the performance numbers. By default, the train function chooses the model with the largest performance value (or smallest, for mean squared error in regression models). Other schemes for selecting model can be used. Breiman et al (1984) suggested the “one standard error rule” for simple tree-based models. In this case, the model with the best performance value is identified and, using resampling, we can estimate the standard error of performance. The final model used was the simplest model within one standard error of the (empirically) best model. With simple trees this makes sense, since these models will start to over-fit as they become more and more specific to the training data. train allows the user to specify alternate rules for selecting the final model. The argument selectionFunction can be used to supply a function to algorithmically determine the final model. There are three existing functions in the package: best is chooses the largest/smallest value, oneSE attempts to capture the spirit of Breiman et al (1984) and tolerance selects the least complex model within some percent tolerance of the best value. See ?best for more details. User-defined functions can be used, as long as they have the following arguments: • x is a data frame containing the tune parameters and their associated performance metrics. Each row corresponds to a different tuning parameter combination. • metric a character string indicating which performance metric should be optimized (this is passed in directly from the metric argument of train. • maximize is a single logical value indicating whether larger values of the performance metric are better (this is also directly passed from the call to train). The function should output a single integer indicating which row in x is chosen. As an example, if we chose the previous boosted tree model on the basis of overall accuracy, we would choose: n.trees = 1450, interaction.depth = 5, shrinkage = 0.1, n.minobsinnode = 20. However, the scale in this plots is fairly tight, with accuracy values ranging from 0.863 to 0.922. A less complex model (e.g. fewer, more shallow trees) might also yield acceptable accuracy. The tolerance function could be used to find a less complex model based on (x-xbest)/xbestx 100, which is the percent difference. For example, to select parameter values based on a 2% loss of performance: whichTwoPct <- tolerance(gbmFit3$results, metric = \"ROC\",\ntol = 2, maximize = TRUE)\ncat(\"best model within 2 pct of best:\\n\")\n## best model within 2 pct of best:\ngbmFit3$results[whichTwoPct,1:6] ## shrinkage interaction.depth n.minobsinnode n.trees ROC Sens ## 32 0.1 5 20 100 0.9139707 0.8645833 This indicates that we can get a less complex model with an area under the ROC curve of 0.914 (compared to the “pick the best” value of 0.922). The main issue with these functions is related to ordering the models from simplest to complex. In some cases, this is easy (e.g. simple trees, partial least squares), but in cases such as this model, the ordering of models is subjective. For example, is a boosted tree model using 100 iterations and a tree depth of 2 more complex than one with 50 iterations and a depth of 8? The package makes some choices regarding the orderings. 
In the case of boosted trees, the package assumes that increasing the number of iterations adds complexity at a faster rate than increasing the tree depth, so models are ordered on the number of iterations then ordered with depth. See ?best for more examples for specific models. ## 5.7 Extracting Predictions and Class Probabilities As previously mentioned, objects produced by the train function contain the “optimized” model in the finalModel sub-object. Predictions can be made from these objects as usual. In some cases, such as pls or gbm objects, additional parameters from the optimized fit may need to be specified. In these cases, the train objects uses the results of the parameter optimization to predict new samples. For example, if predictions were created using predict.gbm, the user would have to specify the number of trees directly (there is no default). Also, for binary classification, the predictions from this function take the form of the probability of one of the classes, so extra steps are required to convert this to a factor vector. predict.train automatically handles these details for this (and for other models). Also, there are very few standard syntaxes for model predictions in R. For example, to get class probabilities, many predict methods have an argument called type that is used to specify whether the classes or probabilities should be generated. Different packages use different values of type, such as \"prob\", \"posterior\", \"response\", \"probability\" or \"raw\". In other cases, completely different syntax is used. For predict.train, the type options are standardized to be \"class\" and \"prob\" (the underlying code matches these to the appropriate choices for each model. For example: predict(gbmFit3, newdata = head(testing)) ## R M R M R M ## Levels: M R predict(gbmFit3, newdata = head(testing), type = \"prob\") ## M R ## 1 3.215213e-02 9.678479e-01 ## 2 1.000000e+00 3.965815e-08 ## 3 6.996088e-13 1.000000e+00 ## 4 9.070652e-01 9.293483e-02 ## 5 2.029754e-03 9.979702e-01 ## 6 9.999662e-01 3.377548e-05 ## 5.8 Exploring and Comparing Resampling Distributions ### 5.8.1 Within-Model There are several lattice functions than can be used to explore relationships between tuning parameters and the resampling results for a specific model: • xyplot and stripplot can be used to plot resampling statistics against (numeric) tuning parameters. • histogram and densityplot can also be used to look at distributions of the tuning parameters across tuning parameters. For example, the following statements create a density plot: trellis.par.set(caretTheme()) densityplot(gbmFit3, pch = \"|\")", null, "Note that if you are interested in plotting the resampling results across multiple tuning parameters, the option resamples = \"all\" should be used in the control object. ### 5.8.2 Between-Models The caret package also includes functions to characterize the differences between models (generated using train, sbf or rfe) via their resampling distributions. These functions are based on the work of Hothorn et al. (2005) and Eugster et al (2008). First, a support vector machine model is fit to the Sonar data. The data are centered and scaled using the preProc argument. Note that the same random number seed is set prior to the model that is identical to the seed used for the boosted tree model. This ensures that the same resampling sets are used, which will come in handy when we compare the resampling profiles between models. 
set.seed(825) svmFit <- train(Class ~ ., data = training, method = \"svmRadial\", trControl = fitControl, preProc = c(\"center\", \"scale\"), tuneLength = 8, metric = \"ROC\") svmFit ## Support Vector Machines with Radial Basis Function Kernel ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## Pre-processing: centered (60), scaled (60) ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## C ROC Sens Spec ## 0.25 0.8438318 0.7373611 0.7230357 ## 0.50 0.8714459 0.8083333 0.7316071 ## 1.00 0.8921354 0.8031944 0.7653571 ## 2.00 0.9116171 0.8358333 0.7925000 ## 4.00 0.9298934 0.8525000 0.8201786 ## 8.00 0.9318899 0.8684722 0.8217857 ## 16.00 0.9339658 0.8730556 0.8205357 ## 32.00 0.9339658 0.8776389 0.8276786 ## ## Tuning parameter 'sigma' was held constant at a value of 0.01181293 ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were sigma = 0.01181293 and C = 16. Also, a regularized discriminant analysis model was fit. set.seed(825) rdaFit <- train(Class ~ ., data = training, method = \"rda\", trControl = fitControl, tuneLength = 4, metric = \"ROC\") rdaFit ## Regularized Discriminant Analysis ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## gamma lambda ROC Sens Spec ## 0.0000000 0.0000000 0.6426029 0.9311111 0.3364286 ## 0.0000000 0.3333333 0.8543564 0.8076389 0.7585714 ## 0.0000000 0.6666667 0.8596577 0.8083333 0.7766071 ## 0.0000000 1.0000000 0.7950670 0.7677778 0.6925000 ## 0.3333333 0.0000000 0.8509276 0.8502778 0.6914286 ## 0.3333333 0.3333333 0.8650372 0.8676389 0.6866071 ## 0.3333333 0.6666667 0.8698115 0.8604167 0.6941071 ## 0.3333333 1.0000000 0.8336930 0.7597222 0.7542857 ## 0.6666667 0.0000000 0.8600868 0.8756944 0.6482143 ## 0.6666667 0.3333333 0.8692981 0.8794444 0.6446429 ## 0.6666667 0.6666667 0.8678547 0.8355556 0.6892857 ## 0.6666667 1.0000000 0.8277133 0.7445833 0.7448214 ## 1.0000000 0.0000000 0.7059797 0.6888889 0.6032143 ## 1.0000000 0.3333333 0.7098313 0.6830556 0.6101786 ## 1.0000000 0.6666667 0.7129489 0.6672222 0.6173214 ## 1.0000000 1.0000000 0.7193031 0.6626389 0.6296429 ## ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were gamma = 0.3333333 and lambda ## = 0.6666667. Given these models, can we make statistical statements about their performance differences? To do this, we first collect the resampling results using resamples. resamps <- resamples(list(GBM = gbmFit3, SVM = svmFit, RDA = rdaFit)) resamps ## ## Call: ## resamples.default(x = list(GBM = gbmFit3, SVM = svmFit, RDA = rdaFit)) ## ## Models: GBM, SVM, RDA ## Number of resamples: 100 ## Performance metrics: ROC, Sens, Spec ## Time estimates for: everything, final model fit summary(resamps) ## ## Call: ## summary.resamples(object = resamps) ## ## Models: GBM, SVM, RDA ## Number of resamples: 100 ## ## ROC ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.6964286 0.874504 0.9375000 0.9216270 0.9821429 1 0 ## SVM 0.7321429 0.905878 0.9464286 0.9339658 0.9821429 1 0 ## RDA 0.5625000 0.812500 0.8750000 0.8698115 0.9392361 1 0 ## ## Sens ## Min. 1st Qu. Median Mean 3rd Qu. Max. 
NA's ## GBM 0.5555556 0.7777778 0.8750000 0.8776389 1 1 0 ## SVM 0.5000000 0.7777778 0.8888889 0.8730556 1 1 0 ## RDA 0.4444444 0.7777778 0.8750000 0.8604167 1 1 0 ## ## Spec ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.4285714 0.7142857 0.8571429 0.8133929 1.0000000 1 0 ## SVM 0.4285714 0.7142857 0.8571429 0.8205357 0.9062500 1 0 ## RDA 0.1428571 0.5714286 0.7142857 0.6941071 0.8571429 1 0 Note that, in this case, the option resamples = \"final\" should be user-defined in the control objects. There are several lattice plot methods that can be used to visualize the resampling distributions: density plots, box-whisker plots, scatterplot matrices and scatterplots of summary statistics. For example: theme1 <- trellis.par.get() theme1$plot.symbol$col = rgb(.2, .2, .2, .4) theme1$plot.symbol$pch = 16 theme1$plot.line$col = rgb(1, 0, 0, .7) theme1$plot.line\\$lwd <- 2\ntrellis.par.set(theme1)\nbwplot(resamps, layout = c(3, 1))", null, "trellis.par.set(caretTheme())\ndotplot(resamps, metric = \"ROC\")", null, "trellis.par.set(theme1)\nxyplot(resamps, what = \"BlandAltman\")", null, "splom(resamps)", null, "Other visualizations are availible in densityplot.resamples and parallel.resamples\n\nSince models are fit on the same versions of the training data, it makes sense to make inferences on the differences between models. In this way we reduce the within-resample correlation that may exist. We can compute the differences, then use a simple t-test to evaluate the null hypothesis that there is no difference between models.\n\ndifValues <- diff(resamps)\ndifValues\n##\n## Call:\n## diff.resamples(x = resamps)\n##\n## Models: GBM, SVM, RDA\n## Metrics: ROC, Sens, Spec\n## Number of differences: 3\n## p-value adjustment: bonferroni\nsummary(difValues)\n##\n## Call:\n## summary.diff.resamples(object = difValues)\n##\n## p-value adjustment: bonferroni\n## Upper diagonal: estimates of the difference\n## Lower diagonal: p-value for H0: difference = 0\n##\n## ROC\n## GBM SVM RDA\n## GBM -0.01234 0.05182\n## SVM 0.3388 0.06415\n## RDA 5.988e-07 2.638e-10\n##\n## Sens\n## GBM SVM RDA\n## GBM 0.004583 0.017222\n## SVM 1.0000 0.012639\n## RDA 0.5187 1.0000\n##\n## Spec\n## GBM SVM RDA\n## GBM -0.007143 0.119286\n## SVM 1 0.126429\n## RDA 5.300e-07 1.921e-10\ntrellis.par.set(theme1)\nbwplot(difValues, layout = c(3, 1))", null, "trellis.par.set(caretTheme())\ndotplot(difValues)", null, "## 5.9 Fitting Models Without Parameter Tuning\n\nIn cases where the model tuning values are known, train can be used to fit the model to the entire training set without any resampling or parameter tuning. Using the method = \"none\" option in trainControl can be used. 
For example:\n\nfitControl <- trainControl(method = \"none\", classProbs = TRUE)\n\nset.seed(825)\ngbmFit4 <- train(Class ~ ., data = training,\nmethod = \"gbm\",\ntrControl = fitControl,\nverbose = FALSE,\n## Only a single model can be passed to the\n## function when no resampling is used:\ntuneGrid = data.frame(interaction.depth = 4,\nn.trees = 100,\nshrinkage = .1,\nn.minobsinnode = 20),\nmetric = \"ROC\")\ngbmFit4\n## Stochastic Gradient Boosting\n##\n## 157 samples\n## 60 predictor\n## 2 classes: 'M', 'R'\n##\n## No pre-processing\n## Resampling: None\n\nNote that plot.train, resamples, confusionMatrix.train and several other functions will not work with this object but predict.train and others will:\n\npredict(gbmFit4, newdata = head(testing))\n## R M R R M M\n## Levels: M R\npredict(gbmFit4, newdata = head(testing), type = \"prob\")\n## M R\n## 1 0.264671996 0.73532800\n## 2 0.960445979 0.03955402\n## 3 0.005731862 0.99426814\n## 4 0.298628996 0.70137100\n## 5 0.503935367 0.49606463\n## 6 0.813716635 0.18628336" ]
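As a brief follow-up to the caret walkthrough above (an editor's sketch, not text from the original page; testPred is a hypothetical name): once a tuned model such as gbmFit3 is in hand, the held-out test split created earlier with createDataPartition can be scored with predict.train and summarized with caret's confusionMatrix:

testPred <- predict(gbmFit3, newdata = testing)
## cross-tabulates predicted vs. observed classes and reports accuracy,
## Kappa, sensitivity and specificity on data not used for tuning
confusionMatrix(data = testPred, reference = testing$Class)

Because the testing rows played no role in resampling or parameter selection, this gives an assessment that is independent of the tuning process.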
[ null, "https://topepo.github.io/caret/premade/TrainAlgo.png", null, "https://topepo.github.io/caret/basic/train_plot1-1.svg", null, "https://topepo.github.io/caret/basic/train_plot2-1.svg", null, "https://topepo.github.io/caret/basic/train_plot3-1.svg", null, "https://topepo.github.io/caret/basic/train_ggplot1-1.svg", null, "https://topepo.github.io/caret/basic/4-1.svg", null, "https://topepo.github.io/caret/basic/train_resample_box-1.svg", null, "https://topepo.github.io/caret/basic/train_resample_ci-1.svg", null, "https://topepo.github.io/caret/basic/train_resample_ba-1.svg", null, "https://topepo.github.io/caret/basic/train_resample_scatmat-1.svg", null, "https://topepo.github.io/caret/basic/train_diff_box-1.svg", null, "https://topepo.github.io/caret/basic/train_diff_ci-1.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72691435,"math_prob":0.926358,"size":34336,"snap":"2019-13-2019-22","text_gpt3_token_len":9459,"char_repetition_ratio":0.14511243,"word_repetition_ratio":0.11211841,"special_character_ratio":0.3095876,"punctuation_ratio":0.17296815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97968954,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-23T05:29:37Z\",\"WARC-Record-ID\":\"<urn:uuid:0ae44056-eedb-4b58-858c-68054f648030>\",\"Content-Length\":\"112286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:506890d0-dcf1-483b-b74e-ed8a1993d574>\",\"WARC-Concurrent-To\":\"<urn:uuid:95f135a2-d41d-4601-955d-272186ae5198>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://topepo.github.io/caret/model-training-and-tuning.html\",\"WARC-Payload-Digest\":\"sha1:YK2LPF32BNLUFUUUFVPLY5BMQXQNL33H\",\"WARC-Block-Digest\":\"sha1:22OTK2JA3ZJ4ICZ4PIUAAABKPB4JM3PO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257100.22_warc_CC-MAIN-20190523043611-20190523065611-00040.warc.gz\"}"}
http://www.worldlibrary.in/articles/eng/Taylor-Proudman_theorem
[ "### Taylor-Proudman Theorem\n\nIn fluid mechanics, the Taylor–Proudman theorem (after G. I. Taylor and Joseph Proudman) states that when a solid body is moved slowly within a fluid that is steadily rotated with a high angular velocity $\\Omega$, the fluid velocity will be uniform along any line parallel to the axis of rotation. $\\Omega$ must be large compared to the movement of the solid body in order to make the coriolis force large compared to the acceleration terms.\n\nThat this is so may be seen by considering the Navier–Stokes equations for steady flow, with zero viscosity and a body force corresponding to the Coriolis force, which are:\n\n\n\n\\rho({\\mathbf u}\\cdot\\nabla){\\mathbf u}={\\mathbf F}-\\nabla p, where $\\left\\{\\mathbf u\\right\\}$ is the fluid velocity, $\\rho$ is the fluid density, and $p$ the pressure. If we now make the assumption that $F=\\nabla\\Phi$ is scalar potential and the advective term may be neglected (reasonable if the Rossby number is much less than unity) and that the flow is incompressible (density is constant) then the equations become:\n\n\n\n2\\rho\\Omega\\times{\\mathbf u}=\\nabla \\Phi -\\nabla p, where $\\Omega$ is the angular velocity vector. If the curl of this equation is taken, the result is the Taylor–Proudman theorem:\n\n\n\n({\\mathbf\\Omega}\\cdot\\nabla){\\mathbf u}={\\mathbf 0}.\n\nTo derive this, one needs the vector identities\n\n$\\nabla\\times\\left(A\\times B\\right)=A\\left(\\nabla\\cdot B\\right)-\\left(A\\cdot\\nabla\\right)B+\\left(B\\cdot\\nabla\\right)A-B\\left(\\nabla\\cdot A\\right)$\n\nand\n\n$\\nabla\\times\\left(\\nabla p\\right)=0\\$\n\nand\n\n$\\nabla\\times\\left(\\nabla \\Phi\\right)=0\\$\n\n(because the curl of the gradient is always equal to zero). Note that $\\nabla\\cdot\\left\\{\\mathbf\\Omega\\right\\}=0$ is also needed (angular velocity is divergence-free).\n\nThe vector form of the Taylor–Proudman theorem is perhaps better understood by expanding the dot product:\n\n\n\n\\Omega_x\\frac{\\partial {\\mathbf u}}{\\partial x} + \\Omega_y\\frac{\\partial {\\mathbf u}}{\\partial y} + \\Omega_z\\frac{\\partial {\\mathbf u}}{\\partial z}=0.\n\nNow choose coordinates in which $\\Omega_x=\\Omega_y=0$ and then the equations reduce to\n\n\n\n\\frac{\\partial{\\mathbf u}}{\\partial z}=0, if $\\Omega_z\\neq 0$. Note that the implication is that all three components of the velocity vector are uniform along any line parallel to the z-axis.\n\n## Taylor Column\n\nThe Taylor column is an imaginary cylinder projected above and below a real cylinder that has been placed parallel to the rotation axis (anywhere in the flow, not necessarily in the center). The flow will curve around the imaginary cylinders just like the real due to the Taylor–Proudman theorem, which states that the flow in a rotating, homogenous, inviscid fluid are 2-dimensional in the plane orthogonal to the rotation axis and thus there is no variation in the flow along the $\\vec\\left\\{\\Omega\\right\\}$axis, often taken to be the $\\hat\\left\\{z\\right\\}$ axis.\n\nThe Taylor column is a simplified, experimentally observed effect of what transpires in the Earth's atmospheres and oceans." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88631624,"math_prob":0.9981958,"size":3782,"snap":"2021-04-2021-17","text_gpt3_token_len":843,"char_repetition_ratio":0.116463736,"word_repetition_ratio":0.017605634,"special_character_ratio":0.21152829,"punctuation_ratio":0.116546765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998671,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T07:11:07Z\",\"WARC-Record-ID\":\"<urn:uuid:a977cc85-e11c-4e8c-863d-310fffe3df36>\",\"Content-Length\":\"65272\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48f47d29-09eb-44be-9539-17b012c3169c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0218b08b-ef0f-47c8-93f8-1b36f41b1e14>\",\"WARC-IP-Address\":\"72.235.245.98\",\"WARC-Target-URI\":\"http://www.worldlibrary.in/articles/eng/Taylor-Proudman_theorem\",\"WARC-Payload-Digest\":\"sha1:INGMSD2YLYVJOTMQEUWHU4JODNABFJNN\",\"WARC-Block-Digest\":\"sha1:CKVSJQUGAGNS4ULZY5IUWAFXEQII3EBA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038878326.67_warc_CC-MAIN-20210419045820-20210419075820-00335.warc.gz\"}"}
https://socratic.org/questions/how-do-you-graph-the-system-4x-24-and-10x-5y-20
[ "# How do you graph the system -4x > -24 and 10x+5y<20?\n\nFirst graph the lines 4x=24 and 10x+5y=20. The graph of the region would be that lying the left of both 4x=24 and 10x+5y=20. The graph would look like this", null, "" ]
[ null, "https://d2jmvrsizmvf4x.cloudfront.net/7rrXNGxLSDea97yAMSVE_Capture.PNG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8521604,"math_prob":0.87495357,"size":360,"snap":"2020-10-2020-16","text_gpt3_token_len":114,"char_repetition_ratio":0.1264045,"word_repetition_ratio":0.032786883,"special_character_ratio":0.32222223,"punctuation_ratio":0.05263158,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9872872,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T09:35:50Z\",\"WARC-Record-ID\":\"<urn:uuid:87a2aad2-1f31-4693-9c65-d35406b782b2>\",\"Content-Length\":\"32584\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:833946bc-3b64-4fc9-b6f6-277abde59cc9>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e63f768-f802-42a2-9cbf-6138c3cfa068>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-graph-the-system-4x-24-and-10x-5y-20\",\"WARC-Payload-Digest\":\"sha1:B65CX5XQON64AWSMEZ4K6JKILOHJXE7F\",\"WARC-Block-Digest\":\"sha1:2YH3ARXDZE2FUJUEHRTKI3R77URWWS2I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370496901.28_warc_CC-MAIN-20200330085157-20200330115157-00296.warc.gz\"}"}
https://docs.fast.ai/callbacks.html
[ "Callbacks implemented in the fastai library\n\n## List of callbacks¶\n\nfastai's training loop is highly extensible, with a rich callback system. See the `callback` docs if you're interested in writing your own callback. See below for a list of callbacks that are provided with fastai, grouped by the module they're defined in.\n\nEvery callback that is passed to `Learner` with the `callback_fns` parameter will be automatically stored as an attribute. The attribute name is snake-cased, so for instance `ActivationStats` will appear as `learn.activation_stats` (assuming your object is named `learn`).\n\n## `Callback`¶\n\nThis sub-package contains more sophisticated callbacks that each are in their own module. They are (click the link for more details):\n\n### `LRFinder`¶\n\nUse Leslie Smith's learning rate finder to find a good learning rate for training your model. Let's see an example of use on the MNIST dataset with a simple CNN.\n\n```path = untar_data(URLs.MNIST_SAMPLE)\ndata = ImageDataBunch.from_folder(path)\ndef simple_learner(): return Learner(data, simple_cnn((3,16,16,2)), metrics=[accuracy])\nlearn = simple_learner()\n```\n\nThe fastai librairy already has a Learner method called `lr_find` that uses `LRFinder` to plot the loss as a function of the learning rate\n\n```learn.lr_find()\n```\n```LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.\n```\n```learn.recorder.plot()\n```", null, "In this example, a learning rate around 2e-2 seems like the right fit.\n\n```lr = 2e-2\n```\n\n### `OneCycleScheduler`¶\n\nTrain with Leslie Smith's 1cycle annealing method. Let's train our simple learner using the one cycle policy.\n\n```learn.fit_one_cycle(3, lr)\n```\nTotal time: 00:07\n\nepoch train_loss valid_loss accuracy time\n0 0.109439 0.059349 0.980864 00:02\n1 0.039582 0.023152 0.992149 00:02\n2 0.019009 0.021239 0.991659 00:02\n\nThe learning rate and the momentum were changed during the epochs as follows (more info on the dedicated documentation page).\n\n```learn.recorder.plot_lr(show_moms=True)\n```", null, "### `MixUpCallback`¶\n\nData augmentation using the method from mixup: Beyond Empirical Risk Minimization. It is very simple to add mixup in fastai :\n\n```learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy]).mixup()\n```\n\n### `CSVLogger`¶\n\nLog the results of training in a csv file. Simply pass the CSVLogger callback to the Learner.\n\n```learn = Learner(data, simple_cnn((3, 16, 16, 2)), metrics=[accuracy, error_rate], callback_fns=[CSVLogger])\n```\n```learn.fit(3)\n```\nTotal time: 00:07\n\nepoch train_loss valid_loss accuracy error_rate time\n0 0.127259 0.098069 0.969578 0.030422 00:02\n1 0.084601 0.068024 0.974975 0.025025 00:02\n2 0.055074 0.047266 0.983317 0.016683 00:02\n\nYou can then read the csv.\n\n```learn.csv_logger.read_logged_file()\n```\nepoch train_loss valid_loss accuracy error_rate\n0 0 0.127259 0.098069 0.969578 0.030422\n1 1 0.084601 0.068024 0.974975 0.025025\n2 2 0.055074 0.047266 0.983317 0.016683\n\n### `GeneralScheduler`¶\n\nCreate your own multi-stage annealing schemes with a convenient API. 
To illustrate, let's implement a 2 phase schedule.\n\n```def fit_odd_shedule(learn, lr):\nn = len(learn.data.train_dl)\nphases = [TrainingPhase(n).schedule_hp('lr', lr, anneal=annealing_cos),\nTrainingPhase(n*2).schedule_hp('lr', lr, anneal=annealing_poly(2))]\nsched = GeneralScheduler(learn, phases)\nlearn.callbacks.append(sched)\ntotal_epochs = 3\nlearn.fit(total_epochs)\n```\n```learn = Learner(data, simple_cnn((3,16,16,2)), metrics=accuracy)\nfit_odd_shedule(learn, 1e-3)\n```\nTotal time: 00:07\n\nepoch train_loss valid_loss accuracy time\n0 0.176607 0.157229 0.946025 00:02\n1 0.140903 0.133690 0.954367 00:02\n2 0.130910 0.131156 0.956820 00:02\n```learn.recorder.plot_lr()\n```", null, "### `MixedPrecision`¶\n\nUse fp16 to take advantage of tensor cores on recent NVIDIA GPUs for a 200% or more speedup.\n\n### `HookCallback`¶\n\nConvenient wrapper for registering and automatically deregistering PyTorch hooks. Also contains pre-defined hook callback: `ActivationStats`.\n\n### `RNNTrainer`¶\n\nCallback taking care of all the tweaks to train an RNN.\n\n### `TerminateOnNaNCallback`¶\n\nStop training if the loss reaches NaN.\n\n### `EarlyStoppingCallback`¶\n\nStop training if a given metric/validation loss doesn't improve.\n\n### `SaveModelCallback`¶\n\nSave the model at every epoch, or the best model for a given metric/validation loss.\n\n```learn = Learner(data, simple_cnn((3,16,16,2)), metrics=accuracy)\nlearn.fit_one_cycle(3,1e-4, callbacks=[SaveModelCallback(learn, every='epoch', monitor='accuracy')])\n```\nTotal time: 00:07\n\nepoch train_loss valid_loss accuracy time\n0 0.679189 0.646599 0.804220 00:02\n1 0.527475 0.497290 0.908243 00:02\n2 0.464756 0.462471 0.917076 00:02\n```!ls ~/.fastai/data/mnist_sample/models\n```\n```best.pth\t bestmodel_2.pth model_1.pth model_4.pth stage-1.pth\nbestmodel_0.pth bestmodel_3.pth model_2.pth model_5.pth tmp.pth\nbestmodel_1.pth model_0.pth\t model_3.pth one_epoch.pth trained_model.pth\n```\n\n### `ReduceLROnPlateauCallback`¶\n\nReduce the learning rate each time a given metric/validation loss doesn't improve by a certain factor.\n\n### `PeakMemMetric`¶\n\nGPU and general RAM profiling callback\n\n### `StopAfterNBatches`¶\n\nStop training after n batches of the first epoch.\n\n### `LearnerTensorboardWriter`¶\n\nBroadly useful callback for Learners that writes to Tensorboard. Writes model histograms, losses/metrics, embedding projector and gradient stats.\n\n## `train` and `basic_train`¶\n\n### `Recorder`¶\n\nTrack per-batch and per-epoch smoothed losses and metrics.\n\n### `ShowGraph`¶\n\nDynamically display a learning chart during training.\n\n### `BnFreeze`¶\n\nFreeze batchnorm layer moving average statistics for non-trainable layers." ]
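One detail worth writing out for the GeneralScheduler example above (an editor's note; the exact code lives in the fastai source, so treat this as the conventional form rather than the library's literal implementation): annealing a hyper-parameter from a starting value $h_{\mathrm{start}}$ to an ending value $h_{\mathrm{end}}$ along half a cosine, as annealing_cos does, is usually written

$$h(t) = h_{\mathrm{end}} + \frac{h_{\mathrm{start}} - h_{\mathrm{end}}}{2}\left(1 + \cos\frac{\pi t}{T}\right), \qquad 0 \le t \le T,$$

while annealing_poly(2) follows a degree-2 polynomial decay between the same two endpoints over a phase of length $T$ (measured in batches here, since each TrainingPhase in the example is given a length in batches).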
[ null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAYsAAAEKCAYAAADjDHn2AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xl8nGW99/HPb7bszdKka9KmK7SF0iVUSqWUrVRUFhEsuICoeFTk0XP0eenR53gEj8cjetRzxAU5IC6IiKAgetgsUKDQpkApbelC17TQJk2bNs06M9fzx9wt0zTJpO1MJpn5vl+veXXmmuue+5fpZL657uW6zTmHiIhIb3zpLkBERAY+hYWIiCSksBARkYQUFiIikpDCQkREElJYiIhIQgoLERFJSGEhIiIJKSxERCShQLoLSJby8nJXXV2d7jJERAaVlStXNjjnKhL1S2lYmNki4EeAH7jTOfedLs//ADjPe5gPDHPOlXjPXQd83XvuW865e3pbV3V1NbW1tcksX0Qk45nZtr70S1lYmJkfuB24CKgDVpjZw865tYf7OOe+GNf/88BM734Z8A2gBnDASm/ZfamqV0REepbKfRZzgE3Ouc3OuQ7gPuCyXvpfA/zOu38x8IRzrtELiCeARSmsVUREepHKsBgN7Ih7XOe1HcPMxgLjgL8f77IiIpJ6qQwL66atp/nQFwMPOOcix7Osmd1oZrVmVltfX3+CZYqISCKpDIs6oCrucSWwq4e+i3lnE1Sfl3XO3eGcq3HO1VRUJNyZLyIiJyiVYbECmGRm48wsRCwQHu7aycxOAUqBZXHNjwELzazUzEqBhV6biIikQcqOhnLOhc3sJmJf8n7gLufcGjO7Bah1zh0OjmuA+1zcJfucc41mdiuxwAG4xTnXmKpaRUSkd5Ypl1WtqalxmXSeRVtnhDfrm9m0p5mtDS1EncPvM3wGoYCPwpwghbkBinIC+HyGcw7nIOocnRFHZyRKZyQKQG7QT17QT07Qh88MA8wMvw+Cft+RW07AR37IT17IT27Aj8/X3a6j4xONOjoiUdo6I7SHYzUF/T4CPiPg9xGJOto6I7R2Rmg7covS2hEh4hw5AR85AT+hgI+i3ABDcoMMyQuQF/RjdvL1iWQ7M1vpnKtJ1C9jzuA+Uc45DrSGGZIXSPjl45xjX0sn9QfbqT/YTkNzO42HOmjpCNPcHuFQe5h9LR3sbe5g76F2mtvClBflMGJILiOKcwn5fTS1dtLU2smBtk5aO6O0e1+iHeEokagj6mK3xkMdRNOc4yG/j5ygj9ygn5DfR8BvsS95n49wNEpHJEp7Z6xuv89iIeA3whFHa2eElo4wbZ3RlNU2bEgOI4tzGT4kl5L8ICF/LFRCXugVhPwU5MSC5XBtQb+P3KCPvGDgSDDG/7fnBv0U5ST+LIhkm6wPi6bWTmbc8gQhv4+hhSEqinIozQ+R631J5gR87G/pZHtjCzsaWzjUEen2dUJ+H/k5fkrzQwwtCDGuvICCnAANzR1s3XuIZZv30hmJUpIXojgvSHFekJK8IDlFOeR6X2Z+H/h9hplRUZjDpOGFTBpWRHV5PiHvr/Cog/ZwhEPtEZrbOznYFibqHOaNGHwW+0IMBWL/Ogdt4QitHbG/2A+PJB0Qib4zAmkPx25tnRFaOmL9Dz9uD8fuR6KOcNQRjkQJ+GJfyodDJPZajkg0is+MvJDf+zIOxN7LgJ/coJ+AzwjHrTfgs9jIJ+QnJxD7Ny/oJ9cbBR0OpLZwLIwPtIY50NbJvpYOdje18faBNl7f2cSBtjAd4ViAdYRPLqBCAR/lBSHKi3IozgseGc0MyQtSXpDD0MIQQwtzKMkLUpQboMh7PifgP6n1igxkWR8Wfp/x9fdOoaG548hoYV9Lx5EvqLbOCEW5QcaW5XPW+KFUleUzfEgOFYU5lBflUJYfoiAnQCiQ+jkZA/7YX7uxTTJBIDfl6xyMolHnhUssYFo6IoSjUTojsaBrC0dp7Yi1t3ZGiN8S29oRoeFQOw0HO2hobqeptZNd+1s50BamqaWTjkjPQVRRlEP10HzGDi2gsjSPoQUhSvJDlBWEGF2SR2VpHgG/5u6UwSnrw6IoN8gnzxmf7jIkiXw+Iz8UID8UoKIoJ2mv65yjuT3M3uZYkOxv6eSgN7rb39JJ3b4Wtu5tYenGenYfaD9m+ZDfR3V5PhOHFTJtVDFnVJZwemUxxXnBpNUokipZHxYifWVmFOUGKcoNUl1e0GvfzkiU/S2dR/Zh7djXwpv1zby55xCv7zzAX1e/faTv9MpiPj6vmvdNH0VQIw8ZoHQ0lEga7G/p4LW6Jlbt2M+fV+1i055mRgzJ5bqzq/nwWWMYkqvRhvSPvh4NpbAQSbNo1PHMhnrufG4zz2/ay5DcAJ86ZzzXz6v29k2JpI7CQmQQen1nEz98cgNPrttDSX6QG+eP57q51RTkaIuxpIbCQmQQW7VjPz98cgNL1tdTVhDiH84dz0fPqiYvpMNzJbkUFiIZ4OXt+/jBExtYurGB8sIcvvqeU7lydmW6y5IM0tew0KEXIgPYrDGl/PoT7+L+T89l7NB8/ukPq7jtsTfIlD/yZPBQWIgMAnPGlXHfjWdxzZwqbl/yJl/4/au0h7ufTUAkFbTXTGSQCPp9fPuK06kqy+e7/7uet5ra+MXHanRSn/QLjSxEBhEz47MLJvKjxTN4Zfs+rr97Oc3t4XSXJVlAYSEyCF02YzQ/vnYWr9U18YlfrqC1hwkuRZJFYSEySF08bQT/efUZLN/ayI2/rtU+DEkphYXIIHbZjNH8xwems3RjAzf/7hWi6b4IimQshYXIIHf1mVV87ZIpPLZmN795aVu6y5EMpbAQyQCfPGcc8ydX8O2/rmNzfXO6y5EMpLAQyQBmxnevnE7I7+Of/rCKcC8XaRI5EQoLkQwxojiXWy8/jVe27+fnz25OdzmSYRQWIhnk0jNG8d7TR/LDJzewZldTusuRDKKwEMkgZsatl5/GkNwg3/nbG+kuRzKIwkIkw5QVhLjh3eNYurGBtbsOpLscyRAKC5EM9JF3jSU/5OfOpdp3IcmhsBDJQMX5QT50ZhUPr9rFW02t6S5HMoDCQiRD3TBvHA64+/mt6S5FMoDCQiRDVZXlc8npI7n3pe0caOtMdzkyyCksRDLYjeeMp7k9zH3Lt6e7FBnkFBYiGez0ymLmjh/KXc9tpSOss7rlxCksRDLcp+aP4+0DbTy25u10lyKDmMJCJMMtmDyMMWX5/OZFzUgrJ05hIZLhfD7j2neN4aUtjWzYfTDd5cggpbAQyQJXza4k5PfxW40u5AQpLESywNDCHN47fSQPvryTQ+3hdJcjg1BKw8LMFpnZejPbZGZf6aHP1Wa21szWmNm9ce0RM3vVuz2cyjpFssFHzhrDwfYwD6/ale5SZBAKpOqFzcwP3A5cBNQBK8zsYefc2rg+k4CvAvOcc/vMbFjcS7Q652akqj6RbDNrTCmnjiji18u2sfjM
Ksws3SXJIJLKkcUcYJNzbrNzrgO4D7isS59PAbc75/YBOOf2pLAekaxmZnzkrLGsfesAr+zYn+5yZJBJZViMBnbEPa7z2uJNBiab2fNm9qKZLYp7LtfMar32y1NYp0jWuHzmaApzAjqMVo5bKsOiuzGu6/I4AEwCFgDXAHeaWYn33BjnXA1wLfBDM5twzArMbvQCpba+vj55lYtkqMKcAJfOGMXfVr9NS4d2dEvfpTIs6oCquMeVQNc9a3XAn51znc65LcB6YuGBc26X9+9m4GlgZtcVOOfucM7VOOdqKioqkv8TiGSg908fRWtnhL+/oa2+0nepDIsVwCQzG2dmIWAx0PWopj8B5wGYWTmxzVKbzazUzHLi2ucBaxGRkzZnXBkVRTk8+tpb6S5FBpGUhYVzLgzcBDwGrAPud86tMbNbzOxSr9tjwF4zWwssAb7snNsLTAFqzWyV1/6d+KOoROTE+X3GJaeN4O9v7KFZ51xIH5lzXXcjDE41NTWutrY23WWIDAortjZy1c+W8aPFM7hsRtfjTiSbmNlKb/9wr3QGt0gWmj2mlBFDcnlklTZFSd8oLESykM9nXHL6SJ7dUK+r6EmfKCxEstT7zhhJRyTKE2t2p7sUGQQUFiJZamZVCaNL8vjLa5orShJTWIhkKTPjvdNHsnRjA/tbOtJdjgxwCguRLPbe00cSjjoe16YoSUBhIZLFplcWM7okT9fnloQUFiJZzMxYOG04Szc16KJI0iuFhUiWWzh1BB3hKM9u0GSc0jOFhUiWO7O6lJL8II+v1X4L6ZnCQiTLBfw+Ljh1OE+t201nJJrucmSAUliICAunDedAW5jlWxrTXYoMUAoLEWH+pApygz4e11FR0gOFhYiQF/Izf1IFj6/dTabMRC3JpbAQEQAWThvBW01tvL7zQLpLkQFIYSEiAFxw6jB8Bo+v1aYoOZbCQkQAKC0IMWdcmc7mlm4pLETkiIVTR7BhdzPb9h5KdykywCgsROSIC6cMB+DJdXvSXIkMNAoLETlizNB8Jg0r5Kl1OptbjqawEJGjXDBlOMu3NOpyq3IUhYWIHOXCKcMIRx3PrNfEgvIOhYWIHGXmmFLKCkLaFCVHUViIyFH8PmPBKRUsWV9PWBMLikdhISLHuHDKcJpaO1m5bV+6S5EBQmEhIsc4Z1I5Qb/x1Bs6hFZiFBYicoyi3CBnjR/Kk9pvIR6FhYh064JTh7G5/hBbGnQ2tygsRKQHF3hnc+uoKAGFhYj0oKosn1NHFPHQKzt1jQtRWIhIzz4+r5o1uw6wdGNDukuRNFNYiEiPrphZyYghudy+ZFO6S5E0U1iISI9CAR+fmj+el7Y0snJbY7rLkTRSWIhIr66ZU0VpfpCfLHkz3aVIGiksRKRX+aEAN8wbx1Nv7GHtLl2fO1ulNCzMbJGZrTezTWb2lR76XG1ma81sjZndG9d+nZlt9G7XpbJOEendx+ZWU5gT4KfPaHSRrVIWFmbmB24H3gNMBa4xs6ld+kwCvgrMc85NA77gtZcB3wDeBcwBvmFmpamqVUR6V5wf5MNnjeHR13bpJL0slcqRxRxgk3Nus3OuA7gPuKxLn08Btzvn9gE45w5PRHMx8IRzrtF77glgUQprFZEEPvHucQT8Pu54dnO6S5E0SGVYjAZ2xD2u89riTQYmm9nzZvaimS06jmVFpB8NK8rlqtmV/HFlHXsOtKW7HOlnqQwL66at62mgAWASsAC4BrjTzEr6uCxmdqOZ1ZpZbX29ruolkmqfnj+BcDTK/zy3Jd2lSD9LZVjUAVVxjyuBXd30+bNzrtM5twVYTyw8+rIszrk7nHM1zrmaioqKpBYvIscaMzSf900fxW9e3EZTi67RnU1SGRYrgElmNs7MQsBi4OEuff4EnAdgZuXENkttBh4DFppZqbdje6HXJiJp9g/nTuBQR4TfvLQt3aVIP0pZWDjnwsBNxL7k1wH3O+fWmNktZnap1+0xYK+ZrQWWAF92zu11zjUCtxILnBXALV6biKTZ1FFDWHBKBXc9t4W2zki6y5F+Ypkym2RNTY2rra1NdxkiWWH5lkau/vkybr1sGh+dW53ucuQkmNlK51xNon46g1tEjtuZ1aXMHlvKHUs3a/ryLKGwEJHjZmYsPrOKHY2trNEUIFmhT2FhZhPMLMe7v8DMbvYOcRWRLHXeqcMwQ9fpzhJ9HVn8EYiY2UTgf4BxwL29LyIimay8MIdZY0p5at2exJ1l0OtrWES9o5uuAH7onPsiMDJ1ZYnIYHDBlGGs3tnE2006ozvT9TUsOs3sGuA64C9eWzA1JYnIYHHhlOEAPPWGNkVlur6GxceBucC/Oee2mNk44DepK0tEBoNJwwqpKsvTpqgsEOhLJ+fcWuBmAO+M6iLn3HdSWZiIDHxmxgWnDufe5dtp6QiTH+rTV4oMQn09GuppMxviXWdiFXC3mf1naksTkcHgoqnD6QhHeW5jQ7pLkRTq62aoYufcAeADwN3OudnAhakrS0QGizOryyjKCWhTVIbra1gEzGwkcDXv7OAWESEU8DH/lAqeemMP0ajO5s5UfQ2LW4hN+vemc26FmY0HNqauLBEZTC6cMoyG5nZW1e1PdymSIn0KC+fcH5xz051zn/Eeb3bOXZna0kRksFgweRi+Pp7N3doR4fzvPc2jr73VD5VJsvR1B3elmT1kZnvMbLeZ/dHMKlNdnIgMDqUFIc6sLuPJtYn3W6zY2sjmhkPc/byutjeY9HUz1N3ELlw0iti1sB/x2kREgNhRUet3H2Tb3kO99lu2eS8Atdv2JewrA0dfw6LCOXe3cy7s3X4J6DqmInLEwqkjAHhibe+bopa9uZfqofmYwUOv7OyP0iQJ+hoWDWb2ETPze7ePAHtTWZiIDC5jhuZz6ogiHl/Tc1g0t4dZvbOJ900fxdzxQ3nolZ26HsYg0dewuIHYYbNvA28BHyQ2BYiIyBELpw6ndlsje5vbu31+xZZGIlHH3AlD+cCsSrbtbeHl7fv6uUo5EX09Gmq7c+5S51yFc26Yc+5yYifoiYgcsXDaCKIOnnqj+x3dL7zZQMjvY/bYUhadNoLcoI8HX9amqMHgZK6U949Jq0JEMsK0UUMYVZzb436LZZv3MmNMCblBP4U5AS6eNoK/vPYW7eFIP1cqx+tkwsKSVoWIZAQz46Kpw1m6sZ7WjqMDoKmlkzW7DnD2hKFH2j4wq5Km1k6W9DASkYHjZMJCe6VE5BgXTR1BW2eUpRvrj2p/actenIO5498Ji3kThlJRlNPjpijnHH9b/RZtnRp5pFuvYWFmB83sQDe3g8TOuRAROcq7xpdRlBvg8S6bopZt3ktOwMeMMSVH2gJ+H5fPGMWS9Xtoau085rVeeHMvn/nty/z33zW7ULr1GhbOuSLn3JBubkXOOU1cLyLHCPp9nH/qMJ5at5twJHqkfdmbe6mpLiUn4D+q/6LTRtAZcd1Ocf70+tjmqTuXbmHX/tbUFi69OpnNUCIi3brk9JHsa+nkk7+qZc/BNvY2t/PG2weP2gR12IyqUkrygyxZf+x+i2c21HP
qiCIAbntsfcrrlp4pLEQk6RZOHc43L53Gsjf3cvEPnuX7T2wAYO6E8mP6+n3G/EkVPL2+/qgpznftb2XD7maunFXJJ949jode2cnquqZ++xnkaAoLEUk6M+O6s6t59OZzqCzN596XtpMf8jO9srjb/gtOqaChuZ01uw4caXt2Q2wH+fzJFXxmwQSGFoT41qNrdcZ3migsRCRlJg4r5MHPns2XLz6FLy08haC/+6+c+ZMrMOOoTVHPbKhnxJBcJg8vpCg3yBcumsxLWxoTzj0lqaGwEJGUCvp9fO68idzw7nE99ikvzGF6ZcmRsAhHojy3qYFzJ1dgFjul65ozq5hQUcAtf1mrnd1poLAQkQHhvFMqeHXHfhoPdfDqjv0cbAtz7invTG4d8Pu47aozaGrp5MqfvsDG3QePWn51XRP//dRGnZORIgoLERkQzjtlGM7B0o31PLOhHr/PmDfx6B3is8aU8vtPzyUcdXzwZ8tY6V0T4/O/e4X3//g5vv/EBv6wsi5NP0FmU1iIyIBw+uhihhaEWPLGHp7ZUM/MqhKK84LH9Js6aggPfuZsSvODXPuLF7nwP5/hybW7+fz5E5leWcxdz2056qgqSQ6FhYgMCD6fce4pFTy1bg+rdzZx7uSer69WVZbPA585m7PGD+Xqmiqe+fIC/mnhKdw4fzxbGg716Vrgcnx0FraIDBgLThl2ZJ6o+b2EBcR2it9zw5yj2hZNG8HokjzuXLqFhdNGpKzObKSRhYgMGPMnleMzKCsIcfro7s/J6E3A7+OGd49j+dZGVu3Yn4IKs1dKw8LMFpnZejPbZGZf6eb5682s3sxe9W6fjHsuEtf+cCrrFJGBoSQ/xPvPGMVVNZX4fCd2FYQPnVlFUW6AXyzdnOTqslvKNkOZmR+4HbgIqANWmNnDzrm1Xbr+3jl3Uzcv0eqcm5Gq+kRkYPrR4pkntXxhToBr54zhzue2sKOxhaqy/CRVlt1SObKYA2xyzm12znUA9wGXpXB9IiIAXD+vGgPufn5rukvJGKkMi9HAjrjHdV5bV1ea2Wtm9oCZVcW155pZrZm9aGaXd7cCM7vR61NbX1/fXRcRyUIji/N4/xmjuG/FdvYd6kh3ORkhlWHR3QbHrgc/PwJUO+emA08C98Q9N8Y5VwNcC/zQzCYc82LO3eGcq3HO1VRU9H7khIhkl88smEBLR4S7n9+S7lIyQirDog6IHylUArviOzjn9jrn2r2HvwBmxz23y/t3M/A0cHIbMkUkq0weXsTF04bzyxe2crDt2KvwyfFJZVisACaZ2TgzCwGLgaOOajKzkXEPLwXWee2lZpbj3S8H5gFdd4yLiPTqpvMmcaAtzK9f3JbuUga9lIWFcy4M3AQ8RiwE7nfOrTGzW8zsUq/bzWa2xsxWATcD13vtU4Bar30J8J1ujqISEenV6ZXFzJ9cwf8s3UJrhyYYPBmWKRcSqampcbW1tekuQ0QGmOVbGrn658v4xvun8vF5PU+Tnq3MbKW3f7hXOoNbRDLanHFlzKku445nN9Me1ujiRCksRCTjfe78ibzV1MYfV+5MdymDlsJCRDLe/EnlnFFVwu1LNtERjqa7nEFJYSEiGc/M+MKFk9i5v5UHdHGkE6KwEJGssGByBTM0ujhhCgsRyQpmxhcvmszO/a3cX7sj8QJyFIWFiGSN+ZPKmTmmhJ8s2aQjo46TwkJEsoaZ8cULJ7OrqY37a7Xv4ngoLEQkq5wzqZzZY0u5/e+b2NvcnngBARQWIpJlzIyvv3cK+1s7+PCdL9GoKcz7RGEhIlln5phS7vzYmWxpOMS1v3hR17zoA4WFiGSld08q587ratjccIgP3/mSNkkloIkERSSrPbOhnk/9qpZI1DFlZBGzxpQyZ1wZi6aNIODP/L+nNZGgiEgfnDu5goc+ezafWzCB4rwgf1xZx033vsK3Hl2X7tIGlEC6CxARSbdpo4qZNqoYgEjU8a1H13L381s5fXQxV86uTHN1A4NGFiIicfw+42uXTGHu+KH880OrWV3XlO6SBgSFhYhIFwG/jx9fO5Pywhw+/evao3Z+Z8p+3uOlzVAiIt0YWpjDzz86myt/+gKLfrSUkN/HgbZOWjoifORdY/jXS6dhZukus98oLEREenDa6GJ++pFZ/G75DopyAgzJC7L3UAf3LNtGbtDPV95zatoD4+fPvElLR4QvXjQ5petRWIiI9OL8U4dz/qnDjzx2zlGSF+Tnz25mSF6Qz503MW21dUai/GLpFmaNKUn5uhQWIiLHwcz45qXTONDWyW2Prac4L8hHzhqbllqeXl9PQ3M7V9dUpXxdCgsRkePk8xnfu+oMDraF+X9/fp3x5QWcPbG83+u4v3YH5YU5LDilIuXr0tFQIiInIOgdMVU9tIAvP/AaB9s6+3X99QfbWfLGHq6cNbpfzjRXWIiInKD8UIDvXXUGbzW1cutf1vbruv/0yk7CUcdVNf1z0qDCQkTkJMweW8qnz53A/bV1PLVud7+s0znH/bU7mDmmhInDivplnQoLEZGT9IULJ3HqiCK+8uDqfpnufFVdExv3NPfLju3DFBYiIicpJ+Dn+1efwf6Wjn7ZHHV/7Q5ygz7eN31kytd1mMJCRCQJpo0q5uPzxvGnV3eyc39rytbT2hHhkVd3cclpIynKDaZsPV0pLEREkuRjc2PnW/zmxW0pW8ezG+s52B7mg/08G67CQkQkSSpL87lwynDuW76dts5IStaxpeEQAKdXFqfk9XuisBARSaLrz65mX0snj6zalZLXr9vXQkl+sF83QYHCQkQkqeZOGMrk4YXcs2xrSqYz39HYSmVpXtJfNxGFhYhIEpkZH5tbzes7D/Dy9n1Jf/26fS1UleYn/XUTUViIiCTZFTNHU5Qb4JcvJHdHt3OOun0aWYiIZISCnABX11Txt9VvsftAW9Jet6G5g/ZwlMpMG1mY2SIzW29mm8zsK908f72Z1ZvZq97tk3HPXWdmG73bdamsU0Qk2T42dyzhqOO+5TuS9pp1+1oAMmtkYWZ+4HbgPcBU4Bozm9pN198752Z4tzu9ZcuAbwDvAuYA3zCz0lTVKiKSbGOHFvDuieXcX7uDSDQ5O7p37Iud7FdVllkjiznAJufcZudcB3AfcFkfl70YeMI51+ic2wc8ASxKUZ0iIimxeE4VO/e38tymhqS83uGRxeiSDBpZAKOB+PFXndfW1ZVm9pqZPWBmh2fF6tOyZnajmdWaWW19fX2y6hYRSYqLpg6nrCDEfcu3J+X16va1UlYQoiCn/69bl8qw6O4q5l3HYo8A1c656cCTwD3HsSzOuTucczXOuZqKitRfKUpE5HjkBPx8YOZonli7m/qD7Sf9euk6EgpSGxZ1QPz8uZXAUac0Ouf2OucOv4O/AGb3dVkRkcFg8ZwqwlHHgy/XnfRr1e1ryciwWAFMMrNxZhYCFgMPx3cws/j5dS8F1nn3HwMWmlmpt2N7odcmIjKoTBxWRM3YUn6/YsdJndEdjcbOsUjHCXmQwrBwzoWBm4h9ya8D7nfOrTGzW8zsUq/bzWa2xsxWAT
cD13vLNgK3EgucFcAtXpuIyKCzeM4YNjccYvmWE/8aa2hupyMcTdvIIqV7SZxzfwX+2qXtX+LufxX4ag/L3gXclcr6RET6w3tPH8k3H17DfSt28K7xQ0/oNQ4fNpuOE/JAZ3CLiKRcXsjPZTNH8dfVb53wju50npAHCgsRkX5xw7xxdEai/PyZN09o+TpvZDFaYSEikrnGVxRyxcxKfv3iNvacwHxRdftaKS8MkR/q/3MsQGEhItJvbr5gIuGo4ydPH//oom5fC6PTtL8CFBYiIv1m7NACPjirknuXb+ftpuMbXaTzhDxQWIiI9Kubzp9INOr4ydOb+rxMNOrYqbAQEckeVWX5XFVTxX3Ld7Bzf2uflqlvbqcjEk3bCXmgsBAR6Xc3nT8Rh+OnfRxdpPuwWVBYiIj0u9EleVwxczQPrKxjf0tHwv47GtN7Qh4oLERE0uLj88bR1hnl9ysSX0lPIwsRkSw1ZeQQzhqPlz05AAAJuklEQVRfxq+WbSMcifbaN3aORQ65QX8/VXcshYWISJpcf/Y4du5v5cl1u3vtV7evlaqy9I0qQGEhIpI2F00dzuiSPO5+fmuv/WLXsUjf/gpQWIiIpI3fZ1x39lhe2tLI2l0Huu2z52AbO/a1Mm6owkJEJGt9qGYMeUE/v3xhS7fP3/vSdiJRx+UzR/dzZUdTWIiIpFFxfpArZo3mT6/uYm/z0dOXd4Sj/Pal7Zw7uYLxFYVpqjBGYSEikmY3zBtHJOq47bH1R7X/7fXY9S+un1ednsLiKCxERNJs4rBCPvnucdy3Ygcvbt57pP2eF7YyrryAcydVpLG6GIWFiMgA8IULJ1NVlsc/P7iats4Iq+uaeHn7fj561lh8Pkt3eQoLEZGBIC/k59tXnM7mhkP8+O+b+OULW8kP+flgTWW6SwMgPZdcEhGRY5wzqYIPzBzNz555E58ZHzqziiG5wXSXBWhkISIyoHz9fVMpyg3QEYly3dlj013OERpZiIgMIGUFIf7rmpms3XWAicOK0l3OEQoLEZEB5pxJFZwzAI6AiqfNUCIikpDCQkREElJYiIhIQgoLERFJSGEhIiIJKSxERCQhhYWIiCSksBARkYTMOZfuGpLCzOqB/UBTN08Xd2nv7fHh+921lQMNx1la13X19fkTqTn+/snU3FtdvT2fqG0g1txduz4fiWXL52Mw1txde2+PJznnihNW4pzLmBtwR1/ae3t8+H4PbbXJqikVNXdX/4nUfKJ1J2obiDXr86HPR6bVfDKfj95umbYZ6pE+tvf2+JFe2pJZU6LnT6Tm+PsnU3Nflu/u+URtA7Hm7tr1+UgsWz4fg7Hm7tr7+vnoUcZshuoPZlbrnKtJdx3HQzX3n8FYt2ruH4Ox5q4ybWSRaneku4AToJr7z2CsWzX3j8FY81E0shARkYQ0shARkYSyMizM7C4z22Nmr5/AsrPNbLWZbTKz/zIzi3vu82a23szWmNl3k1t1auo2s381s51m9qp3u2Sg1xz3/JfMzJlZefIqTtn7fKuZvea9x4+b2ahk1pzCum8zsze82h8ys5JBUPNV3u9g1MyStp/gZGrt4fWuM7ON3u26uPZeP/dpcyKHcw32GzAfmAW8fgLLLgfmAgb8DXiP134e8CSQ4z0eNkjq/lfgS4PpvfaeqwIeA7YB5QO9ZmBIXJ+bgZ8NhvcaWAgEvPv/AfzHIKh5CnAK8DRQk+5avTqqu7SVAZu9f0u9+6W9/VzpvmXlyMI59yzQGN9mZhPM7H/NbKWZLTWzU7suZ2Yjif3SL3Ox/9VfAZd7T38G+I5zrt1bx55BUndKpbDmHwD/F0j6TrdU1OycOxDXtWAQ1f24cy7sdX0RqBwENa9zzq1PZp0nU2sPLgaecM41Ouf2AU8Ai9L5u5pIVoZFD+4APu+cmw18CfhJN31GA3Vxj+u8NoDJwDlm9pKZPWNmZ6a02necbN0AN3mbGe4ys9LUlXrESdVsZpcCO51zq1JdaJyTfp/N7N/MbAfwYeBfUlhrvGR8Pg67gdhfuqmWzJpTrS+1dmc0sCPu8eH6B8rPdQxdgxsws0LgbOAPcZsHc7rr2k3b4b8QA8SGk2cBZwL3m9l476+DlEhS3T8FbvUe3wp8n9iXQkqcbM1mlg98jdjmkX6RpPcZ59zXgK+Z2VeBm4BvJLnUo4tJUt3ea30NCAO/TWaNxxSSxJpTrbdazezjwP/x2iYCfzWzDmCLc+4Keq4/7T9XTxQWMT5gv3NuRnyjmfmBld7Dh4l9scYPwyuBXd79OuBBLxyWm1mU2Hww9QO5bufc7rjlfgH8JYX1wsnXPAEYB6zyfkErgZfNbI5z7u0BWnNX9wKPkuKwIEl1eztf3wdckMo/fjzJfq9TqdtaAZxzdwN3A5jZ08D1zrmtcV3qgAVxjyuJ7duoI/0/V/fSvdMkXTegmrgdVcALwFXefQPO6GG5FcRGD4d3Pl3itf8DcIt3fzKxIaYNgrpHxvX5InDfQK+5S5+tJHkHd4re50lxfT4PPDBIPteLgLVARSrqTeXngyTv4D7RWul5B/cWYlsjSr37ZX393KfjlvYC0vJDw++At4BOYkn+CWJ/rf4vsMr75fiXHpatAV4H3gR+zDsnNoaA33jPvQycP0jq/jWwGniN2F9sIwd6zV36bCX5R0Ol4n3+o9f+GrG5eEYPks/HJmJ/+Lzq3ZJ6FFeKar7Ce612YDfwWDprpZuw8Npv8N7fTcDHj+dzn46bzuAWEZGEdDSUiIgkpLAQEZGEFBYiIpKQwkJERBJSWIiISEIKC8loZtbcz+u708ymJum1IhabpfZ1M3sk0YyvZlZiZp9NxrpFutKhs5LRzKzZOVeYxNcLuHcm1kup+NrN7B5gg3Pu33rpXw38xTl3Wn/UJ9lFIwvJOmZWYWZ/NLMV3m2e1z7HzF4ws1e8f0/x2q83sz+Y2SPA42a2wMyeNrMHLHath98evuaA117j3W/2Jg9cZWYvmtlwr32C93iFmd3Sx9HPMt6ZSLHQzJ4ys5ctdt2Dy7w+3wEmeKOR27y+X/bW85qZfTOJb6NkGYWFZKMfAT9wzp0JXAnc6bW/Acx3zs0kNivst+OWmQtc55w733s8E/gCMBUYD8zrZj0FwIvOuTOAZ4FPxa3/R976E877482LdAGxM+wB2oArnHOziF1H5fteWH0FeNM5N8M592UzWwhMAuYAM4DZZjY/0fpEuqOJBCUbXQhMjZspdIiZFQHFwD1mNonYTJ/BuGWecM7FX8tguXOuDsDMXiU2Z9BzXdbTwTsTM64ELvLuz+WdaxTcC3yvhzrz4l57JbFrHkBszqBve1/8UWIjjuHdLL/Qu73iPS4kFh7P9rA+kR4pLCQb+YC5zrnW+EYz+29giXPuCm/7/9NxTx/q8hrtcfcjdP+71One2SnYU5/etDrnZphZMbHQ+RzwX8Suh1EBzHbOdZrZViC3m+UN+Hfn3M+Pc70ix9BmKMlGjxO7ngQAZnZ4iuliYKd3//oUrv9FYpu/ABYn6uycayJ2KdYvmVmQWJ17vKA4DxjrdT0IFMUt+
hhwg3fdBcxstJkNS9LPIFlGYSGZLt/M6uJu/0jsi7fG2+m7ltj08gDfBf7dzJ4H/Cms6QvAP5rZcmAk0JRoAefcK8RmNl1M7AJENWZWS2yU8YbXZy/wvHeo7W3OuceJbeZaZmargQc4OkxE+kyHzor0M+9qf63OOWdmi4FrnHOXJVpOJJ20z0Kk/80GfuwdwbSfFF7GViRZNLIQEZGEtM9CREQSUliIiEhCCgsREUlIYSEiIgkpLEREJCGFhYiIJPT/ASHFM/jclx2tAAAAAElFTkSuQmCC ", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAuoAAAEKCAYAAABEy3C5AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzs3Xd8lfX5//HXlU0gCZAFhBFG2FOWIsuFYhUEJ45Ka4ttv9ph7a9oq7VaqtY66qh11lUnikVFUZG9BGSPkBD2yGCEJJB9/f44N5rGQA4hJ/c5Odfz8bgfOec+933nfRh3rtznc18fUVWMMcYYY4wx/iXE7QDGGGOMMcaY77NC3RhjjDHGGD9khboxxhhjjDF+yAp1Y4wxxhhj/JAV6sYYY4wxxvghK9SNMcYYY4zxQ1aoG2OMMcYY44esUDfGGGOMMcYPWaFujDHGGGOMHwpzO4CbEhISNDU11e0Yxhhz2latWpWnqolu52hIds42xgSyupy3g7pQT01NZeXKlW7HMMaY0yYiO93O0NDsnG2MCWR1OW/b0BdjjDHGGGP8kBXqxhhjjDHG+CEr1I0xxhhjjPFDVqgbY4wxxhjjh6xQN8YYY4wxxg/5tFAXkUtEJF1EMkVkag2vR4rIO87ry0Uk1Vl/kYisEpH1ztfzq+wz0FmfKSJPiog461uKyBcikuF8beHL92aMMcYYY4wv+axQF5FQ4BlgLNATmCQiPattdgtwWFW7AI8DDzvr84DLVbUPcDPwepV9ngWmAGnOcomzfiowR1XTgDnOc2OMMcYYYwKSL/uoDwEyVTULQETeBsYDm6psMx64z3k8HXhaRERVV1fZZiMQJSKRQEsgVlWXOsd8DbgC+NQ51mhnn1eBecDv6/1dmTOiqmw5UMD6vfkcyC8mNEQICxHimoQT3yyShGYRdEpoRlx0uNtRjTGNwOcbD5CRU0ir2CjO7hxPSvMmbkcyxgSIo8VlbM8tIreghINFJRw+VkZxWQW928RxYc/kBsngy0I9Bdhd5fkeYOjJtlHVchHJB+LxXFE/4UpgtaqWiEiKc5yqx0xxHier6n7nWPtFJKmmUCIyBc8Vedq3b1+X92XqoLJSmbF6L/+av42MnMJat0+MiaRrcjPOat+CQaktOat9c2KirHg3xpyeuek5vPX1dz+KBqe24DcXdmVYlwQXUxlj/E1RSTmrdx1hxY5DfLPrMOkHCsgpKKlx2xuGtm8UhbrUsE5PZxsR6YVnOMyY0zjmKanq88DzAIMGDTqtfU3d7D50jF++vZrVu47Qq00s0yb05tzOCbRp3gRFqahUDh8r42BhCdlHS8jKLSQzp5DNB47yzNxMKhXCQoShnVpyUY9kxvRqRRu7KmaM8cKDE/vyp8t7sevQMeZszuHVJTu4/sXlXD2wLX8a14tmkUE9QbcxQS37aDGfbzzAF5tzWLotj7IKJUSgW6tYRqQl0iWpGZ0Sm9IqNor4ZhG0bBpBVFgoISE1laO+4csz1B6gXZXnbYF9J9lmj4iEAXHAIQARaQvMAH6oqtuqbN/2JMfMFpHWztX01kBOfb4ZUzdLtx3k1tdXosBj1/Tjiv4pNf4Dj44Iq/KR9He/pRaWlLNm1xEWZuby5aZs7vtoE3/+eBPndk7g6kFtubhXK6LCQxvmzRhjAlJUeChdk2PomhzDj85N5emvMnlmXibr9+bz6o+HkBwb5XZEY0wDKS2vZM7mbN5ZuZsFW3OpVEiNj2bysFSGpyUyoH1zYv3oE3xfFuorgDQR6QjsBa4Drq+2zUw8N4suBa4CvlJVFZHmwCfAXaq6+MTGThFeICJnA8uBHwJPVTvWQ87X//rsnRmvzN2Sw61vrKJDy2heunkw7eOjT/sYzSLDGJ6WwPC0BO4a24Os3EJmrt3Heyv38Ku319CyaQQ3nd2BH57TgfhmkT54F8aYxiQqPJQ7L+7GkI4t+dkbq7jy2SW897NzaB1nn9IZ05jlHyvjtaU7eHXpDvIKS2kVG8UvRnfhigFt6JzYDKeJoN8RVd+N/hCRS4EngFDgZVWdJiL3AytVdaaIROHp6DIAz5X061Q1S0T+CNwFZFQ53BhVzRGRQcArQBM8N5He7hT38cC7QHtgF3C1qh46Vb5BgwbpypUr6/MtG8eqnYe4/oXldE2O4bUfD6FF04h6PX5lpbIs6yAvL97Ol5tziAoP4eqB7bjt/C52dcwEBRFZpaqD3M7RkOr7nL129xFueHE5bZpH8d6tw+wmdmMaoZyCYl5YkMWby3dRVFrBed0S+eE5qYzsmkhoAw5hgbqdt31aqPs7K9R9Y/ehY1z+9CJaREcw/Wfn+PxKd2ZOAS8s2M4Hq/cQGiLcMrwjt47q7FcfXRlT36xQrx9LMvOY/O8VnN05nn9PHtzgP7iNMb5RVFLO8wuyeGFhFiXllVzetzW3jupMj9axrmWqy3nbZiY19aq0vJLb3lpNRYXy78mDG2Q4SpekGB6+qi9z7hjNxb1a8czcbYz821xeX7aTysrg/UXUGFO7YV0SuG9cLxZszeWprzJq38EY49cqK5V3Vuxi9N/n8Y85GYzulsicO0bxxHUDXC3S68oKdVOv/vbZFtbuPsLfrupLakLTBv3e7eOj+cd1A/j49uH0aBXLPR9uYOKzS9i4L79BcxhjAsukIe2YOCCFJ+dksGb3EbfjGGPqKCO7gGueW8rv319P+5bRvP/zYfzzhoENXo/UJyvUTb1ZlnWQFxdt56azOzC2T2vXcvROiePNnw7l8Wv7sfvQMcY9vZgHZ22mpLzCtUzGGP8lItw3vhfJsVH87r21dq4wJsCUllfy2BdbufTJhWTmFvLIVX2Z/rNzGNihhdvRzpgV6qZeFJdVcPeM9bRr2YS7Lu3udhxEhAkD2jLnt6O4emBbnluQxfinF7PlwFG3oxlj/FBsVDh/ndCHjJxCnpqT6XYcY4yXMnMKmPDPxTw5J4PL+rZhzh2juHpQO7/t4nK6rFA39eKf87aRlVvEtCv6EB3hPxOINI+O4KEr+/LvyYPJKyxl3FOLeWFBlo1dN8Z8z3ndk5g4IIXnFmxjR16R23GMMaegqryxbCeXPbWI/fnFPH/TQB6/tn+ja9Vshbo5Y7sOHuNf87ZxRf82jOya6HacGp3XPYnZvx7B6G6JTJu1mSmvryT/WJnbsYwxfmbq2O6Eh4bw4Keb3Y5ijDmJguIyfv7GN/zxww0
MTm3JZ78awZherdyO5RNWqJsz9rfZWwgNEaaO7eF2lFOKbxbJczcN5M/jejEvPZfLn15kN5oaY/5HUmwUvxjdmdkbs1myLc/tOMaYajJzChj/zGK+2JzN3Zd259UfDSGpEc+fYoW6OSOrdx3m43X7+emIjrSK8///KCLCzcNSeefWcygtr2TiP5fw4eq9bscyxviRn4zoRErzJjw4awvBPNeIMf7m0/X7Gf/0Yo4eL+M/PxnKlJGdCWnkcx9YoW7qTFV58NMtJDSLZMqozm7HOS0DO7Tg418Op3+75vz6nTU8/sVW+4FsjAEgKjyUX12Yxvq9+Xy1JcftOMYEPVXlmbmZ/Pw/39C1VQwf3z6CszvFux2rQVihbupsWdYhvt5+iNvP70KzSP+5gdRbCc0ief2WoVw1sC3/mJPBHe9aWzZjjMeEASm0bxnNE19m2C/xxriorKKSqe+v55HZ6Yzv34a3p5wdEJ/g1xcr1E2dPT03g8SYSK4d3M7tKHUWERbCI1f15XcXd2PG6r3c+OJyDheVuh3LmIAhIpeISLqIZIrI1Bpe7yAic0RknYjME5G21V6PFZG9IvJ0w6WuXXhoCLed38WuqhvjoqPFZfz4lRW8s3I3t5/fhSeu7U9kWKjbsRqUFeqmTlbtPMzizINMGdGJqPDA/k8jIvzfeV14atIA1u7J59rnl5J9tNjtWMb4PREJBZ4BxgI9gUki0rPaZn8HXlPVvsD9wIPVXn8AmO/rrHVx4qr6U19ZX3VjGlpeYQnXPreMpdsO8rcr+/LbMd0aTW/002GFuqmTp7/KoEV0ODec3d7tKPXm8n5teOVHg9l7+DhX/2spuw8dczuSMf5uCJCpqlmqWgq8DYyvtk1PYI7zeG7V10VkIJAMfN4AWU9beGgItwzvyJrdR1i187DbcYwJGvuOHOeafy1le14hL00ezDUB/Mn9mbJC3Zy2jfvymZuey09GdPKryY3qw7DOCfznp2dztLiMK59dwtbsArcjGePPUoDdVZ7vcdZVtRa40nk8AYgRkXgRCQEeBX7n85Rn4KqBbYmNCuPlRdvdjmJMUNh5sIir/7WU3IISXr9lKKP8dH6WhmKFujltLy/aQXREKDee3cHtKD7Rv11z3plyDgDXPLfUeq0bc3I1fQ5d/c7LO4FRIrIaGAXsBcqBXwCzVHU3pyAiU0RkpYiszM3NrY/Mp6VpZBjXD+3Apxv226dsxvhYRnYBV/9rKcdKy3nzp2czOLWl25Fc59NC3YubjCJF5B3n9eUikuqsjxeRuSJSWPUGIxGJEZE1VZY8EXnCeW2yiORWee0nvnxvwSqvsISP1u7jqoFtiWsS7nYcn+nWKobpPxtGdHgoN764nM37j7odyRh/tAeo+pl0W2Bf1Q1UdZ+qTlTVAcAfnHX5wDnAbSKyA8849h+KyEPVv4GqPq+qg1R1UGKiO1fWbh7WgRARXlmyw5Xvb0wwyMwpZNILywB459Zz6NM2zuVE/sFnhbqXNxndAhxW1S7A48DDzvpi4B48V2K+paoFqtr/xALsBD6ossk7VV5/sf7flXlz+S5KKyq5eViq21F8rn18NG9NOZvIsFBueHE56QdsGIwx1awA0kSko4hEANcBM6tuICIJzjAXgLuAlwFU9QZVba+qqXjO9a+p6vcu6PiD1nFN+EHf1ryzYjeFJeVuxzGm0dmeV8T1LywDhLemnE3X5Bi3I/kNX15R9+Ymo/HAq87j6cAFIiKqWqSqi/AU7DUSkTQgCVhY/9FNTUrLK3l92U5GdU2kc2Izt+M0iA7xTXlrytmEhQg3vLiMzBwr1o05QVXLgduA2cBm4F1V3Sgi94vIOGez0UC6iGzFc+PoNFfCnqGbh6VSWFLOzDX7at/YGOO1XQePcf0LyyivVN786dCgqS+85ctC3ZubjL7dxjnh5wPeTjU1Cc8V9KrjIa90evVOF5HgvUXYRz7dsJ/cghImn5vqdpQG1THBU6yLCJNeWM6ugzZO1ZgTVHWWqnZV1c6qOs1Zd6+qznQeT1fVNGebn6hqSQ3HeEVVb2vo7KdjQLvmdG8Vw1tf73I7ijGNxt4jx5n0wjKOl1Xwxi1D7Up6DXxZqHtzk5E325zMdcBbVZ5/BKQ6vXq/5Lsr9f/7DV2+MSmQvb50Jx0TmjIqLfjuwO6c2Iw3fzKUsopKbnp5ObkF36s1jDGNmIgwaUh71u/NZ/0eu8HcmDN1qKiUm15aztHiMt64ZSg928S6Hckv+bJQr/Umo6rbiEgYEAccqu3AItIPCFPVVSfWqerBKldqXgAG1rSvP9yYFIgycwpYufMwk4a0IyQk+CYcAEhLjuHlyYPJOVrCzS9/zdHiMrcjGWMa0BUDUogMC+GtFXZV3Zgzcay0nB+/soI9h4/z0s2D6Z1iN46ejC8L9VpvMnKe3+w8vgr4qtpQlpOZxP9eTUdEWld5Og7PeElTT95ZsZuwEGHiWW1r37gRO6t9C5698Sy2Zhcw5bWVFJdVuB3JGNNA4pqEc1nfNsxcs48iu6nUmDopq6jk5298w7o9R3hq0gCGdLQWjKfis0Ldy5uMXgLiRSQTuAP49o5/p2XXY8BkEdlTrWPMNVQr1IFfishGEVkL/BKY7IO3FZRKyit4/5u9XNQzmYRmkW7Hcd3obkk8ek0/lmUd4ldvr6ai0tvRWsaYQHf90HYUlpTz8Tq7qdSY01VZqfy/6euYvzWXaRP6cHGvVm5H8ns+nVZSVWcBs6qtu7fK42Lg6pPsm3qK43aqYd1deFp/mXr25aYcDhWVcm0QT+Fb3fj+KRwsLOX+jzfxwMebuG9cL7cjGWMawFntW9A5sSnvr9rLtYPbux3HmIDy8OwtzFi9l99e1JVJQ+z/jzdsZlJTq7dX7KJNXBQjgvAm0lP58fCO/GR4R15ZsoNXFtv04sYEAxFhwoAUvt5xyGYqNeY0vP31Lp6bn8WNZ7fntvO7uB0nYFihbk5p35HjLMrM46pB7QgN0ptIT+WuS3twYY9k7v94E19tyXY7jjGmAYzv7+k0/N81e11OYkxgWJyZxx8/3MCoroncd3kvRKye8JYV6uaUZq7dhypceVb1FvgGIDREeHJSf3q0juX2N1ezad9RtyMZY3ysXctohnRsyQer9+Jd/wNjgldmTiE/f2MVnRKb8tT1AwgLtdLzdNifljmlD1fv5az2zekQ39TtKH4rOiKMl24eTExUOLe8uoLsoyedUNcY00hMGJBCVm4R66ynujEndaiolFteXUF4aAgv3TyY2KhwtyMFHCvUzUlt3n+ULQcKuGKAXU2vTau4KF6aPIj842VMeX2VtW00ppG7tE9rIsJCmLHahr8YU5PS8kp+9sYq9ucX8/wPB9GuZbTbkQKSFermpD5cs5ewEOEHfVrXvrGhV5s4HrumP2t3H+He/26wj8SNacTimoRzYY8kPl63j/KKSrfjGON3/vLJJr7efohHrurLwA4t3I4TsKxQNzWqrFRmrtnHqK6JxFvvdK9d0rsVt5/fhXdX7uGNZTvdjmOM8aHL+rYhr7CUr7fXOqG2MUHl3Z
W7eW3pTqaM7PTtzdembqxQNzVavv0Q+/OLGW/DXk7bby7syvndk/jzR5vsB7gxjdh53ZJoEh7KJ+v3ux3FGL+xbs8R/vjhBs7tEs//u7ib23ECnhXqpkYfrt5L04hQLuqR7HaUgBMSIjx+bX/atYzmF/9Zxf78425HMsb4QJOIUM7vkcRnGw7Y8BdjgLzCEn72+ioSm0Xy1KSzrMNLPbA/QfM9peWVfLphPxf3bkWTiFC34wSkuCbhPH/TQI6XVvAzu7nUmEbrB31ac7DIhr8YU1ZRyf/95xsOFpXy3E0Dadk0wu1IjYIV6uZ7Fm/L42hxOZf3beN2lICWlhzDY9f2Z+2efP780Sa34xhjfODE8JePbfiLCXIPfbqF5dsP8eDEPvROiXM7TqNhhbr5nlnr9hMTFcawLvFuRwl4F/dqxc9Gdeatr3fxobVxM6bROTH8ZbYNfzFB7LMNB3hp0XYmD0tl4llt3Y7TqFihbv5HWUUln2/K5qIeyUSG2bCX+nDnmK4MSW3J3TPWk5lT4HYcY0w9u8wZ/rLchr+YILTr4DF+N30t/drGcfelPdyO0+hYoW7+x9JtB8k/XsZY651eb8JCQ3hy0gCahIfyi/98w7HScrcjGWPq0Wjr/mKCVEl5Bbe99Q0CPH39WUSEWVlZ3+xP1PyPTzfsp2lEKCPSEtyO0qi0iovi8Wv7k5FTyL3/3eh2HGNMPWoSEcr53ZP4fGM2lZU20ZkJHg/O2sK6Pfk8cnU/m3nUR6xQN98qr6hk9sZsLuiRTFS4DXupbyO7JnL7eV2YvmoP767c7XYcY0w9GtMrmbzCElbvPuJ2FGMaxGcb9vPKkh38+NyOXNyrldtxGi2fFuoicomIpItIpohMreH1SBF5x3l9uYikOuvjRWSuiBSKyNPV9pnnHHONsySd6ljGe19vP8SholIu7WP/4XzlVxd25ZxO8dzz4QbSD9h4dWMai9HdkggLEb7YlO12FGN8zjMufR392jVn6tjubsdp1HxWqItIKPAMMBboCUwSkZ7VNrsFOKyqXYDHgYed9cXAPcCdJzn8Dara31lyajmW8dIn6/fTJDyUUV2T3I7SaIWGCP+Y1J+YqDB++dZq669uTCMR1yScoZ1a8sWmA25HMcanSssrvxuXPmmAjUv3MV/+6Q4BMlU1S1VLgbeB8dW2GQ+86jyeDlwgIqKqRaq6CE/B7q0aj1X3+MGlolKZvfEA53dPskmOfCwpJopHrupHenYBD326xe04xph6clGPZLblFpGVW+h2FGN85tEv0m1cegPyZaGeAlQdiLvHWVfjNqpaDuQD3jTv/rcz7OWeKsV4XY9lgJU7DpFXWMpYG/bSIM7rnsSPzk3llSU7+GqLfVRuTGNwYc9kABv+YhqtJZl5PL8gi+uHtrdx6Q3El4V6TVezq98O78021d2gqn2AEc5y0+kcS0SmiMhKEVmZm5tby7cKHl9syiYiNITR3WzYS0P5/SXd6d4qhjvfW0fO0dP58MgY44/atoimZ+tYK9RNo3S4qJQ73l1Lx4Sm3POD6iOZja/4slDfA7Sr8rwtsO9k24hIGBAHnHLGCFXd63wtAN7EM8TG62Op6vOqOkhVByUmJp7mW2qcVJUvNmdzTud4mkWGuR0naESFh/LUpAEUlZTz2/fWWls3YxqBi3oms2rXYfIKS9yOYky9UVXunrGeg0UlPHndABsi24B8WaivANJEpKOIRADXATOrbTMTuNl5fBXwlaqetFoRkTARSXAehwOXARvqcizznW25hew8eIyLnI9tTcNJS47hnst6sjAjj5cXb3c7jjHmDF3UMxlV+GpzTu0bGxMg3l25m083HODOMd3onRLndpyg4rNC3RknfhswG9gMvKuqG0XkfhEZ52z2EhAvIpnAHcC3LRxFZAfwGDBZRPY4HWMigdkisg5YA+wFXqjtWObUPnc+pr2ghw17ccMNQ9szpmcyD3+2hQ17892OY4w5A73axJLSvMm351VjAl1WbiH3zdzEsM7x/HREJ7fjBB2fjnNQ1VnArGrr7q3yuBi4+iT7pp7ksANPsv1Jj2VO7ctN2fRJiaN1XBO3owQlEeHhK/ty8RMLuOPdNcy8bbhNOGVMgBIRLuyRxDsrd3O8tMKGCJiAVlZRya/fWUNEWAiPXtOPkBBrptfQrPllkMst8Mykd2EPG/biphZNI/jbVX3Zml3IY19sdTuOMeYMXNgzmeKySpZm5bkdxZgz8sSXW1m3J5+HJvaxi3kusUI9yM3dkoMqNj7dD4zulsQNQ9vzwsIslmUddDuOMV7xYgbqDiIyR0TWOTNLt3XW9xeRpSKy0Xnt2oZP7xtDOrakSXgo89Kts5gJXKt2HubZedu4ZlBbxvZp7XacoGWFepD7YnM2Kc2b0KN1jNtRDHD3pT1o3zKaO99bS0FxmdtxjDklL2eg/jvwmqr2Be4HHnTWHwN+qKq9gEuAJ0SkecMk963IsFDO7RLPV1tysJ4GJhAdL63gzvfW0jquCfdcZq0Y3WSFehA7XlrBwoxcLuyRhE3i6h+aRobx6NX92HfkOH/5eLPbcYypjTczUPcE5jiP5554XVW3qmqG83gfkAM0mp65o7slsefwcbblFrkdxZjT9vBnW9ieV8QjV/UlJirc7ThBzQr1ILY4M4/iskou6mmzi/mTQaktuXVUZ95ZuZsvrXOE8W/ezEC9FrjSeTwBiBGR/5k1WkSGABHANh/lbHCju3l+55iXbm0aTWBZsi2PV5bsYPKwVIZ1SXA7TtCzQj2Ifbk5m5jIMIZ0bOl2FFPNry9Mo3urGKZ+sI6DNnGK8V/ezAh9JzBKRFYDo/C01S3/9gAirYHXgR+pauX3vkGAzibdtkU0XZObMdcKdRNACkvK+d1760iNj+b/XdLN7TgGK9SDVmWl8uXmHEZ1SyQizP4Z+JvIsFAev7Y/+cfL+MOMDTbO1firWmegVtV9qjpRVQcAf3DW5QOISCzwCfBHVV1W0zcI5Nmkz+uWxNfbD1FYUl77xsb4gWmfbGJ//nEevaYf0RE2U7k/sAotSK3fm09eYYm1ZfRjPVrHcsdF3fhs4wFmrt1X+w7GNLxaZ6AWkQQROfGz5i7gZWd9BDADz42m7zVg5gYzulsSZRXK4kxr02j837z0HN76ejc/HdmJgR3sk3Z/YYV6kJq/NRcRGJFm48/82ZSRnejfrjn3zdxIboENgTH+xcsZqEcD6SKyFUgGpjnrrwFG4pl9eo2z9G/Yd+Bbg1Jb0CwyzMapG7+Xf6yM37+/jrSkZvzmwq5uxzFVWKEepOZvzaVvShzxzSLdjmJOITREeOSqvhSVVHDvfze4HceY71HVWaraVVU7q+o0Z929qjrTeTxdVdOcbX6iqiXO+jdUNVxV+1dZ1rj5XupbeGgIw7skMHdLrg1fM37tvo82kldYymPX9LeZsf2MV4W6iAwXkR85jxNFpKNvYxlfyj9WxupdhxnVLcntKMYLackx/OrCND7dcIBP1u13O44x5jSc1z2RA0eLSc8ucDuKMTWavfEAM1bv5bbzutCnbZzbc
Uw1tRbqIvIn4Pd4xhYChANv+DKU8a2FmblUKozqGlg3ZgWzW0d2ok9KHPf+d4N1gTEmgIx2LojM3RI4HWtM8Mg/XsY9H26gR+tYbju/i9txTA28uaI+ARgHFMG3E1PYNJYBbH56LnFNwulnvzkHjLDQEB65ui9Hi8u476NNbscxxngpOTaKnq1jmbvFxqkb//O3z7aQV1jCw1f2ITzURkP7I2/+VkrVM7hOAUSkqW8jGV9SVeZvzWV4WgJh9p8yoHRvFcvt56fx0dp9zN54wO04xhgvje6WyKpdhykoLnM7ijHfWrHjEP9ZvosfnduRvm2bux3HnIQ3ldq7IvIc0FxEfgp8Cbzo21jGVzbvLyCnoITRNuwlIP18dGd6to7lDzM2cORYqdtxjDFeGNk1kYpKZem2g25HMQaAkvIKpr6/jpTmTbjjIuvy4s9qLdRV9e/AdOB9oBtwr6o+6etgxjfmb/WMk7Tx6YEp3BkCc+RYKffbEBhjAsJZ7VsQHRHKggwbp278w7PztrEtt4i/TOhN00ib2MifeXMz6cOq+oWq/k5V71TVL0TkYW8OLiKXiEi6iGSKyNQaXo8UkXec15eLSKqzPl5E5opIoYg8XWX7aBH5RES2iMhGEXmoymuTRSS3Sj/en3iTMdjM35pDj9axJMVGuR3F1FGvNnH8YnRnPli9lzmbs92OY4ypRURYCOd0imdhhk18ZNyXmVPAP+duY1y/Npxn3d/8njdDXy6qYd3Y2nYSkVDgGWfbnsAkEelZbbNskxggAAAgAElEQVRbgMOq2gV4HDjxC0AxcA9wZw2H/ruqdgcGAOeKSNUs71Tpx2vDc6opLCln5Y7DjO5mV9MD3W3np9G9VQx3z1jPURv3aozfG5GWwM6Dx9h5sMjtKCaIVVYqd32wniYRodx7efWSzPijkxbqIvJzEVkPdBORdVWW7cA6L449BMhU1SxVLQXeBsZX22Y88KrzeDpwgYiIqhap6iI8Bfu3VPWYqs51HpcC3wBtvchigMWZeZRXqg17aQQiwkJ4+Mq+5BaU8LfPtrgdxxhTi5HOedeuqhs3vbViFyt2HOYPP+hBgk14GBBOdUX9TeByYKbz9cQyUFVv9OLYKcDuKs/3OOtq3MaZijofiPcmuIg0d/LMqbL6SueXieki0u4k+00RkZUisjI3N7jGC87fmkuzyDDOat/C7SimHvRr15wfnduRN5btYsWOQ27HMQFORAaJyAwR+cY5j64XEW8uyhgvdExoSkrzJiy0cerGJdlHi3lo1haGdY7n6oF2jTNQnLRQV9V8Vd2hqpNUdSdwHE+LxmYi0t6LY0tNh63DNt8/sEgY8BbwpKpmOas/AlJVtS+ezjSv1rSvqj6vqoNUdVBiYvBcWVZV5qfnMqxzPBFh1paxsfjtmK60bdGEqe+vo6S8wu04JrD9B/g3cCWeiyCXOV9NPRARRnZNYEnmQcorKt2OY4LQfTM3UlpRyV8n9EGkpvLL+CNvbia9XEQygO3AfGAH8KkXx94DVL2q3RbYd7JtnOI7DvDm0uDzQIaqPnFihaoeVNUTUza+AAz04jhBY1tuEXuPHP92ljzTOERHhDFtQh+25RbxzNxtbscxgS1XVWeq6nZV3XlicTtUYzIiLZGCknLW7D7idhQTZD7feIBPNxzglxekkZpg0+EEEm8urf4FOBvYqqodgQuAxV7stwJIE5GOIhIBXIdnGE1VM4GbncdXAV85kyudlIj8BU9B/+tq61tXeToO2OxFxqAxL90zK97IrgkuJzH1bVTXRCYMSOHZeZmkHyhwO44JXH8SkRdFZJKITDyxuB2qMTm3cwIhAgtsnLppQAXFZdz73410bxXDlJGd3I5jTpM3hXqZqh4EQkQkxLmZs39tOzljzm8DZuMpmt9V1Y0icr+IjHM2ewmIF5FM4A7g2xaOIrIDeAyYLCJ7RKSniLQF/oCni8w31dow/tJp2bgW+CUw2Yv3FjTmb82lS1Iz2raIdjuK8YF7LutJTFQ4Uz9YR0VlraPHjKnJj/Cc2y/hu3uSLnM1USMTFx1Ov3bNbZy6aVCPzE4nu6CYh67sS7jNSB5wvOlyf0REmgELgP+ISA5Q7s3BVXUWMKvaunurPC4Grj7JvqknOWyNA6tU9S7gLm9yBZvjpRUs336Im87u4HYU4yMtm0Zwz2U9+M07a3lj2U5uHpbqdiQTePqpah+3QzR2I9ISefqrDPKPlREXHe52HNPIrdp5mNeX7eTmc1Lp366523FMHXjzq9V44BjwG+AzYBt2g1FAWbb9IKXlldY/vZG7on8KI7sm8rfPtrD3yHG345jAs6yGuS5MPRuZlkClwpJtNvzF+FZpeSV3fbCO1rFR3HlxN7fjmDqqtVB3eppXqmq5qr6KZxKjS3wfzdSX+em5RIWHMDi1pdtRjA+JCNOu6E2lwj0fbqCW2z2MqW44sMaZTdraM/pIv3bNiYkMs3Hqxueem7+NrdmFPHBFb5pFejOAwvijU014FCsid4nI0yIyRjxuA7KAaxouojlT87fmck6neKLCQ92OYnysXctofjumK19tyeHjdfvdjmMCyyVAGjAGa8/oM+GhIQzrEs+Crbn2y7TxmW25hTz1VSY/6NuaC3okux3HnIFTXVF/HegGrAd+AnyOZzz5eFWtPsOo8VM7DxaxPa/IZiMNIj86tyP92sZx38yNHC4qdTuOCRx6ksXUsxFpiew9cpzteUVuRzGNUGWlcvcH64kKD+FPl9totkB3qkK9k6pOVtXngEnAIOAyVV3TMNFMfViw1dNdwPqnB4/QEOHBiX3JP17GtFnWpdR47RPgY+frHDyfnnozZ4Y5TSPTPBdOFtrwF+MD763azfLth7j70h4kxUS5HcecoVMV6mUnHqhqBbBdVa1Jc4CZl55Lh/hom+AgyPRsE8uUkZ2YvmoPi6wYMF5Q1T6q2tf5mgYMARa5nasxah8fTYf46G8vpBhTX3IKipn2yWaGdmzJtYPb1b6D8XunKtT7ichRZykA+p54LCJHGyqgqbuS8gqWbDtow16C1C8vSKNjQlPunrGe46UVbscxAUZVvwEGu52jsRqRlsCyrIOUVVS6HcU0In/+aBPF5ZX8dWIfRGrsZm0CzEkLdVUNVdVYZ4lR1bAqj2MbMqSpm5U7DnO8rMIK9SAVFR7KtAm92XXoGE9+leF2HOPnROSOKsudIvImYJd8fWR4l0SKSitYveuI21FMIzFnczafrNvP7ed1oXNiM7fjmHpiU1Q1YvPSc4gIDeGczvFuRzEuGdY5gWsGteX5BVls2mcfhJlTiqmyROIZq26NA3zknM7xhAgssllKTT0oLCnnng830DW5GbeO6ux2HFOPrFBvxOZvzWVIx5ZER1j/1GB296U9aN4knLs+WEdFpTXxMCe1SVX/7CzTVPU/WHtGn4lrEk7/ds2tn7qpF49+ns7+o8U8OLEvEWFW2jUm9rfZSO07cpyt2YU27MXQPDqCey/vydo9+by2dIfbcYz/usvLdaaeDE9LZN2eI+QfK6t9Y2NOYs3uI7yyZAc3nd2B
gR1auB3H1DMr1Bup79oyWqFuYFy/Nozulsgjs9PZe+S423GMHxGRsSLyFJAiIk9WWV4Byl2O16iNTEugUmFpll1VN3VTVlHJ1PfXkRwTxe8u7uZ2HOMDtRbqJ7q8VFt2i8gMEenUECHN6ZuXnkubuCi6JNkNJQZEhAfG90YV7vlwg82IaKraB6wEioFVVZaZwMUu5mr0+rVrTrPIMBv+YurshYVZbDlQwP3jexETFe52HOMD3gxefgzPifxNQIDrgFZAOvAyMNpX4UzdlFVUsjgzj8v6tbb2TOZb7VpG89sxXfnLJ5uZtf4AP+jb2u1Ixg+o6lpgrYi8qao2BqMBhYeGcHaneJvrwNTJjrwi/vFlBmN7t2JMr1ZuxzE+4s3Ql0tU9TlVLVDVo6r6PHCpqr4D2GAoP7R61xEKSsptfLr5nsnDUumTEsefZm60cbGmuiEi8oWIbBWRLBHZLiJZbodq7EZ2TWDXoWPsPFjkdhQTQFSVu2esJyIshPvG9XI7jvEhbwr1ShG5RkRCnOWaKq+d8vNzEblERNJFJFNEptbweqSIvOO8vlxEUp318SIyV0QKReTpavsMFJH1zj5PinPJWERaOj9kMpyvQftLxPytOYSFCMO6JLgdxfiZsNAQHpzYh8PHSnnos81uxzH+5SU8n6AOxzPR0SC8mPDIi/N8BxGZIyLrRGSeiLSt8trNzjk7Q0Rursf3EjCGO+fphXZV3ZyG6av2sGTbQaaO7U5ybJTbcYwPeVOo3wDcBOQA2c7jG0WkCXDbyXYSkVDgGWAs0BOYJCI9q212C3BYVbsAjwMPO+uLgXuAO2s49LPAFCDNWS5x1k8F5jhTX89xngeleem5nNWhBbE2Xs3UoHdKHLcM78hbX+9medZBt+MY/5Gvqp+qao6qHjyxnGoHL8/zfwdeU9W+wP3Ag86+LYE/AUOBIcCfgvECS8eEpqQ0b2LDX4zX8gpLmDZrM4NTWzBpcHu34xgfq7VQV9UsVb1cVRNUNdF5nKmqx1V10Sl2HQJkOvuXAm/z/ckzxgOvOo+nAxeIiKhqkXPs4qobi0hrIFZVl6rnbrjXgCtqONarVdYHlZyCYjbuO2rDXswp/frCNNq2aMJdM9ZTUl7hdhzjH+aKyCMico6InHViqWUfb87zPfFcPAGYW+X1i4EvVPWQqh4GvuC7Cy9BQ0QYkZbA4m15lFdUuh3HBIAHPt7EsZIKHpzYh5AQuw+tsfOm60uiiNwtIs+LyMsnFi+OnQLsrvJ8j7Ouxm1UtRzIB041jWaKc5yajpmsqvudY+0HkrzI2Ogs3Oq5KmOFujmV6Igwpk3oQ1ZuEc/M3eZ2HOMfhuIZ7vJX4FFn+Xst+3hznl8LXOk8ngDEiEi8l/sGheFpCRQUl7Nub77bUYyfm5uew3/X7OMX53WmS1KM23FMA/Cm68t/gYXAl8DpXHqr6de86mPavdnmTLb//gFEpuAZOkP79o3vI6P5W3NJjImkV5tYt6MYPzeqayJX9G/Ds/Myubxva9KS7aQfzFT1vDrs5s05+U7gaRGZDCwA9uLpz+7V+byxn7MBzu2cgAgsysjjrPZBN/rHeOlYaTl/nLGBLknN+Pnozm7HMQ3EmzHq0ar6e1V9V1XfP7F4sd8eoF2V523xtHmscRsRCQPigEO1HLNtledVj5ntDI05MUQmp6YDqOrzqjpIVQclJjauq84VlcqCjFxGpiVaW0bjlT9e1pOmkWHc9cF6Kiutt3owE5FkEXlJRD51nvcUkVtq2a3W87yq7lPViao6APiDsy7fm32dbRvtOfuEFk0j6JMSx8KMXLejGD/22Odb2XvkOA9O7ENkWKjbcUwD8aZQ/1hELq3DsVcAaSLSUUQi8PRfn1ltm5nAiTv9rwK+0lPMxOIMaSkQkbOdbi8/xHPFv/qxbq6yPmis23OEI8fKGGWzkRovJTSL5I8/6MnKnYd58+tdbscx7noFmA20cZ5vBX5dyz61nudFJEFETvysuQvP/Bs432uMiLRwbiId46wLSsO7JHha6xZb21Tzfev2HOHlxdu5YWh7Bqe2dDuOaUDeFOq/wlOsH3dmJS0QkaO17eSMOb8Nz4l3M/Cuqm4UkftFZJyz2UtAvIhkAndQpVOLiOzA0ypssojsqdJJ4OfAi0AmsA341Fn/EHCRiGQAFznPg8r8rbmECIywtozmNFx5VgrDOsfz8KdbyD5aXPsOprFKUNV3gUr49hx+yuGOXp7nRwPpIrIVSAamOfseAh7AU+yvAO531gWlEWmJlFcqy7KC9o/AnER5RSVT319PQrNIfj+2u9txTAOrdYy6qtZ54KqqzgJmVVt3b5XHxcDVJ9k39STrVwK9a1h/ELigrlkbg3npufRr15wWTSPcjmICiIjw1wl9uPiJBdw3cyPP3jjQ7UjGHUXOTZ4KICJn47nB/5S8OM9Px9PVq6Z9X+a7K+xB7awOzWkSHsqijFwu6pnsdhzjR15atJ1N+4/yrxvPsrbLQeikV9RFpLvz9ayaloaLaLxxuKiUtXuOWLcXUyepCU355QVpfLrhAJ9vPOB2HOOOO/AMW+ksIovxtL+93d1IwSMyLJShnVqyMNP6qZvv7Dp4jMe/3MqYnslc0ru123GMC051Rf0OPHfaP1rDawqc75NEpk4WZuaham0ZTd1NGdmJj9bu497/buSczvHE2JWboKKq34jIKKAbno4s6apqA6Yb0Ii0RB74eBN7jxwnpXkTt+MYl6kqf/hwPWEhIdw//nsDCUyQOOkVdVWd4nw9r4bFinQ/Mz89lxbR4fRt29ztKCZAhYeG8ODEPmQXFPP32eluxzENzJll9FI8QwjHALeLyB3upgouI9I89xctsu4vBpixei8LM/L4/SXdaBUX5XYc4xJv+qgjIsOA1Krbq+prPspkTlNlpTJ/ay4j0hIJtVnKzBkY0L4FN5+TyqtLdzB+QIr1dA4uH+GZDXo9zg2lpmGlJTUjOTaShRl5XGtTwwe1Q0WlPPDxJgZ2aMENQzu4Hce4qNZCXUReBzoDa/iuA4DiGb9o/MCm/UfJKyyxYS+mXtx5cTdmbzzAXe+v56PbhxMR5k1zKNMItFXVvm6HCGYiwvAuiXy1JZvKSrXp4YPYXz7eRGFJOQ9O7GP/DoKcNz+BBwHnquovVPV2Z/mlr4MZ781L98ztNNIKdVMPmkWG8cD43qRnF/DCwiy345iG86mIjHE7RLAbkZbA4WNlbNxXaxdk00gtzMjlg9V7+fmoznS1GaODnjeF+gagla+DmLqbl55Ln5Q4EmMi3Y5iGokLeyZzaZ9W/GNOBtvzityOYxrGMmDG6c6ZYerXuc48GAtsnHpQOl5awR9mbKBTYlN+cV4Xt+MYP+BNoZ4AbBKR2SIy88Ti62DGO/nHyvhm12FG22ykpp7dd3kvIsNCuPuD9ZxiwmDTeDwKnANEq2qsqsaoaqzboYJNYkwkPVrHsijD2jQGoyfmbGXXoWM8OKEPUeGhbscxfsCbm0nv83UIU3cLM3OpVKxQN/UuKTaKu8b24O4Z63lv1R6uGdTO7UjGtzK
ADWq/lbluRFoCryzewbHScqIjvOr5YBqBDXvzeXHhdiYNacfQTvFuxzF+4pRnAKdd1z2qemED5TGnaV56LnFNwunfzrpzmPp33eB2zFi9h2mfbOb87kkkNLPhVY3YfmCeiHwKlJxYqaqPuRcpOI1IS+D5BVks336I87oluR3HNIDyikru+mA9LaIjmHpJD7fjGD9yyqEvqloBHBORuAbKY07Dd20ZE6wto/GJkBDhwYl9OF5awf0fbXI7jvGt7cAcIAKIqbKYBjY4tSURYSE2/CWI/HvxDtbvzef+8b2Ii7bJ5sx3vPlMrRhYLyJfAN/eVWadX9y3af9RcgtK7IqL8akuSTH84rzOPPFlBhPOSrF/b42Uqv4ZQERiPE+10OVIQSsqPJShHVtaoR4kdh08xqNfpHNRz2TG9rbeHeZ/eXMz6SfAPcACYFWVxbhs/lZPVwBry2h87eejO9M5sSl/nLGBY6XlbscxPiAivUVkNZ5OXxtFZJWI9HI7V7Aa3iWB9OwCso8Wux3F+JCqcveM9YSHhPDA+N6I2Kfj5n/VWqir6qs1LQ0Rzpza3C051pbRNIjIsFAenNiXvUeO8/gXW92OY3zjeeAOVe2gqh2A3wIvuJwpaA1P87RptKvqjdv73+xlUWYevx/bnVZxUW7HMX6o1kJdRNJEZLqIbBKRrBNLQ4QzJ2dtGU1DG9KxJZOGtOelRdtZvyff7Tim/jVV1bknnqjqPKCpe3GCW49WsSQ0i2Ch9VNvtHILSnjg400MTm3B9UPaux3H+Clvhr78G3gWKAfOA14DXvdlKFM7a8to3DB1bHfim0Uy9YN1lFdUuh3H1K8sEblHRFKd5Y94bjA1LggJEc7tksCizIM2j0Ej9eePNnK8tIIHJ/YlxBpCmJPwplBvoqpzAFHVnap6H3C+NwcXkUtEJF1EMkVkag2vR4rIO87ry0Uktcprdznr00XkYmddNxFZU2U5KiK/dl67T0T2VnntUm8yBipry2jcENcknD+P68XGfUf59+Idbscx9evHQCLwATDDefwjVxMFueFdEsgrLGHLgQK3o5h69uWmbD5et5/bz+9Cl6Rmbscxfsyrri8iEgJkiMhtwF6g1rYPTg/2Z4CLgD3AChGZqapVe7zdAhxW1S4ich3wMHCtiPQErgN6AW2AL0Wkq6qmA/2rHH8vnh8oJzyuqn/34j0FNGvLaNw0tncrLuyRxGNfbOWS3q1o1zLa7UimHqjqYcC6efmREWmeT0wXZuTSo7VNEttYFBSXcc9/N9AtOYZbR3V2O47xc95cUf81EI3nBD4QuBG42Yv9hgCZqpqlqqXA28D4atuMB07cmDoduEA8tzyPB95W1RJV3Q5kOser6gJgm6ru9CJLo3KiLeNoa5NnXCAi3D++NyECf/hwg30sH+BEZOapFrfzBbNWcVGkJTVjod1Q2qg8MjudA0eLeejKPkSEeVOGmWBW6xV1VV0BICKqqqfzMWgKsLvK8z3A0JNto6rlIpIPxDvrl1XbN6XavtcBb1Vbd5uI/BBYCfzWuULU6JxoyzjK2jIal7Rp3oTfXdyN+z7axMy1+xjfv/p/TxNAzsFzHn4LWA7Yx3R+ZERaIv9ZvpPisgqiwkPdjmPO0Modh3h92U4mD0tlQHsbumpq503Xl3NEZBOw2XneT0T+6cWxazrZV7/0drJtTrmviEQA44D3qrz+LNAZz9CY/cCjNYYSmSIiK0VkZW5uYN5NPy89h94psdaW0bjqpnNS6deuOfd/tInDRaVuxzF11wq4G+gN/APPcMU8VZ2vqvNdTWYYkZZASXklK3c0yutOQaWkvIKpH6ynTVwT7hzTze04JkB485nLE8DFwEEAVV0LjPRivz1AuyrP2wL7TraNiIQBccAhL/YdC3yjqtknVqhqtqpWqGolnt6/1YfKnNjueVUdpKqDEhMD74r04aJSVu08zPk27MW4LDREeGhiH/KPlzFt1ma345g6cs6bn6nqzcDZeIYazhOR212OZoChnVoSHioszAzMC0vmO8/M3UZmTiF/mdCbppHe3CJojHeFOqq6u9qqCi92WwGkiUhH5wr4dUD18Y4z+W68+1XAV+oZ8DoTuM7pCtMRSAO+rrLfJKoNexGR1lWeTsAzu16jMzc9h0qFC3okux3FGHq0jmXKyE5MX7WHJZk2jjZQOefaicAbwP8BT+Lp/mJcFh0RxsAOLVi41f5/BbJN+47yz7mZXNG/DefZhTZzGrwp1HeLyDBARSRCRO7EGQZzKqpaDtwGzHa2f1dVN4rI/SIyztnsJSBeRDKBO4Cpzr4bgXeBTcBnwP+pagWAiETj+Wi2+g+Rv4nIehFZh6ff+2+8eG8BZ87mHBJjIumTEud2FGMA+OUFaXSIj+buGespLvPmd3jjT0TkVWAJcBbwZ1UdrKoPqOpel6MZx4i0RDbtP0peYYnbUUwdlFVU8rvpa2keHc6fLu/ldhwTYLwp1H+G5wpLCp4hKf2BX3hzcFWdpapdVbWzqk5z1t2rqjOdx8WqerWqdlHVIaqaVWXfac5+3VT10yrrj6lqvKrmV/teN6lqH1Xtq6rjVHW/NxkDSWl5JfO35nJB9ySbHMH4jajwUP46oQ87Dh7jyTkZbscxp+8moCvwK2CJMz/FUREpEJGjLmczeMapAyy2T60C0nPzt7Fx31H+ckVvWjSNcDuOCTC1FuqqmqeqN6hqsqomqeqNwA8bIJup5uvthygsKbdhL8bvnNslgasGtuX5BVls2me1XSBR1RBVjXGW2CpLjKpa824/0KtNHM2jw61NYwBKP1DAP+ZkcFnf1lzSu3XtOxhTTV0beN5RrymMV77cnE1kWAjDuyS4HcWY7/nDpT1oHh3Bne+tpbS80u04xjQaoSHCuZ0TWJSRZ/MWBJByZ8hLbJRnRmdj6qKuhbqNu2hgqsqcLdmc2yWBJhHWS9f4nxZNI/jrhN5s2n+UZ+Zmuh3HmEZlRFoCB44Wsy230O0oxkvPL8xi3Z587h/fm/hm1k7Z1E1dC3X7lb6BZeQUsvvQcS7oYXeLG/81plcrJgxI4Zm5mWzYm1/7DsYYrwx3xqkvsO4vASEju4AnvshgbO9W/KCvDXkxdXfSQv3EjUQ1LAVAmwbMaPAMewG4oLuNTzf+7U+X96RlU88QmJJy6wJjTH1o2yKaTglNv52Z2vivikrld9PX0TQylPvH93Y7jglwJy3Ua7ixqOoNRtapv4HN2eyZjbRVXJTbUYw5pebRETw4sQ9bDhTw1BwbAmNMfRndLYmlWQc5VlrudhRzCi8tymLN7iPcN66XzSBuzlhdh76YBnSwsIRvdh22q+kmYFzQI5mrBrbl2fnbWLv7iNtxjA+JyCUiki4imSIytYbX24vIXBFZLSLrRORSZ324iLzqzH+xWUTuavj0geX87kmUlleyJPOg21HMSWzLLeTvn29lTM9kxvWzwQfmzFmhHgDmpueiChdaW0YTQO65rCeJzSK58721NhFSIyUiocAzwFigJzBJRHpW2+yPeCa8G4Bnhup/OuuvBiJVtQ8wELhVRFIbIn
egGtKxJU0jQvkqPcftKKYG5RWV3PHuWpqEh/KXCb0Rsb4b5sxZoR4AvtyUTXJsJL1TrKWxCRxxTcJ56Mo+ZOQU8sSXNhFSIzUEyFTVLFUtBd4GxlfbRoETJ684YF+V9U1FJAxoApQC1oT/FCLCQhielsDcLTnWptEPPTvP8wniX67oTVKMDVM19cMKdT93vLSCeVtzGNOzlf12bgLO6G5JTBrSjucXbOObXYfdjmPqXwqwu8rzPc66qu4DbhSRPcAs4HZn/XSgCNgP7AL+rqqHfJq2ETi/exL784vZcqDA7SimivV78vnHnAzG9WvD5TbkxdQjK9T93PytORSXVTK2dyu3oxhTJ3df2oPWcU248921HC+1ITCNTE1XD6pf6p0EvKKqbYFLgddFJATP1fgKPF3EOgK/FZFO3/sGIlNEZKWIrMzNtY4n53XztOj9aosNf/EXxWUV/ObdNcQ3i+D+8TaxkalfVqj7uc82HKB5dDhDOrZ0O4oxdRITFc4jV/UlK6+IBz/d7HYcU7/2AO2qPG/Ld0NbTrgFeBdAVZcCUUACcD3wmaqWqWoOsBgYVP0bqOrzqjpIVQclJib64C0ElqTYKHqnxDLXCnW/8cjsdDJzCnnkqn40j45wO45pZKxQ92Ol5ZXM2ZzDRT2SCQu1vyoTuIZ1SeAnwzvy2tKdVmA0LiuANBHpKCIReG4WnVltm13ABQAi0gNPoZ7rrD9fPJoCZwNbGix5ADu/WxLf7DrM4aJSt6MEvSXb8nhp0XZ+eE4HRna1XyRN/bPqz48t3pZHQUk5Y/vYsBcT+O68uBvdW8Xwu+nrOFhY4nYcUw9UtRy4DZgNbMbT3WWjiNwvIuOczX4L/FRE1gJvAZPVcyfkM0AzYAOegv/fqrquwd9EADqvexKVik1+5LKjxWXc+e5aOiY0ZerY7m7HMY2UFep+bPaGAzSLDGNY5wS3oxhzxqLCQ3niuv4cPV7G1A/WW9eKRkJVZ6lqV1XtrKrTnHX3qupM5/EmVT1XVfupan9V/dxZX6iqV6tqL1XtqaqPuN8/PfcAACAASURBVPk+Akm/ts2Jbxph49Rddt/MjRw4Wsxj1/QjOsLmgTS+YYW6nyqvqOTzTdmc1z2JqPBQt+MYUy+6t4rl/13SjS82ZfPOit3/v737js+qvP8//vokgYQRAoQZhuxtGCKCiOICnKDFCtZVB1pxa622/frrt7b9dautrdXWXQfDhUUFRStVFAiBsEeQvcLekPX5/nEfNKYBw7hz7jt5Px+P+5H7Pvc5J+8Tbq58cs51ruvbNxCR/5KQYJzVsSGfLN1MYVFx2HGqpPfmbeCN7HWMPrsdPVvWCzuOVGJRLdTLMWNdspmNCd6fXnKyCzN7KFi+xMwGl1i+MpjJbo6ZZZVYXt/MPjCzZcHXuP6fM3PldrbtzWdIV3V7kcrlhv6t6d8unf99ZyErtuwNO45IXDqnUyN27i9gtmb+rXDrduznR6/PpXvzNO44p33YcaSSi1qhXs4Z624Etrt7O+BR4DfBtl2I3JTUFRgC/DXY3yFnB5dQS44Q8CAwxd3bA1OC13Fr0oKNJCclMLCjbk6RyiUhwfj9Fd2pnpTA3WPmUKAzgiJHbUD7hiQlGB8u2hR2lCqlqNi557U5FBU7fxrZk+pJ6pgg0RXNT1h5ZqwbCrwQPB8PnGuRWX2GAq+5+0F3XwHkBvs7kpL7egEYdgKOIRTFxc778zdyZoeG1EpWvzepfJqm1eBXl51Mzpod/Pmj3LDjiMSdtBrV6Nc2nckLNul+jwr0xEe5zFi5jUeGdeOk9Fphx5EqIJqFenlmrPtqnWD0gJ1A+rds68BkM5tlZqNKrNPY3TcE+9oANDpBx1HhslZtZ+OuA1yo0V6kErsosynf6dWcJz5axufLt4YdRyTuDOrSmBVb9pKbtyfsKFVC1sptPD5lKcN6ZHB5r+Zhx5EqIpqFenlmrDvcOkfatr+79yLSpWa0mZ15VKHiYJa7d3LWk5yUwPldVKhL5fbzoV1p1aAWd702W0M2ihylQ78jJi3YGHKSym/n/gLuem0OzevV5JFh3cKOI1VINAv18sxY99U6ZpYEpAHbjrStux/6mge8ydddYjaZWdNgX02BMsetivVZ7gqLinl33gbO69yY2ur2IpVcreQknhjZix37C7hvXA7FxbqEL1JeTdJS6NGiLpMWqJ96NLk7P35zHpt2HeDxET1ITakWdiSpQqJZqJdnxroJwHXB8+HAR8FEGBOAEcGoMK2B9sAMM6tlZqkAwUx2g4hMllF6X9cBb0fpuKLqs+Vb2bo3n0u6Z4QdRaRCdMmow/9c3IV/L9nMPz79Muw4InFlcNcmzFu3k3U79ocdpdIam7WGiXM3cM/5HTQUo1S4qBXq5Zyx7hkg3cxygXsJRmpx9wXAWGAh8D4w2t2LgMbAp8EMdzOAie7+frCvXwPnm9ky4PzgddyZMGc9qclJGu1FqpSrT2vJBd2a8Nv3lzB79faw44jEjcFdGwPwgbq/RMWiDbt4+O0F9G+Xzq1ntQ07jlRBUe1b4e7vAu+WWvZwiecHgCsOs+0vgV+WWvYl0P0w628Fzj3OyKE6UFDE5AUbGdytiSY5kirFzPj1dzKZt+4/3PHqbCbeOYC0Grq8LPJt2jSsTftGtZm0YBPX928ddpxKZfeBAm57OZu0GtV47MqeJCaUdfucSHRpANAY8u8leew+WMil6vYiVVBajWr8eWRPNu48wIOvz9WQcyLlNKhrY2as3Mb2vflhR6k03J0HX5/H6m37+PPInjRMTQ47klRRKtRjyISc9TSoXZ3T26aHHUUkFD1b1uNHQzrx3vyNPPPpirDjiMSFwV2bUFTsTFlc5hgKcgxe/HwVE+dt4P5BHTmtjX4nS3hUqMeInfsK+HBRHhdnZpCUqH8WqbpuGtCaIV2b8P/fW8wXX2p8dZFvc3KzNDLSUnh/vvqpnwhz1uzgFxMXcm6nRtxyZpuw40gVp4owRkyYu578wmKGn6JJFKRqMzN+d0UmJ6XX5PZXstm480DYkURimpkxuFsTpi7bzK4DBWHHiWs79uUz+uVsGqWm8IfvdidB/dIlZCrUY8T4rDV0apJK14w6YUcRCV1qSjWeuvoU9uUXMfqVbPILi8OOJBLTLumeQX5hMR9oTPVjVlhUzB2vziZv9wGeuKondWtWDzuSiAr1WLB0025y1u5k+CnNMdNf7yIA7Run8tvhmcxatZ1fvbso7DgiMa1ni7o0q1uDd+aWnldQyut3k5bwn2VbeGRoN42XLjFDhXoMGJe1hqQE47KezcKOIhJTLs7M4KYzWvP8tJW8NXtd2HFEYpaZcUn3DD5dtoVtGv3lqL09Zx1PTf2Sq/u2ZESflmHHEfmKCvWQFRQV8+bs9ZzTqRHptTX8k0hpP7qgE31a1+dHr88lZ82OsOOIxKxLujelsNh1U+lRmr9uJw+Mn0ufVvV5+OKuYccR+QYV6iH7eHEeW/Yc5IreLcKOIhKTqiUm8OT3etEwNZmbX8zSzaUih9GlaR3aNKzFOznq/lJeW/cc5
JaXZlG/VnX+8r1eVE9SWSSxRZ/IkL30xSqapqVwdseGYUcRiVnptZN55rpT2XuwkJtfzGJ/flHYkURijplxSWYGX6zYSt4u/UH7bQqKihn9Sjab9xzkqWtO0aRGEpNUqIdoxZa9/GfZFq7q01Jjp4t8i45NUvnTyJ7MX7+T+8flUFysmUtFSruke1PcYeK8DWFHiWnuzk/fnM8XX27j15efTGbzumFHEimTqsMQvfzFKpISjCv7qNuLSHmc27kxD13QiYnzNvD4lGVhxxGJOe0apdK5aR3emqPuL0fy5CfLGZO1hjvOacflvTR/icQuFeoh2Z9fxLhZaxncrQmNUlPCjiMSN24e0IYrTmnO41OWMUF9cUX+y3d6NSNnzQ6WbdoddpSY9K+56/nt+0u4tHsG957fIew4IkekQj0k78xdz879BVzT96Swo4jEFTPjF5d1o0+r+tw/Nodpy7eEHUkkpgzr2YykBGN89tqwo8ScWau2c+/YHHqfVI/fDs/U3CUS81Soh8DdefbTFXRoXJvTWtcPO45I3ElOSuTv1/bmpPSa3PLiLBZv3BV2JJGY0aB2MgM7NuLN7HUUFmlW30NWb93HqBezaJqWwtPX9ialWmLYkUS+lQr1EHyydDOLN+7m5gFt9Ne8yDFKq1mNF27oQ63kJK57dgbrduwPO5JIzBh+SnPydh/kP7m64gSQt/sA1zw7nSJ3nrv+VOrXqh52JJFyiWqhbmZDzGyJmeWa2YNlvJ9sZmOC96ebWasS7z0ULF9iZoODZS3M7GMzW2RmC8zsrhLr/8zM1pnZnOBxYTSP7Xg89cmXNKmTwtAemolU5Hhk1K3B8zecyr78Iq5/dgY79mlGRhGAczo1ol7NaozPUveXnfsLuO7ZmWzefZDnrj+VNg1rhx1JpNyiVqibWSLwF+ACoAsw0sy6lFrtRmC7u7cDHgV+E2zbBRgBdAWGAH8N9lcI3OfunYG+wOhS+3zU3XsEj3ejdWzHI2fNDj7/cis3ntFaEyuInACdmtTh6Wt6s2rrPm56IYt9+YVhRxIJXfWkBIb2aMYHCzexfW/V/QN2f34RN70wk9y83fzt6lPo2bJe2JFEjko0K8U+QK67f+nu+cBrwNBS6wwFXgiejwfOtUhfkKHAa+5+0N1XALlAH3ff4O7ZAO6+G1gExNVp6aemLqdOShIjT2sZdhSRSqNf23QeG9GD7NXbufnFLA4UaEIkkRF9WpBfVMy4WWvCjhKKgqJibn8lm6xV23n0yh6c2UETC0r8iWah3gwo2Tqs5b+L6q/WcfdCYCeQXp5tg24yPYHpJRbfbmZzzexZMyvzz2YzG2VmWWaWtXnz5qM9puOybNNu3pu/kWv6nUTt5KQK/d4ild2FJzfl91d0Z9ryrdz2cjb5hbqJTqq2Tk3q0KdVff75xeoqN0FYUbHzw3E5TFmcxyNDu3FxZkbYkUSOSTQL9bLukizdUhxunSNua2a1gdeBu9390HAPTwJtgR7ABuAPZYVy96fdvbe7927YsGL/uv7jB0upVT2Jm85oU6HfV6SquLxXc3457GQ+WpzHna/O1ogXFaAc9yK1DO4tmh2cSLmwxHuZZvZ5cM/RPDPTpBIn2NX9TmL1tn1MXVaxJ6bCdKhIf2vOen44uCNXaxhkiWPRLNTXAiWn3GwOlJ6d5Kt1zCwJSAO2HWlbM6tGpEh/2d3fOLSCu29y9yJ3Lwb+TqTrTcyYt3Yn783fyI1ntKae7jYXiZqrTmvJwxd34f0FG7lvXA5FVexMYkUq571IPwXGuntPIvce/TXYNgn4J3Cru3cFBgIFFRS9yhjStQkNalfnn1+sCjtKhSgqdn44Poc3Zq/j/kEdGH12u7AjiRyXaBbqM4H2ZtbazKoTaaAnlFpnAnBd8Hw48JG7e7B8RDAqTGugPTAj6L/+DLDI3f9Yckdm1rTEy8uA+Sf8iI7D7ycvoW7Natw0oHXYUUQqvRvOaM0DQzry9pz13D1mDgU6sx4t5bkXyYE6wfM0vj5hMwiY6+45AO6+1d11c8EJVj0pgRGntmTK4jzWbNsXdpyoKip2Hhg/lzey13Hv+R24/Zz2YUcSOW5RK9SDPue3A5OI3PQ51t0XmNnPzezSYLVngHQzywXuBR4Mtl0AjAUWAu8Do4MGvD9wDXBOGcMw/ja4dDoXOBu4J1rHdrQ+XpLHJ0s3c9vAtqSmVAs7jkiVcNvAdjx0QSfeyVnPbS9nc7BQNWAUlOdepJ8BV5vZWuBd4I5geQfAzWySmWWb2QPRDltVjTytJQaV+qx6QVEx94/L4fXstdxzXgfuPFdFulQOUb2jMRgi8d1Syx4u8fwAcMVhtv0l8MtSyz6l7P7ruPs1x5s3Gg4WFvHzdxbSpkEtrj9dZ9NFKtItZ7WlRvVEHn57ATe9kMXT1/SmRnXNRngCledepJHA8+7+BzPrB7xkZt2I/P45AzgV2AdMMbNZ7j7lG9/AbBQwCqBlS42WdSya1a3BRZkZvDx9Nbed3Y60GpXrhNGBgiJufyWbDxflcf8gnUmXykUDeUfZc5+tZMWWvTx8SReNmy4Sgmv7teK3wzP5LHcL1z03g5371Q36BCrPvUg3ErlCirt/DqQADYJtP3H3Le6+j8hJnV6lv0GYAwBUJree1YY9Bwsr3Vn1XQcKuPbZGZHRXYZ1U5EulY4qxyhasWUvj3+4jPM6N2Zgx0ZhxxGpsr7buwWPj+jJ7NXbGf7kNNZur9x9dStQee5FWg2cC2BmnYkU6puJdIvMNLOawY2lZxHp7ihR0DUjjTM7NOS5z1ZWmnkGNu8+yMinvyB71XYeu7IH12h0F6mEVKhHSWFRMfeNnUO1ROMXw7qFHUekyrukewYv3NCHjbsOcNlfpzF/3c6wI8W9ct6LdB9ws5nlAK8C13vEduCPRIr9OUC2u0+s+KOoOn5wVlu27DnI+Flrw45y3JZs3M2wv3zG8s17+Pu1vRnaI67mPhQpNxXqUfLU1C/JXr2DR4Z1o0mahgYWiQWnt23A6z84neqJCXz3qc/5eHFe2JHinru/6+4d3L1tcG8R7v6wu08Ini909/7u3t3de7j75BLb/tPdu7p7N3fXzaRR1rdNfXq0qMtTU5fH9YRgHy/J4ztPTqOgqJixt/Tj7E66Yi2Vlwr1KPgsdwt/mLyEizKbcml3zYYmEks6NE7lzdtOp3WDWtzwwkz+8nEukVFhRSo3M+Ou89qzZtt+xsxcHXaco+buPP/ZCm58fiYt69fk7dv7k9m8btixRKJKhfoJtmLLXm5/JZu2DWvzm+9kEhn6XURiSaM6KYy7tR8XZ2bwu0lLuPWfs9hzsDDsWCJRN7BDQ/q0rs/jU3LZlx8/n/n9+UXcP24uP3tnIed0asy4W/vRNK1G2LFEok6F+gm0fsd+rv7HdMyMp6/tTe3kqI5+KSLHoWb1JP40ogc/vagzHy7KY+gTn5KbtyfsWCJRZWb8aEhHtuw5yLOfrgg7Trks37yHYX/5jDdmr+XOc9vz1DWnUEu/X6WKUKF+gizbtJsr/vY5u/YX8OINfWjdoFbY
kUTkW5gZNw1ow0s39mH7vgIu+fOnvDpjtbrCSKV2ykn1Ob9LY57893I27jwQdpzDcnfemr2OS//8KXm7D/DC9/tw7/kdSEzQlWqpOlSoH6dDDcnlT04jv6iYV27uS7dmaWHHEpGjcHrbBrx75wBOOakeD70xj1EvzWLb3vywY4lEzf9c1IXCYueRibE5Iub2vfnc/sps7h4zh05N6zDxzgGc2UHj6EvVo0L9GBUUFTN16Wau+vt07h4zh/aNavPmbadzcnMV6SLxqElaCi/e0IefXtSZT5ZsZtCjU5k4d4POrkul1DK9JqPPbsfEuRuYunRz2HG+4aPFmxj02FQmL9zIDwd3ZMyovmTUVX90qZrUyeso3Tc2hyWbdrFqyz52HyykYWoy/++SLlzbr5Uux4nEuYSESFeY09s24IHXcxj9SjbndmrEz4d1o5kKBalkbjmrDW/OXsdDb8zj3bsGkFajWqh5NuzczyP/Wsi78zbSsXEqz3//VLpm6OSXVG0q1I9SUXExDWsn06NFXc5o15CBHRuSUi0x7FgicgJ1yajDW7f15/lpK/nD5KWc/8dPGH12O27o35oa1fX/XSqH5KRE/vjd7gz/2+f8z1vzeXxEj1BGKssvLOaFaSt59MOlFBU79w/qwM1ntiE5Sf/XRFSoH6XHRvQMO4KIVICkxARuGtCGwV2b8PN/LeR3k5bw0ueruG9QBy7v1VxX0KRS6NmyHvec157fT17KgPYNuKJ3iwr73kXFzttz1vHoh0tZs20/53ZqxM8u7UqL+jUrLINIrFOhLiJyBC3q1+Tv1/Zm+pdb+dV7i/nh+Lk8NfVLbj2rLUN7ZFAtUbf6SHz7wcB2fJa7lZ+8OZ/WDWrRu1X9qH6/wqJi3l+wkT9NWcbSTXvomlGH57/fjYEdNcOoSGlWlW+U6t27t2dlZYUdQ0TihLszcd4Gnvgol8Ubd5ORlsINZ7Rm+CnNqVuzeoVmMbNZ7t67Qr9pyNRmR8+Offlc9tdp7NiXz8s39aVLRp0T/j12HyhgzMw1PPfZStbt2E+bBrW4b1BHLujWhARdoZIq4Fja7aieCjKzIWa2xMxyzezBMt5PNrMxwfvTzaxVifceCpYvMbPB37ZPM2sd7GNZsM+K/a0pIpWemXFxZgbv3TWA564/leb1avKLiYvo86sp3PHqbP6zbDNFxVX35IfEr7o1q/P8908lpVoiV/3jC7JXbz8h+y0udqblbuHesXM47VdT+MXERTSrV4OnrzmFD+49i4sym6pIFzmCqJ1RN7NEYClwPrAWmAmMdPeFJda5Dch091vNbARwmbtfaWZdgFeBPkAG8CHQIdiszH2a2VjgDXd/zcz+BuS4+5NHyqizMyJyvBas38m4rLW8OXsdO/cXUL9Wdc7p1IjzOjemX5t00mpGZyQNnVGXaFizbR/f+8d0Nuzcz08u7Mw1xzCi2e4DBUxbvpWPFuXx8ZI88nYfJDU5iYsym3LVaS3JbF43SulFYtuxtNvR7KPeB8h19y8BzOw1YChQcnaFocDPgufjgScscsv5UOA1dz8IrDCz3GB/lLVPM1sEnANcFazzQrDfIxbqIiLHq2tGGl0vTePBCzoxZVEekxduZPKCjYyftRaAjo1T6XVSPTo0rk37Rqm0alCTBrWTNVqUxKQW9Wsy4fb+3DNmDj97ZyHjZq1l1JltOK9zY2olf7NkyC8sJm/3AVZu2ceKLXtYuGEX2at2sDRvN+6QmpzEmR0aMrhbEwZ1aazPvMgxiGah3gxYU+L1WuC0w63j7oVmthNID5Z/UWrbZsHzsvaZDuxw98Iy1hcRibqUaolclNmUizKbUlBUzKxV28lauY2ZK7czce56Xj1Q+I31U5OTqFOjGinVEvjxhZ05t3PjkJKLfFPdmtV59vpTeWfuBn4/aQl3vTYHM8hIq0FSolFY5OzYl8/e/KJvbFcnJYkeLetxwclNOK11Or1b1dPN1iLHKZqFelnXykr3szncOodbXtb/+COt/9+hzEYBowBatmxZ1ioiIselWmICfduk07dNOhC5CXXznoPk5u1h9dZ9bN2bz+bdB9m1v4CDRcXUCXmiGZHSzIxLu2dw8clNmb5iGzNWbGPl1r0Uu5OYYNStUZ16NauRXjuZ1g1q0aZhLRqlJocyDrtIZRbNQn0tUHJA1ubA+sOss9bMkoA0YNu3bFvW8i1AXTNLCs6ql/W9AHD3p4GnIdLf8egPS0Tk6JgZjVJTaJSawultw04jUn4JCUa/tun0a5sedhSRKima16RmAu2D0ViqAyOACaXWmQBcFzwfDnzkkbtbJwAjglFhWgPtgRmH22ewzcfBPgj2+XYUj01EREREJKqidkY96HN+OzAJSASedfcFZvZzIMvdJwDPAC8FN4tuI1J4E6w3lsiNp4XAaHcvAihrn8G3/BHwmpn9Apgd7FtEREREJC5pwiMN9SUicUjDM4qIxJeYm/BIRERERESOjQp1EREREZEYpEJdRERERCQGqVAXEREREYlBKtRFRERERGJQlR71xcw2A6uOYdMGRCZZilfKH654zh/P2aFy5T/J3RuGGaaiqc2OW8ofLuUPT+nsR91uV+lC/ViZWVY8D4um/OGK5/zxnB2Uv6qK95+b8odL+cMVz/lPRHZ1fRERERERiUEq1EVEREREYpAK9WPzdNgBjpPyhyue88dzdlD+qiref27KHy7lD1c85z/u7OqjLiIiIiISg3RGXUREREQkBqlQPwpmNsTMlphZrpk9GHaespjZs2aWZ2bzSyyrb2YfmNmy4Gu9YLmZ2Z+C45lrZr3CS/5V1hZm9rGZLTKzBWZ2V7A8Lo7BzFLMbIaZ5QT5/zdY3trMpgf5x5hZ9WB5cvA6N3i/VZj5g0yJZjbbzP4VvI6b7ABmttLM5pnZHDPLCpbFy+enrpmNN7PFwf+BfvGSPVap3Y4utdnWKsz8h8Rzux3PbXaQKarttgr1cjKzROAvwAVAF2CkmXUJN1WZngeGlFr2IDDF3dsDU4LXEDmW9sFjFPBkBWU8kkLgPnfvDPQFRgc/53g5hoPAOe7eHegBDDGzvsBvgEeD/NuBG4P1bwS2u3s74NFgvbDdBSwq8Tqesh9ytrv3KDEsVrx8fh4H3nf3TkB3Iv8O8ZI95qjdrhBqs2NDvLfb8dpmQ7TbbXfXoxwPoB8wqcTrh4CHws51mKytgPklXi8BmgbPmwJLgudPASPLWi9WHsDbwPnxeAxATSAbOI3IhAdJpT9LwCSgX/A8KVjPQszcPGhUzgH+BVi8ZC9xDCuBBqWWxfznB6gDrCj9M4yH7LH6ULsdynGoza743HHdbsdrmx18/6i32zqjXn7NgDUlXq8NlsWDxu6+ASD42ihYHtPHFFyS6wlMJ46OIbgEOQfIAz4AlgM73L0wWKVkxq/yB+/vBNIrNvE3PAY8ABQHr9OJn+yHODDZzGaZ2ahgWTx8ftoAm4HngkvY/zCzWsRH9lgVzz+juPt3V5sdmnhvt+O1zYYKaLdVqJeflbEs3ofMidljMrPawOvA3e6+60irlrEs1GNw9yJ
370HkLEcfoHNZqwVfYya/mV0M5Ln7rJKLy1g15rKX0t/dexG5xDjazM48wrqxdAxJQC/gSXfvCezl68ulZYml7LGqMv6MYvKY1GaHo5K02/HaZkMFtNsq1MtvLdCixOvmwPqQshytTWbWFCD4mhcsj8ljMrNqRBr8l939jWBxXB0DgLvvAP5NpN9mXTNLCt4qmfGr/MH7acC2ik36lf7ApWa2EniNyGXUx4iP7F9x9/XB1zzgTSK/eOPh87MWWOvu04PX44n8AoiH7LEqnn9GcfPvrjY71HYv7tvtOG6zoQLabRXq5TcTaB/cSV0dGAFMCDlTeU0ArgueX0ekD+Gh5dcGdyH3BXYeulQTFjMz4Blgkbv/scRbcXEMZtbQzOoGz2sA5xG5seRjYHiwWun8h45rOPCRBx3XKpq7P+Tuzd29FZHP90fu/j3iIPshZlbLzFIPPQcGAfOJg8+Pu28E1phZx2DRucBC4iB7DFO7HWVqs8Nt9+K93Y7nNhsqqN0OqwN+PD6AC4GlRPqv/STsPIfJ+CqwASgg8pfbjUT6n00BlgVf6wfrGpEREZYD84DeMZD/DCKXgeYCc4LHhfFyDEAmMDvIPx94OFjeBpgB5ALjgORgeUrwOjd4v03Y/wZBroHAv+Ite5A1J3gsOPT/NI4+Pz2ArODz8xZQL16yx+pD7XbUs6vNjoHPUJAt7trteG+zg0xRbbc1M6mIiIiISAxS1xcRERERkRikQl1EREREJAapUBcRERERiUEq1EVEREREYpAKdRERERGRGKRCXao8M9sTfG1lZled4H3/uNTraSdy/yIiVY3abKlKVKiLfK0VcFSNvpklfssq32j03f30o8wkIiJla4XabKnkVKiLfO3XwAAzm2Nm95hZopn9zsxmmtlcM7sFwMwGmtnHZvYKkQkLMLO3zGyWmS0ws1HBsl8DNYL9vRwsO3QmyIJ9zzezeWZ2ZYl9/9vMxpvZYjN7OZj5DzP7tZktDLL8vsJ/OiIisUVttlR6SWEHEIkhDwL3u/vFAEHjvdPdTzWzZOAzM5scrNsH6ObuK4LXN7j7tmAK6plm9rq7P2hmt7t7jzK+1+VEZjPrDjQItpkavNcT6AqsBz4D+pvZQuAyoJO7+6Epr0VEqjC12VLp6Yy6yOENAq41sznAdCJTArcP3ptRosEHuNPMcoAvgBYl1jucM4BX3b3I3TcBnwCnltj3WncvJjIddytgF3AA+IeZXQ7sO+6jExGpXNRmS6WjQl3k8Ay4w917BI/W7n7o7Mzer1YyGwicB/Rz9+7AbCClHPs+nIMlnhcBSe5eSOSMb5brmAAAAQpJREFU0OvAMOD9ozoSEZHKT222VDoq1EW+thtILfF6EvADM6sGYGYdzKxWGdulAdvdfZ+ZdQL6lniv4ND2pUwFrgz6VDYEzgRmHC6YmdUG0tz9XeBuIpdgRUSqMrXZUumpj7rI1+YChcHl0OeBx4lcwswObg7aTOTMSGnvA7ea2VxgCZFLqYc8Dcw1s2x3/16J5W8C/YAcwIEH3H1j8EujLKnA22aWQuTMzj3HdogiIpWG2myp9Mzdw84gIiIiIiKlqOuLiIiIiEgMUqEuIiIiIhKDVKiLiIiIiMQgFeoiIiIiIjFIhbqIiIiISAxSoS4iIiIiEoNUqIuIiIiIxCAV6iIiIiIiMej/AOvEZiRMhTG5AAAAAElFTkSuQmCC ", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAZoAAAEKCAYAAAArYJMgAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xd8VVW2wPHfSocQAoQQQgiEEnooEhiwgo5UFbtgGfQ54zwHns5Ycd4443Mcy9i7YtdREXFUVAQVC6i0gNJb6KHX0EtgvT/uiWZiyk0599yyvp/P/eTec8/Zd20Nd+Wss8/eoqoYY4wxbonyOgBjjDHhzRKNMcYYV1miMcYY4ypLNMYYY1xlicYYY4yrLNEYY4xxlSUaY4wxrrJEY4wxxlWWaIwxxrgqxusAvNS4cWPNysryOgxjjAkpc+fO3aGqqf7uH9GJJisri7y8PK/DMMaYkCIi66qyv5XOjDHGuMoSjTHGGFdZojHGGOMqSzTGGGNcZYnGGGOMq1xNNCIySESWi0i+iIwp4/14EXnHeX+WiGSVeO8OZ/tyERlYYvvLIrJNRBaVaquRiHwuIiudnw3d7Jsxxhj/uJZoRCQaeBoYDHQCRohIp1K7XQvsVtW2wKPAA86xnYDhQGdgEPCM0x7Aq8620sYAU1U1G5jqvDbGGOMxN89oegP5qrpaVY8C44BhpfYZBrzmPJ8AnCUi4mwfp6pHVHUNkO+0h6pOA3aV8Xkl23oNOL82O1PS+z8U8PRX+bw1az0/rN/N0aITbn2UMcaEPDdv2MwANpR4XQD8qrx9VLVIRAqBFGf7zFLHZlTyeWmqutlpa7OINClrJxG5DrgOoEWLFv71pJSP5m/my2XbfnqdFB/DkJx0rj4li47p9avVpqm5gt0HWb5lH2d1TPM6FGNMCW4mGiljm/q5jz/HVouqjgXGAuTm5larzZev7sXhY8fZvu8IizYW8sXSbXy0YBPj527g3K7N+Ms5HWmSlFAb4ZoqeGPGOsZOX82kG06zhG9MEHGzdFYAZJZ43RzYVN4+IhIDJOMri/lzbGlbRSTdaSsd2FbJ/jWSEBtNZqO6DM5J5+FLuzFjzFn8oV8bpizewtmPTOOTBZvd/HhThmPHFVV4+LMVXodijCnBzUQzB8gWkVYiEofv4v7EUvtMBEY6zy8GvlRVdbYPd0altQKygdmVfF7JtkYCH9ZCH/yWXDeWWwd2YNKNp9GqcSKj3prHA5OXcfxErZyImSr4YulW5q7b7XUYxhiHa4lGVYuA0cAUYCkwXlUXi8jdInKes9tLQIqI5AM34YwUU9XFwHhgCTAZGKWqxwFE5G1gBtBeRApE5FqnrfuBs0VkJXC28zrg2qTWY/zv+3L5r1rw7NeruPXd+ZZsAkRREmKjaFwvjgenLMP3N4sxxmuuzt6sqpOASaW2/bXE88PAJeUc+w/gH2VsH1HO/juBs2oSb22Ji4ni3gtySK+fwMOfr+DYCeXRS7sRE233x7pJFWKjoxjdvy13fbSEb/N3cFq23zOZG2NcYt98Lvqfs7K5fVAHPpq/ib98sMj+wg4AAUb8qgUZDerw4JTl9t/cmCBgicZl1/drw+j+bRk3ZwPPT1vtdTgRIT4mmht/nc2CgkKmLN7idTjGRDxLNAFw09ntOLdbM+7/dBmTF9kXn1tUFd/9vnBhjwzapCby0Gcr7BqZMR6zRBMAUVHCgxd3pVtmA259dz7rdx70OqSw5eQZYqKjuHlAe/K37ef9HzZ6G5QxEc4STYAkxEbz1IgeIPA/b8+za
WtcUPq8ZXCXpuRkJPPo5ys4UnTck5iMMZZoAiqzUV0evLgr8wsKefiz5V6HE3ZU/3NKCRHhloHt2bjnEO/M2VDuccYYd1miCbBBXdIZ0bsFL0xfzQ/r7abC2qT8fI2m2OnZjendqhFPTM3n4NEijyIzJrJZovHAn4d0IK1+ArdNWGAlHZeJCLcNbM+O/Ud49fu1XodjTESyROOBpIRY7rswh5Xb9vPUl/lehxM2SpfOiuVmNeLMDk147utVFB48FvC4jIl0lmg80q99Ey7skcHz36xmzY4DXocTFpSfR52VdsuA9uw7UsSz36wKaEzGGEs0nhozpANxMVHc/dFir0MJI2Vnmk7N6nN+9wxe+W4Nm/YcCnBMxkQ2SzQeapKUwB9/nc1Xy7czdelWr8MJeZXNNnPzgHaowiOf2zICxgSSJRqPjTw5i7ZN6vH3j5dw7LjdW1MzWm7pDKB5w7qMPLkl780rYNmWvYELy5gIZ4nGY7HRUdwxuANrdx5knN3rUWMV5BkARvVvS1J8DA98uiwg8RhjLNEEhTM7NKFXVkOemLrS7vWoAX8mam5QN45R/dvy1fLtfL9qh/tBGWMs0QQDEWHM4A5s33eEV75b63U4IUu1/FFnJY08OYtmyQnc/+kyTtiEm8a4zhJNkOjZshG/7pjGc1+vYveBo16HE7Kk0uKZb965mwa0Z0FBIZ8s3ByAqIyJbJZogshtg9qz/2iRrVtTTfqLaTXLd0GPDDo0TeLBKcttglNjXGaJJoi0S0tiaE46b8xYy56DdlZTVf6WzgCio3zlyvW7DvLmrHXuBmZMhLNEE2RGn9mWA0eP27WaavIzzwBwRrtUTm6TwpNf5rP3sE1NY4xbLNEEmQ5N6zOgUxqvfLeGffblVyVVvawvItwxuCO7DhzleZuaxhjXWKIJQqPPbMvew0W8MdNKOlXhK51V5ZwGcponc163Zrz07Ro2F9rUNMa4wRJNEOravAFntEvlxelrOHTUlhHwV1UGA5R068D2nDgBD06xxeiMcYMlmiD1h35t2HXgKO/NK/A6lLCX2agu15yaxb/nbWRBwR6vwzEm7FiiCVK9WzWia/NkXv52jd1U6K8qjDorbXT/tqQkxnHPx0tRf6YYMMb4zRJNkBIRrj21Fat3HOCr5du8DickVLQeTWWSEmK5aUA7Zq/dxZTFW2o1LmMinSWaIDYkJ5305ARemG43cPrLn5kBynNZbibt0upx76RltsS2MbXIEk0Qi42O4ppTspi5eheLNhZ6HU7Qq2nJKyY6ir8M7cT6XQd5/Xsb8WdMbbFEE+Qu69WCxLhoXvp2jdehBL2alM6Knd4ulX7tU3niy5Xs3H+kVuIyJtJZoglyyXViubhncz5ZsNm++PxQwzwDwP8O6cjBo8d5fOrKWmjNGONqohGRQSKyXETyRWRMGe/Hi8g7zvuzRCSrxHt3ONuXi8jAytoUkbNEZJ6I/Cgi34pIWzf7FkhX9mnJ0eMnGJ9nQ50rUluDxbLTkri8dwvenLWelVv31U6jxkQw1xKNiEQDTwODgU7ACBHpVGq3a4HdqtoWeBR4wDm2EzAc6AwMAp4RkehK2nwWuEJVuwNvAX9xq2+Blp2WRJ/WjXhz1jqO21DncvlKZ7VxTgN/OrsddeOiuXfS0lppz5hI5uYZTW8gX1VXq+pRYBwwrNQ+w4DXnOcTgLPE900xDBinqkdUdQ2Q77RXUZsK1HeeJwObXOqXJ67qk0XB7kN8s8KGOlekdtIMNEqM44Yzs/lq+XamrdheS60aE5ncTDQZwIYSrwucbWXuo6pFQCGQUsGxFbX5W2CSiBQAVwH310ovgsSAzmk0SYrn9Rk2Gqo8tX2j5W9ObknLlLrc88kSio7bmjXGVJebiaasPy5LfxOUt09VtwP8CRiiqs2BV4BHygxK5DoRyRORvO3bQ+cv1djoKEb0bsE3K7azbucBr8MJSgq1d0oDxMdEc8fgjqzYup83Z62vvYaNiTBuJpoCILPE6+b8spz10z4iEoOv5LWrgmPL3C4iqUA3VZ3lbH8HOLmsoFR1rKrmqmpuampqdfrlmRG9WxAlwlv2pVeuWswzAAzsnMapbRvz8GfL2WVLbBtTLW4mmjlAtoi0EpE4fBf3J5baZyIw0nl+MfCl+uofE4Hhzqi0VkA2MLuCNncDySLSzmnrbCDsruI2TU7g7I5pjM/bYMsPl8WFcRIiwt/O7cSBo8d56DOb3dmY6nAt0TjXXEYDU/B96Y9X1cUicreInOfs9hKQIiL5wE3AGOfYxcB4YAkwGRilqsfLa9PZ/jvgPRGZj+8aza1u9c1Lw3tnsvvgMb5YutXrUIKOorU26qyk7LQkRvbN4u3Z622GBmOqQSJ5ptrc3FzNy8vzOowqOX5COfWBL2nfNIlXr+ntdThB5fp/zSV/234+v+mMWm+78NAxznzoa1o1TuTd/+7rSkIzJlSIyFxVzfV3f5sZIMRERwkX92zOtBXbbUXIAEquE8ttg9qTt243E+eH1ch5Y1xniSYEXdIzkxMKE2ymgP+gNViPxh+X9Myka/Nk7p20lANHitz7IGPCjCWaENQipS59W6fw7twCWxStBEVrtExAZaKihL+d25mte4/w1Ff5rn2OMeHGEk2IurRXc9bvOsjMNTu9DiWouH3ppGfLhlx4UgYvTV/Dmh12P5Mx/rBEE6IGd0knKSGGd6189pNAjWsZM6gDsdHCPR8vCcwHGhPiLNGEqITYaM7r1oxJCzdTeOiY1+EEhUAVEZvUT+CGs7KZumwbXy2zueeMqYwlmhB2Wa9MjhSdsFFQJQRq2PE1p7SideNE7v54iS37bEwlLNGEsJyMZNqnJfH+PCufQeBKZwBxMVHcdV5n1uw4wNhvVgfug40JQZZoQpiIcH6PDOat32MTbQKgLo45+6XT26UyNCedp77KZ/3OgwH8ZGNCiyWaEDesezNE4P0fNnodSlAI9A37d57TiZgo4W8TF9X6MgXGhAtLNCGuWYM69GmVwgc/bIz4Lzovut80OYE/nd2Or5Zv57MlNv+cMWWxRBMGLuiRwdqdB/lxwx6vQ/GUbynnwH/uyJOz6NA0if+buJiDR23GAGNKs0QTBgblNCU+JsrKZ+DqzADliY2O4p7zu7Cp8DBPTLUZA4wpzRJNGKifEMuvO6Xx0fxNHIvgJYe9LB3mZjXi0tzmvDh9NSu37vMsDmOCkSWaMHFB9wx2HzzGtBWhszx1bfOqdFZszOCO1EuI4S8f2MAAY0qyRBMmTm+XSsO6sRFdPlOt/aWcq6JRYhy3D+rArDW7+ODHyP3/YExplmjCRFxMFOd2a8bnS7ay97BNSeOVy3Iz6Z7ZgH98stSmBjLGYYkmjJzfI4MjRSeYvGiL16F4QsHb2hm+pQTuOb8Luw4c5cEpyzyNxZhgYYkmjPTIbEBWSl0+iNDymWpgZwYoT5eMZEaenMWbs9Yzd91ur8MxxnOWaMKIiHBet2bMXL2TbfsOex2OJzw+ofnJLQPa0yy5Dnf8ewFHiyJ3JKAx4GeiEZFTReQa53mqiLRyNyxT
Xed2a8YJhU8XRmb5LFgkxsfw9/M7s2Lrfp7/ZpXX4RjjqUoTjYj8DbgduMPZFAv8y82gTPVlpyXRPi2JjyJw6QCvR52VdmaHNM7pms6TX+azavt+r8MxxjP+nNFcAJwHHABQ1U1AkptBmZo5t1s6eet2s2nPIa9DCbhArUfjr7+e24mE2Cju+PdCTpywe2tMZPIn0RxV391nzqAeSXQ3JFNT53RtBsAnCzZ7HElgacDW2PRfk6QE/ndoR2av2cX4vA1eh2OMJ/xJNONF5HmggYj8DvgCeNHdsExNZDVOJCcjmY8WRFb5LNhKZ8Uuzc2kT+tG3DtpacQO0jCRrdJEo6oPAROA94D2wF9V9Qm3AzM1c263dBYUFEbcgmhBVjkDfOW8ey/I4XDRCf7voyVeh2NMwPkzGOABVf1cVW9V1VtU9XMReSAQwZnqG+qUzz6OoPJZME8v1jq1Hjec2ZZPFmxm6lJbt8ZEFn9KZ2eXsW1wbQdialdGgzr0bNkwokafKerJMgH+uu70NrRPS+IvHyxi/xFbt8ZEjnITjYhcLyILgfYisqDEYw2wIHAhmuo6t2s6y7bsi6xp64M3zxAXE8V9F+WwZe9h/jnZpqcxkaOiM5q3gHOBic7P4kdPVb0yALGZGhqSk44IfBQh5bNgLp0VO6lFQ0b2zeL1GeuYuXqn1+EYExDlJhpVLVTVtao6QlXXAYfwDXGuJyItAhahqbYm9RPo0yqFjxdsioj1UZSgPqH5yW2D2tOiUV1uf28Bh44e9zocY1znz2CAc0VkJbAG+AZYC3zqT+MiMkhElotIvoiMKeP9eBF5x3l/lohklXjvDmf7chEZWFmb4vMPEVkhIktF5AZ/Ygx353ZrxurtB1iyea/XoQREMI46K61uXAwPXNSVdTsP8uCU5V6HY4zr/BkMcA/QB1ihqq2As4DvKjtIRKKBp/ENHOgEjBCRTqV2uxbYraptgUeBB5xjOwHDgc7AIOAZEYmupM2rgUygg6p2BMb50bewN6hLU2KiJDJGn4XQSVvfNilc1aclr3y/hry1u7wOxxhX+ZNojqnqTiBKRKJU9Sugux/H9QbyVXW1qh7F98U/rNQ+w4DXnOcTgLPEN4fIMGCcqh5R1TVAvtNeRW1eD9ytqicAVHWbHzGGvUaJcfRtk8KnCzeHffks2EedlTZmcAcyGtThtgkLOHzMSmgmfPmTaPaISD1gGvCmiDwO+DM2MwMoOedGgbOtzH1UtQgoBFIqOLaiNtsAl4lInoh8KiLZZQUlItc5++Rt377dj26EviE56azdeTDsy2eqoVE6K5YY7yuhrd5xgEc+X+F1OMa4xp9EMww4CPwJmAyswjf6rDJl/ZMv/Sd1eftUdTtAPHBYVXOBF4CXywpKVceqaq6q5qamppYZeLgZ2Lkp0VESEUsHhFKiATilbWNG9G7Bi9NXM2+9LZJmwpM/U9AcUNUTqlqkqq/hu0YyyI+2C/BdMynWHCh99+BP+4hIDJAM7Krg2IraLMA3TQ7A+0BXP2KMCI0S4+jTuhGTwrx8Fqo9+/OQDjStn8Ct7863EpoJSxXdsFnfGfn1lIgMcEZ1jQZWA5f60fYcIFtEWolIHL6L+xNL7TMRGOk8vxj40pkpeiIw3BmV1grIBmZX0uYHwJnO8zMAq0WUMLhLOqt3HGB5GN+86VvKOcROaYCkhFjuu6grq7Yf4LEvVnodjjG1rqIzmjfwTaK5EPgt8BlwCTBMVUtf1P8F55rLaGAKsBQYr6qLReRuETnP2e0lIEVE8oGbgDHOsYuB8cASfOW6Uap6vLw2nbbuBy5yZjO4z4nZOAZ2bkqUwKQwL5+FWums2BntUrksN5Ox01Yxd52V0Ex4kfJKKSKyUFVznOfRwA6ghaqGzZ/Eubm5mpeX53UYATN87Ax27D/KFzed4XUorrjgme+oFx/DG9f+yutQqmXf4WMMemw6sdHCpBtPo25cjNchGVMmEZnrXA/3S0VnNMeKn6jqcWBNOCWZSDQ0J538bfvDdu6zUL/8lJQQy8OXdmPdroPcO2mp1+EYU2sqSjTdRGSv89gHdC1+LiLhPU42TA3s0hQR+GRh+N68GWxLOVdVn9YpXHtKK/41cz3frIiM4fcm/FU011m0qtZ3HkmqGlPief1ABmlqR5OkBHplNQrbYc4hfkLzk1sGtie7ST1ufXc+ew4e9TocY2rMn/toTBgZ0qUpy7fuI3/bfq9DqX2qITjm7JcSYqN59LLu7DpwlDs/XFz5AcYEOUs0EWZQl3QAPg3T8lmIV85+0iUjmRvPyuaj+ZuYGEGL15nwZIkmwjRNTiC3ZUMmLQq/8lm4lM6KXd+vDd0zG3DnB4vYUnjY63CMqTZLNBFocE46SzfvZc2OA16HUqtUQ2M9Gn/FREfxyKXdOFJ0nNveWxDWszqY8ObPejT7Sow+K35sEJH3RaR1III0tWtwl6YATArD8lmojzorrXVqPf53SEemrdjO6zPWeR2OMdXizxnNI8Ct+GZJbg7cgm/SynGUM3GlCW7NGtShR4sGYZdoNOyKZz5X9mlJ//ap/GPSUpZtsTsLTOjxJ9EMUtXnVXWfqu5V1bHAEFV9B2jocnzGJUNz0lm8aS/rdoZP+SzcSmfFRIQHL+lG/YRYbnj7B5t404QcfxLNCRG5VESinEfJCTXD80/ICDDIKZ99GkaDAkJtPZqqaFwvnocv7caKrftt1gATcvxJNFcAVwHbgK3O8ytFpA6+CS5NCGresC7dMsOvfBbOzmiXym9PbcXrM9bxxZKtXodjjN/8WY9mtaqeq6qNVTXVeZ6vqodU9dtABGncMaRLUxYUFLJh10GvQ6kVvtPrMD2lcdw6qD2dm9Xn1gnz2brXhjyb0ODPqLNUEfmziIwVkZeLH4EIzrhrSI5z8+ai8DirUdWwLZ0Vi4+J5okRPTh87AQ3jf+REyesem2Cnz+lsw/xrXz5BfBJiYcJcZmN6pKTkcwnYTT3WZjnGQDapNbjb+d24rv8nbwwfbXX4RhTKX8WvKirqre7HonxxOCcpvxz8nI27jlERoM6Xodj/HRZr0y+WbGdB6csp2+bFLo2b+B1SMaUy58zmo9FZIjrkRhPDM0Jn7nPwnnUWWkiwn0X5tAkKZ7Rb/3A3sPHKj/IGI/4k2huxJdsDtl6NOGnZUoinZvVD5s1aiQiimc+DerG8eTlPdi45xC3T7Apakzw8mfUWZKqRqlqHVuPJjwNyUnnh/V72LTnkNeh1Ei4zgxQkZ4tG3HbwPZ8umiLTVFjgla5iUZEOjg/TyrrEbgQjduKR5+F+j01kVQ6K+l3p7XmrA5NuOeTJSwo2ON1OMb8QkVnNDc5Px8u4/GQy3GZAGrVOJGO6fVDPtFAZCaaqCjhoUu6kVovnlFvzaPwkF2vMcGloqWcr3N+9i/jcWbgQjSBMDSnKfPW72FzYeiWzyKvcPazholxPHn5SWzec5jbJsy36zUmqPi1Ho2InCwil4vIb4ofbgdmAuunmzdD+J4aVY2owQCl9WzZkNsHdWDK4q28+v1ar8Mx5if+zAzwBr5
S2alAL+eR63JcJsBap9ajQ9Ok0C+fRW6eAeC3p7Xi1x2bcO+kpfy4wa7XmODgzxlNLnCKqv5BVf/HedzgdmAm8IbmpJO3bnfILhtsxSLf/TUPXdKNJkkJjHpzHrsPHPU6JGP8SjSLgKZuB2K8N6RriM99Fqbr0VRVg7pxPHPFSWzfd4Qbxv3AcZsPzXjMn0TTGFgiIlNEZGLxw+3ATOC1Sa1H+7TQLZ8p4beUc3V1y2zA3cM6M33lDh7+bLnX4ZgI589cZ3e5HYQJHkNy0nls6gq27j1MWv0Er8MxNTC8dwvmF+zhma9X0S2zAQM7W2HCeKPCMxoRiQbuVNVvSj8CFJ8JsKFdm6IKk0Nw5U3fqDNT0l3ndaZb82RuHj+fVdv3ex2OiVAVJhpVPQ4cFJHkAMVjPNa2SRLt0uqF5NxnvtKZ11EEl/iYaJ69sifxMVH8/o257D9S5HVIJgL5c43mMLBQRF4SkSeKH/40LiKDRGS5iOSLyJgy3o8XkXec92eJSFaJ9+5wti8XkYFVaPNJEbE/3WpgSE46c9buYlsIruBoeeaXmjWow5OX92D19v12M6fxhD+J5hPgTmAaMLfEo0JO2e1pYDDQCRghIp1K7XYtsFtV2wKPAg84x3YChgOdgUHAMyISXVmbIpIL2MIcNTQ0J91XPlscWuUz+/4s38ltGjNmcAcmLdzC2Gm2WJoJrEoHA6jqa9VsuzeQr6qrAURkHDAMWFJin2H8PNhgAvCU+IYNDQPGqeoRYI2I5DvtUV6bThJ6ELgcuKCaMRsgOy2Jtk3q8cmCzfymb5bX4fhNURt1VoHfndaa+RsKeWDyMjqm1+f0dqleh2QihD8zA2SLyAQRWSIiq4sffrSdAWwo8brA2VbmPqpaBBQCKRUcW1Gbo4GJqhp6FxeC0JCcdGav3cW2faFVPrM0Uz4R4Z8Xd6VdWhKj3ppngwNMwPhTOnsFeBYoAvoDrwNv+HFcWf/mSxc3ytunSttFpBlwCfBkpUGJXCcieSKSt3379sp2j1jF5bMpITT6zEpnlUuMj+HFkbnERUfxu9fyKDxoMz0b9/mTaOqo6lRAVHWdqt4F+DN7cwGQWeJ1c2BTefuISAyQDOyq4NjytvcA2gL5IrIWqOuU235BVceqaq6q5qamWumgPO3S6tEmNZFJITTJppb3p4j5D80b1uW5q3qyYfdBRr89j6LjJ7wOyYQ5v0adiUgUsFJERovIBUATP46bA2SLSCsRicN3cb/0jAITgZHO84uBL9U3JGYiMNwZldYKyAZml9emqn6iqk1VNUtVs4CDzgADU00iwtCcdGat2cmO/Ue8DsdvkTx7c1X0ymrEP87PYfrKHdzzyVKvwzFhzp9E80egLnAD0BO4kp+TQ7mcay6jgSnAUmC8qi4WkbtF5Dxnt5eAFOfs4yZgjHPsYmA8voEDk4FRqnq8vDb97aypmiFd0zkRojdvmspd2iuTa09txavfr+Xt2eu9DseEMX9Gnc0BEBFV1Wuq0riqTgImldr21xLPD+O7tlLWsf8A/uFPm2XsU68qcZqytU9LonXjRCYt3MyVfVp6HU6lVNVu2KyiOwZ3IH/bfu78YBGtGifSp3WK1yGZMOTPqLO+IrIE3xkEItJNRJ5xPTLjORFhSE46M1eHTvnM8kzVxERH8cSIHrRIqcv1/5rL+p0HvQ7JhCF/SmePAQOBnQCqOh843c2gTPAY6pTPPg2B8pkNOque5DqxvDSyFycUrnl1to1EM7XOr6WcVXVDqU3HXYjFBKEOTX03b370Y+kBg8FH1eY6q65WjRN5/qqebNh1iOveyONIkf0TN7XHn0SzQUROxne/SpyI3IJTRjPhT0Q4r1szZq/dxaY9h7wOp0KK2qizGujTOoUHL+nKrDW7uH3CApsTzdQafxLNfwOj8N2BXwB0B/7gZlAmuJzXrRkAHy8I/rMaUzPDumdwy4B2fPDjJh75fIXX4ZgwUWmiUdUdqnqFqqapahNVvRL4TQBiM0Eiq3EiXZsnM3F+cCcaK53VjlH923JZbiZPfpnP+Dmlq+bGVJ1f12jKcFOtRmGC3nndmrFo415WB/H8WLYeTe0QEe65oAunZTfmz+8vZPpKm6rJ1Ex1E439c44w53RthghBf1Zjv5q1IzY6iqevOIm2Tepx/b/msWTTXq9DMiGsuonGrhJGmKbJCfTOasTE+ZvLUOWtAAAYkklEQVSC9iJxkIYVsuonxPLy1b1ISohh5Cuz7R4bU23lJhoR2Scie8t47AOaBTBGEyTO696M1dsPsDho/7q1mQFqW7MGdXj9v3pztOgEV708i+37QuPGXRNcyk00qpqkqvXLeCSpaqVT15jwM6RLOjFRwkdBXD6zPFP7stOSePnqXmzde5irX5nNvsN2Q6epmuqWzkwEapgYx2nZjflo/iZOnAi+OpWVztzTs2VDnr2yJ8u37OO61+dy+Jjd0Gn8Z4nGVMl53ZuxqfAwc9fv9jqUX7BRZ+7q374JD17SlRmrd/Knd37keBD+sWGCkyUaUyVnd2pKfEwUE4N0ShqbGcBdF/Rozl+GduTTRVu488NFQTswxAQXSzSmSurFx3BWxyZMWriZY0G2MqN96QXGb09rzfX92vDWrPXc9+ky++9uKmWJxlTZ+d0z2HngaNDdyGels8C5bWB7ftO3JWOnreaxL1Z6HY4JcjZ6zFRZv/ZNaFg3lvfmbuTMDmleh/MfLM8Ehohw17mdOXzsOI9PXUlCbDTX92vjdVgmSFmiMVUWFxPFsO4ZvDV7PYUHj5FcN9brkAAbdRZoUVHCfRd25fCxEzwweRl1YqO4+pRWXodlgpCVzky1XHhSBkeLTvDxwuAZFOBbytnOaQIpOkp4+NJuDOycxl0fLWHc7PVeh2SCkCUaUy05GclkN6nHv+dt9DoU47FYZznofu1TueP9hbz/Q4HXIZkgY4nGVIuIcOFJzZm7bjdrdhzwOhzAJuDzUnxMNM9d2ZO+rVO4efx8SzbmP1iiMdV2QY8MROD9eUHypWLr0XgqITaaF0fm0qd1CjeNn8+EuUHye2E8Z4nGVFvT5ARObduY9+ZtDIopaRS7YdNrdeNieGlkL05p05hbJ8znnTl2zcZYojE1dNFJzdm45xCz1+7yOhTAzmiCQZ0435nNadmp3P7eQt6aZckm0lmiMTUyoHMaiXHRvBcEZRK7Qz14JMRGM/aqnvRvn8qf31/IGzPWeh2S8ZAlGlMjdeNiGJKTzqSFmzl01NsZfX2lMxMsEmKjee6qnvy6YxPu/HAxL3+7xuuQjEcs0Zgau6hncw4cPc4nCzd7HYqVzoJMfEw0z1zRk4Gd07j74yU8+vkKO/OMQJZoTI39qlUjWjVO9PxmPfv+Ck5xMVE8fflJXNyzOY9PXcn/fbQkKAaPmMCxRGNqTEQY3iuTvHW7Wbl1n2dxKDYzQLCKiY7inxd15dpTW/Hq92u55d35QTf7t3GPJRpTKy7q2ZzYaOHt2Rs8jcPSTPCKihL+MrQjtwxox79/2Mj1/5pnK3VGCEs0plY0rhfPgM5NeW9egWdfHl
Y6C34iwugzs/n7sM5MXbaVq1+Zzb7Dx7wOy7jM1UQjIoNEZLmI5IvImDLejxeRd5z3Z4lIVon37nC2LxeRgZW1KSJvOtsXicjLIhIcUwpHkMt7t6Dw0DEmL9riyecr2ClNiLiqbxaPXdadvLW7ueS5GWwpPOx1SMZFriUaEYkGngYGA52AESLSqdRu1wK7VbUt8CjwgHNsJ2A40BkYBDwjItGVtPkm0AHIAeoAv3Wrb6ZsfVun0DKlLm97OCjAZgYIHcO6Z/Dy1b0o2H2IC575jmVb9nodknGJm2c0vYF8VV2tqkeBccCwUvsMA15znk8AzhLf1dxhwDhVPaKqa4B8p71y21TVSeoAZgPNXeybKUNUlHBZr0xmrdnFqu37Ax+Alc5CzuntUhn/+76owiXPzuDblTu8Dsm4wM1EkwGUvDJc4Gwrcx9VLQIKgZQKjq20TadkdhUwuaygROQ6EckTkbzt24NrKeJwcHHP5sREiSdDnX2jzgL+saaGOjWrz/ujTiajYR2ufmU27+Z5O6DE1D43E01Z/+RL/81Z3j5V3V7SM8A0VZ1eVlCqOlZVc1U1NzU1taxdTA00SUrg7E5pTJjrzaAAyzOhKT25DuP/uy99Wqdw64QFdmNnmHEz0RQAmSVeNwdKL8f40z4iEgMkA7sqOLbCNkXkb0AqcFOt9MBUy1V9W7L74DEmzg/s6pv2vRTa6ifE8so1vX66sfOGcT96Pq2RqR1uJpo5QLaItBKROHwX9yeW2mciMNJ5fjHwpXONZSIw3BmV1grIxnfdpdw2ReS3wEBghKranWAe6ts6hfZpSbzy3dqA/lWq2BQ0oS42OooHL+7K7YM68PGCTVz6/Aw2Fx7yOixTQ64lGueay2hgCrAUGK+qi0XkbhE5z9ntJSBFRPLxnYWMcY5dDIwHluC71jJKVY+X16bT1nNAGjBDRH4Ukb+61TdTMRHh6lOyWLp5L7PXBG75AFW1UWdhQES4vl8bXrgql9Xb93PeU98xb/1ur8MyNSCRXAfNzc3VvLw8r8MIS4eOHqfv/VPp2zqFZ6/sGZDPbH3HJ4zq35abB7QPyOcZ963Yuo/fvpbHlr2Hue+CHC7qaYNJg4GIzFXVXH/3t5kBjCvqxEUzvFcLpizeQsHugwH5zMj9kyl8tUtL4sNRp9CzRUNufnc+d3+0xOZIC0GWaIxrrurbEhHhjZnrAvJ5qjbqLBw1TIzj9Wt7c80pWbz83RqGj51pMwmEGEs0xjUZDeowsHMa42Zv4ODRosB8qI0GCEux0VH87dzOPDmiB8s272XoE9P5Lt9u7gwVlmiMq649tRWFh47xzhy7Cc/U3LndmvHh6FNplBjHVS/N4qkvV9raNiHAEo1xVc+Wjeid1YgXpq12tbZePKjFzmfCX9sm9fhg1Cmc07UZD322gv96bQ479h/xOixTAUs0xnXX92vDpsLDTPzR/Rs4rXIWGRLjY3h8eHf+Pqwz36/ayaDHpvPNCptSKlhZojGu69c+lQ5Nk3jum1WulTkieJR+xBIRruqbxcTRp9AoMZaRL8/mno+XcKTIZhMINpZojOuKb8BbuW0/U5dtc+UzivOM3bAZeTo0rc/E0adyVZ+WvPjtGi585ntvZg835bJEYwJiaE46mY3q8NSXK12dlsZKZ5EpITaav5/fhbFX9WTjnkMMfWI6r363xgYKBAlLNCYgYqKjGN2/LfMLCpm6tPbPaiJ5hgvzswGdmzL5xtPp0zqFuz5awogXZrJ+Z2BuGDbls0RjAubCk5rTMqUuj3y+otb/0vy5dGYiXdPkBF65uhf/vKgrSzbtZdDj03hjxlo7u/GQJRoTMLHRUdx4VjZLNu9lyuItrnyGlc4M+K4LXtorkyl/Op2eLRty54eLufKlWazbecDr0CKSJRoTUMO6Z9AmNZFHPl/B8Vr8C9MqZ6YszRrU4fX/6s39F+awoKCQAY9O48mpK21kWoBZojEBFR0l/PHX7Vi5bT8f/LCx1tpVp3gmdkpjShERhvduwdSbz+DXHdN4+PMVDHl8OjNW7fQ6tIhhicYE3NCcdLo2T+bBKctrbQ40O6MxlUmrn8DTV5zEK9f04ujxE4x4YSY3j59vswoEgCUaE3BRUcJfz+nElr2Hee6b1bXatp3QmMr0b9+Ez/54Bn/o14YPf9xI/we/Zuy0VVZOc5ElGuOJ3KxGnNM1nee/WcXGPbZUrwmsOnHR3DaoA5P/eBq5WQ25d9IyBjw6jSmLt9hQeRdYojGeGTO4AwAPfLqsxm0VfzfYzACmKto2SeKVa3rz6jW9iI2O4vdvzGXECzNZtLHQ69DCiiUa45nmDevy+9NbM3H+Jr5dWTtri1jpzFRHv/ZNmHzjadw9rDPLt+zjnCe/ZdRb88jfZlPZ1AZLNMZTf+jflqyUuvz5/YUcOlr9GrnaQs6mhmKio/hN3yy+vrU/o/u35atl2xjw6Dfc8u58Nuyy2QVqwhKN8VRCbDT3XpjD+l0HeWzqimq383PpzJiaSa4Tyy0D2zPttv5cc0orJs7fxJkPf81fPlhoCaeaLNEYz53cpjHDe2Xy4vQ1zFu/u0ZtWenM1JbG9eK585xOTLu1P5fmZvLOnA30e+hr/vTOjyzfss/r8EKKJRoTFP48tCPpyQncOO4H9h4+VuXjrXBm3NI0OYF/XJDjO8M5OYspi7cw8LFpXPvqHPLW7vI6vJBgicYEhfoJsTw+vAeb9hzmrx8sqvLxPy/lbKc0xh3pyXX4yzmd+H7Mmdx0djvmrd/Nxc/N4LynvuXdvA0cPmb34ZTHEo0JGj1bNuSPZ2XzwY+beGPmumq1YaUz47YGdeO44axsvhtzJn8f1pmDR49z64QF9LlvKvdNWmrXccoQ43UAxpT0h/5t+WHDHu6auJg2jRM5uW1jv46z0pkJtLpxMVzVN4sr+7RkxuqdvDFjHS9+u4ax01dzatvGXNyzOQM6NaVOXLTXoXrOEo0JKtFRwuPDu3PhM99z/ZvzeO/6k2nbpF6lx9nN3MYrIsLJbRpzcpvGbC48xNuzN/De3AJuHPcjSfExDMlJ56Kezclt2ZCoqMg85bbSmQk6SQmxvDSyF7HRwhUvzmTtDv/XELHZm42X0pPrcNPZ7Zh+W3/e+t2vGNC5KR8t2MSlz8+g7/1T+duHi5ixametLpERCiSS5/XJzc3VvLw8r8Mw5Vi+ZR/Dx86gTmw0b1/Xh5YpieXuW3jwGN3u/ow7z+nEtae2CmCUxlTs4NEiPlu8lUkLN/PNiu0cKTpBSmIcZ3dKo1/7JpzSNoWkhFivw6wSEZmrqrn+7m+lMxO02jdN4l+//RVXvDiLC575nhd+05OeLRuVue9P69EEMkBj/FA3Lobze2Rwfo8MDhwp4uvl2/l00WY+XrCZcXM2EBMlnNSiIWe0T+WUto3p3Kw+sdHhVWxytTciMkhElotIvoiMKeP9eBF5x3l/lohklXjvDmf7chEZWFmbItLKaWOl02acm30zgdG5WTL/vv5k6ifEMGLsLF6cvrrCtd+tcmaCWWJ8DEO7pvPU5Scx786zGXddH
647vTUHjhbx4JTlnP/0d3S96zMuf2Emj3y+gukrt1N4qOr3lQUb10pnIhINrADOBgqAOcAIVV1SYp8/AF1V9b9FZDhwgapeJiKdgLeB3kAz4AugnXNYmW2KyHjg36o6TkSeA+ar6rMVxWils9Cx+8BRbntvAZ8v2UrPlg356zmd6JbZ4D/e7/H3z/nbuZ245hQrnZnQs33fEWav2cWctbvIW7eLJZv2Uvw3VfOGdeiUXp/OzZLpmJ5E69R6tGhUl7gYb858gql01hvIV9XVACIyDhgGLCmxzzDgLuf5BOAp8V3NHQaMU9UjwBoRyXfao6w2RWQpcCZwubPPa067FSYaEzoaJsYx9qqeTJhbwAOTlzPs6e/o07oRl/TM5LTsxsQ4pQY7oTGhKjUpnqFd0xnaNR2A/UeK+GH9bhZt3MviTYUs2byXz5du/WmEZZRARsM6ZKUk0qJRXdLqJ5BWP54mSQmkJsWTUi+OxPgYEuNiiPZ4tJubiSYD2FDidQHwq/L2UdUiESkEUpztM0sdm+E8L6vNFGCPqhaVsb8JEyLCJbmZDM5J540Z6/jXzHXc/O78X+xjTDioFx/DadmpnJad+tO2A0eKWL51H2t3HGDtjgOs2XmQtTsOsHDjZvYcLL/EVjcumsT4GOKio4iNFmKio3hpZG6FA2xqk5uJpqx/8aXrdOXtU972ss4TK9r/l0GJXAdcB9CiRYuydjFBrl58DNf3a8PvT2/Nks17mbl6J+t3HWTvoWOc0S618gaMCVGJ8TGc1KIhJ7Vo+Iv3Dh87zvZ9R9i27wjb9x1m98FjHDhSxL7DRRw4UsT+I0UcPX6CouNK0YkTxMcE7kZSNxNNAZBZ4nVzYFM5+xSISAyQDOyq5Niytu8AGohIjHNWU9ZnAaCqY4Gx4LtGU/VumWARFSV0yUimS0ay16EY47mE2GgyG9Uls1Fdr0P5BTevJM0Bsp3RYHHAcGBiqX0mAiOd5xcDX6pvdMJEYLgzKq0VkA3MLq9N55ivnDZw2vzQxb4ZY4zxk2tnNM41l9HAFCAaeFlVF4vI3UCeqk4EXgLecC7278KXOHD2G49v4EARMEpVjwOU1abzkbcD40TkHuAHp21jjDEes5kBbHizMcZUSVWHN4fX7afGGGOCjiUaY4wxrrJEY4wxxlWWaIwxxrjKEo0xxhhXRfSoMxHZDlRvcXpojO9G0XASbn0Kt/6A9SkUhFt/4Jd9aqmqfk/DEdGJpiZEJK8qw/tCQbj1Kdz6A9anUBBu/YGa98lKZ8YYY1xlicYYY4yrLNFU31ivA3BBuPUp3PoD1qdQEG79gRr2ya7RGGOMcZWd0RhjjHGVJZpqEJFBIrJcRPJFZIzX8fhDRF4WkW0isqjEtkYi8rmIrHR+NnS2i4g84fRvgYic5F3k5RORTBH5SkSWishiEbnR2R6S/RKRBBGZLSLznf78n7O9lYjMcvrzjrNEBs4yGu84/ZklIllexl8REYkWkR9E5GPndUj3SUTWishCEflRRPKcbSH5ewcgIg1EZIKILHP+PfWtzf5YoqkiEYkGngYGA52AESLSyduo/PIqMKjUtjHAVFXNBqY6r8HXt2zncR3wbIBirKoi4GZV7Qj0AUY5/y9CtV9HgDNVtRvQHRgkIn2AB4BHnf7sBq519r8W2K2qbYFHnf2C1Y3A0hKvw6FP/VW1e4lhv6H6ewfwODBZVTsA3fD9v6q9/qiqParwAPoCU0q8vgO4w+u4/Iw9C1hU4vVyIN15ng4sd54/D4woa79gfuBb7O7scOgXUBeYB/wK341yMc72n37/8K3L1Nd5HuPsJ17HXkZfmjtfVGcCH+Nbej3U+7QWaFxqW0j+3gH1gTWl/zvXZn/sjKbqMoANJV4XONtCUZqqbgZwfjZxtodcH50SSw9gFiHcL6fE9COwDfgcWAXsUd8S5fCfMf/UH+f9QiAlsBH75THgNuCE8zqF0O+TAp+JyFwRuc7ZFqq/d62B7cArTnnzRRFJpBb7Y4mm6qSMbeE2dC+k+igi9YD3gD+q6t6Kdi1jW1D1S1WPq2p3fGcBvYGOZe3m/Az6/ojIOcA2VZ1bcnMZu4ZMnxynqOpJ+MpIo0Tk9Ar2DfY+xQAnAc+qag/gAD+XycpS5f5Yoqm6AiCzxOvmwCaPYqmprSKSDuD83OZsD5k+ikgsviTzpqr+29kc8v1S1T3A1/iuPTUQkeJl10vG/FN/nPeT8S2JHkxOAc4TkbXAOHzls8cI7T6hqpucn9uA9/H9URCqv3cFQIGqznJeT8CXeGqtP5Zoqm4OkO2MmokDhgMTPY6puiYCI53nI/Fd4yje/htndEkfoLD4FDqYiIgALwFLVfWREm+FZL9EJFVEGjjP6wC/xndR9ivgYme30v0p7ufFwJfqFM2DhareoarNVTUL37+VL1X1CkK4TyKSKCJJxc+BAcAiQvT3TlW3ABtEpL2z6SxgCbXZH68vRIXiAxgCrMBXP/9fr+PxM+a3gc3AMXx/kVyLr/Y9FVjp/Gzk7Cv4RtatAhYCuV7HX06fTsV3yr4A+NF5DAnVfgFdgR+c/iwC/upsbw3MBvKBd4F4Z3uC8zrfeb+1132opH/9gI9DvU9O7POdx+Li74BQ/b1zYuwO5Dm/ex8ADWuzPzYzgDHGGFdZ6cwYY4yrLNEYY4xxlSUaY4wxrrJEY4wxxlWWaIwxxrjKEo0x1SQi+52fWSJyeS23/edSr7+vzfaNCSRLNMbUXBZQpUTjzAJekf9INKp6chVjMiZoWKIxpubuB05z1ib5kzMx5oMiMsdZr+P3ACLST3zr57yF70Y3ROQDZ2LGxcWTM4rI/UAdp703nW3FZ0/itL3IWQ/lshJtf11iTZE3nZkTEJH7RWSJE8tDAf+vYyJeTOW7GGMqMQa4RVXPAXASRqGq9hKReOA7EfnM2bc30EVV1ziv/0tVdzlTzswRkfdUdYyIjFbf5JqlXYjvLu5uQGPnmGnOez2AzvjmnfoOOEVElgAXAB1UVYunuDEmkOyMxpjaNwDfXFA/4lu2IAXfIlEAs0skGYAbRGQ+MBPfRIXZVOxU4G31zfK8FfgG6FWi7QJVPYFvOp4sYC9wGHhRRC4EDta4d8ZUkSUaY2qfAP+jvtUXu6tqK1UtPqM58NNOIv3wTZzZV32rav6Ab66vytouz5ESz4/jW1isCN9Z1HvA+cDkKvXEmFpgicaYmtsHJJV4PQW43lnCABFp58zyW1oyvmWLD4pIB3xLAhQ7Vnx8KdOAy5zrQKnA6fgmnyyTs1ZPsqpOAv6Ir+xmTEDZNRpjam4BUOSUwF7Ft/56FjDPuSC/Hd/ZRGmTgf8WkQX4lsOdWeK9scACEZmnvmn1i72Pb+nj+fhmrr5NVbc4iaosScCHIpKA72zoT9XrojHVZ7M3G2OMcZWVzowxxrjKEo0xxhhXWaIxxhjjKks0xhhjXGWJxhhjjKss0RhjjHGVJRpjjDGuskRjjDHGVf8Pj9aCYq2p0VkAAAAASUVORK5CYII= ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6703667,"math_prob":0.8515167,"size":3166,"snap":"2019-51-2020-05","text_gpt3_token_len":788,"char_repetition_ratio":0.12618595,"word_repetition_ratio":0.02116402,"special_character_ratio":0.22836387,"punctuation_ratio":0.18050541,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96422404,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-17T17:26:15Z\",\"WARC-Record-ID\":\"<urn:uuid:ba87ad87-394b-483a-9b3a-5bc667feec4b>\",\"Content-Length\":\"121149\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1daab976-f9ea-42a1-9f4a-7f3a473d4f5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:484976a1-7895-4a37-bb39-8fc3d6cc7c85>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://docs.fast.ai/callbacks.html\",\"WARC-Payload-Digest\":\"sha1:AAAK7WJINW7RKK5SQCOHGCROVEPUXYDA\",\"WARC-Block-Digest\":\"sha1:5A7WZVJVHZOTVGJR6TVVUKEKL42ETEOT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250589861.0_warc_CC-MAIN-20200117152059-20200117180059-00179.warc.gz\"}"}
https://math.libretexts.org/Bookshelves/Differential_Equations/Book%3A_Elementary_Differential_Equations_with_Boundary_Value_Problems_(Trench)/05%3A_Linear_Second_Order_Equations/5.01%3A_Homogeneous_Linear_Equations/5.1E%3A_Homogeneous_Linear_Equations_(Exercises)
[ "$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n# 5.1E: Homogeneous Linear Equations (Exercises)\n\n•", null, "• Contributed by William F. Trench\n• Andrew G. Cowles Distinguished Professor Emeritus (Mathamatics) at Trinity University\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n## Q5.1.1\n\n1.\n\n1. Verify that $$y_1=e^{2x}$$ and $$y_2=e^{5x}$$ are solutions of $y''-7y'+10y=0 \\tag{A}$ on $$(-\\infty,\\infty)$$.\n2. Verify that if $$c_1$$ and $$c_2$$ are arbitrary constants then $$y=c_1e^{2x}+c_2e^{5x}$$ is a solution of (A) on $$(-\\infty,\\infty)$$.\n3. Solve the initial value problem $y''-7y'+10y=0,\\quad y(0)=-1,\\quad y'(0)=1.\\nonumber$\n4. Solve the initial value problem $y''-7y'+10y=0,\\quad y(0)=k_0,\\quad y'(0)=k_1.\\nonumber$\n\n2.\n\n1. Verify that $$y_1=e^x\\cos x$$ and $$y_2=e^x\\sin x$$ are solutions of $y''-2y'+2y=0 \\tag{A}$ on $$(-\\infty,\\infty)$$.\n2. Verify that if $$c_1$$ and $$c_2$$ are arbitrary constants then $$y=c_1e^x\\cos x+c_2e^x\\sin x$$ is a solution of (A) on $$(-\\infty,\\infty)$$.\n3. Solve the initial value problem $y''-2y'+2y=0,\\quad y(0)=3,\\quad y'(0)=-2.\\nonumber$\n4. Solve the initial value problem $y''-2y'+2y=0,\\quad y(0)=k_0,\\quad y'(0)=k_1.\\nonumber$\n\n3.\n\n1. Verify that $$y_1=e^x$$ and $$y_2=xe^x$$ are solutions of $y''-2y'+y=0 \\tag{A}$ on $$(-\\infty,\\infty)$$.\n2. Verify that if $$c_1$$ and $$c_2$$ are arbitrary constants then $$y=e^x(c_1+c_2x)$$ is a solution of (A) on $$(-\\infty,\\infty)$$.\n3. Solve the initial value problem $y''-2y'+y=0,\\quad y(0)=7,\\quad y'(0)=4.\\nonumber$\n4. Solve the initial value problem $y''-2y'+y=0,\\quad y(0)=k_0,\\quad y'(0)=k_1.\\nonumber$\n\n4.\n\n1. Verify that $$y_1=1/(x-1)$$ and $$y_2=1/(x+1)$$ are solutions of $(x^2-1)y''+4xy'+2y=0 \\tag{A}$ on $$(-\\infty,-1)$$, $$(-1,1)$$, and $$(1,\\infty)$$. What is the general solution of (A) on each of these intervals?\n2. Solve the initial value problem $(x^2-1)y''+4xy'+2y=0,\\quad y(0)=-5,\\quad y'(0)=1.\\nonumber$ What is the interval of validity of the solution?\n3. Graph the solution of the initial value problem.\n4. Verify Abel’s formula for $$y_1$$ and $$y_2$$, with $$x_0=0$$.\n\n5. 
Compute the Wronskians of the given sets of functions.\n\n1. $$\\{1, e^{x}\\}$$\n2. $$\\{e^{x}, e^{x}\\sin x\\}$$\n3. $$\\{x+1, x^{2}+2\\}$$\n4. $$\\{x^{1/2}, x^{-1/3}\\}$$\n5. $$\\{\\frac{\\sin x}{x},\\frac{\\cos x}{x}\\}$$\n6. $$\\{x\\ln |x|, x^{2}\\ln |x|\\}$$\n7. $$\\{e^{x}\\cos\\sqrt{x}, e^{x}\\sin\\sqrt{x}\\}$$\n\n6. Find the Wronskian of a given set $$\\{y_1,y_2\\}$$ of solutions of\n\n$y''+3(x^2+1)y'-2y=0,\\nonumber$\n\ngiven that $$W(\\pi)=0$$.\n\n7. Find the Wronskian of a given set $$\\{y_1,y_2\\}$$ of solutions of\n\n$(1-x^2)y''-2xy'+\\alpha(\\alpha+1)y=0,\\nonumber$\n\ngiven that $$W(0)=1$$. (This is Legendre’s equation.)\n\n8. Find the Wronskian of a given set $$\\{y_1,y_2\\}$$ of solutions of\n\n$x^2y''+xy'+(x^2-\\nu^2)y=0 ,\\nonumber$\n\ngiven that $$W(1)=1$$. (This is Bessel’s equation.)\n\n9. (This exercise shows that if you know one nontrivial solution of $$y''+p(x)y'+q(x)y=0$$, you can use Abel’s formula to find another.)\n\nSuppose $$p$$ and $$q$$ are continuous and $$y_1$$ is a solution of\n\n$y''+p(x)y'+q(x)y=0 \\tag{A}$\n\nthat has no zeros on $$(a,b)$$. Let $$P(x)=\\int p(x)\\,dx$$ be any antiderivative of $$p$$ on $$(a,b)$$.\n\n1. Show that if $$K$$ is an arbitrary nonzero constant and $$y_2$$ satisfies $y_1y_2'-y_1'y_2=Ke^{-P(x)} \\tag{B}$ on $$(a,b)$$, then $$y_2$$ also satisfies (A) on $$(a,b)$$, and $$\\{y_1,y_2\\}$$ is a fundamental set of solutions on (A) on $$(a,b)$$.\n2. Conclude from (a) that if $$y_2=uy_1$$ where $$u'=K{e^{-P(x)}\\over y_1^2(x)}$$, then $$\\{y_1,y_2\\}$$ is a fundamental set of solutions of (A) on $$(a,b)$$.\n\n## Q5.1.2\n\nIn Exercises 5.1.10-5.1.23 use the method suggested by Exercise 5.1.9 to find a second solution $$y_{2}$$ that isn’t a constant multiple of the solution $$y_{1}$$. Choose $$K$$ conveniently to simplify $$y_{2}$$.\n\n10. $$y''-2y'-3y=0$$; $$y_1=e^{3x}$$\n\n11. $$y''-6y'+9y=0$$; $$y_1=e^{3x}$$\n\n12. $$y''-2ay'+a^2y=0$$ ($$a=$$ constant); $$y_1=e^{ax}$$\n\n13. $$x^2y''+xy'-y=0$$; $$y_1=x$$\n\n14. $$x^2y''-xy'+y=0$$; $$y_1=x$$\n\n15. $$x^2y''-(2a-1)xy'+a^2y=0$$ ($$a=$$ nonzero constant);  $$x>0$$; $$y_1=x^a$$\n\n16. $$4x^2y''-4xy'+(3-16x^2)y=0$$; $$y_1=x^{1/2}e^{2x}$$\n\n17. $$(x-1)y''-xy'+y=0$$; $$y_1=e^x$$\n\n18. $$x^2y''-2xy'+(x^2+2)y=0$$; $$y_1=x\\cos x$$\n\n19. $$4x^2(\\sin x)y''-4x(x\\cos x+\\sin x)y'+(2x\\cos x+3\\sin x)y=0$$; $$y_1=x^{1/2}$$\n\n20. $$(3x-1)y''-(3x+2)y'-(6x-8)y=0$$; $$y_1=e^{2x}$$\n\n21. $$(x^2-4)y''+4xy'+2y=0$$; $$y_1={1\\over x-2}$$\n\n22. $$(2x+1)xy''-2(2x^2-1)y'-4(x+1)y=0$$;$$y_1={1\\over x}$$\n\n23. $$(x^2-2x)y''+(2-x^2)y'+(2x-2)y=0$$;$$y_1=e^x$$\n\n## Q5.1.3\n\n24. Suppose $$p$$ and $$q$$ are continuous on an open interval $$(a,b)$$ and let $$x_0$$ be in $$(a,b)$$. Use Theorem 5.1.1 to show that the only solution of the initial value problem\n\n$y''+p(x)y'+q(x)y=0,\\quad y(x_0)=0,\\quad y'(x_0)=0\\nonumber$\n\non $$(a,b)$$ is the trivial solution $$y\\equiv0$$.\n\n25. Suppose $$P_0$$, $$P_1$$, and $$P_2$$ are continuous on $$(a,b)$$ and let $$x_0$$ be in $$(a,b)$$. Show that if either of the following statements is true then $$P_0(x)=0$$ for some $$x$$ in $$(a,b)$$.\n\n1. The initial value problem $P_0(x)y''+P_1(x)y'+P_2(x)y=0,\\quad y(x_0)=k_0,\\quad y'(x_0)=k_1\\nonumber$ has more than one solution on $$(a,b)$$.\n2. The initial value problem $P_0(x)y''+P_1(x)y'+P_2(x)y=0,\\quad y(x_0)=0,\\quad y'(x_0)=0\\nonumber$ has a nontrivial solution on $$(a,b)$$.\n\n26. 
Suppose $$p$$ and $$q$$ are continuous on $$(a,b)$$ and $$y_1$$ and $$y_2$$ are solutions of\n\n$y''+p(x)y'+q(x)y=0 \\tag{A}$\n\non $$(a,b)$$. Let\n\n$z_1=\\alpha y_1+\\beta y_2\\quad\\text{ and} \\quad z_2=\\gamma y_1+\\delta y_2,\\nonumber$\n\nwhere $$\\alpha$$, $$\\beta$$, $$\\gamma$$, and $$\\delta$$ are constants. Show that if $$\\{z_1,z_2\\}$$ is a fundamental set of solutions of (A) on $$(a,b)$$ then so is $$\\{y_1,y_2\\}$$.\n\n27. Suppose $$p$$ and $$q$$ are continuous on $$(a,b)$$ and $$\\{y_1,y_2\\}$$ is a fundamental set of solutions of\n\n$y''+p(x)y'+q(x)y=0 \\tag{A}$\n\non $$(a,b)$$. Let\n\n$z_1=\\alpha y_1+\\beta y_2\\quad\\text{ and} \\quad z_2=\\gamma y_1+\\delta y_2,\\nonumber$\n\nwhere $$\\alpha,\\beta,\\gamma$$, and $$\\delta$$ are constants. Show that $$\\{z_1,z_2\\}$$ is a fundamental set of solutions of (A) on $$(a,b)$$ if and only if $$\\alpha\\gamma-\\beta\\delta\\ne0$$.\n\n28. Suppose $$y_1$$ is differentiable on an interval $$(a,b)$$ and $$y_2=ky_1$$, where $$k$$ is a constant. Show that the Wronskian of $$\\{y_1,y_2\\}$$ is identically zero on $$(a,b)$$.\n\n29. Let\n\n$y_1=x^3\\quad\\mbox{ and }\\quad y_2=\\left\\{\\begin{array}{rl} x^3,&x\\ge 0,\\\\ -x^3,&x<0.\\end{array}\\right.\\nonumber$\n\n1. Show that the Wronskian of $$\\{y_1,y_2\\}$$ is defined and identically zero on $$(-\\infty,\\infty)$$.\n2. Suppose $$a<0<b$$. Show that $$\\{y_1,y_2\\}$$ is linearly independent on $$(a,b)$$.\n3. Use Exercise 5.1.25b to show that these results don’t contradict Theorem 5.1.5, because neither $$y_1$$ nor $$y_2$$ can be a solution of an equation $y''+p(x)y'+q(x)y=0\\nonumber$ on $$(a,b)$$ if $$p$$ and $$q$$ are continuous on $$(a,b)$$.\n\n30. Suppose $$p$$ and $$q$$ are continuous on $$(a,b)$$ and $$\\{y_1,y_2\\}$$ is a set of solutions of\n\n$y''+p(x)y'+q(x)y=0\\nonumber$\n\non $$(a,b)$$ such that either $$y_1(x_0)=y_2(x_0)=0$$ or $$y_1'(x_0)=y_2'(x_0)=0$$ for some $$x_0$$ in $$(a,b)$$. Show that $$\\{y_1,y_2\\}$$ is linearly dependent on $$(a,b)$$.\n\n31. Suppose $$p$$ and $$q$$ are continuous on $$(a,b)$$ and $$\\{y_1,y_2\\}$$ is a fundamental set of solutions of\n\n$y''+p(x)y'+q(x)y=0\\nonumber$\n\non $$(a,b)$$. Show that if $$y_1(x_1)=y_1(x_2)=0$$, where $$a<x_1<x_2<b$$, then $$y_2(x)=0$$ for some $$x$$ in $$(x_1,x_2)$$.\n\n32. Suppose $$p$$ and $$q$$ are continuous on $$(a,b)$$ and every solution of\n\n$y''+p(x)y'+q(x)y=0 \\tag{A}$\n\non $$(a,b)$$ can be written as a linear combination of the twice differentiable functions $$\\{y_1,y_2\\}$$. Use Theorem 5.1.1 to show that $$y_1$$ and $$y_2$$ are themselves solutions of (A) on $$(a,b)$$.\n\n33. Suppose $$p_1$$, $$p_2$$, $$q_1$$, and $$q_2$$ are continuous on $$(a,b)$$ and the equations\n\n$y''+p_1(x)y'+q_1(x)y=0 \\quad \\text{and} \\quad y''+p_2(x)y'+q_2(x)y=0\\nonumber$\n\nhave the same solutions on $$(a,b)$$. Show that $$p_1=p_2$$ and $$q_1=q_2$$ on $$(a,b)$$.\n\n34. (For this exercise you have to know about $$3\\times 3$$ determinants.) Show that if $$y_1$$ and $$y_2$$ are twice continuously differentiable on $$(a,b)$$ and the Wronskian $$W$$ of $$\\{y_1,y_2\\}$$ has no zeros in $$(a,b)$$ then the equation\n\n$\\frac{1}{W} \\left| \\begin{array}{ccc} y & y_1 & y_2 \\\\ y' & y'_1 & y'_2 \\\\ y'' & y_1'' & y_2'' \\end{array} \\right|=0\\nonumber$\n\ncan be written as\n\n$y''+p(x)y'+q(x)y=0, \\tag{A}$\n\nwhere $$p$$ and $$q$$ are continuous on $$(a,b)$$ and $$\\{y_1,y_2\\}$$ is a fundamental set of solutions of (A) on $$(a,b)$$.\n\n35. 
Use the method suggested by Exercise 5.1.34 to find a linear homogeneous equation for which the given functions form a fundamental set of solutions on some interval.\n\n1. $$e^{x}\\cos 2x, e^{x}\\sin 2x$$\n2. $$x, e^{2x}$$\n3. $$x, x\\ln x$$\n4. $$\\cos (\\ln x), \\sin (\\ln x)$$\n5. $$\\cosh x, \\sinh x$$\n6. $$x^{2}-1, x^{2}+1$$\n\n36. Suppose $$p$$ and $$q$$ are continuous on $$(a,b)$$ and $$\\{y_1,y_2\\}$$ is a fundamental set of solutions of\n\n$y''+p(x)y'+q(x)y=0 \\tag{A}$\n\non $$(a,b)$$. Show that if $$y$$ is a solution of (A) on $$(a,b)$$, there’s exactly one way to choose $$c_1$$ and $$c_2$$ so that $$y=c_1y_1+c_2y_2$$ on $$(a,b)$$.\n\n37. Suppose $$p$$ and $$q$$ are continuous on $$(a,b)$$ and $$x_0$$ is in $$(a,b)$$. Let $$y_1$$ and $$y_2$$ be the solutions of\n\n$y''+p(x)y'+q(x)y=0 \\tag{A}$\n\nsuch that\n\n$y_1(x_0)=1, \\quad y'_1(x_0)=0\\quad \\text{and} \\quad y_2(x_0)=0,\\; y'_2(x_0)=1.\\nonumber$\n\n(Theorem 5.1.1 implies that each of these initial value problems has a unique solution on $$(a,b)$$.)\n\n1. Show that $$\\{y_1,y_2\\}$$ is linearly independent on $$(a,b)$$.\n2. Show that an arbitrary solution $$y$$ of (A) on $$(a,b)$$ can be written as $$y=y(x_0)y_1+y'(x_0)y_2$$.\n3. Express the solution of the initial value problem $y''+p(x)y'+q(x)y=0,\\quad y(x_0)=k_0,\\quad y'(x_0)=k_1\\nonumber$ as a linear combination of $$y_1$$ and $$y_2$$.\n\n38. Find solutions $$y_1$$ and $$y_2$$ of the equation $$y''=0$$ that satisfy the initial conditions\n\n$y_1(x_0)=1, \\quad y'_1(x_0)=0 \\quad \\text{and} \\quad y_2(x_0)=0, \\quad y'_2(x_0)=1.\\nonumber$\n\nThen use Exercise 5.1.37 (c) to write the solution of the initial value problem\n\n$y''=0,\\quad y(0)=k_0,\\quad y'(0)=k_1\\nonumber$\n\nas a linear combination of $$y_1$$ and $$y_2$$.\n\n39. Let $$x_0$$ be an arbitrary real number. Given (Example 5.1.1) that $$e^x$$ and $$e^{-x}$$ are solutions of $$y''-y=0$$, find solutions $$y_1$$ and $$y_2$$ of $$y''-y=0$$ such that\n\n$y_1(x_0)=1, \\quad y'_1(x_0)=0\\quad \\text{and} \\quad y_2(x_0)=0,\\; y'_2(x_0)=1.\\nonumber$\n\nThen use Exercise 5.1.37 (c) to write the solution of the initial value problem\n\n$y''-y=0,\\quad y(x_0)=k_0,\\quad y'(x_0)=k_1\\nonumber$\n\nas a linear combination of $$y_1$$ and $$y_2$$.\n\n40. Let $$x_0$$ be an arbitrary real number. Given (Example 5.1.2) that $$\\cos\\omega x$$ and $$\\sin\\omega x$$ are solutions of $$y''+\\omega^2y=0$$, find solutions of $$y''+\\omega^2y=0$$ such that\n\n$y_1(x_0)=1, \\quad y'_1(x_0)=0\\quad\\text{ and} \\quad y_2(x_0)=0,\\; y'_2(x_0)=1.\\nonumber$\n\nThen use Exercise 5.1.37 (c) to write the solution of the initial value problem\n\n$y''+\\omega^2y=0,\\quad y(x_0)=k_0,\\quad y'(x_0)=k_1\\nonumber$\n\nas a linear combination of $$y_1$$ and $$y_2$$. Use the identities\n\n\\begin{aligned} \\cos(A+B)&=\\cos A\\cos B-\\sin A\\sin B\\\\ \\sin(A+B)&=\\sin A\\cos B+\\cos A\\sin B\\end{aligned}\\nonumber\n\nto simplify your expressions for $$y_1$$, $$y_2$$, and $$y$$.\n\n41. Recall from Exercise 5.1.4 that $$1/(x-1)$$ and $$1/(x+1)$$ are solutions of\n\n$(x^2-1)y''+4xy'+2y=0 \\tag{A}$\n\non $$(-1,1)$$. Find solutions of (A) such that\n\n$y_1(0)=1, \\quad y'_1(0)=0\\quad \\text{and} \\quad y_2(0)=0,\\; y'_2(0)=1.\\nonumber$\n\nThen use Exercise 5.1.37 (c) to write the solution of initial value problem\n\n$(x^2-1)y''+4xy'+2y=0,\\quad y(0)=k_0,\\quad y'(0)=k_1\\nonumber$\n\nas a linear combination of $$y_1$$ and $$y_2$$.\n\n42.\n\n1. 
Verify that $$y_1=x^2$$ and $$y_2=x^3$$ satisfy $x^2y''-4xy'+6y=0 \\tag{A}$  on $$(-\\infty,\\infty)$$ and that $$\\{y_1,y_2\\}$$ is a fundamental set of solutions of (A) on $$(-\\infty,0)$$ and $$(0,\\infty)$$.\n2. Let $$a_1$$, $$a_2$$, $$b_1$$, and $$b_2$$ be constants. Show that $y=\\left\\{\\begin{array}{rr} a_1x^2+a_2x^3,&x\\ge 0,\\\\ b_1x^2+b_2x^3,&x<0\\phantom{,} \\end{array}\\right.\\nonumber$  is a solution of (A) on $$(-\\infty,\\infty)$$ if and only if $$a_1=b_1$$. From this, justify the statement that $$y$$ is a solution of (A) on $$(-\\infty,\\infty)$$ if and only if $y=\\left\\{\\begin{array}{rr} c_1x^2+c_2x^3,&x\\ge 0,\\\\ c_1x^2+c_3x^3,&x<0, \\end{array}\\right.\\nonumber$  where $$c_1$$, $$c_2$$, and $$c_3$$ are arbitrary constants.\n3. For what values of $$k_0$$ and $$k_1$$ does the initial value problem $x^2y''-4xy'+6y=0,\\quad y(0)=k_0,\\quad y'(0)=k_1\\nonumber$  have a solution? What are the solutions?\n4. Show that if $$x_0\\ne0$$ and $$k_0,k_1$$ are arbitrary constants, the initial value problem $x^2y''-4xy'+6y=0,\\quad y(x_0)=k_0,\\quad y'(x_0)=k_1 \\tag{B}$  has infinitely many solutions on $$(-\\infty,\\infty)$$. On what interval does (B) have a unique solution?\n\n43.\n\n1. Verify that $$y_1=x$$ and $$y_2=x^2$$ satisfy $x^2y''-2xy'+2y=0 \\tag{A}$  on $$(-\\infty,\\infty)$$ and that $$\\{y_1,y_2\\}$$ is a fundamental set of solutions of (A) on $$(-\\infty,0)$$ and $$(0,\\infty)$$.\n2. Let $$a_1$$, $$a_2$$, $$b_1$$, and $$b_2$$ be constants. Show that $y=\\left\\{\\begin{array}{rr} a_1x+a_2x^2,&x\\ge 0,\\\\ b_1x+b_2x^2,&x<0\\phantom{,} \\end{array}\\right.\\nonumber$  is a solution of (A) on $$(-\\infty,\\infty)$$ if and only if $$a_1=b_1$$ and $$a_2=b_2$$. From this, justify the statement that the general solution of (A) on $$(-\\infty,\\infty)$$ is $$y=c_1x+c_2x^2$$, where $$c_1$$ and $$c_2$$ are arbitrary constants.\n3. For what values of $$k_0$$ and $$k_1$$ does the initial value problem $x^2y''-2xy'+2y=0,\\quad y(0)=k_0,\\quad y'(0)=k_1\\nonumber$  have a solution? What are the solutions?\n4. Show that if $$x_0\\ne0$$ and $$k_0,k_1$$ are arbitrary constants then the initial value problem $x^2y''-2xy'+2y=0,\\quad y(x_0)=k_0,\\quad y'(x_0)=k_1\\nonumber$  has a unique solution on $$(-\\infty,\\infty)$$.\n\n44.\n\n1. Verify that $$y_1=x^3$$ and $$y_2=x^4$$ satisfy $x^2y''-6xy'+12y=0 \\tag{A}$  on $$(-\\infty,\\infty)$$, and that $$\\{y_1,y_2\\}$$ is a fundamental set of solutions of (A) on $$(-\\infty,0)$$ and $$(0,\\infty)$$.\n2. Show that $$y$$ is a solution of (A) on $$(-\\infty,\\infty)$$ if and only if $y=\\left\\{\\begin{array}{rr} a_1x^3+a_2x^4,&x\\ge 0,\\\\ b_1x^3+b_2x^4,&x<0, \\end{array}\\right.\\nonumber$  where $$a_1$$, $$a_2$$, $$b_1$$, and $$b_2$$ are arbitrary constants.\n3. For what values of $$k_0$$ and $$k_1$$ does the initial value problem $x^2y''-6xy'+12y=0, \\quad y(0)=k_0,\\quad y'(0)=k_1\\nonumber$  have a solution? What are the solutions?\n4. Show that if $$x_0\\ne0$$ and $$k_0,k_1$$ are arbitrary constants then the initial value problem $x^2y''-6xy'+12y=0, \\quad y(x_0)=k_0,\\quad y'(x_0)=k_1 \\tag{B}$  has infinitely many solutions on $$(-\\infty,\\infty)$$. On what interval does (B) have a unique solution?" ]
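The verification parts of these exercises are easy to spot-check with a computer algebra system. The following minimal sketch is not part of the original exercise set; it uses SymPy (the symbol names and the script itself are my own) to check Exercise 1: that $$e^{2x}$$ and $$e^{5x}$$ solve the equation, that their Wronskian never vanishes, and that the stated initial value problem has the solution $$y=-2e^{2x}+e^{5x}$$.

```python
# Minimal SymPy sketch for Exercise 1 above (not part of the original text).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) - 7*y(x).diff(x) + 10*y(x), 0)

y1, y2 = sp.exp(2*x), sp.exp(5*x)
for sol in (y1, y2):
    # (a) each candidate satisfies y'' - 7y' + 10y = 0
    assert sp.simplify(sol.diff(x, 2) - 7*sol.diff(x) + 10*sol) == 0

# Wronskian W = y1*y2' - y1'*y2; it equals 3*exp(7*x), which is never zero,
# so {y1, y2} is a fundamental set of solutions.
print(sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2))

# (c) the initial value problem y(0) = -1, y'(0) = 1
print(sp.dsolve(ode, y(x), ics={y(0): -1, y(x).diff(x).subs(x, 0): 1}))
# -> Eq(y(x), -2*exp(2*x) + exp(5*x))
```

The same pattern (substitute, simplify, check the Wronskian, then solve with `ics`) applies to the other verification exercises in this set.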
[ null, "https://math.libretexts.org/@api/deki/files/13236/trench-1995.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6982699,"math_prob":1.00001,"size":14822,"snap":"2020-34-2020-40","text_gpt3_token_len":6356,"char_repetition_ratio":0.1939533,"word_repetition_ratio":0.34096825,"special_character_ratio":0.47982728,"punctuation_ratio":0.13507822,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T13:03:15Z\",\"WARC-Record-ID\":\"<urn:uuid:74d36934-fb45-476f-8e24-563eea0768ae>\",\"Content-Length\":\"117946\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a0640242-3a84-45d6-8e81-95602a5b47e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f584660-49c2-4724-857b-059641634c80>\",\"WARC-IP-Address\":\"13.249.40.3\",\"WARC-Target-URI\":\"https://math.libretexts.org/Bookshelves/Differential_Equations/Book%3A_Elementary_Differential_Equations_with_Boundary_Value_Problems_(Trench)/05%3A_Linear_Second_Order_Equations/5.01%3A_Homogeneous_Linear_Equations/5.1E%3A_Homogeneous_Linear_Equations_(Exercises)\",\"WARC-Payload-Digest\":\"sha1:ESY3MHMV2G5CDQWFSATXRNVHFRH3SD3S\",\"WARC-Block-Digest\":\"sha1:XA3AGGND54TUS7C6NISOTWRGT46NJF4C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735958.84_warc_CC-MAIN-20200805124104-20200805154104-00440.warc.gz\"}"}
https://www.wallstreetprep.com/knowledge/solvency-ratio/
[ "", null, "", null, "# Solvency Ratio\n\nGuide to Understanding the Solvency Ratio Concept", null, "## How to Calculate a Solvency Ratio (Step-by-Step)\n\nA solvency ratio assesses the long-term viability of a company – namely, if the financial performance of the company appears sustainable and if operations are likely to continue into the future.\n\n• Liabilities: Liabilities are defined as obligations that represent cash outflows, most notably debt, which is the most frequent cause of companies becoming distressed and having to undergo bankruptcy. If debt is added to a company’s capital structure, a company’s solvency is put at increased risk, all else being equal.\n• Assets: On the other hand, assets are defined as resources with economic value that can be turned into cash (e.g. accounts receivable, inventory) or generate cash (e.g. property, plant & equipment, or “PP&E”).\n\nWith that said, for a company to remain solvent, the company must have more assets than liabilities – otherwise, the burden of the liabilities will eventually prevent the company from staying afloat.\n\n## Solvency Ratio Formula\n\nSolvency ratios compare the overall debt load of a company to its assets or equity, which effectively shows a company’s level of reliance on debt financing to fund growth and reinvest into its own operations.\n\n### 1. Debt-to-Equity Ratio Formula\n\nThe debt-to-equity ratio compares a company’s total debt balance to the total shareholders’ equity account, which shows the percentage of financing contributed by creditors as compared to that of equity investors.", null, "• Higher D/E ratios mean a company relies more heavily on debt financing as opposed to equity financing – and therefore, creditors have a more substantial claim on the company’s assets if it were to be hypothetically liquidated.\n• A D/E ratio of 1.0x means that investors (equity) and creditors (debt) have an equal stake in the company (i.e. the assets on its balance sheet).\n• Lower D/E ratios imply the company is more financially stable with less exposure to solvency risk.\n\n### 2. Debt-to-Assets Ratio Formula\n\nThe debt-to-assets ratio compares a company’s total debt burden to the value of its total assets.", null, "This ratio evaluates whether the company has enough assets to satisfy all its obligations, both short-term and long-term – i.e. the debt-to-assets ratio estimates how much value in assets would be remaining after all the company’s liabilities are paid off.\n\n• Lower debt-to-assets ratios mean the company has sufficient assets to cover its debt obligations.\n• A debt-to-assets ratio of 1.0x signifies the company’s assets are equal to its debt – i.e. the company must sell off all of its assets to pay off its debt liabilities.\n• Higher debt-to-assets ratios are often perceived as red flags, since the company’s assets are inadequate to cover its debt obligations. This may imply that the current debt burden is too much for the company to handle.\n\nLike the debt-to-equity ratio, a lower ratio (<1.0x) is viewed more favorably, as it indicates the company is stable in terms of its financial health.\n\n### 3. Equity Ratio Formula\n\nThe third solvency ratio we’ll discuss is the equity ratio, which measures the value of a company’s equity to its assets amount.", null, "The equity ratio shows the extent to which the company’s assets are financed with equity (e.g. 
owners’ capital, equity financing) rather than debt.\n\nIn other words, if all the liabilities are paid off, the equity ratio is the amount of remaining asset value left over for shareholders.\n\n• Lower equity ratios are viewed as more favorable since it means that more of the company is financed with equity, which implies that the company’s earnings and contributions from equity investors are funding its operations – as opposed to debt lenders.\n• Higher equity ratios signal that more assets were purchased with debt as the source of capital (i.e. implying the company carries a substantial debt load).\n\n## Solvency Ratios vs. Liquidity Ratios\n\nBoth solvency and liquidity ratios are measures of leverage risk; however, the major difference lies in their time horizons.\n\n• Liquidity Ratio: Liquidity ratios are short-term oriented (i.e. current assets, short-term debt coming due in <12 months).\n• Solvency Ratio: In contrast, a solvency ratio takes on more of a long-term view, i.e. the sustainability of the company and ability to continue operating as a “going concern”.\n\nNevertheless, both ratios are closely related and provide important insights regarding the financial health of a company.\n\n## Solvency Ratio Calculator – Excel Template\n\nWe’ll now move to a modeling exercise, which you can access by filling out the form below.", null, "Submitting ...\n\n## Step 1. Balance Sheet Assumptions\n\nIn our modeling exercise, we’ll begin by projecting a hypothetical company’s financials across a five-year time span.\n\nOur company has the following balance sheet data as of Year 1, which is going to be held constant throughout the entirety of the forecast.\n\nAs of Year 1, our company has \\$120m in current assets and \\$220m in total assets, with \\$50m in total debt.\n\nFor illustrative purposes, we’ll assume the only liabilities that the company has are debt-related items, so the total equity is \\$170m – in effect, the balance sheet is in balance (i.e. assets = liabilities + equity).\n\nFor the rest of the forecast – from Year 2 to Year 5 – the short-term debt balance will grow by \\$5m each year, whereas the long-term debt will grow by \\$10m.\n\n## Step 2. Debt to Equity Ratio Calculation Analysis (D/E)\n\nThe debt-to-equity ratio (D/E) is calculated by dividing the total debt balance by the total equity balance, as shown below.\n\nIn Year 1, for instance, the D/E ratio comes out to 0.3x.\n\n• Debt-to-Equity Ratio (D/E) = \\$50m / \\$170m = 0.3x", null, "## Step 3. Debt to Assets Ratio Calculation Analysis\n\nNext, the debt-to-assets ratio is calculated by dividing the total debt balance by the total assets.\n\nFor example, in Year 1, the debt-to-assets ratio is 0.2x.\n\n• Debt-to-Assets Ratio = \\$50m / \\$220m = 0.2x", null, "## Step 4. Equity Ratio Calculation Analysis\n\nAs for our final solvency metric, the equity ratio is calculated by dividing total assets by the total equity balance.\n\nIn Year 1, we arrive at an equity ratio of 1.3x.\n\n• Equity Ratio = \\$220m / \\$170m = 1.3x", null, "## Step 5. Solvency Ratio Calculation Example\n\nFrom Year 1 to Year 5, the solvency ratios undergo the following changes.\n\n• D/E Ratio: 0.3x → 1.0x\n• Debt-to-Assets Ratio: 0.2x → 0.5x\n• Equity Ratio: 1.3x → 2.0x\n\nBy the end of the projection, the debt balance is equal to the total equity (i.e. 
1.0x), showing that the company’s capitalization is evenly split between creditors and equity holders on a book value basis.\n\nThe debt-to-assets ratio increases to approximately 0.5x, which means the company must sell off half of its assets to pay off all of its outstanding financial obligations.\n\nAnd finally, the equity ratio increases to 2.0x, as the company is incurring more debt each year to finance the purchase of its assets and operations." ]
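The three Year 1 calculations above reduce to a few divisions. The short sketch below is not from the original article; it reproduces them in Python using the article's figures of \$50m total debt, \$220m total assets and \$170m total equity, and follows the article's own convention that the equity ratio is total assets divided by total equity. The variable names are mine.

```python
# Year 1 solvency ratios from the example above.
total_debt = 50.0      # $m
total_assets = 220.0   # $m
total_equity = 170.0   # $m

debt_to_equity = total_debt / total_equity    # ~0.3x
debt_to_assets = total_debt / total_assets    # ~0.2x
equity_ratio = total_assets / total_equity    # ~1.3x (assets / equity, per this guide)

print(f"D/E: {debt_to_equity:.1f}x, "
      f"Debt-to-Assets: {debt_to_assets:.1f}x, "
      f"Equity ratio: {equity_ratio:.1f}x")
```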
[ null, "https://www.facebook.com/tr", null, "https://s3.amazonaws.com/wspimage/wsp-logo-full.svg", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2021/11/18191939/Solvency-Ratio-Formula-960x400.jpg", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2021/11/28000043/Debt-Equity-Formula.jpg", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2021/11/28000041/Debt-Assets-Formula.jpg", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2021/11/28000045/Equity-Ratio-Formula.jpg", null, "https://s3.amazonaws.com/wspimage/wsp-spinner.gif", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2021/11/27234035/Debt-to-Equity-Ratio.jpg", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2021/11/27234032/Debt-to-Assets-Formula.jpg", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2021/11/27234038/Equity-Ratio.jpg", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2021/11/27234041/Solvency-Ratio-Model.jpg", null, "https://wspimage.s3.amazonaws.com/salesbanner_laptop_premium_cert_900.png", null, "https://secure.gravatar.com/avatar/d41d8cd98f00b204e9800998ecf8427e", null, "https://wsp-blog-images.s3.amazonaws.com/uploads/2020/10/16110050/wsp-tile-ad-premium2.jpg", null, "https://s3.amazonaws.com/wspimage/lifting.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92787206,"math_prob":0.9809988,"size":7876,"snap":"2022-40-2023-06","text_gpt3_token_len":1827,"char_repetition_ratio":0.183562,"word_repetition_ratio":0.054095827,"special_character_ratio":0.22701879,"punctuation_ratio":0.11059602,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9813244,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,null,null,null,null,1,null,1,null,1,null,1,null,null,null,1,null,1,null,1,null,1,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T10:47:18Z\",\"WARC-Record-ID\":\"<urn:uuid:6e5debf5-d8e1-467a-8ce9-6aea298586c7>\",\"Content-Length\":\"140538\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e204b62-97c6-4969-8663-343457d9213c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a8d0ae7-f417-4f03-9799-260d8d5ea85f>\",\"WARC-IP-Address\":\"172.66.43.51\",\"WARC-Target-URI\":\"https://www.wallstreetprep.com/knowledge/solvency-ratio/\",\"WARC-Payload-Digest\":\"sha1:EMBUWUDHYJQPFJOTJCIH5CLGELG34D35\",\"WARC-Block-Digest\":\"sha1:3CQBOWU6WURFC4YBCKCMSNYYHZQC3GW7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499541.63_warc_CC-MAIN-20230128090359-20230128120359-00288.warc.gz\"}"}
https://www.bystudin.com/what-is-called-the-mass-fraction-of-a-solute/
[ "# What is called the mass fraction of a solute?", null, "The mass fraction of a solute is the ratio of the mass of a solute to the mass of a solution.\nω (in-va) = (m (in-va) / m (solution)) * 100% or ω (in-va) = m (in-va) / m (solution),\nwhere ω (in-va) is the volume fraction of gas in the mixture,\nm (in-va) – the mass of the solute,\nm (solution) is the mass of the solution.", null, "Remember: The process of learning a person lasts a lifetime. The value of the same knowledge for different people may be different, it is determined by their individual characteristics and needs. Therefore, knowledge is always needed at any age and position." ]
[ null, "https://www.bystudin.com/wp-content/uploads/2020/10/pervo-25.jpg", null, "https://www.bystudin.com/img.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92341685,"math_prob":0.9908287,"size":656,"snap":"2022-05-2022-21","text_gpt3_token_len":178,"char_repetition_ratio":0.18865031,"word_repetition_ratio":0.0,"special_character_ratio":0.27286586,"punctuation_ratio":0.096296296,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9929509,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T00:21:20Z\",\"WARC-Record-ID\":\"<urn:uuid:d206c6a7-2c51-44ab-a366-2d4062f9ca1b>\",\"Content-Length\":\"22793\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d635d1e1-eb7d-4246-b275-b15109f45b21>\",\"WARC-Concurrent-To\":\"<urn:uuid:5fa28d2e-5d66-45e2-ae51-067a75bc2acf>\",\"WARC-IP-Address\":\"37.143.13.208\",\"WARC-Target-URI\":\"https://www.bystudin.com/what-is-called-the-mass-fraction-of-a-solute/\",\"WARC-Payload-Digest\":\"sha1:S7JEZGIFTDCRC625D5X7643J2KJUPFOE\",\"WARC-Block-Digest\":\"sha1:FTKOOW27SFVSWDQBQC4O7IY2PKUDX7HE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662520936.24_warc_CC-MAIN-20220517225809-20220518015809-00754.warc.gz\"}"}
https://www.hitopec.com/How-to-estimate-the-size-of-the-container-if-the-container-data-is-not-received-id8502734.html
[ "+86-571-82202632                                      [email protected]\nYou are here: / / / How to estimate the size of the container if the container data is not received?\n\n# How to estimate the size of the container if the container data is not received?\n\nViews:0     Author:Site Editor     Publish Time: 2019-09-29      Origin:Site\n\n# How to estimate the size of the container if the container data is not received?\n\nTips for estimating container size\nMany people will ask how to accurately estimate the size of the container? In fact, the method is very simple. Knowing this little trick, you don't have to wait for the factory data of the slow supply because of the size of the container.\nTake a coil notebook as an example. Suppose you know that the book size is A5 size, 250gsm white card cover, 70gsm offset paper* 60 sheets.\nAt this time, you only know the information, and there is no real thing on hand. Even\nthe scale and weight of a single product are not known. How do you count it?", null, "First, first determine the individual components of the product\n\nThe A5 scale is about 21 cm * 14.8 cm\n\nGsm is the meaning of g/m2\n\nSingle component\n\n= 0.21 * 0.148 * 70 * 60 + 0.21 * 0.148 * 250 * 2 + 15 = 160g\n\n(According to the weight of the paper and the number of sheets, calculate the components of the inner page and the cover, and then estimate the weight of one coil)\n\nSecond, then determine the amount of each box\n\nFor example, the gross weight of the outer box is less than 15 kg.\n\nWe can calculate by the gross weight of 12KGS, one hundred and sixty-one thousand thousand = 75 (maybe many time customers like to pack according to the call, then 72 per box is more reasonable, of course, if you do not want to supply In the middle box, you can also directly press 48 one outer box. For details, please refer to the characteristics of your own products to decide)\ncontainer\nThird, determine the carton scale\nIn most cases, we will know the scale of our products. In this analogy, the length and width of the notebook are known, but if you don’t know the thickness, you can only estimate it first, for example, the coil of 60 sheets. Let's press the thickness of about 6 mm. Considering that the diameter of the coil will be higher than the thickness of the book, then the time of packaging is usually two pairs of two, so we can calculate the thickness of the book by about 7 mm.\nIf it is 12 inner boxes, then the inner box scale is like this:\nProduct scale: 21cm x 14.8cm x 0.7cm\nInner box size: +1 x +1 x * 12 + 1\n=:22cm x 16cm x 9.5cm\n\nAfter stacking the products, the inner box size is usually 1 cm in length, width and height.\n\nOuter box scale: (In this case, there will be 6 inner boxes in one outer box, which is assumed to be calculated by 3 layers and 2 rows)\n\nInner box size: 22cm x 16cm x 9.5cm\nCarton size: +2 x * 2 + 2 x * 3 + 3\n\n=:24cm x 34cm x 31.5cm\n\nThat is to say, the size of the outer box is 34 x 24cm x 31.5cm.\n\nAfter the inner box is piled up, the size of the outer box is usually 2 cm in length and width, plus 3 cm in height.\n\nGross weight: 160 * 72 g + 2 kg inner box outer box weight = about 13.5 kgs\n\nRemarks\n\n1) This method is estimated after all, and it must be different from the scale and component of the test, so it is only used if the condition does not allow the test.\n\n2) Some irregularly shaped products should be considered when considering the size of the inner box. It is reasonable to consider how the product is placed. 
For example, it may be necessary to consider two product pairs to save space. This time division unit scale is 2 The products are placed on the same scale to calculate.\n\n3) If you know the scale and weight of individual products clearly, skip the first one and look at it." ]
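The arithmetic above is easy to script. The sketch below is not from the original article; it reproduces the notebook example in Python (unit weight, pieces per carton, inner-box and carton dimensions, gross weight), taking the 15 g coil allowance and the +1 cm / +2 cm / +3 cm packing allowances as the article's rule-of-thumb assumptions. Variable names are mine.

```python
# Carton estimate for the A5 coil notebook example above
# (dimensions in cm, paper weights in g/m^2).
length_m, width_m = 0.21, 0.148                    # A5 sheet size in metres

unit_weight_g = (length_m * width_m * 70 * 60      # 60 inner sheets at 70 gsm
                 + length_m * width_m * 250 * 2    # 2 cover sheets at 250 gsm
                 + 15)                             # allowance for the wire coil
print(round(unit_weight_g))                        # ~161 g, rounded to 160 g above

pieces_per_carton = 12000 // 160                   # target ~12 kg gross -> 75
print(pieces_per_carton)                           # packed as 72 (six dozen) in practice

product = (21, 14.8, 0.7)                          # book thickness taken as ~7 mm
inner_box = (product[0] + 1, product[1] + 1, product[2] * 12 + 1)
print(inner_box)                                   # (22, 15.8, 9.4) -> rounded up to 22 x 16 x 9.5

inner_box = (22, 16, 9.5)                          # rounded values used in the article
carton = (inner_box[0] + 2, inner_box[1] * 2 + 2, inner_box[2] * 3 + 3)
print(carton)                                      # (24, 34, 31.5)

gross_kg = 160 * 72 / 1000 + 2                     # plus ~2 kg of packaging material
print(round(gross_kg, 1))                          # ~13.5 kg
```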
[ null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsQAAA7EAZUrDhsAAAANSURBVBhXYzh8+PB/AAffA0nNPuCLAAAAAElFTkSuQmCC", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89161414,"math_prob":0.9899529,"size":3508,"snap":"2021-43-2021-49","text_gpt3_token_len":945,"char_repetition_ratio":0.14126712,"word_repetition_ratio":0.013533834,"special_character_ratio":0.2767959,"punctuation_ratio":0.12144703,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9516947,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T05:35:45Z\",\"WARC-Record-ID\":\"<urn:uuid:79061609-7465-418c-9c67-4c92123fe301>\",\"Content-Length\":\"111324\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0daf72cc-0f66-4d3a-9af5-8aadfd0d4b7f>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e91b151-b8a1-4424-b420-1ff301c50616>\",\"WARC-IP-Address\":\"18.214.214.190\",\"WARC-Target-URI\":\"https://www.hitopec.com/How-to-estimate-the-size-of-the-container-if-the-container-data-is-not-received-id8502734.html\",\"WARC-Payload-Digest\":\"sha1:H357EFGL7P6QH737GAO6KYWADVZUGC3M\",\"WARC-Block-Digest\":\"sha1:H3GVG76ERL4R7LAJ2BF2SBIXSZUU4D6G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363336.93_warc_CC-MAIN-20211207045002-20211207075002-00484.warc.gz\"}"}
https://docs.astropy.org/en/latest/api/astropy.stats.biweight_midcovariance.html
[ "# biweight_midcovariance#\n\nastropy.stats.biweight_midcovariance(data, c=9.0, M=None, modify_sample_size=False)[source]#\n\nCompute the biweight midcovariance between pairs of multiple variables.\n\nThe biweight midcovariance is a robust and resistant estimator of the covariance between two variables.\n\nThis function computes the biweight midcovariance between all pairs of the input variables (rows) in the input data. The output array will have a shape of (N_variables, N_variables). The diagonal elements will be the biweight midvariances of each input variable (see biweight_midvariance()). The off-diagonal elements will be the biweight midcovariances between each pair of input variables.\n\nFor example, if the input array data contains three variables (rows) x, y, and z, the output ndarray midcovariance matrix will be:\n\n$\\begin{split}\\begin{pmatrix} \\zeta_{xx} & \\zeta_{xy} & \\zeta_{xz} \\\\ \\zeta_{yx} & \\zeta_{yy} & \\zeta_{yz} \\\\ \\zeta_{zx} & \\zeta_{zy} & \\zeta_{zz} \\end{pmatrix}\\end{split}$\n\nwhere $$\\zeta_{xx}$$, $$\\zeta_{yy}$$, and $$\\zeta_{zz}$$ are the biweight midvariances of each variable. The biweight midcovariance between $$x$$ and $$y$$ is $$\\zeta_{xy}$$ ($$= \\zeta_{yx}$$). The biweight midcovariance between $$x$$ and $$z$$ is $$\\zeta_{xz}$$ ($$= \\zeta_{zx}$$). The biweight midcovariance between $$y$$ and $$z$$ is $$\\zeta_{yz}$$ ($$= \\zeta_{zy}$$).\n\nThe biweight midcovariance between two variables $$x$$ and $$y$$ is given by:\n\n$\\zeta_{xy} = n_{xy} \\ \\frac{\\sum_{|u_i| < 1, \\ |v_i| < 1} \\ (x_i - M_x) (1 - u_i^2)^2 (y_i - M_y) (1 - v_i^2)^2} {(\\sum_{|u_i| < 1} \\ (1 - u_i^2) (1 - 5u_i^2)) (\\sum_{|v_i| < 1} \\ (1 - v_i^2) (1 - 5v_i^2))}$\n\nwhere $$M_x$$ and $$M_y$$ are the medians (or the input locations) of the two variables and $$u_i$$ and $$v_i$$ are given by:\n\n\\begin{align}\\begin{aligned}u_{i} = \\frac{(x_i - M_x)}{c * MAD_x}\\\\v_{i} = \\frac{(y_i - M_y)}{c * MAD_y}\\end{aligned}\\end{align}\n\nwhere $$c$$ is the biweight tuning constant and $$MAD_x$$ and $$MAD_y$$ are the median absolute deviation of the $$x$$ and $$y$$ variables. The biweight midvariance tuning constant c is typically 9.0 (the default).\n\nIf $$MAD_x$$ or $$MAD_y$$ are zero, then zero will be returned for that element.\n\nFor the standard definition of biweight midcovariance, $$n_{xy}$$ is the total number of observations of each variable. That definition is used if modify_sample_size is False, which is the default.\n\nHowever, if modify_sample_size = True, then $$n_{xy}$$ is the number of observations for which $$|u_i| < 1$$ and/or $$|v_i| < 1$$, i.e.\n\n$n_{xx} = \\sum_{|u_i| < 1} \\ 1$\n$n_{xy} = n_{yx} = \\sum_{|u_i| < 1, \\ |v_i| < 1} \\ 1$\n$n_{yy} = \\sum_{|v_i| < 1} \\ 1$\n\nwhich results in a value closer to the true variance for small sample sizes or for a large number of rejected values.\n\nParameters:\ndata2D or 1D array_like\n\nInput data either as a 2D or 1D array. For a 2D array, it should have a shape (N_variables, N_observations). A 1D array may be input for observations of a single variable, in which case the biweight midvariance will be calculated (no covariance). Each row of data represents a variable, and each column a single observation of all those variables (same as the numpy.cov convention).\n\ncfloat, optional\n\nTuning constant for the biweight estimator (default = 9.0).\n\nMfloat or 1D array_like, optional\n\nThe location estimate of each variable, either as a scalar or array. 
If M is an array, then it must be a 1D array containing the location estimate of each row (i.e. a.ndim elements). If M is a scalar value, then its value will be used for each variable (row). If None (default), then the median of each variable (row) will be used.\n\nmodify_sample_size : bool, optional\n\nIf False (default), then the sample size used is the total number of observations of each variable, which follows the standard definition of biweight midcovariance. If True, then the sample size is reduced to correct for any rejected values (see formula above), which results in a value closer to the true covariance for small sample sizes or for a large number of rejected values.\n\nReturns:\nbiweight_midcovariance : ndarray\n\nA 2D array representing the biweight midcovariances between each pair of the variables (rows) in the input array. The output array will have a shape of (N_variables, N_variables). The diagonal elements will be the biweight midvariances of each input variable. The off-diagonal elements will be the biweight midcovariances between each pair of input variables.\n\nExamples\n\nCompute the biweight midcovariance between two random variables:\n\n>>> import numpy as np\n>>> from astropy.stats import biweight_midcovariance\n>>> # Generate two random variables x and y\n>>> rng = np.random.default_rng(1)\n>>> x = rng.normal(0, 1, 200)\n>>> y = rng.normal(0, 3, 200)\n>>> # Introduce an obvious outlier\n>>> x[0] = 30.0\n>>> # Calculate the biweight midcovariances between x and y\n>>> bicov = biweight_midcovariance([x, y])\n>>> print(bicov)\n[[0.83435568 0.02379316]\n [0.02379316 7.15665769]]\n>>> # Print standard deviation estimates\n>>> print(np.sqrt(bicov.diagonal()))\n[0.91343072 2.67519302]" ]
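As a small usage note that is not part of the astropy documentation above: with the same kind of simulated data, the ordinary sample covariance is strongly distorted by a single outlier, whereas the biweight midcovariance is not, and the `modify_sample_size` option documented above is passed in the same call. The comparison against `np.cov` is my own illustration.

```python
import numpy as np
from astropy.stats import biweight_midcovariance

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 200)
y = rng.normal(0, 3, 200)
x[0] = 30.0                            # an injected outlier, as in the example above

print(np.cov(x, y))                    # classical covariance: the x-variance is inflated by the outlier
print(biweight_midcovariance([x, y]))  # robust estimate, largely unaffected
print(biweight_midcovariance([x, y], modify_sample_size=True))  # with the sample-size correction
```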
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.731055,"math_prob":0.999936,"size":5110,"snap":"2023-40-2023-50","text_gpt3_token_len":1526,"char_repetition_ratio":0.19095182,"word_repetition_ratio":0.14756258,"special_character_ratio":0.31311154,"punctuation_ratio":0.1136108,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999416,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T06:43:28Z\",\"WARC-Record-ID\":\"<urn:uuid:559a2558-a308-4306-9aec-930809a76bdf>\",\"Content-Length\":\"44010\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1123d52b-239c-4207-8cda-ec9bb1afbbe5>\",\"WARC-Concurrent-To\":\"<urn:uuid:b32ad693-7d63-407e-918c-bd30286c29b0>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://docs.astropy.org/en/latest/api/astropy.stats.biweight_midcovariance.html\",\"WARC-Payload-Digest\":\"sha1:RSPTWZ4WELA5MJT5V7KES3O3BYERE44W\",\"WARC-Block-Digest\":\"sha1:3CM6XOSES24KCETFTIFTDH6UADKYE5FF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510603.89_warc_CC-MAIN-20230930050118-20230930080118-00612.warc.gz\"}"}
https://cs.stackexchange.com/questions/45104/how-to-turing-reduce-equivalent-languages-q-to-infinite-language-i
[ "# How to turing reduce equivalent languages $Q$ to infinite language $I$\n\nGiven two languages:\n\n$Q= \\{(\\langle M_1 \\rangle , \\langle M_2 \\rangle ) \\mid L(M_1) = L(M_2)\\}$\n\n$I= \\{\\langle M \\rangle \\mid \\;\\vert L(M) \\vert = \\infty \\}$\n\nI'm trying to Turing reduce $Q$ to $I$ ($Q \\le_T I$), not the other way around as solved here\n\nAny ideas on how to solve this? What exactly will the Turing machine do here and which part is getting solved by the mysterious Oracle?\n\n• @DavidRicherby, that question asks on the other direction $I \\le Q$, and a comment there asks the OP to post a new question if he wants to ask about the $Q\\le I$ direction. – Ran G. Aug 8 '15 at 16:24\n• @RanG. Ah, that's what I didn't spot. The edit makes that clear. – David Richerby Aug 8 '15 at 17:35\n\nGiven $M_1$ and $M_2$ you construct a machine $M$ that on input $w$ does the following:\n\n1. it runs $M_1$ on the first $w$ inputs ($\\epsilon$,0,1,00,01,..) for $w$ steps each.\n1. if $M_1$ accepts some input, you run $M_2$ on the same input and check it accepts too. (note, $M_2$ may not halt!)\n2. If $M_1$ rejects some input, you run $M_2$ for $w$ steps and verify it doesn't accept during that time.\n2. you do the same with $M_2$: run $w$ steps on each of the first $w$ inputs, and verify everything works, or reject otherwise\n3. if all checks pass - accept. Otherwise reject.\n\nThe idea is the following: as long as $M_1$ and $M_2$ behave the same, you will keep accepting all $w$'s. but as long as you find a difference, then you will reject that $w$ and all inputs $w'>w$, thus the accepted language becomes finite. You should be careful because machines may not halt. For instance, $M_1$ may reject some input, but $M_2$ won't halt on it -- still, they both \"reject\" it, and this case should be carefully analyzed.\n\n• Can I put it like this? Please correct me if I'm doing something wrong: Create TM $M'$ that runs $M$ here for $Q$ as a subprogram. It takes input $w$ and checks if $M_1$ accept and then if the same $w$ gets accepted by $M_2$. Also if $w$ gets rejected by $M_1$ double check if it gets rejected by $M_2$. Do the same thing vice verse with $M_2$ as described by you in (1.) and (2.). How does this now proof that $\\in I$? Can I just ask an oracle $I$ if $\\in I$? – Kevin Goedecke Aug 8 '15 at 19:01\n• I think you can do without the second part of step 1, since this will be checked by step 2 (and conversely for the implicit second part of step two). --- I do not understand what problem there is with halting in your last two sentences. If one accept and the other rejects or does not halt, you have a difference and not halting is the proper behavior. – babou Aug 8 '15 at 22:52\n• @KevinGoedecke Try to understand what is written, The idea of the proof is as follows. Suppose $v$ is a number that belongs to one language and not the other, and $n$ is the number of steps needed by the machine $M_1$ or $M_2$ that recognizes it. Then the machine $M$ will not accept any input larger than both $v$ and $n$. Thus it will recognize only a finite language. Conversely, if both languages are equal, them $M$ recognizes all inputs, hence infinitely many. Thus a pair $(⟨M_1⟩,⟨M_2⟩)$ is in Q iff the language accepted by $M$ is infinite. – babou Aug 8 '15 at 22:53" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88407916,"math_prob":0.9933909,"size":1011,"snap":"2020-10-2020-16","text_gpt3_token_len":303,"char_repetition_ratio":0.14399205,"word_repetition_ratio":0.011049724,"special_character_ratio":0.3125618,"punctuation_ratio":0.14814815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99980134,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-27T14:55:00Z\",\"WARC-Record-ID\":\"<urn:uuid:6aadb838-f272-4272-bfe0-416c51b784e9>\",\"Content-Length\":\"146344\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2fb7918-eefc-4a8a-b748-fa58ec5d76af>\",\"WARC-Concurrent-To\":\"<urn:uuid:21aa0ef7-747c-4430-a040-c244a20d455f>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/45104/how-to-turing-reduce-equivalent-languages-q-to-infinite-language-i\",\"WARC-Payload-Digest\":\"sha1:T4XRHXOGCQRKMKGKKBKXJ6T6ZYR2RIJP\",\"WARC-Block-Digest\":\"sha1:NQ3SAWTPJMZRYIYOKOPPGUSY6DXDK5NH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146714.29_warc_CC-MAIN-20200227125512-20200227155512-00440.warc.gz\"}"}
https://vtk.org/Wiki/index.php?title=Statistical_analysis&diff=63002&oldid=16519
[ "# Difference between revisions of \"Statistical analysis\"\n\nJump to navigationJump to search\n\n## ParaView Statistics Filters\n\nSince version 3.6.2, ParaView comes with a set of statistics filters. These filters provide a way to use vtkStatisticsAlgorithm subclasses from within ParaView.\n\nOnce ParaView is started, you should see a submenu in the Filters menu bar named Statistics that contains\n\n• Contingency Statistics\n• Descriptive Statistics\n• K-Means\n• Multicorrelative Statistics\n• Principal Component Analysis\n\n## Using the filters\n\nIn the simplest use case, just select a dataset in ParaView's pipeline browser, create a statistics filter from the Filter→Statistics menu, hit return to accept the default empty second filter input, select the arrays you are interested in, and click Apply.\n\nThe default task for all of the filters (labeled \"Model and assess the same data\") is to use a small, random portion of your dataset to create a statistical model and then use that model to evaluate all of the data. There are 4 different tasks that filters can perform:\n\n1. \"Statistics of all the data,\" which creates an output table (or tables) summarizing the entire input dataset;\n2. \"Model a subset of the data,\" which creates an output table (or tables) summarizing a randomly-chosen subset of the input dataset;\n3. \"Assess the data with a model,\" which adds attributes to the first input dataset using a model provided on the second input port; and\n4. \"Model and assess the same data,\" which is really just the 2 operations above applied to the same input dataset. The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.\n\nWhen the task includes creating a model (i.e., tasks 2, and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting. The Training fraction setting will be ignored for tasks 1 and 3.\n\nThe first output of statistics filters is always the model table(s). The model may be newly-created (tasks 1, 2, or 4) or a copy of the input model (task 3). The second output will either be empty (tasks 1 and 2) or a copy of the input dataset with additional attribute arrays (tasks 3 and 4).\n\n## Caveats\n\nWarning: When computing statistics on point arrays and running pvserver with data distributed across more than a single process, the statistics will be skewed because points stored on both processes (due to cells that neighbor each other on different processes) will be counted once for each process they appear in.\n\nOne way to resolve this issue is to force a redistribution of the data, which is not simple. Another approach is to keep a reverse lookup table of data points so that those already visited can be marked as such and not factored in a second time; this might be inefficient.\n\n## Filter-specific options\n\n### Descriptive Statistics\n\nThis filter computes the mininmum, maximum, mean, M2, M3, and M4 aggregates, standard deviation, skewness, and kurtosis for each selected array. Various estimators are available for those 3 last statistics.", null, "Descriptive statistics in action. 
Notice that the filter has 2 outputs: the assessed dataset at the top right and the summary statistics in the bottom pane.\n\nThe assessment of data is performed by providing the 1-D Mahalanobis distance from a given reference value with respect to a given deviation; these 2 quantities can be, but do not have to be, a mean and standard deviation, for instance when the implicit assumption of a 1-dimensional Gaussian distribution is made. In this context The Signed Deviations option allows for the control of whether the reported number of deviations will always be positive or whether the sign encodes if the input point was above or below the mean.\n\n### Contingency Statistics\n\nThis filter computes contingency tables between pairs of attributes. These contingency tables are empirical joint probability distributions; given a pair of attribute values, the observed frequency per observation is retured. Thus the result of analysis is a tabular bivariate probability distribution. This table serves as a Bayesian-style prior model when assessing a set of observations. Data is assessed by computing\n\n• the probability of observing both variables simultaneously;\n• the probability of each variable conditioned on the other (the two values need not be identical); and\n• the pointwise mutual information (PMI).\n\nFinally, the summary statistics include the information entropy of the observations.\n\n### K-Means\n\nThis filter iteratively computes the center of k clusters in a space whose coordinates are specified by the arrays you select. The clusters are chosen as local minima of the sum of square Euclidean distances from each point to its nearest cluster center. The model is then a set of cluster centers. Data is assessed by assigning a cluster center and distance to the cluster to each point in the input data set.\n\nThe K option lets you specify the number of clusters. The Max Iterations option lets you specify the maximum number of iterations before the search for cluster centers terminates. The Tolerance option lets you specify the relative tolerance on cluster center coordinate changes between iterations before the search for cluster centers terminates.\n\n### Multicorrelative Statistics\n\nThis filter computes the covariance matrix for all the arrays you select plus the mean of each array. The model is thus a multivariate Gaussian distribution with the mean vector and variances provided. Data is assessed using this model by computing the Mahalanobis distance for each input point. This distance will always be positive.\n\nThe learned model output format is rather dense and can be confusing, so it is discussed here. The first filter output is a multiblock dataset consisting of 2 tables.\n\n• Raw covariance data.\n• Covariance matrix and its Cholesky decomposition.\n\n#### Raw covariances\n\nThe first table has 3 meaningful columns: 2 titled \"Column1\" and \"Column2\" whose entries generally refer to the N arrays you selected when preparing the filter and 1 column titled \"Entries\" that contains numeric values. The first row will always contain the number of observations in the statistical analysis. The next N rows contain the mean for each of the N arrays you selected. The remaining rows contain covariances of pairs of arrays.\n\n#### Correlations\n\nThe second table contains information derived from the raw covariance data of the first table. The first N rows of the first column contain the name of one array you selected for analysis. 
These rows are followed by a single entry labeled \"Cholesky\" for a total of N+1 rows. The second column, Mean contains the mean of each variable in the first N entries and the number of observations processed in the final (N+1) row.\n\nThe remaining columns (there are N, one for each array) contain 2 matrices in triangular format. The upper right triangle contains the covariance matrix (which is symmetric, so its lower triangle may be inferred). The lower left triangle contains the Cholesky decomposition of the covariance matrix (which is triangular, so its upper triangle is zero). Because the diagonal must be stored for both matrices, an additional row is required — hence the N+1 rows and the final entry of the column named \"Column\".\n\n### Principal Component Analysis\n\nThis filter performs additional analysis above and beyond the multicorrelative filter. It computes the eigenvalues and eigenvectors of the covariance matrix from the multicorrelative filter. Data is then assessed by projecting the original tuples into a possibly lower-dimensional space. For more information see the Wikipedia entry on principal component analysis (PCA).\n\nThe Normalization Scheme option allows you to choose between no normalization — in which case each variable of interest is assumed to be interchangeable (i.e., of the same dimension and units) — or diagonal covariance normalization — in which case each (i,j)-entry of the covariance matrix is normalized by sqrt(cov(i,i) cov(j,j)) before the eigenvector decomposition is performed. This is useful when variables of interest are not comparable but their variances are expected to be useful indications of their full range, and their full ranges are expected to be useful normalization factors.\n\nAs PCA is frequently used for projecting tuples into a lower-dimensional space that preserves as much information as possible, several settings are available to control the assessment output. The Basis Scheme allows you to control how projection to a lower dimension is performed. Either no projection is performed (i.e., the output assessment has the same dimension as the number of variables of interest), or projection is performed using the first N entries of each eigenvector, or projection is performed using the first several entries of each eigenvector such that the \"information energy\" of the projection will be above some specified amount E.\n\nThe Basis Size setting specifies N, the dimension of the projected space when the Basis Scheme is set to \"Fixed-size basis\". The Basis Energy setting specifies E, the minimum \"information energy\" when the Basis Scheme is set to \"Fixed-energy basis\".\n\nSince the PCA filter uses the multicorrelative filter's analysis, it shares the same raw covariance table specified above. The second table in the multiblock dataset comprising the model output is an expanded version of the multicorrelative version.\n\n#### PCA Derived Data Output\n\nAs above, the second model table contains the mean values, the upper-triangular portion of the symmetric covariance matrix, and the non-zero lower-triangular portion of the Cholesky decomposition of the covariance matrix. Below these entries are the eigenvalues of the covariance matrix (in the column labeled \"Mean\") and the eigenvectors (as row vectors) in an additional NxN matrix." ]
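The "assess" step of the Descriptive Statistics filter described above is easy to mimic outside ParaView. The sketch below is plain NumPy, not ParaView's API: it expresses each value as a number of deviations from a reference value, with a flag playing the role of the Signed Deviations option. The function name and the sample numbers are my own.

```python
import numpy as np

def assess_descriptive(values, reference=None, deviation=None, signed=True):
    """1-D deviation count relative to a reference value and a deviation,
    defaulting to the sample mean and standard deviation."""
    values = np.asarray(values, dtype=float)
    reference = values.mean() if reference is None else reference
    deviation = values.std(ddof=1) if deviation is None else deviation
    d = (values - reference) / deviation
    return d if signed else np.abs(d)

data = np.array([9.8, 10.1, 10.0, 9.9, 14.2])   # hypothetical measurements
print(assess_descriptive(data))                 # the last value has the largest deviation
print(assess_descriptive(data, signed=False))   # unsigned, as with Signed Deviations turned off
```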
[ null, "https://vtk.org/Wiki/images/thumb/b/bd/DescriptiveStatisticsExample.png/300px-DescriptiveStatisticsExample.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8351102,"math_prob":0.93241787,"size":19342,"snap":"2021-21-2021-25","text_gpt3_token_len":4131,"char_repetition_ratio":0.12814148,"word_repetition_ratio":0.5424967,"special_character_ratio":0.21336986,"punctuation_ratio":0.092846274,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9904318,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-21T07:04:28Z\",\"WARC-Record-ID\":\"<urn:uuid:df72e33e-1336-4d4e-b230-36646ee57a3b>\",\"Content-Length\":\"62929\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:314b5662-aa65-414b-b09f-724efa793799>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5f0f535-06cf-4411-9ada-d63641e4af1a>\",\"WARC-IP-Address\":\"66.194.253.19\",\"WARC-Target-URI\":\"https://vtk.org/Wiki/index.php?title=Statistical_analysis&diff=63002&oldid=16519\",\"WARC-Payload-Digest\":\"sha1:U5IEOSVML4NBVJ3NDLPEI5CJMYFKCPDO\",\"WARC-Block-Digest\":\"sha1:O4R7DCLQDN7J7LZHEQAAYQYOTUP5Y7AU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488268274.66_warc_CC-MAIN-20210621055537-20210621085537-00239.warc.gz\"}"}
https://deepai.org/publication/multilinear-compressive-learning
[ "", null, "", null, "", null, "", null, "Multilinear Compressive Learning\n\nCompressive Learning is an emerging topic that combines signal acquisition via compressive sensing and machine learning to perform inference tasks directly on a small number of measurements. Many data modalities naturally have a multi-dimensional or tensorial format, with each dimension or tensor mode representing different features such as the spatial and temporal information in video sequences or the spatial and spectral information in hyperspectral images. However, in existing compressive learning frameworks, the compressive sensing component utilizes either random or learned linear projection on the vectorized signal to perform signal acquisition, thus discarding the multi-dimensional structure of the signals. In this paper, we propose Multilinear Compressive Learning, a framework that takes into account the tensorial nature of multi-dimensional signals in the acquisition step and builds the subsequent inference model on the structurally sensed measurements. Our theoretical complexity analysis shows that the proposed framework is more efficient compared to its vector-based counterpart in both memory and computation requirement. With extensive experiments, we also empirically show that our Multilinear Compressive Learning framework outperforms the vector-based framework in object classification and face recognition tasks, and scales favorably when the dimensionalities of the original signals increase, making it highly efficient for high-dimensional multi-dimensional signals.\n\nAuthors\n\n03/16/2020\n\nMetrics for Evaluating the Efficiency of Compressing Sensing Techniques\n\nCompressive sensing has been receiving a great deal of interest from res...\n02/17/2020\n\nMultilinear Compressive Learning with Prior Knowledge\n\nThe recently proposed Multilinear Compressive Learning (MCL) framework c...\n12/26/2018\n\nUncertainty Autoencoders: Learning Compressed Representations via Variational Information Maximization\n\nThe goal of statistical compressive sensing is to efficiently acquire an...\n03/21/2013\n\nMulti-dimensional sparse structured signal approximation using split Bregman iterations\n\nThe paper focuses on the sparse approximation of signals using overcompl...\n09/22/2020\n\nPerformance Indicator in Multilinear Compressive Learning\n\nRecently, the Multilinear Compressive Learning (MCL) framework was propo...\n04/29/2019\n\nA Cross-Layer Approach to Data-aided Sensing using Compressive Random Access\n\nIn this paper, data-aided sensing as a cross-layer approach in Internet-...\n11/16/2015\n\nCross-scale predictive dictionaries\n\nWe propose a novel signal model, based on sparse representations, that c...\n\nCode Repositories\n\nMultilinearCompressiveLearningFramework\n\nNone\n\nThis week in AI\n\nGet the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.\n\nI Introduction\n\nThe classical sample-based signal acquisition and manipulation approach usually involve separate steps of signal sensing, compression, storing or transmitting, then the reconstruction. This approach requires the signal to be sampled above the Nyquist rate in order to ensure high-fidelity reconstruction. Since the existence of spatial-multiplexing cameras, over the past decade, Compressive Sensing (CS) has become an efficient and a prominent approach for signal acquisition at sub-Nyquist rates, combining the sensing and compression step at the hardware level. 
This is due to the assumption that the signal often possesses specific structures that exhibit sparse or compressible representation in some basis, thus, can be sensed at a lower rate than the Nyquist rate but still allows almost perfect reconstruction [2, 3]. In fact, many data modalities that we operate on are often sparse or compressible. For example, smooth signals are compressible in the Fourier domain or subsequent frames in a video are piecewise smooth, thus compressible in a wavelet domain. With the efficient realization at the hardware level such as the popular Single Pixel Camera, CS becomes an efficient signal acquisition framework, however, making the signal manipulation an intimidating task. Indeed, over the past decade, since reversing the signal to its original domain is often considered the necessary step for signal manipulation, a significant amount of works have been dedicated to signal reconstruction, giving certain insights and theoretical guarantees for the successful recovery of the signal from compressively sensed measurements [2, 1, 3].\n\nWhile signal recovery plays a major role in some sensing applications such as image acquisition for visual purposes, there are many scenarios in which the primary objective is the detection of certain patterns or inferring some properties in the acquired signal. For example, in many radar applications, one is often interested in anomaly patterns in the measurements, rather than signal recovery. Moreover, in certain applications [4, 5], signal reconstruction is undesirable since the step can potentially disclose private information, leading to the infringement of data protection legislation. These scenarios naturally led to the emergence of Compressive Learning (CL) concept [6, 7, 8, 9] in which the inference system is built on top of the compressively sensed measurements without the explicit reconstruction step. While the amount of literature in CL is rather insignificant compared to signal reconstruction in CS, different attempts have been made to modify the sensing component in accordance with the learning task [10, 11], to extract discriminative features [7, 12] from the randomly sensed measurements or to jointly optimize the sensing matrix [13, 14] and the subsequent inference system. Improvements to different components of CL pipeline have been proposed, however, existing frameworks utilize the same compressive acquisition step that performs a linear projection of the vectorized data, thereby operating on the vector-based measurements and thus losing the tensorial structure in the measurements of multi-dimensional data.\n\nIn fact, many data modalities naturally possess the tensorial format such as color images, videos or multivariate time-series. The multi-dimensional representation naturally reflects the semantic differences inherent in different dimensions or tensor modes. For example, the spatial and temporal dimensions in a video or the spatial and the spectral dimensions in hyperspectral images represent two different concepts, having different properties. Thus by exploiting this natural form of the signals and considering the semantic differences between different dimensions, many tensor-based signal processing, and learning algorithms have shown its superiority over the vector-based approach, which simply operates on the vectorized data [15, 16, 17, 18, 19, 20, 21]. Indeed, tensor representation and its associated mathematical operations and properties have found various applications in the Machine Learning community. 
For example, in multivariate time-series analysis, multilinear projections were utilized in [18, 22] to model the dependencies between data points along the feature and the temporal dimension separately. Several multilinear regression [23, 24] and discriminant models [25, 26] have been developed to replace their linear counterparts, with improved performance. In the neural network literature, multilinear techniques have been employed to compress pre-trained networks [27, 28, 29] or to construct novel neural network architectures [19, 30, 22].

It is worth noting that CS plays an important role in many applications that involve high-dimensional tensor signals, because standard point-by-point signal acquisition is both memory- and computation-intensive. Representative examples include Hyperspectral Compressive Imaging (HCI), Synthetic Aperture Radar (SAR) imaging, Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Therefore, the tensor-based approach has also found its place in CS, in what is known as Multi-dimensional Compressive Sensing (MCS), which replaces the linear sensing and reconstruction model with a multilinear one. Similar to vector-based CS, hereafter simply referred to as CS, the majority of efforts in MCS are dedicated to constructing multilinear models that induce a sparse representation along each tensor mode with respect to a set of bases. For example, the adoption of a sparse Tucker representation and the Kronecker sensing scheme in MRI allows computationally efficient signal recovery at competitive Peak Signal to Noise Ratio (PSNR) [31, 32]. In addition, the availability of optical implementations of separable sensing operators naturally enables MCS, significantly reducing the cost of data collection and reconstruction.

While multilinear models have been successfully applied in Compressive Sensing and Machine Learning, to the best of our knowledge they have not yet been utilized in Compressive Learning, the joint framework combining CS and ML. In this paper, in order to leverage the multi-dimensional structure of many data modalities, we propose the Multilinear Compressive Learning framework, which adopts a multilinear sensing operator and a neural network classifier that is designed to utilize the multi-dimensional, structure-preserving compressed measurements. The contributions of this paper are as follows:

• We propose Multilinear Compressive Learning (MCL), a novel CL framework that consists of a multilinear sensing module and a multilinear feature synthesis component, both taking into account the multi-dimensional nature of the signals, together with a task-specific neural network. The multilinear sensing module compressively senses along each separate mode of the original tensor signal, producing structurally encoded measurements. Similarly, the feature synthesis component performs its feature learning steps separately along each mode of the compressed measurements, producing the inputs to the subsequent task-specific neural network, whose structure depends on the inference problem.

• We show both theoretically and empirically that the proposed MCL framework is highly cost-effective in terms of memory and computational complexity.
In addition, theoretical analysis and experimental results indicate that our framework scales well as the dimensionality of the original signal increases, making it highly efficient for high-dimensional tensor signals.

• We conduct extensive experiments on object classification and face recognition tasks to validate the performance of our framework in comparison with its vector-based counterpart. In addition, the effects of different components and hyperparameters of the proposed framework are also analyzed empirically.

• We publicly provide our implementation of the experiments reported in this paper to facilitate future research. By following our detailed instructions on how to set up the software environment, all experiment results can be reproduced in just one line of code.

The remainder of the paper is organized as follows: in Section 2, we review the background on Compressive Sensing, Multi-dimensional Compressive Sensing and Compressive Learning. In Section 3, a detailed description of the proposed Multilinear Compressive Learning framework is given, together with a complexity analysis and a comparison with the vector-based framework. In Section 4, we provide the details of our experiment protocols and a quantitative analysis of the different experiment configurations. Section 5 concludes our work with possible future research directions.

II Related Work

II-A Notation

In this paper, we denote scalar values by either lower-case or upper-case characters, vectors by lower-case bold-face characters, matrices by upper-case or Greek bold-face characters, and tensors by calligraphic capitals. A tensor with N modes and dimension I_n in mode-n is represented as X ∈ ℝ^{I_1 × ⋯ × I_N}. The entry at index i_n in mode-n, for n = 1, …, N, is denoted as X_{i_1, …, i_N}. In addition, vec(X) denotes the vectorization operation that rearranges the elements of X into a vector.

Definition 1 (The Kronecker Product)

The Kronecker product between two matrices A ∈ ℝ^{M×N} and B ∈ ℝ^{P×Q} is denoted as A ⊗ B, has dimension MP × NQ, and is defined by:

A ⊗ B = [ A_{11}B … A_{1N}B ; ⋮ ⋱ ⋮ ; A_{M1}B … A_{MN}B ]    (1)

Definition 2 (Mode-n Product)

The mode-n product between a tensor X ∈ ℝ^{I_1 × ⋯ × I_N} and a matrix W ∈ ℝ^{J_n × I_n} is another tensor of size I_1 × ⋯ × I_{n−1} × J_n × I_{n+1} × ⋯ × I_N, denoted by X ×_n W. Its elements are defined as (X ×_n W)_{i_1, …, i_{n−1}, j_n, i_{n+1}, …, i_N} = Σ_{i_n = 1}^{I_n} X_{i_1, …, i_n, …, i_N} W_{j_n, i_n}.

The following relationship between the Kronecker product and the mode-n product is the cornerstone of MCS:

Y = X ×_1 W_1 × ⋯ ×_N W_N    (2)

can be written as

y = (W_1 ⊗ ⋯ ⊗ W_N) x    (3)

where y = vec(Y) and x = vec(X).
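The identity linking Eq. (2) and Eq. (3) can be checked numerically. The following NumPy sketch implements the mode-n product and verifies that sensing a random tensor mode by mode coincides with multiplying its vectorization by the Kronecker product of the factor matrices. The tensor and matrix sizes are arbitrary toy values; note that the row-major flattening used by NumPy is what pairs vec(·) with the ordering W_1 ⊗ ⋯ ⊗ W_N in Eq. (3).

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Mode-n product: contract dimension `mode` of `tensor` with the columns of `matrix`."""
    t = np.moveaxis(tensor, mode, 0)                  # bring mode n to the front: (I_n, ...)
    out = matrix @ t.reshape(t.shape[0], -1)          # (J_n, product of the remaining dims)
    out = out.reshape((matrix.shape[0],) + t.shape[1:])
    return np.moveaxis(out, 0, mode)                  # put the new dimension back in place

rng = np.random.default_rng(0)
I_dims, J_dims = (4, 5, 6), (2, 3, 4)                 # toy dimensions I_n and J_n
X = rng.standard_normal(I_dims)
W = [rng.standard_normal((j, i)) for j, i in zip(J_dims, I_dims)]

# Eq. (2): Y = X x_1 W_1 x_2 W_2 x_3 W_3
Y = X
for n, Wn in enumerate(W):
    Y = mode_n_product(Y, Wn, n)

# Eq. (3): y = (W_1 kron W_2 kron W_3) vec(X), with vec(.) taken in row-major (C) order
y = np.kron(np.kron(W[0], W[1]), W[2]) @ X.reshape(-1)

print(np.allclose(Y.reshape(-1), y))                  # True: the two models agree
```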
II-B Compressive Sensing

Compressive Sensing (CS) is a signal acquisition and manipulation paradigm that performs simultaneous sensing and compression at the hardware level, leading to a large reduction in computational cost and in the number of measurements. A signal y ∈ ℝ^I acquired under CS is assumed to have a sparse or compressible representation in some basis or dictionary Ψ, that is:

y = Ψx  with  ∥x∥_0 ≤ K  and  K ≪ I    (4)

where ∥x∥_0 denotes the number of non-zero entries in x. While the dictionary in Eq. (4) is complete, i.e., the number of columns in Ψ is equal to the signal dimension I, we should note that signal models with over-complete dictionaries can also be used with some modifications.

With the sparsity assumption, CS performs the linear sensing step using a sensing operator Φ ∈ ℝ^{M×I}, acquiring a small number of measurements z ∈ ℝ^M, with M ≪ I, from the analog signal y:

z = Φy    (5)

Eq. (5) represents both the sensing and the compression step, and it can be implemented efficiently at the sensor level. Thus, what we obtain from CS sensors is a limited number of measurements that are used for all subsequent processing steps. By combining Eq. (4) and (5), the CS model is usually expressed as:

z = ΦΨx  with  ∥x∥_0 ≤ K  and  K ≪ I    (6)

In some applications, we are interested in recovering the signal y from z. This involves developing theoretical properties and algorithms to determine the sensing operator Φ, the dictionary or basis Ψ, and the number K of nonzero coefficients, in order to ensure that the reconstruction is unique and of high fidelity [2, 35, 3]. The reconstruction of y is often posed as finding the sparsest solution of the under-determined linear system z = ΦΨx, particularly:

argmin_x ∥x∥_0  s.t.  ∥z − ΦΨx∥_2 ≤ ε    (7)

where ε is a small constant specifying the amount of residual error allowed in the approximation. A large body of research has been dedicated to solving the problem in Eq. (7) and its variants, with two main approaches: basis pursuit (BP), which relaxes Eq. (7) to a convex problem that can be solved by linear programming or second-order cone programs, and matching pursuit (MP), a class of greedy algorithms that iteratively refine the solution towards the sparsest one [38, 39]. Both BP and MP algorithms are computationally intensive when the number of elements in x is large, especially in the case of multi-dimensional signals.
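To make the recovery problem in Eq. (7) concrete, the sketch below generates a K-sparse signal, senses it with a random Gaussian matrix, and recovers it with a minimal implementation of Orthogonal Matching Pursuit, one of the greedy matching-pursuit algorithms mentioned above. The signal length, number of measurements and sparsity level are toy values, and the sparsifying basis Ψ is taken to be the identity for simplicity.

```python
import numpy as np

def omp(Phi, z, K):
    """Minimal Orthogonal Matching Pursuit: greedily build the support of a K-sparse solution."""
    residual, support, coef = z.copy(), [], None
    for _ in range(K):
        # Select the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], z, rcond=None)
        residual = z - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
I, M, K = 256, 64, 8                               # signal length, measurements, sparsity
x = np.zeros(I)
x[rng.choice(I, size=K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, I)) / np.sqrt(M)     # random Gaussian sensing matrix, Eq. (5)
z = Phi @ x                                        # compressive measurements

x_hat = omp(Phi, z, K)
print(np.max(np.abs(x - x_hat)))                   # ~0: exact recovery with high probability
```

For a tensor signal, the same recovery problem would involve the Kronecker-structured dictionary of Eq. (10) below, which is exactly what makes the fully vectorized approach expensive.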
II-C Multi-dimensional Compressive Sensing

Given a multi-dimensional signal Y ∈ ℝ^{I_1 × ⋯ × I_N}, a direct application of the sparse representation in Eq. (4) requires vectorizing Y and performing the calculations on a dictionary whose number of elements scales exponentially with N. Instead of assuming that the vectorized signal is sparse in some basis or dictionary, MCS adopts a sparse Tucker model as follows:

Y = X ×_1 Ψ_1 × ⋯ ×_N Ψ_N  with  ∥vec(X)∥_0 ≤ K    (8)

which assumes that the signal is sparse with respect to a set of bases or dictionaries Ψ_1, …, Ψ_N. In some cases, the sensing step can also be taken in a multilinear way, i.e., by using a set of linear operators Φ_1, …, Φ_N acting along each mode separately, also known as separable sensing operators:

Z = Y ×_1 Φ_1 × ⋯ ×_N Φ_N    (9)

which allows us to obtain measurements Z with a retained multi-dimensional structure. From Eq. (2), (3), (8) and (9), the MCS model is often expressed as:

z = (B_1 ⊗ ⋯ ⊗ B_N) x  with  ∥x∥_0 ≤ K    (10)

where z = vec(Z), x = vec(X), and B_n = Φ_n Ψ_n (n = 1, …, N). The formulation in Eq. (10) is also known as Kronecker CS.

Since MCS can be expressed in vector form, the existing algorithms and theoretical bounds for vector-based CS have also been extended to MCS. Representative examples include Kronecker OMP and its tensor block-sparsity extension, which improves the computation significantly. It is worth noting that, by adopting a multilinear structure, MCS operates with a set of smaller sensing matrices and dictionaries, requiring much lower memory and computation compared to the vectorization approach.

II-D Compressive Learning

The idea of learning directly from compressed measurements dates back to early work proposing a framework termed compressive classification, which introduced the concept of smashed filters and operates directly on the compressive measurements without reconstruction as a first proxy step. This result was subsequently strengthened by showing that, when a sufficiently large random sensing matrix is used, it can capture the structure of the data manifold. Later, further extensions that extract discriminative features from compressive measurements for activity recognition [44, 45] or face recognition were also proposed.

The concept of CL was then formally introduced, with theoretical analysis illustrating that learning machines can be built directly in the compressed domain. Particularly, given certain conditions on the sensing matrix, the performance of a linear Support Vector Machine (SVM) trained on compressed measurements is as good as that of the best linear threshold classifier trained on the original signal. Later, for compressive learning of signals described by a Gaussian Mixture Model, the asymptotic behavior of the upper bound and its extension to learning the sensing matrix were also derived.

The idea of jointly optimizing the sensing matrix with the classifier was also adopted in an adaptive version of a feature-specific imaging system, which learns an optimal sensing matrix based on past measurements. With the advances in computing hardware and stochastic optimization techniques, end-to-end CL systems were proposed, with several follow-up extensions and applications [46, 47, 48], indicating superior performance when the sensing component and the classifier are optimized simultaneously on task-specific data. Our work is closely related to these end-to-end CL systems in that we also optimize the CL system via stochastic optimization in an end-to-end manner. Differently from previous work, our proposed framework efficiently utilizes the tensor structure inherent in many types of signal, thus outperforming the vector-based approach in both inference performance and computational efficiency.

III Multilinear Compressive Learning Framework

In this Section, we first describe the proposed Multilinear Compressive Learning (MCL) framework, which operates directly on the tensor representation of the signals. Then, the initialization scheme and the optimization procedure of the proposed framework are discussed. Lastly, a theoretical analysis of the framework's complexity in comparison with its vector-based counterpart is provided.

III-A Motivation

In order to model the multi-dimensional structure in the signal of interest Y ∈ ℝ^{I_1 × ⋯ × I_N}, we assume that the discriminative structure in Y can be captured in a lower-dimensional multilinear subspace of dimensions m_1 × ⋯ × m_N with m_n ≤ I_n (n = 1, …, N):

Y = X̄ ×_1 Ψ̄_1 × ⋯ ×_N Ψ̄_N    (11)

where Ψ̄_n ∈ ℝ^{I_n × m_n} denote the factor matrices and X̄ ∈ ℝ^{m_1 × ⋯ × m_N} is the signal representation in this multilinear subspace.

Here we should note that, although Eq. (11) in our framework and Eq. (8) in MCS look similar in their mathematical form, the assumption and the motivation are different. The objective in MCS is to reconstruct the signal by assuming the existence of a set of sparsifying dictionaries or bases and by optimizing the representation to be as sparse as possible. Since our objective is to learn a classification or regression model, we make no assumption or constraint on the sparsity of X̄, but assume instead that the factorization in Eq. (11) can lead to a tensor subspace in which the representation is discriminative or meaningful for the learning problem.

As mentioned in the previous Section, in some applications the measurements can be taken in a multilinear fashion, with different linear sensing operators acting along different tensor modes, i.e., separable sensing operators; we then obtain the measurements from the following sensing equation:

Z = Y ×_1 Φ_1 × ⋯ ×_N Φ_N    (12)

where Φ_n ∈ ℝ^{m_n × I_n} (n = 1, …, N) represent the sensing matrices of those linear operators.

In cases where the measurements of the multi-dimensional signal are taken in a vector-based fashion, i.e., with the following sensing model:

z = Φ vec(Y)    (13)

with a single sensing operator Φ, we can still enforce a structure-preserving sensing operation similar to the multilinear sensing scheme in Eq. (12) by setting:

Φ = Φ_1 ⊗ ⋯ ⊗ Φ_N    (14)

to obtain the multilinear measurements of Eq. (12) from the vectorized measurements of Eq. (13).

Combining Eq. (11) and (12), we can express our measurements as:
Z = X̄ ×_1 (Φ_1 Ψ̄_1) × ⋯ ×_N (Φ_N Ψ̄_N)    (15)

By setting each sensing matrix Φ_n to be the pseudo-inverse of Ψ̄_n, for all n = 1, …, N, we obtain measurements that lie in the discriminative tensor subspace mentioned previously.

III-B Design

Figure 1 illustrates our proposed MCL framework, which consists of the following components (a minimal sketch of the corresponding forward pass is given after the list):

• CS component: the data acquisition step for the multi-dimensional signal Y is performed via separable linear sensing operators Φ_1, …, Φ_N. As mentioned previously, in cases where the actual hardware implementation only allows a vector-based sensing scheme, Eq. (14) allows the simulation of this multilinear sensing step. This component produces measurements Z with encoded tensor structure, having the same number of tensor modes (N) as the original signal.

• Feature Synthesis (FS) component: from Z, this step performs feature extraction along the N modes of the measurements with a set of learnable matrices Θ_1, …, Θ_N. Since the measurements Z typically have many fewer elements than the original signal Y, the FS component expands the dimensions of Z, allowing better separability between the sensed signals of different classes in a higher multi-dimensional space that is found through optimization. While the sensing step is restricted to linear operations for computational efficiency, the FS component can be either a multilinear or a nonlinear transformation. A typical nonlinear transformation is to perform zero-thresholding, i.e., ReLU, on Z before multiplying with the matrices Θ_n. In applications that require the transmission of Z before it is analyzed, this simple thresholding step can, before transmission, increase the compression rate by sparsifying the encoded signal and discarding the sign bits. While nonlinearity is often considered beneficial for neural networks, adding the thresholding step as described above further restricts the information retained in the limited number of measurements, thus adversely affecting the inference system. In the Experiments Section, we provide an empirical analysis of the effect of nonlinearity on the inference tasks at different measurement rates. Here we should note that, while our FS component resembles the reprojection step in the vector-based framework, our FS and CS components have different weights (Θ_n and Φ_n, n = 1, …, N), and the dimensionality of the tensor feature T produced by the FS component is task-dependent and is not constrained to that of the original signal.

• Task-specific Neural Network: from the tensor representation T produced by the FS step, a neural network N(·) with a task-dependent architecture is built on top to generate the regression or classification outputs. For example, when analyzing visual data, N(·) can be a Convolutional Neural Network (CNN) in the case of static images, or a Convolutional Recurrent Neural Network in the case of videos. In CS applications that involve distributed arrays of sensors continuously collecting data, architectures specific to time-series analysis, such as a Long Short-Term Memory network, should be considered for N(·). Here we should note that the size of T is also task-dependent and should match the neural network component. For example, in an object detection and localization task, it is desirable to keep the spatial aspect ratio of T similar to that of Y to allow precise localization.
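The following PyTorch sketch assembles a minimal version of this pipeline for RGB images: per-mode sensing matrices Φ_n, per-mode feature-synthesis matrices Θ_n, and a placeholder task network. The measurement sizes and the small convolutional classifier are illustrative assumptions only and do not correspond to the exact architectures used in our experiments; for simplicity the FS output is given the size of the original signal.

```python
import torch
import torch.nn as nn

class MultilinearCompressiveLearner(nn.Module):
    """Minimal sketch of the MCL pipeline: CS -> FS -> task-specific network."""

    def __init__(self, in_shape=(32, 32, 3), m_shape=(9, 9, 2), num_classes=10):
        super().__init__()
        # CS component: one sensing matrix Phi_n per tensor mode, cf. Eq. (12).
        self.Phi = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(m, i)) for m, i in zip(m_shape, in_shape)])
        # FS component: separate mode-wise synthesis matrices Theta_n mapping back to in_shape.
        self.Theta = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(i, m)) for m, i in zip(m_shape, in_shape)])
        # Task-specific network N(.): here an arbitrary small CNN classifier.
        self.net = nn.Sequential(
            nn.Conv2d(in_shape[2], 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    @staticmethod
    def _multilinear(x, mats):
        # x: (batch, d1, d2, d3); apply one matrix along each of the three modes.
        x = torch.einsum('bijk,pi->bpjk', x, mats[0])
        x = torch.einsum('bpjk,qj->bpqk', x, mats[1])
        x = torch.einsum('bpqk,rk->bpqr', x, mats[2])
        return x

    def forward(self, y):                      # y: (batch, I1, I2, I3), e.g. H x W x C images
        z = self._multilinear(y, self.Phi)     # compressed measurements, tensor structure kept
        t = self._multilinear(z, self.Theta)   # synthesized feature tensor
        t = t.permute(0, 3, 1, 2)              # to (batch, C, H, W) for the CNN
        return self.net(t)

model = MultilinearCompressiveLearner()
logits = model(torch.randn(8, 32, 32, 3))      # toy batch of 8 "images"
print(logits.shape)                            # torch.Size([8, 10])
```

Since all three parts are differentiable, the whole graph can be trained end-to-end with respect to the task loss, which is exactly the optimization strategy described next.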
III-C Optimization

In our proposed MCL framework, we aim to optimize all three components, i.e., Φ_n, Θ_n, and N(·), with respect to the inference task. A simple and straightforward approach is to consider all components of the framework as a single computation graph, randomly initialize the weights according to a popular initialization scheme [49, 50], and perform stochastic gradient descent on this graph with respect to the loss function defined by the learning task. However, this approach does not take into account the existing domain knowledge that we have about each component.

As mentioned in Section III-A, with the assumption of the existence of a tensor subspace and of the factorization in Eq. (11), the sensing matrices Φ_n in the CS component can be initialized as the pseudo-inverses of Ψ̄_n, for all n, to obtain initial measurements that are discriminative or meaningful. Several algorithms have been proposed to learn the factorization in Eq. (11) with respect to different criteria, such as multi-class discriminant, class-specific discriminant or max-margin criteria, or Tucker Decomposition with a non-negativity constraint.

In a general setting, we propose to apply Higher Order Singular Value Decomposition (HOSVD) and to initialize Φ_n with the left singular vectors that correspond to the largest singular values in mode n. The sensing matrices are then adjusted together with the other components during the stochastic optimization process. This initialization scheme resembles the one proposed for the vector-based CL framework, which utilizes Principal Component Analysis (PCA). In the general case where one has no prior knowledge of the structure of Y, a transformation that retains most of the energy in the signal, such as PCA or HOSVD, is a popular choice for reducing the dimensionality of the signal. While for higher-order data HOSVD only provides a quasi-optimal solution for data reconstruction in the least-squares sense, our objective is to make inferences, and this initialization scheme works well, as indicated in our Experiments Section.

With the aforementioned initialization scheme of the CS component for a general setting, it is natural to also initialize Θ_n in the FS component with the right singular vectors corresponding to the largest singular values in mode n of the training data. With this initialization of Θ_n, during the initial forward steps of stochastic gradient descent the FS component produces an approximate version of the original signal, and in cases where a classifier pre-trained on the original data or its approximated version exists, the weights of the neural network N(·) can be initialized with those of this pre-trained classifier. It is worth noting that the reprojection step in the vector-based framework shares its weights with the sensing matrix, performing implicit signal reconstruction, while we use separate sensing and feature-extraction weights. Since the vector-based framework involves large sensing and reprojection matrices, from the optimization point of view, enforcing shared weights might be essential in that framework to reduce overfitting, as indicated by its empirical results.

After performing the aforementioned initialization steps, all three components of our MCL framework are optimized using stochastic gradient descent. It is worth noting that the above initialization scheme for the CS and FS components is proposed for a generic setting and can serve as a good starting point. In cases where certain properties of the tensor subspace or of the tensor feature T are known to improve the learning task, one might adopt a different initialization strategy for the CS and FS components to induce such properties.
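A minimal NumPy sketch of this generic initialization is given below, assuming the training set is stacked into an array whose first axis indexes the samples. For each mode, Φ_n is set to the transposed leading left singular vectors of the mode-n unfolding of the training data; as a simplifying assumption for illustration, Θ_n is initialized as the transpose of Φ_n, so that the FS component initially maps the measurements back towards the original signal space (for orthonormal singular vectors this coincides with the pseudo-inverse).

```python
import numpy as np

def hosvd_init(train_data, m_shape):
    """HOSVD-style initialization sketch.

    train_data: array of shape (num_samples, I1, I2, I3)
    m_shape:    target measurement dimensions (m1, m2, m3)
    Returns sensing matrices Phi_n (m_n x I_n) and synthesis matrices Theta_n (I_n x m_n).
    """
    Phi, Theta = [], []
    num_modes = train_data.ndim - 1                        # the first axis indexes samples
    for n in range(num_modes):
        # Mode-n unfolding over the whole training set: rows indexed by mode n.
        unfold = np.moveaxis(train_data, n + 1, 0).reshape(train_data.shape[n + 1], -1)
        # Leading left singular vectors of the unfolding = HOSVD factor of mode n.
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        Un = U[:, :m_shape[n]]                             # keep the top m_n components
        Phi.append(Un.T)                                   # Phi_n projects I_n -> m_n
        Theta.append(Un)                                   # Theta_n maps back m_n -> I_n
    return Phi, Theta

rng = np.random.default_rng(2)
train_data = rng.standard_normal((100, 32, 32, 3))         # toy stand-in for a training set
Phi, Theta = hosvd_init(train_data, m_shape=(9, 9, 2))
print([p.shape for p in Phi])                              # [(9, 32), (9, 32), (2, 3)]
```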
III-D Complexity Analysis

Since the complexity of the neural network component N(·) varies with the choice of architecture, we estimate the theoretical complexity of the CS and FS components and compare it with that of the vector-based framework. Let I_1 × ⋯ × I_N and m_1 × ⋯ × m_N denote the dimensionalities of the original signal Y and of its measurements Z, respectively, and let I = I_1⋯I_N and M = m_1⋯m_N denote the corresponding total numbers of elements. In addition, to compare with the vector-based framework, we also assume that the dimensionality of the feature T equals that of the original signal. Thus, Φ_n belongs to ℝ^{m_n × I_n} and Θ_n belongs to ℝ^{I_n × m_n} for n = 1, …, N in our CS and FS components, while in the vector-based framework the sensing matrix and the reprojection matrix belong to ℝ^{M × I} and ℝ^{I × M}, respectively.

It is clear that the memory complexity of the CS and FS components in our MCL framework is O(Σ_n m_n I_n), while that of the vector-based framework is O(MI). To see the huge difference between the two frameworks, consider a 3D MRI volume: the memory cost of our framework grows with the sum of the per-mode products m_n I_n, while that of the vector-based framework grows with the product of all signal and measurement dimensions, a gap of several orders of magnitude.

Regarding the computational complexity of our framework, the CS component computes Z = Y ×_1 Φ_1 × ⋯ ×_N Φ_N and the FS component computes the corresponding expansion with Θ_1, …, Θ_N; the cost of each pass is upper-bounded by O(I Σ_n m_n) multiply–accumulate operations. For the vector-based framework, the sensing step computes z = Φ vec(Y) and the reprojection step computes the corresponding expansion, resulting in a total complexity of O(MI). For the same 3D MRI example, the total computational complexity of our framework is again several orders of magnitude lower than that of the vector-based framework.

Table I summarizes the complexity of the two frameworks. It is worth noting that, by taking into account the multi-dimensional structure of the signal, the proposed framework has both memory and computational complexity several orders of magnitude lower than its vector-based counterpart.
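These orders of magnitude can be made concrete with a few lines of Python. The volume size and per-mode measurement dimensions below are hypothetical values chosen only for illustration (they are not the configuration reported in Table I), but the relative gap they expose is representative.

```python
from math import prod

# Hypothetical 3D volume and per-mode measurement sizes (illustrative only).
I_dims = (128, 128, 64)          # original dimensions I_1, I_2, I_3
m_dims = (32, 32, 16)            # per-mode measurements m_1, m_2, m_3

# Memory: number of weights in the sensing + feature-synthesis (or reprojection) matrices.
mcl_params = 2 * sum(m * i for m, i in zip(m_dims, I_dims))   # all Phi_n and Theta_n
vec_params = 2 * prod(m_dims) * prod(I_dims)                  # M x I sensing + I x M reprojection
                                                              # (halved if the two share weights)

# Computation: multiply-accumulate operations for one signal.
def multilinear_macs(dims_in, dims_out):
    """Cost of applying one matrix along each mode in sequence (a chain of mode-n products)."""
    macs, current = 0, list(dims_in)
    for n, out in enumerate(dims_out):
        macs += out * prod(current)   # one mode-n product: (current entries) x (output size of mode n)
        current[n] = out
    return macs

mcl_macs = multilinear_macs(I_dims, m_dims) + multilinear_macs(m_dims, I_dims)  # CS pass + FS pass
vec_macs = 2 * prod(m_dims) * prod(I_dims)                                      # sensing + reprojection

print(f"parameters: MCL {mcl_params:,} vs. vector-based {vec_params:,}")
print(f"MACs:       MCL {mcl_macs:,} vs. vector-based {vec_macs:,}")
```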
IV Experiments

In this section, we provide a detailed description of our empirical analysis of the proposed MCL framework. We start by describing the datasets and the experiment protocols that have been used. In the standard set of experiments, we analyze the performance of MCL in comparison with the vector-based framework. We further investigate the effect of the different components of our framework in the Ablation Study subsection.

IV-A Datasets and Experiment Protocol

We have conducted experiments on object classification and face recognition tasks on the following datasets:

• CIFAR-10 and CIFAR-100: CIFAR is a color (RGB) image dataset for evaluating the object recognition task. The dataset consists of 50,000 images for training and 10,000 images for testing, with a resolution of 32 × 32 pixels. CIFAR-10 refers to the 10-class object recognition task in which each individual image has a single class label coming from 10 different categories. Likewise, CIFAR-100 refers to a more fine-grained classification task, with each image having a label coming from 100 different categories. In our experiments, we randomly set aside part of the training set of CIFAR-10 and CIFAR-100 for validation purposes and trained the algorithms only on the remaining training images.

• CelebA: CelebA is a large-scale face attributes dataset with more than 200,000 images at different resolutions from more than 10,000 identities. In our experiment, we used a subset of the identities in this dataset, split into training, validation, and test samples. In order to evaluate the scalability of our proposed framework, we resized the original images to a set of different resolutions, namely 32 × 32, 48 × 48, 64 × 64, and 80 × 80 pixels, which are subsequently denoted as CelebA-32, CelebA-48, CelebA-64, and CelebA-80, respectively.

In our experiments, two types of network architecture have been employed for the neural network component N(·): the AllCNN architecture and the ResNet architecture. AllCNN is a simple 9-layer feed-forward architecture with no max-pooling (pooling is done via convolution with a stride larger than 1) and no fully-connected layers. ResNet is a 110-layer CNN with residual connections. The exact topologies of AllCNN and ResNet used in our experiments can be found in our publicly available implementation.

Since all of the datasets contain RGB images, we followed the implementation proposed for the vector-based framework, which has 3 different sensing matrices, one for each color channel, with the corresponding reprojection matrices enforced to share weights with the sensing matrices. The sensing matrices in MCL were initialized with the HOSVD decomposition of the training sets, while the sensing matrices in the vector-based framework were initialized with the PCA decomposition of the training set. Likewise, the bases obtained from HOSVD and PCA were also used to initialize the FS component in our framework and the reprojection matrices in the vector-based framework, respectively. In addition, we also trained the neural network component on uncompressed data with respect to the learning tasks and initialized the classifier in each framework with these pre-trained networks' weights. After the initialization step, both frameworks were trained in an end-to-end manner.

All algorithms were trained with the ADAM optimizer using a step-wise learning-rate schedule in which the learning rate was lowered at two fixed epochs, and each algorithm was trained for a fixed total number of epochs. A small weight-decay coefficient was used to regularize all the trainable weights in all experiments. We performed no data preprocessing, except for scaling all pixel values to [0, 1]. In addition, data augmentation was employed by random flipping on the horizontal axis and random image shifting within a small fraction of the spatial dimensions. In all experiments, the final model weights used to measure performance on the test sets are those of the epoch with the highest validation accuracy.

For each experiment configuration, we performed multiple runs, and the mean and standard deviation of the test accuracy are reported.
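For concreteness, the following PyTorch/torchvision sketch shows the kind of training setup the above corresponds to. The number of epochs, the learning-rate milestones, the weight-decay value and the shift fraction are placeholder values (the exact ones are not reproduced here), a dummy batch stands in for a real data loader, and a trivial classifier stands in for the full MCL pipeline.

```python
import torch
import torchvision.transforms as T

# Placeholder hyperparameters -- not the exact values used in the paper.
EPOCHS, MILESTONES, WEIGHT_DECAY = 120, (60, 90), 1e-4

# Augmentation: horizontal flips and small spatial shifts; ToTensor also scales pixels to [0, 1].
train_transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    T.ToTensor(),
])  # in practice attached to the training dataset

# Stand-in model; in practice this would be the full MCL pipeline (CS + FS + task network).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32 * 3, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=WEIGHT_DECAY)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=list(MILESTONES), gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

# Dummy batch standing in for a real DataLoader over CIFAR or CelebA.
train_loader = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))]

for epoch in range(EPOCHS):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
    # keep the weights of the epoch with the best validation accuracy (validation loop not shown)
```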
IV-B Comparison with the vector-based framework

In order to compare with the vector-based framework, we performed experiments on 3 datasets: CIFAR-10, CIFAR-100, and CelebA-32. To compare the performance at different measurement rates, we employed three different numbers of measurements for the vector-based framework; since that framework uses a separate sensing matrix for each color channel, its total number of measurements is counted over all three channels. Since we cannot always select the size of the measurements in MCL to match the number of measurements in the vector-based framework exactly, we selected the configurations of (m_1, m_2, m_3) that most closely match the vector-based ones. In addition, for a given target number of measurements there can be more than one configuration of (m_1, m_2, m_3) that yields a similar number of measurements; for each measurement value used in the vector-based framework, we therefore evaluated two different configurations of (m_1, m_2, m_3). The measurement configurations are summarized in Table II.

In order to effectively compare the CS and FS components in MCL with those of the vector-based framework, two different neural network architectures with different capacities have been used. Tables III and IV show the accuracy on the test set with the AllCNN and the ResNet architecture, respectively. The second row of each table shows the performance of the base classifier on the uncompressed data, which we term the Oracle.

It is clear that our proposed framework outperforms the vector-based framework at all compression rates and on all datasets with both the AllCNN and the ResNet architecture, except for the CIFAR-100 dataset at the lowest measurement rate. The performance gaps between the proposed MCL framework and the vector-based one are large on the CIFAR datasets at two of the three measurement rates. In the case of the CelebA-32 dataset, at one of the measurement configurations the inference systems learned by our proposed framework even slightly outperform the Oracle setting for both the AllCNN and the ResNet architecture.

Although the capacities of the AllCNN and ResNet architectures are different, their performances on the uncompressed data are roughly similar. Regarding the effect of the two base classifiers on the two Compressive Learning pipelines, the optimal configurations of our framework at each measurement rate are consistent between the two classifiers, i.e., the bold patterns in Tables III and IV are similar. When switching from AllCNN to ResNet, the vector-based framework exhibits a performance drop at the highest measurement rate but improves at the two lower rates, while for our framework the test accuracies stay approximately the same or improve.

Table V shows the empirical complexity of both frameworks with respect to the different measurement configurations, excluding the base classifiers. Since all three datasets employed in this experiment have the same input size, and the size of the feature tensor T in MCL was set equal to the original input size, the complexities of the CS and FS components are the same for all three datasets. It is clear that our proposed MCL framework has much lower memory and computational complexity than its vector-based counterpart: even operating at the highest measurement rate, the CS and FS components of our framework require far fewer parameters and FLOPs than those of the vector-based framework operating at its lowest measurement rate. Interestingly, the optimal configuration at each measurement rate obtained with our framework also has lower or similar complexity compared to the alternative configuration.

In Figure 2, we visualize the features obtained from the reprojection step of the vector-based framework and from the FS component of the proposed framework, respectively. It is worth noting that the sensing matrices and the reprojection matrices (in the vector-based framework) or the FS matrices Θ_n (in our MCL framework) were initialized with PCA and HOSVD, respectively. In addition, the base network classifiers were also initialized with the ones trained on the original data. Thus, it is intuitive to expect the features obtained from both frameworks to be visually interpretable, even though no explicit reconstruction objective was incorporated during the training phase. Indeed, from Figure 2 we can see that, with the highest number of measurements, the feature images obtained from both frameworks look very similar to the original images.
Particularly, the feature images synthesized by the vector-based framework look visually closer to the original images than those obtained from our MCL framework. Since the sensing and reprojection steps in the vector-based framework share the same weight matrices during the optimization procedure, the whole pipeline is more strongly constrained to reconstruct the images at the reprojection step.

When the number of measurements drops to a small fraction of the number of elements in the original signal, the reverse scenario happens: the feature images obtained from our framework retain more facial features than those from the vector-based framework. This is due to the fact that most of the information in facial images in particular, and in natural images in general, lies in the spatial dimensions, i.e., height and width. Besides, when the dimension of the third mode of the measurements is reduced to 1, after the optimization procedure our proposed framework effectively discards the color information, which is less relevant to the face recognition task, and retains more lightness detail, thus performing better than the configurations in which the third-mode dimension is kept at 3.

With the above observations from the empirical analysis, it is clear that the structure-preserving Compressive Sensing and Feature Synthesis components in our proposed MCL framework can better capture the information inherent in the multi-dimensional signal that is essential for the learning tasks, compared with the vector-based framework.

Fig. 2: Illustration of the feature images (inputs to ResNet) synthesized by the proposed framework and the vector-based counterpart. The original images come from the test set of CelebA-32.

IV-C Ablation Study

In this subsection, we provide an empirical analysis of the effect of different components of the MCL framework. The factors considered are: the effect of the popular nonlinear thresholding step discussed in Section III-B; the choice of shared versus separate weights in the CS and FS components; the initialization step discussed in Section III-C; and the scalability of the proposed framework when the original dimensionality of the signal increases. Since the total number of experiment settings when combining all of the aforementioned factors is huge, and results involving multiple factors are difficult to interpret, we analyze these factors in a progressive manner.
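Before presenting the results, the sketch below makes the first two factors concrete for a single mode: the nonlinear variant applies a ReLU to the measurements before the synthesis matrix is applied, and "shared weights" is taken here to mean that Θ_n is tied to the transpose of the corresponding sensing matrix Φ_n (in the spirit of the weight sharing used by the vector-based framework), whereas "separate weights" makes Θ_n an independent trainable matrix. The matrix sizes are toy values.

```python
import numpy as np

def feature_synthesis_mode1(z, theta, nonlinear=False):
    """FS step along mode 1 only (for brevity): optionally threshold, then apply Theta_1."""
    if nonlinear:
        z = np.maximum(z, 0.0)                       # zero-thresholding (ReLU) on the measurements
    return np.tensordot(theta, z, axes=([1], [0]))   # mode-1 product with Theta_1

rng = np.random.default_rng(3)
Phi1 = rng.standard_normal((9, 32))                  # sensing matrix for mode 1 (toy sizes)
z = rng.standard_normal((9, 9, 2))                   # a toy measurement tensor

t_shared = feature_synthesis_mode1(z, Phi1.T)                          # weights tied to Phi_1
t_separate = feature_synthesis_mode1(z, rng.standard_normal((32, 9)))  # independent weights
t_nonlin = feature_synthesis_mode1(z, Phi1.T, nonlinear=True)          # thresholded variant

print(t_shared.shape, t_separate.shape, t_nonlin.shape)                # (32, 9, 2) each
```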
IV-C1 Linearity versus Nonlinearity and Shared versus Separate Weights

Firstly, the choice of linearity or nonlinearity and the choice of shared or separate weights in the CS and FS components are analyzed together, since the two factors are closely related. In this setting, the CS and FS components are initialized by the HOSVD decomposition as described in Section III-C. The neural network classifier has the AllCNN architecture, with the weights initialized from the corresponding pre-trained network on the original data. Table VI shows the test accuracies on CIFAR-10, CIFAR-100 and CelebA-32 at different numbers of measurements. It is clear that most of the highest test accuracies are obtained without the thresholding step and with separate weights in the CS and FS components, i.e., most bold-face numbers appear in the lower-left quarter of Table VI. Comparing the linear and nonlinear options, it is obvious that the thresholding nonlinearity adversely affects the performance, especially when the number of measurements decreases. The reason might be that applying ReLU to the compressed measurements restricts the information to be represented in the positive subspace only, further losing representation power in the compressed measurements when only a limited number of measurements is allowed.

In the linear setting, while the performance differences between shared and separate weights are small in some configurations, we should note that allowing non-shared weights can be beneficial in cases where we know that certain features should be synthesized in the FS component in order to make inferences.

IV-C2 Effect of the Initialization Step

Based on the observations obtained from the above analysis of the effect of linearity and separate weights, we investigated the effect of the initialization step discussed in Section III-C. All setups were trained with a multilinear FS component having weights separate from the CS component. From Table VII, we can easily observe that initializing the CS and FS components with HOSVD increases the performance of the learning systems significantly. When the CS and FS components are initialized with HOSVD, utilizing a pre-trained network further improves the inference performance of the systems, especially in the low measurement-rate regime. Thus, the initialization strategy proposed in Section III-C is beneficial in a general setting for the learning tasks.

IV-C3 Scalability

Finally, the scalability of the proposed framework is validated on different resolutions of the CelebA dataset. All of the previous experiments were conducted on the CelebA-32 dataset, for which there are only 32 × 32 × 3 = 3072 elements in the original signal. To investigate the scalability, we pose the following question: if the original dimensions of the signal are higher than 32 × 32 × 3, can we, with the same numbers of measurements presented in Table II, still learn to recognize facial images at a feasible cost? To answer this question, we trained our framework on CelebA-32, CelebA-48, CelebA-64 and CelebA-80 and recorded the test accuracies, the number of parameters and the number of FLOPs at different numbers of measurements; the results are shown in Table VIII. It is clear that, at each measurement configuration, when the original signal resolution increases, the measurement rate drops correspondingly, however without any adverse effect on the inference performance. In particular, looking at the last column of Table VIII, with a very small sampling rate the proposed framework achieves an accuracy that is only slightly lower than that of the base classifier trained on the original data. Here we should note that most of the images in the CelebA dataset have a resolution higher than 80 × 80 pixels; therefore, the 4 different versions of CelebA (CelebA-32, CelebA-48, CelebA-64, CelebA-80) in our experiments indeed contain increasing levels of data fidelity. From the performance statistics, we can observe that the performance of our framework is characterized by the number of measurements rather than by the measurement or compression rates.

Due to memory limitations when training the vector-based framework at higher resolutions, we could not perform the same set of experiments for the vector-based framework. However, to compare the scalability of the two frameworks in terms of computation and memory, we measured the number of FLOPs and parameters of the vector-based framework, excluding the base classifier, and visualize the results in Figure 3.
It is worth noting that the y-axis is in log scale: as the dimensions of the original signal increase, the complexity of the vector-based framework increases by orders of magnitude, while our proposed MCL framework scales favorably in both memory and computation.

Fig. 3: #FLOPs and #parameters versus the original dimensionality of the signal, measured for the proposed framework and the vector-based framework, excluding the base classifier. The x-axis represents the original dimension of the input signal. The y-axis of the first row represents the number of FLOPs in log scale, while the y-axis of the second row represents the number of parameters.

V Conclusions

In this paper, we proposed Multilinear Compressive Learning, an efficient framework to tackle the Compressive Learning task for multi-dimensional signals. The proposed framework takes into account the tensorial nature of the multi-dimensional signals and performs the compressive sensing as well as the feature extraction step along different modes of the original data, thus being able to retain and synthesize, on a multilinear subspace, the information that is essential for the learning task. We show theoretically and empirically that the proposed framework outperforms its vector-based counterpart in both inference performance and computational efficiency. An extensive ablation study has been conducted to investigate the effect of different components of the proposed framework, giving insights into the importance of different design choices.

References

• E. J. Candès and M. B. Wakin, “An introduction to compressive sampling [a sensing/sampling paradigm that goes against the common knowledge in data acquisition],” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
• E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, vol. 59, no. 8, pp. 1207–1223, 2006.
• D. L. Donoho et al., “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
• P. Mohassel and Y. Zhang, “Secureml: A system for scalable privacy-preserving machine learning,” in 2017 IEEE Symposium on Security and Privacy (SP), pp. 19–38, IEEE, 2017.
• E. Hesamifard, H. Takabi, and M. Ghasemi, “Cryptodl: Deep neural networks over encrypted data,” arXiv preprint arXiv:1711.05189, 2017.
• R. Calderbank and S. Jafarpour, “Finding needles in compressed haystacks,” in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3441–3444, IEEE, 2012.
• M. A. Davenport, M. F. Duarte, M. B. Wakin, J. N. Laska, D. Takhar, K. F. Kelly, and R. G. Baraniuk, “The smashed filter for compressive classification and target recognition,” in Computational Imaging V, vol. 6498, p. 64980H, International Society for Optics and Photonics, 2007.
• M. A. Davenport, P. Boufounos, M. B. Wakin, R. G. Baraniuk, et al., “Signal processing with compressive measurements,” J. Sel. Topics Signal Processing, vol. 4, no. 2, pp. 445–460, 2010.
• H. Reboredo, F. Renna, R. Calderbank, and M. R. Rodrigues, “Compressive classification,” in 2013 IEEE International Symposium on Information Theory, pp. 674–678, IEEE, 2013.
• P. K. Baheti and M. A. Neifeld, “Adaptive feature-specific imaging: a face recognition example,” Applied Optics, vol. 47, no. 10, pp. B21–B31, 2008.
• H. Reboredo, F. Renna, R. Calderbank, and M. R.
Rodrigues, “Projections designs for compressive classification,” in 2013 IEEE Global Conference on Signal and Information Processing, pp. 1029–1032, IEEE, 2013.\n• S. Lohit, K. Kulkarni, P. Turaga, J. Wang, and A. C. Sankaranarayanan, “Reconstruction-free inference on compressive measurements,” in\n\nProceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops\n\n, pp. 16–24, 2015.\n• A. Adler, M. Elad, and M. Zibulevsky, “Compressed learning: A deep neural network approach,” arXiv preprint arXiv:1610.09615, 2016.\n• S. Lohit, K. Kulkarni, and P. Turaga, “Direct inference on compressive measurements using convolutional neural networks,” in 2016 IEEE International Conference on Image Processing (ICIP), pp. 1913–1917, IEEE, 2016.\n• D. Nion and N. D. Sidiropoulos, “Tensor algebra and multidimensional harmonic retrieval in signal processing for mimo radar,” IEEE Transactions on Signal Processing, vol. 58, no. 11, pp. 5693–5705, 2010.\n• F. Miwakeichi, E. Martınez-Montes, P. A. Valdés-Sosa, N. Nishiyama, H. Mizuhara, and Y. Yamaguchi, “Decomposing eeg data into space–time–frequency components using parallel factor analysis,” NeuroImage, vol. 22, no. 3, pp. 1035–1045, 2004.\n• D. M. Dunlavy, T. G. Kolda, and W. P. Kegelmeyer, “Multilinear algebra for analyzing data with multiple linkages,” in Graph algorithms in the language of linear algebra, pp. 85–114, SIAM, 2011.\n• D. T. Tran, M. Magris, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Tensor representation in high-frequency financial data for price change prediction,” in 2017 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–7, IEEE, 2017.\n• D. T. Tran, A. Iosifidis, and M. Gabbouj, “Improving efficiency in convolutional neural networks with multilinear filters,” Neural Networks, vol. 105, pp. 328–339, 2018.\n• A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa, and H. A. Phan, “Tensor decompositions for signal processing applications: From two-way to multiway component analysis,” IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 145–163, 2015.\n• F. Malgouyres and J. Landsberg, “Multilinear compressive sensing and an application to convolutional linear networks,” 2018.\n• D. T. Tran, A. Iosifidis, J. Kanniainen, and M. Gabbouj, “Temporal attention-augmented bilinear network for financial time-series data analysis,” IEEE transactions on neural networks and learning systems, 2018.\n• T. L. Youd, C. M. Hansen, and S. F. Bartlett, “Revised multilinear regression equations for prediction of lateral spread displacement,” Journal of Geotechnical and Geoenvironmental Engineering, vol. 128, no. 12, pp. 1007–1017, 2002.\n• Q. Zhao, C. F. Caiafa, D. P. Mandic, Z. C. Chao, Y. Nagasaka, N. Fujii, L. Zhang, and A. Cichocki, “Higher order partial least squares (hopls): a generalized multilinear regression method,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 7, pp. 1660–1673, 2013.\n• Q. Li and D. Schonfeld, “Multilinear discriminant analysis for higher-order tensor data classification,” IEEE transactions on pattern analysis and machine intelligence, vol. 36, no. 12, pp. 2524–2537, 2014.\n• D. T. Tran, M. Gabbouj, and A. Iosifidis, “Multilinear class-specific discriminant analysis,” Pattern Recognition Letters, vol. 100, pp. 131–136, 2017.\n• E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure within convolutional networks for efficient evaluation,” in Advances in neural information processing systems, pp. 
1269–1277, 2014.\n• M. Jaderberg, A. Vedaldi, and A. Zisserman, “Speeding up convolutional neural networks with low rank expansions,” arXiv preprint arXiv:1405.3866, 2014.\n• V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, “Speeding-up convolutional neural networks using fine-tuned cp-decomposition,” arXiv preprint arXiv:1412.6553, 2014.\n• Y. Yang, D. Krompass, and V. Tresp, “Tensor-train recurrent neural networks for video classification,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3891–3900, JMLR. org, 2017.\n• C. F. Caiafa and A. Cichocki, “Multidimensional compressed sensing and their applications,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 3, no. 6, pp. 355–380, 2013.\n• Y. Yu, J. Jin, F. Liu, and S. Crozier, “Multidimensional compressed sensing mri using tensor decomposition-based sparsifying transform,” PloS one, vol. 9, no. 6, p. e98441, 2014.\n• R. Robucci, L. K. Chiu, J. Gray, J. Romberg, P. Hasler, and D. Anderson, “Compressive sensing on a cmos separable transform image sensor,” in 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 5125–5128, IEEE, 2008.\n• M. Aharon, M. Elad, A. Bruckstein, et al., “K-svd: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on signal processing, vol. 54, no. 11, p. 4311, 2006.\n• D. L. Donoho and M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization,” Proceedings of the National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, 2003.\n• J. A. Tropp and S. J. Wright, “Computational methods for sparse solution of linear inverse problems,” Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.\n• S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM review, vol. 43, no. 1, pp. 129–159, 2001.\n• J. A. Tropp, “Greed is good: Algorithmic results for sparse approximation,” IEEE Transactions on Information theory, vol. 50, no. 10, pp. 2231–2242, 2004.\n• J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on information theory, vol. 53, no. 12, pp. 4655–4666, 2007.\n• L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear singular value decomposition,” SIAM journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.\n• M. F. Duarte and R. G. Baraniuk, “Kronecker compressive sensing,” IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 494–504, 2012.\n• C. F. Caiafa and A. Cichocki, “Computing sparse representations of multidimensional signals using kronecker bases,” Neural computation, vol. 25, no. 1, pp. 186–220, 2013.\n• R. G. Baraniuk and M. B. Wakin, “Random projections of smooth manifolds,” Foundations of computational mathematics, vol. 9, no. 1, pp. 51–77, 2009.\n• K. Kulkarni and P. Turaga, “Recurrence textures for human activity recognition from compressive cameras,” in 2012 19th IEEE International Conference on Image Processing, pp. 1417–1420, IEEE, 2012.\n• K. Kulkarni and P. Turaga, “Reconstruction-free action inference from compressive imagers,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 4, pp. 772–784, 2016.\n• B. Hollis, S. Patterson, and J. Trinkle, “Compressed learning for tactile object recognition,” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1616–1623, 2018.\n• A. Değerli, S. Aslan, M. Yamac, B. Sankur, and M. 
Gabbouj, “Compressively sensed image recognition,” in 2018 7th European Workshop on Visual Information Processing (EUVIP), pp. 1–6, IEEE, 2018.\n• Y. Xu and K. F. Kelly, “Compressed domain image classification using a multi-rate neural network,” arXiv preprint arXiv:1901.09983, 2019.\n• X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in\n\nProceedings of the thirteenth international conference on artificial intelligence and statistics\n\n, pp. 249–256, 2010.\n• K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European conference on computer vision, pp. 630–645, Springer, 2016.\n• F. Wu, X. Tan, Y. Yang, D. Tao, S. Tang, and Y. Zhuang, “Supervised nonnegative tensor factorization with maximum-margin constraint,” in Twenty-Seventh AAAI Conference on Artificial Intelligence, 2013.\n• Y.-D. Kim and S. Choi, “Nonnegative tucker decomposition,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, IEEE, 2007.\n• L. Grasedyck, D. Kressner, and C. Tobler, “A literature survey of low-rank tensor approximation techniques,” GAMM-Mitteilungen, vol. 36, no. 1, pp. 53–78, 2013.\n• A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” tech. rep., Citeseer, 2009.\n• \n\nZ. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in\n\nProceedings of International Conference on Computer Vision (ICCV), 2015.\n• J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.\n• \n\nK. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in\n\nProceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.\n• E. Zisselman, A. Adler, and M. Elad, “Compressed learning for image classification: A deep neural network approach,” Processing, Analyzing and Learning of Images, Shapes, and Forms, vol. 19, p. 1, 2018.\n• D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014." ]
http://repository.bilkent.edu.tr/browse?type=subject&value=Number%20theory
Now showing items 1-7 of 7

• #### Comparison of the formulations for a hub-and-spoke network design problem under congestion

(Elsevier, 2016)
In this paper, we study the hub location problem with a power-law congestion cost and propose an exact solution approach. We formulate this problem in a conic quadratic form and use a strengthening method which rests on ...
• #### Condition number in recovery of signals from partial fractional fourier domain information

(2013)
The problem of estimating unknown signal samples from partial measurements in fractional Fourier domains arises in wave propagation. By using the condition number of the inverse problem as a measure of redundant information, ...
• #### Linear algebraic analysis of fractional fourier domain interpolation

(IEEE, 2009)
In this work, we present a novel linear algebraic approach to certain signal interpolation problems involving the fractional Fourier transform. These problems arise in wave propagation, but the proposed approach to these ...
• #### Multiplier free co-difference matrix for image and video processing

(2009)
In this paper, we propose a new image feature extraction method. We define a matrix called co-difference matrix for a given region. This matrix can be computed without performing any multiplications. The operator that we ...
• #### On a conjecture of Ilmonen, Haukkanen and Merikoski concerning the smallest eigenvalues of certain GCD related matrices

(Elsevier Inc., 2016)
Let Kn be the set of all n×n lower triangular (0,1)-matrices with each diagonal element equal to 1, Ln={YYT:Y ∈ Kn} and let cn be the minimum of the smallest eigenvalue of YYT as Y goes through Kn. The Ilmonen-Haukkanen-Merikoski ...
• #### Shadow detection using 2D cepstrum

(2009)
Shadows constitute a problem in many moving object detection and tracking algorithms in video. Usually, moving shadow regions lead to larger regions for detected objects. Shadow pixels have almost the same chromaticity as ...
• #### Wildfire detection using LMS based active learning

(2009)
A computer vision based algorithm for wildfire detection is developed. The main detection algorithm is composed of four sub-algorithms detecting (i) slow moving objects, (ii) gray regions, (iii) rising regions, and (iv) ...
https://franklin.dyer.me/post/96
## Iterated Polynomials

2017 April 9

Find the formula for the nth iterate of the function $f(x)=2x^2-1$ for $|x| \le 1$. Find the formula for the nth iterate of a quadratic with a fixed point at its minimum.

It seems that some polynomial functions, such as $f(x)=x^2+1$, simply can't have an iteration formula in the closed form. However, some do because they can be put in the form $f(x)=(g\circ h\circ g^{-1})(x)$. For example, any polynomial function of the form

can be iterated, because it takes the aforementioned form with

meaning that

Let's apply this to quadratics. This formula tells us that any quadratic with its vertex on the line $y=x$ can be iterated by this formula, because a quadratic with its vertex on $y=x$ takes the form

or

Which we can see takes the form $f(x)=(g\circ h\circ g^{-1})(x)$ where $g(x)=\frac{1}{a}x+h$ and $h(x)=x^2$, so

That's about it for polynomials, aside from a few quadratics that can be iterated somewhat using trigonometric functions. For example, the quadratic

can be split up into

which, once again, takes our special form and can be iterated as

However, quadratics iterated this way can only be iterated to a certain extent because of the limited domain of the inverse trigonometric functions. Let's try an example anyways. What about $f^{10}(0.5)$? Using our formula, this will be

Well, that was a bad example to try, because $f(0.5)=f(-0.5)=-0.5$, so we could have found that without our formula. How about $f^{10}(0.1)$?

I think I'm noticing a pattern here. When I iterate a quadratic whose vertex is on $y=x$, the graphs of the iterates exhibit predictable behavior. Here are the graphs of $y=\frac{1}{2}(x-2)^2+2$ and its first few iterates:

They all seem to predictably flatten out at the bottom and then become steeper more and more quickly. Now here's the function $f(x)=2x^2-1$ and its first couple iterates:

These display neat sinusoidal behavior and then head straight up. In fact, the crests of each of the waves are at a height of $1$, and the points $(1,1)$ and $(-1,1)$ lie on the curve directly before and after it begins to grow out of control, which is perhaps the reason why we cannot iterate it outside of that interval. Now observe these pictures of a quadratic of neither form:

This quadratic's iterates display erratic behavior that grows insanely large before coming down again. Here's one more interesting case - a parabola tangent to the line $y=x$:

Perhaps there is some formula for quadratics of that form, since they seem to behave predictably... but that's for another post.
[ null, "https://franklin.dyer.me/img/2017-4-9-Fig1.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig2.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig3.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig4.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig5.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig6.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig7.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig8.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig9.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig10.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig11.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig12.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig13.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig14.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig15.png", null, "https://franklin.dyer.me/img/2017-4-9-Fig16.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92471856,"math_prob":0.9987602,"size":2545,"snap":"2021-43-2021-49","text_gpt3_token_len":663,"char_repetition_ratio":0.14364423,"word_repetition_ratio":0.023809524,"special_character_ratio":0.25972494,"punctuation_ratio":0.09906542,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99968743,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-15T20:37:28Z\",\"WARC-Record-ID\":\"<urn:uuid:2410a1bb-76da-427a-87a9-5e4d032fa455>\",\"Content-Length\":\"6433\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4319fff-a168-4dcd-9acc-d7297a7a6ce0>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc7e6483-6973-4f37-99fc-60050da91cb3>\",\"WARC-IP-Address\":\"104.248.61.147\",\"WARC-Target-URI\":\"https://franklin.dyer.me/post/96\",\"WARC-Payload-Digest\":\"sha1:Y43E5FKSAYR6QRTSWGA6HCT5FXJ5QSRB\",\"WARC-Block-Digest\":\"sha1:SOHFI7UG2PSVVIG6EO7YHUCEJS3AAF3J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323583083.92_warc_CC-MAIN-20211015192439-20211015222439-00690.warc.gz\"}"}
https://betterlesson.com/lesson/432411/equvalent-ratios-again?from=breadcrumb_lesson
[ "# Equvalent Ratios Again!\n\n15 teachers like this lesson\nPrint Lesson\n\n## Objective\n\nSWBAT represent relationships between two quantities visually.\n\n#### Big Idea\n\nDevelop a relationship between the ratio table and the coordinate grid\n\n## DO NOW\n\n15 minutes\n\nStudents will complete the Equivalent Ratios and tables worksheet.  This worksheet will re-activate prior learning.  The students will be using tables to find the equivalent ratios. (SMP 7)\n\n## Making Connections between the table and graph\n\n15 minutes\n\nThe students will need to have a clear understanding of the relationship between the ratio table and the coordinate grid.  This was addressed in a prior lesson which is located in their 6th grade tool box.  Many times students view graphing as completely unrelated to ratios.  It will be very important for us to make that connection for them and to get them to realize that graphing provides another way of looking at this relationship.  Additionally, it may be a good idea to review order pairs and how they are plotted on the grid as this becomes a problem for students who have limited knowledge of this concept.\n\nStart by asking students to find at least 3 equivalent ratios to 4/16 by using a ratio table.  Randomly ask students for their ratios and place them in the chart.  Then, show them the coordinate grid and explain/remind  them that the first number (numerator) belongs on the x axis and the second number (denominator) belongs on the y axis.  Model out loud how to graph the points the students gave you.  Ask the students what they notice about the relationship between the table and the graph(SMP 7: structure).  Next, ask the students to find two additional points from the line.  Have students justify that these two points are still equivalent ratios? (SMP 3: arguing).  As an extension to this questioning, ask the students if they can make a generalization about all the points on the line?(SMP 8: Patterns)  I’m looking for students to say that all points on the line are equivalent ratios if they form a straight line.  Also, students might say that they can easily check their work by plotting it on the grid because it will form a straight line.\n\n## Checking our Work\n\n20 minutes\n\nIn this section, I’m going to bring back the tables from the Equivalent ratio and tables worksheet from the DO NOW acitivity.  I’m going to have students use these tables and plot the points on the coordinate grid.  Students will quickly see that there work is correct as the plotted points form a line.  As an extension, I’m going to have the students use the graph to find a value that is not in the table.  Then I will have them put it in the table to justify their answer is correct.  (SMP 6)\n\n## In-Class Practice\n\n15 minutes\n\nI’m going to use the In-Class worksheet (Massachusetts DOE, 2012), to let them practice on their own.  While the students are practicing, I’m going to walk around to check for understanding.  At this time, students should be working independently, but can use their tablemates for assistance.\n\n## Closure\n\n10 minutes\n\nTo bring this lesson to a close, I’m going to have the students create their own ratio table and graph it on the grid.  I will then ask them if there graph shows a ratio and how they know?  Then, I will ask them to use the grid and find a ratio that is not from their table and to justify their answer." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9602709,"math_prob":0.90246373,"size":2929,"snap":"2021-21-2021-25","text_gpt3_token_len":622,"char_repetition_ratio":0.16068377,"word_repetition_ratio":0.0077220076,"special_character_ratio":0.20792079,"punctuation_ratio":0.080756016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9795077,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-06T16:19:13Z\",\"WARC-Record-ID\":\"<urn:uuid:f225ef3f-1cc9-40fd-a14f-9d54e145f559>\",\"Content-Length\":\"114581\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a2d875e-0d14-48c1-a3a2-4f862ea46200>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a793afb-386e-4cb5-857e-cdec94a048f5>\",\"WARC-IP-Address\":\"107.21.130.43\",\"WARC-Target-URI\":\"https://betterlesson.com/lesson/432411/equvalent-ratios-again?from=breadcrumb_lesson\",\"WARC-Payload-Digest\":\"sha1:PG7XUHL5D4AOPPEQXOS66CIP5ACDLP3M\",\"WARC-Block-Digest\":\"sha1:ZROFAEKO2SCIRCGRORH7RXAGL6QHGOUJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988758.74_warc_CC-MAIN-20210506144716-20210506174716-00304.warc.gz\"}"}
https://discuss.codechef.com/t/help-may-lunch-time/90568
[ "# Help may lunch time\n\nmay lunchtime - question - birthday gift | TWINGFT\n\nMy codes run perfectly, with all test cases right, and with the time constraints and i even checked the various different test cases with codechef official editorial code. and the results are matching perfectly… can anyone help me why is it still showing the Wrong Answer… please …\n\nhere is the code : -\n\nt =int(input())\nwhile(t>0):\nt -= 1\nn,k = map(int,input().split())\na = [n]\na = list(map(int, input().split()))\na.sort()\na.reverse()\nchef = 0\np = 0\nfor i in range(0,k2):\nif(i == 0 and a[i]==a[i+1]):\np = -1\ncontinue\nelif(2\nk-i==1 and p == -1):\nchef += a[i] + a[i+1]\nelif(i%2==1 ):\nif(p == 0 or p == -1):\nchef += a[i]\nelse:\np = -1\ncontinue\nelif(chef==0 or i%2==0):\nchef += a[i]\np = 1\nprint(chef)\n\npython" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6826447,"math_prob":0.9989815,"size":751,"snap":"2023-14-2023-23","text_gpt3_token_len":242,"char_repetition_ratio":0.11111111,"word_repetition_ratio":0.029850746,"special_character_ratio":0.37816244,"punctuation_ratio":0.11764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9936021,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T03:22:43Z\",\"WARC-Record-ID\":\"<urn:uuid:05c31856-124f-43bf-8a56-20a9b09e55b4>\",\"Content-Length\":\"12806\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc535ab7-cd43-4d3a-a0bc-9289025e67ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9ed1996-2137-4b6d-a812-92777aff8998>\",\"WARC-IP-Address\":\"34.198.237.79\",\"WARC-Target-URI\":\"https://discuss.codechef.com/t/help-may-lunch-time/90568\",\"WARC-Payload-Digest\":\"sha1:7UV5S4YPY4PLGHZAIUBH75GJIV7OZ4WX\",\"WARC-Block-Digest\":\"sha1:WRJ536HABBYGIRAIXLNEMAIHLGIW3CLK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224650620.66_warc_CC-MAIN-20230605021141-20230605051141-00527.warc.gz\"}"}
https://whatpercentcalculator.com/what-is-percent-decrease-from-113-to-104
[ "# What is the percent decrease from 113 to 104?\n\n## (Percent decrease from 113 to 104 is 7.9646 percent)\n\n### Percent decrease from 113 to 104 is 7.9646 percent! Explanation: What does 7.9646 percent or 7.9646% mean?\n\nPercent (%) is an abbreviation for the Latin “per centum”, which means per hundred or for every hundred. So, 7.9646% means 7.9646 out of every 100. For example, if you decrease 113 by 7.9646% then it will become 104.\n\n### Methods to calculate \"What is the percent decrease from 113 to 104\" with step by step explanation:\n\n#### Method: Use the Percent Decrease Calculator formula (Old Number - New Number/Old Number)*100 to calculate \"What is the percent decrease from 113 to 104\".\n\n1. From the New Number deduct Old Number i.e. 113-104 = 9\n2. Divide above number by Old Number i.e. 9/113 = 0.079646\n3. Multiply the result by 100 i.e. 0.079646*100 = 7.9646%\n\n### 113 Percentage example\n\nPercentages express a proportionate part of a total. When a total is not given then it is assumed to be 100. E.g. 113% (read as 113 percent) can also be expressed as 113/100 or 113:100.\n\nExample: If you earn 113% (113 percent) profit then on an investment of \\$100 you receive a profit of \\$113.\n\nAt times you need to calculate a tip in a restaurant, or how to split money between friends in your head, that is without any calculator or pen and paper.\nMany a time, it is quite easy if you break it down to smaller chunks. You should know how to find 1%, 10% and 50%. After that finding percentages becomes pretty easy.\n• To find 5%, find 10% and divide it by two\n• To find 11%, find 10%, then find 1%, then add both values\n• To find 15%, find 10%, then add 5%\n• To find 20%, find 10% and double it\n• To find 25%, find 50% and then halve it\n• To find 26%, find 25% as above, then find 1%, and then add these two values\n• To find 60%, find 50% and add 10%\n• To find 75%, find 50% and add 25%\n• To find 95%, find 5% and then deduct it from the number\nIf you know how to find these easy percentages, you can add, deduct and calculate percentages easily, specially if they are whole numbers. At least you should be able to find an approximate.\n\n### Scholarship programs to learn math\n\nHere are some of the top scholarships available to students who wish to learn math.\n\n### Examples to calculate \"What is the percent decrease from X to Y?\"\n\nWhatPercentCalculator.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9260501,"math_prob":0.96781516,"size":4804,"snap":"2021-31-2021-39","text_gpt3_token_len":1382,"char_repetition_ratio":0.31791666,"word_repetition_ratio":0.2047083,"special_character_ratio":0.3682348,"punctuation_ratio":0.067540325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99392855,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T10:34:35Z\",\"WARC-Record-ID\":\"<urn:uuid:7c0064c7-460d-48a2-b475-6c421d61d612>\",\"Content-Length\":\"17412\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef74df0b-f9f2-4759-aca2-dc750eed3345>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a3a5f19-9345-477d-b9a0-ae0f2b5a2f67>\",\"WARC-IP-Address\":\"104.21.81.186\",\"WARC-Target-URI\":\"https://whatpercentcalculator.com/what-is-percent-decrease-from-113-to-104\",\"WARC-Payload-Digest\":\"sha1:FIQWWQPPAECG5P7376R6SDFLENE3PUOZ\",\"WARC-Block-Digest\":\"sha1:RMNH75UUHHDDQKW3EF37SMTO4VZIQVES\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153709.26_warc_CC-MAIN-20210728092200-20210728122200-00172.warc.gz\"}"}