then the probability that the matrix $\begin{pmatrix} X_1 & X_2\\ X_3 & X_4 \end{pmatrix}$ is nonsingular?
The matrix is singular precisely when $X_1X_4=X_2X_3$: either $X_1X_4=X_2X_3=1$ ($4$ ways) or $X_1X_4=X_2X_3=-1$ ($4$ ways), so it is singular with probability $\frac 8{2^4}=\frac 12$, and hence nonsingular with probability $\frac 12$ as well.
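A quick brute-force check (a sketch in Python, assuming, as the count above does, that the four entries are independent and uniform on $\{-1,1\}$):

```python
from itertools import product

# Enumerate all 2^4 sign patterns for (X1, X2, X3, X4) and count how
# often the determinant X1*X4 - X2*X3 is nonzero.
total = nonsingular = 0
for x1, x2, x3, x4 in product([-1, 1], repeat=4):
    total += 1
    if x1 * x4 - x2 * x3 != 0:
        nonsingular += 1
print(nonsingular, "/", total)  # 8 / 16, i.e. probability 1/2
```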
Non-linear Integral equation
Let $F(x) = \int_{-\infty}^x(x-z)f(z)\,dz$. Then the equation becomes $$ F''(x)F'(x) = F^2(x) \, . $$ A family of solutions is given by $f(x) = ce^x$ for any $c \in \mathbb{R}$.
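As a sanity check, here is a small SymPy verification (a sketch): it computes $F$ for $f(z)=ce^z$ and confirms that the resulting $F$ satisfies $F''F'=F^2$.

```python
import sympy as sp

x, z, c = sp.symbols('x z c', real=True)

# F(x) = integral from -oo to x of (x - z) f(z) dz, with f(z) = c*e^z
F = sp.integrate((x - z) * c * sp.exp(z), (z, -sp.oo, x))
print(sp.simplify(F))  # c*exp(x)

# Check F'' * F' - F^2 = 0 for F = c*e^x (it vanishes for every c)
F = c * sp.exp(x)
print(sp.simplify(sp.diff(F, x, 2) * sp.diff(F, x) - F**2))  # 0
```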
Global approximation theorem in Sobolev space
Edit: One problem with Evans's proof is that he defines $U_i$ as $$U_i=\{x\in U\big|\mathrm{dist}(x,\partial U)>1/i\},$$ and $V_i=U_{i+3}\setminus\overline{U}_i$. But if, for example, $U=\mathbb R^n$, what are those $V$'s? So, Evans's proof will not work here. In your proof, you use the geometry of $\mathbb R^n$ to say that it is covered by your $U_i$'s. This proves that the theorem holds for $\mathbb R^n$, but might not apply in other cases.
Quotient by powers of prime in a principal ideal domain
Let $k\leq n$. For $x\in p^{k-1}M$, write $x=\underline{p^{k-1}r_x}$ for $r_x\in R$, where the underline means 'class modulo $a$'. Then if $x,y\in p^{k-1}M$ and $x-y\in p^k M$, say $x-y=\underline{p^{k}s}$ for some $s\in R$, you have $\underline{p^{k-1}(r_x-r_y-ps)}=0$. It follows that $a\mid p^{k-1}(r_x-r_y-ps)$; in particular, $p^{k}$ divides the thing on the right, and because $R$ is factorial and $k\leq n$, $p\mid r_x-r_y-ps$, so that $p\mid r_x-r_y$, which implies that $r_x\equiv r_y$ mod $(p)$. So, there is a well defined map of sets $p^{k-1}M/p^kM\to R/(p)$, which is actually an $R$-module map (just write it down). Of course, given $r+(p)\in R/(p)$, the class (mod $p^kM$) of $p^{k-1}\underline r\in p^{k-1}M$ is a preimage of $r+(p)$, by definition. Also, if the class (mod $p^kM$) of $p^{k-1}\underline r$ is sent to $0\in R/(p)$, then $p\mid r$, so our element is actually divisible by $p^k$, and thus is $0$ mod $p^kM$. So there is an isomorphism.

If $k\geq n$, we want to show that $p^kM=p^{k+1}M$. It suffices to show that for $m\in M$, $p^nm\in p^{n+1}M$. Let $b\in R$ be such that $p^nb=a$. By Bézout's theorem for principal rings we have $u,v\in R$ such that $up^n+vb=1$. Write $m=\underline r$ with $r\in R$; then, modulo $a$, we have $$p^nr\equiv 1\cdot p^nr\equiv up^{2n}r+vbp^nr\equiv up^{2n}r\in p^{n+1}M$$ because $n\geq 1$.
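A concrete numerical illustration (a sketch in Python, with the hypothetical choices $R=\mathbb{Z}$, $p=2$, $n=3$, $b=3$, so $a=24$ and $M=\mathbb{Z}/24$): the quotients $p^{k-1}M/p^kM$ have order $2$ (i.e. $\cong R/(p)$) for $k\le n$, and $p^kM=p^{k+1}M$ for $k\ge n$.

```python
# M = Z/a with a = p^n * b; here p = 2, n = 3, b = 3, so a = 24.
p, n, b = 2, 3, 3
a = p**n * b

def pkM(k):
    """The submodule p^k M = { p^k * r mod a : r in Z } of M = Z/a."""
    return {(p**k * r) % a for r in range(a)}

for k in range(1, 6):
    # |p^{k-1}M / p^kM| = |p^{k-1}M| / |p^kM|
    print(k, len(pkM(k - 1)) // len(pkM(k)))
# Prints 2 for k = 1, 2, 3 and then 1, so p^kM stabilizes at k = n.
```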
What steps are required to describe and graph an ellipsoid?
You can use the eigenvectors to construct an orthogonal matrix ($S$ in your notes) that will serve to diagonalize your matrix $G$. Once you have diagonalized, things get much easier. In fact, this was one of the earliest uses of eigenvectors (Cauchy). There is a good description of the details on this page.
Why does the count of the Z-transform of a sequence change?
When I wrote that post I used 0-based indexing for the input because that's just what programmers do by default, and I used 1-based indexing for the output because of the division-by-zero issue. I don't know if it's more common for the indices to start at 1 for the input. Basically, don't worry too much about it. It's just a convention, and it's not hard to convert between the various possibilities.
Calculate equal distance between lines and points
Let $P(X,Y)$ be a point on the parabola. By the definition of a parabola, you'll have $$\sqrt{(X-0)^2+(Y-6)^2}=|4-X|.$$ Then, you can get what you want by squaring and simplifying the equation above. Another way: since the apex is $(2,6)$, $$(y-6)^2=4\cdot (-2)\cdot (x-2).$$ This is because if the focus is $(p,0)$ and the directrix is $x=-p$, then the parabola can be represented as $$y^2=4px.$$
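A quick SymPy check of the squaring-and-simplifying step (a sketch): it confirms that the focus-directrix equation reduces to $(y-6)^2=-8(x-2)$.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Square both sides of sqrt(x^2 + (y-6)^2) = |4 - x|:
lhs_sq = x**2 + (y - 6)**2
rhs_sq = (4 - x)**2
# Their difference should be (y-6)^2 + 8*(x-2), i.e. the equation
# is equivalent to (y-6)^2 = -8*(x-2).
print(sp.expand(lhs_sq - rhs_sq - ((y - 6)**2 + 8*(x - 2))))  # 0
```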
Find $\phi(\log 2)$ for the integral equation $\phi(x)=1-2x-4x^2+\int_0^x[3+6(x-t)-4(x-t)^2]\phi(t)dt$ .
Differentiating three times gives that $$\phi’’’(x)=3\phi’’(x)+6\phi’(x)-8\phi(x).$$ Solve the characteristic equation and we find it has three distinct roots, so everything will be easy. The result I get is that $\phi(x)=e^x$ so $\phi(\log(2))=2$. Thanks to @lan for pointing out my stupid errors.
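One can also verify the claimed solution directly in SymPy (a sketch): plug $\phi(t)=e^t$ back into the original integral equation.

```python
import sympy as sp

x, t = sp.symbols('x t')
phi = sp.exp(t)  # candidate solution, written in the dummy variable t
rhs = 1 - 2*x - 4*x**2 \
      + sp.integrate((3 + 6*(x - t) - 4*(x - t)**2) * phi, (t, 0, x))
print(sp.simplify(rhs - sp.exp(x)))  # 0, so phi(x) = e^x solves the equation,
# and therefore phi(log 2) = e^{log 2} = 2.
```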
Clarification of term in graph theory - about star polygon graphs
Yes, your interpretation is consistent with the MathWorld entry. However, it should be noted that not everyone uses this interpretation. In particular, Grünbaum and others (like me) take the model regular $\{n/d\}$-gon to have its $k$-th vertex (starting at the $0$-th) at coordinates $$\left(\;\cos \frac{2\pi dk}{n}\;,\; \sin\frac{2\pi dk}{n} \;\right)$$ With this view, a $\{12/4\}$-gon, for instance, isn't a compound of four separate triangles in the MathWorld sense; it's a dodecagon that wraps around a single triangular cycle four times. See some related thoughts in this answer to the question "What is a Hexagon?".
Elegant way to prove that the space must be infinite dimensional?
Let $y_0$ be some nonzero element of $V$. Take a sequence of distinct elements of $S$, call it $x_i$. Let $e_i(x)=y_0$ if $x=x_i$ and $0$ otherwise. Prove that the $e_i$ for $i=1,\dots,n$ are linearly independent and yet do not span $F(S,V)$.
Find the integer closest to $\ln(2013)$
$2013$ is "very" close to $2048=2^{11}$. So how about $$2013=e^x=2^y$$ where $y$ is very nearly $11$. Then $x=y\ln 2$, and $\ln 2$ is famously close to $0.7$. Then $$\ln(2013)\approx 11\cdot 0.7=7.7,$$ giving an answer of $8$.
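A one-line numerical check (a sketch):

```python
import math

print(math.log(2013))         # 7.607...
print(round(math.log(2013)))  # 8
print(11 * math.log(2))       # 7.624..., the 2^11 approximation
```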
What do we need to guarantee that $[X, Y]_p$ is linearly independent with $X_p$ and $Y_p$?
You probably won't like this answer, but you need to know that there are no integral manifolds for the $2$-dimensional distribution spanned by $X$ and $Y$. For example, taking $$X=\frac{\partial}{\partial x} \quad\text{and}\quad Y=\frac{\partial}{\partial y}+x\frac{\partial}{\partial z},$$ we have $$[X,Y]=\frac{\partial}{\partial z}.$$ (Of course, if we leave off the $x\,\partial/\partial z$, we have linearly independent, but commuting, vector fields.) Note that your question was phrased just at one point $p$, but linear independence will be an open condition, so (assuming smoothness, of course) linear independence will hold in a neighborhood of $p$ if it holds at $p$.
Parametrizing a given line and equations
Hint: Is there a pair of values of $t$ in each of your parametrisations that gives the desired result? For instance, take your first attempt at the first question: when $t=0$, it matches the first point, but when $t=1$, you get $(-5,4)$, not $(-5,6)$. Hint 2: The slope has to match between the two points. That is, "rise over run". So if your parametrisation is $(at+b,ct+d)$, then rise over run is $\frac{ct_1-ct_0}{at_1-at_0}=\frac{c}a$. This must match the slope of the line between the two points, so for the first question, it's $\frac{6-2}{-5-3}=\frac4{-8}=-\frac12$.
Is there a formal definition for when the sum of two irrationals is rational: is it only the additive inverse plus some rational?
You would not look for a formal definition but for a theorem, and also let's not talk about consensus. (Namely, an argument in math is either right or wrong, regardless of how many people are "for" or "against" it.) So if $\alpha, \beta$ are irrational and $\alpha+\beta=q$ is rational, then $\beta=-\alpha+q$; i.e. indeed $\beta$ is the additive inverse of $\alpha$ plus a rational constant, as you have claimed. Then $\alpha-\beta=2\alpha-q$, which must then be irrational. (Otherwise you would add $q$ and conclude that $2\alpha$ is rational, and then you would halve it and conclude that $\alpha$ is rational - contradiction!)
Is there an equation to find out how after $\frac{6!}{6}$ to locate clockwise increase in numbers in sets of 2
Assuming that, as in the previous problem, you consider arrangements gained from rotational symmetry to be the same as one another (e.g. 12 34 56 is considered the same as 34 56 12 and as 56 12 34), let us look at the problem in the following way. We wish to partition the set $\{1,2,3,4,5,6\}$ into subsets $A$, $B$, $C$, each of size two, with $A,B,C$ considered distinct.

Pick which two numbers go to set $A$. This can be accomplished in $\binom{6}{2}=\frac{6!}{2!4!}=\frac{6\cdot 5}{2}=15$ ways.

Pick which two numbers from those remaining go to set $B$. This can be accomplished in $\binom{4}{2}=\frac{4!}{2!2!}=\frac{4\cdot 3}{2} = 6$ ways.

Pick which two numbers from those remaining go to set $C$. This can be accomplished in $\binom{2}{2}=\frac{2!}{2!0!}=1$ way.

For example, we might partition as $A=\{1,3\}, B = \{2,4\}, C=\{5,6\}$. For each of these sets there is a unique way to arrange its elements in increasing order. Now, we place the numbers from $A,B,C$ around the triangle (hexagon with paired sides?) and we notice that if we had done it as $ABC$ it is the "same" as though we had done it as $BCA$ or as $CAB$. This implies that we accidentally counted every situation three times, so we divide by that amount so that each situation is counted exactly once.

Applying the multiplication principle and dividing by symmetry, we get a final count of $\binom{6}{2}\binom{4}{2}\binom{2}{2}\cdot \frac{1}{3} = \frac{6!}{2!2!2!\cdot 3}=\frac{15\cdot 6}{3}=\frac{90}{3}=30$.

For a generalization, supposing we have $2n$ numbers to arrange, $\{1,2,3,\dots,2n\}$, onto an $n$-gon with two numbers on each side, each side having its numbers appearing in increasing order clockwise, where arrangements gained from rotations are considered "the same," there will be a total of $\binom{2n}{2,2,2,\dots,2}\frac{1}{n}=\frac{(2n)!}{(2!)^n n}=\frac{(2n)!}{2^n n}$ arrangements.

To output every possible arrangement, the following pseudo-code should suffice (note that $b_1$ must also be skipped when it collides with $a_2$):

Define $a_1,a_2,b_1,b_2,c_1,c_2$ as integers
Set $a_1=1$ (this will allow us to avoid having to remove the excess that comes from doublecounting scenarios)
For $a_2=2..6$
..For $b_1=2..6$
....If $b_1=a_2$, skip
....For $b_2=(b_1+1)..6$
......If $b_2=a_2$, skip
......For $c_1=2..6$
........If $c_1=b_2, c_1=b_1,$ or $c_1=a_2$, skip
........For $c_2=(c_1+1)..6$
..........If $c_2=b_2, c_2=b_1,$ or $c_2=a_2$, skip
..........Else, output $a_1a_2~~b_1b_2~~c_1c_2$

By skip, I mean end the current iteration of the loop and skip ahead to the next value.
Find the maximum value of the function
Hint. Assume $x>0$. Then you get $$ f'(x)=\frac{-2x\times (1+2 \ln x)}{x^{2 x^2}} . $$ Can you take it from here?
What is the definition of a sample path of Brownian motion?
A possible choice of space $\Omega$ to define Brownian motion is $\Omega=C(\mathbb R_+,\mathbb R)$; then the Brownian motion $(B_t)_{t\in\mathbb R_+}$ is simply the coordinate process, that is, for every $t$ in $\mathbb R_+$ and $\omega$ in $\Omega$, $B_t(\omega)=\omega(t)$. In this construction, sample paths are the elements $\omega$ of $\Omega$.

But, as is usual in probability, one may prefer not to specify $\Omega$. Then $\Omega$ can be any space large enough for a family $(X_t)_{t\in\mathbb R_+}$ of random variables with the prescribed properties to exist on $\Omega$. A sample path is then, for some $\omega$ in $\Omega$, the function $X(\omega):\mathbb R_+\to\mathbb R$, $t\mapsto X_t(\omega)$.

Lévy's construction by dichotomy, which you recall, might then correspond to $\Omega=S^\mathbb N$, the product of a countable number of copies of a probability space $(S,\mathcal S,Q)$ large enough for one standard normal random variable $\xi$ to be defined on it. A Brownian motion $(X_t)_{t\in\mathbb R_+}$ on $\Omega$ can then be defined as Lévy indicated, using the i.i.d. copies of $\xi$ defined on each factor of $\Omega$. Thus, every $\omega$ in $\Omega$ is $\omega=(s_n)_{n\in\mathbb N}$ for some $s_n$ in $S$, and the random variables $X_1$ and $X_{1/2}$, say, are defined by $X_{1}(\omega)=\xi(s_1)$ and $X_{1/2}(\omega)=\frac12\xi(s_1)+\frac12\xi(s_2)$.

Edit: Recall that the $n$th approximation $X^{(n)}$ of the Brownian motion $X$ on $[0,1]$ is piecewise linear on each interval $[(k-1)/2^n,k/2^n]$ with $1\leqslant k\leqslant2^n$. Thus, the $0$th approximation is such that $X^{(0)}_t(\omega)=t\xi(s_1)$ for every $t$ in $[0,1]$; after that, $X^{(n+1)}_t=X^{(n)}_t$ at every $t=k/2^n$, and $2X^{(n+1)}_t=X^{(n)}_{k/2^n}+X^{(n)}_{(k+1)/2^n}+\xi(s_*)/2^{n/2}$ at every $t=(2k+1)/2^{n+1}$, where $*$ is the first index $i$ such that $\xi(s_i)$ has not been used yet.
Fourier series of cotangent
Expand the exponential form of $\cot x$: \begin{align}\cot x=\frac{i(e^{ix}+e^{-ix})}{e^{ix}-e^{-ix}}&=i+\frac{2ie^{-ix}}{e^{ix}-e^{-ix}}=i+2ie^{-2ix}(1+e^{-2ix}+e^{-4ix}+\cdots)\\[1ex] &=i+2i\sum_{k\ge1}(\cos2kx-i\sin2kx) \end{align} (The geometric series converges for $\operatorname{Im} x<0$, since then $|e^{-2ix}|<1$; for real $x$ the expansion is to be understood in the Abel sense.)
Determine all values of $p,q\in\mathbb{N}$ such that: $2^{4}5^{3}=(p+1)(2q+p)$
You are right that one of $p+1$ and $2q+p$ is odd and the other even. Furthermore, $2q+p>p+1$. So $(p+1,2q+p)$ is one of $(1,16\times 125),(5,16\times 25),(25,16\times 5),(16,125)$. The first of these gives $p=0$, which is not a natural number. Otherwise we have $p=4,q=198$ or $p=24,q=28$ or $p=15,q=55$.
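A brute-force confirmation (a sketch in Python; it searches all factorizations of $2^45^3=2000$ directly):

```python
N = 2**4 * 5**3  # 2000

solutions = []
for p in range(1, N):            # p, q taken to be natural numbers >= 1
    if N % (p + 1) == 0:
        rest = N // (p + 1) - p  # rest = 2q
        if rest > 0 and rest % 2 == 0:
            solutions.append((p, rest // 2))
print(solutions)  # [(4, 198), (15, 55), (24, 28)]
```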
Closed form of a generating function $\sum _{n=1}^\infty x^{n^2}$
Using parity to extend the summation to all integers, one recognizes in the resulting expression the Jacobi theta function $\vartheta_3(z,q)=\sum_{n\in\mathbb Z}q^{n^2} e^{2ni z}$. More precisely, we have $$\sum_{n=1}^{\infty}x^{n^2}=\frac{\vartheta_3(0,x)-1}{2}.$$
Does almost everywhere equality mean equality on the quotient space?
You are right, except that this also works for infinite $p$. The $L^p$ norm is usually defined on the space of $p$-th power Lebesgue-integrable functions. If you want such a norm to be defined without using Lebesgue measure, you could work on the space $C^1[0,1]$; in that case, the function in your example would have positive norm. When Lebesgue measure is used, sets of measure zero no longer matter, and you are right that the norms are well defined on equivalence classes, as you described. For $L^\infty$ with Lebesgue integration, the norm is the essential supremum, $\operatorname{ess\,sup} f=\inf\{b\in\mathbb{R}\mid m(f^{-1}((b,\infty)))=0\}$, which ignores the null-set differences you mentioned.
If $A$ is idempotent and symmetric, then $A=BB^T$ where $B^TB=I$.
That is not true. $A=0$ is idempotent, as $0^2=0$. If $A=BB^T$ and $B^TB=I$, then $$I=B^TBB^TB=B^T(BB^T)B=B^TAB=B^T0B=0.$$ Try also, $$ A=\left(\begin{matrix} 1&0\\0&0\end{matrix}\right). $$ Then $A^2=A$. If $A=BB^T$ and $B^TB=I$, then $$I=B^TBB^TB=B^T(BB^T)B=B^TAB,$$ which implies that $A$ is nonsingular - contradiction. However, assuming that $A$ is non-singular, then $A^2=A$ implies that $A(A-I)=0$, and in turn that $$ A-I=A^{-1}A(A-I)=A^{-1}0=0, $$ and hence that $A=I$.
Fourier transform of the n-th derivative (without induction)
$$ \begin{align*} \frac{\mathrm d^n}{\mathrm dx^n}f(x)&=\frac{\mathrm d^n}{\mathrm dx^n}\left(\frac{1}{2\pi}\int_{-\infty}^\infty \hat f(\omega)\,\mathrm e^{i\omega x}\,\mathrm d\omega\right)\\ &=\frac{1}{2\pi}\int_{-\infty}^\infty \hat f(\omega)\frac{\mathrm d^n}{\mathrm dx^n}\,\left(\mathrm e^{i\omega x}\right)\,\mathrm d\omega\\ &=\frac{1}{2\pi}\int_{-\infty}^\infty(i\omega)^n\hat f(\omega)\,\mathrm e^{i\omega x}\,\mathrm d\omega\\ &=\mathcal F^{-1}\{(i\omega)^n\hat f(\omega)\} \end{align*} $$ and then $\widehat{f^{(n)}}(\omega)=(i \omega)^{n} \hat{f}(\omega)$. (This assumes $f$ is regular enough that differentiating under the integral sign is justified, e.g. $\omega^n\hat f(\omega)$ integrable.)
Existence of solution for a particular linear, non-strictly hyperbolic system of PDEs.
Let's think in terms of characteristics, and assume a parametrization $(x(t), t)$ of the coordinates such that $x'(t) = f(t)$. Using the chain rule for the time-derivative of $u(x(t), t)$, we find $$ \frac{\text d}{\text d t} u = u_t + f(t) u_x = h \, . $$ Local existence and uniqueness results follow from the study of the ODE system $$ \begin{aligned} x'(t) &= f(t)\\ u'(t) &= h\big(t, u(t), x(t)\big) \end{aligned} $$ Here, we have used the fact that $A = f\, I_2$ is proportional to the identity matrix, as noted by @Calvin Khor in the comments section. It is one particular example where the method of characteristics applies to PDE systems.
Questions about how to input both odds and payoff into expected value function
The explicit expectation formulas you are looking for are $(q_{\mathrm{win}}p_{\mathrm{win}}+q_{\mathrm{loss}}p_{\mathrm{loss}})^n S_0={\mathrm{E}}[S_n]$, where upon winning or losing the current amount is multiplied by $q_{\mathrm{win}}$ or $q_{\mathrm{loss}}$ respectively, and $n(g_{\mathrm{win}}p_{\mathrm{win}}-g_{\mathrm{loss}}p_{\mathrm{loss}})+S_0={\mathrm{E}}[S_n]$, where the win and loss amounts are fixed, $g_{\mathrm{win}}$ and $g_{\mathrm{loss}}$ respectively. Below I will show three ways to derive the first formula, and at the very bottom I have derived the second formula.

To begin, let us analyze how the random variables are related to the constants given by the game rules in the percentage-based payoff case:

$p_{\mathrm{win}}$ probability to win a play (fixed $0<p_{\mathrm{win}}< 1$), let $p_{\mathrm{loss}}=1-p_{\mathrm{win}}$;
$g_{\mathrm{win}}$ percentage won per win (fixed $g_{\mathrm{win}}\geq 0$), let $q_{\mathrm{win}}=1+g_{\mathrm{win}}$;
$g_{\mathrm{loss}}$ percentage lost per loss (fixed $g_{\mathrm{loss}}\geq 0$), let $q_{\mathrm{loss}}=1-g_{\mathrm{loss}}$;
$S_0$ the amount the gambler begins the game with;
$S_n$ the amount he has after the $n$th play; $S_n$ is a different discrete random variable for each $n$.

Then $S_1$ is the amount the gambler has after the 1st play. Given the rules, there are 2 possible values for $S_1$:

$S_1=S_0(1+g_{\mathrm{win}})=q_{\mathrm{win}}S_0$, if the gambler wins the $1$st play. This has a probability $p_{\mathrm{win}}$ to occur.
$S_1=S_0(1-g_{\mathrm{loss}})=q_{\mathrm{loss}}S_0$, if the gambler loses the $1$st play. This has a probability $p_{\mathrm{loss}}$ to occur.

Now the expected value of any discrete random variable such as $S_1$ is by definition the sum of the products of each possible value with its probability of occurrence, that is, ${\mathrm{E}}[S_1]=p_{\mathrm{win}}q_{\mathrm{win}}S_0+p_{\mathrm{loss}}q_{\mathrm{loss}}S_0=S_0(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})$. Note ${\mathrm{E}}[S_1]={\mathrm{E}}[S_1|S_0]={\mathrm{E}}[S_1|S_0=\bar{S_0}]$, the conditional expectation for $S_1$ on condition the gambler started with $S_0=\bar{S_0}$, where $\bar{S_0}$ is any fixed constant.

For $S_n$ we can repeat the same considerations:

$S_n=S_{n-1}(1+g_{\mathrm{win}})=q_{\mathrm{win}}S_{n-1}$, if the gambler wins the $n$th play. This has a probability $p_{\mathrm{win}}$ to occur.
$S_n=S_{n-1}(1-g_{\mathrm{loss}})=q_{\mathrm{loss}}S_{n-1}$, if the gambler loses the $n$th play. This has a probability $p_{\mathrm{loss}}$ to occur.

Note $S_n=S_n(S_{n-1})$ is a recursive formula and $S_n$ is a function of the previous random variable $S_{n-1}$.
Because $S_n$ depends on $S_{n-1}$, knowing the value of $S_{n-1}$ we can find the conditional expectation of $S_n$ on condition $S_{n-1}=\bar{S_{n-1}}$, where $\bar{S_{n-1}}$ is any fixed constant: ${\mathrm{E}}[S_n|S_{n-1}]=p_{\mathrm{win}}q_{\mathrm{win}}S_{n-1}+p_{\mathrm{loss}}q_{\mathrm{loss}}S_{n-1}=S_{n-1}(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})$, so that ${\mathrm{E}}[S_n|S_{n-1}=\bar{S_{n-1}}]=\bar{S_{n-1}}(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})$.

Note ${\mathrm{E}}[S_n|S_{n-1}] \neq {\mathrm{E}}[S_n]\quad \forall n \neq 1$: ${\mathrm{E}}[S_n|S_{n-1}]$ is what the gambler is expected to have after $1$ more play if he now has $S_{n-1}=\bar{S_{n-1}}$, having already played $n-1$ times, so it is a function of the random variable $S_{n-1}$; ${\mathrm{E}}[S_n]$ is what he is expected to have after $n$ plays if he started with $S_0$ - the total expectation for $S_n$, independent of all previous random values $\{S_j\}_{j=1}^{n-1}$.

By the law of total expectation we can express ${\mathrm{E}}[S_n]={\mathrm{E}}[{\mathrm{E}}[S_n|S_{n-1}]]=\\ ={\mathrm{E}}[S_{n-1}(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})]=\\ =(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}}){\mathrm{E}}[S_{n-1}]=\\ =(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}}){\mathrm{E}}[{\mathrm{E}}[S_{n-1}|S_{n-2}]]=\\ =(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}}){\mathrm{E}}[S_{n-2}(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})]=\\ =(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})^2{\mathrm{E}}[S_{n-2}]=$ .../iteratively using the law of total expectation/... $=(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})^{n-1}{\mathrm{E}}[S_1]=\\ =(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})^{n-1}{\mathrm{E}}[{\mathrm{E}}[S_1|S_0]]=\\ =(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})^{n-1}{\mathrm{E}}[S_0(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})]=\\ =(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})^n{\mathrm{E}}[S_0]=$ .../$S_0$ is constant so ${\mathrm{E}}[S_0]=S_0$, and also ${\mathrm{E}}[S_1]={\mathrm{E}}[S_1|S_0]$/... $=(p_{\mathrm{win}}q_{\mathrm{win}}+p_{\mathrm{loss}}q_{\mathrm{loss}})^n S_0={\mathrm{E}}[S_n]$, which is a good explicit form of the expected value function, dependent only on $n$.

Also, there is an explicit expression for $S_n$: $S_n(W)=q_{\mathrm{win}}^W q_{\mathrm{loss}}^L S_0=q_{\mathrm{win}}^W q_{\mathrm{loss}}^{n-W} S_0={\mathrm{E}}[S_n|W]$, where $W$ is the number of wins from the $1$st to the $n$th play (included) and $n-W=L$ is the number of losses from the $1$st to the $n$th play (included). $W$ is another discrete random variable, the number of successes with probability $p_{\mathrm{win}}$ out of $n$ independent identical experiments (binomial distribution), i.e. $W \sim {\mathrm{Bi}}(n,p_\mathrm{win})$, so $\mathrm{P}(W=k)={n \choose k} p_{\mathrm{win}}^k (1-p_\mathrm{win})^{n-k}$ and $\mathrm{E}[W]=np_\mathrm{win}$.
Thus we can express the explicit probability for $S_n$ to be any particular number $s(k)$, where $s(k)=q_{\mathrm{win}}^k q_{\mathrm{loss}}^{n-k} S_0$, depending on the event $W=k$, where $k$ is any fixed constant: $\mathrm{P}(S_n=s(k))=\mathrm{P}(W=k)={n \choose k} p_{\mathrm{win}}^k (1-p_\mathrm{win})^{n-k}=\mathrm{P}(S_n=q_{\mathrm{win}}^k q_{\mathrm{loss}}^{n-k}S_0)$.

Then we can also find $\mathrm{E}[S_n]$ by summing the products of the possible values of $S_n$ with their respective probabilities: $\mathrm{E}[S_n]=\sum_{k=0}^{n}s(k){n \choose k} p_{\mathrm{win}}^k (1-p_\mathrm{win})^{n-k}=\sum_{k=0}^{n} (q_{\mathrm{win}}^k q_{\mathrm{loss}}^{n-k}S_0) {n \choose k} p_{\mathrm{win}}^k (1-p_\mathrm{win})^{n-k}$, or again by the law of total expectation, $\mathrm{E}[S_n]=\mathrm{E}[\mathrm{E}[S_n|W]]=\mathrm{E}[q_{\mathrm{win}}^W q_{\mathrm{loss}}^{n-W} S_0]=S_0\mathrm{E}[q_{\mathrm{win}}^W q_{\mathrm{loss}}^{n-W}]$; however, these computations are not as simple as the first form of $\mathrm{E}[S_n]$ we found.

Now for the case where the win and loss amounts are fixed and not percentages, it is even simpler to define $S_n$: let $g_{\mathrm{win}}$ be the amount won per win and $g_{\mathrm{loss}}$ the amount lost per loss; then $S_n=S_0+Wg_{\mathrm{win}}-Lg_{\mathrm{loss}}=S_0+Wg_{\mathrm{win}}- (n-W)g_{\mathrm{loss}}=S_0+W(g_{\mathrm{win}}+g_{\mathrm{loss}})-ng_{\mathrm{loss}}$. Then for the expected value we have $\mathrm{E}[S_n]=\mathrm{E}[S_0+W(g_{\mathrm{win}}+g_{\mathrm{loss}})-ng_{\mathrm{loss}}]=S_0-ng_{\mathrm{loss}}+ (g_{\mathrm{win}}+g_{\mathrm{loss}})\mathrm{E}[W]=S_0-ng_{\mathrm{loss}}+(g_{\mathrm{win}}+g_{\mathrm{loss}})np_{\mathrm{win}}=S_0+n((g_{\mathrm{win}}+g_{\mathrm{loss}})p_{\mathrm{win}}-g_{\mathrm{loss}})=S_0+n(g_{\mathrm{win}}p_{\mathrm{win}}-g_{\mathrm{loss}}(1-p_{\mathrm{win}}))=S_0+n(g_{\mathrm{win}}p_{\mathrm{win}}-g_{\mathrm{loss}}p_{\mathrm{loss}})$.

Finally, by using the explicit formulas for the expectations, you can figure out the exact amount the gambler is expected to have after playing $n$ times. Counterintuitively (but evident from the formulas), you will find that $\mathrm{E}[S_n]=S_0$ whenever $g_{\mathrm{win}}=g_{\mathrm{loss}}$ and $p_{\mathrm{win}}=p_{\mathrm{loss}}=\frac{1}{2}$, which does mean that the gambler is not expected to either lose or get rich by playing indefinitely.
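Both closed forms are easy to sanity-check numerically (a sketch; the parameter values below are made up for illustration):

```python
import random

random.seed(0)
p_win, g_win, g_loss, S0, n, trials = 0.5, 0.1, 0.05, 100.0, 20, 100_000

def play(multiplicative):
    s = S0
    for _ in range(n):
        if random.random() < p_win:
            s = s * (1 + g_win) if multiplicative else s + g_win
        else:
            s = s * (1 - g_loss) if multiplicative else s - g_loss
    return s

# Percentage payoff: E[S_n] = (p_win*q_win + p_loss*q_loss)^n * S0
q_win, q_loss = 1 + g_win, 1 - g_loss
print((p_win*q_win + (1 - p_win)*q_loss)**n * S0)       # exact formula
print(sum(play(True) for _ in range(trials)) / trials)  # simulation, close to it

# Fixed payoff: E[S_n] = S0 + n*(g_win*p_win - g_loss*p_loss)
print(S0 + n*(g_win*p_win - g_loss*(1 - p_win)))        # exact formula
print(sum(play(False) for _ in range(trials)) / trials) # simulation, close to it
```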
Finding slope of a curve by finding the limits of secant slopes
Your slope calculation is off a bit, you should have $${[(3+h)^2-4(3+h)-5]-[3^2-4(3)-5]\over (3+h)-3}$$ $$={2h+h^2\over h}$$ This is because $Q$ is located at $(3+h,(3+h)^2-4(3+h)-5)$. Then yes, take the limit as $h\to 0$ and you have the slope.
Differentiating an infinite sum
Interchanging the order of differentiation and infinite summation is not valid in general, but it can be validated under certain circumstances. For example, assume the power series $\sum a_n x^n$ has the radius of convergence $R > 0$. Then on the open interval $(-R, R)$, you can freely interchange the order of differentiation/integration and infinite summation: on $|x| < R$, $\displaystyle \frac{d}{dx} \sum_{n=0}^{\infty} a_n x^n = \sum_{n=0}^{\infty} a_n \frac{dx^n}{dx} = \sum_{n=1}^{\infty} n a_n x^{n-1}. $ $\displaystyle \int_{0}^{x} \sum_{n=0}^{\infty} a_n t^n \, dt = \sum_{n=0}^{\infty} a_n \int_{0}^{x} t^{n} \, dt = \sum_{n=0}^{\infty} \frac{a_n}{n+1} x^{n+1}. $ Note that the radius of convergence of the geometric series $$ a + ax + ax^2 + ax^3 + \cdots $$ is exactly $R = 1$ unless $a = 0$. In particular, your operation is totally legal.
What is the probability that the total score after throwing darts is divisible by $3$.
Suppose the probability that a single throw scores $1$ modulo $3$ is $p$; then the probability of scoring $2$ modulo $3$ is $1-p$. Hence the answer should be $p^3+(1-p)^3$ (the sum of three scores is divisible by $3$ exactly when all three are $\equiv1$ or all three are $\equiv2\pmod 3$). The radii are $r_1=1, r_2=2, r_3=3$ respectively, and the probability is proportional to the area, so $$1-p=\frac{\pi r_2^2 - \pi r_1^2}{\pi r_3^2}=\frac{r_2^2-r_1^2}{r_3^2}.$$ Since $r_2=2r_1$ and $r_3=3r_1$, $$1-p=\frac{4-1}{9}=\frac13.$$ While the numerical value coincides, you should make explicit that your probability is computed based on the areas.
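With exact arithmetic (a sketch, following the area computation above):

```python
from fractions import Fraction

# P(score = 2 mod 3) = (area of middle ring) / (total area) = 1/3
p = 1 - Fraction(2**2 - 1**2, 3**2)  # P(score = 1 mod 3) = 2/3
print(p**3 + (1 - p)**3)             # 1/3
```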
What is the unit of the FFT output?
It's still a voltage. If you do a continuous Fourier transform, you go from signal to signal integrated over time, which is signal per frequency, but in a discrete Fourier transform you're just summing discrete voltages with coefficients, and the result is still a voltage. Of course if you want you can multiply it by the time interval between sample points to get a voltage per frequency unit.
Isomorphism in cohomology is an isomorphism in homology
This is a great question! I know no references for the following facts and I am pretty sure one can make more general statements, but these are the ones that I have seen used in practice.

1) Yes, at least when $R$ is a ring. Indeed, $H_{*}(X, R)$ can be defined as the homology of $C_{*}(X, R)$, the chain complex which in degree $n$ is the free $R$-module generated by the singular $n$-simplices of $X$. This is a bounded below projective chain complex, and thus if the map induced by $f$ is a quasi-isomorphism, it must already be a homotopy equivalence. Thus, the map is still a quasi-isomorphism after applying $Hom_{R}(-, R)$, which is one way to compute the cohomology of a space.

For 2), 3), the following standard trick simplifies the analysis. By replacing $Y$ by a mapping cylinder if needed, we may assume that $f: X \rightarrow Y$ is an inclusion. Thus, for example, to prove 2) and 3) it's enough to show that when $H^{*}(Y, X, \mathbb{Z}) = 0$, then also $H_{*}(Y, X, \mathbb{Z}) = 0$ (by the relevant long exact sequences). This is true under the added assumption that the homology of $(Y, X)$ is finitely generated, for which it is enough that both $Y, X$ have finitely generated homology. Indeed, any non-trivial infinite cyclic summand in $H_{n}(Y, X, \mathbb{Z})$ would appear in $H^{n}(Y, X, \mathbb{Z})$ (since $Hom(H_{n}(Y, X, \mathbb{Z}), \mathbb{Z})$ is a quotient of it by universal coefficients). On the other hand, any finite cyclic summand would appear as $0 \neq Ext^{1}(\mathbb{Z}_{k}, \mathbb{Z}) \subseteq Ext^{1}(H_{n}(Y, X, \mathbb{Z}), \mathbb{Z}) \subseteq H^{n+1}(Y, X, \mathbb{Z})$, again by universal coefficients.

[Observe that above I have used universal coefficients for the relative (co)homology of $(Y, X)$. This is possible as the theorem is in fact a statement of homological algebra about bounded below free $\mathbb{Z}$-complexes (of which the relative singular complex of $(Y, X)$ is an example).]
Free product of the trivial group with another group
Since you tagged it algebraic-topology, perhaps you are learning about free products in a topology course? In that case, if $X$ is a space with fundamental group $G$, then $1*G$ is the fundamental group of the space obtained by gluing a point to a point of $X$, which is just $X$ again (up to homotopy equivalence), so $1*G\cong G$.
Inverse trigonometric Conversion
Firstly, $$\frac 12\arctan2-2\arctan\frac 12=\frac 12\left(\frac{\pi}{2}-\arctan \frac 12\right)-2\arctan\frac 12=\frac{\pi}{4}-\frac 52\arctan \frac 12$$ Secondly, you should be able to show in a similar way that $$\frac 13\arctan3-3\arctan \frac 13-\frac 56\arctan 3=-\frac{\pi}{4}-\frac 52\arctan\frac 13$$ It then remains to obtain the result by using $$\arctan a+\arctan b=\arctan\left(\frac{a+b}{1-ab}\right)$$
Finding the largest angle of a triangle
Suppose $x>0$. As $x^{2}-1$ is a side of the triangle, $x^{2}-1>0$, therefore $x>1$. Note that $(x^{2}+x+1)-(x^{2}-1)=x+2>0$, so $x^{2}+x+1>x^{2}-1$. On the other hand, as $x>1$, $(x^{2}+x+1)-(2x+1)=x^{2}-x>0$. Therefore $x^{2}+x+1$ is the largest side of the triangle, and the largest angle is the one opposite the side $x^{2}+x+1$. By the law of cosines, $$(x^{2}+x+1)^{2}=(2x+1)^{2}+(x^{2}-1)^{2}-2(2x+1)(x^{2}-1)\cos\theta.$$ Since $(x^{2}+x+1)^{2}-(2x+1)^{2}-(x^{2}-1)^{2}=(2x+1)(x^{2}-1)$, this gives $\cos\theta=-\frac{1}{2}$, and hence $\theta=120^{\circ}$.
Finding a set on which a group acts on
A quick remark before we begin: this seems like an exercise which is meant to reinforce the statement and proof of Cayley's Theorem (which says that you can realise every group as a group of permutations). So you should look up this theorem and its proof, and try to understand how it connects to both your question here and to my answer.

Let's begin: A group $G$ is a set $X$ with an operation $\cdot$, and we often write $G=(X, \cdot)$. As such, a group always acts on its underlying set via left multiplication, so the action of $g\in G$ on $G$ is defined as: $$x\mapsto g\cdot x.$$ We often shorten "$G$ acts on its underlying set" to "$G$ acts on itself". Note that $G$ also acts on itself via right multiplication, so $x\mapsto x\cdot g$, which is completely analogous to left multiplication, and via conjugation, so $x\mapsto g^{-1}xg$, which is completely different.

So to answer your questions:

I am asked to find a set $X$ where this group acts upon non trivially.

Your group $G$ acts on the set $X=\{A, B, \ldots, P\}$ via left multiplication, as the set $X$ is the set underlying your group $G$. As $G$ is not the trivial group, the action is non-trivial. Stop here and you will get full marks*!

Does it make sense to just replace the letters by numbers in the alphabet to create my permutation set?

Yes, this makes sense, but it is not necessary to answer the question. However, if you want to understand your group $G$ as a group of permutations of some numbers (and in the comments you seem to want to do this), then this idea is the way to go. So, as you did in your question, replace $X$ with $Y=\{1, \ldots, 16\}$ using the bijection $$1\leftrightarrow A, 2\leftrightarrow B, \ldots, 16\leftrightarrow P.$$ You can then understand the action of $G$ on $Y$ by applying this bijection to the Cayley table which you gave in the question. So, for example, $B\cdot1=2$ and $B\cdot2=1$.

Thus I would define $X$ as $X=\{(1,2)(3,4)(5,6)(7,8)(9,10)(11,12)(13,14)(15,16)\}$, or am I completely off track?

You are off track, but not completely. The permutation $(1, 2)(3, 4)(5, 6)(7, 8)(9, 10)(11, 12)(13, 14)(15, 16)$ corresponds to the action of $B$ on the set $Y$. That is, $B=(1, 2)(3, 4)(5, 6)(7, 8)(9, 10)(11, 12)(13, 14)(15, 16)$. But it's almost as if you have gone too far: you have already found the set which your group acts on; you don't need to find the action explicitly!

*Although possibly expand on why the action is non-trivial.
Are the $||x||_p$ norms dense in the set of all norms of $\mathbb{R}^n$?
Hint. Consider the set obtained from $[-1,1]\times[-1,1]$ by cutting off two opposite corners with the lines $y=1-x$ and $y=-1-x$. Can it be approximated by $l_2^p$ balls?
Co-transitivity of the constructive order relation
As a rule of thumb, in constructive mathematics often the way you prove a formula involving a disjunction is to start with an axiom which already has a disjunction. In this case notice that the only axioms mentioning disjunctions are 5 and 6. Also note that axiom 5 is kind of similar to what we have to prove, so that's the one we're going to use. It just needs a bit of adjusting. First of all see that since $a < b$ we can apply axiom 3 to show that $0 < b - a$, so in particular we can divide by $b - a$. Let $x' = \frac{x - a}{b - a}$. Now applying axiom 5 we know that either $0 < x'$ or $x' < 1$. Suppose that $0 < x'$. Then by axiom 4 we can multiply through by $b - a$ (recall $0 < b - a$) to get $0 < x - a$, and then get $a < x$ by axiom 3. Now suppose that $x' < 1$. Then multiplying through by $b - a$ again this time gives $x - a < b - a$. Then apply axiom 3 to get $x < b$. But now we've shown that either $a < x$ or $x < b$ as required.
Laplace transform of two arbitrary functions
It seems correct. But note that we don't necessarily have $$\mathcal{L}(f(t)g(t))=\mathcal{L}(f(t))\times\mathcal{L}(g(t)) $$
Total probability on a vector of Bernoulli random variables
Yes, that is the law of total probability, since $u_k$ can take only two values. If you treat the sample space as $\sigma(u)$, any marginal event is contained in this cylinder algebra.
Equivalence of uniform convergence in metric spaces!
For 1), a hint first, then a solution. Try to negate 2) and see what comes from that. Solution: Suppose $f_n \not\rightrightarrows f$ on some compact $K \subseteq M$. This means that there is some $\epsilon > 0$ such that for every $n \in \mathbb{N}$ there are some $n' \geq n$ and some $x \in K$ with $$ |f_{n'}(x) - f(x)| \geq \epsilon. $$ Repeating this process for each $n$ and taking some $x$ that witnesses the claim, we get a subsequence of functions $(f_{n_k})_{k \in \mathbb{N}}$ and points $(x_k)_{k \in \mathbb{N}}$ such that $$ |f_{n_k}(x_k) - f(x_k) | \geq \epsilon. $$ Now, because $(x_k)$ is a sequence in a compact space, we can extract a subsequence $(x_{k_j})_{j}$ that converges to some $x \in K$. But this means that (fixing up the indices in the final sequence, say by inserting $x$ at the missing indices) $f_n(x_n) \not\to f(x)$, which contradicts 1). Hence $f_n$ must converge uniformly to $f$ on $K$.
Is the cofinality function monotonic?
No, of course not. If you already know that not all cardinals are regular, then it suffices to show this. Simply take $\kappa$ to be a singular cardinal and $\lambda=\operatorname{cf}(\kappa)^+<\kappa$; then $\lambda$ is regular, but $\kappa$ has a strictly smaller cofinality despite being larger. For example, $\aleph_1<\aleph_\omega$ but $\operatorname{cf}(\aleph_1)=\aleph_1>\aleph_0=\operatorname{cf}(\aleph_\omega)$. And to your question: no, there are no "easy" cases which are not tantamount to stating "the cofinality function is monotonic in this case".
Unital rings within matrices
Since $R$ is a unital ring, there are two obvious elements around to play with: the additive identity $0$ and the multiplicative identity $1$. Given that, it may be helpful to note that there are two distinguished elements of $R[t]$: $$I=\begin{bmatrix}1&0\\0&1\end{bmatrix}\ (w=1,z=0),\qquad J=\begin{bmatrix}0&1\\-1&-1\end{bmatrix}\ (w=0,z=1).$$ Moreover, we can write any element $A\in R[t]$ as $A=wI+zJ$ for some $w,z\in R$ with the usual matrix operations, and conversely by definition any element of $R[t]$ has this form. Can you see how to use this decomposition to prove the statements @anon mentioned? HINTS BELOW: $$wI+w'I=(w+w')I,\qquad IJ=JI=J,\qquad J^2=-J-I.$$
Why is a braided left autonomous category also right autonomous?
The question has been answered on mathoverflow.net. The equality I forgot to use was that $c_{I,A} = 1_A$.
Partial Fractions Integration Question
Hint: $$\frac{x^5+x-1}{x^3 +1} = \frac{(x^3+x^2-1)(x^2-x+1)}{(x+1)(x^2-x+1)}$$
Simplest way to determine if two 3D boxes intersect?
Assuming that the boxes are axis-aligned (because otherwise they're underspecified), let's say that the corners of the first are $$ P_1 = (x_1, y_1, z_1)\\ Q_1 = (X_1, Y_1, Z_1) $$ with $x_1 < X_1, y_1 < Y_1, z_1 < Z_1$, and for the second $$ P_2 = (x_2, y_2, z_2)\\ Q_2 = (X_2, Y_2, Z_2). $$ Then the way to check for an overlap is this: Compare the intervals $[x_1,X_1]$ and $[x_2, X_2]$, and if they don't overlap, there's no intersection. Do the same for the $y$ intervals, and the $z$ intervals. If all three interval-pairs DO overlap, then there IS an intersection. What do I mean by "overlap"? I mean that there's a number $a$ with $x_1 \le a \le X_1$ and $x_2 \le a \le X_2$, for instance. You can check this easily by checking just endpoints: the intervals overlap if any one of these four conditions is true: $$ x_1 \le x_2 \le X_1 \\ x_1 \le X_2 \le X_1 \\ x_2 \le x_1 \le X_2 \\ x_2 \le X_1 \le X_2 $$ If none of those four is true, then the intervals do not overlap. For your particular case, $$ P_1 = (961.46, 215.15, 1465.44) \\ Q_1 = (970.02, 214.93, 1481.77) $$ and $$ P_2 = (1093.52, -499.50, 896.11)\\ Q_2 = (1093.12, -505.49, 878.68), $$ we see that the x-intervals don't overlap, and hence the boxes don't overlap. We don't even need to look at the $y$s and $z$s (although it's particularly simple, because the $y$s also don't overlap, and the $z$s also don't overlap: these boxes are as disjoint as possible).
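Here is the check as code (a sketch in Python; the helper names are mine, and the `min`/`max` calls also handle corner pairs that are not given in sorted order, as in the example):

```python
def intervals_overlap(a0, a1, b0, b1):
    """True iff [min(a0,a1), max(a0,a1)] and [min(b0,b1), max(b0,b1)] meet."""
    return max(min(a0, a1), min(b0, b1)) <= min(max(a0, a1), max(b0, b1))

def boxes_intersect(p1, q1, p2, q2):
    """Axis-aligned boxes given by opposite corners p, q (3-tuples)."""
    return all(intervals_overlap(p1[i], q1[i], p2[i], q2[i]) for i in range(3))

P1, Q1 = (961.46, 215.15, 1465.44), (970.02, 214.93, 1481.77)
P2, Q2 = (1093.52, -499.50, 896.11), (1093.12, -505.49, 878.68)
print(boxes_intersect(P1, Q1, P2, Q2))  # False: the x-intervals already miss
```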
Proof of existence of unique successor to a positive number
Let $P(a)$ be the claim $$P(a): \text{if } a \text{ is positive, then } \exists b \in \mathbb{N} \text{ with } b\mathrm{++} = a\text{.}$$ Consider $P(0)$. By definition $0$ is not positive, so $P(0)$ is vacuously true. Next, by definition $1 = 0\mathrm{++}$, and $0 \in \mathbb{N}$, so there is a $b \in \mathbb{N}$ such that $b\mathrm{++} = 1$. Hence $P(1)$ is true. Now suppose $P(k)$ is true for some positive $k$. Then there exists a $b \in \mathbb{N}$ such that $b\mathrm{++} = k$. It follows that $(b\mathrm{++})\mathrm{++} = k\mathrm{++}$. Since $b \in \mathbb{N}$, it follows that $c = b\mathrm{++} \in \mathbb{N}$ by an axiom. Hence, there exists a $c \in \mathbb{N}$ such that $c\mathrm{++} = k\mathrm{++}$, i.e. $P(k\mathrm{++})$ is true. By induction, existence holds.
In the Mean Value Theorem, $f(x+h)=f(x)+hf'(x+ \theta h)$ where $0< \theta <1$, $f(x)=\sin x$
We know that $\lim_{h\rightarrow 0}\frac{\sin(x+h)-\sin x}{h}=\cos x$. Hence the numerator $\left[\arccos\left(\frac{\sin(x+h)-\sin x}{h}\right)-x\right]$ tends to $0$ as $h\rightarrow 0$. So apply L'Hospital's rule.
Is this subset linearly independent or linearly dependent?
That is a fair guess, but to actually prove it, you must demonstrate that they are not necessarily zero. It looks like what you've shown is that if $$a_1(e_1-e_2)+a_2(e_2-e_3)+\cdots+a_{n-1}(e_{n-1}-e_n)+a_n(e_n-e_1)=0,$$ then all the $a_k$ are equal. For linear dependence, we need the other direction, but we don't have to prove it for all non-zero values of $a_1=\cdots=a_n.$ A single value suffices. Try $a_k=1$ for $1\le k\le n$ and see what happens. Of course, you certainly can set all $a_k=a$ for some arbitrary non-zero real $a,$ but it isn't really necessary for the question at hand.
Generalized way of solving these types of equations $x^3 +y^4 =z^5$
The equation $x^3 + y^4 = z^5$ has infinitely many solutions in positive integers. An infinite family of solutions is generated by $$x = a(a^3 + b^4)^{8},\qquad y=b(a^3 + b^4)^{6},\qquad z=(a^3 + b^4)^{5},$$ since then $x^3+y^4=(a^3+b^4)^{25}=z^5$. There are probably other solutions; I doubt that an exhaustive list of solutions is known. Beal's conjecture would imply that the equation has no relatively prime integer solutions. But this conjecture remains unproved, and there is a $1 million prize for a proof or counterexample.
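A symbolic check of the family (a sketch):

```python
import sympy as sp

a, b = sp.symbols('a b')
t = a**3 + b**4
x, y, z = a*t**8, b*t**6, t**5
print(sp.expand(x**3 + y**4 - z**5))  # 0: t^24*(a^3 + b^4) - t^25 vanishes
```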
The probability of a "double supremum" of a random variable
Hint: For every $x_k\gt-\frac12$, $1+2x_k\leqslant(1+x_k)^2$ hence $\prod\limits_k(1+2x_k)\leqslant \left(\prod\limits_k(1+x_k)\right)^2$. Application: The event in the LHS is $\bigcup\limits_{t<s}B^a_{t,s}$ and the event in the RHS is $\bigcup\limits_{t<s}C^a_{t,s}$ with $$ B^a_{t,s}=\left[\prod_{k=t+1}^s(1+2x_k)< a\right], \quad C^a_{t,s}=\left[\prod_{k=t+1}^s(1+x_k)< a\right],\quad a=\tfrac23,\quad x_k=\tfrac14X_k. $$ For every $t<s$, $C^a_{t,s}\subseteq B^{a^2}_{t,s}$ and $B^{a^2}_{t,s}\subseteq B^{a}_{t,s}$ since $a<1$. The result follows (and the number $\frac13$ in the LHS may be replaced by $\frac59$).
How to show a certain group element must belong to the stabilizer of a set element
Your proof is fine. Some further clarification: Claim: $\alpha^x = \alpha^y \implies y \in Stab(\alpha)x$. Proof: As you noted, there exists $g \in G$ such that $y = gx$. Thus, $$ (\alpha^g)^x = \alpha^x.$$ It suffices to show that $\alpha ^ g = \alpha$, as this would imply that $y \in Stab(\alpha)x = \{ gx \mid \alpha^g = \alpha\}$. But this follows readily from the existence of an inverse $x^{-1}$ of $x$: $$ ((\alpha^g)^x)^{x^{-1}} = (\alpha^x)^{x^{-1}},$$ so you get $$ \alpha^g = \alpha,$$ as desired.
Existence of $\lim_{n \rightarrow \infty}A^n$
You can basically do the same thing with the Jordan normal form. You need all eigenvalues to be either in the interior of the unit disk or equal to $1$. Additionally, if $1$ is an eigenvalue, it cannot be defective (a defective eigenvalue $1$ leads to polynomial growth).
Linearity of Determinants, and multiplicity
To compute this Vandermonde determinant, simple reasoning is enough. First, the expression must be a cubic polynomial in $a,b,c$, because every term in the expansion is a product of powers $0,1$ and $2$ (for a similar $n\times n$ matrix, you would have a polynomial of degree $0+1+\cdots+(n-1)=n(n-1)/2$). Then, whenever two parameters are equal, the determinant vanishes. The only cubic polynomials that fulfill this are $$\lambda(a-b)(b-c)(c-a).$$ Then the main diagonal yields a term $1\cdot b\cdot c^2$, which only appears when $\lambda=1$. More generally, the determinant is the product of all $n(n-1)/2$ pairwise differences.
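SymPy confirms the $3\times3$ value $\lambda=1$ (a sketch, assuming the standard row orientation $(1,a,a^2)$, which is consistent with the main-diagonal term $1\cdot b\cdot c^2$ mentioned above):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
V = sp.Matrix([[1, a, a**2],
               [1, b, b**2],
               [1, c, c**2]])
print(sp.simplify(V.det() - (a - b)*(b - c)*(c - a)))  # 0
```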
Possible mistake in book choosing Kth child
There are $N\alpha_j$ families with $j$ children in the world described in the question, therefore $S = \sum_{j=0}^c j(N\alpha_j) = N\sum_{j=0}^c j\alpha_j$ children altogether. (Note that $S = N E(X)$ where $X$ is the number of children in a randomly chosen family.) The most reasonable interpretation of randomly selecting one of these children is that each child has $\frac 1S$ probability to be selected, regardless of which family the child belongs to. From that it follows that $$P(A_j) = \frac{j\alpha_j N}{S} = \frac{j\alpha_j}{\sum_{m=0}^c m\alpha_m}.$$ We could put this into the textbook answer instead of the incorrect $P(A_j) \stackrel?= \alpha_j.$

But I think I like your approach better. There is one $k$th-born child in each family of $k$ or more children. Hence the total number of $k$th-born children is $N\sum_{j=k}^c \alpha_j,$ and therefore $$ P(K = k) = \frac{N\sum_{j=k}^c \alpha_j}{S} = \frac{\sum_{j=k}^c \alpha_j}{\sum_{m=0}^c m\alpha_m},$$ which is the formula you derived. Therefore I agree with your solution.

Here is another approach. Consider the contribution each child makes to $E(K).$ Number all the children from $1$ to $S$ and let $k_i$ be the birth order of child number $i.$ Then $$ E(K) = \frac1S \sum_{i=1}^S k_i.$$ Separate the sum into subtotals for each size of family. For example, for $0 \leq m \leq c$ there are $N\alpha_m$ families of size $m$, and each of those families has one child with $k_i = 1,$ one child with $k_i = 2,$ and so forth up to their one child with $k_i = m.$ So the contribution of one family of size $m$ to the sum is $1 + 2 + \cdots + m.$ Then \begin{align} \sum_{i=1}^S k_i & = N\alpha_1 + N\alpha_2(1 + 2) + N\alpha_3(1 + 2 + 3) + \cdots + N\alpha_m \sum_{k=1}^m k + \cdots + N\alpha_c \sum_{k=1}^c k \\ & = N\alpha_1 + 3N\alpha_2 + 6N\alpha_3 + \cdots + \frac12 m(m+1)N\alpha_m + \cdots + \frac12 c(c+1)N\alpha_c \\ &= \frac12 N \sum_{m=1}^c m(m+1)\alpha_m. \end{align} Therefore $$ E(K) = \frac{\sum_{i=1}^S k_i}{S} = \frac{\sum_{m=1}^c m(m+1)\alpha_m}{2\sum_{j=0}^c j\alpha_j}.$$ I believe this is equal to your solution as well.
How to find the integer solutions of the Diophantine equation $(3x-1)^2+2=(2y^2-4y)^2+y(2y-1)^2-6y$
You can write it as $$(3x-1)^2-(2y^2-3y+\tfrac34)^2=-\tfrac12y-\tfrac{41}{16}.$$ Factoring the LHS gives two factors, at least one of which gets too large when $y$ is large, as $$|(3x-1)-(2y^2-3y+\tfrac34)|+|(3x-1)+(2y^2-3y+\tfrac34)|\geqslant2\cdot|2y^2-3y+\tfrac34|.$$ It suffices to check the $y$'s with $-\tfrac12y-\tfrac{41}{16}=0$ (impossible) or $|-\tfrac12y-\tfrac{41}{16}|\geqslant|2y^2-3y+\tfrac34|$, that is, $y\in\{0,1,2\}$. (Note all this makes a little more sense after denominators are cleared, but it's perfectly valid to act as if they were.) The only solution is $(1,2)$.
Monotone increasing sequence in Lp convergent a.e.
The sequence $f_n$ is increasing, so $f_n$ will, at each point, either converge (to a finite number) or go off to infinity. The set $E$ you have defined is measurable because $E=\{ x\in X\;| \limsup\limits_{n\to\infty} f_n (x) = \infty\}$ and $\limsup\limits_{n\to\infty} f_n$ is well known to be a measurable function when the $f_n$ are measurable.
Graphing polynomials
You should probably set Y-min to –1000 and Y-max to 10000 or something like that. That's most likely the problem, but I'm not completely sure because I can't see what you've done exactly.
On seventh powers $x_1^7+x_2^7+\dots+x_n^7 = 2$?
The best I can do is in terms of the radical $\sqrt{3}$: $$2=(9 m^7 + 1)^7 + (-9 m^7 + 1)^7 + (\sqrt{3} m - 9 m^8)^7 + (-\sqrt{3} m - 9 m^8)^7 + 2 (9 m^8)^7$$
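The identity can be verified symbolically (a sketch):

```python
import sympy as sp

m = sp.symbols('m')
r3 = sp.sqrt(3)
expr = ((9*m**7 + 1)**7 + (-9*m**7 + 1)**7
        + (r3*m - 9*m**8)**7 + (-r3*m - 9*m**8)**7
        + 2*(9*m**8)**7)
print(sp.expand(expr))  # 2
```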
Taylor's Inequality - What is x?
If you're looking to create an upper bound, you should make $|x-a|$ as big as possible. Since $a=2$ and $1 \leq x \leq 2.5$, the absolute value is maximized when $x=1$, so $|x-a| \leq |1-2|=|-1|=1$.
$\lim \frac{a_{2n}}{a_n}< \frac{1}{2}$ series convergent
Look up the Cauchy Condensation Test. Then either use the test, or adapt the proof. Because it is faster, we use the test. For example, for the first part, there is an $\alpha \lt \frac{1}{2}$ and an $N$ such that if $n \ge N$, then $\frac{a_{2n}}{a_n} \lt \alpha$. But by Cauchy Condensation, $\sum a_k$ converges if and only if $\sum 2^k a_{2^k}$ converges. The latter series converges by comparison (for large $k$) with the geometric series $\sum (2\alpha)^k$. Remark: Adapting the proof instead of using the result is a very good idea. One can then see that Condensation, which at first appears magical, comes from natural estimates.
Tangent space of the pre-image of identity via the Lie product
There are the canonical projection maps $p,q: G\times G \rightarrow G$ defined by $$p(g,h):=g, \qquad q(g,h):=h.$$ Let $Z:=\mu^{-1}(e) \subseteq G\times G$. There is a canonical map $$s: G \rightarrow Z \subseteq G \times G$$ defined by $s(g):=(g,g^{-1})$. We get induced maps $p,q: Z \rightarrow G$ defined by $p(g,g^{-1}):=g$, $q(g,g^{-1}):=g^{-1}$. It follows that $$sp(g,g^{-1})=s(g)=(g,g^{-1})$$ and $$ps(g)=p(g,g^{-1})=g,$$ hence $s: G \rightarrow Z$ is an "isomorphism of manifolds". The set $Z$ is not a subgroup of $G\times G$, since $$(g,g^{-1})(h,h^{-1}):=(gh, g^{-1}h^{-1}) \notin Z,$$ and the map $s$ is not a map of Lie groups. If you give $Z$ the product $$(g,g^{-1})(h,h^{-1}):=(gh, h^{-1}g^{-1})=(gh,(gh)^{-1}),$$ it follows that $s$ is an isomorphism of Lie groups.

Question: "More precisely, I would like to characterize the tangent space $T_{(g,h)}(μ^{−1}(\{e\})) \subseteq T_g(G)\oplus T_h(G)$."

Answer: It seems to me the tangent space of $Z$ at $x:=(g,g^{-1})$ should be as follows: $$T_x(Z) \cong T_g(G).$$ This is because $s$ is an isomorphism and $p(x)=g$.
what is the interval within which C has a probability of at least 0.75 of lying?
Indeed, $C$ does not have a beta distribution. Moreover, there exist a lot of intervals satisfying the above condition: any interval within which $X$ has probability at least $0.75$ provides an interval for $C$ with the same probability. One can suppose that the interval should be from zero to some $t$, and then you need to find $t$ such that $$ \mathbb P(0\leq C\leq t)=0.75. $$ Note that $C=10+20X+4X^2$ is a quadratic function of $X$ with the $x$-coordinate of the vertex at $x=-20/8=-2.5$. The vertex is outside the domain of $X$ and the parabola opens upwards. Therefore $C$ strictly increases as $X$ increases, for all $X$ inside $[0,\,1]$. It means that the inequality $0\leq C\leq t$ is equivalent to the inequality $0\leq X\leq s$, where $s$ can be found by integration of the given probability density function over the interval $[0,s]$: $$0.75=\mathbb P(X\leq s)=\int_0^s 2(1-x)dx.$$ If you find $s$, then you get the right endpoint $t$ of the interval for $C$ from the monotonicity of the quadratic function on $[0,\, 1]$: $$0.75=\mathbb P(X\leq s)=\mathbb P(10+20X+4X^2<10+20s+4s^2).$$ Then $t=10+20s+4s^2$.
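Carrying out the computation (a sketch; it solves for $s$ and then evaluates $t$):

```python
import sympy as sp

x, s = sp.symbols('x s')
eq = sp.Eq(sp.integrate(2*(1 - x), (x, 0, s)), sp.Rational(3, 4))
print(sp.solve(eq, s))             # [1/2, 3/2]; only s = 1/2 lies in [0, 1]
s_val = sp.Rational(1, 2)
print(10 + 20*s_val + 4*s_val**2)  # t = 21
```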
Does homogeneous scaling minimize this integral quantity?
Since $a^2 + b^2 \ge 2ab$, we have $$ \int_0^1 \phi'(r)^2r + \frac{\phi(r)^2}{r} \,dr \ge \int_0^1 2\phi(r)\phi'(r)\,dr = \int_0^1 \frac{d}{dr}(\phi^2(r))\,dr = \phi^2(1)-\phi^2(0) = \lambda^2. $$
Number of ways to divide variables into two categories
According to your examples 1 and 2, order within groups matters. If so, then there are $4! = 24$ such ordered partitions. Indeed, they are all generated by ordering the four elements in some way ($4!$ ways to do so) and drawing a line between the first two and the last two.
Conics and Loci Question (Hyperbolae and Circles)
HINT: The hyperbolae are related to their inversions with respect to circle $r$.
Proving the convergence of $\sum_{n=1}^{\infty}\frac{n^n}{e^n (n+1)!}$
Stirling's approximation states that $$ n!\sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n $$ where $\sim$ means that the ratio of the two sides tends to $1$ as $n\to \infty$. Hence $$ a_n=\frac{n^n}{e^n (n+1)!}\sim \frac{n^n}{e^n \sqrt{2\pi (n+1)}\left(\frac{n+1}{e}\right)^{n+1}} =\frac{e}{\sqrt{2\pi}(n+1)^{3/2}}\left(1-\frac{1}{n+1}\right)^n\leq C \frac{e}{\sqrt{2\pi}(n+1)^{3/2}} $$ for some $C$ since $\left(1-\frac{1}{n+1}\right)^n$ is a convergent sequence. It follows that the original series converges.
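A numerical check of the asymptotics (a sketch; by the estimate above, $\sqrt{2\pi}\,(n+1)^{3/2}a_n\to e\cdot e^{-1}=1$):

```python
import math

def log_a(n):
    # log a_n = n*log n - n - log((n+1)!), using lgamma(n+2) = log((n+1)!)
    return n*math.log(n) - n - math.lgamma(n + 2)

for n in [10, 100, 1000, 10000]:
    print(n, math.sqrt(2*math.pi) * (n + 1)**1.5 * math.exp(log_a(n)))
# The printed values approach 1, consistent with a_n ~ 1/(sqrt(2*pi) n^{3/2}).
```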
finding the sum of the absolute values for the roots
Notice that $$\begin{align}x^4-4x^3-4x^2+16x-8 &= (x-1)^4 - 10(x-1)^2 + 1 \\ &= ((x-1)^2-5)^2-24 \end{align}$$ so you can actually calculate the roots explicitly and sum their absolute values.
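Numerically (a sketch), the roots are $1\pm\sqrt{5\pm2\sqrt6}$, and the sum of their absolute values comes out to about $8.2925$:

```python
import numpy as np

roots = np.roots([1, -4, -4, 16, -8])  # coefficients of the quartic
print(sorted(roots.real))              # 1 +/- sqrt(5 +/- 2*sqrt(6)), all real
print(np.abs(roots.real).sum())        # 8.2925...
```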
Application of C*-algebras to PDEs
$C^*$-algebras play a major role in many versions of the spectral theorem for unbounded hermitian operators, for example the famous spectral theorems of von Neumann and Gelfand. In an abstract sense, spectral theorems basically say that you can find a representation of a commutative $C^*$-algebra on Hilbert spaces as multiplication operators. This is of course of major importance for PDE theory, as you can reduce many PDEs to eigenvalue problems for hermitian operators. For example, if you try separation with $\Psi(\vec{x},t)=\exp{(-\frac{i}{\hbar}E\cdot t)}\cdot\psi(\vec{x})$ on the Schrödinger equation, you get the eigenvalue problem $H\psi=E\psi$, which is known as the stationary Schrödinger equation, and which you can obviously solve using spectral methods. For more general reasons, $C^*$-algebras also play a major role in quantum mechanics and quantum field theory (especially axiomatic QFT), as quantum mechanics is mathematically nothing other than a very big eigenvalue problem. This may also be an interesting thing to know.
what is an integral multiple of a period?
You need to truncate the waveform to an integer multiple of the period length. Example: if your period length is 2 sec, then your data must be truncated to have a length of 2 sec or 4 sec or 20 sec, and so on. This is so that the periodic extension of your truncated waveform data is the same as the original waveform (used e.g. when computing the DFT/FFT).
Tom Apostol - Calculus Vol. 1: Method of Exhaustion in Introduction
Remember, he states before it that, for all $n\ge1$, $A < \frac{b^3}{3} + \frac{b^3}{n}$. This is the same as the statement: $$∀n\ge1,\Biggl(A < \frac{b^3}{3} + \frac{b^3}{n}\Biggr).$$ Since we assumed that $A > \frac{b^3}{3}$, we have $A - \frac{b^3}{3} > 0$, so we can manipulate the inequality inside the universal quantifier with the usual algebraic operations. Now, $∀n \ge 1, \bigl(A < \frac{b^3}{3} + \frac{b^3}{n}\bigr)$ means $∀n\ge1, \bigl(A - \frac{b^3}{3} < \frac{b^3}{n}\bigr)$; since the LHS is positive, we divide both sides by it and get $∀n \ge 1, \biggl(1 < \frac{\frac{b^3}{n}}{A - \frac{b^3}{3}}\biggr)$. We proceed to multiply both sides by $n$, which is okay since $n \ge 1$, and finally arrive at a contradiction, namely: $$∀n \ge 1 \Biggl(n < \frac{b^3}{A - \frac{b^3}{3}}\Biggr).$$ How to see this contradiction more clearly? First, note that the RHS is positive, so $\frac{b^3}{A - \frac{b^3}{3}} > 0$; we add $1$ to both sides and get $\frac{b^3}{A - \frac{b^3}{3}} + 1 > 1$. Let $n$ be an integer with $n \geq \frac{b^3}{A - \frac{b^3}{3}} + 1$ (such an integer exists by the Archimedean property); it is larger than $1$ (so it is one of the quantified values of the universal quantifier), but this $n$ is not smaller than $\frac{b^3}{A - \frac{b^3}{3}}$, for if it were, then $1 < 0$, which is not possible. Therefore, a contradiction, as required. You can do the same with the other direction, showing that the only possibility is $A = \frac{b^3}{3}$.
sum of torsion of an elliptic curve
It's not true if $m$ is not prime to the characteristic of the field (e.g. take an ordinary elliptic curve in characteristic 2; it will have exactly one non-trivial 2-torsion point). We also need the field to be algebraically closed, although you may have been assuming that anyway (e.g. take an elliptic curve over $\mathbb{R}$ whose real points have only one connected component; then there's a unique non-trivial two-torsion point). Once we make these two assumptions on the ground field, the $m$-torsion is isomorphic to $(\mathbb{Z}/m)^2$ as an abelian group, and this is a property of that group.
Two harmonic subseries
The first series does not converge, because $S_n$ does not tend to zero. I don't have a full proof of this, but I do have an outline. Note that the $k$ in the definition of $S_{n+1}$ is between $(n+1)/S_n-n$ and $(n+1)/S_n$. Note also that increasing $k$ by one decreases the sum $1/k + 1/(k+1) + \cdots + 1/(k+n)$ by $1/k-1/(k+1)+1/(k+n)-1/(k+n+1)$, which is something on the order of $1/n^2$. Therefore $k$ can be tweaked until the sum is within something like $1/n^2$ of $S_n$; it follows that $S_n-S_{n+1}$ is bounded above by something like $1/n^2$. When that "something on the order of $1/n^2$" is made explicit, one should be able to compute the first several values of $S_n$ and then sum the convergent telescoping series of differences $S_{n+1}-S_n$ to prove that $\lim_{n\to\infty} S_n > 0$. Calculated data strongly suggests that $\lim_{n\to\infty} S_n$ equals a number slightly larger than $0.405$.
prove that $\frac{a_n}{n}\rightarrow 1$: strange contest problem
This is problem 4 from the 2015 Miklós Schweitzer contest. There is an AoPS thread discussing this problem, see here. A couple of solutions are given there.
How do I divide Laurent polynomials?
The answer depends on the order in which you try to eliminate the terms. If you start with the $z^{-1}$ term, you can get rid of it by $\frac{1}{4}z^{-1}b(z)$ and you are left with $$ a(z)-\frac{1}{4}z^{-1}b(z) = 5+z. $$ For this polynomial you can again choose which term you want to eliminate with a fitting multiple of $b(z)$. Eliminating the $5$: $$ a(z)-\frac{1}{4}z^{-1}b(z)-\frac{5}{4}b(z) = -4z = r(z); $$ eliminating the $z$: $$ a(z)-\frac{1}{4}z^{-1}b(z)-\frac{1}{4}b(z) = 4 = r(z). $$
Stability of homogeneous linear differential equation with variable coefficients
Let $\lambda_{max}(t)$ be the maximum eigenvalue of $A^T(t)+A(t)$. If there is a finite constant $\gamma$ such that $$ \int_\tau^t\lambda_{max}(\sigma)d\sigma\leq\gamma $$ for all $t\geq\tau$, then the system of ODEs with variable coefficients is uniformly stable. The detailed proof of this claim can be found in Corollary 8.3, p. 133, of Linear System Theory, 2nd edition, by Wilson J. Rugh.
Distinction between "measure differential equations" and "differential equations in distributions"?
As long as the distributions involved in the equation are (signed) measures, there is no difference and both terms can be used interchangeably. This is the case for impulsive source equations like $y''+y=\delta_{t_0}$. Conceivably, an ODE could also involve distributions that are not measures, such as the derivative of $\delta_{t_0}$. In that case only "differential equation in distributions" would be correct. But I can't think of a natural example of such an ODE at this time.
Is the boundary of an open subset of $\mathbb{R}^n$ always a topological manifold?
No, consider $U = \{ (x,y) \in \mathbb{R}^2 : xy \neq 0 \}$. Then $\partial U = \{ (x,y) : x = 0 \vee y = 0 \}$, and the point $(0,0)$ doesn't have a neighborhood homeomorphic to $\mathbb{R}$. You can modify this example to get a connected and bounded $U$: consider $$U = \{ (x,y) : x^2 + y^2 < 2 \} \setminus \left( [-1,1]\times\{0\} \cup \{0\} \times [-1,1]\right)$$
Proving almost sure convergence (help understanding a step in a proof)
Here is a proof using the law of large numbers. The random variables $\{\log|W_m|\}$ are iid and non-positive. By the strong law of large numbers, we have $$ \frac1n\sum_{m=1}^n\log|W_m|\to E(\log|W|) $$ almost surely. But by assumption, $E|W|<1$, so by Jensen's inequality $$ E(\log|W|)\le \log E|W|<0. $$ Recall that for any sequence $\{a_n\}$ of non-positive reals, $$\lim_n\frac1n\sum_{m=1}^na_m<0 \quad\Longrightarrow\quad \sum _{m=1}^na_m\to-\infty.$$ Applying this result pointwise with $a_m:=\log|W_m(\omega)|$, it follows that with probability one, $\sum_{m=1}^n\log|W_m|\to -\infty$. By exponentiating, this last event is the same as the event $\prod_{m=1}^n|W_m|\to0$.
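A quick simulation illustrates the mechanism. The distribution of $W$ is not specified above, so the sketch below takes $W$ uniform on $(-1,1)$, an arbitrary choice satisfying $E|W|<1$ (here $E(\log|W|)=-1$, which is indeed below $\log E|W|=\log\frac12$, as Jensen predicts).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative choice (an assumption): W uniform on (-1, 1),
# so E|W| = 1/2 < 1 and log|W| <= 0 almost surely.
W = rng.uniform(-1.0, 1.0, size=n)

log_abs = np.log(np.abs(W))
avgs = np.cumsum(log_abs) / np.arange(1, n + 1)
print(avgs[-1])                 # settles near E(log|W|) = -1, by the SLLN

# The partial sums therefore drift to -infinity ...
print(log_abs.sum())            # a large negative number
# ... which is exactly the statement that the product of |W_m| tends to 0.
print(np.exp(log_abs.sum()))    # numerically 0
```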
Proving if $A$ is a countable set then the quotient group $A/R$ is countable
HINT: There is a surjection from $A$ onto $A/R$. What can be the cardinality of the image of a function with a countable domain?
probability that a number is a leap year
A regular year is $365=7\cdot52+1$ days, so the day of the week of a given calendar date advances by one day each regular year and by two days each leap year. From $1965$ to $1994$ is $29$ years, of which those that are multiples of $4$ are leap years. The multiples of $4$ start with $1968$ and end with $1992$, so there are $7$ of them. $29+7\equiv1\pmod7$, so the day of the week advances one place from $9$ November $1965$ to $9$ November $1994$, and $9$ November $1965$ must have been a Tuesday. $1970$ is $5$ years later, and one of those years is a leap year, so $9$ November occurs $5+1=6$ days later in the week in $1970$, on Monday.
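The bookkeeping is easy to cross-check with Python's datetime module:

```python
from datetime import date

# Cross-check the day-of-week argument above.
for year in (1965, 1970, 1994):
    d = date(year, 11, 9)
    print(d.isoformat(), d.strftime('%A'))

# 1965-11-09 Tuesday
# 1970-11-09 Monday
# 1994-11-09 Wednesday
```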
Finding power series of $\,f(z)$
$$\frac1{1+z}=\frac1{2+(z-1)}=\frac12\frac1{1+\frac{z-1}2}=\frac12\sum_{k=0}^\infty(-1)^k\left(\frac{z-1}2\right)^k$$ The above is true for $$\left|\frac{z-1}2\right|&lt;1\iff |z-1|&lt;2$$ Well, now just substitute $\;z\to w^2\;$ : $$\frac1{1+w^2}=\frac12\sum_{k=0}^\infty (-1)^k\left(\frac{w^2-1}2\right)^k$$
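A quick numerical check of the final expansion at a point inside the region of convergence (the sample point and truncation order are arbitrary):

```python
# Check the expansion of 1/(1+w^2) in powers of (w^2-1)/2 at a sample point.
w = 0.4 + 0.3j                      # arbitrary point with |w^2 - 1| < 2
t = (w**2 - 1) / 2
approx = 0.5 * sum((-t)**k for k in range(60))   # truncated series
print(approx, 1 / (1 + w**2))       # the two values agree to many decimals
```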
Decipher the meaning of $\mathbb{E}[\mathcal{N}(W_t)]$ and compute its value
So... after much discussion in the comments, in the end it seems that the letter $\mathcal N$ is used here to denote $\Phi$, the CDF of the standard normal distribution, defined by $\Phi(x)=\mathrm P(X\leqslant x)$, where $X$ is any standard normal random variable. Since $W_t=\sqrt{t}Y$ where $Y$ is standard normal, $\mathrm E(\Phi(W_t))=\mathrm P(X\leqslant\sqrt{t}Y)=\mathrm P(Z\leqslant0)$ with $Z=X-\sqrt{t}Y$, taking $X$ standard normal and independent of $Y$. Since $X$ and $Y$ are independent and centered normal, $Z$ is centered normal, hence $Z$ is symmetric and $\mathrm E(\Phi(W_t))=\Phi(0)=\frac12$ for every $t$. By the same decomposition, $\mathrm E(\Phi(W_t+a))=\mathrm P(Z\leqslant a)$. Since $Z$ is centered normal with variance $1+t$, $\mathrm P(Z\leqslant a)=\mathrm P(\sqrt{1+t}\cdot X\leqslant a)$, hence, for every $a$ and every nonnegative $t$, $$\color{red}{\mathrm E(\Phi(W_t+a))=\Phi\left(\frac{a}{\sqrt{1+t}}\right)}$$ Exercise: Use this approach to completely and readily solve this nearly duplicate question of yours.
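A Monte Carlo sanity check of the boxed identity, using scipy's standard normal CDF (the values of $t$ and $a$ below are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
t, a = 2.0, 0.7                       # arbitrary illustrative parameters

# W_t has the law of sqrt(t) * N(0, 1).
W_t = np.sqrt(t) * rng.standard_normal(1_000_000)

mc = norm.cdf(W_t + a).mean()         # Monte Carlo estimate of E[Phi(W_t + a)]
exact = norm.cdf(a / np.sqrt(1 + t))  # the closed form derived above
print(mc, exact)                      # agree to about three decimals
```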
Showing that $f$ is twice differentiable
$f(1+h,k)-f(1,0)=\frac{h^3 k^3}{h^2+k^2}$, so we look for a linear transformation $Df(1,0):\mathbb R^2\to \mathbb R$ such that $\frac{\left |Df(1,0)(h,k)-\frac{h^3 k^3} {h^2+k^2}\right|}{\sqrt{h^2+k^2}}\to 0$ as $(h,k)\to 0.$ If we try the easiest one, namely $Df(1,0)(h,k)=0$, we see that it works. For the second derivative at $(1,0)$, first look at the general case. We have the following data: $f:\mathbb R^2\to \mathbb R;\ x\mapsto f(x)$ and $Df:\mathbb R^2\to L(\mathbb R^2,\mathbb R);\ x\mapsto Df(x)$, where $Df(x)$ is the linear transformation defined as you have done. Now, $D^2f:\mathbb R^2\to L(\mathbb R^2,L(\mathbb R^2,\mathbb R))$ is defined as follows: if $Df$ is differentiable at $x_0\in \mathbb R^2$, then there must exist a map $D^2f$ that sends $x_0\in \mathbb R^2$ to a linear transformation $D^2f(x_0)$, which in turn satisfies $Df(x_0+h)-Df(x_0)=D^2f(x_0)(h)+r(h)$, where $r(h)/\|h\|\to 0$ as $h\to 0.$ So, basically, we want to calculate $Df(x_0+h)-Df(x_0)-D^2f(x_0)(h)$ and show that it is small whenever $h$ is. This is the same definition of derivative as before, except now the codomain $\mathbb R$ is replaced by $L(\mathbb R^2,\mathbb R)$. Notice these maps are all elements of $L(\mathbb R^2,\mathbb R)$, so to make sense of them we have to evaluate them at an arbitrary $v\in \mathbb R^2$: $Df(x_0+h)(v)-Df(x_0)(v)-D^2f(x_0)(h)(v)$. Now, it's easier to express the derivatives as $1\times 2$ matrices: we have $x_0=(1,0)$, so writing $h:=(h,k)$, $$Df((1,0)+(h,k))=\begin{pmatrix} f_x(1+h,k) & f_y(1+h,k) \end{pmatrix}=\begin{pmatrix} \frac{h^2k^3(h^2+3k^2)}{(h^2+k^2)^2} & \frac{h^3k^2(3h^2+k^2)}{(h^2+k^2)^2} \end{pmatrix}$$ (the entries simplify from the quotient-rule expressions), and $Df((1,0))=0$, as we showed above. So, as in the first part, we look for a linear transformation $D^2f(1,0)$ such that $\frac{\|Df(1+h,k)-D^2f(1,0)(h,k)\|}{\sqrt{h^2+k^2}}\to 0$ as $(h,k)\to 0.$ If we try $D^2f(1,0)=0$ again, we have $$\frac{Df(1+h,k)(v_1,v_2)}{\sqrt{h^2+k^2}}=\frac{h^2k^3(h^2+3k^2)}{(h^2+k^2)^{5/2}} v_1+ \frac{h^3k^2(3h^2+k^2)}{(h^2+k^2)^{5/2}}v_2.$$ Taking the supremum of this over $\|v\|\le 1$ and letting $(h,k)\to 0$ shows that our guess was correct.
Proof that there is no way to have certain profit
Suppose that there exists a betting strategy $(x_1,x_2,x_3)$ which makes a positive profit no matter what the outcome is. Then $$x_1 a > x_1+x_2+x_3,\qquad x_2 b > x_1+x_2+x_3,\qquad x_3 c > x_1+x_2+x_3,$$ that is, $$a > \frac{x_1+x_2+x_3}{x_1},\qquad b > \frac{x_1+x_2+x_3}{x_2},\qquad c > \frac{x_1+x_2+x_3}{x_3}.$$ Taking reciprocals, $$\frac{1}{a} < \frac{x_1}{x_1+x_2+x_3},\qquad \frac{1}{b} < \frac{x_2}{x_1+x_2+x_3},\qquad \frac{1}{c} < \frac{x_3}{x_1+x_2+x_3},$$ and adding the three inequalities gives $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c} < \frac{x_1+x_2+x_3}{x_1+x_2+x_3} = 1.$$ Hence a strategy with certain profit can exist only when $\frac{1}{a}+\frac{1}{b}+\frac{1}{c} < 1$; if $\frac{1}{a}+\frac{1}{b}+\frac{1}{c} \ge 1$, no such strategy exists.
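Conversely, when $\frac1a+\frac1b+\frac1c<1$, staking amounts proportional to the reciprocal odds does give a certain profit. A minimal sketch (the odds below are made up for illustration):

```python
# If 1/a + 1/b + 1/c < 1, the stakes x_i = 1/odds_i pay out exactly 1
# on every outcome while costing strictly less than 1 in total.
def arbitrage_stakes(odds):
    if sum(1.0 / o for o in odds) >= 1:
        return None  # no sure-profit strategy exists, by the proof above
    return [1.0 / o for o in odds]

odds = (3.0, 4.0, 5.0)       # illustrative: 1/3 + 1/4 + 1/5 = 47/60 < 1
stakes = arbitrage_stakes(odds)
total = sum(stakes)
for x, o in zip(stakes, odds):
    print(f"payout {x * o:.2f} against total stake {total:.3f}")
# Every payout is 1.00 while the total stake is ~0.783: certain profit.
```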
Is the change of basis matrix always invertible? I am getting conflicting information.
Normally, a change of basis matrix refers to a matrix that maps a basis of $\mathbb{R}^n$ to another basis of $\mathbb{R}^n$. In that case, the change of basis matrix is square and invertible. However, it seems that in that video they are transforming between two bases of a subspace of $\mathbb{R}^3$ rather than of the whole space. In that case, the matrix will not be invertible; however, I would say that this is not the usual way of defining the change of basis matrix.
If $\{f_n\}\subset L^+, f_n$ decreases pointwise to $f,$ and $\int f_1<\infty,$ then $\int f=\lim\int f_n$
Each $f_{n}$ is dominated by $f_{1}\in L^{1}$ (the sequence is non-negative and decreasing), so apply the dominated convergence theorem.
Equilibrium probabilities in simple 3-state Markov chain
Ok, so from how I learned this, the transition matrix $P$ has column sums adding up to $1$ and I treat $x^{k + 1} = Px^k$, where $k$ is the step index. This means the $P$ that I'm using is the transpose of your $P$, and I'll denote it $P_0$. To calculate equilibrium solutions, find the eigenvector of $P_0$ whose eigenvalue is $1$ and scale it so that its terms sum to $1$. That means solving the linear system (you're finding $\operatorname{Null}(P_0 - \lambda I)$ where $\lambda = 1$): $$-0.8x_1 + 0.5x_2 + 0.5x_3 = 0$$ $$0.5x_1 -0.75x_2 + 0.25x_3 = 0$$ $$0.3x_1 + 0.25x_2 -0.75x_3 = 0$$ Then scale your solution so the terms sum up to $1$. Find $x^6 = P_0^6x^0$. Here, $x^0 = \begin{bmatrix} 0 \\ 0 \\ 1\end{bmatrix}$ since the test $5$ iterations ago was easy. The first entry of $x^6$ should be the probability that the upcoming exam will be hard.
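The whole computation is easy to do numerically; here is a numpy sketch, with $P_0$ read off from the linear system above (the rows of $P_0 - I$ are the coefficients of the three equations):

```python
import numpy as np

# Column-stochastic transition matrix, read off from the linear system above.
P0 = np.array([
    [0.20, 0.50, 0.50],
    [0.50, 0.25, 0.25],
    [0.30, 0.25, 0.25],
])

# Equilibrium: the eigenvector of P0 with eigenvalue 1, scaled to sum to 1.
vals, vecs = np.linalg.eig(P0)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
print("equilibrium:", v / v.sum())

# Six steps from x0 = (0, 0, 1), since the test five iterations ago was easy.
x6 = np.linalg.matrix_power(P0, 6) @ np.array([0.0, 0.0, 1.0])
print("P(upcoming exam is hard) =", x6[0])
```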
Equivalence of characterizations of the convolution of Borel measures
$\chi_A(x+y)=\chi_{\sigma^{-1}(A)}(x,y)$.
Prove that $|G| = |Z(G)| + \sum_{i' \in I'}|G:C_G(x_{i'})|$
This is a very old question and has been addressed in the comments; I am outlining an answer here so the question does not remain forever listed as unanswered. We know that the orbits of this action form a partition of $G$, so we can write $G = \coprod_{i \in I} Gx_{i}$. We can group together the orbits that have size $1$ and those that do not, as the OP has tried and as is suggested in the comments: $$G = \coprod_{i \in I \setminus I^{\prime}} Gx_{i} \; \amalg \; \coprod_{i^{\prime} \in I^{\prime}} Gx_{i^{\prime}}$$ Combining what the OP calls Lemmas 1, 3 and 4, we see that an element $x$ is in the centre of $G$ if and only if the orbit $Gx$ has size $1$. Hence we have $$G = Z(G) \amalg \coprod_{i^{\prime} \in I^{\prime}} Gx_{i^{\prime}}$$ As all the sets are disjoint (since the orbits are disjoint), the desired result about the order follows by counting: each non-singleton orbit $Gx_{i^{\prime}}$ has size $|G:C_G(x_{i^{\prime}})|$ by the orbit-stabilizer theorem.
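For a concrete instance, the class equation is easy to verify by brute force with sympy's permutation groups; here for $S_3$, purely as illustration:

```python
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(3)
elements = list(G.elements)

# Partition G into conjugacy classes (the orbits of the conjugation action).
classes, seen = [], set()
for x in elements:
    if x in seen:
        continue
    cls = {g * x * g**-1 for g in elements}
    classes.append(cls)
    seen |= cls

# Class equation: |G| = |Z(G)| + sum over the non-singleton classes.
center_size = sum(1 for c in classes if len(c) == 1)
rest = sum(len(c) for c in classes if len(c) > 1)
print(G.order(), center_size + rest)  # 6 6
```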
Topology making a family of functions optimal
It's useful to know that in order to show that a function $g:Y\to X$ is continuous, it suffices to show that for each $S\in\cal S$, where $\cal S$ is a subbase, the preimage $g^{-1}(S)$ is open. Remember that by taking all finite intersections of elements of $\cal S$ we obtain a base $\cal B$ for the topology on $X$, and each open set is then an arbitrary union of elements of $\cal B$. So if $U$ is open in $X$, $g^{-1}(U)$ is a union of finite intersections of sets of the form $g^{-1}(S)$, where $S\in\cal S$, hence the preimage of $U$ is open. The smallest topology containing $\tilde A$ is the topology having $\tilde A$ as a subbase, so if you have shown that $g^{-1}(A)$ is open for each $A\in\tilde A$, then you are finished.
Examples of closed sets with empty interior
Here are a few intuitive examples:

- Every singleton set $\{p\}$
- The circle $\{(x,y) \in \mathbb{R}^2 : x^2 + y^2 = r^2\}$
- The line $y=mx + b$

The latter two can be generalized, of course. Edit: Here's a bit more intuition: consider the plane in $\mathbb{R}^3$. Take some point that doesn't lie in the plane. Do you see how you can find an open ball around it which doesn't intersect the plane? Think of the plane as being "thin", if you will. This tells you its complement is open, so the plane is closed. And by the "thin-ness", no point on the plane has an open ball around it contained in the plane. So now we have a point, a line, and a plane as closed nowhere dense sets in $\mathbb{R}$, $\mathbb{R}^2$, $\mathbb{R}^3$ respectively. Can you try to generalize to find a closed set with empty interior in $\mathbb{R}^n$?
where does the $\cos(\theta)=1-2\sin^{2}(\theta/2)$ come from?
It's a special case of the compound-angle formula $\cos (A+B)=\cos A\cos B-\sin A\sin B$. Take $A=B=\frac{\theta}{2}$ so $\cos\theta=\cos^2\frac{\theta}{2}-\sin^2\frac{\theta}{2}$. This can be written in two equivalent forms using $\cos^2\frac{\theta}{2}+\sin^2\frac{\theta}{2}=1$, one being $1-2\sin^2\frac{\theta}{2}$ (the other is $2\cos^2\frac{\theta}{2}-1$).
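One line of sympy confirms it:

```python
from sympy import symbols, cos, sin, simplify

theta = symbols('theta')
# cos(theta) - (1 - 2 sin^2(theta/2)) should simplify to zero.
print(simplify(cos(theta) - (1 - 2*sin(theta/2)**2)))  # 0
```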
Written Descriptive Logic for Target Heart Rate Equation
It seems mostly fine. You should put an "and" between your two inequalities, or else write $0.7(220 - x) \le y \le 0.85 (220 - x)$. Also, it's not quite right to speak of solving this "equation", because it's not an equation but rather a system of inequalities, which is satisfied by many $y$ (for any fixed $x$). If you want to give equations, then you can say that the lower and upper heart rate targets are given by $y_\text{lower} =0.7(220 - x)$ and $y_\text{upper} =0.85(220 - x)$ respectively. Finally, I might suggest using letters like $A$ for age and $r$ for heart rate (instead of $x$ and $y$) to make it easier to follow.
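If it helps to make the two target equations concrete, here is a tiny sketch (the function name and the sample age are made up):

```python
# Hypothetical helper: lower and upper target heart rates for a given age,
# using the two equations suggested above.
def target_heart_rate(age):
    max_rate = 220 - age
    return 0.70 * max_rate, 0.85 * max_rate

lo, hi = target_heart_rate(30)
print(f"age 30: target zone {lo:.1f} to {hi:.1f} bpm")  # 133.0 to 161.5
```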
What Does It mean to 'Solve the System' when given a two matrices?
Hint: $$ \begin{pmatrix} 1&amp;k\\3&amp;2 \end{pmatrix} \begin{pmatrix} x\\y \end{pmatrix}= \begin{pmatrix} 0\\0 \end{pmatrix} $$ means: $$ \begin{pmatrix} x+ky\\3x+2y \end{pmatrix}= \begin{pmatrix} 0\\0 \end{pmatrix} $$ and this is equivalent to the system: $$ \begin{cases} x+ky=0\\ 3x+2y=0 \end{cases} $$ can you solve this?
Find the equation defining a perpendicular bisector
Given two points, say $(x_1,y_1)$ and $(x_2,y_2)$, the midpoint of these two points is $$M:=\left(\frac{x_1+x_2}{2},\frac{y_1+y_2}{2}\right)$$ the gradient of the line joining them is $$G:=\frac{y_1-y_2}{x_1-x_2}$$ Hence, for the perpendicular bisector, we want the line passing through $M$ and with gradient $-1/G$. If $M=(m_1,m_2)$ then the equation of the perpendicular bisector is $$\frac{y-m_2}{x-m_1} = \frac{x_1-x_2}{y_2-y_1}$$ After some algebraic manipulation we find that the equation is $$(x_1-x_2)x+(y_1-y_2)y=\tfrac{1}{2}\left(x_1^2+y_1^2-x_2^2-y_2^2\right)$$
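The final equation can be checked symbolically; the sketch below verifies that it is equivalent to the equidistance condition defining the perpendicular bisector:

```python
from sympy import symbols, Rational, simplify

x, y, x1, y1, x2, y2 = symbols('x y x1 y1 x2 y2')

# The derived equation, written as lhs - rhs.
bisector = ((x1 - x2)*x + (y1 - y2)*y
            - Rational(1, 2)*(x1**2 + y1**2 - x2**2 - y2**2))

# Equidistance from the two points, also written as a difference.
equidistant = ((x - x1)**2 + (y - y1)**2) - ((x - x2)**2 + (y - y2)**2)

# The two conditions differ by a factor of -2, so they define the same line.
print(simplify(2*bisector + equidistant))  # 0
```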
compute $\iint_{[0,1]^2}\frac{1}{(1+x)(1+xy^2)}dxdy$ with the substitution $(x,y)=(u^2,\frac{v}{u})$.
Since $0\leq y=\frac{v}{u}\leq 1$ as $u=\sqrt{x}\in [0,1]$, it should be $$2\int_0^1\frac{1}{1+u^2}\left(\int_0^{u}\frac{dv}{1+v^2}\right)du= 2\int_0^1\frac{\arctan(u)}{1+u^2}du=\left[\arctan^2(u)\right]_0^1=\frac{\pi^2}{16}.$$
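A numerical cross-check with scipy, integrating the original integrand over the unit square:

```python
import numpy as np
from scipy.integrate import dblquad

# dblquad expects the integrand as f(y, x) with the inner variable first.
val, err = dblquad(lambda y, x: 1.0 / ((1.0 + x) * (1.0 + x * y**2)),
                   0.0, 1.0, 0.0, 1.0)
print(val, np.pi**2 / 16)  # both approximately 0.61685
```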
Matrices and Complex Numbers
Looking at the comments posted since this answer was written, the following appears to expand on Jyrki's idea. There exists a homomorphism $\phi:S \rightarrow \mathbb{C}$ defined as follows: $$\begin{bmatrix} a & -b \\[0.3em] b & a \\[0.3em] \end{bmatrix} \mapsto (a + bi)$$ Of course, you will want to prove that this is indeed a homomorphism by checking the following conditions:

- $\phi$ maps the multiplicative identity in $S$ to the multiplicative identity in $\mathbb{C}$.
- $\phi(xy) = \phi(x)\phi(y)$ for any $x, y \in S$.
- $\phi(x + y) = \phi(x) + \phi(y)$ for any $x, y \in S$.

Once you have done this, then show that $\operatorname{Im}(\phi) = \mathbb{C}$ and $\ker(\phi) = \{0\}$, where $0$ is the additive identity in $S$. From here, you can apply the first isomorphism theorem to show that $S$ is isomorphic to $\mathbb{C}$.
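The algebraic conditions on $\phi$ are easy to spot-check symbolically; a short sympy sketch:

```python
from sympy import symbols, Matrix, I, simplify

a, b, c, d = symbols('a b c d', real=True)

def M(p, q):
    """Matrix in S corresponding to the complex number p + q*i."""
    return Matrix([[p, -q], [q, p]])

def phi(m):
    """The map phi from the answer: read a and b off the first column."""
    return m[0, 0] + m[1, 0] * I

X, Y = M(a, b), M(c, d)
print(simplify(phi(X * Y) - phi(X) * phi(Y)))    # 0: multiplicative
print(simplify(phi(X + Y) - phi(X) - phi(Y)))    # 0: additive
print(phi(M(1, 0)))                              # 1: identity to identity
```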
If $V$ is completely normable, then is every norm complete?
Let $||\cdot||_0$ and $||\cdot||_1$ induce the same topology on a vector space $V$. Let $r>0$. Since $B_r(0)_1$ is open in the topology induced by $||\cdot||_1$, it must be open in the one induced by $||\cdot||_0$. This means there exists a $k>0$ so that $B_k(0)_0 \subset B_r(0)_1$. Now take $r=1$: any $x \in V, x \neq 0$ can be written as $x=\frac{2||x||_0}{k}\cdot\frac{k}{2||x||_0}x$, and the vector $\frac{k}{2||x||_0}x$ has $||\cdot||_0$-norm $\frac{k}{2}<k$, so it lies in $B_k(0)_0$ and hence in $B_1(0)_1$. By homogeneity of the norm this gives $$||x||_1 \le \frac{2}{k}\cdot||x||_0$$ for all $x \in V$, with $k$ independent of the choice of $x$. The same argument with the roles of the norms exchanged gives an $h>0$ so that $||x||_0\le h\cdot||x||_1$ for all $x \in V$. These two inequalities have as a consequence that the questions of convergence and of the Cauchy property of a sequence are identical for both norms, i.e. if $V$ is complete wrt $||\cdot||_0$ it is complete wrt $||\cdot||_1$.
Solve $\sqrt{x^2+8x+7}+\sqrt{x^2+3x+2}=\sqrt{6x^2+19x+13}$
Observe that $(x+1)$ divides all the quadratics: the original equation is $$\sqrt{(x+1)(x+7)} + \sqrt{(x+1)(x+2)} = \sqrt{(x+1)(6x+13)},$$ and rearranging we obtain the following: $$(\sqrt{x+1})(\sqrt{x+7}+\sqrt{x+2}-\sqrt{6x+13})=0.$$ We are doing some trickery in allowing square roots to venture into $\mathbb C$ here, but note that it is all still correct: for the original equation to have solutions in $\mathbb R$, either $x \geq -1$ (and all the square roots stay safely within $\mathbb R$) or $x \leq -7$ (in which case all of the square roots are of negative values, so the extra factors of $i$ safely distribute out, assuming we use the principal square root). The first factor yields the solution $x=-1$. The second factor gives solutions when $$\sqrt{6x+13}= \sqrt{x+7}+\sqrt{x+2},$$ and squaring both sides gives $$6x+13=2x+9+2\sqrt{(x+7)(x+2)},$$ which rearranges to $$\sqrt{(x+7)(x+2)}=2x+2.$$ Square again, solve the resulting quadratic, and test your solutions to finish.
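For completeness, sympy's solver confirms which candidates survive the final check (it should report exactly $-1$ and $2$):

```python
from sympy import symbols, sqrt, solve

x = symbols('x', real=True)
eq = (sqrt(x**2 + 8*x + 7) + sqrt(x**2 + 3*x + 2)
      - sqrt(6*x**2 + 19*x + 13))
print(solve(eq, x))  # [-1, 2]
```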
Change of Coordinates for Discrete-Time Affine Dynamical System $x_{k+1} = A\,x_{k} + b$.
Just figured it out! Wherever you see an $x$, replace it with $x + (I - A)^{-1}\,b$ (this assumes $I - A$ is invertible, i.e. that $1$ is not an eigenvalue of $A$; the point $(I - A)^{-1}\,b$ is the fixed point of the dynamics). In particular, for the evolution equation you obtain \begin{align*} x_{k+1} + (I - A)^{-1}\,b &= A\,(x_{k} + (I - A)^{-1}\,b) + b \\ x_{k+1} &= A\,x_{k} + (A - I)(I - A)^{-1}\,b + b \\ &= A\,x_k - b + b \\ &= A\,x_k. \end{align*} You're welcome, past me.
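A numerical spot check of the conjugation on a random example (assuming, as the algebra requires, that $I - A$ is invertible):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random example; A is scaled so that I - A is comfortably invertible.
A = 0.5 * rng.standard_normal((3, 3))
b = rng.standard_normal(3)
shift = np.linalg.solve(np.eye(3) - A, b)   # (I - A)^{-1} b, the fixed point

x = rng.standard_normal(3)
x_next = A @ x + b                  # one step of the affine system

y = x - shift                       # new coordinates (old x = new x + shift)
y_next = A @ y                      # purely linear dynamics in new coordinates
print(np.allclose(y_next, x_next - shift))  # True
```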
Trying to prove $\sum_{i=1}^{N} i^3 = (\sum_{i=1}^{N} i)^2$
So you wish to prove $$\left(\sum_{i=1}^{N+1} i\right)^2 = \left(\sum_{i=1}^N i\right)^2 + (N+1)^3 $$ Letting $a = \sum_{i=1}^N i$ and $b = N+1$, using the formula $(a+b)^2 = a^2+2ab+b^2$, we obtain $$\left(\sum_{i=1}^{N+1} i\right)^2 = \left(\sum_{i=1}^N i\right)^2 + 2(N+1)\left(\sum_{i=1}^N i\right) + (N+1)^2$$ It suffices to prove $(N+1)^3 = 2(N+1)\left(\sum_{i=1}^N i\right) + (N+1)^2$. By high-school mathematics we know $$\sum_{i=1}^N i = \frac{1}{2}N(N+1)$$ and therefore $$2(N+1)\left(\sum_{i=1}^N i\right) + (N+1)^2 = N(N+1)^2+(N+1)^2=(N+1)^3$$
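A quick empirical check of the identity, for what it's worth:

```python
# Verify that the sum of cubes equals the square of the sum for small N.
for N in range(1, 11):
    assert sum(i**3 for i in range(1, N + 1)) == sum(range(1, N + 1))**2
print("identity verified for N = 1..10")
```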