Simplifying inequality $|2x+1|>|x+2|$
One easy way out in dealing with inequalities of the form $\vert x \vert > \vert y \vert$ is to square them since $$\vert x \vert > \vert y \vert \iff x^2 > y^2$$ In your case, we get that $$(2x+1)^2 > (x+2)^2$$ Rearranging, we get that $$4x^2 + 4x + 1 > x^2 + 4x + 4 \implies 3x^2 > 3 \implies x^2 > 1 \implies x \in (-\infty,-1) \cup (1, \infty)$$
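A quick brute-force spot-check (an addition, not part of the original answer) confirms the equivalence over a grid of sample points:

```python
# Verify numerically that |2x+1| > |x+2| holds exactly when x**2 > 1.
xs = [i / 10 for i in range(-50, 51)]
for x in xs:
    assert (abs(2 * x + 1) > abs(x + 2)) == (x * x > 1)
```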
Series expansion of $\frac{x^n-1}{x-1}$ at $ x=1$.
Hint: let $f(x)=\frac{x^n-1}{x-1}$. We expand $f(x+1)$ near zero as $$f(x+1)=\frac{(x+1)^n-1}{x}=\frac1x\left(1+nx+\frac{n(n-1)}{2!}x^2+\frac{n(n-1)(n-2)}{3!}x^3+\dots-1\right)$$ $$=n+\frac{n(n-1)}{2!}x+\frac{n(n-1)(n-2)}{3!}x^2+\dots$$ To get the expansion around $x=1$, replace $x$ by $x-1$.
Writing a matrix as a product of elementary matrices.
Let's assume that nonzero entries in our matrices are invertible. If $a \ne 0$, then a $2\times 2$ matrix with $a$ in the upper corner can be written as a product of 4 matrices that are elementary in the sense described: $$ \left( \begin{array}{cc} 1 & 0 \\ \frac{c}{a} & 1 \end{array}\right) \left( \begin{array}{cc} a & 0 \\ 0 & 1 \end{array}\right) \left( \begin{array}{cc} 1 & 0 \\ 0 & d-\frac{bc}{a} \end{array}\right) \left( \begin{array}{cc} 1 & \frac{b}{a} \\ 0 & 1 \end{array}\right) = \left( \begin{array}{cc} a & b \\ c & d \end{array}\right) $$ Notice that when $a=1$, three elementary matrices suffice. If $a=0$ but $c\ne 0$, then $$ \left( \begin{array}{cc} 1 & \frac{1}{c} \\ 0 & 1\end{array}\right) \left( \begin{array}{cc} 0 & b \\ c & d\end{array}\right)= \left( \begin{array}{cc} 1 & * \\ c & d\end{array}\right) $$ Since $\left( \begin{array}{cc} 1 & * \\ c & d\end{array}\right)$ can be written as a product of 3 elementary matrices, $\left( \begin{array}{cc} 0 & b \\ c & d\end{array}\right)$ can again be written as the product of 4. A similar argument holds when $a=0$ but $b \ne 0$. I'll leave the case $a=b=c=0$ to the reader.
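As a numerical spot-check of the four-factor decomposition (an addition; numpy is assumed available, and the entries below are arbitrary test values with $a\neq 0$):

```python
import numpy as np

# Build the four "elementary" factors for a generic 2x2 matrix with a != 0
# and check that their product recovers [[a, b], [c, d]].
a, b, c, d = 3.0, 5.0, 7.0, 11.0
E1 = np.array([[1, 0], [c / a, 1]])
E2 = np.array([[a, 0], [0, 1]])
E3 = np.array([[1, 0], [0, d - b * c / a]])
E4 = np.array([[1, b / a], [0, 1]])
M = E1 @ E2 @ E3 @ E4
assert np.allclose(M, [[a, b], [c, d]])
```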
What algorithm is used to calculate $\exp(i 2 \pi/1024)$ in FFT?
Let $a_n:=\exp(i2\pi/2^n).$ Then $a_1=-1,a_2=i,$ and $a_{n+1}=\sqrt{a_n}.$ There are good algorithms to compute square roots of complex numbers, especially roots of unity, and they can be used here. For example, if $x+yi=(u+vi)^2, x^2+y^2=1,$ then $u=\sqrt{(1+x)/2},v=y/(2u).$
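A short sketch of that recursion (an addition, not part of the original answer), starting from $a_2=i$ to avoid the degenerate square root at $a_1=-1$:

```python
import cmath
import math

# Compute a_n = exp(2*pi*i / 2**n) by repeated square roots, using the
# real-arithmetic formula: if x + y*i = (u + v*i)**2 with x**2 + y**2 = 1,
# then u = sqrt((1+x)/2) and v = y/(2u).
def half_angle(x, y):
    u = math.sqrt((1 + x) / 2)
    v = y / (2 * u)
    return u, v

x, y = 0.0, 1.0           # a_2 = i
for n in range(3, 11):    # eight halvings: a_3, a_4, ..., a_10
    x, y = half_angle(x, y)

exact = cmath.exp(2j * math.pi / 1024)   # a_10 = exp(2*pi*i/1024)
assert abs(complex(x, y) - exact) < 1e-12
```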
essential singularity of a function
Here's an idea; correct me if I've made a mistake. If $g \circ f$ doesn't have an essential singularity at $a$, then $\lim_{z \to a}g \circ f(z) \in \mathbb{C} \cup \{\infty\}$. First suppose that the limit is some $c\in\mathbb{C}$. Take $w \in \mathbb{C}$ with $|w-c| > 2$; then there exist $r_1, r_2 >0$ with $B(a, r_1) \subset \Omega$ and $B(w, r_2) \cap g\circ f(B(a,r_1)) = \emptyset$. Yet the Casorati-Weierstrass theorem implies that $f(B(a, r_1) - \{a\})$ is dense in $\mathbb{C}$, and $g(\mathbb{C})$ is itself dense. So let $\epsilon >0$ be such that $B(w,\epsilon) \subset B(w, r_2)$, and take some $z_{\epsilon} \in \mathbb{C}$ with $g(z_{\epsilon}) \in B(w,\epsilon)$. Continuity of $g$ gives a $\delta >0$ with $g(B(z_{\epsilon}, \delta)) \subset B(w,\epsilon)$. By density, there is $z \in f(B(a, r_1)-\{a\})$ with $z \in B(z_{\epsilon}, \delta)$, and then $g(z) \in B(w, r_2)$, which contradicts $B(w, r_2) \cap g\circ f(B(a,r_1)) = \emptyset$. If instead the limit is $\infty$, just take $w = 0$. Again, correct me if there's an error.
what is $F_Y(y)$ based on $X_i$
The key is that a sum of independent normally distributed random variables is again normally distributed, with mean the sum of the means and variance the sum of the variances. Also, if $X_k\sim\mathcal N(0,1^2)$ then $kX_k\sim\mathcal N(0, k^2)$. In general: $Z\sim \mathcal N(\mu_Z,\sigma_Z^2)\implies aZ+b\sim\mathcal N(a\mu_Z+b, a^2\sigma^2_Z)$. Therefore: $$\sum_{k=1}^n k X_k~\sim~\mathcal N\left(0, \sum_{k=1}^n k^2\right)$$ Take it from there. So we have $Y\sim \mathcal N(\mathsf E(Y), \mathsf D(Y))$ for some parameters (specifically $0$ and $\sum_{k=1}^n k^2$ as above). Given that, since $Y^*$ is a linear function of $Y$, namely $\frac{Y-\mathsf E(Y)}{\surd \mathsf D(Y)}$, therefore ...
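A Monte Carlo sanity check (an addition, not part of the original answer) of the mean and variance for the concrete case $n=3$:

```python
import random
import statistics

# For n = 3, the sum X_1 + 2*X_2 + 3*X_3 of independent standard normals
# should have mean 0 and variance 1 + 4 + 9 = 14.
random.seed(0)
samples = [sum(k * random.gauss(0, 1) for k in (1, 2, 3))
           for _ in range(200_000)]
assert abs(statistics.fmean(samples)) < 0.05
assert abs(statistics.variance(samples) - 14) < 0.5
```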
$\neg \textsf{AC}+ \neg\textsf{CH}$
If you consider the continuum hypothesis to be $\aleph_0<\mathfrak m\rightarrow 2^{\aleph_0}\leq\mathfrak m$, then one possible result worth mentioning is: There exists an infinite Dedekind-finite set of reals which is Borel. If $A$ is such a set, then $A\cup\Bbb N$ is a witness for the failure of $\sf CH$ and $\sf AC$. What is perhaps much more surprising is the fact that such a set can be Borel. This is true in Cohen's first model. You can actually replace the infinite Dedekind-finite set by all sorts of sets which are "morally smaller than the continuum" (e.g. a set which is a countable union of countable sets, but not countable itself). Here is a surprising result, which is somewhat in line with what you ask, although not entirely. Assuming $\sf ZF+DC$, we do not know of any model in which there is a non-measurable set and $\sf CH$ holds. In particular there is a model in which every set has the Baire property, but non-measurable sets exist. This is not entirely what you ask for, because it might be consistent that there are non-measurable sets while every set has size continuum and $\aleph_1\neq2^{\aleph_0}$; however Shelah's model where every set of reals has the Baire property (and this contradicts $\sf AC$), but there are non-measurable sets, is one where $\aleph_1<2^{\aleph_0}$. The reason we involve $\sf DC$ in all this is to have a reasonable way of defining a $\sigma$-additive and atomless measure on the reals to begin with.
A limit related to tetration growth rate
Following Nemo's idea, set $x={}^\infty a-{}^na$, so that $x\rightarrow 0$ as $n\rightarrow\infty$. Then $$\frac{1}{^\infty a-^na}-\frac{^\infty a-^{(n+1)}a}{^\infty a(^\infty a-^na)^2\ln a}=\frac{1}{x}-\frac{1-\frac{^{(n+1)}a}{^\infty a}}{(^\infty a-^na)^2\ln a}=\frac{1}{x}-\frac{1-a^{^na-^\infty a}}{x^2\ln a}=\frac{1}{x}-\frac{1-a^{-x}}{x^2\ln a}=\frac{a^{-x}+x\ln a-1}{x^2\ln a}.$$ Expanding $a^{-x}$ around $0$ as a power series $1-x\ln a+\frac{1}{2}x^2(\ln a)^2+O(x^3)$ gives that this expression, as $x\rightarrow 0$, is $$\frac{\frac{1}{2}x^2(\ln a)^2+O(x^3)}{x^2\ln a}=\frac{\ln a}{2}+O(x)\rightarrow\frac{\ln a}{2}.$$
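A numerical sanity check (an addition, not part of the original answer): for $a=\sqrt2$ the tower converges to ${}^\infty a=2$, and the bracketed expression should approach $\ln(a)/2=\ln(2)/4$:

```python
import math

# Climb the tower t -> a**t far enough that x = L - t is small (but not so
# small that floating-point cancellation dominates), then evaluate the
# expression 1/(L - t) - (L - t_next)/(L*(L - t)**2*ln(a)).
a = math.sqrt(2)
L = 2.0                  # ^infinity a for a = sqrt(2)
t = a                    # ^1 a
for _ in range(25):
    t = a ** t
t_next = a ** t
expr = 1 / (L - t) - (L - t_next) / (L * (L - t) ** 2 * math.log(a))
assert abs(expr - math.log(a) / 2) < 1e-4
```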
Formula for the $n^{th}$ derivative of $f(x)$
There's a pattern here to recognise: $$\frac{x^n}{1-x} = \frac{1 - (1-x^n)}{1-x} = \frac{1}{1-x} - \frac{1-x^n}{1-x}.$$ The derivatives of the first summand, if one hasn't memorised them, are easily found by differentiating a few times, spotting a pattern, and proving it via induction. The second summand should be familiar as the closed form of a geometric sum, so its $n^{\text{th}}$ derivative must be $0$ (though not necessarily the lower-order derivatives). And indeed, $\dfrac{1-x^n}{1-x}$ is a polynomial of degree $n-1$ [for $n > 0$], so its $n^{\text{th}}$ derivative is $0$. Starting with a geometric sum $1 + x + \dotsc + x^{n-1}$, multiplying by $1-x$ yields $$(1-x)\sum_{k=0}^{n-1} x^k = \sum_{k=0}^{n-1} x^k - \sum_{k=0}^{n-1} x^{k+1} = \sum_{k=0}^{n-1} x^k - \sum_{m=1}^n x^m = 1 - x^n.$$ For $x\neq 1$, we can divide by $1-x$ and obtain $$\sum_{k=0}^{n-1} x^k = \frac{1-x^n}{1-x}.$$ In that form, the derivatives are easily computed if required, but we know without computation that the $n^{\text{th}}$ derivative of a polynomial of degree $< n$ is $0$.
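A symbolic spot-check of the conclusion (an addition; SymPy is assumed available):

```python
import sympy as sp

# For small n, the n-th derivative of x**n/(1-x) should equal the n-th
# derivative of 1/(1-x), namely n!/(1-x)**(n+1), because the polynomial
# part (1-x**n)/(1-x) of degree n-1 is killed by n differentiations.
x = sp.symbols('x')
for n in range(1, 6):
    lhs = sp.diff(x**n / (1 - x), x, n)
    rhs = sp.factorial(n) / (1 - x)**(n + 1)
    assert sp.simplify(lhs - rhs) == 0
```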
Krull dimension and localization
What is true is that $\dim S^{-1}A\le \dim A$, because $\mathrm{Spec}(S^{-1}A)$ is homeomorphic to a subset of $\mathrm{Spec}(A)$. For $A_{\mathfrak p}$, the dimension varies: if $\mathfrak p$ is a minimal prime, then the localization has dimension $0$; if $\mathfrak p$ is not maximal and $\dim A$ is finite, then $\dim A_{\mathfrak p}<\dim A$. On the other hand, $\dim A$ is the supremum of the $\dim A_{\mathfrak m}$ as $\mathfrak m$ runs over the maximal ideals of $A$. If $A$ is an integral finitely generated algebra over a field, then $\dim A=\dim A_{\mathfrak m}$ for any maximal ideal.
Transformation from cartesian to polar Coordinates of Vector Field
All right! See Example 1, p. 3 in this pdf.
Convexity of a function (volume-to-area ratio)
Right, the first thing to note is that this function is of the form $f/f'$, where of course $f(t) = \int_0^t \sin^m{r} \, dr$. Since this is obviously smooth away from the endpoints, it is sufficient to show that the second derivative is positive. It is easy to check that $f$ satisfies the differential equation $$ (\sin{t})f'' = (m\cos{t}) f'. $$ Differentiating and using this, $$ \left( \frac{f}{f'} \right)' = 1-\frac{ff''}{f'^2} = 1-m\cot{t} \frac{f}{f'}, $$ and differentiating again and using the relation again, $$ \left( \frac{f}{f'} \right)'' = \dotsb = -m\cot{t} + m((m+1)\cot^2{t}+1)\frac{f}{f'}. $$ Multiplying by the positive quantity $(\sin^2{t})f'/[m(1+m\cos^2{t})] = (\sin^{m+2}{t})/[m(1+m\cos^2{t})]$ gives $$ g(t) = f(t)-\frac{\cos{t}\sin^{m+1}{t}}{1+m\cos^2{t}}, $$ and so we want to prove that this is positive. Since $g(0)=0$, $g(t) = \int_0^t g'(r) \, dr$, so it is enough to show that $g'$ is nonnegative. A bit of work shows that $$ g'(t) = \frac{2\sin^{m+2}{t}}{(1+m\cos^2{t})^2} \geq 0, $$ as required.
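The closed form for $g'$ can be double-checked numerically (an addition, not part of the original answer) with a central finite difference:

```python
import math

# Compare f'(t) minus the derivative of the correction term against the
# claimed 2*sin(t)**(m+2) / (1 + m*cos(t)**2)**2 at a few sample points.
def g_prime(t, m, h=1e-6):
    corr = lambda s: math.cos(s) * math.sin(s) ** (m + 1) / (1 + m * math.cos(s) ** 2)
    return math.sin(t) ** m - (corr(t + h) - corr(t - h)) / (2 * h)

for m in (1, 2, 3):
    for t in (0.3, 0.7, 1.2, 2.0):
        claimed = 2 * math.sin(t) ** (m + 2) / (1 + m * math.cos(t) ** 2) ** 2
        assert abs(g_prime(t, m) - claimed) < 1e-6
```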
The point of writing this isomorphism theorem like this?
Because $S/T$ makes no sense unless $T$ is a subgroup of $S$.
Can limit be taken inside a function which has a removable Discontinuity
No: in your example the values just happen to be equal; it does not mean you can take the limit inside. Consider $$g(x)=x^2$$ $$f(x)= \begin{cases} x+1, \quad & \text{ if } x \ne 1, \\ 4, &\text{ if } x = 1\end{cases}$$ We have $$f\left(g(x)\right)= \begin{cases} x^2+1, \quad & \text{ if } x \ne \pm1, \\ 4, &\text{ if } x = \pm1\end{cases}$$ So $$\lim_{x \to 1}f(g(x))=2$$ while $$f\left(\lim_{x \to 1} g(x)\right)=4$$
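A direct numerical rendering of the counterexample (an addition, not part of the original answer):

```python
# f has a removable mismatch at 1; g(x) = x**2 hits that point as x -> 1.
def g(x):
    return x * x

def f(x):
    return 4 if x == 1 else x + 1

# f(g(x)) approaches 2 as x -> 1 ...
near = [1 + 10 ** (-k) for k in range(3, 8)]
assert all(abs(f(g(x)) - 2) < 1e-2 for x in near)
# ... while f applied to the limit g(1) = 1 gives 4.
assert f(1) == 4
```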
"How would Gauss proceed?"
Gauß proved that ${\mathbb Z}[i]$ is a PID using the fact that the class number of forms with discriminant $-4$ is $1$. On the other hand, Gauß only considered quadratic forms with even middle coefficient, so in the case of discriminant $-163$ he would have been forced to use the fact that the number of classes of forms with discriminant $-163$ is $3$, and the rest of the proof would then require additional arguments. I don't think, however, that this was the point of the question, which was aimed at getting binary quadratic forms as an answer.
Writing complex numbers in form $a+bi$
You can express such an expression in the form $a+ib$. Let $$x+iy = \sqrt{i+\sqrt{2}} \\ (x+iy)^2 = \left(\sqrt{i+\sqrt{2}}\right)^2 \\ x^2 - y^2 +2ixy = i+\sqrt{2} \\ $$ Now you only need to solve the equations $x^2 - y^2 = \sqrt{2}$ and $xy = \frac{1}{2}$ to get the values of $x$ and $y$: $$x = \pm\sqrt{\frac{\sqrt{3}+\sqrt{2}}{2}}, \qquad y = \pm\sqrt{\frac{\sqrt{3}-\sqrt{2}}{2}},$$ where the signs of $x$ and $y$ must agree, since $xy=\frac12>0$.
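A quick numerical check of those values (an addition, not part of the original answer):

```python
import math

# Taking both signs positive (the paired negative choice also works),
# the claimed x + iy should square to i + sqrt(2).
x = math.sqrt((math.sqrt(3) + math.sqrt(2)) / 2)
y = math.sqrt((math.sqrt(3) - math.sqrt(2)) / 2)
assert abs((x + 1j * y) ** 2 - (1j + math.sqrt(2))) < 1e-12
```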
Question about relations and equivalence classes.
Instead of 'describing the equivalence classes', it might be more appropriate to ask, for each example, for a set $B$ and a surjective function $f:A\to B$ such that the given equivalence relation on $A$ is just its 'kernel' $\{(a,a'):f(a)=f(a')\}$. This set $B$ then best represents the equivalence classes. For 1., the positive numbers are all in relation with each other, and so are the negative numbers, and there's also $0$, so the function we're looking for is the signum function ${\rm sign}:\Bbb R\to\{-1,0,+1\}$. For 2. and 3., observe that the condition in 2. holds for sets $X,Y$ if and only if either both of them contain $8$ or neither does, so that it can be translated to the same condition as in 3.: $$X\sim Y\iff X\cap\{8\}=Y\cap\{8\}\,.$$ Generalizing this, for any set $U$ and any subset $A\subseteq U$, the relation $X\sim Y\iff X\cap A=Y\cap A$ is an equivalence relation on $P(U)$, and it is the kernel of the 'restriction' function $$f:P(U)\to P(A)\quad X\mapsto X\cap A\,.$$ So the equivalence classes of 2. are represented by elements of $P(\{8\})=\big\{\emptyset,\,\{8\}\big\}$ and, similarly, those of 3. by $P(\{1,2\})=\big\{\emptyset,\,\{1\},\, \{2\},\, \{1,2\}\big\}$.
If the polytope is unbounded then there is no optimal solution
a) Consider the problem $\min\{1^Tx : x \geq 0\}$. The feasible set is unbounded (it is the entire nonnegative orthant), but the optimal solution is $x = 0$. b) For the sake of definiteness, suppose that the feasible region is $P = \{ x : x \geq 0, Ax \leq b\}$ (every feasible region can be brought into this form). If $x^*$ and $y^*$ are both feasible and optimal, then $c^Tx^*=c^Ty^*$. If $\lambda \in [0,1]$, then $c^T(\lambda x^* + (1-\lambda) y^*) = \lambda c^Tx^* + (1-\lambda) c^Ty^* = c^Tx^*$, so $\lambda x^* + (1-\lambda) y^*$ is optimal. Also, $\lambda x^* + (1-\lambda) y^* \geq 0$, and $A(\lambda x^* + (1-\lambda) y^*) \leq \lambda b + (1-\lambda) b = b$, which means that $\lambda x^* + (1-\lambda) y^* \in P$. Thus the entire line segment between $x^*$ and $y^*$ is feasible and optimal. c) Taking the problem $\min\{0^Tx : x \in \mathbb{R}^n\}$, we see that every $x$ is feasible, but there are no basic feasible solutions.
Showing that $\textbf{a} \cos \omega t + \textbf{b} \sin \omega t$ traces out an ellipse where $\textbf{a}$ and $\textbf{b}$ are arbitrary vectors.
Consider the map $$(x, y) \mapsto x\mathbf a + y \mathbf b.$$ This transforms the unit circle into the set you're interested in. N.B.: if $\mathbf a$ and $\mathbf b$ are linearly dependent, then the claim is false, unless you allow "a line segment" or "a single point" as special cases of an ellipse. So I'm going to assume they're independent. Assuming that the two (bold) vectors are linearly independent, the image $E$ of the circle under such a transformation is always an ellipse, for if $S$ is the matrix representing the inverse of this transformation, we have that for any $\mathbf e \in E$, $$ \| S \mathbf e \| = 1 $$ which can be rewritten $$ \mathbf e^t S^t S \mathbf e = 1.$$ Now $S^t S$ is symmetric and positive definite, and hence diagonalizable with positive eigenvalues, so $S^tS = U^t D U$ where $U$ is orthogonal and $D$ is diagonal with positive entries; hence $S^t S = U^t H^t H U$ where $H$ is a diagonal matrix whose entries are the square roots of the diagonal entries of $D$, i.e. $H^2 = H^t H = D$. This gives us $$ \mathbf e^t S^t S \mathbf e = 1\\ \mathbf e^t U^t H^t H U \mathbf e = 1. $$ Changing coordinates by $U$, i.e. $\mathbf f = U\mathbf e$, you get $$ \mathbf f^t H^t H \mathbf f = 1, $$ which can be unravelled into the equation of an axis-aligned ellipse in the $U$-coordinate system. Short form: apply Sylvester's law of inertia (a deep theorem from linear algebra) to avoid writing a whole lot of algebraic stuff.
Show that $K_0(A)$ is a countable group if $A$ is a unital, separable C* algebra
Since projections at distance less than $1$ from each other are unitarily equivalent, the Murray-von Neumann equivalence classes of projections are separated by distance at least $1$, so disjoint balls of radius $1/2$ can be placed around representatives, all within the ball of radius two. Now the fact that $A$ is separable allows only countably many disjoint balls of a given radius within a ball. So we only have countably many classes in $A$. The same reasoning applies to $M_n(A)$. As a countable union of countable sets is countable, the total number of classes contributing to $K_0(A)$ is countable.
Fundamental Question on how to prove $a \not\in K(b)$ where $a,b$ algebraic over $K$
If $\alpha\in\mathbb C$, then $\mathbb{Q}(\alpha)$ is the smallest subfield of $\mathbb C$ containing $\alpha$. But$$\left\{a+b\sqrt2\,\middle|\,a,b\in\mathbb Q\right\}$$is a field which contains $\sqrt2$ (for inverses, note that $\frac{1}{a+b\sqrt2}=\frac{a-b\sqrt2}{a^2-2b^2}$ has the same form), and it is clearly the smallest such subfield. Therefore,$$\mathbb{Q}\left(\sqrt2\right)=\left\{a+b\sqrt2\,\middle|\,a,b\in\mathbb Q\right\}.$$
Incidence function $\phi: E(G) \rightarrow V(G) \times V(G)$ of union of graphs $G = F \cup H$.
If $e$ is a common edge of $F$ and $H$, it does not matter. According to the definitions, the incidence function of $G$ can also be stated as $$ \phi \colon E(F) \cup E(H) \rightarrow \bigl(V(F)\cup V(H)\bigr) \times \bigl(V(F)\cup V(H)\bigr)$$ so the incidence function remains well-defined for union graphs. If $e$ is common to both $F$ and $H$, it would not matter, since $e \in E(F) \cup E(H)$ occurs only once. Now, about varying endpoints of an edge: this is impossible, since an edge is defined by its endpoints. I don't know which book you use, but, for example, Diestel defines an edge as a $2$-subset of $V(G)$, hence it is determined by its endpoints. If $e \in E(F) \cap E(H)$, then the endpoints of $e$, say $v_1$ and $v_2$, are also common to both $V(F)$ and $V(H)$. So the function remains unaffected. Hope I helped.
Evaluate $\int_0^\infty \frac{1}{(x+1)(\pi^2+\ln(x)^2)}dx$
$$\int_{0}^{+\infty}\frac{dx}{(x+1)(\pi^2+\log^2 x)}=\int_{0}^{1}\frac{dx}{(x+1)(\pi^2+\log^2 x)}+\int_{0}^{1}\frac{dx}{x(x+1)(\pi^2+\log^2 x)},$$ where the second term is the integral over $(1,+\infty)$ after the substitution $x\mapsto\frac1x$. Since $\frac{1}{x+1}+\frac{1}{x(x+1)}=\frac1x$, the sum equals $$ \int_{0}^{1}\frac{dx}{x(\pi^2+\log^2 x)}\stackrel{x\mapsto e^t}{=}\int_{-\infty}^{0}\frac{dt}{\pi^2+t^2}=\int_{0}^{+\infty}\frac{du}{\pi^2+u^2}=\left[\frac{\arctan(u/\pi)}{\pi}\right]_{0}^{+\infty}=\frac{1}{2}.$$ An overkill is to exploit the integral representation of Gregory coefficients.
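A numerical cross-check (an addition; the substitution used here is my own device, not part of the original answer): putting $x=e^{\pi\tan\theta}$ maps $(0,+\infty)$ to $(-\pi/2,\pi/2)$ and turns the integrand into $\sigma(\pi\tan\theta)/\pi\,d\theta$ with $\sigma$ the logistic function, after which a midpoint rule suffices:

```python
import math

# Numerically stable logistic function.
def sigma(t):
    return 1 / (1 + math.exp(-t)) if t >= 0 else math.exp(t) / (1 + math.exp(t))

# Midpoint rule for (1/pi) * integral of sigma(pi*tan(theta)) over (-pi/2, pi/2).
N = 20_000
total = sum(sigma(math.pi * math.tan(-math.pi / 2 + (k + 0.5) * math.pi / N))
            for k in range(N)) / N
assert abs(total - 0.5) < 1e-6
```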
How to prove the statement?
Let $a, b$ be two integers not divisible by $3$. Then each of $a$ and $b$ is either $1$ or $2$ mod $3$. If they are the same mod $3$, then $a-b\equiv 0\pmod 3$, so $a-b$ is divisible by $3$. If they are different mod $3$, then $a+b\equiv 1+2\equiv 0\pmod 3$, so $a+b$ is divisible by $3$.
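An exhaustive check over a range of integers (an addition, not part of the original answer):

```python
# If neither a nor b is divisible by 3, then 3 divides a - b or a + b.
for a in range(-30, 31):
    for b in range(-30, 31):
        if a % 3 != 0 and b % 3 != 0:
            assert (a - b) % 3 == 0 or (a + b) % 3 == 0
```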
Tensor product of two cyclic modules
The question is equivalent to the isomorphism $A/I \otimes_A A/J \cong A/(I+J)$ of $A$-modules. Actually they are also isomorphic as $A$-algebras. Beyond the usual proof which you can read everywhere, there is a one-line proof with the Yoneda-Lemma.
Show that $ a \equiv 1 \pmod{2^3 } \Rightarrow a^{2^{3-2}} \equiv 1 \pmod{2^3} $
Yes, it is correct. In general, if $a \equiv 1 \pmod m$, then we have $a^n \equiv 1 \pmod m$ for any $n$.
Local homeomorphism from a locally Euclidean space implies Euclidean space
Great! Then take $U_x\cap V_x$.
Characterize the free lattice-ordered-group on one generator
Yes, you have the antichain you mention. Now generate a free distributive lattice with that antichain. The result is the free $\ell$-group on one generator. When done, you get that the free $\ell$-group on one generator is isomorphic to $\mathbb Z\times \mathbb Z$ with coordinatewise operations and order. It is freely generated by $x=(1,-1)$. This fact is the content of Theorem 17 of Birkhoff, Garrett Lattice-ordered groups. Ann. of Math. (2) 43 (1942), 298-331.
construct a function which satisfied the given statement?
The statement is true because in $\Bbb R$ a limit at a point $x_0$ can only be approached from two directions: from below $x_0$ and from above $x_0$. It is even true without the condition that the lateral limits be finite, since $f$ could not be continuous in that case. To see this, consider the case where $\lim_{x \uparrow 0} f'(x)=+\infty$. This means the function grows more and more while approaching $0$ from below, and the slope of its tangent line gets bigger and bigger, approaching a vertical line. Then the function must go to $+\infty$, since it never stops growing and has a vertical asymptote at $0$; in particular it cannot be continuous there. The same goes for the case with $-\infty$, or for the limit from above. Now the result about lateral limits (it is not specific to the derivative of a function; it works for any function). Given a point $x_0$ and a function $g:\Bbb R\to \Bbb R$ such that $\lim_{x \uparrow x_0} g(x) = \lim_{x \downarrow x_0} g(x)=l$, consider $\lim_{x \to x_0} g(x)$ and take any $\epsilon>0$. Since the two lateral limits exist and equal $l$, there must be $\delta_1,\delta_2>0$ such that $\left|g(x)-l\right|<\epsilon$ whenever $0<x_0-x<\delta_1$, and such that $\left|g(x)-l\right|<\epsilon$ whenever $0<x-x_0<\delta_2$. Taking $\delta=\min({\delta_1,\delta_2})$ we now have $\left| g(x)-l\right|<\epsilon$ whenever $0<|x-x_0|<\delta$, so $\lim_{x \to x_0} g(x)=l$. For your statement, then: the first result shows the lateral limits of $f'$ must be finite (as $f$ is continuous at the point), and since they agree, the second result shows that $\lim_{x\to x_0}f'(x)$ exists; by the mean value theorem this limit equals the derivative at the point, so $f$ is differentiable there.
Closed-form for rational log integral: $\int_0^1\left(\frac{\ln x}{1-x}\right)^{n}dx$
In this answer I show that $$ \int_0^1\left(\frac{\log(t)}{1-t}\right)^n\mathrm{d}t=(-1)^nn\sum_{j=0}^{n-1}\genfrac{[}{]}{0}{0}{n-1}{j}\zeta(n-j+1) $$ where $\genfrac{[}{]}{0}{0}{n}{k}$ is a Stirling number of the first kind.
Could any one tell me how to show the expectation is zero for this random variable?
Hint $$\mathbb E\left[\frac{1}{X}\boldsymbol 1_{X>y} \right]\leq \frac{1}{y}.$$
If a function is $n$ times continuously differentiable, prove that there are polynomials of degree lower than $n$ with the following conditions.
Fix $a$ and $\delta>0$, and define \begin{align} P(x)=\sum_{k=0}^{n-1}\frac{f^{(k)}(a)}{k!}(x-a)^k\,. \end{align} By Taylor's theorem, for each $x\in[a,a+\delta]$, there exists $\xi_x\in[a,x]$ such that \begin{align} f(x)=P(x)+\frac{f^{(n)}(\xi_x)}{n!}(x-a)^n\,. \end{align} Let $A$ and $B$ be the minimum and maximum of $f^{(n)}$ on $[a,a+\delta]$ respectively. Put \begin{align} p(x)&=P(x)+A(x-a)^n/n!\,, \\ q(x)&=P(x)+B(x-a)^n/n!\,. \end{align} Then $p$ and $q$ have the desired properties.
Counterexamples related to a convergent positive series
Hint. As regards (c) and (e) consider $$a_n=\frac{1+(-1)^n+2^{-n}}{n^2}.$$ Can you modify it in order to obtain a counterexample for (f)?
Find the region in which the complex number lies
Hint: It's actually nothing but an application of Rouché's theorem (see this question: Finding number of roots of a polynomial in the unit disk) for the function $g(z)=1$. The bound is obtained via comparison with the geometric series, and the answer obtained is that the claim holds.
Existence of a complex sequence with given property
Hint: Solve $w^{2}-200iw-1=0$. Then choose $\zeta_n \to \infty$ such that $e^{i\zeta_n} =w$ for all $n$ (take $c$ with $e^{ic}=w$ and take $\zeta_n =2n\pi +c$). Finally take $z_n=1-\frac 1{\zeta_n}$.
Why are all the letters parameters in the Pythagorean theorem?
Yes. Variables are just placeholders. Congratulations! You just discovered trigonometry! Think of what this means. If $(x,y)$ is a point on a circle with radius $r$ then $x^2 + y^2 = r^2$, which means there is always a right triangle that has $|x|, |y|$ as legs and $r$ as hypotenuse. In other words: if you take the point $A=(x,y)$, drop it to the $x$-axis, and take the points $B= (x,0)$ and $C = (0,0)$, then the triangle $\triangle ABC$ will always be a right triangle with a hypotenuse of length $r$. Welcome to the wonderful world of trigonometry! (Seriously... Trigonometry is ENTIRELY about this.) ===== Actually, all this is backwards. FIRST we know that $A=(x,y)$ and $B=(x,0)$ and $C=(0,0)$ must form a right triangle, because the $x$- and $y$-axes are perpendicular and $\overline {AB}$ is parallel to the $y$-axis. And then BECAUSE it is a right triangle we know that $\overline {AC}$, which equals the radius of the circle $r$, satisfies $r^2 = (AC)^2= (AB)^2 + (BC)^2= x^2 + y^2$. And that is how we came up with the formula for the circle in the first place! Similarly, this is also how we came up with the distance formula. The distance between $(x_1, y_1)$ and $(x_2, y_2)$ is $\sqrt{(x_2-x_1)^2 + (y_2 - y_1)^2}$ because the points $(x_1, y_1), (x_2, y_1),(x_2, y_2)$ make a right triangle with legs of lengths $|x_2-x_1|$ and $|y_2 - y_1|$ and hypotenuse the distance between the two points. And a circle always having a distance of $r$ from the center means for any $(x,y)$ on the circle that $r$, the distance between $(x,y)$ and $(0,0)$, must be $\sqrt{(x-0)^2 + (y-0)^2}$. Or in other words, it must be that $x^2 + y^2 = r^2$.
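A tiny numeric illustration of the two formulas (an addition, not part of the original answer):

```python
import math

# A point on the circle of radius 5 satisfies x**2 + y**2 = r**2,
# and the distance formula from the origin reduces to exactly that.
x, y = 3.0, 4.0
r = 5.0
assert x**2 + y**2 == r**2
assert math.dist((0, 0), (x, y)) == r
```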
Finitely-generated group such that all (non-trivial) normal subgroups have finite index implies all (non-trivial) subgroups have finite index?
Counterexample: Let $G$ be the infinite dihedral group, i.e. $$ G = \langle a,x : x^2=e, xax=a^{-1} \rangle. $$ Note $\langle a\rangle \cong \mathbb{Z}$ is a normal subgroup of index $2$. Let $N$ be a nontrivial normal subgroup of $G$. If $a^k\in N$ for any $k\neq 0$, then $N$ contains a subgroup of $\langle a \rangle$ of finite index, hence $N$ must have finite index in $G$. Now suppose $xa^k \in N$ for some $k$ (note $k$ may now be $0$). Then as $N$ is normal, $N$ must contain $$ a(xa^k)a^{-1} = xa^{k-2}. $$ Then $N$ must contain $$ (xa^k)(xa^{k-2}) = a^{-2}, $$ and hence $N$ has finite index by the argument above. But using the relation $xax=a^{-1}$ (via $xa=a^{-1}x$), every nontrivial element of $G$ can be written as $a^k$ for $k\neq 0$ or $xa^k$ for any $k$. So every nontrivial normal subgroup of $G$ has finite index. But the subgroup $\langle x \rangle \cong \mathbb{Z}/2$ does not have finite index.
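The relations used above can be double-checked in a concrete matrix model (an addition; numpy is assumed available, and $a$ is represented as translation by $1$, $x$ as the reflection $t\mapsto -t$, acting affinely on the line):

```python
import numpy as np

# 2x2 affine matrices: a is translation by 1, x is reflection.
a = np.array([[1, 1], [0, 1]])
x = np.array([[-1, 0], [0, 1]])
ainv = np.array([[1, -1], [0, 1]])

def apow(k):
    # a**k as a matrix (translation by k)
    return np.array([[1, k], [0, 1]])

assert (x @ x == np.eye(2)).all()          # x**2 = e
assert (x @ a @ x == ainv).all()           # xax = a**-1
for k in range(-3, 4):
    # a (x a**k) a**-1 = x a**(k-2)
    assert (a @ (x @ apow(k)) @ ainv == x @ apow(k - 2)).all()
    # (x a**k)(x a**(k-2)) = a**-2
    assert ((x @ apow(k)) @ (x @ apow(k - 2)) == apow(-2)).all()
```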
Coordinates of the incenter of a triangle
The bisector of angle $A$ intersects side $BC$ at a point $A'$, and according to angle bisector theorem we have: $A'B:A'C=c:b$. It follows that $A'$ is a weighted average of $B$ and $C$, with weights given by the lengths of the opposite sides: $$ A'={b\over b+c}B+{c\over b+c}C, $$ and of course we have analogous expressions for the similarly defined points $B'$ and $C'$. The incenter $I$ of $ABC$ is the intersection of $AA'$, $BB'$ and $CC'$. It is then hardly surprising that it turns out to be the weighted average of $A$, $B$ and $C$. For instance: as $I$ belongs to segment $AA'$ we can write: $$ I=(1-t)A+tA'=(1-t)A+{tb\over b+c}B+{tc\over b+c}C, $$ for some $t\in[0,1]$. But the expression for $I$ must be symmetric when exchanging $A$, $B$, $C$ among them, and it is easy to verify that $t={b+c\over a+b+c}$ does the trick, leading to your formula for $I$: $$ I={a\over a+b+c}A+{b\over a+b+c}B+{c\over a+b+c}C. $$
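A numerical check of the final formula (an addition, not part of the original answer): for a concrete triangle, the weighted average $(aA+bB+cC)/(a+b+c)$ should be equidistant from all three sides, which characterizes the incenter:

```python
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a = math.dist(B, C)   # length of the side opposite A
b = math.dist(A, C)
c = math.dist(A, B)
s = a + b + c
I = ((a * A[0] + b * B[0] + c * C[0]) / s,
     (a * A[1] + b * B[1] + c * C[1]) / s)

def dist_to_line(P, Q, R):
    # distance from point P to the line through Q and R
    (px, py), (qx, qy), (rx, ry) = P, Q, R
    return abs((ry - qy) * (px - qx) - (rx - qx) * (py - qy)) / math.dist(Q, R)

d1 = dist_to_line(I, B, C)
d2 = dist_to_line(I, A, C)
d3 = dist_to_line(I, A, B)
assert abs(d1 - d2) < 1e-12 and abs(d2 - d3) < 1e-12
```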
Embedding $\mathbb Q^c$ into $\mathbb Q^c_p$
Well yes. More generally, if $E \subseteq F$ are fields, then the algebraic closure of $F$ contains a copy of the algebraic closure of $E$. Specifically, let $F^c$ be the algebraic closure of $F$, and let $E'$ be the subfield of $F^c$ consisting of all roots of all polynomials with coefficients in $E$. Then $E'$ is algebraically closed, and is isomorphic to the algebraic closure of $E$. Your question initially contained the word "natural". This word is a bit dicey — the subfield $E'$ is determined in an entirely natural way, but if you have a specific algebraic closure $E^c$ for $E$ in mind, there won't be a natural isomorphism $E^c \to E'$. This is because, in general, there isn't a natural isomorphism between two different algebraic closures of the same field. In particular, there is a copy of $\mathbb{Q}^c$ sitting inside of the complex numbers $\mathbb{C}$, and there is also a copy of $\mathbb{Q}^c$ sitting inside of $\mathbb{Q}_p^c$. These copies are isomorphic, but there isn't a single "best" isomorphism between them. For example, $\mathbb{Q}_p^c$ will always contain two different square roots of $2$, but there's not an obvious way to label one of these as the "positive" square root and the other as "negative" square root.
Solving the Diff. Eq: $y''+9y=36x\cos(3x)$
I'm not sure what method you're using there. If you want to use variation of parameters, then we use an ansatz of a linear combination of the two homogeneous solutions $\cos(3x)$ and $\sin(3x)$, $$y_p(x) = a(x)\cos(3x) + b(x)\sin(3x)$$ Then after some algebra and imposing the condition $a'(x)\cos(3x) + b'(x)\sin(3x) = 0$, we have expressions for $a'$ and $b'$, $$a'(x) = - \frac{\sin(3x)\cdot 36x\cos(3x)}{W}, \ \ \ b'(x) = \frac{\cos(3x)\cdot 36x\cos(3x)}{W}$$ where $W$ is the Wronskian of the two homogeneous solutions $\cos 3x$ and $\sin 3x$, $$W = \cos(3x)(\sin(3x))' - \sin(3x)(\cos(3x))' = 3\cos^2(3x) + 3\sin^2(3x) = 3$$ Hence to find $a$ and $b$, integrate the expressions for $a'$ and $b'$. Finally, construct the particular solution $y_p$.
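As a sanity check (an addition; SymPy is assumed available), the standard variation-of-parameters assignments $a'=-y_2 g/W$, $b'=y_1 g/W$ with $y_1=\cos 3x$, $y_2=\sin 3x$, $g=36x\cos 3x$, $W=3$ really do yield a particular solution:

```python
import sympy as sp

x = sp.symbols('x')
W = 3
a = sp.integrate(-sp.sin(3 * x) * 36 * x * sp.cos(3 * x) / W, x)
b = sp.integrate(sp.cos(3 * x) * 36 * x * sp.cos(3 * x) / W, x)
yp = a * sp.cos(3 * x) + b * sp.sin(3 * x)

# The residual of y'' + 9y = 36x*cos(3x) should vanish identically;
# check it numerically at a few sample points.
residual = sp.diff(yp, x, 2) + 9 * yp - 36 * x * sp.cos(3 * x)
for v in (0.2, 0.9, 1.7):
    assert abs(float(residual.subs(x, v).evalf())) < 1e-8
```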
What is the cokernel of this map $\bar{1} \mapsto \bar{13}$?
It's correct. Because $\Bbb Z_{39}/\langle 13\rangle\cong\Bbb Z_{13}$, not so much by Lagrange, but by, say, the first isomorphism theorem. That is, it is the homomorphic image of the cyclic $\Bbb Z_{39}$ (so it's cyclic) and its order is $13$.
Method of false Position in Optimization: Taking Derivatives
Yes, it's very common in optimization to approximate arbitrary functions by quadratic ones such as $q$. In a sense $q$ is a Taylor expansion of $f$, though not exactly, since the second derivative is approximated by a finite difference.
Combinations/Permutations Formula Help - Total Potential Portfolio Allocations
You can do this with the stars and bars method. First note that each asset needs to have at least $1$% allocated, so set aside $10$% for that allocation. We want to find out, then, how to split the remaining $90$% among $10$ different assets. Let each $1$% of the $90$% be represented by a star, meaning that we have $90$ stars. To separate the $10$ assets, we need to use $10 - 1 = 9$ bars. Consider all possible arrangements of these stars and bars. All the stars to the left of the first bar will be invested in asset $1$, all the stars to the left of the second bar and to the right of the first bar will be invested in asset $2$, between the second and third into asset $3$, and so on, up to the stars to the right of the ninth bar being invested in asset $10$. To solve this problem, then, we just need to count the number of possible arrangements of the $9$ bars and $90$ stars. See if you can finish the problem from here. For further reference on stars and bars, you can check out this link: https://brilliant.org/wiki/integer-equations-star-and-bars/#stars-and-bars
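The stars-and-bars count can be sanity-checked on a small analogue (an addition, not part of the original answer):

```python
from itertools import product
from math import comb

# Small analogue: allocations of 10 units among 3 assets, each at least 1,
# should number C(10-1, 3-1) = C(9, 2) = 36.
small = sum(1 for alloc in product(range(1, 11), repeat=3) if sum(alloc) == 10)
assert small == comb(9, 2) == 36

# The original problem: 100 units among 10 assets, each at least 1,
# i.e. 90 stars split by 9 bars.
print(comb(99, 9))
```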
Finding zeros of a function
A priori, I suppose that $x=0$ cannot be an interesting zero of the function $$f(x)=1-\dfrac{k}{x}-\dfrac{k}{3}\dfrac{e^{-ax}}{x}+\dfrac{4k}{3}\dfrac{e^{-bx}}{x}$$ So, consider $$g(x)=x\,f(x)=x-k-\dfrac{k}{3}e^{-ax}+\dfrac{4k}{3}e^{-bx}$$ We have $$g(0)=0 \qquad g'(0)=1+\frac{1}{3} k (a-4 b)\qquad g''(0)=-\frac{1}{3} k \left(a^2-4 b^2\right)$$ So, if $g'(0)<0$ and $g''(0) >0$, there is a chance to find something. Expanding $g(x)$ as a Taylor series around $x=0$, we have $$g(x)=x \left(\frac{1}{3} k (a-4 b)+1\right)-\frac{1}{6} x^2 \left(k \left(a^2-4 b^2\right)\right)+\frac{1}{18} k x^3 \left(a^3-4 b^3\right)-\frac{1}{72} x^4 \left(k \left(a^4-4 b^4\right)\right)+O\left(x^5\right)$$ Dividing by $x$ and using series reversion $$x=t+\frac{\left(a^3-4 b^3\right)}{3 \left(a^2-4 b^2\right)}t^2 +\frac{ \left(5 a^6+12 a^4 b^2-64 a^3 b^3+12 a^2 b^4+80 b^6\right)}{36 \left(a^2-4 b^2\right)^2}t^3+O\left(t^{4}\right)$$ where $t=\frac{2k( a -4 b) +6}{(a^2 -4 b^2) k}$ For the case $a=1,b=2,k=1$ considered by @GEdgar, the above gives as an estimate $$x=\frac{5605384}{6834375}=0.820175$$ which is not fantastic. But, starting with this estimate, Newton's method will work like a charm (one single iteration for $8$ exact significant figures!). $$\left( \begin{array}{cc} n & x_n \\ 0 & 0.82017507 \\ 1 & 0.92826114 \end{array} \right)$$ Edit Just for the fun of it, I built the series expansion up to $O\left(x^{15}\right)$ (this is just ridiculous!) and used series reversion (I shall not report any formula here since they are monsters). For the worked case, the estimate is $$x=\frac{116953055942942882914859816106777623661064}{126941147255319208068560063838958740234375}$$ which is $0.921317$.
Oblique asymptote position
Your division to find the oblique asymptote gives $\displaystyle f(x)=x-3+\frac{1}{x-2}$, so you are correct that the graph of the function is above the asymptote when $x>2$, and it is below the asymptote when $x<2$. For example, $f(100)\approx 97.0102>97=100-3$.
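A quick numerical check of those claims (an addition, not part of the original answer):

```python
# f(x) = x - 3 + 1/(x-2): above y = x - 3 for x > 2, below for x < 2.
f = lambda x: x - 3 + 1 / (x - 2)
assert abs(f(100) - 97.0102) < 1e-3
assert f(100) > 100 - 3
assert f(0) < 0 - 3
```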
Showing that a collection of sets is a $\sigma$-algebra: either set or complement is countable
Consider the four cases: $A$, $B$ countable. $A$, $B^c$ countable. $A^c$, $B$ countable. $A^c$, $B^c$ countable. What can be said of $A\cap B$ or $(A\cap B)^c$?
largest singular value of gaussian random matrix
I think you can imitate the proof of Theorem 1.19 from your notes. Apologies if my approach is a little clumsy. One can show that $\|A\| = \sup_{|u|_2 \le 1, |v|_2 \le 1} u^\top A v$. Then $E\|A\| = E[ \sup_{|u|_2\le 1, |v|_2 \le 1} u^\top A v]$. One can obtain an $1/2$-net $\mathcal{N}^n$ over $\mathcal{B}_2^n$ with $6^n$ points. Similarly one obtains a $1/2$-net $\mathcal{N}^m$ over $\mathcal{B}_2^m$ of size $6^m$. So writing $$u^\top A v = (u-x)^\top A (v-y) + x^\top A v + u^\top A y - x^\top A y$$ where $x \in \mathcal{N}^n$, $y \in \mathcal{N}^m$, and $|x-u|_2 \le 1/2$ and $|y-v|_2 \le 1/2$ yields $$E[\sup_{u \in \mathcal{B}_2^n, v \in \mathcal{B}_2^m} u^\top A v] \le E[\sup_{x \in \mathcal{N}^n, y \in \mathcal{N}^m} x^\top A y] + E[\sup_{x \in \mathcal{N}^n, v \in \mathcal{B}_2^m/2} x^\top A v] + E[\sup_{u \in \mathcal{B}_2^n/2, y \in \mathcal{N}^m} u^\top A y] + E[\sup_{u \in \mathcal{B}_2^n/2, v \in \mathcal{B}_2^m/2} u^\top A v]. $$ Rearranging leads to $$\frac{3}{4} E[\sup_{u \in \mathcal{B}_2^n, v \in \mathcal{B}_2^m} u^\top A v] \le E[\sup_{x \in \mathcal{N}^n, y \in \mathcal{N}^m} x^\top A y] + E[\sup_{x \in \mathcal{N}^n, v \in \mathcal{B}_2^m/2} x^\top A v] + E[\sup_{u \in \mathcal{B}_2^n/2, y \in \mathcal{N}^m} u^\top A y].$$ The first term on the right-hand side is the maximum of $6^{n+m}$ sub-Gaussian random variables with variance proxy $\sigma^2$, so it is $\le \sigma \sqrt{2 (m+n) \log 6}$. I believe you can bound the other two terms by doing a further net argument and obtaining the same $c \sigma \sqrt{m+n}$ rate. Finally $\sqrt{m+n} \le \sqrt{m} + \sqrt{n}$.
When are semidirect products isomorphic?
Consider the special case where $\phi_2$ is constant equal to the identity, so that one of the semi-direct products is actually a direct product. The converse of your proposition in that case would be : If $N \rtimes_{\phi_1} H \cong N\times H$ then there is some $g\in \operatorname{Aut}(N)$ such that $\phi_2(h)=g\phi_1(h)g^{-1}$ for all $h\in H$, which in turn implies that $\phi_1$ is constant equal to the identity as well. But it is well-known that non-trivial actions can lead to direct products.
Find the absolute maximum and absolute minimum values of f on the given interval.
At a local maximum or minimum $f^\prime(x_0) = 0$. You can determine the character of the stationary point by looking at how $f$ varies either side of $x_0$ or by calculating the value of $f^{\prime\prime}(x_0)$.
Determinant n exponent
In you last step you must replace $(detA^k)$ in $(detA^k) (detA)= (detA)^{k+1}$ by $(detA)^k$(by induction hypothesis) and thus you will get $(detA)^k (detA)= (detA)^{k+1}$, and hence proved by induction.
Prove that $\operatorname{ord}_{3^{2n}+3^n+1}2 \equiv 0 \pmod{4}$
Clearly, it's sufficient to find a divisor $d$ of $3^{2n}+3^n+1$ such that $\operatorname{ord}_d 2 \equiv 0 \pmod 4$. If $n$ is even, say $n=2k$, then $$ 3^{2n} + 3^n + 1 = (3^{k})^4 + (3^k)^2 + 1 = \bigl([3^k]^2+3^k+1\bigr)\bigl([3^k]^2-3^k+1\bigr), $$ so it is divisible by $3^{2k}+3^k+1$. And when $n$ is odd, then $$ 3^{2n} + 3^n + 1 = (3^n - 1)^2 + 3^{n+1} $$ is a sum of two coprime squares, hence all its prime divisors are congruent to $1$ modulo 4 (that's obvious if you're familiar with gaussian integers). Moreover, $$ 3^{2n} + 3^n + 1 \equiv 1 + 3 + 1 = 5 \pmod 8, $$ which means that it has a prime divisor $p \equiv 5 \pmod 8$. And $2$ is a quadratic non-residue modulo that $p$, which means that $$ 2^{(p-1)/2} \equiv -1 \pmod p. $$ Put for clarity $p = 4k+1$; note $k$ is odd since $p\equiv 5 \pmod 8$. Now it's easy to see that $\operatorname{ord}_p 2 \equiv 0 \pmod 4$: $\operatorname{ord}_p 2$ divides $(p-1) = 4k$ but does not divide $(p-1)/2 = 2k$, so (as $k$ is odd) it must be divisible by $4$.
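The claim is easy to test by brute force; here is a check (Python) that the multiplicative order of $2$ modulo $3^{2n}+3^n+1$ is divisible by $4$ for small $n$:

```python
def mult_order_of_2(m):
    # order of 2 in (Z/mZ)^*; m must be odd so that gcd(2, m) = 1
    k, x = 1, 2 % m
    while x != 1:
        x = (x * 2) % m
        k += 1
    return k

orders = [mult_order_of_2(3 ** (2 * n) + 3 ** n + 1) for n in range(1, 7)]
```

For instance $n=1$ gives the modulus $13$, and $\operatorname{ord}_{13}2 = 12 \equiv 0 \pmod 4$.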
Simple algebra help; shouldn't it be $-e^6$?
Note that $$1 - [1 - e^{-\frac{.5}{.25}}] =1-1+e^{-2}=e^{-2}$$
Estimating Poisson $\theta$ only from which percentage of intervals have events
Suitable assumptions might be that $t_i$ are known and that this is a Poisson process with a uniform rate, and that the time intervals don't overlap. Let $$ X_i = \left.\begin{cases} 1 & \text{if at least one emission in the $i$th time period}, \\ 0 & \text{otherwise}, \end{cases}\right\} = \begin{cases} 1 & \text{if }y_i\ge 1, \\ 0 & \text{if }y_i=0. \end{cases} $$ Then $\Pr(X_i=0)= e^{-\theta t_i}$ and $\Pr(X_i=1)=1-e^{-\theta t_i}$. So $$ L(\theta) = \prod_{i=1}^n (e^{-\theta t_i})^{1-X_i} (1-e^{-\theta t_i})^{X_i} $$ This allows a lot of simplification in the case where $t_1=\cdots=t_n$.
Find Expected Value of Random Variables with Indicator Variables
For part A, you get ${9 \choose 4} / {10\choose 5}$. The numerator is the number of ways that you can select 4 beers out of the remaining 9. For part B, assume that Linday selects beers 1 through 5 (just relabel the beers if necessary), and let $X_i$ take the value 1 if Simon selects beer $i$ and 0 otherwise. The question asks for $E(\sum_{i=1}^5 X_i)$. We have computed $EX_i$ in part A, so the answer is 5 times the answer for part A. For part C, let $X_i$ take the value 1 if $a_i=i$, and 0 otherwise. The question asks for $E(\sum_{i=1}^n X_i)$.
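Since the numbers are small, parts A and B can be verified by enumeration (Python, assuming 10 beers and 5-element selections, with the first person's choice fixed as $\{0,\dots,4\}$ by relabeling):

```python
from itertools import combinations
from math import comb

first = set(range(5))
overlaps = [len(first & set(second)) for second in combinations(range(10), 5)]

part_a = comb(9, 4) / comb(10, 5)          # probability a fixed beer is chosen
expected = sum(overlaps) / len(overlaps)   # E[|overlap|] by full enumeration
```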
How to prove this theorem about differentiability of a multivariable function?
The way to link the function to the partial derivatives is to use the Mean Value Theorem. Let us write $$ \begin{align*} f(x_0 + h, y_0 + k) - f(x_0,y_0) &= f(x_0 + h,y_0) - f(x_0,y_0)\\ &\qquad \quad {}+ f(x_0 + h, y_0 + k) - f(x_0 + h,y_0). \end{align*} $$ Now, we can apply the Mean Value Theorem to the two pairs of terms: $$ \begin{align*} f(x_0 + h, y_0) - f(x_0,y_0) &= h \cdot \frac{\partial f}{\partial x}(b_1,y_0)\\ f(x_0 + h, y_0 + k) - f(x_0 + h,y_0) &= k \cdot \frac{\partial f}{\partial y} (x_0+h,b_2) \end{align*} $$ for some $b_1 \in (x_0,x_0 + h)$ and $b_2 \in (y_0, y_0 + k)$. Therefore, $$ \begin{align*} & \frac{\left| f(x_0+h,y_0+k)-f(x_0,y_0)-h\frac{\partial f}{\partial x}(x_0,y_0)-k\frac{\partial f}{\partial y}(x_0,y_0) \right|}{\| (h,k)\|}\\ ={} & \frac{\left| h\left( \frac{\partial f}{\partial x}(b_1,y_0)-\frac{\partial f}{\partial x}(x_0,y_0) \right) - k \left( \frac{\partial f}{\partial y}(x_0+h,b_2) - \frac{\partial f}{\partial y}(x_0,y_0) \right) \right|}{\| (h,k)\|}\\ \leq{} & \frac{| h |}{\| (h,k) \|} \cdot \left| \frac{\partial f}{\partial x}(b_1,y_0)-\frac{\partial f}{\partial x}(x_0,y_0) \right| + \frac{|k |}{\| (h,k) \|} \cdot \left| \frac{\partial f}{\partial y}(x_0+h,b_2) - \frac{\partial f}{\partial y}(x_0,y_0) \right| \\ \leq{} & \left| \frac{\partial f}{\partial x}(b_1,y_0)-\frac{\partial f}{\partial x}(x_0,y_0) \right| + \left| \frac{\partial f}{\partial y}(x_0+h,b_2) - \frac{\partial f}{\partial y}(x_0,y_0) \right| \end{align*} $$ since $| h | / \| (h,k) \|$ and $| k | / \| (h,k) \|$ are less than or equal to $1$ for all $(h,k) \neq (0,0)$. Now, taking $\lim_{\|(h,k)\| \to 0}$, we get $0$, because $(b_1,y_0) \to (x_0,y_0)$ and $(x_0+h,b_2) \to (x_0,y_0)$ as $(h,k) \to (0,0)$, and the partial derivatives are continuous at $(x_0,y_0)$. This is where we use the continuity of the partial derivatives at $(x_0,y_0)$.
Composing Morphisms with Morphisms
If a binary operation has the property that its value stays the same however the parentheses are arranged, it is simply called associative, and the basic case is $n=3$: $$f\circ(g\circ h)=(f\circ g)\circ h \ .$$ Let's assume we have $f_n\circ\dots\circ f_1$ somehow parenthesized, e.g. $(f_5((f_4f_3)f_2))f_1$. Then apply the replacement $(uv)w \to u(vw)$ to every possible instantiation of $(uv)w$, so that finally we reach the fully right-nested form $f_n(\dots(f_2f_1)\dots)$; in the example above it goes: $$(f_5((f_4f_3)f_2))f_1 = (f_5(f_4(f_3f_2)))f_1 = f_5((f_4(f_3f_2))f_1) =\\ = f_5(f_4((f_3f_2)f_1)) = f_5(f_4(f_3(f_2f_1))). $$
Formula for n+n a number of times
The formula for the sum of an AP is $S_n=\frac n2\bigl(2a+(n-1)d\bigr)$; here $n=52$, $a=4$, $d=4$.
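A sanity check in Python: the closed form agrees with summing $4+8+\dots+4\cdot 52$ directly.

```python
n, a, d = 52, 4, 4
closed_form = n * (2 * a + (n - 1) * d) // 2   # S_n = n/2 * (2a + (n-1)d)
direct = sum(a + i * d for i in range(n))      # 4 + 8 + ... + 208
```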
Show that the set $B=\{ x+1, x^2+x, x^2-1, x \}$ span $P_2$
You can write: 1. $1$ as $(x+1)-x$ 2. $x$ as $x$ 3. $x^2$ as $(x^2+x)-x$ or $(x^2-1)+(x+1)-x$ You know that $\{1,x,x^2\}$ is a basis of $P_2$, so $\{x,x+1,x^2+x\}$ is a basis, too. Also $\{x,x+1,x^2-1\}$ is a basis.
Does $x=x$ represent a valid algebraic equation?
Anything of the form $f(x)=g(x)$ is an equation, so $x=x$ is a valid algebraic equation. Subtracting $x-1$ from each side simplifies it to $1=1$, which holds for every $x$, so every value is a solution. Its graph therefore covers everything: whether you plot in $x$, in $(x,y)$, or in $(x,y,z)$, the whole space is filled, with no room left over.
Prove that set contains at least two co-prime integers
Divide the set into $n$ disjoint subsets as follows: $\{1,2\},\{3,4\},\{5,6\},\dots,\{2n-1,2n\}$. By the pigeonhole principle, if you select $n+1$ numbers, two of them must lie in one of the above subsets. These two are coprime. (This is because for every positive integer $i$, the numbers $i$ and $i+1$ are coprime.)
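One can verify the pigeonhole argument exhaustively for small $n$ (Python): every $(n+1)$-subset of $\{1,\dots,2n\}$ contains a coprime pair, while the $n$ even numbers show that selecting only $n$ elements is not enough.

```python
from itertools import combinations
from math import gcd

def has_coprime_pair(s):
    return any(gcd(a, b) == 1 for a, b in combinations(s, 2))

ok = all(
    has_coprime_pair(s)
    for n in range(2, 7)
    for s in combinations(range(1, 2 * n + 1), n + 1)
)
# the even numbers alone contain no coprime pair, so n+1 is sharp
sharp = not has_coprime_pair(range(2, 13, 2))
```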
Let X be the set of reals with the finite-complement topology. Find all subsets of X that are both open and closed
You are almost done. If $X - A$ is open then $X - (X - A) = A$ is finite or equal to $X$. But if $A \neq \emptyset$ is finite, then you can show that $A$ is not open.
Empirical Relationship between mean, median and mode.(Derivation)
The relevant papers are: [1] Hall, P. (1980). On the Limiting Behaviour of the Mode and Median of a Sum of Independent Random Variables. The Annals of Probability, 8(3), 419-430. [2] Haldane, J. B. S. "The Mode and Median of a Nearly Normal Distribution with Given Cumulants." Biometrika 32, no. 3/4 (1942): 294-99. [3] Pearson, Karl. "Mathematical Contributions to the Theory of Evolution. II. Skew Variation in Homogeneous Material. [Abstract]." Proceedings of the Royal Society of London 57 (1894): 257-60. The result is attributed to Pearson (1894). Hall (1980) credits Haldane (1942) for providing a "satisfactory explanation" of the result and Hall provides a very concise explanation of this result: Let $X_1, X_2, \dots$ be iid random variables with $E(X) = 0$, $E(X^2) = 1$, and $E(X^3) = \tau$ (assumed to exist), and set $S_n = \sum_1^nX_j$, $M_n = \text{mode}(S_n)$, and $m_n = \text{median}(S_n)$, assuming that these quantities are uniquely defined. Haldane showed that $M_n \to -\dfrac{1}{2}\tau$ and $m_n \to -\dfrac{1}{6}\tau$ as $n \to \infty$. Hall states that Haldane shows that the formula $$\text{mean} - \text{mode} \sim 3(\text{mean} - \text{median})$$ holds true (note: approximately) when $S_n$ has a density which admits a convergent Edgeworth expansion, and Hall's paper weakens these assumptions. I recommend consulting Hall's paper for the lengthy details. Long story short, this obviously doesn't hold true in all cases.
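A rough Monte Carlo illustration (Python, standard library only): for a Gamma$(k)$ variable, built here as a sum of $k$ unit exponentials, the mean is $k$, the mode is $k-1$, and the median is close to $k-\tfrac13$, so the ratio $(\text{mean}-\text{mode})/(\text{mean}-\text{median})$ should hover near $3$. The sample size and tolerance below are arbitrary choices.

```python
import random
import statistics

random.seed(1)
k, n = 16, 100_000
# Gamma(k, 1) sampled as a sum of k unit-rate exponentials
samples = [sum(random.expovariate(1.0) for _ in range(k)) for _ in range(n)]

mean = statistics.fmean(samples)
median = statistics.median(samples)
mode = k - 1                       # analytic mode of Gamma(k, 1)
ratio = (mean - mode) / (mean - median)
```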
If $x_1, \ldots, x_n$ have probability distribution function $F(x)$, then the maximum has probability distribution function $F(x)^n$
Hint: $\max(x_1, \ldots, x_n) \le x$ is equivalent to $(x_1 \le x) \wedge (x_2 \le x) \wedge \ldots \wedge (x_n \le x)$.
$a_1=1$, $a_{n+1}=\frac{1}{2}\left(a_n+\frac{2}{a_n}\right)$. Show that the sequence is decreasing
The sequence $\left\{a_n\right\}_{n\geq 1}$ is generated by Newton's method, applied to the function $f(x)=x^2-2$ with starting point $a_1=1$, because $$ x-\frac{f(x)}{f'(x)} = x-\frac{x^2-2}{2x} = \frac{1}{2}\left(x+\frac{2}{x}\right). $$ Now $a_2=\frac{3}{2}>\sqrt{2}$ and the function $f(x)$ is positive and convex on the interval $(\sqrt{2},+\infty)$. By the properties of Newton's method (just draw some tangents to make it clear) $$ a_1 < \sqrt{2} <\ldots<a_4<a_3<a_2 $$ and the sequence converges to $\sqrt{2}$ quadratically. As a matter of fact, $a_n$ is a convergent of the continued fraction of $\sqrt{2}$, namely $\frac{p_{2^n}}{q_{2^n}}$.
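The Newton picture is easy to confirm numerically (Python): from the second term on, the iterates decrease toward $\sqrt 2$, and the error roughly squares at each step.

```python
import math

a = [1.0]                        # a_1 = 1
for _ in range(4):
    a.append((a[-1] + 2 / a[-1]) / 2)   # a_{n+1} = (a_n + 2/a_n)/2

errors = [x - math.sqrt(2) for x in a]
```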
Partitions of $\alpha$ Variation
The following can be fleshed out, but I'll leave it in "hint" status, since I think the details are routine. I also use the interval $[0,1]$ rather than $[0,T]$, which is only a matter of scale. For positive $x,y$ and positive exponent $\alpha<1$ the inequality $(x+y)^\alpha<x^\alpha+y^\alpha$ holds. This means that for any partition $\pi$, dropping a point in $\pi$ causes the sum of $\alpha$ powers of lengths of subintervals to decrease (or remain the same, if a division point occurs twice in $\pi$ and one removes a copy). Now suppose what you say can be done. Then since one already knows the sum of $\alpha$ powers of differences must diverge to $+\infty$ as the mesh size goes to zero in the case of the "equipartitions" $0,1/n,...,(n-1)/n,1$, it follows that if we partition $[0,1]$ finely enough by a partition $P$, and then drop all the points excepting one each near the points $k/n$, we will only have decreased the $\alpha$ power difference sum of $P$, so that it must be that the sum for $P$ actually exceeds the sum for the equipartition into $n$ parts. Thus the power sum for such $P$ must also diverge.
Let $P$ be a degree $3$ polynomial with complex coefficients such that the constant term is $2010$. Then $P$ has a root
Consider the polynomial $2010(x+1)^3$. It has constant term $2010$, and all roots are $-1$. So the result is not correct. The additional condition that was presumably taken for granted but left out is that the polynomial has lead coefficient $1$ (is monic). Then the product of the roots is $-2010$, so at least one root has norm $\ge \sqrt[3]{2010}$. So we can do somewhat better than $10$.
QR decomposition error
If you've found the QR decomposition for A, then $A=QR$, hence the norm of their difference is 0. If you're solving the Least Squares problem minimizing $||Ax - b||_2$ then the error, or residual, is the norm of the last m-n elements of the vector $Q^Tb$.
Left shift continuity
Let $h\in G$ and $U\subseteq G$ open. We want to prove that $L_h^{-1}(U)$ is open. We know $\mu:G\times G\to G$ given by $\mu(g_1,g_2)=g_1g_2$ is continuous, so $\mu^{-1}(U)$ is open in $G\times G$. By definition of product topology, there are $\{U_\alpha\}_{\alpha\in I}$ and $\{V_\alpha\}_{\alpha\in I}$ open in $G$ such that: $\mu^{-1}(U)=\bigcup_{\alpha \in I}U_\alpha\times V_\alpha$ Let $g\in L_h^{-1}(U)$. Since $hg\in U$ we have $(h,g)\in \mu^{-1}(U)$, hence $(h,g)\in U_\alpha\times V_\alpha$ for some $\alpha \in I$. Then $\{h\}\times V_\alpha\subseteq U_\alpha\times V_\alpha \subseteq \mu^{-1}(U)$. Therefore $hk\in U$ for all $k\in V_\alpha$, and $g\in V_\alpha\subseteq L_h^{-1}(U)$. Since $V_\alpha$ is open, $L_h^{-1}(U)$ is open.
How many functions under these conditions?
For the second question, note that if there are two inputs mapping to each of 2, 4 and 6, then there are no inputs that map to 1, 3 or 5. So in other words, how many ways can you allocate two inputs to 2, two inputs to 4 and two inputs to 6? (Alternatively, how can you pick two elements of the set to map to 2, then pick two elements from the remainder to map to 4, and then pick two from the remainder of that to map to 6?) The third question is the same thing, but with mapping 3 inputs to each of 2 outputs.
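For the second question this is small enough to enumerate (Python; I am assuming a domain of 6 inputs and codomain $\{1,\dots,6\}$, which matches the counting described):

```python
from itertools import product
from math import comb

count = sum(
    1
    for f in product(range(1, 7), repeat=6)   # all 6^6 functions
    if f.count(2) == 2 and f.count(4) == 2 and f.count(6) == 2
)
formula = comb(6, 2) * comb(4, 2) * comb(2, 2)  # pick the preimages of 2, 4, 6
```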
Borel Set with undefined density
You mention that you tried concentric annuli, but that the limit seemed to exist. This is not correct, though, as long as the radii shrink rapidly enough. Say we have a sequence of decreasing radii $r_1>r_2>r_3>r_4>...$, and our set $B$ consists of all points between the circles of radius $r_{2n-1}$ and $r_{2n}$, centered on the origin. Then I claim that if the radii shrink fast enough, the limit you are interested in does not exist. HINT: what happens if $${\pi r_n^2\over\pi r_{n+1}^2}\rightarrow \infty?$$ (Of course, the $\pi$s cancel; I'm writing them in as a hint . . .)
Everywhere continuous and differentiable $f : \mathbb{R} → \mathbb{R}$ that is not smooth?
An example is $$f(x) = \begin{cases}0 & \text{for } x<0\\x^2 & \text{for } x\geq 0 \end{cases}.$$ It is clear that the function is continuous and differentiable for all $x\in \mathbb{R}$. But $f'(x)$ is not differentiable at $x=0$.
Suppose that $x\in\mathbb{R}$ and that $x > 1$, and that $k\in\mathbb{N}$. Does $x^{n}/n^{k}$ converge?
The ratio test tells you the sequence diverges. For fixed $ x>1$, we let $$\gamma\in(1,x)$$ and $$a_n=\frac{x^n}{n^k}$$ As you found, $$\lim_{n\to+\infty}\frac{a_{n+1}}{a_n}=x>\gamma$$ thus, for $ n $ large enough, let us say, for $ n> p$, $$\frac{a_{p+1}}{a_p}\ge \gamma$$ $$\frac{a_{p+2}}{a_{p+1}}\ge \gamma$$ ... $$\frac{a_n}{a_{n-1}}\ge \gamma$$ thus, by multiplication, $$a_n\ge a_p\,\gamma^{n-p}$$ then $$\lim_{n\to+\infty}a_n=+\infty$$
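Numerically the divergence is visible even for $x$ barely above $1$ (Python, with the arbitrary sample values $x=1.01$, $k=3$): the terms eventually grow without bound, and consecutive ratios approach $x$.

```python
def a(n, x=1.01, k=3):
    return x ** n / n ** k

# terms eventually increase, and the ratio a_{n+1}/a_n tends to x
growing = a(2000) < a(5000) < a(10000)
ratio = a(10001) / a(10000)
```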
The general solution of $x^a = a^x$ for real $a >0$
For $a\neq 1$ $x^a=a^x$ is equivalent to $$x=a^{\frac{x}{a}}\\ xa^{-\frac{x}{a}}=1\\ xe^{-\frac{\ln a}{a}\cdot x}=1\\ (-\frac{\ln a}{a}\cdot x)e^{-\frac{\ln a}{a}\cdot x}=-\frac{\ln a}{a}\\ -\frac{\ln a}{a}\cdot x=W(-\frac{\ln a}{a})\\ x=-\frac{aW(-\frac{\ln a}{a})}{\ln a}$$ using Lambert W function. Note that this function is actually multivalued for negative arguments, so it includes both the trivial solution $x=a$ and the other, nontrivial one. One shouldn't expect to be able to solve for $x$ using only elementary functions.
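Since Lambert $W$ is not in the Python standard library, here is a quick check by bisection on the equivalent equation $a\ln x = x\ln a$, for the sample value $a=2$, whose nontrivial solution is $x=4$:

```python
import math

def h(x, a=2.0):
    # h(x) = 0 is equivalent to x^a = a^x (for x > 0)
    return a * math.log(x) - x * math.log(a)

lo, hi = 3.0, 5.0            # brackets the nontrivial root: h(3) > 0 > h(5)
for _ in range(60):
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2
```

The trivial solution $x=a=2$ corresponds to the other branch of $W$.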
Prove by induction $|u - y| < \delta \Rightarrow |u^{n} - y^{n}| < \epsilon$
You don't seem to use the truth of $P(n) $ to establish the truth of $P(n+1)$. Rather you establish the truth of both $P(n) $ and $P(n+1)$ independently using the same approach. You can however use induction in the following manner. It is obvious that $P(1)$ is true. Let's assume the truth of $P(n) $. Thus given any $\epsilon>0$ and $u\in\mathbb{R} $ there is a $\delta>0$ such that if $|u-y|<\delta$ then $|u^n-y^n|<\epsilon $. Now consider the expression $$|u^{n+1}-y^{n+1}|\leq |u^{n+1}-u^ny|+|u^ny-y^{n+1}|\\\leq |u^n||u-y|+|y||u^n-y^n|$$ If $|u-y|<1$ then $|y|<|u|+1$ and hence by the above inequality we have $$|u^{n+1}-y^{n+1}|\leq (|u^n|+1)|u-y|+(|u|+1)|u^n-y^n|$$ Let $\epsilon>0$ be given. Then by the truth of $P(n) $ there is a $\delta_1>0$ such that if $|u-y|<\delta_1$ then $|u^n-y^n|<\epsilon/(2(|u|+1))$. Let $$\delta=\min\left(1,\delta_1,\frac{\epsilon}{2(|u^n|+1)}\right)$$ then for $|u-y|<\delta$ we have $$|u^{n+1}-y^{n+1}|<(|u^n|+1)\cdot\frac{\epsilon}{2(|u^n|+1)}+(|u|+1)\cdot\frac{\epsilon}{2(|u|+1)}=\epsilon$$ and this establishes the truth of $P(n+1)$.
If $f^{-1}(x)=\frac{1}{f(x)}$ then find $f(1)$
From $f^{-1}(x)=\frac{1}{f(x)} \tag 1$ by replacing $x:= f(1)$ we get $1 = \frac{1}{f(f(1))}$, so $f(f(1))= 1$. By applying $f$ to (1) we get $x=f(\frac{1}{f(x)})$, so $1=f(\frac{1}{f(1)})$. Therefore $f(f(1))=f(\frac{1}{f(1)})$ and, using injectivity, $f(1)=\frac{1}{f(1)}$. It follows $f(1)=1$. About your claim "function is bijective, hence it will be monotonic": this is guaranteed when $f$ is continuous.
Using Laplace Transforms to Evaluate Integrals
Method 1 By using the integral \begin{align} \int_{0}^{\infty} e^{-s t} \, ds = \frac{1}{t} \end{align} the integral \begin{align} I = \int_0^{\infty} \, \frac{e^{-2t}\cos(3t)-e^{-4t}\cos(2t)}{t}dt \end{align} becomes \begin{align} I &= \int_0^{\infty} \int_{0}^{\infty} (e^{-2t}\cos(3t)-e^{-4t}\cos(2t) ) \, ds \, dt \\ &= \int_{0}^{\infty} \, ds \, \left[ \int_{0}^{\infty} e^{-(s+2)t} \cos(3t) \, dt - \int_{0}^{\infty} e^{-(s+4)t} \cos(2t) \, dt \right] \\ &= \int_{0}^{\infty} \left[ \frac{s+2}{(s+2)^{2} + 3^{2}} - \frac{s+4}{(s+4)^{2} + 2^{2}} \right] \, ds \\ &= \frac{1}{2} \left[ \ln\left( \frac{(s+2)^{2} + 9}{(s+4)^{2} + 4} \right) \right]_{0}^{\infty} \\ &= - \frac{1}{2} \, \ln\left( \frac{13}{20} \right). \end{align} Method 2 The Laplace transform of a function divided by the variable obeys the rule \begin{align} \mathcal{L}\left\{ \frac{f(t)}{t} \right\} = \int_{s}^{\infty} F(u) \, du \end{align} where $F(s)$ is the transformed function. From this rule it is seen that \begin{align} I &= \int_{2}^{\infty} \frac{u}{u^{2} + 3^{2}} \, du - \int_{4}^{\infty} \frac{u}{u^{2} + 2^{2}} \, du \\ &= \frac{1}{2} \left[ \ln(u^{2} + 9) \right]_{2}^{\infty} - \frac{1}{2} \left[ \ln(u^{2} + 4) \right]_{4}^{\infty} \\ &= \frac{1}{2} \, \lim_{u \rightarrow \infty} \left\{ \ln\left( \frac{u^{2} + 9}{u^{2} + 4} \right) \right\} - \frac{1}{2} \, \ln\left(\frac{2^{2} + 9} {4^{2} + 4} \right) \\ &= \frac{1}{2} \, \lim_{u \rightarrow \infty} \left\{ \ln\left( \frac{1 + \frac{9}{u^{2}} }{ 1 + \frac{4}{u^{2}} } \right) \right\} - \frac{1}{2} \, \ln\left( \frac{13}{20} \right) \\ &= - \frac{1}{2} \ln\left( \frac{13}{20} \right). \end{align}
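Both methods give $-\tfrac12\ln\tfrac{13}{20}\approx 0.21535$; a direct Simpson-rule check in Python (with an arbitrary truncation at $T=40$, beyond which the integrand is negligible) agrees:

```python
import math

def integrand(t):
    if t == 0.0:
        return 2.0  # limit as t -> 0 of (e^{-2t}cos 3t - e^{-4t}cos 2t)/t
    return (math.exp(-2 * t) * math.cos(3 * t)
            - math.exp(-4 * t) * math.cos(2 * t)) / t

# composite Simpson's rule on [0, T]
N, T = 200_000, 40.0
h = T / N
s = integrand(0.0) + integrand(T)
s += 4 * sum(integrand(i * h) for i in range(1, N, 2))
s += 2 * sum(integrand(i * h) for i in range(2, N, 2))
numeric = s * h / 3

exact = -0.5 * math.log(13 / 20)
```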
Given that $ \{ \vec{u}, \vec{v} \}$ are l.i. prove that if $ \vec{w} \times \vec{u} = \vec{w} \times \vec{v} = \vec{0}$ then $\vec{w} = \vec{0}$
$w\times u=w\times v=0$ implies $w=au$ and $w=bv$ for some $a,b\in \mathbb{R}$ (note $u,v\ne 0$ since they are linearly independent), which gives $au-bv=0$. Since $u,v$ are linearly independent, $a=b=0$, and hence $w=0$.
Solving perpendicularity problem not using scalar product of vector
Let $E$ be the midpoint of $AC$. Since $EN$ is a middle line of $\triangle CAD$, we have $EN\parallel AD$ and $EN=AD/2$. Similarly, $EM\parallel CB$ and $EM=CB/2$. In particular, $EN:EM=AD:CB$. Since $OH\perp AD$ and $OK\perp CB$, we get $OH\perp EN$ and $OK\perp EM$. Also $OH=AD\cdot|\cot\angle AOD|$ and $OK=CB\cdot |\cot\angle COB|$, so $OH:OK=AD:CB$ as $\angle AOD=\angle COB$. Since $\angle HOK$ and $\angle NEM$ are angles with perpendicular rays we have two possibilities: $\angle HOK=\angle NEM$ or $\angle HOK+\angle NEM=180^\circ$. I will argue that the first option holds, having in mind the following picture (the discussion is similar in the other cases): Note that $\angle NEM=$ $180^\circ-\angle NEC+\angle MEA=$ $180^\circ-\angle DAC+\angle BCA=$ $\angle ADC+\angle DCA+\angle BCA=$ $\angle ADC+\angle BCD$ and $\angle HOK>\angle AOB=\angle DOC$, so $\angle NEM+\angle HOK> \angle ADC+\angle BCD+\angle DOC= 180^\circ+\angle ADB+\angle BCA>180^\circ$, so the second option does not hold. So $EN:EM=OH:OK$ and $\angle NEM=\angle HOK$, hence $\triangle NEM\sim\triangle HOK$. Therefore, $EN\perp OH$ and $EM\perp OK$ imply $NM\perp HK$.
How does $(k+1)!(k+2)-1 = (k+2)!-1$?
The recurrence for $n!$ is $n! = n \,(n-1)!$. Apply it with $n=k+2$: $(k+2)! = (k+2)(k+1)!$, so $(k+1)!(k+2)-1 = (k+2)!-1$.
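The identity is just the factorial recurrence; a quick check in Python:

```python
from math import factorial

# (k+1)! * (k+2) == (k+2)! for every k, hence the identity minus 1 on both sides
checks = all(
    factorial(k + 1) * (k + 2) == factorial(k + 2)
    for k in range(20)
)
```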
In how many different ways can the faces of a regular dodecahedron be colored?
We compute the cycle index of the permutation group of the faces. In order to answer this question it is best to work with an image of the dodecahedron like the one at Wikipedia. Look at the image and rotate it in your mind. By considering the properties of the object as they appear by visual inspection, we see that there are three types of symmetries in addition to the identity: rotations about an axis passing through the centers of two opposite faces, rotations about an axis passing through opposite vertices and 180 degree rotations that flip two opposite edges, mapping each onto itself. The identity contributes the following term to the cycle index: $$ a_1^{12}.$$ There are six pairs of opposite faces and four rotations for each of these which fix the two opposite faces and create two five-cycles, giving $$ 6 \times 4 \times a_1^2 a_5^2 = 24 a_1^2 a_5^2.$$ There are ten pairs of opposite vertices and two rotations for each of these which create two three-cycles at the two vertices. The two rotations create two three-cycles among the faces not adjacent to the two vertices, giving $$ 10 \times 2 \times a_3^4.$$ There are fifteen pairs of opposite edges and the 180 degree rotations about the axis through their midpoints partition everything into two-cycles, giving $$ 15 \times a_2^6.$$ It follows that the cycle index of the permutation group $G$ of the faces is $$ Z(G) = \frac{1}{60} \left( a_1^{12} + 24 a_1^2 a_5^2 + 20 a_3^4 + 15 a_2^6\right).$$ Now evaluating $Z(G)$ at $X_1 + X_2 + \cdots + X_n$ and setting $X_1 = X_2 = X_3 = \ldots = X_n = 1$, we obtain the following sequence of values: $$1, 96, 9099, 280832, 4073375, 36292320, 230719293, 1145393152, 4707296613, 16666924000.$$ Of course this provides the generating functions as well, e.g. 
for two colors we get $${X_{{1}}}^{12}+{X_{{1}}}^{11}X_{{2}}+3\,{X_{{1}}}^{10}{X_{{2}}}^{2}+5\,{X_{{1}}}^{9}{X_{{2}}}^{3}+12\,{ X_{{1}}}^{8}{X_{{2}}}^{4}+14\,{X_{{1}}}^{7}{X_{{2}}}^{5}+24\,{X_{{1}}}^{6}{X_{{2}}}^{6}\\+14\,{X_{{1}}}^{ 5}{X_{{2}}}^{7}+12\,{X_{{1}}}^{4}{X_{{2}}}^{8}+5\,{X_{{1}}}^{3}{X_{{2}}}^{9}+3\,{X_{{1}}}^{2}{X_{{2}}}^ {10}+X_{{1}}{X_{{2}}}^{11}+{X_{{2}}}^{12}.$$ Substituting into the cycle index we obtain the explicit formula $$\frac{1}{60} \left(n^{12} + 24 n^4 + 20 n^4 + 15 n^6\right) ={\frac {1}{60}}\,{n}^{12}+\frac{1}{4}\,{n}^{6}+{\frac {11}{15}}\,{n}^{4}.$$ Remark, Nov 12 2018. Obviously when we only seek a count rather than a classification we do not need to substitute $n$ variables into the cycle index. It is sufficient to use Burnside with the substitution $a_q = n.$ We obtain the sequence OEIS A000545.
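The closed formula is easy to check against the listed counts (Python):

```python
def colorings(n):
    # Burnside / cycle-index count with the substitution a_q := n
    return (n ** 12 + 24 * n ** 4 + 20 * n ** 4 + 15 * n ** 6) // 60

values = [colorings(n) for n in range(1, 6)]
```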
Converting from Polar Basis to Cartesian Basis
So, you figured out that $re_r = xe_i+y e_j+z e_k$, $e_\theta = \cos\theta\cos\phi e_i + \cos\theta\sin\phi e_j-\sin\theta e_k$, $e_\phi = -\sin\phi e_i + \cos\phi e_j$. Moreover you also found $x = r\sin\theta\cos\phi$, $y=r\sin\theta\sin\phi$ and $z=r\cos\theta$. It therefore follows that $$F = -2\theta (xe_i+y e_j+z e_k)/r + \cos\theta\cos\phi e_i + \cos\theta\sin\phi e_j-\sin\theta e_k $$ Filling in your requirement that $r(t)=1$, $\theta(t) = 2t$ and $\phi(t)= \frac{\pi}{2}$: $$F = -4t (\sin(2t) e_j+\cos(2t) e_k) + \cos(2t) e_j-\sin(2t) e_k $$ or $$F = (\cos(2t)-4t \sin(2t)) e_j + (-4t \cos(2t)-\sin(2t)) e_k \; . $$
Show that holomorphic $f_1, . . . , f_n $ are constant if $\sum_{k=1}^n \left| f_k(z) \right|$ is constant.
For simplicity, let's consider $n=2$. In $U$, we have $|f|+|g|=C$ for some constant. Fix a point $z_0$ in $U$. Then, for appropriately chosen unimodular constants $$|\alpha f(z_0) +\beta g(z_0)|=|f(z_0)|+|g(z_0)|=C.$$ This means the holomorphic function $\alpha f(z) +\beta g(z)$ attains its maximum in $U$ (since the supremum in $U$ is at most $C$ by the triangle inequality, and it attains $C$), so it is a constant. So for all $z$, we have $$|\alpha f(z) +\beta g(z)|=|f(z)|+|g(z)|.$$ Equality is only possible in the triangle inequality when all the vectors point in the same direction. So $c(z)\alpha f(z) = \beta g(z)$ for some real-valued holomorphic $c(z)$. But since real-valued holomorphic functions are constant, $c$ is constant. Then $\alpha f(z) +\beta g(z)=(1+c)(\alpha f)$, and the latter is a holomorphic function that attains its maximum on $U$, so it is constant. So $f$ is constant, and it follows that $g$ is constant.
Minimise $|f''(x)|$ on an interval when you know the values of the function and the values of the derivative at the endpoints of the interval only.
In this answer I show that the exact answer is $sign((b-a)(c-a))(b+c-2a)+\sqrt{2((b-a)^2+(c-a)^2)}$ (formula (5) below). As noted in AlexRavsky's and MartinR's answer, we may assume without loss that $a=0$, so that $f\in B_{b,c}=A_{0,b,c}$. Next, if $f\in B_{b,c}$ with $b\neq 0$, then $g(x)=\frac{f(x)}{b}$ satisfies $g\in B_{1,\frac{c}{b}}$. So, (assuming $b\neq 0$, which we do since this limit case will turn out to be similar and simpler than our generic case) we may assume without loss that $b=1$, so that $f\in C_{c}=A_{0,1,c}$. Further, if $f \in C_c$, then $h(x)=-\frac{f(1-x)}{c}$ satisfies $h \in C_{\frac{1}{c}}$. So we may assume without loss that $|c| \geq 1$. We assume $|c| \gt 1$ (the limit case $|c|=1$ will turn out to be similar and simpler than our generic case). Let $\varepsilon =\frac{c}{|c|}$ be the sign of $c$. Let $m=||f''||_{\infty}$, let $z\in (0,1)$ be a constant to be fixed later, and let $$ I_1 = \int_0^z (m+\varepsilon f''(t))(z-t)dt, \ I_2 = \int_z^1 (m-\varepsilon f''(t))(t-z)dt \tag{1} $$ Both $I_1$ and $I_2$ can be computed by two successive integration by parts, and we find $$ I_1 = m\frac{z^2}{2} - \varepsilon z + \varepsilon f(z), \ I_2 = m\frac{(1-z)^2}{2}+\varepsilon c(z-1)- \varepsilon f(z)\tag{2} $$ Now both $I_1$ and $I_2$ are nonnegative by construction, so $I_1+I_2\geq 0$. This means that $m\geq m_0$ where $$ m_0=\frac{2\varepsilon (c+z-cz)}{z^2+(1-z)^2} \tag{3} $$ The inequality $m\geq m_0$ will be an equality iff the integrands in $I_1$ and $I_2$ are zero a.e., so that $f''$ is $- \varepsilon m$ on $[0,z]$ and $+ \varepsilon m$ on $[z,1]$. Integrating twice from $0$, it follows that $f(x)=-\varepsilon m\frac{x^2}{2}+x$ for $x\in [0,z]$ and $f(x)=\varepsilon m\frac{x^2}{2}+(1-2\varepsilon mz)x+\varepsilon mz^2$ for $x\in [z,1]$. 
The two conditions $f'(1)=c$ and $f(1)=0$ give us a (nonlinear) system of two equations in $m$ and $z$, and it turns out that this system has the unique solution $$ m=\varepsilon (c+1)+\sqrt{2(c^2+1)}, z = \frac{c-\varepsilon \sqrt{\frac{c^2+1}{2}}}{c-1}\tag{4} $$ Returning to the general case and unrolling the symmetries of the initial problem, we find the fully general bound for $f\in A_{a,b,c}$ : $$ m=sign((b-a)(c-a))(b+c-2a)+\sqrt{2((b-a)^2+(c-a)^2)} \tag{5} $$ Note 1. To make the formula work when one of $b-a$, $c-a$ is zero and the other is not, we must use the convention $sign(0)=+1$. Note 2. The unique solution we found does not strictly satisfy the requirements in the OP (the second derivative is discontinuous at $z$), but this is not a problem; by well known density results, the non-$C^2$ optimal solution can be approximated by $C^2$ solutions and the infimum stays the same (although it is never attained by $C^2$ solutions).
Integral $ I=\int_{-r}^r \int_{-\sqrt{r^2-x^2}}^{\sqrt{r^2-x^2}} \sqrt{1 - \frac{x^2 + y^2}{x^2 + y^2 - r^2}} dy dx $
Let $x=\rho\cos{\theta},y=\rho\sin{\theta}$; then $$I=\int_{0}^{2\pi}\int_{0}^{r}\rho\sqrt{1-\dfrac{\rho^2}{\rho^2-r^2}}\,d\rho \,d\theta.$$ Since $1-\dfrac{\rho^2}{\rho^2-r^2}=\dfrac{r^2}{r^2-\rho^2}$, $$I=2\pi\int_{0}^{r}\rho\sqrt{\dfrac{r^2}{r^2-\rho^2}}\,d\rho=2\pi r\int_{0}^{r}\frac{\rho\,d\rho}{\sqrt{r^2-\rho^2}}=2\pi r\Bigl[-\sqrt{r^2-\rho^2}\Bigr]_{0}^{r}=2\pi r^2.$$
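Since the antiderivative of $\rho(r^2-\rho^2)^{-1/2}$ is $-\sqrt{r^2-\rho^2}$, the value of the integral is $2\pi r^2$. A midpoint-rule check for $r=1$ (Python; the midpoint rule avoids evaluating at the integrable singularity $\rho=r$):

```python
import math

r = 1.0
M = 400_000
h = r / M
# midpoint rule for 2*pi*r * integral_0^r rho / sqrt(r^2 - rho^2) d(rho)
total = sum(
    ((i + 0.5) * h) / math.sqrt(r * r - ((i + 0.5) * h) ** 2)
    for i in range(M)
)
numeric = 2 * math.pi * r * total * h
exact = 2 * math.pi * r ** 2
```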
Hölder inequality, showing that if $\int_1^\infty |f(x)| dx$ exists, then exists $\int_1^\infty |f(x)|^2 dx $
Here is a counterexample to the problem as written: $$f(x) = \begin{cases} \lfloor x \rfloor & \text{if } x - \lfloor x \rfloor < \lfloor x \rfloor ^{-3} \\ 0 & \text{otherwise} \end{cases} $$ The function consists of a sequence of rectangular bumps of height $n$ but area $n^{-2}$. So $\int_1^\infty |f(x)|\,dx = \pi^2/6$. However $f(x)^2$ has bumps of the same width but height $n^2$. So the area of each bump is now $n^{-1}$, and thus $\int_1^\infty |f(x)|^2\,dx$ diverges logarithmically. Perhaps there's some condition that this $f(x)$ fails, which you forgot to copy in the question? I notice that the $p,q,r$ are not mentioned in the statement of what you want to prove at all, except that they are real numbers ...
How to prove that it is a surjective homomorphism??
So $\mathcal{M} = \{ g :\hat{\mathbb{C}} \to \hat{\mathbb{C}}, \exists(a,b,c,d) \in \mathbb{C}^4, ad-bc \ne0, g(z)=\frac{az+b}{cz+d} \} $ So take any $(a,b,c,d) \in \mathbb{C}^4$ with $ad-bc \ne 0$, and let $g \in \mathcal{M}$ be given by $g:z \mapsto\frac{az+b}{cz+d}$. Let $t$ be one of the two square roots of $ad-bc$; then $$A = \begin{pmatrix}a/t & b/t \\ c/t & d/t \end{pmatrix} \in SL(2,\mathbb{C})$$ and $g_A(z)=\frac{(a/t)z +b/t}{(c/t)z+d/t}=\frac{az+b}{cz+d}=g(z)$. So we have found a preimage of $g$, and the map is surjective.
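The scaling trick can be checked numerically with `cmath` (Python, using the arbitrary sample coefficients $(a,b,c,d)=(1,2,3,4)$):

```python
import cmath

a, b, c, d = 1, 2, 3, 4          # arbitrary, with ad - bc = -2 != 0
t = cmath.sqrt(a * d - b * c)    # one of the two square roots

def g(z):
    return (a * z + b) / (c * z + d)

def g_A(z):
    # Moebius map of the rescaled matrix A = (1/t) * [[a, b], [c, d]]
    return ((a / t) * z + b / t) / ((c / t) * z + d / t)

det_A = (a / t) * (d / t) - (b / t) * (c / t)   # should be 1
same = all(abs(g(z) - g_A(z)) < 1e-12 for z in (0, 1, 2j, 1 + 1j, -5))
```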
Proving $\Theta(n^{k})= {n \choose k}$
$$ {n\choose k}=\frac{n!}{k!(n-k)!}= \frac{n(n-1)\ldots(n-k+1)}{k!} $$ Notice that the right hand side has $k$ terms in the numerator, and each of those is at most $n$. So $$ {n\choose k}\leq \frac{n^k}{k!} $$ Therefore we have $\limsup_{n\to\infty}{n\choose k}/n^k\leq \frac{1}{k!}$. For the other side we need a lower bound. In the first equation each term in the numerator on the right hand side is at least $n-k+1$. So $$ {n\choose k}\geq \frac{(n-k+1)^k}{k!} $$ Therefore we have $$ \liminf_{n\to\infty}{n\choose k}/n^k\geq \frac{1}{k!}\liminf_{n\to\infty}\frac{(n-k+1)^k}{n^k}. $$ But actually the $\liminf$ on the right is a well-defined limit: $$ \lim_{n\to\infty}\frac{(n-k+1)^k}{n^k}=1. $$ This can be seen using L'Hopital's Rule for example. Putting everything together: $$ \frac{1}{k!}\leq\liminf_{n\to\infty}{n\choose k}/n^k\leq\limsup_{n\to\infty}{n\choose k}/n^k\leq \frac{1}{k!}. $$ So actually: $$ \lim_{n\to\infty}{n\choose k}/n^k=\frac{1}{k!} $$ The existence of this finite, nonzero limit is precisely the statement that ${n\choose k}=\Theta(n^k)$.
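In Python one can watch the ratio climb toward $1/k!$ (here with the arbitrary choice $k=3$):

```python
from math import comb, factorial

k = 3
ratios = [comb(n, k) / n ** k for n in (10, 100, 10_000, 1_000_000)]
limit = 1 / factorial(k)
```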
Please help me in this probability problem
Let $Z = X_1+X_2+\cdots+X_n$, where each $X_i$ follows a $\Gamma(2,1)$ distribution. Then $Z$ follows $\Gamma(2n,1)$, so the mean of $Z$ is $2n$. The probability that $Z$ has mean $2$ as $n$ approaches infinity is therefore $0$. The question is a little absurd.
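The additivity of the Gamma distribution (and the resulting mean $2n$) is easy to check by simulation; a minimal stdlib-only sketch, with the number of summands $n=5$ chosen arbitrarily:

```python
import random

random.seed(0)
n = 5            # number of summands (chosen arbitrarily)
trials = 20000

# Each X_i ~ Gamma(shape 2, scale 1), so Z = X_1 + ... + X_n has mean 2n.
total = 0.0
for _ in range(trials):
    total += sum(random.gammavariate(2, 1) for _ in range(n))
sample_mean = total / trials
print(sample_mean)  # should be close to 2n = 10
```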
One-dimensional representations of S5
The derived subgroup is $A_5$, which has index 2. A homomorphism $G\to H$ has abelian image if and only if $G'$ is in the kernel. A linear character is a group homomorphism to the abelian group of units of the field. More generally, $A_n$ is the derived subgroup of $S_n$, and for nice enough fields the group of linear characters of $G$ is isomorphic to $G/G'$. The complex numbers are always nice enough.
Euler Schemes in Stochastic Differential Equations
You can rewrite $X_{n+1}=X_n + aX_{n+1}\Delta t + bX_{n+1}\Delta W_n$ as the root-finding problem $f(X_{n+1})=0$, where $f(x)=X_n + ax\Delta t + bx\Delta W_n - x$, and use Newton's method to find the root, thus solving for $X_{n+1}$.
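A minimal sketch of this idea in Python; the coefficients $a$, $b$, the step size, and the step count are made-up illustration values. Note that for this particular scheme $f$ is linear in $X_{n+1}$, so Newton's method converges in a single step, but the iteration is written generically anyway:

```python
import random

def implicit_euler_step(x_n, a, b, dt, dW, tol=1e-12, max_iter=50):
    """Solve f(x) = x_n + a*x*dt + b*x*dW - x = 0 for x = X_{n+1} by Newton's method."""
    x = x_n  # initial guess: the previous value
    for _ in range(max_iter):
        f = x_n + a * x * dt + b * x * dW - x
        fprime = a * dt + b * dW - 1  # df/dx (constant here, since f is linear in x)
        x_new = x - f / fprime
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

random.seed(1)
a, b, dt = 0.5, 0.2, 0.01
x = 1.0
for _ in range(100):
    dW = random.gauss(0.0, dt ** 0.5)  # Brownian increment ~ N(0, dt)
    x = implicit_euler_step(x, a, b, dt, dW)
print(x)  # one simulated path value at t = 1
```

Since $f$ is linear, one Newton step lands exactly on the root $X_n/(1 - a\Delta t - b\Delta W_n)$; the loop structure is only there to show how the method would look for a nonlinear drift or diffusion.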
Isomorphism of $k$-Algebras
Your question is usually phrased as follows: is a finitely generated algebra Hopfian? A well-known theorem of Maltsev states that every presentable finitely generated algebra is Hopfian; "presentable" means here that the algebra embeds in a matrix algebra $M_n(C)$ for some commutative ring $C$. This condition holds for an algebra $A$ if, for example, one can separate elements in $A$ with maps to algebras of finite dimension (one says in this case that the algebra $A$ is residually finite dimensional). But not all finitely generated algebras are Hopfian. For example, there exist finitely generated groups $G$ with surjections $G\to G$ which are not injective, and the induced morphisms of group algebras $k G\to k G$ are surjective algebra maps which are not injective. The classical example is the Baumslag-Solitar group.
In $\triangle ABC$, $D$ is a point on side $BC$ such that $\angle BAD = \angle CAD =\angle ABC$. If $BD=1$ and $DC=2$, what would be the length of $AB$?
Let $\alpha=\angle ABC$. Since $AD$ is the bisector of $\angle CAB$, we have $$AB:AC=BD:CD=1:2\Rightarrow AC=2AB.$$ So, since $$\angle ACB=180^\circ-3\alpha\Rightarrow \sin(\angle ACB)=\sin(3\alpha)=3\sin \alpha-4\sin^3\alpha,$$we have with $\sin\alpha\not=0$, by the law of sines, $$\begin{align}\frac{AB}{\sin (\angle ACB)}=\frac{AC}{\sin(\angle ABC)}&\Rightarrow \frac{AB}{3\sin\alpha-4\sin^3\alpha}=\frac{2AB}{\sin\alpha}\\&\Rightarrow \sin\alpha(8\sin^2\alpha-5)=0\\&\Rightarrow \cos^2\alpha=1-\sin^2\alpha=1-\frac 58=\frac 38.\end{align}$$ Finally, in triangle $ABD$ we have $\angle ADB=180^\circ-2\alpha$, so the law of sines gives $AB=BD\cdot\frac{\sin(180^\circ-2\alpha)}{\sin\alpha}=\frac{\sin 2\alpha}{\sin\alpha}=2\cos\alpha$. Hence, $$AB=2\cos \alpha=2\sqrt{\frac 38}=\frac{\sqrt 6}{2}.$$
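As a numeric check of the result: placing $B=(0,0)$, $C=(3,0)$, $D=(1,0)$, and $A$ at distance $AB=\sqrt6/2$ from $B$ at angle $\alpha$ above $BC$ should reproduce the three equal angles of the problem.

```python
from math import sqrt, acos

# From the derivation: cos^2(alpha) = 3/8 and AB = 2*cos(alpha) = sqrt(6)/2.
alpha = acos(sqrt(3 / 8))
AB = 2 * sqrt(3 / 8)

# Place B at the origin, C at (3, 0); then D = (1, 0) gives BD = 1, DC = 2.
B, C, D = (0.0, 0.0), (3.0, 0.0), (1.0, 0.0)
A = (AB * sqrt(3 / 8), AB * sqrt(5 / 8))  # distance AB from B at angle alpha

def angle(p, q, r):
    """Angle at vertex q in triangle p-q-r."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = sqrt(v1[0] ** 2 + v1[1] ** 2)
    n2 = sqrt(v2[0] ** 2 + v2[1] ** 2)
    return acos(dot / (n1 * n2))

# angle BAD, angle CAD, and angle ABC should all equal alpha.
print(abs(angle(B, A, D) - alpha) < 1e-9)
print(abs(angle(C, A, D) - alpha) < 1e-9)
print(abs(angle(A, B, C) - alpha) < 1e-9)
```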
Characterization of an invertible module
Yes, this is correct, and easy to prove. $MN=A$ implies $N \subseteq (A:M)$ and $(A:M) = (A:M)MN \subseteq AN \subseteq N$, hence $(A:M)=N$.
Partial derivatives with only one variable held constant
All variables other than $x$ in your example must be held constant. The ambiguity arises from definitions like $$f = x + y = 2 x + z$$ Here it is evident that $y = x + z$ but it is not specified whether $f$ is a function of $(x, y)$ or $(x, z)$ if you were to write $\frac{\partial f}{\partial x}$. Since there is nothing to say which is the more fundamental quantity, $y$ or $z$ (or if it even makes sense to ask that question), "there is nowhere to stand" (as the Buddhists say), and you must specify which variable, $y$ or $z$, you are choosing to hold constant. $$\left( \frac{\partial f}{\partial x} \right)_y = 1$$ $$\left( \frac{\partial f}{\partial x} \right)_z = 2$$ If some variables are not held constant, the partial derivative is not well defined in the first place, since ambiguities like the above can be created by rewriting any variable in the function. To take a derivative the function must be a function only of the variable with respect to which it is being differentiated.
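The ambiguity above can be seen numerically with finite differences; a small sketch (the sample point is arbitrary):

```python
# Demonstrate the ambiguity numerically with f = x + y = 2x + z (so y = x + z).
# Holding y fixed vs. holding z fixed gives different partial derivatives in x.

def f_of_xy(x, y):
    return x + y      # f viewed as a function of (x, y)

def f_of_xz(x, z):
    return 2 * x + z  # the same f viewed as a function of (x, z)

h = 1e-6
x0, z0 = 1.0, 2.0
y0 = x0 + z0          # consistent starting point: y = x + z

df_dx_at_const_y = (f_of_xy(x0 + h, y0) - f_of_xy(x0, y0)) / h
df_dx_at_const_z = (f_of_xz(x0 + h, z0) - f_of_xz(x0, z0)) / h

print(round(df_dx_at_const_y))  # 1
print(round(df_dx_at_const_z))  # 2
```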
To prove an apparently obvious statement: if $A_1\subseteq A_2 \subseteq ... \subseteq A_n$, then $\bigcup_{i=1}^n A_i = A_n$
To be completely formal, you could first prove by induction that $A_i\subseteq A_n$ for every $i\leq n$. Then prove that $\bigcup_{i=1}^n A_i\subseteq A_n$ and that $A_n\subseteq\bigcup_{i=1}^n A_i$. This last one should be trivial by definition of the union. As for the first one, if $x\in \bigcup_{i=1}^nA_i$, then $x\in A_j$ for some $j$, and then we know (from the proof by induction) that $x\in A_n$.
Hopf fibration is a submersion
It depends on what you consider easy. First of all, since $\phi^+$ is a diffeomorphism between the $2$-sphere and the Riemann sphere, you can throw it away and consider only the map $G : S^3 \to \overline{\mathbb C}$ given by $G(z_1,z_2) = z_1/z_2$. And $G$ extends to a scale-invariant map $G : \mathbb C^2 \setminus \{(0,0)\} \to \overline{ \mathbb C }$ via the same formula. So $F$ is a submersion if and only if $G$ (with domain $\mathbb C^2 \setminus \{(0,0)\}$) is a submersion. That this latter map is a submersion is easy enough to check -- you'll have to use two charts on the Riemann sphere $\overline{\mathbb C}$ to see it, but that's fine. You could also eliminate that step by making some symmetry observations. A totally different way to make the argument would be to write $S^3$ as the union of two solid tori and define the map on those solid tori; I believe Hatcher does this in his 3-manifolds notes. This gives not only a quick-and-dirty proof but also a strong intuition for how to visualize the map.
$\mathcal {N}\models \phi(n)$ if and only if $n$ is a power of 2, i.e. $n\in \{1,2,4,8,...\}$.
What does it mean for $n$ to be a power of $2$? It means that there is some natural number $m$ such that $n=2^m$. How can you express this using a formula with the available symbols? You need a formula $\phi(v_1)$ in which $v_1$ is a free variable, i.e. $v_1$ is not bound by any quantifier. You want the formula to say that $v_1$ is a power of two. The following formula will do: $\exists v_2 (v_1=\mbox{exp}(1+1,v_2))$
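For intuition, the intended semantics of this formula can be checked by brute force over the standard model; a small sketch, where the finite search bound is an artifact of the illustration:

```python
# phi(n) holds iff there exists m with n == 2**m (witness search up to a bound).
def phi(n, bound=64):
    return any(n == 2 ** m for m in range(bound))

satisfied = [n for n in range(1, 20) if phi(n)]
print(satisfied)  # [1, 2, 4, 8, 16]
```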
Sum of ideals in polynomial rings
$\mathbb{Z}[X]$ is not a principal ideal domain, so not every ideal is of the form $\langle f(X)\rangle$. To see that $I\subseteq \langle 5\rangle +\langle X\rangle$, note that every element $g(X)$ of $I$ can be written as $h(X)+5n$ where $n\in\mathbb{Z}$ and $h(0)=0$. Then $h\in\langle X\rangle$ and $5n\in\langle 5\rangle$, so $g\in \langle 5\rangle +\langle X\rangle$. To see that $\langle 5\rangle + \langle X\rangle\subseteq I$, note that every element $g(X)$ of $\langle 5\rangle+\langle X\rangle$ is a sum of the form $h(X)+5n$ where $n\in\mathbb{Z}$ and $h\in\langle X\rangle$, meaning that $h(0)=0$. Thus $g(0)=h(0)+5n=5n\in 5\mathbb{Z}$, so $g\in I$.
Is $\{u_{1},u_{2},\cdots,u_{k},v\}$ linearly independent?
Hint: $$ \lambda_1u_1+\dots+\lambda_ku_k+\lambda_{k+1}v=0 $$ $$\implies \lambda_1u_1+\dots+\lambda_ku_k+\lambda_{k+1}\left(u_{1}+u_{2}+\cdots+u_{k}+u_{k+1}\right)=0$$ $$\implies (\lambda_1+\lambda_{k+1})u_1+\dots+(\lambda_k+\lambda_{k+1})u_k+\lambda_{k+1}u_{k+1}=0$$ What can you conclude from the last equation?
Probability of "clock patience" going out
Here's an explanation of why it's 1/13. We are essentially dealing a randomly ordered deck into piles of four of a kind, and we require that the pile of kings be the last pile completed. By symmetry each of the 13 ranks is equally likely to finish last, so the probability of winning is 1/13. Maybe this is more convincing. Imagine playing the game backwards with the deck facing you so that you can see the cards. Remove the top card, a seven, say; the next card facing you is a two, say, so you place the seven in the pile at the two o'clock position; the next card facing you is a five, so you place the two at the five o'clock position. You now take the five from the deck and see a king beneath it, so you place the five in the king pile, and so on. You continue until you've placed all the cards and, if you are going to win this game (when it's played in the correct direction), the last card must be placed in the king pile, because that's where you take your first card from when you run the game in the right direction. Now the only way that can happen is if you placed a king down as your first card. And that's a 1/13 chance with a randomly ordered deck.
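The 1/13 claim is easy to confirm with a quick Monte Carlo sketch of the game as described above (ranks encoded 0-12, with 12 as the king):

```python
import random

def play_clock_patience(rng):
    """Deal a shuffled deck into 13 piles of 4 and play; return True on a win."""
    deck = list(range(13)) * 4          # ranks 0..11 plus 12 = king
    rng.shuffle(deck)
    piles = [deck[4 * i:4 * i + 4] for i in range(13)]
    current = piles[12].pop()           # start by turning the top of the king pile
    turned = 1
    while piles[current]:               # turn the top card of the current rank's pile
        current = piles[current].pop()
        turned += 1
    return turned == 52                 # win iff every card was turned

rng = random.Random(42)
trials = 20000
wins = sum(play_clock_patience(rng) for _ in range(trials))
print(wins / trials)  # close to 1/13 ~ 0.0769
```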