Limiting distribution of first order statistics ${X}^{n}$
I found the answer. So just in case anyone struggles with the same problem, here is the solution. $$F_{X_{1:n}}(x) = P[X_{1:n}\le x]\\ = 1-P[X_{1:n} \gt x]\\= 1- P[\min(X_{1},X_{2},\ldots,X_{n}) \gt x] \\ =1- P[X_{1}\gt x, X_{2} \gt x,\ldots,X_{n} \gt x] \\ =1-\prod_{i=1}^nP[X_{i} \gt x] = 1-[1-F(x)]^n = 1 - \left(1-\left(1- \frac{1}{x}\right)\right)^n \\ = 1 - \frac{1}{x^n} $$ Thus the limit of this goes to $1$ as $n$ goes to $\infty$ (for any fixed $x>1$). Similarly for part c) we have that: $$F_{X_{1:n}^n}(x) = P[X_{1:n}^n\le x]\\ = P[X_{1:n} \le x^{1/n}]\\ = 1 - (x^{1/n})^{-n} = 1- \frac{1}{x} $$ Hence the limit of this expression stays the same when $n$ approaches $\infty$.
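As a quick numerical sanity check, here is a Monte Carlo sketch (assuming, as above, that $F(x)=1-\frac1x$ on $[1,\infty)$): the empirical CDF of $X_{1:n}^n$ should match $1-\frac1x$ for any $n$.

```python
# Monte Carlo check (sketch): X_{1:n}^n should have CDF 1 - 1/x for every n,
# assuming F(x) = 1 - 1/x on [1, inf); inverse-CDF sampling gives x = 1/(1-u).
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 100_000
samples = 1.0 / (1.0 - rng.random((trials, n)))   # draws with CDF 1 - 1/x
stat = samples.min(axis=1) ** n                   # X_{1:n}^n per trial
for x in (2.0, 5.0, 10.0):
    print(x, (stat <= x).mean(), 1 - 1 / x)       # empirical vs exact CDF
```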
Removing redundant half-spaces that bound a convex polytope
The polytope can be written as $Ax \geq b$ (where rows of $A$ contain your $v$ vectors). For each row $v = [v_1 v_2 \ldots v_n]$ of $A$, solve the LP $$ \begin{array}{c} \min v^T x \\ Ax \geq b \end{array} $$ The half-space defined by this $v$ is not redundant if and only if the optimum value obtained above is equal to the corresponding $b$ value.
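Here is a minimal computational sketch of this test (the tooling is an assumption; SciPy's `linprog` minimizes $c^Tx$ subject to $A_{ub}x\le b_{ub}$, so $Ax\ge b$ is passed as $-Ax\le -b$):

```python
# Redundancy test via one LP per row (sketch): minimize v^T x over the
# polytope; the half-space is non-redundant iff the optimum equals b_i.
import numpy as np
from scipy.optimize import linprog

def redundant_rows(A, b, tol=1e-9):
    redundant = []
    for i, (v, bi) in enumerate(zip(A, b)):
        res = linprog(c=v, A_ub=-A, b_ub=-b, bounds=(None, None))
        if res.success and res.fun > bi + tol:   # optimum strictly above b_i
            redundant.append(i)
    return redundant

# Unit square plus the redundant constraint x + y >= -1 (last row)
A = np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [1, 1]], dtype=float)
b = np.array([0, 0, -1, -1, -1], dtype=float)
print(redundant_rows(A, b))   # -> [4]
```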
Why is $(x-a)(x-b)(x-c)...(x-z)=0$?
In listing out all the letters from $a$ to $z$, doesn't $x$ appear as well? What is the value of $x-x$? What is the value of $(x-x)\cdot(x-a)(x-b)(x-c)\cdots$?
Lower bound nuclear norm of $A$ by $\mathrm{tr}(|A|)$
Yes, your statement is true. In general, it can be shown that $$ \operatorname{Tr}(BA) \leq \|A\|_*\|B\|_{\infty}, $$ where $\|B\|_\infty$ is the spectral norm (i.e. the largest singular value) of $B$. Your statement can be attained by taking $B$ to be a suitable diagonal matrix with $\pm 1$ on the diagonal. See my earlier question here for more on the general inequality.
Finding a function f(n) such that T(n) = O(f(n))
For $|x|<1$, $$\sum_{i=0}^\infty x^i=\frac1{1-x}.$$ Differentiate that with respect to $x$: $$\sum_{i=0}^\infty ix^{i-1}=\sum_{j=0}^\infty(j+1)x^j=\frac1{(1-x)^2}$$ Rearrange it a little, and you can find $\sum i/2^i$. Differentiate again, rearrange it again, and you can find $\sum i^2/2^i$. This is a finite number, as you guessed. $$T(n)=n\log n\sum_{i=0}^{\infty}\frac{i^2}{2^i}=n\log n\sum_{i=0}^{\infty} i^2\left(\frac12\right)^i$$
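A quick numerical sanity check of the two differentiations (sketch; the closed forms are $\sum i/2^i=2$ and $\sum i^2/2^i=6$):

```python
# Partial sums converge to the closed-form values 2 and 6 (sketch).
s1 = sum(i / 2**i for i in range(200))
s2 = sum(i**2 / 2**i for i in range(200))
print(s1, s2)   # -> 2.0, 6.0 (up to floating-point error)
```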
What is the number of functions $f$ from the set $\{1, 2, . . . , 2n\}$ to $\{1, 2, . . . , n\}$ so that $f(x) \leq \lceil x/2 \rceil$ for all $x$?
As you state in your question, there is $1$ place the value $1$ can map to, and $1$ place that $2$ can map to. In general, there are $i$ places that $2i-1$ can map to and $i$ places that $2i$ can map to. So there will be $\color{red}{(n!)^2}$ such mappings.
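A brute-force check for small $n$ (sketch) confirms the count:

```python
# Count all functions f: {1..2n} -> {1..n} with f(x) <= ceil(x/2) and
# compare against (n!)^2 (sketch; feasible only for small n).
from itertools import product
from math import factorial, ceil

for n in range(1, 5):
    count = sum(
        all(f[x - 1] <= ceil(x / 2) for x in range(1, 2 * n + 1))
        for f in product(range(1, n + 1), repeat=2 * n)
    )
    print(n, count, factorial(n) ** 2)   # the two counts agree
```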
Solving limits by Taylor series
It may be helpful to change variables: let $u = x-1 \to 0$. Then the limit becomes $$ \begin{split} \lim_{x \to 1} \frac{\ln x}{x^2-1} &= \lim_{x \to 1} \frac{\ln x}{(x+1)(x-1)} \\ &= \lim_{u \to 0} \frac{\ln (1+u)}{u(u+2)} \\ &= \lim_{u \to 0} \frac{u - \frac{u^2}{2}+\frac{u^3}{3} \ldots}{u(u+2)} \\ &= \lim_{u \to 0} \frac{1 - \frac{u}{2}+\frac{u^2}{3} \ldots}{u+2} \\ &= \frac12 \end{split} $$
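For a quick confirmation, a CAS reproduces the value (sketch, using SymPy as an assumed tool):

```python
# Symbolic check of the limit (sketch).
import sympy as sp

x = sp.symbols("x")
print(sp.limit(sp.log(x) / (x**2 - 1), x, 1))   # -> 1/2
```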
How to construct a line with only a short ruler
You want to draw a line from $A$ to $B$. With the following method you can find the midpoint $M$ of this line. Repeating this method you will be able to find points of the desired line as close as you want to your initial points, and this is enough to draw the line. With a ruler and a compass you can draw lines of any length that you want. First of all we will draw a line passing through $A$ and close enough to $B$. In order to do this, proceed as follows:

1. Draw two lines starting at $A$ more or less close to $B$, say $l_1$ and $l_2$, and let $\alpha$ be the angle between them. If $B$ is located between the lines, OK. If not, draw a fan of lines forming an angle $\alpha$ with the previous ones until you get two lines for which $B$ is located between them.
2. Now we have two lines starting at $A$ and with $B$ located "inside". Let $l_3$ be the bisector.
3. Repeat step 2 enough times, each time choosing the pair of lines such that the point $B$ is "inside".

After this process we get a line $l$ passing through $A$ as close as we want to $B$. Since $B$ and $l$ are close enough, we can draw with our small ruler and our small compass a line $r$ orthogonal to $l$ passing through $B$. So we have a right triangle with vertices at $A$ and $B$ and legs on $l$ and $r$. Transporting angles from $B$ to $A$ you can draw a rectangle with vertices $A$, $E$, $B$ and $F$. This rectangle has the property that its height (the length of the segment $AE$) is small enough, and its diagonal $AB$ is the desired line. Now, using additional lines, we are able to find the midpoints of the long sides. Let $E_1$ be the midpoint of $AE$ and $F_1$ the midpoint of $BF$. Since $E_1$ and $F_1$ are close enough to each other, we can determine the point $M$ in the middle of them, and this point belongs to the desired line $AB$. Iterating this algorithm you can draw the line. I hope you can understand my explanation... sorry about my English!
Let $f,g : \Bbb{R} \to \Bbb{R}$ such that $\left| g \left(f(x)\right) - g \left(f(y)\right)\right| \lt \left|x-y\right|$, with f not continuous..
Let $f(x) = \left\{\begin{array}{rl} x & \mathrm{if}\ x < 0 \\ x-2 & \mathrm{if}\ x \geq 0 \\ \end{array}\right.$. If we choose our sequence of $(x_n)$ converging to $0$ from the right, then we get that the $f(x_n)$ converges to $f(0)$. But if we choose our sequence of $(x_n)$ converging to $0$ from the left, then the $f(x_n)$ converge to $0 \neq f(0) = -2$.
Explanation to Linear Independency
If you want a 'formal' way to prove that the third row is independent of the first two, you can do so by contradiction: Suppose that $r_3$ is a linear combination of $r_1$ and $r_2$. Then for some $\alpha, \beta$, not both zero, we have that $$(\alpha,\alpha a, 0, \alpha b, \alpha d, 0) + (0,0,\beta,\beta c, \beta e, 0) = \alpha r_1 + \beta r_2 = r_3 = (0,0,0,0,0,1)$$ Comparing the last coordinates gives $0 + 0 = 1$, a contradiction. Hence $r_3$ cannot be expressed as a linear combination of $r_1$ and $r_2$. That essentially captures the idea of what you wrote. You can write something similar to show that $r_2$ is not a linear combination (i.e. a scalar multiple) of $r_1$, and you're done.
Null set of Bernoulli distribution
Hint: For $\ A\in\mathscr{B}\ $, $\ P(A)\ $ must have one of four values: $$ P(A)=\cases{0&if $\ 0\not\in A$ and $\ 1\not\in A$\\ p&if $\ 0\in A\ $ but $\ 1\not\in A$\\ 1-p&if $\ 1\in A\ $ but $\ 0\not\in A$\\ 1&if $\ 0\in A\ $ and $\ 1\in A\ $.} $$
little-o and 3 functions
In general, it is not true. If $\displaystyle \lim_{x \rightarrow \infty} \frac{f(x)}{g(x)}$ exists and is not equal to $0$, then it is true. Otherwise, it is not.
Help with a probability problem
What you’ve done is correct, though you can shorten it a little by thinking of the first nine boxes as one box that you pick with probability $0.9$. Then the probability of drawing a white ball is $$\frac9{10}\cdot\frac12+\frac1{10}\cdot\frac56=\frac{32}{60}\;,$$ to which the last box contributes $\frac5{60}$, so the desired probability is $$\frac{5/60}{32/60}=\frac5{32}\;.$$ Or you can be really slick and notice that the probabilities aren’t changed if we add one white and one black ball to each of the first nine boxes. Now, however, all boxes contain the same number of balls, so picking a box at random and then drawing a ball from the box is equivalent to picking a ball at random: each ball has one chance in $60$ of being chosen. And since there are now $32$ white balls, $5$ of which are in the last box, we get the desired result immediately.
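A Monte Carlo sketch of the setup as reconstructed from the answer (nine boxes with $P(\text{white})=\frac12$, a tenth with $P(\text{white})=\frac56$, each box equally likely; the target is $P(\text{last box}\mid\text{white})$):

```python
# Simulate box choice and draw; estimate P(last box | white ball) (sketch).
import random

random.seed(0)
last_and_white = white = 0
for _ in range(1_000_000):
    box = random.randrange(10)
    p_white = 5 / 6 if box == 9 else 1 / 2
    if random.random() < p_white:
        white += 1
        last_and_white += (box == 9)
print(last_and_white / white, 5 / 32)   # both ~0.15625
```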
Was there a case in history when a theorem was incorrectly proved, but people still used it without realizing that theorem was wrong?
There are many such cases. A detailed list of well-known examples can be found in this link. One example: the following enjoyed the status of a theorem for 15 years before the error was detected and the statement of the theorem was revised. Grunwald (1933) gave an incorrect proof of the erroneous statement that an element in a number field is an $n$-th power if it is an $n$-th power locally almost everywhere. George Whaples (1942) gave another incorrect proof of this incorrect statement. However, Wang (1948) discovered the following counterexample: $16$ is a $p$-adic 8th power for all odd primes $p$, but is not a rational or 2-adic 8th power. In his doctoral thesis, Wang (1950) gave and proved the correct formulation of Grunwald's assertion, by describing the rare cases when it fails. This result is what is now known as the Grunwald–Wang theorem.
Does an infinite series with an unbounded number of terms with the same value converge
Of course the answer is, it depends. Roughly speaking, it depends on the comparison between the rates of growth of $\vert S_i\vert$ and the rate of shrinkage of $c_m$. If the $c_m$s shrink "much faster" than the sizes of the $S_i$s grow, then the corresponding series will converge; otherwise, it will diverge. The rate of growth of $\vert S_i\vert$ on its own tells you absolutely nothing.
Number of ways for 2 objects to not be beside each other in a line
There are $3!=6$ ways to arrange C, D, and E. Once you’ve arranged them, you must pick $2$ of the $4$ slots defined by them into which to insert A and B. (There is one slot to the left of all of them and one to the right of all of them, and there are $2$ between adjacent letters.) There are $\binom42=6$ ways to do that. Finally, you have to decide which of A and B goes in the leftmost of the $2$ slots that you chose, and you can do that in $2$ ways. The total number of possibilities is therefore $$3!\cdot\binom42\cdot2=6\cdot6\cdot2=72\;.$$ Thus, you’re short by a factor of $2$.
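A brute-force enumeration (sketch) confirms the count of $72$:

```python
# Count arrangements of A..E in a line with A and B not adjacent (sketch).
from itertools import permutations

count = sum(
    abs(p.index("A") - p.index("B")) > 1
    for p in permutations("ABCDE")
)
print(count)   # -> 72
```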
Proving that the maximal abelian extension contains all abelian extensions
I had a much longer answer prepared, but I think that this should suffice. If you want justification of my argument, I’ll expand this answer. You’re really asking whether the compositum of all finite abelian extensions of $K$ is equal to the compositum of all abelian extensions of $K$, whether finite or not. But all our extensions are algebraic, and every infinite algebraic extension is the compositum of its finite subextensions. So the compositum of all abelian extensions of $K$ is also the compositum of all finite abelian extensions.
Is there an algebraic-geometric solution to the problem of the Leibnizian formalism?
I will just try to describe the closest things that I have seen, mainly focussing on synthetic differential geometry but with a mention of algebraic geometry at the end. The wishlist is:

- $f(x+dx)=f(x)+f'(x)dx$
- in particular, it should allow addition
- it should generalize to higher derivatives

A basic idea is to use nilpotent elements. If you have a polynomial $f=\sum_{n=0}^N c_nx^n,$ then $f(x+\epsilon)=\sum_{n=0}^N c_n(x+\epsilon)^n=f(x)+\epsilon \sum_{n=0}^N nc_nx^{n-1}=f(x)+\epsilon f'(x)$ in the ring $\mathbb C[x,\epsilon]/(\epsilon^2).$ The technicalities come from trying to apply this local definition to varieties or manifolds. The Wikipedia smooth infinitesimal analysis article only mentions nilsquare infinitesimals, but in synthetic differential geometry higher order differentials can be defined. Kock's book on SDG (http://home.math.au.dk/kock/sdg99.pdf) defines subsets $$D_n=\{x\in R \mid x^{n+1}=0\}$$ where $R$ is the base ring - like the reals but with infinitesimals. (Beware that $D_n$ is not an ideal - $(\epsilon_1+\epsilon_2)^2$ is not necessarily zero even if $\epsilon_1^2=\epsilon_2^2=0.$ This algebraic fact is an important point in that AMS article of Mumford you mentioned.) These can be collected into the nilpotent ideal $$D_\infty=\bigcup_{n\geq 0}D_n.$$ SDG postulates that the restriction $f|_{D_\infty}:D_\infty\to R$ of any map $f:R\to R$ is given uniquely by a formal Taylor series near zero. You can compose with translations to get Taylor series near other points. These formal Taylor series compose in the usual way, giving the chain rule. The space of maps $D_n\to R$ becomes a free, finitely generated $R$-module. If $f(0)=0,$ then $f:R\to R$ restricts to an $R$-linear map from $D_1$ to $D_1,$ which is just the familiar notion of the derivative of a differentiable map as a linear map of tangent spaces. Using translation to handle possibly non-zero $x$ and $f(x),$ this formalizes $f(x+dx)=f(x)+f'(x)dx$ - take $dx$ to be an indeterminate element of $D_1.$ Geometrically, a map $g:D_n\to R$ is an element of the $n$-th order jet space around zero. In particular, $g:D_1\to R$ is a tangent vector at $g(0).$ These jet spaces are ordinary objects used in "non-synthetic" differential geometry. For manifolds you can use addition in local co-ordinates. You can't reliably add non-infinitesimal quantities, because the result might fall outside the parameterization. And for anything except tangent vectors, the addition is not geometric: it is not parameterization-invariant. An example for $2$-jets is that the $\mathbb R\to\mathbb R$ curves $t\mapsto t$ and $t\mapsto -t$ add to zero as $\infty$-jets in the usual co-ordinates, but if we change co-ordinates locally via $x'=x+x^2$ these curves become $t\mapsto t+t^2$ and $t\mapsto -t+t^2,$ which don't add to zero. In algebraic geometry there is a simple construction of the "Zariski tangent space" as the dual of the cotangent space. And there are constructions of $n$-jets. I think these can be packaged up into a single object using maps of completed local rings, or using formal schemes, but that's going way outside my comfort zone. You might be interested in:

- A beginner's guide to jet bundles from the point of view of algebraic geometry, Ravi Vakil, a note listed as "not intended for publication", http://math.stanford.edu/~vakil/files/jets.pdf
- Jets via Hasse-Schmidt Derivations, Paul Vojta, https://arxiv.org/abs/math/0407113
Calculate the sum of $\sum_{n=1}^{+\infty} \frac{1+2^n}{3^n}$
As Martin R noticed, observe that you have $$ \sum_{n=0}^\infty r^n=\frac1{1-r}, \quad |r|<1, $$ but $$ \sum_{n=\color{red}{1}}^\infty r^n=\frac {\color{red}{r}}{1-r}, \quad |r|<1. $$
Is the subdifferential always convex and closed set?
For a vector $u$ to be an element of the subdifferential, it is necessary and sufficient to have: $$f(y)\geq f(x) + \langle y-x,u\rangle, \forall y$$ Hence the subdifferential can be written as: $$\cap_y \{ u \big| f(y)\geq f(x) + \langle y-x,u\rangle \} $$ This representation is the intersection of closed convex sets. Therefore it is closed and convex. The second part seems like a tautology to me. The subdifferential could be empty or non-empty?
Uniqueness of Complement Subspace Decomposition
$w_1-w_2=z_2-z_1$. Since $Z$ and $W$ are subspaces the left side belongs to $W$ and the right side to $Z$. Hence they belong to $Z \cap W=\{0\}$ which gives $z_1=z_2$ and $w_1=w_2$.
Find a convex polygon $P$ s.t. $P\subset Q \subset (1+\epsilon) P $.
Assume that $$ A_0 = \{ \delta_0 (m,n)\mid m,\ n\in \mathbb{Z} \}. $$ If $\delta_0 < \frac{R}{2}$, then $P_0 = {\rm conv}(A_0 \cap Q)$ contains an $\frac{R}{2}$-ball $B$. If $v \in \partial B,\ v'\in \partial P_0,\ v'' \in \partial Q$ are such that $v,\ v',\ v''$ lie on some ray starting at the origin, then note that $$ |v'-v''| \leq {\rm diam}\ Q.$$ If $\delta = \delta_0 \Big/ \frac{{\rm diam}\ Q}{\frac{R}{2}\varepsilon }$, i.e. $\epsilon = \frac{2\delta\, {\rm diam}\ Q }{R\delta_0}$, then $P$ is the desired one.
Proof of Stone Weierstrass Theorem from Hahn Banach
It is not compact in the norm topology. The proof uses the weak* topology, and $(A^{\perp})_1$ is compact in this topology by the Banach–Alaoglu theorem.
Finding parameters a,b in a matrix with given eigenvalues
Let $\lambda$ be an eigenvalue of $A$. Then$$|A-\lambda I|=\begin{vmatrix}7-\lambda&-4&0\\a&-7-\lambda&b\\3&-2&-\lambda\end{vmatrix}=0$$Putting $\lambda=1$,$$\begin{vmatrix}6&-4&0\\a&-8&b\\3&-2&-1\end{vmatrix}=0\implies a=12$$Putting $\lambda=-1$,$$\begin{vmatrix}8&-4&0\\12&-6&b\\3&-2&1\end{vmatrix}=0\implies b=0$$
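A quick numeric confirmation with the computed values $a=12$, $b=0$ (sketch):

```python
# The matrix with a = 12, b = 0 should have 1 and -1 among its eigenvalues.
import numpy as np

A = np.array([[7, -4, 0], [12, -7, 0], [3, -2, 0]], dtype=float)
print(np.linalg.eigvals(A))   # -> 1, -1 (and 0)
```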
How can we prove that this set of probability density functions is compact
The set $S$ is not sequentially compact (in the probabilist's sense of convergence in distribution). Let $f$ be any probability density satisfying the above condition. Then set $f_n(x) := f(x-n)$. Clearly each $f_n$ also has the above property, so we have a sequence in $S$. However, the sequence cannot have a convergent subsequence, since the corresponding CDFs are not tight. (Nonzero means some set has positive mass, and shifting that mass off to infinity destroys tightness.) Reference about tightness: see lemma (a) on page 184 of David Williams: Probability with Martingales (Google Books link).
Convergence in random variables
Let $X_i$ be the number of dots of the $i$-th throw, $1\le i\le n=20$. Then $\operatorname{E}[\sum_{i=1}^nX_i]=\sum_{i=1}^n\operatorname{E}[X_i]=n\cdot\sum_{j=1}^{6}\frac{j}{6}=3.5n=70$. So you have the mean $\mu$, which lies perfectly in the middle of the given range $[60,80]$. Calculate the standard deviation $\sigma$, and apply the formula for some multiplier that fits the situation.
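A numeric sketch of how the hint plays out (assuming the intended formula is Chebyshev's inequality):

```python
# Chebyshev bound for P(|S - 70| < 10), S = sum of 20 fair dice (sketch).
n, mu, k = 20, 70, 10
var_single = sum(j**2 for j in range(1, 7)) / 6 - 3.5**2   # 35/12
var = n * var_single                                        # 175/3
print(1 - var / k**2)   # lower bound ~0.4167 on P(|S - mu| < 10)
```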
Getting the cumulative distribution function for $\sqrt{X}$ from the cumulative distribution function for $X$
I am not sure but maybe this is what you are after: $$F_{X^2}(x)=\mathbb P(X^2\leq x)=\mathbb P(-\sqrt x\leq X\leq \sqrt x)=F(\sqrt x)-F_{-}(-\sqrt x)$$ This for $x\geq0$. It is evident that $F_{X^2}(x)=0$ if $x<0$. Here $F_{-}(x)$ stands for $\lim_{y\rightarrow x-}F(y)=\mathbb P(X<x)$. If $F$ is continuous then $F_{-}=F$.
Solve $(x^2+1)y''-2xy'+2y=0$
Hint: Put $x^2+1 = t$, differentiate, and substitute back into your ODE. Alternatively, think backwards: maybe $y=x^n$ is a solution of your ODE. Then it must satisfy the ODE: \begin{align} y'&=nx^{n-1}\\ y''&=n(n-1)x^{n-2} \end{align} \begin{align} (x^2+1)n(n-1)x^{n-2}-2xnx^{n-1}+2x^n&=0\\ (n^2-3n+2)x^n+n(n-1)x^{n-2}&=0\\ (n-1)(n-2)x^n+n(n-1)x^{n-2}&=0\\ (n-1)((n-2)x^n+nx^{n-2})&=0 \end{align} So we luckily find that $y=x$ (the case $n=1$) is one solution of your ODE. The rest of the work is simply the variation of parameters method.
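A symbolic verification (sketch, using SymPy as an assumed tool) that $y=x$ and the second solution $y=x^2-1$ (the standard result for this equation) both satisfy the ODE:

```python
# Plug y = x and y = x^2 - 1 into (x^2+1)y'' - 2xy' + 2y and simplify.
import sympy as sp

x = sp.symbols("x")
for y in (x, x**2 - 1):
    lhs = (x**2 + 1) * sp.diff(y, x, 2) - 2 * x * sp.diff(y, x) + 2 * y
    print(sp.simplify(lhs))   # -> 0 for both
```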
An orthonormal set in a separable Hilbert space is complete (is a basis) if its distance to another orthonormal basis is bounded
Hint: Write $\|\langle b_n,x\rangle b_n-\langle a_n,x\rangle a_n\|=\|\langle b_n,x\rangle b_n-\langle b_n,x\rangle a_n+\langle b_n,x\rangle a_n-\langle a_n,x\rangle a_n\|$ $\leq \|\langle b_n,x\rangle(b_n-a_n)\|+\|\langle b_n-a_n,x\rangle a_n\|\leq (|\langle b_n,x\rangle|+\|a_n\|\|x\|)\|b_n-a_n\|$.
Observability matrix of a state-space representation
You would want to factor out as much as possible, so also $X_0$, and use the fact that each $\phi_i(t)$ is scalar and thus commutes. Therefore, it is also possible to write your expression for $Y(t)$ as $$ Y(t) = \begin{bmatrix} \phi_0(t)\,I & \phi_1(t)\,I & \cdots & \phi_{n-1}(t)\,I \end{bmatrix} \begin{bmatrix} C \\ C\,A \\ \vdots \\ C\,A^{n-1} \end{bmatrix} X_0, $$ with $I$ the identity matrix of size $p \times p$.
about shortest path between points
My guess/method would be to let $P'=(0,-1)$ and $Q'=(4,3)$ and draw a straight line from $P'$ to $Q'$. Wherever that line crosses the $x$-axis and the line $y=2$ gives your points $A$ and $B$. The length of the path would be $4\sqrt{2}$.
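Numerically (sketch):

```python
# Length of the reflected straight-line path from P' to Q' (sketch).
from math import dist, sqrt

print(dist((0, -1), (4, 3)), 4 * sqrt(2))   # both ~5.6569
```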
Writing the general term of a sequence without double factorials
Note that $$1\times3\times5\times\cdots\times(2n-1) =\frac{1\times 2\times 3\times\cdots\times (2n)}{2\times 4\times 6\times\cdots\times(2n)}$$ etc.
Exercise 5C10 in Isaacs' Finite Group Theory
Isaacs's Prop 5.18 states that whenever $G$ is a finite group with an abelian Sylow $p$-subgroup $P$, then $Z(N_G(P)) \cap G' \cap P = 1$. In our case $G=G'$ so we get that $Z(N_G(P)) \cap P = 1$. Of course $Z(N_G(P)) \cap P = C_P( N_G(P))$ are exactly those elements of $P$ that are left alone by every conjugation from $N_G(P)$. Since the group of conjugations from $N_G(P)$ is exactly $N_G(P)/C_G(P) \leq \newcommand{\Aut}{\operatorname{Aut}}\Aut(P)$, and since $P \leq C_G(P)$ so that $N_G(P)/C_G(P)$ must be a group of odd order, we are interested in the odd order subgroups of $\Aut(P)$ for $P$ an abelian group of order 8. If $P=C_8$ then $\Aut(P) \cong C_2 \times C_2$ has no non-identity subgroups of odd order, so $N_G(P)/C_G(P) = 1$ and $N_G(P) = C_G(P)$ and $C_P( N_G(P)) = P \neq 1$. Oops. If $P=C_4 \times C_2$ then $\Aut(P) \cong D_8$ has no non-identity subgroups of odd order, so oops again. If $P=C_2 \times C_2 \times C_2$ then $\Aut(P) \cong \operatorname{GL}(3,2)$ has odd order subgroups of orders 1, 3, 7, and 21. The ones of orders 1 and 3 centralize some non-identity elements of $P$, so oops. The ones of orders 7 and 21 are fine. The one of order 7 creates what is called AGL(1,8) fusion and produces the simple group PSL(2,8). The one of order 21 creates what is called AΓL(1,8) fusion and produces the simple group J1 and ${}^2G_2(3^{2n+1})$ for $n \geq 1$.
How to solve non trivial first order differential equations with integrating factor
After having a brief look at the method that you intend to use, I noticed that this method is valid for a certain form of ODE. This is not the case for the ODE: $$3xy + y^2 + (x^2 + xy) y' = 0$$ where $p$ and $q$ are not functions of $x$ only. We have to use a more general form of the method of integrating factors. The ODE can be presented as: $$(3xy+y^2)dx+(x^2+xy)dy=0$$ The aim is to find an integrating factor $\mu(x,y)$ in order to obtain a total derivative of a function $F(x,y)$ to be determined: $$\mu(x,y)\left((3xy+y^2)dx+(x^2+xy)dy \right)=dF(x,y)$$ so that the ODE will become $dF(x,y)=0 \quad\to\quad F(x,y)=C$ $$dF=\frac{\partial F}{\partial x}dx+\frac{\partial F}{\partial y}dy \quad\to\quad \begin{cases} \frac{\partial F}{\partial x}=\mu(x,y)(3xy+y^2)\\ \frac{\partial F}{\partial y}=\mu(x,y)(x^2+xy) \end{cases}$$ $$\frac{\partial^2 F}{\partial x\partial y}=\frac{\partial }{\partial x}\mu(x,y)(x^2+xy)=\frac{\partial }{\partial y}\mu(x,y)(3xy+y^2)$$ $$(x^2+xy)\frac{\partial \mu(x,y) }{\partial x}+(2x+y)\mu(x,y)=(3xy+y^2)\frac{\partial \mu(x,y) }{\partial y}+(3x+2y)\mu(x,y)$$ At this point, the general process could become very complicated. Since the problem is probably a textbook case, we can suppose that $\mu(x,y)$ is a very simple function. We try simpler forms like $\mu(x)$ or $\mu(y)$, or more complicated ones if these are not adequate. For example, with the variable $x$ only: $$(x^2+xy)\frac{d \mu(x) }{d x}+(2x+y)\mu(x)=(3x+2y)\mu(x)$$ This equation must contain functions of $x$ only $\quad\to\quad \begin{cases} x\frac{d \mu }{d x}+\mu=2\mu \\ x^2\frac{d \mu }{d x}+2x\mu=3x\mu \end{cases} \quad\to\quad \mu(x)=x$ We are not looking for the general solution for $\mu(x)$: any one particular solution is enough. So $\mu=x$ will do very well, because it is simple. $$x\left((3xy+y^2)dx+(x^2+xy)dy \right)=dF(x,y)$$ $$(3x^2y+xy^2)dx+(x^3+x^2y)dy=dF(x,y)=0$$ $$d(x^3y+\frac{1}{2}x^2y^2)=dF(x,y)=0$$ $$F(x,y)=x^3y+\frac{1}{2}x^2y^2=C$$ $$x^2y^2+2x^3y-2C=0$$ $$y=\frac {-x^3 \pm \sqrt{x^6 +2Cx^2}}{x^2}=\frac {-x^2 \pm \sqrt{x^4 +C'}}{x}$$ IN ADDITION: If we don't want to make an assumption about a simplified form for $\mu(x,y)$, we have to continue from the above equation: $$(x^2+xy)\frac{\partial \mu(x,y) }{\partial x}+(2x+y)\mu(x,y)=(3xy+y^2)\frac{\partial \mu(x,y) }{\partial y}+(3x+2y)\mu(x,y)$$ $$(x^2+xy)\frac{\partial \mu(x,y) }{\partial x}-(3xy+y^2)\frac{\partial \mu(x,y) }{\partial y}=(x+y)\mu(x,y)$$ Solving this PDE thanks to the method of characteristics starts from the set of ODEs for the characteristic curves: $$\frac{dx}{x^2+xy}=\frac{dy}{-(3xy+y^2)}=\frac{d\mu}{(x+y)\mu}$$ With the first ODE: $\frac{dx}{x^2+xy}=\frac{dy}{-(3xy+y^2)}$, not surprisingly we come back to the very beginning: $(3xy+y^2)dx+(x^2+xy)dy=0$. This is of no interest. With the second ODE: $\quad\frac{dx}{x^2+xy}=\frac{d\mu}{(x+y)\mu}\quad\to\quad \frac{dx}{(x+y)x}=\frac{d\mu}{(x+y)\mu}\quad$ it is obvious that $\quad \mu=x\quad$ is a solution. We are not looking for all solutions for $\mu$; only one (any one) is sufficient. This $\mu=x$ is what we were looking for. The rest of the calculation was already shown above. NOTE: Some particular forms for the integrating factor are ready-made, avoiding a long calculation. For example, see the cases (11), (12), (13) in: http://mathworld.wolfram.com/OrdinaryDifferentialEquation.html
How to separate a partial differential equation where R is a function of three variables?
The argument parallels the two variable case. Setting $R(x, y, z) = X(x)Y(y)Z(z), \tag{1}$ we have $X_{xx}(x)Y(y)Z(z) + X(x)Y_{yy}(y)Z(z) + X(x)Y(y)Z_{zz}(z) = 0, \tag{2}$ and dividing through by $X(x)Y(y)Z(z)$ we obtain $X_{xx} / X + Y_{yy} / Y + Z_{zz} / Z = 0, \tag{3}$ which we write as $X_{xx} / X = -Y_{yy} / Y - Z_{zz} / Z. \tag{4}$ Now we note that, since the two sides depend upon different independent variables, there must be a constant, call it $-k_x^2$, to which they are each equal, thus: $X_{xx} / X = -k_x^2, \tag{5}$ or $X_{xx} + k_x^2X = 0, \tag{5A}$ and $Y_{yy} / Y + Z_{zz} / Z = k_x^2. \tag{6}$ Having separated out the $x$ dependence, we write (6) as $Y_{yy} / Y = k_x^2 - Z_{zz} / Z, \tag{7}$ and once again observe that the two sides depend on different independent variables, so again each must equal some constant value, call it $-k_y^2$ this time: $Y_{yy} / Y = -k_y^2 = k_x^2 - Z_{zz} / Z, \tag{8}$ which leads to $Y_{yy} + k_y^2Y = 0 \tag{9}$ and $Z_{zz} + k_z^2Z = 0, \tag{10}$ where we have set $k_z^2 = -(k_x^2 + k_y^2). \tag{11}$ It should be noted that $k_x^2 + k_y^2 + k_z^2 = 0, \tag{12}$ so that at least one of the three numbers $k_x, k_y, k_z$ must be complex. In the typical case occurring in practical applications, the $k_x, k_y, k_z$ are either real or pure imaginary, leading to solutions of (5A), (9), (10) which are respectively periodic or exponential, again analogous to the two-dimensional case. Finally, it is worth noting that the techniques outlined above easily extend to the $n$-dimensional case of the equation $\sum_1^n R_{x_jx_j} = 0; \tag{13}$ if we set $R = \prod_1^nX_j(x_j), \tag{13A}$ we obtain $n$ equations of the form $d^2X_j / dx_j^2 + k_j^2X_j = 0 \tag{14}$ with $\sum_1^nk_j^2 = 0; \tag{15}$ the details are easy to execute and left to the reader. As is well-known, the solutions $X_j(x_j)$ are of the form $X_j(x_j) = a_+e^{ik_jx_j} + a_-e^{-ik_jx_j} \tag{16}$ for suitably chosen $a_\pm$. Hope this helps. Cheerio, and as always, Fiat Lux!!!
Work = line integral over closed loop
You have at least two options: $$ \oint_C \vec{G}\cdot d\vec{r} = \int_0^{2\pi} \vec{G}(\vec{r}(t))\cdot \vec{r}'(t) dt $$ but you end up with a long integral. A better option is to use Green's theorem: $$ \oint_C \vec{G}\cdot d\vec{r} = \iint_D Q_x-P_y \;dA = \iint_D 1 -12y + 12 y \; dA = A(D) = \pi $$
How to find the derivative of the flow of an autonomous differential equation with respect to $x$
Your confusion comes from an abuse of notation. The derivative is taken with respect to the initial value $x(0)$, rather than the position $x(t)$. The integral ought to be $$\eta(x(0))=\int_0^\infty e^{at}\xi(\phi_t(x(0)))\,dt$$ but this looks rather tedious, and hence the parentheses are dropped. To make a clear statement avoiding $x$ explicitly, let's assume $x(t)=\phi_t(p)$, with $x(0)=\phi_0(p)=p$; then we can write $$\frac{d}{dp}\xi(\phi_t(p))=\frac{d\xi(\phi_t)}{d\phi_t}\frac{d\phi_t}{dp}$$
How to find sum of factors of $2^{2012}$?
Since $2$ is the only prime factor, the sum of the factors of $2^{2012}$ is indeed: $$\sum\limits_{k=0}^{2012} 2^k = 2^0+2^1+2^2+\cdots+2^{2012}\quad\quad\color{red}{\checkmark}$$ Next notice that this is a geometric progression, thus use: $$\sum\limits_{k=0}^{n} ar^k = \dfrac{a(r^{n+1}-1)}{(r-1)}$$
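A direct evaluation (sketch) confirms the geometric-sum formula with $a=1$, $r=2$, $n=2012$:

```python
# The sum of the divisors of 2**2012 equals 2**2013 - 1 (sketch).
s = sum(2**k for k in range(2013))
print(s == 2**2013 - 1)   # -> True
```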
A Covering Map $\mathbb{R}P^2\longrightarrow X$ is a homeomorphism
What about using Euler characteristic? Euler characteristic is multiplicative for a covering map: If $E\to B$ is an $n$-sheeted covering space and $E$ is compact, then $\chi(E)=n\chi(B)$. Since $\chi(\mathbb RP^2)=1$, we're done.
At what angle does the stone have to be hit?
Within a total angle of $4\arcsin{\frac{5.5}{(12\times87)+5.5}}$ or approximately 1 degree + 12 minutes + 3.8175 seconds. It should be noted that the configuration shown in your diagram cannot be attained without curling the path of the moving stone, because when the two stones become tangent at the furthest extent of the possible contact angle, the center of the moving stone is not yet up to the line that you have the three tangent stones shown at. At the point when the moving stone is going to be minimally tangent to the stationary stone the distance between the point of release and the center of the moving stone should equal the distance between the point of release and the center of the target stone. 87 feet + 5.5 inches, in other words. The 5.5 inches is the radius of a stone, of course. By the way, from what part of Canada are you?
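Evaluating the stated angle numerically (sketch; distances in inches, with $87$ ft $=1044$ in):

```python
# Total contact angle for the curling shot (sketch).
from math import asin, degrees

angle = degrees(4 * asin(5.5 / (12 * 87 + 5.5)))
print(angle)   # ~1.2011 degrees, i.e. about 1 deg 12 min 4 sec
```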
Radius of convergence: Why is it $\geq 1$?
The note says that if $|z| \leq 1$, then we have absolute convergence, since the sum of the probabilities is bounded above by 1. Therefore, the radius of convergence is at least 1, hence $r_X \geq 1$. We do not have enough information to conclude how much bigger (if at all) the radius of convergence is.
power series and sequence
Hint: what about the power series $$\sum_{n=0}^\infty t^n$$
Proof explanation of $\prod_{k=2}^n \big(1- \frac{2}{k(k+1)} \big) = \frac{1}{3} \big(1+\frac{2}{n} \big), n \geq 2$
In the first step we are using the induction hypothesis, that is, $$\prod_{k=2}^n \left(1- \frac{2}{k(k+1)} \right) = \frac{1}{3} \left(1+\frac{2}{n} \right)$$ the second one is $$= \frac{(n+1)(n+2)-2}{(n+1)\color{red}{(n+2)}} \cdot \frac{1}{3} \cdot \frac{\color{red}{n+2}}{n} = \frac{1}{3} \frac{(n+1)(n+2)-2}{n(n+1)}$$ and finally $$=\frac{1}{3} \frac{n^2+3n+2-2}{n(n+1)} =\frac{1}{3} \frac{n(n+3)}{n(n+1)}= \frac{1}{3} \cdot \frac{n+3}{n+1 }=$$$$= \frac{1}{3} \cdot \frac{n+1+2}{n+1 }= \frac{1}{3}\cdot \left(\frac{n+1}{n+1} +\frac{2}{n+1}\right)$$
continuity equation for measures from a purely mathematical point view
Okay, I've found a good reference (at least it has all the elements I was looking for), so I'm gonna leave it here in case anyone else is looking for something similar - I'm using a book by Ambrosio, Gigli and Savare "Gradient flows in metric spaces and in the space of probability measures" published by Birkhauser, chapter 8 in particular.
The eigenfunctions of $\Delta \colon H_0^1(\Omega)\cap H^2(\Omega) \to L^2(\Omega)$ form an orthonormal basis?
We can prove the existence and boundedness of the inverse Laplacian using the Riesz representation theorem for Hilbert spaces. First, let us define the bilinear form $B[ \ , \ ]$ on $H_0^1(\Omega)$ as follows: $$ B[u, v] = \int_{\Omega} \sum_i \partial_{x_i} u \partial_{x_i}v.$$ It is possible to prove a couple of inequalities:

1. $|B[u,u]| \leq || u ||^2_{H_0^1(\Omega)}$
2. $|| u ||_{H_0^1(\Omega)}^2 \leq c B[u, u]$ for a suitably chosen value of $c$.

The first inequality is obvious. The second isn't much harder - it just requires some fiddling around with the Poincare inequality. These inequalities tell us that the norm associated to the inner product $B[ \ , \ ]$ is equivalent to the original Sobolev norm $|| \cdot ||_{H_0^1(\Omega)}$. Therefore, since $H_0^1(\Omega)$ is complete with respect to the Sobolev norm, it must also be complete with respect to $B[\ , \ ]$. As a consequence, we can legitimately apply the Riesz representation theorem in $H_0^1(\Omega)$ using $B[ \ , \ ]$ instead of $( \ , \ )_{H_0^1(\Omega)}$ as our inner product. Now let's use the Riesz representation theorem in this way to deduce that the Laplacian operator has a bounded inverse. To be more precise, we want to show that, for every $g \in L^2(\Omega)$, there exists a unique $u_g \in H_0^1(\Omega)$ such that $$ B[u_g, v ] = \int_{\Omega} g v \ \ \ \ \ \ \ \ \ {\rm for \ all \ } v \in H_0^1(\Omega) $$ and moreover, the mapping $$ g \mapsto u_g $$ is a bounded linear map from $L^2(\Omega)$ to $H_0^1(\Omega)$. This conclusion does indeed follow from the Riesz representation theorem, and the way to apply the Riesz representation theorem here is to think of $v \mapsto \int_\Omega g v $ as a linear functional on $H_0^1(\Omega)$ whose norm is no greater than $|| g ||_{L^2(\Omega)}$. Notice that the $u_g$ that we have constructed is a solution to the equation $ - \nabla^2 u = g$ in the weak sense. So if we are content to use the notation $\mathcal L$ for the $ - \nabla^2$ operator, then we may as well use the notation $\mathcal L^{-1}$ for our newly constructed bounded operator $L^2(\Omega) \to H_0^1(\Omega)$ sending $g \mapsto u_g$. Having defined our bounded inverse operator $\mathcal L^{-1} : L^2(\Omega) \to H_0^1(\Omega)$, I'll now discuss eigenfunctions. This is where we get to apply the Rellich-Kondrachov theorem and the spectral theorem for compact operators. Let us define a weak eigenfunction of the Laplacian $\mathcal L$ (corresponding to the eigenvalue $k$) to be a $u \in H_0^1(\Omega)$ such that $$ B[u, v] = k \int_\Omega u v \ \ \ \ \ \ \ \ \ {\rm for \ all \ } v \in H_0^1(\Omega) $$ We can immediately rephrase this definition in terms of our inverse operator $\mathcal L^{-1}$: A function $u \in H_0^1(\Omega)$ is a weak eigenfunction of $\mathcal L$ with eigenvalue $k$ if and only if $$ u = k \left( \mathcal L^{-1} (u)\right).$$ Notice that the $u$ on the right-hand side of this equation is thought of as an element of $L^2(\Omega)$ whereas the $u$ on the left-hand side is thought of as an element of $H_0^1(\Omega)$. This makes sense, because $H_0^1(\Omega) \subset L^2(\Omega)$. I'm now going to massage this definition into a form that we can apply the spectral theorem to. Let's use the symbol $\iota$ to denote the inclusion $H_0^1(\Omega) \hookrightarrow L^2(\Omega)$.
If you think about it, the previous paragraph can be written like this: A function $u \in L^2(\Omega)$ is a weak eigenfunction of $\mathcal L$ (and, in particular, is contained within the subspace $H_0^1(\Omega) \subset L^2(\Omega)$) iff it satisfies $$ u = k \left( (\iota \circ \mathcal L^{-1}) (u)\right).$$ But $\iota : H_0^1(\Omega) \hookrightarrow L^2(\Omega)$ is a compact operator by Rellich, and $\mathcal L^{-1} : L^2(\Omega) \to H_0^1(\Omega)$ is a bounded operator, so the composition $$\iota \circ \mathcal L^{-1} : L^2(\Omega) \to L^2(\Omega)$$ is compact. The composition $\iota \circ \mathcal L^{-1}$ is also self-adjoint (and to check this, it suffices to verify self-adjointness on $C_{c}^\infty(\Omega)$, which is dense in $L^2(\Omega)$ and is contained inside $H_0^1(\Omega)$). We can therefore legitimately apply the spectral theorem to $\iota \circ \mathcal L^{-1}$. This tells us that the weak eigenfunctions of $\mathcal L$ form a complete (countable) orthogonal basis for the orthogonal complement of the kernel of $\iota \circ \mathcal L^{-1}$ within $L^2(\Omega)$. But the kernel of $\iota \circ \mathcal L^{-1}$ is zero (since any $u$ in this kernel obeys $\int_\Omega uv = 0$ for every $v \in H_0^1(\Omega)$, and in particular, for every $v \in C_c^\infty(\Omega)$). The conclusion then is that the weak eigenfunctions of $\mathcal L$ form a complete (countable) orthogonal basis for $L^2(\Omega)$. At the moment, these eigenfunctions are only weak eigenfunctions, living in $L^2(\Omega)$, and obeying only the weak condition $B[u, v] = k \int_\Omega u v$ for $v \in H_0^1(\Omega)$. It would be nice if we could show that these eigenfunctions are genuine smooth functions in $C^\infty(\Omega)$ obeying $\mathcal L u = k u$! We can prove this as follows: By a regularity theorem in Evans Chapter 6.3, any weak solution $u$ to $\mathcal L u = f$, where $f$ is an element of $H^m(\Omega)$, must automatically be in $H_{\rm loc}^{m + 2}(\Omega)$. In our case, $u$ is a weak solution to $\mathcal L u = k u$. So the fact that $u$ is in $L^2(\Omega)$ implies that $u$ is in $H_{\rm loc}^2(\Omega)$, which in turn implies that $u$ is in $H_{\rm loc}^4(\Omega)$, which in turn implies that $u$ is in $H_{\rm loc}^6(\Omega)$, etc. Thus, $u$ is in $H_{\rm loc}^m(\Omega)$ for all $m$. But then, by the Sobolev inequalities, $u$ must also be in $C^\infty(\Omega)$, and we are done. With further conditions on the smoothness of $\Omega$, I believe it is possible to prove that $u$ is also in $C^\infty(\bar \Omega)$, where $\bar \Omega$ is the closure of $\Omega$. (See Evans 6.3.) It then makes sense to evaluate $u$ on the boundary $\partial \Omega$, and the fact that $u$ is in $ H_0^1(\Omega)$ rather than just $H^1(\Omega)$ ensures that $u|_{\partial \Omega} = 0$ (see Evans 5.5), which is to say that $u$ satisfies the Dirichlet boundary condition.
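A finite-dimensional analogue may help intuition (a sketch, not part of the proof): discretizing $-d^2/dx^2$ on $(0,1)$ with Dirichlet boundary conditions gives a symmetric matrix, whose eigenvectors form an orthonormal basis and whose small eigenvalues approximate $k^2\pi^2$.

```python
# Discrete Dirichlet Laplacian on (0,1): symmetric, so numpy.linalg.eigh
# returns an orthonormal eigenvector basis (sketch).
import numpy as np

N = 200
h = 1.0 / (N + 1)
L = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
vals, vecs = np.linalg.eigh(L)
print(vals[:3] / np.pi**2)                      # ~1, 4, 9
print(np.allclose(vecs.T @ vecs, np.eye(N)))    # -> True (orthonormal)
```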
Periodic solution of differential equation
No, it doesn't mean that. For instance, $f(x)=0$ is periodic with any period, but $y''(x)=0$ has non-periodic solutions $y(x)=ax+b$.
Geometry coordinates with distance given
If you are told that the distance between $(x_1,y_1)$ and $(x_2,y_2)$ is $\sqrt{113}$, then we have: $(x_1-x_2)^2+(y_1-y_2)^2 = 113$ $(x_2-8)^2+(y_1-13)^2 = 113$ There are infinitely many solutions for $x_2,y_1$. However, if you are given that $x_2,y_1$ are integers, then there are only a few solutions (8 to be precise).
Uniformly Most Powerful Test for a Uniform Sample
Given $\theta$, the probability that $\max(X_{1},\dots,X_{n}) \le m$ is $\left(\frac{m}{\theta}\right)^n$ when $0 \le m \le \theta$, so the density of the maximum is $n\frac{m^{n-1}}{\theta^n} I_{[0 \le m \le \theta]}$. So the likelihood function for $\theta$ given $\max(x_{1},\dots,x_{n})$ is proportional to $L(\theta) = \frac{1}{\theta^n} I_{[ \theta\ge \max(x_{1},\dots,x_{n})] }$, which is constant in the maximum observation apart from the indicator function. $L(\theta)=0$ when $\theta\lt \max(x_{1},\dots,x_{n})$, while $L(\theta)$ is a decreasing function of $\theta$ when $\max(x_{1},\dots,x_{n}) \le \theta\lt \infty$, so using the Karlin–Rubin theorem you could reject $H_0$ either when $\theta_0 \lt \max(x_{1},\dots,x_{n})$ or when $\max(x_{1},\dots,x_{n}) \lt m_0$ for some $m_0$ where $\Pr(\max(X_{1},\dots,X_{n}) \le m_0 \mid \theta_0) = \alpha$, i.e. $\left(\frac{m_0}{\theta_0}\right)^n = \alpha$. This makes the rejection regions $\theta_0 \lt \max(x_{1},\dots,x_{n})$ or $\theta_0 \gt \alpha^{-1/n}\max(x_{1},\dots,x_{n})$. If you prefer, you can express these as $\max(x_{1},\dots,x_{n}) \gt \theta_0$ or $\max(x_{1},\dots,x_{n}) \lt \alpha^{1/n} \theta_0$
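A Monte Carlo sketch of the test's size (under $H_0$ the rejection probability should be close to $\alpha$; the values of $\theta_0$, $n$ and $\alpha$ below are arbitrary choices):

```python
# Estimate the size of the rejection region under H0 (sketch).
import numpy as np

rng = np.random.default_rng(1)
theta0, n, alpha, trials = 1.0, 10, 0.05, 200_000
m = rng.uniform(0, theta0, (trials, n)).max(axis=1)
reject = (m > theta0) | (m < alpha ** (1 / n) * theta0)
print(reject.mean())   # ~0.05
```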
Integration of $\frac{1}{1+x^2+x^4+\cdots +x^{2m}}$
The residue method is a bit cumbersome, I suggest a more elementary series approach. HINTS: 1: $$\int^\infty_{-\infty}=\int_{-\infty}^{-1}+\int_{-1}^{1}+\int^\infty_{1}$$ 2: $$\frac1{1-x^{n}}=\sum_{k\ge0}x^{nk}$$ for $|x|<1$. 3: $$\frac1{1-x^{n}}=-\sum_{k\ge0}\frac1{x^{n}}x^{-nk}$$ for $|x|>1$. 4: $$\int \sum =\sum \int$$ most of the time. 5: $$\sum_{k=-\infty}^{\infty}\frac1{x-k}=\pi\cot(\pi x)$$ I will elaborate later. I found the answer to be ($2m+2=n$): $$A_n=\frac{2\pi}n(-\cot(\frac{3\pi}n)+\cot(\frac{\pi}n))$$ EDIT: Let $f(x)=\frac{1-x^2}{1-x^{n}}=\frac1{g(x)}-\frac{x^2}{g(x)}$. $$\int^\infty_{-\infty}f(x)dx=\int_{-\infty}^{-1}\frac1{g(x)}dx+\int_{-1}^{1}\frac1{g(x)}dx+\int^\infty_{1}\frac1{g(x)}dx-(\int_{-\infty}^{-1}\frac{x^2}{g(x)}dx+\int_{-1}^{1}\frac{x^2}{g(x)}dx+\int^\infty_{1}\frac{x^2}{g(x)}dx)$$ The second integral equals $$\sum_{k\ge0}\int_{-1}^{1}x^{nk}dx=\sum_{k\ge0}\frac2{nk+1}=2\sum_{k=-\infty}^0\frac1{1-nk}$$ The third integral equals $$-\sum_{k\ge0}\int^\infty_{1}\frac1{x^{n}}x^{-nk}dx=-\sum_{k\ge1}\int^\infty_{1}x^{-nk}dx=\sum_{k\ge1}\frac1{1-nk}$$ With the map $x \mapsto -x$, it can be shown that the first and the third integrals are equal (note that $n$ is even). So, the first three integrals combine to give $$2\sum_{k=-\infty}^\infty\frac1{1-nk}=2\frac1n\sum_{k=-\infty}^\infty\frac1{1/n-k}=\frac{2\pi}n\cot(\pi/n)$$ For the other three integrals, similar procedures give $$2\sum_{k=-\infty}^\infty\frac1{3-nk}=2\frac1n\sum_{k=-\infty}^\infty\frac1{3/n-k}=\frac{2\pi}n\cot(3\pi/n)$$ Therefore, $$A_n=\frac{2\pi}n(\cot(\pi/n)-\cot(3\pi/n))$$
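A numeric spot check of the final closed form (sketch, using SciPy's `quad`):

```python
# Compare the integral of 1/(1 + x^2 + ... + x^{2m}) with the closed form.
from math import pi, tan, inf
from scipy.integrate import quad

for m in (1, 2, 3):
    n = 2 * m + 2
    integrand = lambda x, m=m: 1.0 / sum(x ** (2 * k) for k in range(m + 1))
    val, _ = quad(integrand, -inf, inf)
    closed = (2 * pi / n) * (1 / tan(pi / n) - 1 / tan(3 * pi / n))
    print(m, val, closed)   # the two columns agree
```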
Rotate a vector to become z axis?
The angle to rotate by is the angle between $v$ and the $z$-axis. To obtain it, remember that $\|\vec{a}\times\vec{b}\|=\|\vec{a}\|\,\|\vec{b}\|\sin(\alpha)$ and $\vec{a}\cdot \vec{b}=\|\vec{a}\|\,\|\vec{b}\|\cos(\alpha)$, where $\alpha$ is the angle between the two vectors. Use these properties with $v$ and the vector $(0,0,1)$ and you will get your result.
Find the sum: $\sum_{i=1}^{n}\dfrac{1}{4^i\cdot\cos^2\dfrac{a}{2^i}}$
Hint: $$\frac{1}{4^n\cos^2\frac{a}{2^n}}+\frac{1}{4^n\sin^2\frac{a}{2^n}}= \frac{1}{4^{n-1}\sin^2\frac{a}{2^{n-1}}}.$$ Adding $\displaystyle \frac{1}{4^n\sin^2\frac{a}{2^n}}$ to the sum, the result thus telescopes to $\displaystyle\frac{1}{\sin^2a}$, and hence the initial sum is $$\frac{1}{\sin^2a}-\frac{1}{4^n\sin^2\frac{a}{2^n}}.$$
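A numeric check of the telescoped closed form (sketch, for an arbitrary $a$ and $n$):

```python
# Verify sum_{i=1}^n 1/(4^i cos^2(a/2^i)) against the telescoped form.
from math import cos, sin

a, n = 1.3, 8
lhs = sum(1 / (4**i * cos(a / 2**i) ** 2) for i in range(1, n + 1))
rhs = 1 / sin(a) ** 2 - 1 / (4**n * sin(a / 2**n) ** 2)
print(lhs, rhs)   # agree up to floating-point error
```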
Finding eigen values of a binary matrix with diagonal elements are all 0s and non-diagonals are 1s
The vector $(1,1,\cdots,1)$ is one eigenvector. You need $n-1$ more (counting with multiplicity). Consider your matrix plus the identity.
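A quick check for $n=5$ (sketch): the matrix is $J-I$ with $J$ the all-ones matrix, so the eigenvalues should be $n-1=4$ once and $-1$ with multiplicity $n-1$.

```python
# Eigenvalues of the all-ones-off-diagonal matrix J - I (sketch).
import numpy as np

n = 5
A = np.ones((n, n)) - np.eye(n)
print(np.linalg.eigvals(A).round(6))   # -> 4 and four copies of -1
```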
Calculating the nth derivative of $\frac{x}{x+1}$
Of course it's right, you only have to realize that $$(-1)^{n+1}=(-1)^{n-1+2}=(-1)^{n-1} (-1)^2 = (-1)^{n-1}$$ As a suggestion, you could try to prove your expression for $f^{(n)}(x)$ in a more rigorous way using induction, if you haven't done that so far! ;)
Integration by transforming to complex
Let's get you started. Your integral is equal to the real part of the integral $$\int_{-\infty}^{\infty} \frac{e^{4ix}}{x^4+5x^2+4}dx.$$ This integral appears (sort of) in the following (surprisingly easier-to-play-with) equation: $$\int_L \frac{e^{4iz}}{z^4+5z^2+4}dz=\lim\limits_{R\to\infty}\left(\int_{-R}^{R} \frac{e^{4iz}}{z^4+5z^2+4}dz+\int_{C_R} \frac{e^{4iz}}{z^4+5z^2+4}dz\right),$$ where $L$ is the curve consisting of the real line and the positively-oriented half-circle $C_R$ of radius $R$ in the upper half-plane. Note that along the real axis, $z=x$. The integrand has four simple poles, at $z=\pm i$ and at $z=\pm 2i$. The two poles in the upper half-plane, $i$ and $2i$, lie inside your contour. For the left side of the equation, find their residues and apply the Residue Theorem. Then for the right side, show that the integral over $C_R$ goes to $0$ in the limit (at least, it should...). Then take the real part of both sides, and you will be left with your result. EDIT: Between this and the wonderful picture provided by David G. Stork, you should be well on your way.
Find the equation of the line $ r $
Here is another approach: Let $\vec r=(a,b,c)$ be a direction vector of the line we seek. Its parametric equation is then given by $$\begin{cases} x=\phantom{-}1+au,\\ y=-2+bu, \\ z=\phantom{-}3+cu,\end{cases}\quad(u\in\mathbf R).$$ It meets the given line if and only if the linear system (in $t$ and $u$): $$\begin{cases} \phantom{-}1+au=2+3t, \\ -2+bu=1+2t, \\ \phantom{-}3+cu= -1,\\ \end{cases}\iff \begin{cases} au-3t=1, \\ bu-2t=3, \\ cu= -4,\\ \end{cases}$$ has a solution. Note the last equation implies that $u, c\ne 0$. Also, a direction vector is defined up to a nonzero scalar multiple, so we may choose the value of $c$ to simplify the computation. We'll take $c=4$, so $u=-1$. Now the orthogonality condition is $$a-3b-4=0,$$ whence a linear system for $a$ and $b$ \begin{cases} a+3t=-1, \\ b+2t=-3, \\ a-3b=4,\\ \end{cases} which can easily be solved with the RREF of the augmented matrix.
Which kind of distance is this one? I want to compare if two images (matrices) are the same
Simply count the pixels that differ. This is called the Hamming distance.
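A one-line sketch with NumPy (assuming equal-shaped integer images):

```python
# Hamming distance between two images: count differing pixels (sketch).
import numpy as np

def hamming_distance(img1, img2):
    return int(np.count_nonzero(img1 != img2))

a = np.array([[0, 1], [1, 0]])
b = np.array([[0, 1], [0, 0]])
print(hamming_distance(a, b))   # -> 1
```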
Explanation behind a 'non-irreducible' as a product of irreducible
The author is using the fact that each nonzero nonunit in a PID has at least one irreducible factor (which is the claim in the first paragraph of the attached screenshot).
Linear algebra - vectors and spaces, what does it mean for a set of vectors to be a basis/linearly independent?
A basis is a smallest set of vectors that spans the space $V$. E.g., in $3$ dimensions, if you have $3$ vectors you can span 3D, but only if they are linearly independent (each vector "adds something new"), or in other words none of them can be written as a linear combination of the others. The vectors $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$ are a basis for 3D. Why? Because they are linearly independent, and they span 3D. If on the other hand you had $(1,0,0)$, $(0,1,0)$ and $(0,2,0)$, they cannot be a basis of 3D.
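A quick rank check (sketch) makes the two examples concrete:

```python
# The first triple spans R^3 (rank 3); the second does not (rank 2).
import numpy as np

print(np.linalg.matrix_rank(np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])))  # 3
print(np.linalg.matrix_rank(np.array([[1, 0, 0], [0, 1, 0], [0, 2, 0]])))  # 2
```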
Show that every graph $G$ has a path of length $\delta(G)$
I hope it's a simple graph. Let $P$ be a maximal path in the graph $G$, and suppose for contradiction that $P$ has at most $\delta(G)$ vertices. Let $v\in V(P)$ be an endpoint of the path. As $v$ is adjacent to at least $\delta(G)$ vertices, and $P$ contains at most $\delta(G)-1$ vertices besides $v$, not all of its neighbours can be in the path $P$, so $P$ can be extended beyond $v$. But this contradicts the fact that $P$ is a maximal path. Hence $P$ has at least $\delta(G)+1$ vertices, i.e. length at least $\delta(G)$.
Linear algebra- Basis of Range(T)
Choose a basis $1,t,t^2$ for $P_2$, and $1,t,t^2,t^3$ for $P_3$. Then the matrix for $T$ is given by: $$ \tau = \left[\begin{array}{rrr} -1 & 2 & 0 \\ 0 & -1 & -1 \\ -2 & 3 & -1 \\ 1 & -1 & 1 \end{array}\right].$$ It is fairly easy to see that $\tau (2,1,-1)^T= 0$ (i.e., the second column is equal to the third column minus twice the first column), and that the first and last columns are linearly independent. Thus these two columns span the range space of $T$, and so $T(1), T(t^2)$ form a basis for the range.
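A quick numeric check of both claims (sketch):

```python
# (2, 1, -1)^T is in the kernel of tau, and tau has rank 2 (sketch).
import numpy as np

tau = np.array([[-1, 2, 0], [0, -1, -1], [-2, 3, -1], [1, -1, 1]], dtype=float)
print(tau @ np.array([2, 1, -1]))    # -> zero vector
print(np.linalg.matrix_rank(tau))    # -> 2
```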
Proof of irregularity of an octagon determined by lines from vertices to midpoints of sides of a square
$$\underbrace{|OA| = \frac12|OM|}_{\text{$A$ is the midpoint of $\overline{OM}$}}=\;\; \frac14|PQ| \;\;\color{red}{\neq}\;\; \frac13\cdot\frac1{\sqrt{2}}|PQ| \;\;= \underbrace{\frac13|OP| = |OB|}_{\text{$B$ is the centroid of $\triangle PQR$}}$$
Stability of a nonlinear ODE system (only existence of the limit)
So you want to know if the solution to $\dot{x}=f(x)$ can be extended to infinity. One sufficient condition is that there exists $C>0$ such that $$\langle x,f(x)\rangle\leq C(1+\|x\|^2) \quad \forall x.$$ A classical example of a DE that violates this condition is $\dot{x}=x^2$.
Proof that $l^p$ with $1 \leq p < \infty$ is dense in $c_0$
The space $c_0$ is usually considered as a subspace of $\ell^\infty$, and hence automatically inherits the sup-norm. For any given $x = (x_1, x_2, \ldots) \in c_0$, you can truncate it to obtain an $\ell^p$ sequence, i.e. you can consider $$ \hat x^{(n)} = (x_1, x_2, \ldots, x_n , 0, 0 ,\ldots) \in \ell^p. $$ Then $$ \Vert \hat x^{(n)} - x\Vert_{\infty} = \sup_{j \geq n+1} |x_j|, $$ which can evidently be made arbitrarily small by taking $n$ to be large, since $x_j \to 0$ as $j \to \infty$.
Extention of vector bundles on projective line: $Ext^1({\mathcal O_{\mathbb{P}^1}}(n),{\mathcal O_{\mathbb{P}^1}}(m))=$??
$\mathrm{Ext}^1(O(n),O(m)) = \mathrm{Ext}^1(O,O(m-n)) = H^1(O(m-n))$ and the cohomology groups are well-known.
Doubt about Cauchy-Lipshitz theorem use
Here is the statement of Cauchy-Lipschitz: Suppose we are given an ODE, $$y'(t) = f(y(t)), \ \ \ \ \ \ \ y(0) = y_0,$$ where $f$ is a real function defined for $y \in [y_0 - b, y_0 + b]$, obeying the boundedness condition $$ |f(y)| \leq m,$$ and obeying the Lipschitz condition $$|f(y_1) - f(y_2)| \leq k |y_1 - y_2 |.$$ Then for any $$a < \min \left(\tfrac b m, \tfrac 1 k \right),$$ there exists a unique solution $y(t)$ to the ODE, valid for $t \in [-a, a]$, with $y(t)$ taking values in the range $[y_0 - b, y_0 + b]$. Note that Cauchy-Lipschitz does NOT give us a solution for all values of $t$! It only gives us a solution for $t$ in the range $[-a, a]$, where $a$ is restricted by (i) the range of $y$ on which $f(y)$ is defined compared to how big $f(y)$ gets on this range of $y$, and by (ii) how "contracting" the function $f(y)$ is. Let's now apply this theorem to our ODE. In our ODE, $f(y) = 1 + y^2$ and $y_0 = 0$. For the time being, let us fix a value of $b$ (so we are looking for solutions such that $y(t)$ always stays within the interval $[-b,b]$ at all values of $t$.) On the interval $[-b,b]$, we have $$ |f(y)| \leq 1 + b^2,$$ so $|f(y)|$ is bounded by $m = 1 + b^2$. Furthermore, $$ |f'(y)| = 2|y| \leq 2b,$$ so, by the mean value theorem, we learn that $f$ is $k$-Lipschitz with $ k = 2b.$ Taking $$a < \min \left( \tfrac{b}{1 + b^2}, \tfrac 1 {2b} \right) = \begin{cases} \tfrac{b}{1 + b^2} & b \in (0, 1); \\ \tfrac 1 {2b} & b \in (1,\infty),\end{cases}$$ Cauchy-Lipschitz tells us that there exists a solution, valid for $t \in [-a, a]$, which takes values within the range $y(t) \in [-b,b]$. The largest possible bound on $a$ is obtained when we take $b = 1$: in this case, we learn that there exists a solution valid for $t \in (-\tfrac 1 2, \tfrac 1 2)$ (and this solution takes values in the range $y(t) \in [-1,1]$, though this is less interesting). In fact, the solution is $$ y(t) = \tan t,$$ which is valid for $t \in (- \tfrac \pi 2, \tfrac \pi 2)$. So our method of applying Cauchy-Lipschitz has given us an under-estimate for the range of $t$ on which the solution is valid.
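A numeric sanity check (sketch, using SciPy's `solve_ivp` as an assumed tool): the computed solution of $y'=1+y^2$, $y(0)=0$ matches $\tan t$ well beyond the guaranteed interval $(-\frac12,\frac12)$.

```python
# Integrate y' = 1 + y^2 numerically and compare with tan(t) (sketch).
from math import tan
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: 1 + y**2, (0, 1.2), [0],
                dense_output=True, rtol=1e-8)
for t in (0.4, 0.8, 1.2):
    print(t, float(sol.sol(t)[0]), tan(t))   # the two columns agree
```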
Alternative solutions to a probability problem
One should add the hypothesis that the random variables $S_n$ are independent and that the choice of a random subset is independent of them.

Formal setting

One is given on the one hand an infinite sequence $(A_n)_{n\geqslant1}$ of independent events such that $\mathrm P(A_n)=p_n$ with $p_n=1-\frac1{n^2}$, and on the other hand an infinite sequence $(Y_n)_{n\geqslant1}$ of independent Bernoulli random variables such that $\mathrm P(Y_n=0)=\mathrm P(Y_n=1)=\frac12$. The sequences $(A_n)_{n\geqslant1}$ and $(Y_n)_{n\geqslant1}$ are independent. Here is why this models the situation you have in mind. The sequence $(A_n)_{n\geqslant1}$ models the random variables $(X_n)_{n\geqslant1}$ through the relation $A_n=[X_n=1]$. The sequence $(Y_n)_{n\geqslant1}$ defines a random subset $N=\{n\in\mathbb N\mid Y_n=1\}\subseteq\mathbb N$ and yields at once all the finitary representations used in your post because, for every fixed $n\geqslant1$, the random set $N\cap\{1,2,\ldots,n\}$ is uniformly distributed on the $2^n$ subsets of $\{1,2,\ldots,n\}$. To see this, note that for every $B\subseteq\{1,2,\ldots,n\}$, $$ \mathrm P(N\cap\{1,2,\ldots,n\}=B)=\prod\limits_{k\in B}\mathrm P(k\in N)\cdot\prod\limits_{k\leqslant n,\ k\notin B}\mathrm P(k\notin N), $$ which is $$ \mathrm P(N\cap\{1,2,\ldots,n\}=B)=\prod\limits_{k\in B}\mathrm P(Y_k=1)\cdot\prod\limits_{k\leqslant n,\ k\notin B}\mathrm P(Y_k=0)=\frac1{2^n}. $$ Now, one asks for the probability of the event $$ A=\bigcap\limits_{n\in N}A_n. $$

Solution of the problem

For a given $N$, the independence hypothesis on the random variables $(X_n)_{n\geqslant1}$ implies that $$ \mathrm P(A\mid N)=\prod\limits_{n\in N}\mathrm P(A_n)=\prod\limits_{n\in N}p_n=\prod\limits_{n\geqslant1}(1-(1-p_n)\mathbf 1_{n\in N}). $$ The independence hypothesis on the random variables $(Y_n)_{n\geqslant1}$ implies that $$ \mathrm P(A)=\mathrm E(\mathrm P(A\mid N))=\prod\limits_{n\geqslant1}(1-(1-p_n)\mathrm P(n\in N)), $$ that is, $$ \mathrm P(A)=\prod\limits_{n\geqslant1}(1-\tfrac12(1-p_n))=\prod\limits_{n\geqslant1}\frac{1+p_n}2=\prod\limits_{n\geqslant1}\left(1-\frac1{2n^2}\right). $$ Finally, the representation of the sine function as an infinite product, $$\frac{\sin(\pi z)}{\pi z}=\prod\limits_{n\geqslant1}\left(1-\frac{z^2}{n^2}\right),$$ applied at $z=\frac1{\sqrt2}$ indicates that the infinite product in the RHS above is indeed $\frac{\sqrt2}{\pi}\sin\left(\frac{\pi}{\sqrt2}\right)$.
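A numeric check of the final infinite product (sketch):

```python
# Truncated product vs the closed form sqrt(2)/pi * sin(pi/sqrt(2)) (sketch).
from math import sin, pi, sqrt

prod = 1.0
for n in range(1, 100_000):
    prod *= 1 - 1 / (2 * n**2)
print(prod, sqrt(2) / pi * sin(pi / sqrt(2)))   # agree to ~5 decimals
```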
Can a space $Y$ be homotopically equivalent to $S^1$?
The space you're describing is the two-dimensional sphere $S^2$ with its north and south poles identified. As Maxime Ramzi points out in a comment this is not homotopy equivalent to $S^1$, in fact it is homotopy equivalent to $S^2\vee S^1$. You can give $Y$ a cell structure with one $0$-, $1$- and $2$-cell each: use the $0$- and $1$-cell to make a circle, say $\alpha$, and then attach the $2$-cell as follows: glue half the boundary of the disk to one "side" of $\alpha$ and glue the other half to the other "side" (try drawing pictures to see this). But in reality there are no "sides" of a circle, and we're actually just attaching the boundary of the disk to $\alpha \cdot \alpha^{-1}$, which is null-homotopic. It follows that $Y$ is homotopy equivalent to attaching the $2$-cell with a constant map (see for example this question), which just results in $S^1 \vee S^2$. Alternatively, if you're allowed to use homology it's straightforward to see that $H_2(Y) \cong \mathbb{Z}$ (use the long exact sequence of a pair and the fact that $H_*(X, A) \cong \tilde{H}_*(X/A)$ for "good pairs") so it cannot be homotopy equivalent to $S^1$ since homology is a homotopy-invariant. Edit: Jason DeVito made a good point in the comments: since I'm not allowed to use homology, I haven't actually adequately demonstrated that $S^2\vee S^1$ is NOT homotopy equivalent to $S^1$. The first non-homology alternative I can think of is to show $\pi_2(S^2\vee S^1)$ is non-trivial, but that's not exactly elementary (and as Jason also points out the typical way to show $\pi_2(S^2) \cong \mathbb{Z}$ is to use Hurewicz and homology). My other idea is with mapping into Eilenberg-MacLane spaces (which is secretly cohomology but you don't have to think about it that way), and hopefully that is basic enough: Let $K \simeq K(\mathbb{Z}, 2)$ be an Eilenberg-MacLane space of type $(\mathbb{Z}, 2)$, i.e. a pointed topological space whose homotopy groups are given by $\pi_i(K) = \mathbb{Z}$ if $i=2$ and $0$ otherwise (fact: these exist and are unique up to homotopy). Then if $[-,-]$ denotes pointed homotopy classes of continuous functions, and $X, Y, Z$ are pointed spaces, then 1) a pointed homotopy equivalence $h\colon X \to Y$ induces a bijection $h^*\colon [Y, Z]\cong [X, Z]$, and 2) it follows from the formal properties of the wedge product that $$[X\vee Y, Z] \cong [X, Z] \times [Y, Z]. $$ Now observe that $[S^1, K] \cong \pi_1(K) = 0$ and $$[S^2\vee S^1, K] \cong [S^2, K]\times [S^1, K] \cong [S^2, K]\cong \pi_2(K) \cong \mathbb{Z}. $$ So in particular there is no pointed homotopy equivalence between $S^1$ and $S^2\vee S^1$ (I'm not sure if that's enough, but it should be because they are well-pointed spaces).
Confusion with integral domains
Your confusion is that $|D|$ doesn’t necessarily belong to $D$. For $D$ to be an integral domain any product of two non-zero elements in $D$ must be non-zero. For example $\mathbb Z_5$, which is an integral domain whose order is $5$, but $5$ is not contained in $\mathbb Z_5$.
Is the complex Banach space $C([0,1])$ dual to any Banach Space?
There are two versions of your question:

1. The isometric one: is $C[0,1]$ isometric to a dual Banach space?
2. The isomorphic one: is $C[0,1]$ isomorphic to a dual Banach space?

Of course, the negative answer to 2. implies the negative answer to 1. However 1. is elementary. Morally, the answer is no because $C[0,1]$ has too few extreme points. To be more precise, observe that the extreme points of the unit ball of a $C(K)$-space can take values only from the unit circle. Now use the fact that $[0,1]$ is connected and try to approximate real-valued functions by convex combinations of extreme points... That's impossible as long as the functions are constantly $\pm 1$. More interestingly, the answer is also no for the isomorphic version of your question too. Even more is true: $C[0,1]$ is not isomorphic to (a complemented subspace of) a dual Banach space. The same conclusion follows for $C(K)$-spaces for all infinite, compact metric spaces $K$. This follows from the following standard argumentation. Take a sequence of disjointly supported norm-one functions in $C[0,1]$; their closed linear span is isomorphic to $c_0$. As $C[0,1]$ is separable, by Sobczyk's theorem, this copy of $c_0$ is complemented in $C[0,1]$. However, $c_0$ is not complemented in any dual Banach space because if it were, it would be complemented in its own bidual (see also Example 5.9(i) on p. 22), that is in $\ell_\infty$, which is not the case by the Phillips–Sobczyk theorem. Compact, Hausdorff spaces $K$ for which $C(K)$ is isometric to a dual Banach space are called hyperstonean. If infinite, they are necessarily non-metrisable. See Chapter 2 of this memoir for more details.
Prove that $|\liminf a_n|\geq \liminf |a_n|$ for any real sequence $\{a_n\}_{n\geq 1}$
Choose a subsequence $a_{n_k}$ such that $\lim_k a_{n_k}= \liminf_n a_n=:L$. Then $\lim_k |a_{n_k}| = |L|$ and since the limit inferior is the smallest limit of a subsequence, we obtain $\liminf |a_n| \leq |L| = |\liminf a_n|$, proving the claim.
How to define differential on tangent space
Ah, I think the confusion is here: if you let $u,v$ be coordinate functions on $S$ about $p \in S$, then $$\left\{\frac{\partial}{\partial u}\Bigr|_p, \frac{\partial}{\partial v}\Bigr|_p\right\} \equiv \{\sigma_u, \sigma_v\}$$ span $T_pS$, and the corresponding 1-covectors (or 1-forms) $\{du, dv\}$ span $T_p^* S$. Recall that $du,dv$ have the property that $$du(\sigma_u) = 1,\quad du(\sigma_v) = 0,\quad dv(\sigma_v) = 1,\quad dv(\sigma_u) = 0.$$ Therefore, if you have a tangent vector $w = \lambda_1 \sigma_u + \lambda_2 \sigma_v$, then since $du,dv$ are also linear maps, $$du(w) = du(\lambda_1 \sigma_u + \lambda_2 \sigma_v) = \lambda_1 du(\sigma_u) + \lambda_2 du(\sigma_v) = \lambda_1$$
Problems proving that if $f_n\rightarrow f$ pointwise and $\int_R f=\lim_{n}\int_R f_n$ then $\int_E f=\lim_{n}\int_E f_n$ for meas $E \subseteq R$.
$\int (f-f_n)^{+} \to 0$ by DCT because $(f-f_n)^{+} \leq f$ and $(f-f_n)^{+} \to 0$. Also $\int (f-f_n) \to 0$ by hypothesis. Subtract the first from the second to get $\int (f-f_n)^{-} \to 0$. Add this to $\int (f-f_n)^{+} \to 0$ to get $\int |f-f_n| \to 0$. For any measurable set $E$ we have $\int_E |f-f_n|\leq \int_{\mathbb R} |f-f_n| \to 0$ which implies $\int_E f_n \to \int_E f$.
Draw a Square Without a Compass, Only a Straightedge -- Part Deux
From an arbitrary point $A$ on the top half of the vertical side of the square, construct the sequence of points to finish with a square of twice the area of the original square. The idea is that the diagonal of a unit square has length $\sqrt2$, so it can serve as the side of a larger square with double the area.
License plate combination
Your initial guess is correct. Also, note that $10^4(26^3-1) = 10^4\cdot 26^3 - 10^4.$
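For reference, the arithmetic works out to $10^4(26^3-1)=10^4\cdot 17{,}575=175{,}750{,}000$.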
How to show the following for Planar Graphs-Proof Verification
Your professor's comment seems to imply that using Kuratowski's theorem is valid. You wrote that $G$ contains a subgraph that is isomorphic to $K_5$; Kuratowski's theorem says homeomorphic. Now, if two graphs are isomorphic, they are automatically homeomorphic. I would have given you full credit for your solution, but maybe your professor wanted you to explicitly write down the last sentence, showing that you knew the meaning of the different concepts.
Set of simple predictable processes is a vector space
Let $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ be simple predictable processes, i.e. $$\begin{align*} X_t &= 1_{\{t=0\}} A_0 + \sum_{k=1}^m 1_{\{S_k<t \leq T_k\}} A_k \\ Y_t &= 1_{\{t=0\}} B_0 + \sum_{j=1}^n 1_{\{U_j<t \leq V_j\}} B_j \end{align*}$$ where $S_k<T_k$, $U_j<V_j$ are stopping times, the $A_k$ are $\mathcal{F}_{S_k}$-measurable and the $B_j$ are $\mathcal{F}_{U_j}$-measurable bounded random variables. Obviously, this implies $$X_t + Y_t = 1_{\{t=0\}} (A_0+B_0) + \sum_{k=1}^m 1_{\{S_k<t \leq T_k\}} A_k+ \sum_{j=1}^n 1_{\{U_j<t \leq V_j\}} B_j.$$ Note that this already shows that $(X_t+Y_t)_{t \geq 0}$ is a simple process. In fact, we can choose $$\begin{align*} P_k &:= \begin{cases} S_k, & k=1,\ldots,m \\ U_{k-m}, & k=m+1,\ldots,m+n \end{cases} \\ R_k &:= \begin{cases} T_k, & k=1,\ldots,m \\ V_{k-m}, & k=m+1,\ldots,m+n \end{cases} \\ C_k &:= \begin{cases} A_k, & k=1,\ldots,m \\ B_{k-m}, & k=m+1,\ldots,m+n \end{cases}\end{align*}$$ and $C_0 := A_0+B_0$. Then $$X_t+Y_t = 1_{\{t=0\}} C_0 + \sum_{k=1}^{m+n} C_k 1_{\{P_k<t \leq R_k\}}.$$

Edit: To make the sequence of stopping times increasing, we can argue as follows: Let $$X_t = \sum_{k=1}^m 1_{\{S_k<t \leq S_{k+1}\}} A_k$$ and $$Y_t = \sum_{j=1}^n 1_{\{U_j<t \leq U_{j+1}\}} B_j.$$ For $k \in \{1,\ldots,m\}$ and $j \in \{1,\ldots,n\}$ set $$V_{k,j} := \min\{S_k, U_j\}.$$ Note that $V_{k,j} \leq V_{k',j'}$ for $k \leq k'$ and $j \leq j'$. Define iteratively (with $T_0:=0$) $$T_i(\omega) := \inf\left\{ V_{k,j}(\omega); V_{k,j}(\omega)>T_{i-1}(\omega), k \in \{1,\ldots,m\}, j \in \{1,\ldots,n\} \right\}$$ for $i \leq mn$. This defines a non-decreasing sequence of stopping times, and we can write $$X_t+ Y_t = \sum_{i=1}^{mn} C_i 1_{\{T_i<t \leq T_{i+1}\}}$$ where $$C_i := (X_t+Y_t) 1_{\{T_i<t \leq T_{i+1}\}} = \sum_{k=1}^m A_k 1_{\{S_k<t \leq S_{k+1}\}} 1_{\{T_i<t \leq T_{i+1}\}} + \sum_{j=1}^n B_j 1_{\{U_j<t \leq U_{j+1}\}} 1_{\{T_i<t \leq T_{i+1}\}}.$$
How can you compute a set of extensions up to isomorphism from Ext?
No. As Roland commented, there is a simple counterexample in the category of chain complexes of $k$-vector spaces: let $B$ be $k^n$ concentrated in degree $1$ and $A$ be $k^m$ concentrated in degree $0$. An extension of $A$ by $B$ is then just a chain complex of the form $0\to k^n\to k^m\to 0$, and so $\operatorname{Ext}^1(A,B)\cong \operatorname{Hom}(k^n,k^m)\cong k^{mn}$. Up to isomorphism, though, such a chain complex is determined by the rank of the map $k^n\to k^m$, and so there are $\min(m,n)+1$ isomorphism classes. Since $\min(m,n)+1$ is not determined by the product $mn$, the cardinality of $\mathrm{E}_{A,B}$ is not determined by $\operatorname{Ext}^1(A,B)$ up to isomorphism.

Here are some things you can say. The automorphism groups of $A$ and $B$ each act on $\operatorname{Ext}^1(A,B)$ as isomorphisms of the middle objects, so $\mathrm{E}_{A,B}$ is no larger than the quotient of $\operatorname{Ext}^1(A,B)$ by these actions. However, it may be even smaller, since there can be extensions whose middle objects are isomorphic but no such isomorphism preserves the subobject $B$. For instance, in the category of $k[x]$-modules, consider $A=B=C=(k[x]/(x))^{\oplus \mathbb{N}}\oplus (k[x]/(x^2))^{\oplus \mathbb{N}}$. Then there are lots of short exact sequences $0\to B\to C\to A\to 0$ which have different images of $B\to C$ even up to automorphisms of $C$, since you can have different numbers of $k[x]/(x)$ summands that map into a $k[x]/(x^2)$ summand to form a nontrivial extension.

Note moreover that $\mathrm{E}_{A,B}$ can also be larger than your guess $\{[X_1], \dotsc, [X_n], [A\oplus B]\}$. For instance, in the category of $k[x,y]$-modules, let $A=B=k[x,y]/(x,y)$. Then $\operatorname{Ext}^1(A,B)\cong k^2$ is finite-dimensional, but if $k$ is infinite, then $\mathrm{E}_{A,B}$ is infinite. Indeed, an element $(a,b)\in k^2$ corresponds to the extension with $k$-basis $\{e_1,e_2\}$ in which $xe_1=ae_2$ and $ye_1=be_2$ (and $x$ and $y$ both annihilate $e_2$). For $(a,b)\neq (0,0)$, the annihilator of this module is the ideal generated by $bx-ay$. In particular, such modules can only be isomorphic when their $(a,b)$'s are scalar multiples of each other. So in this case, $\mathrm{E}_{A,B}$ is actually the projectivization of $\operatorname{Ext}^1(A,B)$ together with the trivial extension, which is larger than just a basis together with the trivial extension. (Note that in general, $k^\times$ acts by automorphisms of $A$ and $B$, and this induces the scalar action of $k^\times$ on $\operatorname{Ext}^1(A,B)$, so $\mathrm{E}_{A,B}$ will always be no larger than the projectivization together with the trivial extension.)
Bounded analytic function on a punctured region 2
The fundamental mistake is that the fact that a point $z\in \mathbb{C}$ is a boundary point for $\Omega \subseteq \mathbb{C}$ does not imply that $z$ is a boundary point for $\Omega \cup \{z\}$. Take the punctured unit disk $\mathbb{D}^* = \mathbb{D} \setminus \{0\}$, for example: $0$ is a boundary point for $\mathbb{D}^*$ since $0\in\overline{\mathbb{D}^*}$ and $0\in \overline{(\mathbb{D}^*)^c}$ (in fact, $0\in (\mathbb{D}^*)^c$). But $0\notin \overline{\mathbb{D}^c}$. More generally, this will happen if and only if $z$ is an isolated point of $\Omega^c$, so that $z$ is in the closure of $\Omega$ but sufficiently small punctured neighborhoods of $z$ are contained in $\Omega$.
Infinite distinct factorizations into irreducibles for an element
Hint $\:$ Let $\rm R = \mathbb R + x\:\mathbb C[x],\:$ i.e. the ring of all polynomials with complex coefficients and real constant coefficient. Here $\rm\:x^2\:$ has infinitely many distinct factorizations into irreducibles $$\rm x^2\ =\ (c\: x)\: (c^{-1}\: x),\quad c = r + {\it i},\quad \forall\: r\in \mathbb R$$ The factors are nonassociate irreducibles in $\rm R$ since, for $\rm\:r,s\in \mathbb R$ $$\rm (r+{\it i})x\ |\ (s+{\it i})x\ \ in\ \ R\iff \frac{(s+{\it i})\:x}{(r+{\it i})\:x}\in R\iff \frac{s+{\it i}}{r+{\it i}}\in \mathbb R\iff r = s$$ Note $\:$ Such constructions are often used by ring theorists since they yield a very rich source of (counter-) examples, e.g. see $\ $ M. Zafrullah, Various facets of rings between $\rm\:D[X]\:$ and $\rm\:K[X].$
Prove that every proper rigid motion in space (R^3) that fixes the origin is a rotation about some axis
What is a "proper" rigid motion? The map taking $x$ to $-x$ is a rigid motion that fixes the origin, but is not a rotation. (In $\mathbb{R}^3$ its determinant is $-1$, so it is excluded exactly when "proper" is taken to mean orientation-preserving.)
Prove $A+ \emptyset = A, A+A = \emptyset$, and $A +A' = U$ using the definition of $A+B$
About: $A+ \emptyset = A$, $A+A = \emptyset$, and $A +A' = U$. a) $A+ \emptyset$ is $(A \cup \emptyset) \backslash (A \cap \emptyset)$. But you must remember that $A \cup \emptyset = A \quad$ and that $\quad A \cap \emptyset = \emptyset$, so that $A+ \emptyset$ is simply $A \backslash \emptyset = A$. This is because, if you "throw away" the empty set from $A$, the result will be again $A$, whichever $A$ is (we need it again under c)). b) $A+A$ is $(A \cup A) \backslash (A \cap A)$. By the same reasoning, $A \cup A = A \cap A = A$, so that $A+A$ is $(A \backslash A) = \emptyset$. c) $A+A' = (A \cup A') \backslash (A \cap A')$. But $A \cup A' = U$ and $A \cap A' = \emptyset$. Again, $U \backslash \emptyset = U$, so that $A+A'=U$.
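As a concrete check, take an illustrative universe $U=\{1,2,3\}$ with $A=\{1,2\}$, so $A'=\{3\}$: $$A+\emptyset=\{1,2\}\setminus\emptyset=A,\qquad A+A=\{1,2\}\setminus\{1,2\}=\emptyset,\qquad A+A'=(\{1,2\}\cup\{3\})\setminus\emptyset=U.$$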
Functional equations $f\left(\frac{x+y}{2}\right)=\frac{f(x)+f(y)}{2}$ and $f(x)=\frac{f\left(\frac{2}{3}x\right)+f\left(\frac{4}{3}x\right)}{2}$.
No (for the first equation). But we can claim $f(x)=ax+b$ (and all functions of this form satisfy the equation). Let $f$ be any function satisfying the functional equation. The equation remains true if we replace $f$ with $x\mapsto f(x)-f(0)-x(f(1)-f(0))$, i.e., we may assume wlog. that $f(0)=f(1)=0$. Let $S=\{\,x\in \Bbb R\mid f(x)=0\,\}$. So far we have $0\in S$, $1\in S$. Also, $x\in S\iff \frac x2\in S$. Using that, if two of $x,y,x+y$ are in $S$, then so is the third. It follows that $S$ is a dense subgroup of $\Bbb R$. By continuity of $f$, $S=\Bbb R$. The answer is also "No" for the second question, but for different reasons: there are some solutions that are far from the given form. The simplest "unusual" solution is $f(x)=|x|$.
Proving that: $||x|^{s/2}-|y|^{s/2}|\le 2|x-y|^{s/2}$
Define $f(x)=(x+1)^{\frac{s}{2}}-x^{\frac{s}{2}}-1$, $x>0$. Note that $$f'(x)=\frac{s}{2}\left((x+1)^{\frac{s-2}{2}}-x^{\frac{s-2}{2}}\right)\le0,$$ because $x+1\ge x$ and $\frac{s-2}{2}<0.$ So, for $x>0$, $f$ is decreasing, and hence $f(x)\le f(0)=0$ for $x>0$. Therefore, $$(x+1)^{\frac{s}{2}}\le x^{\frac{s}{2}}+1.$$ Replacing $x$ by $\frac{x}{y}$ for $x,y>0$, we obtain $$\left(\frac{x}{y}+1\right)^{\frac{s}{2}}\le \left(\frac{x}{y}\right)^{\frac{s}{2}}+1,$$ and multiplying through by $y^{\frac{s}{2}}$ gives $$\left(x+y\right)^{\frac{s}{2}}\le x^{\frac{s}{2}}+y^{\frac{s}{2}}\tag{1}.$$ Now using $(1)$ we have $$ |x|^{\frac{s}{2}}\leq (|x-y|+|y|)^{\frac{s}{2}}\leq |x-y|^{\frac{s}{2}}+|y|^{\frac{s}{2}}. $$ Hence, $$ |x|^{\frac{s}{2}}-|y|^{\frac{s}{2}}\leq |x-y|^{\frac{s}{2}}. $$ In the same way, $$ |y|^{\frac{s}{2}}\leq (|y-x|+|x|)^{\frac{s}{2}}\leq |x-y|^{\frac{s}{2}}+|x|^{\frac{s}{2}}. $$ Then, $$ |y|^{\frac{s}{2}}-|x|^{\frac{s}{2}}\leq |x-y|^{\frac{s}{2}}. $$ Therefore $$ ||x|^{\frac{s}{2}}-|y|^{\frac{s}{2}}|\leq |x-y|^{\frac{s}{2}}\leq2|x-y|^{\frac{s}{2}}. $$
Why are half-coversed or coversed trigonometric functions being deprecated?
They're not deprecated; they're largely just forgotten. In the present day, trigonometry is taught in courses for non-mathematically inclined people who have no desire to take a math course except to be rewarded with a good grade for obeying and working hard. They do not suspect that mathematics is a subject in which one derives things logically from other things and that one knows something is true by understanding its derivation, rather than by being handed a dogma by authorities. They bring in tuition money. It's a racket. Racketeers get paid to pretend that that is education. If you tell the truth to such (using the term loosely) students, that mathematics is an intellectual endeavor, they complain that other instructors don't require them to know that. Textbooks written for that kind of audience are where people learn trigonometry nowadays, including many people who actually want to understand mathematics. It is largely forgotten that there is actually such a thing as an advanced trigonometry book for an audience consisting of mathematicians. Some of these are of that sort.
tough question about multiple improper integrals
In polar coordinates, for every $\alpha>0$, we have $$\eqalign{ \int_{B(0,\alpha)}f&=\int_0^\alpha\left(\int_0^{\pi/4}\theta\, d\theta+\int_{\pi/4}^{\pi/2}(\tfrac{\pi}{4}-\theta)\,d\theta\right)r\,dr\cr &=\int_0^\alpha\left(\int_0^{\pi/4}\theta\, d\theta-\int_0^{{\pi/4}}\theta\, d\theta\right)r\,dr=0 } $$ Thus, $\lim\limits_{\alpha\to\infty}\int_{B(0,\alpha)}f=0$. On the other hand, substituting $\theta\mapsto\frac{\pi}{2}-\theta$ in the second integral (so that $\sin\theta$ becomes $\cos\theta$ and $\tfrac{\pi}{4}-\theta$ becomes $\theta-\tfrac{\pi}{4}$), $$\eqalign{ \int_{[0,\alpha]^2}f&=\int_0^{\pi/4}\theta\left(\int_0^{\alpha/\cos\theta}r\,dr\right)d\theta +\int_{\pi/4}^{\pi/2}(\tfrac{\pi}{4}-\theta)\left(\int_0^{\alpha/\sin\theta}r\,dr\right)d\theta\cr &=\alpha^2\int_0^{\pi/4}\frac{\theta}{2\cos^2\theta}\,d\theta +\alpha^2\int_{0}^{\pi/4}\frac{\theta-\tfrac{\pi}{4}}{2\cos^2\theta}\,d\theta\cr &=\alpha^2\int_0^{\pi/4}\left(\theta-\tfrac{\pi}{8}\right)\sec^2\theta\, d\theta\cr &=\alpha^2\Big[\left(\theta-\tfrac{\pi}{8}\right)\tan\theta+\ln\cos\theta\Big]_0^{\pi/4}\cr &=\left(\frac{\pi}{8}-\frac{\ln 2}{2}\right)\alpha^2 } $$ Since $\frac{\pi}{8}-\frac{\ln 2}{2}\approx 0.046>0$, we get $\lim\limits_{\alpha\to\infty}\int_{[0,\alpha]^2}f=+\infty$.
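A rough numerical sanity check of that constant, using a midpoint rule on the square with the integrand described above ($f=\theta$ on $[0,\pi/4]$ and $f=\frac{\pi}{4}-\theta$ on $[\pi/4,\pi/2]$; grid size and $\alpha$ are illustrative):

```python
import numpy as np

# midpoint-rule approximation of the integral of f over [0, a]^2
a, n = 50.0, 1000
xs = (np.arange(n) + 0.5) * a / n          # midpoints, so no division by zero
X, Y = np.meshgrid(xs, xs)
theta = np.arctan2(Y, X)                   # polar angle of each grid point
f = np.where(theta <= np.pi / 4, theta, np.pi / 4 - theta)
print(f.sum() * (a / n) ** 2 / a**2)       # ratio integral / a^2, ~ 0.046
print(np.pi / 8 - np.log(2) / 2)           # pi/8 - (ln 2)/2 ~ 0.0461
```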
If $f$ is locally Lipschitz, then for any compact set $K$, $f \mid_K$ is globally Lipschitz
While I am not a fan of proof by contradiction, it works efficiently here. Suppose $S(x,y)={\|f(x)-f(y)\| \over \|x-y\|}$ is unbounded for $x,y \in K$, $x \neq y$. Then we can find $x_k, y_k \in K$ such that $S(x_k,y_k) \to \infty$. Since $K$ is compact, we can assume that $x_k \to x$, $y_k \to y$. Since $f$ is bounded on $K$, we must have $x=y$ (otherwise the denominators $\|x_k-y_k\|$ would be bounded away from zero and $S(x_k,y_k)$ would remain bounded). By assumption, $f$ is locally Lipschitz around $x$, hence $S(x_k,y_k) \le L$ for some (finite) $L$ and all large $k$, which is a contradiction. Here is a constructive proof: Since $f$ is locally Lipschitz, for each $x$ there are some $r_x>0$ and $L_x$ such that $f$ is Lipschitz with rank $L_x$ on $B(x,r_x)$. The sets $B(x, {1 \over 2} r_x)$, $x \in K$, form an open cover of $K$, so a finite number cover $K$. For convenience, denote these by $B(x_k, {1 \over 2} r_k)$ (writing $r_k$ instead of $r_{x_k}$). Let $M= \sup_{x \in K} \|f(x)\|$, $r= {1 \over 2}\min_k r_k$, $L_0 = {2M \over r}$ and $L= \max (L_0, \max_k L_k)$. Then $L$ is a Lipschitz constant for $f$ on $K$. To see this, pick $x,y \in K$. If $\|x-y\| \ge r$ then ${ \|f(x)-f(y) \| \over \|x - y \|} \le {2M \over r} = L_0 \le L$. If $\|x-y\| < r$, then for some $x_k$ we have $x \in B(x_k, {1 \over 2} r_k)$; since $r \le {1 \over 2} r_k$, also $y \in B(x_k, r_k)$, and so $\|f(x)-f(y) \| \le L_k \|x - y \| \le L \|x - y \|$.
Proving boundedness for a FTBS numerical scheme
I think you have already proved this (pretty much). FTBS is only stable (and bounded) for $0 \le c \le 1$. FTBS can be re-written as $$\phi^{(n+1)}_j = (1-c)\phi^{(n)}_j + c \phi^{(n)}_{j-1}.$$ For $0 \le c \le 1$ this is a convex combination, so $\phi^{(n+1)}_j$ cannot lie outside the interval spanned by $\phi^{(n)}_j$ and $\phi^{(n)}_{j-1}$. So new extrema cannot be generated.
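A minimal numerical sketch of that convexity argument (the grid, initial condition and Courant numbers below are illustrative assumptions, with periodic boundaries):

```python
import numpy as np

def ftbs_step(phi, c):
    """One FTBS update: phi_j^{n+1} = (1 - c) phi_j^n + c phi_{j-1}^n,
    with periodic boundaries via np.roll."""
    return (1.0 - c) * phi + c * np.roll(phi, 1)

# step-function initial condition on a periodic grid
x = np.linspace(0.0, 1.0, 100, endpoint=False)
phi0 = np.where(x < 0.5, 1.0, 0.0)
lo, hi = phi0.min(), phi0.max()

for c in (0.5, 1.0):               # Courant numbers in the stable range [0, 1]
    phi = phi0.copy()
    for _ in range(200):
        phi = ftbs_step(phi, c)
        # each new value is a convex combination of old values,
        # so no new extrema can appear
        assert lo - 1e-12 <= phi.min() and phi.max() <= hi + 1e-12
print("bounds preserved for c in [0, 1]")
```

For $c>1$ the coefficient $1-c$ becomes negative, the update is no longer a convex combination, and the bound fails.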
Semi-colon in set notation
$$ '\{ (1,2,3,\dots, n); n\in \mathbb{N}\}' $$ is the same thing as $$ \{(1), (1,2),\ldots \} $$ Check out: I don't think we have $\Omega = \{(0,0,...) \cup (1,1,...)\}$
solving the following stochastic differential equation
Let the process $Y_t$ be defined by $Y_t = X_t^8$; then by Itô's formula $$ dY_t=8 X_t^7\,dX_t+\frac{56}{2} X_t^6 \,d\langle X\rangle_t. $$ Knowing that $d\langle X\rangle_t = \frac{1}{X_t^6}dt$ and $X_t^7dX_t = X_t^4 dW_t = \sqrt{Y_t}dW_t$, $$ dY_t=8 \sqrt{Y_t}\, dW_t+28\, dt. $$ So $Y_t$ follows a Cox-Ingersoll-Ross diffusion of the general form $dY_t=(a-kY_t)dt+\sigma \sqrt{Y_t}dW_t$ with parameters $a = 28$, $k = 0$, $\sigma =8$. Since $0 \leq 2a\leq \sigma^2$, the process $Y_t$ is non-negative as long as $Y_0 \geq 0$ (it may touch $0$, because the Feller condition $2a \geq \sigma^2$ fails here, but it never becomes negative), and we can define two possible solutions for $X_t$ as $$ X_t = Y_t^{\frac{1}{8}} \quad\text{or}\quad X_t = -Y_t^{\frac{1}{8}} $$
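If you want to see the $Y$ dynamics numerically, here is a minimal Euler–Maruyama sketch (the initial value, horizon and step count are illustrative assumptions, and the zero-clipping is a standard guard for CIR-type schemes rather than part of the derivation above):

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 28.0, 8.0            # dY_t = a dt + sigma sqrt(Y_t) dW_t  (k = 0)
T, n = 1.0, 100_000
dt = T / n

y = 1.0                          # illustrative initial value Y_0 = X_0^8
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    # full-truncation Euler step: clip at 0 so the square root stays defined;
    # the exact process is non-negative, but a discrete step can overshoot
    y = max(y + a * dt + sigma * np.sqrt(y) * dW, 0.0)

print(y ** (1 / 8))              # one of the two solutions, X_t = Y_t^(1/8)
```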
Does the von Neumann algebra generated by a normal operator contain all commuting projections?
No. For instance, if $T$ is the identity operator, then $\mathscr{A}$ is just the span of $T$, but every orthogonal projection commutes with $T$.
How to prove $\displaystyle\lim_{x \to 0} \dfrac{\sin^{-1} x}{x} = 1$?
You already have $$\lim_{t\to 0} \frac{\sin t}{t} = 1.$$ We will make the change of variable $$\sin t = x, \qquad t = \sin^{-1} x,$$ noting that $t \to 0$ as $x \to 0$, so that $$\lim_{x\to 0} \frac{\sin^{-1}(x)}{x} = \lim_{t\to 0}\frac{t}{\sin t} = 1.$$
Binomial expansion of negative exponent in descending powers of x
I am assuming that you have $|x| < 1.$ In that case, $(1+x)^{-1} = \sum _{n =0}^{\infty} (-1)^nx^n.$ This comes from the sum formula for a geometric series. Now take derivatives of both sides. Then you have $-(1+x)^{-2} = \sum _{n =0}^{\infty} n(-1)^nx^{n-1}.$ Can you complete the solution?
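In case it helps, the tidied form of that derivative series (multiplying by $-1$ and reindexing with $m=n-1$) is $$(1+x)^{-2}=\sum_{n=1}^{\infty}n(-1)^{n+1}x^{n-1}=\sum_{m=0}^{\infty}(m+1)(-1)^m x^m=1-2x+3x^2-\cdots.$$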
How to find the probability of a score from multiple dice with varying sides
I found a solution to this specific problem by treating each dice roll as a one-dimensional array containing the probability distribution, and then convolving the arrays into a single distribution. I've uploaded a demonstration here.
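A minimal sketch of that convolution approach (the particular dice sizes below are illustrative):

```python
import numpy as np

def die_pmf(sides):
    """PMF of a fair die: P(score k) = 1/sides for k = 1..sides.
    Index 0 holds P(score = 0), which is 0 for a single die."""
    return np.concatenate(([0.0], np.full(sides, 1.0 / sides)))

def total_pmf(dice):
    """PMF of the sum of independent dice (e.g. dice=[6, 6, 8]),
    computed by convolving the individual PMFs."""
    pmf = np.array([1.0])             # P(total = 0) = 1 before any roll
    for sides in dice:
        pmf = np.convolve(pmf, die_pmf(sides))
    return pmf                         # pmf[k] = P(total = k)

pmf = total_pmf([6, 6, 8])             # two six-sided dice and one eight-sided
print(pmf[10])                          # probability the three dice total 10
```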
Solve limit without use L'Hopital: $\lim _{x\to 0}\left(\frac{sin\left(2x\right)-2sin\left(x\right)}{x\cdot \:arctg^2x}\right)$
HINT: $$\dfrac{\sin2x-2\sin x}{x\arctan^2x}=-2\cdot\dfrac{\sin x}x\cdot\dfrac{1-\cos x}{x^2}\cdot\left(\dfrac x{\arctan x}\right)^2$$ and $$\dfrac{1-\cos x}{x^2}=\dfrac1{(1+\cos x)}\cdot\left(\dfrac{\sin x}x\right)^2$$
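Combining the standard limits $\lim_{x\to0}\frac{\sin x}{x}=1$, $\lim_{x\to0}\frac{1-\cos x}{x^2}=\frac12$ and $\lim_{x\to0}\frac{x}{\arctan x}=1$, the hint yields $$\lim_{x\to 0}\frac{\sin 2x-2\sin x}{x\arctan^2 x}=-2\cdot1\cdot\frac12\cdot1^2=-1.$$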
Find 'ordinary generating function': 1, 0, 2, 0, 3, 0, 4, 0, 5....
We obtain \begin{align*} \color{blue}{1+2x^2+3x^4+\cdots}&=\sum_{n=0}^\infty (n+1)x^{2n}\\ &=\frac{1}{2x}\frac{d}{dx}\left(\sum_{n=0}^\infty \left(x^2\right)^{n+1}\right)\\ &=\frac{1}{2x}\frac{d}{dx}\left(\sum_{n=1}^\infty \left(x^2\right)^n\right)\\ &=\frac{1}{2x}\frac{d}{dx}\left(\frac{1}{1-x^2}-1\right)\\ &\,\,\color{blue}{=\frac{1}{(1-x^2)^2}} \end{align*}
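A quick sanity check of the closed form, assuming SymPy is available:

```python
from sympy import symbols, series

x = symbols('x')
# expected expansion: 1 + 2*x**2 + 3*x**4 + 4*x**6 + 5*x**8 + O(x**9)
print(series(1 / (1 - x**2)**2, x, 0, 9))
```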
I need help with this word problem.
Try working with the two equations, in two unknowns: You can finish your first equation (the sum of the cost of the more expensive food ($x$ pounds at $\$1.10$ per pound) and the cost of the less expensive food ($y$ pounds at $\$0.85$ per pound)) by noting we want a total of $40$ pounds costing $\$0.95$ per pound, for a total cost of $0.95\times 40$: $$1.10x+0.85y= 0.95\times 40\tag{1}$$ The number of total pounds needed is the sum of the weights, in pounds, given by $x + y$: $$x + y = 40\tag{2}$$
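If you want to check the setup, substituting $(2)$ into $(1)$ pins down the amounts: $$1.10x+0.85(40-x)=38\;\Longrightarrow\;0.25x=4\;\Longrightarrow\;x=16,\quad y=24.$$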
Calculating interest-like problem
Adding $20$% to a number is the same as multiplying the number by $1.2$, so the result of doing it $n$ times to the number $x$ is simply $(1.2)^nx$, where $x$ is the original number.
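For instance, applying the increase three times to $x=100$ gives $(1.2)^3\cdot100=1.728\cdot100=172.8$.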
Does ∧ still mean intersection when using predicate logic?
In predicate logic the connectives are the same of propositional logic : $∧$ is "and" and $∨$ is "or". It is intersection (between sets) that is defined with "and" : $x ∈ A∩B \text { iff } x∈A \text { and } x∈B$. The antecedent of the formula reads : "there is an object that is $Q$ and there is an object (not necessarily the same) that is $R$ ", while the consequent reads : "there is an object that is $Q$ and is $R$".
Given $X \sim e^x$ and $Y|X \sim x$ is uniform on $(0,x)$. Find the correlation of (X,Y)
The values are not zero. Check your work: $$ \int_1^{\infty} \int_0^x y\frac{e^{-x}}{x}\ dy\ dx = \int_1^{\infty} \frac{xe^{-x}}{2}\ dx =\frac{1}{2}\big[-xe^{-x} - e^{-x}\big]\bigg|_1^\infty = e^{-1} $$ And similarly for the other integral.
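A quick symbolic check of that integral, assuming SymPy is available:

```python
from sympy import symbols, integrate, exp, oo

x, y = symbols('x y', positive=True)
inner = integrate(y * exp(-x) / x, (y, 0, x))   # gives x*exp(-x)/2
print(integrate(inner, (x, 1, oo)))              # expected: exp(-1)
```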
Doubts about series convergence/divergence and properties of compound functions.
For question (I), all options are incorrect: to contradict options (1) and (4), take $a_n = (-1)^n/\sqrt n$; for option (2) take $b_n= 1/n$; for option (3) take $a_n=1/n^2$ and $b_n=1/n$. For question (II), option (a) is incorrect: take $$g(x)= \begin{cases} 1, & \text{if $x$ is rational} \\ -1, & \text{if $x$ is irrational} \end{cases}$$ and $f(x)=x^2$. Option (b) is correct: $g$ is bounded on $\mathbb{R}$, so $g(\mathbb{R})$ is a bounded set, and the continuous function $f$ is bounded on its (compact) closure; hence $f\circ g$ is bounded on $\mathbb{R}$. Option (c) is correct: since $g$ is bounded on all of $\mathbb{R}$, its restriction to $f(\mathbb{R})$ is bounded, so $g\circ f$ is bounded on $\mathbb{R}$. Option (d) is incorrect: take $f(x)=x^2$.