what is a "Banach algebra" without the norm condition on a continuous multiplication?
This is a long comment. It seems that the key difference happens in infinite dimensions, and it's probably worth studying a specific example of this. Consider a toy example in finite dimensions. Let $B$ be the space of $n\times n$ matrices $A$ with the max norm: $\|A\|:=\max\{|a_{ij}|\}$. This is not a submultiplicative norm. Of course, it's still a Banach algebra in the loose sense that it's closed under addition and multiplication. We can generalize this to infinite dimensions by taking the space of infinite matrices, and we define matrix multiplication as usual with $\{AB\}_{ij}=\sum_{k=1}^\infty a_{ik}b_{kj}$. Now we can try to single out the "bad" pairs of $A,B$ which are not submultiplicative under the max norm above and study, if you wish, their spectral properties.
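A quick numerical illustration (a sketch in Python/NumPy; the all-ones matrix is an arbitrary illustrative choice): already in two dimensions the max norm fails to be submultiplicative.

```python
import numpy as np

# The all-ones 2x2 matrix violates submultiplicativity for the max norm:
# ||A||_max = ||B||_max = 1, but ||AB||_max = 2 > 1 * 1.
max_norm = lambda M: np.abs(M).max()

A = np.ones((2, 2))
B = np.ones((2, 2))
print(max_norm(A @ B), max_norm(A) * max_norm(B))   # 2.0 vs 1.0
```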
Proof that the general polytope is measurable
Jump to the bottom for the proof. Half-space background A half-space is all the points on one side of a line in $\mathbb{R}^2$ or all the points on one side of a plane in $\mathbb{R}^3$, and so forth. For lines/planes that cut through the origin, you can represent the half-space as all vectors $x$ such that $x \cdot v \ge 0$, for some $v$ perpendicular to the line/plane. To cut the space somewhere other than the origin, you can displace the line/plane along $v$ by defining the half-space as all vectors $x$ such that $x \cdot v \le c$, for some $c \in \mathbb{R}$. If the half-space includes the line/plane that does the cutting, it will be a closed subspace (convince yourself). Polytope background Polytopes are the higher dimensional versions of polygons. Closed convex polygons can be expressed as the intersection of closed half-planes (convince yourself). Closed convex polytopes can be expressed as the intersection of closed half-spaces. A closed convex polytope in $\mathbb{R}^d$ is bounded if it is contained in a box in $\mathbb{R}^d$. Lemma 1 A half-space $H \subseteq \mathbb{R}^d$ can be represented in the form $\{(x', t) : x' \in \mathbb{R}^{d-1}; t \le f(x')\}$, where $f : \mathbb{R}^{d-1} \to \mathbb{R}$ is continuous. Proof. By definition, a half-space $H$ in $\mathbb{R}^d$ can be expressed as $H = \{x \in \mathbb{R}^d : x \cdot v \le c \}$ for some $v \in \mathbb{R}^d$ and $c \in \mathbb{R}$. Without loss of generality, assume that $v_d$ is non-zero (one of the components must be non-zero, and to avoid messy indexing, I'll order the dimensions such that the last one is the non-zero one); assume also that $v_d > 0$ (if $v_d < 0$, dividing by it flips the inequality and gives the analogous representation $t \ge f(x')$, which is handled the same way). We can reword the definition as follows. $$ \begin{align} H &= \{x \in \mathbb{R}^d : x \cdot v \le c \} \\ &= \{x \in \mathbb{R}^d : x_1v_1 + \dots + x_dv_d \le c \} \\ &= \{x \in \mathbb{R}^d : x_dv_d \le c - x_1v_1 - \dots - x_{d-1}v_{d-1} \} \\ &= \{x \in \mathbb{R}^d : x_d \le \frac{c - x_1v_1 - \dots - x_{d-1}v_{d-1}}{v_d} \} \\ \end{align} $$ To tidy this expression up, let $x' := (x_1, x_2, \dots, x_{d-1})$ be the vector $x$ without the last coordinate and let $f : \mathbb{R}^{d-1} \to \mathbb{R}$ be the function $f(x') := \frac{c - x'_1v_1 - x'_2v_2 - \dots - x'_{d-1}v_{d-1}}{v_d} $, and observe that it is continuous. Then continue like so: $$ \begin{align} H &= \{(x', x_d) \in \mathbb{R}^d : x_d \le f(x') \} \\ &= \{(x', x_d) : x' \in \mathbb{R}^{d-1} \text { and } x_d \le f(x') \} \end{align} $$ q.e.d. Lemma 2 A half-space intersected with a box is Jordan measurable. Proof. Let $H$ be a half-space in $\mathbb{R}^d$, let $B$ be a box in $\mathbb{R}^d$, and let $x \in \mathbb{R}^d$. Pause for some notation: $H = \{x \in \mathbb{R}^d : x \cdot v \le c \}$ for some $v \in \mathbb{R}^d$ and constant $c \in \mathbb{R}$. $B$ can be expressed as $B = I_1 \times I_2 \times ... \times I_d$ where $I_1, \dots, I_d$ are intervals. Let $B' = I_1 \times I_2 \times ... \times I_{d-1}$. So then $B = B' \times I_d$. Let $x' = (x_1, ... x_{d-1})$, all but the last coordinate of $x$. We can express set membership of $H \cap B$ as follows. $$ \begin{align} x \in H \cap B &\iff x \cdot v \le c \text{ and } x \in B' \times I_d \\ \end{align} $$ By Lemma 1 (along with that definition of $f$ and our assumptions on $v_d$): $$ \begin{align} x \in H \cap B &\iff x_d \le f(x') \text{ and } x \in B' \times I_d \\ &\iff x' \in B' \text{ and } a \le x_d \le\min(f(x'), b) \quad \text{where } I_d = [a, b] \\ \end{align} $$ Thus, $H \cap B = \{(x', x_d) : x' \in B' \text{ and } a \le x_d \le \min(f(x'), b) \}$. 
Now, translate $H \cap B$ by the vector $k = (0, 0, ..., -a) \in \mathbb{R}^d$ to get: $$(H \cap B) + k = \{(x', x_d) : x' \in B' \text{ and } 0 \le x_d \le \min(f(x'), b) - a \}$$ This is exactly the form we have in 1.1.7 (2) in the book, so we know that this set is Jordan measurable, i.e. $m((H \cap B) + k)$ exists. By translation invariance, $m((H \cap B) + k) = m(H \cap B)$. q.e.d. Proof Let $P = H_1 \cap H_2 \cap ... \cap H_n$ be a bounded polytope in $\mathbb{R}^d$ expressed as the intersection of $n$ closed half-spaces $H_1, \dots, H_n$. Such a polytope is necessarily convex. As $P$ is bounded, $P = B \cap P$ for some box $B \subseteq \mathbb{R}^d$. So we have: $$ \begin{align} P &= B \cap P \\ &= B \cap (H_1 \cap H_2 \cap ... \cap H_n) \\ &= (B \cap H_1) \cap (B \cap H_2) \cap ... \cap (B \cap H_n) \\ \end{align} $$ By Lemma 2, $B \cap H_i$ is Jordan measurable for all $1 \le i \le n$, and by boolean closure (1.1.6 (1) in the book) it follows that $(B \cap H_1) \cap (B \cap H_2) \cap ... \cap (B \cap H_n)$ is Jordan measurable. Thus, $P$ is Jordan measurable. q.e.d.
Does $\frac{nx}{1+n \sin(x)}$ converge uniformly on $[a,\pi/2]$ for all $a \in (0,\pi/2]$?
If the convergence were uniform, then as $n\to \infty,$ $$\sup_{x\in(0,\pi/2]}\,|f_n(x) - x/(\sin x)| \to 0\implies |f_n(1/n) - (1/n)/(\sin (1/n))| \to 0.$$ But $f_n(1/n) \to 1/2$ and $(1/n)/(\sin (1/n)) \to 1.$
Symmetric random walk passes through 1
Here is a martingale argument. Note that by continuity from below, \begin{align*} \{T_1< \infty\} = \bigcup_{n=1}^{\infty}\{T_1 < T_{-n}\} \Rightarrow P(T_1 < \infty) = \lim_n P(T_1 < T_{-n}), \end{align*} where $T_{-n} \doteq \inf\{m: S_m = -n\}$. Letting $T = T_1 \wedge T_{-n}$, we have by the Optional Sampling Theorem that \begin{align*} 0 = ES_0 = ES_{T \wedge m}. \end{align*} Also note that $P(T < \infty) = 1$, since $P(T \geq m(1+n)) \leq (1-2^{-(1+n)})^m$, i.e. every sequence of $n+1$ flips has probability $2^{-(n+1)}$ of being all heads, in which case the walk will escape the interval $(-n,1)$. If the time of escape is greater than $m(n+1)$, then we must have failed to obtain $n+1$ heads in a row, $m$ times in a row. We now have \begin{align*} E\left(\frac{T}{n+1}\right) = \sum_{m=1}^{\infty} P(T \geq m(n+1)) < \infty, \end{align*} since this is a geometric series. Hence $ET < \infty$ so $T$ is finite with probability $1$, so that $S_{T \wedge m} \to S_T$ almost surely. Since $S_{T \wedge m}$ is bounded between $-n$ and $1$, we have by the Dominated Convergence Theorem that \begin{align*} 0 = \lim_{m \to\infty} ES_{T \wedge m} = ES_T = P(T_1 < T_{-n})\cdot 1 + (1-P(T_1 < T_{-n})) \cdot (-n). \end{align*} Rearranging gives \begin{align*} P(T_1 < T_{-n}) = \frac{n}{n+1}. \end{align*} Hence, $P(T_1 < \infty) = \lim_{n\to\infty} P(T_{1} < T_{-n}) = 1$.
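If you want to sanity-check the identity $P(T_1 < T_{-n}) = \frac{n}{n+1}$ numerically, here is a small Monte Carlo sketch (the trial count and the values of $n$ are arbitrary illustrative choices):

```python
import random

def first_passage(n, trials=100_000):
    # estimate P(T_1 < T_{-n}) for the simple symmetric walk started at 0
    hits = 0
    for _ in range(trials):
        s = 0
        while -n < s < 1:
            s += random.choice((-1, 1))
        hits += (s == 1)
    return hits / trials

for n in (1, 3, 9):
    print(n, first_passage(n), n / (n + 1))   # estimates approach n/(n+1)
```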
Cauchy-Schwarz inequality for $L^2$-norm on periodic functions space
Your mistake is in the last step. Linearity allows a scalar $k \in \Bbb{F}$ to be brought into the bracket $\langle u,v \rangle$: $$k\langle u,v \rangle = \langle ku,v \rangle,$$ but not a vector $w \in V$: $$w\langle u,v \rangle = \langle wu,v \rangle. \tag{wrong!}$$ Here, $V = C(\mathbb{R}/\mathbb{Z},\mathbb{C})$, $\Bbb{F} = \Bbb{C}$, $u = f$ and $v = g$.
Markov chain for a board game with dice rolls
Let's say that we want the probability of doing exactly 4 laps and then landing on the 15th square after $n$ rolls of the die. Note that the number of steps you have moved forward is simply the total of all $n$ die rolls. In other words, if $X_i$ denotes the result of the $i$th roll of the die, then we want the probability that $$ X_1 + X_2 + \cdots + X_n = 4 \cdot 20 + 14 = 94. $$ The $X_i$ are i.i.d. uniform variables over $\{1,2,\dots,6\}$. If we write $Y = X_1 + X_2 + \cdots + X_n$, we're looking for the probability that $Y = y = 94$. If you're interested in an exact formula for this probability, see this paper, for instance. For large $n$, however, this can be nicely approximated using the central limit theorem. In particular, each $X_i$ has mean $7/2$ and variance $\frac{6^2 - 1}{12} = \frac{35}{12}$. With that established, the probability that $Y = y$ is approximately equal to $$ P \approx \Pr \left(\frac{y - \frac 12 - \frac{7n}{2}}{\sqrt{\frac{35n}{12}}} \leq Z \leq \frac{y + \frac 12 - \frac{7n}{2}}{\sqrt{\frac{35n}{12}}}\right), $$ where $Z$ is normally distributed with mean $0$ and standard deviation $1$.
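As a rough sketch of how one might compare the exact probability with the CLT approximation (the choice $n = 27$ below is purely illustrative; any $n$ with $n \le 94 \le 6n$ works):

```python
from math import erf, sqrt

def exact_prob(n, target):
    # distribution of the sum of n fair dice, by repeated convolution
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for s, p in dist.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0.0) + p / 6
        dist = new
    return dist.get(target, 0.0)

def clt_prob(n, y):
    # P(Y = y) via the normal approximation with continuity correction
    mu, var = 3.5 * n, 35 * n / 12
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    return Phi((y + 0.5 - mu) / sqrt(var)) - Phi((y - 0.5 - mu) / sqrt(var))

n = 27
print(exact_prob(n, 94), clt_prob(n, 94))   # the two values are close
```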
Maximum inscribed sphere inside ellipse and minimum circumscribed sphere containing ellipse
Change coordinates by defining $y = x - \hat{x}$. Now your function is $$ g(y) = \frac12 y^t Q y + t, $$ where $t = -\frac12 c^t Q^{-1} c$. The level set for $g(y) = a$ is then all points $y$ with $$ y^t Q y = 2(a - t) $$ Because $Q$ is a symmetric positive definite matrix, there's an orthogonal matrix $R$ whose rows are the (unit) eigenvectors of $Q$, such that $$ Q = R^t D R $$ where $D = diag(\lambda_1, \ldots, \lambda_n)$, with the eigenvalues ordered so that $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_n > 0$. So we can rewrite $g$ as $$ g(y) = \frac12 y^t R^t D R y + t. $$ Once again changing coordinates to $z = Ry$, we have $$ h(z) = \frac12 z^t D z + t $$ whose level-set, for $a$, is $$ \{z \mid z^t D z = 2(a-t) \} $$ Writing that out, we have $$ z_1^2 \lambda_1 + \ldots + z_n^2 \lambda_n = 2(a-t) $$ Now because of the ordering of the $\lambda_i$ (with $\lambda_1$ the largest), we can say $$ z_1^2 \lambda_1 + \ldots + z_n^2 \lambda_n \le z_1^2 \lambda_1 + \ldots + z_n^2 \lambda_1 = \lambda_1 (z_1^2 + \ldots + z_n^2) \tag{1} $$ so $$ \lambda_1 \|z\|^2 \ge 2(a-t) $$ hence $$ \|z\|^2 \ge \frac{2(a-t)}{\lambda_1 } $$ so $$ \|z\| \ge \sqrt{\frac{2(a-t)}{\lambda_1 }}, $$ which says that every point on the ellipsoid is at least that far from the origin (with the point $\sqrt{\frac{2(a-t)}{\lambda_1}}\,(1,0,\ldots, 0)$ being exactly that far from the origin), hence the radius of the inscribed sphere must be that number. I'll bet that you can take equation 1 and write a greater-than-or-equal version involving $\lambda_n$, and derive the other half of the result for yourself.
$\mathbb{Z}[\sqrt{-7}]$ is UFD?
$\mathbb{Z}[\sqrt{-7}]$ is not a UFD, because $1+\sqrt{-7}$ is an irreducible element of $\mathbb{Z}[\sqrt{-7}]$ but not a prime: for example, $(1+\sqrt{-7})(1-\sqrt{-7})=8=2\cdot2\cdot2$, so $1+\sqrt{-7}$ divides a product of $2$'s without dividing $2$ itself. (Note: in an integral domain primes are always irreducible, but in a UFD irreducible elements are prime as well.)
Does the final answer depend on the original expression or its simplified form?
Community wiki answer so the question can be marked as answered: As has been stated in various comments, you're right and the book is wrong. Substituting $4$ into the inequality results in an undefined expression that cannot be said to be satisfied or not satisfied.
Why do PDE's seem so unnatural?
The big problem with PDEs that makes them so difficult is geometry. ODEs are fairly natural because we only have to consider a few cases when it comes to the geometry of the set and the known information about it. We can find general solutions to many (linear) ODEs because of this, and so there's usually a natural progression to get there. PDEs, on the other hand, have at least 2-dimensional independent variables, so the variety in the kinds of domains is increased from just intervals to any reasonable connected domain. This means that initial and boundary values contain a lot more information about the solution, so a general solution would have to take into account all possible geometries. This is not really possible in any meaningful way, so there usually aren't general solutions. When we do pick a geometry, it often simplifies the problem significantly. One nice domain is $\mathbb{R}^n$. Many simple PDEs have invariance properties, which means that if we're given enough space to "shift" and "scale" parts of the equation, we can probably come to reason about what the solution should look like. For these situations, there may be general solutions (see PDEs on unbounded domains). These solutions are also more of the straightforward kinds of solutions we see in ODEs. Many PDEs and ODEs simply don't have closed form solutions, and so usually rely on series methods and other roundabout ways to write solutions which don't really "look" like solutions. Separation of variables is a kind of reasonable guess that the effect of each independent variable should be independent in some way. We can try writing the solution as a sum or a product or some other combination of independent functions of each independent variable, and this often reduces the problem in some way which allows us to separate the PDE into a series of ODEs. We don't know that this will work in every case, but if we can show uniqueness of a solution, then finding any kind of solution means we found the solution to the problem. The last main reason is that the theory of PDEs is way harder than the theory of ODEs. So, when you're first learning to solve ODEs, you can be introduced to these methods with a bit of theory and some background on why each of the guesses and techniques makes some sense. When first learning to solve PDEs, however, you probably will not have anywhere near the amount of background you need to fully understand the problems. You can be taught the methods, but they will always seem like a random guess or just a technique that happens to work, until you learn about the theory behind it. As Eric Towers mentions, some Lie algebra would be a good place to start, and I would also recommend PDE books with a more theoretical slant to them, such as Lawrence Evans' text. Since you seem to have some background in real analysis (and so presumably some basic modern/abstract algebra), I think both of these paths should be achievable at your level.
Characteristic of $3\mathbb{Z}$, $\mathbb{Z} \times 5\mathbb{Z}$, $\mathbb{Z}_5 \times \mathbb{Z}_3$
Your intuitions are right. According to the definition: there is no positive $n$ so that $n \cdot 3=0$, and there is no positive $n$ so that $n \cdot(1,5)=(0,0)$, so those two characteristics are $0$; but for all $(a,b) \in \Bbb Z_5 \times \Bbb Z_3$, $$15\cdot(a,b)=(0,0),$$ where $n \cdot a=\underbrace{a+a+\cdots+a}_{n\;\text{summands}}$
Given two sets $A$ and $B$, is it true that $|B^A|=\max\{|B|,|\mathcal{P}(A)|\}$?
It's certainly not true for finite sets; a counterexample is $$A=\{1,2\} \text{ and } B = \{1,2,3\}.$$ And if the Singular Cardinals Hypothesis is assumed, the result is again wrong, since $$\text{if } \vert \mathcal P(A) \vert \lt \vert B \vert \text{ and } \text{cf} \vert B \vert \le \vert A \vert \text{ then } \vert B^A \vert = \vert B \vert^+$$
when the numerator is less than the denominator
Assuming that $x$ and $y$ are positive, you have $0<x<y$, so $\frac1y>0$, and $$0\cdot\frac1y<x\cdot\frac1y<y\cdot\frac1y\;,$$ which on simplification becomes $$0<\frac{x}y<1\;.$$
Is $P(X,Y)=a + aY + (b+cX^2)Y^n \in \mathbb Z [X][Y]$ irreducible?
If $a=0$, then the polynomial is $\left(b+cX^2\right)Y^n$ which is reducible over $\mathbb{Z}$ if and only if (1) $\gcd(b,c) \neq 1$, (2) $n >1$, (3) $n=1$ and either $|b|>1$ or $c\neq 0$, or (4) $n=0$, $c=\sigma \mu^2$, and $b$ is equal to $-\sigma \nu^2$, where $\mu,\nu\in\mathbb{Z}$ and $\sigma\in\{-1,+1\}$ are such that $\mu \neq 0$ or $|\nu|>1$. From now on we assume that $a\neq 0$. If $c=0$, then the polynomial is $a+aY+bY^n$. If $\gcd(a,b)\neq 1$, then this polynomial is reducible over $\mathbb{Z}$. If $\gcd(a,b)=1$, then consider two cases: (i) If $b=0$, then $a=\pm1$ and the polynomial is $\pm(1+Y)$, which is irreducible over $\mathbb{Z}$; (ii) If $b\neq 0$, then the irreducibility of $a+aY+bY^n$ over $\mathbb{Z}$ is the same as the irreducibility of $Y^n+\frac{a}{b}Y+\frac{a}{b}$ over $\mathbb{Q}$, but I am not sure if there is a full characterization of which values of $\frac{a}{b}$ and $n$ would make this polynomial irreducible. Now, assume that $c\neq 0$. In the case where $\gcd(a,b,c)\neq 1$, it is obvious that the given polynomial is reducible over $\mathbb{Z}$. Suppose from now on that $\gcd(a,b,c)=1$. If $n=0$, then the given polynomial is $(a+b)+aY+cX^2$. Since the ideal $\big((a+b)+aY\big)$ of the integral domain $\mathbb{Q}[Y]$ is prime, we can argue by Eisenstein's Criterion that $(a+b)+aY+cX^2$ is irreducible over $\mathbb{Q}$, whence also over $\mathbb{Z}$. From now on, assume that $n\geq 1$. Taking modulo $Y$, we conclude that, if $a+aY+\left(b+cX^2\right)Y^n$ is reducible over $\mathbb{Z}$, then either (a) $a+aY+\left(b+cX^2\right)Y^n=\left(f(Y)+XY\,g(Y)+X^2Y\,h(Y)\right)\,t(Y)$ for some polynomials $f(Y),g(Y),h(Y),t(Y)\in\mathbb{Z}[Y]$ with $t(Y) \neq \pm 1$, or (b) $a+aY+\left(b+cX^2\right)Y^n=\left(f_1(Y)+XY\,g_1(Y)\right)\left(f_2(Y)+XY\,g_2(Y)\right)$ for some polynomials $f_1(Y),f_2(Y),g_1(Y),g_2(Y)\in\mathbb{Z}[Y]$ (note that this case may hold only when $n\geq 2$). Case (a): We have $h(Y)\,t(Y)=cY^n$ and $f(Y)\,t(Y)=a+aY$. Therefore, $t(Y)$ must be constant. However, since $\gcd(a,b,c)=1$, we must have $t(Y)=\pm 1$, which is a contradiction. (This part concludes that, if $n=1$, then $a+aY+\left(b+cX^2\right)Y^n$ is irreducible.) Case (b): We have $f_1(Y)\,f_2(Y)=a+aY+bY^n$, $f_1(Y)\,g_2(Y)+f_2(Y)\,g_1(Y)=0$, and $g_1(Y)\,g_2(Y)=cY^{n-2}$. Hence, $g_1(Y)=c_1Y^{n_1}$ and $g_2(Y)=c_2Y^{n_2}$ for some $c_1,c_2\in\mathbb{Z}$ and $n_1,n_2\in\mathbb{N}_0$ such that $n_1+n_2=n-2$ and $c_1c_2=c$. If $n_1\neq n_2$, then the condition $f_1(Y)\,g_2(Y)+f_2(Y)\,g_1(Y)=0$ implies that $Y$ divides $f_1(Y)$ or $f_2(Y)$, contradicting the equality $f_1(Y)\,f_2(Y)=a+aY+bY^n$. Thus, $n_1=n_2$, whence $n$ is even, so $n_1=\frac{n}{2}-1$ and $n_2=\frac{n}{2}-1$. Ergo, $$F(Y):=c_2\,f_1(Y)=-c_1\,f_2(Y)\,.$$ That is, $$-\big(F(Y)\big)^2=\left(c_2\,f_1(Y)\right)\left(c_1\,f_2(Y)\right)=c\left(a+aY+bY^n\right)\,.$$ Hence, $a+aY+bY^n$ has a multiple root $\omega$ in the algebraic closure of $\mathbb{Q}$. The derivative of $a+aY+bY^n$ is $a+nbY^{n-1}$. We then have $a+nb\omega^{n-1}=0$. This means $$0=a+a\omega+b\omega^n=a+a\omega-\frac{a}{n}\omega\,,\text{ or } \omega=-\frac{n}{n-1}\,.$$ Hence, the only possible root $Y=\omega$ of $a+aY+bY^n$ is $\omega=-\frac{n}{n-1}$. That is, for some $k\in\mathbb{Z}$, $$k\big((n-1)Y+n\big)^n=a+aY+bY^n\,.$$ This is possible if and only if $n=2$, where $(a,b)=\lambda(4,1)$ for some $\lambda\in\mathbb{Z}$ with $\gcd(\lambda,c)=1$. 
If this is the case, the given polynomial is thus $$cX^2Y^2+\lambda(Y+2)^2=\left(c_1XY+\lambda_1 (Y+2)\right)\left(c_2XY+\lambda_2(Y+2)\right)\,,$$ for some $\lambda_1,\lambda_2\in\mathbb{Z}$ such that $\lambda_1\lambda_2=\lambda$. Ergo, $c_1\lambda_2+c_2\lambda_1=0$. Since $\gcd(\lambda,c)=1$, we conclude that $c=su^2$ and $\lambda=-sv^2$ for some $u,v\in\mathbb{Z}$ such that $\gcd(u,v)=1$ and $s\in\{-1,+1\}$. That is, $(a,b,c)=\left(-4sv^2,-sv^2,su^2\right)$. Synopsis: Let $a,b,c\in\mathbb{Z}\setminus\{0\}$ and $n\in\mathbb{N}_0$. Consider the polynomial $a+aY+\left(b+cX^2\right)Y^n$. It is reducible over $\mathbb{Z}$ if and only if either (A) $\gcd(a,b,c)\neq 1$, or (B) $n=2$ and $(a,b,c)=\pm\left(4v^2,v^2,-u^2\right)$ for some $u,v\in\mathbb{Z}$. The same polynomial is reducible over $\mathbb{Q}$ if and only if $n=2$ and $(a,b,c)=\pm\left(4v^2,v^2,-u^2\right)$ for some $u,v\in\mathbb{Z}$. It is reducible over $\mathbb{R}$ iff $n=2$, $a=4b$, and $ac<0$. Finally, this polynomial is reducible over some algebraic extension of $\mathbb{Q}$ or over $\mathbb{C}$ if and only if $n=2$ and $a=4b$. Now, suppose that $a,b,c$ may be zero. Then, $a+aY+\left(b+cX^2\right)Y^n$ is reducible over $\mathbb{Z}$ if and only if (I) $\gcd(a,b,c)\neq 1$, (II) $a=0$ and $n>1$, (III) $a=0$, $n=1$, and $|b|>1$, (IV) $a=0$, $n=1$, and $c\neq 0$, (V) $n=0$, $c=\sigma \mu^2$, and $b$ is equal to $-\sigma \nu^2$, where $\mu,\nu\in\mathbb{Z}$ and $\sigma\in\{-1,+1\}$ are such that $\mu \neq 0$ or $|\nu|>1$, (VI) $c=0$, $b\neq 0$, and $Y^n+\frac{a}{b}Y+\frac{a}{b}$ is reducible over $\mathbb{Q}$, or (VII) $n=2$ and $(a,b,c)=\pm\left(4v^2,v^2,-u^2\right)$ for some $u,v\in\mathbb{Z}$. Reducibility over $\mathbb{Q}$ holds iff (II), (IV), (V), (VI), or (VII) is satisfied. Reducibility over $\mathbb{R}$ holds iff (II), (IV), or one of the following conditions is satisfied: (V') $a=0$, $n=0$, $c\neq 0$, and $bc \leq 0$, (VI') $c=0$, $b\neq 0$, and either $n=2$ with $\frac{a}{b} \in (-\infty,0]\cup[4,+\infty)$ or $n>2$, and (VII') $n=2$, $a=4b$, and $ac<0$. Reducibility over some algebraic extension of $\mathbb{Q}$ or over $\mathbb{C}$ happens iff (II), (IV), or any of the following conditions is met: (V'') $a=0$, $n=0$, and $c\neq 0$, (VI'') $c=0$, $b\neq 0$, and $n>1$, and (VII'') $n=2$ and $a=4b$.
$|G| + \frac{|G|}{\left|\langle a\rangle\right|} + \frac{|G|}{\left|\langle b\rangle\right|} + \frac{|G|}{\left|\langle ab\rangle\right|}$
I think I got it. Check it please. If $|G|$ is odd: obvious. If $|G|$ is even: consider the embedding $f:G \rightarrow S_{|G|}$ from Cayley's theorem. Under it, $g$ maps to a product of $\frac{|G|}{\left|\langle g\rangle\right|}$ independent cycles of length $\left|\langle g\rangle\right|$. In particular, $g$ maps to an odd permutation iff $|G|$ is even and $\frac{|G|}{\left|\langle g\rangle\right|}$ is odd. There are either $2$ or $0$ odd permutations among $f(a), f(b), f(ab)$, i.e. there are zero or two odd summands in this sum (recall $|G|$ is even).
Describe the elements of a quotient field of a field? EDIT: Field not Group
For a ring $R$ ($=\mathbb{Z}_7[x]$) and an ideal $I$ ($=(x^3+5)$), the quotient $R/I$ is given by $\left\{r + I : r \in R\right\}$. The element $r + I$ is by definition the set $$r + I = \left\{r + i : i \in I\right\}.$$ Let me repeat for clarity: The elements of $R/I$ are given by sets of the form $r+I$. Now the collection of elements $\left\{r+I : r \in R\right\}$ also has two operations turning it into a ring: $$(r + I) + (s + I) = (r+s) + I \qquad (r + I) \cdot (s + I) = (r \cdot s) + I.$$ Here, on the RHS of each equality, $r+s$ and $r \cdot s$ are performed as in the ring $R$. Now let's see what this means in your example. Take the element $x^3 + I$. Now you should check that $x^3 + I$ gives the same exact set as $-5 + I$. In other words, we have $x^3 + I = -5 + I$ as sets. So these two are the same exact elements of $R/I$. This is why $x^3$ "disappears": any instance of $x^3$ can be replaced by $-5$. More intuitively, in quotienting out by $(x^3 + 5)$, you are declaring that $x^3 + 5 = 0$ and hence $x^3 = -5$. With this intuition, calculation is easy. For example: $$x \cdot (x^2 + 2x + 1) = x^3 + 2x^2 + x = 2x^2 + x - 5.$$ But it is as important to understand the definition of a quotient as it is to see it intuitively.
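If it helps, here is a small SymPy sketch of the same computation: reducing modulo $x^3+5$ with coefficients taken mod $7$ reproduces $2x^2+x-5$ (SymPy prints coefficients in its symmetric representation mod $7$, so $-5$ appears as $2$):

```python
from sympy import symbols, Poly

x = symbols('x')
# Work in Z_7[x] and reduce modulo x^3 + 5, i.e. compute in R = Z_7[x]/(x^3 + 5).
mod = Poly(x**3 + 5, x, modulus=7)
p   = Poly(x * (x**2 + 2*x + 1), x, modulus=7)

print(p.rem(mod))   # a polynomial equivalent to 2*x**2 + x + 2 (= 2x^2 + x - 5 mod 7)
```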
When is $\gcd(a+b,c)=\gcd(a,c-b)$?
Set $d= a+b$. Then the question becomes: When is $\gcd(d,c) = \gcd(d-b,c-b)$? Let $g=\gcd(d,c)$. Then $d=sg$ and $c=tg$ (with $s,t$ coprime). So there are two cases when the sought-for equality will not hold: (1) $g$ does not divide $b$ (equivalently, $b$ is not a multiple of $g$); (2) $b=rg$ such that $s-r$ and $t-r$ have a common factor, so the resulting $\gcd$ is a multiple of $g$. This is harder to test for. Sample case: $(a,b,c) = (40, 6, 16)$, where $\gcd(a+b,c)=\gcd(46,16)=2$ while $\gcd(a,c-b)=\gcd(40,10)=10$, so $\gcd(a+b,c) \ne \gcd(a,c-b)$.
Unit ball has empty interior in the weak topology
The result is true only if your space $X$ is infinite dimensional. Answer for i): Suppose $X$ is a normed linear space and $x_0$ is an interior point of $\{x: \|x\|<1\}$. By definition of weak topology we can find $N \geq 1$, $r_i >0$ and $x_i^{*} \in X^{*}$ for $i=1,2...,N$ such that $|x_i^{*}(x)-x_i^{*}(x_0)| <r_i$ for all $i$ implies $\|x\|<1$. Put $x=x_0+ny$ where $y \in \cap _i\ker x_i^{*}$ and $n$ is a positive integer. You get $\|x_0+ny\|<1$ and this is true for all $n$. Hence $y=0$. It follows from this that $(x_1^{*},x_2^{*}.,,,x_N^{*}): X \to \mathbb R^{N}$ is an injective linear map. But then $X$ is finite dimensional. Counter-example for iii) and ii): Consider $(y,x,x...)$ where $\|y\|>1$. This sequence converges weakly to $x$ but the first term of the sequence is not in the ball.
Best straight-line approximation for $\sin(2x)$ on $(-\pi,\pi)$
Another way to find out is to minimize the following integral directly with respect to $a$, $$I(a)= \int_{-\pi}^{\pi} [\sin(2x) -ax]^2 dx = \frac{\pi}{3}(2\pi^2a^2+6a+3) $$ Setting $I'(a) =0$ produces the same result as the least-squares method, i.e. $a=-3/(2\pi^2)$. Then, $$I = \int_{-\pi}^{\pi} \left[ \sin(2x) + \frac{3x}{2\pi^2}\right]^2 dx = \pi - \frac{3}{2\pi} < \pi $$ Thus, $f(x)=0x+0$ is not the best fit. —————— Edit: Keep in mind, though, the book may be using a different criterion for the optimal solution. For instance, it may assume that the best fit is for the functions to have the same average value. In this case, $f(x)=0x$ would be the answer, because $$\int_{-\pi}^{\pi} \sin(2x) dx = 0 $$
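For what it's worth, a quick numerical check (a sketch using SciPy's quad and minimize_scalar with default settings) reproduces both the optimal $a$ and the value of the integral:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# I(a) = integral of (sin(2x) - a x)^2 over [-pi, pi]
I = lambda a: quad(lambda x: (np.sin(2*x) - a*x)**2, -np.pi, np.pi)[0]

best = minimize_scalar(I)
print(best.x, -3 / (2 * np.pi**2))          # both approx -0.15198
print(I(best.x), np.pi - 3 / (2 * np.pi))   # both approx 2.6642
print(I(0.0))                                # pi approx 3.1416, so a = 0 is worse
```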
May I divide by number n in order to solve $2n = n^2$ ( even in a case where $n$ is not equal to $0$)?
A general rule of thumb for equations like these is to only increase or decrease the total order of the equation to make the algebra a little easier - it is important, though, to then arrive at the same order in which you began before giving your final solution. For examples like this it is actually not required to do any multiplying or dividing by $n$. Generally, alarm bells should ring when you begin to divide by $n$ or multiply by $n$ - this is when you should ask yourself: do I really need to do this? I'd always recommend picturing the graph. Consider where the graph $y=2x$ intersects with the graph $y=x^2$. Alternatively, rearrange the equation as follows. $$n^2 - 2n = 0 \Rightarrow n(n-2)=0$$ Then we have one of two scenarios which make the right-hand equation true, $$n=0\text{, or }n=2$$ OVERALL: Changing the order of the equation can either introduce additional solutions or remove solutions - so tread carefully! There are some occasions where it might make sense to divide by a variable, or to ignore a solution. For example, if you have a function $u(t)$ which represents the speed of a particle at a given time, the function might be 5th order and so have at most 5 real roots - but if some of these roots are for negative $t$ then you can discount them, since you have already defined time as starting at $t=0$.
Show that $ e^x \le 1+x+\frac{x^2}{2}+\frac{x^3}{3} $ when $ 0 \le x \le 1$
So, let us try without differentiation. As $e^x=\sum_{n=0}^{\infty} x^n/n!,$ we need to prove $$\sum_{n=3}^{\infty} x^n/n! \leq x^3/3,$$ or $$\sum_{k=0}^{\infty} \frac{x^k}{(k+3)!} \leq \frac{1}{3},$$ or $$\sum_{k=1}^{\infty} \frac{x^k}{(k+3)!} \leq \frac{1}{6},$$ for $0\leq x\leq 1$. It is easy to see that for $s\geq 4$ $$\frac{1}{s!}\leq \frac{1}{2(s-1)s} = \frac{1}{2(s-1)}-\frac{1}{2s},$$ so $$\sum_{k=1}^{\infty} \frac{x^k}{(k+3)!} \leq \sum_{k=1}^{\infty} \frac{1}{(k+3)!} \leq \sum_{l=3}^{\infty} \left (\frac{1}{2l}-\frac{1}{2l+2} \right ) = \frac{1}{6}, $$ q.e.d.
More detailed question about MAP hypothesis
You should review the lecture slides, specifically the one on MAP: https://people.cs.umass.edu/~mcgregor/240S16/lec20.pdf You should find it helpful.
Real root of $f(x) = 1+2x+3x^2 +4x^3$
You're off to a good start. You've proven that there's only one real root, as $f'(x)$ is always positive. The next step is to narrow down the interval containing the root. One way to do this is to apply the intermediate value theorem, and find two points where $f(x)$ has different signs. Observe that $f(0) = 1 > 0$ and $f(-1) = -2 < 0$. This means the root has to be in $(-1, 0)$. Can you narrow it down further? EDIT: I misread the question as asking for the location of the root instead of the sum of real roots. However the answer still works, as there's only one real root.
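If you want to locate the root numerically, a simple bisection sketch on $(-1,0)$ does it (the 60 iterations below are an arbitrary choice, far more than needed):

```python
def f(x):
    return 1 + 2*x + 3*x**2 + 4*x**3

# bisection on [-1, 0], where f changes sign
lo, hi = -1.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)   # approx -0.6058, the unique real root
```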
Negation of universally quantified formula
In general, we have that: $$\neg (\forall x) \phi(x) \Leftrightarrow (\exists x) \neg \phi(x)$$ Applied to your formula: $$\neg (\forall x \in \mathbb{R}) (x^2 >x) \Leftrightarrow$$ $$(\exists x \in \mathbb{R}) \neg (x^2 > x)$$ but of course the claim $\neg (x^2 > x)$ is equivalent to the claim $x^2 \le x$, and so we get: $$(\exists x \in \mathbb{R}) (x^2 \le x)$$
Determinate the furthest and nearest points on an ellipsoid from a plane
Hints: The distance from $\bf p$ to the plane ${\bf a \cdot \bf x} = 0$ is $|{\bf a \cdot \bf p}|/|{\bf a}|$, where $|{\bf a}|$ is the length of $\bf a$. A point on the intersection of the plane with the ellipsoid (if such exists) will minimize the distance. The maximum distance is obtained by minimizing or maximizing ${\bf a \cdot \bf p}$ on the ellipsoid. Thus, using a Lagrange multiplier, you take $F(x,y,z,\lambda) = \lambda (4 x^2 + \ldots + 25) + 2 x + 2 y + z$ and find its critical points.
Why is the matrix derivative of the trace of $AB$ with respect to $B$ not a constant, but $A^T$?
First, if $A$ is an $m\times n$ matrix, then $B$ has to be an $n\times m$ matrix (otherwise it doesn't make sense to talk about $tr(AB)$). Now, you can see $B\mapsto tr(AB)$ as a function $f:\mathbb{R}^{n\times m}\to \mathbb{R}$, and $\frac{d}{dB}[tr(AB)]$ will be the usual gradient of $f$. This gradient is expected to be some "vector" in $\mathbb{R}^{n\times m}$, so it is at least plausible that it equals $A^T$, which has exactly this shape. The mistake is that you are claiming that $\frac{d}{dB}[tr AB]=tr[A\frac{d}{dB}B]$; this doesn't make sense, since the one on the left is a "vector" (matrix), while the one on the right is a constant, as you mentioned.
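A finite-difference sanity check (a NumPy sketch with arbitrary small sizes and a fixed random seed) confirms that the entrywise gradient of $B\mapsto tr(AB)$ is $A^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# numerical gradient of f(B) = tr(AB) with respect to B, entry by entry
eps  = 1e-6
grad = np.zeros_like(B)
for i in range(n):
    for j in range(m):
        E = np.zeros_like(B)
        E[i, j] = eps
        grad[i, j] = (np.trace(A @ (B + E)) - np.trace(A @ B)) / eps

print(np.allclose(grad, A.T, atol=1e-4))   # True: the gradient is A^T
```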
How do I find the Jordan canonical form of this 4x4 matrix?
If you had four eigenvectors, you'd be able to diagonalize the matrix. Evidently this matrix is not diagonalizable, but of course it still has a Jordan form. You need to find the "generalized eigenspaces", i.e. the kernels of not just the $(A-\lambda I)$, but also $(A-\lambda I)^n$ for all $n \geq 1$. Since you're only missing one generalized eigenvector here, you'll only need to look at $n=2$ to find your $v_4$.
Understanding eigenvectors and eigenvalues
$\lambda_0$ is a simple eigenvalue, which means that its algebraic multiplicity is equal to its geometric multiplicity, which is equal to one. Let $\mathcal M$ be the representative matrix of the linear map $f$, with $\mathcal M \in \operatorname{M}(N\times N,\mathbb K)$. You know that an eigenvector $\underline v$ associated to an eigenvalue $\bar \lambda$ belongs to $Ker(f-\bar \lambda\, id)$, and the geometric multiplicity also gives us its dimension; equivalently, $\underline v$ is a solution of the homogeneous system $\big(\mathcal M-\bar \lambda I_N\big)\underline x=\underline 0$. So, in your case, $1=\dim(Ker(f-\lambda_0\, id))=N-\text{rk}(\mathcal M-\lambda_0 I_N)$, which means that $\text{rk}(\mathcal M-\lambda_0 I_N)$ is equal to $N-1$, which is also the number of linearly independent equations of the system $\big(\mathcal M-\lambda_0I_N\big)\underline x=\underline 0$.
$3\times3$ matrix inversion proof
Sure. If we solve that system of $9$ linear equations, we'll get a formula for the inverse of a $3\times3$ matrix. In fact, what we'll get is that the inverse of $A$ is $\frac1{\det A}$ times this matrix.
Fraction manipulation and binomial coefficients
$${{m+1}\choose 3 }= \frac{(m+1)!}{3! (m-2)!}=\frac{(m+1)(m)(m-1) (m-2)!}{3!(m-2)!} =\frac{m(m-1)(m+1)}{6}$$ Second to third equality: By definition , $$(m+1)! = (m+1)(m)(m-1)(m-2)\dots 2\cdot 1 \\ = (m+1)(m)(m-1) \big[ (m-2)(m-3) \dots 2\cdot 1\big] = (m+1)(m)(m-1) (m-2)!$$ Third to fourth equality: I cancelled out the $(m-2)!$ from the numerator and denominator. This leaves $$\frac{(m+1)(m)(m-1)}{3!} $$ Now, $3!=6$.
Help understanding how to factor completely $x^3-x^2-x+1$
$$x^3 - x^2 - x + 1 \to x^2(x - 1) + (-1)(x-1)$$ $$\to (x-1)(x^2 - 1) \to (x-1)^2(x+1)$$
How to find the total number of pages which a book has when the clues given indicate a range?
Compute the fractions of the book read per day: On day 1, $\frac13$ of the novel was read, leaving $\frac23$. On day 2, $\frac23×\frac14=\frac16$ was read, leaving $\frac23×\frac34=\frac12$. On day 3, $\frac12×\frac12=\frac14$ was read, the same fraction being left. On day 4, $\frac14×\frac15=\frac1{20}$ was read, leaving $\frac14×\frac45=\frac15$ that was finished off on day 5. Letting $x$ be the number of pages in the book, because at most 70 pages were left on day 5 we have $\frac15x<70$ or $x<350$. Because at least 14 pages were read per day, including day 4, we have $\frac1{20}x>14$ or $x>280$. Only option 4 satisfies both inequalities, so the novel had 300 pages.
k-edge-connectivity of a graph
We need to show that $k$ edges are insufficient to disconnect the graph. Suppose for the sake of contradiction that some $k$ edges disconnected the graph. Since all the vertices are part of the $K_{k,k}$ subgraph, it is necessary to disconnect $K_{k,k}$. But $K_{k,k}$ has edge connectivity $k$, and so it follows that all $k$ edges used to disconnect the graph are used on the $K_{k,k}$ subgraph. But this means that the green edges are untouched, and since there exists at least one edge remaining in $K_{k,k}$ which joins the $2$ green circles, it follows that the graph is connected. This is contrary to the assumption that the $k$ edges disconnect the graph. Therefore the graph has edge connectivity at least $k+1$. In fact, we can easily extend the above argument to show that the graph is $(k+2)$-edge-connected. This is the best possible since your graph is $(k+2)$-regular.
How you call the relationship between variables $X$ and $Y$ if $X=1-Y$
In probability, the event $A^c$ (with probability $1 - P(A)$) would usually be called the complement of A. In set theory, we have the complement of the set $A$, $A^c$, which is equal to $\Omega - A$ if $\Omega$ is the ‘universe’ set. We also have the notion of the radix complement of an $n$ digit number $x$ in base $b$, which is $b^n - x$. I think calling $X$ the complement of $Y$ is a good choice here. Another view: the points satisfying the equation $X+Y=1$ form a line in the plane, so perhaps the word linear is also an option.
Solving the initial value problem for PDE
The general solution of the PDE is $$u=F(\sqrt{x^2+y^2})\,e^{\arctan\frac{y}{x}}.$$ The solution of the initial value problem is then $$u=h(\sqrt{x^2+y^2})\,e^{\arctan\frac{y}{x}}.$$ In polar coordinates $$x=r\cos\phi,\quad y=r\sin\phi,$$ the PDE is $$\frac{\partial u(r,\phi)}{\partial\phi}=u(r,\phi)$$ with solution $$u=F(r)e^\phi.$$ Substituting $r=\sqrt{x^2+y^2}$, $\phi=\arctan\frac{y}{x}$, we get the general solution.
How can I find the limit of a sequence containing $\sin n!$?
You don't have to worry about the factorial at all, since $|\sin(n!)|\le1$ just like $|\sin n|$. So $$\left|n^{2/3}\sin(n!)\over n+1 \right|\le\left|n^{2/3}\over n+1 \right|\le\left| n^{2/3}\over n\right|={1\over n^{1/3}}\to0$$
how to find the maximum of the cross-entropy of a discrete random variable?
$f(x)=x\ln x$ is convex. By Jensen's inequality, $\displaystyle\sum_{k=1}^np_k\ln p_k=n\sum_{k=1}^n\frac{1}{n}f(p_k)\geq nf\left(\frac{1}{n}\sum_{k=1}^np_k\right) =nf\left(\frac{1}{n}\right)=-\ln n$, since $\sum_{k=1}^np_k=1$.
Proof that β-function ∈ C^∞
You have more than enough to differentiate through the integral sign (the Leibniz rule). For example, thinking of $x,y>0,$ we have $$\frac{d}{dx}\int_0^1t^{x-1}(1-t)^{y-1}\,dt = \int_0^1(\ln t)\,t^{x-1}(1-t)^{y-1}\,dt.$$ You can keep going, piling up factors like $(\ln t)^m (\ln (1-t))^n$ in the integrand. None of these factors will destroy integrability.
Lecture notes on Ergodic Theory.
C. E. Silva's Invitation to Ergodic Theory, published by the AMS. It is intended as an introduction to the subject at the undergraduate level and develops the required measure theory within the text itself.
How to use conditional expectation to find another expectation
Using the tower rule and the fact that $E[YX\mid Y]=YE[X\mid Y]$: $$\begin{align}E[XY] &= E[E[XY\mid Y]] \\[1ex] &= E[Y\,E[X\mid Y]] \\[1ex] &= E\left[Y\left(Y+\tfrac12\right)\right] \\[1ex] &= E\left[Y^2\right]+\tfrac12E[Y]\\[1ex] &= E\left[Y^2\right]+\tfrac14\end{align}$$ If $E\left[Y^2\right]=\tfrac12$ then, indeed, $E[XY]=\tfrac34.$
Need help with this limit that wasn't explained well.
I think that we have $|x| \le 1.$ For $m \in \mathbb N$ consider the sequence $(a_m)=(\frac{x^m}{m})$ Then $|a_m| \le \frac{1}{m}$ . Hence $a_m \to 0$ as $m \to \infty.$ The sequence $(\frac{x^{6n+6}}{6n+6})$ is a subsequence of $(a_m)$.
Cyclotomic Polynomials and GCD
The resultant $R$ of two polynomials $f,g$ has the property that there exist other polynomials $p,q$ such that $p(x)f(x)+q(x)g(x) = R$ identically. (Originally I had stated that $|R|$ is the least such positive integer, but this seems to be incorrect; see the comments.) Therefore your question 1 is related to calculating the resultant of distinct cyclotomic polynomials $\phi_n, \phi_m$. Experimentally, the answer seems to be $1$ unless $m$ divides $n$ (or vice versa), in which case it seems to be a power of $n/m$. Just eyeballing some data, it seems the answer is $\exp(\phi(m)\Lambda(n/m))$, where $\Lambda$ is the von Mangoldt function.
Why the characteristic function is measurable?
$\mathcal{X}_E^{-1}(\{0\}) = E^C$, which is perfectly measurable. Take any $A \in \mathcal{B}(\mathbb{R})$: $\mathcal{X}_E^{-1}(A) =\begin{cases} X, & 0,1 \in A \\ E, & 1 \in A, 0 \notin A \\ E^C, & 1 \notin A, 0 \in A \\ \emptyset, & o.w. \end{cases}$ all those sets are measurable, since $E$ is measurable.
When is $p_1 \times p_2: V\rightarrow V/U_1 \oplus V/U_2, v\longmapsto (p_1(v),p_2(v))$ surjective?
Elements of $V/U_1 \oplus V/U_2$ are of the form $(v_1+U_1, v_2+U_2)$ for some $v_1,v_2 \in V$. Note that you made an error by using the same $v$ on both sides, which would be the image of $p_1 \times p_2$ which a priori is contained in $V/U_1 \oplus V/U_2$. But this problem concerns the setting where the image is all of $V/U_1 \oplus V/U_2$ (surjectivity). To summarize, the image of $p_1 \times p_2$ is $$\{(v+U_1,v+U_2):v \in V\}$$ while the codomain is $$V/U_1 \oplus V/U_2 := \{(v_1+U_1,v_2+U_2):v_1,v_2 \in V\}.$$ Suppose $U_1+U_2=V$. Then for any $(v_1+U_1, v_2 + U_2) \in V/U_1 \oplus V/U_2$, we claim there exists $v$ such that $(p_1 \times p_2)(v) := (v+U_1,v+U_2) = (v_1+U_1,v_2+U_2)$ so $p_1 \times p_2$ is surjective. To prove this claim, write $v_1-v_2 = u_1+u_2$ where $u_1 \in U_1$ and $u_2 \in U_2$ using the condition $U_1+U_2=V$. Then $v_1-u_1=v_2+u_2$; let this quantity be $v$. For the converse, suppose $p_1 \times p_2$ is surjective. For any $v$, we can consider $(v+U_1,0+U_2) \in V/U_1 \oplus V/U_2$. Surjectivity implies there exists some $v'$ such that $(p_1 \times p_2)(v')=(v'+U_1,v'+U_2)=(v+U_1, 0+U_2)$. Equating each component gives $v-v' \in U_1$ and $v' \in U_2$, so $v=(v-v')+v'$ shows $v \in U_1+U_2$. For the other problem, try $\{0\} \subsetneq U_1 \subsetneq U_2 \subsetneq \mathbb{R}^3$. Then $U_1+U_2=U_2 \ne \mathbb{R}^3$ so it is not surjective. Also, if you choose $v,w \in U_1$, then both map to $(0+U_1,0+U_2)$ so it is not injective.
Is enumeration of spanning trees equivalent to probabilistic graph connectivity?
The relationship does not hold in general. Let $G$ be a triangle. We have $$ p(G) = 3p^2(1-p) + p^3 $$ which is not equal to $3p^2$. This is because the events that the individual spanning trees remain intact are not disjoint, so their probabilities cannot simply be added.
Clique in random graphs
The answer can be found in the following book, in the section about the largest independent set. https://www.math.cmu.edu/~af1p/BOOK.pdf
Counting of the elements in a set
A stars and bars argument: there are $n$ balls (stars), and we place $nG-1$ bars in the $n-1$ gaps between them, so that each part (the size of a group) is non-empty. Hence, there are $\binom{n-1}{nG-1}$ different sets of groups.
$a_n$ and its alternating series $(-1)^n a_n$
For the first, take $ a_n=\frac 1n$. $$\sum (-1)^na_n \text{ converges but}$$ $$\sum a_n \text{ diverges }$$ For the second, take $ b_n=\frac{(-1)^n}{n}$. $$\sum b_n \text{ converges but}$$ $$\sum (-1)^nb_n \text{ diverges }$$
Does $30$ always divide $n^5-n$ where $n \in \mathbb{Z^+}$?
For a proof, try factoring: $$ n^5 - n = n(n^4 - 1) = n(n^2 + 1) (n-1) (n + 1) = (n-1)(n)(n+1) (n^2 + 1) $$ The product of three successive integers is always divisible by 6, so all you need to show is that either one of the three integers is divisible by 5, or that $n^2 + 1$ is divisible by 5. If none are divisible by $5$, then the middle one must be of the form $n = 5k+2$ or $n = 5k+3$. Take the first case and look at $$ n^2 + 1 = (5k+2)^2 + 1 = 25k^2 + 20k + 4 + 1 = 25k^2 + 20k + 5 = 5(5k^2 + 4k + 1). $$ That's clearly divisible by 5. Now work out something similar for the last case, and you're done.
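Since $n^5-n \bmod 30$ only depends on $n \bmod 30$, the claim can also be confirmed by a one-line check over a single period (a trivial Python sketch):

```python
# brute-force check over one period mod 30: n^5 - n is divisible by 30
print(all((n**5 - n) % 30 == 0 for n in range(30)))   # True
```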
$\prod_{i: \text{prime}}\left(1-\frac{1}{i^2}\right) = \frac{6}{{\pi}^2}$
Refer to Euler's product $$\prod_{p} (1-p^{-s})^{-1} = \prod_{p} \Big(\sum_{n=0}^{\infty}p^{-ns}\Big) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} = \zeta(s)$$ and to Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem).
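A quick numerical check of the $s=2$ case (a sketch using SymPy's primerange; the cutoff $10^5$ is an arbitrary truncation of the infinite product):

```python
from math import pi
from sympy import primerange

prod = 1.0
for p in primerange(2, 100_000):   # truncate the product over primes
    prod *= 1 - 1 / p**2

print(prod, 6 / pi**2)   # both approx 0.60793
```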
Optimal Control: How to find the cost function from the given dynamics?
The cost function $J$ is given with the problem, it is the objective function you are trying to minimize. The constraint on $u$ is that the BVP given has a solution. Integrating, we get that $x(t) = x(0) + \int_0^t su(s)ds = \int_0^t su(s)ds$, so $x(1) = \int_0^1 tu(t)dt = 1$. This restricts the set of functions $u(t)$ that we have to choose from to minimize the objective $J$. In the unconstrained case, it's clear that the minimum of $J$ is $0$, achieved when $u(t)\equiv 0$. However, since this doesn't satisfy the constraint ($\int_0^1 t\cdot 0 dt = 0 \neq 1$), it is not a solution.
How does Vieta work with cubics, quartics, and equations with degree greater than $2$?
In short, there are Vieta formulas for every degree. We can prove them now, very quickly. Firstly, note that there's no reason for $a$ to not be $1$, as we just divide everything else by it. So let's look at the cubic $$ x^3 + bx^2 + cx + d.$$ We know this has three roots. Let's call them $r_1, r_2, r_3$. Then we also know that the cubic can be written as $$ (x-r_1)(x-r_2)(x-r_3) = x^3 - (r_1 + r_2 + r_3)x^2 + (r_1r_2 + r_1r_3 + r_2r_3)x - r_1r_2r_3.$$ Comparing these two gives Vieta's formulas; for instance, the sum of the roots is $-b$, the sum of products of pairs of roots is $c$, and the product of the roots is $-d$, the negative of the constant term.
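A quick numerical illustration (a NumPy sketch using the cubic $(x-1)(x-2)(x-3)=x^3-6x^2+11x-6$ as an arbitrary example):

```python
import numpy as np

b, c, d = -6.0, 11.0, -6.0          # coefficients of x^3 + b x^2 + c x + d
r = np.roots([1.0, b, c, d])        # roots 1, 2, 3

print(np.sum(r), -b)                                # sum of roots = -b
print(r[0]*r[1] + r[0]*r[2] + r[1]*r[2], c)         # sum of pair products = c
print(np.prod(r), -d)                               # product of roots = -d
```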
Which way to calculate expectation is correct?
Option 2 is double counting and wrong. Meanwhile $\displaystyle \frac{1}{F(K)}\int_0^{K}\!c\, \mathrm{d} F(c)$ would be the conditional expectation of the amount you have to pay given that $c \le K$, i.e. given that you have to pay anything.
Derivative of linear map?
$$ f(X+\epsilon H) = (X+\epsilon H)^T(X+\epsilon H) = \underbrace{X^T X}_{f(X)} + \underbrace{(X^TH+H^T X)}_{f'(X)\cdot H}\epsilon + \mathcal O(\epsilon^2) $$
Prove that rhombus diagonals are perpendicular using scalar product
If you let the vector AB be $x$ and AD be $y$, and point A be the origin, then we know the diagonal AC is $x+y$ and the diagonal DB is $x-y$. Now the dot product between AC and DB is $$(x+y)\cdot (x-y)$$ $$=x\cdot x-x\cdot y+y\cdot x-y\cdot y$$ $$=|x|^2-|y|^2=0$$ since the dot product is symmetric and $|AB|=|AD|$ (all sides of a rhombus have equal length). Thus the diagonals AC and DB are perpendicular.
How to find conditions for positive semidefinite matrix?
How does $x_1 = 0$ imply that $x_2 = x_3 = 0$? Recall that a positive semidefinite matrix must have positive semidefinite principal submatrices. If $x_1 = 0$ but either $x_2$ or $x_3$ is non-zero, then there is a $2 \times 2$ principal submatrix that has a negative determinant. The final condition has a bunch of extra inequalities and that's weird. Sylvester's criterion only applies to positive definite matrices. In order to extend the criterion to positive semidefinite matrices, we have to consider the determinant of every principal submatrix. For example, the diagonal matrix $$ \pmatrix{1\\&0\\&&-1} $$ has non-negative leading principal minors but fails to be positive semidefinite.
proving differentiability for a function in a point
The hypothesis that $\lim_{x\to a}f\,'(x)=c$ immediately implies that if $\langle\xi_n:n\in\Bbb N\rangle$ is any sequence in $D$ converging to $a$, then $\lim_{n\to\infty}f\,'(\xi_n)=c$. More generally, if $g$ is any function, and $\lim_{x\to a}g(x)=c$, then $\lim_{n\to\infty}g(x_n)=c$ for every sequence $\langle x_n:n\in\Bbb N\rangle$ converging to $a$. If you’ve not seen a proof of this, you should try proving it. The same thing is happening on the other side, only now the function $g$ is not $f\,'$, but rather $$g(x)=\frac{f(x)-f(a)}{x-a}\;.$$
Blow Up: Resolution of Singularity
Maybe I just need to extend what I know for $\mathbb{CP}^2$ and $\mathbb{C}^2$. Please do let me know if this approach is correct (or incorrect). For the base-point $[x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8]=[0,0,0,0,0,0,1,0]\in\mathbb{CP}^7$, we consider the chart $x_7=1$. Then, let $[x_1,x_2,x_3,x_4,x_5,x_6,x_8]=[y_{10},y_{20},y_{30},y_{40},y_{50},y_{60},y_{70}]\in\mathbb{C}^7$. The base-point in $\mathbb{C^7}$ is $[0,0,0,0,0,0,0]$. In chart 1 of the first blow-up, the transformations are: $y_{10}=y_{11}$, $y_{20}=y_{11}y_{21}$, $y_{30}=y_{11}y_{31}$, $y_{40}=y_{11}y_{41}$, $y_{50}=y_{11}y_{51}$, $y_{60}=y_{11}y_{61}$ and $y_{70}=y_{11}y_{71}$. That is, $$y_{11}=y_{10}$$ and $$y_{i1}=\frac{y_{i0}}{y_{10}},\quad\quad i=2,3,\ldots,7.$$ The exceptional divisor in this chart is defined by $y_{11}=0$. We apply a similar procedure for the remaining 6 charts. Thanks, Radz.
$g(z)=z^2~$ What will $~g(y ̅_n )~$ converge to in probability?
I assume your $\overline y_n$ is given by $\overline y_n=(Y_1+\cdots+Y_n)/n$. By the continuous mapping theorem (which can be considered Slutsky's theorem's big sister), if $\overline y_n$ converges in probability to a constant $c$, then $g(\overline y_n)$ converges in probability to $c^2$. But without further info about the distribution of the $Y_i$ one does not know if such a $c$ exists. If $E|Y_i|<\infty$, for instance, $c$ exists and is given by $c=EY_1$. But if the $Y_i$ are Cauchy distributed (so $E|Y_i|=\infty$) no such $c$ exists.
Quantum wave packet propagation, how to use it in FFT?
So, given the dimensionless Hamiltonian $$ \hat{\cal{H}} = \frac{1}{2}\hat{p}^2 + V(\hat{x}), $$ the wave function evolves as $$ \vert\Psi(t+dt)\rangle = e^{-i\hat{\cal{H}}dt}\vert\Psi(t)\rangle = e^{-\frac{1}{2}i\hat{p}^2 dt}e^{-iV(\hat{x})dt}e^{O(dt^2)}\vert\Psi(t)\rangle. $$ The idea is to apply the position-space part of the evolution operator ($e^{-iV(\hat{x})dt}$) to the position-space wave function $\Psi(x, t)$, where it is just a multiplication by $e^{-iV(x)dt}$, and then to apply the momentum-space part of the evolution operator ($e^{-\frac{1}{2}i\hat{p}^2 dt}$) to the momentum-space wave function $\tilde{\Psi}(k, t+dt)$, where it is just a multiplication by $e^{-\frac{1}{2}ik^2 dt}$. To evolve the wave function in this approximation (errors are introduced because terms of order $dt^2$ arising from the noncommutativity of $\hat{x}$ and $\hat{p}$ are being omitted), you would typically choose a small value for $dt$ and a discretization for $x$ and $k$, and then repeatedly apply the four operations shown in your expression above: multiply by the potential energy term, apply the FFT to convert to momentum space, multiply by the kinetic energy term, and apply the inverse FFT to convert back to position space.
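Here is a minimal sketch of that loop in Python/NumPy. Everything specific in it (the harmonic potential, the Gaussian initial packet, the grid size, and the time step) is an illustrative assumption, not part of the answer above:

```python
import numpy as np

# split-step (split-operator) propagation sketch
N, L, dt = 1024, 40.0, 0.001
x  = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L / N
k  = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # momentum grid matching np.fft

V   = 0.5 * x**2                                # example potential (harmonic)
psi = np.exp(-(x - 1.0)**2).astype(complex)     # example initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize

for _ in range(1000):
    psi  = np.exp(-1j * V * dt) * psi           # position-space factor e^{-i V dt}
    phik = np.fft.fft(psi)                      # to momentum space
    phik *= np.exp(-0.5j * k**2 * dt)           # kinetic factor e^{-i k^2 dt / 2}
    psi  = np.fft.ifft(phik)                    # back to position space

print(np.sum(np.abs(psi)**2) * dx)              # norm stays approx 1 (phase factors are unitary)
```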
Characterization of piecewise continuous functions
In most settings, when we say $f$ is piecewise continuous, we mean we need to be able to write the domain of $f$ as a finite disjoint union of intervals such that $f$ is continuous on each of these intervals. If I define, for example, $$f(x) = \frac{1}{n^2+1}\text{ for } n\leq x < n+1$$ this clearly satisfies (a), as we can divide any bounded interval into finitely many intervals on which $f$ is constant. But on $[0,\infty)$, we cannot obtain a finite number of intervals on which $f$ is continuous. If it were possible, one of these intervals would have to contain $(a,\infty)$ for some $a$, but $$f:(a,\infty)\to\mathbb{R}$$ is not continuous on this interval.
N toys distribution among N children.
According to JMoravitz in this comment: Recognize that any permutation can be written as the product of (or equivalently the composition of) transpositions of adjacent elements and further recognize that each horizontal line represents just such a transposition.
$\tau:=\{Y\in P(X) | A\subseteq Y\}\cup\{\emptyset\}$ topological space
Any open set is either empty or it contains the set $A$. The union, and intersection, of any collection of sets that contain $A$, will also contain $A$. Formally, let me write the union parts, and you can do the intersection: Let $\{U_i\mid i\in I\}$ be a collection of open sets, and let $U=\bigcup_{i\in I}U_i$. We want to show that $U$ is open, that is either $U=\varnothing$ or $A\subseteq U$. If for some $i$, $U_i\neq\varnothing$ then $A\subseteq U_i$, and therefore $A\subseteq U_i\subseteq U$, so $A\subseteq U$ and so $U$ is open; otherwise $U_i=\varnothing$ for all $i\in I$ and so $U=\varnothing$ as well. For the intersection part you may want to chase down the elements from $A$; also remember it is enough to show that the intersection of two open sets is open in order to conclude the general finite case.
Definition completely reducible group representation
Completely reducible is usually called semisimple, which is to say that it can be written as a direct sum of simples. What is a simple module $V$? One that has only the trivial module $\{0\}$ and $V$ itself as submodules. (It has no proper, non-trivial submodules.) This is analogous to how we exclude the integer 1 from the list of prime numbers, so that we don't write decompositions like $30 = 5 \times 3 \times 2 \times 1 \times 1 \times \cdots \times 1$.
GCD Proof with Multiplication: gcd(ax,bx) = x$\cdot$gcd(a,b)
Below are $3$ proofs of the gcd distributive law $\rm\:(ax,bx) = (a,b)x\:$ using Bezout's identity, universal gcd laws, and unique factorization. First we show that the gcd distributive law follows immediately from the fact that, by Bezout, the gcd may be specified by linear equations. Distributivity follows because such linear equations are preserved by scalings. Namely, for naturals $\rm\:a,b,c,x \ne 0$ $\rm\qquad\qquad \phantom{ \iff }\ \ \ \:\! c = (a,b) $ $\rm\qquad\qquad \iff\ \: c\:\ |\ \:a,\:b\ \ \ \ \ \ \&\ \ \ \ c\ =\ na\: +\: kb,\ \ \ $ some $\rm\:n,k\in \mathbb Z$ $\rm\qquad\qquad \iff\ cx\ |\ ax,bx\ \ \ \&\ \ \ cx = nax + kbx,\,\ \ $ some $\rm\:n,k\in \mathbb Z$ $\rm\qquad\qquad { \iff }\ \ cx = (ax,bx) $ The reader familiar with ideals will note that these equivalences are captured more concisely in the distributive law for ideal multiplication $\rm\:(a,b)(x) = (ax,bx),\:$ when interpreted in a PID or Bezout domain, where the ideal $\rm\:(a,b) = (c)\iff c = gcd(a,b)$ Alternatively, more generally, in any integral domain $\rm\:D\:$ we have Theorem $\rm\ \ (a,b)\ =\ (ax,bx)/x\ \ $ if $\rm\ (ax,bx)\ $ exists in $\rm\:D.$ Proof $\rm\quad\: c\ |\ a,b \iff cx\ |\ ax,bx \iff cx\ |\ (ax,bx) \iff c\ |\ (ax,bx)/x\ \ \ $ QED The above proof uses the universal definitions of GCD, LCM, which often served to simplify proofs, e.g. see this proof of the GCD * LCM law. Alternatively, comparing powers of primes in unique factorizations, it reduces to the following $$\begin{eqnarray} \min(a+x,\,b+x) &\,=\,& \min(a,b) + x\\ \rm expt\ analog\ of\ \ \ \gcd(a \,* x,\,b \,* x)&=&\rm \gcd(a,b)\,*x\end{eqnarray}\qquad\qquad\ \ $$ The proof is precisely the same as the prior proof, replacing gcd by min, and divides by $\,\le,\,$ and $$\begin{eqnarray} {\rm employing}\quad\ c\le a,b&\iff& c\le \min(a,b)\\ \rm the\ analog\ of\quad\ c\ \, |\, \ a,b&\iff&\rm c\ \,|\,\ \gcd(a,b) \end{eqnarray}$$ $\ c \le a,b \!\iff\! c\!+\!x \le a\!+\!x,b\!+\!x\!\iff\! c\!+\!x \le \lfloor a\!+\!x,b\!+\!x\rfloor\!\iff\! c \le \lfloor a\!+\!x,b\!+\!x\rfloor \!-\!x$ where $\,\lfloor y,z\rfloor := \min(y,z).$
How to find the proper $\alpha$ to satisfy the 80/20 rule for the Paretor Distribution
You need to use the Lorenz curve. From the Wikipedia article on the Pareto distribution, we have $$1-(1-0.8)^{1-{1 \over \alpha}}= 0.2.$$ If you now solve this for $\alpha,$ you will find the $1.16$ you mentioned.
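Numerically (a small Python sketch; the closed form follows from taking logarithms of the equation above):

```python
from math import log

# solve 1 - (1 - 0.8)**(1 - 1/alpha) = 0.2 for alpha
alpha = 1 / (1 - log(0.8) / log(0.2))
print(alpha)                                  # approx 1.161
print(1 - (1 - 0.8) ** (1 - 1 / alpha))       # approx 0.2, as required
```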
How the dot product of two vectors can be zero?
$$\vec s\cdot\vec r=(2\hat i+\hat j-3\hat k)\cdot(4\hat i+\hat j+3\hat k)=8+1-9=0,$$ which means $\vec s$ and $\vec r$ are perpendicular to each other. The intuition behind this dot product is: what amount of $\vec s$ is acting along $\vec r$? If we got some positive value, that would mean there is some component of $\vec s$ along $\vec r$, i.e. $\vec s$ is inclined towards $\vec r$. But here we get zero, which means no component of $\vec s$ acts along $\vec r$; that is only possible when the vectors are orthogonal.
Find sum of possible pairs for given LCM and GCD
First, you can solve the case when $B=1$. The general case $A,B$ can then be solved by solving for $A_1=A/B, B_1=1$ and multiplying the answer by $B$. So, given $A$ and $B=1$, prime factorize $A=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$. Then a pair $m,n$ with $\gcd(m,n)=1$ and $\mathrm{lcm}(m,n)=A$ is just a partition of the set of primes in $A$. It turns out that the sum is: $$\left(1+p_1^{a_1}\right)\left(1+p_2^{a_2}\right)\cdots\left(1+p_k^{a_k}\right)$$ The case $A=B=1$ is a special case, since it is possible for $m=n$ then, so you get $2$. For example, in the case $A=24, B=1$, $A=2^3\cdot 3$ so the total is $(1+8)(1+3)=36$. And the case $A=72,B=3$ is therefore $3\cdot 36=108$. First, let's just take the case $A=p^k$ for some prime $p$. Then there is only one pair, $m=1,n=p^k$, and the total is $1+p^k$. Similarly, if $A=p_1^ap_2^b$, with $p_1^a<p_2^b$, the only pairs are $(m,n)=(1,A)$ or $(m,n)=(p_1^a,p_2^b)$, so the sum is $1+p_1^a+p_2^b+p_1^ap_2^b = (1+p_1^a)(1+p_2^b)$. In general, for $A>1$, the sum is the sum of all divisors $d$ of $A$ such that $\gcd(d,A/d)=1$. The trick is to realize that your term $m+n$ is really two separate values of $d$. It's not clear how efficient this algorithm is - it depends on prime factorization, which is relatively hard in general.
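A small Python sketch of the resulting algorithm, checked against brute force on the example values above ($A=72,B=3$ and $A=24,B=1$); the helper names pair_sum and brute are just illustrative:

```python
from math import gcd
from sympy import factorint

def pair_sum(A, B):
    # sum of m+n over unordered pairs with gcd(m,n) = B and lcm(m,n) = A
    if A % B:
        return 0
    A1 = A // B
    if A1 == 1:
        return 2 * B                      # special case: the pair (B, B)
    total = 1
    for p, e in factorint(A1).items():    # product of (1 + p^e) over prime powers
        total *= 1 + p**e
    return B * total

def brute(A, B):
    # direct check: lcm = A and gcd = B is equivalent to gcd = B and m*n = A*B
    return sum(m + n for m in range(1, A + 1) for n in range(m, A + 1)
               if gcd(m, n) == B and m * n == A * B)

print(pair_sum(72, 3), brute(72, 3))      # 108 108
print(pair_sum(24, 1), brute(24, 1))      # 36 36
```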
Counterexample: For real functions existence of all higher order derivatives doesn't imply analycity.
Actually you define $f(x):=e^{-\frac{1}{x^2}}$ for all $x\neq 0$ and $f(0):=0$. Then you check continuity at $x=0$ and go on with the proof.
Interval $[0,1]$ is neither compact nor connected in the Sorgenfrey line.
Your proof that $[0,1]$ is not connected in the Sorgenfrey topology is fine; your argument that it is not compact, however, is not correct. The open cover $\mathfrak{A}$ has the finite subcover $\{[0,2]\}$; indeed, any single member of $\mathfrak{A}$ covers $[0,1]$. However, the open cover $$\left\{\left[0,1-\frac1n\right):n\ge 2\right\}\cup\{[1,2)\}$$ works: any subcover of it must include the set $[1,2)$, since that’s the only one containing $1$, and it must contain enough of the intervals $\left[0,1-\frac1n\right)$ to cover $[0,1)$. Clearly, however, no finite collection of these intervals is enough, since the union of any finite collection of them is equal to the largest interval in that finite collection. To prove that $0$ has no compact nbhd, let $U$ be any nbhd of $0$. Then there is an $a>0$ such that $[0,a)\subseteq U$. Use the idea above to find an open cover of $[0,a)$ with no finite subcover, and add to it the set $U\setminus[0,a)$ to get an open cover of $U$.
On those integers $n>1$ such that there exists a commutative ring with identity with exactly $n$ ideals
Take $R = \mathbb{F}_2[X]/(X^n)$. Then $R$ is finite and has exactly $n+1$ ideals. Indeed, ideals of $R$ are in canonical bijection with ideals of $\mathbb{F}_2[X]$ containing $X^n$, i.e. with polynomials dividing $X^n$: these are the $X^k$ for $0\leqslant k\leqslant n$.
7-Dimensional Curvature and Curl
There is a brief discussion of the 7-dimensional curl, and of electromagnetism, in The Mathematical Heritage of C. F. Gauss by George M. Rassias: https://books.google.com/books?id=9RyV75spbW0C&pg=PA131&lpg=PA131&dq=%227+dimensional+curl%22&source=bl&ots=fALlzY9s-A&sig=fCE0jNTvCM1v0Q7YLBkoIxxqxkM&hl=en&sa=X&ved=2ahUKEwjNnJzC7rreAhUJ5IMKHSM0BkUQ6AEwAHoECAAQAQ#v=onepage&q=%227%20dimensional%20curl%22&f=false
Estimates on standard normal distribution $P(\vert X\vert\le x)\le x$
For any $x\in \mathbb R^+$, you have \begin{align*} \int_{-x}^x \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right) dy&\leq \int_{-x}^x\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{0^2}{2}\right) dy\\ &\leq\int_{-x}^x\frac{1}{2}\,dy\\ &=x \end{align*} The first line is due to the fact that $\exp\left(-\frac{y^2}{2}\right)$ attains its maximum at $0$; the second line holds because $\pi > 2$, so $\frac{1}{\sqrt{2\pi}}\le\frac12$. As a side note, the second inequality is strict except when $x=0$.
Number Trichotomy and Constructive Math
You are mixing together different notions of decidability. The relevant notion of decidability here is defined as follows: $P$ is decidable iff $P \lor \neg P$ holds. The Continuum Hypothesis, along with every other proposition, is decidable in this sense classically. Of course, for $\text{CH}\lor\neg\text{CH}$, we know that it holds, but we don't know which case holds. The constructive interpretation of $\lor$, however, requires us to actually know which case it is. (Extensional) equality of two given sequences of bits, i.e. functions $\mathbb{N}\to\mathbf{2}$, is a $\Pi_1$ statement as is the Goldbach conjecture. This means if either is false, we can algorithmically find a counter-example in finite time. Note what's happening in the Goldbach conjecture case. We aren't saying "if Goldbach conjecture is true then $x$ else $y$", we are making a statement that given the constructive decidability of equality on reals (or, technically, functions $\mathbb{N}\to\mathbb{2}$ from which we can make real numbers) we get constructive decidability of the Goldbach conjecture as a special case. This sort of example is called a weak counterexample. The idea isn't so much "ha, we don't know if Goldbach's conjecture is true, so we can't decide equality of real numbers". Instead, it's that decidability of real number equality "solves" the problem for free. The point is that decidability of real number equality "solves" all $\Pi_1$ problems for free; we can stick any $\Pi_1$ problem in for Goldbach's conjecture instead. We can formulate this more definitively. Let $T$ be a Turing machine and define $a(T)_n$ as $1$ if $T$ is in a halting state after $n$ steps, and $0$ otherwise. The function from (encodings of descriptions of) Turing machines to sequences of bits is completely constructively/algorithmically definable. (You can make a Turing machine to do it.) We can make a real number via $r_T = \sum_{n=0}^\infty a(T)_n/2^n$. This mapping is also constructively definable given a suitably constructive notion of reals. (Alternatively, we could just talk about equality of $\mathbb{N}\to\mathbf{2}$ functions.) Decidability of $r_T = 0$ is decidability of whether $T$ halts. Decidability of equality for the reals generally implies decidability of the halting problem. Since we can make a Turing machine that enumerates all the theorems of ZFC and halts when it finds a particular one, decidability of real number equality can now show for any statement in the formal theory of ZFC whether it's provable, refutable, or independent. You will not be surprised to hear that this is related to omniscience principles.
Proving $e^{AT} = e^{\lambda t} \sum_{k=0}^{n-1}\frac{t^{k}}{k!}(A - \lambda I)^{k}$ -final step
By direct calculation, we see that \begin{align} e^{tA} = \sum^\infty_{k=0} \frac{t^kA^k}{k!} = \sum^\infty_{k=0} \frac{t^k (TJT^{-1})^k}{k!} = \sum^\infty_{k=0} \frac{t^k TJ^k T^{-1}}{k!}. \end{align} As you said, \begin{align} J = \lambda I+N \end{align} where $N$ is nilpotent (with $N^n=0$); then, since $\lambda I$ and $N$ commute, the binomial theorem gives \begin{align} J^k = (\lambda I+N)^k = \sum^k_{m=0}\binom{k}{m}N^{k-m}\lambda^m, \end{align} which means \begin{align} \sum^\infty_{k=0} \frac{t^kTJ^kT^{-1}}{k!} =&\ T\left(\sum^\infty_{k=0}\frac{t^k}{k!}\sum^k_{m=0}\binom{k}{m}N^{k-m}\lambda^m\right)T^{-1} = T\left(\sum^\infty_{m=0}\lambda^m\sum^\infty_{k=m} \frac{t^k}{k!}\binom{k}{m}N^{k-m} \right)T^{-1}\\ =&\ T\left(\sum^\infty_{m=0}\frac{(\lambda t)^m}{m!}\sum^\infty_{k=m}\frac{t^{k-m}}{(k-m)!}N^{k-m} \right)T^{-1} = T\left(\sum^\infty_{m=0}\frac{(\lambda t)^m}{m!}\sum^\infty_{k=0}\frac{t^kN^k}{k!}\right)T^{-1}\\ =&\ T\left(\sum^\infty_{m=0}\frac{(\lambda t)^m}{m!}\sum^{n-1}_{k=0}\frac{t^kN^k}{k!}\right)T^{-1} = T\left(e^{\lambda t}\sum^{n-1}_{k=0}\frac{t^k(J-\lambda I)^k}{k!} \right)T^{-1} = e^{\lambda t}\sum^{n-1}_{k=0}\frac{t^k(A-\lambda I)^k}{k!}, \end{align} where the inner sum truncates at $k=n-1$ because $N^n=0$, and the last equality uses $T(J-\lambda I)^kT^{-1}=(A-\lambda I)^k$.
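As an illustrative sanity check (not part of the proof), one can verify the closed form numerically for a matrix with a single eigenvalue; the matrix, the value of $t$, and the tolerance below are my own choices.

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

# A 3x3 matrix with the single eigenvalue lam = 2 (upper-triangular, so
# A - lam*I is nilpotent and the finite sum has n = 3 terms).
lam = 2.0
A = np.array([[2.0, 1.0, 3.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
n = A.shape[0]
t = 0.7

N = A - lam * np.eye(n)
finite_sum = sum(t**k / factorial(k) * np.linalg.matrix_power(N, k) for k in range(n))
formula = np.exp(lam * t) * finite_sum

print(np.allclose(expm(t * A), formula))  # expected: True
```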
Number of pairs $(x, y)$
Rearranging a little bit, we have the equation $4\times 3^x = 5^y+1$. If there were in fact solutions, then looking at this modulo $4$ we would have $4\times 3^x \equiv 5^y+1\pmod{4}$. However, the left-hand side is $\equiv 0\pmod 4$, while $5\equiv 1\pmod 4$ gives $5^y+1\equiv 1+1\equiv 2\pmod 4$. This would imply that $0\equiv 2\pmod{4}$, a contradiction.
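A quick illustrative check of the two residues (my own sketch; the exponent ranges are arbitrary):

```python
# The left side 4*3^x is always 0 mod 4; the right side 5^y + 1 is always 2 mod 4.
for x in range(10):
    assert (4 * 3**x) % 4 == 0
for y in range(10):
    assert (5**y + 1) % 4 == 2
print("the residues never match, so 4*3^x = 5^y + 1 has no solutions in this range")
```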
How to prove the divergence of the highest power of $2$ that divides $n$ -sequence
"Diverging" is the negation of "converging". In other words, if a sequence does not "settle on a finite value", then it diverges. As you have pointed out already, at any point in your sequence that corresponds to a pure power of $2$, the value of the sequence just gets higher and higher. So it clearly doesn't settle anywhere, and it is therefore divergent. "Diverging to infinity" means it carries off to infinity, and also (just as importantly) that it doesn't come back down. However, in your sequence, there is a $1$ in every odd-numbered spot. So your sequence does not diverge to infinity. The sequence is unbounded, though. That's a different name for a sequence that goes off to infinity (either positive or negative infinity), but this name still applies even if the sequence happens to come back down to small values from time to time.
Inequality between $\ln$ and $-x$.
By the Mean Value Theorem, for any $x\in(0,1)$, we have $g(x)=g(0)+(x-0)g'(c)$ for some $c\in(0,x)$. But with $g(0)=0$ and $g'(c)<0$, we have $g(x)=xg'(c)<0$.
Prove that if $n \in \mathbb{N}$, $n\ge 1$
I think the inductive step is a bit fuzzy, because you assume $n + 1\geq 1$ (which is what you want to prove - fallacious), deduce that $n \geq 0$, and conclude that since this holds, $n+1 \geq 1$, which would be affirming the consequent (i.e., $(P \rightarrow Q) \wedge Q \vdash P$) - also fallacious. Maybe your $\Rightarrow$ is backwards. Also, I would add: you seem to have tried to arrive at a contradiction between $n \geq 0$ and $n \geq 1$ ("$n \geq 0$. However [...] $n \geq 1$"), but an $n$ can perfectly well satisfy both $n \geq 0$ and $n \geq 1$ at the same time. In these cases, try to convince yourself, formally and informally, that you really have arrived at a contradiction, for example by trying to come up with counterexamples; here you'll find it very easy to exhibit an $n$ satisfying both inequalities. Instead, you could say that $n\geq 1$ (inductive hypothesis), and then $n+1 \geq n \geq 1$, so $n + 1\geq 1$ (this assumes that you know that $n + 1 \geq n$, i.e., that $1 \geq 0$).
Some kind of Hom tensor adjoint
Note. Answer edited after the OP stated that $M$ is to be assumed finitely presented rather than only finitely generated. The fact that the two groups are isomorphic is not sufficient to establish that $\DeclareMathOperator\H{Hom}\phi$ is an isomorphism; however, you can (tediously) check that if $\alpha\colon M\otimes\H(N,E)\to\oplus(\H(N,E))$ is the top isomorphism and $\beta\colon\H(\H(M,N),E)\to\oplus(\H(N,E))$ is the bottom one, then $\beta\phi=\alpha$, so also $\phi$ is an isomorphism. However, you can do without such a tedious check. Suppose $M=M_1\oplus M_2$ and call $\phi$, $\phi_1$ and $\phi_2$ the corresponding maps. You can check that the diagram $$\require{AMScd} \begin{CD} M\otimes\H(N,E) @>>> (M_1\otimes\H(N,E))\oplus (M_2\otimes\H(N,E)) \\ @V{\phi}VV @VV{(\phi_1,\phi_2)}V \\ \H(\H(M,N),E) @>>> \H(\H(M_1,N),E)\oplus\H(\H(M_2,N),E) \end{CD} $$ is commutative. The horizontal maps are defined in the obvious way and they are isomorphisms. Thus $\phi$ is an isomorphism if and only if $(\phi_1,\phi_2)$ is an isomorphism, that is, if and only if both $\phi_1$ and $\phi_2$ are isomorphisms. You can use this for the induction step in proving the statement for $M=A^n$, which doesn't require $E$ to be injective. Next, consider $M$ finitely presented and an exact sequence $0\to K\to A^n\to M\to 0$. This produces the exact sequences $$ K\otimes\H(N,E)\to A^n\otimes\H(N,E)\to M\otimes\H(N,E)\to0 $$ and $$ 0\to\H(M,N)\to\H(A^n,N)\to\H(K,N) $$ Since $E$ is injective, the latter sequence produces the exact sequence $$ \H(\H(K,N),E)\to \H(\H(A^n,N),E)\to \H(\H(M,N),E)\to0 $$ and we finally get the commutative diagram $$ \begin{CD} K\otimes\H(N,E)@>>> A^n\otimes\H(N,E)@>>> M\otimes\H(N,E) @>>> 0 \\ @V{\phi_K}VV @V{\phi_{A^n}}VV @V{\phi_M}VV \\ \H(\H(K,N),E) @>>> \H(\H(A^n,N),E) @>>> \H(\H(M,N),E) @>>> 0 \end{CD} $$ The middle vertical map $\phi_{A^n}$ is an isomorphism, as shown before. By diagram chasing, $\phi_M$ is surjective. Note that we have made no special hypothesis on $M$ here, other than that it is finitely generated. As $M$ is finitely presented, $K$ is also finitely generated, and so $\phi_K$ is surjective. By diagram chasing, $\phi_M$ is injective.
Show that $V = W \oplus U$.
Here's a hint: Let $V$ be a vector space and $U, W$ subspaces. Define $$\phi:U\oplus W\to V$$ $$\phi(u, w)=u+w$$ Obviously $\phi$ is linear. When is it an isomorphism? How can you apply it to your case?
Article about primes.(Revised)
The $16$ consecutive primes $$31,37,\dots,101$$ can be used to form a magic square with magic constant $258$, the smallest possible for such a $4\times4$ square. Similarly, you can get a $5\times5$ magic square with $25$ consecutive primes and magic constant $313$, a $6\times6$ with consecutive primes and constant $484$, and so on. What's the smallest possible constant for a $3\times3$ magic square with $9$ consecutive primes?
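For the arithmetic behind the constant $258$ (and as a template for checking other candidates), here is a small illustrative sketch; it only computes the necessary candidate constant, namely the sum of the entries divided by the side length.

```python
from sympy import primerange

def candidate_constant(numbers, side):
    """Necessary condition: the magic constant of a side x side square filled
    with these numbers must be their total divided by `side` (if that divides)."""
    total = sum(numbers)
    return total // side if total % side == 0 else None

primes_4x4 = list(primerange(31, 102))   # 31, 37, ..., 101: 16 consecutive primes
print(len(primes_4x4), candidate_constant(primes_4x4, 4))   # 16 258
```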
Rigorous Statements: "It suffices to show that [...]" and Variations
The phrases you use do NOT mean the same thing, and they do NOT have different degrees of RIGOR. I strongly suggest that you do not use words and phrases at all if you are not familiar with their meaning. "It suffices" means "It is enough" or "If we prove ... we are done (because of ...)". No reason to use "suffice" if you are not comfortable with it. You should be concerned with meaning, not with "what graders and solvers find more rigorous".

"It suffices to show $x=y$." means "It is ENOUGH to show $x=y$.", that is, you are saying that the proof of the original question (or the lemma whose proof you are writing) will be done once you have proved $x=y$. Since this is a mathematical claim, it should be either reasonably obvious or follow from arguments immediately before the statement. Just using the word to sound rigorous, without giving an argument, gives a very bad impression of cargo cult proof writing. Many graders would severely punish this.

"We want to show that $x=y$. / Our goal is to prove $x=y$." Here, you are certainly not claiming that the proof of this identity will finish the proof. You are saying "I am going to prove $x=y$. Bear with me, I will explain later why this is useful." Certainly, it is "less rigorous" than the "suffice" sentence if "suffice" is what you actually meant. But this is not the fault of the phrase. See my last paragraph for better options.

"It is desired to prove that $x=y$." This is not so good, because the passive construction makes it unclear whether you are saying that YOU decided to prove $x=y$ because it fits into your brilliant proof strategy, or that you think the people who posed the problem wanted you to prove this. Often, this distinction is not so important, but it obfuscates your proof structure.

Finally: for a rigorous proof, you need to indicate the STRUCTURE of your proof. For a readable proof, you need to indicate the structure of your proof at the beginning and not at the end. The phrases you called "less rigorous" do not indicate proof structure. This means that you have to add information; it does not mean that the phrases themselves are not rigorous or that the other phrase would be better (indeed, the other phrase could be simply wrong). So, for example, instead of just "We want to show that $x=y$.", it would be better to say "As a first step, we prove $x=y$." or "Next, we prove that $x=y$ because this will help us to prove that $z$ is even." or "Lemma 3: $x=y$". I personally favor the "Lemma/Fact/Step" approach because it automatically highlights your proof structure, but again, it is even better if you explain it.
Continuous function and normal topological space
Let $A = \bigcap_{n=1}^\infty V_n$ where each $V_n$ is open. By Urysohn's lemma (applied to $A$ and the closed set $X - V_n$), we can find continuous $f_n : X \to [0, 1]$ with $f_n(A) = \{0\}$ and $f_n(X - V_n) = \{1\}$. Define: $$ f(x) = \sum_{n=1}^\infty \frac{f_n(x)}{2^n} $$ By the Weierstrass M-test (with $M_n = 2^{-n}$) the series converges uniformly, so $f$ is continuous. Moreover, $f(x)=0$ precisely when $f_n(x)=0$ for every $n$, which holds for $x\in A$ and fails for $x\notin A$ (if $x\notin V_m$ then $f_m(x)=1$). Hence $A = f^{-1}(\{0\})$, and $f$ is the desired function.
Is $\sqrt{1 + \sqrt{2}}$ a unit in some ring of algebraic integers?
Letting $\alpha = \sqrt{1 + \sqrt{2}}$ and $K = \mathbb{Q}(\sqrt{1 + \sqrt{2}})$, then $\alpha$ is indeed a unit in the ring of integers $O_K$, and even in the ring $\mathbb{Z}[\alpha]$ (which may or may not be the full ring of integers). One can see this from the minimal polynomial that you found: $$ \alpha^4 - 2 \alpha^2 - 1 = 0 \implies 1 = (\alpha^3 - 2 \alpha)\alpha \implies \frac{1}{\alpha} = \alpha^3 - 2 \alpha \, . $$ In fact, an element is a unit in the ring of algebraic integers iff the constant term of its minimal polynomial is a unit in $\mathbb{Z}$, i.e., $\pm 1$. (Note that the constant term of the minimal polynomial is $\pm$ the field norm $N_{K/\mathbb{Q}}(\alpha)$.)
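As an illustrative numerical check (floating point only, so a sanity check rather than a proof), one can confirm both the minimal polynomial relation and the expression for $1/\alpha$:

```python
import math

alpha = math.sqrt(1 + math.sqrt(2))

print(abs(alpha**4 - 2 * alpha**2 - 1))          # ~0: minimal polynomial relation
print(abs((alpha**3 - 2 * alpha) - 1 / alpha))   # ~0: 1/alpha = alpha^3 - 2*alpha
```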
Fourier transform of 3D Sinc function
It is a distribution supported on the sphere of radius $P$: $$ \frac {e^{i(\xi,y)}}{4\pi P}\delta_{S_P}(\xi). $$ It is closely related to the formula for the fundamental solution of the wave equation in $\mathbb R^3$. Namely, if one takes the Fourier transform in the space variables of the equation $$ u_{PP}-\Delta u=\delta(x,P), $$ then, solving the resulting ODE, one gets almost the function in question, $\theta(P)\frac{\sin(P|\mathbf{\xi}|)}{|\mathbf{\xi}|}$. Here $\theta$ is the Heaviside step function.
Some isomorphism conditions
The answer to #2 is no. For example, the dihedral group $D_4$ and the quaternion group $Q_8$ both have order $8$ and abelianization $(\mathbb{Z}/2\mathbb{Z})^2$. (This implies, among other things, that they have the same character table.) Group theory would be very boring if anything like this was true.
Uniqueness of Unitary Similarity Transform
Yes. First of all, you can compose $U$ with any permutation. I.e. given a matrix $A$ and a unitary matrix $U$ such that $UAU^*$ is diagonal, $PU$ still diagonalizes $A$ for every permutation matrix $P$ (note that $PU$ is still unitary), since all it does is permute the entries of the diagonal matrix. Moreover, consider the case where $A$ is the identity matrix; then every unitary matrix $U$ diagonalizes $A$. This might give you a hint: if you have eigenvalue degeneracies, you can add even more matrices to your set. Let $A\in \mathbb{C}^{4\times 4}$ be normal (i.e. diagonalizable by unitary matrices), and suppose we have $$UAU^*=\operatorname{diag}(\lambda_1,\lambda_1,\lambda_2,\lambda_3) $$ where $\lambda_i\neq \lambda_j$ for $i\neq j$. Then, if we consider any matrix $\hat{U}:=\operatorname{diag}(\tilde{U},1,1)$ with $\tilde{U}\in U(2)$, $\hat{U}U$ also diagonalizes $A$. In other words: within a degenerate subspace, one can choose any unitary to diagonalize the matrix. In fact, this is all the freedom in $U$ you have, and you can prove this (informally) in the following way: in order to diagonalize a normal matrix $A$, the unitary matrix $U$ must contain an orthonormal basis of eigenvectors of $A$. Now, what choice in the basis do you have? First, you can always multiply the eigenvectors by phases (this is what you found); second, you can put the eigenvectors in any order you like (this is uniqueness up to permutation); and third, if you have degenerate eigenvalues, you can choose any orthonormal basis of the eigenspace you like (my last comment). This gives you all the possibilities there are.
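A small illustrative numpy demo (my own construction, with an arbitrary Hermitian test matrix) of the permutation freedom: if $U$ diagonalizes a normal matrix $A$, so does $PU$ for any permutation matrix $P$.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                      # a Hermitian (hence normal) matrix

w, V = np.linalg.eigh(A)                # A = V diag(w) V^*
U = V.conj().T                          # U A U^* is diagonal

P = np.eye(4)[[2, 0, 3, 1]]             # a permutation matrix

def is_diagonal(M, tol=1e-10):
    return np.allclose(M, np.diag(np.diag(M)), atol=tol)

print(is_diagonal(U @ A @ U.conj().T))                 # True
print(is_diagonal(P @ U @ A @ (P @ U).conj().T))       # True: PU also diagonalizes A
```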
If an operator preserves divisibility, does that imply that it preserves multiplicability?
Are you sure about integration? It looks to me like you are assuming $\int \frac{1}{f} = \frac{1}{\int f}$, which I don't think is true (for integrals). As for your more general question, "preserves division" and "preserves inverses" would imply "preserves multiplication", which you should be able to prove easily, but "definite integral" is not such an operator.
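A quick illustrative counterexample to $\int 1/f = 1/\int f$ (my own choice of $f$ and interval):

```python
from scipy.integrate import quad

f = lambda x: x                                         # f(x) = x on [1, 2]
int_of_reciprocal, _ = quad(lambda x: 1 / f(x), 1, 2)   # = ln 2  ~ 0.693
reciprocal_of_int = 1 / quad(f, 1, 2)[0]                # = 1/1.5 ~ 0.667

print(int_of_reciprocal, reciprocal_of_int)             # not equal
```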
Why the generalized derivatives defined? Why was it needed?
Here is a very interesting survey on the birth of Sobolev spaces: file
Approach square root
It's clear that $x_0,f(x_0),f(f(x_0)),\ldots$ are all positive. When $x>0$ we have $$\eqalign{f(x)-\sqrt a &=\frac12\Bigl(x+\frac ax\Bigr)-\sqrt a\cr &=\frac{x^2-2x\sqrt a+a}{2x}\cr &=\frac{(x-\sqrt a)^2}{2x}\cr}$$ which shows that in fact $f(x_0),f(f(x_0)),\ldots$ are all greater than or equal to $\sqrt a$. Re-using and extending the previous algebra, for $x\ge\sqrt a$ (which, as just shown, holds for every term from $f(x_0)$ onwards) we have $$0\le f(x)-\sqrt a=\frac12(x-\sqrt a)\frac{x-\sqrt a}{x} \le\frac12(x-\sqrt a)\ ,$$ since $0\le\frac{x-\sqrt a}{x}\le1$. This shows that the difference between a term of your sequence and $\sqrt a$ at any step is at most half of what it was at the previous step; so this difference tends to $0$.
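Here is a minimal sketch of the iteration itself (the starting value and number of steps are arbitrary choices), showing the error at least halving at each step once the iterates sit above $\sqrt a$:

```python
import math

def f(x, a):
    return 0.5 * (x + a / x)

a, x = 2.0, 10.0                         # approximate sqrt(2), starting from x0 = 10
for _ in range(8):
    prev_err = x - math.sqrt(a)
    x = f(x, a)
    err = x - math.sqrt(a)
    assert err <= prev_err / 2 + 1e-15   # the halving bound from the answer
    print(f"x = {x:.12f}   error = {err:.3e}")
```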
How can I find a homeomorphism from $\mathbb{R}^n$ to the open unit ball centered at 0?
To figure out the inverse, note that if $|x|=\lambda$, then $$\left|\frac{x}{1+|x|}\right| = \frac{\lambda}{1+\lambda}.$$ Thus, given $y\in\mathbb{R}^n$ with $0\leq |y|\lt 1$, you want to find $\lambda$ such that $\lambda = (1+\lambda)|y|$. Letting $|y|=\mu$, we have $(1-\mu)\lambda = \mu$, or $\lambda = \frac{1}{1-\mu}$. So the map you want for the inverse is $$y\longmapsto \frac{y}{1-|y|}.$$ Note that this is well-defined, since $0\leq |y|\lt 1$, so $0\lt 1-|y|\leq 1$. Also, the compositions are the identity: $$\begin{align*} x &\longmapsto \frac{x}{1+|x|}\\ &\longmapsto \left(\frac{1}{1- \frac{|x|}{1+|x|}}\right)\frac{x}{1+|x|} = \left(\frac{1+|x|}{1+|x|-|x|}\right)\frac{x}{1+|x|}\\ &= \vphantom{\frac{1}{x}}x.\\ y &\longmapsto \frac{y}{1-|y|}\\ &\longmapsto \left(\frac{1}{1 + \frac{|y|}{1-|y|}}\right)\frac{y}{1-|y|} = \left(\frac{1-|y|}{1-|y|+|y|}\right)\frac{y}{1-|y|}\\ &= \vphantom{\frac{1}{y}}y. \end{align*}$$ Now simply verify that both maps are continuous.
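A quick illustrative numerical check (random vectors, my own choice of dimension) that the two maps are mutually inverse and land where they should:

```python
import numpy as np

def forward(x):
    return x / (1 + np.linalg.norm(x))

def inverse(y):
    return y / (1 - np.linalg.norm(y))

rng = np.random.default_rng(1)
x = rng.standard_normal(5)                   # a point of R^n, here n = 5
y = forward(x)

print(np.linalg.norm(y) < 1)                 # True: lands in the open unit ball
print(np.allclose(inverse(forward(x)), x))   # True
print(np.allclose(forward(inverse(y)), y))   # True
```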
Construct a random variable with a given distribution
A necessary and sufficient condition is that $(\Omega,\mathcal{F},P)$ is an atomless probability space. An atom in a probability space is a set $E\in\mathcal{F}$ such that $P(E)>0$ and for all $F\subseteq E$ with $F\in\mathcal{F}$, either $P(F)=0$ or $P(F)=P(E)$. The proof of sufficiency is somewhat messy and naturally proceeds by constructing a uniformly distributed random variable with values in $[0,1]$.
Identity in Number Theory Paper
Here is one way to do the calculation: \begin{align*} \frac{\sum_{m=0}^{i-1}\binom{s}{m}(p-1)^{k-1-m}}{\sum_{m=0}^{i}\binom{s}{m}(p-1)^{k-1-m}}&= \frac{\sum_{m=0}^{i-1}\binom{s}{m}(p-1)^{-m}}{\sum_{m=0}^{i}\binom{s}{m}(p-1)^{-m}}\tag{1}\\ &=\frac{\sum_{m=0}^{i}\binom{s}{m}(p-1)^{-m}-\binom{s}{i}(p-1)^{-i}}{\sum_{m=0}^{i}\binom{s}{m}(p-1)^{-m}}\tag{2}\\ &=1-\binom{s}{i}(p-1)^{-i}\left(\sum_{m=0}^{i}\binom{s}{m}(p-1)^{-m}\right)^{-1}\tag{3}\\ &=1-\binom{s}{i}\left(\sum_{m=0}^{i}\binom{s}{m}(p-1)^{i-m}\right)^{-1}\tag{4}\\ \end{align*} Comment: In (1) we divide numerator and denominator by $(p-1)^{k-1}$. In (2) we add the summand $m=i$ to the sum in the numerator and subtract the corresponding value. In (3) we perform the division. In (4) we multiply numerator and denominator of the second term by $(p-1)^{i}$. Note: According to your precise description you did most of the analysis of Lemma 4, and I suppose that this calculation will therefore look pretty easy to you.
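As an illustrative exact-arithmetic check of the final identity (the parameter values below are chosen arbitrarily, with $k-1\ge i$ so all exponents are nonnegative):

```python
from fractions import Fraction
from math import comb

def lhs(s, i, p, k):
    num = sum(comb(s, m) * (p - 1)**(k - 1 - m) for m in range(i))
    den = sum(comb(s, m) * (p - 1)**(k - 1 - m) for m in range(i + 1))
    return Fraction(num, den)

def rhs(s, i, p):
    return 1 - Fraction(comb(s, i),
                        sum(comb(s, m) * (p - 1)**(i - m) for m in range(i + 1)))

print(lhs(s=7, i=3, p=5, k=6) == rhs(s=7, i=3, p=5))   # True
```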
Proving that a Series Converges Conditionally
Hint: Conditional convergence means that $\sum^{\infty}_{k=1}a_k$ converges, but $\sum^{\infty}_{k=1}\left|a_k\right|$ doesn't. Obviously, $$\sum^{\infty}_{k=1}\left|a_k\right| = \sum^{\infty}_{k=1}\frac 1 k$$ is the harmonic series, which is divergent. To prove the convergence of $\sum^{\infty}_{k=1}a_k$, you can use the alternating series test.
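For intuition, a small illustrative computation contrasting the two partial sums; I take $a_k=(-1)^{k+1}/k$ (the sign convention is an assumption, and the number of terms is arbitrary):

```python
import math

N = 100000
alternating = sum((-1)**(k + 1) / k for k in range(1, N + 1))
absolute    = sum(1 / k for k in range(1, N + 1))

print(alternating, math.log(2))   # partial sums approach ln 2 ~ 0.6931
print(absolute)                   # grows without bound, roughly ln N + gamma ~ 12.1
```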
Solve robust minimax optimization problem in two subsequent steps?
According to the minimax theorem, if $f$ is a continuous function which is concave in $q$ and convex in $x$ (roughly speaking), then $$ \min_x\max_q f(x,q) =\max_q\min_x f(x,q)=\max_q M(q) $$ holds. Hence in this case, your method can solve the problem. However, in general we can only say that $$ \min_x\max_q f(x,q) \ge\max_q\min_x f(x,q), $$ and equality may not hold. In that case, your method does not give the solution.
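To illustrate the possible gap, here is a sketch with a toy objective of my own: $f(x,q)=(x-q)^2$ on $[0,1]\times[0,1]$ is convex in $x$ but not concave in $q$, and a grid search shows $\min_x\max_q f > \max_q\min_x f$.

```python
import numpy as np

f = lambda x, q: (x - q)**2
grid = np.linspace(0.0, 1.0, 201)
F = f(grid[:, None], grid[None, :])   # F[i, j] = f(x_i, q_j)

min_max = F.max(axis=1).min()         # min over x of (max over q)
max_min = F.min(axis=0).max()         # max over q of (min over x)
print(min_max, max_min)               # ~0.25 vs 0.0: a strict gap
```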
Fair gambler's ruin tail probability
The exact probability that the game has not ended after the $\ n^\text{th}\ $ toss is $$ \frac{\pmatrix{n\\\left\lfloor\frac{n}{2}\right\rfloor}}{2^n}\sim\sqrt{\frac{2}{\pi n}}\ . $$ The proof of the first expression turns out to be more straightforward than I had initially expected. The asymptotic approximation follows from the well known asymptotic expressions for the central binomial coefficients: \begin{align} {2n\choose n}&\sim\frac{4^n}{\sqrt{\pi n}}=2^{2n}\sqrt{\frac{2}{2n\pi}}\\ {2n+1\choose n}&={2n+1\choose n+1}\sim\frac{2^{2n+1}}{\sqrt{\pi(n+1)}}\\ &=2^{2n+1}\sqrt{\frac{2}{\pi(2n+1)}}\sqrt{1-\frac{1}{2n+2}}\ . \end{align} For $\ i\ge1\ $ let $\ p_{in}\ $ be the probability that the player has $\ i\ $ dollars after the $\ n^\text{th}\ $ toss, and let $\ p_{0n}\ $ be the probability the game ends on or before the $\ n^\text{th}\ $ toss. Then \begin{align} p_{n+1\,n}&=\frac{1}{2^n}\ ,\\ p_{n\,n}&=0\ ,\\ p_{0\,n}&= p_{0\,n-1}+\frac{p_{1\,n-1}}{2}\ ,\\ p_{1\,n}&= \frac{p_{2\,n-1}}{2}\ , \text{ and}\\ p_{i\,n}&= \frac{p_{i+1\,n-1}+p_{i-1\,n-1}}{2}\ \text{ for }\ i\ge2\ . \end{align} To simplify the calculation, let $\ T_{nj}=2^{n+j}p_{n+1-j\,n+j}\ $ for $\ 0\le j\le n\ $. Then \begin{align} T_{n0}&=1\ ,\\ T_{11}&=1\ ,\text{ and}\\ T_{nj}&=T_{n\,j-1}+T_{n-1\,j}\ \text{ for }\ 1\le j\le n\ , \end{align} with the convention $\ T_{n-1\,n}=0\ $ in the case $\ j=n\ $. It follows from the last of these identities that $$ T_{nk}=\sum_{j=0}^kT_{n-1\,j}\ . $$ The numbers $\ T_{nj}\ $ are the entries in Catalan's triangle. The numbers $\ T_{nn}\ $ along the diagonal are the Catalan numbers, $$ T_{nn}=\frac{2n\choose n}{n+1}\ , $$ from which we obtain \begin{align} p_{1\,2n}&=\frac{T_{nn}}{2^{2n}}\\ &= \frac{2n\choose n}{(n+1)2^{2n}}\ . \end{align} From the recurrence for $\ p_{in}\ $ we also get $\ p_{1\,2n+1}=p_{2\,2n}=0\ $ and \begin{align} p_{0\,2n}&=p_{0\,2n-1}\\ &=p_{0\,2n-2}+\frac{p_{1\,2n-2}}{2}\\ &= p_{0\,2n-2}+\frac{2n-2\choose n-1}{n2^{2n-1}} \end{align} It can be verified by induction that the solution of this recurrence is \begin{align} p_{0\,2n}&=1-\frac{2n\choose n}{2^{2n}}\\ &=1-\frac{2n-1\choose n-1}{2^{2n-1}}\\ &=p_{0\,2n-1}\ . \end{align} Now $\ p_{0n}\ $ is the probability that after the $\ n^\text{th}\ $ toss the game has ended, so the probability that the game has not ended after the $\ n^\text{th}\ $ toss is $$ 1-p_{0\,n}= \frac{\pmatrix{n\\\left\lfloor\frac{n}{2}\right\rfloor}}{2^n}\ , $$ as stated above.
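As an illustrative cross-check of the closed form, here is my own dynamic-programming sketch of the recurrences above (gambler starts with $1$ dollar, fair coin, ruined at $0$):

```python
from fractions import Fraction
from math import comb

def prob_not_ended(n):
    """Exact probability the gambler has not been ruined after n tosses."""
    probs = {1: Fraction(1)}              # wealth -> probability, game still running
    for _ in range(n):
        nxt = {}
        for i, p in probs.items():
            for j in (i - 1, i + 1):
                if j > 0:                 # j == 0 means the game has ended
                    nxt[j] = nxt.get(j, Fraction(0)) + p / 2
        probs = nxt
    return sum(probs.values())

for n in range(1, 12):
    assert prob_not_ended(n) == Fraction(comb(n, n // 2), 2**n)
print("closed form matches the recurrence for n = 1..11")
```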
Find taylor polynomial for $e^x\cos x$
$$e^x=1+x+\frac{x^2}{2}+\frac{x^3}{6}+\cdots$$ $$\cos x=1-\frac{x^2}{2}+\frac{x^4}{24}-\cdots$$ Multiplying and collecting terms up to degree $3$ gives $$e^x\cos x=1+x-\frac{x^3}{3}+\cdots$$ (the $x^2$ terms cancel). The formula for the error term involves a derivative of $e^x\cos x$ and $n!$. With these three terms the error condition is met, so the degree is $3$.
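An illustrative check of the expansion with sympy (my own sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.exp(x) * sp.cos(x), x, 0, 5))
# 1 + x - x**3/3 - x**4/6 + O(x**5)
```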
"Weighted" Functions
You're almost there already. For any given $x$, you want $\frac{1}{x^2}$ of the quantity $f(x)=x^2$ for every $1$ of the quantity $g(x)=x$. Then the numerator of your "average" is $\frac{1}{x^2}\cdot f(x)+1\cdot g(x)=1+x$. Your only question then is what to divide by to keep the notion of "average." In a standard average, you would divide by $2$, since you had $1$ of the function $f$ and $1$ of the function $g$. In your new weighting system, the total of the weights is not $2$, but $1+\frac{1}{x^2}$, hence your average is $\frac{1+x}{1+\frac{1}{x^2}}=\frac{x^3+x^2}{x^2+1}=(x+1)\left(1-\frac{1}{x^2+1}\right)$.
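A tiny illustrative check that the three expressions for the weighted average agree (the sample points are chosen arbitrarily):

```python
def weighted_avg(x):
    f, g = x**2, x                       # the two quantities being averaged
    w_f, w_g = 1 / x**2, 1               # weights: 1/x^2 of f per 1 of g
    return (w_f * f + w_g * g) / (w_f + w_g)

for x in (0.5, 1.0, 2.0, 3.7):
    a = weighted_avg(x)
    b = (x**3 + x**2) / (x**2 + 1)
    c = (x + 1) * (1 - 1 / (x**2 + 1))
    assert abs(a - b) < 1e-12 and abs(a - c) < 1e-12
    print(x, a)
```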
Localization at maximal ideal of tensor product of algebras
This is not true. We just need to show that, in general, $R_m\otimes_k S_n$ is not local. First assume $k$ is algebraically closed. Let $R=k[x]$ and $S=k[y]$, and set $m=(x)$ and $n=(y)$. Then we have $R_m\otimes_k S_n\cong T^{-1}k[x,y]$, where $T=\{fg\mid f\in k[x],\ g\in k[y],\ f(0)\neq 0,\ g(0)\neq 0\}$. Claim: $T^{-1}(x+y+1)$ is a maximal ideal of $T^{-1}k[x,y]$. The spectrum of $T^{-1}k[x,y]$ corresponds to the prime ideals of $k[x,y]$ which have empty intersection with $T$. If $T^{-1}(x+y+1)$ were not maximal, there would exist a prime ideal $p$ properly containing $(x+y+1)$ such that $p\cap T=\emptyset$. Note that $p=(x-a,y-b)$ for some $a,b\in k$, since $\dim(k[x,y])=2$ and $k$ is algebraically closed. But $p\cap T=\emptyset$ implies that $a=b=0$ (otherwise $x-a$ or $y-b$ lies in $T$), which contradicts $(x+y+1)\subseteq p$. The claim follows. Since $T^{-1}(x,y)$ is also a maximal ideal of $T^{-1}k[x,y]$, distinct from $T^{-1}(x+y+1)$ because $x+y+1\notin(x,y)$, we conclude that $T^{-1}k[x,y]$ is not local. The above example, that $k[x]_{(x)}\otimes_k k[y]_{(y)}$ is not local, in fact works over any field. Assume that $T^{-1}k[x,y]$ is local. Then its maximal ideal must be $T^{-1}(x,y)$, since this is a maximal ideal of $T^{-1}k[x,y]$. It is clear that $x+y+1\notin T^{-1}(x,y)$, hence it is invertible in $T^{-1}k[x,y]$. Thus we get $$ (x+y+1)\alpha/fg=1/1, \text{ for some }\alpha\in k[x,y] \text{ and } fg\in T, $$ and since $k[x,y]$ is a domain, $(x+y+1)\alpha=fg$. But $x+y+1$ is irreducible and $k[x,y]$ is a UFD, so $x+y+1$ would have to divide $f$ or $g$, which is impossible since $f\in k[x]$ and $g\in k[y]$. This contradiction shows that $T^{-1}k[x,y]$ is not local.