If $f_k \to f$ a.e. and the $L^p$ norms converge, then $f_k \to f$ in $L^p$
This is a theorem of Riesz. Observe that $$|f_k - f|^p \leq 2^p (|f_k|^p + |f|^p).$$ Now apply Fatou's lemma to the nonnegative functions $$2^p (|f_k|^p + |f|^p) - |f_k - f|^p \geq 0.$$ Together with the convergence of the norms, this implies that $$\limsup_{k \to \infty} \int |f_k - f|^p \, d\mu = 0,$$ and since the integrand is nonnegative, the limit itself is $0$, i.e. $f_k \to f$ in $L^p$.
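In more detail, a sketch of the Fatou step (assuming $f\in L^p$, which itself follows from Fatou applied to $|f_k|^p$): $$2^{p+1}\int|f|^p\,d\mu \;\le\; \liminf_{k\to\infty}\int\Big(2^p(|f_k|^p+|f|^p)-|f_k-f|^p\Big)\,d\mu \;=\; 2^{p+1}\int|f|^p\,d\mu-\limsup_{k\to\infty}\int|f_k-f|^p\,d\mu,$$ where the equality uses $\int|f_k|^p\,d\mu\to\int|f|^p\,d\mu$. Cancelling the finite term $2^{p+1}\int|f|^p\,d\mu$ gives $\limsup_k\int|f_k-f|^p\,d\mu\le 0$.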
Given a polynomial f(x)=x(x+1)(x+2)(x+3)+1
You have that $$f(x) = x(x + 1)(x + 2)(x + 3) + 1 \tag{1}\label{eq1A}$$ Since $p$ is an odd prime, for any $n$ we have $p \mid f(n)$ if and only if $p \mid 16f(n)$. Multiply $f(n)$ by $16$ and distribute the powers of $2$ to each factor with $n$ in it to get $$16f(n) = (2n)(2n + 2)(2n + 4)(2n + 6) + 16 \tag{2}\label{eq2A}$$ Hint: Now, let $$m = 2n + 3 \tag{3}\label{eq3A}$$ to get $$\begin{equation}\begin{aligned}16f(n) = g(m) & = (m - 3)(m - 1)(m + 1)(m + 3) + 16 \\ & = (m - 3)(m + 3)(m - 1)(m + 1) + 16 \\ & = (m^2 - 9)(m^2 - 1) + 16 \\& = m^4 - 10m^2 + 9 + 16 \\& = m^4 - 10m^2 + 25 \\ & = (m^2 - 5)^2\end{aligned}\end{equation}\tag{4}\label{eq4A}$$ Thus $p \mid (m^2 - 5)^2$ and, since $p$ is a prime, this means $p \mid m^2 - 5$. For the other direction, for any $m$ where $p \mid m^2 - 5$, if $m$ is even, replace $m$ by $m + p$, which is odd and still satisfies $p \mid m^2 - 5$. Hint for the remainder of the proof: You can then reverse the steps in \eqref{eq4A} to get, from \eqref{eq3A} with the integer $n = \frac{m - 3}{2}$, that $p \mid 16f(n)$, and hence $p \mid f(n)$.
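A quick numerical check of the identity $16f(n)=\left((2n+3)^2-5\right)^2$ from \eqref{eq4A}:

```python
# Sanity check of eq (4): 16*f(n) equals ((2n+3)^2 - 5)^2 for every integer n.
def f(n):
    return n * (n + 1) * (n + 2) * (n + 3) + 1

for n in range(-50, 51):
    m = 2 * n + 3
    assert 16 * f(n) == (m * m - 5) ** 2
```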
Unimodular row condition
We more naturally get an exact sequence $0\to P\to A^n\to A\to0$, but as it splits we get an exact sequence the other way too. The map $\pi:A^n\to A$ is given by $$(r_1,\ldots,r_n)\mapsto\sum_i r_i a_i.$$ Unimodularity means $\pi$ is surjective, so we get a short exact sequence $0\to P\to A^n\to A\to0$. As $A$ is projective, this splits, so $P$ is a direct summand of the projective module $A^n$ and so is projective. To get the rank, localise at a prime ideal, and the localised sequence now consists of free modules, so the localisation of $P$ has rank $n-1$.
Alternate inner products on Euclidean space?
Any inner product is the dot product in some basis. For example, your inner product is the standard dot product written in the basis $\left(e_1, \frac{1}{2}e_1 + \frac{\sqrt{3}}{2}e_2\right)$.
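In general, if the inner product is $\langle x,y\rangle_M=x^TMy$ for a symmetric positive-definite $M$, a Cholesky factorization $M=LL^T$ exhibits it as the standard dot product of the transformed coordinates $L^Tx$. A sketch with a hypothetical Gram matrix (the question's specific inner product is not reproduced in this answer):

```python
import math

# Hypothetical Gram matrix M (symmetric positive definite).
M = [[2.0, 1.0], [1.0, 2.0]]

# Cholesky factorization of a 2x2 SPD matrix: M = L L^T.
l11 = math.sqrt(M[0][0])
l21 = M[1][0] / l11
l22 = math.sqrt(M[1][1] - l21 ** 2)

def ip_M(x, y):  # the inner product defined by M
    return sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))

def transformed(x):  # coordinates L^T x
    return (l11 * x[0] + l21 * x[1], l22 * x[1])

# <x,y>_M equals the ordinary dot product of the transformed coordinates.
for x, y in [((1.0, 0.0), (0.0, 1.0)), ((0.3, -2.0), (1.5, 0.7))]:
    u, v = transformed(x), transformed(y)
    assert abs(ip_M(x, y) - (u[0] * v[0] + u[1] * v[1])) < 1e-12
```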
How to find this sum $\sum\limits_{n=0}^{\infty}\frac{1}{(3n+1)(3n+2)(3n+3)}$
Your try is excellent... Note that $f(0)=f'(0)=f''(0)=0$ hence $$ f(1)=\int_0^1f'(x)\,\mathrm dx=\int_0^1\int_0^xf''(y)\,\mathrm dy\,\mathrm dx=\int_0^1\int_0^x\int_0^yf'''(z)\,\mathrm dz\,\mathrm dy\,\mathrm dx, $$ that is, $$ 2f(1)=\int_0^1(1-z)^2f'''(z)\,\mathrm dz=\int_0^1\frac{1-z}{1+z+z^2}\,\mathrm dz. $$ The rest is routine. The change of variable $2z+1=\sqrt3t$ yields $$ 2f(1)=\int_{1/\sqrt3}^\sqrt3\frac{\sqrt3-t}{1+t^2}\,\mathrm dt=\left[\sqrt3\arctan t-\frac12\log(1+t^2)\right]_{1/\sqrt3}^\sqrt3. $$ Note that $\arctan\sqrt3=\pi/3$ and $\arctan1/\sqrt3=\pi/6$, hence $$ 2f(1)=\sqrt3\cdot\left(\frac\pi3-\frac\pi6\right)-\frac12\log4+\frac12\log\frac43, $$ that is, $$ f(1)=\frac14\left[\frac\pi{\sqrt3}-\log3\right]\approx0.1788. $$ Second method: The rational fraction is such that $$ \frac2{(3n+1)(3n+2)(3n+3)}=\frac1{3n+1}-\frac2{3n+2}+\frac1{3n+3}, $$ hence $$ 2f(x)=x^2g_1(x)-2xg_2(x)+g_3(x),\qquad g_k(x)=\sum_{n\geqslant0}\frac{x^{3n+k}}{3n+k}. $$ Thus, for each $k$, $$ g'_k(x)=\sum_{n\geqslant0}x^{3n+k-1}=\frac{x^{k-1}}{1-x^3}. $$ Since $g_k(0)=0$ for every $k\geqslant1$, this yields $$ 2f(x)=x^2\int_0^x\frac{1}{1-t^3}\mathrm dt-2x\int_0^x\frac{t}{1-t^3}\mathrm dt+\int_0^x\frac{t^2}{1-t^3}\mathrm dt. $$ The change of variable $t=xu$ yields $$ 2f(x)=x^3\int_0^1\frac{1}{1-x^3u^3}(1-2u+u^2)\mathrm du, $$ that is, $$ 2f(x)=x^3\int_0^1\frac{1-u}{1-xu}\frac{1-u}{1+xu+x^2u^2}\mathrm du. $$ When $x\to1$, one obtains once again $$ 2f(1)=\int_0^1\frac{1-u}{1+u+u^2}\mathrm du. $$ More generally, for every integer $k\geqslant2$, $$ \sum_{n\geqslant0}\frac{(k-1)!}{(kn+1)(kn+2)\cdots(kn+k)}=\int_0^1\frac{(1-u)^{k-2}}{1+u+\cdots+u^{k-1}}\,\mathrm du. $$
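The closed form $f(1)=\frac14\left[\frac\pi{\sqrt3}-\log3\right]$ can be checked against a partial sum of the series:

```python
import math

# Partial sum of sum_{n>=0} 1/((3n+1)(3n+2)(3n+3)); the tail beyond N terms
# is O(1/N^2), so 200000 terms is far more than enough for 1e-9 accuracy.
s = sum(1.0 / ((3 * n + 1) * (3 * n + 2) * (3 * n + 3)) for n in range(200000))
closed = 0.25 * (math.pi / math.sqrt(3) - math.log(3))
assert abs(s - closed) < 1e-9
```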
Infinitely differentiable
By differentiating it an infinite number of times? But seriously, that's what you do. Just that usually you can infer what the higher order derivatives will be, so you don't have to compute them one by one. Example To see that $\sin(x)$ is infinitely differentiable, you realize the following: $\frac{d}{dx}\sin(x) = \cos(x)$, and $\frac{d}{dx} \cos(x) = -\sin(x)$. So you see that $(\frac{d}{dx})^4\sin(x) = \sin(x)$, and so the derivatives are periodic. Therefore by continuity of $\sin(x)$ and its first three derivatives, $\sin(x)$ must be infinitely differentiable. Example To see that $(1 + x^2)^{-1}$ is infinitely differentiable, you realize that $\frac{d}{dx}(1+x^2)^{-n} = -2n x (1+x^2)^{-n-1}$. So therefore by induction you have the following statement: all derivatives of $(1+x^2)^{-1}$ can be written as a polynomial in $x$ multiplied by $(1 +x^2)^{-1}$ to some power. Then you can use the fact that (a) polynomial functions are continuous and (b) quotients of polynomial functions are continuous away from where the denominator vanishes to conclude that all derivatives are continuous. The general philosophy at work is that in order to show all derivatives are bounded and continuous, you can take advantage of some sort of recursive relationship between the various derivatives to inductively give a general form of the derivatives. Then you reduce the problem to showing that all functions of that general form are continuous.
Prove a group of even order has an odd number of elements of order 2 using properties of cosets
Lagrange's theorem is relevant here because a group containing an element of order $2$ (an "involution") must have even order: the involution generates a subgroup of order $2$, which divides $|G|$. I don't think there is an alternative proof using cosets, however, because the set of involutions doesn't necessarily form a subgroup, even when including the identity (nor does the set of elements with order $>2$). The easiest example of this is $D_4$, which has order $8$, yet is generated by (even just two of) its five involutions. The $|a|>2\Rightarrow a\not= a^{-1}$ argument is already quite simple, though. I would just stick with that.
Bounding distance between random variables.
This is an issue of unclear notation, and if you had said more precisely what your notation means the problem might not have arisen. You are both right: $\|X-Y\|$ is a function on $\Omega$, and the inequality $\|X(\omega)-Y(\omega)\|^2\le r^2$ holds for all $\omega$. You meant $\|X-Y\|$ in the pointwise sense, and your peer understood you to mean it was some kind of norm on a space of Hilbert space-valued functions on $\Omega$. Without further explanation both interpretations are reasonable. You might have said something like $P(\|X-Y\|^2\le r^2)=1$, or even "the random variable $\|X-Y\|$ obeys $P(\|X-Y\|^2\le r^2)=1$", or (more tersely) "$\|X-Y\|^2\le r^2$ holds pointwise in $\omega$". There is no absolute truth about mathematical notations: they are just tools we use to help convey our ideas. If your notations are misunderstood it is just like your words being misunderstood; if you find out you have been misunderstood, you fix the problem by rewording.
Econometrics OLS estimates
Use this relationship: $E(Y|X=x)=\mu_Y + \rho \frac{\sigma_Y}{\sigma_X}(x-\mu_X)$, which appears in most probability textbooks. The answer will fall out of it.
If $A_n$ is a countable collection of sets in $F$ with $\mu(A_n) = 1 $ for $ n\geq 1$ then: $\mu\big(\bigcap_{n=1}^{\infty} A_n \big) = 1$
Hint: The simple way to solve this is to show that $$ \mu \Bigg( \Big( \bigcap_{n=1}^{\infty} A_n\Big)^c \Bigg)=0 .$$ Try using De Morgan's law, and recall that $\mu\big( A_n^c \big)=0 $ for all $n$. The significance of this statement is that, with respect to $\mu$, you only care what happens with probability $1$. Everything else is indistinguishable with respect to $\mu$.
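Unwinding the hint, countable subadditivity does all the work: $$\mu\Bigg(\Big(\bigcap_{n=1}^{\infty}A_n\Big)^c\Bigg)=\mu\Bigg(\bigcup_{n=1}^{\infty}A_n^c\Bigg)\le\sum_{n=1}^{\infty}\mu\big(A_n^c\big)=0.$$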
Poisson: $P(N(4)-N(2)=5|N(4)=8)$
Yes.   We could also use the definition of conditional probability, and the fact that the counts of Poisson events occurring in disjoint intervals are independent. $$\begin{align} \mathsf P(N(4)-N(2)=5\mid N(4)=8) ~=~& \dfrac{\mathsf P(N(2)=3)~\mathsf P(N(4)-N(2)=5)}{\mathsf P(N(4)=8)} \\ =~& \dfrac{((2\lambda)^3\mathsf e^{-2\lambda}/3!)~((2\lambda)^5\mathsf e^{-2\lambda}/5!)}{(4\lambda)^8\mathsf e^{-4\lambda}/8!} \\ =~& \binom{8}{5}\frac 1{2^8} \end{align}$$ No.   For this you need to use linearity of expectation: $$\begin{align}\mathsf E(N(4)-N(2)\mid N(3)=1) ~=~ & \mathsf E(N(4)\mid N(3)=1)-\mathsf E(N(2)\mid N(3)=1)\end{align}$$ So, when you are given that one event occurs in the first three unit-times: How many do you expect will occur in the next unit-time? $~\mathsf E(N(4)\mid N(3)=1)$ How many do you expect occurred in the first two unit-times? $~\mathsf E(N(2)\mid N(3)=1)$
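The cancellation of the rate $\lambda$ can be checked numerically; $\lambda=1.3$ below is an arbitrary choice, and the answer comes out rate-free:

```python
import math

# P(N(4)-N(2)=5 | N(4)=8) should equal C(8,5)/2^8 = 7/32 for any rate lambda.
lam = 1.3  # arbitrary rate

def pois(k, mean):
    """Poisson pmf P(count = k) with the given mean."""
    return mean ** k * math.exp(-mean) / math.factorial(k)

p = pois(3, 2 * lam) * pois(5, 2 * lam) / pois(8, 4 * lam)
assert abs(p - math.comb(8, 5) / 2 ** 8) < 1e-12
```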
Is a quadrilateral with one pair of opposite angles congruent and the other pair noncongruent necessarily a kite?
No. Take a circle with diameter $BD$, and let $A$, $C$ be any points on it. Then the angles at $A$ and $C$ are right angles, hence congruent, but in general $ABCD$ need not be a kite.
How well connected can a (special) partition of $\Bbb R^2$ be?
Using the axiom of choice, you can partition $\mathbb{R}^2$ into $2^{\aleph_0}$ sets $A_i$ such that any union of the $A_i$ is connected. Indeed, there are only $2^{\aleph_0}$ uncountable subsets of $\mathbb{R}^2$ that are either open or closed and each of them has cardinality $2^{\aleph_0}$, so by a straightforward diagonalization argument (very similar to the argument here, for instance), you can partition $\mathbb{R}^2$ into $2^{\aleph_0}$ disjoint sets $A_i$ with the property that each of them intersects every uncountable open or closed subset of $\mathbb{R}^2$. Now suppose some $A_i$ were disconnected. Then there are open subsets $U,V\subset\mathbb{R}^2$ such that $A_i\subset U\cup V$, $U\cap A_i\neq\emptyset$, $V\cap A_i\neq\emptyset$, and $U\cap V\cap A_i=\emptyset$. Since $A_i$ intersects every nonempty open set, $U\cap V$ must be empty. But then $U\cup V$ is disconnected, and hence $\mathbb{R}^2\setminus (U\cup V)$ is an uncountable closed set (as the complement of any countable subset of the plane is connected). So $A_i$ must intersect $\mathbb{R}^2\setminus (U\cup V)$, which is a contradiction. By the same argument, any union of the $A_i$ (indeed, any set containing any $A_i$) is also connected. (This is essentially counterexample 124 in Counterexamples in Topology, though there they only construct two such sets $A_i$.)
A smooth function $f(x)$ has a unique minimum. If $f$ also varies smoothly in time, does the location of its minimum vary smoothly in time?
The function $f(x,t)=(tx-1)^2 - \exp\left(\dfrac{1}{x(x+2)}\right)1_{(-2,0)}(x)$ seems to be a counterexample, where $1_A(x)$ is the indicator function for $A$. The second term is a bump function with min at $x=-1$. Writing $\chi(t)$ for the location of the minimum, we have $\chi(0)=-1$ but $\chi(t)=\frac1t$ when $t>0$, so $\chi$ jumps at $t=0$.
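A grid search illustrates the jump of the minimiser (the search window and resolution below are arbitrary choices):

```python
import math

def f(x, t):
    """(t*x - 1)^2 minus a bump supported on (-2, 0) with peak at x = -1."""
    bump = math.exp(1.0 / (x * (x + 2))) if -2 < x < 0 else 0.0
    return (t * x - 1) ** 2 - bump

def argmin(t, lo=-3.0, hi=6.0, steps=90001):
    """Brute-force minimiser of x -> f(x, t) on a uniform grid."""
    xs = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(xs, key=lambda x: f(x, t))

assert abs(argmin(0.0) + 1.0) < 1e-3   # at t = 0 the minimum sits at x = -1
assert abs(argmin(0.5) - 2.0) < 1e-3   # for t > 0 it jumps to x = 1/t
```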
Unramified at certain places and Selmer groups
I think I can now give an answer to this. Can anyone check my reasoning? Recall that by definition, $\xi \in \operatorname{Sel}_{p}(E/\mathbb{Q})$ is unramified at a place $v$ if it is trivial in $H^{1}(I_{v}, E[p])$ where $I_{v}$ is the inertia subgroup of $G_{\overline{\mathbb{Q}}_{v}/\mathbb{Q}_{v}}$. For a prime $\ell \neq p$, we have the Kummer sequence for $E/\mathbb{Q}_{\ell}$, $$0 \longrightarrow E(\mathbb{Q}_{\ell})/pE(\mathbb{Q}_{\ell}) \overset{\phi}{\longrightarrow} H^{1}(G_{\overline{\mathbb{Q}}_{\ell}/\mathbb{Q}_{\ell}}, E[p]) \overset{\psi}{\longrightarrow} H^{1}(G_{\overline{\mathbb{Q}}_{\ell}/\mathbb{Q}_{\ell}}, E(\overline{\mathbb{Q}}_{\ell}))[p] \longrightarrow 0$$ which is a short exact sequence. In particular, as $E(\mathbb{Q}_{\ell})/pE(\mathbb{Q}_{\ell}) = 0$, by the exactness of the sequence, $\ker\psi = \operatorname{im}\phi = 0$. Then by the definition of unramified at the beginning of this paragraph, we have that $\operatorname{Sel}_{p}(E/\mathbb{Q})$ is unramified at $\ell$.
Integral of a function with removable (?) discontinuities
To sum it up very briefly, a point has ZERO width. The area under something of zero width is $0 \times f(x)$, and zero times anything is zero. It seems strange, but you can have a function with an infinite number of removable discontinuities and still get an area under the curve. The area under the discontinuities still amounts to nothing.
trace map is continuous
Is it clear that the map $$k^{n^2}\to k \ \ \ \ \ \ \ \ \ \ (a_{11},a_{12},\dots,a_{1n},a_{21},a_{22},\dots,a_{nn})\mapsto a_{ii}$$ is continuous for $i=1,\dots,n$? Also, the sum of continuous functions is continuous.
If $f: M \to N$ is a smooth map between compact connected manifolds and $\operatorname{rank}{df} = \dim{N}$ then all pre-images are diffeomorphic
Fix $p \in N$. Since $N$ is connected, it suffices to show that the set $S := \{q \in N : f^{-1}(p)\simeq f^{-1}(q) \}$ is open and closed. From the response to a question on MSE, we know that $S$ is open. If $q \in \bar{S}$, let $U \subset N$ be a neighborhood of $q$ on which all fibers are diffeomorphic to $f^{-1}(q)$. Since $U$ contains some point of $S$, we conclude that $q \in S$, hence $S$ is closed.
probability density function question for logs
Hint: Consider the CDF of $Z$, which I'll write as $F_Z(t)$. $$ F_Z(t) = P(Z \leq t) = P( \log(X/4) \leq t) = P( X \leq 4 e^{t}) = F_X(4e^t). $$ Here $F_X(t)$ is the CDF of $X$. Now, to find the PDF of $Z$, it comes down to remembering that $f_Z(t) = \frac{d}{dt} F_Z(t)$.
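Differentiating via the chain rule gives $f_Z(t)=4e^t\,f_X(4e^t)$. As an illustration with a hypothetical choice $X\sim\mathrm{Exp}(1)$ (the question's $X$ is not pinned down in this answer), the formula can be checked against a central difference:

```python
import math

# Hypothetical X ~ Exp(1): F_X(x) = 1 - e^{-x} for x > 0.
F_X = lambda x: 1.0 - math.exp(-x) if x > 0 else 0.0
f_X = lambda x: math.exp(-x) if x > 0 else 0.0

# Z = log(X/4), so F_Z(t) = F_X(4 e^t) and, by the chain rule,
# f_Z(t) = 4 e^t * f_X(4 e^t).
F_Z = lambda t: F_X(4.0 * math.exp(t))
f_Z = lambda t: 4.0 * math.exp(t) * f_X(4.0 * math.exp(t))

h = 1e-6
for t in (-2.0, -1.0, 0.0, 0.5):
    deriv = (F_Z(t + h) - F_Z(t - h)) / (2 * h)  # numerical d/dt F_Z(t)
    assert abs(deriv - f_Z(t)) < 1e-5
```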
Eigenvalues of the operator $T : V \rightarrow V : T(f(t)) = t f'(t)$
The differential equation you need to solve is $tf'(t)= \lambda f(t)$, so $f'(t)-\frac{\lambda}{t}f(t)=0$ (you can divide by $t$, since it is nonzero by assumption). An integrating factor is hence $t^{-\lambda}$. Multiplying both sides by $t^{-\lambda}$ you get $t^{-\lambda}f'(t)-\lambda t^{-\lambda-1} f(t)=0$, which is $\frac{d}{dt}(f(t) t^{-\lambda})=0$. Hence $f(t)t^{-\lambda}=c$, so $f(t)=ct^{\lambda}$. On a space of differentiable functions on $(0,\infty)$, every real $\lambda$ is therefore an eigenvalue; note that if $V$ consists of polynomials, only $\lambda = 0, 1, 2, \dots$ give polynomial eigenvectors $ct^{\lambda}$.
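If $V$ is a space of polynomials (as the notation $f(t)$ suggests), the action of $T$ on coefficients makes this concrete: $T$ sends the coefficient of $t^k$ to $k$ times itself, so the monomials $t^k$ are the eigenvectors. A minimal sketch:

```python
# Represent a polynomial by its coefficient list: coeffs[k] is the
# coefficient of t^k.  Then T(f)(t) = t f'(t) acts by c_k -> k * c_k.
def T(coeffs):
    return [k * c for k, c in enumerate(coeffs)]

# The monomials t^k are eigenvectors with eigenvalue k.
for k in range(8):
    mono = [0] * k + [1]  # the monomial t^k
    assert T(mono) == [k * c for c in mono]
```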
Let $F(x)=\cos\int_{0}^{e^{x}}\cos^{2}\left(\int_{0}^{u}\cos^{3}(t)\,dt\right)du$. Find $F'(x)$
We write $$F(x)=\cos(g(x))$$ with $$g(x)=\int_0^{e^x}\cos^2\left(\int_0^u\cos^3(t)dt\right)du.$$ By the chain rule, $$F'(x)=-\sin(g(x))g'(x)$$ We can write $$g(x)=\int_0^{e^x}\cos^2(H(u))du$$ with $H(u)=\int_0^u\cos^3(t)dt$. Now, suppose $G(x)$ is such that $G'(x)=\cos^2(H(x))$. Then we can write $g(x)=G(e^x)-G(0)$. Thus, $$g'(x)=G'(e^x)e^x=\cos^2(H(e^x))e^x.$$ Putting everything together, we have $$F'(x)=-\sin\left(\int_0^{e^x}\cos^2\left(\int_0^u\cos^3(t)dt\right)du\right)\cos^2\left(\int_0^{e^x}\cos^3(t)dt\right)e^x$$
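The final formula can be sanity-checked numerically. Here $H$ is evaluated in closed form (since $\int_0^u\cos^3(t)\,dt=\sin u-\frac{\sin^3u}{3}$) and $g$ by the trapezoid rule:

```python
import math

# Closed form of the inner integral: H(u) = int_0^u cos^3(t) dt.
H = lambda u: math.sin(u) - math.sin(u) ** 3 / 3

def g(x, n=20000):
    """Trapezoid-rule approximation of int_0^{e^x} cos^2(H(u)) du."""
    b = math.exp(x)
    ys = [math.cos(H(b * i / n)) ** 2 for i in range(n + 1)]
    return b / n * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

F = lambda x: math.cos(g(x))
formula = lambda x: -math.sin(g(x)) * math.cos(H(math.exp(x))) ** 2 * math.exp(x)

# Compare the claimed F'(x) with a central difference of F at x = 0.3.
h, x0 = 1e-5, 0.3
assert abs((F(x0 + h) - F(x0 - h)) / (2 * h) - formula(x0)) < 1e-6
```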
Proving the product of two nonsingular matrices is also nonsingular.
There are different ways to prove this result, for example: Using the determinant: $$\det(AB)=\det A\det B$$ and the fact that $C$ is singular iff $\det C=0$. Using the fact that in a finite dimensional space $C$ is injective iff $C$ is surjective iff $C$ is bijective: since $A$ and $B$ are bijective, the composition $AB$ is bijective, hence nonsingular.
How to tell how many global minima there are for the function $f(x_1,x_2)$
You might check whether the Hessian is positive definite everywhere (and here it is). Then the function is strictly convex, so the global minimum is unique.
quadratic form in hilbert space and Gram matrix
First check that $Q^{\frac{1}{2}}$ is a norm. The triangle inequality needs some calculation (... $Q(x+y)=\sum_{k=1}^K |<x,g_k>|^2+2Re<x,g_k>\overline {<y,g_k>} +|<y,g_k>|^2$ and $(\sum_{k=1}^K Re<x,g_k>\overline {<y,g_k>})^2\leq\sum_{k=1}^K|<x,g_k>|^2\sum_{k=1}^K|<y,g_k>|^2$, which follows from Cauchy–Schwarz). The same calculation yields $Q(x+y)+Q(x-y)=2(Q(x)+Q(y))$, the parallelogram identity. Thus $Q$ is a quadratic form. For every $x\in V$ there are unique $x_1,...,x_K\in\mathbb{C}$ such that $x=\sum_{l=1}^Kx_lg_l$. Then $Q(x)=\sum_{k,l,m=1}^Kx_l\overline{x_m}<g_l,g_k>\overline{<g_m,g_k>}$, and if $G$ is the operator on $V$ that belongs to the Gram matrix, then $Gx=\sum_{k,l=1}^Kx_l<g_k,g_l>g_k$ and $<Gx,x>=\sum_{k,l,m=1}^K x_l\overline{x_m}<g_k,g_l><g_k,g_m>$. For the Rayleigh quotient $\frac{|<Gx,x>|}{<x,x>}$ it is known, by the min-max theorem, that $A\leq\frac{|<Gx,x>|}{<x,x>}\leq B$.
Hadamard's three circle theorem
Let $\lambda=\frac{\log(b/r)}{\log(b/a)}$. Then $1-\lambda=\frac{\log(r/a)}{\log(b/a)}$. Dividing both sides of your equation by $\log(b/a)$ gives: $$\log(M(r))\leq \lambda \log(M(a))+(1-\lambda)\log(M(b)).$$ Notice that $a^{\lambda}b^{1-\lambda}=\exp(\lambda\log(a)+(1-\lambda)\log(b))=r$ (verify this!). Thus, $$\log(M(a^\lambda b^{1-\lambda}))\leq \lambda\log(M(a))+(1-\lambda)\log(M(b))$$ which is saying that $M$ is log-convex, in that $\log(M(\exp(z)))$ is convex in $z$. It is not hard to show that a convex function attains equality everywhere iff the function is affine (linear): $\log(M(\exp(z)))=Az+B$. Thus, $M(\exp(z))=Ce^{Az}$, or $M(z)=Cz^A$. Addendum: to show that a convex function with equality everywhere is affine, write: $f''(z)=\lim_{h\rightarrow 0}\frac{f(z+h)+f(z-h)-2f(z)}{h^2}=\lim_{h\rightarrow 0}\frac{2f(z)-2f(z)}{h^2}=0$ where we used $z=\frac{1}{2}(z+h)+\frac{1}{2}(z-h)$ and equality in the convexity inequality.
Summation Notations - (Discrete Math) I'm having trouble
I'm afraid I'm not sure what you mean when you write "for the increments..." When $i=-1$, the summand is $(-(-1))^{-1+1}=1^0=1$. When $i=0$, the summand is $(-0)^{0+1}=0^1=0$. When $i=1$, the summand is $(-1)^{1+1}=(-1)^2=1$. When $i=2$, the summand is $(-2)^{2+1}=(-2)^3=-8$. And so on. You'll find that the sum evaluates to $-949$.
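Assuming the sum runs over $i=-1$ to $4$ (an assumption here, but consistent with the stated total of $-949$), the terms can be checked directly:

```python
# Terms of sum_{i=-1}^{4} (-i)^(i+1), matching the values worked out above.
terms = [(-i) ** (i + 1) for i in range(-1, 5)]
assert terms[:4] == [1, 0, 1, -8]  # the four terms computed in the answer
total = sum(terms)                 # 1 + 0 + 1 - 8 + 81 - 1024
```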
On a step of Marcus' "Number Fields", Theorem 22 Chapter 3
I have a copy of Marcus with me, so I can fill in some of the details. However, unless you wanted the audience of your post to be limited to those who (i) have access to a copy of the book; and (ii) take the trouble to pull it off the shelf, look up the theorem, and backtrack enough to see what all the symbols mean; then you should try, in the future, to provide enough context in your post so that even those without access may have a shot at understanding the problem and your query about it, and potentially offer help. Marcus will prove (b) in the special case of $I=P$ a prime ideal, relying on (a) to deduce the general case. Now, $S/PS$ is a vector space over $R/P$, and we want to show it has dimension exactly $n$. First, we show it has dimension at most $n$, by showing that any collection of $n+1$ elements is necessarily linearly dependent. To that end, let $\alpha_1,\ldots,\alpha_{n+1} \in S$, and we want to show that their images in $S/PS$ are linearly dependent over $R/P$. We know the original elements are linearly dependent in $L$ over $K$, since $[L:K]=n$. And we know that we can multiply the linear dependence equation by some integer so that all coefficients lie in $R$, rather than in $K$ (this was proven in an exercise in the previous chapter: if $\alpha\in K$, then there exists $m\in\mathbb{Z}$ such that $m\alpha\in R$). This gives us an equation of the form $$\beta_1\alpha_1+\cdots+\beta_{n+1}\alpha_{n+1} = 0,$$ where $\beta_i\in R$. We want to reduce this modulo $P$, but to prove that we don't have a trivial linear combination after reduction, we need to ensure that not all $\beta_j$ lie in $P$. This is where the Lemma comes in. If at least one $\beta_j\notin P$, we are done. Assume, however, that we are unlucky enough to have all $\beta_i\in P$. This could happen: for example, maybe all of our original $\alpha_i$ are in $P$, and so we pick all $\beta_j$ in $P$. So we aren't looking for a contradiction. 
Instead, we want to show we can tweak the linear dependence relation so that the resulting one does not have all coefficients in $P$. We apply the Lemma with $A=P$ and $B=(\beta_1,\ldots,\beta_{n+1})$, ideals in the Dedekind domain $R$; we have $B\subseteq A$ and $A\neq R$ (since it is prime). Let $\gamma\in K$ be the element guaranteed by the Lemma, so that $\gamma B\subset R$ and $\gamma B\not\subset P$. Now take the original linear dependence relation and multiply through by $\gamma$: $$0 = \gamma(\beta_1\alpha_1+\cdots+\beta_{n+1}\alpha_{n+1}) = (\gamma\beta_1)\alpha_1+\cdots+(\gamma\beta_{n+1})\alpha_{n+1}.$$ Now notice that since $\gamma B$ is generated by $\gamma\beta_1,\ldots,\gamma\beta_{n+1}$, it cannot be the case that all the new coefficients lie in $P$ (since $\gamma B\not\subset P$). Thus, we are now in the situation where not all coefficients lie in $P$, and so reducing modulo $P$ we obtain a nontrivial linear dependence relation between $\overline{\alpha_1},\ldots,\overline{\alpha_{n+1}}$, which is what we wanted to show. This proves that $S/PS$ has dimension at most $n$, as claimed.
Distance Between Two Sets of Points
A standard metric on the set of compact subsets of a metric space is the Hausdorff distance.
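For finite point sets the Hausdorff distance reduces to a max–min computation; a minimal sketch (the point sets below are illustrative):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in the plane."""
    # One-sided distance: the farthest any point of S is from the set T.
    one_sided = lambda S, T: max(min(math.dist(p, q) for q in T) for p in S)
    return max(one_sided(A, B), one_sided(B, A))

A = [(0, 0), (1, 0)]
B = [(0, 0), (3, 0)]
assert hausdorff(A, B) == 2.0  # (3,0) is distance 2 from its nearest point of A
```

For general compact sets the `min`/`max` become `inf`/`sup`, but the definition is the same.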
How to set positive-definite function to be equal to the length of inputs?
As it turns out, we will have $v^TMv = v^Tv$ for all $v \in \Bbb R^n$ if and only if $M + M^T = 2I$. One non-symmetric, full-rank example would be $$ M = \pmatrix{1&-1\\1&1}. $$ While the off-diagonal entries are not necessarily zero, they do satisfy $a_{ij} = -a_{ji}$.
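A quick numerical check of the example, and of the criterion $M+M^T=2I$ (the antisymmetric part of $M$ contributes nothing to the quadratic form):

```python
# The non-symmetric, full-rank example from above.
M = [[1, -1], [1, 1]]

# The criterion: M + M^T = 2I.
assert all(M[i][j] + M[j][i] == (2 if i == j else 0)
           for i in range(2) for j in range(2))

def quad(M, v):
    """Evaluate v^T M v."""
    return sum(v[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

# v^T M v equals v^T v = |v|^2 for every v.
for v in [(1.0, 0.0), (0.5, -2.0), (3.0, 4.0)]:
    assert quad(M, v) == v[0] ** 2 + v[1] ** 2
```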
Is A is open/closed?
No, $A$ doesn't have to be open. Let $\{q_n\,|\,n\in\mathbb{N}\}$ be an enumeration of $\mathbb Q$. For each $n\in\mathbb N$, let $V_n=\mathbb{R}\setminus\{q_n\}$. Then $V_n$ is open and dense. But $\bigcap_{n\in\mathbb N}V_n=\mathbb{R}\setminus\mathbb Q$, which is not open.
Idea is correct, proof lacks rigor, coefficient of $t$ in $\det(I+tA)$
The result is correct. I think your argument is fine, although it is not very easy to write up nicely. Here is a shorter proof. If $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $A$ counting multiplicities, then the eigenvalues of $I+tA$ are $1+t\lambda_1,\ldots,1+t\lambda_n$. Thus $$ \det(I+tA)=(1+t\lambda_1)\cdots(1+t\lambda_n)=t^n\lambda_1\cdots\lambda_n+\cdots+t(\lambda_1+\cdots+\lambda_n)+1. $$ So the coefficient of $t$ is $$ \lambda_1+\cdots+\lambda_n=\operatorname{Tr}(A). $$ We also see from above that the coefficient of $t^n$ is $\det A$.
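An exact check in the $2\times2$ case, where $\det(I+tA)=1+t\operatorname{Tr}(A)+t^2\det A$; the matrix below is a hypothetical example, and the coefficients of the quadratic are extracted by interpolation at $t=-1,0,1$:

```python
from fractions import Fraction as Fr

A = [[2, 3], [5, 7]]  # example matrix; Tr(A) = 9, det(A) = -1

def p(t):
    """Evaluate det(I + tA) exactly for a 2x2 matrix."""
    t = Fr(t)
    return (1 + t * A[0][0]) * (1 + t * A[1][1]) - t * A[0][1] * t * A[1][0]

# For a quadratic, the odd/even parts at t = +-1 recover the coefficients.
coeff_t = (p(1) - p(-1)) / 2
coeff_t2 = (p(1) + p(-1)) / 2 - p(0)
assert coeff_t == A[0][0] + A[1][1]                     # Tr(A)
assert coeff_t2 == A[0][0] * A[1][1] - A[0][1] * A[1][0]  # det(A)
```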
Is the endomorphism ring of a module over a non-commutative ring always non-commutative?
So, am I correct to think that in this case $End M$ is a noncommutative ring because $A$ is not commutative? No, not necessarily. There isn’t a connection. You can have $End(M)$ noncommutative and $A$ commutative ($A=\mathbb Z$ and $M=C_2\times C_2$) You can also have $A$ noncommutative and $End(M)$ commutative (for this you can take a ring $A$ which isn’t commutative, but which has a unique maximal right ideal $I$ such that $A/I$ is commutative, and let $M=A/I$.)
What is the intuition behind the reduced row echelon form of a matrix?
Have a look at the Wikipedia entry on 'Gaussian elimination'. The example on three simultaneous equations is the starting point. Then the idea of elementary row operations and what they do to the determinant of a matrix is next, in understanding how echelon form is useful in calculating the determinant of a matrix - but you have to be aware/happy that the determinant of an echelon form matrix is simply the product of its main diagonal elements. Next is that reduced row echelon form is a way of getting an inverse matrix. There are other uses, but these three are reason enough for using the technique.
Why are integral and differential operators commutative?
You can do it when the function $\boldsymbol B$ and the function $\dfrac{\partial \boldsymbol B}{\partial t}$ are continuous. This is the content of the Leibniz rule for differentiation under the integral sign.
Is showing a graph is non-Hamiltonian NP-Complete?
You probably know that deciding whether a graph is Hamiltonian is an NP-complete problem. Thus, deciding whether a graph is not Hamiltonian is a co-NP-complete problem. A co-NP-complete problem is also NP-complete if and only if NP = co-NP, which is an open problem. So, the short answer is, "we don't know yet!"
Finitely presented group with intermediate Turing degree word problem
There might not be specific known examples, but https://en.wikipedia.org/wiki/Word_problem_for_groups gives a mapping from a set $A\subseteq \mathbb{N}$ to the group $G = \langle a,b,c,d \mid a^nba^n=c^ndc^n ,\ n\in A\rangle$, presumably with the same complexity for its word problem. This group isn't finitely presented, but that article also mentions that every finitely generated group with a recursively enumerable presentation is a subgroup of a finitely presented group with insoluble word problem; I would suggest checking the references to see if that construction maintains complexity, since that would give your answer.
Binomial Distribution with probability $P$ such that $P$ is Uniformly distributed
No. The expression you displayed will have to be integrated over $[0,1]$: $$\mathbb P(X=k)=\int_0^1{{n\choose k}p^k(1-p)^{n-k}}dp.$$ This is so because $${n\choose k}P^k(1-P)^{n-k}=\mathbb P(X=k\mid P)$$ and $$\mathbb P(X=k)= \mathbb E[\mathbb P(X=k\mid P)].$$
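The integral has a closed form via the Beta function: $\int_0^1 p^k(1-p)^{n-k}\,dp=\frac{k!\,(n-k)!}{(n+1)!}$, so $\mathbb P(X=k)=\frac1{n+1}$ for every $k$, i.e. $X$ is uniform on $\{0,\dots,n\}$. An exact check:

```python
from fractions import Fraction
from math import comb, factorial

# P(X=k) = C(n,k) * Beta(k+1, n-k+1) = C(n,k) * k!(n-k)!/(n+1)! = 1/(n+1).
n = 10
for k in range(n + 1):
    val = Fraction(comb(n, k) * factorial(k) * factorial(n - k),
                   factorial(n + 1))
    assert val == Fraction(1, n + 1)
```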
Polynomial $U$ such that $U'' - XU' = 0$
If $U$ has degree $n \geq 2$, then $U''$ has degree $n-2$, while $XU'$ is of degree $n$. Their difference cannot be $0$. If $U$ has degree $1$, then $U'' = 0$, while $XU' \neq 0$, and their difference cannot be $0$. If $U$ is a constant, then the equation is satisfied.
Distribution of sum of discrete random variables and central limit theorem
I came up with some solutions, although they might be somewhat more complicated than necessary! Let $\epsilon>0$. As mentioned, according to the standard CLT, $S_n/\sqrt{n} \stackrel{D}{\to}N(0,1)$. Using this fact, choose $b>0$ and $N_0$ so that, for all $n\ge N_0$, $P(S_n/\sqrt{n} > b ) < \epsilon$. Now, the probability of interest can be decomposed into two terms: $P(S_n = k^2,\;\;\;\mbox{ for some k})=P(S_n = k^2,\;\;\;\mbox{ for some k}, \; S_n<b\sqrt{n})+P(S_n = k^2,\;\;\;\mbox{ for some k}, \; S_n>b\sqrt{n}).$ For $n>N_0$, the second term on the right-hand side of the above equation is less than $\epsilon$. Regarding the first term, note that the number of possible values $k^2$ such that $0 < k^2 < b\sqrt{n}$ is no more than $\sqrt{b}n^{1/4}$. The most likely single value that the random variable $S_n$ takes is zero, and, for even $n$, $P(S_n=0) ={n \choose n/2}(1/2)^n.$ Using Stirling's formula to approximate the factorials in ${n \choose n/2}$ gives ${n \choose n/2} \le C2^n/\sqrt{n}$, which in turn gives $P(S_n=0) \le C/\sqrt{n}$, where $C$ is an absolute constant. Since this is the most likely value, $P(S_n = k^2,\;\;\;\mbox{ for some k}, \; 0\le S_n<b\sqrt{n}) \le \mbox{ [number of terms]$\times$ [largest possible probability] }= \frac{C\sqrt{b}}{n^{1/4}}< \epsilon$ for $n$ sufficiently large, say $n \ge N_1$. Thus for $n \ge \max \{N_0,N_1\}$, $P(S_n = k^2,\;\;\;\mbox{ for some k}) < 2\epsilon$, completing the proof. We want to evaluate $\lim_{n\to \infty} \frac{\log P(S_n/n > t)}{n}.$ As $-n \le S_n \le n$, the limit is clearly not defined if $t>1$, and is always zero if $t<0$, so the interesting case is $t\in(0,1)$. Notice that $S_n/2 \stackrel{D}{=} B_n - n/2$, where $B_n$ is a Binomial random variable with parameters $n$ and $1/2$.
Two ingredients I use here are: a) Hoeffding's inequality: $P( B_n > (t+1/2)n) \le e^{-2t^2n}$ b) Tail bounds for the normal distribution: If $Z\sim N(0,1)$, $(\frac{1}{\sqrt{2 \pi}t}-\frac{1}{\sqrt{2 \pi}t^3})e^{-t^2/2} \le P(Z>t) \le \frac{1}{\sqrt{2 \pi}t}e^{-t^2/2}$ Now, by multiplying and dividing by $P(Z > t\sqrt{n})$ inside the logarithm, $\frac{\log P(S_n/n > t)}{n}= \frac{1}{n}\log \left(\frac{P(S_n/n > t)}{P(Z > \sqrt{n}t)}\right) + \frac{\log P(Z> t\sqrt{n})}{n}.$ Note that $P(S_n/n > t)= P(B_n > (t/2 + 1/2)n) \le \exp(-t^2n/2)$, using Hoeffding's inequality. With this and the tail bound for the normal distribution, $ \frac{P(S_n/n > t)}{P(Z > \sqrt{n}t)} \le \frac{\exp(-t^2n/2)}{({1}/{\sqrt{2 \pi}t\sqrt{n}}-{1}/{\sqrt{2 \pi}(t\sqrt{n})^3})\exp(-t^2n/2) } = \frac{1}{({1}/{\sqrt{2 \pi}t\sqrt{n}}-{1}/{\sqrt{2 \pi}(t\sqrt{n})^3}) }. $ Elementary arguments show that $\frac{1}{n}\log\left(\frac{1}{({1}/{\sqrt{2 \pi}t\sqrt{n}}-{1}/{\sqrt{2 \pi}(t\sqrt{n})^3}) }\right) \to 0$ as $n\to \infty$, and so the limit of interest is governed by $\lim_{n\to \infty} \frac{\log P(Z> t\sqrt{n})}{n}.$ This limit is $-t^2/2$, which can be arrived at using L'Hôpital's rule, or the same tail bounds for the normal distribution mentioned above. Hence, $\lim_{n\to \infty} \frac{\log P(S_n/n > t)}{n}=-t^2/2,$ $t\in (0,1)$.
Trivial and Nontrivial Solutions to IVPs and the Existence and Uniqueness Theorem
If $x'(t)=t^2$, then $x(t)=\frac{1}{3}t^3+C$. The initial value problem $x'(t)=t^2, x(0)=0$ has the unique solution $x(t)=\frac{1}{3}t^3.$
What's the differences between naive and axiomatic set theory?
The title of Halmos's book is a bit misleading. He goes through developing basic axiomatic set theory but in a naive way. There are no contradictions in his book, and depending on your background it may be a good place to start. Halmos still develops all the axioms of ZFC in his book, but they are presented in natural language and at a much slower pace than in most axiomatic set theory books. If you are looking for something a bit more advanced, I would recommend either Set Theory by Ken Kunen or Set Theory by Thomas Jech. The other thing is that set theory has a close relationship with mathematical logic, so to understand the basics of set theory there is usually an assumed knowledge of some basic mathematical logic. For a mathematical logic book, I would recommend Mathematical Logic by Ebbinghaus and Flum or Introduction to Mathematical Logic by Enderton. Either way, I think Naive Set Theory by Halmos should be a good beginning point. It is much shorter than the other books and does not require as much in the beginning.
Determining a conic and points of intersection between it and a line
Write down the conic's matrix (or matrices) (See here if you have doubts): $$A=\begin{pmatrix} \;1&\!-\frac12&\frac12\\ \!-\frac12&4&1\\ \;\frac12&1&\!-2\end{pmatrix}\implies \det A=-10\neq0\implies\text{ the conic isn't degenerate}$$ We also have $$\det\begin{pmatrix}1&\!-\frac12\\\!-\frac12&4\end{pmatrix}=\frac{15}4>0\;\implies\;\text{the conic is an ellipse}$$ About the intersection you've already been answered.
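An exact-arithmetic check of the two key facts (the $3\times3$ determinant is nonzero, so the conic is non-degenerate, and the leading $2\times2$ minor is positive, so it is an ellipse):

```python
from fractions import Fraction as Fr

# The conic's matrix, with exact rational entries.
A = [[Fr(1), Fr(-1, 2), Fr(1, 2)],
     [Fr(-1, 2), Fr(4), Fr(1)],
     [Fr(1, 2), Fr(1), Fr(-2)]]

def det3(M):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

d = det3(A)
minor = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert d != 0                 # non-degenerate conic
assert minor == Fr(15, 4)     # positive leading minor: an ellipse
```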
Why would this solution be an upper-tailed rather than a lower-tailed?
If your test statistic is of the form $$Z^{*}=\frac{\hat{p}_1-\hat{p}_2 }{\sqrt{p^{*}(1-p^{*})(\frac{1}{n_1}+\frac{1}{n_2})}}$$ where $p^{*}=\frac{x_1+x_2}{n_1+n_2}$, then when $\hat{p}_1$ is significantly greater than $\hat{p}_2$, the test statistic falls in the upper tail of the normal distribution; that is why the test is upper-tailed.
Convergent sequence on unit sphere
Hint: The sequence of numbers $\| x_n\|$ is real and bounded, so it must contain a convergent subsequence by the Bolzano–Weierstrass theorem.
Let R be an equivalence relation defined on a set A. For any x, y ∈ A, either [x] ∩ [y] = ∅ or [x] = [y]. (Prove or Disprove)
Suppose $[x]\cap[y]$ is not empty. Then there is an element $z$ such that $zRx$ and $zRy$. By symmetry $xRz$, so by transitivity $xRy$, and therefore $[x]=[y]$.
Hereditary torsion theories and preradicals
We know that the torsion theory $(\mathscr{T},\mathscr{F})$ cogenerated by $V$ is the one that has as torsion-free class $\mathscr{F}$ the class of the modules cogenerated by $V$. That is, $M\in \mathscr{F}$ if and only if there is a monomorphism $M\longrightarrow V^X$ for some set $X$. Now, $r_V$ is the idempotent radical associated to this torsion theory. It is easy to see that $\mathscr{F}=\mathscr{F}_{r_V}$. $\Rightarrow)$ We want to show that $\mathscr{F}_{r_V}=\mathscr{F}_{r_{E(V)}}$. To do this we use that these are the classes of modules cogenerated by $V$ and $E(V)$ respectively. As $V\subseteq E(V)$, we have that $\mathscr{F}_{r_V}\subseteq\mathscr{F}_{r_{E(V)}}$. We know that $(\mathscr{T},\mathscr{F})$ is hereditary if and only if $\mathscr{F}$ is closed under injective envelopes. As $\mathscr{F}=\mathscr{F}_{r_V}$ is closed under injective envelopes and $V\in\mathscr{F}$, we have $E(V)\in \mathscr{F}$. Therefore $\mathscr{F}_{r_V}=\mathscr{F}_{r_{E(V)}}$. $\Leftarrow)$ As $r_V=r_{E(V)}$, we have that $\mathscr{F}_{r_V}=\mathscr{F}_{r_{E(V)}}$. Let $M\in\mathscr{F}$; then $M$ is a submodule of $E(V)^X$ for some set $X$. It follows that $E(M)$ is a submodule of $E(V)^X$. Thus $E(M)\in\mathscr{F}$. Therefore $(\mathscr{T},\mathscr{F})$ is hereditary.
Asymptotic estimation of $A_n$
Edit: The following is a better lower estimate. A lower bound here is, for positive $x$, $$ \sum_{m, n\leq x} \frac1{\tau(mn)} $$ where $\tau(n)=\sum_{d|n}1$ is the number of divisors of $n$. This is because $\tau(mn)$ counts all possible choices of positive integers $d, k$ with $dk=mn$, not just the ones with $d\leq x, k\leq x$. Such an estimate can be obtained through the Selberg–Delange method. The method is presented in Tenenbaum's book Introduction to Analytic and Probabilistic Number Theory (Chapters 5, 6). The following is Theorem 8 on page 207 of Tenenbaum's book. Theorem Let $$ h=\prod_p \sqrt{p(p-1)}\log\left(1-\frac1p\right)^{-1}. $$ Then uniformly for $x\geq 2$, $d\geq 1$, we have $$ \sum_{n\leq x} \frac 1{\tau(nd)}=\frac{hx}{\sqrt{\pi\log x}}\left(g(d)+O\left( \frac{(3/4)^{w(d)}}{\log x}\right)\right) $$ where $g$ is an arithmetic function satisfying $$ \sum_{d\leq x} g(d)=\frac x{h\sqrt{\pi\log x}}\left(1+O\left(\frac1{\log x}\right)\right). $$ Applying this theorem to the double sum, we obtain $$ \begin{align} \sum_{m,n\leq x} \frac1{\tau(mn)} &=\frac{hx}{\sqrt{\pi\log x}}\left(\frac x{h\sqrt{\pi\log x}}+O\left(\frac x{\log^{3/2} x}\right)+O\left(\frac{\sum_{d\leq x}(3/4)^{w(d)}}{\log x} \right)\right)\\ &=\frac{hx}{\sqrt{\pi\log x}}\left(\frac x{h\sqrt{\pi\log x}}+O\left(\frac x{\log^{3/2} x}\right)+O\left(\frac{x\log^{-1/4}x}{\log x} \right)\right)\\ &=\frac{x^2}{\pi \log x}+O\left(\frac {x^2}{\log^{7/4} x}\right). \end{align} $$
Show: $\sum_{n=1}^{k}\sum_{m=n}^{k}\int_m^{m+1}f(x)dx=\sum_{m=1}^{k}m \int_m^{m+1}f(x)dx$
It is just a matter of definition. $$ \sum_{n=1}^{k} \sum_{m=n}^{k} \int_m^{m+1} f(x)dx $$ Expand along the first sum: $$ \left( \sum_{m=1}^{k} \int_m^{m+1} f(x)dx \right) + \left( \sum_{m=2}^{k} \int_m^{m+1} f(x)dx \right) + \cdots + \left( \sum_{m=k}^{k} \int_m^{m+1} f(x)dx \right) $$ Write each term in parentheses as a new line, and expand the sum: $$ \begin{array}{ccccccc} \int_1^{2} f(x)dx & + & \int_2^{3} f(x)dx & + & \cdots & + & \int_k^{k+1} f(x)dx \\ & + & \int_2^{3} f(x)dx & + & \cdots & + & \int_k^{k+1} f(x)dx \\ &&&& \ddots && \vdots \\ &&&&& + & \int_k^{k+1} f(x)dx \end{array} $$ Now count the number of each integral: $$ 1 \cdot \left( \int_1^{2} f(x)dx \right) + 2 \cdot \left( \int_2^{3} f(x)dx \right) + \cdots + k \cdot \left( \int_k^{k+1} f(x)dx \right) $$ Recombine using sigma notation: $$ \sum_{m=1}^{k} m \int_m^{m+1} f(x)dx $$
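The index swap can also be confirmed numerically for a concrete integrand; here $f(x)=x$ is my own test choice, so each piece has the closed form $\int_m^{m+1} x\,dx = \frac{2m+1}{2}$.

```python
# Check sum_{n=1}^k sum_{m=n}^k I(m) == sum_{m=1}^k m·I(m) for small k,
# where I(m) = ∫_m^{m+1} x dx = (2m+1)/2.

def piece(m):
    return (2 * m + 1) / 2

def lhs(k):
    return sum(piece(m) for n in range(1, k + 1) for m in range(n, k + 1))

def rhs(k):
    return sum(m * piece(m) for m in range(1, k + 1))

assert all(lhs(k) == rhs(k) for k in range(1, 30))
```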
Distinguishing topology from metrics
A topology is a very "bare-bones" thing: it just provides a notion of openness (satisfying a few basic properties) for subsets of a given set $X$ of points. Notions like "straight line," "angle," and even "distance" are not built into a topology. Now every metric $d$ on a set $X$ induces a topology $\tau_d$ on $X$: namely, $U$ is open according to $\tau_d$ (or more snappily, $U\in\tau_d$ since a topology literally is the collection of sets it declares to be open) iff for each $u\in U$ there is some $\epsilon>0$ such that for all $v\in X$ with $d(v,u)<\epsilon$ we have $v\in U$. Note that we may have $\tau_{d_1}=\tau_{d_2}$ even if $d_1$ and $d_2$ are quite different metrics; coming up with some examples of this is a good exercise (think about $\mathbb{R}^2$). Topologies of the form $\tau_d$ for some metric $d$ are called metrizable, and the study of metrizability and its variants is an important topic within general topology. The pointwise convergence topology $\tau_{pwc}$ is not metrizable: there is no metric $d$ on the set $Fn(X,\mathbb{R})$ of functions from $X$ to $\mathbb{R}$ such that $\tau_d=\tau_{pwc}$. Of course, $\tau_{pwc}$ is motivated by metric ideas, but it's not literally induced by a metric in the very specific sense of the above paragraph. This is a good exercise; if you don't buy it, first try to write down an explicit metric $d$ on $Fn(X,\mathbb{R})$ and then show that $\tau_d=\tau_{pwc}$. As the issues with this become clear, you'll see how to prove that $\tau_{pwc}$ is not in fact metrizable.
Quaternion product of three vectors: meaning of vector part?
If only the $+$ sign were a $-$, you'd have a cyclic symmetry. What's with that? Well, notice that because imaginary quaternions aren't closed under multiplication, $abc$ has a contribution of $-(a\cdot b)c$ that comes from $c$ interacting with a real number instead. Because of that, much as it pains me to say it, I don't think $V$ will have a nice geometric interpretation. Indeed, $V$ is a measure of how much a product fails to stay in the set of imaginary quaternions to which $a,\,b,\,c$ belong, and is obtained from a calculation that leaves that set as soon as we compute the product $ab$.
For a ring of char $p$ where $p>0$ is a prime, what does $R^{1/p}$ mean?
If $R$ is a domain, it has a fraction field $K$, which in turn has an algebraic closure $\bar K=\Omega$. This latter field has a well-known Frobenius automorphism $Frob:\Omega \to \Omega: x\mapsto x^p$. The ring you are looking for is the image of $R$ under its inverse automorphism, namely the ring $$R^{1/p}= Frob^{-1}(R)$$ You can iterate this process and get rings $R^{1/p},R^{1/p^2},R^{1/p^3} ,\ldots \subset \Omega\;$ whose union is symbolically denoted $R^{1/p^\infty}$. If $R$ is not a domain, I think you should be very wary, and I definitely don't want to say anything about that case. An example: The simplest nontrivial example might be the polynomial ring $R=\mathbb F_p[X]$, for which we have $R^{1/p}=\mathbb F_p[X^{1/p}]$.
Using the squeeze theorem to determine a limit $\lim_{n\to\infty} (n!)^{\frac{1}{n^2}}$
Hint: Use the inequality $$n^{\frac{n}{2}} \leqslant {n!} \leqslant {\left(\frac{n+1}{2}\right)^n}, \;\; n>1,$$ which can be proved by induction.
How can one grab a random node from a binary tree without flattening it?
Do you know how many nodes are in each node's child subtrees? If you do, you can just decide that you want, say, the $k$-th node from the left and then descend the tree to find that node:

1. Let $n$ be the total number of nodes in the tree. Choose $k$ to be a random integer between $0$ and $n-1$ inclusive, and let $A$ initially be the root node of the tree.
2. Let $m$ be the number of nodes in the left subtree of $A$. (If $A$ is a leaf or has only right children, let $m = 0$.)
3. If $k = m$, choose $A$ as the node we want and stop.
4. Otherwise, if $k < m$, replace $A$ with its left child node and repeat from step 2.
5. Otherwise (i.e. if $k > m$), subtract $m+1$ from $k$, replace $A$ with its right child node and repeat from step 2.

This algorithm is much more efficient than traversing the entire tree; its running time is bounded by the depth of the tree, which for (approximately) balanced trees is proportional to the logarithm of the total number of nodes.
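A minimal sketch of the algorithm above in Python, assuming each node caches its own subtree size so the "count the left subtree" step costs $O(1)$; the class layout and names are illustrative, not prescribed by the answer.

```python
# Sketch: pick a uniformly random node by rank, assuming cached sizes.
import random

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right
        self.size = 1 + subtree_size(left) + subtree_size(right)

def subtree_size(node):
    return node.size if node else 0

def random_node(root):
    k = random.randrange(subtree_size(root))  # random rank in [0, n-1]
    node = root
    while True:
        m = subtree_size(node.left)           # nodes to the left of `node`
        if k == m:                            # `node` is the k-th from the left
            return node
        if k < m:                             # target lies in the left subtree
            node = node.left
        else:                                 # skip left subtree and `node`
            k -= m + 1
            node = node.right

# tiny demo tree:   2     (in-order: 1, 2, 3)
#                  / \
#                 1   3
tree = Node(2, Node(1), Node(3))
```

Each call walks one root-to-node path, so the cost is bounded by the tree's depth, matching the claim in the answer.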
Non-finite series implies product is zero
Note that $1-x \leq \exp(-x)$. You can verify this from calculus by looking at the function $f(x) = \exp(-x) +x - 1$ and prove that the function is increasing. Hence, $f(x) \geq f(0) = 0$. Let $M_N = \displaystyle \prod_{n=1}^{N} (1-y_n)$. Hence, we have that $$0 \leq M_N = \displaystyle \prod_{n=1}^{N} (1-y_n) \leq \displaystyle \prod_{n=1}^{N} \exp(-y_n) = \exp \left( - \sum_{n=1}^{N} y_n \right)$$ Hence, $$0 \leq \lim_{N \rightarrow \infty} M_N \leq \lim_{N \rightarrow \infty} \exp \left( - \sum_{n=1}^{N} y_n \right) \leq \lim_{N \rightarrow \infty} \frac1{1 + \displaystyle \sum_{n=1}^{N} y_n} = \lim_{N \rightarrow \infty} \frac1{1 + S(N)} = 0$$ where $\displaystyle S(N) = \sum_{n=1}^{N} y_n$ and we are given that $\displaystyle \lim_{N \rightarrow \infty} S(N) = \infty$. Hence, $$\prod_{n=1}^{\infty} \left( 1-y_n \right) = 0.$$ EDIT Since you have $0 \leq y_n \leq 1$, you could also do as follows. $$1-y_n \leq \frac1{1+y_n}.$$ As before letting, $M_N = \displaystyle \prod_{n=1}^{N} (1-y_n)$. Hence, we have that $$0 \leq M_N = \displaystyle \prod_{n=1}^{N} (1-y_n) \leq \displaystyle \prod_{n=1}^{N} \frac1{1+y_n}$$ Hence, $$0 \leq \lim_{N \rightarrow \infty} M_N \leq \lim_{N \rightarrow \infty} \prod_{n=1}^{N} \frac1{1+y_n} \leq \lim_{N \rightarrow \infty} \frac1{1 + \displaystyle \sum_{n=1}^{N} y_n} = \lim_{N \rightarrow \infty} \frac1{1 + S(N)} = 0$$ where $\displaystyle S(N) = \sum_{n=1}^{N} y_n$ and we are given that $\displaystyle \lim_{N \rightarrow \infty} S(N) = \infty$. Hence, $$\prod_{n=1}^{\infty} \left( 1-y_n \right) = 0.$$
Prove that if $n>10$ then $\sum_{d\mid n}\phi(\phi(d))<\frac{3}5n$
We start with the identity: $$n=\sum_{d|n}\phi(d).$$ In order to prove it, just note that the right-hand side is a multiplicative function, and therefore it is enough to check equality for prime powers only. Now the key point is to note that if $d|n$ then $\phi(d)|\phi(n)$, and therefore the left-hand side of our inequality is a sum of $\phi(m)$ where $m$ runs over some divisors of $\phi(n).$ In other words, $$\sum_{d\mid n}\phi(\phi(d))=\sum_{m|\phi(n)}\phi(m)-S=\phi(n)-S,$$ where $S$ is the sum over those divisors of $\phi(n)$ that are not of the form $\phi(d),$ $d|n.$ Now, if $n=p_1^{\alpha_1}\cdot p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ then $\phi(n)=p_1^{\alpha_1-1}\cdot p_2^{\alpha_2-1}\cdots p_k^{\alpha_k-1}(p_1-1)\cdots(p_k-1)$, and those divisors that come from $\phi(d)$ are all of the form $m=p_1^{\beta_1}\cdot p_2^{\beta_2}\cdots p_k^{\beta_k}\prod_{i}(p_i-1).$ So if $n\ne 2^m$ then the divisor $D=\frac{p_1^{\alpha_1-1}\cdot p_2^{\alpha_2-1}\cdots p_k^{\alpha_k-1}(p_1-1)\cdots(p_k-1)}{2}$ contributes to $S$, and we can estimate: $$\phi(n)-S\le \phi(n)/2\le \frac{n}{2}\le \frac{3}{5}n.$$ You are left to check $n=2^m$, which can easily be done directly.
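The claimed bound is easy to test empirically with a brute-force totient; the helper names below are mine, not part of the proof.

```python
# Empirical check of sum_{d|n} φ(φ(d)) < (3/5) n for 10 < n ≤ 500.
from math import gcd

def phi(n):
    # Euler's totient by brute force; fine at this scale
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def lhs(n):
    return sum(phi(phi(d)) for d in range(1, n + 1) if n % d == 0)

assert all(5 * lhs(n) < 3 * n for n in range(11, 501))
print(lhs(30))  # 16, comfortably below (3/5)·30 = 18
```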
Show that $\cot(5\theta)=\frac{1-10\tan^2(\theta)+5\tan^4(\theta)}{1-10\tan^3(\theta)+5\tan(\theta)}, \forall\theta\in R $
Put $z = 1+i\tan\theta$. Then $\cot(5\theta)$ is the ratio between the real part and the imaginary part of $z^5$: $$z^5 = (1+i\tan\theta)^5 = 1+5i\tan\theta+10 i^2 \tan^2\theta+10 i^3 \tan^3\theta + 5 i^4 \tan^4\theta + i^5\tan^5\theta,$$ $$z^5 = (1-10\tan^2\theta+5\tan^4\theta) + i(5\tan\theta-10\tan^3\theta+\tan^5\theta),$$ from which the claim follows.
Algebra of compact operators on $\ell_p$
No, they are not isomorphic as Banach algebras. By the proof of Eidelheit's theorem, if $A_1$ and $A_2$ are subalgebras, respectively, of $B(X)$ and $B(Y)$ that contain all finite-rank operators, ($X$ and $Y$ are Banach spaces) and are Banach-algebra isomorphic, then $X$ and $Y$ are Banach-space isomorphic. This was rediscovered in this paper which surprisingly does not mention Eidelheit's result.
Linear map with polynomials - Find a matrix
The column vectors of $A$ are the coordinates of $F(1), F(x), F(x^2), F(x^3)$ in the standard basis. As $F(1)=0$, $\;F(x)=x+1$, $\;F(x^2)=2x(x+1)$, $\;F(x^3)=3x^2(x+1)$, we find: $$A=\begin{bmatrix} 0&1&0&0\\ 0&1&2&0\\ 0&0&2&3\\ 0&0&0&3 \end{bmatrix} $$ This is a triangular matrix, hence the eigenvalues are the diagonal elements: $0,1,2,3$. To determine the eigenvectors relative to these eigenvalues you have to solve successively $\;Av=0$, $\;(A-I)v=0$, $\;(A-2I)v=0$, $\;(A-3I)v=0$. In this new basis the matrix $A'$, by definition, will be the diagonal matrix $\;D(0,1,2,3)$.
Number of injective field homomorphism
Yes, it is true (and keep in mind that a morphism of fields is automatically injective). The key point is that the set of $K$-linear maps $f:F\to L$ is a vector space over the field $L$. Indeed, for $\lambda \in L$ the map $\lambda f: F\to L \; $ is defined (you guessed it!) by $(\lambda f)(x)=\lambda \cdot (f(x))$ where the dot $\cdot$ is the product in the field $L$. Linear algebra then teaches us that the dimension of that $L$-vector space $\mathcal L_{K-lin }(F,L)$ is $n=\dim_K(F)$ [see proof in Edit below]. The theorem of linear independence of homomorphisms then states that the set of $K$-algebra morphisms $Hom_{K-alg }(F,L)$ is a linearly independent subset $Hom_{K-alg }(F,L)\subset \mathcal L_{K-lin }(F,L)$ of the aforementioned $L$-vector space $\mathcal L_{K-lin }(F,L)$, so that of course $$\operatorname {card} Hom_{K-alg }(F,L)\leq n$$ just as you wished. Caveat The set $Hom_{K-alg }(F,L)$ has no algebraic structure whatsoever: it is an unashamedly naked set, very possibly empty. Edit In order to answer user's question in the comments, here is a proof that $\mathcal L_{K-lin }(F,L)$ has dimension $n$ over $L$: Choose a basis $a_1,...,a_n$ of $F$ over $K$. Then the $K$-linear maps $f_i:F \to L$ defined by $f_i(\sum k_r a_r)= k_i$ are the required basis. This is just a slight generalization of the usual concept of dual basis, which you recover if $L=K$ (which is a perfectly legitimate choice for $L$ in the question and in the answer!). Be very wary of the confusing fact that all $f_i$'s have values in $K$ but that there exist linear maps $f\in \mathcal L_{K-lin }(F,L)$ capable of reaching any $\lambda\in L$: for example $(\lambda f_1)(a_1)=\lambda$!
Why this integral equals to $\Gamma(4)10^4$
Let $\frac{y}{10} = u \iff y = 10u \iff dy = 10\,du$ Then: $$\int_0^{\infty}y^3 e^{-\frac{y}{10}}\,dy=\int_0^{\infty}(10u)^3 e^{-u}(10\,du) = 10^4\int_0^{\infty}u^{4-1} e^{-u}\,du = \Gamma(4)10^4$$
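A quick numerical sanity check of the substitution: the integral should come out to $\Gamma(4)\cdot 10^4 = 3!\cdot 10000 = 60000$. The truncation point $y=400$ (beyond which the tail is negligible) and the step count are my own choices.

```python
# Verify ∫_0^∞ y^3 e^(-y/10) dy ≈ Γ(4)·10^4 = 60000 via Simpson's rule.
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

integral = simpson(lambda y: y**3 * math.exp(-y / 10), 0, 400)
print(round(integral))  # 60000
```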
Average of Two Quadratic Forms
For any symmetric matrix $A$, we have $$ \frac 12 (x - y)^TA(x - y) = \frac{x^TAx + y^TAy}{2} - x^TAy. $$ With that, we can conclude that your inequality will hold for all $x,y$ (with norm less than $1$) if and only if $A$ is negative semidefinite. If $A$ is positive semidefinite, then the opposite inequality will hold. In other cases, neither inequality holds for all $x,y$ (with norm less than $1$).
Nonstandard Definition of the Radical of A Ring
In Atiyah & Macdonald's An Introduction to Commutative Algebra they define the nilradical of a ring to be "the set...of all nilpotent elements in the ring" (chapter 1, page 5). I would say this is the more standard terminology.
sigma fields measures
$X_1(i,j)=k$ is the same as saying $i=k$ with $j$ arbitrary. This means $\sigma (X_1)$ consists of sets of the form $A \times \{1,2,3,4,5,6\}$ with $A \subseteq \{1,2,3,4,5,6\}$. So $\{(6,6)\}$ is not in $\sigma (X_1)$.
probability that I toss two coins independently, and get two heads or two tails, assuming I discard all HT and TH outcomes.
Yes. You have found a conditional probability. If the first coin has probability $p$ of being heads and the second has probability $q$ of being heads and they are independent, then your expression $\dfrac{pq}{pq + (1-p)(1-q)}$ is the conditional probability that both are heads, given they are both head or both tails. Conditional probability is as simple as $\mathbb{P}(A \mid B)=\dfrac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)}=\dfrac{\mathbb{P}(A \cap B)}{\mathbb{P}(A \cap B) + \mathbb{P}(A^c \cap B)}$.
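A quick Monte Carlo check of the formula; the particular values of $p$ and $q$ below are my own arbitrary choices.

```python
# Estimate P(both heads | both equal) by simulation and compare with
# the closed form pq / (pq + (1-p)(1-q)).
import random

random.seed(1)                      # reproducible run
p, q = 0.7, 0.4
both_heads = same = 0
for _ in range(200_000):
    a = random.random() < p         # first coin lands heads
    b = random.random() < q         # second coin lands heads
    if a == b:                      # keep only HH and TT, discard HT/TH
        same += 1
        both_heads += a and b
estimate = both_heads / same
exact = p * q / (p * q + (1 - p) * (1 - q))   # = 0.28/0.46 ≈ 0.609
```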
Solutions of the linear equation $a_1Y_1 + \cdots+ a_mY_m = 0$
I will assume that all $a_i$ are in the maximal ideal, since otherwise you can easily write down all solutions. If the $a_i$ have a common divisor, you can clearly cancel it in your equation and thus also assume that they have no common divisor. Then you have a map $A_2^m\to A_2$ given by the $a_i$, and you are interested in the kernel. Homological algebra immediately tells you that the kernel is $A_2^{m-1}$ and the map $A_2^{m-1}\to A_2^m$ is given by an $(m-1)\times m$ matrix $M$. It can be shown that, after a change of basis if necessary, the $m$ maximal $(m-1)\times(m-1)$ minors of $M$ are precisely the $a_i$. This is the best you can do in general.
If we could just use '<' instead of '≤', why are we still using '≤' in many statements?
It is often a lot harder to show $<$ and a lot easier to show $\leq$. For example, suppose you have a sequence $(a_n)_{n \in \mathbb{N}}$ that is bounded above by a constant $C$, so $a_n \leq C$ for all $n$. If further the sequence converges, then also $$\lim\limits_{n \to \infty} a_n \leq C.$$ On the other hand, even if $a_n < C$ for all $n$, you can't conclude that $$\lim\limits_{n \to \infty} a_n < C.$$ (Take $a_n = 1-\frac1n < 1$, whose limit is exactly $1$.)
Prove that $p ◦ p = p$. (Representing a Linear Transformation as a Matrix)
If $A$ is the matrix corresponding to the linear operator $p$, then $p(v)=A\cdot v\ \forall v\in\Bbb R_2[X]$. $\implies(p\circ p)(v)=p(p(v))=A\cdot(p(v))=A\cdot(Av)=(A\cdot A)v=A^2\cdot v$ This means the matrix corresponding to the linear operator $p\circ p$ is $A^2$. Since $A^2=A$, the matrices corresponding to $p$ and $p\circ p$ are identical, which means $p=p\circ p$.
Prove that the set $A = \{(x, y) \mid x$ is an odd integer and $y$ is an even integer$\}$ is enumerable
One way to get countability is to create an injective function (not necessarily surjective) from your set $A$ to $\Bbb{N}$. Because then the image $f(A)$ of your set is a subset of $\Bbb{N}$, hence $f(A)$ is countable, and by injectivity $A$ is countable. A simple way to achieve this with negative integers etc. is as follows: Let $$f(x,y)=\begin{cases} 2^x \cdot 3^y & \text{ if } x \geq 0, y \geq 0 \\ 5^x \cdot 7^{|y|} & \text{ if } x \geq 0, y < 0\\ 11^{|x|} \cdot 13^{|y|} & \text{ if } x < 0, y < 0\\ 17^{|x|} \cdot 19^y & \text{ if } x < 0, y \geq 0\end{cases}$$ From the uniqueness of prime factorization, injectivity can be concluded fairly quickly.
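A brute-force check of the injectivity claim on a finite window of $\Bbb Z^2$ (the window size is an arbitrary choice of mine):

```python
# The four-case prime-power encoding from the answer; distinct prime
# bases per quadrant plus unique factorization give injectivity.

def f(x, y):
    if x >= 0 and y >= 0:
        return 2**x * 3**y
    if x >= 0 and y < 0:
        return 5**x * 7**(-y)        # 7^{|y|}
    if x < 0 and y < 0:
        return 11**(-x) * 13**(-y)   # 11^{|x|} · 13^{|y|}
    return 17**(-x) * 19**y          # x < 0, y >= 0

values = [f(x, y) for x in range(-8, 9) for y in range(-8, 9)]
assert len(values) == len(set(values))   # no collisions: injective here
```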
How do we solve the equation?
Not much simpler, but you can also try something like $a:=\sqrt{x+2}$; then $\sqrt{x+7}=\sqrt{a^2+5}$, so the equation contains only one square root. Put that on one side and the rest on the other: $$(a^2-1)a+(a^2+4)\cdot\sqrt{a^2+5} = (a^2+1)(a^2+3)$$
Finding level curve for $\frac{xy}{x^2+y^2}$
If $a=0$, then $\frac{xy}{x^2+y^2}=a$ if and only if $(x=0\lor y=0)\land(x\ne 0\lor y\ne 0)$, therefore the level set is two lines minus their intersection point. If $a\ne 0$, then the equation becomes $\begin{cases}x^2-\frac1axy+y^2=0\\ (x,y)\ne (0,0)\end{cases}$, which requires some additional cases. If $1-4a^2\ge 0$ and $a\ne0$, then $$x^2-\frac1axy+y^2=\left(x-\frac{1-\sqrt{1-4a^2}}{2a}y\right)\left(x-\frac{1+\sqrt{1-4a^2}}{2a}y\right)$$ so that the level set is two lines minus their intersection (the origin) if $1-4a^2>0$, and one line minus the origin if $1-4a^2=0$. If $1-4a^2<0$, then $$x^2-\frac1axy+y^2=\left(x-\frac y{2a}\right)^2+\frac{4a^2-1}{4a^2}y^2$$ so that the level set ends up being the empty set (because the polynomial is $0$ only for $x=y=0$).
Does signed midpoint-convexity imply signed convexity?
A function $|g|$ is continuous and midpoint convex, so it is convex (see, for instance, [L, 5.1]). It implies the required inequality $$|g(\frac{x + y}{2}) |\le \left|\frac{g(x) + g(y)}{2}\right|$$ for every $x,y \in \mathbb R$ such that $g$ does not change its sign on $[x,y]$. The condition implies that $g$ is zero between any two of its zeroes, so it either does not change its sign (and then the required claim holds) or changes its sign only once. In the latter case the required claim can fail. For instance, let $g(x)=x|x|$ for each $x\in\Bbb R$. It is easy to check that $g$ satisfies the required condition, but for $x=-4/3$, $y=3$, and $\lambda=8/9$ we have $$\left |g\big(\lambda x + (1-\lambda)y\big)\right|=\frac {529}{729}>\frac {423}{729}=\left|\lambda g(x) + (1-\lambda)g(y) \right|.$$ References [L] Hojoo Lee, Topics in Inequalities - Theorems and Techniques, (February 25, 2006).
find out the point of reflection
Suppose the reflection of $A=(1, -5, 6)$ in the plane $$-2x+7y+9z = 4\tag 1$$ is $$A' = (1 - 4t, -5+14t, 6+18t)\tag 2.$$ Then the midpoint $$(A+A')/2 = (1 - 2t, -5+7t, 6+9t) $$ must lie on the plane $(1).$ Therefore $t$ must satisfy $$-2(1-2t)+7(-5+7t)+9(6+9t) = 4\to 17+134 t = 4,\ t=-13/134.$$ Now putting this value of $t$ in $(2)$ will give us the required point.
How to show that if $u$ is a partial isometry then $u = u u^\ast u$?
Let $\xi\in H$, and $\eta\in ker(u)$. Then $$\langle u^*uu^*\xi,\eta\rangle=\langle uu^*\xi,u\eta\rangle=0=\langle \xi,u\eta\rangle=\langle u^*\xi,\eta\rangle$$ Since $u$ is a partial isometry, then $u$ preserves inner product on $\ker(u)^\perp=\overline{Image(u^*)}$, so for $\eta\in\ker(u)^\perp$ we have $$\langle u^*uu^*\xi,\eta\rangle=\langle u(u^*\xi),u\eta\rangle=\langle u^*\xi,\eta\rangle$$ Now using the fact that $H=ker(u)\oplus ker(u)^\perp$, we conclude that $u^*uu^*=u^*$, which gives $$uu^*u=(u^*uu^*)^*=(u^*)^*=u$$
Geometric sequence, finding two variables
$a$ is the first term in the series. You have one equation; the second you need is the formula for the partial sum of a geometric series. Do you see the $2$'s and $3$'s in the equation you cited? If you factor $976$ you get $61\cdot 2^4$. The fact that $61=2^6-3$ should be of interest.
Circle passing through intersection points of two bigger circles
Let the equations of two intersecting circles be $C_1=0$ and $C_2=0$. Then the equation of family of circles passing through the intersection points can be given by $C_1 + tC_2 = 0$, $t\ne -1 $. It is easy to see that this equation satisfies the points that are common to both the circles.
Understanding the proof that $C[0,1]^* $ are bounded variation functions
A real function $g$ of bounded variation on $[0,1]$ can be decomposed into two monotone functions. If $V_{0}^{x}(g)$ is the variation of $g$ on $[0,x]$, then $$ |g(x)-g(0)| \le V_{0}^{x}(g). $$ This gives a decomposition of $g(x)-g(0)$ into the difference of monotone non-decreasing functions: $$ g(x)- g(0) = V_{0}^{x}(g) - \{V_{0}^{x}(g)-(g(x)-g(0))\} $$ Monotone functions always have left- and right-hand limits that are found using inf and sup. So $g$ has left- and right-hand limits. If $f$ is a continuous function on $[0,1]$ and $g$ is monotone, then $$ \int_{0}^{1}fdg = \int_{0}^{1}fd\tilde{g} $$ where $\tilde{g}$ is any other monotone function for which $$ g(x^-) \le \tilde{g}(x) \le g(x^+). $$ And that's because $g(x^-)=\tilde{g}(x^-)$, $g(x^+)=\tilde{g}(x^+)$ for all points in $(0,1]$ and $[0,1)$, respectively.
Existence and uniqueness of adjoints with respect to pairings
I'm going to denote adjoints by $*$ rather than $\dagger_g$, for notational simplicity. First answer: Yes, if $g$ is a perfect pairing, then adjoints always exist and are unique. Let's exploit the tensor-hom adjunction and let $g_V: W\to \newcommand\Hom{\operatorname{Hom}}\Hom(V,L)$ be the obvious map. Now for any $f$, we can consider the map $g_f : W\to \Hom(V,L)$ defined by $g_f(w) = g_V(w)\circ f$. Then in order for an adjoint to exist, we must be able to solve the equation $$ g_Vf^* = g_f.$$ Therefore if $g_V$ is an isomorphism (i.e. if $g$ is a perfect pairing), there is a unique $f^*$ satisfying the equation, $f^*=g_V^{-1}g_f$. Second answer: Let's generalize slightly. When can we solve $g_Vf^*=g_f$? Consider the following diagram $$\require{AMScd} \begin{CD} W @>g_f>> \Hom(V,L) \\ @Vf^*VV @| \\ W @>>g_V> \Hom(V,L) \end{CD} $$ Well, one answer to when we can find such an $f^*$ is if $g_V$ is surjective and $W$ is projective. In this case $f^*$ won't be unique. In fact, this is roughly the most general we can get, though. To generalize this slightly, observe that the image of $g_f$ had better be a subset of the image of $g_V$, otherwise there is no way we can solve it. However if the image of $g_f$ is a subset of the image of $g_V$, then we can replace $\Hom(V,L)$ with $\newcommand\im{\operatorname{im}}\im g_V$, so that $g_V$ is now surjective onto its image. Then as long as $W$ is projective, we can lift $g_f$ along $g_V$ to find $f^*$. My final version of the second answer: As long as $W$ is projective, and for every $w$ there exists $w'$ so that $g_V(w)\circ f = g_V(w')$, then there exists a (possibly not unique) "adjoint" $f^*$ solving $g_Vf^* = g_f$.
Proof for differentiable functions
Hint: Differentiate both sides to get $$f(x)\sin(x)=2f(x)f'(x)$$ If $f(x)\ne 0$, then $$f'(x)=\frac{\sin(x)}{2}$$ on a certain interval, so $$f(x)=\frac{-\cos(x)}{2}+C.$$ Plug it back in to get $C$.
Is there a proof that there is no general method to solve transcendental equations?
Intuition may be misleading here. In fact transcendental cases are often much easier than the integer Diophantine case. For example below is a table listing the known decidability results in various rings for Hilbert's tenth problem and the full first order theory, excerpted from Bjorn Poonen's interesting paper Hilbert's tenth problem over rings of number-theoretic interest
Finding $E(X(X-1))$ where $X$ is a Poisson random variable such that $P(X=2) = \frac{2}{3} P(X=1)$
First we find the parameter $\lambda$ of the Poisson. We have $\Pr(X=2)=e^{-\lambda}\frac{\lambda^2}{2!}$ and $\Pr(X=1)=e^{-\lambda}\frac{\lambda}{1!}$. From the given equation we conclude that $\frac{\lambda^2}{2}=\frac{2}{3}\lambda$, so now we know $\lambda$, since it cannot be $0$. The expectation of $X(X-1)$ is equal to $$\sum_{k=0}^\infty k(k-1)e^{-\lambda}\frac{\lambda^k}{k!}.$$ The first two terms are $0$, and for $k\ge 2$ we have $\frac{k(k-1)}{k!}=\frac{1}{(k-2)!}$, so our sum is $$\sum_{k=2}^\infty e^{-\lambda}\frac{\lambda^k}{(k-2)!}.$$ Replace $k-2$ by $n$. Our sum is $$\sum_{n=0}^\infty e^{-\lambda}\frac{\lambda^{n+2}}{n!}.$$ Bring out a $\lambda^2$. We end up with $$\lambda^2\sum_{n=0}^\infty e^{-\lambda}\frac{\lambda^{n}}{n!}.$$ But the sum after the $\lambda^2$ is $1$, for it is the sum of all the Poisson probabilities. We conclude that $E(X(X-1))=\lambda^2$. Remark: In a similar way, we could compute $E(X(X-1)(X-2))$, and other similar expressions. This sort of expectation is easier to get at than things like $E(X^2)$ and $E(X^3)$. In fact, finding the expectation of $X(X-1)$ is one of the standard paths for finding the variance of $X$.
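Numerically, the parameter works out to $\lambda = 4/3$ (from $\lambda^2/2 = \tfrac23\lambda$, $\lambda\ne 0$), and a truncated version of the series above does give $\lambda^2 = 16/9$:

```python
# Check the defining relation P(X=2) = (2/3) P(X=1) and the factorial
# moment E[X(X-1)] = λ² for the Poisson pmf, truncating the series.
import math

lam = 4 / 3
pmf = lambda k: math.exp(-lam) * lam**k / math.factorial(k)

assert abs(pmf(2) - (2 / 3) * pmf(1)) < 1e-12   # the given relation
E = sum(k * (k - 1) * pmf(k) for k in range(100))
print(E)  # ≈ 16/9 ≈ 1.777…, i.e. λ²
```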
Why topology on a set is defined the way it is?
The original definition of a topology on a set was given by Hausdorff in 1914 and involved what he called neighbourhoods. Basically he defined a topology on a set $X$ as a collection of subsets of $X$ for each point of $X$, where these subsets (or neighbourhoods) are required to satisfy certain axioms (e.g. each neighbourhood of $x\in X$ must contain $x$ itself; see here for a list of them). Actually his definition carved out a slightly more regular kind of topological space (nowadays called Hausdorff, go figure). Anyway, later other people (I guess Bourbaki was involved) realized that you could just as well use open sets to define a topological space. In this context, an open set is defined as a subset containing a neighbourhood for each of its points. Regarding your intuition about differentiating between different topological spaces, I don't know whether your attempt can be put on a solid footing (for instance, you seem to mix up the notion of open sets and of "segments"). You must also notice that apparently different topologies can actually describe the same space (the key word here is homeomorphic). So looking at the actual open sets is not necessarily a good way to distinguish among topological spaces. Usually topologists show that the interval space and the Y-space are not homeomorphic by showing that if you remove any single point from the interval you get a space with two pieces (called connected components), but if you remove the intersection point from the Y-space you get a three-components space. In general the business of distinguishing among non-homeomorphic topological spaces is hard and inspired people to come up with a lot of interesting ideas. Basically every topological notion you find in a topology textbook is a good candidate to distinguish between spaces: connectedness, compactness, separation properties, homotopy, homology... Good luck with your studies.
Solving $\ln(x)/\ln(y) > x/y$
hint: for $\ln{y} > 0$, that is $y>1$, consider the equivalent $$\frac{\ln(x)}{x} > \frac{\ln(y)}{y}$$ for $\ln{y} < 0$, that is $0<y<1$, consider the equivalent $$\frac{\ln(x)}{x} < \frac{\ln(y)}{y}$$ and study the function $$f(z)=\frac{\ln(z)}{z}$$ as $f(z)$ is strictly increasing for $0<z \leq e$ and strictly decreasing for $z \geq e$ the solutions are: when $y>1$ $$e \leq x <y$$ $$1 < y <x \leq e$$ when $0<y<1$ $$0 < x<y <1$$
Simplify sop expression using Boolean algebra
HINT This equivalence principle will be your friend: Adjacency $PQ + PQ' = P$ If you're not allowed to use Adjacency in $1$ step, here is a derivation of Adjacency in terms of more basic equivalence principles: $$P Q + (P Q') \overset{Distribution}=$$ $$P (Q + Q') \overset{Complement}=$$ $$P 1 \overset{Identity}=$$ $$P$$ To apply Adjacency, note that $P$ and $Q$ can be any complex expressions, so in this case, where every terms has $4$ variables, just look for two terms that are the same for $3$ of the variables, but differ in the fourth. For example, the first two terms are the same except for the $D$ variable, so those can be combined: $A'BC'D'+A'BC'D=A'BC'$ You can also combine the first and seventh terms: $A'BC'D'+ABC'D'=BC'D'$ To do both of those, you would need to 'reuse' the first term, but you can get as many copies as you want by: Idempotence $P + P = P$ So, for example, focusing on the first, second, and seventh term: $$A'BC'D'+A'BC'D+ABC'D'\overset{Idempotence}=$$ $$A'BC'D'+A'BC'D'+A'BC'D+ABC'D'\overset{Commutation}=$$ $$A'BC'D'+A'BC'D+A'BC'D'+ABC'D'\overset{Adjacency \ x \ 2}=$$ $$A'BC'+BC'D'$$
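If you want to double-check steps like these mechanically, a brute-force truth table works; the Boolean encoding in Python below is mine, not part of the exercise.

```python
# Verify Adjacency and the first sample merge over all assignments.
from itertools import product

BOOLS = [False, True]

# Adjacency: PQ + PQ' = P
assert all(((P and Q) or (P and not Q)) == P
           for P, Q in product(BOOLS, repeat=2))

# First two terms of the expression: A'BC'D' + A'BC'D = A'BC'
for A, B, C, D in product(BOOLS, repeat=4):
    merged = ((not A) and B and (not C) and (not D)) or \
             ((not A) and B and (not C) and D)
    assert merged == ((not A) and B and (not C))
```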
Splitting an integral
Because $\ln x-1\geqslant 0$ for $x\geqslant \mathrm{e}$ and $\ln x-1\leqslant 0$ for $x\leqslant \mathrm{e}$.
Find the solution of the pde $xu_y-yu_x=u$.
In polar coordinates: $$x=r\cos\phi \\ y=r\sin\phi \\ \frac{\partial u}{\partial x}=\cos \phi \frac{\partial u}{\partial r}-\frac{1}{r}\sin \phi \frac{\partial u}{\partial \phi}\\ \frac{\partial u}{\partial y}=\sin \phi \frac{\partial u}{\partial r}+\frac{1}{r}\cos \phi \frac{\partial u}{\partial \phi}\\ $$ $x\frac{\partial u}{\partial y}-y\frac{\partial u}{\partial x}=u$ becomes: $$\frac{\partial u}{\partial \phi}=u.$$ So: $$ u=f(r)e^\phi \\ =f(\sqrt{x^2+y^2})e^{\operatorname{atan2}(y,x)} $$
Evaluate $\lim\limits_{x\to 0^{+}}(\ln(x)-\ln(\sin x))$
$ \frac{x}{\sin x} \to 1$ as $x \to 0$, hence $ \ln (\frac{x}{\sin x}) \to \ln 1=0$ as $x \to 0.$
Calculate combinations of positioning alphabet in N available positions in order
Say you have $N$ slots and $M$ letters, with $N\ge M$. (In your example, $N=3$ and $M=2$.) Then once you choose the slots that will be filled by the $M$ letters, the positions of the letters themselves are determined (because they must be in order) and the positions of the $N-M$ blank spaces are completely determined (because they go in the remaining slots). So the answer is therefore $$\binom NM = \frac{N!}{M!(N-M)!}.$$ (If you are not familiar with the notation, leave a comment.)
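A quick enumeration confirms the count: choosing the $M$ occupied slots determines the whole string, so the number of arrangements matches $\binom NM$. The helper name `arrangements` is mine.

```python
# Enumerate all ways to place ordered letters into n slots, padding
# the unchosen slots with '_', and compare the count with C(n, m).
from itertools import combinations
from math import comb

def arrangements(letters, n):
    m = len(letters)
    out = []
    for slots in combinations(range(n), m):
        row = ['_'] * n
        for slot, ch in zip(slots, letters):  # letters keep their order
            row[slot] = ch
        out.append(''.join(row))
    return out

print(arrangements('ab', 3))                  # ['ab_', 'a_b', '_ab']
assert len(arrangements('abc', 7)) == comb(7, 3)  # 35
```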
Pure states on subalgebras of $\mathcal{B}(\mathcal{H})$ in finite dimensions.
Yes, since you are in finite dimension. Any $A\subset B(H)$ is of the form $$A=\bigoplus_{j=1}^m M_{k_j}(\mathbb C).$$ It is not hard to check that every state is of the form $\varphi:x\longmapsto \operatorname{Tr}(bx)$ for some $b\in A_+$ with $\operatorname{Tr}(b)=1$ (and, thus $\|b\|\leq1$). We have $b=\sum_{r=1}^s b_rq_r$, where $q_r$ are pairwise orthogonal rank-one projections. The above conditions on $b$ give $b_r\geq0$ for all $r$, and $\sum_r b_r=1$. If $s\geq2$, we can write $$ \varphi(x)=\sum_r b_r \operatorname{Tr}(q_rx) $$ and we get $\varphi $ written as a convex combination of the states $x\longmapsto \operatorname{Tr}(q_rx)$. So, if $\varphi$ is pure, we have $s=1$. That is, $b$ is a rank-one projection. In summary, the pure states of $A$ are precisely the maps $x\longmapsto \operatorname{Tr}(qx)$, where $q\in A$ is a rank-one projection. It would remain to prove that every rank-one projection gives rise to a pure state. You can see a proof here.
$0=\frac{13+13^2+13^3+\cdots}{1+2+3+\cdots}$ using infinite sums?
Your mistake is in going from $$1+\sum_{n=1}^\infty 13^n=\sum_{n=1}^\infty n$$ to $$1+\frac{13+13^2+13^3+\cdots}{1+2+3+\cdots}=1.$$ You divided the right side by $\sum_{n=1}^\infty n$, but on the left side you only divided the second term by $\sum_{n=1}^\infty n$, and you forgot to divide the $1$ by $\sum_{n=1}^\infty n$. If you do this step correctly, you get $$\frac1{\sum_{n=1}^\infty n}+\frac{\sum_{n=1}^\infty13^n}{\sum_{n=1}^\infty n}=1,$$ that is (correcting a big mistake in my original answer), $$-12+\frac{\sum_{n=1}^\infty13^n}{\sum_{n=1}^\infty n}=1.$$
The number of cyclic subgroups of order 15 in $\mathbb{Z}_{30} \oplus \mathbb{Z}_{20}$
Provided you correctly counted the elements of order$~15$, your answer is correct. You can indeed count cyclic subgroups by counting their generators (elements of order$~n$) and dividing by the number $\phi(n)$ of generators per cyclic subgroup, since every element of order$~n$ lies in exactly one cyclic subgroup of order$~n$ (the one that it generates). Here is how I would count the elements of order$~15$. By the Chinese remainder theorem one has $\newcommand\Z[1]{\Bbb Z_{#1}}\Z{30}\cong\Z2\oplus\Z3\oplus\Z5$ and $\Z{20}\cong\Z4\oplus\Z5$, so all in all we are dealing with the group $\Z2\oplus\Z4\oplus\Z3\oplus\Z5^2$. To have order $15$, an element must have a trivial (zero) component in $\Z2$ and $\Z4$, its component in $\Z3$ must be one of the $2$ generators, and its component in $\Z5^2$ may be any one of the $24$ nonzero elements. Indeed you get $2\times24=48$ elements of order$~15$.
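The count above can be verified directly in $\Z{30}\oplus\Z{20}$, without passing through the CRT decomposition: the order of $(a,b)$ is the lcm of the component orders. This is a quick sanity-check sketch, not part of the original argument.

```python
from math import gcd

def order_in_Zn(a, n):
    """Order of a in the additive cyclic group Z_n: n / gcd(a, n)."""
    return n // gcd(a, n)

def lcm(x, y):
    return x * y // gcd(x, y)

# The order of (a, b) in Z_30 ⊕ Z_20 is lcm(ord(a), ord(b)).
count = sum(1 for a in range(30) for b in range(20)
            if lcm(order_in_Zn(a, 30), order_in_Zn(b, 20)) == 15)

# Each cyclic subgroup of order 15 has phi(15) = 8 generators.
phi15 = sum(1 for k in range(1, 15) if gcd(k, 15) == 1)
print(count, count // phi15)  # 48 elements, 6 subgroups
```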
Codimension of linear subspace.
The dimension of $X$ is $1$, since its elements only depend on the single parameter $x$. The dimension of $M_{2\times 2}$ is, as you said, $4$. The codimension is therefore $4 - 1 = 3$.
Cartesian form of vectors calculate the angle
Given the equation of a plane $Ax + By + Cz = d$, the normal vector of the plane is $(A,B,C)$, i.e. the vector perpendicular to that plane. Using this you can obtain the two normal vectors, and then apply their scalar (dot) product: $$A_1A_2 + B_1B_2 + C_1C_2 = \|n_1\|\,\|n_2\|\cos\alpha,$$ where $n_1,n_2$ denote the normal vectors and $\|\cdot\|$ is the magnitude (norm) of the vector.
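The dot-product formula translates directly into code. A minimal sketch (the function name is my own, not from the question):

```python
import math

def angle_between_planes(n1, n2):
    """Angle (in degrees) between two planes given their normal vectors
    (A, B, C), via the dot product  n1·n2 = |n1| |n2| cos(alpha)."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm1 = math.sqrt(sum(a * a for a in n1))
    norm2 = math.sqrt(sum(a * a for a in n2))
    return math.degrees(math.acos(dot / (norm1 * norm2)))

# Example: the planes x = 0 and y = 0 meet at a right angle.
print(angle_between_planes((1, 0, 0), (0, 1, 0)))  # 90.0
```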
Smallest twin-prime-pair above $2\uparrow\uparrow 5\ $?
You didn't miss any. With $n = 2^{65536}$, $n+44061$ is a PRP, as is $n+44181$. All other numbers from $n$ to $n+44181$ are composite. There are no twin primes in the range $n$ to $n+10\ 000\ 000$. To the best of my knowledge, the best way to do this is not dissimilar to what you've said: pick a range and sieve it. Once into reasonable depths, the range length doesn't have a big performance impact, but the depth does. For my test with your $n$ and a length of 10M, my program chose a default depth of 5243M, leaving about 8300 candidates after ~3 minutes. It should have chosen to sieve deeper: ~600 of those candidates drop out after just 6 more minutes of sieving. From the sieved range, the candidates are those $m$ where $m$ and $m+2$ have both survived sieving. Run a fast compositeness test on them; at this size PFGW's Fermat test is the most efficient. That typically means running multiple programs through intermediate files. I use GMP throughout, which is more convenient, but not ideal performance-wise for these over-10k-digit numbers. I use a Miller-Rabin base-2 test, which is about the same speed as a Fermat test in GMP. Verify the results if both pass the first test. I use an extra strong (ES) Lucas test, which means the results are extra strong BPSW probable primes, since I used a base-2 MR test to start. If using PFGW, just run them through a BPSW test from one of the many packages that have one (PFGW includes a version, but you'd have to look up how to apply it). This is enough for most purposes, but you can run a few more tests if desired.
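A toy-scale sketch of the sieve-then-test pipeline described above, using a stdlib Miller-Rabin in place of PFGW/GMP. The function names and parameters are my own; at the question's actual scale ($2^{65536}$) you would need GMP-backed arithmetic and much deeper sieving.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin compositeness test (probabilistic; enough for a sketch)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def first_twin_pair_above(n, span=10**5, sieve_limit=1000):
    """Sieve [n, n+span] by small primes, then test surviving twin candidates."""
    small_primes = [p for p in range(2, sieve_limit)
                    if all(p % q for q in range(2, p))]
    survives = [True] * (span + 1)
    for p in small_primes:
        for i in range((-n) % p, span + 1, p):
            if n + i != p:
                survives[i] = False
    # Candidates are those m = n+i where m and m+2 both survived sieving.
    for i in range(span - 1):
        if survives[i] and survives[i + 2]:
            if is_probable_prime(n + i) and is_probable_prime(n + i + 2):
                return n + i, n + i + 2
    return None

print(first_twin_pair_above(1000))  # (1019, 1021)
```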
Existence of a normal subgroup in G
Hints: By Sylow, there exists one unique Sylow $\;11$-subgroup $\;N\;$, and thus $\;N\lhd G\;$. Let $\;P\;$ be any Sylow $\;3$-subgroup, so that $\;NP\lhd G\;$. There exists one unique group, up to isomorphism, of order $\;33\;$, which then has a unique (and normal, of course) subgroup of order $\;3\;$. Deduce the claim now. Further hint: If $\;P\lhd K\lhd H\;$ and $\;K\;$ is cyclic, then $\;P\lhd H\;$.
Induction for divisibility: $3\mid 12^n -7^n -4^n -1$
With $a_k=12^k-7^k-4^k-1$, $$a_{k+1}-7a_k=12^k(12-7)+4^k(7-4)+6\equiv0\pmod3,$$ since each of the three terms is divisible by $3$ (note $12\equiv0\pmod3$). $$\implies a_{k+1}\equiv7a_k\pmod3$$ So, as $\gcd(7,3)=1$, $3\mid a_{k+1}\iff3\mid a_k$. We can try with $a_{k+1}-4a_k$ as well.
Computing the number of irreducible polynomials in a field
$x^2+x+a$ is irreducible over $\mathbb{F}_p$ iff its discriminant $1-4a$ is a quadratic non-residue $\!\!\pmod{p}$. Since $4$ is invertible $\!\!\pmod{p}$ ($p$ is odd), $1-4a$ ranges over all of $\mathbb{F}_p$ as $a$ does, and there are $\color{blue}{\frac{p-1}{2}}$ quadratic non-residues, hence that many irreducible polynomials of this form.
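The count $\frac{p-1}{2}$ can be confirmed by brute force: a monic quadratic over $\mathbb{F}_p$ is irreducible iff it has no root. A small verification sketch:

```python
def count_irreducible(p):
    """Count a in F_p such that x^2 + x + a is irreducible,
    by checking that the polynomial has no root in F_p."""
    return sum(1 for a in range(p)
               if all((x * x + x + a) % p != 0 for x in range(p)))

# Matches (p - 1) / 2 for odd primes, as the discriminant criterion predicts.
for p in (3, 5, 7, 11, 13):
    assert count_irreducible(p) == (p - 1) // 2

print(count_irreducible(7))  # 3
```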
Finding the mean and variance of an exponential probability distribution
The expectation of $Y^k$ is $$\int_0^\infty \frac{y^k}{10} e^{-y/10}\,dy.$$ We need $E(Y^k)$ for $k=2$ and (for the variance) for $k=3$ and $k=4$. We show in detail how to deal with the case $k=2$, using integration by parts. Let $u=y^2$ and $dv=\frac{1}{10}e^{-y/10}\,dy$. Then $du=2y\,dy$ and we can take $v=-e^{-y/10}$. So our integral is $$\left.-y^2e^{-y/10}\right|_0^{\infty}+\int_0^\infty 2ye^{-y/10}\,dy.$$ The first term is $0$. For the remaining integral, we could integrate by parts again. But it is easier to note that we know that $\int_0^\infty \frac{y}{10}e^{-y/10}\,dy=10$ (the mean), so the remaining integral is $2(10^2)$. The other expectations are handled similarly. For $E(Y^3)$, after one integration by parts step you will be able to reuse the fact just established that $E(Y^2)=2(10^2)$. Remark: An easier way is to look up the raw moments of the exponential distribution: $E(Y^k)=k!\,\theta^k$, which here gives $E(Y^k)=k!\cdot 10^k$. This information is available in the Wikipedia article on the exponential distribution, and elsewhere.
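The closed form $E(Y^k)=k!\,\theta^k$ can be sanity-checked against the defining integral numerically. A simple midpoint-rule sketch (the function name, step, and cutoff are my own choices; the tail beyond the cutoff is negligible):

```python
import math

def exp_moment(k, theta=10.0, step=0.005, upper=400.0):
    """Numerically integrate E(Y^k) = ∫_0^∞ y^k (1/theta) e^(-y/theta) dy
    with a midpoint rule; e^(-upper/theta) makes the tail negligible."""
    total, y = 0.0, step / 2
    while y < upper:
        total += (y ** k) * math.exp(-y / theta) / theta * step
        y += step
    return total

# Consistent with the repeated integration by parts: E(Y^k) = k! * theta^k.
for k in (1, 2, 3, 4):
    exact = math.factorial(k) * 10.0 ** k
    assert abs(exp_moment(k) - exact) / exact < 1e-3

print(exp_moment(2))  # ≈ 200 = 2 * 10^2
```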
Sequence of Radon Measures $\mu_n$ on $\mathbb{R}$
Take $f_n=n^{3/2}\chi_{[0,1/n]}-n^{3/2}\chi_{[1/n,2/n]}$ and $\mu_n(A)=\int_A f_n(x)\,dx$. Then clearly $|\mu_n|([0,1])=2n^{1/2}\to\infty$. We need to show that $\int \phi\, d\mu_n\to 0$ for all $\phi\in C_c^1(\mathbb{R})$. Now fix $\phi\in C_c^1(\mathbb{R})$. Since $\phi'$ is continuous and compactly supported, it is bounded. This and the Mean Value Theorem imply that there exists a constant $C>0$ (depending only on $\phi$) such that for all $x,y\in\mathbb{R}$, $|\phi(x)-\phi(y)|\leq C|x-y|$. Now we calculate $$\int\phi\, d\mu_n=n^{3/2}\left(\int_0^{1/n}\phi(t)\,dt-\int_{1/n}^{2/n}\phi(t)\,dt\right).$$ Let's look at the first term: $$\int_0^{1/n}\phi(t)\,dt=\int_0^{1/n}\big(\phi(t)-\phi(1/n)\big)\,dt+\frac{\phi(1/n)}{n},$$ and to deal with the first term on the RHS, $$\left|\int_0^{1/n}\big(\phi(t)-\phi(1/n)\big)\,dt\right|\leq\int_0^{1/n}C|t-1/n|\,dt\leq\int_0^{1/n}\frac{2C}{n}\,dt=\frac{2C}{n^2}.$$ In simpler terms, $$\int_0^{1/n}\phi(t)\,dt=\frac{\phi(1/n)}{n}\pm\frac{2C}{n^2}.$$ The same argument shows that $$\int_{1/n}^{2/n}\phi(t)\,dt=\frac{\phi(1/n)}{n}\pm\frac{2C}{n^2}.$$ Therefore, \begin{align*} \left|\int\phi\, d\mu_n\right|&=n^{3/2}\left|\int_0^{1/n}\phi(t)\,dt-\int_{1/n}^{2/n}\phi(t)\,dt\right|\leq n^{3/2}\frac{4C}{n^2}=\frac{4C}{n^{1/2}}, \end{align*} which goes to $0$.
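A numerical illustration of the two competing quantities: the pairing $\int\phi\,d\mu_n$ shrinks like $4C/n^{1/2}$ while the total variation $2n^{1/2}$ blows up. The test function $\sin t$ below is not compactly supported, but it is smooth on the relevant interval $[0,2/n]$, which is all the estimate uses; the function names are my own.

```python
import math

def pairing(n, phi):
    """Compute ∫ phi dμ_n = n^(3/2) (∫_0^{1/n} phi - ∫_{1/n}^{2/n} phi)
    by a midpoint rule on each subinterval."""
    step = 1.0 / (n * 10000)
    def integral(a, b):
        m, total = int((b - a) / step), 0.0
        for i in range(m):
            total += phi(a + (i + 0.5) * step) * step
        return total
    return n ** 1.5 * (integral(0, 1 / n) - integral(1 / n, 2 / n))

phi = lambda t: math.sin(t)       # smooth stand-in for a C_c^1 function
tv = lambda n: 2 * math.sqrt(n)   # total variation |μ_n|([0,1]) = 2 n^(1/2)

# The pairing decays (roughly like n^(-1/2)) while |μ_n| grows.
print(abs(pairing(10, phi)), tv(10))
print(abs(pairing(1000, phi)), tv(1000))
```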
Checking independence between sigma algebra and a random variable
With your definitions $\eta =0$, so there is nothing to prove. If, however, $\mathcal F$ is not the original sigma algebra on $\Omega$ (but some sub-sigma-algebra of it), then this result is not true. Let $Y \sim N(0,1)$ and let $\mathcal F$ be the sigma algebra generated by $Y^{2}$. Then $\eta =Y$ and $Y$ is not independent of $Y^{2}$.
Law of large numbers and converging functions of iid random variables
My attempt at some theorems: Theorem 1: $\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\mathbb{E}[f_i(X_i)] = \mathbb{E}[f(X)]$. Proof: We know that the negative part of the function, i.e. $\min\{0,f_i\}$, is Lipschitz continuous; otherwise the epigraph of $f_i$ would not be convex. Using this and the fact that $f_i \leq f$, $$\left|\frac{1}{n}\sum_{i=1}^n f_i(X)\right| \leq \sup_{i=1,\dots,n}|f_i(0)| + C\|X\| + |f(X)|, $$ where $C = \sup\{C_1,C_2,\dots\}$ and $C_i$ is a Lipschitz constant for $\min\{0,f_i\}$. Note that $C < \infty$ since $\min\{0,f_i\}$ converges pointwise to $\min\{0, f\}$, which is Lipschitz continuous. Now, using the dominated convergence theorem, we have $$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n\mathbb{E}[f_i(X)] = \mathbb{E}[f(X)].$$ It is clear that $\mathbb{E}[f_i(X_i)] = \mathbb{E}[f_i(X)]$, and substituting this into the above limit gives the desired result. QED. Theorem 2: $\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^n f_i(X_i) = \mathbb{E}[f(X)]$ with probability one. Proof: Define the sequence $h_i(X_i) = f(X_i) - f_i(X_i)$. Then $\lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n \big(f_i(X_i) + h_i(X_i)\big) = \mathbb{E}[f(X)]$ with probability one. To see this, note that $f_i(X_i) + h_i(X_i)$ is an iid copy of $f(X)$, so the strong law of large numbers gives the result. Now we want to show $\lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n h_i(X_i) = 0$ with probability one. Observe that $\lim_{i\to\infty} h_i(X_i) = 0$ almost surely because $f_i$ converges pointwise to $f$. Therefore, $\lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n f_i(X_i) = \mathbb{E}[f(X)]$ with probability one. QED.
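A small simulation of Theorem 2 for a concrete (hypothetical, illustrative) convex sequence: $X_i$ iid Uniform$(0,1)$ and $f_i(x)=x^2+1/i$ converging pointwise to $f(x)=x^2$, with $\mathbb{E}[f(X)]=1/3$.

```python
import random

random.seed(0)

# Running average (1/n) Σ f_i(X_i) with f_i(x) = x^2 + 1/i → f(x) = x^2.
# By Theorem 2 it should approach E[f(X)] = E[X^2] = 1/3 for X ~ U(0,1);
# the 1/i perturbations average out since (1/n) Σ 1/i → 0.
n = 200_000
total = 0.0
for i in range(1, n + 1):
    x = random.random()
    total += x * x + 1.0 / i

average = total / n
print(average)  # close to 1/3
```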