Field Extension problem beyond $\mathbb C$
The field of meromorphic functions on $\mathbb{C}$ is huge, so I don't expect that this question has a reasonable general answer. One might ask instead about the fields between $\mathbb{C}$ and the meromorphic functions on the Riemann sphere; these are just the rational functions $\mathbb{C}(x)$.

Since $\mathbb{C}$ is already algebraically closed, any nontrivial field between $\mathbb{C}$ and $\mathbb{C}(x)$ necessarily has transcendence degree $1$. Such a field $F$ necessarily lies between $\mathbb{C}(f)$ and $\mathbb{C}(x)$ for some $f \in \mathbb{C}(x)$. $\mathbb{C}(x)$ is always a finite extension of $\mathbb{C}(f)$ (exercise), so the inclusion $F \to \mathbb{C}(x)$ corresponds in the standard way to a branched cover of compact Riemann surfaces (equivalently, smooth projective algebraic curves over $\mathbb{C}$) $$\mathbb{CP}^1 \to S$$ where $S$ is the Riemann surface with function field $F$. By Riemann-Hurwitz, this can only occur if $S \cong \mathbb{CP}^1$, hence we can choose $f$ so that $F \cong \mathbb{C}(f)$. Thus all nontrivial subfields of $\mathbb{C}(x)$ containing $\mathbb{C}$ are of the form $\mathbb{C}(f)$ for some rational function $f$.

On the other hand, $\text{Aut}_{\mathbb{C}}(\mathbb{C}(x))$ is an interesting group; explicitly it consists of all Möbius transformations $z \mapsto \frac{az + b}{cz + d}, ad - bc \neq 0$, and abstractly it is $\text{PGL}_2(\mathbb{C}) \cong \text{PSL}_2(\mathbb{C})$, the projective special linear group in $2$ dimensions over $\mathbb{C}$. The fixed field of any subgroup of $\text{PGL}_2(\mathbb{C})$ is therefore a subfield of $\mathbb{C}(x)$. Special among these are the finite subgroups. By an averaging argument each of these is conjugate into the projective special unitary group $\text{PSU}_2$, which is well-known to be isomorphic to $\text{SO}(3)$, and the finite subgroups of this are (more or less) the groups of symmetries of the Platonic solids. The study of the finite subgroups of any of the above related groups is quite fascinating; one entry into further study is the various answers in this MO thread.
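To make the Riemann-Hurwitz step explicit: if the cover has degree $n$, then $$2g(\mathbb{CP}^1) - 2 \;=\; n\,\bigl(2g(S)-2\bigr) + \sum_p (e_p - 1),$$ and since the left-hand side equals $-2$ while the ramification sum is non-negative, we need $2g(S)-2<0$, i.e. $g(S)=0$ and $S \cong \mathbb{CP}^1$.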
A differential form generating the zero current is zero
Of course, if $\omega$ were not zero, then it would have non-empty support, with non-empty interior (because it is smooth). Say on some open subset $A$ of the support of $\omega$, inside a coordinate patch $(z_1,\ldots,z_d)$, we have $$ \omega = f(z)\, dz_1\wedge\ldots \wedge dz_p \wedge d\bar{z}_1\wedge\ldots \wedge d\bar{z}_q + \ldots$$ Suppose that $f(z) > 0$ on $A$; then take $$\eta = g(z)\, dz_{p+1}\wedge\ldots \wedge dz_d \wedge d\bar{z}_{q+1}\wedge\ldots \wedge d\bar{z}_d$$ with $\text{supp}(g) \subseteq A$ and $g(z) > 0$ in the interior of the support. The integral is then clearly nonzero. EDIT: As Ted Shifrin points out, this only works for real coefficients; in general take $g(z) = \overline{f(z)}$ as he suggests.
What is the relationship regarding a conditional worded as "not a necessary condition"
"Being not odd is not a necessary condition for an integer to be not prime." This is the negation of: "Being not odd is a necessary condition for an integer to be not prime.", so let's first think about this positive statement. Since in general, "$P$ is a necessary condition for $Q$ translates as $Q \to P$, the positive statement is of the form: $\neg Prime(x) \to \neg Odd(x)$ though that should of course be universally quantified: "For every number $x$: not being odd is a necessary condition for it to be not prime", and so we get: $\forall x (\neg Prime(x) \to \neg Odd(x))$ OK, so that's the positive statement. But the original statement is the negation of that, so that works out to: $\neg \forall x (\neg Prime(x) \to \neg Odd(x))$ which is equivalent to: $\exists x (\neg Prime(x) \land Odd(x))$ And that all makes sense: To say that "Being not odd is not a necessary condition for an integer to be not prime." is to say that "Being even is not necessary to not be a prime". And that is a true statement, because there are all kinds of odd numbers that are not prime ... which is the very translation of $\exists x (\neg Prime(x) \land Odd(x))$
Isosceles triangle has the least perimeter among triangles on the same base with same area?
Draw line segment $AB$, representing the base. Since the area is fixed, this means that the height $h$ is fixed. Draw a line $L$ parallel to $AB$ that is $h$ units above $AB$. We seek a point $C$ on $L$ that minimizes the distance $AC + CB$. Now imagine travelling from $A$ to $C$, but then instead of turning back around to $B$, we take the mirrored path (reflected in the line $L$) and arrive at a point $B'$, which is $h$ units above the line $L$ (and thus $2h$ units above $B$). Notice that $CB = CB'$. So it remains to find $C$ such that $AC + CB'$ is minimized. But the shortest distance between any two points is a straight line! Notice by construction that $L$ bisects the straight segment from $A$ to $B'$, at the point directly above the midpoint of $AB$. We conclude that taking $C$ to be that point (so that the triangle is isosceles) will minimize the perimeter.
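A minimal numeric sanity check of this (the base $A=(0,0)$, $B=(4,0)$ and height $h=2$ are arbitrary choices):

import numpy as np

A, B, h = (0.0, 0.0), (4.0, 0.0), 2.0
xs = np.linspace(-10.0, 14.0, 200001)                     # candidate x-coordinates of C on the line L: y = h
perims = np.hypot(xs - A[0], h) + np.hypot(xs - B[0], h)  # AC + CB (the base AB is fixed)
print(xs[np.argmin(perims)])                              # ~ 2.0, i.e. C directly above the midpoint of AB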
How would I prove the following problem on discrete structures?
Rename: $m\to a$, $n\to b$ and $N\to n$. Proof: Since $a|n$ we can write $n=ak$. Now since $b|ak$ and $\gcd(a,b)=1$, we have by Euclid's lemma $b|k$, so $k=bl$. Thus $n=abl$ and so $ab|n$. Conversely, say $ab|n$; since $a|ab$ we have by transitivity $a|n$, and the same holds for $b$.
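A quick brute-force check of the equivalence; note that it genuinely needs $\gcd(a,b)=1$ (for instance $2\mid 2$ and $2\mid 2$ but $4\nmid 2$):

from math import gcd

for a in range(1, 15):
    for b in range(1, 15):
        if gcd(a, b) != 1:
            continue
        for n in range(1, 400):
            assert (n % a == 0 and n % b == 0) == (n % (a * b) == 0)
print("a|n and b|n <=> ab|n holds in all tested coprime cases")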
Cellular homology of suspension
There is a cell structure on a product of cell complexes (see the appendix of Hatcher), with cells given by pairs of cells of the factors. This way in particular you get a cell structure on $X\times I$. If you use the standard cell structure on $I$ then the map $X\times I \to SX$ is the quotient by a subcomplex, which also has a natural cell structure. The differentials behave as you expect, so you should be able to work out the cellular chain complex and its homology. The idea is something like "$d(e\times f) = de\times f \pm e\times df$, but $df=0$ because of the quotient-by-a-subcomplex step."
Third derivative of $y=at^2+2bt+c$ and $t=ax^2+bx+c$
Hint: Replace $t$ by $ax^2+bx+c$ in $at^2+2bt+c$; you will obtain $y(x)$. Then calculate the third derivative of the function $y(x)$.
How do I transform this equation
Assuming $\,x>0\;$ (otherwise you cannot do what you want): $$\frac1x\sqrt{x^2+x}=\sqrt{\frac1{x^2}}\cdot\sqrt{x^2+x}=\sqrt{\frac1{x^2}(x^2+x)}=$$ $$=\sqrt{\frac{x^2}{x^2}+\frac x{x^2}}=\sqrt{1+\frac1x}$$ Remember that $\,\sqrt{x^2}=|x|\;$, so $\,x>0\implies x=\sqrt{x^2}\;$ ...
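If you want to double-check the algebra by computer, it is enough to compare the squares, since both sides are non-negative for $x>0$ (a small sympy sketch):

import sympy as sp

x = sp.symbols('x', positive=True)
lhs = sp.sqrt(x**2 + x) / x
rhs = sp.sqrt(1 + 1/x)
print(sp.simplify(lhs**2 - rhs**2))   # 0, so lhs = rhs for x > 0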
How to Show that (S V R) logically follows from the following knowledge base.
The statement to be proved is $S \vee R$. $\neg (\neg Q) \land Z \implies Q \land Z \implies Q$ holds, $Z$ holds. $Q \rightarrow S \land P \implies S$ holds, $P$ holds. Since we already know $P$ and $Q$ hold, $R$ also follows from the last statement, and in particular $S \vee R$ holds. However, there is a contradiction between the third and fifth premises, so this conclusion may not be useful.
Pointwise Supremum of Quasi-convex Function
Typically you can solve such problems by introducing an indicator function $I_{C_x}(y)$ (for your example it takes the value $0$ if $y\leq x$, $\infty$ otherwise): $$f(x) = \sup_{y \in C} \left\{ w(y)(g(x,y)-I_{C_x}(y)) \right\}$$ The condition is now that $g(x,y)-I_{C_x}(y)$ is quasiconvex in $x$ for each $y$, which is not true, so we cannot use this trick. Going back to page 102, it was concluded that $f$ is quasiconvex, because $f(x)\leq \alpha$ iff $$w(y)g(x,y) \leq \alpha \quad \forall y \in C.$$ In other words, the sublevel set of $f$ is convex, since it is the intersection of convex sets. If you replace $C$ with $C_x$, this argument no longer holds, because $C$ depends on $x$. In other words, the sublevel set of $f$ cannot easily be expressed as the intersection of convex sets.
Finding closed form expression for the roots of $f(x) = \sum_{i=1}^K \frac{\alpha_i \gamma_i \sin(x-\theta_i)}{1+\gamma_i[1+\cos(x-\theta_i) ]}$
Except in the case $K=1$, I do not see a way to obtain a closed form solution. As for the number of solutions, observe that (here I assume that $\gamma_i>-1$, so that the denominators never vanish) $$ f(x)=-\frac{d}{dx}\,\sum_{i=1}^K\alpha_i\log\bigl(1+\gamma_i(1+\cos(x-\theta_i))\bigr). $$ The roots of $f$ are precisely the critical points of $\sum_{i=1}^K\alpha_i\log\bigl(1+\gamma_i(1+\cos(x-\theta_i))\bigr)$. Since this is a smooth function of period $2\pi$, it attains its maximum and minimum on any interval of length $2\pi$, so that $f$ has at least two roots on every interval of length $2\pi$.
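A numeric spot-check of the "at least two roots per period" conclusion, with arbitrary sample parameters for $K=2$ chosen only to satisfy $\gamma_i>-1$:

import numpy as np

alpha, gamma, theta = [1.0, 2.5], [0.5, 3.0], [0.3, 1.7]   # arbitrary test values, gamma_i > -1

def f(x):
    return sum(a * g * np.sin(x - t) / (1.0 + g * (1.0 + np.cos(x - t)))
               for a, g, t in zip(alpha, gamma, theta))

xs = np.linspace(0.0, 2.0 * np.pi, 200001)
vals = f(xs)
print(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))     # number of sign changes: at least 2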
Dimension of a vector space when sum and multiplication changes
If I understand your question correctly, then the answer is yes, as long as the dimension is nonzero and at most $\mathfrak c$ (the cardinality of the continuum $= \lvert {\bf R}\rvert=\lvert {\bf C}\rvert$). This is because if $1\leq \dim V\leq \mathfrak c$, then $\lvert V\rvert=\mathfrak c$, so you can take any $W$ of dimension between $1$ and $\mathfrak c$ (distinct from $\dim V$) and fix a bijection $\varphi\colon V\to W$ and transport the linear structure from $W$ to $V$ using $\varphi$, and then $V$ with this structure will be obviously isomorphic to $W$ and hence have dimension $\dim W$. If the dimension is $0$ then it obviously can't be done; if it's greater than $\mathfrak c$, then $\dim V=\lvert V\rvert$, so it can't be done, either.
Use Rank Nullity Theorem
By the rank-nullity theorem, $\dim\ker A\geqslant1$. But $\ker A\subset\ker(BA)$. Therefore, $\dim\ker(BA)\geqslant1$.
Is $G=\{A\in M_2(\mathbb{R}): A^2=I_2\}$ a group?
Hint: Among the solutions of $A^2=I$ are the reflections about any line through the origin. What is the product of two reflections?
# of seating arrangement in a 6 seat car
Your first way of counting is perfectly good. If you want to count another way, let us invent $2$ identical ghosts. The seats for them can be chosen in $\binom{5}{2}$ ways, since ghosts aren't allowed to drive. (There is a problem with taking their picture for the licence.) The rest of the seats can be filled in $4!$ ways.
Bound on constant in Polynomial so that zeros are bounded (Rouche)
Using $z=ρw$, the normalized equation for $w$ is $$ w^{10}+a(ρ^{-1}w^9+...+ρ^{-9}w+ρ^{-10})=0. $$ The Lagrange bound for the size of the roots is $$ R=\max\bigl(1,|a|(ρ^{-1}+...+ρ^{-9}+ρ^{-10})\bigr). $$ As we want $|z|\le ρ$, we need $R\le 1$, thus $$ |a|\le \frac{1}{ρ^{-1}+...+ρ^{-9}+ρ^{-10}}=\frac{ρ^{10}}{1+ρ+...+ρ^9}. $$
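A quick numeric check of the bound ($ρ=0.8$ is an arbitrary test value; $a$ is taken exactly at the bound):

import numpy as np

rho, n = 0.8, 10
a = rho**n / sum(rho**k for k in range(n))    # the bound rho^10 / (1 + rho + ... + rho^9)
coeffs = [1.0] + [a] * n                      # z^10 + a z^9 + ... + a z + a
print(max(abs(np.roots(coeffs))), rho)        # largest root modulus stays within rho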
How can I find the second constant to solve this PDE?
Use the last equation and the first one: $$\frac x 3 dx= \frac yu du$$ Use the first constant of integration, $$\frac x3 +\frac 1y=K \implies y=-\frac 1{(x/3-K)},$$ so that $$\frac x 3 dx=\frac {du}{u(K-x/3)}$$ $$\frac x 3\left(K-\frac x 3\right) dx=\frac {du}{u}$$ After integration, $$K\frac {x^2}6-\frac {x^3}{27}+C_2=\ln |u|$$ Substitute $K=\frac x3+\frac 1y$ and you get the final answer you posted: $$\frac {x^3}{54}+\frac {x^2}{6y}+C_2=\ln |u|$$
Comparing Asymptotic growth of function using logarithms .
My approach would be to assume the inequality $n\log n<n^{\frac32}$ (for arbitrarily large $n$), then manipulate it into something more apparent to verify or disprove it. $$n\log n<n^{\frac32}\\\to\log n<n^{\frac12}\\\to n<e^{n^\frac12}.$$ It is obvious that this inequality is a true statement for large $n$, thus $n\log n$ is eventually less than $n^\frac32$. The error you seem to make in simplifying $F_1$ and $F_2$ is not realizing that you can alter the inequality between them algebraically, as a whole, to get simpler expressions. You instead try to alter $F_1$ and $F_2$ each alone into more manageable expressions, which, in this case, won't work.
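The same comparison is easy to see numerically:

import numpy as np

for n in [10, 10**2, 10**4, 10**8]:
    print(n, n * np.log(n), n**1.5)   # n log n falls ever further behind n^(3/2)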
Prove $(A/\mathfrak{a})\otimes_A F\simeq F/\mathfrak{a}F$
This is essentially $(A/\mathfrak{a}) \otimes_A M \simeq M / \mathfrak{a}M$, obtained by tensoring the canonical exact sequence. For right $A$-modules write $$\mathfrak a \xrightarrow{\iota} A \xrightarrow{\pi} A/\mathfrak a \to 0$$ and for left $A$-modules $$0\to F \xrightarrow{1_F} F\to 0$$ Then your theorem gives a surjective map $$\pi\otimes 1_F\colon A\otimes_A F\to (A/\mathfrak a)\otimes_A F$$ whose kernel is $\mathrm{im}( \iota\otimes 1_F )$. (Why?)
Establishing a few properties of the lower and upper Lebesgue integral
Hint for (I): by definition of the lower integral, you can find a simple function $h \le f$ whose integral is close to the lower integral of $f$. Simple functions are bounded, hence for all sufficiently large $n$, you have $h \le \min(f,n)$ as well. So you should be able to conclude something about $\liminf_{n \to \infty} \underline{\int} \min(f, n)$. Hint for (II): The connection is that Lebesgue measure appears in the definition of the simple integral. As before, choose a simple $h$ whose integral is close to that of $f$. How does the vertical truncation of $h$ compare to that of $f$? Now note that the vertical truncation of $h$ is also a simple function, and you should be able to say something about what happens to its simple integral when $n \to \infty$.
Galois Group of $x^4 - x^2 - 3$
Considering $f(x) = x^4 - x^2 - 3$, we must find its roots. If you take $y = x^2$, it comes out that $$0=y^2 - y - 3 = \left(y - \frac{1}{2}\right)^2 - \frac{1}{4} - 3 = \left(y - \frac{1}{2}\right)^2 - \frac{13}{4} \implies y = \frac{1}{2} \pm \frac{\sqrt {13}}{2} $$ and from $x^2 = y$ you find all $4$ roots of $f$.
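If you want the four roots written out, sympy reproduces exactly this computation:

import sympy as sp

x = sp.symbols('x')
print(sp.solve(x**4 - x**2 - 3, x))
# two real roots +-sqrt(1/2 + sqrt(13)/2) and two imaginary ones +-i*sqrt(sqrt(13)/2 - 1/2)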
Pole and residue of $f(z) = \frac{1}{1+z^n}$
It's probably easier to simplify it in more generality. Let $\zeta$ be a zero of $h(z) = 1+z^n$. Since all zeros of $h$ are simple, we have $$\operatorname{Res} \left(\zeta; \frac{1}{1+z^n}\right) = \frac{1}{h'(\zeta)} = \frac{1}{n\zeta^{n-1}} = \frac{\zeta}{n\zeta^n} = -\frac{\zeta}{n},$$ using $\zeta^n = -1$ in the last step.
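A quick numeric sanity check of the formula, for instance with $n=4$ and $\zeta=e^{i\pi/4}$:

import numpy as np

n = 4
zeta = np.exp(1j * np.pi / n)               # a simple zero of 1 + z^n
z = zeta + 1e-6
print((z - zeta) / (1 + z**n), -zeta / n)   # both ~ the residue at zeta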
Integrate a percentage equation
The correct equation is $\frac{dx}{dt}=xy$, where $y$ is the interest rate. Otherwise, your original equation would make an instantaneous jump of the price possible at each time instant, which contradicts the definition of a differentiable function. To solve it one might use separation of variables: $$\frac{dx}{x}=y\,dt$$ and the solution is $$x(t)=x(0)e^{yt}$$ where $y$ is the continuously compounded interest rate.
How to get the value of $a + b + c$?
Hint: $$a+b+c+ab+ac+bc+abc=(1+a)(1+b)(1+c)-1$$ Therefore $(1+a)(1+b)(1+c)=?$
Is a weakly contractible connected metric space, uniquely geodesic?
The topological property of being weakly contractible does not say much about the metric. The real line with the metric $d(x,y)=|x-y|^{1/2}$ is not geodesic; in fact it has no paths of finite length. Pac-Man shape with the restriction metric from $\mathbb R^2$ gives another example: rectifiably connected, but not geodesic. Yet another example: a non-strictly convex normed space such as $\ell_1$ or $\ell_\infty$. These are geodesic, but not uniquely geodesic. A sufficient condition for being uniquely geodesic is the triangle comparison property dubbed $\mathrm{CAT}(0)$.
how to prove the following propositional formula using semantic equivalence
Using double negation, from: $(¬¬p \lor q ) \land (¬q \lor ¬p)$ to : $(p \lor ¬¬q ) \land (¬q \lor ¬p)$ and by implication again: $(¬q \to p) \land (p \to ¬q)$.
How to solve Dirac Delta function having 2 centres?
As already pointed out, you should take a look at the properties of the Dirac delta for the general case, namely $\delta(g(x))=\sum_i \delta(x-x_i)/|g'(x_i)|$ over the simple roots $x_i$ of $g$. Nonetheless in this case your intuition gives the correct answer: since $g(x)=x^2-3x+2$ has $|g'(1)|=|g'(2)|=1$, you can regard $\delta(x^2-3x+2)$ as $\delta(x-1)+\delta(x-2)$, where $1$ and $2$ are the roots of the polynomial $x^2-3x+2$. The result is then simply achieved, and the original integral becomes: $$\int_{-\infty}^{+\infty}(x^2+1)\delta(x-2)\,dx + \int_{-\infty}^{+\infty}(x^2+1)\delta(x-1)\,dx = (2^2+1)+(1^2+1) = 7$$
Axiom of choice in HoTT without sethood requirement
If I recall correctly, the intent of the book's phrasing was not to imply anything about whether it is actually strictly stronger. I certainly haven't ever seen a proof that 3.8.3 implies the stronger statement where $Y$ isn't a set; but I don't think I've ever seen a proof that it doesn't either. Which means that I guess it is an open problem. I suspect that one of the models in 1508.02410 could be found that would satisfy 3.8.3 but not the stronger version, but I haven't checked.
Expression for $n$-th moment
First, use Tonelli's theorem to conclude that (write the probability as an integral and interchange the two integrals) $$ E[X^n]=\int_0^\infty P\left(X^n> z\right)\, \mathrm{d} z. $$ Now write $P\left(X^n> z\right)=P\big(X> z^{1/n}\big)$ and use change of variables with $t=z^{1/n}$.
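Explicitly, the substitution $t=z^{1/n}$ (so $z=t^n$ and $\mathrm dz = n\,t^{n-1}\,\mathrm dt$) turns this into the familiar formula $$ E[X^n]=\int_0^\infty n\, t^{n-1}\, P\left(X> t\right)\, \mathrm{d} t. $$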
Let $f:\mathbb{R}^{2}\to \mathbb{R}^{2}$ be a continuous function and $g(x)=\int_0^1 \! f(x,y) \, \mathrm{d}y.$ Prove that $g$ is continuous.
Hint: $f(x,y)$ is uniformly continuous on any $[a,b]\times [0,1].$
Number of vertices with degree more than $\sqrt{|E|}$
This isn't true, I'm afraid. Let $K_n=(V,E)$ be the complete graph with $\lvert V\rvert=n$, for $n$ large enough. Then $\lvert E\rvert=\frac{n(n-1)}{2}$, hence $\sqrt{\lvert E\rvert}<\frac{n}{\sqrt{2}}$. But the degree of any vertex is $n-1>\frac{n}{\sqrt{2}}$, and there are $n>\frac{n}{\sqrt{2}}$ vertices.
Show that S is a subspace of ${R}^{2\times2}$
Yes, that's exactly what you must do. 1. The subspace is obviously not empty, e.g. $$ \begin{pmatrix} 2 & x \\ -1 & y \\ \end{pmatrix} $$ for any $x,y$. 2. and 3. I've already given you a hint for these two. Your matrices are characterised through the orthogonality of the first column to $$ v= \begin{pmatrix} 1 \\ 2 \\ \end{pmatrix} $$ You can derive from that the form of the column and then check whether the sum and scalar product also lie in $S$.
Prove that $(l^\infty,\|.\|_\infty)$ is a Banach space.
Your proof is very rigorous and very detailed all the way up to the point where you say: Fix $m>n_0$. Then we have $\|x^m-x^n\|_\infty<\epsilon.$ Therefore $\|x^m-y\|_\infty<\epsilon$ as $n\to\infty$. Now I know the inequality holds, but as you were very thorough with all your other inequalities, I think it would be nice if you wrote a little more justification for this one as well - it is not entirely obvious how the right inequality follows from the left one. Other than that, the proof is very well written and easy to follow.
Prove if $f$ is entire and $|f(z)| \leq |z|^{1/2}$ for all $z$, then $f(z) = 0$ for all $z$.
$f(0)=0$ so it suffices to show that $f$ is constant. Here is a hint: Take the power series representation for $f$ about $0$ and use the fact that each coefficient in the power series can be written as an integral over the circle of radius $r>0$ using Cauchy's formula. Then show that the limit of these integrals as $r\to \infty$ is zero. Since this is homework I will leave the details to you.
Is there a formula for differentiating a nonlinear function by a matrix?
What you need to know is the "trick" for finding the derivative of a scalar function applied element-wise to a matrix argument. Assume that you have a scalar function $S(x)$ whose derivative is known to be $S'(x)$. When you apply this element-wise to a matrix, the differential is $$\eqalign{ dS({\bf X}) &= S'({\bf X})\circ d{\bf X} \cr }$$ where $\circ$ denotes the Hadamard product. For the Logistic function, the derivative is known to be: $\,\,\,\sigma' = \sigma - \sigma^2$. Now let's rewrite your objective in terms of the Logistic function and the Frobenius product (denoted by a colon), then find its differential $$\eqalign{ f &= \sigma({\bf Wx})^T{\bf b} \cr &= \sigma^T{\bf b} \cr &= {\bf b}:\sigma \cr\cr df &= {\bf b}:d\sigma \cr &= {\bf b}:\sigma'\circ d({\bf Wx}) \cr &= {\bf b}\circ\sigma':d{\bf W}\,{\bf x} \cr &= ({\bf b}\circ\sigma')\,{\bf x}^T:d{\bf W} \cr &= ({\bf b}\circ\sigma-{\bf b}\circ\sigma\circ\sigma)\,{\bf x}^T:d{\bf W} \cr }$$ Since $df=(\frac{\partial f}{\partial W}:dW),\,$ the gradient is $$\eqalign{ \frac{\partial f}{\partial {\bf W}} &= ({\bf b}\circ\sigma-{\bf b}\circ\sigma\circ\sigma)\,{\bf x}^T \cr }$$ In the case that the scalar function is the identity function, i.e. $S(x)=x$, the derivative is unity $S'(x)=1$. When applied element-wise to a matrix argument, the result is a matrix of all-ones, which just happens to be the identity element for the Hadamard product. So $(b\circ\sigma')$ would be replaced by $b\circ 1=b$ in the differential, yielding a gradient of $$\eqalign{ \frac{\partial f}{\partial {\bf W}} &= {\bf b}\,{\bf x}^T \cr }$$ which is the result that you already knew.
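A finite-difference sanity check of the final gradient (the shapes and the random test data are arbitrary):

import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))        # element-wise logistic function

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
b = rng.normal(size=3)

f = lambda M: sigma(M @ x) @ b             # f = sigma(Wx)^T b

s = sigma(W @ x)
grad = np.outer(b * s * (1.0 - s), x)      # (b o sigma - b o sigma o sigma) x^T

eps, num = 1e-6, np.zeros_like(W)
for i in range(3):
    for j in range(4):
        E = np.zeros_like(W); E[i, j] = eps
        num[i, j] = (f(W + E) - f(W - E)) / (2 * eps)
print(np.max(np.abs(grad - num)))          # ~1e-10: analytic and numeric gradients agree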
Rate of change of area of a square with respect to side length
$8\,\mathrm{ft}^2/\mathrm{ft} = 8\,\mathrm{ft}$, which is the answer your formula gives. So your work appears to be correct.
Geometric interpretation of a result from commutative algebra
The intuition is that if you write $K=\text{Quot}(R)$ for some discrete valuation ring $R$ with prime element $\pi$, then this presentation corresponds to fixing a point on the curve, and writing an element ('meromorphic function') of $K$ in the form $\pi^k\varepsilon$ with $k\in{\mathbb Z}$ and $\varepsilon\in R^{\times}$ determines whether it has a zero of order $k$ (for $k\geq 0$) or a pole of order $-k$ (for $k\leq 0$) at that point. Therefore, fixing $x\in K$ and looking at the $R$ such that $x\notin R$ means looking at the set of points at which $x$ has a pole, of which there should be only finitely many.
infimum and supremum of subsets question
$B^{-1}$ is unbounded from above, hence for every $n \in \mathbb{N}$ there exists $b \in B$ such that $1/b>n$, i.e. $b < 1/n$. Hence $\inf B = 0$ (note the infimum exists because $B$ is bounded below by $0$).
Envelope of a family of lines in the plane
Maybe something like this? I will use complex numbers for simplicity of notation. I will assume that the two bugs move along the unit circle and $t$ is the arc-length parametrization of the first bug $A$, i.e. we assume the first bug moves uniformly along the circle. Then by $\theta(t)$ we denote the angle that determines the motion of the second bug $B$ along the unit circle. Then, the motion of bug $A$ is $e^{it}$ and the motion of the second bug $B$ is $e^{i\theta(t)}$. Then, by assumption, the two motions are related by the equation $$G\big(e^{it}, e^{i\theta(t)}\big) = 0$$ which I am going to simply write as $$g\big(t,\theta(t)\big) = 0$$ The enveloping curve in question $z = z(t) = x(t) + i\, y(t)$ is by definition a curve such that $z(t)$ is a point on the line determined by the points $e^{it}$ and $e^{i\theta(t)}$, and its derivative $\dot{z}(t) = \cfrac{dz}{dt}(t)$ should be a vector parallel to the line determined by the points $e^{it}$ and $e^{i\theta(t)}$. The first condition means that there exists a function $\lambda(t)$ such that $$z(t) = e^{it} + \lambda(t) \, \big(e^{i\theta(t)} - e^{it}\big)$$ and the second condition implies that $\dot{z}(t)$ is parallel to $e^{i\theta(t)} - e^{it}$. The latter condition can be written in complex numbers as $$0 = \text{Im}\Big(\, \dot{z}(t) \cdot\overline{\big(e^{i\theta(t)} - e^{it}\big)} \,\,\Big) = \text{Im}\Big(\, \dot{z}(t) \cdot \big(e^{ - i\theta(t)} - e^{- it}\big) \,\Big) $$ Calculate the derivative $$\dot{z}(t) = ie^{it} + \dot{\lambda}(t)\,\big(e^{i\theta(t)} - e^{it}\big) + i \,\lambda(t) \, \big(\dot{\theta}(t) \, e^{i\theta(t)} - e^{it}\big)$$ and form the equation \begin{align} 0 = \text{Im}\Big(\, & \Big(ie^{it} + \dot{\lambda}(t)\,\big(e^{i\theta(t)} - e^{it}\big) + i \,\lambda(t) \, \big(\dot{\theta}(t) \, e^{i\theta(t)} - e^{it}\big)\Big) \cdot \Big(e^{ - i\theta(t)} - e^{- it}\Big) \,\Big)\\ = \text{Im}\Big(\, & i\big(e^{i(t- \theta(t))} - 1\big) + \dot{\lambda}(t)\,\big|e^{i\theta(t)} - e^{it}\big|^2 + i \,\lambda(t) \, \big(\dot{\theta}(t) - \dot{\theta}(t)\, e^{i(\theta(t) - t)} - e^{i(t - \theta(t))} +1 \big) \,\Big)\\ = \cos\big(t&-\theta(t)\big) - 1 + \lambda(t) \,\big(1 - \cos\big(\theta(t) - t \big)\big)\, \big( \dot{\theta}(t) + 1\big)\\ = \big(1 - \cos&\big(\theta(t) - t\big)\big)\,\Big(\lambda(t)\,\big(\dot{\theta}(t) + 1\big) - 1\Big) \end{align} Hence, away from the degenerate points where $\theta(t) = t$, we arrive at the equation $$ \lambda\,\big(\dot{\theta} + 1\big) - 1 = 0$$ which we can solve for $\lambda$ and obtain $$\lambda = \frac{1}{\dot{\theta} + 1}$$ The function $\theta$ is defined by the implicit function theorem for $g(t,\theta) = 0$ and $$\dot{\theta} = -\frac{ \partial_t\, g(t,\theta)}{\partial_{\theta}\,g(t, \theta)}$$ or more explicitly $$\lambda = \frac{\partial_{\theta} \, g(t, \theta) }{\partial_{\theta} \,g(t, \theta) - \partial_{t}\, g(t, \theta)}$$ Finally, $$z = e^{it} + \frac{\partial_{\theta} \, g(t, \theta) }{\partial_{\theta} \,g(t, \theta) - \partial_{t}\, g(t, \theta)} \, \Big(e^{i\theta} - e^{it}\Big), \qquad g(t,\theta) = 0$$ As a quick sanity check: for $\theta(t) = 2t$ this gives $\lambda = \tfrac13$ and $z = \tfrac23 e^{it} + \tfrac13 e^{2it}$, the classical cardioid envelope of the chords joining $e^{it}$ to $e^{2it}$. I do not guarantee that the calculations are correct. There is another way, more in the spirit of implicit equations, but again there are calculations...
Expected Value of Matches Played Between Two Teams
A general strategy is to compute the expected number of games $t(xy)$ played until a team is declared the winner, starting from every possible partial score of $x$ games won by one team vs $y$ games won by the other. Then the expected total number of games is $t(00)$. Some simple remarks: (i) $t(xy)=t(yx)$ by symmetry; (ii) $t(x4)=0$ for every $0\leqslant x\leqslant3$; (iii) looking at the result of the first game played yields a relation between $t(xy)$ and $t((x+1)y)$ and $t(x(y+1))$, for each $xy$. Starting from the highest possible partial scores and going backwards, one gets successively, using remarks (i), (ii) and (iii), $$t(33)=1,\quad t(23)=1+\tfrac12t(33)=\tfrac32,\quad t(22)=1+t(23)=\tfrac52, $$ $$t(13)=1+\tfrac12t(23)=\tfrac74,\quad t(03)=1+\tfrac12t(13)=\tfrac{15}8, $$ $$ t(12)=1+\tfrac12t(13)+\tfrac12t(22)=\tfrac{25}8,\quad t(02)=1+\tfrac12t(03)+\tfrac12t(12)=\tfrac72, $$ $$ t(11)=1+t(12)=\tfrac{33}8,\quad t(01)=1+\tfrac12t(11)+\tfrac12t(02)=\tfrac{77}{16}, $$ and, finally, $t(00)=1+t(01)=\frac{93}{16}$. Edit: To check this result, note that $$ t(00)=\frac{2{3\choose 0}2^3\cdot4+2{4\choose 1}2^2\cdot5+2{5\choose 2}2^1\cdot6+2{6\choose 3}2^0\cdot7}{2{3\choose 0}2^3+2{4\choose 1}2^2+2{5\choose 2}2^1+2{6\choose 3}2^0}=\frac{744}{128}. $$
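The backward recursion is easy to mechanize; here is a short memoized version (it encodes the same situation as above: a fair series won by the first team to reach $4$):

from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def t(x, y):
    if x == 4 or y == 4:                      # a team has won: no more games
        return Fraction(0)
    return 1 + Fraction(1, 2) * (t(x + 1, y) + t(x, y + 1))

print(t(0, 0))   # 93/16, matching the hand computation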
Subspace Topology and Limit Points
SKETCH: Let $A_{1,1}=\{0\}\cup\left\{\frac1{2^n}:n\in\Bbb Z^+\right\}$. To get $A_{2,1}$, we add a sequence converging downward to each $\frac1{2^n}$. Show that this can be done by adding the points $\frac1{2^n}+\frac1{2^{n+m}}$ for $n,m\in\Bbb Z^+$. For $A_{3,1}$ add points $\frac1{2^n}+\frac1{2^{n+m}}+\frac1{2^{n+m+k}}$ for $n,m,k\in\Bbb Z^+$ and show that this works. At this point the general construction should be clear, as well as how to see that it works. It may be helpful to think in binary. Apart from $0$, the points of $A_{1,1}$ are $0.1_2$, $0.01_2$, $0.001_2$, and in general all of the numbers in $[0,1)$ whose binary expansions contain a single $1$. The new points in $A_{2,1}$ are those numbers in $[0,1)$ whose binary expansions contain exactly two $1$s, and in general the points of $A_{n+1,1}\setminus A_{n,1}$ are those numbers in $[0,1)$ whose binary expansions contain exactly $n+1$ $1$s. Once you have all of the sets $A_{n,1}$, you can get $A_{n,m}$ by starting with $$\bigcup_{k=0}^{m-1}(A_{n,1}+k)\,,$$ where $A+k=\{a+k:a\in A\}$, and multiplying it by $\frac1m$.
Prove that any $n$ vectors which span $\mathbb R^{n}$ also form a basis for $\mathbb R^{n}$
I would like to provide another proof that might be more intuitive. (I have left my comments on your proof.) Suppose the vectors $(v_1 \ldots v_n)$ don't form a basis of $\mathbb R^n$; then they must be linearly dependent. So there exists some $(c_1 \ldots c_n)\neq (0,...,0)$ such that $\sum^n_{i=1} c_i v_i =0$, meaning that one of the vectors is a linear combination of the others. Say, without loss of generality, that $c_n \neq 0$. Rearranging the summation, we have: $$-c_nv_n= \sum_{i=1}^{n-1} c_iv_i$$ Therefore, $v_n$ must lie within $\text{span}(v_1\ldots v_{n-1}) \implies \text{span}(v_1\ldots v_{n}) = \text{span}(v_1\ldots v_{n-1})$. But $n-1$ vectors can't possibly span $\mathbb R^n$.
Is my proof of the claim in example 5.1.7 in Notes on Elementary Linear Analysis (Bedos) correct?
The mistake is assuming there exists $n \in \mathbb{N}$ such that $|\lambda(n)|=\|\lambda\|_{\infty}.$ For example, let $\lambda(n)=1-\dfrac1n.$ Then $\lambda \in l^{\infty}(\mathbb{N})$ and $\|\lambda\|_{\infty}=1$ but $\not\exists\ n \in \mathbb{N}$ such that $\lambda(n)=1.$ Correct argument: Fix $\epsilon >0$ and $M=\|\lambda\|_{\infty}.$ Then there exists $N \in \mathbb{N}$ such that $|\lambda(N)|\geq M-\epsilon.$ Let $(x(n))_n \in \ell^p$ be defined by $$x(n) = \begin{cases} 1 \ \text{if} \ n = N, \\ 0 \ \text{otherwise}. \end{cases}$$ Then \begin{align*} \|M_\lambda(x)\|_p &= \left( \sum_{n \in \mathbb{N}} \lvert \lambda(n)x(n) \rvert^p \right)^{1/p} \\ &= \left( \lvert \lambda(N) \cdot 1 \rvert^p\right)^{1/p} \\ &= \lvert \lambda(N) \rvert \\ &\geq M-\epsilon. \end{align*} Thus $\|M_{\lambda}\|\geq M-\epsilon$ for every $\epsilon >0.$ Hence $\|M_{\lambda}\|\geq M.$
Proof of general conditional probability formula
I don't believe there is a 'rigorous' proof; the definition of conditional probability comes from the intuition behind what you are looking for. Conditional probability, $P(A|B)$, means you are looking for the probability of a certain event ($A$), given a certain amount of information ($B$). Any time you are looking for the probability of just event $A$ you are assuming an underlying probability space $\Omega$. Therefore, $P(A)$ can also be viewed as $P(A|\Omega) = \frac{P(A \cap \Omega)}{P(\Omega)}$ where $P(\Omega) = 1$ and $P(A \cap \Omega) = P(A)$. Moreover, $P(A|B)$ assumes that you are still interested in finding $P(A)$; however, your sample space $\Omega$ is now being restricted only to the event $B$. With this in mind, the probability of interest becomes $P(A \cap B)$; that is, the probability of both $A$ and $B$ occurring. However, you still have to divide by $P(B)$ because the underlying probability space no longer has probability 1.
How to find critical points of definite integral
If you want to figure out where $g'$ is zero, compute it as \begin{align*} g'(x) &= -3\int_a^b\, (f(t) - x)^2\, dt \\ &= -3\int_a^b\, f^2(t)\, dt + 6x \int_a^b f(t)\, dt -3x^2(b - a). \end{align*} It's a quadratic in $x$. Set it to zero and solve for the $x$s that are the critical points. If you set $I_0 = b - a$, $I_1 = \int_a^b\, f(t)\, dt$, $I_2 = \int_a^b\, f^2(t)\, dt$ your critical points occur at \begin{equation*} \hat{x} = \frac{I_1}{I_0} \pm \sqrt{\biggl( \frac{I_1}{I_0}\biggr)^2 - \frac{I_2}{I_0}} \end{equation*}
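Note, incidentally, that by Cauchy-Schwarz $I_1^2 = \left(\int_a^b f(t)\cdot 1\, dt\right)^2 \le I_0\, I_2$, so the expression under the square root is $\le 0$: the critical points are real only in the degenerate case where $f$ is constant (a.e.) on $[a,b]$. This is consistent with the first line, $g'(x) = -3\int_a^b (f(t)-x)^2\, dt \le 0$, which can vanish only if $f \equiv x$.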
Reference for the subgroup structure of $\rm{PSL}_2(q)$
There are some notes by Oliver King containing a statement of the full classification in modern terms. However, this expository paper does not derive the result. A standard reference for the subgroup structure of classical groups is the book by Kleidman and Liebeck, but I don't recall that they cover Dickson's full list. They focus on maximal subgroups. The exposition there is rather, shall we say, "efficient".
reasoning about almost-groups and almost-associativity?
The paper Approximate Homomorphisms by Hyers and Rassias doesn't directly answer the question I asked, but it answers a very related question.
How to prove it using $\epsilon$-$\delta$ definition of limit?
You wanted an $\varepsilon$-$\delta$ proof, but the claim is wrong. So I will provide an $\varepsilon$-$\delta$ proof that the claim is wrong. More specifically, I will show that $$\lim_{x \to 0} \frac {\sin \frac {1} {x}} {\sin \frac {1} {x}}=1$$ Given $\varepsilon>0$, put $\delta=\varepsilon$. Then whenever $0<|x-0|<\delta$ (and $x \neq \frac{1}{k\pi}$ for every nonzero $k\in\mathbb{Z}$, so that $\sin\frac1x \neq 0$), we have $$\left| \frac {\sin \frac {1} {x}} {\sin \frac {1} {x}} -1\right|=0<\varepsilon.$$
Some question about Lie algebra of $GL_n(\mathbb{C})$
$a\in G={\rm GL}_n (\mathbb{C}),\ x\in M_n( \mathbb{C})$. Define a vector field $X(a):=a\cdot x$, where the multiplication is matrix multiplication. So $$ L_b\ X(a)=bax =X(ba) $$ Hence $X$ is a left invariant vector field. If $e^{tx}$ is an integral curve at $I$, then $ae^{tx}$ is an integral curve at $a$: $$ \frac{d}{dt}\, ae^{tx}= ae^{tx}x =X(ae^{tx}) $$ Recall the definition of the Lie bracket in a Lie group: $$ Ad_a : T_IG\rightarrow T_IG,\ Ad_a (x)= \frac{d}{dt}\bigg|_{t=0} ae^{tx}a^{-1} $$ $$ [y,x](e):=\frac{d}{dt}\bigg|_{t=0} Ad_{e^{ty}} (x) $$ Note that $$[y,x](e)=yx-xy = \frac{\partial }{\partial t} \frac{\partial }{\partial s}\bigg|_{s=t=0} e^{ty}e^{sx}e^{-ty}$$ Here, for a test function $f$, $$ df\, L_a\, [y,x](e)=\frac{\partial }{\partial t} \frac{\partial }{\partial s}\bigg|_{s=t=0} f(ae^{ty}e^{sx}e^{-ty} )$$ Further, recall the definition of $[Y,X](a)$ on a smooth manifold: if $\phi$ is the flow of $Y$, \begin{align*} [Y,X](a)&= \frac{d}{dt}\bigg|_{t=0} d\phi_{-t}\, X_{\phi_t(a)} \\&= \frac{\partial}{\partial t}\frac{\partial}{\partial s}\bigg|_{s=t=0} \phi_t(a)e^{sx} e^{-ty} \\&= \frac{\partial}{\partial t}\frac{\partial}{\partial s}\bigg|_{s=t=0} ae^{ty} e^{sx} e^{-ty} \end{align*} This completes the proof.
Is $(n^{1/n}-1)\in O(n^{-\frac12})$ as $n\to \infty$?
I write here a heuristic argument: $$n^{\frac{1}{n}}-1=e^{\frac{\ln n}{n}}-1\approx\frac{\ln n}{n}<\frac{\sqrt{n}}{n}$$ for $n$ large enough. Hope you can make it rigorous or come up with another way to prove it!! :)
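A quick numeric look supports the heuristic:

import numpy as np

for n in [10**2, 10**4, 10**6]:
    print(n, n**(1.0/n) - 1, np.log(n)/n, n**-0.5)
# n^(1/n) - 1 tracks (ln n)/n closely and sits well below n^(-1/2)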
Why does $(a-2b)\times (3a+2b) = a\times (3a+2b) - 2b \times(3a+2b)$?
Use the fact that $a\times a=0$, $b\times b=0$ and $a\times b=-b\times a$. I don't think you need $|a|=|b|$ for this proof.
Derivative at a point (Linear approximation at point)--what is the valid range for approximation?
An approximation can only be deemed good if you specify what good is. A quick demonstration: "A good approximation to $\pi$ is that it is roughly $7654.88$" is a statement that is as true as "A good approximation to $\pi$ is that it is roughly $3.14159265$". Both have no meaning before you specify 'good for what'. It would be correct to say that "$3.14159265$ is a better approximation (or at least not worse) to $\pi$ than $7654.88$ is, regardless of what you want the approximation for". Your question can be made precise and has many interesting and important answers. You will surely get to see Taylor approximations to functions and then you will learn about various forms for the remainder. I believe that will answer your question fully, as you will see there conditions that assure that you can actually say something sensible about the linear approximation (and higher order approximations) of a given function. However, for a general differentiable function, with no knowledge other than differentiability, there is nothing that can be said about the quality of the approximation. A famous example is Cauchy's function $f(x)=e^{-1/x^2}$ (with $f(0)=0$), whose Taylor approximation at $0$ is constantly $0$.
Number theory problem with many variables in a sequence
I agree with your analysis, and it turns out to be easy to find the best arrangement by guessing, and then showing that the guess is correct. First assume that we will never have $11$ and $13$ in the same group, so $2$ of the groups contain $11$, and the other two contain $13$. Assume also that we will do best to keep $5$ and $7$ separate. Now our $4$ groups contain $$5,11\\7,11\\5,13\\7,13$$ The largest product is $7\cdot13$, so we make our last assumption: the third number in the final group should be $2$. Now the second $2$ can't go with $7$ or $13$, so it must go in the first group, and we have $$2\cdot5\cdot11=110\\ 3\cdot7\cdot11=231\\ 3\cdot5\cdot13=195\\ 2\cdot7\cdot13=182$$ with maximum $231$. Now to validate our assumptions. If $11$ and $13$ are in the same group, the product would be at least $2\cdot11\cdot13=286>231$, so the first assumption is valid. If $5$ and $7$ were in the same group, the product would be at least $5\cdot7\cdot11>231$ (its third member must be an $11$ or a $13$, since those four numbers occupy one slot in each group), and the second assumption is valid. Finally, putting $3$ in the last group would give $3\cdot7\cdot13>231$, and the third assumption is valid.
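For the skeptical, a brute-force confirmation; I am assuming, as the groups above suggest, that the twelve numbers are $2,2,3,3,5,5,7,7,11,11,13,13$, to be split into four groups of three while minimizing the largest product:

from itertools import combinations

def best(pool):
    # smallest achievable maximum group product, splitting the pool into triples
    if not pool:
        return 0
    first, rest = pool[0], list(pool[1:])
    ans = float('inf')
    for pair in set(combinations(rest, 2)):
        rem = rest[:]
        for v in pair:
            rem.remove(v)
        ans = min(ans, max(first * pair[0] * pair[1], best(tuple(rem))))
    return ans

print(best((2, 2, 3, 3, 5, 5, 7, 7, 11, 11, 13, 13)))   # 231, as derived above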
pendulum on a string problem, involving finding the lagrangian and moment of inertia
The distance between the center of the rod and the point of suspension is $$d = l \sqrt{\frac{5}{4} + \cos(\theta_2 - \theta_1)}$$ Use parallel axis theorem to find $$I = \dfrac{1}{12}ml^2 + md^2 = \left(\frac{4}{3} +\cos(\theta_2 - \theta_1) \right)ml^2$$
Finding $\nabla r^n$
The OP user2850514's solution $\nabla r^n = nr^{n - 2} \mathbf r \tag{1}$ is in fact correct since $\mathbf r = r \hat{\mathbf r}; \tag{2}$ it is simply written in a slightly different fashion. I do this one like this: For differentiable $g: \Omega \to \Bbb R$ and $f:I \to \Bbb R$ with $\Omega$ open in $\Bbb R^m$, $I$ open in $\Bbb R$ and $g(\Omega) \subset I$, we have for $x \in \Omega$, $\nabla (f(g))(x) = \dfrac{df(g(x))}{dg}(g(x)) \nabla g(x); \tag{3}$ this identity is well-known and is really just the chain rule, as may be seen by looking at it in coordinates $x = (x_1, x_2, \ldots, x_m)$ in $\Omega$: $(\nabla(f(g))(x))_k = \dfrac{\partial(f(g)(x))}{\partial x_k} = \dfrac{d(f(g))}{dg}(g(x)) \dfrac{\partial g(x)}{\partial x_k} = \dfrac{d(f(g))}{dg} (g(x))(\nabla g(x))_k; \tag{4}$ (4) is precisely (3), coordinate-by-coordinate; thus (4) $\Rightarrow$ (3). Also, $\nabla r = \hat{\mathbf r}; \tag{5}$ again we use the coordinates: $(\nabla r)_k = \dfrac{\partial r}{\partial x_k} = \dfrac{\partial \sqrt{\sum_1^m x_i^2}}{\partial x_k} =\dfrac{1}{2}\Big(\sum_1^m x_i^2\Big)^{-1/2}(2x_k) = \dfrac{x_k}{r};\tag{6}$ but $x_k / r$ is just the $k$-th component of $\hat{\mathbf r}$, the unit vector field pointing in the $\mathbf r$ direction; hence (5) holds. Now taking $f(r) = r^n$ and $g(x) = r$ we have $\nabla r^n = nr^{n - 1} \nabla r = nr^{n - 1} \hat{\mathbf r}; \tag{7}$ that does it! Of course, in solving such problems I really go directly from (3), (5) to (7); (3) and (5) are standard, useful identities living in my head (that is, memory); I don't re-derive them over and over. But I thought the details might help flesh things out here.
Pick an urn and pick two balls from the urn
There are $5\cdot4\cdot3=60$ ways in which two balls can be drawn: $$\begin{array}{cr} GG&20\\ GR&10\\ RG&10\\ RR&20\\ \hline \text{Total}&60 \end{array}$$ These are the $20$ ways in which $GG$ can be drawn: $$\begin{array}{cr} GG_1&12\\ GG_2&6\\ GG_3&2\\ \hline \text{Total}&20 \end{array}$$ $$\bbox[border:2px solid green]{ \begin{array}{cc|cc|cc} \text{(a)}&P=\frac{30}{60}=\frac{1}{2}& \text{(b)}&P=\frac{20}{30}=\frac{2}{3}& \text{(c)}&P=\frac{18}{20}=\frac{9}{10} \end{array} }$$
two finite models of complete orderings are elementary equivalent
EDIT: replacing "finite" with "infinite," the statement is true, and follows from the more general fact If $\mathcal{M}$ is a relational structure and $\varphi$ is a universal sentence true of $\mathcal{M}$, then every finite substructure $\mathcal{A}\subseteq\mathcal{M}$ has $\mathcal{A}\models\varphi$ and the particular feature of linear orders that Any two infinite linear orders have the same finite suborders. (In the context of finite non-relational languages, replace "finite" with "finitely generated" in the first fact above; the point is that "finite" = "finitely generated" when there are only relation symbols in the language.) The statement is extremely false as stated. Let $L_n$ be the unique up to isomorphism linear order with $n$ elements. Then "$\forall x, y(x=y)$" is true in $L_1$ but not $L_2$. And this just keeps going: e.g. "$\forall x_1, ..., x_{17}(\bigvee_{i<j<18}x_i=x_j)$" is true in $L_{16}$ but not $L_{17}$. Note that these are even positive universal formulas, and that this has nothing to do with the order structure.
Prove $\sum _{n=1}^{\infty }\left(\frac{1}{n}-\ln\left(\frac{\left(1+n\right)}{n}\right)\right)\:$ converges
By the Taylor expansion of the logarithm, for $n\geq 1$: $$\ln\left(\frac{n+1}{n}\right) = \frac{1}{n} +O(n^{-2}).$$ Hence the general term satisfies $\frac1n-\ln\frac{n+1}{n}=O(n^{-2})$, and the series converges by comparison with $\sum n^{-2}$.
Approximate Sobolev function by smooth function - error estimate?
It is true almost in the way you stated it, with $a_\epsilon = \epsilon$. You only have to restrict the norm on the left hand side to $\Omega_\epsilon =\{x \in \Omega \colon dist(x, \Omega^c)>\epsilon \}$, since this is where $u_\epsilon$ is defined. On the plus side, boundary conditions and boundary regularity are irrelevant. The proof is rather straightforward: Write $$u_\epsilon(x) - u(x) = \int_{B_\epsilon (0)} \int_0^1\eta_\epsilon(y) \nabla u(x+ty)\cdot y\,dt\,dy. $$ Then take the squares, use Jensen's inequality and integrate over $\Omega_\epsilon$: \begin{align*} \int_{\Omega_\epsilon} \lvert u_\epsilon(x) - u(x) \rvert^2 \,dx &\leq \epsilon^2 \int_{\Omega_\epsilon} \int_{B_\epsilon (0)} \int_0^1 \eta_\epsilon(y) \lvert \nabla u(x+ty) \rvert^2 \,dt \,dy \,dx \\ &\leq \epsilon^2 \int_\Omega \lvert \nabla u(x) \rvert^2 \,dx \end{align*}
Finding a spectrum of an operator in $C[0, 1]$
Those $\lambda$ not in $\sigma(A)$ should be those such that $A-\lambda I$ is invertible. So let us try to invert $A-\lambda I$. Given $g\in C[0,1]$, we want to find $f\in C[0,1]$ such that $(A-\lambda I)f=g$. That is, $$\tag1 g(x)=\left(\frac{x-1}{x-2}-\lambda\right)\,f(x)+f(0). $$ Taking $x=1$, we get $$g(1)=-\lambda f(1)+f(0).$$ At $x=0$, $$\tag2 g(0)=\left(\frac12-\lambda\right)f(0)+f(0)=\left(\frac32-\lambda\right)f(0). $$ The condition $\lambda=\frac32$ would force $g(0)=0$, which is not true for most $g$: that tells us that $\frac32\in\sigma(A)$. When $\lambda\ne\frac32$, from $(2)$ we obtain $$\tag3 f(0)=\frac{g(0)}{\frac32-\lambda}. $$ Going back to $(1)$, we get $$ f(x)=\frac{g(x)-f(0)}{\frac{x-1}{x-2}-\lambda}=\frac{g(x)-\frac{g(0)}{\frac32-\lambda}}{\frac{x-1}{x-2}-\lambda}. $$ This requires that $\lambda$ is not in the range of $\frac{x-1}{x-2}$, which is $[0,\tfrac12]$. So $$\sigma(A)\subset\left[0,\tfrac12\right]\cup\left\{\tfrac32\right\}.$$ We can confirm that the spectrum is not smaller: we already said that $\tfrac32\in\sigma(A)$. For any $\lambda\in[0,\tfrac12]$, if we look at $(1)$ we will get that there exists $x_0$ with $\tfrac{x-1}{x-2}-\lambda=0$. So $g(x_0)=f(0)$. Comparing with $(2)$, we get $$\tag4 g(0)=\left(\frac32-\lambda\right)\,g(x_0). $$ There are many functions $g\in C[0,1]$ that do not satisfy $(4)$, so $A-\lambda I$ is not invertible. Thus $$ \sigma(A)=[0,\tfrac12]\cup\left\{\tfrac32\right\}. $$
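A crude numeric illustration (a sketch, not a proof): discretize $A$ on a grid, where it becomes multiplication by $m(x)=\frac{x-1}{x-2}$ plus a rank-one term picking out $f(0)$, and look at the matrix eigenvalues:

import numpy as np

N = 400
x = np.linspace(0.0, 1.0, N)
M = np.diag((x - 1.0) / (x - 2.0))   # the multiplication part of A
M[:, 0] += 1.0                       # the + f(0) term: every output picks up f(x_0)
ev = np.sort(np.linalg.eigvals(M).real)
print(ev[0], ev[-2], ev[-1])         # ~ 0, ~ 1/2, and 3/2, matching [0, 1/2] u {3/2}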
Inverse Trig Integration Resulting in $\sec^{-1}|x|$
For real $a$, $$\sqrt{a^2}=|a|=\begin{cases}+a, & a\ge0\\ -a, & \text{otherwise}\end{cases}$$ Now use https://en.m.wikipedia.org/wiki/Inverse_trigonometric_functions#Principal_values
Question about Humburger moment problem and Characteristic function.
I know neither a "Humburger" nor a "Humberger" problem. If you're speaking about the Hamburger problem, then no, this is not assumed. Carleman's condition ensures that there is at most one solution to the moment problem. The existence of a solution is provided by another condition (in fact, a criterion): that the matrix $(m_{i+j})$ is positive definite. Returning to your particular question, even if $(m_n)$ is a valid sequence of moments, $\phi$ does not need to be a characteristic function: you impose conditions at the point $0$ only, but elsewhere $\phi$ can misbehave arbitrarily. There can be some positive answers. Say, if $\phi$ is analytic, I think it can be verified (and I welcome you to) from the criterion I cite that $\phi$ is a characteristic function.
Basis for Linear Transformation with Matrix Multiplication
$A\in \ker T\implies TA=XA=0\implies a+c=b+d=0$ if $A=\begin{bmatrix} a&b\\c&d\end{bmatrix}$. Hence $a=-c,\ b=-d\implies \begin{bmatrix} 1&0\\-1&0\end{bmatrix}$ and $\begin{bmatrix} 0&-1\\0&1\end{bmatrix}$ form a basis of $\ker T$.
Can we have an infinite tree in this graph?
Suppose that $G$ has no infinite connected component. On the $i$th step let $G_i = \{b_i\} \cup \{a \mid a$ is a white vertex above, reachable from $b_i$ without going through other blue nodes$\}$, then remove $G_i$ from $G$ and continue. Clearly $G_i$ is finite, has exactly one blue node and white nodes from above it, and the $\{G_i\}$ are disjoint. To prove that $\{G_i\}$ is the required partition, it only remains to prove that every white node will be deleted. By induction we prove that after the $i$th step every white vertex from levels $1..i+1$ has been deleted. Let $a$ be an arbitrary white vertex from the $(i+1)$th level. If $a$ was connected to another white vertex from the $i$th level, it has already been deleted (as reachability is transitive). If it was not, it is connected to $b_i$ and will be deleted on the $i$th step.
Does $(X, \leqslant)$ have the smallest element?
You're correct. Just observe that for any $s \in \mathbb{N}^{\mathbb{N}}$, if $m, n$ appear in $s$ infinitely many times and $n < m$, replacing one occurrence of $m$ with $n$ gives a sequence which is smaller with respect to this order, yet still in $X$.
Geometry question regarding existence of a quadrilateral
The first question was already answered by hardmath using the Law of Cosines: the quadrilateral exists iff $$ a^2 - 2ab \cos \alpha + b^2 = c^2 - 2cd \cos \beta + d^2, \tag{$*$} $$ because both expressions must equal $e^2$. This can be used to answer the remaining question, proving that for given $a,b,c,d$ the area of the quadrilateral is maximized when $\alpha + \beta = \pi$, and deriving a formula for the area of the quadrilateral in terms of $a,b,c,d$ and $\alpha,\beta$.

We first show that such a quadrilateral exists as long as there exists any quadrilateral with sides $a,b,c,d$, that is, provided that each of $a,b,c,d$ is smaller than the sum of the other three sides. Indeed $\alpha + \beta = \pi$ iff $\cos \beta = - \cos \alpha$, so (*) yields $$ \cos \alpha = \frac{a^2+b^2-c^2-d^2}{2ab+2cd} $$ and the necessary condition $|\cos \alpha| < 1$ becomes $$ -(2ab+2cd) < a^2+b^2-c^2-d^2 < 2ab+2cd. $$ If the first inequality fails then $(c-d)^2 \geq (a+b)^2$, so $\left|c-d\right| \geq a+b$, and likewise if the second inequality fails then $\left|a-b\right| \geq c+d$. In either case we find that $a,b,c,d$ cannot be the sides of a quadrilateral.

Now let the area of the quadrilateral be $K$. Using Mann's pictured decomposition of the quadrilateral into two triangles, and the formula $\frac12 ab \sin C$ for the area of a triangle, we find $$ 2K = ab \sin \alpha + cd \sin \beta. $$ Thus we are to maximize $2K = ab \sin \alpha + cd \sin \beta$ subject to $$ ab \cos \alpha - cd \cos \beta = -\frac12 (a^2+b^2-c^2-d^2) =: Q. $$ This is easily done using calculus: implicit differentiation gives $$ ab \sin\alpha = cd \sin\beta \frac{d\beta}{d\alpha}, $$ while $$ \frac{d(2K)}{d\alpha} = ab \cos \alpha + cd \cos \beta \frac{d\beta}{d\alpha}, $$ so if $d(2K)/d\alpha = 0$ then $$ ab \cos\alpha = -cd \cos\beta \frac{d\beta}{d\alpha}. $$ Dividing our two formulas for $d\beta/d\alpha$ we find $\tan\alpha = -\tan\beta$, whence $\alpha+\beta = \pi$ as claimed.

Alternatively, we can square the formulas for $Q$ and $2K$ and add to find $$ Q^2 + (2K)^2 = (ab \cos \alpha - cd \cos \beta)^2 + (ab \sin \alpha + cd \sin \beta)^2 $$ $$ = (ab)^2 (\cos^2\alpha + \sin^2\alpha) - 2abcd (\cos\alpha\cos\beta - \sin\alpha\sin\beta) + (cd)^2 (\cos^2\beta + \sin^2\beta) $$ $$ = (ab)^2 + (cd)^2 - 2abcd \cos(\alpha+\beta). $$ Thus $(2K)^2 = (ab)^2 + (cd)^2 - Q^2 - 2abcd \cos(\alpha+\beta)$, which is equivalent to Bretschneider's formula for the area of a quadrilateral (and indeed this derivation is equivalent to the proof recited on that Wikipedia page). Therefore if $a,b,c,d$ are fixed then $K$ is maximized when $\cos(\alpha+\beta) = -1$, which is to say $\alpha+\beta = \pi$. The resulting formula for the area of a quadrilateral inscribed in a circle is the special case of Bretschneider's formula already obtained by Brahmagupta: $$ K = \sqrt{(s-a)(s-b)(s-c)(s-d)}, $$ where $s = \frac12(a+b+c+d)$ is the semiperimeter. This in turn is a generalization of Heron's formula, which is the limiting case as one of the sides tends to zero.
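A numeric check of both the use of (*) and the maximal-area claim (the side lengths $5,6,7,8$ are arbitrary test values):

import numpy as np

a, b, c, d = 5.0, 6.0, 7.0, 8.0
alphas = np.linspace(0.01, np.pi - 0.01, 200000)
e2 = a*a + b*b - 2*a*b*np.cos(alphas)          # diagonal^2 from the (a, b) triangle
cosb = (c*c + d*d - e2) / (2*c*d)              # beta determined by (*)
ok = np.abs(cosb) < 1.0
beta = np.arccos(cosb[ok])
K = 0.5 * (a*b*np.sin(alphas[ok]) + c*d*np.sin(beta))
i = np.argmax(K)
s = (a + b + c + d) / 2.0
print(alphas[ok][i] + beta[i], np.pi)          # the maximizer has alpha + beta ~ pi
print(K[i], np.sqrt((s-a)*(s-b)*(s-c)*(s-d)))  # and the area matches Brahmagupta's formula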
Expansion of an expression.
Use the distributive property. We have that $$(x+y+z)(a+b+c)=x(a+b+c)+y(a+b+c)+z(a+b+c)$$ Can you continue?
Conditional joint probability of a function
Can we think of a "joint distribution" of two random variables where one random variable has a continuous density function and the other is discrete? Yes, it's called a "mixed joint probability function" $$\begin{align}f_{X,Y}(x,y) & = \mathsf P(X=x\mid Y=y)\,f_Y(y) \\ & = \mathsf P(X=x)\,f_{Y\mid X}(y\mid x)\end{align}$$ I need to find a closed form expression of the expected value $$\begin{align} \mathsf E(g(X,Y)) & = \sum_{x\in \{1,19,50,1000,3000\}} \int_0^\infty g(x,y)f_{X,Y}(x,y)\operatorname d y \\[1ex] & = \frac 1 5 \sum_{x\in \{1,19,50,1000,3000\}} \int_0^\infty \frac{\lambda}{e^{-\lambda y}(e^{x/y}-1)^{2/a}}\operatorname d y & \textsf{if independent and uniform} \end{align}$$ Is this possible? Hard to say. The integral does not seem very easy to evaluate. It does not look promising at all.
Matrix with non-negative eigenvalues (and additional assumption)
The matlab code below can be used to construct a counterexample. (The result is probably true if $A$ is also an $M$-matrix.) The motivation is the Sherman-Morrison-Woodbury formula. You write out the equations that an eigenvalue-eigenvector pair $\lambda$, $[x, y]^T$ must satisfy: $Ax+by=\lambda x,$ $c^Tx+c^Tby=\lambda y.$ Observe that the last equation is almost $c^T$ times the leading equations, and this gives rise to a constraint on the last entry of the eigenvector, $c^Tx=y$, which then gives rise to a matrix to which the Sherman-Morrison-Woodbury formula can be applied if $\lambda$ is not an eigenvalue: $Ax+bc^Tx=\lambda x$, i.e. $(A-\lambda I+bc^T)x=0$. Then for a given negative $\lambda$, the game is finding $b$ and $c$ where the Sherman-Morrison-Woodbury formula cannot be applied.

n=10;
D=diag(abs(rand(n,1)+1));
D(1,1)=0
D(2,2)=1e-1
[V R]=qr([ones(n,1) randn(n)],0)
A=V*D*V'
%Let's force -1 to be an eigenvalue of our matrix
lambda=-1;
inv(A-lambda*eye(n))
%Examine this matrix, it is highly likely that
%the matrix will have a negative entry in one of the columns
%Column n for example
%Let c_j=0 j\neq n and 1 otherwise
%Since b must be positive
%Make most of the entries of b small with the exception of
%the entry that is negative
c=zeros(n,1)
c(2)=1
(A-lambda*eye(n))\c
b=.001*ones(n,1)
b(1)=10
b'*inv(A-lambda*eye(n))*c
t=(-1/(c'*inv(A-lambda*eye(n))*b))
(c'*inv(A-lambda*eye(n))*b)
eig(A-lambda*eye(n)*t*b*c')
eig([A t*b ; c'*A c'*b])
Colouring 10 dots to form equilateral triangles
Label the dots as $$a\\ b \quad c\\ d \quad e \quad f\\ g \quad h \quad i \quad j$$ Assume it is possible to color the dots so that no equilateral triangle is formed from dots of the same color. WLOG, we can assume $e = R$. Let $\mathcal{E}$ be the collection of $3$-dot sets which form an equilateral triangle. Since $\{ b, f, h \} \in \mathcal{E}$, at least one of $b, f, h$ is $R$. Rotating the configuration if necessary, we can assume $h = R$. Since $\{ d, e, h \}, \{ h, e, i \} \in \mathcal{E}$ and $e = h = R$, we have $d = i = B$. Since $\{ d, i, c \} \in \mathcal{E}$ and $d = i = B$, we have $c = R$. Since $\{ b, e, c \}, \{ e,f,c \} \in \mathcal{E}$ and $e = c = R$, we have $b = f = B$. Since $\{ a, d, f \} \in \mathcal{E}$ and $d = f = B$, we have $a = R$. Since $\{ b, g, i \} \in \mathcal{E}$ and $b = i = B$, we have $g = R$. Since $\{ i, f, j \} \in \mathcal{E}$ and $f = i = B$, we have $j = R$. Finally $\{ a, g, j \} \in \mathcal{E}$ but $a = g = j = R$, a contradiction! The decision procedure is illustrated below. The subscript indicates at which step the corresponding color is determined. $$ \color{red}{a_5}\\ \color{blue}{b_4} \quad \color{red}{c_3}\\ \color{blue}{d_2} \quad \color{red}{e_0} \quad \color{blue}{f_4}\\ \color{red}{g_6} \quad \color{red}{h_1} \quad \color{blue}{i_2} \quad \color{red}{j_7}\\ $$
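For the skeptical, an exhaustive check by computer: place the ten dots on a triangular grid, find every equilateral triple by comparing squared distances, and try all $2^{10}$ colorings:

from itertools import combinations

# triangular grid coordinates for a..j, row by row
pts = [(r, k) for r in range(4) for k in range(r + 1)]

def xy(p):
    r, k = p
    return (k - r / 2.0, r * 3**0.5 / 2.0)

def d2(p, q):
    (x1, y1), (x2, y2) = xy(p), xy(q)
    return round(4 * ((x1 - x2)**2 + (y1 - y2)**2))   # scaled to an exact integer

triples = [t for t in combinations(range(10), 3)
           if d2(pts[t[0]], pts[t[1]]) == d2(pts[t[1]], pts[t[2]]) == d2(pts[t[0]], pts[t[2]])]

always_mono = all(any(((c >> t[0]) & 1) == ((c >> t[1]) & 1) == ((c >> t[2]) & 1)
                      for t in triples)
                  for c in range(2**10))
print(len(triples), always_mono)   # 15 True: every 2-coloring has a monochromatic triangle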
limit $(x^2+y^2)e^{-(x+y)}$ when $(x,y)$ approach infinity
There is nothing wrong with the above solution. I tried to vote it up but I am a newbie ;). However it does not explain why you are wrong, and often substituting bounds makes things more difficult conceptually. 1) The reason your answer is wrong is firstly that, from the conditions you have given us, $x$ is not related to $y$. Under your substitutions, $y^{2} = r^{2}-x^{2}$. Also it doesn't help you move forward, as the exponent is not in $x^{2}$ and $y^{2}$ but in $x$ and $y$. 2) Your function can be written as $x^{2}e^{-x}e^{-y} + y^{2}e^{-y}e^{-x}$. Both of these terms go to $0$ as $x$ and $y$ go to infinity: the exponential dominates the polynomial. To see this intuitively you can go into $\log$ space: $\log(x^{2}) = 2 \log(x)$ and $\log(e^{-x}) = -x$; now, as you may be familiar with the $\log$ function, it tails off to grow less than linearly quite quickly.
Proof for $(U+W)^\perp=U^\perp \cap W^\perp$ if $U$ and $W$ are both subspaces.
Please excuse the poor formatting. Before I get into the proof, I'd like to explain the intuition. If you're orthogonal to any vector of the form $c_1 u+c_2 v$, then taking $c_2$ to be zero, you must be orthogonal to every $u$, and similarly every $v$. Thus being in the orthogonal complement of $U+V$ implies that you're in the orthogonal complement of $U$ and in the orthogonal complement of $V$. The reverse is obviously true: if you're orthogonal to $u$ and $v$ then you're clearly orthogonal to any linear combination of them. If $x$ is in $(U+V)^⊥$ then $x \cdot (u+v)=0$ for any $u$ in $U$ and $v$ in $V$. Namely, since $0$ is in $U$, $x \cdot v=0$ for all $v$ in $V$. Similarly $x \cdot u=0$ for all $u$ in $U$. Thus, $x$ is in $U^⊥$ and $x$ is in $V^⊥$, so $x$ is in the intersection of $U^⊥$ and $V^⊥$. So $(U+V)^⊥$ is a subset of $U^⊥ \cap V^⊥$. The reverse is true immediately: if $x$ is in $U^⊥$ and $V^⊥$, then clearly $x\cdot(u+v)= x\cdot u+x\cdot v=0$. Thus, $x$ is in $(U+V)^⊥$. So $U^⊥ \cap V^⊥$ is a subset of $(U+V)^⊥$. This means that $(U+V)^⊥ = U^⊥ \cap V^⊥$.
Is this partial ordering relation on $\mathbb{N}$ uniquely determined?
The usual order on $\mathbb N$ is the only partial order satisfying your condition. Let $\le’$ be your relation and $\le$ be the usual order. By repeated transitivity, $$x \le’ S^n(x) = x + n$$ for any $n\ge 0$. In other words, $x\le y$ implies $x\le’ y$. Conversely, if $x\le’ y$, then assume $y<x$ for contradiction. But then $y\le’ x$, so antisymmetry implies $x=y$, a contradiction. So $x\le y$.
Am I calculating the limit properly without use of the Squeeze theorem?
The final result is correct, but: $$ \lim_{x \to 0}\frac{\sin(1/x)}{1/x}=\lim_{x \to 0}x\sin(1/x)=0 $$ because $\sin(1/x)$ is a bounded function.
Prove $S_{m}^{m}(\Delta)$={$s:s\in C^{m}[a,b]$ and $s$ is a polynomial of order $m$ in each $[x_{i},x_{i+1}]$}=$P_{m}$
So here is my answer: Let $s_{i-1}$ be the polynomial in the $[x_{i-1},x_{i}]$ region of $[a,b]$ and $s_{i}$ the polynomial in $[x_{i},x_{i+1}]$. Then by the continuity of the first $m$ derivatives of $s$ at the internal nodes of $[a,b]$ we have that: $s_{i-1}^{(m)}(x_{i}) = c_{i-1}$ $\quad$ and $\quad$ $s_{i} ^{(m)}(x_{i}) = c_{i}$ $\quad$ $\rightarrow$ $\quad$ $c_{i-1}= c_{i} =: c$; $\quad$ $s_{i-1}^{(m-1)}(x_{i}) =c\,x_{i}+a_{i-1}$ $\quad$ and $\quad$ $s_{i} ^{(m-1)}(x_{i}) = c\,x_{i}+a_{i}$ $\quad$ $\rightarrow$ $\quad$ $a_{i-1}=a_{i}$; etc. So we have that $s_{i-1}=s_{i}$, i.e. $s$ is the same polynomial in each $[x_{i},x_{i+1}]$. p.s. These self-answers on StackExchange are kinda creepy.
Find the Galois group $Gal(\mathbb{Q}[\sqrt{2}]/\mathbb{Q})$ and determine all intermediate subfields explicitly.
You did a very good job determining the splitting field: $K=\mathbf{Q}(\sqrt{2})$. This is just a very simple example as you will see. The extension $K/\mathbf{Q}$ is Galois and has degree two, so the Galois group is $\mathbf{Z}/2\mathbf{Z}$ (the only group, up to isomorphism, of order 2), consisting of the identity automorphism and the one interchanging $\sqrt{2}$ and $-\sqrt{2}$. There are no non-trivial intermediate fields.
Let $\alpha = \sqrt{2} + \sqrt{3}$ and $K \subseteq \mathbb{R}$ such that $\mathbb{Q} \subset K \subset \mathbb{Q}[\alpha]$
Let $m=[K:\mathbb{Q}]$ and $n=[\mathbb{Q}[\alpha]:K]$. Then $mn=4$. Since the inclusions in $\mathbb{Q} \subset K \subset \mathbb{Q}[\alpha]$ are strict, we have $m>1$ and $n>1$. Therefore, $m=n=2$.
Wolfram Alpha gives wrong answer for an equation
Mind the text right under the equation: Assuming the principal root  |  Use the real‐valued root instead Click the real-valued root and you get the $3$ real roots $\,\{-1,0,1\}\,$, and the graph over the entire $\mathbb{R}$. [ EDIT ]  To clarify what happens under the default Wolfram Alpha interpretation of "assuming the principal root", the equation is taken to be in complex numbers, and $x^{\frac{1}{5}}$ is assumed to be the principal value of the $5^{th}$-root complex power function. With the usual choices for the branch cut along the negative real axis and the principal root being the one with the minimum argument, the principal value of $(-1)^{\frac{1}{5}}$ is $w = \text{cis}\;\frac{\pi}{5}$. This value does not satisfy the equality $w - w^5 = 0$ and therefore WA does not return it as a root of $x^{\frac{1}{5}} - x=0$ under the "principal root" interpretation. For related discussion and insights see for example What is the principal cubic root of −8?.
Prove given group has subgroup with order $k$
Hint: First show that $\langle x\rangle$ is normal in $G$; then consider the factor group and properties of the associated homomorphism.
Open Sets of $\mathbb{R}^1$ and axiom of choice
It is not necessary to pick more than one point in each interval, but

- it is harmless to pick more than one, or even all of them, and
- there is a simple algorithm for choosing one from each interval, so no reliance on the axiom of choice is needed.

Just take any computable enumeration of the rationals, and let the first rational to fall in a particular interval be the one chosen for that interval.
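A minimal sketch of that algorithm in Python; the enumeration order below is one arbitrary choice among many:

```python
from fractions import Fraction

def rationals():
    """Enumerate the rationals (with repetitions): 0, then +/- p/q by p + q."""
    yield Fraction(0)
    s = 2
    while True:
        for p in range(1, s):
            r = Fraction(p, s - p)
            yield r
            yield -r
        s += 1

def pick(a, b):
    """First enumerated rational in the open interval (a, b); assumes a < b."""
    for r in rationals():
        if a < r < b:
            return r

print(pick(0.2, 0.3))   # 1/4
```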
Calculating two perpendicular vectors between two points (maybe vector decomposition?)
Given your initial two points $P_1$ and $P_2,$ carefully write their $x$-$y$ coordinates and find the midpoint of the line segment between them; call that point $C.$ The $x$ coordinate of $C$ is half the sum of the $x$-coordinates of $P_1$ and $P_2,$ and the $y$ coordinate of $C$ is half the sum of the $y$-coordinates of $P_1$ and $P_2.$ The circle centered at $C$ with radius half the distance between $P_1$ and $P_2$ passes through both points. Any line passing through $P_1$ does one of three things: (A) it is perpendicular to the segment $P_1 P_2,$ in which case your task is impossible; (B) it passes directly through $P_2$ (through $C$ halfway along), in which case you are done; (C) it meets the circle at some second point $Q,$ in which case the line segment from $P_1$ to $Q$ and the line segment from $Q$ to $P_2$ are perpendicular. So you need to refresh your skills in finding the equation describing a circle, given center and radius. Then you need to figure out how to write the equation of a line, given the "angle" you are talking about when the line leaves $P_1.$ Finally, you need to figure out how to find the intersection point $Q.$ Notes: if I am correct about the meaning of your "angle," the following are useful. Suppose the point $P_1$ has coordinates $(x_1,y_1),$ and your directed line leaves that point at angle $\beta.$ Then the parametrized version of the path, with parameter $t,$ is $$ x = x_1 + t \cos \beta, \; \; y = y_1 + t \sin \beta. $$ The equation version of the path is $$ (x-x_1) \sin \beta = (y-y_1) \cos \beta. $$
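Here is a short Python sketch of the whole computation; the function name `corner_point` and the closed-form intersection are my additions, not from the question:

```python
import numpy as np

def corner_point(p1, p2, beta):
    """Given points p1, p2 and a launch angle beta at p1, return the point Q
    where the ray from p1 meets the circle with diameter p1-p2 again, so that
    P1-Q and Q-P2 are perpendicular (Thales). Returns None in the impossible
    case (A), i.e. the ray perpendicular to p1p2 / tangent to the circle."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = np.array([np.cos(beta), np.sin(beta)])   # unit direction of the ray
    c = (p1 + p2) / 2                            # center of the circle
    # Substitute x = p1 + t d into |x - c| = |p1 - c|:
    # t^2 + 2 t d.(p1 - c) = 0  =>  t = -2 d.(p1 - c)   (t = 0 is p1 itself)
    t = -2 * d.dot(p1 - c)
    if np.isclose(t, 0):
        return None
    return p1 + t * d

q = corner_point((0, 0), (4, 0), np.pi / 4)
print(q)  # [2. 2.]  — and (Q - P1).(Q - P2) = 0
```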
Solution of transcendental complex function
HINT: you will have $$(x+iy)^2+2(x+iy)+2e^{-x}(\cos(y)+i\sin(y))=0,$$ so you must solve the system $$x^2-y^2+2x+2e^{-x}\cos(y)=0$$ $$2xy+2y+2e^{-x}\sin(y)=0.$$ By a numerical method we obtain $$x \approx -3.293017589,\quad y \approx 6.912667999.$$
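To reproduce the numerical root, a minimal scipy sketch (the starting point is my choice, near the quoted root; other starting points may find other roots):

```python
import numpy as np
from scipy.optimize import fsolve

def system(v):
    x, y = v
    return [x**2 - y**2 + 2*x + 2*np.exp(-x)*np.cos(y),
            2*x*y + 2*y + 2*np.exp(-x)*np.sin(y)]

root = fsolve(system, x0=[-3.0, 7.0])
print(root)  # approx [-3.293017589, 6.912667999]
```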
Orthogonal projection of a vector onto convex set
I assume that we are working on a real inner product space. The orthogonal projection of a vector $v$ on a convex set $C$ is a vector $v^\star\in C$ such that, for each $w\in C$,$$\bigl\langle v-v^\star,w-v^\star\bigr\rangle\leqslant0.$$It can be proved that, if the space is a Hilbert space and if $C$ is not only convex but also closed, then, for each $v$, $v^\star$ exists and it is unique.
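As a concrete illustration, take $C = [0,1]^n \subset \mathbb{R}^n$, where the projection is coordinatewise clipping; a minimal numpy sketch checking the variational inequality at random points of $C$:

```python
import numpy as np

rng = np.random.default_rng(0)

# C = [0,1]^n is closed and convex; the projection is coordinatewise clipping.
v = 3 * rng.normal(size=5)
v_star = np.clip(v, 0.0, 1.0)

# Check <v - v*, w - v*> <= 0 for many random w in C.
ws = rng.random((10000, 5))
print(np.max((ws - v_star) @ (v - v_star)))  # <= 0
```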
Properties of joint entropy
By standard properties, $$H(X,Y) = H(X) + H(Y\mid X) = H(Y) + H(X\mid Y).$$ Assume that $\max\{H(X),H(Y)\} = H(X)$. Then you require $$H(X,Y) \ge H(X) \iff H(X) + H(Y\mid X) \ge H(X) \iff H(Y\mid X) \ge 0,$$ which holds because the entropy of a discrete random variable (in contrast to the differential entropy of a continuous one) is always non-negative, and the same is true of conditional entropy. An analogous argument works if $\max\{H(X),H(Y)\} = H(Y)$. QED
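A quick numerical check of the inequality on a random joint distribution (a sketch; entropies in bits):

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
pxy = rng.random((4, 5))
pxy /= pxy.sum()          # a random joint distribution of (X, Y)

Hxy = H(pxy.ravel())
Hx, Hy = H(pxy.sum(axis=1)), H(pxy.sum(axis=0))
print(Hxy >= max(Hx, Hy))  # True
```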
Conformal maps from the upper half-plane to the unit disc has the form
One candidate for $G$ is $$G(z) = \frac{z-i}{z+i}$$ (the case $\theta=0$ and $\beta=i$). Then $$F(z) = (f\circ G)(z)= e^{i\theta_1} \frac{z(1-\alpha) - i(1+ \alpha)}{z(1-\bar \alpha) + i(1+\bar\alpha)} = e^{i\theta_1} \frac{1-\alpha}{1-\bar \alpha}\cdot\frac{z -\beta}{z - \bar\beta}\ ,$$ where $$\beta = \frac{i(1+\alpha)}{1-\alpha}.$$ Since $1-\bar\alpha = \overline{1-\alpha}$, we have $$\bigg| \frac{1-\alpha}{1-\bar \alpha}\bigg|=1\quad\Rightarrow\quad \frac{1-\alpha}{1-\bar \alpha} = e^{i\psi}$$ for some $\psi$. Then $$F(z) = e^{i\theta} \frac{z -\beta}{z - \bar\beta}\ ,$$ where $\theta = \theta_1 + \psi$. To be complete, let me also check that $\beta\in \mathbb H$: writing $\alpha = a+ bi$, $$\beta = \frac{-2b +(1-a^2-b^2)i}{|1-\alpha|^2} \in \mathbb H,$$ since $|\alpha|^2 = a^2+ b^2 <1$ because $\alpha\in \mathbb D$.
Equality of two complex numbers with respect to argument
If we write the complex numbers as $z_1 = a+bi$ and $z_2 = c+di$, then $z_1 = z_2$ if and only if $a = c$ and $b = d$, so $z_1 = z_2$ obviously implies $\arg(z_1) = \arg(z_2)$. However, $\arg(z_1) = \arg(z_2)$ does not imply $z_1 = z_2$. A simple counterexample is $z_1 = 2z_2$ with $z_2 \ne 0$: their arguments are equal but $z_1 \ne z_2$. (If you know some physics, you can think of $z_1$ and $z_2$ as two vectors with the same direction but different magnitudes; of course we cannot say they are equal.)
Classification of a Differential Equation relating multiple differentials
Another technique is a change of variables: Let $x = C R_1 v_{o}+C R_2 v_{in}$. Then $\dot{x} = C R_1 \dot{v_{o}}+C R_2 \dot{v_{in}}$, which gives $\dot{x} = -v_{o} = -\frac{x- C R_2 v_{in}}{C R_1} = -\frac{1}{C R_1} x+ \frac{R_2}{R_1} v_{in}.$ Solve for $x$, and recover $v_{o}$ with $v_{o} = \frac{x- C R_2 v_{in}}{C R_1}$.
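A minimal sympy sketch of this change of variables, assuming (hypothetically) a constant input $v_{in} = V$ so that the ODE has a simple closed form:

```python
import sympy as sp

t = sp.symbols('t')
C, R1, R2, V = sp.symbols('C R_1 R_2 V', positive=True)
x = sp.Function('x')

# x' = -x/(C R1) + (R2/R1) v_in, with a constant input v_in = V
ode = sp.Eq(x(t).diff(t), -x(t) / (C * R1) + (R2 / R1) * V)
sol = sp.dsolve(ode, x(t))
print(sol)                      # x(t) = C*R2*V + C1*exp(-t/(C*R1))

# Recover the output: v_o = (x - C R2 v_in) / (C R1)
v_o = sp.simplify((sol.rhs - C * R2 * V) / (C * R1))
print(v_o)
```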
Linear combination of Laplace eigenfunctions
Since $\Delta$ is self-adjoint, the eigenfunctions $e_j$ can be taken to be orthogonal; so any linear combination $f = \sum_{j>0} a_j e_j$ satisfies $$\int_\Omega f = \sum_{j>0} a_j \int_\Omega e_j = C\sum_{j>0} a_j \langle e_j, e_0 \rangle_{L^2} = 0$$ since $e_0$ is constant. (Here $C$ is just the constant such that $Ce_0(x) = 1.$) Thus $f$ must change sign unless it is identically zero.
Classification of principally polarized abelian surfaces
There is a beautiful theorem, the Matsusaka–Ran criterion, that can be used to give a direct proof of what you are saying. It states the following: let $X$ be a $g$-dimensional abelian variety with polarization $\mathcal{O}(D)$, and let $C=\sum_{i=1}^{l} r_iC_i$ be an effective algebraic $1$-cycle such that $C$ generates $X$ and $(C\cdot D)=g$. Then $r_i=1$ and $C_i$ is smooth for each $i$, and there exists an isomorphism $\psi:J(C_1)\times\cdots\times J(C_l) \simeq X$. Now let $X$ be an abelian surface with principal polarization $\mathcal{O}(D)$, so that $D$ is an effective algebraic $1$-cycle. Since $\mathcal{O}(D)$ is ample, the Kodaira vanishing theorem gives $0=H^i(X,\mathcal{O}(D)\otimes K_X)=H^i(X,\mathcal{O}(D))$ for $i>0$, where the last equality uses the fact that the canonical bundle of an abelian variety is trivial. Thus $\chi(\mathcal{O}(D))=h^0(\mathcal{O}(D))=1$, where the last equality uses the hypothesis that $\mathcal{O}(D)$ is a $\textit{principal}$ polarization. The Riemann–Roch theorem for surfaces tells us that $2\chi(\mathcal{O}(D))=(D\cdot D)$; combining this with the above, we conclude that $(D\cdot D)=2$. Thus we are in a position to apply the Matsusaka–Ran criterion. If $D$ is irreducible, it tells us that $D$ is a smooth curve and $X=J(D)$. Since $\dim J(D)=\dim X=2$ and $\dim J(D)=g(D)$, we deduce that $D$ has genus two. If $D$ is reducible, then for dimension reasons the only possibility is $X\simeq J(D_1)\times J(D_2)$, with $D=D_1+D_2$ and $J(D_i)$ of dimension one for $i=1,2$. This implies that the $D_i$ are both elliptic curves, so $J(D_i)\simeq D_i$, thus $X\simeq D_1\times D_2$, and we are done.
Explain the following behavior
You are looking at numbers in one decade. Benford's Law applies to numbers which occupy a large number of decades. From the Wikipedia article: "It tends to be most accurate when values are distributed across multiple orders of magnitude." Furthermore, your data was generated to have a uniform distribution, which does not represent a natural, scale-independent distribution of numbers.
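A small simulation makes the contrast vivid (a sketch; the exact ranges are arbitrary choices):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def leading_digit_counts(xs):
    return Counter(int(str(x)[0]) for x in xs)

# Uniform over one decade: leading digits come out roughly uniform.
uniform = rng.integers(10000, 100000, size=100000)
# Spread over many orders of magnitude (log-uniform): Benford-like.
spread = np.floor(10 ** rng.uniform(0, 8, size=100000)).astype(int)

print(leading_digit_counts(uniform))  # each digit near 1/9 of the total
print(leading_digit_counts(spread))   # digit d near log10(1 + 1/d)
```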
Prove that $(f_n)$, $f_n =x^n$, $x \in (0,1)$ is not uniformly convergent on $(0,1)$
For every $n \in \mathbb{N}$, \begin{align} \lim_{x\to1^-} f_n(x) = \lim_{x\to1^-}x^{n} = 1. \end{align} In other words, as $x$ gets arbitrarily close to $1$, $f_n(x)$ also gets arbitrarily close to $1$. Hence \begin{align*} \|f_n\|_{\infty} = \sup_{x\in(0,1)} |x^n - 0| = 1 \quad \forall n \in \mathbb{N}, \end{align*} while the pointwise limit of $(f_n)$ on $(0,1)$ is the zero function. Since $\|f_n - 0\|_{\infty} = 1 \not\to 0$ (take, e.g., $\epsilon = \tfrac{1}{2}$), the convergence is not uniform on $(0,1)$.
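Numerically, evaluating each $f_n$ at a point of $(0,1)$ close to $1$ shows the sup norm staying bounded away from $0$ (a quick sketch):

```python
for n in (10, 100, 1000, 10000):
    x = 1 - 1 / (10 * n)     # a point of (0,1), closer to 1 as n grows
    print(n, x ** n)          # tends to exp(-1/10) ~ 0.905, never near 0
```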
For fractional ideal $I$ why is $I\cap R \supsetneq \{0\}$?
It follows from the very definition: $I$ is a fractional ideal if it is an $R$-module s.t. there is some $0\neq r\in R$ with $rI\subseteq R$. So pick $0\neq a\in I$: then on one hand $ra\in I$ because $I$ is an $R$-module, and on the other hand $ra\in R$ because $I$ is a fractional ideal. Since $ra\neq 0$, you have the claim.
Coproduct of groups explanation
There is a coproduct in the category of groups, namely the free product. Perhaps you mean the category of finite groups, which does not have a coproduct. Lang says that the coproduct in the category of groups is not the product; in any abelian category finite products coincide with finite coproducts, e.g., in the category of abelian groups.
Show $A_q(n,n-1)=q$, if $n > \binom{q+1}{2}$
Let $C$ be a set of $q+1$ words, and suppose by way of contradiction that each pair of words agrees in at most one position. There are $\binom{q+1}2$ pairs of words and $n$ positions, so since $n>\binom{q+1}2$, there would exist a position in which no two codewords agree. Can you take it from here?
$w = RsR'$, solve for $R$ where $w$, $s$ are known, symmetric and PD; $R$ is orthogonal with $\det(R)=1$
Here is my guess: if you multiply both sides by $R$ on the right, you get $$wR - Rs = 0,$$ which is a special case of the Sylvester equation: https://en.m.wikipedia.org/wiki/Sylvester_equation You can form a linear system to solve for $R$ using matrix vectorization: https://en.m.wikipedia.org/wiki/Vectorization_(mathematics) $$A r = 0,$$ where $r$ is a column vector. The $r$ with minimal norm (the best solution in the least-squares sense) is the right-singular vector of $A$ associated with the smallest singular value. You can find it using the SVD: https://en.m.wikipedia.org/wiki/Singular-value_decomposition The min-norm SVD solution should enforce $RR' = I$.
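Here is a rough numpy sketch of this recipe (my construction under the same guess; note the null space of $A$ can have dimension greater than one, so the recovered $R$ is only one of several valid solutions, and a final polar projection is used to enforce orthogonality):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Build a consistent test instance: w = R0 s R0' with R0 a rotation.
q, _ = np.linalg.qr(rng.normal(size=(n, n)))
R0 = q if np.linalg.det(q) > 0 else q @ np.diag([1] * (n - 1) + [-1])
s = np.diag([3.0, 2.0, 1.0])                # symmetric PD
w = R0 @ s @ R0.T

# vec(w R - R s) = (I kron w - s^T kron I) vec(R), column-major vec.
A = np.kron(np.eye(n), w) - np.kron(s.T, np.eye(n))
_, _, Vh = np.linalg.svd(A)
R = Vh[-1].reshape(n, n, order='F')         # a null vector of A, reshaped

# The null vector is determined only up to scale, so project to the nearest
# rotation to enforce R R' = I, det(R) = 1 (polar decomposition + sign fix).
U, _, Vt = np.linalg.svd(R)
d = np.sign(np.linalg.det(U @ Vt))
R = U @ np.diag([1.0] * (n - 1) + [d]) @ Vt

print(np.allclose(w @ R, R @ s))            # True: wR = Rs holds
print(np.allclose(R @ R.T, np.eye(n)))      # True: R is orthogonal
```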
Why $\dim \operatorname{supp} M_{m}\ge \dim \operatorname{supp} M_{m'}$?
There are lots of ways (all more or less related) to prove the claimed inequality. I'm not sure I can completely reconstruct the details of Jacob's approach from what you've written, and you should probably just ask him, or someone else in the class, for the details. But here is an attempt: first of all, the conclusions you are drawing from the argument about Artinian rings don't seem to quite make sense (e.g. $R'/mR$ won't be a ring, but just an $R$-module, unless $R = R'$; probably you mean $R'/mR'$; but then what is meant by $R'/m$ on the right-hand side of the purported isomorphism?). The correct conclusion is that $R'_{m'}/m R'_{m'}$ is a direct factor of $R'/m R'$, and hence that $R'_{m'}$ is a direct factor of $R'_{m}$. Tensoring both sides with $M$ over $R'$, we find that $M_{m'}$ is a direct factor of $M_m$, and hence the support of the former is contained in the support of the latter. The statement about dimensions immediately follows. If you've not thought about this kind of thing before, you might want to consider some simple examples, such as $R = \mathbb Z$, $R' = \mathbb Z[i]$, $m = (5)$, and $m' = (2+i)$. Now try to find examples of $M$'s for which the inequality is actually an equality, and others for which the inequality is strict.
Derive necessary and sufficient conditions meaning
You showed that $[\mathbb{X},\mathbb{Y}]=0 \implies \frac{dq(x,y)}{dx}=\frac{dp(x,y)}{dy}$ and $r(z) \neq 0$. All that remains is to show the converse: that $\frac{dq(x,y)}{dx}=\frac{dp(x,y)}{dy}$ and $r(z) \neq 0$ together imply $[\mathbb{X},\mathbb{Y}]=0$.
For an invertible $n$-by-$n$ matrix $M$ show the transpose is also invertible.
Since $M$ is invertible, $MM^{-1}=I$. Transposing both sides produces $(MM^{-1})^t=(M^{-1})^tM^t=I^t=I$, so $M^t$ is invertible with inverse $(M^{-1})^t$. That is, $(M^t)^{-1}=(M^{-1})^t$.
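If you want a quick numerical confirmation, a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))           # almost surely invertible
lhs = np.linalg.inv(M.T)
rhs = np.linalg.inv(M).T
print(np.allclose(lhs, rhs))          # True: (M^t)^{-1} = (M^{-1})^t
```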
Comparison between two binomial lotteries
Approximate solution: if we approximate the lotteries by normal distributions, we just have to solve for the $\hat p$ which makes the two Z-scores equal. To be precise, we approximate $$B(n,p) \sim N \left( np, \sqrt{np(1-p)} \right).$$ We want the $\hat p$ for which $$\frac {k_1-n_1 \hat p}{\sqrt{n_1 \hat p (1 - \hat p)}} = \frac {k_2-n_2 \hat p}{\sqrt{n_2 \hat p (1 - \hat p)}}$$ (where we have assumed, for simplicity, that $k_1$ and $k_2$ are both above their respective means). This is easily solved; trusting that everything went through without error, we end up with $$\hat p = \frac{k_2 \sqrt{n_1} - k_1 \sqrt{n_2}}{n_2 \sqrt{n_1} - n_1 \sqrt{n_2}}.$$ This will not exist for all values; in particular we need the numerator to be positive (and the whole expression to be between $0$ and $1$).
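A small Python sketch of this formula, with made-up values of $k_i$ and $n_i$; the last lines check that the two Z-scores really agree at $\hat p$:

```python
import numpy as np

def p_hat(k1, n1, k2, n2):
    """Break-even p under the normal approximation (both k's above the mean)."""
    num = k2 * np.sqrt(n1) - k1 * np.sqrt(n2)
    den = n2 * np.sqrt(n1) - n1 * np.sqrt(n2)
    return num / den

p = p_hat(k1=60, n1=100, k2=220, n2=400)
print(p)  # 0.5 for these numbers
z1 = (60 - 100 * p) / np.sqrt(100 * p * (1 - p))
z2 = (220 - 400 * p) / np.sqrt(400 * p * (1 - p))
print(np.isclose(z1, z2))  # True
```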
Ito's formula, and the relationship between dt and dB(t)
For suitable $\mu$ and $\sigma$, remember that $$ dX_t = \mu(t, X_t)\, dt + \sigma(t, X_t)\, dB_t, \qquad X_0 = x, $$ is shorthand for $$ X_t = x + \int_0^t \mu(s, X_s)\, ds + \int_0^t \sigma(s, X_s)\, dB_s. $$ Itô's formula for a suitable $f$ tells us that $$ f(t, X_t) = f(0, x) + \int_0^t f_t(s, X_s)\, ds + \int_0^t f_x(s, X_s)\, dX_s + \frac 1 2 \int_0^t f_{xx}(s, X_s)\, d \langle X \rangle_s, $$ where $\langle X \rangle = (\langle X \rangle_t)_{t \geq 0}$ is the quadratic variation process of $X$ and is equal to $$ \langle X \rangle_t = \left \langle \int_0^\cdot \sigma(s, X_s)\, dB_s \right \rangle_t = \int_0^t \sigma^2(s, X_s)\, d \langle B \rangle_s = \int_0^t \sigma^2(s, X_s)\, ds. $$ The multiplication rules in Øksendal are stated in the way you wrote, so that the version of Itô's formula in that book yields the correct result. In essence, this is because the quadratic variation of a Brownian motion is $\langle B \rangle_t = t$; it is a good exercise to verify this for yourself. I hope this answers your question.
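To make the last point concrete, here is a minimal simulation (not from Øksendal) showing that the sum of squared Brownian increments over $[0,T]$ is close to $T$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 1_000_000
dB = rng.normal(0.0, np.sqrt(T / N), size=N)   # Brownian increments
qv = np.sum(dB ** 2)                            # sum of (dB)^2 over [0, T]
print(qv)    # close to T = 1, illustrating <B>_t = t
```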