How to show inductive principle doesn't work for this system? | I have to determine which of the Peano axioms fail, and prove why, for a set $\mathbb{N}$ with base case $1$, where $S(1) = 2$, $S(2) = 3$, $S(3) = 1$, and $S(n) = n + 1$ for all $n \geq 4$.
Assuming you have $S(4)=5, S(5)=6, S(6)=7, ...$
Some of the Peano Axioms do hold:
$1\in \mathbb{N}$
$S: \mathbb{N} \to \mathbb{N}$
$S$ is injective
The two remaining axioms do not hold.
As you point out, $\forall x\in \mathbb{N}:S(x)\ne 1$ is falsified since you have $S(3)=1$.
The induction principle does not hold (intuitively) because the number 4 is not a successor of anything, even though it has to be a successor of something in order for the induction principle to work.
Peano's axioms do not require that every number but $1$ has a predecessor, although induction can be used to prove this.
Hint as to the actual reason why this function $S$ does not satisfy the induction axiom:
Let $P=\{1, 2, 3\}$.
$1\in P$.
$k\in P \implies S(k)\in P$
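For readers who want to check the hint mechanically, here is a minimal Python sketch (the dictionary in S encodes the modified successor function from the question):
# P = {1, 2, 3} contains 1 and is closed under S, yet P is a proper
# subset of N -- exactly the failure of the induction axiom.
def S(n):
    return {1: 2, 2: 3, 3: 1}.get(n, n + 1)

P = {1, 2, 3}
assert 1 in P
assert all(S(k) in P for k in P)   # P is closed under S
assert 4 not in P                  # ...but P is not all of N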
ADDED:
That was not very clear, I'm afraid. Induction will not hold on $\mathbb{N}$ with your successor function $S$ because we have a subset $X=\{4, 5, 6, ...\}$ the elements of which are not connected in any way to the subset $P=\{1, 2, 3\}.$
We have:
$1\notin X$
$\forall x\in X:\neg \exists y\in \mathbb{N}:[y\notin X\land S(y)=x]$
If we have $P=\{1,2,3\}$, then it can be shown that:
$$1\in P \land \forall x\in P: S(x)\in P\land \exists x\in \mathbb{N}: x\notin P$$
Generalizing...
$$\exists P\subset \mathbb{N} : [1\in P \land \forall x\in P: S(x)\in P\land \exists x\in \mathbb{N}: x\notin P]$$
Or equivalently...
$$\neg\forall P\subset \mathbb{N}:[(1\in P \land \forall a\in P: S(a)\in P) \implies P=\mathbb{N}]$$
This is the negation of the induction axiom.
Also see my answer at Are there natural numbers that are not the descendant of 0? |
Closed Form E[exp(x'Ax)] | This can be solved using the general solution of the Gaussian integral
$$
\int\!d^nx\,\exp\Bigl(-\frac12 x^T A x\Bigr) = \sqrt{\frac{(2\pi)^n}{\det A}}.$$
In your case, we have that
$$\mathbb{E}\left[e^{x^{T}Ax}\right]
= \sqrt{\frac{1}{(2\pi)^n \det \Sigma}} \int\!d^nx \exp\Bigl(-\frac{1}{2} x^T \Sigma^{-1} x\Bigr) e^{x^T A x}=\sqrt{\frac{1}{(2\pi)^n \det \Sigma}} \int\!d^nx \exp\Bigl(-\frac{1}{2} x^T (\Sigma^{-1} -2 A) x\Bigr) .$$
The expectation value is convergent if $\Sigma^{-1} -2 A$ is a positive definite matrix, in which case we obtain the result
$$\mathbb{E}\left[e^{x^{T}Ax}\right] = \sqrt{\frac{1}{\det \Sigma\,\det(\Sigma^{-1}- 2 A)}} = \frac{1}{\sqrt{\det( I- 2 \Sigma A)}}\,.$$
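As a quick sanity check of the closed form, here is a small Monte Carlo sketch in Python (the particular $\Sigma$ and $A$ are arbitrary choices that keep $\Sigma^{-1}-2A$ positive definite):
import numpy as np
rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
A = np.array([[0.1, 0.05], [0.05, -0.2]])  # keeps Sigma^{-1} - 2A positive definite
x = rng.multivariate_normal(np.zeros(2), Sigma, size=2_000_000)
mc = np.mean(np.exp(np.einsum('ij,jk,ik->i', x, A, x)))  # sample mean of exp(x^T A x)
closed = 1.0 / np.sqrt(np.linalg.det(np.eye(2) - 2 * Sigma @ A))
print(mc, closed)
The two printed values agree to Monte Carlo accuracy. |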
What is an example of a situation where AB is not subgroup of G, when A, B are subgroups of G? | A generic type of example is when $G$ is a finite group which has more than one Sylow $p$-subgroup for some prime $p.$ Suppose then that $A$ and $B$ are different Sylow $p$-subgroups of a finite group $G$, say $|A| = |B| = p^{a}$, so that $p^{a+1}$ does not divide $|G|.$ Then the SET
$AB = \{ab: a \in A, b \in B \}$ has cardinality $\frac{|A| |B|}{|A \cap B|}$ which is a power of $p$ and is at least $p^{a+1}$ since $A \cap B \neq A.$ Hence $AB$ can't be a subgroup of $G$, as the order of a subgroup of $G$ divides $|G|$ by Lagrange's theorem, whereas $p^{a+1}$
(or any higher power of $p$) does not divide $|G|$.
The example Don Antonio gives is an instance of this. Another example is given by $G = A_{5}$, $A = \langle (12345) \rangle$, $B = \langle (13245) \rangle$, where $AB$ has order $25$, so $AB$ is not a subgroup of $A_{5}.$
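For those who want to see the $A_5$ example concretely, here is a short Python sketch; permutations are tuples acting on $\{0,\dots,4\}$, so the two generators below are zero-indexed versions of $(12345)$ and $(13245)$:
from itertools import product
def compose(p, q):  # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(5))
def cyclic(gen):  # the cyclic group generated by gen
    e = tuple(range(5))
    g, group = gen, {e}
    while g != e:
        group.add(g)
        g = compose(g, gen)
    return group
A = cyclic((1, 2, 3, 4, 0))  # (1 2 3 4 5), zero-indexed
B = cyclic((2, 3, 1, 4, 0))  # (1 3 2 4 5), zero-indexed
AB = {compose(x, y) for x, y in product(A, B)}
print(len(A), len(B), len(AB))  # 5 5 25
It prints $|AB| = 25$, which indeed does not divide $|A_5| = 60$. |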
Continuous Bivariate Random Variable, Conditional Probability Problem | Consider $$\frac{\displaystyle \int_0^{\frac{1}{2}}\int_0^{\min(x,\frac{1}{8})}{f(x,y)\,dy\,dx} }{\displaystyle \int_0^{\frac{1}{2}}\int_0^{x}{f(x,y)\,dy\,dx}}$$ though you may find the calculation easier if you split the numerator into $$\displaystyle \int_0^{\frac{1}{8}}\int_0^{x}{f(x,y)\,dy\,dx}+\int_{\frac{1}{8}}^{\frac{1}{2}}\int_0^{\frac{1}{8}}{f(x,y)\,dy\,dx} $$ |
Why do inverse function and chain rule not produce the same derivative? | $f(x)=e^x$ and $f^{-1}(x)=\ln(x)$. By direct computation we see that
$(f^{-1})'(x)=d/dx(\ln(x))=1/x$
By the inverse function rule, this should be equal to $\frac{1}{f'(f^{-1}(x))}$. Well, $f'(x)=e^x$ and hence $\frac{1}{f'(f^{-1}(x))}=\frac{1}{e^{\ln(x)}}=\frac{1}{x}$ as desired. |
Finding Points of Intersection of 2 circles | When I equate the two LHS of the equations, I get$$7x^2-7y^2-46x-114y+190=0$$
which is not what you wrote. |
Find the minimum value of an expression with three variables | Remember that for any 3 positive numbers we have $${a+b+c\over 3} \geq \sqrt[3]{abc}$$
This is the inequality between the arithmetic mean and the geometric mean (AM-GM).
Use it twice. First $a= (xy)^2$, $b=....$
and second $a= x^2$,... and you get:
$$(\frac{xy}{z}+\frac{zx}{y}+\frac{yz}{x})(\frac{x}{yz}+\frac{y}{xz}+\frac{z}{xy})\geq 9$$
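A quick numerical sanity check of the final inequality in Python (random positive triples; equality occurs at $x=y=z$):
import random
for _ in range(10000):
    x, y, z = (random.uniform(0.1, 10) for _ in range(3))
    s1 = x*y/z + z*x/y + y*z/x
    s2 = x/(y*z) + y/(x*z) + z/(x*y)
    assert s1 * s2 >= 9 - 1e-9  # the AM-GM bound, up to rounding
No assertion fires, so the minimum value $9$ (attained at $x=y=z$) checks out. |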
Show that $\lim_{t\to \infty} 1/t \; \max_{n \leq t} S_n \to E[X]$ a.s | There is nothing stochastic here: to wit, assume that a real valued sequence $(s_n)$ is such that $s_n/n$ converges to a nonnegative limit $\ell$ and define a new sequence $(m_n)$ by $m_n=\max\{s_k;k\le n\}$, then $m_n/n$ converges to $\ell$.
The usual epsilon-delta approach works. |
If A and B are real orthogonal matrices how to prove that either A-B or A+B is singular? | Hint.
$\det(A+B) \det(A-B) =\det(A+B) \det(A^T-B^T) = \det((A+B)(A^T-B^T)) $
Also note that $(A+B)(A^T-B^T) = AA^T - AB^T + BA^T - BB^T = BA^T - AB^T$ is antisymmetric, and for an odd-dimensional antisymmetric matrix $M$, $\det(M)=0$.
How to solve $y’’(t)=y(t)+t$ | First solve the homogeneous part $y''-y=0$., put $y=e^{mt}$, you get $m=\pm 1$, so
$y=A e^t + B e^{-t}$. Next let $y=p t +q$ in $y''-y=t \implies 0-p t -q =t \implies p=-1, q=0$. Thius the complrtre solution is $$y= A e^{t} +B e^{-t}-t.$$ |
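A sympy cross-check of this general solution:
import sympy as sp
t = sp.symbols('t')
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(t).diff(t, 2), y(t) + t), y(t)))
This prints the general solution $y(t) = C_1 e^{-t} + C_2 e^{t} - t$, matching the above. |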
Triangularization of matrix over PID | Let $R$ be a PID with quotient field $F$, let $n$ be a positive integer, let $L\cong R^n$ be a free $R$-module of rank $n$, and let $V=F^n$, so that $L \subseteq V$ is an $R$-lattice in the $F$-vector space $V$. Given an endomorphism $A \in \mathrm{End}_R(L)$ we will call $A$ $F$-split if all the roots of its characteristic polynomial $p(t)=\mathrm{det}(t\cdot 1 -A)$ are elements of $F$ (in other words, if the characteristic polynomial splits into linear factors over $F$). We will show that, given an $F$-split endomorphism $A$ of $L$, there is some basis of $L$ with respect to which $A$ is upper triangular. (Note that since $R$ is a PID, it is in particular integrally closed, so that once we assume the roots of the characteristic polynomial are in $F$ they are a fortiori elements of $R$).
We proceed by induction on $n$. For $n=1$ the result is trivially true. In general let $v \in V$ be an eigenvector for $A$ of some eigenvalue $\lambda$ (this exists by our hypothesis). Let $L_1=F v \cap L$, which is an $A$-stable $R$-submodule of $L$. Since $R$ is a PID, any finitely generated torsion-free $R$-module is free, and hence $L_1$ and $L'=L/L_1$ are free. Moreover, the rank of $L'$ (that is, the dimension of $F \otimes_R L'$ as an $F$-vector space) is one less than the rank of $L$. The characteristic polynomial $p'$ of $A$ as an endomorphism of $L'$ satisfies
$$p=(t-\lambda)\cdot p',$$ where $p$ is the characteristic polynomial of $A$ as an endomorphism of $L$, so that our inductive hypothesis applies to $A$ acting on $L'$, and there is a basis $\overline{v_2},\dots,\overline{v_n}$ of $L'$ with respect to which $A$ is upper triangular, where $v_2,\dots,v_n \in L$. Now choosing a basis element $v_1$ of $L_1$ gives a basis with respect to which $A$ is upper triangular. This completes the proof.
The proof amounts to an algorithm for computing such a basis of $L$, provided that your PID R is given in such a way that you can compute a basis element $v_1$ of $L_1$. The first step is to compute any eigenvector $v$ in $F^n$; one must then find the least common multiple of the denominators of the coordinates of $v$ to compute $v_1$. Thus at least whenever you have some version of the Euclidean algorithm available for $R$ (e.g., in $\mathbf{Z}$ or $\mathbf{Z}[i]$, and certainly in case $R$ is a DVR such as $\mathbf{Z}_p$ or the completion of a ring of algebraic integers at a prime ideal) the proof can be made into an effective computation. |
About the C* algebras $C_0(\mathbb{N})$ | You have that $C_0(X)$ is the set of continuous functions on $X$ that vanish at "$\infty$", i.e. $f \in C_0(X)$ iff for all $\epsilon > 0$, there exists a compact $K$ such that $|f|< \epsilon$ on $X\setminus K$.
Taking $\mathbb N = X$, we have that $C_0(\mathbb N)$ must be the set of sequences $(x_n)_{n \in \mathbb N}$ such that for every $\epsilon > 0$, exists a compact set $K$ of $\mathbb N$ such that $|x_n| < \epsilon, \, \forall n \in \mathbb N \setminus K$.
What can you say about the compact subsets of $\mathbb N$? |
mapping cone and cylinder | Regarding your first question, note that for example the neighbourhood $ X \times [0,1/2) \subseteq cyl(f)$ of $X \times \{0\}$ deformation retracts onto it via the explicit homotopy $ (x,t,s) \mapsto (x,(1-s)t) $.
Regarding the question about the long exact sequence, it follows from the above by the following observations:
The mapping cylinder $ cyl(f) $ deformation retracts onto $ Y $.
The homologies of homotopy equivalent spaces (e.g. deformation retracts) are isomorphic, hence
$$
H_n(Y) \simeq H_n(cyl(f))
$$
The good pair discussed above $ (cyl(f), X \times \{0\}) $ gives rise to a long exact sequence of homology
$$
\ldots \rightarrow H_{n+1}(cyl(f),X \times \{0\})\rightarrow H_n (X \times \{0\})\rightarrow H_n(cyl(f))\rightarrow H_n(cyl(f),X \times \{0\}) \rightarrow \ldots
$$
Therefore, by using the last comment you wrote down, that is that
$$
H_n(cyl(f),X)=H_n(cone(f))
$$
one obtains the long exact sequence above. |
Rotation of a bar problem | First we need to find $\theta$ and $\phi$:
$$
\begin{align}
\alpha&=\arccos{\left(\frac{\vec{B}'\cdot\vec{B}}{||\vec{B}||^{2}}\right)}\\
\\
\theta&=\alpha\frac{\vec{B}\times\vec{B}'}{\left|\left|\vec{B}\times\vec{B}'\right|\right|}\cdot\hat{i}\\
\phi&=\alpha \frac{\vec{B}\times\vec{B}'}{\left|\left|\vec{B}\times\vec{B}'\right|\right|}\cdot\hat{j}
\end{align}
$$
Now define the rotation matrix
$$
\begin{align}
G&=
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos{\left(\theta\right)} & -\sin{\left(\theta\right)} \\
0 & \sin{\left(\theta\right)} & \cos{\left(\theta\right)}
\end{bmatrix}
\begin{bmatrix}
\cos{\left(\phi\right)} & 0 & \sin{\left(\phi\right)} \\
0 & 1 & 0 \\
-\sin{\left(\phi\right)} & 0 & \cos{\left(\phi\right)}
\end{bmatrix}
\end{align}
$$
The rotated axes are:
$$
\begin{align}
\begin{bmatrix}
\hat{x}' & \hat{y}' & \hat{z}'
\end{bmatrix}
&=\left[G\right]
\end{align}
$$ |
Computing the correlation between two random variables | To compute $\mathsf{E}[XY]$, we have for $0 < \alpha < 1$,
\begin{align}
\mathsf{E}[XY] &= \mathsf{E}[XY \mid X < \alpha] \cdot \Pr(X < \alpha) + \mathsf{E}[XY \mid X \geq \alpha] \cdot \Pr(X \geq \alpha) \\
&= \mathsf{E}[X^2 \mid X < \alpha] \cdot \Pr(X < \alpha) + \mathsf{E}[(1 + \alpha - X)X \mid X \geq \alpha] \cdot \Pr(X \geq \alpha) \\
&= \left(\int_0^{\alpha}\frac{x^2}{\alpha} dx\right) \cdot \alpha + \left(\int_\alpha^1 \frac{(1+\alpha - x)x}{(1-\alpha)}dx\right)\cdot(1-\alpha) \\
&= \int_0^\alpha x^2 dx + \int_\alpha^1 (1 + \alpha-x)xdx
\end{align}
It is left as an exercise for you to compute the result.
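If you want to check your computation, sympy will finish the exercise symbolically (keeping $\alpha$ as a parameter in $(0,1)$):
import sympy as sp
x, a = sp.symbols('x alpha', positive=True)
EXY = sp.integrate(x**2, (x, 0, a)) + sp.integrate((1 + a - x)*x, (x, a, 1))
print(sp.simplify(EXY))
This prints $\mathsf{E}[XY]$ as a polynomial in $\alpha$. |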
Why is modeling the joint distribution between many continuous random variables, obtains generalization more easily? | If one wants to model the joint distribution of 10
consecutive words in a natural language with a vocabulary V of size 100,000, there are potentially
$100000^{10} − 1 = 10^{50} − 1$ free parameters
Why is this? Well, to specify the joint distribution of 2 words is a table of $|V|^2$ numbers (probabilities of joint appearance). For each new word, you add 1 new dimension to the table. Hence, for a set of $n$ words you need to specify $|V|^n$ values, minus $1$ (because probability distributions sum to $1$).
So, ouch! That's a lot.
When modeling continuous variables, we obtain generalization more easily (e.g. with smooth classes of functions like multi-layer neural networks or Gaussian mixture models) because the function to be learned can be expected to have some local smoothness properties.
The thing about discrete distributions is that they can be exceptionally "jagged"; i.e. the probability of one configuration can be completely unrelated to the probabilities of its neighbors.
In language, for instance, there is no reason why one word should statistically appear in similar contexts to, say, the one next to it alphabetically.
Hence the explosion of parameters above. Continuous distributions, by assumption, don't have this issue.
More concretely, our problem above had $|V|^n-1$ parameters to characterize the joint distribution of $n$ variables in the discrete case. Let's suppose instead that each RV $X_i$ takes values in $\mathbb{R}^d$ rather than $V$. At first glance, this seems harder, since the number of possible values in $\mathbb{R}^d$ is larger than $|V|$ (even for $d=1$). However, what if we think that the joint distribution of $X=(X_1,\ldots,X_n)$ is well approximated by a Gaussian mixture model? Then we need only specify $k$ (the number of Gaussians), $W$ (the vector of weights, $|W|=k$), the means $\mu_j\in\mathbb{R}^{nd}$, and the covariances $\Sigma_j\in\mathbb{R}^{nd\times nd}$. This is only on the order of $k+k(nd+n^2d^2)$ parameters, roughly speaking, which is comparatively quite small!
Much of the reason for this is that large patches of space are assumed to have probabilities varying smoothly compared to their neighbors; hence, one requires many fewer parameters to characterize large patches of space. (Even the largest deep neural networks have nowhere near ${\sim}10^{50}$ parameters! Hence why we prefer to do NLP in "continuous spaces" by embedding them.)
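To make the comparison concrete, here is tiny Python arithmetic using the numbers above (the mixture size $k$ and dimension $d$ below are made-up illustrative values):
V, n = 100_000, 10
discrete = V**n - 1
k, d = 1000, 100
continuous = k + k*(n*d + (n*d)**2)
print(f"{discrete:.3e} vs {continuous:.3e}")
Roughly $10^{50}$ parameters versus roughly $10^{9}$. |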
Infinite Series Convergence and Sum | Note that your series can be written as $$ \sum_{k=1}^\infty 4^3 \big( \frac{4}{7} \big)^{k-1}= 4^3 \sum_{k=1}^\infty \big( \frac{4}{7} \big)^{k-1} = 4^3 \sum_{k=0}^\infty \big( \frac{4}{7} \big)^{k} $$
This final series is a geometric series, which converges since $ \frac{4}{7}< 1$, and its sum is $$ 4^3\, \frac{1}{1-4/7} = \frac{448}{3}.$$ |
How to prove this logical equivalence in predicate logic? | Hint
We have to work with Prenex normal form equivalences.
The first formula is equivalent to :
$((∃x)P(x) \to (∀y)Q(y)) ∧ (∃z)¬Q(z)$.
By De Morgan, the second formula is :
$¬(∃x) P(x) ∧ ¬(∀z)Q(z)$, i.e. $¬(∃x) P(x) ∧ (∃z)¬Q(z)$.
The first one implies the second : if we have that $(∃z)¬Q(z)$ holds, then it is false that $(∀y)Q(y)$ and thus also $(∃x)P(x)$ is false.
Thus : $¬(∃x) P(x)$ holds.
The second one implies the first : if $¬(∃x) P(x)$ holds, then $(∃x)P(x) \to R$ holds for any $R$ whatsoever.
Thus : $(∃x)P(x) \to (∀y)Q(y)$ holds. |
Prove about real function of bounded variation | It is clear that RHS $\leq$LHS. For the other way let $\epsilon >0$ and choose $\delta$ such that $|x-y| <\delta$ implies $|g'(x)-g'(y)| <\epsilon $. [ Possible because $g'$ is uniformly continuous]. Consider a partition $\{x_i\}$ of $X$ with $|x_{i+1}-x_i|<\delta$ for all $i$. Now write $\sum |g(x_{i+1})-g(x_i)|$ as $\sum |x_{i+1}-x_i||g'(t_i)|$ for some $t_i$ between $x_{i}$ and $x_{i+1}$ Now $\int_X |g'(t)|\, dt=\sum \int_{x_i}^{x_{i+1}} |g'(t)|\, dt$. From this show that $|\int_X |g'(t)|\, dt-\sum \int_{x_i}^{x_{i+1}} |g'(t_i)|\, dt| < \epsilon$. This gives $\int_X |g'(t)|\, dt< \epsilon + \sum |g(x_{i+1})-g(x_i)|$. Conclude the proof from this. |
Zero lower Riemann integral | Hint: Modify Thomae's function |
Vertices of intersection between N spheres | My interpretation of your question: For spheres $S_i \subset \mathbb{R}^3$, find all points (vertices) $v_j$ which lie on the intersection of at least three.
I'm ignoring this inside/outside nomenclature - perhaps it won't be too hard to split up your $v_j$ into inside/outside groups once you have them - feel free to clarify.
The problem (I'm guessing here) is that when the size of your set of spheres $\{S_i\}_{i\le N}$ is large, the number of triples of spheres $N \choose 3$ is really large.
Still, computationally, what can you do? One straightforward optimization would be to split things up into disjoint (or sufficiently disjoint) clusters of spheres, if our $S_i$ are sparse enough in $\mathbb{R}^3$. But we're left with the main piece of work: Iterate over pairs, find the circles of intersection for those who are close enough, see if anyone else lies close enough to hit, and if so we've got a new pair of points to add to our $v_i$.
There's no trickery - we're not trying to figure out if two circles intersect in $\mathbb{R}^3$ - when the centers of a sphere and a circle are close enough, we're guaranteed that intersection.
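Here is a numerical sketch of the triple-intersection step, using classical trilateration (the three spheres below are made-up inputs):
import numpy as np
def trilaterate(c1, r1, c2, r2, c3, r3):
    ex = (c2 - c1) / np.linalg.norm(c2 - c1)
    i = ex @ (c3 - c1)
    ey = c3 - c1 - i * ex
    ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(c2 - c1)
    j = ey @ (c3 - c1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    zsq = r1**2 - x**2 - y**2
    if zsq < 0:
        return []  # the three spheres have no common point
    z = np.sqrt(zsq)
    return [c1 + x*ex + y*ey + z*ez, c1 + x*ex + y*ey - z*ez]
pts = trilaterate(np.array([0., 0, 0]), 2.5,
                  np.array([3., 0, 0]), 2.5,
                  np.array([0., 3, 0]), 2.5)
print(pts)
This returns the pair of vertices (one point if the spheres are tangent, an empty list if they miss). |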
Second order non-linear partial differential equation | $$
-2u_{x}\cdot u_{y}+u\cdot u_{xy}=k
$$
HINT :
The change of function $\quad u(x,y)=\frac{1}{v(x,y)}\quad$ transforms the PDE to a much simpler form :
$$v_{xy}=-k\:v^3$$
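One can verify this change of function with sympy: substituting $u=1/v$ into the left-hand side reproduces $-v_{xy}/v^3$, so the PDE becomes $v_{xy}=-k\,v^3$:
import sympy as sp
x, y = sp.symbols('x y')
v = sp.Function('v')(x, y)
u = 1 / v
lhs = -2*u.diff(x)*u.diff(y) + u*u.diff(x, y)
print(sp.simplify(lhs + v.diff(x, y) / v**3))  # prints 0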
I doubt that a closed form exists to analytically express the general solution. It is better to consider some numerical methods.
If the boundary conditions are explicitly defined, the question has to be reconsidered according to this complementary information. |
What does the symbol $\Subset$ mean? | According to my Differential Geometry professor, it means that the closure of $V_{\alpha}$ is contained in $U_{\alpha}$.
According to Silvia Ghinassi and other sources, it generally means that the closure of $V_{\alpha}$ is a compact subset of $U_{\alpha}$, in which case the notation $V_{\alpha}\Subset U_{\alpha}$ is read "$V_{\alpha}$ is compactly contained in $U_{\alpha}$". |
Finding two projections of a vector that their resultant is the first vector | Orthogonal projection will work if the two vectors you project onto are perpendicular to each other. Vectors at $45$ degrees and $135$ degrees work because they are $90$ degrees apart, so they are perpendicular. Vectors at $70$ and $110$ degrees are only $40$ degrees apart.
But if what you really want is a vector at $70$ degrees and another at $110$ degrees (or whatever angles happen to be required at a particular time), whose vector sum (or resultant) should equal a given vector, this is a solvable problem.
Suppose you know the direction and length of the vector marked $w$ in the figure below. You also know the directions in which the vectors marked $u$ and $v$ should point, but you do not know the lengths of those vectors.
As you may know, in order for the resultant of $u$ and $v$ to be $w,$ there must be a triangle like one of the two triangles in the figure. Let's use the triangle on the right. Since you know the directions of all three vectors, you can find the angle $\alpha$ between $u$ and $w$ and the angle $\beta$ between $v$ and $w$.
The two vectors/segments marked $u$ are parallel, so you also have an angle
$\alpha$ as shown at the top of the right-hand triangle.
Finally, you can find the angle $\theta$ by knowing that the sum of angles in a triangle is always $180$ degrees.
For example, if the two unknown vectors are at $70$ degrees and $110$ degrees
and the known vector is at $85$ degrees, then
$\alpha = 85\text{ degrees} - 70\text{ degrees} = 15\text{ degrees},$
$\beta = 110\text{ degrees} - 85\text{ degrees} = 25\text{ degrees},$ and
$\theta = 180\text{ degrees} - \alpha - \beta = 140\text{ degrees}.$
(The vectors in the drawing are not at $70$ and $110$ degrees, of course.
I drew them in directions that make more room to put the letters.)
Now we bring in a little trigonometry, namely, the Law of Sines.
Applied to the right-hand triangle, the Law of Sines gives us a relationship among the three angles $\alpha,$ $\beta,$ and $\theta$ of the triangle and the three sides of length $\lVert u\rVert,$ $\lVert v\rVert,$ and $\lVert w\rVert.$
$$ \frac{\lVert v\rVert}{\sin(\alpha)} = \frac{\lVert u\rVert}{\sin(\beta)}
= \frac{\lVert w\rVert}{\sin(\theta)}.
$$
So you compute the sine of each angle, and then you know everything in that formula except $\lVert u\rVert$ and $\lVert v\rVert.$ But a little algebra tells us that
$$
\lVert u\rVert = \frac{\lVert w\rVert \sin(\beta)}{\sin(\theta)}
\qquad\text{and}\qquad
\lVert v\rVert = \frac{\lVert w\rVert \sin(\alpha)}{\sin(\theta)}.
$$
And those are the lengths you need.
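Here is the whole recipe as a short Python sketch, with the example angles above ($w$ is given length $10$ purely for illustration):
import math
deg = math.radians
w_len, w_dir, u_dir, v_dir = 10.0, deg(85), deg(70), deg(110)
alpha = w_dir - u_dir              # angle between u and w
beta = v_dir - w_dir               # angle between v and w
theta = math.pi - alpha - beta
u_len = w_len * math.sin(beta) / math.sin(theta)
v_len = w_len * math.sin(alpha) / math.sin(theta)
# check: the resultant of u and v equals w
res = (u_len*math.cos(u_dir) + v_len*math.cos(v_dir),
       u_len*math.sin(u_dir) + v_len*math.sin(v_dir))
print(res, (w_len*math.cos(w_dir), w_len*math.sin(w_dir)))
The two printed points coincide, confirming the decomposition.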
The formulas work for any angles as long as the vector $w$ is
"inside" the smaller angle between the two other vectors as shown in the figure. If $w$ is "outside" the angle then you will have to reverse one or both of the other vectors, that is, change their directions by $180$ degrees.
As a reminder, if you are doing this in a computer program or mathematical software then the sine function usually requires its input to be an angle measured in radians,
so if you have your angles in degrees you will have to convert them to radians before calling $\sin().$ |
Problems in elementary number theory and methods from physics | A "physical" approach to a possible proof of the Riemann Hypothesis: The Spectrum of Riemannium.
The idea: the zeros of $\zeta$ are "like" the energy levels of an atomic nucleus. |
Minimum value of $[a,b]$ = $[a,2a]$ | If you want to minimize $[a,b]$, you can simply choose $b = 2a$. Note: $b = a$ would have given the minimum, but since we need $b>a$, we have to look to the next integer multiple of $a$, which is $b = 2a$.
Hence, $$\frac{1}{\text{lcm}(1,2)}+\frac{1}{\text{lcm}(2,4)}+\frac{1}{\text{lcm}(4,8)}+\frac{1}{\text{lcm}(8,16)}=\frac{15}{16}$$
Thanks to @астонвіллаолофмэллбэрг and Raffaele |
Last step in proof of comparison theorem of etale and singular cohomology | I think the intended argument is as follows: Since $H^1(X_{\mathrm{\acute{e}t}},\mathbb Z/n\mathbb Z)\rightarrow H^1(X_{\mathrm{cl}},\mathbb Z/n\mathbb Z)$ is bijective for all $X$ (under the assumptions of Théorème 4.3, as then both sides parametrise $\mathbb Z/n\mathbb Z$-principal étale coverings), it suffices to show:
Lemma. For all $\xi\in H^1(X_{\mathrm{\acute{e}t}},\mathbb Z/n\mathbb Z)$ and all geometric points $x$ of $X$ there exists an étale neighbourhood $X'\rightarrow X$ of $x$ such that $\xi$ vanishes in $H^1(X'_{\mathrm{\acute{e}t}},\mathbb Z/n\mathbb Z)$.
This is a completely general fact about cohomology; see [Stacks Project, Tag 01FW] for a reference. In your situation, this can also be seen geometrically: $\xi$ parametrises a $\mathbb Z/n\mathbb Z$-principal étale covering, and any such covering is étale-locally trivial. |
Trigonometrical ratios | Hint:
As $\sec A+\tan A=P$
and $(\sec A+\tan A)(\sec A-\tan A)=1$
$\sec A-\tan A=\dfrac1P$
Can you find $\sec A,\tan A?$
or use $$4ab=(a+b)^2-(a-b)^2$$ |
Why is $-2-3(-1)^{2/3} = -5$? | How do you define $(-1)^{2/3}$? That is the crux. |
Trouble understanding topological groups. | The topology of $G\times G$ is the product topology, whose open sets are exactly the unions of sets of the form $A\times B$, with $A,B\in\tau$.
Real Analysis Monotone Convergence Theorem Question | You can get a lot of mileage from just plotting the graph of $f(x)=2+\sqrt{x-2}$. Note that $f$ is increasing and concave, the graph crosses the diagonal $y=x$ at $x=2$ and $x=3$. Now draw some pictures showing how the iteration $x_n=f(x_{n-1})$ works in the intervals $(2,3)$ and $(3,\infty)$. Finally, translate your newfound intuition into rigorous proof. You will not find that difficult, once you understand what you need to prove.
(Iteration, graphically: Start at $(x_{n-1},x_{n-1})$ on the diagonal, move vertically to the graph, hitting $(x_{n-1},f(x_{n-1}))=(x_{n-1},x_n)$, move horizontally to the diagonal, hitting $(x_n,x_n)$, repeat.)
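If you want to see the iteration numerically before writing the proof, a few lines of Python suffice:
import math
def f(x):
    return 2 + math.sqrt(x - 2)
for x0 in (2.5, 10.0):  # one start in (2,3), one start above 3
    x = x0
    for _ in range(8):
        x = f(x)
        print(round(x, 6), end=' ')
    print()
Both orbits approach $3$ monotonically, from below and from above respectively. |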
How do I complete this convergence proof? | Let $N=\max\{N_1,N_2\}$. You want to argue that
$$
|s_n-L|<\epsilon
$$
whenever $n\geq N$.
If $n\geq N$, then $n\geq N_1$ and $n\geq N_2$. Now choosing $k\geq N$, you would have
$$
n_k\geq k\geq N.
$$ |
Domain of the n composed logarithms on x. | If you denote by $\ln^{(n)}(x)$ the iterated logarithm and by ${^n}e=e^{e^{\ldots^{e}}}$ (height $n$) iterated exponentiation of the base $e$ (as per the comment), we have by definition:
$$\ln^{(n)}({^n}e)=1\Rightarrow$$
$$\ln^{(n+1)}({^n}e)=\ln(1)=0$$
Apply for $n=1$ and we get:
$$\ln(\ln(e))=0$$
So $D_2$ should be $D_2=\{x\in\mathbb{R}\colon x\ge e\}$. This however breaks the pattern, because the range of $\ln(\ln(x))$ can be negative for this case, if we extend the domain of $x$ to be $x\gt 1$.
You can't do this for higher iterates however, because negative ranges are not allowed in the domain of $\ln$. Therefore:
$$D_1=\{x\in\mathbb{R}\colon x\gt 0\}$$
$$D_2=\{x\in\mathbb{R}\colon x\gt 1\}$$
$$D_3=\{x\in\mathbb{R}\colon x\ge {^2}e\}$$
$$D_4=\{x\in\mathbb{R}\colon x\ge {^3}e\}$$
and in general ($n\ge 3$):
$$D_n=\{x\in\mathbb{R}\colon x\ge {^{n-1}}e\}$$ |
How to find points of tangency | Solve the equation $$(5x+b)^2+x^2-100=0$$ and set the discriminant equal to zero to find $b$.
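Carrying the hint out with sympy (the quadratic above comes from substituting the line $y=5x+b$ into the circle $x^2+y^2=100$):
import sympy as sp
x, b = sp.symbols('x b')
quad = sp.expand((5*x + b)**2 + x**2 - 100)
print(sp.solve(sp.Eq(sp.discriminant(quad, x), 0), b))  # [-10*sqrt(26), 10*sqrt(26)]
Each value of $b$ gives one tangent line; back-substituting into the quadratic gives the point of tangency. |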
#(A-B)=#A-#B if and only if B⊆A? | Hint: $A\setminus B=A\setminus (A\cap B)$. Also, $\left|A\cap B\right|\leq \left|B\right|$, in general, with equality if and only if $B\subseteq A$.
Hope this helps! |
About finding eigenvector of a $2 \times 2$ matrix with repeated eigenvalue | You can find $v_1 = (1,-2)^T$ and one solution of $(A-3I)v_2 = v_1$ is $v_2 = (0, 1)^T$,
so we see that $v_1,v_2$ is a basis. Since $(A-3I)v_1 = 0$ and $(A-3I)v_2 = v_1$, the operator $A-3I$ sends $v_2\mapsto v_1\mapsto 0$, so $(A-3I)^2 = 0$ (this could have been figured out from the characteristic polynomial as well, so in this case there is no need to find eigenvectors).
We see that $e^{(A-3I)t} = \sum_{k=0}^\infty {t^k \over k!} (A-3I)^k = I+(A-3I)t$.
Hence $e^{At} = e^{3t} e^{(A-3I)t} = e^{3t}(I + (A-3I)t) = e^{3t} \begin{bmatrix} 1+2t & t \\ -4t & 1-2t\end{bmatrix}$.
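As a numerical cross-check: a matrix consistent with the data above (eigenvalue $3$, eigenvector $v_1$, generalized eigenvector $v_2$) is $A=\begin{bmatrix}5&1\\-4&1\end{bmatrix}$ — this reconstruction is an assumption made only for the check — and scipy's matrix exponential agrees with the formula:
import numpy as np
from scipy.linalg import expm
A = np.array([[5.0, 1.0], [-4.0, 1.0]])  # assumed matrix consistent with v1, v2
t = 0.7
formula = np.exp(3*t) * np.array([[1 + 2*t, t], [-4*t, 1 - 2*t]])
print(np.allclose(expm(A*t), formula))  # True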
How to calculate $\int_{0}^{1}(\arcsin{x})(\sin{\frac{\pi}{2}x})dx$? | Integrate by parts to get
$$\begin{align}\int_{0}^{1} dx \:(\arcsin{x})(\sin{\frac{\pi}{2}x}) = \underbrace{-\frac{2}{\pi} \left [ \cos{\left ( \frac{\pi}{2} x\right)} \arcsin{x} \right ]_0^1}_{\text{this}=0} + \frac{2}{\pi} \int_0^1 \frac{dx}{\sqrt{1-x^2}} \cos{\left ( \frac{\pi}{2} x\right)}\end{align}$$
Now use the Fourier transform relationship:
$$\int_{-1}^1 dx \: \frac{e^{i k x}}{\sqrt{1-x^2}} = \pi J_0(k)$$
where $J_0$ is the Bessel function of the first kind. The integral is then
$$\int_{0}^{1} dx \:(\arcsin{x})(\sin{\frac{\pi}{2}x}) = J_0{\left(\frac{\pi}{2}\right)}$$
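A quick numerical confirmation with scipy:
import numpy as np
from scipy.integrate import quad
from scipy.special import j0
val, _ = quad(lambda x: np.arcsin(x) * np.sin(np.pi*x/2), 0, 1)
print(val, j0(np.pi/2))  # both are about 0.47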
EDIT
In case some of you want to see why that FT relation holds, plug the integral into the differential expression defining the Bessel function of zero order:
$$k y''+y'+k y=0$$
$$y(0)=1$$
We then get
$$k y''+y'+k y=k \int_{-1}^1 dx \; \sqrt{1-x^2} e^{i k x} + i \int_{-1}^1 dx \; \frac{x}{\sqrt{1-x^2}} e^{i k x}$$
Integrate the second integral by parts and the above expression is zero. Evaluating the integral
$$\frac{1}{\pi} \int_{-1}^1 dx \: \frac{1}{\sqrt{1-x^2}} = 1$$
verifies that the integral is in fact the Bessel function as stated.
BONUS
It turns out that the factor of $\pi/2$ - normally crucial in order to evaluate an integral like this - is nothing special at all. Using the same technique I summarized above, I get the following, more general result:
$$\int_0^1 dx \: (\arcsin{x})(\sin{k x}) = \frac{\pi}{2 k} [J_0(k)-\cos{k}] $$ |
Cutting a net around a rectangular box | You would want to make enough cuts to reduce the net to a spanning tree. The number of edges in a tree is one less than the number of vertices. So, here's what you do: find the number of vertices, $v$; find the number of edges, $e$, in the uncut net; then you can make $e-v+1$ cuts, leaving $v-1$ edges uncut. Can you work out $v$ and $e$ from $x,y,z$? |
Singular value of a partitioned matrix | Your question is badly written.
$X_1\in M_{p,n},X_2\in M_{q,n}$. We assume that $p+q\geq n$.
$X^*X=X_1^*X_1+X_2^*X_2\geq X_1^*X_1\in M_n$.
$X$ has $n$ singular values, but $X_1$ has $\min(p,n)$ singular values; if $p\leq n$, then we put the $n-p$ supplementary singular values of $X_1$ equal to $0$.
Then $\sigma_{min}(X_1)\leq \sigma_{min}(X)$ (upper bound). |
Enumeration of finite automata | In answering this question we refer to the algorithm at the MSE
link which works
for the generalized problem as well. The only difference is that the
values that go into the slots of the array/table are pairs of states
and output symbols, meaning when we transition from a certain column
on an input symbol corresponding to a row we transition to the state
(first element of the pair) and output the symbol (second element of
the pair). The action on the slots is the simultaneous action of $\pi$
and $\tau$ on the rows and columns and we now have a permutation
$\sigma$ which acts on the set of output symbols and the action on the
values is the combined action of $\tau$ and $\sigma$ on the state /
symbol pairs.
We get the following table for one output symbol.
| 1| 1| 1| 1| 1| 1| 1| 1|
| 3| 7| 13| 22| 34| 50| 70| 95|
| 7| 74| 638| 4663| 28529| 151600| 713176| 3028727|
| 19| 1474| 118949| 7643021| 396979499| 17265522590| 646203233957| 21243806443115|
| 47| 41876| 42483668| 33179970333| 20762461502595| 10831034126757463| 4844565331763027596| 1896647286212566394157|
| 130| 1540696| 23524514635| 274252613077267| 2559276179593762172| 19903050866658120066632| 132673733865643566661223817| 773869304738817313660236854435|
| 343| 68343112| 18477841853059| 3802866637652928476| 626361440405926396941497| 85973094952794304259466151418| 10114722264843500593900485682759058| 1041247439945746392774732251877428013424|
| 951| 3540691525| 19526400231564564| 81874932562648494674439| 274724907231470170012527305235| 768186632385442429091738459545921683| 1841148232300929744056375072663778725072045| 3861169308385212945415179151162048048461447621051|
For two output symbols we have
| 1| 2| 2| 3| 3| 4| 4| 5|
| 6| 44| 226| 1036| 4006| 13876| 43186| 123706|
| 22| 2038| 142336| 7775708| 341906882| 12592855970| 399366367444| 11132314379998|
| 114| 176936| 238882846| 244698934716| 200649261017386| 137143648460408272| 80366174079209158078| 41217801421317353953038|
| 538| 20943790| 694540531869| 17362195783419565| 347256965617453111707| 5787905149678353796143590| 82689320232608432438262174088| 1033688856029644143398545746261666|
| 2800| 3108818680| 3081614657394158| 2300263170022800838590| 1373710145403734491538076692| 683647218221456315461840833799588| 291623393789554111334921119339297251576| 108848103655093534827120896470552784018126133|
| 14435| 553255960308| 19368605578168164179| 510403370619400317035233276| 10760675018954199971112474584547034| 189053417206572805331242303827478007687534| 2846969183281612697167894035560332610102537605107| 37513627164757945129191686915360296965220882487348368322|
| 76312| 114776687721990| 163754994767359896315206| 175823884588034784365611422263567| 151031502945525188132621372232074129315388| 108112560585492844973667875651850996929528575835574| 66334273232261168899346826889209523621370385072001650536116| 35612941825082950044316879351953518880328546726186269125209259942000|
Three output symbols yield
| 1| 2| 3| 4| 5| 7| 8| 10|
| 6| 74| 775| 7124| 55668| 377269| 2255068| 12102178|
| 29| 7623| 1804128| 329641077| 48317584819| 5910777204447| 620630699132987| 57098016161377374|
| 190| 1501516| 10322146155| 53512221536494| 221968136483832014| 767306804276224740828| 2273639672252875423729778| 5895263464882668948056075498|
| 1289| 401371270| 101367856946674| 19243544529701850104| 2922627429145967591227933| 369897467120287921148106491100| 40127586921742103692252419866530400| 3809020901470314640315364328599642887506|
| 9673| 134138227473| 1518024410618449355| 12907594258334064169919121| 87803188849193004851359368791756| 497730359833453928180319002991414602093| 2418417068028280199534213597754694851805840225| 10281969996512134071147543063509604282591387558257520|
| 73604| 53725010241266| 32201676604966459555889| 14499308203534486200843433873288| 5222906915046943511008193569385565417541| 1567819635143439097415728431946215896270059293161| 403397426941463986598664115278880491308873007636372427413| 90819310744609116970288225981171645606992548661728301980002516662|
| 573442| 25081227120200634| 918865057207831149035535828| 25285803348327743049043999665003927370| 556668502782671968664754976635618690023788914186| 10212576716712592462402577334011641314012112279662417473469| 160593227242102911238351158110065456181421151497935704882980552606514| 2209668743041973325985756217800328983151637526070225333484395817216844313778044|
Four symbols yield
| 1| 2| 3| 5| 6| 9| 11| 15|
| 6| 81| 1183| 17320| 223743| 2527953| 25100642| 222144431|
| 29| 11676| 6064606| 2593640209| 897009602752| 259029607981273| 64163314527895517| 13915354324987224434|
| 209| 3831148| 81573276196| 1334647986999812| 17493019379544106141| 191083931326433751661244| 1789145512052234025354299479| 14658245204843197745963032946030|
| 1605| 1790644262| 1896670209705424| 1517048789183286280242| 970906913413864886205472630| 517817738821504564293534451239523| 236717123156531446119639354041331039161| 94687056373953999303903668799913187496263156|
| 14581| 1059379897194| 67316410303471722434| 3215992447007150335848738654| 122917096383192644964591012637376201| 3914970565374711299589044295533654728633307| 106880364202506644619748019682746095700393900152769| 2553144552899651934745530164746582340956543973128820263056|
| 139393| 753537775187942| 3384772964731425916075399| 11417522742490309099171117430032545| 30811161705715253062014503052903675566658545| 69288800372821565423720577304077855202305583626701885| 133558404360787903168869516536280931557107488047811301767090944| 225261750393971075099732774525570356293879964632402213718679044305894097|
| 1396571| 625251791124395555| 228938436067723951049495991006| 62929794221715160999635636523327894882612| 13838407708142508413727196626725814975774777251143465| 2535915030456565177161444959970701001632828430350354446662490458| 398324009007397248996962807526047717969141597988514641606768498289421689877| 54745234941096457415294245370001308972451724232455240696557887565208148810995582605398|
Five yield
| 1| 2| 3| 5| 7| 10| 13| 18|
| 6| 81| 1283| 23718| 427097| 7038183| 103821898| 1372476565|
| 29| 12621| 9875766| 7694431189| 5108729338005| 2866744631627614| 1383444387175373624| 584738631310521555854|
| 209| 5269634| 242293771832| 9508729532667775| 303537782294910006324| 8092008307288214998320242| 184959457244832433282602143175| 3699331066099122391214267900044654|
| 1652| 3522483774| 10830193709142911| 26326043763404282897041| 51400728418762283743166947873| 83657888529920202329649049898106090| 116710057646947398301738658574346204631684| 142468411615177769332030145694979476640229799189|
| 15851| 3145805694347| 748102205731495912974| 136208975222504119847429651282| 19858370962230255015514418124978318079| 2412787002750586428934439397030434799264061139| 251274509502830170033287481345380174207693056359521218| 22897389793260955643229220128574252798224672181928261140465132|
| 166704| 3451400880452119| 73411241836287162439679965| 1180551376563438246848941889675139885| 15191078168438817387019547141066538853359987716| 162897187467367310343607416594982652886027395559664704867| 1497241493787657622590696899117249253525915361372369634716838093562| 12041433120029892610323311791551075975557111745695889862014376128073109032711|
| 1903565| 4453493876743114141| 9696353154834682640039652745383| 15885725788645815939897203091966549890622620| 20821721985157272922019024288021084022442430164588852951| 22742872566990523157952024381064067346859577618809430167348089260678| 21292527088890116346521008056915214793265230093299707297080802600716512185635775| 17442838191172723332310678848004599133452884005515399679140805741625547114446168989072880538|
Finally we get for six symbols
| 1| 2| 3| 5| 7| 11| 14| 20|
| 6| 81| 1296| 25462| 538398| 11293138| 222523395| 4028465835|
| 29| 12695| 11328242| 12588216476| 13507531099557| 12816676023294742| 10610435880654869474| 7727294095780485593467|
| 209| 5635034| 396518228841| 29902254119865429| 1947536351902062154396| 107300432454566001311927042| 5082116300041019725568491696927| 210740137620013511032529954013222997|
| 1652| 4452248665| 28661573376513712| 168916250895768873373125| 817701868164546859278494745163| 3309982213851389919369842502624515185| 11489588802579132510260340618793545674029229| 34899323818332948931809633587657800749959429140381|
| 15981| 5147747713851| 3350282292788028229116| 1806224092274722460193800299488| 785710893213334594665452752490935409600| 285033249600409431428643990739291312182972084132| 88635922473731155883430561365483225722614035385062387241| 24117625609927898779221726509298149056270088412428821435055351917|
| 171494| 7721337186134447| 564461055370558962491069562| 32440112974696247296439224174402635608| 1495496356773389913366753876131348301821086183629| 57461231472727120738649283370058285613319924784137652332510| 1892438067444572851650149500498661434054764424790064535313952779756847| 54535174475104423211660022224834911399436311980199332329136348183225443895408945|
| 2041940| 14003166710753529537| 128580139323392617149472430498611| 905045555050578843422814928359489284108944076| 5100536012710000997786910449314715054126988193344281363091| 23954874543703392448557828429937283387539096055257084732285733625757877| 96433013296267950226465899688337115485213562485757881838921103460239405763926814829| 339676614862729029614552301296020122485910436927008569295805654935518977116532247635480871741432|
The Maple code for this was as follows.
with(combinat);
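# Cycle index Z(S_n) of the symmetric group, via the standard
# recurrence Z(S_n) = (1/n) * sum_{l=1..n} a_l * Z(S_{n-l}).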
pet_cycleind_symm :=
proc(n)
local p, s;
option remember;
if n=0 then return 1; fi;
expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
end;
pet_flatten_term :=
proc(varp)
local terml, d, cf, v;
terml := [];
cf := varp;
for v in indets(varp) do
d := degree(varp, v);
terml := [op(terml), seq(v, k=1..d)];
cf := cf/v^d;
od;
[cf, terml];
end;
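# Cycle-index monomial of the combined action: a cycle of length la
# paired with one of length lb contributes a[lcm]^(la*lb/lcm).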
cycles_prod :=
proc(cyca, cycb)
local ca, cb, lena, lenb, res, vlcm;
res := 1;
for ca in cyca do
lena := op(1, ca);
for cb in cycb do
lenb := op(1, cb);
vlcm := lcm(lena, lenb);
res := res*a[vlcm]^(lena*lenb/vlcm);
od;
od;
res;
end;
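# Burnside count of non-isomorphic automata: for each triple of cycle
# types (slots, columns, output symbols), the inner loops compute the
# number of transition tables fixed by the induced simultaneous
# permutation, weighted by the cycle-index coefficients.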
automaton :=
proc(N, M, K)
option remember;
local idx_slots, idx_cols, idx_syms, res, a, b, c, sim, flat_sim,
sym, flat_sym, flat_a, flat_b, flat_c,
cyc_a, cyc_b, len_a, len_b, p, q;
if N > 1 then
idx_slots := pet_cycleind_symm(N);
else
idx_slots := [a[1]];
fi;
if M > 1 then
idx_cols := pet_cycleind_symm(M);
else
idx_cols := [a[1]];
fi;
if K > 1 then
idx_syms := pet_cycleind_symm(K);
else
idx_syms := [a[1]];
fi;
res := 0;
for a in idx_slots do
flat_a := pet_flatten_term(a);
for b in idx_cols do
flat_b := pet_flatten_term(b);
sim := cycles_prod(flat_a[2], flat_b[2]);
flat_sim := pet_flatten_term(sim);
for c in idx_syms do
flat_c := pet_flatten_term(c);
sym := cycles_prod(flat_b[2], flat_c[2]);
flat_sym := pet_flatten_term(sym);
p := 1;
for cyc_a in flat_sim[2] do
len_a := op(1, cyc_a);
q := 0;
for cyc_b in flat_sym[2] do
len_b := op(1, cyc_b);
if len_a mod len_b = 0 then
q := q + len_b;
fi;
od;
p := p*q;
od;
res := res +
p*flat_a[1]*flat_b[1]*flat_c[1];
od;
od;
od;
res;
end;
output :=
proc(MXN, MXM, K)
local data, N, M, fd, fname, width;
data := table();
for N to MXN do
data[N] := table();
for M to MXM do
data[N][M] := automaton(M, N, K);
od;
od;
fname := sprintf("automata-%d-%d-%d.txt", MXN, MXM, K);
fd := fopen(fname, WRITE);
for N to MXN do
fprintf(fd, "|");
for M to MXM do
width := nops(convert(data[MXN][M], base, 10));
fprintf(fd, "% *d|", width+1, data[N][M]);
od;
fprintf(fd, "\n");
od;
fclose(fd);
end; |
Prove that a constant multiplied by a Poisson random variable is not Poisson | Your observation about the gaps is right. One can look in particular at $P(L=1)$, which must be $\lambda e^{-\lambda}>0$ (for some $\lambda$) for a Poisson. But $L$ cannot be $1$ unless $a=1$. So you are right: $L$ is Poisson iff $a=1$. |
On the formula, $\pi = \frac 5\varphi\cdot\frac 2{\sqrt{2+\sqrt{2+\varphi}}}\cdot\frac 2{\sqrt{2+\sqrt{2+\sqrt{2+\varphi}}}}\cdots$ | Start with Euler's identity
$$ \frac{\sin x}{x} = \prod_{k=1}^{\infty} \cos \left(\frac{x}{2^k} \right) $$
which is readily derived from the sine angle duplication formula. Setting $ x = \pi/10 $ gives
$$ \frac{\varphi - 1}{\pi/5} = \prod_{k=1}^{\infty} \cos \left(\frac{\pi}{20 \cdot 2^{k-1}} \right) $$
$$ \frac{1}{\pi} = \frac{1}{5(\varphi - 1)} \prod_{k=1}^{\infty} \cos \left(\frac{\pi}{20 \cdot 2^{k-1}} \right) = \frac{\varphi}{5} \prod_{k=1}^{\infty} \cos \left(\frac{\pi}{20 \cdot 2^{k-1}} \right) $$
after noting that $ \varphi (\varphi - 1) = 1 $, which finishes the proof given your observation. To prove that one, just remember the cosine angle duplication identity,
$$ 2 \cos(x/2) = \sqrt{2 \cos(x) + 2} $$
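The product converges fast enough to check numerically in a few lines of Python (each loop step peels one factor off the formula in the title):
import math
phi = (1 + math.sqrt(5)) / 2
radical = phi             # innermost layer of the nested radical
prod = 5 / phi
for _ in range(30):
    radical = math.sqrt(2 + radical)      # add one more "2 + sqrt(...)" layer
    prod *= 2 / math.sqrt(2 + radical)
print(prod, math.pi)
The two printed values agree to machine precision. |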
complex main branch of a logarithmic function holomorphic correct | It is true that $\arg(x+iy) = \arctan(\frac{y}{x})$ in the right half plane, and you can show that by drawing a picture and using the Pythagorean theorem. However, the complex logarithm is only holomorphic throughout the entire cut plane $\{x+iy \mid x > 0 \text{ or } y \neq 0\}$, i.e., the entire complex plane minus the negative $x$-axis and the origin. It is noted above that only the main branch of $\arctan$ is being used, and thus only the main branch of the complex logarithm is being used as well.
It is true that this particular $f(z)$ is valid for the right half plane, but it is not defined when $x=0$, since $\arctan(\frac{y}{x})$ would not be defined. This is rectified by defining the domain of $f$ such that Re($z$) $\neq 0$.
Thus your function is holomorphic, since it deletes points where $x = 0$ and uses only the principal branch of the complex logarithm. |
Uniqueness of MLE for Poisson Regression | It is indeed unique, because the negative log-likelihood is strictly convex. I will show that here.
On a probability space $(\Omega, \mathscr{F}, P)$ let $y_i : \Omega \to \mathbb{N}$ be $n$ independent Poisson random-variables given some unknown parameters $\theta \in \mathbb{R}^m$ and corresponding "feature" vectors $b_i \in \mathbb{R}^m$. Assume that $m$ of the $n$ feature vectors are linearly-independent (this is almost always the case in practice as $n \gg m$).
The likelihood model for basic Poisson regression is:
\begin{align}
p\big{(}y_1,y_2,\ldots,y_n|\theta\big{)} &= \prod_{i=1}^n \text{Pois}\big{(}y_i;\lambda_i=e^{b_i^\intercal \theta}\big{)}\\[3pt]
&= \prod_{i=1}^n \frac{\big{(}e^{b_i^\intercal \theta}\big{)}^{y_i} e^{-\big{(}e^{b_i^\intercal \theta}\big{)}}}{y_i!}
\end{align}
The maximizing $\theta$ must be a stationary point:
\begin{align}
\frac{d}{d\theta}\log p\big{(}y_1,y_2,\ldots,y_n|\theta\big{)} &= 0\\[3pt]
\frac{d}{d\theta} \sum_{i=1}^n \log e^{y_i b_i^\intercal \theta} + \log e^{-e^{b_i^\intercal \theta}} - \log y_i! &= 0\\[3pt]
\sum_{i=1}^n \frac{d}{d\theta} y_i b_i^\intercal \theta - \frac{d}{d\theta} e^{b_i^\intercal \theta} - \frac{d}{d\theta} \log y_i! &= 0\\[3pt]
\sum_{i=1}^n y_i b_i^\intercal - e^{b_i^\intercal \theta}b_i^\intercal &= 0\\[3pt]
\sum_{i=1}^n \big{(}y_i - e^{b_i^\intercal \theta}\big{)}b_i &= 0\\[3pt]
\end{align}
For a method like gradient ascent or Newton's to maximize uniquely, we must have strict concavity:
\begin{align}
\frac{d^2}{d\theta^2}\log p\big{(}y_1,y_2,\ldots,y_n|\theta\big{)} &\overset{?}{<} 0\\[3pt]
\frac{d}{d\theta} \sum_{i=1}^n y_i b_i^\intercal - e^{b_i^\intercal \theta}b_i^\intercal &\overset{?}{<} 0\\[3pt]
- \sum_{i=1}^n e^{b_i^\intercal \theta}b_i b_i^\intercal &< 0\\[3pt]
\end{align}
We do, because the real exponential is positive-definite, $b_ib_i^\intercal$ is positive-semidefinite, and the sum contains $m$ linearly-independent rank-1 terms. Note: even if that last condition isn't met, we still have concavity ($\small{\implies}$ global maximum), just not strict concavity ($\small{\implies}$ unique global maximum).
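As an illustrative numerical check (not a proof), one can draw random features and confirm that the Hessian is negative definite at arbitrary $\theta$; the sizes below are made up:
import numpy as np
rng = np.random.default_rng(1)
n, m = 200, 3
B = rng.normal(size=(n, m))               # rows are the feature vectors b_i
for _ in range(5):
    theta = rng.normal(size=m)
    lam = np.exp(B @ theta)
    H = -(B * lam[:, None]).T @ B         # -sum_i e^{b_i^T theta} b_i b_i^T
    print(np.all(np.linalg.eigvalsh(H) < 0))  # True: strictly concave
Every draw prints True, as the derivation predicts. |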
Steps of transformation | I like to work my way from inside to outside. We're given $$y=-5-3 \sqrt{-2x-4} = -5-3 \sqrt{-2(x+2)}$$
Thus the inside most transformation is $\sqrt{x} \mapsto \sqrt{x+2}$. This shifts the function to the left by $2$.
The next inside most transformation is $\sqrt{x+2} \mapsto \sqrt{-2(x+2)}$. This corresponds to reflecting the graph through the $y$-axis and compressing it in the horizontal direction by a factor of $2$.
Then comes $\sqrt{-2(x+2)} \mapsto -3\sqrt{-2(x+2)}$. This reflects the graph through the $x$-axis and stretches it in the vertical direction by a factor of $3$.
And finally $-3\sqrt{-2(x+2)} \mapsto -5-3\sqrt{-2(x+2)}$. This shifts the graph downward by $5$.
So $\sqrt{x} \mapsto -5-3\sqrt{-2(x+2)}$ shifts the function left by $2$, then reflects across the $y$-axis and compresses in the horizontal direction by a factor of $2$, then reflects across the $x$-axis and stretches by a factor of $3$, and finally shifts down by $5$. |
possibly nonisomorphic graphs | The $4$-cycles in the graphs are
$3,4,5,9$ and $1,8,7,10$ in $G$;
$A,H,J,K$ and $B,C,D,E$ in $H$;
$a,g,f,k$ and $b,c,d,e$ and $c,d,j,h$ in $J$.
So $J$ is immediately not isomorphic to either of the others: too many $4$-cycles.
To prove that $G$ and $H$ are isomorphic, we can start building up an isomorphism. In $G$, vertices $2,6$ are not part of any $4$-cycle; in $H$, vertices $F$ and $G$ are not part of any $4$-cycle. There seems to be enough symmetry that we can just map $2$ to $F$ and $6$ to $G$ (but we might need to backtrack).
Let's try mapping $3$ ($2$'s neighbor) to $E$ ($F$'s neighbor). Then cycle $3,4,5,9$ must be mapped to cycle $B,C,D,E$ somehow; in particular, $5$ must be mapped to $C$, the other vertex on the cycle.
Also, $1$ is now forced to map to $K$ and $7$ to $H$, because those are the only neighbors left of $2$ and $6$ (and $F$ and $G$) respectively.
There is a final arbitrary choice to be made; $4$ must be mapped to a neighbor of $C$ and $E$, which can be either $B$ or $D$, but let's pick $B$. Then $9$ is mapped to $D$, $10$ must be mapped to $B$'s neighbor $A$, and $8$ must be mapped to the remaining vertex $J$.
We can check that mapping $1,2,3,4,5,6,7,8,9,10$ to $K,F,E,B,C,G,H,J,D,A$ respectively is an isomorphism by checking any edges we haven't checked.
As intuition for deciding whether $G$ and $H$ are isomorphic, I looked at what happens when we delete edges $26$ and $FG$ in them (these are identifiable graph-theoretically in both as "the only edge between the two vertices not part of any $4$-cycle", so doing this should preserve isomorphism). This leaves two vertices of degree $2$; if we replace those by edges, we get a cube graph in both cases. |
How to understand basics of vector deravative and vector field | You're right that when we take the partial derivative of $\psi$ w.r.t. $x$, we treat everything besides $x$ as constant. That's why
$$\frac{\partial}{\partial x}\left(-Kxy + f(y)\right)= \frac{\partial}{\partial x}(-Kxy) + \frac{\partial}{\partial x}(f(y)) = -Ky$$
because if some function $f$ doesn't have $x$ as an argument, it doesn't depend on $x$ at all, so the change in $f$ w.r.t $x$ is zero.
Treating $y$ as constant when you take a derivative doesn't mean that you won't have $y$'s left in the result. It's just like how
$$\frac{d}{dx} 2x = 2$$
$2$ is constant with respect to $x$ (it's constant with respect to everything) but that doesn't mean the derivative won't have any $2$'s in it. |
probability based on geometry | We can make things a lot simpler by considering a rectangle on the Cartesian plane.
The $4$ vertices of the rectangle can be denoted as $(0,0)$, $(a,0)$, $(0,b)$, $(a, b)$, where $0\leq a, b\leq 10$.
Then, the question becomes to find the probability that point $(a, b)$ lies inside the circle with radius $10$, centered at $(0,0)$.
Can you get it from here? |
Show that $\mathrm{SO}_3(\mathbb{Q}_p) \simeq \mathrm{SL}_2(\mathbb{Q}_p) $ | The thing is, as opposed to the case of $\mathbb{R}$, there are few anisotropic quadratic forms over $\mathbb{Q}_p$. In fact over $\mathbb{Q}_p$, the standard quadratic form $\langle 1,1,1\rangle$ is not anisotropic but it is isometric to $\langle 1,-1,-1\rangle$, so given your statement for $SO_{2,1}(\mathbb{R})$ this shouldn't be too surprising.
The situation is as follows (all quadratic forms are supposed to be non-degenerate) : any quadratic form $q$ over $\mathbb{Q}_p$ can be written $\langle pa_1,\dots,pa_r, b_1,\dots,b_s\rangle$ with $a_i,b_i\in \mathbb{Z}_p^*$. Then $q$ is uniquely characterized by $q_1 = \langle \overline{a_1},\dots,\overline{a_r}\rangle$ and $q_2 = \langle \overline{b_1},\dots,\overline{b_s}\rangle$ which are quadratic forms over $\mathbb{F}_p$ ; and a quadratic form over $\mathbb{F}_p$ is uniquely characterized by its dimension and its discriminant.
Now an elegant way to see that $SO_3(\mathbb{Q}_p)\simeq SL_2(\mathbb{Q}_p)$ is the following : in general, if $Q$ is a quaternion algebra over a field $K$, then the quaternion norm $N$ of $Q$ is a quadratic form, and so is its restriction $q$ to pure quaternions $Q_0$.
And it is known that the map $Q_1\to SO(Q_0,q)$ defined by $z\mapsto (x\mapsto zx\overline{z})$, where $Q_1 = \{ z\in Q\,|\, N(z)=1\}$, is an isomorphism.
EDIT : no, actually it has kernel $\{\pm 1\}$, and this is also a mistake in your question I think. What we will get is $SL_2(\mathbb{Q}_p)/\{\pm1\} \simeq SO_3(\mathbb{Q}_p)$.
Now over $\mathbb{R}$ there are two quaternion algebras : $M_2(\mathbb{R})$ and the Hamilton quaternions $\mathbb{H}$, and the corresponding $q$ are respectively $\langle 1,-1,-1\rangle$ and $\langle 1,1,1\rangle$, while the corresponding $Q_1$ are $SL_2(\mathbb{R})$ and $\mathbb{H}_1$ (which doesn't have an obvious simpler expression), which in particular gives us $SL_2(\mathbb{R})/\{\pm1\} \simeq SO_{2,1}(\mathbb{R})$.
But over $\mathbb{Q}_p$, the hamilton quaternions and the split quaternions $M_2(\mathbb{Q}_p)$ are isomorphic, precisely because $\langle 1,-1,-1\rangle$ and $\langle 1,1,1\rangle$ are isometric, so this gives $SL_2(\mathbb{Q}_p)/\{\pm1\}\simeq SO_3(\mathbb{Q}_p)\simeq SO_{2,1}(\mathbb{Q}_p)$. |
Find the maximum rate change. | You now have a function (call this $g(x,y)$) that gives the rate of change of $f$ in the direction of the $x$-axis for every point $(x,y)$. The question remains: when is $g(x,y)$ a maximum? i.e., when is the rate of change of $f$ in the direction of the $x$-axis a maximum? |
What approach to take to solve this arithmetic or algebraic puzzle? | First row: $10$, $2$, $5$
Second row: $-2$, $8$, $-3$
Third row: $8$, $5$, $2$
EDIT:
Daniel Mathias has provided a link that says we must use the numbers $1$ through $9$ and we evaluate from left to right and from top to bottom, ignoring precedence rules.
This gives us
$3\ 2\ 4$
$6\ 8\ 1$
$9\ 5\ 7$ |
Solving PDE by Canonical form transformation | If $v = u_s$, the reduced problem is
$$ (r^2 + 7 s) \dfrac{\partial v}{\partial r} - 2 r v = 0 $$
Solve this as an ordinary differential equation (treating $s$ as a constant parameter). The arbitrary constant becomes an arbitrary function of $s$. Then integrate with respect to $s$, with an arbitrary constant that is an arbitrary function of $r$. |
Distance from origin of biased random walk conditioned to be positive at time n | This quantity turns out to be $O(1)$. It turns out the necessary calculations were already performed in this 1989 paper of Arratia and Gordon, who proved a more general theorem implying that the conditional law of $S_n$ given $S_n>0$ converges to a geometric distribution with explicit parameters (which we work out below).
Let $T_n$ be the number of rightwards moves made by the random walk in reaching $S_n$ after $n$ steps, so that $n-T_n$ leftward moves were made and thus $S_n=T_n-(n-T_n)=2T_n-n$ and $T_n\sim \textrm{Bin}(n,p)$.
The following is a direct consequence of Arratia and Gordon's Theorem 2. (They obtain asymptotics for the numerator and denominator, then remark in a paragraph before the theorem why the following is implied.)
Theorem. Let $0< p<\alpha<1$ be constants, and let $k_n$ be a sequence of integers tending to infinity such that $\lim_{n\to\infty}k_n/n=\alpha$. Let $r=p(1-\alpha)/\alpha(1-p)$. Let $T_n\sim \textrm{Bin}(n,p)$. Then for all integers $i\geq 0$,
$$
\lim_{n\to\infty}\mathbb P(T_n=k_n+i\mid T_n\geq k_n)=r^i(1-r).
$$
Applying this theorem with $k_n=\lfloor n/2\rfloor+1$ and $\alpha=1/2$, we obtain that
$$
\lim_{n\to\infty}\mathbb P(T_n=\lfloor n/2\rfloor+i+1\mid T_n> \lfloor n/2\rfloor)=\frac{p^i}{(1-p)^i}\frac{1-2p}{1-p},\qquad i\geq 0.
$$
Summing these probabilities appropriately yields the limiting conditional expectation
$$
\lim_{n\to\infty}\mathbb E(T_n-\lfloor n/2\rfloor\mid T_n> n/2)=1+\frac{1-2p}{1-p}\sum_{i=0}^{\infty}\frac{ip^i}{(1-p)^i}=1+\frac{p}{1-2p}.
$$
Recalling that $S_n=2T_n-n$, we obtain that
$$
\lim_{n\text{ even}\to\infty}\mathbb E(S_n\mid S_n>0)=2+\frac{2p}{1-2p},
$$
and
$$
\lim_{n\text{ odd}\to\infty}\mathbb E(S_n\mid S_n>0)=1+\frac{2p}{1-2p}.
$$ |
Is "(p AND q) OR r" logically equivalent to "p AND (q OR r)" ?? | They are not the same.
p q r │ (p∧q)∨r p∧(q∨r)
──────┼───────────────────
F F F │ F F
F F T │ T F *
F T F │ F F
F T T │ T F *
T F F │ F F
T F T │ T T
T T F │ T T
T T T │    T       T
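The table can be generated mechanically in a couple of lines of Python:
from itertools import product
for p, q, r in product([False, True], repeat=3):
    print(p, q, r, (p and q) or r, p and (q or r))
The rows where the last two columns differ are exactly the starred ones above. |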
Hilbert's Nullstellensatz: changing $\mathbb{C}$ to $\overline{\mathbb{Q}}$ and stuck in proof | Just because (the image of) $x_1$ is invertible in $\mathbb C[x_1,\ldots,x_n]/M$, this doesn't imply that this inverse can be written as a polynomial in $x_1$.
Rather, the argument is that since $\mathbb C[x_1]$ embeds via $\pi_1$ into
the field $\mathbb C[x_1,\ldots,x_n]/M$, this embedding extends to an embedding of $\mathbb C(x_1)$ into $\mathbb C[x_1,\ldots,x_n]/M$.
Now the dimension argument applies.
If we replace $\mathbb C$ by a countable field such as $\overline{\mathbb Q}$, then this dimension argument doesn't apply, and one needs a more subtle argument.
(A similar situation occurs in other contexts where one can make a countability argument over $\mathbb C$, e.g. with Quillen's Lemma, which follows from the same sort of dimension argument with $\mathbb C$ coefficients, but needs a
more subtle argument when the coefficient field is countable.) |
How do I find an equation for a boundary between a specific surface and a plane? | HINT:
To handle the zeros and poles it is suggested to treat $(a,b)$ as variables.
$$ \frac1z= \frac {VD}{C};\quad(x,y)=(a,b) ;\\ \quad z^2 =Z_1Z_2=\left( 1- \frac{(x^2-1)y}{k x (y^2-1)} \right) \cdot\left( 1- \frac{ x^2 y^2}{k x^2 (1-y^2)} \right)$$
where $ y= \pm 1, x=\pm1$ can be examined for zeros/poles. The factored functions can be considered at first as:
$$Z_1= \left( 1- \frac{(x^2-1)y}{k x (y^2-1)} \right) ,\quad Z_2=\left( 1- \frac{ x^2 y^2}{k x^2 (1-y^2)} \right) $$ |
Create Gaussian distribution from random number generator output | Usually a random sample from $U(0;1)$ is used. If you have a random sample from $U(-1;1)$, you can consider the absolute values of your output as a sample from $U(0;1)$.
Then use the probability integral transform theorem.
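A sketch of that recipe in Python, using scipy's standard normal quantile function for the inverse transform:
import numpy as np
from scipy.stats import norm
rng = np.random.default_rng(0)
u = np.abs(rng.uniform(-1, 1, size=100_000))  # |U(-1,1)| is U(0,1)
x = norm.ppf(u)                               # probability integral transform
print(x.mean(), x.std())                      # approximately 0 and 1
The resulting sample is (approximately) standard normal. |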
A very simple question: what spaces of function does the Laplace transform map from and into? | I'm not sure about the Laplace transform but in Joel L. Schiff's "The Laplace Transform: Theory and Applications" on page 13 the author proves that a large class of functions has a Laplace transform. I am not sure how to describe nicely the result in terms of domain and range of the operator, but maybe that helps.
As for the Fourier transform, you first define it with domain $L^1(\mathbb{R})$, in which case the range (by Riemann-Lebesgue lemma) will be $\mathcal{C}_0(\mathbb{R})$ (i.e. the set of functions that go to zero at infinity).
Then you can restrict the domain to $L^1\cap L^2$ and notice that this contains the Schwartz space $\mathcal{S}$ which is dense in $L^2(\mathbb{R})$ and so you can extend $\mathcal{F} \colon L^2(\mathbb{R}) \to L^2(\mathbb{R})$ (so domain and range both being $L^2$) which also turns out to be an isometry.
Moreover, if you restrict your attention to $\mathcal{S}$, you get $\mathcal{F} \colon \mathcal{S}(\mathbb{R}) \to \mathcal{S}(\mathbb{R})$ and similarly $\mathcal{F}\colon \mathcal{S}'(\mathbb{R}) \to \mathcal{S}'(\mathbb{R})$, where $\mathcal{S}'(\mathbb{R})$ is the set of tempered distributions. More things can be said about the Fourier transform on $L^p$, for $p \in (1,2]$ but this is less classical and possibly less interesting. |
How to compute the different number and ways to read a given phrase forming a pile or a stack? | The simplest way to do this is to notice that:
The only way to get the correct sequence of letters is to start at the top and take a letter from each row in turn.
In making your way down row by row from the top, you always have a choice between taking the left or right letter immediately below.
You make this choice $8$ times, and have $4$ possible starting points, so the total number of routes down is
$$4\cdot 2^8=1024.$$ |
I'm not sure why this statement about functions is false? | I community wiki'd this since this is basically what Ian suggested in the comments.
Without loss of generality, restrict ourselves to integrating on the positive real line (we can "evenly" extend this to $\mathbb{R}$ if desired, since the integral will still be finite).
Fix a convergent, positive series $ \sum a_n$ and assign to each $a_i$ a triangle of area $a_i$. Choose the heights such that the $\limsup$ of the heights is positive. Then consider a function
$f:[0, \infty) \to [0, \infty)$ whose graph consists of the triangles corresponding to each $a_i$ placed side by side, properly ordered, possibly with "plateaus of zero" in between. Such a function satisfies the required conditions.
It's tedious and unnecessary to find an explicit formula. If that matters a lot to you, why not try yourself, now that you have an idea what the graph looks like? |
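That said, a rough numerical sketch is easy to put together; here is one instance of the construction, assuming $a_n = 2^{-n}$ and all heights equal to $1$ (so the $\limsup$ of the heights is $1$ while the total area is $\sum 2^{-n} = 1$):

```python
import numpy as np

# Triangle n has area 2**-n and height 1, hence base 2 * 2**-n;
# the triangles are placed side by side starting at 0.
def f(x, n_triangles=40):
    left = 0.0
    for n in range(1, n_triangles + 1):
        base = 2.0 * 2.0**-n
        if x < left + base:
            mid = left + base / 2
            return 1.0 - abs(x - mid) / (base / 2)  # tent of height 1
        left += base
    return 0.0

xs = np.linspace(0, 2, 200_001)
vals = np.array([f(x) for x in xs])
print(np.sum(vals) * (xs[1] - xs[0]))  # ~ 1, yet f does not tend to 0
```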
Scalarmultiplying to the right | It's not often used, but you can define right scalar multiplication of a matrix $M=(m_{ij})_{i,j}$ with a scalar $\lambda$ as $M\cdot \lambda := (m_{ij}\lambda)_{i,j}$. This is equal to $\lambda\cdot M$ in any commutative ring, such as $\mathbb{R}$. |
Give an example that shows that the cut property does not hold if we replace the real numbers by rational numbers | Let $A:= \{x \in \mathbb{Q}: x > \sqrt{2}\}$
Let $B:= \{x \in \mathbb{Q}: x < \sqrt{2}\}$ |
Show that $(a, b) \mapsto a + b + pab$ makes $\mathbb{Z} / p^n \mathbb{Z}$ into a cyclic group. | There is, and it's indeed very similar. We can write
$$p(a \ast b) + 1 = (pa)(pb) + pa + pb + 1 = (pa + 1)(pb + 1)$$
so now the isomorphism we want is
$$(\mathbb{Z}/p^n, \ast) \ni x \mapsto px + 1 \in (1 + p \mathbb{Z}/p^{n+1}, \times)$$
where $1 + p \mathbb{Z}/p^{n+1}$ is the subgroup of the (multiplicative) group of units of $\mathbb{Z}/p^{n+1}$ consisting of elements congruent to $1 \bmod p$. This gives the group axioms immediately but it still takes a little bit of work to show that this group is cyclic, although that's a classical argument, closely related to the existence of primitive roots. |
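A quick computational sanity check for small parameters (taking $p=3$, $n=2$ for illustration):

```python
# Check the isomorphism x -> p*x + 1 for p = 3, n = 2 (so p**(n+1) = 27).
p, n = 3, 2
mod_small, mod_big = p**n, p**(n + 1)

def star(a, b):              # the operation a*b = a + b + p*a*b on Z/p**n
    return (a + b + p * a * b) % mod_small

def phi(x):                  # the candidate isomorphism x -> p*x + 1
    return (p * x + 1) % mod_big

# Homomorphism property: phi(a*b) == phi(a)*phi(b) in (Z/p**(n+1))^x
assert all(phi(star(a, b)) == phi(a) * phi(b) % mod_big
           for a in range(mod_small) for b in range(mod_small))

# Bijection onto the units congruent to 1 mod p
assert sorted(phi(x) for x in range(mod_small)) == \
       [u for u in range(mod_big) if u % p == 1]
print("ok")
```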
Let $\alpha\geq 1$, let $a,b\in\mathbb{R}^n$, and assume that $\|b\|\leq \|a\|$. Show that $\|a-b\|\leq \|\alpha a-b\|$ | Hint. $\|\alpha a-b\|^2-\|a-b\|^2=\langle\alpha a-b,\,\alpha a-b\rangle-\langle a-b,\,a-b\rangle=(\alpha-1)\left[(\alpha+1)\|a\|^2-2\langle a,b\rangle\right]$. |
Is this polynomial in three variables irreducible? | $$x_0^3 + x_1^3 + x_2^3 - (x_0 + x_1 + x_2)^3=
[x_0^3 + x_1^3] + [x_2^3 - (x_0 + x_1 + x_2)^3].$$
Both summands are divisible by $x_0+x_1$: indeed $x_0^3+x_1^3=(x_0+x_1)(x_0^2-x_0x_1+x_1^2)$, and $x_2-(x_0+x_1+x_2)=-(x_0+x_1)$ divides $x_2^3-(x_0+x_1+x_2)^3$. So the polynomial is reducible over any field. It is in fact $0$ iff the characteristic is $3$.
Mathematical models in geology and industry | There is a great deal of mathematical modeling in industrial geology. Particularly among the large mining and oil producing companies. Of particular interest are models of seismic and electromagnetic methods of exploration. |
Compact Metric Space Question | You can use the result that tells you that a continuous function on a compact set achieves a minimum (notice that $f$ is continuous since it is a strict contraction). This gives you the existence of $a$.
You get a contradiction from choosing $x=f(a)$: minimality of $g$ at $a$ gives
$$g(a) = d(f(a),a) \leq d(f(f(a)), f(a)) = g(x),$$
while the strict contraction property gives $d(f(f(a)),f(a)) < d(f(a),a)$ (since $a \neq f(a)$), a contradiction.
Uniqueness follows easily by contradiction as well: if $x \neq y$ were both fixed points, the contraction property would give
$$d(x,y) = d(f(x),f(y)) < d(x,y).$$
Why is the 2nd derivative written as $\frac{\mathrm d^2y}{\mathrm dx^2}$? | Somewhat mundanely,
$$ \frac{d}{dx}\left(\frac{d}{dx}(y)\right) = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d\,dy}{dx\,dx} = \frac{d^2 y}{dx^2} $$ |
Linearizing a delay differential equation at an equilibrium point. | Informally, if $\dot{x}(t) = f(x(t),x(t-r))$ then we have
(using $f(x_1+h_1,x_2+h_2) \approx f(x_1,x_2) + {\partial f(x_1,x_2) \over \partial x_1} h_1 + {\partial f(x_1,x_2) \over \partial x_2} h_2 $)
\begin{eqnarray}
\dot{(x+\delta)}(t) &=& f(x(t)+\delta(t),x(t-r)+\delta(t-r)) \\
&\approx& f(x(t),x(t-r))+{\partial f(x(t),x(t-r)) \over \partial x_1} \delta(t) + {\partial f(x(t),x(t-r)) \over \partial x_2} \delta(t-r) \\
\end{eqnarray}
From which we get
$\dot{\delta}(t) = {\partial f(x(t),x(t-r)) \over \partial x_1} \delta(t) + {\partial f(x(t),x(t-r)) \over \partial x_2} \delta(t-r)$.
Substituting values, we have
$\dot{\delta}(t) = b(1-x(t-r)) \delta(t) - b x(t) \delta(t-r)$.
Substituting $x(t) = 0$ or $x(t) = 1$ yields the desired results. |
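A short symbolic check with sympy, assuming the underlying right-hand side is $f(x_1,x_2) = b\,x_1(1-x_2)$ (the logistic delay equation, which is what the coefficients above suggest):

```python
import sympy as sp

x1, x2, b = sp.symbols("x1 x2 b")
f = b * x1 * (1 - x2)    # assumed RHS of x'(t) = f(x(t), x(t - r))

df_dx1 = sp.diff(f, x1)  # coefficient of delta(t)
df_dx2 = sp.diff(f, x2)  # coefficient of delta(t - r)
print(df_dx1, df_dx2)    # b*(1 - x2) and -b*x1, matching the text

# Evaluate at the equilibria x = 0 and x = 1:
print(df_dx1.subs({x1: 0, x2: 0}), df_dx2.subs({x1: 0, x2: 0}))  # b, 0
print(df_dx1.subs({x1: 1, x2: 1}), df_dx2.subs({x1: 1, x2: 1}))  # 0, -b
```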
Balancing an acid-base chemical reaction | Let $x$ be the number of $\mathrm{Al(OH)_3}$; $y$ the number of $\mathrm{H_2SO_4}$; $z$ the number of $\mathrm{Al_2(SO_4)_3}$, and $w$ the number of $\mathrm{H_2O}$. Looking at the number of $\mathrm{Al}$, you get $x = 2z$. Looking at $\mathrm{O}$, you get $3x + 4y = 12z + w$. Looking at $\mathrm{H}$ you get $3x + 2y = 2w$; and looking at $\mathrm{S}$ you get $y = 3z$.
That looks like what you are getting from Wolfram, except you have the wrong signs for $z$ and $w$; unless you are interpreting the first two entries to represent the "unknowns", and the last two to represent the "solutions". I would translate into equations the usual way.
What you have is the following system of linear equations:
$$\begin{array}{rcrcrcrcl}
x & & & -& 2z & & & = & 0\\
3x & + & 4y & - & 12z & - & w & = & 0\\
3x & + & 2y & & & - & 2w & = & 0\\
& & y & - & 3z & & & = & 0
\end{array}$$
This leads (after either some back-substitution from the first and last equations into the second and third, or some easy row reduction) to $x=2z$, $y=3z$, and $6z=w$. Since you only want positive integer solutions, setting $z=1$ gives $x=2$, $y=3$, and $w=6$, yielding the smallest solution:
$$\mathrm{2 Al(OH)_3 + 3H_2SO_4 \to Al_2(SO_4)_3 + 6H_2O}$$ |
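The same system can also be solved mechanically as a nullspace computation; a short sympy sketch (the matrix rows are the Al, O, H, S equations above):

```python
from sympy import Matrix

A = Matrix([
    [1, 0,  -2,  0],   # Al: x - 2z = 0
    [3, 4, -12, -1],   # O:  3x + 4y - 12z - w = 0
    [3, 2,   0, -2],   # H:  3x + 2y - 2w = 0
    [0, 1,  -3,  0],   # S:  y - 3z = 0
])

basis = A.nullspace()[0]     # the solution space is one-dimensional
coeffs = basis / min(basis)  # scale so the smallest entry is 1
print(coeffs.T)              # Matrix([[2, 3, 1, 6]])
```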
How does integrating a function lead to itself? | The definite integrals $\int_{-1}^1 f(t)\,dt$ and $\int_{-1}^1 f(x)\,dx$ are the same -- the name of the dummy variable is not visible outside a definite integral. |
Limit of matrix powers. | Edit: You may use eigendecomposition. Let $A=PJP^{-1}$ where $J=J_{r_1}(\lambda_1)\oplus\cdots\oplus J_{r_s}(\lambda_s)$ is the Jordan form of $A$ and each $J_{r_i}(\lambda_i)$ is a Jordan block of size $r_i$ corresponding to the eigenvalue $\lambda_i$. Clearly, $A^m$ converges if and only if $J_{r_i}(\lambda_i)^m$ converges.
Now, consider a Jordan block $B=J_r(\lambda)$.
If $|\lambda|\ge1$ and $\lambda\neq1$, the diagonal entries of $B^m$, which are equal to $\lambda^m$, do not converge.
If $\lambda=1$ and $B$ is a nontrivial Jordan block ($r>1$), the diagonal entries of the superdiagonal of $B^m$, which are equal to $m$, diverge.
If $B$ is a $1\times1$ Jordan block corresponding to the eigenvalue $1$, clearly $B^m=1$ and $\lim_{m\to\infty}B^m=1$.
If $|\lambda|<1$, consider $C=DBD^{-1}$, where $D$ is a diagonal matrix of the form $\operatorname{diag}(\varepsilon,\varepsilon^2,\ldots,\varepsilon^n)$ with $\varepsilon>0$. $B^m$ converges if and only if $C^m$ converges. However, the effect of the conjugation $B\mapsto DBD^{-1}$ is to scale the superdiagonal of $B$ by $\varepsilon$. Therefore, when $\varepsilon$ is sufficiently small, the maximum row sum norm $\|C\|_\infty$ is strictly smaller than $1$. Hence $C^m$ and in turn $B^m$ converge to $0$.
Therefore, for any $n\times n$ complex matrix $A$,
$A^m$ converges if and only if the Jordan decomposition of $A$ has the form $P(J_{r_1}(\lambda_1)\oplus\cdots\oplus J_{r_t}(\lambda_t)\oplus I)P^{-1}$, where $|\lambda_1|,\ldots,|\lambda_t|<1$ (the identity block $I$ is void if $r_1+\cdots+r_t=n$). If this is the case, $\lim_{m\to\infty}A^m=P(0\oplus I)P^{-1}$. In particular, if all eigenvalues of $A$ lie inside the open unit disc, $\lim_{m\to\infty}A^m=0$.
If $A$ is real, since $\lim_{m\to\infty}A^m=X$ over $\mathbb{R}$ if and only if $\lim_{m\to\infty}A^m=X$ over $\mathbb{C}$, the above argument still applies and $P(0\oplus I)P^{-1}$ is real (given that $A^m$ converges) despite $P$ may be complex. |
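A quick numerical illustration of the criterion, assuming for simplicity a diagonalizable example with eigenvalues $0.5$, $-0.3$ (inside the open unit disc) and $1$:

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((3, 3))                    # generic invertible P
A = P @ np.diag([0.5, -0.3, 1.0]) @ np.linalg.inv(P)

# Predicted limit: P (0 + I) P^{-1}, keeping only the eigenvalue-1 block
limit = P @ np.diag([0.0, 0.0, 1.0]) @ np.linalg.inv(P)
print(np.allclose(np.linalg.matrix_power(A, 200), limit))  # True
```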
If $f: \mathbb{R}^2\to \mathbb{R}$ is Lipschitz, then $g(x)=f(x,a)$ too? | $$|g(x_1)-g(x_2)|=|f(x_1,a)-f(x_2,a)|\leq L\,\|(x_1,a)-(x_2,a)\|=L\,\|(x_1-x_2,0)\|=L\,|x_1-x_2|.$$ |
What is the significance of reversing the polarity of the negative eigenvalues of a symmetric matrix? | Presumably, the "true" matrix that you're measuring really is only positive semidefinite with a large kernel. Since you're doing physical measurements, the chance of getting any number exactly is nil, so all the zero eigenvalues are coming out as either tiny positive or tiny negative numbers.
If you flip the negative eigenvalues to positive ones, you're generating a positive definite matrix which behaves somewhat similarly to the "true" positive-semidefinite matrix. I don't know what you're doing with these subsequently, but you may be feeding them through code that only works properly for positive definite matrices or something.
Try this: instead of flipping the signs, replace them with randomly-generated positive numbers of roughly the same order of magnitude and see what happens. If the results are basically identical to the sign-flipping, then the problems you're having with the PSD matrix are probably some kind of numerical shenanigans. If the flipped-signs version is behaving noticeably better (whatever that means in this context), then there might be something really interesting going on.
Alternatively: How sure are you that the software you're using to find the nearest PSD matrix is working correctly? |
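If it helps to experiment, here is a small numpy sketch contrasting the two repairs; `clip_to_psd` and `flip_to_pd` are hypothetical helper names, and the "measured" matrix is simulated as PSD-with-kernel plus tiny symmetric noise:

```python
import numpy as np

def clip_to_psd(S):
    # Zero out negative eigenvalues: the nearest PSD matrix in Frobenius norm.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def flip_to_pd(S):
    # The sign-flipping heuristic discussed above.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.abs(w)) @ V.T

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 2))
noise = 1e-8 * rng.standard_normal((6, 6))
S = B @ B.T + (noise + noise.T) / 2   # rank-2 "true" matrix + measurement noise

print(np.linalg.eigvalsh(S))          # four eigenvalues near zero, random signs
print(np.linalg.eigvalsh(clip_to_psd(S)))
print(np.linalg.eigvalsh(flip_to_pd(S)))
```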
Maximizing the line integral over a line segment | You say that $\textbf{r}(1)=(a,b)$, so $a^2+b^2=100$. But $(a,b)$ is not distance $10$ from the origin, rather distance $10$ from $(2,2)$. So what you know is that $\|\langle a-2,b-2\rangle\|=10$, i.e., $(a-2)^2+(b-2)^2=100$. I think you can continue with the optimization the way you have.
There is an easier way though. Note that $\nabla f=\langle 8,6\rangle$ is a constant vector field. The line integral $\int_C \nabla f \cdot d\textbf{r}$ is by definition $\int_C \nabla f \cdot \textbf{n} \ ds$, where $\textbf{n}$ is a unit tangent vector to the curve $C$ at every point. Since $C$ is a straight line, $\textbf{n}$ is constant along the curve. So $\nabla f \cdot \textbf{n}$ is constant as well, with $\nabla f \cdot \textbf{n}=\|\langle 8,6\rangle\|\|\textbf{n}\| \cos \theta = 10\cos \theta$, where $\theta$ is the angle between $\nabla f=\langle 8,6\rangle$ and $\textbf{n}$. Thus $$\int_C\nabla f \cdot d\textbf{r}=\int_C \nabla f \cdot \textbf{n} \ ds=10\cos \theta \int_C 1 \ ds = 10 \cos \theta ~\text{length}(C)=100\cos \theta.$$
This is maximized when $\cos\theta=1$, i.e., $\theta=0$. In this case, $\textbf{n}$ is a positive scalar multiple of $\nabla f= \langle 8,6\rangle$, so $C$ goes in the same direction as $\langle 8,6\rangle$. As it happens, $\langle 8,6\rangle$ is already a vector of length $10$, so you know the endpoint of $C$ must be $\langle 2,2\rangle + \langle 8,6 \rangle = \langle 10,8\rangle$. |
On an exercise on restrictions of functions in a topological space. | Let $x\in X$ be an arbitrary point. Let's denote by $I_x=\{i\in I: x\in A_i\}.$ So, we define $f(x)=f_i(x)$ for some (any) $i\in I_x.$ This function is well-defined, since if $j\in I_x,$ $j\ne i,$ then $f_j(x)=f_i(x),$ because $f_{i|A_i \cap A_j} = f_{j|A_i \cap A_j}.$
So, there exists one and only one function $f:X\to Y$ such that $f_{|A_i} = f_i$ for all $i \in I.$ (Note that we have only used that $X = \cup_{i \in I} A_i$, not that the $A_i$ are closed or open.)
Hint for continuity
If we assume that the $A_i's$ are open then we have that $f$ is continuous. Let $x\in X$ be an arbitrary point. There exists $i$ such that $x\in A_i.$ Use that $f_{|A_i} = f_i$ and that $f_i$ is continuous to show continuity of $f$ at $x.$
Can you think of a non-continuous function if the $A_i$ are closed? For example, what if you consider $X=[0,1]$ with the Euclidean topology and each $A_i$ consists of a single point?
What are some applications of apeirogons, apeirohedra, or n-apeirotopes? | They're one case of the classification of abstract polygons (or polyhedra, or polytopes, for the higher-dimensional versions). It's not a very interesting case, but we include it so that the classification is complete.
In hyperbolic space, apeirogons can be more interesting, because they can be closer to our usual definition of a polygon: sides of the same length with the same (nontrivial!) angle at every corner. For example, here is a tiling of the hyperbolic plane by infinitely many apeirogons, which is no different in spirit from tiling the Euclidean plane by infinitely many hexagons. |
How to describe a summation of $\frac{1}{2^x3^y}$ and evaluate. | Due to the Fubini/Tonelli Theorem, you can just sum over one index first, and then over the other. That is,
$$\sum_{n = 0}^\infty \sum_{m = 0}^\infty \frac{1}{2^n 3^m} = \sum_{n = 0}^\infty \frac 1 {2^n}\left(\sum_{m = 0}^\infty \frac 1{3^m} \right) = \sum_{n = 0}^\infty \frac 1 {2^n} \left(\frac{1}{1-(1/3)}\right) = \frac 3 2 \cdot\frac{1}{1-(1/2)} = 3.$$ It'd be the exact same if you summed over $n$ first, then $m$.
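A two-line numeric check of the value, truncating both sums:

```python
# Partial double sum over a large box; converges quickly to the value 3.
total = sum(1 / (2**n * 3**m) for n in range(60) for m in range(40))
print(total)  # ~ 3.0
```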
Probability with a shrinking sample space | It can be treated as a permutation counting problem where:
$$Pr(B) = \frac{\text{permutations with $4$th letter UC}}{\text{permutations of 4 from 13}}$$
$$Pr(B) = \frac{5\cdot\frac{12!}{9!}}{\frac{13!}{9!}} = \frac{5}{13}$$ |
solution to $X^3=(1,2,3)$ | It is enough to search for a solution that is a cycle. (why?) If $z$ is such a cycle then, using $z^3 = (1,2,3)$ and the facts you wrote about the powers of cycles, what can you deduce about the length $k$ of $z$? |
Properties of matrix of linear transformation w.r.t an orthonormal basis | A useful property is that the matrix of the adjoint $T^*$ w.r.t an orthonormal basis $E$ is given by the conjugate transpose of the matrix of $T$, i.e. $$(T^*)_{(E)} = (T_{(E)})^*$$
From there you can infer:
$T$ is normal if and only if $T_{(E)}$ is a normal matrix
$T$ is self-adjoint if and only if $T_{(E)}$ is a hermitian matrix
$T$ is unitary if and only if $T_{(E)}$ is a unitary matrix |
Find the number of positive integral solutions of the equation $x_1+x_2+x_3+x_4+x_5=x_1\cdot x_2\cdot x_3\cdot x_4\cdot x_5$ | $$ 0= x_1 + x_2 + x_3 + x_4 +x_5 - x_1 x_2 x_3 x_4 x_5 \tag{1}$$
An immediate observation: the above expression is symmetric, so each unordered solution gives rise to up to $5!$ ordered solutions (fewer when entries repeat). Hence, we search for solutions after putting an ordering on our variables:
$$ 1 \leq x_1 \leq x_2 \leq x_3 \leq x_4 \leq x_5$$
From the comment by lulu(*), we can deduce that:
$$ x_1 x_2 x_3 x_4 x_5 \leq 5x_5$$
Or,
$$ x_1 x_2 x_3 x_4 \leq 5 \tag{2}$$
From $(2)$, since $x_1 x_2 x_3 \geq 1$, we get $x_4 \leq 5$; this gives the following ordering:
$$ x_1 \leq x_2 \leq x_3 \leq x_4 \leq 5 \tag{3}$$
Since the cases are small, we can find them by directly listing the four-number lists which satisfy (2) and (3):
$$ (x_1,x_2, x_3 , x_4 ) \in \{ (1,1,1,1), (1,1,1,2) ,(1,1,1,3), (1,1,1,4) , (1,1,1,5), (1,1,2,2) \} \tag{4}$$
Now consider $(1)$ and rearrange it so that we write $x_5$ as a function of the other variables:
$$ 0 = x_1 + x_2 + x_3 + x_4 + x_5(1-x_1 x_2 x_3 x_4)$$
Or,
$$ \frac{x_4 + x_2 + x_3 + x_1 }{x_3 x_4 x_2 x_1 - 1} = x_5$$
Plug the number lists into the above equation; this leads to the following set of $(x_1,x_2,x_3,x_4,x_5)$ quintuples: $\{(1,1,1,2,5),(1,1,1,3,3),(1,1,2,2,2)\}$ (as checked using a Pascal program by @Raffaele; a short Python sketch is also given at the end of this answer). I have discarded the cases $(1,1,1,1)$ and $(1,1,1,4)$, where plugging in gives an undefined or non-integer $x_5$, and $(1,1,1,5)$, where $x_5=2$ violates the ordering and merely reproduces the multiset $\{1,1,1,2,5\}$.
Now, with the solutions, I'll leave it to you to permute them and find the total :^)
(*): If the values of $x_1,x_2,x_3,x_4,x_5$ are positive integers and we impose the ordering above, then their sum is at most five times the largest of them, since each of the five summands is at most $x_5$. For example,
$$ 1 +2 +3 +4 +5 \leq 5(5).$$
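As a sanity check, here is a short brute-force search (the bound $x_1x_2x_3x_4\le 5$ derived above guarantees every entry of an ordered solution is below $11$):

```python
from itertools import combinations_with_replacement
from math import prod

# Search all nondecreasing quintuples with entries in 1..10.
sols = [t for t in combinations_with_replacement(range(1, 11), 5)
        if sum(t) == prod(t)]
print(sols)  # [(1, 1, 1, 2, 5), (1, 1, 1, 3, 3), (1, 1, 2, 2, 2)]
```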
How to prove that $\int_0^{2\pi} \cos \theta e^{\cos\theta} \cos(\sin \theta) - \sin \theta e^{\cos\theta} \sin(\sin \theta)\,\mathrm{d}\theta=0$ | Variant 1: Harmonic functions have the mean value property,
$$f(z) = \frac{1}{2\pi} \int_0^{2\pi} f(z + re^{i\varphi})\,d\varphi$$
if $f$ is harmonic in $\Omega$ and $\overline{D_r(z)} \subset \Omega$.
$u$ is an entire harmonic function, hence
$$u(0) = \frac{1}{2\pi}\int_0^{2\pi} u(e^{i\varphi})\,d\varphi = \frac{1}{2\pi}\int_0^{2\pi} \cos(\varphi)e^{\cos\varphi}\cos (\sin\varphi) - \sin(\varphi)e^{\cos\varphi}\sin(\sin\varphi)\,d\varphi.$$
Whether we write $z$ and $re^{i\varphi}$ or $(x,y)$ and $(r\cos \varphi, r\sin\varphi)$ is completely immaterial. The complex notation is just more convenient sometimes.
Variant 2: Consider the analytic function $f = u+iv$.
Since $u$ and $v$ are both real, and $d\varphi$ is also real, we have
$$\begin{align}
\int_0^{2\pi} u(\cos\varphi,\sin\varphi)\,d\varphi &= \operatorname{Re}\left(\int_0^{2\pi} u(\cos\varphi,\sin\varphi)\,d\varphi + i\int_0^{2\pi} v(\cos\varphi,\sin\varphi)\,d\varphi\right)\\
&= \operatorname{Re} \int_0^{2\pi} f(e^{i\varphi})\,d\varphi.
\end{align}$$
Now,
The path integral over some closed curve is zero, over an analytic function.
is not correct as stated. On the one hand, the closed curve must not wind around any point in the complement of the function's domain - but since we have an entire function, that is vacuously satisfied here. More pertinent in the case at hand is that the integral theorem concerns only integrals with respect to $dz$ (it's a theorem about holomorphic differential forms), but here the integrand is $f(z)\,d\varphi$, not $f(z)\,dz$. Thus Cauchy's integral theorem does not apply.
However, for integrals over a circle, we have a simple correspondence between $dz$ and $d\varphi$. If we parametrise the circle as $\gamma(\varphi) = z_0 + r e^{i\varphi}$, then we have
$$dz = \gamma'(\varphi)\,d\varphi = ire^{i\varphi}\,d\varphi = i(z-z_0)\,d\varphi,$$
so we get
$$\int_0^{2\pi} f(e^{i\varphi})\,d\varphi = \int_{\lvert z\rvert = 1} f(z)\frac{dz}{iz},$$
and we see that that leads to Cauchy's integral formula,
$$\frac{1}{i} \int_{\lvert z\rvert = 1} \frac{f(z)}{z}\,dz = 2\pi\: f(0).$$ |
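Numerically, the claim is easy to corroborate: the integrand is $\operatorname{Re}\bigl(e^{i\varphi}e^{e^{i\varphi}}\bigr)$, i.e. $u = \operatorname{Re}(ze^z)$ on the unit circle, and $f(0)=0$ for $f(z)=ze^z$:

```python
import numpy as np
from scipy.integrate import quad

u = lambda t: (np.cos(t) * np.exp(np.cos(t)) * np.cos(np.sin(t))
               - np.sin(t) * np.exp(np.cos(t)) * np.sin(np.sin(t)))

val, err = quad(u, 0, 2 * np.pi)
print(val)  # ~ 0, matching 2*pi*f(0) with f(z) = z*exp(z)
```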
$\lim_{R\rightarrow +\infty}\int_{-1}^{1}\frac{\sin(2\pi R t)}{t}dt=\frac{1}{\pi}$ | Let $2\pi Rt=x$, then $$I =\lim_{R\rightarrow\infty} \int_{-2\pi R}^{2\pi R} \frac{\sin x}{x} dx= \int_{-\infty}^{\infty} \frac{\sin x}{x} dx= 2 \int_{0}^{\infty} \frac{\sin x}{x} dx=2\frac{\pi}{2}=\pi$$ |
Let $ S = \{u_1, u_2, u_3\}, T = \{v_1, v_2,v_3\} $ be $2$ orthonormal bases of the subspace $W$. Which of the following is true? | Let $U = (u_1, u_2, u_3)$ and $V = (v_1, v_2, v_3)$. Then $P = U^\top V$.
The equation in (1) can be rewritten as $P (U^\top w) = V^\top w$. The equation in (2) can be rewritten as $P(V^\top w) = U^\top w$.
(2) is true because $PV^\top w = U^\top VV^\top w = U^\top w$. |
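A quick numerical check of which identity holds, with $U,V$ taken as matrices whose columns are the two bases (the construction below is just one way to produce two orthonormal bases of the same subspace):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((7, 3))                 # a 3-dimensional W inside R^7
U, _ = np.linalg.qr(B)                          # orthonormal basis S (columns)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V = U @ Q                                       # second orthonormal basis T

P = U.T @ V
w = B @ rng.standard_normal(3)                  # arbitrary vector w in W

print(np.allclose(P @ (V.T @ w), U.T @ w))      # True:  statement (2)
print(np.allclose(P @ (U.T @ w), V.T @ w))      # False in general: statement (1)
```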
Comparing between $(\mathbb{E}[X] \cdot \mathbb{E}[Y])^{1/2}$ and $\mathbb{E}[(XY)^{1/2}]$ | The short answer is: no, we can't say anything about the relationship between these.
Consider the case where $Y = X$; here, the left and right terms are both $\mathbb E[X]$.
Consider the case where $X, Y$ are independent Bernoulli$(1/2)$ variables; then the left term is $0.5$ and the right term is $0.25$.
Consider the case where $X$ is a Bernoulli$(1/2)$ variable and $Y = 1-X$. Then the left term is $0.5$ and the right term is $0$.
But this last example illustrates an important point -- the original claim isn't correct! Covariance can be negative. |
Remainder using Modular Arithmetic | Your beginning was promising
$$2005\equiv3\pmod 7\;,\;\;\text{and}\;\;3^3\equiv-1\pmod 7\,,\,\,3^6\equiv1\pmod 7$$
Now, the exponent $2007^{2009}$ is divisible by $3$ and is odd, hence congruent to $3\pmod 6$, so ... (Try to complete the argument)
Question involving partial derivatives and matrix representations | If $F : \mathbb R^n \to \mathbb R^m$, let's say $F(x_1,\dots,x_n) = \big( F_1(x_1,\dots,x_n), \dots, F_m(x_1,\dots,x_n) \big)$, is a differentiable function, the derivative of $F$ at a point $a \in \mathbb R^n$ is the $m \times n$ matrix ${\rm D}F(a)$ whose $i$-th row contains all the partial derivatives of $F_i$ at $a$, that is, the $i$-th row of ${\rm D}F(a)$ is $$\big[ \partial_1F_i(a) \ \ \cdots \ \ \partial_nF_i(a) \big].$$ In your case, if you write $h$ as $f \circ j$, where $j : \mathbb R^2 \to \mathbb R^3$ is the function $j(x,y) = (x,y,g(x,y))$, by the chain rule you have $$\begin{align} {\rm D}h(x,y) &= {\rm D}f(j(x,y)) {\rm D}j(x,y) \\ &= \begin{bmatrix} \partial_1f(j(x,y)) & \partial_2f(j(x,y)) & \partial_3f(j(x,y)) \end{bmatrix} \begin{bmatrix} 1&0 \\ 0&1 \\ \partial_1g(x,y) & \partial_2g(x,y) \end{bmatrix} \\ &= \begin{bmatrix} \partial_1f(j(x,y)) + \partial_3f(j(x,y)) \partial_1g(x,y) & \partial_2f(j(x,y)) + \partial_3f(j(x,y)) \partial_2g(x,y) \end{bmatrix}. \end{align}$$
So, if $h$ is identically $0$, $$\begin{align} {\rm D}h(x,y)=[0 \quad 0] &\implies \begin{cases} \partial_1f(j(x,y)) + \partial_3f(j(x,y)) \partial_1g(x,y) = 0 \\ \partial_2f(j(x,y)) + \partial_3f(j(x,y)) \partial_2g(x,y) = 0 \end{cases} \\[1mm] &\implies \begin{cases} \partial_1g(x,y) = - \dfrac{\partial_1f(j(x,y))}{\partial_3f(j(x,y))} \\ \partial_2g(x,y) = - \dfrac{\partial_2f(j(x,y))}{\partial_3f(j(x,y))} \end{cases} \end{align}$$
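A concrete sanity check of the resulting implicit-differentiation formula, using the unit sphere $f(x,y,z)=x^2+y^2+z^2-1$ and its graph $z=g(x,y)=\sqrt{1-x^2-y^2}$ as an illustrative example:

```python
import sympy as sp

x, y = sp.symbols("x y")
g = sp.sqrt(1 - x**2 - y**2)        # f(x, y, g(x, y)) = 0 on the upper hemisphere

fx, fy, fz = 2*x, 2*y, 2*g          # partials of f evaluated at (x, y, g(x, y))

print(sp.simplify(sp.diff(g, x) + fx / fz))  # 0, i.e. g_x = -f_x / f_z
print(sp.simplify(sp.diff(g, y) + fy / fz))  # 0, i.e. g_y = -f_y / f_z
```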
Links between several complex variables and number theory? | There are many applications within the theory of automorphic forms. I think perhaps the most understandable comes from the theory of Multiple Dirichlet series --- i.e. series of the form
$$ \sum_{m, n} \frac{a(m, n)}{m^s n^w}.$$
Sometimes these objects arise naturally. For instance, the Dirichlet series associated to a weight $1/2$ Eisenstein series $E(z, w)$ on $\textrm{GL}(2)$ looks like
$$ \sum_{n} \frac{L(w, \chi_n) P(n, w)}{n^s},$$
where $\chi_n$ is a Dirichlet character defined mod $n$ and where $P(n, w)$ is a finite correction polynomial when $n$ is not squarefree (and is $1$ otherwise). This is a Dirichlet series whose coefficients are themselves Dirichlet series. This was a major area of investigation by Hoffstein, Chinta, Bump, Friedberg, and Brubaker starting around 1990.
Hoffstein and Hulse recently showed how to use multiple Dirichlet series (and the spectral theory of automorphic forms) to understand a broad class of shifted convolution sums of the form
$$ \sum_{n, m \geq 1} \frac{a(n)b(n+m)}{(n+m)^s},\tag{1}$$
where $a(n)$ and $b(n)$ are coefficients of automorphic forms. The interesting thing is that they wanted to understand the meromorphic continuation of $(1)$, but the only way there were able to do that was to introduce an auxiliary variable $w$ and understand the meromorphic continuation to $\mathbb{C}^2$ of
$$ \sum_{n, m \geq 1} \frac{a(n) b(n+m)}{(n+m)^s m^w}.$$
If you look up these papers and the papers that cite them, you'll find a large literature of people applying the theory of multiple complex variables to (analytic) number theory. |
Constructing a codomain for a bijective function | Hint: list the images $h(n)$ for some values of $n$, like $h(0)=0$, $h(1)=-2$, $h(2)=2$, $h(3)=-6$, $h(4)=4$, etc. Do you notice a pattern?
(In fact, looking at your earlier question might be another hint.) |
What power of $3$ is $4$? | $$ 4 = 3^x \implies \log 4 = x \log 3 \implies x = \color{red}{\frac{\log 4}{\log 3}}$$ |
universal representation of c-star-algebras | To add to what Aweygan said, even using all the states is too much, in a sense. If any two states $\psi,\phi$ are unitarily equivalent, then $(H_\psi,\pi_\psi)$ and $(H_\phi,\pi_\phi)$ are unitarily equivalent, thus the two of them do not contribute more information than just one of them.
Because of the above, it is customary to add not over all states, but over their unitary equivalence classes. |
Show that $E(Y\mid X=x)$ is a linear function in $x$ | You might want to work directly on random variables: there exists some parameters $(a,b)$ and a standard normal random variable $Z$ independent of $X$ such that $Y-\mu_Y=a(X-\mu_X)+bZ$ (can you show this?), thus $E(Z\mid X)=E(Z)=0$ hence $E(Y\mid X)=\mu_Y+a(X-\mu_X)$. To compute $a$, note that $\mathrm{Cov}(Y,X)=a\cdot\mathrm{var}(X)+b\cdot\mathrm{Cov}(Z,X)$ and $\mathrm{Cov}(Z,X)=0$, hence $\rho_{XY}=a\cdot\sigma^2_X$. |
What is the state of Carmichael's totient function conjecture? | Let's define Carmichael's Totient Conjecture:
For each $n$, there exists an integer $m\neq n$ such that $φ(m) = φ(n) = k$, where $φ$ is Euler's totient.
The conjecture is an open problem in general, but is proven for all $k$ such that $k+1$ is prime.
Proof: Suppose $n$ is an odd prime and $n-1 = k$. Then $φ(n) = k$. Now $φ(1) = φ(2) = 1$, which settles the case $k=1$, and the totient of any integer $t$ is the product of the totients of the prime powers exactly dividing $t$. For odd prime $n$, let $m = 2n$. Since the only prime powers dividing $m$ are $2$ and $n$, $φ(m) = (2-1)\cdot(n-1) = k$, therefore $φ(m) = φ(n) = k$.
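A quick computational check of the proof's construction, using sympy's totient (illustrative only):

```python
from sympy import isprime, totient

# For an odd prime n, m = 2n satisfies phi(m) = phi(n) = n - 1.
for n in range(3, 500):
    if isprime(n):
        assert totient(2 * n) == totient(n) == n - 1
print("verified for all odd primes below 500")
```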
Problems with limits and asymptotic notation | You cannot sum this type of relation only multiply. If you wanna study your example you cannot suppress the "rest" behind it because he's too important in the considered limit :
$$
\frac{\sin\left(x\right)-x+2x^2}{3x^3}=\frac{\left(x-\frac{x^3}{6}-x+2x^5+o\left(x^3\right)\right)}{3x^3} \underset{x \rightarrow 0}{\rightarrow}\frac{1}{6 \times 3}=-\frac{1}{18}
$$
The thing is, here you used $$\sin\left(x\right) \underset{\left(0^{+}\right)}{\sim}x$$ and you added equivalents. You can substitute an equivalent in a product, not in a sum. For example, you would have had instead
$$
\frac{2\sin\left(x\right)x^2}{3x^3}
$$
You could have written
$$
\frac{2\sin\left(x\right)x^2}{3x^3} \underset{\left(0^{+}\right)}{\sim}\frac{2xx^2}{3x^3}\underset{x \rightarrow 0}{\rightarrow}\frac{2}{3}
$$ |
Trying to set up a linear programming problem | I would use two variables, $z_1, z_2$ to be the number of chairs to be made
$$\max 25 z_1 + 15z_2$$
subject to
$$2z_1 + z_2 \le 120$$
$$z_1 + \frac12z_2 \le 85$$
$$z_1, z_2 \ge 0$$
Notice that the finishing constraint is never active. |
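A sketch of solving this with scipy (which minimizes, so the profit vector is negated); the numbers are the ones assumed above:

```python
from scipy.optimize import linprog

# maximize 25*z1 + 15*z2  <=>  minimize -25*z1 - 15*z2
res = linprog(c=[-25, -15],
              A_ub=[[2, 1], [1, 0.5]],
              b_ub=[120, 85])          # bounds default to z >= 0

print(res.x, -res.fun)                 # optimal plan and maximal profit
```

At the optimum only the first constraint is tight, consistent with the remark that the finishing constraint is never active.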
basic doubt about connectness - general topology | Take points $x\in U$ and $y \in V$. Then a ball of radius $>d(x,y)$ around either should work. |
Prove $\sum\limits_{n=2}^\infty \frac {1} {\ln{n}}$ is divergent. | You can simply say that $(\forall n\in\mathbb{N}):\ln n<n$ and that therefore$$\sum_{n=2}^\infty\frac1{\ln n}\geqslant\sum_{n=2}^\infty\frac1n=+\infty.$$ |
To find the position of the ant after 2020 moves is $(p, q)$, | Let $e^{i\pi/3}=:\omega$. Then we have to iterate the map
$$z\mapsto T(z):=\omega z+7\ .$$
This $T$ has a fixed point $a:={7\over1-\omega}$, and is in fact a $60^\circ$ rotation of the plane around $a$. We can therefore write
$$T(z)-a=\omega(z-a)\ ,$$
and this implies
$$T^n(z)-a=\omega^n(z-a)\ .$$
As $2020=6\cdot336+4$ and $\omega^4=-\omega$ we obtain the equation
$$T^{2020}(6)-a=-\omega(6-a)\ .$$
Solving this leads to
$$\bigl|T^{2020}(6)\bigr|^2=57\ .$$ |
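The computation is easy to corroborate by iterating the map directly, assuming (as in the formula above) the ant starts at $z=6$:

```python
from cmath import exp, pi

w = exp(1j * pi / 3)      # rotation by 60 degrees
z = 6 + 0j
for _ in range(2020):     # apply T(z) = w*z + 7 for 2020 moves
    z = w * z + 7

print(abs(z) ** 2)        # ~ 57.0, i.e. p**2 + q**2 = 57
```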
strong separation of sets | The statement is indeed not very difficult, and you have essentially proven it. But you need to be a bit careful with what you are allowed to assume and what you are supposed to conclude when you are writing it up. In particular, at no point should you be assuming that both $A$ and $B$ can be separated and that $A-B$ can be separated from the origin; yet your wording suggests you are doing so (when you say "if we assume the distance from $A-B$ to the origin is also positive", emphasis added).
So, to be clear and organized: you are trying to prove an "if and only if" statement. So you want to prove two things: that if $A$ and $B$ can be separated, then the origin can be separated from $A-B$; and also that if the origin can be separated from $A-B$, then $A$ and $B$ can be separated.
The standard way to do an "if and only if" proof is to do each implication separately, though sometimes one can proceed from one proposition to the other by performing steps that are all "reversible" (if and only if statements as well).
Let's do the former: to prove the "only if" implication, you want to show that $A$ and $B$ can be separated only if $\mathbf{0}$ can be separated from $A-B$ (that is, if $A$ and $B$ can be separated, then $\mathbf{0}$ can be separated from $A-B$). So, assume $A$ and $B$ can be separated. That means that there is a $d\gt 0$ such that $\lVert a-b\rVert \geq d$ for all $a\in A$ and $b\in B$ (that is, the infimum of these quantities is at least $d$, hence positive). You want to show that the infimum of $\lVert \mathbf{0} - x\rVert$, with $x$ ranging over all points in $A-B$, is also positive. And you can do it exactly as you do: if $x\in A-B$, then we can write $x=a-b$ for some $a\in A$ and $b\in B$. Then
$$\lVert \mathbf{0} - x\rVert = \lVert \mathbf{0}-(a-b)\rVert = \lVert -a+b\rVert = \lVert b-a\rVert = \lVert a-b\rVert\geq d$$
(the last inequality by our assumption that we can separate $A$ and $B$). Since every single $\lVert \mathbf{0}-x\rVert$ is greater than or equal to $d$, then $d$ is a lower bound for the set $\{\lVert \mathbf{0}-x\rVert\mid x\in A-B\}$, so it follows that
$$\inf\{\lVert \mathbf{0}-x\rVert\mid x\in A-B\} \geq d\gt 0.$$
Therefore $\mathbf{0}$ can be separated from $A-B$. This proves the "only if" clause.
To prove the "if" clause, we want to show that $A$ and $B$ can be separated if $\mathbf{0}$ can be separated from $A-B$; that is, if $\mathbf{0}$ and $A-B$ can be separated, then $A$ and $B$ can be separated. So we assume that $\mathbf{0}$ and $A-B$ can be separated; that is, that there exists some $r\gt0$ such that $\lVert \mathbf{0} - x\rVert\geq r$ for all $x\in A-B$. We want to show that $\inf\{\lVert a-b\rVert\mid a\in A, b\in B\}\gt 0$.
To that end, let $a\in A$ and $b\in B$. Then $a-b\in A-B$, so
$$\lVert a-b\rVert = \lVert (a-b) - \mathbf{0}\rVert = \lVert\mathbf{0}-(a-b)\rVert \geq r$$
(the last inequality by our assumption that we can separate $\mathbf{0}$ from $A-B$). Since each $\lVert a-b\rVert$ is greater than or equal to $r$, then $r$ is a lower bound for the set $\{\lVert a-b\rVert\mid a\in A,b\in B\}$, which shows that
$$\inf\{\lVert a-b\rVert\mid a\in A, b\in B\}\geq r\gt 0.$$
Thus we conclude that $A$ and $B$ can be separated. QED
(This is essentially your argument, spruced up and stated very carefully).
Note. In this particular instance, you can actually proceed by a chain of "if and only if"s:
$$\begin{align*}
\mbox{$A$ and $B$ can be separated}&\quad\text{if and only if}\quad \inf\{\lVert a-b\rVert\mid a\in A, b\in B\} \gt 0\\
&\quad\text{if and only if}\quad \inf\{\lVert \mathbf{0}-(a-b)\rVert\mid a\in A, b\in B\}\gt 0\\
&\quad\text{if and only if}\quad \inf\{\lVert \mathbf{0}-x\rVert\mid x\in A-B\}\gt 0\\
&\quad\text{if and only if}\quad\mbox{$\mathbf{0}$ can be separated from $A-B$.}
\end{align*}$$
with the explanation that the penultimate step is done by noting that $x\in A-B$ if and only if $x=a-b$ for some $a\in A$ and $b\in B$, and the one prior to that because $\lVert 0 - y\rVert = \lVert y\rVert$ for all $y\in\mathbb{R}^n$. |