title | upvoted_answer
---|---
Meaning of modified Madgwick Filter equations | Reply from one of the authors:
Equation 17 should read:
$\omega_{c,t} = \omega_{t} - \omega_{b,t} \tag{17}$
This aligns more with my expectations, but the factor of two in Equation 15 is unexpected to me. It could be absorbed into the $\sigma$ in any case, so perhaps it is just a scaling. |
Prove that for every integer $n$ there exists a unique integer $m$ such that $2m + 8n = 6$. | Because the set of integers is closed under subtraction and multiplication, $3-4n\in\mathbb{Z}$ where $n\in \mathbb{Z}$, so $m\in\mathbb{Z}$. To show uniqueness, suppose that $m_1$ and $m_2$ are both solutions to the equation. Then $2m_1+8n=6\implies 2m_1=6-8n\implies m_1=3-4n$. But $2m_2+8n=6\implies 2m_2=6-8n\implies m_2=3-4n$, so $m_1=m_2$.
Plugging $m$ back into the original equation will not yield anything useful - only that $6=6$.
I honestly don't know if this should be more complicated than I'm making it. |
Proving Stability for Dynamical System (Delta-Epsilon) | We have to prove that
$$
\forall \epsilon>0 \, \exists \delta>0:\; \|\bar x(0)\|<\delta\,\Rightarrow\,\forall t\ge 0\;\|\bar x(t)\|<\epsilon,
$$
where $\bar x(t)$ is the solution vector:
$$
\bar x(t)=\left(\begin{array}{c}x_1(t)\\x_2(t)\end{array}\right).
$$
For the considered system
$$
x_1(t)= e^{-\zeta t}(C_1 + C_2t + C_1\zeta t)
$$
$$
x_2(t)= - e^{-\zeta t}(C_1\zeta^2 t+ C_2\zeta t -C_2 ),
$$
where $x_1(0)=C_1$, $x_2(0)=C_2$. Hence
$$
\|\bar x(0)\|=\sqrt{C_1^2+C_2^2},
$$
(note that $|C_1|\le\|\bar x(0)\|$, $|C_2|\le\|\bar x(0)\|$),
$$
\|\bar x(t)\|=\sqrt{x_1^2(t)+x_2^2(t)}
=e^{-\zeta t} \sqrt{(C_1 + C_2t + C_1\zeta t)^2+
(C_1\zeta^2 t+ C_2\zeta t -C_2 )^2}
$$
We can use the triangle inequality:
$$
\sqrt{(a_1+b_1)^2+(a_2+b_2)^2}\le \sqrt{a_1^2+a_2^2}+\sqrt{b_1^2+b_2^2}
$$
to obtain
$$
\|\bar x(t)\|\le e^{-\zeta t} \left(
\sqrt{C_1^2(1+\zeta t)^2+C_1^2\zeta^4t^2}+
\sqrt{C_2^2t^2+C_2^2(\zeta t-1)^2}
\right)
$$
$$
=e^{-\zeta t} \left(
|C_1|\sqrt{(1+\zeta t)^2+\zeta^4t^2}+
|C_2|\sqrt{t^2+(\zeta t-1)^2}
\right)
$$
$$
\le
\|\bar x(0)\|e^{-\zeta t} \left(
\sqrt{(1+\zeta t)^2+\zeta^4t^2}+
\sqrt{t^2+(\zeta t-1)^2}
\right)
$$
Since $\zeta>0$ and the square roots grow only polynomially, the function
$$
h(t):=e^{-\zeta t/2} \left(
\sqrt{(1+\zeta t)^2+\zeta^4t^2}+
\sqrt{t^2+(\zeta t-1)^2}
\right)
$$
is continuous and tends to zero as $t\to\infty$, so it attains a maximum value
$$
M=\max_{t\ge 0} h(t).
$$
Splitting $e^{-\zeta t}=e^{-\zeta t/2}\,e^{-\zeta t/2}$ in the bound above gives
$$
\|\bar x(t)\|\le \|\bar x(0)\|\,h(t)\,e^{-\zeta t/2}\le \|\bar x(0)\|M e^{-\zeta t/2} .
$$
We have proved the exponential stability of the system. Usual (Lyapunov) stability is obvious if we take
$$
\delta(\epsilon)=\frac{\epsilon}M
$$
Update
Indeed, $\forall t\ge 0$
$$
\|\bar x(0)\|<\delta=\frac{\epsilon}{M}\quad\Rightarrow\quad
\|\bar x(t)\|<\frac{\epsilon}{M}M e^{-\zeta t/2}=\epsilon e^{-\zeta t/2}
\le\epsilon
$$ |
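A quick numerical sanity check of the bound above (a sketch; $\zeta=0.5$ and the random initial conditions are assumed values, not from the question):

```python
import numpy as np

zeta = 0.5                                   # assumed damping parameter
t = np.linspace(0, 50, 5001)

# h(t) and its maximum M, as defined above
h = np.exp(-zeta * t / 2) * (np.sqrt((1 + zeta * t)**2 + zeta**4 * t**2)
                             + np.sqrt(t**2 + (zeta * t - 1)**2))
M = h.max()

rng = np.random.default_rng(0)
for _ in range(5):
    C1, C2 = rng.normal(size=2)
    x1 = np.exp(-zeta * t) * (C1 + C2 * t + C1 * zeta * t)
    x2 = -np.exp(-zeta * t) * (C1 * zeta**2 * t + C2 * zeta * t - C2)
    norm = np.hypot(x1, x2)
    bound = np.hypot(C1, C2) * M * np.exp(-zeta * t / 2)
    assert np.all(norm <= bound + 1e-12)
print("bound holds; M =", M)
```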
How do I find the smallest integer of $n$ such that $\sigma(n) = 24$? | I suppose you could "filter" some "reasonable guesses" in the following sense:
if $n = p$ is prime then $24 = 1 + p$, so $p = 23$ (and this is of course the unique prime with this property).
What other possibilities are there? Well, you can look at semiprimes less than $24$. Notice that if $n = pq$ with $p, q$ primes (at first sight not necessarily distinct!) then
$$\sum_{d \mid n} d = 1 + p + q + pq = (p+1)(q+1).$$
Write $24 = (p+1)(q+1)$ and see if you can find primes $p, q$ for which this is true. You'll notice that $p \neq q$, as otherwise $23 = p(p+1)$. Now look at the divisors of $24$ and you'll have your answer. |
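A brute-force check of the answer above (a sketch; `sigma` is a naive hypothetical helper summing divisors):

```python
def sigma(n):
    """Sum of the positive divisors of n (naive)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

solutions = [n for n in range(1, 100) if sigma(n) == 24]
print(solutions)       # [14, 15, 23]
print(min(solutions))  # 14 = 2*7, matching 24 = (p+1)(q+1) = 3*8
```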
Prove that $\{f_n\} _{n=1}^{\infty}$ uniformly converges to $ f(x)=\int_{0}^{1}g(x,t)\mathrm{dt}$ | I will show you an example where $f$ is not continuous on $[0, 1]$; it is easy to adapt it to $(0,\infty)$. In particular, your statement is not valid in general.
For $x > 0$ let $g(x, \cdot)$ be the hat function with support $[0, x]$ and maximum value $1/x$, and let $g(0, \cdot) = 0$. Then $g$ is separately continuous, while $f(x)=\int_0^1 g(x,t)\,dt=\tfrac12$ for $0<x\le 1$ and $f(0)=0$, so $f$ is discontinuous at $x=0$.
Notice: the statement is valid, if you add an equicontinuity type condition. |
Who knows this formula for polynomial interpolation? | Hint:
By Cramer's rule, the coefficient of the $n^{th}$ power is the ratio of a modified determinant, in which the $n^{th}$ column is replaced by the RHS, to the Vandermonde determinant. The former can be expanded along the modified column with the corresponding minors, which are of a quasi-Vandermonde type (there is a gap in the exponents). These minors still factor into differences of two $x$'s, but with an extra factor. |
The Galois group ${\rm Gal}(\Bbb C/\Bbb Q) ({\rm Aut}(\Bbb C/\Bbb Q)).$ | Given what you said in the comment, what you actually want to do is show that $\mathbb{C}/\mathbb{Q}$ is Galois in the sense that $\mathbb{C}^{\operatorname{Aut}(\mathbb{C}/\mathbb{Q})} = \mathbb{Q}$. In other words, you want to show that if $z\in \mathbb{C}\setminus \mathbb{Q}$, there is an automorphism $\varphi$ of $\mathbb{C}$ such that $\varphi(z)\neq z$. This most definitely does not require classifying all automorphisms of $\mathbb{C}$.
As a first comment, this relies heavily on the axiom of choice: you can find models of ZF where the only non-trivial automorphism of $\mathbb{C}$ is the conjugation. This being said, once we don't mind using transcendence bases, we can answer your question.
If $z$ is algebraic over $\mathbb{Q}$, write $\mathbb{Q}\subset K\subset \mathbb{C}$ such that $K/\mathbb{Q}$ is purely transcendental and $\mathbb{C}/K$ is algebraic. Then $\mathbb{C}$ is the algebraic closure of $K$, and $z$ is algebraic over $K$ but not in $K$ (an algebraic number lying in the purely transcendental extension $K$ must be rational), so there is an automorphism $\varphi\in \operatorname{Gal}(\mathbb{C}/K)$ such that $\varphi(z)\neq z$.
If $z$ is transcendental over $\mathbb{Q}$, then we can choose a transcendence basis $S$ of $\mathbb{C}/\mathbb{Q}$ with $z\in S$. Then there is an automorphism of $\mathbb{Q}(S)$ swapping $z$ with another element of $S$. This automorphism can be extended to $\mathbb{C}$ since it is the algebraic closure of $\mathbb{Q}(S)$. |
Is this statement accurate. And if not what are the chances | Seems accurate. Life expectancy in the US is about 80 years. So $330,000,000 / 13,000 / 80 \approx 317$. |
What's the difference between $\lim_{n \to \infty} \mu ([n,+\infty))$ and $\mu (\lim_{n \to \infty}[n,+\infty))$? | Given a sequence of sets $A_n$ with $A_{n+1}\subset A_n$, one defines
$$
\lim_{n\to\infty}A_n=\bigcap_{n=1}^\infty A_n
$$
When you have $A_n\subset A_{n+1}$, you use union instead of intersection.
In your case, if $A_n=[n,\infty)$, then $A_{n+1}\subset A_n$, so
$$
\lim_{n\to\infty} A_n=\bigcap_n A_n=\emptyset
$$
On the other hand, each $A_n$ has infinite (Lebesgue) measure. |
Calculate this infinite sum | Using Partial Fraction Decomposition,
$$\text{let }\frac{n+3}{(n+1)(n+2)}=\frac A{n+1}+\frac B{n+2}$$
$$n+3=n(A+B)+2A+B\implies A+B=1,2A+B=3\implies A=2, B=-1$$
$$\implies\frac{n+3}{(n+1)(n+2)}=\frac 2{n+1}-\frac1{n+2}$$
$$\implies\frac{n+3}{2^n(n+1)(n+2)}=\frac{(1/2)^{n-1}}{n+1}-\frac{(1/2)^n}{n+2}$$
If $T_n=\dfrac{(1/2)^{n-1}}{n+1},$
$$\frac{n+3}{2^n(n+1)(n+2)}=T_n-T_{n+1}$$ which clearly telescopes. |
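A quick numerical check of the telescoping sum above (a sketch): starting at $n=1$, the partial sums should approach $T_1=\frac{(1/2)^0}{2}=\frac12$.

```python
T = lambda n: 0.5**(n - 1) / (n + 1)

s = sum((n + 3) / (2**n * (n + 1) * (n + 2)) for n in range(1, 60))
print(s, T(1))  # both ~0.5, since the series telescopes to T(1) - lim T(n+1)
```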
How to calculate this derivative? Do I have to use the chain rule? | To solve using the chain rule:
$$\frac{d(r^2-x^2)^{1/2}}{dx}=\frac{d(r^2-x^2)^{1/2}}{d(r^2-x^2)}\cdot\frac{d(r^2-x^2)}{dx}$$
$$=\frac{1}{2}(r^2-x^2)^{-1/2}\cdot\left(\frac{d(r^2)}{dx}-\frac{d(x^2)}{dx}\right)$$
$$=\frac{1}{2}(r^2-x^2)^{-1/2}\cdot(0-2x)$$
$$=-x(r^2-x^2)^{-1/2}$$ |
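A symbolic check of the result above (a sketch using SymPy):

```python
import sympy as sp

x, r = sp.symbols('x r', positive=True)
print(sp.simplify(sp.diff(sp.sqrt(r**2 - x**2), x)))  # -x/sqrt(r**2 - x**2)
```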
How to find length of a rectangular tile when viewing at some angle | It depends on your projection. If you assume orthogonal projection, so that the apparent length of line segments is independent of their distance the way your images suggest, then you cannot solve this, since a rectangle of any aspect ratio might appear as a rectangle of any other aspect ratio by simply aligning it with the image plane and then rotating it around one of its axes of symmetry. So you can't deduce the original aspect ratio from the apparent one, much less the original lengths. |
Triple Recursion Relation Coefficients | I'm not entirely sure I'm reading your query correctly; I'll edit this if my interpretation's off the mark.
You're asking how the Stieltjes procedure for generating orthogonal polynomials with respect to some given weight $w(x)$ works. It's a bootstrapping procedure. You usually start with the first two known members, and slowly build up the other members through inner product computations and recursion.
Again, take
$$(f(x),g(x))=\int_a^b w(u)f(u)g(u) \,\mathrm du$$
and let $\phi_k(x)=A_k x^k+\cdots$ be the degree-$k$ polynomial that is orthogonal with respect to the weight function $w(x)$, i.e.
$$(\phi_k(x),\phi_\ell(x))=0,\quad k\neq \ell$$
Consider first
$$q(x)=\phi_{k+1}(x)-\frac{A_{k+1}}{A_k}x\phi_k(x)$$
which is a linear combination precisely designed to have a missing $x^{k+1}$ term.
This can be expanded as a series of orthogonal polynomials of degree $k$ and lower (abbreviating $\frac{A_{k+1}}{A_k}$ as $a_k$):
$$\phi_{k+1}(x)-a_k x\phi_k(x)=\mu_k\phi_k(x)+\mu_{k-1}\phi_{k-1}(x)+\cdots$$
where the $\mu_j$ are given by
$$\mu_j=\frac{(q(x),\phi_j(x))}{(\phi_j(x),\phi_j(x))}$$
Another fact we are going to need is
$$(\phi_k(x),x^\ell)=0,\quad \ell < k$$
We find from these considerations that the coefficients $\mu_j$ for $j < k-1$ vanish. Thus, after renaming $\mu_k$ to $b_k$ and $\mu_{k-1}$ to $c_k$, we have
$$\phi_{k+1}(x)-a_k x\phi_k(x)=b_k\phi_k(x)+c_k\phi_{k-1}(x)$$
At this point, it should be noted that one convenient normalization is to have the $\phi_k(x)$ be monic ($A_k=1$); this means we can set $a_k=1$ and consider the three-term recursion
$$\phi_{k+1}(x)=(x+b_k)\phi_k(x)+c_k\phi_{k-1}(x)$$
If we take inner products of both sides with $\phi_{k+1}(x)$, $\phi_k(x)$, and $\phi_{k-1}(x)$ in turns, we have the system
$$\begin{align*}(\phi_{k+1}(x),\phi_{k+1}(x))&=(\phi_{k+1}(x),x\phi_k(x))\\0&=(x\phi_k(x),\phi_k(x))+b_k(\phi_k(x),\phi_k(x))\\0&=(x\phi_k(x),\phi_{k-1}(x))+c_k(\phi_{k-1}(x),\phi_{k-1}(x))\end{align*}$$
where we've exploited linearity of the inner product and the orthogonality relation.
Solving for $b_k$ and $c_k$ in the last two equations, we have
$$\begin{align*}b_k&=-\frac{(x\phi_k(x),\phi_k(x))}{(\phi_k(x),\phi_k(x))}\\c_k&=-\frac{(x\phi_k(x),\phi_{k-1}(x))}{(\phi_{k-1}(x),\phi_{k-1}(x))}\end{align*}$$
$c_k$ can be expressed in a different way, using the fact that $(x\phi_k(x),\phi_{k-1}(x))=(\phi_k(x),x\phi_{k-1}(x))$ and shifting the index $k$ in the equation for $(\phi_{k+1}(x),\phi_{k+1}(x))$, yielding
$$c_k=-\frac{(\phi_k(x),\phi_k(x))}{(\phi_{k-1}(x),\phi_{k-1}(x))}$$
It's been all theoretical at this point; let me demonstrate the Stieltjes procedure with the monic Chebyshev polynomials (of the first kind) as a concrete example. The associated inner product is
$$(f(x),g(x))=\int_{-1}^1 \frac{f(u)g(u)}{\sqrt{1-u^2}}\mathrm du$$
The usual way of proceeding starts with $\phi_{-1}(x)=0$ and $\phi_0(x)=1$. To find $\phi_1(x)$, we compute
$$b_0=-\frac{(x,1)}{(1,1)}=0$$
and thus $\phi_1(x)=x$. To get $\phi_2(x)$, we compute
$$\begin{align*}b_1&=-\frac{(x\phi_1(x),\phi_1(x))}{(\phi_1(x),\phi_1(x))}=0\\c_1&=-\frac{(\phi_1(x),\phi_1(x))}{(\phi_0(x),\phi_0(x))}=-\frac12\end{align*}$$
and thus $\phi_2(x)=\left(x+b_1\right)\phi_1(x)+c_1\phi_0(x)=x^2-\frac12$. Clearly we can continue this bootstrapping, generating $\phi_3(x),\phi_4(x),\dots$ in turn by computing inner products and recursing. (As it turns out, for this example all the $b_k$ are zero.) |
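The bootstrapping is easy to carry out numerically. Here is a sketch (assumes NumPy/SciPy; the inner products are evaluated via the substitution $u=\cos\theta$, which removes the endpoint singularity of the Chebyshev weight):

```python
import numpy as np
from numpy.polynomial import Polynomial as P
from scipy.integrate import quad

def inner(f, g):
    # (f,g) = int_{-1}^{1} f(u) g(u) / sqrt(1-u^2) du, with u = cos(theta)
    return quad(lambda th: f(np.cos(th)) * g(np.cos(th)), 0, np.pi)[0]

def stieltjes(n_max):
    x = P([0, 1])
    phis = [P([1.0])]                                   # phi_0 = 1
    b0 = -inner(x * phis[0], phis[0]) / inner(phis[0], phis[0])
    phis.append(x + b0)                                 # phi_1 = x + b_0
    for k in range(1, n_max):
        pk, pk1 = phis[k], phis[k - 1]
        bk = -inner(x * pk, pk) / inner(pk, pk)
        ck = -inner(pk, pk) / inner(pk1, pk1)
        phis.append((x + bk) * pk + ck * pk1)           # three-term recursion
    return phis

for phi in stieltjes(4):
    print(np.round(phi.coef, 6))  # 1; x; x^2 - 1/2; x^3 - 3x/4; x^4 - x^2 + 1/8
```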
Any injective function from a set to a proper subset is also surjective | Nope, let $X=\mathbb{Z}$ and $Y=2\mathbb{Z}=\{2n\mid n\in\mathbb{Z}\}$. Let $f\colon X\to Y$ be given by $f(k)=4k$. This is injective but misses, for instance $2\in Y$ and so isn't surjective.
I should add, an infinite set needed to be chosen for $X$ because there exist no injective functions from a finite set to a proper subset. Similarly, $Y$ needed to be an infinite subset of $X$ (and of the same cardinality). |
Counting: A distributed network of 10 servers, 40 different movies will be stored on network | For the first question, what can be reused is associated with the base of the exponential: in $26^6$, $26$ is the number of letters, and in $10^{40}$, $10$ is the number of servers (each with unlimited space, as you said); it is not the number of movies (each of which you want to store on a single server).
For the second one:
Think about how many options you have at each step of making such a distribution.
If all the servers have the same number of movies, then all of them have 4 movies each.
Let's choose the movies one server at a time:
- For the first one you have to choose $4$ of the $40$ movies, so you have $\dfrac{40!}{4!(40-4)!}$ options (do you see why?).
- When choosing the movies of the $N$th server, you have already distributed $4(N-1)$ movies, so there are $40-4(N-1)$ left to choose from. For this server we have to choose 4 movies, so we have $$\dfrac{(40-4(N-1))!}{4!(40-4(N-1)-4)!}=\dfrac{(40-4(N-1))!}{4!(40-4N)!}$$
Now for the total number of choices we take the product of the number of choices at each step. But notice that $(40-4N)!$ is in the denominator for the $N$th step and in the numerator for the $(N+1)$th step, so they cancel when taking the product.
The only occurrence of this type that remains is the last one, because there is no 'next' term to cancel it. But in the last occurrence $N=10$, so it's just $(40-40)!=0!=1$.
So the total product is $\dfrac{40!}{4!^{10}}$. |
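A quick check that the product of the step counts collapses to this formula (a sketch):

```python
from math import comb, factorial

product, remaining = 1, 40
for _ in range(10):               # pick 4 of the remaining movies, 10 times
    product *= comb(remaining, 4)
    remaining -= 4

print(product == factorial(40) // factorial(4)**10)  # True
```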
How do I find the inductive definition of the set defined as $\{2n+3m+1|n,m\in\mathbb N\}$? | How's this: 1 and 4 are in $S$; if $n$ is in $S$, then $n+2$ is in $S$. |
Proving commutativity of convolution $(f \ast g)(x) = (g \ast f)(x)$ | You need to check the bounds on your integral. Since $y$ ranges from $-\pi$ to $\pi$, $z = x-y$ ranges from $x+\pi$ down to $x-\pi$. Therefore:
$$
\int_{-\pi}^{\pi}f(x-y)g(y)dy = -\int_{x+\pi}^{x-\pi}f(z)g(x-z)dz =
\int_{x-\pi}^{x+\pi}f(z)g(x-z)dz = \int_{-\pi}^{\pi}f(z)g(x-z)dz
$$
In the second-to-last step, I swapped the two bounds on the integral (this changes the sign). In the final step, I shifted both bounds on the integral by $-x$, which does not change the value because we are integrating over an interval of length $2\pi$ and the function is $2\pi$-periodic. |
How many ways are there to choose 16 cookies? | Hint:
First take out the $6$ chocolate chip cookies anyways. Now the problem reduces to choosing $10$ cookies from $6$ varieties with no restriction, which is the stars and bars problem, with $n=10$ and $r = 6$ |
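Numerically, stars and bars gives $\binom{10+6-1}{6-1}=\binom{15}{5}$ (a sketch):

```python
from math import comb
print(comb(10 + 6 - 1, 6 - 1))  # 3003
```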
How to prove conservation of energy from a central potential, taking Newton's laws as assumptions? | You can actually derive it using the definition of work.
We have a central force:
$$
\vec{F}=-\frac{d U}{d r}\hat{r}
$$
Let's calculate the work between two states $S_1$ and $S_2$:
$$
W_{12}=\int_{S_1 \to S_2} \vec{F}\cdot \vec{\textrm{dr}}=-\int_{S_1 \to S_2} \frac{d U}{d r}\hat{r}\cdot \vec{\textrm{dr}}
$$
which is obviously:
$$
W_{12}=U(r_1)-U(r_2)
$$
Now let's do the same using Newton's second law $\vec{F}=\mu\frac{d\vec{v}}{dt}$:
$$
W_{12}=\int_{S_1 \to S_2} \vec{F}\cdot \vec{\textrm{dr}}=\int_{S_1 \to S_2} \mu\frac{d\vec{v}}{dt}\cdot \vec{\textrm{dr}}=
$$
$$
=\int_{S_1 \to S_2} \mu\frac{d\vec{v}}{dt}\cdot \frac{d\vec{r}}{dt}\textrm{dt}=\int_{S_1 \to S_2} \mu\frac{d\vec{r}}{dt}\cdot \vec{\textrm{dv}}=\frac{1}{2}\mu v_2^2-\frac{1}{2}\mu v_1^2
$$
(Remark: this demonstration of the work-energy principle can be done in much more mathematical detail in the Lagrangian setting; this is mostly an intuitive proof. Consult Wikipedia: W-E Principle for more detail.)
Equating the two expressions:
$$
\frac{1}{2}\mu v_2^2-\frac{1}{2}\mu v_1^2 = U(r_1)-U(r_2)
$$
Rearranging so that the quantities belonging to $S_1$ and $S_2$ are on opposite sides:
$$
\frac{1}{2}\mu v_2^2+U(r_2) = \frac{1}{2}\mu v_1^2+U(r_1) = \textrm{constant}
$$
Since $S_1$ and $S_2$ are arbitrary, this holds for any pair of states; thus we have obtained a constant of the motion. Let's call it the energy:
$$
E=\frac{1}{2}\mu v^2+U(r)=\frac{1}{2}\mu \dot{\mathbf{r}}\cdot\dot{\mathbf{r}}+U(r)
$$ |
Evaluating a power series | Suppose $x\in\mathbb{Q}$, $0\lt x\lt1$, and $x$ has the base-$p$ expansion
$$
x=\sum_{k=1}^\infty\frac{d_k}{p^k}\tag{1}
$$
Then
$$
\frac{\{p^nx\}}{p^n}=\sum_{k=n+1}^\infty\frac{d_k}{p^k}\tag{2}
$$
So that
$$
\begin{align}
f_p(x)
&=\sum_{n=0}^\infty\frac{\{p^nx\}}{p^n}\\
&=\sum_{n=0}^\infty\sum_{k=n+1}^\infty\frac{d_k}{p^k}\\
&=\sum_{k=1}^\infty\sum_{n=0}^{k-1}\frac{d_k}{p^k}\\
&=\sum_{k=1}^\infty\frac{k\,d_k}{p^k}\tag{3}
\end{align}
$$
The sum defining $f_p$ starts at $n=0$, while the sum in the question starts at $n=1$; since the $n=0$ term is $\{x\}=x$, the function in the question is $f_p(x)-x$. In particular, if $f_p$ maps $\mathbb{Q}$ into $\mathbb{Q}$, then so does $x\mapsto f_p(x)-x$.
Finite base-$p$ expansion
Obviously, if the base-$p$ expansion of $x$ is finite, then the sum in $(3)$ is finite
$$
f_p(x)=\sum_{k=1}^m\frac{k\,d_k}{p^k}\tag{4}
$$
which is a finite sum of rational numbers, hence $f_p(x)\in\mathbb{Q}$.
Repeating base-$p$ expansion
If the base-$p$ expansion of $x$ repeats with period $m$, then
$$
\begin{align}
f_p(x)
&=\sum_{k=1}^m\sum_{n=0}^\infty\frac{(k+nm)d_k}{p^{k+nm}}\\[6pt]
&=\sum_{k=1}^m\frac{d_k}{p^k}\sum_{n=0}^\infty\frac{k+nm}{p^{nm}}\\[6pt]
&=\sum_{k=1}^m\frac{d_k}{p^k}\left(\frac{kp^m}{p^m-1}+\frac{mp^m}{(p^m-1)^2}\right)\\[6pt]
&=\frac1{p^m-1}\left(mx+p^m\sum_{k=1}^m\frac{k\,d_k}{p^k}\right)\tag{5}
\end{align}
$$
which is again a finite sum of rational numbers, hence $f_p(x)\in\mathbb{Q}$.
Mixed base-$p$ expansions
Note that if there are no base-$p$ carries when adding $x$ and $y$, then each digit of the sum is the sum of the digits, and therefore, by $(1)$ and $(3)$,
$$
f_p(x+y)=f_p(x)+f_p(y)\tag{6}
$$
Furthermore,
$$
\begin{align}
f_p\left(\frac{x}{p^n}\right)
&=\sum_{k=1}^\infty\frac{(k+n)d_k}{p^{k+n}}\\
&=\frac1{p^n}\left(nx+f_p(x)\right)\tag{7}
\end{align}
$$
Combining $(4)$, $(5)$, $(6)$, and $(7)$, we get
Conclusion
If $x\in\mathbb{Q}$, then
$$
\sum_{k=1}^\infty\frac{\{p^kx\}}{p^k}=f_p(x)-x\in\mathbb{Q}
$$
Example 1
In base $5$, $\frac{14}{25}=.\color{#C00000}{24}$. By $(4)$
$$
\begin{align}
f_5\left(\frac{14}{25}\right)
&=\frac{\color{#00A000}{1}\cdot\color{#C00000}{2}}{5^{\color{#00A000}{1}}}+\frac{\color{#00A000}{2}\cdot\color{#C00000}{4}}{5^{\color{#00A000}{2}}}\\[6pt]
&=\frac{18}{25}
\end{align}
$$
Example 2
In base $5$, $\color{#0000FF}{\frac13}=.\overline{\color{#C00000}{13}}$, therefore, $p=5,m=2,d_1=1,d_2=3$. By $(5)$
$$
\begin{align}
f_5\left(\color{#0000FF}{\frac13}\right)
&=\frac1{5^2-1}\left(2\cdot\color{#0000FF}{\frac13}+5^2\left(\frac{\color{#00A000}{1}\cdot\color{#C00000}{1}}{5^\color{#00A000}{1}}+\frac{\color{#00A000}{2}\cdot\color{#C00000}{3}}{5^\color{#00A000}{2}}\right)\right)\\[6pt]
&=\frac{35}{72}
\end{align}
$$
Example 3
In base $5$, $\frac{67}{75}=.24\overline{13}$. Using $(6)$ and $(7)$ and the previous examples, we get
$$
\begin{align}
f_5\left(\frac{67}{75}\right)
&=f_5\left(\frac{14}{25}\right)+f_5\left(\frac13\cdot\frac1{25}\right)\\[6pt]
&=\frac{18}{25}+\frac1{5^2}\left(2\cdot\frac13+\frac{35}{72}\right)\\[6pt]
&=\frac{1379}{1800}
\end{align}
$$ |
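All three examples can be checked with exact rational arithmetic (a sketch; `f_p` is a hypothetical helper that simply truncates the defining series):

```python
from fractions import Fraction

def f_p(x, p, terms=200):
    """Truncation of f_p(x) = sum over n >= 0 of {p^n x} / p^n, computed exactly."""
    s = Fraction(0)
    for n in range(terms):
        s += ((p**n * x) % 1) / Fraction(p)**n   # {p^n x} / p^n
    return s

tests = [(Fraction(14, 25), Fraction(18, 25)),      # Example 1
         (Fraction(1, 3),   Fraction(35, 72)),      # Example 2
         (Fraction(67, 75), Fraction(1379, 1800))]  # Example 3
for x, expected in tests:
    print(abs(f_p(x, 5) - expected) < Fraction(1, 10**100))  # True
```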
How can i find the basis solutions of homogeneous linear ODE? | You may reduce the given DE into another with first derivative removed as follows:
1. Put $y=u(x)v(x)$ in the given DE.
2. Equate the coefficient of $v'(x)$ to zero to obtain $u(x)$.
3. Now solve the reduced DE for $v(x)$, with its first-derivative term missing, by the usual methods of CF and PI.
4. The solution is $y(x)=u(x)v(x)$. |
Geometric Distribution problem, how to solve when we have boundaries? | Geometric distribution:
$P(X=x)=p(1-p)^x$, where $p=0.15$
You want:
$\begin{equation}
P(16\leq X \leq 19) = P(X=16)+P(X=17)+P(X=18)+P(X=19)\\
\end{equation}$ |
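Numerically, with the PMF exactly as written above (a sketch):

```python
p = 0.15
print(sum(p * (1 - p)**x for x in range(16, 20)))  # ~0.0355
```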
Weird notation for logarithm (from a 1887 trigonometry book) | \begin{align}
\log_{10}(0.017453293) + 10 &= 8.24187738 \\
\log_{10}(0.000290882) + 10 &= 6.46371685
\end{align}
I am not sure why the values are shifted by a factor of $10^{10}$; it may have to do with the fact that the author chose to normalize everything to 9 digits past the decimal point and the particular presentation of the log table he has on hand.
This is presumably explained in the footnote or endnote attached to the asterisk symbol that you managed to omit from the picture in your question.
In terms of what the author is doing with the examples: in modern language, the quantity the author denotes by $x$ (the arc length) is simply the angle measured in radians when the circle has radius 1. (Otherwise you have to multiply by the radius.) The page you included in your question shows how to convert from degrees into radians. In modern trigonometry classes, you would've been taught that the formula is
$$ \text{radians} = \frac{2\pi}{360} \times \text{degrees} $$
130 years ago they didn't have calculators, and the fastest way to multiply numbers with many digits was using a log table. So you have that
$$ \log (\text{radians}) = \log (\frac{2\pi}{360}) + \log (\text{degrees}) $$
and addition is considerably easier than multiplication.
The final step then is to know what to add: so the author gives you the value of $\log(2\pi / 360)$ (offset by 10) for converting from degrees to radians, and also the value of $\log(2\pi/21600)$ for converting from minutes to radians, and also the value of $\log(2\pi /1296000)$ for converting from seconds to radians. |
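Checking the quoted identities (a sketch): $2\pi/360\approx0.017453293$, and

```python
import math

print(math.log10(0.017453293) + 10)   # 8.24187738... (degrees to radians)
print(math.log10(0.000290882) + 10)   # 6.46371685... (minutes to radians)
```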
Number of hypercubes intersected by a hyperplane in a uniform partitioned hypercube | Here is a rough (and unoptimized) idea: let's work in $3D$ because it can still be visualised and I think it makes a plausible case.
Imagine a grid of unit cubes filling out space (i.e. a $3D$ version of squared paper). Now place another unit cube into this grid at an arbitrary point and rotated in an arbitrary fashion. It will clearly intersect at most $A$ cubes, where $A$ is some fairly small number (e.g. it certainly won't intersect more than $100$ grid cubes). This setup obviously also works in exactly the same way if we scale everything by a common factor.
Now take your $3D$ unit cube $[0,1]^3$ that you've partitioned into $k^3$ small cubes. Your plane passing through the cube is oriented in some way. Now take another cube with integer side length $B$, rotated so as to align with the plane (i.e. the plane is parallel to one of the cube's sides) and placed so that it contains $[0,1]^3$ (for this to be possible, $B$ has to be big, e.g. $B = 100$ will certainly always suffice). Furthermore, cut the rotated cube into $(Bk)^3$ small cubes of side length $1/k$.
Now it is clear that the plane intersects at most $2(Bk)^2$ of the small rotated cubes since the big rotated cube is aligned so as to be parallel to the plane (the factor of $2$ is there because the plane might slice right in between two layers of the rotated cube). But each small rotated cube intersects at most $A$ of the original small cubes that came from cutting up $[0,1]^3$ by our previous cubic grid argument. (Here the grid doesn't fill out all of space, but that only improves the situation.)
Putting this all together: the intersection of the plane and $[0,1]^3$ is some set that is contained in the intersection of the plane and the big rotated cube. The intersection of the plane and the big rotated cube is contained in the union of at most $2(Bk)^2$ rotated small cubes. Each rotated small cube is contained in the union of at most $A$ original small cubes. Hence the intersection of the plane and $[0,1]^3$ is contained in the union of at most $2AB^2k^2$ small original cubes and we are done.
This is clearly not optimized since I didn't evaluate the constants. Also, we can't really visualise it in higher dimensions. But I do believe the same idea should work in all dimensions, though I don't know how to nicely articulate it. |
Using expected value to find a variable inside a Probability Density Function | You have two equations. Using the defining property of a p.d.f. (it integrates to one), we have
$$
\int_0^4 \left ( \alpha \sqrt{x} +\beta x^{2} \right ) dx = 1
$$
and using the known second moment, we have
$$
\int_0^4 x^2 \left ( \alpha \sqrt{x} +\beta x^{2} \right ) dx = 48/5
$$
If you solve these two integrals, you get two linear equations involving $\alpha$ and $\beta$. Solve this system for your parameters. |
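Carrying this out symbolically (a sketch using SymPy; under these assumptions it yields $\alpha=0$ and $\beta=3/64$):

```python
import sympy as sp

x, a, b = sp.symbols('x alpha beta')
pdf = a * sp.sqrt(x) + b * x**2
eq1 = sp.Eq(sp.integrate(pdf, (x, 0, 4)), 1)                          # total mass
eq2 = sp.Eq(sp.integrate(x**2 * pdf, (x, 0, 4)), sp.Rational(48, 5))  # E[X^2]
print(sp.solve([eq1, eq2], [a, b]))  # {alpha: 0, beta: 3/64}
```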
Question about inequality | Starting with the second inequality, if inside each set of parentheses we get a common denominator, we have
$$\frac{(2-a-b)^2}{(a+b)^2}\leq \frac{(1-a)(1-b)}{ab}.$$
Multiplying through by $ab/(2-a-b)^2$ (which is positive because of the restrictions on $a$ and $b$), we get
$$\frac{ab}{(a+b)^2}\leq\frac{(1-a)(1-b)}{(2-a-b)^2},$$
which is the first inequality. The argument works in reverse also.
Now, starting with the second inequality again, if we expand everything out we get
$$\frac{4}{(a+b)^2}-\frac{4}{a+b}+1\leq \frac{1}{ab}-\frac1a - \frac1b + 1.$$
Move everything to the right side and combine all fractions. You end up with the third inequality
$$\frac{(a-b)^2(1-(a+b))}{ab(a+b)^2}\geq 0.$$
Again, the argument works in reverse.
The third inequality holds because of the restrictions on $a$ and $b$. Each factor in the numerator is non-negative and each factor in the denominator is positive. |
How to calculate the payout rate of a slot machine? | You identify each payout and its probability, multiply, and add them up. So if there is one jackpot, requiring a specific symbol on all the reels, that pays $1,000,000$ (in units of the bet), the chance you get it is $1$ in $s^r$; this win contributes $\frac{1,000,000}{s^r}$ to the payout rate. If you have another win that requires a single symbol on any reel and pays $3$, it contributes $3\left({1-\left(\frac{s-1}{s}\right)^r}\right)$. You just have to compute each win's probability, multiply by its payout, and add them up. One challenge is making sure the wins are mutually exclusive: in my example, if the jackpot symbol and the symbol in the win paying $3$ were the same, you would have to subtract the jackpot probability from the other, as you presumably pay only the jackpot. |
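A simulation of the two example wins above (a sketch; it assumes $s=8$ symbols, $r=3$ reels, and distinct jackpot and small-win symbols, so the wins are mutually exclusive):

```python
import random

s, r = 8, 3                       # assumed symbols per reel and reel count
JACKPOT, SMALL = 0, 1             # distinct symbols

exact = 1_000_000 / s**r + 3 * (1 - ((s - 1) / s)**r)

random.seed(1)
trials, total = 10**6, 0.0
for _ in range(trials):
    reels = [random.randrange(s) for _ in range(r)]
    if all(v == JACKPOT for v in reels):
        total += 1_000_000
    elif SMALL in reels:
        total += 3
print(exact, total / trials)      # close; the rare jackpot converges slowly
```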
Help needed in proving a result assuming Prime number theorem | By the prime number theorem, for any $0<\varepsilon<1/3$ there is an $x_0>0$ such that, for all $x \ge x_0$,
$$
\pi(x)\leq \frac{{1 - \varepsilon }}{{1 - 2\varepsilon }}\frac{x}{\log x}
$$
(note that $\frac{{1 - \varepsilon }}{{1 - 2\varepsilon }}>1$ in the given range). Hence if $s$ is so large that $(1 - 2\varepsilon )\log s >x_0$, then
\begin{align*}
\log D & \le \pi ((1 - 2\varepsilon )\log s)\log ((1 - 2\varepsilon )\log s) \\ & \le \frac{{1 - \varepsilon }}{{1 - 2\varepsilon }}\frac{{(1 - 2\varepsilon )\log s}}{{\log ((1 - 2\varepsilon )\log s)}}\log ((1 - 2\varepsilon )\log s) \\ &= (1 - \varepsilon )\log s .
\end{align*} |
How to find the cartesian equation of a plane given the vector equation? | You have $$\frac{x+3}{6}=\frac{z+1}{-3}$$ and $$\frac{y+5}{6}=\frac{z+1}{-3}$$
Now add these two equations. |
Critical Points of Vector Function | Hint: $f= g\circ h\circ F$ where $g(z)= \langle w , z \rangle$, $h(y)=\frac{y}{\|y\|^2}$, and $F(v)=Av$. |
Would two sequence that converge also produce a converging limit | Not true ... consider $a_n=1$ and $b_n = \frac1n$.
If, however, the limit of $b_n$ is nonzero, then $a_n/b_n$ will converge. |
How can I find $P(X>0 | Y<0)$? | For an $n\times 1$ Gaussian random vector $X$, the density function is $$f_{X}(x)=\frac{1}{(2\pi)^{n/2}\sqrt{\det C}}\exp\left(-\tfrac12(x-\mu)^TC^{-1}(x-\mu)\right)$$ where $\mu$ is its mean vector and $C$ is the covariance matrix. So, for your (zero-mean) case the joint density of $X,Y$ is $$f_{X,Y}(x,y)=\frac{1}{2\pi\sqrt{\det C}}\exp\left(-\tfrac12[x\quad y]\,C^{-1}[x \quad y]^T\right)$$ where $$C=\begin{pmatrix}
1 & 1/2\\
1/2 & 1
\end{pmatrix}$$ Hope you can now do it yourself. |
How are 55 minutes gained in 60 minutes by minute hand? | The answer you link to says the minute hand gains $55$ minutes on the hour hand, not the other way around. As you say, in one hour the minute hand moves $360^\circ$, which the answer calls $60$ minutes. In one hour, the hour hand moves $30^\circ$, so the minute hand gains $330^\circ=55$ minutes. |
Transformation matrix from a translated-rotated coordinate system to the general coordinate system | A matrix operation expresses a linear transformation, and as such will always map the origin to the origin. For this reason, you cannot express the translated coordinate system in terms of a simple $3\times3$ matrix, much less the translated-and-rotated one.
One common approach to solve this problem is using homogeneous coordinates. The stripped-down version of this concept is this: add a fourth coordinate which will always be $1$. You need a more general setup if you want to express projective transformations, but you only have affine transformations, so this approach is enough.
Now you want a matrix product of the form
$$
\begin{pmatrix}X\\Y\\Z\\1\end{pmatrix}=
\begin{pmatrix}
M_{11}&M_{12}&M_{13}&M_{14}\\
M_{21}&M_{22}&M_{23}&M_{24}\\
M_{31}&M_{32}&M_{33}&M_{34}\\
0&0&0&1\end{pmatrix}\cdot
\begin{pmatrix}x\\y\\z\\1\end{pmatrix}
$$
As you can see, I assume column vectors, and products where the matrix is on the left and the vector is on the right. You can compute this matrix as a product of matrices. For example, a translation from the origin to the point $o=(X_0,Y_0,Z_0)$ would be written as
$$M_0=\begin{pmatrix}
1&0&0&X_0\\
0&1&0&Y_0\\
0&0&1&Z_0\\
0&0&0&1
\end{pmatrix}$$
Your description about the rotations is not very precise, but if you know what you want to do, you can likely formulate this into a product of rotation matrices as well. In the end, multiply all the operations you want to perform, in the correct order (i.e. first operation to perform is the rightmost matrix of the product), and you are done. |
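A minimal sketch of composing such matrices with NumPy (assumes a rotation about the $z$-axis; the rightmost factor acts first):

```python
import numpy as np

def translation(x0, y0, z0):
    T = np.eye(4)
    T[:3, 3] = [x0, y0, z0]
    return T

def rotation_z(theta):
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

M = translation(1.0, 2.0, 3.0) @ rotation_z(np.pi / 2)  # rotate, then translate
p = np.array([1.0, 0.0, 0.0, 1.0])                      # homogeneous point
print(M @ p)                                            # [1. 3. 3. 1.]
```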
Does $h_n = T(h_{n-1})$ for a compact operator $T$ imply that $h_n$ is a bounded sequence? | For $(a_j) \in c_0$ (the space of sequences converging to $0$) the multiplication operator $K:\ell^2 \to \ell^2$ defined by $K(x)_j = a_j x_j$ is compact and bounded with $\lvert \lvert K \rvert \rvert = \lvert \lvert a \rvert \rvert_\infty$.
Take $a_j = 2^{2-j}$ for $j \geq 1$ so $(a_j) \in c_0$. Let $h_0 = e_1$ where $\{e_j:j \geq 1 \}$ is the standard complete orthonormal sequence in $\ell^2$ and define the sequence $h_k$ as in the question. Then $h_k = 2^k e_1$ so $\lvert \lvert h_k \rvert \rvert_2 = 2^k$. |
$\tilde{H}^i(\sum X) \cong \tilde{H}^{i-1}(X)$ | Use the Mayer-Vietoris sequence to prove the first part. The two cones are contractible, and their intersection is homotopy equivalent to $X$.
For the second part, consider any space whose cohomology ring has nontrivial products and check what the dimension of the products would be. For example $H^*(\mathbb{CP}^2) = \mathbb{Z}[x]/(x^3)$ with $|x| = 2$. This means that if $X = \Sigma \mathbb{CP}^2$, then:
$$H^i(X) = \begin{cases} \mathbb{Z} & i = 0, 3, 5 \\ 0 & \text{otherwise} \end{cases}$$
Let $u$ be a generator of $H^3(X)$, then $|u^2| = 2 |u| = 6$ so necessarily $u^2 = 0$. But $x^2 \neq 0$ in $H^*(\mathbb{CP}^2)$, so it's not an isomorphism of rings (the cup product is not preserved). |
Probability Questions | Question 1: Let $H$ be the event "buys high-tech" and $M$ the event "buys every month."
(i) We have been told that $P(H|M)=0.3$. If $H$ and $M$ were independent, we would have $P(H|M)=P(H)$. But we have been told that $P(H)=0.2$. So $H$ and $M$ are not independent.
Much more informally, the proportion of high-tech shoppers among monthly shoppers is $0.3$, substantially more than the proportion of high-tech shoppers in the general population. If we know that someone is a monthly shopper, our estimate that she is a high-tech shopper is different (and bigger) than if we do not know about the monthly shopping habit.
(ii) We want $P(M|H)$, the probability of $M$ given $H$. But $$P(M|H)P(H)=P(M\cap H)=P(H|M)P(M).$$
We know $P(H|M)$, $P(M)$, and $P(H)$, so we can compute $P(M|H)=\frac{P(H|M)P(M)}{P(H)}=\frac{(0.3)(0.6)}{0.2}=0.9$.
We can also use the fact that $P(H\cap M)=P(H|M)P(M)$ to find that $P(H\cap M)=(0.3)(0.6)=0.18$. But $P(H)P(M)=(0.2)(0.6)=0.12$. So $P(H\cap M)\ne P(H)P(M)$, which is another (and in this case more complicated) way of seeing that $H$ and $M$ are not independent.
Question 2: The procedure for (i) is right.
For (ii), if $Y$ is the number of smokers in a sample of $400$, then $Y$ has binomial distribution, mean $(400)(0.25)$ and variance $(400)(0.25)(0.75)=75$. The probability that $Y \le 112$ is, approximately, the probability that a normal with mean $100$ and variance $75$ is $\le 112.5$. (We have made the continuity correction. If you do not, and use $112$ instead, the approximation is likely to be less good.)
So our probability is approximately the same as the probability that $Z\le \frac{112.5-100}{\sqrt{75}}$, where $Z$ is standard normal.
Added Remarks: I do not think that "less than $113$" can be interpreted as meaning $113$ or fewer, which is where your $1.5$ comes from. By the way, the inequality should go the other way, we want (with your interpretation) $P(Z\lt 1.5)$.
The probability that $Y \le 112$ is (Wolfram Alpha) approximately $0.924184$. With the continuity correction, the normal approximation gives probability roughly $0.925543$. Not bad. The probability that the normal is $\le 112$ (so no continuity correction) is about $0.917072$, respectable, but not nearly as accurate.
Note that with the availability of good computing tools, we can evaluate binomial probabilities directly, so the normal approximation to the binomial is of diminishing practical importance. |
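The three values quoted above are easy to reproduce (a sketch):

```python
from math import comb, erf, sqrt

n, p = 400, 0.25
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(113))

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
mu, sd = n * p, sqrt(n * p * (1 - p))
print(exact)                    # ~0.924184  P(Y <= 112), exact binomial
print(Phi((112.5 - mu) / sd))   # ~0.925543  with continuity correction
print(Phi((112.0 - mu) / sd))   # ~0.917072  without
```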
Borel sets: alternative characterization for metric space | For the answer to the first question: let $\mathcal S' = \{A \in \mathcal S : A^C \in \mathcal S\}$. See that $\mathcal S'$ contains all open sets, that $\mathcal S'$ is closed under taking complements, and that $\mathcal S'$ is closed under countable unions. |
Different answer when simplifying before integrating | $$\ln(k-y)+c=\ln((1-y/k)k)+c=\ln (1-y/k)+\underbrace{\ln k+c}_{c'}$$
Use $\ln ab=\ln a+\ln b$ |
Show that $m^{*}([a, b]\backslash G)=b − a- m^{*}{(G)}$ | It suffices to prove that, if $G$ is an open subset of $[a, b]$, then we have $m_*(G) = m^*(G)$, where $m_*(G)$ is defined as $\sup\{m^*(K): K \subseteq G \text{ compact}\}$.
We write $G$ as a countable disjoint union of open intervals $G = \bigcup_{i = 1}^\infty I_i$. I assume that you know that $m^*(G) = \sum_{i = 1}^\infty |I_i|$.
Since $m_*(G) \leq m^*(G)$ always holds, it suffices to show the other direction.
In the following, we assume that $m^*(G) < \infty$. The case $m^*(G) = \infty$ can be treated similarly.
For a given $\epsilon > 0$, we choose a finite subset $S \subseteq \{1, 2, \dots\}$ such that $m^*(G) - \sum_{i \in S} |I_i| < \frac \epsilon 2$.
Without loss of generality, we may assume that $S$ is the set $\{1, \dots, n\}$. We write $I_i = (a_i, b_i)$.
We then construct a compact set $K = \bigcup_{i = 1}^n [a_i + \frac \epsilon{4n}, b_i - \frac \epsilon{4n}]$. Since $K$ contains the disjoint union $\bigcup_{i = 1}^n (a_i + \frac \epsilon{4n}, b_i - \frac \epsilon{4n})$, we have $$m^*(K) \geq m^*(\bigcup_{i = 1}^n (a_i + \frac \epsilon{4n}, b_i - \frac \epsilon{4n})) = \sum_{i = 1}^n (|I_i| - \frac \epsilon {2n}) = (\sum_{i = 1}^n |I_i|) - \frac \epsilon 2 > m^*(G) - \epsilon. $$
Therefore we have shown that $m_*(G) > m^*(G) - \epsilon$ for any $\epsilon > 0$. |
Can real linear map be defined using only additivity condition? | If you can check that $f$ is continuous, then the assertion is true since $\Bbb Q$ is dense in $\Bbb R$. I don't know if $f$ being measurable is already enough, maybe so.
If this is not the case, we can construct $f$ additive but not linear. The idea uses the existence of a Hamel basis of $\Bbb R$ as a $\Bbb Q-$vector space and you can read a sketch of it here, for example. |
No bijective function $\mathbb Z\to\mathbb Z_+$ is a polynomial | As you said, $f$ will grow too fast if it's not linear. Indeed, let $f(x) = a_nx^n + \dots + a_0$ where $n \ge 2$ is even. You can write $f(x) = x^n(a_n + a_{n-1}x^{-1} + \dots ) = x^ng(x)$. Now $g(x) \to a_n \neq 0$ as $|x| \to \infty$, so there are $c > 0$ and $N_0$ such that $|f(x)| \geq c|x|^n$ whenever $|x| \geq N_0$. If $f$ were a bijection onto $\mathbb Z_+$, then for $N \geq N_0$ every positive integer below $cN^n$ would have to be the image of some $x$ with $|x| < N$; but there are fewer than $2N$ such $x$, which is impossible for large $N$ since $n \ge 2$. This means that $f$ can't be bijective. |
Transformation from one space to another and keep a custom sorting rule | Notice that your question is equivalent to $f(a_1)\leq f(a_2) \implies a_1 \leq a_2$.
Such an $f$ would have to be injective.
Indeed, if $f(a_1) = f(a_2)$, then $f(a_1)\leq f(a_2)$ and $f(a_2)\leq f(a_1)$, so $a_1 \leq a_2$ and $a_2 \leq a_1$, i.e. $a_1 = a_2$.
Hence, whenever $a_1 \neq a_2$, $f(a_1) \neq f(a_2)$ and since $\mathbb Z$ is totally ordered we would necessarily have (relabeling if necessary) $f(a_1) < f(a_2)$.
But the order you imposed on $\mathbb Z^n$ is partial, and so whenever $a_1$ and $a_2$ are incomparable, the property you wish for will fail. |
weak convergence on $\ell_1$ | Fix $\varepsilon>0$ and an integer $n$. For each integer $k$, the map $l_k\colon\{x_j\}_j\mapsto x_k$ is a continuous linear functional, so by weak convergence you can find $N_k$ such that for $j\geq N_k$, $|l_k(x^j)|=|x_k^{j}|\leq \varepsilon/n$. Now take, for example, $K:=\max\{N_k : 1\leq k\leq n\}$. |
Give conditions for a map defined on a domaine $D$ is a contraction mapping | A sufficient condition is that the norm of the Jacobian is bounded by $r$ on $D$. This is an immediate consequence of the Mean value theorem in several variables.
Obviously, you need to consider the matrix norm induced by your $\Vert \cdot \Vert$ norm to apply the Mean value theorem quoted above.
The converse is also true: if $\Vert J_f(x_0)\Vert > r$ at some $x_0 \in D$, you'll be able to find a point $x$ "close to" $x_0$ such that $$\Vert f(x) - f(x_0) \Vert > r \Vert x - x_0 \Vert$$ |
How does one prove that $e$ exists? | Let $a>0$ and $a\ne1$. First we have to prove the existence of $\displaystyle \lim_{h\to0}\frac{a^h-1}{h}$.
Assume that $r>1$ and let $f(x)=x^r-rx+r-1$ for $x>0$. Then
$$f'(x)=r(x^{r-1}-1)\begin{cases}<0 &\text{if }0<x<1\\
=0 &\text{if }x=1\\
>0 &\text{if }x>1 \end{cases}$$
Therefore, $f$ attains its absolute minimum at $x=1$. So for all $x>0$, we have
$$f(x)\ge f(1)=0$$
$$x^r\ge rx+1-r$$
So when $r>1$ and $h>0$, $\displaystyle\frac{a^{rh}-1}{rh}\ge\frac{ra^h+1-r-1}{rh}$ and hence
\begin{align}
\frac{a^{rh}-1}{rh}-\frac{a^h-1}{h}\ge0
\end{align}
When $r>1$ and $h<0$, $\displaystyle\frac{a^{rh}-1}{rh}\le\frac{ra^h+1-r-1}{rh}$ and hence
\begin{align}
\frac{a^{rh}-1}{rh}-\frac{a^h-1}{h}\le0
\end{align}
Therefore, $\displaystyle \frac{a^h-1}{h}$ is an increasing function of $h$. It is also bounded below near $0$: for $a>1$ it is positive, and for $0<a<1$, writing $b=1/a>1$ gives $\frac{a^h-1}{h}=-a^h\,\frac{b^h-1}{h}\ge-\frac{b^h-1}{h}\ge-(b-1)$ for $0<h\le1$. Hence $\displaystyle \lim_{h\to0^+}\frac{a^h-1}{h}$ exists.
When $h<0$,
$$\frac{a^h-1}{h}=a^h\left(\frac{a^{-h}-1}{-h}\right)$$
As $\displaystyle \lim_{h\to0^-}a^h$ exists and equals $1$, $\displaystyle \lim_{h\to0^-}\frac{a^h-1}{h}$ exists and $\displaystyle \lim_{h\to0^-}\frac{a^h-1}{h}= \lim_{h\to0^+}\frac{a^h-1}{h}$.
Therefore, $\displaystyle \lim_{h\to0}\frac{a^h-1}{h}$ exists.
Now we are ready to prove that there exists an $e$ such that $\displaystyle \lim_{h\to0}\frac{e^h-1}{h}=1$.
Define $e=a^\frac{1}{k}$, where $\displaystyle k=\lim_{h\to0}\frac{a^h-1}{h}$. Then
\begin{align}
\lim_{h\to0}\frac{e^h-1}{h}&=\lim_{h\to0}\left(\frac{a^\frac{h}{k}-1}{\frac{h}{k}}\cdot \frac{1}{k}\right)\\
&=k\cdot\frac{1}{k}\\
&=1
\end{align}
This number $e$ is unique. Indeed, if $b>0$ and $\displaystyle \lim_{h\to0}\frac{b^h-1}{h}=1$, then we can prove that $b=e$.
Let $p=\log_eb$. Then $b=e^p$.
\begin{align}
\lim_{h\to 0}\frac{b^h-1}{h}-\lim_{h\to 0}\frac{e^h-1}{h}&=1-1\\
\lim_{h\to 0}\frac{e^{ph}-e^h}{h}&=0\\
\lim_{h\to 0}\left[(p-1)e^h\cdot\frac{e^{(p-1)h}-1}{(p-1)h}\right]&=0\\
(p-1)(1)(1)&=0\\
p&=1
\end{align}
Hence $b=e$. |
Proof writing involving power set and cartesian product: $(P(A) \times P(B)) \subseteq P(A \times B)$ | This proof is not correct, nor is the result you're trying to prove even true. The flaw is that $x\subseteq A$ and $y\subseteq B$ does not imply that $(x,y)\subseteq A\times B$. It is true that if (and only if) $x\in A$ and $y\in B$ then $(x,y)\in A\times B$ (this is the definition of the set $A\times B$), but you can't replace $\in$ with $\subseteq$.
To show that $(x,y)\subseteq A\times B$, you have to prove that $(x,y)$ is a subset of $A\times B$. But this doesn't even make sense to say: $(x,y)$ is an ordered pair, not a set.
(Well, in some contexts an ordered pair might be defined as a certain set. But I don't know of any standard definition of it as a set such that it would be a subset of $A\times B$). |
Generalization of Hoeffding Inequality | To bound this event, you can just apply Hoeffding's Inequality as usual, replacing $t$ with $t + (c - 1)\mathbb{E}[\bar{X_n}]$. |
How do I prove these three statements true/false? | For the first proposition $n=5$ is a counterexample.
$$(2\cdot 5 +1)^2 -2 = 7 \cdot 17$$
For the second proposition, $n=6$ is a counterexample.
$$6^3-(6-1)^3=7\cdot 13$$
For the last proposition, $n=6$ is a counterexample.
$$(2\cdot 6)^2=144 \text { and } 147=3\cdot 7^2$$ |
Differentiability for a function from $\mathbb{R^3} \to \mathbb{R}$ | For the differentiability of several real variables maps, see Wikipedia.
Your map is not differentiable at the origin as it is not even continuous. $f(x_1,0,0) = 1$ for $x_1 >0$ and $f(x_1,0,0) = -1$ for $x_1 <0$. |
Alternative proof for $\int_0^\infty\sqrt t\cos(t^2)\mathrm dt<\infty$ | You are on the right track. Start from
$$
\int_0^{+\infty} \frac{\cos u}{\sqrt[4]{u}}du
$$
and apply Dirichlet's test (see page 34 of this file). The function $u \mapsto 1/\sqrt[4]{u}$ is bounded and monotonically decaying to zero at infinity. Here you have a little issue at $u=0$, but this is easily analyzed directly: so you can integrate over $[1,+\infty)$. |
Show that $B_0 \subset B_1$ for $B_j$ $:= \sigma(A_j)$ being a $\sigma$-algebra. | So you need to show if $B_{0} = \sigma( \{ (a,b) \mid a, b \in \Bbb R \} )$ and $B_{1} = \sigma( \{ (a,b] \mid a, b \in \Bbb R \} )$, then $B_{0} \subseteq B_{1}$, where of course we are assuming $a \leq b$.
But $B_{0}$, which is defined as $\sigma( \{ (a,b) \mid a, b \in \Bbb R \} )$, is by definition the smallest $\sigma$-algebra containing the intervals $(a,b)$, right? Since $B_{1}$ is another $\sigma$-algebra, then if we can show every interval $(a,b)$ is in $B_{1}$, then this necessarily implies $B_{0} \subseteq B_{1}$, since otherwise $B_{0} \cap B_{1}$ would be a smaller $\sigma$-algebra than $B_{0}$ containing the intervals $(a,b)$, a contradiction. (Of course I'm using the fact that if $X$ and $Y$ are $\sigma$-algebras, then so is $X \cap Y$ -- and this is something very easy to prove, and you should prove it yourself.)
So, let's show $(a,b) \in B_{1}$ for every interval $(a,b)$. $B_{1}$ is the $\sigma$-algebra generated by the intervals $(a,b]$, so all of the intervals $(a,b]$ are in $B_{1}$. If we could only express $(a,b)$ as maybe a countable union of $(c,d]$-type intervals, then since the $(c,d]$-type intervals are in $B_{1}$ and $B_{1}$ is closed under countable unions, we would get $(a,b)$ is in $B_{1}$.
Hmm, well, $(a,b) = \bigcup \limits_{n=1}^{\infty} (a,b-\frac{1}{n}]$. I am leaving this fact for you to prove. Once you prove it, you can use it.
Okay, so since $(a, b-\frac{1}{n}]$ is in $B_{1}$ for every $n$, and $(a,b)$ is the countable union of elements of $B_{1}$, that means $(a,b)$ is in $B_{1}$. Then this implies $B_{0} \subseteq B_{1}$ by the reasoning I gave earlier. |
Building a compound probability distribution | As a non-statistician I dislike percentages, so my $p$ stands for your p%.
Maybe something like: $$X:=US$$ where $U$ and $S$ are independent random variables.
This with $P(U=1)=p$ and $P(U=0)=1-p$ and with $S$ having PDF $f$.
$U$ somehow states whether there is a shock or not.
$S$ somehow "measures" a shock (and is probably meant to be positive).
$X$ is (if $p>0$) not continuous and for $x\geq0$ we have $$P(X\leq x)=(1-p)+pP(S\leq x)$$ |
How to make this function suitable for Fourier transform | The long and short of this is: you can't. The best thing to do is to approximate the exponential component with a series expansion of $e^{x}$, or use the solutions for the chirped oscillating wave. |
Solving a small polynomial system | Start with a little simplification: Divide all of your unknowns by $a$, $v$ by $a^2$ and $d_1, d_2$ by $a^3$. Then also divide $t_r, t_p$ by $-v$ and $d_1, d_2$ by $v^2$. Then your 6 equations become:
$2d_1 = p_1t_p^2 + r_1t_r^2 + 2p_1t_pt_r - 2(t_p+t_r)$
$2d_2 = p_2t_p^2 + r_2t_r^2 + 2p_2t_pt_r$
$1 = (p_1t_p+r_1t_r)$
$0 = (p_2t_p+r_2t_r)$
$1 = p_1^2 + p_2^2$
$1 = r_1^2 + r_2^2$
Equations 5 and 6 tell us that there exist $\theta, \phi$ with $$p_1 = \cos\theta, \quad p_2 = \sin\theta\\r_1 = \cos\phi, \quad r_2 = \sin\phi$$
Now look at 3 and 4: treating the $p$s and $r$s as knowns and applying gaussian elimination, we get $$t_p = \frac{r_2}{r_2p_1 - r_1p_2} = \frac{\sin \phi}{\sin\phi\cos\theta-\cos\phi\sin\theta} = -\frac{\sin\phi}{\sin(\theta - \phi)}$$
$$t_r = \frac{p_2}{p_2r_1-p_1r_2} = \frac{\sin \theta}{\sin\theta\cos\phi-\cos\theta\sin \phi} = \frac{\sin\theta}{\sin(\theta - \phi)}$$
Substituting all of this into 1 and 2, and multiplying through by $\sin(\theta - \phi)^2$, we get:
$$2d_1\sin(\theta - \phi)^2 = \cos\theta \sin^2\phi + \cos\phi\sin^2\theta + 2\cos\theta\sin\theta\sin\phi - 2(\sin\phi + \sin\theta)$$
$$2d_2\sin(\theta - \phi)^2 = \sin\theta\sin^2\phi + \sin\phi\sin^2\theta + 2\sin^2\theta\sin\phi$$
Which is still a nasty batch of trig to simplify, but at least now you are down to two equations in two unknowns. |
Vectors sometimes used in math just as arrays/lists of numbers, sometimes as concept of "change" | The formal definition of a vector is pretty open-ended (a member of a vector space). At a very high level, a vector is a collection of mathematical objects that obeys rules of addition and scalar multiplication.
A container of numbers isn't too bad. But the objects could be something like differential operators. And, they could be other vectors.
But then, what do these objects represent? They could be points on plane (or in space) and take Euclidean geometry into n-dimensions.
Physicists use them to model position, velocity, and acceleration of objects. And to represent forces acting on an object.
Since vectors obey rules of addition, and scalar multiplication -- they don't have to be the standard rules, they just have to follow some well-defined rule -- they form algebraic structures. Which opens up a world of just what is "Algebra."
The definition is pretty abstract, and the applications are manifold. |
Is this proof for Corollary 29.4. Munkres Topology correct? | First, suppose $X$ is homeomorphic to an open subspace $A$ of a compact Hausdorff space $Y$. By Corollary 29.3, $A$ is locally compact, and hence so is $X$. Also, $A$ is Hausdorff since any subspace of a Hausdorff space is Hausdorff, and thus so is $X$.
Conversely, suppose $X$ is locally compact Hausdorff. By Theorem 29.1, there exists a compact Hausdorff space $Y$ such that $X$ is a subspace of $Y$ and $Y-X$ has one point. Since $Y$ is Hausdorff, the singleton set $Y-X$ is closed in $Y$, so $X$ is open as a subset of $Y$. Thus $X$ is homeomorphic to an open subspace of a compact Hausdorff space (namely, itself as a subspace of $Y$). |
moment generating function from given PMF? | Let $X$ be our random variable. Recall that the mgf of $X$ is $E(e^{tX})$. In our case, this is
$$\frac{1}{n}\left(e^{at}+e^{(a+1)t}+e^{(a+2)t}+\cdots +e^{(a+n-1)t}\right).$$
The expression can be "simplified," for it is a geometric series with first term $\frac{e^{at}}{n}$ and common ratio $e^t$. The sum, for $t\ne 0$, is $\frac{e^{at}}{n}\cdot \frac{e^{nt}-1}{e^t-1}$. |
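A quick numerical check of the geometric-series simplification (a sketch, with assumed values $a=3$, $n=5$, $t=0.7$):

```python
import numpy as np

a, n, t = 3, 5, 0.7
direct = np.mean([np.exp(t * (a + k)) for k in range(n)])
closed = np.exp(a * t) / n * (np.exp(n * t) - 1) / (np.exp(t) - 1)
print(direct, closed)  # equal
```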
Can you describe what is $S^1 \times [0,\infty)$? | One way to think of the cross product, is that at each point in the first factor, you are attaching a copy of the second space.
For example, $S^1 \times \{s\}$, where $s$ is a single point, is really just $S^1$, since each point is replaced by a single point.
$S^1 \times \{s,t\}$ is two copies of the circle. You can view this by taking a circle and, at each point, replacing it with two points; the full collection of these gives two circles.
Actually, $S^1 \times \{1, \dots, n\}$ is nothing but $n$ circles, and $S^1 \times \mathbb Z$ is a countable collection of circles. You can visualize them as stacked along some vertical axis, with a circle at each integer.
Going further, $S^1 \times \mathbb R$ is a circle, but whenever there was a point, you replace it with a line, so you get a circle of lines, or in other words, a cylinder.
$S^1 \times [a,\infty)$ is the same, but with a half open interval.
$S^1 \times S^1$ is a circle of circles, so at each point you attach a circle (for the sake of visualization, say you attach a circle with smaller radius), then you get a torus, with the traditional donut visual. |
Integration of $I=\int_0^1\sqrt{1-x^2}e^{-x^2}~dx$ | Use a trig substitution as you would in the other integral to get
$$I = \int_0^{\pi/2} dt \, \sin^2{t} \, e^{-\cos^2{t}} = \frac12 \int_0^{\pi} dt \, \sin^2{t} \, e^{-\cos^2{t}}$$
then use the half-angle formulae to get
$$I = \frac12 \int_0^{\pi} dt \, \frac12 (1-\cos{2 t}) e^{-(1+\cos{2 t})/2} = \frac1{4 \sqrt{e}}\left [\int_0^{\pi} dt \, e^{-\frac12 \cos{2 t}} - \int_0^{\pi} dt \,\cos{2 t} \, e^{-\frac12 \cos{2 t}}\right ]$$
The first integral is simply $\pi I_0(1/2)$; the second is $-\pi I_1(1/2)$. The result is
$$I = \frac{\pi}{4 \sqrt{e}} \left [I_0\left (\frac12\right)+I_1\left (\frac12\right) \right ]$$ |
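A numerical confirmation of the closed form (a sketch; `iv` is SciPy's modified Bessel function of the first kind):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

numeric, _ = quad(lambda x: np.sqrt(1 - x**2) * np.exp(-x**2), 0, 1)
closed = np.pi / (4 * np.sqrt(np.e)) * (iv(0, 0.5) + iv(1, 0.5))
print(numeric, closed)  # both ~0.6295
```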
Number of ways of dividing a set into a set of sets | Your first formula, though not quite given in full detail, is correct if one makes a reasonable interpretation of what you mean by $\dots$. No explanation is given for the first formula: the reasoning that led to it is not mentioned. Although the actual reasoning is easy to guess from the structure of the formula, some explanation ought to be given. We give a justification of the second formula.
Assume that all of the $n_i$ are distinct. This was not mentioned explicitly, but is a necessary assumption. The result is false otherwise.
Imagine arranging our $n$ objects in a row. Now group the first $n_1$ together, then the next $n_1$, and so on until we have $a_1$ groups of $n_1$ members each. Then do the same with $n_2$, and so on.
Do this for every one of the $n!$ permutations of our $n$ objects. We will get every division of the type you are looking for. The only problem is that we get every division in more than one way. But luckily every division is obtained in the same number of ways.
Each division into groups of $n_1$ occurs in $a_1!(n_1!)^{a_1}$ ways. This is because each of the $a_1$ little groups can be internally permuted in $n_1!$ ways, for a total of $(n_1!)^{a_1}$ ways. Then each of the $a_1$ groups themselves can be permuted as blocks in $a_1!$ ways. The same consideration applies to all $i \le m$. Thus we need to divide $n!$ by
$$\left[a_1!a_2!\cdots a_m!\right]\left[(n_1!)^{a_1}(n_2!)^{a_2}\cdots (n_m!)^{a_m}\right].$$ |
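A brute-force check in a small case (a sketch): dividing $4$ objects into $a_1=2$ unordered groups of size $n_1=2$ should give $4!/\big(2!\,(2!)^2\big)=3$ divisions.

```python
from itertools import permutations
from math import factorial

divisions = set()
for perm in permutations(range(4)):      # arrange in a row, then group pairs
    divisions.add(frozenset(frozenset(perm[i:i + 2]) for i in (0, 2)))

print(len(divisions), factorial(4) // (factorial(2) * factorial(2)**2))  # 3 3
```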
How to divide polynomial matrices | Dividing matrices is multiplying by the inverse, i.e if you want to divide $A$ by $B$ then you calculate $A\cdot B^{-1}$ (right) or $B^{-1}\cdot A$ (left). In order to find the inverse you use the Gauss reduction or the adjoint formula. For $2\times2$ matrices you have:
$$\left( \begin{array}{cc} a&b \\ c&d \end{array}\right)^{-1} = \frac{1}{ad-bc}\left( \begin{array}{cc} d&-b \\ -c&a \end{array}\right)$$
It doesn't matter whether the entries are numbers or polynomials.
To elaborate on the comments: For example, suppose $A=\left( \begin{array}{cc} x^2&x+1 \\ x+2&x^2+1 \end{array}\right)$ and $B=\left( \begin{array}{cc} x&x \\ x+1&2x \end{array}\right)$. Then you have $B^{-1}= \frac{1}{2x^2-x^2-x}\left( \begin{array}{cc} 2x&-x \\ -x-1&x \end{array}\right)$. Suppose you want to calculate the right quotient.
$$AB^{-1}=\frac{1}{x^2-x}\left( \begin{array}{cc} 2x^3-(x+1)^2&-x^3+x^2+x\\ 2x^2+4x-(x+1)(x^2+1)&x^3+x-x^2-2x \end{array}\right)$$
Now divide each entry by the determinant, to get:
$$AB^{-1}=\left( \begin{array}{cc} 2x+1&-x \\ -x&x \end{array}\right)+\frac{1}{x^2-x}\left( \begin{array}{cc} -x-1&x \\ 3x-1&-x \end{array}\right)$$
Multiplying both sides by $B$, you get:
$$A=\left( \begin{array}{cc} 2x+1&-x \\ -x&x \end{array}\right)\cdot B+\left( \begin{array}{cc} 0&1 \\ 2&1 \end{array}\right)$$
So the right quotient is $\left( \begin{array}{cc} 2x+1&-x \\ -x&x \end{array}\right)$ and the right remainder is $\left( \begin{array}{cc} 0&1 \\ 2&1 \end{array}\right)$. |
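Verifying the division with SymPy (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[x**2, x + 1], [x + 2, x**2 + 1]])
B = sp.Matrix([[x, x], [x + 1, 2*x]])
Q = sp.Matrix([[2*x + 1, -x], [-x, x]])   # right quotient
R = sp.Matrix([[0, 1], [2, 1]])           # right remainder

print(sp.simplify(sp.expand(Q * B + R - A)))  # zero matrix, so A = Q*B + R
```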
Formula to get total combination possibilities | You can map this to the problem of balls in bins with limited capacity by subtracting $\text{Num}\cdot\text{Min}$ from $\text{Tot}$ and then distributing the remainder into $\text{Num}$ bins with equal capacity $\text{Max}-\text{Min}$. The above page gives a formula for the case of equal capacities at the very end, which can be derived using the inclusion–exclusion principle by considering the number of bins filled to capacity. |
Check whether a function is an isometry using definition | If $V$ and $W$ are not specified, you may assume that $V=W=\Bbb R^2.$
Note that the given transformation is a $45$ degree counterclockwise rotation, which preserves the angles and distances; hence, it is an isometry. |
linear transformation matrix relative to a basis | If you are familiar with change of basis you should know that the entries of a linear transformation $f : (V,B) \longmapsto (V,B')$ are the coordinates in the basis $B'$ of of the image of the starting basis $B$.
Let's $B:=\{v_{1},v_{2},v_{3}\}$ and $\phi:=f$ .
Let's think of $V$ as $\mathbb{R}^{3}$ thanks to the isomorphism of coordinates (given by the specific that $B$ is a basis of $V$, real vectorial space).
In our case since $f(v_{1}) = f(v_{2}) = f(v_{3}) = 1 \cdot v_{1} + 1 \cdot v_{2} + 1 \cdot v_{3}$ we have that :
$$M_{B \to B}(f):=A= \begin{pmatrix}1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1\end{pmatrix}$$
Why is that? Simply because the coordinates of $f(v_{1})$ in the basis $B$ are $(1,1,1)$, i.e. $f(v_{1}) = 1 \cdot v_{1} + 1 \cdot v_{2} + 1 \cdot v_{3}$, and similarly for the other columns.
Of course, to find the characteristic polynomial you could compute $\det(A-tI)$.
But in this case the job is much easier, why ? Simply note that $v_{1}+v_{2}+v_{3}$ is an eigenvector of eigenvalue $3$,
(You can notice that by seeing that $f(v_{1}+v_{2}+v_{3}) =3f(v_{1}) = 3(v_{1}+v_{2}+v_{3})$
Secondly, observing the matrix we notice that $\ker(f)$ has dimension at least two: the columns are linearly dependent, and for instance $v_{1}-v_{2}$ and $v_{1}-v_{3}$ are eigenvectors of eigenvalue $0$.
We've just computed the characteristic polynomial without computational effort: since $\dim(V) = \dim(\mathbb{R}^{3}) = 3$
and we have just found $3$ independent eigenvectors,
the characteristic polynomial must be $p_{A}(t) = t^{2}(t-3)$ |
prove that space $V$ with norm $\|\varphi\|$ is normed linear space? | Define a sequilinear map $\langle\cdot,\cdot\rangle:C^1(I)\times C^1(I)\to\mathbb C$ by
$$\langle\varphi,\psi\rangle=\int_a^b\varphi(t)\overline\psi(t)+\varphi'(t)\overline\psi'(t)\ dt$$
Then by showing that $\langle\cdot,\cdot\rangle$ is an inner product on $C^1(I)$, and that $\|\varphi\|=\langle\varphi,\varphi\rangle^{1/2}$, you will have the result: a norm induced by an inner product automatically satisfies the triangle inequality, via the Cauchy-Schwarz inequality. |
Non-uniqueness of MLE of multivariate Laplace distribution? | The minimizer of the sum of distances to such a set of points is given by the Geometric Median.
For example, for $d = 1$ you get the median, which is not unique for a set containing an even number of distinct points.
For higher dimensions you need to take care of the case where the points are collinear, which basically means that the problem is, again, equivalent to the 1D one. |
What is the expected number of empty boxes? | For bin $k$, the probability no ball lands in this bin is $(1 - 1/n)^n$ by independence, since the probability the $i^{\text{th}}$ ball doesn't land there is $1 - 1/n$. Let $A_k$ denote the event that no ball lands in bin $k$. So we've computed that $\mathbb{P}(A_k) = (1-1/n)^n$.
The number of bins with no ball is $$\sum_{k=1}^n \mathbb{1}(A_k).$$
Thus the expected number of bins with no ball is
$$\mathbb{E}\left[\sum_{k=1}^n \mathbb{1}(A_k)\right] = \sum_{k=1}^n \mathbb{E}\left[\mathbb{1}(A_k)\right] = \sum_{k=1}^n \mathbb{P}(A_k) = n(1-1/n)^n.$$ |
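A quick simulation for, say, $n=20$ (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 20, 100_000
avg_empty = np.mean([n - len(set(rng.integers(n, size=n))) for _ in range(trials)])
print(avg_empty, n * (1 - 1 / n)**n)  # both ~7.17
```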
Information Inequality theorem | Note that $p(x)$ and $q(x)$ are probability mass functions. Therefore, $\sum_{x \in \mathcal{X}} p(x)=\sum_{x \in \mathcal{X}} q(x)=1$. $D(p||q)=0$ if the equalities hold in both (2.85) and (2.87). According to (2.85), $q(x)=c p(x)$ for all $x \in A$, whereas (2.87) implies $\sum_{x\in{\mathcal{X}}}q(x)=\sum_{x\in A}q(x)=1$. Therefore, $$1= \sum_{x \in A} q(x) = c\sum_{x \in A} p(x)=c \iff c=1.$$ |
Similarity of an invertible matrix to a diagonal matrix | You must distinguish between the similarity relation and equivalence!
Two matrices $A$, $B$ are said to be equivalent if there exist invertible matrices $M$ and $N$ such that $B = M A N$. Over a field, a matrix $A$ is equivalent to the matrix $\mathrm{diag} \ \{1, \dots, 1, 0, \dots, 0 \}$, where the number of $1$s is the rank of $A$. Over a PID, the weaker statement that you cited holds (Smith canonical form).
Two square matrices $A$, $B$ are said to be similar if there exists an invertible matrix $M$ such that $B = M A M^{-1}$.
What is "induction on complexity of formula" | The construction of terms usually proceeds through the following recursive definition:
Any variable $x$ is a term;
Any constant $c$ is a term;
For an $n$-ary function symbol $f$ and terms $t_1,\ldots, t_n$, $f(t_1,\ldots,t_n)$ is a term.
(Some texts define constants as nullary function symbols, but even then this special case may require separate attention -- i.e. it is not trivial that the induction step for function symbols of positive arity carries over to nullary function symbols.)
So in order to prove a statement $P(t)$ about terms $t$ by induction on the complexity, one needs to verify that each of these constructions is suitably well-behaved with regard to the formation rules.
That is, we need to verify that $P(x), P(c)$ for $x$ a variable, $c$ a constant, and that $P(t_1),\ldots, P(t_n)$ imply $P(f(t_1,\ldots,t_n))$.
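To make this concrete, here is a minimal Python sketch (the class and function names are invented for illustration) of the recursive definition of terms, together with a structurally recursive function whose correctness is proved by exactly this kind of induction:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Var:            # any variable x is a term
    name: str

@dataclass
class Const:          # any constant c is a term
    name: str

@dataclass
class App:            # f(t_1, ..., t_n) is a term
    fn: str
    args: List["Term"]

Term = Union[Var, Const, App]

def depth(t: Term) -> int:
    # base cases (Var, Const) and the inductive step (App) mirror
    # the three clauses of the recursive definition above
    if isinstance(t, (Var, Const)):
        return 0
    return 1 + max(depth(a) for a in t.args)

print(depth(App("f", [Var("x"), App("g", [Const("c")])])))  # 2
```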
Now why would we want to have such statements $P$? Well, a useful example of a statement $P$ about terms could be verifying your initial statement for the case $\varphi = (t_1 = t_2)$!
In conclusion, proving something about terms is entirely different from proving something for formulae. Therefore, they call for different approaches, and as such require a different basis for their "structural induction". |
Show that if $fd'=f'd $ and the pairs $f, d $ and $f',d' $ are coprime, then $f=f' $ and $d=d' $. | Hint
Use Euclid's lemma to show that $f$ divides $f'$ and $f'$ divides $f$, and conclude that $f=f'$ and then $d=d'$.
Real numbers to real powers | Right, your argument for $\gamma \leqslant \alpha \beta$ has the problem that you cannot guarantee that $t - x \in \mathbb{Q}$.
What you would need to make it work without a real modification are $r,s \in \mathbb{Q}$ with $r \leqslant x$, $s\leqslant y$ and $t \leqslant r+s$. Then you'd have
$$a^t \leqslant a^{r+s} = a^r\cdot a^s \leqslant \alpha \beta$$
and be done. But you cannot find such $r,s$ if $x,y \in \mathbb{R}\setminus \mathbb{Q}$ and $t = x+y \in \mathbb{Q}$.
This argument would work if the definition used a strict inequality,
$$a^x = \sup \:\{ a^r : r\in \mathbb{Q}, r < x\}\,,\tag{$\ast$}$$
(the restriction to nonnegative exponents is unnecessary) because for every rational $t < x+y$ we can find rational $r < x$ and $s < y$ with $t < r+s$.
Since the definition uses a nonstrict inequality, we need some additional work. Probably the easiest way (one of several equally easy ways) is to show that
$$\sup\: \{ a^r : r\in \mathbb{Q}, r < x\} = \sup\: \{ a^r : r \in \mathbb{Q}, r \leqslant x\}\,,$$
i.e. that $(\ast)$ is equivalent to the given definition.
The direction $\alpha\beta \leqslant \gamma$ doesn't suffer from that problem. If $r,s$ are rational with $r \leqslant x$ and $s\leqslant y$, then $r+s \leqslant x+y$ and hence
$$a^r\cdot a^s = a^{r+s} \leqslant a^{x+y} = \gamma\,.\tag{1}$$
Fixing an arbitrary rational $r \leqslant x$ and taking the supremum of $(1)$ over all rational $s \leqslant y$, we obtain
$$a^r\cdot \beta \leqslant \gamma\,.\tag{2}$$
Now taking the supremum over all rational $r \leqslant x$ in $(2)$ yields
$$\alpha\cdot\beta \leqslant \gamma\,.$$ |
Question about a differentiable function at point $a$. | Since $f$ is differentiable at $a$, we have
$$f(x_n)=f(a)+(x_n-a)f'(a)+(x_n-a)\epsilon_1(x_n)$$
where $\epsilon_1(x_n)\xrightarrow{n\to\infty}0$ and similarly we have
$$f(y_n)=f(a)+(y_n-a)f'(a)+(y_n-a)\epsilon_2(y_n)$$
where $\epsilon_2(y_n)\xrightarrow{n\to\infty}0$. Now subtracting the two equalities and we get
$$f(x_n)-f(y_n)=(x_n-y_n)f'(a)+\underbrace{(x_n-a)\epsilon_1(x_n)-(y_n-a)\epsilon_2(y_n)}_{=R_n}$$
$$R_n=(x_n-y_n)\epsilon_1(x_n)+(y_n-a)(\epsilon_1(x_n)-\epsilon_2(y_n))$$
and notice that
$$0\le a-y_n=\underbrace{(a-x_n)}_{\le0}+(x_n-y_n)\le x_n-y_n$$
Can you take it from here? |
Equivalence for rings with localization property | In your argument, is every maximal ideal of the form you gave? It seems that this is not correct: for example, take two coprime elements in the ring of integers, so that $\mathbb Z=(4,9)$; but $(4)$ and $(9)$ are not prime.
1. If $R=(f_1,\dots,f_m)$: for every $m\in M$ there exists $N$ large enough such that $f_i^Nm=0$ for all $i$. Remark that $R=(f_1^N,\dots,f_m^N)$ since $R=(f_1,\dots,f_m)$, so $1=\sum r_if_i^N$. Hence $m=0$.
2. If $(f_1,\dots,f_m)$ is not equal to $R$, then consider $M=R/(f_1,\dots,f_m)$. We have $M[f_i^{-1}]=0$ for every $i$, yet $M\neq 0$, a contradiction.
Distance of point on ellipse from foci | After the parameterization, the expression becomes $$\sqrt{a^2 \cos^2 \theta +(a^2-a^2 e^2) \sin^2 \theta +(ae)^2 \pm 2ea^2 \cos \theta} \\ = \sqrt{a^2 +a^2e^2 \cos^2 \theta \pm 2ea^2 \cos \theta } \\ = a\sqrt{1 +(e\cos \theta)^2 \pm 2e\cos\theta} \\ = a(1\pm e\cos\theta) \\ = a \pm e\alpha,$$ where $\alpha = a\cos\theta$ is the abscissa of the point.
A question about the common zeroes of a homogeneous polynomial and its partial derivatives | For an ideal $J \subseteq K[X,Y,Z]$, let $Z(J)$ denote the zero-set of $J$, i.e. $Z(J):=\{ x \in \overline{K}^3: f(x)=0\ \forall f \in J\}$. Then for ideals $J_1, J_2$ such that $J_1 \subseteq J_2$, we clearly have $Z(J_2)\subseteq Z(J_1)$.
In our case, note that $J:=\langle X^e, Y^e, Z^e\rangle$ satisfies $Z(J)=\{(0,0,0)\}$ and $J\subseteq I$, so we get $Z(I) \subseteq \{(0,0,0)\}$.
As KReiser pointed out, both possibilities $Z(I) = \emptyset$ and $Z(I)=\{(0,0,0)\}$ do occur. |
Stuck on homogeneous linear equation $y' ={ {x^2+xy+y^2} \over x^2}$ | Something went wrong after the substitution. I get
$$\frac{dv}{1+v^2}=\frac{dx}{x}.$$
On the left we get an arctan.
Remark: In this case, we can get $v$, and therefore $y$, explicitly in terms of $x$. However, in general, when we separate variables to solve the differential equation $\frac{dy}{dx}=f(x)g(y)$, then, even when the integrations are doable, we may not be able to then solve for $y$ explicitly in terms of $x$. |
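For what it's worth, here is a SymPy sketch checking the solution obtained from this separation, namely $\arctan(y/x)=\ln x + C$ (restricting to $x>0$ for simplicity):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C = sp.symbols('C')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), (x**2 + x*y(x) + y(x)**2) / x**2)
sol = sp.Eq(y(x), x*sp.tan(sp.log(x) + C))   # from arctan(v) = ln x + C, v = y/x

print(sp.checkodesol(ode, sol))  # (True, 0): the candidate satisfies the ODE
```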
Matrix of finite order in the kernel of group morphism induced by the reduction morphism $M_n(\mathbb{Z}) \longrightarrow M_n(\mathbb{Z}/2\mathbb{Z})$ | It is enough to show that if $M^4=I_n$ and $M=I_n+2N$, then $M^2=I_n$.
Write $M=I_n+2N$. If $M^4=I_n$, then substituting $M=I_n+2N$ and dividing by $8$ yields $N+3N^2+4N^3+2N^4=0$, i.e. $N+N^2+2N^2(N^2+2N+I_n)=0$.
Write $D=N(N+I_n)$, then $M^2-I_n=4D$ and $D+2D^2=0$.
Let $t \geq 0$ be maximal such that $2^t \mid D$ (such a $t$ exists if $D \neq 0$). Then $2^{2t+1} \mid -2D^2=D$, and $2t+1 > t$, a contradiction.
Thus $D=0$, so $M^2=I_n$.
Calculating 2 rightmost decimal digits of large number (modular exponentiation) | Start with a really really easy question: what are the two rightmost digits of $1234567$?
Obviously, $67$. Next question: what does this have to do with modular arithmetic? Answer: it's really just another way of saying that
$$1234567\equiv67\pmod{100}\ .$$
So, you need to simplify $3^{2005}$ modulo $100$. BTW, this question is "obviously" about 9 years old ;-)
Using Euler's function as you suggest is a good start - can you find an exponent $m$ such that $3^m$ is very simple modulo $100$? Then can you find a higher value of $m$ with the same property? And another? And one which is very close to $2005$? |
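If you just want to confirm the value, Python's built-in modular exponentiation gives it directly:

```python
# the two rightmost digits of 3**2005 are 3**2005 mod 100
print(pow(3, 2005, 100))  # 43

# consistent with the hand computation: 3**20 = 1 (mod 100),
# and 2005 = 20*100 + 5, so 3**2005 = 3**5 = 243 = 43 (mod 100)
```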
Convergence almost every where ( times) convergence weak star. | Without assuming that the sequence $u_n$ is bounded in $L^\infty(0,1,L^\infty(\mathbb{R}))$, then the result is false.
Take $v_n(x)=n\chi_{(0,1)}$ if $0<x<\frac1n$ and $v_n(x)=0$ otherwise. Then $v_n\to 0$ a.e. Take $u_n(x)=\chi_{(0,1)}$ for every $x$ and $n$.
Then for $\phi(x)=\chi_{(0,1)}$ for every $x$,
$$\int_0^1\int_{\mathbb{R}}u_nv_n\phi\,dt\,dx=n\int_0^{1/n}\int_{0}^1 1\,dt\,dx=1\not\to 0.$$
If $u_n$ is bounded in $L^\infty(0,1,L^\infty(\mathbb{R}))$, then you should be able to apply the Lebesgue dominated convergence theorem.
Assume first that there exist $L, M>0$ such that for every $x$, $\phi(x)$ is a function in $L^2(\mathbb{R})$ with compact support in $[-M,M]$ and $\Vert\phi(x)\Vert_\infty\le L$.
Then by Holder's inequality
$$\int_0^1\int_{\mathbb{R}}|u_n(v_n-v)\phi|\,dt\,dx=\int_0^1\int_{-M}^M|u_n(v_n-v)\phi|\,dt\,dx\\\le \int_0^1\left(\int_{-M}^M|(v_n-v)\phi|^2dt\right)^{1/2}\left(\int_{-M}^M|u_n|^2dt\right)^{1/2}dx\le C \int_0^1\left(\int_{-M}^M|(v_n-v)\phi|^2dt\right)^{1/2}dx,$$
since the sequence $u_n$ is bounded in $L^\infty(0,1,L^2(\mathbb{R}))$.
Now you can just apply the Lebesgue dominated convergence theorem on the right-hand side.
The general case of $\phi$ follows by density. |
Generalization of subsubsequence argument for stochastic convergence | Let $(X_n)_n $ be an independent sequence of RV such that each $X_n $ assumes the values $\pm 1$ with equal probability.
Note that $E=\{x \mid (X_n (x))_n \text { converges }\} $ is a tail event, hence has probability $0$ or $1$. Note that the potential limit $X $ can only assume the values $\pm 1$ and that we have
$$
\frac {1}{n}\sum_{i=1}^n X_i \to X
$$
on $E$, so that the law of large numbers implies $X=0$ almost everywhere on $E$. Since $X$ can only assume the values $\pm 1$, $E$ must have probability zero.
Now, since $X_n $ only assumes the values $\pm 1$, it is easy to see $\limsup_n X_n =1$ and $\liminf_n X_n=-1$ on $E^c $ and hence almost surely.
Since any subsequence $(X_{n_k})_k$ has the same distribution as $(X_n)_n$, the same argument shows $\liminf_k X_{n_k}=-1$ almost surely, although $X_n$ does not converge in probability.
Is $GL(n;\mathbb{C})$ algebraic or not? | The general linear group is definitely an affine algebraic variety. It is a closed subvariety of $\mathbb{A}^{n^2+1}$, but also an open subvariety of $\mathbb{A}^{n^2}$. This is not a contradiction. I suggest that you look at the case $n=1$ more closely and learn the abstract coordinate-free definition of varieties. |
Find the volume's integral $\iiint\limits_E\ (1+x+y) dV$ with the inequalities $x^2+y^2+z^2 \leq4\ $ and $z \geq0\ $ | *"I don't know if something went wrong doing the integral or changing the variables to polar coordinates."*
The change of variables is correct (typo notwithstanding).
$$\int_0^2\!\int_{0}^\frac{\pi}{2}\!\int_0^{2\pi}\!\ (\rho^2\sin\phi)(1+\rho\sin\phi\sin\theta\ + \rho\sin\phi\cos\theta)\ d\theta\ d\phi\ d\color{red}\rho $$
The first evaluation is correct.
$$\int_0^2\!\int_{0}^\frac{\pi}{2} (\rho^2\sin\phi)\ (2\pi)\ d\phi\ d\rho $$
Which equals $$2\pi\cdot\int_0^{\pi/2}\sin\phi~d \phi\cdot\int_0^2 \rho^2~d \rho~$$
Then...
$$2\pi\cdot\left[-\cos\phi\right]_0^{\pi/2}\cdot\left[\tfrac 13 \rho^3\right]_0^2 = 2\pi\cdot 1\cdot\tfrac 83 = \tfrac{16\pi}{3}$$
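A SymPy check of the whole computation:

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', nonnegative=True)

integrand = rho**2*sp.sin(phi)*(1 + rho*sp.sin(phi)*sp.sin(theta)
                                  + rho*sp.sin(phi)*sp.cos(theta))
result = sp.integrate(integrand, (theta, 0, 2*sp.pi),
                                 (phi, 0, sp.pi/2),
                                 (rho, 0, 2))
print(result)  # 16*pi/3
```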
Prove by induction that every integer is either a prime or product of primes | You will need strong induction, so you assume it holds for all $n < N$.
Then you prove it for $n=N$.
For $n=2$ it clearly holds.
Now assume it holds for all $n < N$.
If $N$ is a prime, then the statement is true.
Else, you can write $N = a\cdot b$ with $a,b < N$, and by the strong induction hypothesis you can write $a,b$ as a prime or a product of primes, so $N$ is a product of primes. |
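The proof translates directly into a recursive factorization procedure; here is a small Python sketch (the recursion on $a$ and $b$ is exactly the strong induction step):

```python
def prime_factorization(n: int) -> list:
    # strong induction: either n is prime (base case), or n = a*b
    # with a, b < n and we recurse on both factors
    if n < 2:
        raise ValueError("need n >= 2")
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return prime_factorization(d) + prime_factorization(n // d)
    return [n]  # n is prime

print(prime_factorization(360))  # [2, 2, 2, 3, 3, 5]
```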
Distributed general load question related to mechanical engineering | No magic, just grind:
$F= \int_{x=0}^2 \int_{y=0}^3 p(x,y) dx dy = 3 \int_0^2 10({6 \over x+1}+8)dx= 60 (3 \ln 3 + 8)$.
$\bar{x} = {1 \over F} \int_{x=0}^2 \int_{y=0}^3 xp(x,y) dx dy = {1 \over F} 60(14-3 \ln 3)$
$\bar{y} = {1 \over F} \int_{x=0}^2 \int_{y=0}^3 y p(x,y) dx dy = {1 \over F} \int_0^3 y dy \int_0^2 10({6 \over x+1}+8)dx = {3 \over 2}$ |
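A SymPy sketch verifying the three numbers (treating the load $p(x,y)=10(\frac{6}{x+1}+8)$ over $0\le x\le 2$, $0\le y\le 3$ as given):

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)
p = 10*(6/(x + 1) + 8)

F = sp.integrate(p, (x, 0, 2), (y, 0, 3))
xbar = sp.integrate(x*p, (x, 0, 2), (y, 0, 3)) / F
ybar = sp.integrate(y*p, (x, 0, 2), (y, 0, 3)) / F

print(sp.simplify(F))     # 180*log(3) + 480 = 60*(3*log(3) + 8)
print(sp.simplify(xbar))  # (840 - 180*log(3)) / (180*log(3) + 480)
print(sp.simplify(ybar))  # 3/2
```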
Determinant of block matrices with non-square blocks | Hint: for the first one, note that
$$
\begin{bmatrix}
0 & I_m\\
I_n & 0
\end{bmatrix}
\begin{bmatrix}
I_n & B\\
A & I_m
\end{bmatrix}
\begin{bmatrix}
0 & I_n\\
I_m & 0
\end{bmatrix} =
\begin{bmatrix}
I_m & A\\
B & I_n
\end{bmatrix}
$$ |
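A quick numerical check of the resulting determinant identity (random $A$, $B$ for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

M1 = np.block([[np.eye(n), B], [A, np.eye(m)]])
M2 = np.block([[np.eye(m), A], [B, np.eye(n)]])

# conjugation by a permutation matrix leaves the determinant unchanged
print(np.isclose(np.linalg.det(M1), np.linalg.det(M2)))  # True
```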
Homogeneous systems Constant Coefficients Initial Value Problem with Eigenvalue of zero | The characteristic polynomial is:
$$\lambda^2 -15 \lambda = \lambda(\lambda -15) $$
The eigenvalues are $\lambda_1 = 15, \lambda_2 = 0$.
The corresponding eigenvectors are $v_1 = (4,1), v_2 = (1, 1)$.
This gives,
$$x(t) = c_1 e^{15 t}\begin{bmatrix}
4 \\
1 \\
\end{bmatrix} + c_2\begin{bmatrix}
1 \\
1 \\
\end{bmatrix}$$
Note, for the zero eigenvalue, we have $c_2 e^{0 t} = c_2$.
Can you now use the initial condition, $x(0)$, to solve for $c_1$ and $c_2$ and finish it off?
Update
You should get $c_1 = -5, c_2 = -4$ and you could express the final solution as:
$$x(t) = \begin{bmatrix}
x_1(t) \\
x_2(t) \\
\end{bmatrix} = \begin{bmatrix}
4c_1 e^{15 t} + c_2 \\
c_1 e^{15 t} + c_2
\end{bmatrix}$$
Note: as an alternate approach, you could also have written the solution in the form given above and then substituted the values of $c_1$ and $c_2$.
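For a sanity check, here is a NumPy sketch; since the excerpt doesn't show the coefficient matrix, it is reconstructed from the stated eigendata (an assumption for illustration):

```python
import numpy as np

# hypothetical reconstruction: eigenvalues 15, 0 with eigenvectors (4,1), (1,1)
V = np.array([[4.0, 1.0],
              [1.0, 1.0]])
D = np.diag([15.0, 0.0])
A = V @ D @ np.linalg.inv(V)

vals = np.linalg.eig(A)[0]
print(np.sort(vals.real))  # [ 0. 15.]
```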
Combinatorial proof of Catalan number/central binomial convolution: $2n C_n=\sum_{k=1}^n\binom{2k}kC_{n-k}$ | Here is a combinatorial proof that
$$
\binom{2n}{n} - C_n = \sum_{k=1}^n \frac12\binom{2k}{k} C_{n-k}.
$$
Since $C_n = \frac{1}{n+1}\binom{2n}{n}$, the left-hand side simplifies to $n C_n$, and after multiplying by $2$, this gives us the equation you want.
Both sides of the equation above are going to count walks of length $2n$ that start and end at $0$, but dip below the $x$-axis at some point. Since $\binom{2n}{n}$ counts the total number of walks that start and end at $0$, and $C_n$ counts the number that don't dip below the $x$-axis, the number we are counting is $\binom{2n}{n} - C_n$.
Now split these walks up into $n$ classes based on the last step at which the walk goes from $-1$ to $0$. Such a step must exist, because once we dip below the $x$-axis, we have to come back up to $0$ at some point.
The number of walks in which this step is the $(2k)^{\text{th}}$ step is exactly $\frac12 \binom{2k}{k} C_{n-k}$:
Out of the $\binom{2k}{k}$ walks that return to $0$ on the $(2k)^{\text{th}}$ step, exactly half go from $-1$ to $0$ on that step. (The other half go from $1$ to $0$.)
In the remainder of the walk, we can never dip below $0$, since that step was the last time we came back from below the $x$-axis, so there are $C_{n-k}$ ways to complete the walk.
Summing over all values of $k$, we get the right-hand side of the equation above. |
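A brute-force check of the identity for small $n$ (using exact integer arithmetic):

```python
from math import comb

def catalan(n: int) -> int:
    return comb(2*n, n) // (n + 1)

for n in range(1, 10):
    lhs = 2*n*catalan(n)
    rhs = sum(comb(2*k, k)*catalan(n - k) for k in range(1, n + 1))
    assert lhs == rhs
print("2n*C_n = sum_k C(2k,k)*C_{n-k} verified for n = 1..9")
```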
To show each $f_i$ is bounded | Let $\sum_{i=1}^{n}\lvert f_i\rvert^2=c$. If $c=0$, then we are done, so assume $c > 0$.
If $n=1$, then writing $f_1=u+iv$ with $u,\ v$ real, we note that $u^2+v^2=c$. Taking derivatives w.r.t. $x$ and $y$ we get:
$$u\frac{\partial u}{\partial x} + v\frac{\partial v}{\partial x} =
u\frac{\partial u}{\partial y} + v\frac{\partial v}{\partial y} = 0 \tag{1}$$
Using the Cauchy-Riemann equations
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},\
\frac{\partial v}{\partial x} = - \frac{\partial u}{\partial y} \tag{2}$$
we get the following:
$$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial y} = \frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} = 0 \tag{3}$$
Since $\Omega$ is connected, this implies that $u$ and $v$ are constant, i.e. $f_1$ is constant.
For $n > 1$, choose any $w\in\Omega$ and define the holomorphic function:
$$ f(z)=\sum_{i=1}^nf_i(z)\overline{f_i(w)} \tag{4} $$
Then by Cauchy-Schwarz,
$$ \lvert f(z)\rvert^2\leq\left(\sum_{i=1}^n\lvert f_i(z)\rvert^2\right) \left(\sum_{i=1}^n\lvert f_i(w)\rvert^2\right)\leq c^2 \tag{5}$$
so that $\lvert f(z)\rvert\leq c$. But $\lvert f(w)\rvert=c$ and $w$ is an interior point of $\Omega$, so by the maximum modulus principle
$$\lvert f(z)\rvert=c\ \forall\ z\in\Omega \tag{6}$$
Then $f$ is a holomorphic function on connected $\Omega$ with $\lvert f\rvert^2=$ constant, so using the $n=1$ case, we conclude $f=$ constant. Thus $f(z)=f(w)=c$.
Equation $(6)$ also implies we actually have equalities in $(5)$, and using the condition of equality in Cauchy-Schwarz, we get for all $z\in\Omega$:
$$ (f_1(z),\ldots,f_n(z))=\alpha(z)(f_1(w),\ldots,f_n(w)) \tag{7}$$
Use this to evaluate $f(z)$
$$ f(z) = \sum_{i=1}^{n}f_i(z)\overline{f_i(w)} = \alpha(z)f(w) \tag{8}$$
Using $f(z)=f(w)=c\neq 0$, we conclude $\alpha(z)=1$ for all $z\in\Omega$, so that $(7)$ implies $f_i(z)=f_i(w)$ for all $z\in\Omega$ and $1\leq i\leq n$, i.e. $f_i$'s are constant.
NOTE: This is not my solution, at least not completely. The question appeared in my Complex Analysis midterm, and professor gave a hint towards the solution, I just filled in the blanks. |
If $f$ is an analytic function such that $f^2$ lies on the disc centered at $1$ of radius $1$ then either $Re(f(z))>0$ or $Re(f(z))<0$ | The condition guarantees that $f(z) \notin i\mathbb{R}$ for $z \in U$. So the two open sets
$$A = \{z \in \mathbb{C} : \operatorname{Re}(z) > 0\}
\quad\text{and}\quad
B = \{z \in \mathbb{C} : \operatorname{Re}(z) < 0\}$$
are disjoint and disconnect the image $f(U)$. But since $U$ is connected, so is $f(U)$. So we have either $f(U) \subset A$ or $f(U) \subset B$. |
Why the writer of this article divided the mean square error formula by 2 instead of MxN? | "where $M$ is the total number of examples". Except for "images" in the title, there is no reference to pixels or images in this document. His $L_2$ loss definition is for $M$ samples of $N$-dimensional vectors. The $\frac{1}{2}$ is a common convention: it cancels the factor of $2$ that appears when differentiating the square, and if you use this scale factor in all your losses it has no net effect -- half of bigger is still bigger than half of smaller. If your model is pixels, these could be $N=1$ (intensity) or $N = 3$ (RGB), or other choices, and $M$ is the number of pixels.
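A minimal sketch of that convention (the function names are illustrative, not from the article): the $\frac12$ cancels the factor of $2$ produced by differentiating the square, leaving a clean gradient.

```python
import numpy as np

def half_l2_loss(pred, target):
    # (1/2) * sum of squared errors over all M examples
    return 0.5*np.sum((pred - target)**2)

def half_l2_grad(pred, target):
    # d/dpred of the above: the 1/2 cancels the 2 from the square
    return pred - target

p, y = np.array([1.0, 2.0]), np.array([0.0, 0.0])
print(half_l2_loss(p, y), half_l2_grad(p, y))  # 2.5 [1. 2.]
```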
Most general solution to the equation $f(x) = f(1/x)$ | Let $A$ be any set, let $g\colon(0,1]\to A$ be any function and define $f\colon (0,\infty)\to A$ by
$$f(x)=\begin{cases}g(x)&\text{if }x\le 1\\g(\frac1x)&\text{if }x>1\end{cases} $$ |
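A tiny Python sketch of this construction (the names are illustrative):

```python
import math

def make_f(g):
    # given any g on (0, 1], this f satisfies f(x) == f(1/x) on (0, inf)
    return lambda x: g(x) if x <= 1 else g(1.0 / x)

f = make_f(math.sin)        # g is completely arbitrary on (0, 1]
print(f(0.25), f(4.0))      # equal: both are g(0.25)
```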
Is the symbol ! used in mathematical writing to express something absurd? | To refer to a contradiction, the symbol $$\Rightarrow\Leftarrow$$ is sometimes used; usually, however, the word contradiction itself is written out. But there is no symbol in mathematics for something absurd.
As you have rightly noted, ! is used for the factorial, for negation, and in set theory. There you have it. If you want some more uses you might want to check out Wikipedia.
Intuition for $N(\mu, \sigma^2)$ in terms of its infinite expansion | Take an experiment that has two outcomes, success (S) and failure (F), of probability $p$ and $q=1-p$ respectively. The probability of S after one trial is $p$. The probability of two S after two trials is $p^2$, for one S and one F it is $2pq$, and for two F it is $q^2$. In general, for $N$ trials, the probability of having $k$ S is given by the Binomial distribution $P_S(k,N)= \frac{N!}{k!(N-k)!}p^k q^{N-k}$. These are just the coefficients in front of the terms in $(p+q)^N$ after multiplying it out.
What happens if we take the limit of large $N$? The Binomial coefficients at large $N$ approximate a Gaussian, which can be seen visually by looking at a deep row of Pascal's Triangle. We can derive it from the above formula. To keep this simple let's work with the symmetric case $p=q=1/2$, so that we have
$$
P_S(k,N)= \frac{N!}{k!(N-k)!}\frac{1}{2^N}
$$
An easy way to get the desired result is to exchange the index $k$ for one that starts from the center of the triangle, where values are largest: $ x\in (-N/2,N/2),\; k=x+N/2$. Then we swap this in and apply Stirling's approximation $n!\approx\sqrt{2\pi n}\,n^ne^{-n}$:
$$
P_S(x,N)= \frac{N!}{(N/2+x)!(N/2-x)!}\frac{1}{2^N} \approx \frac{2}{\sqrt{2\pi N}}\frac{1}{(1-\frac{4 x^2}{N^2})^{(N+1)/2}}\left(\frac{1-\frac{2x}{N}}{1+\frac{2x}{N}}\right)^x
$$
Finally, rewriting the product as the exponential of its logarithm, we get
$$
P_S(x,N) \approx \frac{2}{\sqrt{2\pi N }} e^{-\frac{1}{2}(N+1)\log(1-\frac{4 x^2}{N^2}) +x(\log(1-\frac{2x}{N})-\log(1+\frac{2x}{N}))
}
$$
Using the expansion $\log(1+x)=x-x^2/2+ \cdots$ and keeping everything to order $1/N$:
$$
P_S(x,N) \approx \frac{2}{\sqrt{2\pi N}} e^{-\frac{2 x^2}{N}
}
$$
Now you can tack on a $dx$ and use a change of variables to scale $x \rightarrow \sqrt N x/2$ -- then $x$ is an "implicit" variable.
$$
P_S(x,N)dx \approx \frac{1}{\sqrt{2\pi}} e^{-\frac{ x^2}{2}} dx
$$
For general $p,q$, the derivation is similar, see for example
http://scipp.ucsc.edu/~haber/ph116C/NormalApprox.pdf
This isn't exactly an infinite expansion like in your example, but there's a similar vein of thought in the conclusion that $N$ choose $k$ limits to a Gaussian type shape for large $N$. |
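A numerical comparison of the two shapes (the value $N=100$ is an arbitrary illustration):

```python
import numpy as np
from math import comb

N = 100
ks = np.arange(N + 1)
binom = np.array([comb(N, k) / 2.0**N for k in range(N + 1)])

x = ks - N/2
gauss = 2/np.sqrt(2*np.pi*N) * np.exp(-2*x**2/N)

print(np.max(np.abs(binom - gauss)))  # small, and it shrinks as N grows
```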
Showing that $(e^{iA})^{\dagger} = e^{-i A^{\dagger}} $ for a square matrix $A$ | If $A_n \to A$ then all the elements of $A_n$ tend to the corresponding elements of $A$ and hence the conjugate transpose of $A_n$ tends to the conjugate transpose of $A$. Apply this to the partial sums of the series. |
How to solve this quasilinear parabolic evolution equation (result of curve shortening flow)? | Gage and Hamilton prove this fact in the paper "The Heat Equation Shrinking Convex Plane Curves"; see page 80 of this paper. I confess that I didn't understand the argument either.
I can't access the link you furnished.
Prove that $D$ is dense in $M$ | The Baire category theorem tells us that in a complete metric space a countable union of nowhere dense sets is co-dense (i.e. has dense complement).
A space $X$ is called Baire if $\bigcap_n G_n$ is dense in $X$ for every sequence of open and dense subspaces $G_n$ of $X$. Complete metric spaces are Baire spaces (that is one of the formulations of the Baire category theorem), and the previous statement follows by complementation and de Morgan, and the realisation that $O$ is open and dense iff $X\setminus O$ is closed and nowhere dense.
$X\setminus \partial E_n$ is open and dense for all $n$. So Baire says that $\bigcap_n (X\setminus \partial E_n) = X\setminus \bigcup_n \partial E_n$ is dense; since a point that avoids every boundary and lies in some $E_n$ must lie in $\operatorname{int}(E_n)$, this dense set is contained in your set $D = \bigcup_n \operatorname{int}(E_n)$, so $D$ is dense as well.
Find The Number of Squares | I will need some drawing help. For an $n$ that is small but not too small, put dots at the $(n+1)\times (n+1)$ gridpoints with coordinates $(x,y)$, where $0\le x,y\le n$. Something like $n=5$ is good enough.
Now draw the diagonals that go in the Northwest to Southeast direction. The diagonal closest to the origin has $2$ gridpoints on it, the next diagonal has $3$ gridpoints, the next has $4$, and so on. This continues until we hit the main diagonal, which has $n+1$ gridpoints. As we go further up, the number of gridpoints decreases, to $n$, then $n-1$, and so on until our last diagonal, which has $2$ gridpoints.
Take any of these diagonals, and pick two gridpoints on it. Then there is a unique square which has these two points as its Northwest and Southeast corners. As we consider all of our diagonals, and all the ways to choose $2$ points, we produce in this way all possible squares, in exactly one way. It follows that the total number of squares is
$$\binom{2}{2}+\binom{3}{2}+\cdots +\binom{n}{2}+\binom{n+1}{2}+\binom{n}{2}+\cdots + \binom{3}{2}+\binom{2}{2}.$$
Nice, but not really simple. We now proceed to simplify. There is the middle term $\dbinom{n+1}{2}$ plus twice the quantity
$$\binom{2}{2}+\binom{3}{2}+\cdots +\binom{n}{2}.\tag{$\ast$}$$
We claim that the quantity $(\ast)$ is equal to $\dbinom{n+1}{3}$.
For $\dbinom{n+1}{3}$ is the number of ways of picking $3$ numbers from the $n+1$ numbers $0, 1, 2,\dots,n$. When we pick $3$ numbers, the smallest of the numbers picked could be $n-2$. Then the other two numbers can be picked in $\binom{2}{2}$ ways (well, $1$ way). Or else the smallest number picked could be $n-3$. Then the other two can be picked in $\binom{3}{2}$ ways. Or else the smallest is $n-4$, in which case the other two can be picked in $\binom{4}{2}$ ways. Continue on down. Finally, the smallest of the $3$ numbers picked could be $0$, in which case the other two can be picked in $\binom{n}{2}$ ways. We have proved that
$$\binom{2}{2}+\binom{3}{2}+\cdots +\binom{n}{2}=\binom{n+1}{3}.\tag{$\ast\ast$}$$
Now put things together. The total number of squares is therefore
$$\binom{n+1}{2}+2\binom{n+1}{3}.$$
Remark: Now for the interesting part! In the posted solutions, it was shown that the number of squares is equal to
$$1^2+2^2+3^2+\cdots +n^2.$$
We conclude that
$$1^2+2^2+3^2+\cdots +n^2=\binom{n+1}{2}+2\binom{n+1}{3}.$$
The expression on the right looks pretty nice as is. But if we really want to, we can expand it as
$$\frac{(n+1)(n)}{2}+2\frac{(n+1)(n)(n-1)}{3!}.$$
A little simplification (bring to a common denominator, factor out the common factors $n$ and $n+1$) leads us to the more familiar expression
$$\frac{n(n+1)(2n+1)}{6}.$$
So by counting the number of squares in a grid in two different ways, we can obtain a closed form formula for the sum of the first $n$ perfect squares. |
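A brute-force check of both identities for small $n$:

```python
from math import comb

def count_squares(n: int) -> int:
    # (n - s + 1)^2 axis-parallel s-by-s squares fit in the n-by-n grid
    return sum((n - s + 1)**2 for s in range(1, n + 1))

for n in range(1, 10):
    assert count_squares(n) == comb(n + 1, 2) + 2*comb(n + 1, 3)
    assert count_squares(n) == sum(k*k for k in range(1, n + 1))
print("both formulas agree with the direct count for n = 1..9")
```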
Evaluate Left And Right Limits Of $f(x)=\frac{x}{\sqrt{1-\cos2x}}$ At $0$ | A start: Use $\cos 2x=1-2\sin^2 x$. One needs to be careful when finding the square root of $2\sin^2 x$: it is $\sqrt{2}\,|\sin x|$, so $f(x)=\frac{x}{\sqrt{2}\,|\sin x|}$ and the two one-sided limits at $0$ differ in sign.
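A SymPy check of the two one-sided limits:

```python
import sympy as sp

x = sp.symbols('x')
f = x / sp.sqrt(1 - sp.cos(2*x))

print(sp.limit(f, x, 0, '+'))  # sqrt(2)/2
print(sp.limit(f, x, 0, '-'))  # -sqrt(2)/2
```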