Sorting 10 pairs of socks into 4 drawers
Take the first pair of socks and decide where to put it, that gives you 4 options. Then proceed with the second pair. Independent of what you did with your first pair, you have 4 options. Then you take the third pair, and again you'll have 4 options since you don't need to care where you had put the first 2 pairs. Same for all the following pairs, which gives you $4^{10}$ possibilities (4 for each of the 10 pairs of socks).
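A brute-force check of this count, in case you want to see the $4^{10}$ possibilities enumerated explicitly (a quick Python sketch, not part of the original argument):

```python
from itertools import product

# Each assignment is a length-10 tuple whose k-th entry is the drawer (0-3)
# chosen for the k-th pair of socks; enumerate and count them all.
count = sum(1 for _ in product(range(4), repeat=10))
print(count)  # 4**10 = 1048576
```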
Vector spaces and intersections
$W$ may not always exist. Here is a counter-example. Following your notations, let $\{e_1,e_2,e_3,e_4\}$ be a basis of $\Bbb R^4$ and let $$V_1={\rm span}\{e_1, e_2\}, \quad V_2={\rm span}\{e_3, e_4\}.$$ Moreover, let $$V_3={\rm span}\{e_1+ e_3, e_2+e_4\},\quad V_4={\rm span}\{e_1-e_4, e_2+e_3\}.$$ It is easy to check that those subspaces satisfy all the requirements. Now suppose $W$ is a two dimensional subspace of $\Bbb R^4$ and $\dim (W\cap V_1 )=\dim (W\cap V_2)=1$, so there are $a,b,c,d\in \Bbb R$, such that $$ W={\rm span}\{ a e_1+be_2, c e_3+de_4\}.$$ Consider $u:=(a,b)$ and $v:=(c,d)$ as non-zero vectors in $\Bbb R^2$. Then direct calculation shows that $\dim (W\cap V_3)=1$ if and only if $u$ and $v$ are parallel, and $\dim (W\cap V_4)=1$ if and only if $u$ and $v$ are perpendicular, i.e. those two conditions cannot be both satisfied simultaneously.
Expected Value of product Complex Normal R.V. and its conjugate, different powers
Okay, on closer sketching, this isn't actually that bad. For the sake of convenience, let's assume instead that $Z\sim \mathcal{N}(0,I)$ when viewed as an $\mathbb{R}^2$-valued random variable. Denote its density $f$. Define $P:\mathbb{R}^2\setminus \{x\in (0,\infty),y=0\}\to (0,\infty)\times (0,2\pi)$ to be the standard polar coordinate transformation. Then, since $\{x\in (0,\infty),y=0\}$ is a $\mathcal{N}(0,I)$-null set, we can apply the Jacobi Coordinate Transformation theorem to get that $(R,\Theta):=P(Z)$ has density $$rf(P^{-1}(r,\theta))=rf(r\cos(\theta),r\sin(\theta))=\frac{r}{2\pi}\exp(-\frac{r^2}{2})=\frac{1}{2\pi}\cdot r\exp(-\frac{r^2}{2}),$$ which is clearly a factorisation of the density, implying that $R$ and $\Theta$ are independent, and $\Theta$ is uniformly distributed on $(0,2\pi)$. Note that $R$ and $\Theta$ clearly have moments of all orders. Accordingly, we get, by applying independence coordinate-wise, that $$ E Z^m \overline{Z^k}=E(R^{m+k} e^{i (m-k)\Theta})=E(R^{m+k})E(e^{i(m-k)\Theta}), $$ and $$ E(e^{i(m-k)\Theta})=\frac{1}{2\pi}\left(\int_0^{2\pi} \cos((m-k)\theta)\textrm{d}\theta+i\int_0^{2\pi}\sin((m-k)\theta)\textrm{d}\theta\right)=0, $$ since $m\neq k$. This yields the desired result.
Fitting object poses in 3D space
Of the two representations you're proposing, the translation vector plus rotation matrix is clearly the most elegant. The translation vector is easy to smooth: all of its degrees of freedom make sense to average. So the problem is in smoothing the rotation matrix. To smooth the rotation matrix, first note that the rotation matrices are the special orthogonal 3x3 matrices $\mathbf{SO}(3)$, so we can write the rotation matrix as the exponential of an $\mathbf{so}(3)$ matrix such as: $$\exp\begin{pmatrix}0&\theta_{12}&-\theta_{31}\\ -\theta_{12}&0&\theta_{23}\\ \theta_{31}&-\theta_{23}&0\end{pmatrix}$$ These antisymmetric matrices have just the degrees of freedom you need. This suggests that you should write the rotation matrices in $\mathbf{so}(3)$ form, and then average the $\theta_{jk}$. The (solvable) problem with doing this is that the mapping from $\mathbf{so}(3)$ to $\mathbf{SO}(3)$ by exponentiation is not 1-1. We can split the information about a rotation into two pieces, "how much to rotate by", and "what direction to rotate around". The "how much to rotate" is given by the length of the $(\theta_{23},\theta_{31},\theta_{12})$ vector: $$\theta = \sqrt{\theta_{23}^2 + \theta_{31}^2 + \theta_{12}^2}.$$ The axis of the rotation is given by the unit vector in the direction of the $(\theta_{23},\theta_{31},\theta_{12})$ vector: $$(\theta_{23},\theta_{31},\theta_{12})/\theta.$$ The above is not defined when $\theta=0$. Written this way, the problem with multiple solutions to the $(\theta_{23},\theta_{31},\theta_{12})$ values becomes clear. For any integer $n$, we can multiply $(\theta_{23},\theta_{31},\theta_{12})$ by $1+2n\pi/\theta$ without changing the rotation. Accordingly, in your smoothing algorithm, choose $n$ in such a way as to minimize changes to $(\theta_{23},\theta_{31},\theta_{12})$. This will keep your rotation vectors compatible for smoothing. 
Of course the method fails when $\theta$ is near zero, but that is the condition of no rotation so does not pose a problem (special case that condition). You should get reasonable rotation smoothing.
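For concreteness, here is a small Python/NumPy sketch of the $\mathbf{so}(3)\leftrightarrow\mathbf{SO}(3)$ maps described above (the function names are mine, and the log map as written is only valid for rotation angles strictly between $0$ and $\pi$):

```python
import numpy as np

def skew(w):
    """so(3) matrix from w = (theta_23, theta_31, theta_12), in the sign
    convention of the matrix displayed above."""
    t23, t31, t12 = w
    return np.array([[0.0,  t12, -t31],
                     [-t12, 0.0,  t23],
                     [t31, -t23,  0.0]])

def exp_so3(w):
    """Rodrigues' formula: exponentiate skew(w) to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)           # no rotation: the special case noted above
    K = skew(w)
    return np.eye(3) + (np.sin(theta)/theta)*K + ((1-np.cos(theta))/theta**2)*(K @ K)

def log_so3(R):
    """Inverse of exp_so3 for rotation angle strictly between 0 and pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0)/2.0, -1.0, 1.0))
    K = (theta/(2.0*np.sin(theta))) * (R - R.T)
    return np.array([K[1, 2], K[2, 0], K[0, 1]])
```

Round-tripping `exp_so3` and `log_so3` recovers the $(\theta_{23},\theta_{31},\theta_{12})$ vector as long as $\theta<\pi$; beyond that you land on the $1+2n\pi/\theta$ ambiguity discussed above.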
Series of Equivalences
The idea is to rewrite the congruences as $$x\equiv-1\bmod2,3,4,5,6,7$$ and it is now clear that the solutions are $-1+\operatorname{lcm}(2,3,4,5,6,7)k$ for $k\in\mathbb Z$. Therefore the smallest positive solution is 419, as the teacher derived.
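The arithmetic is easy to confirm in Python (a trivial sketch):

```python
from math import lcm

L = lcm(2, 3, 4, 5, 6, 7)   # 420
x = L - 1                   # smallest positive x with x ≡ -1 (mod n) for n = 2..7
print(x)  # 419
```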
Every two positive integers are related by a composition of these two functions?
The strong one is true. As an example, let us take $p=2, q=3, x=1$. Then the numbers we can reach are $\lfloor \frac {2^r}{3^s} \rfloor$ with $r,s \in \Bbb N_+$. Given $y$, we need to find $r,s$ such that $\log y \lt \log \frac {2^r}{3^s} \lt \log(y+1)$ or $\log y \lt r \log 2 - s \log 3 \lt \log (y+1)$ As we can approximate $\frac {\log 3}{\log 2}$ arbitrarily closely by a rational, we can do this. The same argument works for general $p,q,x$ as long as the logs are rationally independent.
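A small Python search illustrating the argument for $p=2$, $q=3$, $x=1$: given a target $y$, look for $r,s\ge1$ with $\lfloor 2^r/3^s\rfloor=y$ (integer arithmetic avoids floating-point log issues; the search bound is an assumption of this sketch, not part of the proof):

```python
def reach(y, bound=300):
    """Find (r, s), both >= 1, with y * 3**s <= 2**r < (y + 1) * 3**s,
    i.e. floor(2**r / 3**s) == y.  Returns None if nothing found below the bound."""
    for r in range(1, bound):
        for s in range(1, bound):
            if y * 3**s <= 2**r < (y + 1) * 3**s:
                return r, s
    return None

print(reach(5))  # (4, 1): floor(16/3) = 5
```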
Chinese Remainder Theorem example
We will work these two at a time. Note that $(18,96)=6$ and $4\equiv52\pmod{6}$, so the first two equations are solvable. We need to solve $$ \frac{x-4}{6}\equiv\begin{bmatrix}0\\8\end{bmatrix}\text{mod}\begin{bmatrix}3\\16\end{bmatrix}\tag{1} $$ Using the Extended Euclidean Algorithm as implemented in this answer, we get $$ \begin{array}{r} &&5&3\\\hline 1&0&1&-3\\ 0&1&-5&16\\ 16&3&1&0 \end{array}\tag{2} $$ which says that $$ 16(1)+3(-5)=1\tag{3} $$ which tells us that $$ 16\equiv\begin{bmatrix}1\\0\end{bmatrix}\text{mod}\begin{bmatrix}3\\16\end{bmatrix}\tag{4} $$ and $$ -15\equiv\begin{bmatrix}0\\1\end{bmatrix}\text{mod}\begin{bmatrix}3\\16\end{bmatrix}\tag{5} $$ If we add $0$ times $(4)$ to $8$ times $(5)$ we get $$ -120\equiv\begin{bmatrix}0\\8\end{bmatrix}\text{mod}\begin{bmatrix}3\\16\end{bmatrix}\tag{6} $$ which solves $(1)$. Therefore, $\frac{x-4}{6}\equiv-120\pmod{48}$, so that $$ \begin{align} x &\equiv-716\pmod{288}\\ &\equiv148\pmod{288}\tag{7} \end{align} $$ Next we need to solve $(7)$ and the third equation from the question. The GCD of the moduli is $(288,20)=4$, but $6\not\equiv148\pmod{4}$, so the third equation can not be solved with the first two.
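The bookkeeping above can be cross-checked in Python; `crt_pair` is a naive merge of two congruences (my helper, not the Extended Euclidean layout used above), applied to $x\equiv4\ (\mathrm{mod}\ 18)$, $x\equiv52\ (\mathrm{mod}\ 96)$, $x\equiv6\ (\mathrm{mod}\ 20)$:

```python
from math import gcd

def crt_pair(a1, m1, a2, m2):
    """Merge x ≡ a1 (mod m1) with x ≡ a2 (mod m2).
    Returns (a, lcm(m1, m2)), or None if the pair is incompatible."""
    g = gcd(m1, m2)
    if (a2 - a1) % g != 0:
        return None
    m = m1 // g * m2              # lcm of the moduli
    x = a1 % m1
    while x % m2 != a2 % m2:      # step through the first class; terminates since solvable
        x += m1
    return x % m, m

print(crt_pair(4, 18, 52, 96))    # (148, 288), matching (7)
print(crt_pair(148, 288, 6, 20))  # None: the third congruence is incompatible
```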
Why should I learn rational canonical form?
If you want to determine if two square matrices with the same size are similar, you can determine if they have the same rational canonical form. They are similar if and only if they have the same canonical form. You can't do that with the Jordan canonical form without finding the roots of the characteristic polynomial, which will be quite hard in general.
Examples of an unbounded measurable subset of finite measure of the p-adic number field
An example of such a set is $$\bigcup_{n\ge 0} p^{-n} (1 + p^{2n+1} \mathbb{Z}_p).$$
Length of a Curve (Semicircle) using integration
(The original answer referred to a graphic of a circle of radius $a$ with an inscribed right triangle; the graphic is omitted here.) In this picture, if the side opposite $\theta$ is $\frac{a}{\sqrt{2}}$, then $\theta$ would be equal to $\frac{\pi}{4}$ (because it is a $1,1,\sqrt{2}$ right triangle). By definition, the arc that corresponds to $\theta = \frac{\pi}{4}$ has a length equal to $\frac{1}{8}$ of a full circle ($\frac{ \pi / 4 }{2 \pi} = \frac{1}{8}$). Then the complementary angle of $\theta$, namely $\phi$, would also be equal to $\frac{\pi}{4}$, and its corresponding arc would also have a length equal to $\frac{1}{8}$ of a full circle. That arc, highlighted in red in the original graphic, corresponds to $x$ values from $0$ to $\frac{a}{\sqrt{2}}$ in the graph.
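As a numerical sanity check (my own sketch, taking $a=1$): the arc length of $y=\sqrt{a^2-x^2}$ from $x=0$ to $x=a/\sqrt{2}$ should indeed be $\frac{1}{8}\cdot 2\pi a=\pi a/4$.

```python
import math

a = 1.0
# arc-length integrand for y = sqrt(a^2 - x^2):
# sqrt(1 + (y')^2) = a / sqrt(a^2 - x^2)
f = lambda x: a / math.sqrt(a*a - x*x)
n = 100_000
h = (a / math.sqrt(2)) / n
length = h * sum(f((k + 0.5) * h) for k in range(n))  # midpoint rule
print(length)  # ≈ pi/4 ≈ 0.7853981...
```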
About injectivity of induced homomorphisms on quotient rings
Suppose $\bar f$ is injective. Then $f(a)\in \mathfrak{b}$ implies $a\in \mathfrak{a}$. This means $\mathfrak{a} \supseteq \mathfrak{b}^c$, hence $\mathfrak{a}=\mathfrak{b}^c$.
Number of triplets adding to a certain number
If $L\lt 3(m+1)$, there are no solutions. If $L\ge 3(m+1)$, let $M=L-3m$, and let $x_i=y_i+m$. We want to find the number of solutions of $y_1+y_2+y_3=M$ in positive integers. Imagine that we write down $M$ copies of $\ast$, separated by some space, like this: $$\ast\qquad \ast\qquad \ast\qquad \ast\qquad\ast\qquad \ast\qquad \ast\qquad \ast\qquad\ast\qquad \ast\qquad \ast\qquad \ast$$ These $M$ "stars" determine $M-1$ interstellar gaps. Choose $2$ of these gaps to insert separators into. These are traditionally called bars. Every placement of the bars determines a positive integer solution of $y_1+y_2+y_3=M$. Just let $y_1$ be the number of stars until the first bar, $y_2$ the number of stars between the $2$ bars, and $y_3$ the number of stars after the last bar. Conversely, any solution in positive integers of $y_1+y_2+y_3=M$ determines a placement of bars. Thus there are $\binom{M-1}{2}$ ways to do the job. Remark: For more information, please see the Wikipedia article on Stars and Bars. Here we were dealing with a sum $x_1+x_2+x_3$ of $3$ numbers. For this case, there is a simpler way to look at the problem. It is convenient to let $N=L-3(m+1)$. We want to find the number of solutions of $z_1+z_2+z_3=N$ in non-negative integers. Just count separately the solutions that have $z_1=0,1,2,\dots,N$ and add up. If $z_1=0$, then $z_2$ can have $N+1$ values, anything from $0$ to $N$. If $z_1=1$, then $z_2$ can have $N$ values. And so on, until, if $z_1=N$, $z_2$ can have $1$ value. Thus the total is $(N+1)+N+(N-1)+\cdots+1$. This is (backwards) a familiar sum, which simplifies to $(N+1)(N+2)/2$.
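A brute-force cross-check of both counts in Python (the reading $x_i\ge m+1$ is taken from the substitution $x_i=y_i+m$ above):

```python
from math import comb

def brute(L, m):
    """Count triples (x1, x2, x3) with each xi >= m + 1 and x1 + x2 + x3 == L."""
    return sum(1 for x1 in range(m + 1, L)
                 for x2 in range(m + 1, L)
                 if L - x1 - x2 >= m + 1)

L, m = 20, 2
M, N = L - 3*m, L - 3*(m + 1)
print(brute(L, m), comb(M - 1, 2), (N + 1)*(N + 2)//2)  # all three agree
```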
Cardinality, surjective, injective function of complex variable.
You are right, this problem is related to cardinality: Two sets A and B are said to be of the same cardinality if and only if a bijection between them exists. For your problem, all four desired functions exist. To show that, there are at least three different approaches: You can show that an injection $f_1:A\to B$ and an injection $g_1:B\to A$ exist. From $f_1$ you can conclude $|A|\leq|B|$ and from $g_1$ you can conclude $|B|\leq|A|$. So $|A|=|B|$ and a bijection $h:A\to B$ exists. $h^{-1}:B\to A$ then is a bijection, too. You can show that a surjection $f_2:A\to B$ and a surjection $g_2:B\to A$ exist. From $f_2$ you can conclude $|B|\leq|A|$ and from $g_2$ you can conclude $|A|\leq|B|$. So $|A|=|B|$ and a bijection $h:A\to B$ exists. $h^{-1}:B\to A$ then is a bijection, too. You can show that a bijection $f:A\to B$ exists. $f^{-1}:B\to A$ then is a bijection, too. We proceed by approach 1: Example for an injective function from $A$ to $B$: $$f_{1}:=\begin{cases} A & \to B\\ z & \mapsto z \end{cases}$$ Example for an injective function from $B$ to $A$: $$g_{1}:=\begin{cases} B & \to A\\ r\cdot e^{i\varphi} & \mapsto(r+1)\cdot e^{i\varphi} \end{cases}$$ You can directly give a bijective function (approach 3), too: $$f:=\begin{cases} B & \to A\\ z & \mapsto\frac{\left|z\right|+1}{\left|z\right|}z\quad\text{(if }\left|z\right|\neq1\text{)}\\ z & \mapsto z^{2}\quad\text{(if }\left|z\right|=1,\arg(z)\in\left[0,\pi\right)\text{)}\\ z & \mapsto2z^{2}\quad\text{(if }\left|z\right|=1,\arg(z)\in\left[\pi,2\pi\right)\text{)} \end{cases}$$ (EDIT 1: Corrected the answer. EDIT 2: Added approach 3.)
integration of $\int_0^\infty\frac{(\log(x))^3}{x^3-1}\,\mathrm{d}x$
We can proceed as in this answer and use the same contour. First we will evaluate $$ \int_\gamma\frac{\log^3(z)}{z^3+1}\,\mathrm{d}z\tag{1} $$ over the contour (a circular sector of angle $2\pi/3$; figure omitted). Accounting for the pole at $e^{\pi i/3}$ with residue $\dfrac{(\pi i/3)^3}{3e^{2\pi i/3}}$ and letting $\alpha=e^{2\pi i/3}$, we get $$ \begin{align} 2\pi i\dfrac{(\pi i/3)^3}{3e^{2\pi i/3}} &=\color{#C00000}{\int_0^\infty\frac{\log^3(z)}{z^3+1}\,\mathrm{d}z} \color{#0000FF}{-\int_0^\infty\frac{\log^3(\alpha z)}{z^3+1}\,\mathrm{d}\alpha z}\\ &=\int_0^\infty\frac{\log^3(z)}{z^3+1}\,\mathrm{d}z\\ &-\alpha\int_0^\infty\frac{\log^3(z)}{z^3+1}\,\mathrm{d}z -3\frac{2\pi i}{3}\alpha\int_0^\infty\frac{\log^2(z)}{z^3+1}\,\mathrm{d}z\\ &+3\frac{4\pi^2}{9}\alpha\int_0^\infty\frac{\log(z)}{z^3+1}\,\mathrm{d}z +\frac{8\pi^3 i}{27}\alpha\int_0^\infty\frac{1}{z^3+1}\,\mathrm{d}z\tag{2} \end{align} $$ Multiplying by $\alpha$, noting that $\alpha(1-\alpha)=2i\sin(\pi/3)=i\sqrt3$, we have $$ \begin{align} \frac{2\pi^4}{81} &=i\sqrt3\int_0^\infty\frac{\log^3(z)}{z^3+1}\,\mathrm{d}z -2\pi i\alpha^2\int_0^\infty\frac{\log^2(z)}{z^3+1}\,\mathrm{d}z\\ &+\frac{4\pi^2}{3}\alpha^2\int_0^\infty\frac{\log(z)}{z^3+1}\,\mathrm{d}z +\frac{8\pi^3 i}{27}\alpha^2\int_0^\infty\frac{1}{z^3+1}\,\mathrm{d}z\tag{3} \end{align} $$ The real part of $(3)$ is $$ \begin{align} \frac{2\pi^4}{81} &=-\pi\sqrt3\int_0^\infty\frac{\log^2(z)}{z^3+1}\,\mathrm{d}z -\frac{2\pi^2}{3}\int_0^\infty\frac{\log(z)}{z^3+1}\,\mathrm{d}z +\frac{4\pi^3\sqrt3}{27}\int_0^\infty\frac{1}{z^3+1}\,\mathrm{d}z\\ &=-\pi\sqrt3\int_0^\infty\frac{\log^2(z)}{z^3+1}\,\mathrm{d}z -\frac{2\pi^2}{3}\left(-\frac{2\pi^2}{27}\right) +\frac{4\pi^3\sqrt3}{27}\left(\frac{2\pi\sqrt3}{9}\right)\tag{4} \end{align} $$ where we used the result of this answer.
Therefore, $$ \begin{align} \int_0^\infty\frac{\log^2(z)}{z^3+1}\,\mathrm{d}z &=\frac{10\pi^3\sqrt3}{243}\tag{5} \end{align} $$ The imaginary part of $(3)$ is $$ \begin{align} 0 &=\sqrt3\int_0^\infty\frac{\log^3(z)}{z^3+1}\,\mathrm{d}z +\pi\int_0^\infty\frac{\log^2(z)}{z^3+1}\,\mathrm{d}z\\ &-\frac{2\pi^2\sqrt3}{3}\int_0^\infty\frac{\log(z)}{z^3+1}\,\mathrm{d}z -\frac{4\pi^3}{27}\int_0^\infty\frac{1}{z^3+1}\,\mathrm{d}z\\ &=\sqrt3\int_0^\infty\frac{\log^3(z)}{z^3+1}\,\mathrm{d}z +\pi\left(\frac{10\pi^3\sqrt3}{243}\right)\\ &-\frac{2\pi^2\sqrt3}{3}\left(-\frac{2\pi^2}{27}\right) -\frac{4\pi^3}{27}\left(\frac{2\pi\sqrt3}{9}\right)\tag{6} \end{align} $$ where we used the result of this answer and $(4)$. Therefore, $$ \int_0^\infty\frac{\log^3(z)}{z^3+1}\,\mathrm{d}z=-\frac{14\pi^4}{243}\tag{7} $$ Next we will evaluate $$ \int_\gamma\frac{\log^3(z)}{z^3-1}\,\mathrm{d}z\tag{8} $$ over the contour (a circular sector of angle $\pi/3$; figure omitted). Noting that there are no poles inside the contour and letting $\beta=e^{\pi i/3}$, we get $$ \begin{align} 0 &=\color{#C00000}{\int_0^\infty\frac{\log^3(x)}{x^3-1}\,\mathrm{d}x} \color{#0000FF}{-\int_0^\infty\frac{\log^3(\beta x)}{-x^3-1}\,\mathrm{d}\beta x}\\ &=\int_0^\infty\frac{\log^3(x)}{x^3-1}\,\mathrm{d}x\\ &+\beta\int_0^\infty\frac{\log^3(x)}{x^3+1}\,\mathrm{d}x +3\frac{\pi i}{3}\beta\int_0^\infty\frac{\log^2(x)}{x^3+1}\,\mathrm{d}x\\ &-3\frac{\pi^2}{9}\beta\int_0^\infty\frac{\log(x)}{x^3+1}\,\mathrm{d}x -\frac{\pi^3i}{27}\beta\int_0^\infty\frac{1}{x^3+1}\,\mathrm{d}x\\ &=\int_0^\infty\frac{\log^3(x)}{x^3-1}\,\mathrm{d}x\\ &+\beta\left(-\frac{14\pi^4}{243}\right) +\pi i\beta\left(\frac{10\pi^3\sqrt3}{243}\right)\\ &-\frac{\pi^2}{3}\beta\left(-\frac{2\pi^2}{27}\right) -\frac{\pi^3i}{27}\beta\left(\frac{2\pi\sqrt3}{9}\right)\\ &=\int_0^\infty\frac{\log^3(x)}{x^3-1}\,\mathrm{d}x\\ &+\left(\frac12\right)\left(-\frac{14\pi^4}{243}\right) +\pi\left(-\frac{\sqrt3}{2}\right)\left(\frac{10\pi^3\sqrt3}{243}\right)\\ 
&-\frac{\pi^2}{3}\left(\frac12\right)\left(-\frac{2\pi^2}{27}\right) -\frac{\pi^3}{27}\left(-\frac{\sqrt3}{2}\right)\left(\frac{2\pi\sqrt3}{9}\right)\tag{9} \end{align} $$ Therefore, $$ \int_0^\infty\frac{\log^3(x)}{x^3-1}\,\mathrm{d}x=\frac{16\pi^4}{243}\tag{10} $$
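A numeric check of $(10)$ (my own sketch: substituting $x=e^t$ turns the integral into $\int_{-\infty}^{\infty}\frac{t^3e^t}{e^{3t}-1}\,dt$, whose integrand extends continuously by $t^2/3$ at $t=0$; the truncation to $[-60,60]$ and the step size are my choices):

```python
import math

def g(t):
    # integrand after the substitution x = e^t; removable singularity at t = 0
    if abs(t) < 1e-12:
        return t * t / 3.0
    return t**3 * math.exp(t) / (math.exp(3*t) - 1.0)

h = 0.001
approx = h * sum(g(-60.0 + k*h) for k in range(120_001))
print(approx, 16*math.pi**4/243)  # both ≈ 6.4138
```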
Eigenvalues of Polynomial of linear operator
You seem to misunderstand what a polynomial in an operator means. So here's how it works: Let $$p(x) = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x+a_0$$ be a formal polynomial with coefficients in $\Bbb F$. Then for any endomorphism (a linear operator from a space to itself) $T$, $$p(T) = a_n (\underbrace{T\circ \cdots \circ T}_{n\text{ times}}) + a_{n-1} (\underbrace{T\circ \cdots \circ T}_{n-1\text{ times}}) + \cdots + a_1T + a_0\textrm{id}$$ Notice that this is an endomorphism as well because the sum, scalar multiple, and composition of linear operators is linear. So in your example $p(T) = a_0T$.
Integral $\int_{-1}^1 \frac{e^x}{\sqrt{1-x^2}}dx$
$$\int_{-1}^1 \frac{e^x}{\sqrt{1-x^2}}dx$$ Substitution: $x=\sin\theta$. $$I=\int_{-\pi/2}^{\pi/2}e^{\sin\theta}d\theta$$ $$e^{\sin\theta}=1+\frac{\sin\theta}{1!}+\frac{\sin^2\theta}{2!}+\frac{\sin^3\theta}{3!}+\cdots$$ The odd powers of sine, when used in the integral, will produce zero. For even values of $n$ we have $$\int_{-\pi/2}^{\pi/2}\sin^n\theta\, d\theta=2 \times \frac{n-1}{n}\frac{n-3}{n-2}\frac{n-5}{n-4}\cdots\frac{3}{4}\frac{1}{2}\frac{\pi}{2}$$ Therefore, $$I=\int_{-\pi/2}^{\pi/2}e^{\sin\theta}d\theta=\pi+2\times\sum\frac{1}{n!}\left[\frac{n-1}{n}\frac{n-3}{n-2}\frac{n-5}{n-4}\cdots\frac{3}{4}\frac{1}{2}\frac{\pi}{2}\right]$$ $$=\pi+\pi\sum \left[\frac{1}{n(n-2)(n-4)\cdots2}\right]^2$$ [with $n$ running through the even values from $2$] Writing $n=2m$ we have $$\pi\lt I=\pi\sum_{m=0}^{\infty}\left[\frac{1}{2^m}\frac{1}{m!}\right]^2\lt 4$$
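Both the series and the original integral are easy to check numerically (a Python sketch; the truncation at $m=20$ and the midpoint grid are my choices):

```python
import math

# series value: pi * sum_{m>=0} (1 / (2^m m!))^2
series = math.pi * sum((1.0 / (2**m * math.factorial(m)))**2 for m in range(20))

# direct midpoint-rule integral of e^{sin t} over [-pi/2, pi/2]
n = 200_000
h = math.pi / n
numeric = h * sum(math.exp(math.sin(-math.pi/2 + (k + 0.5)*h)) for k in range(n))
print(series, numeric)  # both ≈ 3.9775, safely between pi and 4
```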
non differentiable, integrable function
See the Weierstrass function.
if the divergence of a plane vector field is of fixed sign in an annular region, then the differential equation has at most one periodic orbit in S
Suppose that there exists more than one periodic orbit for the system $\dot x=f(x)$ inside the annular region. Choose two of them. Then these two form the boundary $C$ of a region $D$ contained inside the annular region. Thus, we can apply the divergence form of Green's Theorem: $$\iint_D\nabla\cdot f(x)\,dA=\oint_C f(x)\cdot \hat n\, ds,$$ where $\hat n$ is the unit vector normal to $C$. On periodic orbits, $f(x)\cdot\hat n\equiv 0$, so we would have $\iint_D\nabla\cdot f(x)\,dA=0$. But if $\nabla\cdot f(x)$ has a constant sign, this cannot be true. Thus, there can be at most one periodic orbit inside the annular region.
How do map a set of points on the surface of a sphere to the surface of a scaled sphere?
$(x,y,z) \mapsto (cx,cy,cz)$ (assuming the spheres are both centered at the origin)
Prove that $\int\limits_0^{+\infty}(e^{-1/x^2} - e^{-4/x^2})\,dx$ converges
It is straightforward to show that the integral converges (the integrand is $O(x^{-2})$ as $x \to \infty$). In fact, we can evaluate this integral in closed form. To this latter end we proceed. Enforcing the substitution $x\to 1/x$ reveals $$\int_0^L \left(e^{-1/x^2}-e^{-4/x^2}\right)\,dx=\int_{1/L}^\infty \frac{e^{-x^2}-e^{-4x^2}}{x^2}\,dx \tag 1$$ Integrating by parts the integral on the right-hand side of $(1)$ with $u=e^{-x^2}-e^{-4x^2}$ and $v=-\frac{1}{x}$ yields $$\int_0^L \left(e^{-1/x^2}-e^{-4/x^2}\right)\,dx=-\left.\left(\frac{e^{-x^2}-e^{-4x^2}}{x}\right)\right|_{1/L}^\infty-2\int_{1/L}^\infty \left(e^{-x^2}-4e^{-4x^2}\right)\,dx\tag 2$$ Letting $L\to \infty$ we find that the first term on the right-hand side of $(2)$ vanishes and we obtain $$\int_0^\infty \left(e^{-1/x^2}-e^{-4/x^2}\right)\,dx=-2\int_{0}^\infty \left(e^{-x^2}-4e^{-4x^2}\right)\,dx=\sqrt \pi$$ Therefore, the integral not only converges, but is equal to $\sqrt \pi$.
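A quick numerical confirmation (my sketch; after the $x\to1/x$ substitution in $(1)$ the integrand is bounded and decays like $e^{-x^2}$, so a simple midpoint rule on $[0,10]$ suffices):

```python
import math

# integrand of (1): (e^{-x^2} - e^{-4x^2}) / x^2, which tends to 3 as x -> 0
f = lambda x: (math.exp(-x*x) - math.exp(-4*x*x)) / (x*x)
n = 200_000
h = 10.0 / n
approx = h * sum(f((k + 0.5)*h) for k in range(n))
print(approx, math.sqrt(math.pi))  # both ≈ 1.77245
```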
Probability of an event on the nth draw
Don't overcomplicate things. We seek the probability that the twenty-first marble drawn is one of the $12$ black marbles, under the condition that $5$ particular red marbles (from $12$) have been drawn among the first twenty positions of consecutive draws without replacement. Observe that wheresoever those particular marbles are among the first twenty positions, the remaining $19$ marbles each have equal probability of being in the favoured position, and $12$ of these are black; hence the probability is $\frac{12}{19}$. Remark: This assumes that there is no bias in selecting among the remaining marbles, which is contraindicated by the stated probability of $1/61440$ for selecting the particular marbles into the first twenty positions, rather than the unbiased probability $\binom{19}{15}/\binom{24}{20}$ or $646/1771$.
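A Monte Carlo check of the symmetry argument (my own simulation; marbles $0$-$11$ are red, with $0$-$4$ the five particular ones, and $12$-$23$ are black):

```python
import random

random.seed(1)
hits = trials = 0
for _ in range(200_000):
    order = random.sample(range(24), 24)       # a uniformly random draw order
    if set(order[:20]) >= set(range(5)):       # condition: the 5 reds are in the first 20
        trials += 1
        hits += order[20] >= 12                # twenty-first marble is black
print(hits / trials)  # ≈ 12/19 ≈ 0.632
```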
Why is an antipodal-symmetrically colored circle guaranteed to have an odd number of multicolored edges?
I'm pretty sure what you have written in the body of the question is not correct. For example this arrangement, read going around the circle, should be acceptable: $1,-2 ,1 , -1,2 ,-1$. This has two edges 'with endpoints labelled -1 and 2 (in either order)'. Am I misunderstanding something?
Find the supremum and infimum of $(a_n)$ and $(b_n)$.
HINT: For each $n\ge 1$ you know that $a_n<\sqrt2<a_n+2^{-n}$. Clearly $\sup_na_n\le\sqrt2$, so the real question is whether $\sup_na_n$ could be strictly less than $\sqrt2$. Suppose that $\sup_na_n=s<\sqrt2$. What happens if $n$ is so large that $2^{-n}<\sqrt2-s$? It’s also clear that $\inf_nb_n\ge\sqrt2$, so for this one you want to ask yourself whether it’s possible for $\inf_nb_n$ to be strictly greater than $\sqrt2$; use the same general idea as I used above.
Evaluate the integral $\int_{\mathbb{R}}\frac{e^{\gamma x}}{(1+e^x)}dx$.
With substitution $\dfrac{1}{1+e^x}=u$ we have $$\int_{-\infty}^{+\infty}e^{\gamma x}(1+e^x)^{-1}dx=\int_0^1(1-u)^{\gamma-1}u^{-\gamma}du=B(1-\gamma,\gamma)=\Gamma(\gamma)\Gamma(1-\gamma)=\dfrac{\pi}{\sin\pi\gamma}$$ for $0<\gamma <1$. See here for details about beta function.
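A numeric spot check at $\gamma=0.3$ (a sketch; the integrand decays like $e^{-0.7x}$ on the right and $e^{0.3x}$ on the left, so truncating to $[-80,80]$ is harmless):

```python
import math

g = 0.3
f = lambda x: math.exp(g*x) / (1.0 + math.exp(x))
n = 320_000
h = 160.0 / n
approx = h * sum(f(-80.0 + (k + 0.5)*h) for k in range(n))  # midpoint rule
print(approx, math.pi / math.sin(math.pi*g))  # both ≈ 3.8832
```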
How and why are these three graphs visually related? $y=ex^2 \sin\left(\frac{1}{x}\right)$, $y=ex^2$, $y=-ex^2$
As $-1\le\sin{(\frac{1}{x})}\le1$, we have $-ex^2\le ex^2\sin{(\frac{1}{x})}\le ex^2$ for $x\neq0$. So the function $ex^2$ serves as an upper bound, and $-ex^2$ as a lower bound, for $ex^2\sin(\frac{1}{x})$: its graph oscillates between the two parabolas, touching them wherever $\sin(\frac{1}{x})=\pm1$.
Inequality in characteristic function
Observe that: $$\phi_{-X}(u)=\mathbb Ee^{-iuX}=\mathbb E\overline{e^{iuX}}=\overline{\mathbb Ee^{iuX}}=\overline{\phi_X(u)}$$ This shows that if $\phi(u)$ is a characteristic function, then so is $\overline{\phi(u)}$. Also, it is well known that a product of characteristic functions is again a characteristic function. Together these tell us that $|\phi(u)|^2=\phi(u)\overline{\phi(u)}$ is a characteristic function, so this answer on this question (the link provided by bubububub) proves the statement. Also you can take a look at the first part of this answer.
equation of a line for a street lamp post
$y(x) = 0x + c$ gives you a constant function that equals $c$ for all values of $x$. If you want an equation for a vertical line, well... it's no longer a function, because $f(x)$ would not be unique: it would take on multiple (indeed, infinitely many) values. If you approached a vertical line with ever-steeper lines, you'd see the slope approach infinity as the line becomes closer and closer to vertical.
What's the difference between an endofunctor and a morphism?
I think your confusion is, first of all, about language (so, not being a native English speaker, maybe I'm not the best qualified to help you) and about the different levels of abstraction that you're mixing. Secondly, maybe you lack examples of categories where morphisms don't map anything at all. To begin with, the meaning of your "map" in "Morphisms map objects" is not the same as in "functors map both objects and morphisms". Or shouldn't be. Or you should definitely avoid it in the first sentence. In the second sentence, you're using it correctly in the categorical parlance (categorical level of abstraction), meaning "send". But this same use in the first one is not correct, because, in the categorical parlance (categorical level of abstraction), morphisms don't "map" (send) anything to anything (not necessarily, or not at all). Outside category theory, we say (correctly) "$f(x,y) = x$ maps $\mathbb{R}^2$ onto $\mathbb{R}$", when we are doing Analysis, for instance. This is a perfectly legitimate use of the verb "to map". But, thinking of the same $f: \mathbb{R}^2 \longrightarrow \mathbb{R}$ as a morphism (for instance, in the category of topological spaces and continuous maps), we look at it simply as an "arrow" between the two "objects" $\mathbb{R}^2$ and $\mathbb{R}$, and we forget that it "sends" vectors $(x,y)$ to their first coordinate $x$. More correctly, we have defined what the set of continuous morphisms $\mathbf{Top} (\mathbb{R}^2, \mathbb{R})$ is, and we say that $f$ is a member of this set. Full stop: we don't need anything else from the categorical point of view (categorical level of abstraction). So, generally, you should avoid thinking that morphisms in a category map anything, meaning that they "send" something onto / into something else. You should avoid this use / meaning of the verb "to map" when speaking about morphisms in a category because it's not true, not correct, in general.
Instead, in the categorical parlance (categorical level of abstraction), functors do really map (send) objects to objects and maps to maps. Indeed, by definition, a functor is composed of two "functions": one that assigns objects to objects, and one that assigns maps to maps. One example where the two uses of "map" coexist. The fundamental group functor $\pi_1$ maps (sends) topological spaces to groups and continuous maps to group homomorphisms: $$ \pi_1 : \mathbf{Top} \longrightarrow \mathbf{Groups} $$ For instance, $\pi_1$ "sends" the unit circumference $S^1$ to the group of integer numbers $\pi_1 (S^1) = \mathbb{Z}$, and the continuous map that goes round the circumference twice, $\gamma : S^1 \longrightarrow S^1$, to the group homomorphism $\gamma_ *= \pi_1 (\gamma ) : \mathbb{Z} \longrightarrow \mathbb{Z}$ which is multiplication by $2$: $\gamma_* (m) = 2m$. But we also say, and it is true, of course, that $\gamma$ maps $S^1$ onto $S^1$. And it's also true that $\pi_1$ maps $S^1$ to $\mathbb{Z}$ (and $\gamma $ to $\gamma_*$). But they ($\gamma$ and $\pi_1$) are different kinds of "maps", corresponding to different levels of mathematical abstraction. Moreover, in this example with $\gamma$ and $\pi_1$, you could insist: "But, anyway, both are maps, aren't they? Both map, don't they?". One example where they don't. Yes, but there are plenty of examples, talking about categories, where the morphisms of a particular category don't map / send anything at all. For instance, you can think of $\mathbb{Z}$ as a category, with objects the integer numbers and morphisms defined this way: we say that the set of morphisms between two (different) integer numbers $m$ and $n$ is empty if and only if $m > n$; otherwise, that is if $m < n$, we say that the set of morphisms from $m$ to $n$ has exactly one morphism $m \longrightarrow n$. (And when $m=n$, we also have just one morphism, the identity of $m$.)
"Oh, but who is this morphism, when $m<n$?, which is its formula?, what does it map?", you could ask. Well, I'm sorry, but, from a categorical point of view, I don't need to tell you: it doesn't matter. My "category" $\mathbb{Z}$ is perfectly defined as it is. (You could try to be picky at this point and ask me for how does composition of morphisms work in this $\mathbb{Z}$ category, but, I don't need to say anything more because there is just one morphism for each pair $m<n$, so composition works in the only possible way.) And a last word: you should not think that this "phenomenon" (morphisms don't map / send ) is unusual in the realm of categories
Visualization / sketch for this basic proof about subspace topology
Draw $X$ to be the plane and $A$ to be the $x$-axis.
Scalar restriction of bilinear maps
This is much easier if you are dealing with the (equivalent) actual bilinear maps $M\oplus N\to Z$, for if such a map is $R$-bilinear, via the identification $(M\oplus N)_S=M_S\oplus N_S$, it is easily seen to be $S$-bilinear, too. So there is your natural map $\mathrm{Bil}_R(M\oplus N,Z)\to \mathrm{Bil}_S(M_S\oplus N_S,Z_S)$. If you don't want to do the elementary verification, here's another proof: We have to show that the obvious map $M_S\otimes_S N_S\to (M\otimes_RN)_S$ is well-defined. Considering the co-unit of the scalar extension- and restriction-adjunction, $R\otimes_SM_S\to M$ and analogously for $N$, we get an $R$-linear map $$R\otimes_SM_S\otimes_SN_S=(R\otimes_SM_S)\otimes_R (R\otimes_S N_S)\to M\otimes_RN,$$ mapping $1_R\otimes m\otimes n$ to $m\otimes n$. Thus, via the adjunction $\hom_R(R\otimes_SM_S\otimes_SN_S,M\otimes_RN)=\hom_S(M_S\otimes_SN_S,(M\otimes_RN)_S)$, we find the obviously defined $S$-linear map $M_S\otimes_SN_S\to(M\otimes_R N)_S$, as required.
Should this integral be zero?
$f$ is not holomorphic, and not even meromorphic, because $z \mapsto \bar z$ is not differentiable anywhere: $$\frac{\bar z - \bar 0}{z - 0} = \frac{(x-iy)^2}{x^2 + y^2} = \begin{cases} -i, \text{ on the path $y=x$} \\ i, \text{ on the path $y = -x$} \end{cases}$$ The same can be done at any other point. So you can't apply Cauchy's theorem or the Residue theorem. You have to calculate it directly.
How to do multiplication (capital pi) in WolframAlpha?
For example: product (1)/n, n=1..10
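The same product can be checked in Python (an equivalent computation, not WolframAlpha syntax):

```python
import math

value = math.prod(1/n for n in range(1, 11))  # product of 1/n for n = 1..10, i.e. 1/10!
print(value)  # ≈ 2.7557e-07
```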
A problem on partially ordered set
HINTS: How you prove that two different posets cannot have the same Hasse diagram will depend a bit on how rigorous you’re supposed to be. You could suppose that $\langle P,\le\rangle$ and $\langle P,\preceq\rangle$ have the same Hasse diagram but that there are $p,q\in P$ such that $p\le q$ but $p\not\preceq q$. Can you see how to get a contradiction from this? Consider the graph $G$ with two vertices and one edge; can you find two different posets that have $G$ as cover graph? How about two different posets that have $G$ as their comparability graph?
Will any value of a free variable satisfy a system of equations?
You simply made a sign error in your third matrix equation: you should have $x_1 = \frac{-1}{2} x_3 + b_1$ instead of $x_1 = \frac{1}{2} x_3 + b_1$. That will give you the right answer, which is $b_1=5$, by the way.
Let f be a convex differentiable function. Prove that if u is any continuous function, then ...
Since $f$ is convex and differentiable, $$f(u(t))\ge f(x_{0})+f'(x_{0})(u(t)-x_{0}).$$ Let $$x_{0}=\dfrac{1}{a}\int_{0}^{a}u(t)dt,$$ so $$f(u(t))\ge f\left(\dfrac{1}{a}\int_{0}^{a}u(t)dt\right)+f'\left(\dfrac{1}{a}\int_{0}^{a}u(t)dt\right)\left[u(t)-\dfrac{1}{a}\int_{0}^{a}u(t)dt\right].$$ Note $$\int_{0}^{a}\left[u(t)-\dfrac{1}{a}\int_{0}^{a}u(t)dt\right]dt=\int_{0}^{a}u(t)dt-\int_{0}^{a}u(t)dt=0,$$ so $$\int_{0}^{a}f(u(t))dt\ge \int_{0}^{a}f\left(\dfrac{1}{a}\int_{0}^{a}u(t)dt\right)dt+0,$$ so $$\dfrac{1}{a}\int_{0}^{a}f(u(t))dt\ge f\left(\dfrac{1}{a}\int_{0}^{a}u(t)dt\right).$$
How to prove $\frac{\pi}{2}-\int_{0}^{\frac{\pi}{2}}\frac{\sin{x}}{x}dx<\frac{\pi^{3}}{144}$
Easy to show that $$\sin{x}-x+\frac{x^3}{6}\geq0$$ for all $x\in\left[0,\frac{\pi}{2}\right]$ and use that an integral of non-negative function is non-negative. Indeed, let $f(x)=\sin{x}-x+\frac{x^3}{6}$. Thus, $$f'(x)=\cos{x}-1+\frac{x^2}{2},$$ $$f''(x)=-\sin{x}+x\geq0.$$ Thus, $f'(x)\geq f'(0)=0$ and $f(x)\geq f(0)=0$. Id est, $$\int_{0}^{\frac{\pi}{2}}\frac{f(x)}{x}dx\geq0$$ or $$\int_0^{\frac{\pi}{2}}\left(\frac{\sin{x}}{x}-1+\frac{x^2}{6}\right)dx\geq0$$ or $$\int_0^{\frac{\pi}{2}}\frac{\sin{x}}{x}dx+\left(-x+\frac{x^3}{18}\right)_0^{\frac{\pi}{2}}\geq0,$$ which gives the needed inequality.
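Numerically the inequality has plenty of room to spare (a midpoint-rule sketch of mine):

```python
import math

n = 200_000
h = (math.pi/2) / n
# midpoint rule for the sine integral over [0, pi/2]; sin(x)/x -> 1 at 0
si = h * sum(math.sin((k + 0.5)*h) / ((k + 0.5)*h) for k in range(n))
gap = math.pi/2 - si        # ≈ 0.2000
bound = math.pi**3 / 144    # ≈ 0.2153
print(gap, bound)
```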
Planes and Augmented Matrices
Begin with the augmented matrix $$ \left( \begin{array}{ccc|c} 2 & 1 & 1 & 5\\ 1 & -1 & 1 & 3 \\ -2 & p & 2 & q\end{array} \right) $$ and row reduce this by multiplying the first row by $-1/2$ and adding it to the second row, and adding the first row to the last row to get: $$ \left( \begin{array}{ccc|c} 2 & 1 & 1 & 5\\ 0 & -\frac{3}{2} & \frac{1}{2} & \frac{1}{2} \\ 0 & p+1 & 3 & 5+q\end{array} \right). $$ Then multiply the second row by $-2/3$ to get: $$ \left( \begin{array}{ccc|c} 2 & 1 & 1 & 5\\ 0 & 1 & -\frac{1}{3} & -\frac{1}{3} \\ 0 & p+1 & 3 & 5+q\end{array} \right). $$ Next, multiply the second row by $-1$ and add it to the first row, and multiply the second row by $-(p+1)$ and add it to the third row: $$ \left( \begin{array}{ccc|c} 2 & 0 & \frac{4}{3} & \frac{16}{3}\\ 0 & 1 & -\frac{1}{3} & -\frac{1}{3} \\ 0 & 0 & 3+\frac{p+1}{3} & 5+q+\frac{p+1}{3} \end{array} \right). $$ The equation corresponding to the last row in the augmented matrix is $$\left(3+\frac{p+1}{3}\right)z= 5+q+\frac{p+1}{3}.$$ Multiply both sides by three to obtain: $$ (p+10)z = 15+3q+p+1, $$ which is $$ (p+10)z = p+3q+16. $$ So what we want to do is find $p$ and $q$ so that $p+10=0$ and $p+3q+16=0$. This would mean that the system of equations will have an infinite number of solutions. So $p=-10$ and $q=-2$.
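A NumPy rank check confirms the choice $p=-10$, $q=-2$ (the helper name is mine):

```python
import numpy as np

def solution_count(p, q):
    """0 = inconsistent, 1 = unique solution, inf = infinitely many."""
    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, -1.0, 1.0],
                  [-2.0, p, 2.0]])
    b = np.array([5.0, 3.0, q])
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA < rAb:
        return 0
    return 1 if rA == 3 else float('inf')

print(solution_count(-10, -2))  # inf: infinitely many solutions
```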
What are those "things that cannot be proved using only ordinary rules of inference"?
I am not sure what the writer means by: "The main benefit of structured proofs is that they allow us to prove things that cannot be proved using only ordinary rules of inference." His earlier remark: "The interesting thing about the Mendelson axiom schemas is that, together with Implication Elimination, they alone are sufficient to prove all logical consequences from any set of premises." is correct. More technical (what is wrong with the statements) — skip this if you find the above enough. It does look like the writer is mixing up formulas and rules; he even goes so far as to say: "Rules with no premises are sometimes called axiom schemas. Axiom schemas are usually written without the horizontal line used in displaying ordinary rules of inference." There are two problems with this statement: 1) Axiom schemas are 'recipes' for formulas: every formula that can be created by replacing the metavariables ϕ, ψ and χ by any well-formed formula is an axiom (see http://en.wikipedia.org/wiki/Hilbert_system ). You can always go from an implicational formula to an inference rule. (An implicational formula is a formula whose main connective is an implication.) But the converse (from inference rule to implicational formula) is not always a given; it depends on whether the deduction theorem holds in that logic. In most familiar (non-modal) logics it does, but in general it can get rather messy and complicated. 2) What are these "rules with no premises"? Inference rules (rules for short) always have premises: an inference rule allows you to go from one or more premises to one conclusion, so the existence of at least one premise is already implied by the definition (see http://en.wikipedia.org/wiki/Inference_rules ). It is true that there are different uses of the word "premise" (premises of an argument vs. premises of an inference rule), but it is not clear what the writer means by "rules with no premises".
Maybe he means that if you treat axiom schemas as rules only, then you cannot prove every theorem. (For example, try to prove $ P \to P $ without any formula to start with: how can you start? Where can you apply your inference rule?) Maybe that is what the writer means, but I am not sure about it; it is probably best to ask the writer / your lecturer about this. Let me know what they said, maybe I can learn from it as well. Hope this helps.
Interval of p where $\int_0^\infty{\sqrt{x}\sin(\frac{1}{x^p})}dx$ converges
Your answer is correct but there is a flaw in the argument. $\int_0^{\infty} \frac 1{x^{p-\frac 1 2}}dx=\infty$ for all $p$. What you have to do is to split the integral into integrals from $0$ to $1$ and from $1$ to $\infty$. The first integral converges for all $p$ (the integrand is bounded by $\sqrt x$ near $0$). The second integral converges iff $p-\frac 1 2 > 1$, i.e. iff $p > \frac 3 2$, by your comparison.
Associative ring with identity, inverses, divisors of zero and Artinianity
Suppose $R$ is right Artinian. Then $rR \supset r^2 R \supset r^3 R \supset \ldots $ is a descending sequence of right ideals. Thus for some $n$, $r^n R = r^{n+1} R$. Use this fact to show that either $r$ is right invertible, or $r$ is a left divisor of zero. Since $R$ has an identity $1$, from $r^nR=r^{n+1}R$ it follows that $r^n\cdot 1=r^{n+1}s$ for some $s\in R$, or rewritten: $r^n=r^n(rs)$, i.e. $r^n(1-rs)=0$. Since $r$ is not a left divisor of $0$, neither is $r^n$, so $1-rs=0$, that is, $1=rs$. Then $s$ is a right inverse of $r$, which contradicts the hypothesis. Hence $R$ is not right Artinian. This is a generalization of the commutative-ring theorem that every Artinian integral domain is a field (with essentially the same proof).
What is golden ratio doing in this computer code?
If you look at the code you find that the routine is from Numerical Recipes. And if you look there you find the comment: "According to Knuth, any large MBIG, and any smaller (but still large) MSEED can be substituted for the above values." In fact the NR routine is derived from Knuth's subtractive generator IN55 (described in Seminumerical Algorithms, Section 3.6), which also explains the magic 55 in the NR and MS code.
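For the curious, here is a minimal Python sketch of a lag-55 subtractive generator in this family. The constants come from the NR comment; the seeding and warm-up below are illustrative only, not a faithful port of the ran3 routine:

```python
# Toy lagged-Fibonacci subtractive generator, in the spirit of Knuth / NR.
MBIG = 1000000000
MSEED = 161803398  # ~ (golden ratio) * 10**8 -- the "magic" constant

def make_subtractive(seed):
    # Illustrative seeding: a descending arithmetic sequence (ran3 scrambles
    # its table much more carefully than this).
    state = [(MSEED - seed - i) % MBIG for i in range(55)]
    i, j = 0, 31  # two indices kept 24 apart around the circular buffer

    def nxt():
        nonlocal i, j
        state[i] = (state[i] - state[j]) % MBIG
        out = state[i]
        i = (i + 1) % 55
        j = (j + 1) % 55
        return out

    return nxt

rng = make_subtractive(42)
for _ in range(220):   # warm up the table before using the output
    rng()
samples = [rng() for _ in range(1000)]
```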
The sequence $(a_n)$ is given as $a_1=1, a_{2n} = a_n - 1, a_{2n+1} = a_n + 1$. $a_{2015}=$?
Actually, you made a mistake breaking down $a_{2015}$. It should be $a_{2015}=a_{2\times 1007+1}=a_{1007}+1$. And $a_{1007}=a_{2\times503+1}=a_{503}+1$ $a_{503}=a_{2\times 251+1}=a_{251}+1$ $a_{251}=a_{2\times 125+1}=a_{125}+1$ $a_{125}=a_{2\times 62+1}=a_{62}+1$ $a_{62}=a_{2\times 31}=a_{31}-1$ $a_{31}=a_{2\times 15+1}=a_{15}+1$ $a_{15}=a_{2\times 7+1}=a_{7}+1$ $a_{7}=a_{2\times 3+1}=a_{3}+1$ $a_{3}=a_{2\times 1+1}=a_{1}+1$ So, $a_{2015}=a_{1}+1+1+1+1+1+1+1+1+1-1=9$
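The recurrence is also easy to evaluate mechanically, which confirms the hand computation:

```python
from functools import lru_cache

# Direct evaluation of a_1 = 1, a_{2n} = a_n - 1, a_{2n+1} = a_n + 1.
@lru_cache(maxsize=None)
def a(n):
    if n == 1:
        return 1
    # n odd means n = 2m+1 (add 1); n even means n = 2m (subtract 1).
    return a(n // 2) + (1 if n % 2 else -1)

print(a(2015))  # 9
```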
equation of ellipse after projection
The intersection of the plane and the sphere is: $$ \begin{cases}x+z=1\\ x^2+y^2+z^2=1 \end{cases} $$ which, substituting $z$ from the first equation into the second, becomes: $$ \begin{cases}z=1-x\\ 2x^2+y^2-2x=0 \end{cases} $$ Here is the answer to the question. The system is the ''equation'' of the circle of intersection. The second equation, interpreted as an equation in $3$D space (so that $z$ can have any real value), is the equation of a cylinder that passes through the circle and has axis parallel to the $z$ axis. Interpreted as an equation in the $xy$ plane, it is the equation of the desired ellipse.
Some question in relative homology
Note: in this answer I assume the singular homology theory. Sometimes $A$ is a subcomplex of $X$, but not in general. For example, we could consider the pair $(\mathbb R,\mathbb Q)$. In order for $H_n(X,A)=\tilde H_n(X/A)$ to be true, we need a neighborhood of $A$ in $X$ which deformation retracts to $A$. This is not true in many cases. For a simple example, let $X=S^2$ and $A=S^2\setminus\{x\}$ where $x$ is any point in $S^2$. The long exact sequence of pairs gives us $$H_2(A)\to H_2(X)\to H_2(X,A)\to H_1(A).$$ We know $H_2(A)=H_1(A)=0$, so $H_2(X,A)=H_2(X)=\mathbb Z$. But $X/A$ is the Sierpinski topological space with two points, which is contractible, so $H_2(X/A)=0$. (Sorry this is an example in the homology groups instead of chain complexes; the chain complexes are harder to describe.) The reason we prefer $(X,A)$ to $X/A$ is that a lot of powerful theorems about $(X,A)$ are simply untrue for $X/A$. For example, we do not always get a long exact sequence $$...\to H_n(A)\to H_n(X)\to \tilde H_n(X/A)\to H_{n-1}(A)\to...$$
Some question about path connectedness
Stereographic projection provides a homeomorphism between $S^n \setminus \{NP, SP\}$ and $\Bbb R^n \setminus \{0\}$. Because path-connectedness is a homeomorphism invariant, it suffices to show that this second space is path-connected. What better way to do this than to write down a path between any two points? Let $x, y \in \Bbb R^n$ be nonzero. If $x$ is not a scalar multiple of $y$, the path $f(t) = (1-t)x+ty$ is a path from $x$ to $y$ that does not pass through zero. If $x$ is a scalar multiple of $y$, we'll need to modify this path just a tiny bit. Pick a vector $z$ not on the line spanned by $y$. Why not just use the path that first goes from $x$ to $z$ and then to $y$ by a straight line? $z$ is intentionally chosen to be a scalar multiple of neither $x$ nor $y$, so this path will never go through zero. So let's define such a path: $$f(t) = \left\{ \begin{array}{lr} (1-2t)x + 2tz & 0 \leq t \leq 1/2\\ (2-2t)z+(2t-1)y & 1/2 \leq t \leq 1 \end{array} \right.$$ This starts at $x$, is at $z$ at $t=1/2$, and ends at $y$. (You can check that this is continuous.) So we've drawn a path between any two points; this shows that our space is path-connected. Note that to do this, it was key that we had some $z$ that wasn't a multiple of $y$, or else we couldn't have avoided $0$. This is where the $n>1$ hypothesis came in!
Prove convexity of the given function on $\mathbb{R}^n$
This is $f(x)=\max\{z_1-x_1,z_2-x_2,\cdots, z_n-x_n,0\}$. Supremum of a family, finite or otherwise, of convex functions is convex, and affine maps (meaning, maps in the form $g(x)=\langle b,x\rangle +\alpha$) are convex.
locally free resolution of coherent sheaf on quasi-projective scheme
II.5.18 states that for $\overline{X}$ projective over $A$, any coherent sheaf $\mathscr{F}$ can be written as an epimorphic image of some finite direct sum of the twists $\mathcal{O}(n)$ of the structure sheaf, thus as a quotient of a locally free sheaf $\mathscr{L}_0$ of finite rank. As the kernel of $\mathscr{L}_0\rightarrow \mathscr{F}$ is itself coherent, the same applies to it, and one finds $\mathscr{L}_1\rightarrow \mathscr{L}_0 \rightarrow \mathscr{F}\rightarrow 0$ exact with $\mathscr{L}_1$ locally free of finite rank. And so on. In the quasi-projective case, one can choose an open immersion $i:X \hookrightarrow \overline{X}$ into a projective $A$-scheme, and then by Exercise II.5.15 of Hartshorne, extend a coherent sheaf $\mathscr{F}$ on $X$ to a sheaf $\mathscr{F'}$ on $\overline{X}$ which is still coherent. By the above, there is a resolution $\mathscr{L}_{\bullet}\rightarrow \mathscr{F}'\rightarrow 0$ by finite rank locally free sheaves on $\overline{X}$. The restriction functor to the open subscheme $X$ is exact and preserves "being locally free of finite rank", hence ${\mathscr{L}_{\bullet}}_{\restriction_{X}} \rightarrow \mathscr{F'}_{\restriction_{X}}(=\mathscr{F})\rightarrow 0$ is a resolution by finite rank locally free sheaves on $X$ (in fact, finite direct sums of line bundles).
Finite element method for the 'Particle-In-a-Box' problem in quantum mechanics
I ran the following Mathematica code: n = 30; H = IdentityMatrix[n]*2 - DiagonalMatrix[ConstantArray[1, n - 1], -1] - DiagonalMatrix[ConstantArray[1, n - 1], 1]; ListPlot[N[Eigenvectors[H]][[-5 ;; -1]], Joined -> True] This creates the Hamiltonian (without the constant factor $a$) and plots the first eigenvectors. As you can see, it works as expected, so the error should be somewhere in your code. (The creation of $H$ seems fine, so I expect it has to do something with the linear algebra part in the last line.) Or, as mentioned in a comment, it has to do something with the expected energies of the system in the theoretical result.
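For completeness, here is a library-free Python cross-check (a sketch added here, not part of the original answer). It exploits the fact that the $(-1, 2, -1)$ tridiagonal matrix with Dirichlet boundaries has known closed-form eigenpairs, so one can verify $Hv = \lambda v$ directly without any linear-algebra package:

```python
import math

# Build the same tridiagonal Hamiltonian as the Mathematica code (factor a omitted).
n = 30
H = [[2 if i == j else -1 if abs(i - j) == 1 else 0 for j in range(n)]
     for i in range(n)]

# Exact eigenpairs of this matrix: lambda_k = 2 - 2 cos(k pi / (n+1)),
# with eigenvector components v_i = sin(k i pi / (n+1)), i = 1..n.
for k in range(1, n + 1):
    lam = 2 - 2 * math.cos(k * math.pi / (n + 1))
    v = [math.sin(k * (i + 1) * math.pi / (n + 1)) for i in range(n)]
    Hv = [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
    assert all(abs(Hv[i] - lam * v[i]) < 1e-9 for i in range(n))
```

The sine-shaped eigenvectors are exactly the discretized particle-in-a-box wavefunctions, which is what the plot shows.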
Bayesian network and unknown probability
No, $P(C|A,B)$ can't be determined solely from the marginal distributions $P(A)$ and $P(B)$. For example, say $C$ perfectly determines $A$ and $B$, via one of two mechanisms: If $C=1$, then $A=1$ and $B=1$ (and the marginal distribution of $C$ is the same as $A$ and $B$: $P(C=1) = 0.1$). In this case, $P(C=1|A=1, B=1)=1$. If $C=1$, then $A=0$ and $B=0$ (and the marginal distribution of $C$ is now: $P(C=1) = 0.9$). In this case, $P(C=1|A=1, B=1)=0$.
fraction $\frac{21n+4}{14n+3}$ , $n\in N$
$\gcd(21n+4,14n+3)=\gcd(7n+1,14n+3)=\gcd(7n+1,1)=1$, since $(21n+4)-(14n+3)=7n+1$ and $(14n+3)-2(7n+1)=1$. So the fraction is irreducible for every $n$.
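A quick numerical confirmation of the coprimality claim:

```python
from math import gcd

# The Euclidean reduction above says 21n+4 and 14n+3 are coprime for all n,
# so the fraction is already in lowest terms; check a large range directly.
assert all(gcd(21 * n + 4, 14 * n + 3) == 1 for n in range(1, 10001))
```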
What is a formal definition of a line?
Let $\mathbb{A}^n$ be the affine space associated with the vector space $\mathbb{R}^n$. A line is then defined as an affine subspace of $\mathbb{A}^n$ of dimension $1$. Similarly, a plane is defined as a subspace of dimension $2$, and a hyperplane as a subspace of dimension $n-1$ (or codimension $1$). Note that in this way we can define parallelism too: two affine subspaces of $\mathbb{A}^n$ are parallel if the direction of one contains the direction of the other. Your example of the line $y=x+2$ is covered by this definition. This type of definition is part of so-called affine geometry, the part of euclidean geometry dedicated to the study of those properties of euclidean spaces that do not depend on the notion of distance or on that of angle (more formally, those that do not depend on the inner product of a euclidean vector space).
Show whether f is integrable or not
If $f$ is non-negative, then, for each partition $P$ of $[a,b]$, $L(f,P)\geqslant0$. Therefore, $\underline{\int_a^b}f=0$. Since we are assuming that $\overline{\int_a^b}f=0$, $f$ is integrable.
How to compare a sum of uniform RVs with a uniform RV?
This is an application of Lévy's conditional form of Borel-Cantelli lemma. The result is often stated as follows. Consider a sequence of events $(A_n)_n$ which is adapted to a given filtration $(\mathcal F_n)_n$. Then the random series $\sum\limits_n\mathbf 1_{A_n}$ converges/diverges almost surely if and only if the random series $\sum\limits_n\mathrm P(A_{n+1}\mid\mathcal F_{n})$ converges/diverges almost surely. Here, consider $\mathcal F_n=\sigma(X_k;k\leqslant n)$ and $A_{n+1}=[n^2X_{n+1}\leqslant S_n]$ with $S_n=X_1+\cdots+X_n$. Then $\mathrm P(A_{n+1}\mid\mathcal F_{n})=\frac1{n^2}S_n$. By the strong law of large numbers, $\frac1nS_n\to\mathrm E(X_1)=\frac12$ hence, almost surely, $S_n\gt\frac14n$ for every $n$ large enough. This proves that $\sum\limits_n\frac1{n^2}S_n$ diverges almost surely. Hence, almost surely, infinitely many events $A_n$ occur, QED.
Clarification regarding Silverman's proof of the description of Hilbert class field of a quadratic imaginary field
I think I have figured out a way to circumvent the problem, without deviating too much from Silverman's exposition. In case my question was not clear, let me state it again. As mentioned in the comments above by user reuns, Silverman proves that $\mathrm{Gal}(L/K) \simeq Cl(K)$ and the Artin map factors through $Cl(K)$. In order to prove that the Galois group is the class group Silverman tries to show that the map $F : \mathrm{Gal}(L/K) \to Cl(K)$ is surjective, by instead showing that $$ {I}_{\mathfrak{c}_{L/K}} \xrightarrow{(\cdot, L/K)} \mathrm{Gal}(L/K) \xrightarrow{F} Cl(K) $$ is surjective. The way he shows this composition is surjective is by first showing that it is simply the projection. Then he somehow concludes that this implies the conductor is $(1)$, which shows that $I_{\mathfrak{c}_{L/K}}$ is the group of fractional ideals in $K$, which in turn shows that the composition is surjective. Since I was not understanding how he concluded the conductor is $(1)$, I wanted to avoid this route. This can be avoided by using Dirichlet's theorem on primes in arithmetic progression. So that proves that $F$ is indeed an isomorphism. Now to prove that $\mathfrak{c}_{L/K} = (1)$ we note the following. Let $\mathbf{A}^*_K$ be the idele group of $K$. For an arbitrary modulus $\mathfrak{m} = \prod {v}^{e_{v}}$, let $\mathbf{A}^{\mathfrak{m}}_K$ be the group of ideles $(\alpha_v)$ such that $v(\alpha_v - 1) \geq e_v$ and $\alpha_v \in \mathcal{O}_{K,v}^{\times}$ whenever $e_v > 0$. Let $U_{\mathfrak{m}}$ be the group of ideles $(\alpha_v)$ such that $v(\alpha_v - 1) \geq e_v$ whenever $e_v > 0$ and $\alpha_v \in \mathcal{O}_{K,v}^{\times}$ for all $e_v = 0$. For any modulus $\mathfrak{m}$, we have $\mathbf{A}^{\mathfrak{m}}_K/U_{\mathfrak{m}}K^{\times} \simeq I_{\mathfrak{m}}/P_{\mathfrak{m}}$. In particular, $\mathbf{A}^*_K/U_{(1)}K^{\times} \simeq Cl(K)$. Denoting the reciprocity map on the ideles by $[\cdot, K]$, we have a diagram as follows.
$\require{AMScd}$ \begin{CD} \mathbf{A}^{\mathfrak{c}_{L/K}}_K @>>> \mathbf{A}^*_K @>{[\cdot, K]}>> \mathrm{Gal}(K^{\mathrm{ab}}/K) \\ @VVV @VVV @VVV \\ I_{\mathfrak{c}_{L/K}} @>>> Cl(K) @>{F^{-1}}>> \mathrm{Gal}(L/K) \end{CD} A priori we do not know that the right small square is commutative and this is exactly what we need to show $L$ is the Hilbert class field. But we do know that the outer rectangle and the left small square are commutative. This along with the fact that $\mathbf{A}^*_K = \mathbf{A}^{\mathfrak{c}_{L/K}}_K K^{\times}$ gives the required commutativity of the small right square. Thus $L$ is the Hilbert class field of $K$.
Is The Series $\sum_{n=1}^{\infty} \frac{n^4-3n+2}{4n^5+7}$ Divergent
No, this is not correct. I suggest you use the limit comparison test instead. Let us know if you don't know how to apply it in this case.
Sequent calculus and first incompletness theorem
There is an unfortunate clash of terminology when talking about completeness in logic (compounded by the fact that there is a Gödel's completeness theorem as well as a Gödel's incompleteness theorem). A formal system (like the sequent calculus or the more familiar natural deduction or Hilbert-style proof systems, possibly with proper axioms etc.) is called complete if semantic truth implies syntactic provability. That is, if a statement is true in all models of the system (it is semantically valid) then it is in fact provable in the system (it is syntactically derivable). This notion of completeness is what is meant in the wiki article on the sequent calculus. It is also the content of Gödel's completeness theorem: any standard Hilbert or natural deduction style system for first order logic is complete. The converse of this property, that whatever is provable is always true, is known as soundness. On the other hand, a formal system is (also) called complete if it is consistent and for any sentence $\varphi$ either $\varphi$ or $\lnot\varphi$ is provable in the theory. This notion of completeness is what is discussed in Gödel's incompleteness theorems. These two meanings of completeness are independent of one another; most systems of interest are complete in the former sense, but they may or may not be complete in the latter sense.
How to prove this truism: an infinite set cannot be included in a finite set?
Given a bijection $f : B \to \{ 1,2,\dots,n \}$, the restriction $f|_A$ is a bijection between $A$ and $f(A) \subset \{ 1,2,\dots,n \}$. To finish you can show that if $C \subset \{ 1,\dots,n \}$ then $C$ is finite, from which you could conclude that $f(A)$ is finite and thus so is $A$. One way to do that is to enumerate the elements of $C$ in sorted order (which can be done by the well ordering principle); this furnishes a bijection to some initial segment of $\mathbb{N}$; you can argue that this segment must be contained in $\{ 1,\dots,n \}$ (and thus equal to some $\{ 1,\dots,m \}$). This assumes that "finite" is defined as "can be put into bijection with $\{ 1,\dots,n \}$ for some $n$". If you instead define it as "not Dedekind-infinite", i.e. not in bijection with any of its proper subsets, then you can proceed by first taking a bijection $f$ from some proper subset $C \subsetneq A$ to $A$. Then you can extend $f$ to be onto $B$ by appropriately defining it on $B \setminus A$.
$A= \begin{bmatrix}-1 & 1 & 1 &1\\ 1 & -1 & 1 &1\\ 1 & 1 & -1 &1\\ 1 & 1 & 1 &-1\\ \end{bmatrix}$ $A$ is a symmetric matrix.Find its eigenvalues?
Hints: If the eigenvalues of $A+2I$ are $\lambda_1,\lambda_2,\lambda_3,\lambda_4$, then the eigenvalues of $A$ are $\lambda_1-2,\lambda_2-2,\lambda_3-2,\lambda_4-2$. Do you see why this is true? The matrix $A+2I$ is a $4 \times 4$ matrix of all ones. Hence, the rank of $A+2I$ is one, and it has at most one non-zero eigenvalue (so three of its eigenvalues are $0$). You can find the non-zero eigenvalue by finding the corresponding eigenvector by inspection like J. W. Tanner suggested.
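The shift argument is easy to confirm by exhibiting the eigenvectors explicitly (a sketch added here; `matvec` is a small helper, not part of the question):

```python
# A + 2I is the all-ones matrix J; J has eigenvalues 4, 0, 0, 0,
# hence A has eigenvalues 2, -2, -2, -2. Verify the eigenvectors directly.
A = [[-1 if i == j else 1 for j in range(4)] for i in range(4)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

ones = [1, 1, 1, 1]
assert matvec(A, ones) == [2 * x for x in ones]           # eigenvalue 2

for v in ([1, -1, 0, 0], [1, 0, -1, 0], [1, 0, 0, -1]):   # eigenvalue -2
    assert matvec(A, v) == [-2 * x for x in v]
```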
Maximum distance from the origin to the surface
From there you have a system of four equations, so you can use substitution to eliminate variables and solve it. $2x-\frac{x^3\lambda}{4}=0 \\ 2y-\frac{4y^3\lambda}{81}=0 \\ 2z-4z^3\lambda=0 \\ \frac{x^4}{16}+\frac{y^4}{81}+z^4=1$ (Note the factor of $4$ in the second equation: $\partial_y\left(\frac{y^4}{81}\right)=\frac{4y^3}{81}$.) Using the third one: $2z-4z^3\lambda = 0 \\ 1-2z^2\lambda=0 \\ \lambda=\frac{1}{2z^2}$ Plugging that into the first equation: $2x-\frac{x^3}{8z^2}=0 \\ 16z^2=x^2 \\ 4|z|=|x|$ Using the value for $\lambda$ in the second equation: $2y-\frac{2y^3}{81z^2}=0 \\ 81z^2 = y^2 \\ 9|z|=|y| $ Then putting all the values in terms of $z$ in the last equation: $ \frac{(4z)^4}{16}+\frac{(9z)^4}{81}+z^4=1 \\ 16z^4+81z^4+z^4=1 $ $ 98z^4 = 1$ $z=\pm\frac{1}{\sqrt[4]{98}}, \quad y = \pm\frac{9}{\sqrt[4]{98}}, \quad x=\pm\frac{4}{\sqrt[4]{98}} $ At this point the squared distance is $x^2+y^2+z^2=98z^2=\sqrt{98}$, which beats the critical points on the coordinate planes, so the maximum distance is $98^{1/4}$.
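As a cross-check, here is a short numerical verification (a sketch added here, not part of the original answer; the candidate point is recomputed from scratch against the constraint and the Lagrange conditions, so it also guards against slipped factors in the algebra):

```python
# Candidate critical point: z = 98**(-1/4), x = 4z, y = 9z.
z = 98 ** -0.25
x, y = 4 * z, 9 * z

# On the surface x^4/16 + y^4/81 + z^4 = 1?
assert abs(x**4 / 16 + y**4 / 81 + z**4 - 1) < 1e-12

# Lagrange conditions: grad(x^2+y^2+z^2) = lambda * grad(constraint).
lam = 2 * z / (4 * z**3)                        # from the z-equation
assert abs(2 * x - lam * x**3 / 4) < 1e-9       # x-equation
assert abs(2 * y - lam * 4 * y**3 / 81) < 1e-9  # y-equation

print(x * x + y * y + z * z)  # squared distance = sqrt(98) ~ 9.899
```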
Is $k[x,y]/(x,y) \otimes_{k[x,y]} (x,y) = 0$?
No this tensor product is not $0$. In general, if $I$ is an ideal in $R$ and $M$ an $R$-module, then $R/I \otimes_R M \cong M/IM$. Your example takes $R=k[x,y]$, $I=(x,y)$, and $M=I$, so $k[x,y]/(x,y) \otimes_{k[x,y]} (x,y) \cong (x,y)/(x,y)^2 \ne 0$. As for your argument, you do not have $[f] \otimes x=[xf] \otimes 1$; indeed, the last expression makes no sense since $1 \notin (x,y)$.
Canonical map from fundamental group to Fuchsian group?
The important fact you're missing is called the uniformization theorem, which says in this case that the hyperbolic plane is the universal cover of $S$.
Universal covering space of $\mathbb{R^3}\setminus S^1$
Here is a way to think about your space: $\mathbb{R}^3-S^1$ deformation retracts to a 3-dimensional ball minus $S^1$. As you mentioned, the fundamental group of your space is isomorphic to $\mathbb{Z}$ and is generated by a loop passing once through the removed $S^1$. With this in mind, most of the 3-dimensional ball is not needed, since $D^3-S^1$ deformation retracts to simply $S^2$ with its north and south poles connected by a line. This is the space you want to work with.
Convergence of series $ \sum_{n=1}^{\infty} \sin\left( \frac{n\pi}{6}\right)$
Hint: What is $\lim_{n\to\infty}\sin\left(\frac{n\pi}{6}\right)$?
how to divide set of n objects to 3 subsets in order to find the maximum of set's cardinality multiplication
For easier reading, denote $|S_1|, |S_2|, |S_3|$ by $x,y,z$ instead. Your objective function $f(x,y,z) = xy + yz + zx$ and you want to maximize $f$ subject to $x+y+z = n$. Your $f$ is called an elementary symmetric polynomial and a lot is known about them. https://en.wikipedia.org/wiki/Elementary_symmetric_polynomial For your specific purpose of maximizing $f$, that happens when $x,y,z$ split $n$ as evenly as possible, i.e. the difference between any two variables is $0$ or $1$, i.e., each variable $=\lfloor n/3 \rfloor$ or $\lceil n/3 \rceil$. Here is a simple proof. Consider a fixed $x$, then $f = x(y+z) + yz = x(n-x) + yz$ and it is easy to show that, when constrained by the choice of $x$, $f$ is maximized when $y, z$ split $(n-x)$ as evenly as possible. Obviously, this is true for any pair of variables. Therefore, any triplet $(x,y,z)$ where some pair of variables differ by $2$ or more cannot be a maximizing triplet.
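A brute-force check of the even-split claim for small $n$ (a sketch added here; `best_split` is a helper name introduced for illustration):

```python
# Over all splits x + y + z = n with x, y, z >= 0, maximize xy + yz + zx.
def best_split(n):
    return max(((x, y, n - x - y) for x in range(n + 1)
                for y in range(n + 1 - x)),
               key=lambda t: t[0] * t[1] + t[1] * t[2] + t[2] * t[0])

# The maximizer is always as even as possible: any two parts differ by <= 1.
for n in range(3, 30):
    x, y, z = best_split(n)
    assert x + y + z == n
    assert max(x, y, z) - min(x, y, z) <= 1
```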
Are there any real & decent mathematics video games?
If you're looking for something more complex, take a look at "Kerbal Space Program". This game works on the patched conics approximation. Some things in the game include:
* Flight planning - it is actually automatic, but you can still try to mess around with your own calculations on fuel consumption.
* Ideas of orbit transfer - this is not automatic; you can use different types of pre-planned orbital maneuvers (which you can set during the journey) or just calculate them.
* Using a mod called Principia, which sets the game world into an n-body model (see https://en.m.wikipedia.org/wiki/N-body_simulation). The n-body model requires more complicated planning.
With all this, disable all the assistance by the game, and you are set into a really complicated physics game with complicated calculations.
Finding linear recurrence for a sequence which depend on another recurrence
I don't see any immediate way for this to be rewritten into a linear recurrence. However, we can get a closed form. Let us rewrite $$a_n=\begin{bmatrix}a_{n,0}\\a_{n,1}\\\vdots\\a_{n,M}\end{bmatrix}$$ $$\lambda=\begin{bmatrix}\lambda_{0,0}&\lambda_{1,0}&\cdots&\lambda_{M,0}\\\lambda_{0,1}&\lambda_{1,1}&\cdots&\lambda_{M,1}\\\vdots&\vdots&\ddots&\vdots\\\lambda_{0,M}&\lambda_{1,M}&\cdots&\lambda_{M,M}\end{bmatrix}$$ with $\lambda_{i,0}=0$. We then have $$a_n=\lambda^n\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}$$ and finally, \begin{align}b_n&=\begin{bmatrix}1\\1\\\vdots\\1\end{bmatrix}^\intercal a_n\\&=\begin{bmatrix}1\\1\\\vdots\\1\end{bmatrix}^\intercal\lambda^n\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}\end{align} which can be used to quickly compute $b_n$ for large $n$, among other things.
convex/concave problem.
The only concave functions on $\mathbb{R}$ satisfying $f(x)>0$ everywhere are the constant functions, hence $x \mapsto {1 \over f(x)}$ is trivially convex. Suppose $a<b<c$; then using concavity we have $f(b) \ge f(a)+{b-a \over c-a} (f(c)-f(a))$, and rearranging gives $f(c) \le f(a) + { c -a \over b-a} (f(b)-f(a))$. Hence if $f(b) < f(a)$, we see that $\lim_{c \to \infty} f(c) = -\infty$. Similarly, if $f(b)>f(a)$, a similar calculation shows that $\lim_{c \to -\infty} f(c) = -\infty$. Hence $f(a)=f(b)$ for all $a,b$. (Note: the conclusion from the above is that any concave function on $\mathbb{R}$ that is bounded below must be constant.)
Name of the LU decomposition algorithm
A reference to the Doolittle's paper is missing on Wiki: Doolittle, M.H.: Method employed in the solution of normal equations and the adjustment of a triangularization. In: U. S. Coast and Geodetic Survey, Report, pp. 115–120 (1878). For more information, see Mathematicians of Gaussian Elimination by Grcar. Apparently, Doolittle worked as a computer.
Showing $F_{64}/F_2$ is Normal
During my qualifying oral exam, I was asked to construct a field of order $27$; so I wrote one polynomial and invoked a certain result, but was asked to do a more constructive construction that did not invoke the existence of splitting fields. I found an irreducible polynomial over $\mathbb{F}_3$ and did the usual construction. Great. Then I was given a different irreducible polynomial over $\mathbb{F}_3$ and was asked to prove the resulting extension was isomorphic to the one I had initially given. After 5 minutes of flailing to try to construct an explicit isomorphism, I was told to step back and look at the polynomial I had originally written down: $x^{27}-x$. Basically: since a finite subgroup of the multiplicative group of a field (in fact, of an integral domain) must be cyclic, the multiplicative subgroup of $\mathbb{F}_{p^n}$ is cyclic of order $p^{n}-1$, and therefore every nonzero element satisfies the polynomial $x^{p^n-1}-1$. Therefore, the elements of $\mathbb{F}_{p^n}$ are all roots of $x^{p^n}-x$, and these are all the roots in an algebraic closure of $\mathbb{F}_{p}$. And no strictly smaller field has all the roots, because this polynomial is separable (its derivative is $-1$). Thus, $\mathbb{F}_{p^n}$ is the splitting field of $x^{p^n}-x$ over $\mathbb{F}_p$, and therefore it is normal (and unique up to isomorphism over $\mathbb{F}_p$).
How can I linearize the distance from two points?
Here's a mixed integer linear programming formulation with 220 variables and $1+20\binom{100}{2}$ linear constraints. Let $V=\{1,\dots,20\}$ denote the set of villages. Let parameter $d_{i,j}^v$ denote the distance from village $v$ to location $(i,j)$. Let $$P=\{(i_1,j_1,i_2,j_2)\in\{1,\dots,100\}^4:i_1 < i_2 \lor ((i_1 = i_2) \land (j_1 < j_2))\}$$ be the set of pairs of distinct locations. Let binary decision variable $y_{i,j}$ indicate whether a package is dropped at $(i,j)$, as in @Kuifje's answer. Let decision variable $z_v$ be the distance from village $v$ to the closest package. The problem is to minimize $\sum_v z_v$ subject to: \begin{align} \sum_{i,j} y_{i,j} &= 2\\ z_v &\ge \min\left(d_{i_1,j_1}^v,d_{i_2,j_2}^v\right)(y_{i_1,j_1}+y_{i_2,j_2}-1)&&\text{for all $v\in V, (i_1,j_1,i_2,j_2)\in P$}\\ y_{i,j} &\in\{0,1\}&&\text{for all $i,j$} \end{align} The idea is that $y_{i_1,j_1} = y_{i_2,j_2} = 1$ forces $z_v \ge \min\left(d_{i_1,j_1}^v,d_{i_2,j_2}^v\right)$.
Prove that if $A$, $B$ are similar matrices then for every $\lambda$ $\in$ $\mathbb{R}$ the matrices $A-\lambda I$ and $B-\lambda I$ are similar.
Take $M=P$: if $B=P^{-1}AP$, then $$P^{-1}(A-\lambda I)P = P^{-1}AP - \lambda P^{-1}P = B-\lambda I,$$ so the same $P$ conjugates $A-\lambda I$ to $B-\lambda I$.
Limit of hyperbolic function
Since $\cosh(h)=\frac{e^h}{2}+\frac{e^{-h}}{2}$, the limit is $$\frac{1}{2}\lim_{h\to 0}\left(\frac{e^h-1}{h}\right)+\frac{1}{2}\lim_{h\to 0}\left(\frac{e^{-h}-1}{h}\right)=\frac{1}{2}\lim_{h\to 0}\left(\frac{e^h-1}{h}\right)-\frac{1}{2}\lim_{h\to 0}\left(\frac{e^{h}-1}{h}\right)=0.$$ Note that we have to use the fact that the limit exists here, but we know it does because $e^x$ is differentiable, and so $\cosh$ is the sum of differentiable functions.
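A quick numerical check (a sketch added here) that the difference quotient of $\cosh$ at $0$ really tends to $0$:

```python
import math

# Since cosh(h) - 1 = h^2/2 + O(h^4), the quotient (cosh(h)-1)/h
# is about h/2, so it is bounded by h for small positive h.
for h in [0.1, 0.01, 0.001, 1e-6]:
    q = (math.cosh(h) - 1) / h
    assert 0 < q <= h
```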
Find $P(A^\complement \mid B)$ given the following information: $P(B \mid A)=0.4, P(A)=0.9, P(B \mid A^\complement)=0.7$...
$$P(B\mid A) = 0.4, \quad P(A) = 0.9, \quad P(B\mid A^\complement) = 0.7$$ $$P(A^\complement\mid B) = \dfrac{P(B\mid A^\complement)P(A^\complement)}{P(B)}$$ $$P(A^\complement) = 1-P(A) = 1-0.9 = 0.1$$ $$P(B) = P(B\cap A) + P(B\cap A^\complement) = P(B\mid A)P(A)+P(B\mid A^\complement)P(A^\complement) = (0.4)(0.9)+(0.7)(0.1)$$ Putting it all together: $$P(A^\complement\mid B) = \dfrac{(0.7)(0.1)}{(0.4)(0.9)+(0.7)(0.1)} = \dfrac{7}{43} \approx 0.163$$
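The same computation in a few lines of code:

```python
# Bayes' rule with the law of total probability in the denominator.
p_a = 0.9
p_b_given_a = 0.4
p_b_given_ac = 0.7

p_ac = 1 - p_a
p_b = p_b_given_a * p_a + p_b_given_ac * p_ac   # total probability
p_ac_given_b = p_b_given_ac * p_ac / p_b        # Bayes' rule

assert abs(p_ac_given_b - 7 / 43) < 1e-9
```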
What are the different symbols used for denoting an angle?
$\angle$ is usually used to denote a standard angle, whereas $\measuredangle$ is used to denote a directed angle. That is, given two non-parallel lines $\ell$ and $m$, the directed angle $\measuredangle(\ell, m)$ denotes the measure of the angle starting from $\ell$ and ending at $m$, measured counterclockwise.
Determine that the function $f(x)=\sqrt{x^2-x-6} \text{ in } x_0=3$ is continous with the $\varepsilon$-$\delta$-definition of limit/criterion
First of all, the function $f$ is not defined on $\mathbb{R}$ but for $$ x\in(-\infty,-2]\cup[3,+\infty). $$ So one can only talk about the continuity of $f$ at $x=3$ from the right. For $x>3$, you are right to get $$ |f(x)-f(3)|=\sqrt{x-3}\sqrt{x+2}. $$ Note that you don't need the absolute value for the square root terms. But then you made a mistake: the $\delta$ you get must be positive. The term $\sqrt{x-3}$ should not be dropped; it is what gives you the desired $\delta$. Consider instead for $0<x-3<1$ the inequality $$ \sqrt{x-3}\sqrt{x+2}\leq 6\sqrt{x-3}\le\varepsilon. $$
Is there a non trivial ideal for the set of upper triangular matrices?
Below I categorize all such nontrivial ideals. It should be clear from the proof how one could easily generate a bunch of nontrivial ideals. Let $\mathfrak{i}$ be an ideal of $\mathfrak{R}$ and let $\mathbf{T}_{ij}$ be the matrix with $1$ at the $(i,j)$ position and $0$ everywhere else. Take $\mathbf{A}\in \mathfrak{i}$. Then $$\begin{eqnarray*}a_{ij}\mathbf{T}_{ij}&=\mathbf{T}_{ii}\mathbf{A}\mathbf{T}_{jj}&\in \mathfrak{i}\end{eqnarray*}$$ Therefore the set of all $(i,j)$ entries of matrices in $\mathfrak{i}$ (which I will write $I_{ij}$) is an ideal of $\mathbb{Z}$, so we have that $I_{ij}=\left(n_{ij}\right)$ for some $n_{ij}\in \mathbb{Z}$. It follows that $$\mathfrak{i}=\{\mathbf{A}\in\mathfrak{R}:d_{ij}\mid a_{ij}\}$$ for some $d_{ij}\in\mathbb{Z}$. Using Bezout's identity, we have $d_{ij}\mid d_{ik}$ for all $j>k$ and $d_{ij}\mid d_{\ell j}$ for all $i>\ell$. In other words, each $d_{ij}$ divides the predecessors of the same row and all entries underneath in the same column.
Integral of $\dfrac{\cos(x)}{5+3\cos(x)}$
Calculate the integral $$\int \frac{\cos \left(x\right)}{5+3\cos \left(x\right)}dx$$ First, apply the substitution rule: $$\int f(g(x))\cdot g'(x)dx=\int f(u)du, \quad u=g(x) $$ Here take $u=\tan(\frac{x}{2})$ (the Weierstrass substitution), so that $dx=\frac{2}{1+u^2}du$ and $\cos \left(x\right)=\frac{1-u^2}{1+u^2}$. $$\Rightarrow \int \frac{\cos \left(x\right)}{5+3\cos \left(x\right)}dx =\int \frac{\frac{1-u^2}{1+u^2}}{5+3\frac{1-u^2}{1+u^2}}\frac{2}{1+u^2}du =\int \frac{1-u^2}{u^4+5u^2+4}du $$ Now take the partial fraction decomposition of $\frac{1-u^2}{u^4+5u^2+4}$: \begin{align} \int \frac{1-u^2}{u^4+5u^2+4}du&=\int \frac{2}{3\left(u^2+1\right)}-\frac{5}{3\left(u^2+4\right)}du\\ &=\int \frac{2}{3\left(u^2+1\right)}du-\int \frac{5}{3\left(u^2+4\right)}du \\ &= \frac{2\arctan(u)}{3} - \frac{5\arctan(\frac{u}{2})}{6} \end{align} Now substitute back to get $$=\frac{2\arctan \left(\tan \left(\frac{x}{2}\right)\right)}{3}-\frac{5\arctan \left(\frac{\tan \left(\frac{x}{2}\right)}{2}\right)}{6}$$ Don't forget to add a constant to the solution ;-) So your solution is $$\int \frac{\cos \left(x\right)}{5+3\cos \left(x\right)}dx=\frac{2\arctan \left(\tan \left(\frac{x}{2}\right)\right)}{3}-\frac{5\arctan \left(\frac{\tan \left(\frac{x}{2}\right)}{2}\right)}{6}+C$$
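A numerical spot check (a sketch added here) that the antiderivative differentiates back to the integrand on $(-\pi,\pi)$, where $\tan(x/2)$ is defined:

```python
import math

def F(x):
    # Antiderivative found above (valid on (-pi, pi)).
    t = math.tan(x / 2)
    return 2 * math.atan(t) / 3 - 5 * math.atan(t / 2) / 6

def integrand(x):
    return math.cos(x) / (5 + 3 * math.cos(x))

# Central-difference derivative of F should match the integrand.
h = 1e-6
for x in [-2.0, -0.7, 0.3, 1.1, 2.5]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6
```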
Smallest Prime Factor - Why does this algorithm find prime numbers?
In the loop $i$ is equal to $30k+7$, where $k$ is a non-negative integer. Let's consider what values can't be prime. I'll list the offsets and remove them as they're eliminated. $$0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29$$ We start with $7$, which is odd. Adding an odd offset would give us an even number, which is divisible by $2$. Eliminate those. $$0,2,4,6,8,10,12,14,16,18,20,22,24,26,28$$ We're adding multiples of $30$, which are divisible by $3$, and $30k+7\equiv 1\pmod 3$. So adding $2$ makes the whole number divisible by $3$, and $2$ gets skipped, as well as $2+3m$ for every $m$. Eliminate those. $$0,4,6,10,12,16,18,22,24,28$$ Same logic for $5$. Since the last digit is $7$ without an offset, adding $3,8,13$, etc., would make it a multiple of $5$. Eliminate those. $$0,4,6,10,12,16,22,24$$ And you're left with the numbers that are checked. We increment by $30$ since $2\cdot3\cdot5=30$, and we checked divisibility by those primes at the very beginning. Adding $30$ produces the same pattern of divisibility at the same offsets, so from then on we just need to dodge those offsets.
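A short script (a sketch I added) confirms that every prime above $5$ really does land on one of the surviving offsets modulo $30$:

```python
# the offsets from 7 that survive the eliminations above
offsets = {0, 4, 6, 10, 12, 16, 22, 24}

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# every prime above 5 is congruent to 7 + offset (mod 30)
for p in range(7, 10000):
    if is_prime(p):
        assert (p - 7) % 30 in offsets
```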
Prove $f$ is derivable and find $f'(0)$
Set $p(y)=2y^3-3y^2+6y$. Since $p'(y)=6(y^2-y+1)>0$ for all $y$, the Inverse Function Theorem shows that $p$ is invertible with differentiable inverse, and $$f(x)=p^{-1} (x).$$ To compute $f'$, differentiate both sides of $2(f(x))^3 - 3(f(x))^2 + 6f(x) = x$, which yields $$f'(x)=\frac{1}{6(f(x)^2 - f(x)+1)}.$$ If $z$ is the real root of $p(y)=0$, then $f'(0)=\frac{1}{6(z^2-z+1)}$; the only real root is $z=0$, so $f'(0)=\frac{1}{6}$. Generalization: if $g$ and $h$ are real functions with $g'>0$ and $h'>0$ on $\mathbb R$, then $g(f(x))=h(x)$ defines a well-defined differentiable function $f$.
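One can also check $f'(0)=\frac16$ numerically; the sketch below (my own, not from the original) inverts $p$ with Newton's method and estimates the derivative by a central difference:

```python
def p(y):
    return 2 * y**3 - 3 * y**2 + 6 * y

def f(x, tol=1e-14):
    # Newton's method; p'(y) = 6(y^2 - y + 1) >= 4.5 > 0, so it converges here
    y = 0.0
    for _ in range(100):
        step = (p(y) - x) / (6 * (y * y - y + 1))
        y -= step
        if abs(step) < tol:
            break
    return y

h = 1e-6
fprime0 = (f(h) - f(-h)) / (2 * h)   # central-difference estimate of f'(0)
assert abs(fprime0 - 1 / 6) < 1e-6
```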
Prove that the upper-half space $H^k$ is closed and that its interior is the positive upper-half space.
For $x \in \Bbb R^k\setminus H^k$, i.e. with $x_k < 0$, the (euclidean) open ball $B(x, -x_k/2)$ centered at $x$ with radius $-x_k/2$ is contained in $\Bbb R^k\setminus H^k$: for $y=(y_1, \dots, y_k) \in B(x, -x_k/2)$ you have $$0 \le \left\vert y_k - x_k\right\vert \le \sqrt{(y_1-x_1)^2 + \dots +(y_k - x_k)^2} \lt -x_k/2$$ and therefore $$y_k \lt x_k - \frac{x_k}{2} = \frac{x_k}{2} \lt 0.$$ This proves that $\Bbb R^k\setminus H^k$ is open and therefore $H^k$ closed. Also $H_+^k$ is open (a similar proof to the one above, using balls) and contained in $H^k$, therefore $H_+^k \subseteq \text{int}(H^k)$. Conversely, if $x$ is such that $x_k > 0$ then $x \in \text{int}(H^k)$: again a similar ball argument. And if $x \in H^k$ with $x_k=0$, then any nonempty open ball centered at $x$ intersects $\Bbb R^k\setminus H^k$. This proves that $\text{int}(H^k) \subseteq H_+^k$ and finally $H_+^k = \text{int}(H^k)$.
shortest distance in triangle
EXE : Consider a triangle of side lengths $b+c,\ \sqrt{b^2+\varepsilon^2},\ \sqrt{ c^2+\varepsilon^2}$ such that the area is $\frac{1}{2}\varepsilon (b+c)$. Prove that $b+ \sqrt{c^2+\varepsilon^2} -\{ c+ \sqrt{ b^2+\varepsilon^2} \} >0$ when $b>c$. Proof : Consider \begin{align*}&(b+ \sqrt{c^2+\varepsilon^2} )^2- (c+ \sqrt{ b^2+\varepsilon^2})^2\\&= 2b\sqrt{c^2+\varepsilon^2} -2c\sqrt{ b^2+\varepsilon^2}\\&>0 \end{align*} where the last inequality holds because squaring gives $b^2(c^2+\varepsilon^2)-c^2(b^2+\varepsilon^2)=(b^2-c^2)\varepsilon^2>0$ when $b>c$. EXE : Consider a triangle $\Delta\ xyz$ whose incircle has center $o$. Further, $o$ has a foot $x'$ on $[yz]$, i.e. $[ox']\perp [yz]$. When $|y-x'|=b,\ |y-o|=B,\ |z-x'|=c,\ |z-o|=C,\ |x-z'|=a,\ |x-o|=A$, consider the two paths $oyzxo,\ oyxzo$; then by cancelling, $${\rm length}\ oyzxo -{\rm length}\ oyxzo =c+A-\{a+C\}.$$ By the previous EXE, we can determine the sign of the above.
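A quick randomized spot check of the first claim (my addition, with arbitrarily chosen ranges):

```python
import math
import random

# check b + sqrt(c^2 + eps^2) > c + sqrt(b^2 + eps^2) whenever b > c > 0
random.seed(0)
for _ in range(1000):
    c = random.uniform(0.1, 10.0)
    b = c + random.uniform(0.01, 10.0)   # ensures b > c
    eps = random.uniform(0.01, 5.0)
    assert b + math.sqrt(c * c + eps * eps) > c + math.sqrt(b * b + eps * eps)
```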
Connections between Cesaro summation and Borel summation of series
For the first question, the answer is negative. For a counterexample it suffices to take $x_n=\left\{ \begin{array}{ll} 1, & \hbox{if }n\in 2\mathbb{N} \\ 0, & \hbox{otherwise.} \end{array} \right.$ Then the partial sums satisfy $S_{2n}=S_{2n+1}=n+1,\forall n\in\mathbb{N}$, so $$\lim_{n\to\infty}\dfrac{S_n}{n+1}=\frac{1}{2},$$ which means that $(x_n)$ converges in the Cesaro sense to $\dfrac{1}{2}$. On the other hand, since $S_n\sim n/2$, $$e^{-x}\sum_{n=0}^\infty S_n\frac{x^n}{n!}=e^{-x}\sum_{n=0}^\infty(n+1) \frac{x^{2n}}{(2n)!}+e^{-x}\sum_{n=0}^\infty(n+1) \frac{x^{2n+1}}{(2n+1)!}\sim\frac{x}{2}\to+\infty\mbox{ as } x\to+\infty.$$ Then $(x_n)$ does not converge in the Borel sense.
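Numerically (a sketch I added, using the definitions above), the Cesàro means settle at $1/2$ while the Borel transform of the partial sums keeps growing:

```python
import math

# partial sums S_n of x_n = 1 (n even), 0 (n odd)
N = 2000
S, s = [], 0
for n in range(N):
    s += 1 if n % 2 == 0 else 0
    S.append(s)

# Cesàro mean of (x_n): S_n / (n+1) -> 1/2
assert abs(S[N - 1] / N - 0.5) < 1e-3

def borel_transform(x):
    # e^{-x} * sum S_n x^n / n!, with the term computed iteratively
    total, term = 0.0, 1.0          # term = x^n / n!
    for n in range(N):
        total += S[n] * term
        term *= x / (n + 1)
    return math.exp(-x) * total

# grows roughly like x/2, so there is no finite Borel limit
assert borel_transform(10.0) < borel_transform(50.0) < borel_transform(100.0)
```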
Determining an Exterior Normal
Let's assume, as Edgar Matias suggested, that our surface is compact, so we have the interior as the bounded region and the exterior as the unbounded one. I don't think that you can answer this question by considering the gradient locally, since it's not too hard to imagine two manifolds with the same gradient at a point, but where in one case the gradient points inward, and in the other case it points outward (imagine a shape that folds over itself). One possible way of answering this question is to integrate the gradient over the manifold, fixing the outward orientation on the manifold. If the integral is positive, the gradient was the external normal; otherwise it was the internal normal.
After rolling two dice and flipping 12 coins. What is the probability that the # of heads is equal to the sum of the numbers showing on the two dice?
Let $X$ be the sum of the two numbers on the dice. Then $$\Pr[X = x] = \begin{cases}\frac{6-|x-7|}{36}, &amp; x \in \{2, 3, \ldots, 12\}, \\ 0 &amp; \text{otherwise}. \end{cases}$$ Let $Y$ be the number of heads flipped out of $12$ coins. Then $$\Pr[Y = y] = \binom{12}{y} (1/2)^y (1 - 1/2)^{12-y} = \frac{1}{2^{12}} \binom{12}{y}, \quad y \in \{0, 1, \ldots, 12\}.$$ Then the desired probability is $$\Pr[X = Y] = \sum_{x=0}^{12} \Pr[X = x]\Pr[Y = x] = \frac{1}{6^2 2^{12}} \sum_{x=2}^{12} (6 - |7-x|)\binom{12}{x} .$$
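The sum evaluates to $\frac{18109}{147456}\approx 0.1228$; a short script (my addition) checks the closed form against a direct enumeration of the dice outcomes:

```python
from math import comb

# brute force over the two dice; binomial distribution for the 12 coins
p = 0.0
for d1 in range(1, 7):
    for d2 in range(1, 7):
        x = d1 + d2
        p += (1 / 36) * comb(12, x) / 2**12

# the closed form from the answer
num = sum((6 - abs(7 - x)) * comb(12, x) for x in range(2, 13))
assert num == 18109
assert abs(p - num / (36 * 2**12)) < 1e-15
```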
number of matrices of rank 3?
The first column can be any non-zero vector in $\mathbb{F}^4$; there are $3^4-1$ of those. Having chosen one of them, call it $c_1$, the second column can be any vector not in $\operatorname{span}(c_1)$; there are $3^4-3$ of those. The third column needs to be outside $\operatorname{span}(c_1,c_2)$, so it is one of $3^4-3^2$ options. The answer is thus C.
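The same column-counting argument can be brute-forced in a smaller case, say $3\times 2$ matrices of rank $2$ over $\mathbb{F}_2$ (a sketch I added; over $\mathbb{F}_2$ the span of a nonzero column $c_1$ is just $\{0, c_1\}$, which mirrors the "outside the span" step):

```python
from itertools import product

# count 3x2 matrices over GF(2) with independent columns;
# the formula gives (2^3 - 1)(2^3 - 2) = 7 * 6 = 42
vectors = list(product(range(2), repeat=3))
count = sum(1 for c1 in vectors for c2 in vectors
            if c1 != (0, 0, 0) and c2 not in {(0, 0, 0), c1})
assert count == (2**3 - 1) * (2**3 - 2)
```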
Can we guarantee that there exists an $\epsilon' > 0$ such that holds for this inequality?
If $a_n \to a$ and $b_n \to b$, then $(b_n)$ is bounded, so there is some $M>0$ such that $|a|\le M$ and $|b_n| \le M$ for all $n$. Then $|a_nb_n -ab| = |a_nb_n -a b_n + a b_n -ab| \le |a_n-a|\, |b_n| + |a|\, |b_n-b| \le M (|a-a_n|+ |b-b_n|)$. Now choose $N$ big enough so that $|a-a_n|, |b-b_n| < {\epsilon \over 2 M}$ for all $n \ge N$.
The radius of convergence has to be determined
Hint: Group everything in terms of one exponent, like so: $$\sum\limits_{n = 0}^{\infty} \left(\frac{x - 3}{9}\right)^n$$ What kind of series is this?
Sequence of independent random variables, mean = 0
Assume that $P[X_n=n^3-n]=1/n^2$ and $P[X_n=-n]=1-1/n^2$ for every $n$, then $E[X_n]=0$ for every $n$ and the series $\sum\limits_nP[X_n\ne-n]$ converges hence by (the easy part of) Borel-Cantelli lemma, $X_n=-n$ for every $n$ large enough, almost surely, in particular $\sum\limits_{i=1}^nX_i\sim-\frac12n^2$ almost surely, which is enough to conclude. The independence hypothesis is not needed.
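An exact check (my addition) that the stated two-point distribution indeed has mean zero:

```python
from fractions import Fraction

# E[X_n] = (n^3 - n)/n^2 + (-n)(1 - 1/n^2) should vanish for every n >= 2
for n in range(2, 100):
    mean = Fraction(n**3 - n, n**2) + (-n) * (1 - Fraction(1, n**2))
    assert mean == 0
```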
Using Fermat's Little Theorem for remainders
$15 = 8+4+2+1$, hence $6^{15} = 6^8 \cdot 6^4\cdot 6^2\cdot 6^1$. Then:

$6^1 \equiv 6 \pmod{17}$

$6^2 = 36 \equiv 2 \pmod{17}$

$6^4 = (6^2)^2 \equiv 2^2 = 4 \pmod{17}$

$6^8 = (6^4)^2 \equiv 4^2 = 16 \pmod{17}$

Therefore $6^{15} \equiv 16 \cdot 4\cdot 2 \cdot 6 = 16\cdot 48 = (17-1)(51-3) \equiv (-1)(-3) = 3 \pmod{17}$
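Python's three-argument `pow` (modular exponentiation) confirms each step:

```python
# each repeated-squaring step, then the final product, all mod 17
assert pow(6, 1, 17) == 6
assert pow(6, 2, 17) == 2
assert pow(6, 4, 17) == 4
assert pow(6, 8, 17) == 16
assert (16 * 4 * 2 * 6) % 17 == 3
assert pow(6, 15, 17) == 3
```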
An integral formula for the multiplication of inverses.
The formula you cite is correct; in fact, it can be generalized. Your error is assuming $x_2$ integrates from $0$ to $1$; it should be $0$ to $1-x_1$. So the correct result for$$a:=a_1,\,b:=a_2,\,c:=a_3,\,m_1:=a-c,\,m_2:=b-c,\,m_3:=c$$is$$\int_0^1dx_1\int_{m_1x_1+m_3}^{m_1x_1+m_2(1-x_1)+m_3}\frac{du}{m_2u^3}\\=\frac{1}{2m_2}\int_0^1dx_1\left(\frac{1}{(m_1x_1+m_3)^2}-\frac{1}{((m_1-m_2)x_1+m_2+m_3)^2}\right)\\=\frac{1}{2m_2}\left(\frac{1}{m_3(m_1+m_3)}-\frac{1}{(m_1+m_3)(m_2+m_3)}\right)\\=\frac{1}{2(m_1+m_3)m_3(m_2+m_3)}=\frac{1}{2acb}.$$
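A numeric spot check of the corrected region (my addition; midpoint rule in both variables, with sample values of $a,b,c$ chosen arbitrarily):

```python
def check(a, b, c, n=400):
    # midpoint rule in both variables for
    # int_0^1 dx int_{m1*x+m3}^{m1*x+m2*(1-x)+m3} du / (m2 * u^3)
    m1, m2, m3 = a - c, b - c, c
    total = 0.0
    hx = 1.0 / n
    for i in range(n):
        x = (i + 0.5) * hx
        lo = m1 * x + m3
        hi = m1 * x + m2 * (1 - x) + m3
        hu = (hi - lo) / n          # signed step handles hi < lo correctly
        total += sum(1.0 / (m2 * (lo + (j + 0.5) * hu) ** 3)
                     for j in range(n)) * hu * hx
    return total

# should equal 1 / (2abc) in each case
for a, b, c in ((1.0, 2.0, 3.0), (2.0, 5.0, 1.0)):
    assert abs(check(a, b, c) - 1.0 / (2 * a * b * c)) < 1e-4
```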
The intuition behind reparameterization of a curve
The point set traced by the curve does not change, but time-dependent quantities such as speed and acceleration can change arbitrarily. Think of the same speedway used by all the competing supercars, and also by a repair road-roller. But differential-geometric quantities (slope, arc length, and everything else determined by the first fundamental form) are invariant.
find formula for differential equation
Hint. The solution is the sum of the general solution of the homogeneous equation that you have found, $$y=c_1e^{-2t}+c_2e^{-t},$$ and a particular solution, which, as you can easily check, is $y=te^{-t}$; then find the constants from the initial conditions. To find the particular solution, use the method of undetermined coefficients with the ansatz $y=ate^{-t}$: $$y'=ae^{-t}-ate^{-t} \qquad y''=-ae^{-t}-ae^{-t}+ate^{-t}$$ Substituting in the given equation: $$-2ae^{-t}+ate^{-t}+3ae^{-t}-3ate^{-t}+2ate^{-t}=e^{-t} \Rightarrow ae^{-t}=e^{-t} \Rightarrow a=1$$
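A finite-difference sketch (my own check) that $y=te^{-t}$ really satisfies $y''+3y'+2y=e^{-t}$:

```python
import math

def y(t):
    return t * math.exp(-t)

h = 1e-4
for t in (0.0, 0.5, 1.0, 2.0):
    y1 = (y(t + h) - y(t - h)) / (2 * h)            # central difference for y'
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2    # central difference for y''
    assert abs(y2 + 3 * y1 + 2 * y(t) - math.exp(-t)) < 1e-6
```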
For h $\in \mathbb R$ and $h \gt -1$ and n $\in \mathbb N$: prove $1 + n \cdot h \le (1 + h)^n$
The positive case can be taken care of by the binomial theorem: for $h\ge 0$, $$(1+h)^n=\sum_{k=0}^n\binom{n}{k}h^k\ge 1+nh,$$ since every term of the sum is nonnegative.
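A quick numerical illustration (mine) of the inequality over both the positive range and $-1<h<0$:

```python
# Bernoulli's inequality: (1 + h)^n >= 1 + n*h for h > -1 and n in N
for n in range(0, 25):
    for h in (-0.99, -0.5, 0.0, 0.1, 1.0, 3.0):
        assert (1 + h) ** n >= 1 + n * h - 1e-12   # tolerance for float roundoff
```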
Orthogonal decomposition with a special inner product
Yes, there is always such a decomposition, and we don't even need to refer to a particular matrix $\bf A$. Every null space of a matrix is a subspace, and every subspace is the null space of some matrix, so we may as well just show the claim for subspaces $S$. For any inner product $\langle \,\cdot\,,\,\cdot\,\rangle$ on $\Bbb R^p$, any subspace $S \subset \Bbb R^p$ defines an orthogonal subspace $$S^{\perp} := \{{\bf x} \in \Bbb R^p : \langle {\bf x}, {\bf y} \rangle = 0 \textrm{ for all ${\bf y} \in S$}\} .$$ Now, nondegeneracy implies that $\dim S + \dim S^\perp = p$, and positive definiteness implies that $S \cap S^{\perp} = \{{\bf 0}\}$: For any ${\bf x} \neq {\bf 0}$ we have $\langle {\bf x}, {\bf x} \rangle > 0$, so if ${\bf x} \in S$, we must have ${\bf x} \not\in S^{\perp}$. Thus, $\Bbb R^p$ decomposes as an orthogonal direct sum $$\boxed{\Bbb R^p = S \oplus S^{\perp}} .$$ In particular, we can decompose any element ${\bf x} \in \Bbb R^p$ uniquely as a sum $${\bf x} = {\bf x}^\top + {\bf x}^\perp, \qquad \textrm{where} \qquad {\bf x}^\top \in S, {\bf x}^\perp \in S^{\perp} ,$$ and by definition $\langle {\bf x}^\top, {\bf x}^\perp \rangle = 0$.
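A concrete sketch (my example, not from the original) in $\Bbb R^2$ with the inner product $\langle u,v\rangle = 2u_1v_1 + u_1v_2 + u_2v_1 + 2u_2v_2$ (positive definite) and $S=\operatorname{span}\{(1,0)\}$:

```python
def inner(u, v):
    # <u, v> = u^T A v with A = [[2, 1], [1, 2]]
    return 2*u[0]*v[0] + u[0]*v[1] + u[1]*v[0] + 2*u[1]*v[1]

e1, s_perp = (1.0, 0.0), (1.0, -2.0)
# s_perp spans S^perp: <(1,0), (1,-2)> = 2*1 + 1*(-2) = 0
assert inner(e1, s_perp) == 0

x = (3.0, 4.0)
# write x = a*e1 + b*s_perp; the second coordinate gives b, the first gives a
b = x[1] / s_perp[1]
a = x[0] - b * s_perp[0]
x_top = (a * e1[0], a * e1[1])        # component in S
x_perp = (b * s_perp[0], b * s_perp[1])  # component in S^perp
assert inner(x_top, x_perp) == 0
assert (x_top[0] + x_perp[0], x_top[1] + x_perp[1]) == x
```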
How to choose $a,b\in I$, so $ab\notin I^3$?
If $xy\in I^3$ for every $x,y\in I$, then all the generators of $I^2$ are in $I^3$, and therefore $I^2\subseteq I^3\subseteq I^2$. By contraposition, if one assumes $I^2\neq I^3$, there must exist $x,y\in I$ such that $xy\notin I^3$.
Proving divergence of the fraction of the square root - without using proof by contradiction
What can you say about $a_{k^2}$ and $a_{k^2 - 1}$ as $k$ ranges through the positive integers? What's the limit of their difference?
Fourier transformation of $f(t) = \frac{1}{1+9t^2} $
Because $$f(t)=\frac{1}{1+(3t)^2},$$ so you can apply the scaling property of the Fourier transform to the known transform of $\frac{1}{1+t^2}$.
Are vectors $v(x) , u(x)$ and $w(x)$ linear dependent in $\mathbb R^{\mathbb R}$ if $u(x)=|x-2| , v(x)=|x-3| , w(x)=|x-5|?$
Let $a,b,c \in \mathbb R$ be such that $au+bv+cw=0$. This means: (*) $a|x-2|+b|x-3|+c|x-5|=0$ for all $x \in \mathbb R$. Consider (*) for $x=2$, $x=3$ and $x=5$. These considerations give you a system of $3$ equations for $a,b,c$. Solve this system and see what happens!
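Carrying out the suggested evaluation (my sketch): at $x=2,3,5$ the coefficient matrix of the system has nonzero determinant, so only $a=b=c=0$ works, i.e. the functions are linearly independent:

```python
# evaluate u, v, w at x = 2, 3, 5; rows of the resulting 3x3 system
rows = [[abs(x - 2), abs(x - 3), abs(x - 5)] for x in (2, 3, 5)]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

assert det3(rows) != 0   # nonzero determinant => only the trivial solution
```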
Algorithm to find integer combinations satisfying a set of inequalities
Combine the inequalities into two: $$W \le M \\ \frac FM \le K \le N-WQ$$ Firstly I note that the constraint filters effectively treat $Q$ as a free variable in the same way as $F$ and $N$. Consider constraint 3: For combinations $(M, K, W, Q)$ of $\mathcal{S}$ having the same $(M, W, Q)$, only keep the $(M, K, W, Q)$ with the largest $K$. This implies $K = N - WQ$ and we can eliminate and ignore it. $$W \le M \\ F \le M(N-WQ)$$ For combinations $(M, K, W, Q)$ of $\mathcal{S}$ having the same $(M, Q)$, only keep the $(M, K, W, Q)$ with the largest $W$; so maximise $W$ as a function of $M$. We have $W \le M$ and $W \le \frac{N - F/M}Q$, so this implies $W = \min\left(M, \left\lfloor\frac{N - F/M}Q\right\rfloor\right)$. For combinations $(M, K, W, Q)$ of $\mathcal{S}$ having the same $(W, Q)$, only keep the $(M, K, W, Q)$ with the smallest $M$; If $M \le \left\lfloor\frac{N - F/M}Q\right\rfloor$, we can't reduce $M$ because that would reduce $W$. Otherwise, decreasing $M$ cannot violate $W \le M$, so we decrease $M$ as far as the remaining constraint allows, until $M = W$. So we have the following constraints: $$ W = M \\ \frac FM \le K = N - WQ $$ Substituting the first into the second we have $$F \le M(N - MQ)$$ or $$Q M^2 - MN + F \le 0$$ Since $Q > 0$, $$\frac{N - \sqrt{N^2 - 4QF}}{2Q} \le M \le \frac{N + \sqrt{N^2 - 4QF}}{2Q}$$ Since we need real roots to have any solutions, $Q$ isn't entirely free: we require $Q \le \frac{N^2}{4F}$. Given a value of $Q$, the number of values of $M$ is approximately $\frac{\sqrt{N^2 - 4QF}}{Q}$, so the total number of solutions is approximately $$\int_{1}^{\frac{N^2}{4F}} \frac{\sqrt{N^2 - 4QF}}{Q} dQ$$ Substituting $c = \frac{N^2}{4F}$ gives $2\sqrt{F} \int_1^c \frac{\sqrt{c - Q}}{Q} dQ$, which evaluates to $4\sqrt{F} \left(\sqrt{c} \tanh^{-1}\sqrt{1-\frac{1}{c}} - \sqrt{c-1}\right)$.
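A brute-force cross-check (my sketch, with hypothetical sample values of $N$ and $F$) that the root interval reproduces exactly the integer solutions of $QM^2 - MN + F \le 0$:

```python
import math

def solutions_brute(N, F):
    # direct scan of the reduced system: Q*M^2 - M*N + F <= 0, with M, Q >= 1
    return {(M, Q)
            for Q in range(1, N * N // (4 * F) + 2)
            for M in range(1, N + 1)
            if Q * M * M - M * N + F <= 0}

def solutions_roots(N, F):
    # integer M between the two roots, for each feasible Q
    sols = set()
    for Q in range(1, N * N // (4 * F) + 1):
        disc = N * N - 4 * Q * F
        if disc < 0:
            continue
        r = math.sqrt(disc)
        lo = math.ceil((N - r) / (2 * Q) - 1e-9)   # epsilon guards against
        hi = math.floor((N + r) / (2 * Q) + 1e-9)  # float rounding at the roots
        sols.update((M, Q) for M in range(max(lo, 1), hi + 1))
    return sols

for N, F in ((20, 3), (15, 2), (30, 7)):
    assert solutions_brute(N, F) == solutions_roots(N, F)
```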
Establish a definite integral
Let $I = \int_{-2}^{2} x^3 \cos (x/2) \sqrt{4-x^2}dx + \int_{-2}^{2}\frac{1}{2}\sqrt{4-x^2}dx $ I just distributed the parentheses. Notice $\int_{-2}^{2} x^3 \cos(x/2)\sqrt{4-x^2}dx = 0$, as the function is odd. So, we just have to compute $\int_{-2}^{2} \frac{1}{2}\sqrt{4-x^2}dx.$ This, you can do by trig sub, or by just noticing it's half the area of a semicircle of radius 2. $I = \pi$. This exact integral made the rounds on the internet as some kind of wifi passcode, too. Linked here: Solve this integral for free WiFi
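A midpoint-rule check (my addition) that the integral really comes out to $\pi$; the odd part cancels exactly over the symmetric nodes:

```python
import math

# integrate (x^3*cos(x/2) + 1/2) * sqrt(4 - x^2) over [-2, 2]
n = 200000
h = 4.0 / n
total = 0.0
for i in range(n):
    x = -2.0 + (i + 0.5) * h
    total += (x**3 * math.cos(x / 2) + 0.5) * math.sqrt(4 - x * x) * h
assert abs(total - math.pi) < 1e-4
```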