Compute $|A^{*}|$ for $A=\left(
\begin{array}{ccc}
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 &0 \\
\end{array}
\right)
$, where $A^*$ is the adjoint matrix of $A$. | Since $AA^{*}=|A|I_3$ and $|A|=2,$ taking determinants on both sides gives $|A|\,|A^{*}|=|A|^3,$ so
\[|A^{*}|=|A|^2=4.\] | 4. |
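As a quick check of the value $|A|=2$ used above, expand the determinant along the first row:
\[|A|=\left|
\begin{array}{ccc}
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 0 \\
\end{array}
\right|=0\cdot(0-1)-1\cdot(0-1)+1\cdot(1-0)=2.\]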
|
Suppose that $A\in \mathbb{R}^{3\times 3}$ is a matrix with $|A|=1,$ compute $|A^*-2A^{-1}|,$ where $A^*$ is the adjoint matrix of $A$. | Noting the identity $AA^*=|A|I_3$ and $|A|=1,$ we know that
\[A^*=A^{-1}.\]
Thus
\[|A^*-2A^{-1}|=|-A^{-1}|=(-1)^3|A^{-1}|=-1\cdot \dfrac{1}{|A|}=-1.\] | -1. |
|
Let $A^*$ denote the adjoint matrix of matrix $A$.
Suppose that
$A^*=\left(
\begin{array}{ccc}
1 & 2 & 3 \\
0 & 1 & 4 \\
0 & 0 & 1 \\
\end{array}
\right)
$, and the determinant is $|A|=1,$
Find $A.$
In your answer, present the matrix in the form of $[a_{11}, a_{12}, a_{13}; a_{21}, a_{22}, a_{23}; a_{31}, a_{32}, a_{33} ]$. | It follows from the equation
$AA^*=|A|I_3$ that
\[A=|A|(A^{*})^{-1}.\]
By the assumption $|A|=1,$ we have $A=(A^{*})^{-1}.$
We use the formula
\[(A^{*})^{-1}=\dfrac{1}{|A^*|}(A^{*})^{*}.\]
By the definition of adjoint matrices, we have
\[(A^{*})^{*}=\left(
\begin{array}{ccc}
1 & -2 & 5 \\
0 & 1 & -4 \\
0 & 0 & 1 \\
\end{array}
\right).\]
We have $|A^*|=1$ by a direct computation.
Consequently, $A=\left(
\begin{array}{ccc}
1 & -2 & 5 \\
0 & 1 & -4 \\
0 & 0 & 1 \\
\end{array}
\right).$ | [1, -2, 5; 0, 1, -4; 0, 0, 1]. |
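As a sanity check, multiplying the computed $A$ by the given $A^*$ should recover $|A|I_3=I_3$:
\[AA^{*}=\left(
\begin{array}{ccc}
1 & -2 & 5 \\
0 & 1 & -4 \\
0 & 0 & 1 \\
\end{array}
\right)\left(
\begin{array}{ccc}
1 & 2 & 3 \\
0 & 1 & 4 \\
0 & 0 & 1 \\
\end{array}
\right)=I_3.\]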
|
Suppose that the vectors
$\left(
\begin{array}{c}
1 \\ 1 \\ 1 \\
\end{array}
\right),
$
$\left(
\begin{array}{c}
1 \\ 2 \\ 0 \\
\end{array}
\right),
$$\left(
\begin{array}{c}
0 \\ 1 \\ -1 \\
\end{array}
\right)
$ and vectors
$\left(
\begin{array}{c}
0 \\ a \\ -1 \\
\end{array}
\right),
$
$\left(
\begin{array}{c}
b \\ 3 \\ 1 \\
\end{array}
\right)
$ generate the same linear subspace. Compute $a$ and $b$. Present the answer as $[a,b]$. | The two sets of vectors can be linearly represented by each other. By elementary row transformations, we have
\[\left(
\begin{array}{ccccc}
1 & 1 & 0 & 0 & b \\
1 & 2 & 1 & a & 3 \\
1 & 0 & -1 & -1 & 1 \\
\end{array}
\right)\to \left(
\begin{array}{ccccc}
1 & 1 & 0 & 0 & b \\
0 & 1 & 1 & a & 3-b \\
0 & -1& -1 & -1 & 1-b \\
\end{array}
\right)\to \left(
\begin{array}{ccccc}
1 & 1 & 0 & 0 & b \\
0 & 1 & 1 & a & 3-b \\
0 & 0& 0 & a-1 & 4-2b \\
\end{array}
\right)
\]
Thus $a-1=4-2b=0.$ It implies that $a=1,b=2.$ | [1,2] |
|
Suppose that
$A=\left(
\begin{array}{cc}
1 & 2 \\
2& a \\
\end{array}
\right)
$ and $B=\left(
\begin{array}{cc}
0 & 0 \\
0& b \\
\end{array}
\right)$
are similar matrices, find $a$ and $b$. Present the answer in the form of $[a,b]$. | Since $A$ and $B$ are similar matrices, we have
\[|A|=|B|,\quad \text{tr}(A)=\text{tr}(B).\]
It shows that
\[a-4=0,\quad 1+a=0+b.\]
Thus $a=4,b=5.$ | [4,5] |
|
Suppose there are two matrices $A\in \mathbb{R}^{3\times 4},B\in \mathbb{R}^{4\times 3}$ satisfying that
\[AB=\left(
\begin{array}{ccc}
-9 & 2 & 2 \\
-20 & 5 & 4 \\
-35 & 7 & 8 \\
\end{array}
\right),\quad BA=\left(
\begin{array}{cccc}
-14 & 2a-5 & 2 & 6 \\
0 & 1 & 0 & 0 \\
-15 & 3a-3 & 3 & 6 \\
-32 & 6a-7 & 4 & 14 \\
\end{array}
\right).
\]
Compute a. | By the identity
\[3-\text{rank}(I_3-AB)=4-\text{rank}(I_4-BA),\]
and note that
\[\text{rank}(I_3-AB)=1,\]
it follows that
\[\text{rank}(I_4-BA)=2.\]
Since
\[I_4-BA=\left(
\begin{array}{cccc}
15 & 5-2a & -2 & -6 \\
0 & 0 & 0 & 0 \\
15 & 3-3a & -2 & -6 \\
32 & 7-6a & -4 & -13 \\
\end{array}
\right),\]
every $3\times 3$ minor of this rank-$2$ matrix must vanish; in particular,
\[\left|
\begin{array}{ccc}
5-2a & -2 & -6 \\
3-3a & -2 & -6 \\
7-6a & -4 & -13 \\
\end{array}
\right|=0.\]
Thus $a=-2.$ | -2 |
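The determinant above can be evaluated by subtracting the second row from the first:
\[\left|
\begin{array}{ccc}
5-2a & -2 & -6 \\
3-3a & -2 & -6 \\
7-6a & -4 & -13 \\
\end{array}
\right|=\left|
\begin{array}{ccc}
a+2 & 0 & 0 \\
3-3a & -2 & -6 \\
7-6a & -4 & -13 \\
\end{array}
\right|=(a+2)\left|
\begin{array}{cc}
-2 & -6 \\
-4 & -13 \\
\end{array}
\right|=2(a+2),\]
so $2(a+2)=0$ forces $a=-2.$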
|
Suppose that $A\in \mathbb{R}^{3\times 2}, B\in \mathbb{R}^{2\times 3}$ satisfy
\[AB=\left(
\begin{array}{ccc}
8 & 2 & -2 \\
2 & 5 & 4 \\
-2 & 4 & 5 \\
\end{array}
\right),
\]
Compute $BA$. Present the matrix in the form of $[a_{11},a_{12};a_{21},a_{22}]$. | By the identity
\[3-\text{rank}(9I_3-AB)=2-\text{rank}(9I_2-BA),\]
and note that
\[\text{rank}(9I_3-AB)=\text{rank}\left(
\begin{array}{ccc}
1 & -2 & 2 \\
-2 & 4 & -4 \\
2 & -4 & 4 \\
\end{array}
\right)=1,\]
it implies that
$\text{rank}(9I_2-BA)=0.$ Thus
\[BA=\left(
\begin{array}{cc}
9 & 0 \\
0 & 9 \\
\end{array}
\right).
\] | [9,0; 0, 9] |
|
Compute $a,b,c$ such that the linear equations
\[\left\{\begin{array}{l}
-2x_1+x_2+ax_3-5x_4=1, \\
x_1+x_2-x_3+bx_4=4, \\
3x_1+x_2+x_3+2x_4=c
\end{array}\right.
\]
and the linear equations
\[\left\{\begin{array}{l}
x_1+x_4=1, \\
x_2-2x_4=2, \\
x_3+x_4=-1.
\end{array}\right.
\]
have the same set of solutions. Present the answer as $[a,b,c]$. | The general solution to the second system \[\left\{\begin{array}{l}
x_1+x_4=1, \\
x_2-2x_4=2, \\
x_3+x_4=-1.
\end{array}\right.
\]
can be written as
\[x_1=1-x_4, x_2=2+2x_4, x_3=-1-x_4, \quad x_4\in \mathbb{R}.\]
Inserting them into the first system, we obtain
\[
\left\{\begin{array}{l}
(-1-a)x_4=1+a, \\
(2+b)x_4=0, \\
c=4.
\end{array}\right.
\]
Since $x_4$ is an arbitrary constant, we deduce that
$a=-1,b=-2,c=4.$ | [-1,-2,4] |
|
Suppose that $\phi:\mathbb{R}^{3\times 3}\to \mathbb{R}$ is a mapping which satisfies the following properties
\begin{enumerate}
\item $\phi(AB)=\phi(A)\phi(B)$ for any $A,B\in \mathbb{R}^{3\times 3},$ and
\item $\phi(A)=|A|$ for any diagonal matrix $A.$
\end{enumerate}
Compute $\phi(A)$ for
\[A=\left(
\begin{array}{ccc}
2 & 1 & 1 \\
1 & 2 &1 \\
1 & 1 & 2 \\
\end{array}
\right)
\] | Note that $A$ is symmetric, so there exists an invertible matrix $P$ such that
\[A=P{\rm diag}(\lambda_1,\lambda_2,\lambda_3)P^{-1}.\]
By the first property of $\phi,$ we have
\[\phi(A)=\phi(P)\phi({\rm diag}(\lambda_1,\lambda_2,\lambda_3))\phi(P^{-1}).\]
Also we know
\[\phi(P)\phi(P^{-1})=\phi(PP^{-1})=\phi(I_3)=|I_3|=1\]
due to the second property.
Thus
\[\phi(A)=\phi({\rm diag}(\lambda_1,\lambda_2,\lambda_3))=\lambda_1\lambda_2\lambda_3=|A|=4.\] | 4 |
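A quick check of the final value: $|A|$ can be computed directly by adding all rows to the first and then clearing the first column:
\[|A|=\left|
\begin{array}{ccc}
2 & 1 & 1 \\
1 & 2 & 1 \\
1 & 1 & 2 \\
\end{array}
\right|=\left|
\begin{array}{ccc}
4 & 4 & 4 \\
1 & 2 & 1 \\
1 & 1 & 2 \\
\end{array}
\right|=4\left|
\begin{array}{ccc}
1 & 1 & 1 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right|=4.\]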
|
Suppose that $\psi:\mathbb{R}^{3\times 3}\to \mathbb{R}$ is a mapping which satisfies the following properties
\begin{enumerate}
\item $\psi(AB)=\psi(BA)$ for any $A,B\in \mathbb{R}^{3\times 3},$ and
\item $\psi(A)={\rm tr}(A)$ for any diagonal matrix $A.$
\end{enumerate}
Compute $\psi(A)$ for
\[A=\left(
\begin{array}{ccc}
1 & 2 & 2 \\
2 & 1 &2 \\
2 &2 & 1 \\
\end{array}
\right).
\] | Note that $A$ is symmetric, so there exists an invertible matrix $P$ such that
\[A=P{\rm diag}(\lambda_1,\lambda_2,\lambda_3)P^{-1}.\]
By the first property of $\psi,$ we have
\[\psi(A)=\psi({\rm diag}(\lambda_1,\lambda_2,\lambda_3)P^{-1}P)=\psi({\rm diag}(\lambda_1,\lambda_2,\lambda_3)).\]
Also we know
\[\psi({\rm diag}(\lambda_1,\lambda_2,\lambda_3))=\lambda_1+\lambda_2+\lambda_3\]
due to the second property.
Thus
\[\psi(A)=\lambda_1+\lambda_2+\lambda_3={\rm tr}(A)=3.\] | 3 |
|
Compute the limit $\displaystyle \lim_{n\to \infty}\dfrac{y_n}{x_n}$, where the two sequence $\{x_n\}, \{y_n\}$ are defined by
\[ \left(
\begin{array}{c}
x_n \\
y_n \\
\end{array}
\right)=A^n\left(
\begin{array}{c}
1 \\
1 \\
\end{array}
\right)
\] with $A=\left(
\begin{array}{cc}
0 & 1 \\
1 & 1 \\
\end{array}
\right)
$. | The characteristic polynomial of A is
\[\left|
\begin{array}{cc}
\lambda & -1 \\
-1 & \lambda-1 \\
\end{array}
\right|=\lambda^2-\lambda-1.\]
Thus the eigenvalues are $\lambda_1=\dfrac{1+\sqrt{5}}{2},\lambda_2=\dfrac{1-\sqrt{5}}{2}.$ Their eigenvectors are
$\left(
\begin{array}{c}
1 \\
\lambda_1 \\
\end{array}
\right)$ and $\left(
\begin{array}{c}
1 \\
\lambda_2 \\
\end{array}
\right)$ respectively.
Set
\[P=\left(
\begin{array}{cc}
1 & 1 \\
\lambda_1 & \lambda_2 \\
\end{array}
\right),
\]
then
\[A=P\left(
\begin{array}{cc}
\lambda_1 &0 \\
0 & \lambda_2 \\
\end{array}
\right)P^{-1}.
\]
Thus
\[A^n=P\left(
\begin{array}{cc}
\lambda_1^n &0 \\
0 & \lambda_2^n \\
\end{array}
\right)P^{-1}.\]
Since
\[P^{-1}=\dfrac{-1}{\sqrt{5}}\left(
\begin{array}{cc}
\lambda_2 & -1 \\
-\lambda_1 & 1\\
\end{array}
\right)\]
we have
\[A^n\left(
\begin{array}{c}
1 \\
1 \\
\end{array}
\right)=\dfrac{1}{\sqrt{5}}\left(
\begin{array}{c}
\lambda_1^{n+1}-\lambda_2^{n+1}\\
\lambda_1^{n+2}-\lambda_2^{n+2} \\
\end{array}
\right).
\]
Therefore $x_n=\dfrac{1}{\sqrt{5}}\big(\lambda_1^{n+1}-\lambda_2^{n+1}\big)$ and
$y_n=\dfrac{1}{\sqrt{5}}\big(\lambda_1^{n+2}-\lambda_2^{n+2}\big).$
Then, we obtain that
\[\lim_{n\to \infty}\dfrac{y_n}{x_n}=\lambda_1=\dfrac{1+\sqrt{5}}{2}.\] | 1.62 |
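The last step uses that $|\lambda_2|=\frac{\sqrt{5}-1}{2}<1<\lambda_1$, so the powers of $\lambda_2/\lambda_1$ vanish:
\[\dfrac{y_n}{x_n}=\dfrac{\lambda_1^{n+2}-\lambda_2^{n+2}}{\lambda_1^{n+1}-\lambda_2^{n+1}}=\lambda_1\cdot \dfrac{1-(\lambda_2/\lambda_1)^{n+2}}{1-(\lambda_2/\lambda_1)^{n+1}}\to \lambda_1 \quad (n\to\infty).\]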
|
Find the integer $a$ such that $x^2-x+a$ is a factor of $x^{13}+x+90$. | Let $x^{13}+x+90=(x^2-x+a)q(x)$, where $q(x)\in \mathbb{Z}[x]$ is a polynomial with integer coefficients.
Inserting $x=0,1$ into $x^{13}+x+90=(x^2-x+a)q(x)$ leads to $a\mid 90$ and $a\mid 92.$ Namely, $a$ is a common factor of 90 and 92, so $a\mid 2.$ Then $a=1,-1,2$ or $-2.$ Note that $x^{13}+x+90=0$ has no positive root, while $x^2-x-1$ and $x^2-x-2$ each have a positive root; therefore $a=1$ or $2.$ Again inserting $x=-1$ into $x^{13}+x+90=(x^2-x+a)q(x)$, we obtain $(a+2)\mid 88.$ Since $3\nmid 88$, we rule out $a=1$; thus $a=2.$ Indeed,
{\small
\[ \polylongdiv{x^{13}+x+90}{x^2-x+2}\]} | 2 |
|
Find the integer coefficient polynomial with the smallest degree that has a root $\sqrt{2}+\sqrt{3}$. | Since $\sqrt{2}+\sqrt{3}$ is a root, its conjugates $\pm \sqrt{2}\pm\sqrt{3}$ are also possible roots since the coefficients are integers. Let
\[f(x)=(x-\sqrt{2}-\sqrt{3})(x-\sqrt{2}+\sqrt{3})(x+\sqrt{2}-\sqrt{3})(x+\sqrt{2}+\sqrt{3}),\]
that is, $f(x)=x^4-10x^2+1.$ Suppose that $g(x)$ is the desired polynomial. Then $g(x)|f(x)$. Therefore, there exists an integer coefficient polynomial $h(x)$ such that
\[f(x)=g(x)h(x).\]
On the one hand, the degree of $g(x)$ is not 1 because $x-\sqrt{2}-\sqrt{3}$ does not have integer coefficients. On the other hand, the degree of $g(x)$ cannot be two because otherwise the coefficient of $x$ would not be an integer when the roots are two of $\pm\sqrt{2}\pm\sqrt{3}$. Similarly, the degree of $g$ cannot be three. Consequently, $g(x)=f(x)=x^4-10x^2+1$ is the desired polynomial. | x^4-10x^2+1. |
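The expansion of $f(x)$ above can be carried out by pairing conjugate factors into a difference of squares:
\begin{align*}
f(x)&=\big[(x-\sqrt{2})^2-3\big]\big[(x+\sqrt{2})^2-3\big]\\
&=(x^2-2\sqrt{2}\,x-1)(x^2+2\sqrt{2}\,x-1)\\
&=(x^2-1)^2-8x^2=x^4-10x^2+1.
\end{align*}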
|
Let $A=\left(
\begin{array}{ccc}
3 & 2 & 2\\
2 & 3 & 2\\
2 & 2 & 3\\
\end{array}
\right)
$ and $v=(2,1,0)^{\top}$, find the polynomial $f(x)$ with the least degree such that $f(A)v=0.$ | By a direct calculation, the characteristic polynomial of $A$ is
\[(\lambda-7)(\lambda-1)^2.\]
So $f(x)$ must be one of the five factors $x-1$, $x-7$, $(x-1)^2$, $(x-1)(x-7)$ and $(x-1)^2(x-7)$.
Note that
\[Av=\left(
\begin{array}{c}
8 \\
7 \\
6
\end{array}
\right)\]
thus $Av\neq v$ and $Av\neq 7v$, so $f(x)$ is neither $x-1$ nor $x-7$. Since
\[(A-I_3)v=\left(
\begin{array}{c}
6 \\
6 \\
6
\end{array}
\right)\]
and
\[(A-7I_3)\left(
\begin{array}{c}
1 \\
1 \\
1
\end{array}
\right)=\left(
\begin{array}{c}
0 \\
0 \\
0
\end{array}
\right)\]
therefore
\[(A-7I_3)(A-I_3)v=0.\]
Then we deduce that $f(x)=(x-7)(x-1)=x^2-8x+7.$ | x^2-8x+7. |
|
Evaluate the following limit:
\begin{equation*}
\lim_{n \to \infty} \left(\sqrt{n^2+2n-1}-\sqrt{n^2+3}\right).
\end{equation*} | \begin{align*}
\lim_{n \to \infty} \left(\sqrt{n^2+2n-1}-\sqrt{n^2+3}\right)&=\lim_{n \to \infty} \left(\sqrt{n^2+2n-1}-\sqrt{n^2+3}\right) \cdot \frac{\sqrt{n^2+2n-1} + \sqrt{n^2+3}}{\sqrt{n^2+2n-1} + \sqrt{n^2+3}}\\
&=\lim_{n \to \infty} \frac{(n^2+2n-1) - (n^2+3)}{\sqrt{n^2+2n-1} + \sqrt{n^2+3}}\\
&=\lim_{n \to \infty} \frac{2n-4}{\sqrt{n^2+2n-1} + \sqrt{n^2+3}}\\
&=\lim_{n \to \infty} \frac{\frac{1}{n}(2n-4)}{\frac{1}{n}\left(\sqrt{n^2+2n-1} + \sqrt{n^2+3}\right)}\\
&=\lim_{n \to \infty}\frac{2-\frac{4}{n}}{\sqrt{1+\frac{2}{n}-\frac{1}{n^2}}+\sqrt{1+\frac{3}{n}}}\\
&=\frac{2-0}{\sqrt{1+0-0}+\sqrt{1+0}}\\
&=1.
\end{align*}\\ | 1. |
|
Find the limit $$\lim\limits_{x\to 1}\frac{f(2x^2+x-3)-f(0)}{x-1}$$ given $f'(1)=2$ and $f'(0)=-1$. | Let $g(x)=2x^2+x-3$. Noticing that $g(1)=0$, the desired limit equals $\lim\limits_{x\to 1}\frac{f(g(x))-f(g(1))}{x-1}$. By the definition of the derivative and the chain rule and noting that $g'(1)=5$, we have
\[
\lim\limits_{x\to 1}\frac{f(g(x))-f(g(1))}{x-1}=f'(g(1))g'(1)=f'(0)g'(1)=(-1)(5)=-5.
\]\\ | -5 |
|
Evaluate $\lim\limits_{x\to 4}\frac{x-4}{\sqrt{x}-2}$. | \begin{align*}
\lim\limits_{x\to 4}\frac{x-4}{\sqrt{x}-2}&= \lim_{x \to 4} \frac{x - 4}{\sqrt{x} - 2} \cdot \frac{\sqrt{x} + 2}{\sqrt{x} + 2}\\
&=\lim_{x \to 4} \frac{(x - 4)(\sqrt{x} + 2)}{(\sqrt{x} - 2)(\sqrt{x} + 2)} \\
&=\lim_{x \to 4} \frac{(x - 4)(\sqrt{x} + 2)}{x-4}=\lim_{x \to 4}(\sqrt{x}+2)=4.
\end{align*}\\ | 4 |
|
Find the values of $a$ such that the function $f(x)$ is continuous on $\mathbb{R}$, where $f(x)$ is defined as
\[
f(x)=\begin{cases} 2x-1, &\text{if } x\leq 0,\\
a(x-1)^2-3, & \text{otherwise.}
\end{cases}
\] | By the definition of $f(x)$, we have
\begin{align*}
f(0)&=-1;\\
\lim\limits_{x\to 0^{-}}f(x)&=\lim\limits_{x\to 0^{-}}(2x-1)=2(0)-1=-1;\\
\lim\limits_{x\to 0^{+}}f(x)&=\lim\limits_{x\to 0^{+}}(a(x-1)^2-3)=a(0-1)^2-3=a-3.
\end{align*}
To obtain the continuity of $f(x)$ at $x=0$, we need $-1=a-3$, that is, $a=2$.
So, the function $f(x)$ is continuous at $x=0$ when $a=2$.\\ | 2 |
|
Evaluate $\lim\limits_{x\to 1}\frac{x^2-1}{x+1}$. | Use direct substitution to obtain the result:
\[
\lim_{x \to 1} \frac{x^2 - 1}{x + 1} = \frac{1^2 - 1}{1 + 1} = \frac{0}{2} = 0.
\]\\ | 0 |
|
Evaluate the integral $\displaystyle{\int_1^e\ln{x}\ dx}$. | Use integration by parts:
\[
\int u \,dv = uv - \int v \,du.
\]
Choose $u = \ln{x} $ and $dv = dx$, then $ du = \frac{1}{x} \,dx, v = x. $
Apply the integration by parts formula:
\[
\int_1^e \ln{x} \,dx = x \ln{x} \Big|_1^e - \int_1^e x \left(\frac{1}{x}\right) \,dx = (e - 0) - (e - 1)= 1.
\]\\ | 1 |
|
Let $f(3)=-1$, $f'(3)=0$, $g(3)=2$ and $g'(3)=5$. Evaluate $\left(\frac{f}{g}\right)'(3)$. | Use the quotient rule. The quotient rule gives
\[
\left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^2}.
\]
Now, using that $f(3) = -1$, $f'(3) = 0$, $g(3) = 2$, and $g'(3) = 5$, we have
\[
\left(\frac{f}{g}\right)'(3) = \frac{f'(3)g(3) - f(3)g'(3)}{g(3)^2}= \frac{0 \cdot 2 - (-1) \cdot 5}{2^2} = \frac{5}{4}. \]\\ | 1.25 |
|
Find all value(s) of $x$ at which the tangent line(s) to the graph of $y=-x^2+2x-3$ are perpendicular to the line $y=\frac12 x-4$. | The slope of the tangent line at the point $(x,y)$ on the curve is $m=f'(x)=-2x+2$.
If the tangent line is perpendicular to the line $y=\frac12 x-4$, we need the slope of the tangent line to be $m=- \frac{1}{\frac12} = -2$.
Set up the equation: $-2x+2=-2$. Then, solve this equation to obtain $x=2$.
Therefore, the tangent line of the graph of $y = -x^2 + 2x - 3$ is perpendicular to the line $y = \frac{1}{2}x - 4$ at the point where $x = 2$.\\ | 2 |
|
Let $n\in \mathbb{N}$ be fixed. Suppose that $f^{(k)}(0)=1$ and $g^{(k)}(0)=2^k$ for $k=0, 1, 2, \dots, n$. Find $\left.\frac{d^n}{dx^n}(f(x)g(x))\right |_{x=0}$ when $n=5$. | We can use the Leibniz formula:
\[ \frac{d^n}{dx^n}(uv) = \sum_{k=0}^n \binom{n}{k} u^{(k)}v^{(n-k)}, \]
where $u^{(k)}$ denotes the $k$-th derivative of $u$ and $v^{(n-k)}$ denotes the $(n-k)$-th derivative of $v$.
In this case, $u = f(x)$ and $v = g(x)$. We are given that $f^{(k)}(0) = 1$ and $g^{(k)}(0) = 2^k$ for $k = 0, 1, 2, \dots, n$. Substituting these values into the general formula, we get:
\[
\frac{d^n}{dx^n}(f(x)g(x)) \bigg|_{x=0} = \sum_{k=0}^n \binom{n}{k} \cdot 1 \cdot 2^{n-k}.
\]
Notice that this sum corresponds to the expansion of $(1 + 2)^n$ according to the binomial theorem. Therefore, we have
\[
\frac{d^n}{dx^n}(f(x)g(x)) \bigg|_{x=0} = (1 + 2)^n = 3^n.
\]\\ | 3^5 |
|
The function $f(x)$ is defined by
\[
f(x)=\begin{cases}
|x|^\alpha\sin(\frac{1}{x}), \ & x\neq 0,\\
0, \ & x=0,
\end{cases}
\]
where $\alpha$ is a constant. Find the value of $a$ such that for all $\alpha>a$, the function $f(x)$ is continuous at $x=0$. | Noting that $f(0)=0$, in order to obtain the continuity of $f(x)$ at $x=0$ we need
\[
\lim\limits_{x\to 0}f(x)=0,
\]
that is,
\[
\lim\limits_{x\to0}|x|^\alpha\sin{\frac{1}{x}}=0.
\]
Noting that $\left||x|^\alpha\sin{\frac{1}{x}}\right|\leq |x|^\alpha$, if $\alpha>0$, then we have $\lim\limits_{x\to0}|x|^\alpha=0$ which implies $\lim\limits_{x\to0}|x|^\alpha\sin{\frac{1}{x}}=0$.
If $\alpha=0$, $\lim\limits_{x\to 0}|x|^\alpha\sin{\frac{1}{x}}=\lim\limits_{x\to0}\sin{\frac{1}{x}}$ does not exist.
If $\alpha<0$, we can choose the sequence $x_n=\frac{1}{\frac{\pi}{2}+2n\pi}\to 0$ as $n\to\infty$ but
\[
\lim\limits_{n\to\infty}f(x_n)=\lim\limits_{n\to\infty} |x_n|^{\alpha}\sin{\left(\frac{\pi}{2}+2n\pi\right)}=\lim\limits_{n\to\infty} |x_n|^{\alpha}=+\infty.
\]
Therefore, when $\alpha>0$ the function $f(x)$ is continuous at $x=0$.\\ | a | 0 |
Evaluate $\displaystyle{\int_0^4(2x-\sqrt{16-x^2})dx}$. | \[
\int_0^4(2x-\sqrt{16-x^2})dx=\int_0^42x\,dx-\int_0^4\sqrt{16-x^2}\,dx.
\]
For the first integral, we have
\[
\int_0^4 2x \,dx = x^2 \Big|_0^4 = 4^2 - 0^2 = 16.
\]
For the second integral, by a change of variables $x=4\sin\theta$ we get
\begin{align*}
\int_0^4 \sqrt{16 - x^2} \,dx&=\int_0^{\frac{\pi}{2}}\sqrt{16 - 16\sin^2\theta}\ 4\cos\theta \,d\theta\\
&=\int_0^{\frac{\pi}{2}}\sqrt{16 \cos^2\theta}\ 4\cos\theta \,d\theta\\
&=\int_0^{\frac{\pi}{2}}16\cos^2\theta \,d\theta\\
&=\int_0^{\frac{\pi}{2}}16\frac{1+\cos(2\theta)}{2}\,d\theta\\
&=8 \int_0^{\frac{\pi}{2}} (1 + \cos (2\theta)) \,d\theta \\
& = 8 \left.\left[\theta + \frac{1}{2}\sin (2\theta)\right]\right|_0^{\frac{\pi}{2}} \\
& = 8 \left[\left(\frac{\pi}{2} + \frac{1}{2}\sin \pi\right) - (0 + 0)\right] \\
& = 4 \pi .
\end{align*}
So, $\displaystyle{ \int_0^4 (2x - \sqrt{16 - x^2}) \,dx = 16 - 4\pi }$.\\ | 3.43 |
|
Evaluate the series $\sum\limits_{n=1}^\infty\frac{1}{(n+1)(n+3)}$. | First, express the general term $\frac{1}{(n+1)(n+3)}$ in partial fraction form:
\[
\frac{1}{(n+1)(n+3)} = \frac{A}{n+1} + \frac{B}{n+3}.
\]
Multiplying both sides by the common denominator $(n+1)(n+3)$ we obtain
$$1 = A(n+3) + B(n+1) \Leftrightarrow 1 = (A+B)n + (3A+B). $$
Thus,
\begin{align*}
\begin{cases}
A + B &= 0, \\
3A + B &= 1.
\end{cases}
\end{align*}
Solving this system of equations, we find that $A = \frac{1}{2}$ and $B = -\frac{1}{2}$.
Now, we have
\[
\frac{1}{(n+1)(n+3)} = \frac{1/2}{n+1} -\frac{1/2}{n+3}=\frac{1}{2}\left(\frac{1}{n+1}-\frac{1}{n+3}\right)
\]
Now, using the telescoping nature of the series:
\begin{align*}
\sum_{n=1}^\infty \frac{1}{(n+1)(n+3)} &= \frac{1}{2}\sum_{n=1}^\infty\left( \frac{1}{n+1}-\frac{1}{n+3}\right)\\
&=\frac{1}{2}\left[\left(\frac{1}{2}-\frac{1}{4}\right)+\left(\frac{1}{3}-\frac{1}{5}\right)+\left(\frac{1}{4}-\frac{1}{6}\right)+\cdots\right]\\
&=\frac{1}{2}\left[\frac{1}{2}+\frac{1}{3}\right]=\frac{5}{12}.
\end{align*}\\ | 0.42 |
|
Evaluate the limit $\lim\limits_{x\to 0}\frac{(1+x)^{\frac{1}{x}}-e}{x}$. | We can use L'H\^{o}pital's Rule to obtain
\[
\lim\limits_{x\to 0}\frac{\ln(1+x)}{x}=\lim\limits_{x\to 0}\frac{\frac{1}{1+x}}{1}=1.
\]
Then,
\[
\lim\limits_{x\to 0}(1+x)^{\frac{1}{x}}=\lim\limits_{x\to 0}e^{\ln{(1+x)^{\frac{1}{x}}}}=\lim\limits_{x\to 0}e^{\frac{\ln(1+x)}{x}}=e^1=e.
\]
Let $f(x)=(1+x)^{\frac{1}{x}}$, then $\lim\limits_{x\to 0}f(x)=e$ and the given limit can be written as:
\[
\lim_{x\to 0}\frac{(1+x)^{\frac{1}{x}}-e}{x} = \lim_{x\to 0}\frac{f(x) - e}{x}.
\]
Now, find the derivative of \(f(x)\) by using the chain rule and the quotient rule:
\begin{align*}
f'(x) = \frac{d}{dx}(1+x)^{\frac{1}{x}}= \frac{d}{dx}e^{\ln{(1+x)^{\frac{1}{x}}}}&=\frac{d}{dx}e^{\frac{\ln(1+x)}{x}}\\
&=e^{\frac{\ln(1+x)}{x}} \frac{d}{dx}\frac{\ln(1+x)}{x}\\
&=(1+x)^{\frac{1}{x}}\cdot\frac{\frac{x}{1+x}-\ln(1+x)}{x^2}.
\end{align*}
Using L'H\^{o}pital's Rule again to get
\begin{align*}
\lim_{x\to 0}\frac{f(x) - e}{x}=\lim\limits_{x\to 0}\frac{f'(x)}{1}&=\lim\limits_{x\to 0}(1+x)^{\frac{1}{x}}\cdot \lim\limits_{x\to 0}\frac{\frac{x}{1+x}-\ln(1+x)}{x^2}\\
&=e \cdot \lim\limits_{x\to 0}\frac{\frac{(1+x)-x}{(1+x)^2}-\frac{1}{1+x}}{2x}\\
&=e \cdot \lim\limits_{x\to 0}\frac{-1}{2(1+x)^2}\\
&=-\frac{e}{2}.
\end{align*}
Therefore,
\[
\lim_{x\to 0}\frac{(1+x)^{\frac{1}{x}}-e}{x} =-\frac{e}{2}.
\]\\ | -\frac{ e}{2} |
|
Evaluate the series $\sum\limits_{n=0}^\infty \frac{1}{2n+1}\left(\frac12\right)^{2n+1}$. | For $x\in (-1,1)$, we have
\[
\frac{1}{1-x^2}=\sum_{n=0}^\infty x^{2n}.
\]
The series on the right-hand side converges uniformly on any interval $[-x, x]$ for any $x\in (0, 1)$.
Taking the integrals on both sides yields
\[
\int_0^x\frac{1}{1-t^2}dt=\int_0^x\sum_{n=0}^\infty t^{2n}dt=\sum_{n=0}^\infty\int_0^x t^{2n}dt=\sum_{n=0}^\infty\frac{1}{2n+1}x^{2n+1}.
\]
Noting that by partial fraction of $\frac{1}{1-t^2}=\frac12\left(\frac{1}{1+t}+\frac{1}{1-t}\right)$, we have, for $x\in (0,1)$,
\[
\int_0^x\frac{1}{1-t^2}dt=\frac{1}{2}\int_0^x\left(\frac{1}{1+t}+\frac{1}{1-t}\right)dt=\frac12\ln\left(\frac{1+x}{1-x}\right).
\]
So,
\[
\frac12\ln\left(\frac{1+x}{1-x}\right)=\sum_{n=0}^\infty\frac{1}{2n+1}x^{2n+1}.
\]
Taking $x=\frac12$ leads to
\[
\sum_{n=0}^\infty\frac{1}{2n+1}\left(\frac12\right)^{2n+1}=\frac12\ln 3=\ln\sqrt{3}.
\]\\ | \ln\sqrt{3} |
|
Evaluate the limit $\lim\limits_{n\to\infty}\sum\limits_{k=0}^{n-1}\frac{1}{\sqrt{n^2-k^2}}$. | To evaluate this limit, we can interpret this sum as a Riemann sum and convert it into an integral.
Let $f(x) = \frac{1}{\sqrt{1 - x^2}}$ on the interval $[0, 1)$. Notice that $f(x)$ is integrable on the interval $[0,1)$.
The given sum can be expressed as:
\[
\lim_{n \to \infty} \sum_{k=0}^{n-1} \frac{1}{\sqrt{n^2 - k^2}} =\lim_{n \to \infty} \sum_{k=0}^{n-1} \frac{1}{n}\frac{1}{\sqrt{1 - \left(\frac{k}{n}\right)^2}}= \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f\left(\frac{k}{n}\right).
\]
By the definition of definite integral, we have
\[
\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f\left(\frac{k}{n}\right) = \int_{0}^{1} f(x) \,dx=\int_0^1\frac{1}{\sqrt{1-x^2}}\,dx .
\]
By a substitution of $x = \sin(\theta)$, we have
\begin{align*}
\int_{0}^{1} \frac{1}{\sqrt{1 - x^2}} \,dx &= \int_{0}^{\frac{\pi}{2}} \frac{1}{\sqrt{1 - \sin^2(\theta)}} \cos(\theta) \,d\theta \\
&= \int_{0}^{\frac{\pi}{2}} \frac{1}{\cos(\theta)} \cos(\theta) \,d\theta\\
& = \int_{0}^{\frac{\pi}{2}} d\theta = \frac{\pi}{2}
\end{align*}
Therefore, we obtain $\lim_{n \to \infty} \sum_{k=0}^{n-1} \frac{1}{\sqrt{n^2 - k^2}}=\frac{\pi}{2}$.\\
An alternative method to evaluate $\displaystyle{\int_0^1\frac{1}{\sqrt{1-x^2}}\,dx}$:
\[
\int_0^1\frac{1}{\sqrt{1-x^2}}\,dx=\arcsin{x}\big|_0^1=\arcsin(1)-\arcsin(0)=\frac{\pi}{2}-0=\frac{\pi}{2}.
\] \\ | \frac{\pi}{2} |
|
Let $\alpha$ and $\beta$ be positive constant. If $\lim\limits_{x\to 0}\displaystyle{\frac{1}{\alpha-\cos{ x}}\ \int_0^x\frac{2t}{\sqrt{\beta+t^2}}\,dt=1}$, determine the values of $\alpha$ and $\beta$. | Noting that $\lim\limits_{x\to 0} \displaystyle{\int_0^x\frac{2t}{\sqrt{\beta+t^2}}dt}=0$, if the given limit exists and equals $1$, we must have
\[\lim\limits_{x\to 0}(\alpha-\cos x)=0.
\]
Then, we get $\alpha=1$.
Using L'H\^{o}pital's rule and the fundamental theorem of calculus, we have
\begin{align*}
& \lim\limits_{x\to 0}\frac{1}{1-\cos{ x}}\ \int_0^x\frac{2t}{\sqrt{\beta+t^2}}dt=\lim\limits_{x\to 0}\frac{\frac{d}{dx}\left(\int_0^x\frac{2t}{\sqrt{\beta+t^2}}dt\right)}{\frac{d}{dx}(1-\cos{ x})}\\
=& \lim\limits_{x\to 0}\, \frac{\frac{2x}{\sqrt{\beta+x^2}}}{\sin{x}}= 2\lim\limits_{x\to 0}\frac{x}{\sin{x}}\,\cdot\lim\limits_{x\to 0}\frac{1}{\sqrt{\beta+x^2}}=2(1)\left(\frac{1}{\sqrt{\beta}}\right)=\frac{2}{\sqrt{\beta}}.
\end{align*}
Since this limit equals $1$, we must have $\beta=4$.
Therefore, we obtain $\alpha=1$ and $\beta=4$.\\ | \alpha=1 and \beta=4. |
|
Find the length of the curve of the entire cardioid $r=1+\cos{\theta}$, where the curve is given in polar coordinates. | We'll use the arc length formula for polar curves:
\[
L = \int_0^{2\pi} \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2} \,d\theta.
\]
For the cardioid $r = 1 + \cos{\theta}$, we have $\frac{dr}{d\theta} = -\sin{\theta}$.
Now, substitute $r$ and $\frac{dr}{d\theta}$ into the arc length formula and use a change of variables:
\begin{align*}
L &= \int_0^{2\pi} \sqrt{(1 + \cos{\theta})^2 + (-\sin{\theta})^2} \,d\theta \\
&= \int_0^{2\pi}\sqrt{1 + 2\cos{\theta} + \cos^2{\theta} + \sin^2{\theta}} \,d\theta\\
&=\int_0^{2\pi}\sqrt{2 + 2\cos{\theta}} \,d\theta =\int_0^{2\pi}2\left|\cos\left(\frac{\theta}{2}\right)\right| \,d\theta\\
&=\int_0^{\pi}4\left|\cos\left(\alpha\right)\right| \,d\alpha =8\int_0^{\frac{\pi}{2}}\cos\left(\alpha\right) \,d\alpha =8\sin \alpha\big|_{0}^{\frac{\pi}{2}}=8.
\end{align*}
So, the length of the curve for the entire cardioid \(r = 1 + \cos{\theta}\) is \(8\).\\ | 8 |
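The simplification of the integrand uses the half-angle identity $1+\cos\theta=2\cos^2\left(\frac{\theta}{2}\right)$:
\[\sqrt{2+2\cos\theta}=\sqrt{4\cos^2\left(\frac{\theta}{2}\right)}=2\left|\cos\left(\frac{\theta}{2}\right)\right|,\]
where the absolute value is needed because $\cos\left(\frac{\theta}{2}\right)<0$ for $\theta\in(\pi,2\pi)$.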
|
Find the value of the integral $\displaystyle{\int_0^1\frac{1}{(1+x^2)^2}dx}$. | Let $x = \tan \theta$, then $dx = \sec^2 \theta \,d\theta$. Substitute these into the integral to obtain
\begin{align*}
\int_0^1 \frac{1}{(1+x^2)^2} \,dx& = \int_0^{\frac{\pi}{4}} \frac{1}{(1 + \tan^2 \theta)^2} \sec^2 \theta \,d\theta\\
&=\int_0^{\frac{\pi}{4}} \frac{1}{(\sec^2\theta)^2} \sec^2 \theta \,d\theta\\
&=\int_0^{\frac{\pi}{4}} \frac{1}{\sec^2\theta} \,d\theta\\
&= \int_0^{\frac{\pi}{4}} \cos^2 \theta \,d\theta \\
& = \frac{1}{2} \int_0^{\frac{\pi}{4}} (1 + \cos (2\theta)) \,d\theta = \frac{1}{2} \left.\left[\theta + \frac{1}{2}\sin (2\theta)\right]\right|_0^{\frac{\pi}{4}} = \frac{\pi}{8} + \frac{1}{4}.
\end{align*}\\ | \frac{\pi}{8} + \frac{1}{4} |
|
Evaluate the improper integral $\displaystyle{\int_0^\infty \frac{1}{x^2+2x+2}dx}$. | We can write
\[
\int_0^\infty \frac{1}{x^2+2x+2}dx=\int_0^\infty \frac{1}{(x + 1)^2 + 1} \,dx.
\]
Now, making the substitution $u = x + 1$, so $dx = du$, we have
\begin{align*}
\int_0^\infty \frac{1}{x^2+2x+2}dx&=\int_0^\infty \frac{1}{(x + 1)^2 + 1} \,dx\\
&=\int_1^\infty \frac{1}{u^2 + 1} \,du\\
&= \lim_{a \to \infty} \int_1^a \frac{1}{u^2 + 1} \,du\\
& = \lim_{a \to \infty} \arctan(u)\big|_1^a\\
&=\lim_{a \to \infty} \left[\arctan(a) - \arctan(1)\right] = \frac{\pi}{2} - \frac{\pi}{4} = \frac{\pi}{4}.
\end{align*} \\ | \frac{\pi}{4} |
|
Find the area of the region outside the circle $r=2$ and inside the cardioid $r=2+2\cos{\theta}$, where the curves are given in polar coordinates. | The region is bounded by the two curves, so the area $A$ is given by:
\[
A = \int_{\alpha}^{\beta} \frac{1}{2}\left((2+2\cos{\theta})^2 - 2^2\right) \,d\theta.
\]
The bounds $\alpha$ and $\beta$ correspond to the angles at which the two curves intersect. To find these intersection points, set
\[
2=2+2\cos{\theta}.
\]
Then, $\cos\theta=0$. For the given two curves, we can take $\theta = -\frac{\pi}{2}$ and $\theta =\frac{\pi}{2}$.
Then, we have $\alpha =- \frac{\pi}{2}$ and $\beta = \frac{\pi}{2}$. Thus,
\begin{align*}
A & = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{1}{2}\left((2+2\cos{\theta})^2 - 2^2\right) \,d\theta \\
& = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{1}{2}(4 + 8\cos{\theta} + 4\cos^2{\theta} - 4) \,d\theta \\
& = \int_{\frac{-\pi}{2}}^{\frac{\pi}{2}} (4\cos{\theta} + 2\cos^2{\theta}) \,d\theta\\
&= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} (4\cos{\theta} + 1+\cos(2\theta)) \,d\theta\\
&= \left.\left(4\sin{\theta} +\theta+ \frac{1}{2}\sin{(2\theta)}\right)\right|_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \\
&= \left(4\sin\left(\frac{\pi}{2}\right) +\frac{\pi}{2}+ \frac{1}{2}\sin\left(\pi\right)\right) - \left(4\sin\left(-\frac{\pi}{2}\right) -\frac{\pi}{2}+ \frac{1}{2}\sin\left(-\pi\right)\right) \\
&= \left[4+\frac{\pi}{2}\right]-\left[-4-\frac{\pi}{2}\right]\\
&=8+\pi.
\end{align*}
So, the area of the region outside the circle $r = 2$ and inside the cardioid $r = 2 + 2\cos{\theta}$ is $8+\pi$.\\ | 8+\pi. |
|
Evaluate $\displaystyle{\int_0^\infty \frac{1}{1+x^4}\ dx}$. | The improper integral $\displaystyle{\int_0^\infty \frac{1}{1+x^4}\ dx}$ converges. We denote
\[
I=\displaystyle{\int_0^\infty \frac{1}{1+x^4}\ dx}.
\]
By changing of variables $x=\frac{1}{y}$ we obtain
\begin{align*}
I=\int_0^\infty \frac{1}{1+x^4}\, dx&=\int_0^\infty \frac{y^2}{1+y^4}\, dy=\int_0^\infty \frac{x^2}{1+x^4}\, dx.
\end{align*}
Then,
\[
2I=\int_0^\infty \frac{1}{1+x^4}\, dx+\int_0^\infty \frac{x^2}{1+x^4}\, dx=\int_0^\infty\frac{1+x^2}{1+x^4}\,dx.
\]
Hence,
\begin{align*}
I=\frac12\int_0^\infty\frac{1+x^2}{1+x^4}\,dx&=\frac12\int_0^\infty\frac{1+x^2}{(1+2x^2+x^4)-2x^2}\,dx\\
&=\frac12\int_0^\infty\frac{1+x^2}{(1+x^2)^2-2x^2}\,dx\\
&=\frac14\int_0^\infty\left[\frac{1}{(1+x^2)+\sqrt{2}x}+\frac{1}{(1+x^2)-\sqrt{2}x}\right]\,dx.\\
\end{align*}
For $\int_0^\infty\frac{1}{(1+x^2)+\sqrt{2}x}\,dx$, we have
\begin{align*}
\int_0^\infty\frac{1}{(1+x^2)+\sqrt{2}x}\,dx&=\int_0^\infty\frac{1}{\frac12+\left(x+\frac{\sqrt{2}}{2}\right)^2}\,dx\\
&=2\int_0^\infty\frac{1}{1+\left(\sqrt{2}x+1\right)^2}\,dx\\
&=\sqrt{2}\int_1^\infty\frac{1}{1+u^2}\,du\\
&=\sqrt{2}\lim\limits_{a\to\infty}\int_1^a\frac{1}{1+u^2}\,du\\
&=\sqrt{2}\lim\limits_{a\to\infty}\arctan{(u)}\big|_{1}^a\\
&=\sqrt{2}\lim\limits_{a\to\infty}(\arctan{(a)}-\arctan{(1)})\\
&=\sqrt{2}\left(\frac{\pi}{2}-\frac{\pi}{4}\right)=\frac{\sqrt{2}\pi}{4}.
\end{align*}
Similarly, we can obtain
\begin{align*}
\int_0^\infty\frac{1}{(1+x^2)-\sqrt{2}x}\,dx&=2\int_0^\infty\frac{1}{1+\left(\sqrt{2}x-1\right)^2}\,dx\\
&=\sqrt{2}\int_{-1}^\infty\frac{1}{1+u^2}\,du\\
&=\sqrt{2}\left(\frac{\pi}{2}-\left(-\frac{\pi}{4}\right)\right)=\frac{3\sqrt{2}\pi}{4}.
\end{align*}
Therefore, $I=\frac{1}{4}\left(\frac{\sqrt{2}\pi}{4}+\frac{3\sqrt{2}\pi}{4}\right)=\frac{\sqrt{2}\pi}{4}$.\\ | \frac{\sqrt{2}\pi}{4} or \frac{\pi}{2\sqrt{2}}. |
|
Evaluate the iterated integral $\displaystyle{\int_0^1dy\int_y^1(e^{-x^2}+e^x)dx}$. | Noting that the region of the integration is
\[D=\{(x,y): 0\leq y\leq 1, y\leq x\leq 1\}=\{(x,y): 0\leq x\leq 1, 0\leq y\leq x\}
\] and the function $f(x,y)=e^{-x^2}+e^x$ is continuous on $D$, we have
\begin{align*}
\int_0^1dy\int_y^1(e^{-x^2}+e^x)dx&=\iint_D(e^{-x^2}+e^x)\,dx\,dy\\
&=\int_0^1dx\int_0^x(e^{-x^2}+e^x)dy\\
&=\int_0^1(e^{-x^2}+e^x) y\big|_0^x dx\\
&=\int_0^1(e^{-x^2}+e^x)xdx\\
&=\int_0^1xe^{-x^2}dx+\int_0^1xe^xdx.
\end{align*}
By substitution $t=x^2$, we obtain
\[
\int_0^1xe^{-x^2}dx=\frac12\int_0^1e^{-t}dt=-\frac12 e^{-t}\big|_0^1=\frac12-\frac12 e^{-1}.
\]
By integration by parts, we have
\[
\int_0^1xe^xdx=\int_0^1xd(e^x)=xe^x\big|_0^1-\int_0^1e^xdx=e-e^x\big|_0^1=e-(e-1)=1.
\]
Combining all the steps, we can obtain
\[
\int_0^1dy\int_y^1(e^{-x^2}+e^x)dx=\left(\frac12-\frac12 e^{-1}\right)+1=\frac{3}{2}-\frac12 e^{-1}.
\]\\ | \frac{3}{2}-\frac12 e^{-1} |
|
Assume that $a_n>0$ for all $n\in\mathbb{N}$ and the series $\displaystyle{\sum_{n=1}^\infty a_n}$ converges to $4$. Let $\displaystyle{R_n=\sum_{k=n}^\infty a_k}$ for all $n=1, 2,\dots$. Evaluate $\displaystyle{\sum_{n=1}^\infty \frac{a_n}{\sqrt{R_n}+\sqrt{R_{n+1}}}}$. | Noting that $R_n-R_{n+1}=a_n$ for all $n$ and
\[
\frac{a_n}{\sqrt{R_{n}}+\sqrt{R_{n+1}}}=\frac{a_n}{\sqrt{R_{n}} + \sqrt{R_{n+1}}} \cdot \frac{\sqrt{R_{n}} - \sqrt{R_{n+1}}}{\sqrt{R_{n}} - \sqrt{R_{n+1}}}=\frac{a_n(\sqrt{R_{n}} - \sqrt{R_{n+1}})}{R_{n}- R_{n+1}}=\sqrt{R_{n}} - \sqrt{R_{n+1}}.
\]
To evaluate the series $\sum_{n=1}^\infty \frac{a_n}{\sqrt{R_{n+1}} + \sqrt{R_n}}$, we'll use a telescoping series:
\begin{align*}
\sum_{n=1}^\infty \frac{a_n}{\sqrt{R_{n}} + \sqrt{R_{n+1}}}&=\sum_{n=1}^\infty (\sqrt{R_{n}} - \sqrt{R_{n+1}})\\
&=[\sqrt{R_{1}} - \sqrt{R_{2}}]+[\sqrt{R_{2}} - \sqrt{R_{3}}]+[\sqrt{R_{3}} - \sqrt{R_{4}}]+\cdots\\
&=\sqrt{R_{1}}=\sqrt{\sum_{n=1}^\infty a_n}=\sqrt{4}=2.
\end{align*}
Therefore, the series $\displaystyle{\sum_{n=1}^\infty \frac{a_n}{\sqrt{R_n}+\sqrt{R_{n+1}}}}$ converges to $2$.\\ | 2 |
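The telescoping step can be made explicit through partial sums: since $\sum a_n$ converges, its tails satisfy $R_{N+1}\to 0$, hence
\[\sum_{n=1}^{N}\big(\sqrt{R_{n}}-\sqrt{R_{n+1}}\big)=\sqrt{R_{1}}-\sqrt{R_{N+1}}\to \sqrt{R_1}=\sqrt{4}=2 \quad (N\to\infty).\]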
|
For any $a>0$ and $b\in\mathbb{R}$, use Stirling's formula
\[
\lim\limits_{x\to\infty}\frac{\Gamma(x+1)}{x^x e^{-x}\sqrt{2\pi x}}=1
\]
to evaluate the limit
\[
\lim\limits_{n\to\infty}\frac{\Gamma(an+b)}{(n!)^a\: a^{an+b-\frac12}n^{b-\frac12-\frac{a}{2}}(2\pi)^{\frac12-\frac{a}{2}}},
\]
where $\Gamma(\alpha)=\int_0^\infty t^{\alpha-1}e^{-t}dt$ is the gamma function defined for any $\alpha>0$. | Since $a>0$, we know that $an+b=(an+b-1)+1\to\infty$ as $n\to\infty$. By Stirling's formula, we have
\[
\lim\limits_{n\to\infty}\frac{\Gamma(an+b)}{(an+b-1)^{an+b-1}e^{-(an+b-1)}\sqrt{2\pi(an+b-1)}}=1.
\]
Noting that $\Gamma(n+1)=n!$, we get
\[
\lim\limits_{n\to\infty}\frac{n!}{n^{n}e^{-n}\sqrt{2\pi n}}=1.
\]
Thus,
\[
\lim\limits_{n\to\infty}\frac{(n!)^a}{n^{an}e^{-an}(2\pi n)^{\frac{a}{2}}}=1.
\]
Then,
\begin{align*}
&\lim\limits_{n\to\infty}\frac{\Gamma(an+b)}{(n!)^a\: a^{an+b-\frac12}n^{b-\frac12-\frac{a}{2}}(2\pi)^{\frac12-\frac{a}{2}}}\\
= &\lim\limits_{n\to\infty}\frac{\Gamma(an+b)}{(an+b-1)^{an+b-1}e^{-(an+b-1)}\sqrt{2\pi(an+b-1)}}\cdot\frac{(an+b-1)^{an+b-1}e^{-(an+b-1)}\sqrt{2\pi(an+b-1)}}{(n!)^a\: a^{an+b-\frac12}n^{b-\frac12-\frac{a}{2}}(2\pi)^{\frac12-\frac{a}{2}}}\\
=&\lim\limits_{n\to\infty}\frac{(an+b-1)^{an+b-1}e^{-(an+b-1)}\sqrt{2\pi(an+b-1)}}{(n!)^a\: a^{an+b-\frac12}n^{b-\frac12-\frac{a}{2}}(2\pi)^{\frac12-\frac{a}{2}}}\\
=&\lim\limits_{n\to\infty}\frac{(an+b-1)^{an+b-1}e^{-(an+b-1)}\sqrt{2\pi(an+b-1)}}{\: a^{an+b-\frac12}n^{b-\frac12-\frac{a}{2}}(2\pi)^{\frac12-\frac{a}{2}}\cdot n^{an}e^{-an}(2\pi n)^{\frac{a}{2}}}\cdot\frac{n^{an}e^{-an}(2\pi n)^{\frac{a}{2}}}{(n!)^a}\\
=&\lim\limits_{n\to\infty}\frac{(an+b-1)^{an+b-1}e^{-(an+b-1)}\sqrt{2\pi(an+b-1)}}{\: a^{an+b-\frac12}n^{b-\frac12-\frac{a}{2}}(2\pi)^{\frac12-\frac{a}{2}}\cdot n^{an}e^{-an}(2\pi n)^{\frac{a}{2}}}\\
=&\lim\limits_{n\to\infty}\frac{(an+b-1)^{an+b-\frac12}e^{-(b-1)}}{\: a^{an+b-\frac12}n^{b-\frac12} n^{an}}\\
=&\lim\limits_{n\to\infty}\frac{(an+b-1)^{an}(an+b-1)^{b-\frac12}e^{-(b-1)}}{\: (an)^{an}n^{b-\frac12} a^{b-\frac12}}\\
=&\lim\limits_{n\to\infty}\left(\frac{an+b-1}{an}\right)^{an}\left(\frac{an+b-1}{n}\right)^{b-\frac12}\frac{e^{-(b-1)}}{ a^{b-\frac12}}\\
=&\lim\limits_{n\to\infty}\left(1+\frac{\frac{b-1}{a}}{n}\right)^{an}\left(a+\frac{b-1}{n}\right)^{b-\frac12}\frac{e^{-(b-1)}}{ a^{b-\frac12}}.
\end{align*}
Noticing that
\[
\lim\limits_{n\to\infty}\left(1+\frac{x}{n}\right)^{n}=e^x, \ \forall x\in\mathbb{R},
\]
we obtain
\[
\lim\limits_{n\to\infty}\left(1+\frac{\frac{b-1}{a}}{n}\right)^{an}=\lim\limits_{n\to\infty}\left[\left(1+\frac{\frac{b-1}{a}}{n}\right)^{n}\right]^a=\left[e^{\frac{b-1}{a}}\right]^a=e^{b-1}.
\]
Notice also that $\lim\limits_{n\to\infty}\left(a+\frac{b-1}{n}\right)^{b-\frac12}=a^{b-\frac12}$.
Therefore, by putting everything together, we can obtain the limit
\[
\lim\limits_{n\to\infty}\frac{\Gamma(an+b)}{(n!)^a\: a^{an+b-\frac12}n^{b-\frac12-\frac{a}{2}}(2\pi)^{\frac12-\frac{a}{2}}}=1.
\]\\ | 1 |
|
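The limit can be probed numerically in log-space (an added check, not part of the original argument); `math.lgamma` avoids overflow, and the parameter values $a=2$, $b=1.5$ are arbitrary sample choices:

```python
import math

# Arbitrary sample parameters; any a > 0 and real b should behave the same.
a, b = 2.0, 1.5

def ratio(n):
    # log of Gamma(an + b) minus log of the normalizing expression, via lgamma.
    log_num = math.lgamma(a * n + b)
    log_den = (a * math.lgamma(n + 1)                      # log (n!)^a
               + (a * n + b - 0.5) * math.log(a)
               + (b - 0.5 - a / 2) * math.log(n)
               + (0.5 - a / 2) * math.log(2 * math.pi))
    return math.exp(log_num - log_den)

r = ratio(2000)
print(r)  # ≈ 1
```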
Consider the differential equation $\frac{dy}{dx} = xy$. Find the value of $y(\sqrt{2})$ given that $y(0) = 2$. | First, we solve the differential equation to get $y(x) = 2e^{\frac{1}{2}x^2}$.
\begin{align*}
\frac{dy}{dx} & = xy \Leftrightarrow \frac{1}{y} dy = x dx \Leftrightarrow
\int \frac{1}{y} dy = \int x dx \\ \Rightarrow
\ln|y| &= \frac{1}{2}x^2 + C \Rightarrow \,
y = \pm e^{\frac{1}{2}x^2 + C}.
\end{align*}
With $y(0) = 2$, we have that $C = \ln 2$ and the solution is $y = 2e^{\frac{1}{2}x^2}$.
Next, we evaluate the function to get $y(\sqrt{2}) =2e $. | 2e |
|
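The value $y(\sqrt{2})=2e$ can be confirmed by integrating $y'=xy$ numerically; the sketch below uses a classical fourth-order Runge–Kutta scheme (the step count 20000 is an arbitrary accuracy choice):

```python
import math

# RK4 for y' = x*y, y(0) = 2, integrated up to x = sqrt(2).
def f(x, y):
    return x * y

target = math.sqrt(2)
steps = 20000
h = target / steps
x, y = 0.0, 2.0
for _ in range(steps):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(y, 2 * math.e)  # both ≈ 5.43656
```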
Solve the following first-order differential equation:
\begin{equation*}
\frac{dy}{dx} + 2y = e^{-x}, \quad y(0) = 1.
\end{equation*} | To solve it, we use an integrating factor, \(\mu(x) = e^{\int 2dx} = e^{2x}\). Multiplying the entire equation by \(\mu(x)\) gives:
\begin{align*}
e^{2x} \frac{dy}{dx} + 2e^{2x}y = \frac{d}{dx}(e^{2x}y) &= e^{2x}e^{-x} = e^{x}.
\end{align*}
Hence, $e^{2x}y = \int e^{x} dx = e^{x} + C$, which implies $y = e^{-x} + Ce^{-2x}$.
Using the initial condition \(y(0) = 1\), we obtain $1 = y(0) = e^{0} + Ce^{0} = 1+C$, so $C=0$. Therefore, the solution is $y = e^{-x}$. | y = e^{-x}. |
|
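As an added sanity check of the closed form, the sketch below verifies $y(0)=1$ and evaluates the residual $y'+2y-e^{-x}$ at a few sample points with a central finite difference:

```python
import math

# Candidate solution y(x) = e^{-x} of y' + 2y = e^{-x}, y(0) = 1.
def y(x):
    return math.exp(-x)

h = 1e-6
max_residual = max(
    abs((y(x + h) - y(x - h)) / (2 * h) + 2 * y(x) - math.exp(-x))
    for x in [0.0, 0.5, 1.0, 2.0, 5.0]
)
print(y(0.0), max_residual)  # 1.0 and ≈ 0
```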
Given three vectors $y_1=(1,0,0)^\top,y_2=(x,0,0)^\top$ and $y_3=(x^2,0,0)^\top$. Does there exist a system of three linear homogeneous ODEs such that all of $y_1,y_2,y_3$ are solutions to this homogeneous ODE system? | Suppose there is such a system. Consider the linear combination $C_1y_1+C_2y_2+C_3y_3=\vec{0}$, which forces $C_1+C_2x+C_3x^2=0$ for all $x\in\mathbb{R}$, hence $C_1=C_2=C_3=0$; that is, $y_1,y_2,y_3$ are linearly independent. Linearly independent solutions of a three-dimensional linear homogeneous system would form a fundamental matrix $[y_1,y_2,y_3]$ with $\det[y_1,y_2,y_3]\neq 0$ at every $x$. But the second and third rows of $[y_1,y_2,y_3]$ vanish identically, so $\det[y_1,y_2,y_3]\equiv 0$, which leads to a contradiction. | No |
|
Does the ODE $x^2y''+(3x-1)y'+y=0$ have a nonzero power series solution near $x=0$? | Assume there exists a power series solution $y=\sum_{n\geq 0} c_n x^n$. Plugging it into the equation yields the recursive formula $c_{n+1}=(n+1)c_n$ for $n\geq 0$, so $c_n=c_0\, n!$. But the series $\sum_{n\geq 0} n!\, x^n$ has radius of convergence $0$, i.e., it diverges for every $x\neq 0$. Hence a convergent power series solution forces $c_0=0$, and the only power series solution is $y\equiv 0$. | No |
|
Is $y=0$ a singular solution to $y'=\sqrt{y}\ln(\ln(1+y))$? | This is a separable ODE. Near $t=0$ we have $\ln\ln(1+t)\sim \ln t$, so $\int_0^{y}\frac{dt}{\sqrt{t}\,|\ln\ln(1+t)|}$ converges; hence $x(y)=\int_0^y \frac{dt}{\sqrt{t}\ln\ln(1+t)}$ defines a nonconstant solution that touches $y=0$ at a finite value of $x$. On the other hand, $y=0$ is itself a solution. Thus uniqueness fails along $y=0$, and $y=0$ is a singular solution. | Yes |
|
For the ODE system $x'(t)=y+x(x^2+y^2)$ and $y'(t)=-x+y(x^2+y^2)$, is the equilibrium $(x,y)=(0,0)$ stable? | The equilibrium $(x,y)=(0,0)$ for the linear counterpart is a center, as the coefficient matrix has eigenvalues $\pm i$, purely imaginary. Even if the nonlinearity is locally linear ($o(\sqrt{x^2+y^2})$ size near $(0,0)$), we cannot tell the type of the equilibrium $(x,y)=(0,0)$ for the nonlinear system. Instead, we can introduce the Lyapunov function $V(x,y)=\frac{x^2+y^2}{2}$. Along the trajectory, we compute that
\[
\frac{dV}{dt}=xx'(t)+yy'(t)=xy+x^2(x^2+y^2)-xy+y^2(x^2+y^2)=(x^2+y^2)^2>0\quad\forall (x,y)\neq (0,0).
\]
That is to say, $V(x,y)$ is increasing as $t$ grows. So, any trajectory starting near the origin will penetrate the circles (the trajectories for the linearized system) and leave away from the equilibrium $(x,y)=(0,0)$. Thus, the equilibrium $(x,y)=(0,0)$ for the nonlinear system is unstable. | No \ |
|
Assume $x\in\mathbb{R}$ and the function $g(x)$ is continuous and $xg(x)>0$ whenever $x\neq 0$. For the autonomous ODE $x''(t)+g(x(t))=0$, is the equilibrium $x(t)=0$ stable? | Let $y=x'$ to get the ODE system $x'=y,~~y'=-g(x)$. We construct the Lyapunov function $V(x,y):=0.5y^2+\int_0^x g(t)dt$, which is positive near $(0,0)$ (away from the origin) thanks to $xg(x)>0$. Along trajectories, $\partial_t V = y\,y'+g(x)\,x' = -y\,g(x)+g(x)\,y = 0$. That is, $\partial_t V$ is non-positive but not strictly negative, so the equilibrium is stable but not asymptotically stable. | Yes |
|
What is the number of limit cycles for the ODE system $x'(t)=-2x+y-2xy^2$ and $y'(t)=y+x^3-x^2y$? | Let $X,Y$ be the functions on the right side of the two ODEs. Then, we compute that
\[
\partial_x X+\partial_y Y =-1-x^2-2y^2<0.
\] Then the limit cycle doesn't exist according to the following lemma: Given a domain $G\subset\mathbb{R}^2$, if there exists a simply-connected domain $D\subset G$ such that $\partial_x X+\partial_y Y$ does not change sign in $D$ and is always nonzero, then there is no periodic solution in $D$ and thus there is no limit cycle. The proof is by contradiction and the usage of Gauss-Green formula. | 0 |
|
Assume $y=y(x,\eta)$ to be the solution to the initial-value problem $y'(x)=\sin(xy)$ with initial data $y(0)=\eta$. Can we assert that $\frac{\partial y}{\partial \eta}(x,\eta)$ is always positive? | According to the ODE, we have $y(x,\eta)=\eta+\int_0^x \sin(s y(s,\eta))ds$. Taking $\partial_\eta$, we get $\frac{\partial y}{\partial\eta}=1+\int_0^x \cos(s y(s,\eta))\,s\,\partial_\eta y\, ds$. Write $u:=\partial_\eta y$ and differentiate in $x$ to obtain $u'=x\cos(xy)u$, that is, $\frac{du}{u}=x\cos(xy)dx$ with $u(0)=1$. Integrating, we get
\[
\ln u=\int_0^x s\cos\big(s\,y(s,\eta)\big)\,ds\Rightarrow u=\partial_\eta y =\exp\Big(\int_0^x s \cos\big(s\,y(s,\eta)\big)\,ds\Big)>0.
\] | Yes |
|
Does there exist any nonzero function $f(x)\in L^2(\mathbb{R}^n)$ such that $f$ is harmonic in $\mathbb{R}^n$? | If there exists such a function $f$, then taking the Fourier transform of $\Delta f=0$, we get $-|\xi|^2\hat{f}(\xi)=0$. Since $\hat{f}\in L^2(\mathbb{R}^n)$, it must be supported in $\{\xi=0\}$, a set of measure zero. So $\hat{f}=0$ in $L^2$, and by the Plancherel theorem $f=0$ in $L^2$. Since harmonic functions are smooth, the function $f$ must be identically zero. | No |
|
Let $u$ be a harmonic function in $\mathbb{R}^n$ satisfying $|u(x)|\leq 100(100+\ln(100+|x|^{100}))$ for any $x\in\mathbb{R}^n$. Can we assert $u$ is a constant? | By the gradient estimate for harmonic functions, we have
\[
|\nabla u(x)|\leq \frac{n}{R}\max\limits_{\overline{B(x,R)}}|u|\leq \frac{100n}{R}\big(100+\ln(100+(|x|+R)^{100})\big).
\]Let $R\to\infty$ and we get $\nabla u\equiv 0$. So $u$ must be a constant. | Yes |
|
Assume $u(t,x,y)$ solves the wave equation $u_{tt}-u_{xx}-u_{yy}=0$ for $t>0,x,y\in\mathbb{R}$ with initial data $u(0,x,y)=0$ and $u_t(0,x,y)=g(x,y)$ where $g(x,y)$ is a compactly supported smooth function. Find the limit $\lim\limits_{t\to+\infty}t^{1/4}|u(t,x,y)|$ if it exists. | The 2D free wave equation with smooth, compactly supported data obeys the dispersive decay estimate $|u(t,x,y)|\leq Ct^{-1/2}$. Hence $t^{1/4}|u(t,x,y)|\leq Ct^{-1/4}\to 0$ as $t\to+\infty$, so the limit is $0$. | 0 |
|
Consider the transport equation $u_t+2u_x=0$ for $t>0,x>0$ with initial data $u(0,x)=e^{-x}$ for $x>0$ and boundary condition $u(t,0)=A+Bt$ for $t>0$, where $A,B$ are two constants. Find the values of $A,B$ such that the solution $u(t,x)$ is $C^1$ in $\{t\geq 0,x\geq 0\}$. Present the answer in the form of [A,B]. | The general solution to the transport equation is $u(t,x)=F(x-2t)$. Since $u_0(x):=e^{-x}$ is defined in $\{x>0\}$, the function $u_0(x-2t)$ only determines the solution in $\{x>2t\}$. To determine the solution in $\{0<x<2t\}$, we need the boundary data $u(t,0)=g(t):=A+Bt$. Letting $x=0$ in the general solution, we get $g(t)=F(-2t)$ for any $t>0$; that is, $F(s)=g(-s/2)$ for $s<0$. Hence, the solution in $\{0<x<2t\}$ is given by $g(t-\frac{x}{2})=A+B(t-\frac{x}{2})$. To ensure continuity across $x=2t$, we must have $\lim_{t\to 0^+}g(t)=\lim_{x\to 0^+}u_0(x)$, which gives $A=e^0=1$. To ensure $C^1$ differentiability, we must have $\lim_{(t,x)\to 0}u_t+2u_x=0$, which gives $g'(0)+2u_0'(0)=0$; that is, $B=2$. The solution is
\[
u(t,x)=\begin{cases}
e^{-x+2t} & x\geq 2t,\\
1+2t-x& 0\leq x\leq 2t.
\end{cases}
\] | [1,2] |
|
In how many ways can you arrange the letters in the word ``INTELLIGENCE''? | It is given by the multinomial coefficient $\binom{12}{2, 2, 1, 3, 2, 1, 1}=\frac{12!}{2!2!1!3!2!1!1!} = 9,979,200$. | 9979200 |
|
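The multinomial count can be reproduced mechanically:

```python
from math import factorial
from collections import Counter

# Multinomial coefficient 12! / (2! 2! 1! 3! 2! 1! 1!) for the letters of "INTELLIGENCE".
word = "INTELLIGENCE"
counts = Counter(word)
arrangements = factorial(len(word))
for c in counts.values():
    arrangements //= factorial(c)
print(arrangements)  # 9979200
```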
Suppose that $A$, $B$, and $C$ are mutually independent events and that $P(A) = 0.2$, $P(B) = 0.5$, and $P(C) = 0.8$. Find the probability that exactly two of the three events occur. | $P(A \cap B \cap C) = (0.2)(0.5)(0.8) = 0.08$, $P(A \cap B) = 0.10$, $P(A \cap C) = 0.16$, $P(B \cap C)=0.40$.
$P(A \cap B \cap C') = P(A \cap B) - P(A \cap B \cap C) = 0.02$. Similarly, $P(A \cap B' \cap C) = 0.16 - 0.08 = 0.08$, and $P(A' \cap B \cap C) = 0.40 - 0.08 = 0.32.$ \\
Thus, $P(A \cap B \cap C') + P(A \cap B' \cap C) + P(A' \cap B \cap C) = 0.42$. | 0.42. |
|
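The same probability can be obtained by brute force over the eight joint outcomes, using independence to multiply marginals; this check is an addition to the solution above:

```python
from itertools import product

# P(A), P(B), P(C) for three mutually independent events.
p = {"A": 0.2, "B": 0.5, "C": 0.8}

# Sum the probabilities of the joint outcomes in which exactly two events occur.
total = 0.0
for occurs in product([True, False], repeat=3):
    if sum(occurs) == 2:
        prob = 1.0
        for happened, pe in zip(occurs, p.values()):
            prob *= pe if happened else 1 - pe
        total += prob
print(total)  # 0.42
```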
A club with 30 members wants to have a 3-person governing board (president, treasurer, secretary). In how many ways can this board be chosen if Alex and Jerry don’t want to serve together? | $\binom{2}{1}\binom{28}{2}(3!) + \binom{28}{3}(3!) = 24,192.$ | 24192 |
|
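A brute-force enumeration over all ordered boards confirms the count; labeling Alex and Jerry as members 0 and 1 is an arbitrary convention:

```python
from itertools import permutations

# Members 0..29; exclude boards containing both member 0 (Alex) and member 1 (Jerry).
count = sum(
    1
    for board in permutations(range(30), 3)  # ordered: (president, treasurer, secretary)
    if not (0 in board and 1 in board)
)
print(count)  # 24192
```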
There are seven pairs of socks in a drawer. Each pair has a different color. You randomly draw one sock at a time until you obtain a matching pair. Let the random variable $N$ be the number of draws. Find the value of $n$ such that $P(N=n)$ is the maximum. | Since there are only seven colors, a matching pair is guaranteed by the eighth draw, so $2\le N\le 8$. For $n=8$, the first draw can be any sock. The second draw must be one of the 12 that are different, the third draw must be one of the 10 that are different from the first two, ..., the seventh draw must be one of the 2. Thus $P(N=8) = (12/13)(10/12)(8/11)(6/10)(4/9)(2/8) = 16/429$. Repeat the similar process for $n=7, 6, \ldots, 2$ to get
$$P(N=7) = 48/429, P(N=6)= 80/429, P(N=5) = 32/143,$$
$$P(N=4) = 30/143, P(N=3) = 2/13, P(N=2) = 1/13.$$
Therefore, $n=5$ yields the maximum value of $P(N=n)$. | 5. |
|
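The full distribution of $N$ can be computed exactly with rational arithmetic, following the sequential argument of the solution (first $n-1$ draws all of different colors, then a match):

```python
from fractions import Fraction

PAIRS = 7
SOCKS = 2 * PAIRS

def p_no_pair(k):
    # P(the first k draws are all of different colors)
    p = Fraction(1)
    for j in range(1, k):
        p *= Fraction(SOCKS - 2 * j, SOCKS - j)
    return p

# P(N = n): first n-1 draws all distinct colors, then the n-th matches one of them.
dist = {
    n: p_no_pair(n - 1) * Fraction(n - 1, SOCKS - (n - 1))
    for n in range(2, PAIRS + 2)
}
best = max(dist, key=dist.get)
print(best, dist[best])  # 5  32/143
```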
A pharmacy receives 2/5 of its flu vaccine shipments from Vendor A and the remainder of its shipments from Vendor B. Each shipment contains a very large number of vaccine vials. For Vendor A’s shipments, 3\% of the vials are ineffective. For Vendor B, 8\% of the vials are ineffective. The hospital tests 25 randomly selected vials from a shipment and finds that two vials are ineffective. What is the probability that this shipment came from Vendor A? | If the shipment is from Vendor A, the probability that two vials are ineffective is $$\binom{25}{2}(3\%)^2(97\%)^{23} = 0.134003.$$
If the shipment is from Vendor B, the probability that two vials are ineffective is $$\binom{25}{2}(8\%)^2(92\%)^{23} = 0.282112.$$
Applying Bayes' Theorem, we obtain the probability that the shipment came from Vendor A given that two vials in the tested sample are ineffective:
$$\frac{(2/5)(0.134003)}{(2/5)(0.134003) + (3/5)(0.282112)} = 0.24051.$$ | 0.24 |
|
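The two binomial likelihoods and the posterior can be checked directly:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

prior_a, prior_b = 2 / 5, 3 / 5
like_a = binom_pmf(2, 25, 0.03)  # ≈ 0.134003
like_b = binom_pmf(2, 25, 0.08)  # ≈ 0.282112

posterior_a = prior_a * like_a / (prior_a * like_a + prior_b * like_b)
print(posterior_a)  # ≈ 0.24051
```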
Let $X_k$ be the time elapsed between the $(k-1)^{\rm th}$ accident and the $k^{\rm th}$ accident. Suppose $X_1, X_2, \ldots $ are independent of each other. You use
the exponential distribution with probability density function $f(t) = 0.4e^{-0.4t}$, $t>0$ measured in minutes to model $X_k$. What is the probability of at least two accidents happening in a five-minute period? | The number of accidents in one minute is a Poisson process with mean $0.4$. Using the property of the Poisson process, the number of accidents in a five-minute period, denoted by the random variable $N$, must follow the Poisson distribution with mean $\lambda = (0.4)(5) = 2$.
$$P(N \geq 2) = 1- P(N=0) - P(N=1) = 1 - e^{-2} - 2e^{-2} = 1-3e^{-2}.$$ | 0.59 |
|
In modeling the number of claims filed by an individual under an insurance policy during a two-year period, an assumption is made that for all integers $n \geq 0$, $p(n + 1) = 0.1p(n)$ where $p(n)$ denotes the probability that there are $n$ claims during the period. Calculate the expected number of claims during the period. | From the given recursive formula, $p(n)=0.1^n p(0)$ can be derived. Taking into account $\sum_{n=0}^\infty p(n)=1$, we obtain $p(0)\sum_{n=0}^\infty 0.1^n=1$. Solving this equation yields $p(0)=0.9$. Thus $p(n)=(0.9)(0.1^n)$.
This indicates the number of claims follows Geometric distribution, so the expected number of claims is
$(1-0.9)/0.9=1/9\approx 0.11$. | 0.11. |
|
An ant starts at $(1,1)$ and moves in one-unit independent steps with equal probabilities of 1/4 in each direction: east, south, west, and north. Let $W$ denote the east-west position and $S$ denote the north-south position after $n$ steps. Find $\mathbb{E}[e^{\sqrt{W^2+S^2}}]$ for $n=3$. | We shift coordinates so the ant starts at $(0,0)$: let $X=W-1$ and $Y=S-1$. We can find the joint probability function for $(X, Y)$: The four points $(\pm 1, 0)$, $(0, \pm 1)$ each have probability $9/64$, the eight points $(\pm 2, \pm 1)$, $(\pm 1, \pm 2)$ each have probability $3/64$, the four points $(\pm 3, 0)$, $(0, \pm 3)$ each have probability $1/64$. These results can be obtained by counting the paths to the corresponding points. Then
$\mathbb{E}[e^{\sqrt{W^2+S^2}}] = \mathbb{E}[e^{\sqrt{(X+1)^2+(Y+1)^2}}] = 12.083$. | 12.08 |
|
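Rather than tabulating the joint pmf, one can average over all $4^3$ equally likely step sequences from $(1,1)$ directly:

```python
from itertools import product
from math import exp, hypot

steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # east, west, north, south

# Average e^{sqrt(W^2 + S^2)} over all equally likely 3-step paths from (1, 1).
total = 0.0
for path in product(steps, repeat=3):
    w = 1 + sum(dx for dx, _ in path)
    s = 1 + sum(dy for _, dy in path)
    total += exp(hypot(w, s))
expectation = total / 4**3
print(expectation)  # ≈ 12.083
```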
Let the two random variables $X$ and $Y$ have the joint probability density function $f(x,y)=cx(1-y)$ for $0<y<1$ and $0<x<1-y$, where $c>0$ is a constant. Compute $P(Y<X|X=0.25)$. | For the joint density function $f(x, y)$, it should satisfy $$\int_0^1 \int_0^{1-y} f(x, y) dxdy = 1, $$
so the value of the constant $c$ must be $8$. The marginal probability density function for $X$ is $$f_X(x) = \int_0^{1-x} f(x, y)dy = 4x(1-x^2), \ 0<x<1.$$
$$P(Y<X|X=0.25) = \int_0^{0.25} f(y|x=0.25) dy = \int_0^{0.25} \frac{f(0.25, y)}{f_X(0.25)} dy = \int_0^{0.25} \frac{2(1-y)}{0.9375} dy = 0.46667.$$ | 0.47 |
|
Three random variables $X, Y, Z$ are independent, and their moment generating functions are:
$$M_X(t) = (1-3t)^{-2.5}, M_Y(t) = (1-3t)^{-4}, M_Z(t) = (1-3t)^{-3.5}.$$
Let $T=X+Y+Z$. Calculate $\mathbb{E}[T^4]$. | The moment generating function for the random variable $T$ is
$$M_T(t) = M_X(t)M_Y(t)M_Z(t) = (1-3t)^{-10}.$$
Applying the property of moment generating function, we obtain
$$\mathbb{E}[T^4] = M_T^{(4)}(0) = 10\times11\times12\times 13\times 3^4 \times (1-0)^{-14} = 1389960.$$ | 1389960 |
|
The distribution of the random variable $N$ is Poisson with mean $\Lambda$. The parameter $\Lambda$ follows a prior distribution with the probability density function
$$f_{\Lambda}(\lambda) = \frac{1}{2} \lambda^2 e^{-\lambda}, \lambda>0.$$
Given that we have obtained two realizations of $N$ as $N_1 = 1$, $N_2 = 0$, compute the probability that the next realization is greater than 1. (Assume the realizations are independent of each other.) | We are asked to compute $P(N> 1|N_1=1, N_2=0)$. Taking into account
$$P(N> 1|N_1=1, N_2=0) = \int_0^\infty P(N> 1|\Lambda = \lambda) f(\lambda|N_1=1, N_2=0)d\lambda,$$
we will derive the posterior distribution of $\lambda$ first.
$$f(\lambda|N_1=1, N_2=0) = \frac{P(N_1=1, N_2=0|\Lambda = \lambda)f_\Lambda(\lambda)}{\int_0^\infty P(N_1=1, N_2=0|\Lambda = \lambda) f_\Lambda(\lambda) d\lambda} = \frac{27}{2}\lambda^3e^{-3\lambda}.$$
Thus, $$P(N > 1|N_1=1, N_2=0) = \int_0^\infty (1-e^{-\lambda} - \lambda e^{-\lambda}) \frac{27}{2}\lambda^3e^{-3\lambda} d\lambda = \frac{47}{128}.$$ | 0.37 |
|
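The value $47/128$ can be verified by numerically integrating $P(N>1\mid\lambda)$ against the posterior density; the midpoint rule on $[0,30]$ is used below (the truncation point is arbitrary and the neglected tail is negligible):

```python
from math import exp

def posterior(lam):
    # Posterior density (27/2) λ^3 e^{-3λ} derived in the solution.
    return 13.5 * lam**3 * exp(-3 * lam)

def p_gt_one(lam):
    # P(N > 1 | λ) for a Poisson(λ) count.
    return 1 - exp(-lam) - lam * exp(-lam)

# Midpoint-rule integration on [0, 30].
n, upper = 200000, 30.0
h = upper / n
integral = h * sum(posterior((i + 0.5) * h) * p_gt_one((i + 0.5) * h) for i in range(n))
print(integral, 47 / 128)  # both ≈ 0.3672
```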
The minimum force required to break a type of brick is normally distributed with mean 195 and variance 16. A random sample of 300 bricks is selected.
Estimate the probability that at most 30 of the selected bricks break under a force of 190. | The probability that a brick will not be broken under a force of 190 is $P(Z > \frac{190-195}{4}) = 0.8944$. The number of bricks not breaking under a force of 190
follows a Binomial distribution. The probability that at most 30 bricks break is $$\sum_{n=270}^{300} \binom{300}{n} 0.8944^n 0.1056^{300-n}.$$ This quantity can be approximated by
Normal distribution with continuity correction: $P(N \geq 270) \approx P(N > 269.5) = P(Z > \frac{269.5 - (300)(0.8944)}{\sqrt{(300)(0.8944)(1-0.8944)}}) = P(Z > 0.22)$. The final answer is 0.4123. | 0.41 |
|
Find the variance of the random variable $X$ if the cumulative distribution function of $X$ is
$$F(x) = \begin{cases} 0, & {\rm if} \ x < 1, \\ 1 - 2e^{-x}, & {\rm if} \ x \geq 1. \end{cases}$$ | The random variable $X$ has a point mass at $x=1$. $P(X=1) = 1-2e^{-1}$.
$$ \mathbb{E}[X] = (1)P(X=1) + \int_1^\infty xf(x) dx = (1-2e^{-1}) + \int_1^\infty 2xe^{-x} dx = 1 + 2e^{-1} $$
$$\mathbb{E}[X^2] = (1^2)P(X=1) + \int_1^\infty x^2f(x) dx
= (1-2e^{-1}) + \int_1^\infty 2x^2e^{-x} dx = 1 + 8e^{-1}.$$
$${\rm Var}[X] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = 4e^{-1}(1-e^{-1}).$$ | 0.93\ |
|
The hazard rate function for a continuous random variable $X$ is defined as $h(x) = \frac{f(x)}{1-F(x)}$, where $f(\cdot)$ and $F(\cdot)$ are the probability density
function and cumulative distribution function of $X$ respectively. Now you are given $h(x) = 2e^{x} + 1, x>0$. Find $P(X>1)$. | Note $h(x) = \frac{F'(x)}{1-F(x)}$. This implies that
$$F(x) = 1 - e^{-\int_0^x h(t)dt} = 1 - e^{-\int_0^x 2e^{t} + 1dt} = 1-e^{-2e^{x} - x + 2}.$$
Thus, $P(X>1) = 1 - F(1) = e^{-2e+1} = 0.0118365$. | 0.01 |
|
Suppose the random variable $X$ has an exponential distribution with mean $1$. Find $\min_{x \in \mathbb{R}} \mathbb{E}|X-x|$. | Note $\min_{x \in \mathbb{R}} \mathbb{E}|X-x| = \mathbb{E}|X-\pi_{0.5}|$, where $\pi_{0.5} = \ln 2$ is the median of the exponential distribution.
$$\mathbb{E}|X-\ln 2| = \int_0^{\ln2} (\ln 2 - x)e^{-x} dx + \int_{\ln 2}^\infty (x - \ln 2)e^{-x} dx = \left(\ln 2 - \frac12\right) + \frac12 = \ln 2 \approx 0.693.$$ | 0.69 |
|
The joint probability density function for the random variables $X$ and $Y$ is
$$f(x, y) = 6e^{-(2x+3y)}, \ x>0, \ y>0.$$
Calculate the variance of $X$ given that $X>1$ and $Y>2$. | The marginal density functions can be found as follows.
$$f_X(x) = \int_0^\infty f(x, y) dy = 2e^{-2x}, \ x>0,$$
$$f_Y(y) = \int_0^\infty f(x, y) dx = 3e^{-3y}, \ y>0.$$
Clearly, $f(x, y) = f_X(x)f_Y(y)$ and this implies that the random variables are independent. Thus, ${\rm Var}[X|X>1, Y>2] = {\rm Var}[X|X>1]$. Taking into account $P(X>1) = e^{-2}$, we have
$$\mathbb{E}[X|X>1] = \int_1^\infty 2xe^{-2x}\cdot \frac{1}{e^{-2}} dx = 1.5,$$
$$\mathbb{E}[X^2|X>1] = \int_1^\infty 2x^2e^{-2x}\cdot \frac{1}{e^{-2}} dx = 2.5.$$
Thus, $${\rm Var}[X|X>1, Y>2] = {\rm Var}[X|X>1] = 2.5 - 1.5^2 = 0.25.$$ | 0.25 |
|
Consider the Markov chain $X_n$ with state space $Z = \{0, 1, 2, 3, \ldots\}$. The transition probabilities are
$$p(x, x+2) = \frac{1}{2}, \ p(x, x-1) = \frac{1}{2}, \ x>0,$$
and $p(0, 2)=\frac{1}{2}, \ p(0, 0)=\frac{1}{2}$. Find the probability of ever reaching state 0 starting at $x=1$. | Let $\alpha(x) = P(X_n = 0 \ {\rm for \ some \ } n \geq 0|X_0 = x)$, then $\alpha(x)$ must satisfy
$$\alpha(x) = p(x, x+2)\alpha(x+2) + p(x, x-1)\alpha(x-1), \ x>0.$$
To solve the equation $$\alpha(x) = 0.5 \alpha(x+2) + 0.5\alpha(x-1), x>0$$ with $\alpha(0)=1$, we set $\alpha(x) = a^x$ and obtain
$$0.5a^3 - a + 0.5 = 0.$$ This cubic equation has three roots $$a_1 = 1, a_2 = \frac{1}{2}(\sqrt{5}-1), a_3 = -\frac{1}{2}(\sqrt{5}+1).$$
Thus, $\alpha(x)$ admits the expression $c_1 + c_2 a_2^x + c_3 a_3^x$. Since $0\le \alpha(x)\le 1$ and $|a_3|>1$, boundedness forces $c_3=0$; moreover, the hitting probability is the minimal nonnegative solution with $\alpha(0)=1$, which forces $c_1=0$ and $c_2=1$. Hence $\alpha(x) = \left(\frac{\sqrt{5}-1}{2}\right)^x<1$ for $x\geq 1$.
Thus, the chain is transient, and the probability of ever reaching state 0 starting at $x=1$ is $\frac{\sqrt{5}-1}{2}\approx 0.618$. | 0.62 |
|
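Two quick checks, added here as illustrations: algebraically, that $a_2=\frac{\sqrt{5}-1}{2}$ is a root of $0.5a^3-a+0.5=0$; and a Monte Carlo estimate of the hitting probability from $x=1$ (the seed, trial count, and step cap are arbitrary choices):

```python
import random
from math import sqrt

a2 = (sqrt(5) - 1) / 2
residual = 0.5 * a2**3 - a2 + 0.5  # should vanish

random.seed(0)

def hits_zero(x=1, max_steps=1000):
    # Simulate the chain: +2 with prob 1/2, -1 with prob 1/2; stop if 0 is hit.
    for _ in range(max_steps):
        if x == 0:
            return True
        x += 2 if random.random() < 0.5 else -1
    return False  # drifted upward; hitting 0 afterwards is overwhelmingly unlikely

trials = 4000
est = sum(hits_zero() for _ in range(trials)) / trials
print(residual, a2, est)  # residual ≈ 0, a2 ≈ 0.618 ≈ est
```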
The two random variables $X$ and $Y$ are independent and each is uniformly distributed on $[0, a]$, where $a>0$ is a constant. Calculate the covariance of $X$ and $Y$ given that $X+Y<0.5a$ when $a^2 = 2.88$. | The conditional distribution of $X$ and $Y$ given $X+Y<0.5a$ must be uniform over the
triangular region with vertices $(0, 0), \ (0, 0.5a), \ (0.5a, 0)$. Thus, $$f_{X, Y|X+Y<0.5a}(x, y) = 8a^{-2}, \ 0<x, y< 0.5a, \ x+y< 0.5a.$$
$$\mathbb{E}[X|X+Y<0.5a] = \int_0^{0.5a} \int_0^{0.5a - x} 8a^{-2} xdy dx = \frac{1}{6}a,$$
$$\mathbb{E}[Y|X+Y<0.5a] = \int_0^{0.5a} \int_0^{0.5a - y} 8a^{-2} ydy dx = \frac{1}{6}a,$$
$$\mathbb{E}[XY|X+Y<0.5a] = \int_0^{0.5a} \int_0^{0.5a - x} 8a^{-2} xy dx dy = \frac{1}{48}a^2,$$
$${\rm Cov}[X, Y|X+Y<0.5a] = \frac{1}{48}a^2 - (\frac{1}{6}a)^2 = -\frac{1}{144}a^2.$$
When $a^2= 2.88$, we get ${\rm Cov}[X, Y|X+Y<0.5a] = -0.02$. | -0.02 |
|
There are $N$ balls in two boxes in total. We pick one of the $N$ balls at random and move it to the other
box. Repeat this procedure. Calculate the long-run probability that there is one ball in the left box. | Let $X_n$ be the number of balls in the left box after $n$th draw. Clearly, $X_n$ is
a Markov chain because $X_{n+1}$ just depends on $X_n$. The transition matrix is
$$ P = \begin{pmatrix} 0 & 1 & 0 & 0 & \ldots & 0 & 0 & 0 \\
\frac{1}{N} & 0 & \frac{N-1}{N} & 0 & \ldots & 0 & 0 & 0 \\
0 & \frac{2}{N} & 0 & \frac{N-2}{N} & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \ldots & \frac{N-1}{N} & 0 & \frac{1}{N} \\
0 & 0 & 0 & 0 & \ldots & 0 & 1 & 0
\end{pmatrix}.$$
Let $\bar\pi = (\pi_0, \pi_1, \ldots, \pi_N)$ be the stationary distribution. We have $\bar\pi = \bar\pi P$ that gives the system of equations:
$$\begin{cases}
\pi_0 = \frac{1}{N} \pi_1 \\
\pi_1 = \pi_0 + \frac{2}{N} \pi_2 \\
\pi_2 = \frac{N-1}{N}\pi_1 + \frac{3}{N} \pi_3 \\
\ldots \\
\pi_{N-1} = \frac{2}{N}\pi_{N-2} + \pi_N \\
\pi_N = \frac{1}{N} \pi_{N-1}.
\end{cases}
$$
In general, $\pi_K = \frac{N-K+1}{N} \pi_{K-1} + \frac{K+1}{N} \pi_{K+1}$. We can derive that $\pi_K = \binom{N}{K} \pi_0$. Taking into
account $\sum_{i=0}^N \pi_i = 1$, we can obtain $\pi_0 = 2^{-N}$, and $\pi_K = \binom{N}{K} 2^{-N}$ for $K=0, 1, \ldots, N$.
When $K=1$, we get $\pi_1 = N2^{-N}$.
% When $N=8$ and $K=1$, we get $\pi_1= \binom{8}{1} 2^{-8} = 1/8$. | N2^{-N} |
|
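The claim $\pi_K=\binom{N}{K}2^{-N}$ can be verified by checking $\bar\pi=\bar\pi P$ for a concrete box size; $N=8$ below is an arbitrary illustrative choice:

```python
from math import comb

N = 8  # illustrative box size (any N works)

# Candidate stationary distribution: pi_K = C(N, K) / 2^N.
pi = [comb(N, k) / 2**N for k in range(N + 1)]

# Transition matrix of the chain: a uniformly chosen ball switches boxes.
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for k in range(N + 1):
    if k > 0:
        P[k][k - 1] = k / N        # the chosen ball was in the left box
    if k < N:
        P[k][k + 1] = (N - k) / N  # the chosen ball was in the right box

# Verify pi P = pi componentwise.
piP = [sum(pi[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]
err = max(abs(u - v) for u, v in zip(piP, pi))
print(err, pi[1], N / 2**N)  # err ≈ 0 and pi_1 = N * 2^{-N}
```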
Let $W_t$ be a standard Brownian motion. Find the probability that $W_t = 0$ for some $t \in [1, 3]$. | By the reflection principle,
\begin{eqnarray*}
&& P(W_t = 0 \ {\rm for \ some } \ t \ {\rm with} \ 1 \leq t \leq 3) \\
&=& \int_{-\infty}^\infty p_1(0, x) P(W_s = 0 \ {\rm for \ some } \ s \ {\rm with} \ 1 \leq s \leq 3 | W_1 = x) dx \\
& = & 2\int_0^\infty \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}} \left(2 \int_x^\infty \frac{1}{\sqrt{4\pi}} e^{-\frac{t^2}{4}} dt\right) dx \\
& = & \frac{2}{\pi}\arctan\sqrt{2}.
\end{eqnarray*} | 0.61 |
|
Consider a random walk on the integers with probability $1/3$ of moving to the right and probability $2/3$
of moving to the left. Let $X_n$ be the number at time $n$ and assume $X_0 = K > 0$. Let $T$ be the first time
that the random walk reaches either 0 or $2K$. Compute the probability $P(X_T = 0)$ when $K=2$. | Let $M_n = 2^{X_n}$ and the filtration $\mathcal{F}_n = \sigma(X_0, X_1, \ldots, X_n)$. We can show that $M_n$
is a martingale with respect to $\mathcal{F}_n$. One can also show that $T$ is finite almost surely and $\mathbb{E}(|M_n|\mathbf{1}_{\{T >n\}}) \to 0$ as $n \to \infty$.
By optional sampling theorem, $\mathbb{E}(M_T) = \mathbb{E}(M_0)$. Thus,
$$ 2^0 P(X_T = 0) + 2^{2K} P(X_T = 2K) = 2^K,$$
and $$P(X_T = 0) + P(X_T = 2K) = 1.$$
Thus, $P(X_T = 0) = \frac{4^K - 2^K}{4^{K} - 1}$. | 0.80 |
|
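The optional-stopping answer agrees with plain first-step analysis, which can be solved by Gauss–Seidel iteration on $h(x)=\frac13 h(x+1)+\frac23 h(x-1)$ with boundary values $h(0)=1$, $h(2K)=0$; the sweep count below is an arbitrarily large number:

```python
# First-step analysis for h(x) = P(hit 0 before 2K | start at x), p(right) = 1/3, p(left) = 2/3.
K = 2
n = 2 * K
h = [0.0] * (n + 1)
h[0] = 1.0  # absorbing boundary: h(0) = 1, h(2K) = 0

# Gauss-Seidel sweeps on the interior equations.
for _ in range(10000):
    for x in range(1, n):
        h[x] = (1 / 3) * h[x + 1] + (2 / 3) * h[x - 1]

print(h[K], (4**K - 2**K) / (4**K - 1))  # both 0.8
```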
Given the data set $ \{3, 7, 7, 2, 5\} $, calculate the sample mean $\mu$ and the sample standard deviation $\sigma$. Present the answer as $[\mu,\sigma]$. | The sample mean $ \bar{x} $ is given by $ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i $. For our data set,
\[
\bar{x} = \frac{3 + 7 + 7 + 2 + 5}{5} = \frac{24}{5} = 4.8.
\]
The sample standard deviation $ s $ is calculated as $ s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2} $,
\[
s = \sqrt{\frac{(3-4.8)^2 + (7-4.8)^2 + (7-4.8)^2 + (2-4.8)^2 + (5-4.8)^2}{4}} \approx 2.28.
\] | [4.8, 2.28] |
|
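The standard library reproduces both statistics:

```python
import statistics

data = [3, 7, 7, 2, 5]
mean = statistics.mean(data)    # 4.8
stdev = statistics.stdev(data)  # sample (n-1) standard deviation ≈ 2.28
print(mean, round(stdev, 2))
```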
A sample of 30 observations yields a sample mean of 50. Assume the population standard deviation is known to be 10. When testing the hypothesis that the population mean is 45 at the 5\% significance level, should we accept the hypothesis? | We use a Z-test for the hypothesis. The null hypothesis $ H_0: \mu = 45 $. The test statistic is
\[
Z = \frac{\bar{x} - \mu_0}{\sigma/\sqrt{n}} = \frac{50 - 45}{10/\sqrt{30}} \approx 2.74.
\]
At the 5\% significance level, the two-sided critical value is $ Z_{0.025} \approx 1.96 $. Since $ 2.74 > 1.96 $, we reject $ H_0 $. | No |
|
Given points $ (1, 2) $, $ (2, 3) $, $ (3, 5) $, what is the slope of the least squares regression line? | The least squares regression line is $ y = ax + b $ where
\[
a = \frac{n\sum xy - \sum x \sum y}{n\sum x^2 - (\sum x)^2}, \quad b = \frac{\sum y - a\sum x}{n}.
\]
For the given points,
\[
a = \frac{3(1 \cdot 2 + 2 \cdot 3 + 3 \cdot 5) - (1 + 2 + 3)(2 + 3 + 5)}{3(1^2 + 2^2 + 3^2) - (1 + 2 + 3)^2} = \frac{9}{6}=\frac{3}{2},
\]
\[
b = \frac{2 + 3 + 5 - \frac{3}{2}(1 + 2 + 3)}{3} = \frac{1}{3}.
\]
So, the regression line is $ y = \frac{3}{2}x + \frac{1}{3} $. | 1.5 |
|
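The normal-equation formulas translate directly to code:

```python
xs = [1, 2, 3]
ys = [2, 3, 5]
n = len(xs)

sx, sy = sum(xs), sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # 1.5
intercept = (sy - slope * sx) / n                  # 1/3
print(slope, intercept)
```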
A random sample of 150 recent donations at a certain blood bank reveals that 76 were type A blood. Does this suggest that the actual percentage of type A donation differs from 40\%, the percentage of the population having type A blood, at a significance level of 0.01? | We want to test the following hypotheses
\[
H_0: p=0.4 \quad vs. \quad H_1: p\neq 0.4.
\]
The test statistic is
\[
z= \frac{76/150 - 0.4}{\sqrt{0.4\cdot 0.6/150}} = 2.67.
\]
The p-value is
\[
2P(Z\ge 2.67) = 0.0076
\]
which is smaller than 0.01. So, the data does suggest that the actual percentage of type A donations differs from 40\%. | Yes |
|
The accompanying data on cube compressive strength (MPa) of concrete specimens are listed as follows:
\[
112.3 \quad 97.0 \quad 92.7 \quad 86.0 \quad 102.0 \quad 99.2 \quad 95.8 \quad 103.5 \quad 89.0 \quad 86.7.
\]
Assume that the compressive strength for this type of concrete is normally distributed. Suppose the concrete will be used for a particular application unless there is strong evidence that the true average strength is less than 100 MPa. Should the concrete be used under significance level 0.05? | We want to test the following hypotheses
\[
H_0: \mu=100 \quad vs. \quad H_1: \mu<100.
\]
The test statistic is
\[
t= \frac{\bar{x}-\mu_0}{s/\sqrt{n}} = \frac{96.42 - 100}{8.26/\sqrt{10}} \approx -1.37.
\]
The p-value is
\[
P(t_{9}\le -1.37) \approx 0.102
\]
which is greater than 0.05. So, we do not reject $H_0$ and so the concrete should be used. | Yes. \
% Yes, the concrete should be used. |
|
Suppose we have a sample from normal population as follows.
\[
107.1 \quad 109.5 \quad 107.4 \quad 106.8 \quad 108.1
\]
Find the sample mean and sample standard deviation, and construct a 95\% confidence interval for the population mean. | The sample mean is
\[
\bar{x} = \frac{107.1+109.5+107.4+106.8+108.1}{5} = 107.78
\]
and the sample standard deviation is $s=1.076$. The corresponding 95\% confidence interval is
\[
\bar{x} \pm t_{0.025, 4}s/\sqrt{n} = 107.78 \pm 2.776\cdot 1.076/\sqrt{5} = (106.44, 109.12).
\] | (106.44, 109.12). |
|
In a survey of 2000 American adults, 25\% said they believed in astrology. Calculate a 99\% confidence interval for the proportion of American adults believing in astrology. | We have that $n=2000$ and $\hat{p}=0.25$. Hence the 99\% confidence interval is given by
\[
\hat{p} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} = 0.25\pm 2.576 \sqrt{\frac{0.25\cdot 0.75}{2000}} = 0.25 \pm 0.025 = (0.225, 0.275).
\] | (0.225, 0.275). |
|
Two new drugs were given to patients with hypertension. The first drug lowered the blood pressure of 16 patients by an average of 11 points, with a standard deviation of 6 points. The second drug lowered the blood pressure of 20 other patients by an average of 12 points, with a standard deviation of 8 points. Determine a 95\% confidence interval for the difference in the mean reductions in blood pressure, assuming that the measurements are normally distributed with equal variances. | Note that, for the first sample, we have that $n_1=16$, $\bar{x}_1=11$ and $s_1=6$; and for the second sample, we have that $n_2=20$, $\bar{x}_2=12$ and $s_2=8$. So, the pooled sample variance is
\[
s_p^2 = \frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2} = \frac{15\cdot 6^2 + 19\cdot 8^2}{34} \approx 51.647.
\]
With $t_{0.05/2,n_1+n_2-2}=t_{0.025, 34}\approx 2.03$, the 95\% confidence interval for $\mu_1-\mu_2$ is given by
\[
11-12 \pm 2.03\sqrt{51.647\cdot(\frac{1}{16} + \frac{1}{20})} \approx -1\pm 4.9 \Rightarrow (-5.9, 3.9).
\] | (-5.9, 3.9). |
|
The ages of a random sample of five university professors are 39, 54, 61, 72, and 59. Using this
information, find a 99\% confidence interval for the population variance of the ages of all professors at the university, assuming that the ages of university professors are normally distributed. | We have that $n = 5$ and the sample variance $s^2 = 144.5$. Meanwhile, the critical values for chi-square distribution with degree of freedom 4 are given by $\chi_{0.995, 4}^2=0.20699$ and $\chi_{0.005, 4}^2=14.8602$. Thus, the 99\% confidence interval for the variance is given by
\[
\left(\frac{(n-1)s^2}{\chi_{0.005, 4}^2}, \frac{(n-1)s^2}{\chi_{0.995, 4}^2}\right) = \left(\frac{4\cdot 144.5}{14.8602}, \frac{4\cdot 144.5}{0.20699}\right) = (38.90, 2792.41).
\] | (38.90, 2792.41) |
|
Suppose we have two groups of data as follows
\begin{equation*}
\begin{split}
\text{\rm Group 1: }\quad &32 \quad 37 \quad 35 \quad 28 \quad 41 \quad 44 \quad 35 \quad 31 \quad 34\\
\text{\rm Group 2: } \quad &35 \quad 31 \quad 29 \quad 25 \quad 34 \quad 40 \quad 27 \quad 32 \quad 31\\
\end{split}
\end{equation*}
Is there sufficient evidence to indicate a difference in the true means of the two groups at level $\alpha=0.05$? | We want to test
\[
H_0: \mu_1-\mu_2=0 \quad vs. \quad H_1: \mu_1-\mu_2\neq 0.
\]
Note that, for the first sample, we have that $n_1=9$, $\bar{x}_1=35.22$ and $s_1^2=24.445$; and for the second sample, we have that $n_2=9$, $\bar{x}_2=31.56$ and $s_2^2=20.027$. So, the pooled sample variance is
\[
s_p^2 = \frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2} = \frac{8\cdot 24.445 + 8\cdot 20.027}{16} = 22.236,
\]
implying the pooled sample standard deviation $s_p = 4.716$.
The test statistic is
\[
t = \frac{\bar{x}_1-\bar{x}_2}{s_p\sqrt{\frac{1}{n_1} +\frac{1}{n_2}}} = \frac{35.22 - 31.56}{4.716\sqrt{\frac{1}{9} +\frac{1}{9}}} = 1.65.
\]
The p-value is given by $2P(t_{16}>1.65)=0.1184>0.05$, where $t_{16}$ is a t-distribution with degree of freedom 16. Thus, we do not reject $H_0$ and claim that there is not sufficient evidence to indicate a difference in true mean of two groups. | No |
|
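The pooled computation above can be checked directly from the raw data. A short stdlib-only sketch (assuming equal variances, as the solution does):

```python
import statistics
from math import sqrt

# Raw data from the two groups in the problem.
g1 = [32, 37, 35, 28, 41, 44, 35, 31, 34]
g2 = [35, 31, 29, 25, 34, 40, 27, 32, 31]
n1, n2 = len(g1), len(g2)

s1sq = statistics.variance(g1)   # sample variance (denominator n-1)
s2sq = statistics.variance(g2)

# Pooled variance and pooled standard deviation.
sp2 = ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)
sp = sqrt(sp2)

# Two-sample t statistic with n1 + n2 - 2 = 16 degrees of freedom.
t = (statistics.mean(g1) - statistics.mean(g2)) / (sp * sqrt(1 / n1 + 1 / n2))
print(round(sp2, 3), round(t, 2))  # 22.236 1.65
```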
Let $X$ be one observation from the pdf
\[
f(x|\theta) = \left(\frac{\theta}{2}\right)^{|x|}(1-\theta)^{1-|x|}, \quad x=-1, 0, 1; \ \ 0\le \theta \le 1.
\]
Is $X$ a complete statistic? | Note that
\[
E(X) = \frac{\theta}{2}\cdot 1 + (1-\theta)\cdot 0 + \frac{\theta}{2}\cdot (-1) =0, \quad \forall 0\le \theta \le 1.
\]
However, $X$ is not identically zero, since $P(X\neq 0)=\theta>0$ whenever $\theta>0$. So $g(X)=X$ is a nonzero function with $E_\theta(g(X))=0$ for all $\theta$, and by the definition of completeness, $X$ is not a complete statistic. | No |
|
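The expectation computation above can be confirmed numerically. A tiny sketch evaluating $E(X)=\sum_x x\,f(x|\theta)$ over a few illustrative values of $\theta$:

```python
# E(X) = 0 for every theta, so g(X) = X is a nonzero function with
# zero expectation for all theta -- ruling out completeness.
def pmf(x, theta):
    return (theta / 2) ** abs(x) * (1 - theta) ** (1 - abs(x))

for theta in [0.1, 0.25, 0.5, 0.9]:
    ex = sum(x * pmf(x, theta) for x in (-1, 0, 1))
    assert abs(ex) < 1e-12
print("E(X) = 0 for all tested theta")
```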
Let $X_1, \ldots, X_n$ be an i.i.d. random sample with probability density function (pdf)
\begin{equation*}
f(x|\theta) = \begin{cases}
\frac{2}{\sqrt{\pi \theta}}e^{-\frac{x^2}{\theta}}, \quad &x>0, \\
0, \quad &\text{otherwise};
\end{cases}
\end{equation*}
where $\theta>0$. What is the Cramer-Rao Lower Bound for estimating $\theta$? | The likelihood function and log likelihood function are given respectively by
\begin{equation*}
\begin{split}
L(\theta) &= \frac{2^n}{\pi^{n/2}}\theta^{-n/2}e^{-\sum_{i=1}^n X_i^2/\theta},\\
\ell(\theta) &= n\log(2/\sqrt{\pi}) - \frac{n}{2}\log\theta - \sum_{i=1}^n X_i^2/\theta.
\end{split}
\end{equation*}
Taking the derivatives in $\theta$, we obtain
\begin{equation*}
\begin{split}
\ell'(\theta) = - \frac{n}{2\theta}+ \frac{\sum_{i=1}^n X_i^2}{\theta^2},\quad
\ell''(\theta) = \frac{n}{2\theta^2}- \frac{2\sum_{i=1}^n X_i^2}{\theta^3}.
\end{split}
\end{equation*}
Noting that $E(X^2)=\theta/2$, we have the Fisher information
\[
I_n(\theta) = -E(\ell''(\theta)) = -\frac{n}{2\theta^2}+ \frac{2nE(X^2)}{\theta^3} = \frac{n}{2\theta^2}.
\]
Therefore, the Cramer-Rao Lower Bound is given by $1/I_n(\theta) = 2\theta^2/n$. | 2\theta^2/n. |
|
Let $X_1, X_2, \ldots, X_n$ be an i.i.d. random sample from the population density (i.e., Exp($\frac{1}{\theta}$))
\[ f(x|\theta)=\begin{cases}
\theta e^{-\theta x}, & x>0; \\
0, & \text{\rm otherwise}.
\end{cases} \qquad \text{\rm where } \theta>0.
\]
Let $\hat{\theta}_n$ be the maximum likelihood estimator of $\theta$. What is the variance of the limiting distribution of $\sqrt{n}(\hat{\theta}_n - \theta)$? | Note that $E(X_i)=\frac{1}{\theta}$ and ${\rm Var}(X_i)=\frac{1}{\theta^2}$. By the Central Limit Theorem,
\[
\sqrt{n}\left(\bar{X}_n - \frac{1}{\theta}\right) \xrightarrow{d} N\left(0,\frac{1}{\theta^2}\right).
\]
Note that the likelihood function and log-likelihood function are given respectively by
\[
L(\theta) = \theta^n e^{-\theta \sum_{i=1}^n x_i}, \quad \ell(\theta) = n\log\theta - \theta\sum_{i=1}^n x_i.
\]
Taking the derivative
\[
\ell'(\theta) = \frac{n}{\theta} - \sum_{i=1}^n x_i=0
\]
gives that the MLE is
\[
\hat{\theta}_n = \frac{n}{\sum_{i=1}^n X_i} = \frac{1}{\bar{X}_n}.
\]
Let $g(t)=\frac{1}{t}$ with $g'(t)=-\frac{1}{t^2}$. By the Delta method, we have
\[
\sqrt{n}(\hat{\theta}_n - \theta) = \sqrt{n}(g(\bar{X}_n) - g(\frac{1}{\theta})) \xrightarrow{d} N\left(0,\frac{(g'(\frac{1}{\theta}))^2}{\theta^2}\right) = N(0,\theta^2).
\] | \theta^2 |
|
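A seeded simulation can corroborate the limiting variance $\theta^2$ of $\sqrt{n}(\hat{\theta}_n-\theta)$. The parameter values below ($\theta=2$, $n=400$) are illustrative; for finite $n$ the estimate is only approximately $\theta^2=4$:

```python
import random
from math import sqrt

random.seed(1)
theta, n, reps = 2.0, 400, 4000

vals = []
for _ in range(reps):
    # X_i ~ Exp with rate theta; the MLE is 1 / (sample mean).
    xbar = sum(random.expovariate(theta) for _ in range(n)) / n
    vals.append(sqrt(n) * (1 / xbar - theta))

m = sum(vals) / reps
var_hat = sum((v - m) ** 2 for v in vals) / reps
print(round(var_hat, 2))  # should be close to theta**2 = 4
```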
Let $U_1, U_2, \ldots$ be i.i.d. ${\rm Uniform}(0,1)$ random variables and let $X_n=\left(\prod_{k=1}^{n} U_k\right)^{-1/n}$. What is the variance of the asymptotic distribution of $\frac{\sqrt{n}(X_n-e)}{e}$ as $n\to \infty$? | Let $Y_n = \log X_n = \frac{1}{n}\sum_{k=1}^n (-\log U_k)$. Note that the $-\log U_k$ are i.i.d. Exponential(1) random variables with mean $\mu=1$ and variance $\sigma^2=1$. By the central limit theorem,
\[
\frac{\sqrt{n}(Y_n-\mu)}{\sigma} = \sqrt{n}(Y_n-1) \xrightarrow{d} N(0,1).
\]
Applying the Delta method with $g(y)=e^y$ such that $g(1)=e$ and $g'(1)=e>0$, we obtain
\[
\sqrt{n}(g(Y_n)-g(1))\xrightarrow{d} N(0,[g'(1)]^2),
\]
which is equivalent to $\sqrt{n}(X_n-e) \xrightarrow{d} N(0,e^2)$, yielding
\[
\frac{\sqrt{n}(X_n-e)}{e} \xrightarrow{d} N(0,1).
\] | 1 |
|
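The CLT-plus-Delta-method conclusion can be checked by a seeded simulation; the choices $n=200$ and 4000 replications below are illustrative, and the sample variance of $\frac{\sqrt{n}(X_n-e)}{e}$ is only approximately 1 at finite $n$:

```python
import random
from math import e, exp, log, sqrt

random.seed(2)
n, reps = 200, 4000

vals = []
for _ in range(reps):
    # X_n = (prod U_k)^(-1/n) = exp(mean of -log U_k)
    y = sum(-log(random.random()) for _ in range(n)) / n
    vals.append(sqrt(n) * (exp(y) - e) / e)

m = sum(vals) / reps
var_hat = sum((v - m) ** 2 for v in vals) / reps
print(round(var_hat, 2))  # should be close to 1
```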
Let $X$ be a single observation from ${\rm Uniform}(0,\theta)$ with density $f(x|\theta)=1/\theta\, I(0<x<\theta)$, where $\theta>0$. Does the Cramer-Rao Lower Bound exist for estimating $\theta$? | Let $h$ be a nonzero function. The existence of the Cramer-Rao Lower Bound requires that
\[
\frac{d}{d\theta}E_\theta (h(X)) = \int_0^\theta \frac{d}{d\theta}(h(x)f(x|\theta))dx.
\]
However, we have that
\[
\frac{d}{d\theta}E_\theta (h(X)) = \frac{d}{d\theta}\left(\int_0^\theta h(x) \frac{1}{\theta}dx \right) = \frac{h(\theta)}{\theta}- \frac{1}{\theta^2}\int_0^\theta h(x)dx
\]
and
\[
\int_0^\theta \frac{d}{d\theta}(h(x)f(x|\theta))dx = - \frac{1}{\theta^2}\int_0^\theta h(x)dx,
\]
which differ by the term $h(\theta)/\theta$ and hence are not equal for a general nonzero $h$. Thus, the condition for the existence of the Cramer-Rao Lower Bound is not satisfied.
In fact, if the Cramer-Rao Lower Bound existed, it would be given by
\[
\frac{1}{E\left(\left(\frac{d}{d\theta} \log f(X|\theta)\right)^2\right)} = \theta^2.
\]
However, $2X$ is an unbiased estimator of $\theta$ with variance $\theta^2/3$, which is smaller than $\theta^2$, yielding a contradiction. | No |
|
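The contradiction can be made concrete numerically: $2X$ is unbiased for $\theta$ with ${\rm Var}(2X)=4\cdot\theta^2/12=\theta^2/3$, beating the would-be bound $\theta^2$. A seeded sketch with an illustrative $\theta=3$:

```python
import random

random.seed(3)
theta, N = 3.0, 200_000

# The estimator 2X, simulated from Uniform(0, theta).
est = [2 * random.uniform(0, theta) for _ in range(N)]
m = sum(est) / N
var_hat = sum((v - m) ** 2 for v in est) / N

# Mean near theta = 3; variance near theta**2/3 = 3, well below the
# would-be bound theta**2 = 9.
print(round(m, 2), round(var_hat, 2))
```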
Let $X_1, \ldots, X_n$ be an i.i.d. sample from ${\rm Gamma}(\alpha,\beta)$ with density function $f(x|\alpha,\beta) = \frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}$, $x>0$, $\alpha,\beta>0$, where $\alpha$ is known and $\beta$ is unknown. What is the value of the uniform minimum variance unbiased estimator (UMVUE) for $1/\beta$ when $n\alpha = 1$? | As an exponential family, we have that $T=\sum_{i=1}^n X_i$ is a complete and sufficient statistic for $\beta$. On the other hand, note that $T$ has the Gamma distribution ${\rm Gamma}(n\alpha,\beta)$, implying that
\[
E\left(\frac{1}{T}\right) = \int_0^\infty \frac{1}{\Gamma(n\alpha)\beta^{n\alpha}}t^{n\alpha-2}e^{-t/\beta}dt = \frac{\Gamma(n\alpha-1)\beta^{n\alpha-1}}{\Gamma(n\alpha)\beta^{n\alpha}} = \frac{1}{(n\alpha-1)\beta}.
\]
This shows that $\frac{n\alpha -1 }{\sum_{i=1}^n X_i}$ is an unbiased estimator for $1/\beta$. Finally, since $\frac{n\alpha -1 }{\sum_{i=1}^n X_i}$ is an estimator based on the complete and sufficient statistic $\sum_{i=1}^n X_i$, by the Lehmann-Scheff\'{e} Theorem, $\frac{n\alpha -1 }{\sum_{i=1}^n X_i}$ is the UMVUE for $1/\beta$. | 0 |
|
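For $n\alpha>1$, the unbiasedness of $(n\alpha-1)/\sum_{i=1}^n X_i$ for $1/\beta$ can be checked by simulation; the values $\alpha=2$, $\beta=3$, $n=5$ below are illustrative, with target $1/\beta=1/3$. (When $n\alpha=1$ the coefficient $n\alpha-1$ vanishes, so the estimator is identically 0, as the answer states.)

```python
import random

random.seed(4)
alpha, beta, n, reps = 2.0, 3.0, 5, 20_000

ests = []
for _ in range(reps):
    # T = sum of n Gamma(alpha, scale beta) draws ~ Gamma(n*alpha, beta).
    t = sum(random.gammavariate(alpha, beta) for _ in range(n))
    ests.append((n * alpha - 1) / t)   # the UMVUE for 1/beta

m = sum(ests) / reps
print(round(m, 3))  # should be close to 1/beta = 0.333
```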
Let $X_1, X_2, \ldots, X_n$ be an i.i.d. sample from the population density
\[
f(x|\theta) = \frac{2}{\theta}xe^{-x^2/\theta} I(x>0), \quad \theta>0.
\]
Consider using an appropriate chi-square distribution to find the size-$\alpha$ uniformly most powerful (UMP) test for $H_0: \theta\le \theta_0$ vs. $H_1: \theta> \theta_0$. Let $\chi_{2n, \alpha}^2$ be the value such that $P(\chi_{2n}^2 > \chi_{2n, \alpha}^2) = \alpha$, where $\chi_{2n}^2$ is the chi-squared distribution with degree of freedom $2n$. Should the UMP test reject $H_0$ if $\sum_{i=1}^n X_i^2 > \frac{\theta_0}{2} \chi_{2n, \alpha}^2$? | For $\theta_2>\theta_1$,
\[
\frac{f(x_1,\ldots, x_n|\theta_2)}{f(x_1,\ldots, x_n|\theta_1)} = \frac{\frac{2^n}{\theta_2^n}\left(\prod_{i=1}^n x_i\right) e^{-\sum_{i=1}^n x_i^2/\theta_2}}{\frac{2^n}{\theta_1^n}\left(\prod_{i=1}^n x_i\right) e^{-\sum_{i=1}^n x_i^2/\theta_1}} = \left(\frac{\theta_1}{\theta_2}\right)^ne^{-\sum_{i=1}^n x_i^2(\frac{1}{\theta_2}-\frac{1}{\theta_1})},
\]
which is increasing in $\sum_{i=1}^n x_i^2$. By the Karlin-Rubin theorem, the size-$\alpha$ UMP test rejects $H_0$ if $\sum_{i=1}^n X_i^2>c$, where $c$ is a constant such that $P_{\theta_0}(\sum_{i=1}^n X_i^2>c)=\alpha$.
Note that $X_i^2$ has the exponential distribution ${\rm Exp}(\theta)$, implying $\sum_{i=1}^n X_i^2$ has the gamma distribution ${\rm Gamma}(n,\theta)$. Thus,
$2\sum_{i=1}^n X_i^2/\theta$ has the gamma distribution ${\rm Gamma}(n,2)$ which is the same as $\chi_{2n}^2$, the chi-squared distribution with degree of freedom $2n$. Therefore, we have
\[
\alpha=P_{\theta_0}(\sum_{i=1}^n X_i^2>c)=P(\chi_{2n}^2>2c/\theta_0),
\]
implying that $2c/\theta_0=\chi_{2n, \alpha}^2$ and hence $c=\frac{\theta_0}{2} \chi_{2n, \alpha}^2$. | Yes |
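Since the degrees of freedom $2n$ are even, the survival function has the closed form $P(\chi^2_{2n}>x)=\sum_{k=0}^{n-1}e^{-x/2}(x/2)^k/k!$ (the Gamma--Poisson identity), so the critical value needs no tables. A seeded sketch (illustrative values $n=5$, $\theta_0=2$, $\alpha=0.05$) then confirms the test has size close to $\alpha$ under $\theta=\theta_0$:

```python
import random
from math import exp, factorial

def chi2_sf(x, df):
    # Survival function of chi-square with even df, via the Poisson sum.
    n = df // 2
    return sum(exp(-x / 2) * (x / 2) ** k / factorial(k) for k in range(n))

def chi2_quantile(alpha, df, lo=0.0, hi=200.0):
    # Bisection for the upper-alpha critical value (sf is decreasing in x).
    for _ in range(100):
        mid = (lo + hi) / 2
        if chi2_sf(mid, df) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(5)
n, theta0, alpha, reps = 5, 2.0, 0.05, 20_000
c = theta0 / 2 * chi2_quantile(alpha, 2 * n)   # cutoff for sum of X_i^2

# Under theta = theta0, each X_i^2 is exponential with mean theta0.
rejections = 0
for _ in range(reps):
    s = sum(random.expovariate(1 / theta0) for _ in range(n))
    rejections += s > c
print(round(rejections / reps, 3))  # should be close to alpha = 0.05
```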