setting the value of a variable in such an equation to have a specific output
You previously had $0.5y + 0.5x = 0$ and $0.5iy - 0.5ix = 0$. Now you have $(1+i)y + (1-i)x = 1$, hence $x+y = 1$ and $iy - ix = 0$. In both cases the equations guarantee that $x$ and $y$ have the same magnitude. This is still a symmetric set of equations; however, if you want to keep the same magnitude, then you force $x$ and $y$ to take the same real value $0.5$. This is somewhat intuitive, since you changed the real part on the RHS.
Let $p$ and $q$ be primes where $p < q$ and $q \not\equiv 1 \pmod p$. Then any group $G$ of size $pq$ is cyclic.
If $x$ is an element of order $pq$ then there are $pq$ different powers of $x$, from $x^0$ (the identity) to $x^{pq-1}$. But that's the order of the whole group, so all members of the group are powers of $x$, i.e. the group is cyclic.
Calculate eigenvectors
Almost right, only the $1$ in the upper right hand corner of $B$ should be a $-1$. Can you find the eigenvectors now?
If $x^3+3x^2+k=0$ has integer roots then the number of integral values of $k$ is
Hint: For $k=-1$ the polynomial $x^3+3x^2-1$ certainly has no integer root, because by the rational root theorem, this could only be $x=\pm 1$, which isn't a root. The same holds for $k=-3$. On the other hand, for $k=-4$, $x=1$ is an integer root, and for $k=-2$, $x=-1$ is an integer root. Question: Do we need to have all roots integers? If not, we can just let $x$ be an arbitrary integer and set $k=-x^3-3x^2$. Then we have an integer root for this $k$. If yes, we can compare the polynomial with $(x-a)(x-b)(x-c)$ for integers $a,b,c$ and obtain that $a+b+c=-3$, $ab+bc+ca=0$ and $k=-abc$.
Projective spaces and Bézout's theorem
The analog of Bezout's theorem in higher dimension is: The number of intersection points of $n$ hypersurfaces of degrees $d_1,\ldots,d_n$ in $n$-dimensional projective space is $d_1\cdots d_n$, counting multiplicity, working over an algebraically closed field, and assuming the hypersurfaces have no common components. It is not clear to me what properties you would like to preserve in a projective space $\Bbb{P}^n$ with $n>2$ where every pair of lines meets in a point. If you want to preserve reasonable properties, such as each pair of distinct points defining a unique line, and every three points being contained in a projective plane (in the usual sense of projective plane), then this is impossible.
Limits, if the limit exists or not
$F(a)$ refers to what the function is doing when $x$ is EXACTLY equal to $a$. $\lim_{x\to a} F(x)$ refers to how the function behaves at values of $x$ where $x$ is near $a$ but where $x$ does NOT equal $a$. So.....

ONE and FOUR: at $x=-1$ the function "jumps off its track" and has that dark red value at $4$. But when $x$ is close to $-1$ but $x$ is not equal to $-1$ we see the function is on a track and is approaching the value of $3$. So $F(-1)= 4$ and $\lim_{x\to -1} F(x) = 3$.

TWO and FIVE: at $x = 1$ the function takes on the value of $3$. At $x$ near $1$ but $x < 1$ we see the function was approaching $2$ but "ripped itself off its track" and jumped to $3$ at $x = 1$. And at values of $x$ near $1$ but $x > 1$ we see the function is approaching $3$. As the function is approaching one value for $x$ near but less than $1$, and is approaching a different value for $x$ near but more than $1$, there isn't any one consistent value that the function approaches when it is near but not equal to $1$. So $\lim_{x\to 1} F(x)$ does not exist, while $F(1) = 3$.

By the way, we don't ever say something "$=$ does not exist". "Does not exist" is a statement that something doesn't exist. It's not a number or a value.

THREE (and a non-asked-for SIX): at $x=3$ there is no value of the function. There's a hole in the graph. So the value $F(3)$ does not exist. If we look at the values of $x$ near but not equal to $3$ we see the function approaches the value of $2$. So $\lim_{x\to 3} F(x) = 2$, while $F(3)$ does not exist.
Prove the triangle is equilateral
Hint: find a suitable rotation around $C$.
What percent should we increase the denominator in order to decrease the fraction
1. Let $x=\frac{m}{n}$ and $x'=\frac{m'}{n'}$. $$x'=0.9x\implies\frac{m'}{n'}=0.9\frac{m}{n}.$$ You know how $m$ changes: $$\frac{1.05m}{n'}=0.9\frac{m}{n}\implies n'=\frac{1.05}{0.9}n=\frac{7}{6}n=\left(1+\frac{1}{6}\right)n.$$ So the denominator must increase by $\frac{1}{6}$, i.e. by about $16.7\%$. 2. Say $x=\frac{4}{9}$. $x'=\frac{4.2}{9k}$. We want $$\frac{x'-x}{x}=-0.1$$ $$x'=0.9x$$ $$\frac{4.2}{9k}=0.9(4/9)\implies k=\frac{4.2}{9\cdot0.9(4/9)}=\frac{7}{6}.$$ 3. Note that $n'=(1+\frac{50}{3}\color{blue}{\%})n=(1+\frac{1}{6})n$. (Credit to Daniel)
Approximate a piecewise function that is 0 for a while and then has constant slope after a certain point.
The function is equal to $f(x) = \dfrac{|x-5|+(x-5)}{2}.$ That could be considered "piecewise" because the absolute value function is defined piecewise, but its piecewise nature is not made explicit in this characterization. If your function were defined only on a bounded interval I might think about a partial sum of a Fourier series. PS: Alright, let's work with generalized functions and define derivatives accordingly. Then $f'(x) = \begin{cases} 1, & x>5, \\ 0, & x<5, \end{cases}$ and so $f''(x)=\delta(x-5)$. Then the Fourier transform of $f''(x)$ is $$ (\mathcal F (f''))(t) = \int_{-\infty}^\infty e^{-itx} \delta(x-5)\,dx = e^{-5it}. $$ Recall that $$ (\mathcal F (f'))(t) = it(\mathcal F f)(t), $$ so $$ (\mathcal F (f''))(t) = -t^2(\mathcal F f)(t). $$ So $$ (\mathcal F f)(t) = \frac{-e^{-5it}}{t^2}. $$ Applying an inverse Fourier transform should return the original function. So you want an "approximation". Maybe I'd try doing the inverse Fourier transform but with only $\int_{-A}^{-\varepsilon}+\int_\varepsilon^A$ instead of $\int_{-\infty}^\infty$.
In a metric space $(X,d)$, for every Cauchy sequence in $X$ and every $z \in X$, the numerical sequence $\{d(x_n,z)\}$ converges
Prove that $d(x_n,z)$ is a Cauchy sequence in $\mathbb R$. What do we know about Cauchy sequences in $\mathbb R$? Full proof: Let $\epsilon>0$. Since $x_n$ is a Cauchy sequence there exists $N\in \mathbb N$ such that if $n,m>N$ we have $d(x_n,x_m)<\epsilon$. Now notice $d(x_m,x_n)+d(x_n,z) \geq d(x_m,z)$, from where $d(x_m,x_n)\geq d(x_m,z)-d(x_n,z)$. Analogously $d(x_n,x_m)+d(x_m,z)\geq d(x_n,z)$, and so $d(x_n,x_m)\geq d(x_n,z)-d(x_m,z)$. From here $\epsilon>d(x_n,x_m)\geq|d(x_n,z)-d(x_m,z)|$. This proves $\{d(x_n,z)\}$ is a Cauchy sequence, and every Cauchy sequence in $\mathbb R$ converges.
Proving $\mathbb Q \subseteq \mathbb D$
$\mathbb{Q} \subseteq \mathbb{D}$ means that for any element $a \in \mathbb{Q}$ the point $(0, a)$ belongs to $\mathbb{D}$, so providing a construction of $(0,a)$ would show this. Write $a = \frac{p}{q}$. Mark the points $P = (0, p)$, $Q = (q, 0)$ and the unit point $U = (1, 0)$. Draw the line $PQ$. Draw a line $L$ through $U$ parallel to $PQ$. By similar triangles, the intersection point of $L$ and the line $OP$ is $(0, a)$. All of these operations can be done with a ruler and compass.
How can I solve these Modular problems?
You can write it as $$\begin{pmatrix}7&9\\2&-5\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}0\\2\end{pmatrix}$$ Invert the matrix, but in $\Bbb Z_{31}$, which is a field.
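To make the last step concrete, here is a quick sketch in Python (the matrix and right-hand side are the ones above; `pow(d, -1, p)` needs Python 3.8+):

```python
# Solve the system over Z_31 by inverting the 2x2 coefficient matrix mod p.
p = 31
A = [[7, 9], [2, -5]]
b = [0, 2]

d = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p   # determinant mod p
d_inv = pow(d, -1, p)                             # exists because p is prime
# Adjugate formula for the inverse of a 2x2 matrix, reduced mod p.
Ainv = [[ A[1][1] * d_inv % p, -A[0][1] * d_inv % p],
        [-A[1][0] * d_inv % p,  A[0][0] * d_inv % p]]
x = [(Ainv[0][0] * b[0] + Ainv[0][1] * b[1]) % p,
     (Ainv[1][0] * b[0] + Ainv[1][1] * b[1]) % p]
print(x)  # [29, 5]: indeed 7*29 + 9*5 = 0 and 2*29 - 5*5 = 2 (mod 31)
```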
Matrix Solving Method
Now if you add the two equations after dividing by $xy$ you get $$\frac 3x+\frac {18}x=7$$ Note that you should consider whether $x$ or $y$ could be zero before dividing by $xy$
Simple question - Proof
We have $\ln(2x+2)=\ln(2(x+1))=\ln 2+\ln(x+1)$. Thus $$\frac{1}{2}\ln(2x+2)=\frac{1}{2}\ln 2+\frac{1}{2}\ln(x+1).$$ The two functions thus are definitely not equal. But they differ by a constant. So the answer to $\int \frac{dx}{2x+2}$ can be equally well put as $\frac{1}{2}\ln(|2x+2|)+C$ and $\frac{1}{2}\ln(|x+1|)+C$. (We can forget about the absolute value part if $x+1$ is positive in our application.) Remark: A simpler example: It is correct to say $\int 2x\,dx=x^2+C$. It is equally correct (but a little weird) to say $\int 2x\,dx=x^2+47+C$.
Trying to figure out number of permutations based on some rules
Problem 1) If you have $n$ places to put $n$ things, then you have $n!$ ways of doing it. Proof: the first one has a choice of $n$, the second a choice of $n-1$ and so on. You multiply all the choices to get the total amount, so $n!$. And $64!$ is big. Problem 2) If you have $n$ things to be placed upside up (?) or upside down, then there are $2^n$ ways to do it. Proof: the first can be done up or down, so 2 choices, then the next has 2 and so on. So all $n$ give 2 choices. As before we take the product, and you get $2^n$. Now, you haven't said much about the tiles, else we might be able to cancel out any which are the same permutation. But $64!\cdot 2^{64}$ is very big.
Testing Series $ \sum\limits_{n = 3}^{\infty} \frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}} $
Note that $$\left|\frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}}\right|\leq \frac{3}{n(\ln(n))^{\frac{3}{2}}}$$ for $n\geq 3$. Now, $f(x)=\frac{3}{x(\ln x)^{\frac{3}{2}}}$ is positive and decreasing on $[3,\infty)$, and $$\int_3^\infty f(x)dx=\int_3^\infty \frac{3}{x(\ln x)^{\frac{3}{2}}}dx=\left.-\frac{6}{(\ln x)^{\frac{1}{2}}}\right|_{3}^\infty<\infty.$$ By the integral test, the series $\displaystyle\sum_{n=3}^\infty \left|\frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}}\right|$ converges. Therefore, the series $\displaystyle\sum_{n=3}^\infty\frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}}$ converges absolutely, hence is convergent.
Can a function have $n$ outputs or a 'set' as an image?
In a strict sense a function can only output one value, as mentioned in the comments on your question. However, in your example the outputs of the function $t$ are ordered pairs $(x,y)$, so it could be easier to think of it as having two outputs. Of course we are not limited to only two; e.g. we could have a function $f:\mathbb{R} \rightarrow \mathbb{R}^n$ and think of it as having $n$ outputs. Remember though that really there is only one output, which is an element of $\mathbb{R}^n$. If you read the definition of a function you will see there is no restriction as to what kind of elements can be in the range, so a function can output sets as well, e.g. $f: \mathbb{R} \rightarrow \mathcal{P}(\mathbb{N})$.
Differential equation - help
Since $d(\ln y)=dy/y$ and $d(\ln t)=dt/t$, the equation can be written as $$ \frac{t}{y}\,\frac{dy}{dt}=\alpha\,\Bigl(1-\frac{p}{y}\Bigr). $$ This is equivalent to $$ \frac{dy}{dt}=\frac{\alpha}{t}\,y-\frac{\alpha\,p}{t}, $$ which is a linear equation.
Prove that for any sets $A$ and $B$, $\mathscr P(A)\cup\mathscr P(B)\subseteq \mathscr P(A\cup B)$.
Your proof is nice and rigorous. A more concise proof would be the following: Let $X \in \mathcal{P}(A) \cup \mathcal{P}(B)$. Then $X \in \mathcal{P}(A)$ or $X \in \mathcal{P}(B)$. First, suppose that $X \in \mathcal{P}(A)$. Hence $X \subseteq A$. Then we have that $X \subseteq A \cup B$, so $X \in \mathcal{P}(A \cup B)$. Next, suppose that $X \in \mathcal{P}(B)$. Hence $X \subseteq B$. Then we have that $X \subseteq A \cup B$, so $X \in \mathcal{P}(A \cup B)$. In both cases, we have that if $X \in \mathcal{P}(A) \cup \mathcal{P}(B)$, then $X \in \mathcal{P}(A \cup B)$. Therefore $\mathcal{P}(A) \cup \mathcal{P}(B) \subseteq \mathcal{P}(A \cup B)$. $\square$
Using a cut-point to break a homeomorphism
Why use cut points? Clearly R = the usual real line is Hausdorff, while H = R with the open half-line topology (the open sets being the rays (a, infinity)) is not Hausdorff. Hence R and H aren't homeomorphic. If you must use cut points: assume f: H -> R is a homeomorphism. R' = R - {f(x)} is disconnected; H' = H - {x} is connected. Then g = f restricted to H' is a continuous surjection of the connected space H' onto the disconnected space R', a contradiction.
singular or ordinary point of a differential equation
Consider the general homogeneous second order linear differential equation $$u''+P(x)u'+Q(x)u=0$$ where $x \in D \subseteq \mathbb{C}$. The point $x_0 \in D$ is said to be an ordinary point of the given differential equation if $P(x)$ and $Q(x)$ are analytic at $x_0$. If either $P(x)$ or $Q(x)$ fails to be analytic at $x_0$, the point $x_0$ is called a singular point of the given differential equation. A singular point $x_0$ of the given differential equation is said to be a regular singular point if the functions $(x-x_0)P(x)$ and $(x-x_0)^2 Q(x)$ are analytic at $x_0$, and an irregular singular point otherwise.
Does $\sum\limits_{i=1}^{\infty}|a_i||x_i| < \infty$ whenever $\sum\limits_{i=1}^{\infty} |x_i| < \infty $ imply $(a_i)$ is bounded?
If $a_n$ is unbounded, then there exist integers $0 < n_1 < n_2 < \cdots \to \infty$ such that $|a_{n_k}| > k^2.$ Define $x_n$ as follows: $x_{n_k} = 1/k^2, k = 1,2, \dots,$ $x_n=0$ for all other $n.$ Then $\sum |x_n| < \infty,$ while $\sum |a_n||x_n|$ has infinitely many terms $> 1,$ hence diverges, contradiction.
Why are there no other known Fermat primes?
One reason is that there are probably no more Fermat primes! If you picked a random odd number near $2^{2^n}$, the chance that it would be prime is roughly $1/\log(2^{2^n})$, or $k/2^n$ for some constant $k$. The sum of $k/2^n$ converges, so there should be finitely many primes. The sum over all $n$ such that it is not known whether $2^{2^n}+1$ is prime or not is tiny, so the expected number of remaining Fermat primes is 0. Another reason is that Fermat numbers grow so quickly that it's hard to work with them. The $n$th Fermat number has about $2^n$ bits, so the 23rd Fermat number takes about 1 MB to store, the 33rd takes about 1 GB, the 43rd takes about 1 TB to store, etc. This makes any reasonable primality test very hard to carry out. (On the other hand trial division is still workable, and this is how the status, prime or composite, of many of the Fermat numbers was discovered.)
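For what it's worth, the five known Fermat primes and Euler's factorization of $F_5$ are easy to reproduce (a Python sketch; sympy assumed available):

```python
from sympy import isprime, factorint

# F_0 .. F_4 are prime; F_5 is the first composite Fermat number.
for n in range(6):
    F = 2**(2**n) + 1
    print(n, F, isprime(F))

# Euler's factorization of F_5:
print(factorint(2**32 + 1))  # {641: 1, 6700417: 1}
```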
Consecutive prime gaps with equal length, always a multiple of 6 (for $n \gt 3$)?
First of all note that the prime gaps must be even if $n>3$. Hence $g_n \equiv 0 \pmod 2$. Now, $g_{n+1}=g_n$ means each of $p_n,\ p_n+g_n,\ p_n+2g_n$ must be prime. Now, if $g_n \equiv 1,2 \pmod 3$ then $p_n,\ p_n+g_n,\ p_n+2g_n$ are pairwise different modulo 3. Thus one of them is divisible by $3$, hence not a prime. Therefore $g_n \equiv 0 \pmod 3$. So $$g_n \equiv 0 \pmod 6$$
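A quick empirical check of this (a Python sketch with a simple sieve):

```python
# Whenever two consecutive prime gaps are equal (for primes > 3),
# the common gap should be a multiple of 6.
N = 100_000
sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(N**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = [False] * len(sieve[i*i::i])
primes = [i for i, b in enumerate(sieve) if b]

for p, q, r in zip(primes, primes[1:], primes[2:]):
    if p > 3 and q - p == r - q:
        assert (q - p) % 6 == 0, (p, q, r)
print("all equal consecutive gaps up to", N, "are multiples of 6")
```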
Difficulty understanding how Babylonian reciprocals work
Our base-$10$ reciprocals do look like $10/k$! Dividing by $2$ is like multiplying by $5$ (and then shifting the decimal point) and vice versa. Indeed, I regularly divide by $5$ by doubling the number and shifting the decimal point. $10$ has fewer divisors than $60$, so this "trick" (if you like) doesn't have as many applications—basically only this one.
Need help in finding extrema in a trigonometric function
To find your critical points, note that $$-\sqrt{3}\sin(x)+\cos(x)=0\implies \cos(x)=\sqrt{3}\sin(x)\implies\frac{1}{\sqrt{3}}=\tan(x)$$ $$\implies\frac{\frac{1}{2}}{\frac{\sqrt{3}}{2}}=\tan(x)\text{ or } \frac{-\frac{1}{2}}{-\frac{\sqrt{3}}{2}}=\tan(x)$$ On the finite interval $[0,2\pi]$, we have that $$\frac{\frac{1}{2}}{\frac{\sqrt{3}}{2}}=\tan(x)\ \implies x=\frac{\pi}{6}$$ and $$\frac{-\frac{1}{2}}{-\frac{\sqrt{3}}{2}}=\tan(x)\implies x=\frac{7\pi}{6}$$ Therefore we have $4$ points to test for extrema, namely the endpoints and the critical points. Observe: $$f(0)=\sqrt{3}\cos(0)+\sin(0)=\sqrt{3}$$ $$f\left(\frac{\pi}{6}\right)=\sqrt{3}\cos\left(\frac{\pi}{6}\right)+\sin\left(\frac{\pi}{6}\right)=2$$ $$f\left(\frac{7\pi}{6}\right)=\sqrt{3}\cos\left(\frac{7\pi}{6}\right)+\sin\left(\frac{7\pi}{6}\right)=-2$$ $$f(2\pi)=\sqrt{3}\cos(2\pi)+\sin(2\pi)=\sqrt{3}$$ So on $[0,2\pi]$, $f(x)$ has a maximum of $2$ at $x=\frac{\pi}{6}$ and a minimum of $-2$ at $x=\frac{7\pi}{6}$.
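A quick numerical sanity check of these values (a NumPy sketch):

```python
import numpy as np

# f(x) = sqrt(3) cos x + sin x on [0, 2*pi]
x = np.linspace(0, 2 * np.pi, 200_001)
f = np.sqrt(3) * np.cos(x) + np.sin(x)
print(f.max(), x[f.argmax()])  # ~ 2 at ~ pi/6  = 0.5235...
print(f.min(), x[f.argmin()])  # ~ -2 at ~ 7*pi/6 = 3.6651...
```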
Help please with finding the equation and pattern of Taylor Series. (2 problems I have attempted down below).
For the first function the series is $$y=\sum_{n=0}^{\infty }\frac{(n+8)!}{8!(n!)}(x-8)^n$$ For the second $$y=6+x/12+\sum_{n=2}^{\infty }\frac{(-1)^{n-1}(n-1)!!}{2^n6^{2n-1}(n!)}x^n$$
Estimation in Sobolev norm
We know that $u\in W^{2,2}(\Omega')$ because $\|D^2 u\|_{L^2(\Omega')}<\infty$ and $u\in W^{1,2}(\Omega')$. Now, take a cut-off function $\eta\in C_c^{\infty}(\Omega)$ such that $0\leq \eta \leq 1$, $\eta=1$ in $\Omega'$, $\|\nabla \eta\|_{L^{\infty}(\Omega)}\leq C_1(\Omega',\Omega)$, $\|D^2 \eta\|_{L^{\infty}(\Omega)}\leq C_2(\Omega',\Omega)$. Then $\eta u \in W_0^{2,2}(\Omega)$. By Poincaré's inequality you will have an estimate of the form $$\|\eta u\|_{W^{2,2}(\Omega)} \leq C(\Omega)\|D^2(\eta u)\|_{L^2(\Omega)}$$ On the other hand, by the properties of $\eta$, you also have $$ \|D^2(\eta u)\|_{L^2(\Omega)}\leq C(\Omega',\Omega)\|D^2 u\|_{L^2(\Omega')} $$ and $$\|u\|_{W^{2,2}(\Omega')}\leq \|\eta u\|_{W^{2,2}(\Omega)} $$ so putting together the above inequalities you obtain the desired result.
Given only $P(A)$ and $P(A|B)$ can $P(A \cap B)$ be calculated?
I prepared the following example: take $A=\{ 1,2,3 \}$, $B_1=\{ 1,4\}$ and $B_2=\{ 1,2,4,5\}$ in the space $\Omega=\{1,2,3,4,5,6 \}$. Then, fixing $A$, we have $$P(A|B_1)=\frac{1}{2}=P(A|B_2)$$ while $P(B_1) \ne P(B_2)$.
Let $H$ be a subgroup of a group $G$ and suppose that $g_1,g_2 ∈ G$. Prove that the following conditions are equivalent:
I'll get you started, maybe you're having trouble with the order of the clauses, but you should take the effort to do the rest yourself, as this is a basic question: Assume $g_1H = g_2H$. Then $g_1^{-1}g_2H = g_1^{-1}g_1H = H$ so $g_1^{-1}g_2 \in H$. So (a) implies (e). Assume (e). Then there is $h \in H$ such that $g_1^{-1}g_2 = h$. Thus $g_2 = g_1h\in g_1H$. So (e) implies (d).
Finding all submodules of $\mathbb{Z}_{\mathbb{Z}}$
You are right. In fact, one can make the following observation: Let $R$ be a commutative ring with unity; then $R$ is a (left) $R$-module and the sub-$R$-modules of $R$ are exactly its ideals. In the case of $\mathbb{Z}$, its ideals are its subgroups, namely the $n\mathbb{Z}$, for $n\in\mathbb{Z}$.
A nice application of the Baire category theorem.
Let's define $$A_{n,k} = \left\{ x : f_n(x) <\frac{1}{k} \right\}$$ Each $A_{n,k}$ is open. Then the sets $$B_{i,k} =\bigcup_{n>i} A_{n,k}$$ are open, and $$A = \bigcap_{i,k} B_{i,k}$$ so $A$ is a countable intersection of open sets. Indeed, $$x \in A \Leftrightarrow \forall k, \forall i, \exists n > i, f_n(x) < \frac{1}{k}$$
Writing $\frac{1}{(1+ixy)^{2n+1}} +\frac{1}{(1-ixy)^{2n+1}}$ in a way that is independent of $i$.
Write $1+ixy = re^{it}$ in polar form. Then your expression becomes $$\frac{(e^{-it})^{2n+1} + (e^{it})^{2n+1}}{r^{2n+1}}$$ $$= \frac{2\cos\big((2n+1)t\big)}{r^{2n+1}}$$ $$= \frac{2\cos\big( (2n+1)\tan^{-1}(xy)\big)}{\left(\sqrt{1+x^2y^2}\right)^{2n+1}}.$$
C.H. Edwards "Advanced Calculus of Several Variables", Problem 3.5 of page 194
There's clearly a typo in the question. My copy of the book is in my office, so I can't check it now. However, you did miscalculate the formulas for the partial derivatives. I get the Jacobian matrix $$\begin{bmatrix} 1 & 1 & -1 \\ 1 & -1 & 2 \end{bmatrix}\,.$$ So since all three $2\times 2$ minors are nonzero, we can locally express this curve as a graph in any of the three ways.
Computing $\lim_{x \to 1}\frac{\log(x)}{x^2+x-2}$
Let $x = e^t$. Then $t\to 0$ as $x\to 1$. The limit then becomes $$\lim_{x \to 1}\frac{\log(x)}{x^2+x-2} = \lim_{t\to0}\frac{t}{e^{2t}+e^t-2} = \lim_{t\to0}\color{red}{\frac{t}{e^t-1}}\frac{1}{e^t+2} = \color{red}1\cdot \frac{1}{1+2} = \frac{1}{3}$$ $\color{red}{\text{Using}}$ the well-known limit $\lim_{x\to0}\frac{e^x-1}{x}=1$.
Derivative of the determinant
If $X$ is in Jordan normal form, then the diagonal entries of $e^{tX}$ are $e^{ta_{kk}}$ and $$\det e^{tX}=e^{t\sum a_{kk}}$$
Limit of $\vec{x} \rightarrow 0$ of $\frac{\vec{x}}{|\vec{x}|}$
Your argument shows that this limit does not (in general) exist. On the other hand, in physics it's pretty common to have hidden arguments to functions, so that $x$ really denotes $x(t)$, the position of a particle at time $t$, for instance. In such a case, it's possible that for some $t_0$, we have $\lim_{t \to t_0} x(t) = 0$, and perhaps the limit is nice (so that $x$ doesn't, for instance, take on the value $0$ anywhere near $t_0$ except at $t_0$); then writing $$ \lim_{x \to 0} $$ as a proxy for $$ \lim_{t \to t_0} $$ may actually be reasonable, and may actually produce a meaningful value. It also may not, as the example $$ x(t) = (t^3 \sin \frac{1}{t}, t^3 \cos \frac{1}{t}) $$ for $t \ne 0$, $x(0) = (0,0)$, shows. It's also possible that the paper's author is just sloppy or is writing nonsense, of course.
Name and interesting properties of this knot?
This is the prime knot $8_{18}$.
Find real constants $c$ and $k$ such that $y=cx^k$ passes through point $(a, b)$ with slope $m$
The equations you get are $$b=ca^k$$ $$m=cka^{k-1}$$ Dividing the first equation by the second yields $$\frac{b}{m}=\frac{a}{k}$$ which can be solved for $k$, and you can then solve for $c$.
Find the suitable vector
The requirement that $A$ has length 1 can be replaced by the requirement that $A$ is nonzero, because then if you have found a solution you can always divide by the norm, and it will remain a solution. Define an $(n-1)\times n$ matrix $M$ with elements $M_{ij}=(B_{i+1}-B_i)_j$, $i=1,2,\ldots n-1$, $j=1,2,\ldots n$. Remove the $j_0$'th column to convert $M$ into a square matrix ${M}^{(j_0)}$ of size $(n-1)\times(n-1)$. Which column you remove does not matter, provided that the determinant of ${M}^{(j_0)}$ is nonzero. The elements of the removed column form a vector $v$ of length $n-1$. Solve the set of $n-1$ linear equations for the $n-1$ unknowns $a_j$, $$\sum_{j=1}^{n-1}M_{ij}^{(j_0)}a_j=v_i,\;\;i=1,2,\ldots n-1.$$ The solution is given by Cramer's rule, $$a_i=\frac{\det X^{(i)}}{\det M^{(j_0)}},$$ where the matrix $X^{(i)}$ is obtained by replacing the $i$-th column of $M^{(j_0)}$ by the column vector $v$. Now the desired vector $A$ has elements $a_1,a_2,\ldots a_{j_0-1},-1,a_{j_0},\ldots a_{n-1}$. Divide by the norm to obtain a unit vector, and you're done.
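Here is a sketch of this recipe in Python/NumPy (the points $B_i$ are random stand-ins for illustration; a generic choice makes the removed minor nonzero):

```python
import numpy as np

# Build a unit vector A orthogonal to every difference B_{i+1} - B_i.
rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))      # points B_1, ..., B_n in R^n
M = np.diff(B, axis=0)               # (n-1) x n matrix M_ij = (B_{i+1} - B_i)_j
j0 = 0                               # index of the column to remove
Mj0 = np.delete(M, j0, axis=1)       # square (n-1) x (n-1) matrix M^(j0)
v = M[:, j0]                         # the removed column
a = np.linalg.solve(Mj0, v)          # same solution Cramer's rule would give
A = np.insert(a, j0, -1.0)           # put -1 back in position j0
A /= np.linalg.norm(A)               # normalize to a unit vector
print(M @ A)                         # ~ [0 0 0]: A is orthogonal to all differences
```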
Simplify the following complex fraction:
Note: answer changed to reflect altered problem statement. $$ \frac{\frac{9}{x^2}}{\frac{x^2}{25}+\frac{x^2}{15}} = \frac{\frac{9}{x^2}}{\frac{(15+25)x^2}{15\cdot 25}} = \frac{9}{x^2\frac{40x^2}{375}} = \frac{375\cdot 9}{40x^4} = \frac{675}{8x^4}. $$
No simple groups of order 9555: proof
The idea is to show that elements of $P_7$ and $P_{13}$ commute. The proof seems to be using that $|\mathrm{Aut}(P_7)| = 48$ but this is wrong. Because $|P_7| = 49$ we know that $P_7$ isomorphic to one of $\mathbb{Z}_{49}$ or $\mathbb{Z}_{7} \times \mathbb{Z}_{7}$ and so $|\mathrm{Aut}(P_7)|$ is either $42$ or $48 \cdot 42$. The idea of the proof can still be used. We know that $P_{13} \leq N_G(P_7)$ and $13$ does not divide $|\mathrm{Aut}(P_7)|$ so we must have $P_{13} \leq C_G(P_7)$. Therefore $P_7P_{13}$ is abelian, since both $P_7$ and $P_{13}$ are. We could change the proof to avoid thinking about $P_7P_{13}$ as follows. Since $P_{13} \leq C_G(P_7)$ we also have that $P_7 \leq C_G(P_{13}) \leq N_G(P_{13})$ and then pick up the proof in the last sentence.
Show that $X$ can be represented as a union of disjoint equivalence classes
You are on the right track. Since $x\in \{x\}$ (because of reflexivity, as you have said, $x\sim x$) you can conclude that $X \subset \bigcup_{x\in X} \{ x \}.$ As you have shown $\bigcup_x \{x \} \subset X$, you have the equality. It only remains to show that two different classes are disjoint. Assume $\{x\}\ne \{y\}$ and $\{x\}\cap \{y\}\ne \emptyset.$ Then there exists $z\in \{x\}\cap \{y\}.$ That is, $x\sim z$ and $z\sim y.$ Because of transitivity one has $x\sim y.$ So $y\in\{x\}$, from where $\{y\}\subset \{x\}.$ Using a completely analogous argument you have $\{x\}\subset \{y\}.$ Thus $\{y\}= \{x\}$, which contradicts the assumption that both classes were different.
X is reflexive, so X is Weakly complete.
A Google search should help you out here: http://people.math.gatech.edu/~bwick6/teaching/math6338/math6338_hw4.pdf. The solution to this exercise is worked out in that homework set.
Finding centers of ellipses with two points and their respective tangents
As achille hui wrote in a comment, your setup is invariant under affine transformations. So you can simplify things by choosing an affine coordinate system in such a way that the tangent lines meet at $(0,0)$ and the points of contact are $(1,0)$ and $(0,1)$. In that case, for reasons of symmetry the centers of all your ellipses have to lie on the line $x=y$. If you have such a center located at $(c,c)$, the ellipse you are after has the equation $$(x, y, 1)\cdot\begin{pmatrix} -c & c-1 & c \\ c-1 & -c & c \\ c & c & -c \end{pmatrix}\cdot\begin{pmatrix}x\\y\\1\end{pmatrix}=0$$ according to my computations (but please verify). To transform this conic back to your original coordinate system, you have to conjugate this matrix with the transformation matrix (in homogeneous coordinates, i.e. a $3\times3$ matrix) which transforms from original to simplified coordinate system.
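Since the answer says "please verify": here is a small sympy sketch confirming that this conic passes through $(1,0)$ and $(0,1)$, is tangent there to the two lines through the meeting point $(0,0)$, and has center $(c,c)$:

```python
import sympy as sp

x, y, c = sp.symbols('x y c')
Q = sp.Matrix([[-c, c - 1, c], [c - 1, -c, c], [c, c, -c]])

def conic(px, py):
    v = sp.Matrix([px, py, 1])
    return sp.expand((v.T * Q * v)[0])

print(conic(1, 0), conic(0, 1))       # 0, 0: both points lie on the conic
print((Q * sp.Matrix([1, 0, 1])).T)   # (0, 2c-1, 0): tangent at (1,0) is y = 0
print((Q * sp.Matrix([0, 1, 1])).T)   # (2c-1, 0, 0): tangent at (0,1) is x = 0
print(sp.solve([sp.diff(conic(x, y), x),
                sp.diff(conic(x, y), y)], [x, y]))  # {x: c, y: c}: the center
```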
Finding the probability of <25 of a Gamma Distribution
Hint: The integral you need to solve can be reduced to $\frac{1}{6}\int_5^\infty u^3 e^{-u}du$, which can be approached by integration by parts. Alternatively, a Gamma with $\alpha=4$ and $\theta=5$ is the waiting time distribution for the fourth event of a Poisson process with rate $\lambda = 1/5,$ so the probability of being below $25$ is the same as the probability that this process has four or more events within time $t=25.$ And the number of events within time $t=25$ is Poisson with mean $25\cdot \frac{1}{5} = 5.$
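Both routes agree numerically (a scipy sketch):

```python
from scipy.stats import gamma, poisson

# P(X < 25) for X ~ Gamma(shape 4, scale 5) equals P(N >= 4) for N ~ Poisson(5).
p_gamma = gamma.cdf(25, a=4, scale=5)
p_poisson = 1 - poisson.cdf(3, mu=5)
print(p_gamma, p_poisson)  # both ~ 0.735
```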
The greatest measure of an orthogonal projection of an $n$-parallelepiped onto a $d$-hyperplane
This is by no means a complete answer, but Plücker coordinates for a $k$-plane in $n$-space can be determined by taking a unit $k$-cube in that plane and projecting to each of the coordinate $k$-planes (there are $n \choose k$ of them) and determining the resulting $k$-dimensional volume in those planes. Those volumes are the Plücker coordinates. Thus the Plücker coordinates for a line segment (which you can think of as a representation of a vector) in 3-space are exactly the $x$-, $y$-, and $z$-components of the vector. Similarly, the coordinates for a plane $P$ in 3-space come from taking a unit square in that plane and pushing it into each of the $yz$-, $zx$-, and $xy$-planes and measuring its area. As it turns out, the three resulting numbers $A,B,C$ are exactly the coefficients of $x, y, z$ in the plane-equation for $P$, which must therefore have the form $$ Ax + By + Cz = d $$ for some value $d$. (If we restrict to planes through the origin, then $d = 0$, of course). Your question (if it were for a unit parallelepiped) would therefore be "what's the largest possible Plücker coordinate for my cube, in any conceivable orientation?" I'll bet that this is the very sort of thing that Plücker examined, and maybe gave formulas for. So if I were trying to solve your problem, that's where I'd look. (I was hoping to add the tag "Plücker coordinates" to your question, but alas, there is no such tag.)
How to solve this integral with absolute value
$$\int_{-a}^{a}(a^t-\lvert{x}\rvert^t)^2dx=\int_0^{a}(a^t-{x}^t)^2dx+\int_{-a}^{0}(a^t-(-x)^t)^2dx.$$ Now let $y=-x$ in the second integral: $$\begin{aligned}\int_{-a}^{a}(a^t-\lvert{x}\rvert^t)^2dx&=\int_0^{a}(a^t-{x}^t)^2dx+\int_{a}^{0}(a^t-y^t)^2(-dy)\\&=2\int_{0}^{a}(a^t-{x}^t)^2dx\end{aligned}$$ We could have initially observed that the integrand is even, and thus the integral equals twice the integral from $0$ to $a$, as we found.
A rule to determine the crossed out digit
Almost -- you can determine the digit except you can't know whether it was a $0$ or a $9$. The remainder of $z-(a+b+c+\dotso)$ $\bmod9$ is $0$, and so is the remainder of the sum of its digits. If you leave out one of the digits $1$ through $8$, the effect will be to make the remainder of the rest come out as one of the remainders $8$ through $1$, respectively. However, if you leave out a $0$ or a $9$, the remainder will be $0$ in either case, and you can't tell which one was left out. For instance, if you start with $9090$ and subtract $18$, you have $9072$; now if you cross out the $9$ you get $w=9$. On the other hand, if you start with $9018$ and subtract $18$, you have $9000$; now if you cross out a $0$ you also get $w=9$. Thus the same value of $w$ can occur whether a $0$ or a $9$ has been crossed out.
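A small Python sketch of the trick on the $9090$ example above:

```python
z = 9090
m = z - sum(int(d) for d in str(z))      # 9072, always divisible by 9
digits = [int(d) for d in str(m)]
for i, crossed in enumerate(digits):
    rest = digits[:i] + digits[i + 1:]
    w = (-sum(rest)) % 9                 # recovered digit; 0 means "0 or 9"
    print(crossed, w)
```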
Show that two integrals are equal:
HINT: $$\int_0^{\frac\pi2}\sin^mx\cos^mxdx=\frac1{2^m} \int_0^{\frac\pi2}(\sin2x)^mdx$$ $$=\frac1{2^m} \int_0^{\pi}(\sin y)^m\frac{dy}2$$ $$=\frac1{2^m} \int_0^{\frac\pi2}(\sin y)^m\frac{dy}2+\frac1{2^m} \int_{\frac\pi2}^\pi(\sin y)^m\frac{dy}2$$ In the second of these integrals put $x=y-\frac\pi2$, and use $$\int_a^bf(x)dx=\int_a^bf(a+b-x)dx\text{ to find }\int_0^{\frac\pi2}(\sin y)^mdy=\int_0^{\frac\pi2}(\cos y)^mdy$$
Are my DNF and CNF for $A \land (A \lor C) \implies (C \lor B)$ correct?
The calculation is correct. You can also check the final answer to be sure it is an equivalence. If $A$ is false, $A\wedge(A\vee C)\to(C\vee B)$ is true, as is the case when either $C$ or $B$ is true. So we have the implication if and only if $\neg A\vee B\vee C$ holds. Note also that an easier route would have been to apply the absorption equivalence first, then the implication equivalence: $A\wedge(A\vee C)\to(C\vee B)\\\equiv A\to (C\vee B)\\\equiv \neg A\vee C\vee B$
I do not understand "completeness" at all
Let's start with the easy question: how many Boolean operations of a given arity are there? Well, by definition, each of the inputs to a Boolean operation is either True or False. So there are $2^n$ possible inputs you can give to an $n$-ary Boolean operation. For instance, the $2^2$ possible inputs for a $2$-ary Boolean operation are: True, True; True, False; False, True; False, False. Now, what sort of output can a Boolean operation give? Again, either True or False. So we can think of an $n$-ary operation as just a map from a $2^n$-element set to a $2$-element set. In general, the number of maps from an $A$-element set to a $B$-element set is $B^A$ (why?), so this says that the number of $n$-ary Boolean operations is $2^{(2^n)}$. (Note that this is not the same as $(2^2)^n$.) For instance, there are $2^{(2^1)}=2^2=4$ unary Boolean operations, just like your teacher says. Now, what does completeness mean? Well, we can combine Boolean operations (via composition) to get new Boolean operations. For instance, let "$\vee$" be the Boolean operation of disjunction ("or"), and "$\neg$" be negation ("not"). Then we can define "$\implies$" (implication) as $a\implies b:=(\neg a)\vee b$, or $$\vee(\neg(a), b)$$ (the former is how we usually write it, the latter makes the composition structure clear - first apply $\neg$ to $a$, then feed the result of that, and $b$, to $\vee$). A collection $S$ of Boolean operations is complete if any Boolean operation can be built out of operations from $S$. It might be surprising to you that there even are finite complete collections of Boolean operations! But this is in fact true: e.g. from $\neg$ and $\vee$, it turns out we can build any Boolean operation you want! I don't know what "$n$-complete" means, however; for that you'll have to consult your notes. If you edit your question to include a definition of $n$-completeness, however, I can help explain what it means. (One guess I have is that $n$-complete means that you can build all the $n$-ary operations; however, that seems a bit silly, since $n$-complete and complete are then the same for $n>1$.) A slight omission: when I talk about "Boolean operations," I am assuming we are looking at non-nullary operations - that is, all the operations we consider take in at least one input. See https://en.wikipedia.org/wiki/Functional_completeness#Formal_definition for a brief discussion of why.
Prove that this summation has a surprising result! (Or prove me wrong, it is possible that the pattern does not hold)
This is a combinatorial proof. You have $2x$ people whom you will divide into four groups: A, B, C, and D. A and B have equal numbers of members, and C and D have equal numbers of members. The first way to do it: choose an even number of people who will join either A or B ($\binom{2x}{2i}$ ways); half of these $2i$ people join A while the rest join B ($\binom{2i}{i}$ ways); half of the $2x-2i$ remaining people join C while the rest join D ($\binom{2x-2i}{x-i}$ ways). The total number of ways of forming such four groups is $\sum_{i=0}^{x}{\binom{2x}{2i}\binom{2i}{i}\binom{2x-2i}{x-i}}$. The second way to do it: choose $x$ people ($\binom{2x}{x}$ ways), then choose $x$ people again, not necessarily different people ($\binom{2x}{x}$ ways). People who got chosen twice join A, people who never got chosen join B, people who got chosen only in the first selection join C, and people who got chosen only in the second selection join D. The total number of ways of forming such four groups is $\binom{2x}{x}\binom{2x}{x}$.
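The identity is easy to confirm numerically (a Python sketch):

```python
from math import comb

# sum_i C(2x,2i) C(2i,i) C(2x-2i,x-i) should equal C(2x,x)^2
for x in range(1, 15):
    lhs = sum(comb(2*x, 2*i) * comb(2*i, i) * comb(2*x - 2*i, x - i)
              for i in range(x + 1))
    assert lhs == comb(2*x, x) ** 2
print("identity verified for x = 1..14")
```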
Basis for the Null Space of a Matrix
You got $x_1=-3x_3, \; x_2=5x_3+3x_4$, so the general solution of $Ax=0$ is the vector $(-3x_3,5x_3+3x_4,x_3,x_4)'$; now rewrite it as $(-3x_3,5x_3,x_3,0)'+(0,3x_4,0,x_4)'$. Finally just choose the values $(x_3,x_4)=(1,0)$ and $(x_3,x_4)=(0,1)$. We get $\{(-3,5,1,0)',(0,3,0,1)'\}$. If you want you can then normalize them just by dividing by the respective norms. You can check that it is a basis since the vectors are independent and both $\in Ker(A)$.
How to prove that $\frac{|x+y+z|}{1+|x+y+z|} \le \frac{|x|}{1+|y|+|z|}+\frac{|y|}{1+|x|+|z|}+\frac{|z|}{1+|x|+|y|}$
As $|x+y+z|\le |x|+|y|+|z|$ and $t\mapsto \frac{t}{1+t}$ is increasing, letting $|x|=a,|y|=b,|z|=c$ it suffices to prove $$\sum \frac{a}{1+b+c}\ge \frac{a+b+c}{1+a+b+c}$$ Indeed, by C-S/Titu's lemma, $$\sum \frac{a}{1+b+c}=\sum \frac{a^2}{a+ba+ca}\ge \frac{{(a+b+c)}^2}{a+b+c+2(ab+bc+ca)}\ge \frac{a+b+c}{1+a+b+c}$$ Here we used $$\frac{2ab+2bc+2ca}{a+b+c}\le a+b+c$$ which is just $a^2+b^2+c^2\ge 0$
Use recurrence relations to find strings with odd numbers of 0's
Hint: Let $E(n)$ be the number of strings with an even number of zeros and $O(n)$ be the number with an odd number of zeros. Can you write an equation for $E(n)$ in terms of $E(n-1), O(n-1)$, and similarly for $O(n)?$ If you have a string of length $n-1$ with an even number of zeros and extend it by one digit... Added: a string of $n$ digits with an odd number of $0$'s can come from a string of length $n-1$ with an odd number of $0$'s that we add a $1$ or a $2$ to, or from a string of length $n-1$ with an even number of $0$'s that we add a $0$ to. The recurrence is then $O(n)=2O(n-1)+E(n-1)$. Similarly we have $E(n)=2E(n-1)+O(n-1)$. The base condition is $O(0)=0,E(0)=1$ because the empty string has an even number of $0$'s. We know $E(n)+O(n)=3^n$ because every string of length $n$ has either an even or odd number of $0$'s. We then have $$O(n)=2O(n-1)+3^{n-1}-O(n-1)\\O(n)=3^{n-1}+O(n-1)$$ and we can sum the geometric series to get $$O(n)=\frac 12(3^n-1)\\E(n)=\frac 12(3^n+1)$$
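A quick check of the recurrences against the closed forms (a Python sketch):

```python
# E(n), O(n): ternary strings of length n with an even/odd number of 0's.
E, O = 1, 0                               # n = 0: the empty string is "even"
for n in range(1, 16):
    E, O = 2 * E + O, 2 * O + E           # the two recurrences above
    assert O == (3**n - 1) // 2
    assert E == (3**n + 1) // 2
print("closed forms verified for n = 1..15")
```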
Is every invariant subspace equal to some $\text{null}(T-\lambda I)^n$?
No. For one thing, $V$ is always $T$-invariant, and unless $T$ has only one eigenvalue, will not be equal to a generalized eigenspace. For another example, consider $T\colon\mathbb{C}^3\to\mathbb{C}^3$ given by $T(x,y,z) = (x,2y,z)$. Then $U=\{(x,y,0)\in\mathbb{C}^3\mid x,y\in\mathbb{C}\}$ is $T$-invariant, but is not a generalized eigenspace, or even generated by a union of generalized eigenspaces, because you would need $\lambda=1$ to get the vectors $(x,0,0)$, but that would force you to include the vectors $(0,0,z)$ which are not contained in $U$. Moreover, you can have a subspace spanned by eigenvectors corresponding to distinct eigenvalues, or by suitably chosen portions of generalized eigenspaces corresponding to distinct eigenvalues, and have an invariant subspace that cannot be expressed as a single generalized eigenspace; and if you don't include all of the generalized eigenspace, you will not be able to express them in terms of generalized eigenspaces either. Now, for a finite dimensional complex vector space, you can show that any invariant subspace has a basis of generalized eigenvectors; but that is simply because you can find a Jordan canonical form for the restriction of $T$; the generalized eigenvectors for $T|_U$, when $U$ is $T$-invariant, are also generalized eigenvectors of $T$. So in a sense you can "come down" to some generalized eigenvectors of $T$, but you won't get just a generalized eigenspace.
Introduction to ring theory?
Atiyah-Macdonald has been the best introduction to commutative algebra from the moment it was published in 1969. Actually I think it is one of the most extraordinary textbooks ever published in all of mathematics. It is exactly 128 pages long, hence also one of the thinnest mathematics books on the market, but contains a mind-boggling quantity of material. It starts with the definition of a ring (!) on page 1, but already in the exercises to Chapter 1 you will find a self-contained introduction to affine algebraic geometry, both classical and scheme-theoretic (and as an aside, remember that schemes were very new in 1969). The book calmly goes on to chapter 11, the last one, where different definitions of dimension are given but proved to be equivalent. You will also learn in that chapter about Hilbert functions and regular local rings, two notions which play a great role in algebraic geometry. I won't even try to summarize the other chapters: suffice it to say that every basic notion in commutative algebra is covered: the Nullstellensatz for example is proved (or given as an exercise with hints) several times. And the most remarkable feature of the book is that every proposition is proved, crisply but completely, without cheating or resorting to hypocritical shortcuts like "it is easy to see..." or "it is left as an exercise...". There are other good books on commutative algebra: Bourbaki, EGA, Eisenbud, Patil-Storch, Zariski-Samuel, ... but they are probably too advanced for a beginner, whom they might discourage rather than help. I advise you to use them as reference books once you have studied a reasonable part of Atiyah-Macdonald. Good luck!
which are diagonalizable over $\mathbb{C}$
Hint: The shear matrix $$\left[\begin{array}{cc} 1 & 1\\ 0 & 1 \end{array}\right]$$ has two real eigenvalues (both equal to $1$) and cannot be diagonalized, even allowing complex entries.
Why is $\int_{-1}^{1} \frac{1}x \mathrm{d}x$ divergent?
First, $\frac 1 x$ isn't defined on $[-1,1]$ (because of what happens in $0$). You could get around this by considering it defined on $[-1,0) \cup (0,1]$, but then you've got another problem, much more serious: the function is unbounded, and the concept of "Riemann integral" is defined only for bounded functions (and bounded intervals). Finally, you might try to use the concept of "improper integral of the second kind". This doesn't work, either: $\int \limits _{-1} ^1 \frac 1 x \ \Bbb d x = \int \limits _{-1} ^0 \frac 1 x \ \Bbb d x + \int \limits _0 ^1 \frac 1 x \ \Bbb d x = \infty - \infty$ which is indeterminate. What you are trying to do is to give a meaning to that integral using the concept of "principal value", in the framework of distribution theory. But this is clearly not the same as saying that your function is integrable with integral $0$.
How to find a specific curve if the initial value is not given?
I'd say that the answer $y(e) = 2$ is wrong if the question is stated like this. You have already found the correct general solution of the first-order linear ODE, with one constant of integration $C \in \mathbb{R}$. From this you do indeed get $y(e) = C$ as William Elliot has pointed out. Therefore, you can obtain any value for $y(e)$, depending on the value of $C$. The reason why the particular solution with $C=2$ is of interest is that it is the only solution of this ODE for which the (right-sided) limit as $x \searrow 1$ is finite. For the general solution $y(x) = \frac{2x(\ln(x)-1)+C}{\ln(x)}$, $C \in \mathbb{R}$, we have \begin{equation} \lim_{x \searrow 1} y(x) = \left\{ \begin{array}{ll} -\infty, & C < 2\\ 0, & C = 2\\ \infty, & C > 2 \end{array} \right. \end{equation} Therefore, if we assume that the right-sided limit of $y(x)$ is finite as $x$ approaches $1$, then there remains only the particular solution with $C=2$, which satisfies $y(e) = 2$. But this assumption needs to be added to the question, otherwise the answer is wrong.
Is the normalizer of a Sylow $p$-subgroup a $p$-group?
Not true in general. If $G$ has a normal Sylow $p$-subgroup $P$ but $G$ is not itself a $p$-group, then $N_G(P)=G$ is not a $p$-group. Example: $G=S_3$, $P=A_3$.
Sketch $f(x) =\frac{ 2x}{x^2-5x+4}$
Just using common sense and the knowledge that the denominator, which is zero at $x=1$ and $x=4$, is negative between those points and positive outside that range, we can describe the sketch quite well: Between $1$ and $4$ $f(x)$ is always negative. As $x$ approaches $1$ from above or $4$ from below, the curve asymptotically appoaches the lines $x=1$ and $x=4$, respectively. Somewhere between those two lines, $f(x)$ turns over so that it can go back to negative infinity by the time it reaches $x=4$, so the curve in that region looks like an upside-down infinitely tall cup, with a maximum at roughly $x=2.5$, at which point $y$ is about $-2$. Slightly to the left of $x=1$, as we move further to the left, the curve (which starts at positive infinity at $x=1$) falls rapidly, and by $x=0$ it has just fallen to the origin, which it crosses. Since the curve and its derivatives are not discontinuous at $x=0$, it continues to go negative for negative $x$, and in fact $f(x) &lt; 0$ whenever $x&lt;0$. But when $x$ his a large negative number, the denominator grows faster than the numerator, so the curve approaches zero from below as $x \to -\infty$. Combined with the behavior near the origin, we can deduce that there is some minimum at some negative value of $x$, at which point $f(x)&lt;0$. To the right of $x=4$, $f(x)$ starts out at positive infinity, and falls, but it never becomes negative. $f(x)$ behaves like $2/x$ for very large $x$, approaching the $x$ axis from above.
Regarding whether one has a circular definition in the following case
I don't know that specific book that you mention, but your objection is correct: you can't define the Cartesian product using the concept of function and also define function using the concept of Cartesian product. However, note that the concept of function only uses the concept of the Cartesian product of two sets $A$ and $B$. And you can define it as$$\left\{\,\{\{a\},\{a,b\}\}\middle|\,a\in A\wedge b\in B\right\}.$$
Can future value be computed as a finite geometric series?
A student invests $200 at the start of each month for 24 months, starting today. You used the formula which calculates the future value where the first payment is made after one month. You have to multiply your expression by 1.005 to get the right result: $C_{24}=200\cdot 1.005\cdot \frac{1.005^{24}-1}{1.005-1}=5111.82$
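In code (a Python sketch of the two formulas):

```python
i = 0.005                                    # monthly effective rate
fv_ordinary = 200 * (1.005**24 - 1) / i      # first payment after one month
fv_due = fv_ordinary * 1.005                 # annuity-due: first payment today
print(round(fv_ordinary, 2))                 # 5086.39
print(round(fv_due, 2))                      # 5111.82
```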
Definite integration involving Trigonometry.
We have $$\sin^{\sqrt{2}+1}x=\sin^{\sqrt{2}-1}x\sin^2x=\sin^{\sqrt{2}-1}x-\sin^{\sqrt{2}-1}x\cos^2x.$$ Now use integration by parts $$\int f'g=fg-\int fg'$$ with $$f(x)=\frac{1}{\sqrt{2}}\sin^{\sqrt{2}}x,\quad g(x)=\cos x;$$ it follows that $$\begin{aligned}&\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}-1}x\cos^2x\,dx\\ =&\frac{1}{\sqrt{2}}\sin^{\sqrt{2}}x\cos x\Big|_{x=0}^{\frac{\pi}{2}}+\frac{1}{\sqrt{2}}\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}}x\sin x\,dx\\ =&\frac{1}{\sqrt{2}}\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}+1}x\,dx \end{aligned}$$ Therefore $$\begin{aligned}\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}+1}x\,dx&=\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}-1}x\,dx-\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}-1}x\cos^2x\,dx\\ &=\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}-1}x\,dx-\frac{1}{\sqrt{2}}\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}+1}x\,dx, \end{aligned}$$ which implies $$\frac{\sqrt{2}+1}{\sqrt{2}}\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}+1}x\,dx=\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}-1}x\,dx.$$ Hence $$\frac{\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}+1}x\,dx}{\int_0^{\frac{\pi}{2}}\sin^{\sqrt{2}-1}x\,dx}=\frac{\sqrt{2}}{\sqrt{2}+1}=\sqrt{2}(\sqrt{2}-1)=2-\sqrt{2}.$$
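A numerical check of the final ratio (a scipy sketch):

```python
import numpy as np
from scipy.integrate import quad

s = np.sqrt(2)
num, _ = quad(lambda x: np.sin(x) ** (s + 1), 0, np.pi / 2)
den, _ = quad(lambda x: np.sin(x) ** (s - 1), 0, np.pi / 2)
print(num / den, 2 - s)  # both ~ 0.58578...
```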
Prove for all $x\geq 1$, $\log x \leq \sqrt{x}-\frac{1}{\sqrt{x}}$.
METHODOLOGY $1$: Using a Taylor-type bound First we let $y=\sqrt x$. Then, the inequality $\log(x)\le \sqrt x-\frac1{\sqrt x}$ for $x\ge 1$ is equivalent to the inequality $$y\log(y)\le \frac12\left(y^2-1\right)$$ for $y\ge 1$. One can check that $\log(y)\le (y-1)-\frac{(y-1)^2}{2y}$ for $y\ge 1$: both sides agree at $y=1$, and the derivative of the difference of the right and left sides is $\frac{(y-1)^2}{2y^2}\ge 0$. Hence, we have for $y\ge 1$ $$\begin{align} y\log(y)&\le y(y-1)-\frac12(y-1)^2\\\\ &=(y-1)\left(y-\frac{y-1}{2}\right)\\\\ &=\frac12(y-1)(y+1)\\\\ &= \frac12 (y^2-1)\end{align}$$ And we are done! METHODOLOGY $2$: Using the Mean Value Theorem Let $f(x)=\log(x)-\sqrt{x}+\frac1{\sqrt x}$. Note that $f(1)=0$ and for $x\ge 1$ $$f'(x)=-\frac{(\sqrt x-1)^2}{2x^{3/2}}\le 0$$ Can you finish?
Suppose $40,000 was invested on January 1, 1980 at an annual effective interest rate of 7%
The first part is good. Since we are dealing with annuities-due, you need to divide the contribution of \$5000 by $d$, where $d = i/(1+i) = 0.07/1.07 \approx 0.0654$, giving a required fund of $5000/d \approx \$76{,}428.57$. Now, to find the smaller payment one year prior to the first \$5000 payment, the way I did it was $$FV = 40{,}000\,(1.07)^9 = \$73{,}538.37.$$ We know that in order for the scholarship to be available, the funds need to increase up to \$76,428.57. The shortfall at year 9 is $76{,}428.57 - 73{,}538.37 = \$2{,}890.20$, which corresponds to about $0.56976171$ more years of growth until the fund reaches the required level. This means that at that moment $C - 2{,}890.20 = 5{,}000 - 2{,}890.20 = \$2{,}109.80$ has already been earned and can be paid on January 1, 1989.
Equation $\sqrt{x}+\sqrt{y}+\sqrt{z}=\sqrt{2013}$ in rationals
In general, for any non-negative rationals $(a,b,c)$ such that $a+b+c=1$, you have a solution $(x,y,z)=(2013a^2,2013b^2,2013c^2)$. This is the only family of solutions. Dividing through by $\sqrt{2013}$ gives $$ \sqrt{\frac{x}{2013}}+\sqrt{\frac{y}{2013}}+\sqrt{\frac{z}{2013}}=1. $$ Each square root is either rational (if the numerator is $2013a^2$) or of the form $x'\sqrt{p'/q'}$, where $x'$ is a positive rational and $p'$ and $q'$ are square-free integers with no common factor. But if any value of the latter form appears, it can't be made to disappear by adding more rationals or values of the same form.
Abel's Integral $\int_{0}^{\infty} \frac{x}{\sinh(\pi x)(1+x^2)} \,dx$
Note that $$\frac{b}{{{b^2} + 1}} = \int_0^\infty {{e^{ - bx}}\cos xdx} $$ Hence $$\int_0^\infty {\frac{x}{{{x^2} + 1}}{e^{ - ax}}dx} = \int_0^\infty {\int_0^\infty {{e^{ - xt}}(\cos t){e^{ - ax}}dt} dx} = \int_0^\infty {\frac{{\cos t}}{{t + a}}dt}$$ this gives $$\text{Ci}(\pi n) = {( - 1)^{n + 1}}\int_0^\infty {\frac{x}{{{x^2} + 1}}{e^{ - \pi nx}}dx}$$ Summing over $n=1,3,5...$ gives $$\sum_{n = 1,3,5...}\text{Ci}(\pi n) = \int_{0}^{\infty} \frac{x}{(x^2+1)(e^{\pi x}-e^{-\pi x})} dx$$ This is line 6. For the evaluation of the integral, consider $$ \mathcal{I} = \int_{-\infty}^{\infty} \frac{x}{\sinh(\pi x)(1+x^2)}dx $$ Consider the rectangular contour with vertices $R, R+Ri, -R+Ri, -R$, where $R=2N+1/2$ with $N$ a very large integer. When $R$ tends to infinity, the following three integrals tend to 0 $$\int_{R}^{R+Ri} \frac{x}{\sinh(\pi x)(1+x^2)}dx \to 0$$ $$\int_{R+Ri}^{-R+Ri} \frac{x}{\sinh(\pi x)(1+x^2)}dx \to 0$$ $$\int_{-R+Ri}^{-R} \frac{x}{\sinh(\pi x)(1+x^2)}dx \to 0$$ Note that the function $\dfrac{z}{\sinh(\pi z)(1+z^2)}$ has simple poles at $z=ni$, with $n\geq 2$, and a double pole at $z=i$, the residue at $z=ni$ being $\dfrac{(-1)^n(ni)}{\pi(1-n^2)}$, and at $z=i$ being $\frac{i}{4\pi}$. Hence $$\begin{aligned} \mathcal{I} &= 2\pi i\frac{i}{4\pi} + 2\pi i\sum_{n = 2}^{\infty}\frac{(-1)^n(ni)}{\pi(1-n^2)} \\ &= -\frac{1}{2}+2\sum_{n = 2}^{\infty}\frac{(-1)^nn}{n^2-1} \\ &= -\frac{1}{2}+\sum_{n = 2}^{\infty}\left(\frac{(-1)^n}{n-1}+\frac{(-1)^n}{n+1}\right) \\ &= 2\ln 2 -1 \end{aligned} $$
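A numerical confirmation of $\mathcal{I} = 2\ln 2 - 1$ (a scipy sketch; the integrand is even, and $x/\sinh(\pi x)\to 1/\pi$ as $x\to 0$, so there is no real singularity):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x / (np.sinh(np.pi * x) * (1 + x**2))
I_half, _ = quad(f, 0, np.inf)
print(2 * I_half, 2 * np.log(2) - 1)  # both ~ 0.3862944
```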
Smooth curves and Line Integrals
Take $f(x,y,z)=1$ and let $C_1$ and $C_2$ be two different curves of different lengths. Then the line integral of $f$ along $C_1$ equals the length of the curve $C_1$. Similarly, the line integral of $f$ along $C_2$ equals the length of the curve $C_2$. Thus, these integrals must be different.
Decomposition of sum of two independent random variables
When $X$ and $Y$ are i.i.d., $E(X\mid X+Y)=E(Y\mid X+Y)$ by symmetry hence $$E(X\mid X+Y)=\tfrac12(X+Y).$$ When $X$ and $Y$ are independent with possibly different distributions, there is no similar simple formula independent on the distributions. Even when $X$ and $Y$ are i.i.d., there exists no general formula for $E(X\mid X+Y\lt a)$, except, by the same argument, the semi-explicit identity $$E(X\mid X+Y\lt a)=\tfrac12E(X+Y\mid X+Y\lt a).$$
Prove that: $\operatorname{cosec}(2A) + \operatorname{cosec}(4A) + \operatorname{cosec}(8A) = \cot(A) - \cot(8A) $
Put the right side to the left: $$\operatorname{cosec}(2A)+\operatorname{cosec}(4A)+\operatorname{cosec}(8A)-\cot(A)+\cot(8A)=0$$ Then express the left side in terms of sines and cosines. You get: $$\frac{1}{\sin(2A)}+\frac{1}{\sin(4A)}+\frac{1}{\sin(8A)}-\frac{\cos(A)}{\sin(A)}+\frac{\cos(8A)}{\sin(8A)} =$$ $$= \frac{1}{\sin(2A)}+\frac{1}{\sin(4A)}+\frac{1+\cos(8A)}{\sin(8A)}-\frac{\cos(A)}{\sin(A)} =$$ $$= \frac{1}{\sin(2A)}+\frac{1}{\sin(4A)}+\frac{2\cos^2(4A)}{2\sin(4A)\cos(4A)}-\frac{\cos(A)}{\sin(A)} = $$ $$= \frac{1}{\sin(2A)}+\frac{1}{\sin(4A)}+\frac{\cos(4A)}{\sin(4A)}-\frac{\cos(A)}{\sin(A)} = $$ $$= \frac{1}{\sin(2A)}+\frac{1+\cos(4A)}{\sin(4A)}-\frac{\cos(A)}{\sin(A)} = $$ $$= \frac{1}{\sin(2A)}+\frac{2\cos^2(2A)}{2\sin(2A)\cos(2A)}-\frac{\cos(A)}{\sin(A)} = $$ $$= \frac{1}{\sin(2A)}+\frac{\cos(2A)}{\sin(2A)}-\frac{\cos(A)}{\sin(A)} = $$ $$= \frac{1+\cos(2A)}{\sin(2A)}-\frac{\cos(A)}{\sin(A)} = $$ $$= \frac{2\cos^2(A)}{2\sin(A)\cos(A)}-\frac{\cos(A)}{\sin(A)} = $$ $$= \frac{\cos(A)}{\sin(A)}-\frac{\cos(A)}{\sin(A)} = 0$$ Q.E.D.
Symmetry arguments in probability
Any argument showing that the probability that that card is a spade is a certain number, would likewise show that the probability that that card is a heart is that same number. And similarly for the other two suits. They're mutually exclusive and exhaustive. So $$ x+x+x+x=1. $$ Now solve for $x$.
Does an irreducible $M$-matrix have positive diagonal entries?
As $\alpha I-P=(\alpha+1)\left[I-\frac1{\alpha+1}(I+P)\right]$, we may assume without loss of generality that $\alpha=1$, i.e. $A=I-P$ with $\rho(P)\le1$. We claim that $A$ has a positive diagonal. Suppose on the contrary that $a_{11}\le0$, i.e. $p_{11}\ge1$. Then the first diagonal entry of $P^k$ is also $\ge1$ for every positive integer $k$. Hence $\|P^k\|_\infty\ge1$ and Gelfand's formula implies that $\rho(P)\ge1$. Thus $\rho(P)=1$. As $P$ is irreducible, $v=Pv$ for some positive eigenvector $v$. Therefore $$ v_1=\sum_jp_{1j}v_j\ge v_1+\sum_{j>1}p_{1j}v_j, $$ meaning that all off-diagonal entries on the first row of $P$ are zero. This is a contradiction because $P$ is irreducible.
Let $R$ be a commutative ring with $1\neq 0$. Prove that if every proper ideal of $R$ is prime, then $R$ is a field.
Hint: Take $A = B = (a)$, a principal ideal. Then $(a^2)=(a)$ implies that $a^2$ divides $a$, which in an integral domain can only happen if $a=0$ or $a$ is invertible.
Finding the average of two variables from two equations
Adding the two equations we get $$10x+10y=36,$$ then $$x+y=3.6.$$ Therefore the average of the two variables, $x/2+y/2$, is $$3.6/2=1.8.$$
Stewart, Introduction to Linear Algebra
I actually have much experience of teaching myself linear algebra (I missed all the linear algebra lectures of my first term, which was not an intro course - it was at a top 2 uni [I would rather not say which]). Needless to say, with one month before the start-of-term tests, I had my work cut out. Lecture notes were provided, but incomplete (e.g. no proofs), so I went out and bought S. Axler's Linear Algebra Done Right (L.A.D.R). The first four chapters were the best of any book I have read to date - the layout was easy, the proofs were easy and his sentences flow well. There are not any solutions to the exercises, but certainly for the first 6 chapters you won't need any (the first three chapters take up a two-month course in many unis and reading up to 7 will usually take one year) because the questions are extremely easy (though illuminating!). The negatives: Many people here discredit Axler - I'll admit the book is strange after chapter 5/6, though on the whole it doesn't get too weird. By strange I mean some extremely non-standard approaches (in my opinion incorrect) are put in. I had to unlearn the material in chapter 5/6 (whichever one was the eigenvalue chapter) and find a book on determinants. He doesn't treat things well in this section (his approach doesn't work well for vector spaces over R and C, for instance, and - obviously - he doesn't emphasize the minimum polynomial as much as he should).
Justify an approximation of $\sum_{n=1}^\infty|G_n|\log\left(\frac{n+1}{n}\right)$, where $G_n$ is the $n$th Gregory coefficient
By the very definition of Gregory coefficients $$ \sum_{n\geq 1}|G_n| z^n = 1+\frac{z}{\log(1-z)} \tag{1}$$ and by their integral representation $|G_n|\sim\frac{1}{n\log^2 n}$ for large values of $n$. Since $\log\left(1+\frac{1}{n}\right)\sim\frac{1}{n}$, the wanted series is absolutely convergent; by Frullani's integral and $(1)$ $$\sum_{n\geq 1}|G_n|\log\left(1+\frac{1}{n}\right) = \int_{0}^{+\infty}\frac{1-e^{-t}}{t}\sum_{n\geq 1}|G_n|e^{-nt}\,dt=\int_{0}^{+\infty}\frac{1-e^{-t}}{t}\left(1+\frac{e^{-t}}{\log(1-e^{-t})}\right)\,dt $$ and by substituting $t=-\log u$ in the last integral we get $$\sum_{n\geq 1}|G_n|\log\left(1+\frac{1}{n}\right) = \int_{0}^{1}\frac{u-1}{u\log u}\left(1+\frac{u}{\log(1-u)}\right)\,du \tag{2}$$ where the RHS of $(2)$ is perfectly manageable through standard numerical routines (Newton-Cotes formulas, Gaussian quadrature or a combination of them). My version of Mathematica returns $\color{green}{0.4122998}$ as an approximate value of the RHS of $(2)$.
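The quadrature is indeed routine; for instance (a scipy sketch):

```python
import numpy as np
from scipy.integrate import quad

# Integrand of (2); it extends continuously to the endpoints
# (limit 0 as u -> 0 and limit 1 as u -> 1).
f = lambda u: (u - 1) / (u * np.log(u)) * (1 + u / np.log(1 - u))
val, err = quad(f, 0, 1)
print(val)  # ~ 0.4122998
```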
Why will $\epsilon - N$ proof not work if we pick a random $L$ that is not the limit?
In that case, for some $\epsilon>0$, you are not able to find an $N$ that satisfies the limit condition. Concretely, if the sequence converges to $L_0$ and you pick $L\ne L_0$, take $\epsilon=|L-L_0|/2$: beyond some index the terms stay within $\epsilon$ of $L_0$, hence outside the $\epsilon$-neighborhood of $L$, so no $N$ works.
The difference between standard deviation $=0$ and no standard deviation
A Gaussian distribution with mean $\mu$ and variance $\sigma^2$ is typically defined as a continuous distribution with probability density function $$ f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-(x - \mu)^2/(2\sigma^2)}. $$ If you set $\sigma=0$ then the density function becomes undefined. Taking a limit as $\sigma \to 0^+,$ the density function goes to $0$ when $x\neq \mu$ and goes to infinity when $x = \mu.$ You can consider a known value (with no uncertainty) to be a discrete random variable that takes a value $\mu$ with probability $1.$ The variance of this variable is zero. So you certainly can assign random variables to every pixel such that some of the variables are Gaussian with positive variance and some of the variables have zero variance. I would not call the zero-variance variables "Gaussian," however.
Probability of a maximum being greater than a number. Inequality
\begin{align*} P\biggl(\sup X_n \geq a\biggr) &= 1 - P\biggl(\sup X_n < a \biggr) \\ &= 1 - P\biggl(\bigcap_n\{X_n < a\} \biggr) \\ &= 1 - \biggl[1 - P\biggl(\bigcup_n\{X_n \geq a\} \biggr)\biggr] \\ &= P\biggl(\bigcup_n\{X_n \geq a\} \biggr) \end{align*}
Convergence of Product of Square Integrable Random Variables
Let $0<\alpha<\frac12$. Let $Y(x)=x^{-\alpha}$ on $(0,1]$ and whatever makes it $\mathcal L^2$ outside. Choose some $\delta>0$ and let $$a_n=n^{-\delta-\frac2{1-2\alpha}}$$ Let $\epsilon=4\alpha/(1-2\alpha)>0$ and let $b_n=a_n+n^{-(2+\epsilon)}$. Define $X_n(x)=n\,\chi_{[a_n,b_n]}(x)$. Then $\lVert X_n\rVert_2^2 = n^2\,(b_n-a_n)=n^{-\epsilon}$. In particular, $X_n \in \mathcal{L}^2$ and $X_n\to 0$ in $\mathcal L^2$. On the other hand, $Y\cdot X_n(x)= n\, x^{-\alpha}$ on $[a_n,b_n]$, and $0$ elsewhere. Hence \begin{align} \lVert Y\cdot X_n\rVert_2^2 &=n^2\,\int_{a_n}^{b_n}\,\frac1{t^{2\alpha}}\,dt\\ &=\frac{n^2}{1-2\alpha}\,\left[t^{1-2\alpha}\right]^{b_n}_{a_n}\\ &=\frac{n^2}{1-2\alpha}\left(b_n^{1-2\alpha}-a_n^{1-2\alpha}\right)\\ \end{align} We manipulate the expression above: $$n^2\,b_n^{1-2\alpha}=\left(n^{\frac2{1-2\alpha}}b_n\right)^{1-2\alpha}=\left(n^{-\delta}+n^{\frac2{1-2\alpha}-(2+\epsilon)}\right)^{1-2\alpha}$$ Now, observe that $\frac2{1-2\alpha}-(2+\epsilon)=0$, and hence $$n^2\,b_n^{1-2\alpha}=\left(n^{-\delta}+1\right)^{1-2\alpha},$$ which goes to $1$ as $n\to\infty$. Moreover, similar manipulations show that $n^2\,a_n^{1-2\alpha}=\left(n^{-\delta}\right)^{1-2\alpha}$, which goes to $0$ as $n\to\infty$. It follows that $$\lVert Y\cdot X_n\rVert_2^2 \to \frac1{1-2\alpha}$$ as $n\to\infty$, so $Y\cdot X_n$ does not converge to $0$ in $\mathcal L^2$.
How to compute the percentage of improvement
Percentage of improvement often does not work like people think, so I don't like the term, but we can calculate it. You were doing $\frac 1{120}$ of the task per second. You are now doing $\frac 1{25}$ of the task per second. The improvement is $\frac 1{25}-\frac 1{120}=\frac {19}{600}$ of the task per second. This is $\frac {19}5$ of the amount you were doing, so the improvement is $380\%$ By contrast, the time required is $\frac {25}{120}=\frac 5{24}$ of the time taken previously, so the time has been reduced by $\frac {19}{24}$ and the time reduction is about $79\%$. Be careful to define what number you are quoting.
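In code, with exact rational arithmetic from Python's standard library:

```python
from fractions import Fraction

old_rate = Fraction(1, 120)   # fraction of the task done per second, before
new_rate = Fraction(1, 25)    # fraction of the task done per second, after

rate_gain = (new_rate - old_rate) / old_rate   # 19/5, i.e. a 380% improvement
time_cut = 1 - Fraction(25, 120)               # 19/24, i.e. about 79% less time
print(float(rate_gain) * 100, float(time_cut) * 100)  # 380.0 79.166...
```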
Prove that $F = GP$ for some projection $P$ on $V$ and for some non-singular linear map $G$ on $V$.
One hint here is that, whenever such a decomposition is possible, $\operatorname{ker} F = \operatorname{ker} P$, since $G$ is non-singular. Also, projections are determined uniquely by their kernel and their image, so we simply need to find the image of $P$. As it turns out, we don't really need to find the image, so much as nominate one, and let our choice of $G$ do the rest. Let $W$ be complementary to $\operatorname{ker} F$, i.e. $W \oplus \operatorname{ker} F = V$. Define $P$ to be the projection along $\operatorname{ker} F$ onto $W$. Note that $F$ is injective when restricted to $W$. Hence, if $w_1, \ldots, w_k \in W$ form a basis for $W$, then $Fw_1, \ldots, Fw_k$ is linearly independent in $V$. Let $u_{k+1}, \ldots, u_n$ be a basis for $\operatorname{ker} F$. Then $w_1, \ldots, w_k, u_{k+1}, \ldots, u_n$ is a basis of $V$. Extend $Fw_1, \ldots, Fw_k$ to a basis $Fw_1, \ldots, Fw_k, u'_{k+1}, \ldots, u'_n$ arbitrarily. Define the unique linear map $G : V \to V$ by mapping the former basis to the latter, in order. Then, as $G$ maps one basis to another, $G$ is non-singular. We just need to show $F = GP$. We do this by showing they are equal on a basis. We have $Pw_i = w_i$, since $w_i \in W$, the image of the projection $P$. Further, $GPw_i = Gw_i = Fw_i$, by definition of $G$. We also have $GPu_i = G0 = 0 = Fu_i$, since $u_i \in \operatorname{ker} F = \operatorname{ker} P$. Thus, $F = GP$ as required.
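For intuition, a small numerical sketch of this construction (the matrix $F$ and all sizes are made-up example data; taking $W=(\ker F)^{\perp}$ via the SVD is one convenient, not the only, choice of complement):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3
F = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # random rank-r map

U, s, Vt = np.linalg.svd(F)
rank = int(np.sum(s > 1e-10 * s[0]))
Vr = Vt[:rank].T          # orthonormal basis of W = (ker F)^perp
P = Vr @ Vr.T             # projection onto W along ker F

# G sends the basis (w_i, u_j) to (F w_i, u'_j): the columns of F @ Vr are
# completed to a basis of R^n by left singular vectors orthogonal to range(F)
G = np.hstack([F @ Vr, U[:, rank:]]) @ Vt

print(np.allclose(G @ P, F))              # True: F = G P
print(abs(np.linalg.det(G)) > 1e-10)      # True: G is non-singular
```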
Is the Cartesian product of a bounded set with a set of points, both in $\mathbb{R}^{n}$, Jordan-measurable over $\mathbb{R}^{2n}$?
Since $B$ is bounded, it fits inside some $n$-rectangle $D$. Each $\vec x_k$ can be placed in an $n$-cube $C_k$ of sidelength $\epsilon$, for any $\epsilon > 0$. This means that $$A \times B \subseteq \bigcup_k C_k \times D$$ whose Jordan measure is at most $N\epsilon^n\,m(D)$, where $N$ is the number of points $\vec x_k$. Can you take it from there?
Bipartite graph matching like problem.
Let's write $A_i$ as sets of numbers: $$ A_1=\{0,1,2,3\}\\ A_2=\{0,1,4,5\}\\ A_3=\{0,2,5,6\}\\ A_4=\{2,3,4,5\}\\ A_5=\{1,2,5,6\}\\ A_6=\{1,3,4,6\}\\ A_7=\{0,3,5,6\}\\ $$ (I hope I have calculated everything right!). It's easy to see that $A_i \cup A_j\ne A$, so every number must be in at least $3$ "left component" sets, so if the number of pairs is $k$ we have $$4k\ge21$$ and hence, $k$ being an integer, $k\ge6$.
Express AB in terms of $a$
I don't want to give a full solution so you have a chance to try it for yourself. The method to use is:$$\frac{\sin{\angle BAC}}{|BC|}=\frac{\sin{\angle BCA}}{|AB|}$$ Work out the angle $\angle BAC$ using the fact that angles in a triangle sum to $\pi$. Then this can be simplified to give you $|AB|$ in terms of some numbers and $a$.
Representing $\mathbf{Tr}(A + BC^{-1}B^T)< K$ as an LMI
The trace can also be written as $$ \mathbf{Tr}(A + B\,C^{-1}B^\top) = \sum_{i=1}^n e_i^\top (A + B\,C^{-1}B^\top)\,e_i, \tag{1} $$ with $n$ such that $A \in \mathbb{R}^{n\times n}$ and $e_i$ the $i$th column of the $n \times n$ identity matrix. By introducing intermediate scalar variables $\alpha_i$ one can write the initial inequality in an indirect way by using \begin{align} e_i^\top (A + B\,C^{-1}B^\top)\,e_i &< \alpha_i, \quad \forall\,i = 1, \dots, n, \tag{2} \\ \sum_{i=1}^n \alpha_i &< K. \tag{3} \end{align} The inequality $(3)$ is already linear. Each inequality in $(2)$ can be formulated as a linear matrix inequality by using the Schur complement. Note that applying the Schur complement to the inequalities in $(2)$ requires the additional assumption that $C$ is positive definite.
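For concreteness, here is how $(2)$-$(3)$ might be set up in CVXPY. This is only a sketch: the sizes, the data $A$, $B$, $K$, and the choice of $C$ as the decision variable are all assumptions for illustration, and the strict inequalities are relaxed to non-strict ones, as is usual in numerical SDP:

```python
import cvxpy as cp
import numpy as np

n, m = 3, 2                       # placeholder sizes: A is n x n, B is n x m
rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 3.0])      # placeholder data
B = rng.standard_normal((n, m))
K = 50.0

C = cp.Variable((m, m), symmetric=True)   # assumed decision variable
alpha = cp.Variable(n)                    # the intermediate scalars alpha_i

constraints = [C >> 1e-6 * np.eye(m),     # C positive definite (numerically)
               cp.sum(alpha) <= K]        # inequality (3)
for i in range(n):
    # Schur complement: with b_i the i-th row of B, the block matrix
    # [[alpha_i - A_ii, b_i], [b_i^T, C]] >= 0 encodes inequality (2)
    constraints.append(cp.bmat([
        [cp.reshape(alpha[i] - A[i, i], (1, 1)), B[i:i + 1, :]],
        [B[i:i + 1, :].T, C],
    ]) >> 0)

prob = cp.Problem(cp.Minimize(cp.sum(alpha)), constraints)
prob.solve()                               # needs an SDP-capable solver, e.g. SCS
print(prob.status, prob.value)
```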
How can I solve this complex Log equation?
$|\log z| = \Re (\log z)$. Let $z = re^{it}$ with $r>0$ and $t\in(-\pi,\pi]$. Then $\log z =\log r+it$, so $|\log z| =\sqrt{\log^2 r+t^2}$ and $\Re (\log z) =\log r$. Since the left-hand side is a modulus, it is nonnegative, so the equation forces $\log r \ge 0$. Squaring both sides gives $\log^2 r + t^2 = \log^2 r$, hence $t = 0$. Therefore the solutions are exactly the real numbers $z = r$ with $r \ge 1$.
Logical proof using equivalencies
The first conjunct of the LHS is $P ∨ [(P ∧ ( P ∨ \sim Q)) ∧ \sim Q]$, i.e. $P ∨ [(P ∧ \sim Q) ∧ ( P ∨ \sim Q)]$. By distributivity: $$(P ∨ (P ∧ \sim Q)) ∧ (P ∨ ( P ∨ \sim Q)) \equiv (P \lor P) \land (P \lor \sim Q) \land (P \lor \sim Q) \equiv$$ $$P \land (P \lor \sim Q).$$ Now, bringing in the second conjunct of the LHS, which is $(\sim P ∨ \sim Q)$, we have: $$P \land (P \lor \sim Q) \land (\sim P ∨ \sim Q) \equiv P \land [(P \land \sim P) \lor \sim Q] \equiv$$ $$ (P \land \sim Q).$$
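A brute-force truth-table check of the chain above (a quick sketch):

```python
from itertools import product

def lhs(P, Q):
    return (P or ((P and (P or not Q)) and not Q)) and (not P or not Q)

def rhs(P, Q):
    return P and not Q

# the two formulas agree on all four truth assignments
print(all(lhs(P, Q) == rhs(P, Q)
          for P, Q in product((False, True), repeat=2)))  # True
```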
$R$ semisimple artinian ring $\Rightarrow\varphi(R)$ is such
If $I=\ker\varphi$, then $I$ is an ideal of $R=M_{n_1}(D_1)\times\dots\times M_{n_t}(D_t)$, so it is of the form $I=I_1\times\dots\times I_t$, where $I_j$ is an ideal of $M_{n_j}(D_j)$. Therefore $I_j=0$ or $I_j=M_{n_j}(D_j)$. Hence $\varphi(R)\cong R/I$ is a product of matrix rings over division rings, so it is semisimple artinian.
How to call a type that can be added / subtracted / divided / multiplied?
Are you familiar with the terms "group", "ring", and "field"? I think this is what you are looking for. There are also terms for related structures - e.g. semigroup, monoid, magma for one binary operation, and semiring, skew field for two binary operations. In general, you may be interested in the examples section of https://en.wikipedia.org/wiki/Algebraic_structure.
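On the programming side, a minimal sketch of such a type (a toy rational-number class in Python; the class and attribute names are made up for illustration):

```python
from __future__ import annotations
from math import gcd

class Rational:
    """A toy field element: closed under +, -, *, and / (by nonzero values)."""
    def __init__(self, num: int, den: int = 1):
        g = gcd(num, den) or 1
        self.num, self.den = num // g, den // g

    def __add__(self, other: Rational) -> Rational:
        return Rational(self.num * other.den + other.num * self.den,
                        self.den * other.den)

    def __sub__(self, other: Rational) -> Rational:
        return Rational(self.num * other.den - other.num * self.den,
                        self.den * other.den)

    def __mul__(self, other: Rational) -> Rational:
        return Rational(self.num * other.num, self.den * other.den)

    def __truediv__(self, other: Rational) -> Rational:
        return Rational(self.num * other.den, self.den * other.num)

    def __repr__(self) -> str:
        return f"{self.num}/{self.den}"

print(Rational(1, 2) + Rational(1, 3), Rational(1, 2) / Rational(3, 4))  # 5/6 2/3
```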
N is a normal subgroup of G if $aNa^{-1} \subset N $ for all $a ∈ G$. Prove that in that case, $aNa^{-1} = N $.
Indeed, since $N\lhd G$, for all $g\in G$ we have $N^g\subseteq N$. Now apply this to $g^{-1}\in G$: we see that $N^{g^{-1}}\subseteq N$ and therefore $$(N^{g^{-1}})^g\subseteq N^g$$ But what is $(N^{g^{-1}})^g$ then?
Satisfying a Differential Equation
What's next? Just some trig/algebra. Factor out $\cos(kt)$: $$\left(\cos(kt)\right)\left(k^2 - 4\right)=0$$ Now, the expression will be zero whenever either of the two factors is zero. I think you should be able to take it from here...
Continuous embedding of $W^{d,1}(\Omega)$ into $C(\overline{\Omega})$
As already mentioned above, the case for $\Omega \subset \mathbb{R}^{d}$ requires extension. However, the problem in full space is not difficult if you know approximation by convolution, though I imagine a good reference is probably hard to find. First assume $u \in C_{c}^{\infty}(\mathbb{R}^{d}).$ Then notice the equality (which one can easily prove by applying the fundamental theorem of calculus in each variable) $$ u (x_{1},\ldots, x_{d}) = \int_{-\infty}^{x_{1}}\int_{-\infty}^{x_{2}}\ldots \int_{-\infty}^{x_{d}} \frac{\partial^{d}u}{\partial x_{1}\partial x_{2}\ldots \partial x_{d}}(y_{1}, \ldots, y_{d}) \rm{d}y_{1}\rm{d}y_{2}\ldots \rm{d}y_{d},$$ which holds for arbitrary $(x_{1},\ldots, x_{d}) \in \mathbb{R}^{d}.$ Taking absolute values and then the supremum over $(x_{1},\ldots, x_{d}) \in \mathbb{R}^{d},$ one obtains $$ \left\lVert u \right\rVert_{L^{\infty}(\mathbb{R}^{d})} \leq \left\lVert \frac{\partial^{d}u}{\partial x_{1}\partial x_{2}\ldots \partial x_{d}} \right\rVert_{L^{1}(\mathbb{R}^{d})} \leq \left\lVert u \right\rVert_{W^{d,1}(\mathbb{R}^{d})}.$$ Now by density of $C_{c}^{\infty}(\mathbb{R}^{d})$ in $W^{d,1}(\mathbb{R}^{d})$ (which one can easily show by taking convolutions with smooth mollifiers), the inequality above holds for any $u \in W^{d,1}(\mathbb{R}^{d}).$ Notice that continuity will also follow from this. To see that, again use the density and choose a sequence of functions $ \left\lbrace v^{n} \right\rbrace_{n \geq 1} \subset C_{c}^{\infty}(\mathbb{R}^{d})$ such that $$ \left\lVert u - v^{n} \right\rVert_{W^{d,1}(\mathbb{R}^{d})} \rightarrow 0 \text{ as } n \rightarrow \infty.$$ By the inequality above, we have $$ \left\lVert u - v^{n}\right\rVert_{L^{\infty}(\mathbb{R}^{d})} \leq \left\lVert u - v^{n} \right\rVert_{W^{d,1}(\mathbb{R}^{d})} \rightarrow 0 \text{ as } n \rightarrow \infty.$$ Thus, $u$ is the uniform limit of a sequence of uniformly continuous functions $\left\lbrace v^{n} \right\rbrace_{n \geq 1}$ and hence must be continuous. Hope it helps.
Can't solve this exercise
A slightly more elaborate hint: we know $22 = P(ABMN) = AB + AM + BN + MN$, $AB = 10$, and $MN=AB/2, AM=AC/2, BN=BC/2$. Can you see how to get $AB+AC+BC$ from this?
Functional Equation $f(f(n))=3n$
Usually the way to start is to pick some good values for the variables. Assuming $0 \in \Bbb N$ (it doesn't really matter), we have $f(f(0))=0$, and since $f$ is strictly monotonic this gives $f(0)=0$. Then we know that $f(f(1))=3$, so $f(1)$ must be $2$ (if it were $1$ we would have $f(f(1))=1$, and $f(1)\ge 3$ would force $f(f(1))>3$ by strict monotonicity), and then $f(2)=f(f(1))=3$. Now we have $f(f(2))=f(3)=6$, $f(f(3))=f(6)=9$, and that says $f(4)=7, f(5)=8$. We continue this way, getting $$\begin {array} {r | r} n & f(n)\\ \hline 0&0\\1&2\\2&3\\3&6\\4&7\\5&8\\6&9\\7&12\\8&15\\9&18\\10&19\\11&20\\12&21\\13&22\\14&23\\15&24\\16&25\\17&26\\18&27\\19&30\end {array}$$ Our hypothesis is that $f(3^n)=2\cdot 3^n$; the values then increase by $1$ until $f(2\cdot 3^n)=3^{n+1}$, after which they increase by $3$ until $f(3^{n+1})=2\cdot 3^{n+1}$. You are right that this can be proven by induction, but I leave it to you. Then, since $2001=2\cdot3^6+543$, we get $f(2001)=3^7+3\cdot 543=3816$.
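The conjectured pattern is easy to check by machine; a short sketch implementing the closed form above and verifying $f(f(n))=3n$:

```python
def f(n: int) -> int:
    # closed form conjectured above: for 3^k <= n <= 2*3^k, f(n) = n + 3^k;
    # for 2*3^k <= n <= 3^(k+1), f(n) = 3*(n - 3^k)
    if n == 0:
        return 0
    p = 1
    while 3 * p <= n:        # largest power of 3 that is <= n
        p *= 3
    return n + p if n <= 2 * p else 3 * (n - p)

assert all(f(f(n)) == 3 * n for n in range(100_000))  # sanity check
print(f(2001))  # 3816
```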
Find the shortest distance between a point and a plane
The vector product $\mathbf{n}=(A-B)\times(A-C)$ gives you a normal to the plane. So now it's just $$\frac{|\mathbf{n}\cdot(D-A)|}{\|\mathbf{n}\|}$$
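In code this is a one-liner with numpy (the four points here are placeholder data):

```python
import numpy as np

# placeholder points: A, B, C span the plane x + y + z = 1, D is the query point
A, B, C = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])
D = np.array([2., 2., 2.])

n = np.cross(A - B, A - C)                  # normal to the plane
dist = abs(n @ (D - A)) / np.linalg.norm(n)
print(dist)                                 # distance from D to the plane
```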
Why does the linear approximation of $f \circ g$ near $a$ imply $f$ gets linearized by $g$ for a small enough neighborhood of $g$ near $a$?
Assuming $f$ and $g$ are differentiable in the region around $g(a)$ and $a$ respectively, we can expand each in a Taylor series. The result is just what you would expect from the chain rule. For $x$ small: $$g(a+x) \approx g(a)+xg'(a),$$ $$f(g(a+x)) \approx f(g(a))+x\left.\frac{d}{dx}f(g(a+x))\right|_{x=0} = f(g(a))+x\,f'(g(a))\,g'(a).$$
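A quick numerical check of this linearization, with arbitrary sample functions $f=\sin$, $g=\exp$ and $a=0$ (the error should shrink quadratically in $x$):

```python
import math

f, df = math.sin, math.cos   # f and its derivative
g, dg = math.exp, math.exp   # g and its derivative
a = 0.0

for x in (0.1, 0.01, 0.001):
    exact = f(g(a + x))
    linear = f(g(a)) + x * df(g(a)) * dg(a)
    print(x, exact - linear)  # error shrinks like x^2
```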