Simplifying $\cos(2\arcsin(x))$ using only pythagorean trigonometric identity
$\cos(2\arcsin x)\ge0$ if and only if $-\dfrac\pi2\le2\arcsin x\le\dfrac\pi2$, i.e. $-\dfrac1{\sqrt2}\le x\le\dfrac1{\sqrt2}$. In that case $$|2x^2-1|=-(2x^2-1).$$ Now handle the remaining case $x>\dfrac1{\sqrt2}$ or $x<-\dfrac1{\sqrt2}$, where the absolute value drops: $|2x^2-1|=2x^2-1$.
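For readers who want to double-check: since $\cos(2\arcsin x)=1-2x^2$, the simplification is easy to verify in SymPy (not part of the original answer, just a sanity check):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# cos(2*arcsin(x)) expands to 1 - 2*sin(arcsin(x))**2 = 1 - 2*x**2
expr = sp.expand_trig(sp.cos(2 * sp.asin(x)))
assert sp.simplify(expr - (1 - 2 * x**2)) == 0

# On |x| <= 1/sqrt(2) the expression is non-negative, so it equals |2x^2 - 1|
for val in [sp.Rational(-1, 2), 0, sp.Rational(1, 2)]:
    assert expr.subs(x, val) == abs(2 * val**2 - 1)
```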
Eigenvector and adjoint eigenvector not orthogonal
Let $A \in M_n(\mathbb{C})$ be a matrix such that $0$ is a simple eigenvalue of $A$ (that is, the algebraic multiplicity of $0$ is one). Let $q \neq 0$ such that $Aq = 0$ and $p \neq 0$ such that $A^{*}p = 0$. Assume that $p \perp q$. Then $$q \in \operatorname{span} \{ p \}^{\perp} = \ker(A^{*})^{\perp} = \operatorname{Im}(A) $$ so write $q = Av$ for some $v \neq 0$. Then $A^2v = Aq = 0$ and $q,v$ are linearly independent so $\dim \ker(A^2) \geq 2$ which implies that the algebraic multiplicity of $0$ is $\geq 2$, a contradiction. For your situation, apply the above to $A - \lambda I$ and conclude that $p,q$ can't be perpendicular.
Create a well ordering for functions "being $0$ near limits"
Here is an outline of how you can wellorder $P_0$. Prove that for $f \in P_0$, the support $\operatorname{supp}(f) := \{ x < \gamma \mid f(x) \neq 0 \}$ is finite. Let $[\mathrm{Ord}]^{< \omega}$ be the class of all finite sets of ordinals. For $a,b \in [\mathrm{Ord}]^{< \omega}$ let $$ a <^* b \iff \max (a \Delta b) \in b, $$ where $a \Delta b$ is the symmetric difference of $a,b$. Prove that $<^*$ is a strict wellorder of $[\mathrm{Ord}]^{< \omega}$. Recall that there is a definable bijection $$ \mathrm{Ord} \times \mathrm{Ord} \to \mathrm{Ord}, (\alpha, \beta) \to \langle \alpha, \beta \rangle, $$ (e.g. Gödel's pairing function). For $f,g \in P_0$ let $$ f^* := \{ \langle \alpha, \beta \rangle \mid f(\alpha) = \beta \wedge \alpha \in \operatorname{supp}(f) \} $$ and $$ f < g \iff f^* <^* g^*. $$ Prove that this is a wellorder of $P_0$.
How to determine $\varphi$ in spherical coordinates
The inequality $z \geq c^2(x^2+y^2)^{1/2}$ is equivalent to $\cot(\varphi) \geq 1$. Since $\cot$ is decreasing on $(0,\pi)$, this is equivalent to $\varphi \leq \operatorname{arccot}(1) = \pi/4$.
Proof verification for proving $\forall n \ge 2, 1 + \frac1{2^2} + \frac1{3^2} + \cdots + \frac1{n^2} < 2 − \frac1n$ by induction
OK. It appears you are not interested in looking at other questions that pose the same question; rather, you solely want to know whether or not your own solution is correct. To this, I would say no. If I were your teacher, then I would probably give you 6/10 on this problem. Why? Using the induction hypothesis, you should have $$ \color{blue}{1+\frac{1}{4}+\cdots+\frac{1}{k^2}}+\frac{1}{(k+1)^2}< \color{blue}{2-\frac{1}{k}}+\frac{1}{(k+1)^2}\tag{1} $$ and not $$ 1+\frac1{2^2}+\frac1{3^2}+\cdots+\frac1{(k+1)^2}<2-\frac1{k+1}+\frac1{(k+1)^2},\tag{2} $$ as it currently stands. The induction hypothesis is used on the part highlighted in blue in $(1)$, but you did not use it properly in $(2)$. Finally, by your own admission, your goal is to end up with $$ 1+\frac1{2^2}+\frac1{3^2}+\cdots+\frac1{(k+1)^2}<2-\frac1{k+1}, $$ but all you did was write $$ 1+\frac1{2^2}+\frac1{3^2}+\cdots+\frac1{(k+1)^2}=2-\frac1k+\frac1{k^2+2k+1}, $$ and then you claimed you proved the result (something you obviously did not do). In $(1)$, your task is to reduce the right-hand side down to $2-\frac{1}{k+1}$, thus proving $P(k+1)$. You can do this like so: \begin{align} 1+\frac{1}{4}+\cdots+\frac{1}{k^2}+\frac{1}{(k+1)^2} &<2-\frac{1}{k}+\frac{1}{(k+1)^2}\quad\text{(by $P(k)$, the ind. hyp.)}\\[1em] &= 2-\frac{1}{k+1}\left(\frac{k+1}{k}-\frac{1}{k+1}\right)\\[1em] &= 2-\frac{1}{k+1}\cdot\frac{k^2+k+1}{k(k+1)}\\[1em] &< 2-\frac{1}{k+1}.\qquad\text{(since $k^2+k+1>k(k+1)$, the second factor exceeds $1$)} \end{align} Why could you never reach this end form? It's because you did not apply the induction hypothesis properly in $(2)$.
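As an independent sanity check of the inequality itself (not of the induction argument), exact rational arithmetic confirms it for many $n$:

```python
from fractions import Fraction

# Exact check of 1 + 1/2^2 + ... + 1/n^2 < 2 - 1/n for n >= 2
for n in range(2, 200):
    s = sum(Fraction(1, i * i) for i in range(1, n + 1))
    assert s < 2 - Fraction(1, n)
```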
Classifying critical points (general)
For a linear system with two states, $x=\pmatrix{x_1&x_2}^T$, in general form: $$\dot{x}=Ax$$ the general form of the solution is: $$x(t)=\Phi(t)x(0)$$ where $\Phi(t)=e^{At}$ can be evaluated by converting $A$ to its Jordan canonical form. $$A=M^{-1}J M\implies e^{At}=M^{-1}e^{Jt}M$$ This means the general solution of the system is a linear combination of the elements of $e^{Jt}$. Now in your case where $\det A=0$, we can be sure that one of the eigenvalues is zero. Thus $$\begin{align} J&=\pmatrix{\lambda_1&0\\0&0}\text{ or }&\pmatrix{0&0\\0&0}&\text{ or }\pmatrix{0&1\\0&0}\\ e^{Jt}&=\pmatrix{e^{\lambda_1 t}&0\\0&1}\text{ or }&\pmatrix{1&0\\0&1}&\text{ or }\pmatrix{1&t\\0&1} \end{align}$$ If $e^{Jt}$ is not a function of $t$ (meaning $J=0$), then the system is static. If $\lambda_1\neq 0$ you can analyze the system's behavior based on the real part of $\lambda_1$, i.e. one of the system's modes is: unstable if $\Re(\lambda_1)>0$; stable and vanishing if $\Re(\lambda_1)<0$; oscillating if $\Re(\lambda_1)=0$.
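The three cases of $e^{Jt}$ can be verified without computing a matrix exponential, by checking that each candidate $\Phi(t)$ satisfies $\Phi'=J\Phi$ with $\Phi(0)=I$; a SymPy sketch (not part of the original answer):

```python
import sympy as sp

t, lam = sp.symbols('t lambda_1')

J_diag = sp.Matrix([[lam, 0], [0, 0]])
J_zero = sp.zeros(2, 2)
J_nilp = sp.Matrix([[0, 1], [0, 0]])

def check(J, Phi):
    # Phi(t) = e^{Jt} must satisfy Phi' = J*Phi with Phi(0) = I
    assert sp.simplify(Phi.diff(t) - J * Phi) == sp.zeros(2, 2)
    assert Phi.subs(t, 0) == sp.eye(2)

check(J_diag, sp.Matrix([[sp.exp(lam * t), 0], [0, 1]]))
check(J_zero, sp.eye(2))
check(J_nilp, sp.Matrix([[1, t], [0, 1]]))
```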
What is base in Linear Algebra (vector, matrix)?
Hint: Are both $\lbrace (-x,y),(x,-y) \rbrace$ and $\lbrace (1,2,3),(1,2,0),(-1,2,6) \rbrace$ linearly independent?
Prove $f(a + b) = f(a) + f(b)$ in an ordered field
Transpose your proof. Rather than fixing $a$ and iterating over $b$, you should instead iterate over $b$, and prove it for all $a$ at each step.
Proving this trigonometric identity
$$\begin{align}\sec(x)(\sin^3(x) + \sin(x)\cos^2(x))&=\sec(x)(\sin(x)(1-\cos^2(x)) + \sin(x)\cos^2(x))\\&=\sec(x)\sin(x)\\&=\frac{1}{\cos(x)}\sin(x)\\&=\tan(x)\end{align}$$
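A quick symbolic check of the identity (a sanity check, not part of the derivation):

```python
import sympy as sp

x = sp.symbols('x')
lhs = sp.sec(x) * (sp.sin(x)**3 + sp.sin(x) * sp.cos(x)**2)

# clearing sec(x): sin^3 + sin*cos^2 = sin*(sin^2 + cos^2) = sin
assert sp.simplify(sp.sin(x)**3 + sp.sin(x) * sp.cos(x)**2 - sp.sin(x)) == 0
# full identity
assert sp.simplify(lhs - sp.tan(x)) == 0
```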
Do all mathematical ideas eventually find their way into the real world?
I think the logical answer is no (if we consider that the world is going to end someday), because you could create something not useful just before the world ends! But this answer is not practical at all. By the way, I look at mathematics just as I look at painting, music, and the other arts. Are they useful? I do not think so. But they are beautiful, and that is an application in itself: their application is to bring you joy, and what application could be better than that? But I'm not suggesting you say this to get a scholarship, because it won't work. You can try to find some way to justify that your work will be applicable, or spend time without money but with the beauty of ART.
Determinant of an n x n matrix
Let $M_n$ be your matrix. Let $\eta_n$ be the $n\times n$ matrix with entry $1$ on the superdiagonal and $0$ elsewhere. Subtract row $k+1$ from row $k$ for $k = 1,2,\ldots,n-1$; this is equivalent to multiplying $M_n$ by $I_n - \eta_n$ from the left. Then subtract column $k-1$ from column $k$ for $k = n,n-1,\ldots,2$ (notice the order of $k$); this is equivalent to multiplying $(I_n-\eta_n)M_n$ by $I_n - \eta_n$ from the right. After you do this, your matrix simplifies to $$(I_n - \eta_n) M_n (I_n - \eta_n) = \begin{bmatrix} n-1&-n&0&\cdots&0&0&0\\ 0&n-1&-n&\cdots&0&0&0\\ 0&0&n-1&\ddots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&n-1&-n&0\\ 0&0&0&\cdots&0&n-1&-\lambda\\ 1&0&0&\cdots&0&0&\lambda-1 \end{bmatrix}$$ From this, you can deduce $$\det[M_n] = \det[(I_n - \eta_n)M_n(I_n - \eta_n)] = (n-1)^{n-1}(\lambda-1) + n^{n-2}\lambda$$
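The original $M_n$ isn't shown in this excerpt, but the determinant of the transformed matrix displayed above can be checked against the closed form; a SymPy sketch, building the matrix from the displayed pattern:

```python
import sympy as sp

lam = sp.symbols('lambda')

def transformed(n):
    # Pattern from the display: n-1 on the diagonal (lambda-1 in the
    # bottom-right corner), -n on the superdiagonal (-lambda in the last
    # slot), and a lone 1 in the bottom-left corner.
    B = sp.zeros(n, n)
    for i in range(n - 1):
        B[i, i] = n - 1
        B[i, i + 1] = -n
    B[n - 2, n - 1] = -lam
    B[n - 1, n - 1] = lam - 1
    B[n - 1, 0] = 1
    return B

for n in range(3, 7):
    det = sp.expand(transformed(n).det())
    assert det == sp.expand((n - 1)**(n - 1) * (lam - 1) + n**(n - 2) * lam)
```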
How many revolutions per minute does a wheel make if its angular velocity is 20π radians per second?
Hint: Recall that $\pi$ radians is the same angle as $180$ degrees. So one revolution is $2\pi$ radians. I expect no more help is needed. But you might want to leave your solution as a comment, so that I can tell you that you are right.
Let $\psi_A(B)=AB-BA$ for $A,B \in M_n(\mathbb{R})$, Show that $\psi^m_A(B)=\sum_{l=0}^m(-1)^l \binom{m}{l}A^{m-l}BA^l$
For $m=1$, it is trivial. Let $m\in\mathbb N$ and suppose that it holds for $m$. Then\begin{align*}{\psi_A}^{m+1}(B)&=A{\psi_A}^m(B)-{\psi_A}^m(B)A\\&=\left(\sum_{l=0}^m(-1)^l\binom mlA^{m+1-l}BA^l\right)+\left(\sum_{l=0}^m(-1)^{l+1}\binom mlA^{m-l}BA^{l+1}\right)\\&=A^{m+1}B+\left(\sum_{l=1}^{m}(-1)^l\left(\binom ml+\binom m{l-1}\right)A^{m+1-l}BA^l\right)+(-1)^{m+1}BA^{m+1}\\&=\sum_{l=0}^{m+1}(-1)^l\binom{m+1}lA^{m+1-l}BA^l.\end{align*}
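The formula is easy to spot-check with a concrete pair of matrices (values chosen arbitrarily for illustration):

```python
import sympy as sp

# psi_A(B) = AB - BA; check psi_A^m(B) = sum_{l=0}^m (-1)^l C(m,l) A^{m-l} B A^l
A = sp.Matrix([[1, 2], [3, 4]])
B = sp.Matrix([[0, 1], [5, -2]])

def psi(X):
    return A * X - X * A

for m in range(1, 6):
    iterated = B
    for _ in range(m):
        iterated = psi(iterated)
    formula = sum(((-1)**l * sp.binomial(m, l) * A**(m - l) * B * A**l
                   for l in range(m + 1)), sp.zeros(2, 2))
    assert iterated == formula
```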
reference for finite sum of cotangents
Here is the reference that you may find useful: Cvijović, Djurdje. "Summation formulae for finite cotangent sums." Applied Mathematics and Computation 215.3 (2009): 1135-1140.
Converse of Fubini's theorem
It is false that $f$ is necessarily integrable. Consider that if $f$ is non-negative, then Tonelli's Theorem applies, which yields the result of Fubini's Theorem for $f$ (see this Wikipedia page). But $f$ can fail to be integrable. EDIT (to give counterexample): Write down $$ f(x,y)=\frac1{x^2+y^2}. $$ By Tonelli's Theorem, the result of Fubini's applies. But a calculation gives $$ \int\limits_0^1\int\limits_0^1|f(x,y)|\,dy\,dx\geq c\int\limits_0^{1/2}\frac1{r^2}r\,dr=c\ln r\Big|_0^{1/2}=+\infty, $$ where the first inequality holds since the integral of $f$ over the square $[0,1]^2$ is larger than or equal to the integral of $f$ over the quarter disc of radius $1/2$ (in polar coordinates $|f|=1/r^2$ and the area element is $r\,dr\,d\theta$). So $f$ is not integrable.
probability, dependence => uncorrelated?
Yes, uncorrelated means that their covariance is zero. Furthermore, if two random variables are independent, then they are indeed uncorrelated. However, if they are uncorrelated, that does not mean that they are independent. I would advise reading http://en.wikipedia.org/wiki/Uncorrelated
For a Bi-level Mixed Integer Linear Program with integer variables in the lower, can I use KKT conditions to reduce the problem to a single level?
No, it is a much harder problem. There are some good presentations here https://coral.ie.lehigh.edu/~ted/research/presentations/
Why is the antiderivative of $\frac{1}{1+x^2}=\tan^{-1}(x)$?
$$ \begin{align} \arctan(x) &= y\\ x &= \tan(y)\\ \frac{\mathrm d}{\mathrm dx} x &= \frac{\mathrm d}{\mathrm dx} \tan(y)\\ 1 &= y' \sec^2(y)\\ y'&=\dfrac{1}{\sec^2(y)}\\ y'&=\dfrac{1}{\tan^2(y)+1}\\ y'&=\dfrac{1}{x^2 + 1} \end{align} $$ Because $x = \tan(y)$
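The end result of the derivation can be confirmed in one line of SymPy (a sanity check, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
# Differentiating arctan(x) recovers the integrand 1/(1 + x^2)
assert sp.simplify(sp.diff(sp.atan(x), x) - 1 / (1 + x**2)) == 0
```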
Number Of Solutions $X^{2}=X$
Infinitely many solutions. And a lot of structure. Since you said "symmetric", I'll detail the real case. But note that the argument of the first paragraph below shows that the solutions in $M_n(K)$, for any field $K$, split by diagonalization into $n+1$ similarity orbits under $GL(n,K)$ of the obvious diagonal solutions. 1) Elements $p=p^2$ are called idempotents. Equivalently, these are the diagonalizable (just think about the minimal polynomial) matrices with spectrum in $\{0,1\}$. An idempotent is characterized by the decomposition of the vector space into the direct sum of its range and its nullspace. In $M_n$ ($\mathbb{R}$ or $\mathbb{C}$) in general, they split into $n+1$ connected components according to their rank, which is also equal to their trace. Each component corresponds to a similarity orbit. The natural representatives are the diagonal idempotents $0_n$ and $I_n$ (which are alone in their orbits), and $(1,\ldots,1,0,\ldots,0)$ with $k$ $1$'s, $1\leq k\leq n-1$ (whose orbit is a manifold of dimension $2k(n-k)$). In $M_2(\mathbb{R})$, there are therefore three components: $\{0_2\}$, $\{I_2\}$, and the rank one idempotents. That is, the $2\times 2$ matrices whose characteristic polynomial is $X^2-X$: $$ \pmatrix{a&b\\c&d}\qquad a+d=1\qquad ad-bc=0 $$ I let you work on these two equations to realize that this manifold is in affine bijection with the one-sheet hyperboloid. If you want a parametrization, here is a rational one for all but a subset of them of topological dimension one: $$ \pmatrix{\frac{1}{1+st} &\frac{s}{1+st}\\\frac{t}{1+st}&\frac{st}{1+st}}\qquad (s,t)\in\mathbb{R}^2\setminus\{1+st=0\}. $$ 2) Elements $p=p^*=p^2$ are called projections (=self-adjoint idempotents) in operator algebras. They are characterized by their range solely, as their nullspace is the orthogonal complement of their range. Again, they split into $n+1$ components according to their rank.
The rank $k$ component is called the Grassmannian $G(k,n)$ and has dimension $k(n-k)$, half (we dropped the nullspace) of the dimension of the corresponding idempotent component in which it lies as a submanifold. In $M_2(\mathbb{R})$, we still have three components. I let you check that the nontrivial one, the rank one projections, can be parametrized by $$ \pmatrix{\cos^2\theta&\cos\theta\sin\theta\\ \cos\theta\sin\theta&\sin^2\theta}\qquad \theta\in [0,\pi]. $$ It should not surprise you that we recover the unit circle. These are the one-dimensional subspaces of $\mathbb{R}^2$, that is, the projective line. Note that unlike the rank one idempotents, it is now compact.
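Both parametrizations above are easy to verify symbolically; a SymPy sketch (a consistency check, not part of the original answer):

```python
import sympy as sp

s, t, theta = sp.symbols('s t theta', real=True)

# Rank-one idempotent parametrization: P^2 = P (trace 1, det 0)
P = sp.Matrix([[1, s], [t, s * t]]) / (1 + s * t)
assert sp.simplify(P * P - P) == sp.zeros(2, 2)

# Rank-one orthogonal projection: symmetric and idempotent
Q = sp.Matrix([[sp.cos(theta)**2, sp.cos(theta) * sp.sin(theta)],
               [sp.cos(theta) * sp.sin(theta), sp.sin(theta)**2]])
assert Q.T == Q
assert sp.simplify(Q * Q - Q) == sp.zeros(2, 2)
```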
What is $\,a\bmod 63\,$ if $\,a\,$ is both a square and a cube?
You must show $x^6\equiv 0,1,28,36\pmod{63}$ for $0 \leq x \leq 62$. By Fermat's little theorem we conclude $x^6 \equiv 1 \pmod{7}$ if $x\neq 7k$, and it is easy to check that $x^6 \equiv 1 \pmod{9}$ if $x\neq 3k$. Now use the theorem: if $a\equiv b \pmod {m_1}$ and $a\equiv b \pmod {m_2}$, then $a\equiv b \pmod {\operatorname{lcm}(m_1,m_2)}$. It remains to handle multiples of $3$ and $7$: $3^6 \equiv 36\pmod{63}$ and $7^6 \equiv 28\pmod{63}$. For example, writing $12^6 \equiv 9k \pmod{63}$ we have $4^6 \cdot 3^4 \equiv k \pmod{7}$ (because of the theorem: if $ak\equiv bk \pmod m$ then $a\equiv b \pmod {\frac{m}{\gcd(k,m)}}$), and because $4^6 \equiv 1 \pmod 7$ and $81 \equiv 4 \pmod 7$, we get $k=4$ and $12^6 \equiv 3^6 \equiv 36 \pmod {63}$.
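The claimed residue set can be confirmed exhaustively in a few lines of Python:

```python
# Sixth powers mod 63: the only residues are 0, 1, 28 and 36
residues = {pow(x, 6, 63) for x in range(63)}
assert residues == {0, 1, 28, 36}

# The special residues worked out in the answer
assert pow(3, 6, 63) == 36
assert pow(7, 6, 63) == 28
assert pow(12, 6, 63) == 36
```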
Dividing balls into cells probability question
The probability that exactly two urns are empty: Pick which two urns are the empty urns in $\binom{7}{2}$ ways. For the remaining five urns, since none are empty we will have either four with one ball each and one with three, or three with one ball each and two with two balls each. Break into cases: In the first case, pick which urn received three balls in $5$ ways. Finally, pick how to distribute the balls into the urns so that each gets its respective total, for example by picking first which three balls go into the urn needing three balls and then picking which ball goes into the left-most urn that should get one ball, etc. This can be done in $\binom{7}{3}4!$ ways. (As an aside, we could have stopped here for the originally asked problem, as we would have a grand total of $\binom{7}{2}\cdot 5\cdot \binom{7}{3}\cdot 4!$ ways to have exactly two empty bins and one bin with three balls. Dividing by $7^7$ gives the answer given by the book after noting that $\binom{7}{2}5=\frac{7!}{2!1!4!}$.) In the second case, pick which two urns receive two balls each in $\binom{5}{2}$ ways. Then pick which balls go into which urns, starting for example with the left-most urn that receives two balls, for a total of $\binom{7}{2}\binom{5}{2}3!$ ways. Combining this information, we get that $$Pr(A)=\frac{\binom{7}{2}\left(5\binom{7}{3}4!+\binom{5}{2}\binom{7}{2}\binom{5}{2}3!\right)}{7^7}$$ Next, we can attempt to calculate $Pr(B\mid A)$. Unfortunately, I do not see a convenient approach to calculating this without first calculating $Pr(A\cap B)$ directly. In your attempt at calculations, you have a denominator of $5^7$, which would not make sense because within those $5^7$ outcomes some have additional empty urns, which should have been ignored as they do not satisfy the hypothesis that there are exactly two empty urns.
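Since $7^7$ outcomes are few enough to enumerate, the counting argument for $Pr(A)$ can be brute-force verified (not part of the original answer):

```python
from itertools import product
from math import comb, factorial

# Brute-force count: 7 labelled balls into 7 labelled urns,
# exactly two urns empty, compared against the counting argument above.
count = 0
for outcome in product(range(7), repeat=7):
    if 7 - len(set(outcome)) == 2:  # exactly two urns receive no ball
        count += 1

by_formula = comb(7, 2) * (5 * comb(7, 3) * factorial(4)
                           + comb(5, 2) * comb(7, 2) * comb(5, 2) * factorial(3))
assert count == by_formula == 352800
```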
Is it possible to evaluate $\lim_{x \to 0} x\sin(1/x)$ via L'Hospital?
I wouldn't use L'Hopital's rule for that. I would squeeze, thus: $$ -1 \le \sin(\text{anything at all}) \le 1. $$ Therefore $\Big(x\cdot\sin(\text{something})\Big)$ is between $\pm x$. Since $+x$ and $-x$ both approach $0$ as $x\to 0$, so does $\Big(x\cdot\sin(\text{something})\Big)$. To use L'Hopital's rule you need a fraction in which the numerator and denominator either both approach $0$ or both approach $\infty$. You have $x$ approaching $0$, but $\sin(1/x)$ does not approach any limit and neither does $1/\sin(1/x)$.
Condition for a point to be a limit point of some set in general topology
It's true indeed, and here is a simple argument (an example does not prove a general observation!) why: In a finite topological space $(X, \mathcal T)$ every point $x$ has a minimal open neighbourhood $M_x = \bigcap\{O \in \mathcal T\mid x \in O\}$ (open as a finite intersection of open sets). If $\{x\}$ is not open this means that there is some $y \in M_x$ such that $y \neq x$. Then by definition $x$ is a limit point of any $A$ with $y \in A$ (because if $O$ is an open neighbourhood of $x$, $y \in M_x \subseteq O$ is a witness). In particular $x \in \{y\}'$. In your example $M_3 = \{1,3\}$, so any set containing $1$ will do. Minimal neighbourhoods are a nice concept in the study of finite topological spaces; they form a natural base and elucidate the limit point relations between sets very nicely.
Solve a Diophantine equation with three variables
This is not exhaustive. It is only a way, given one solution, to generate a family of solutions. Let $\Lambda(x,y,z)$ be the $2\times 2$ symmetric matrix $\begin{bmatrix}4x - 1 & 2z \\ 2z & 4y - 1\end{bmatrix}$. The equation at hand can be rewritten as $$\det \Lambda(x,y,z) = (4x-1)(4y-1)-(2z)^2 = -79\tag{*1}$$ For any $2\times 2$ matrix $P$ with integer coefficients and $\det P = \pm 1$, $P^T \Lambda(x,y,z) P$ will be a $2 \times 2$ symmetric matrix with integer coefficients. It has the same determinant as $\Lambda(x,y,z)$. If $(x,y,z)$ is a solution of $(*1)$ and we can find integers $x',y',z'$ such that $P^T \Lambda(x,y,z) P = \Lambda(x',y',z')$, then $(x',y',z')$ will be another solution of $(*1)$. This prompts us to look for suitable $P$ from $SL(2,\mathbb{Z})$. $SL(2,\mathbb{Z})$ is generated by the following matrices: $$ L = \begin{bmatrix}1 & 0 \\ 1 & 1\end{bmatrix},\quad U = L^T = \begin{bmatrix}1 & 1 \\ 0 & 1\end{bmatrix} \quad\text{ and }\quad J = \begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}$$ It is easy to check that $J^T \Lambda(x,y,z) J = \Lambda( y, x, -z )$, so $J$ doesn't give us anything interesting. However, $L$ and $U$ don't disappoint us. We find $$\begin{align} U^{2k} \Lambda(x,y,z) L^{2k} &= \Lambda(x + 2kz + k^2(4y-1), y, z + k(4y-1))\\ L^{2k} \Lambda(x,y,z) U^{2k} &= \Lambda(x, y+2kz + k^2(4x-1), z + k(4x-1)) \end{align} $$ This means that if $(x,y,z)$ is an integral solution of $(*1)$, then for any integer $k$, both $$\begin{align} & (x + 2kz + k^2(4y-1), y, z + k(4y-1))\\ \text{ and }\quad&(x, y+2kz + k^2(4x-1), z + k(4x-1)) \end{align} $$ are solutions of $(*1)$. For example, if one starts from the solution $(-1,-3,6)$, we immediately obtain the following two parametric families of solutions: $$(-1 + 12k -13k^2, -3, 6-13k)\quad\text{ and }\quad (-1, -3+12k-5k^2, 6-5k)$$ More solutions can be constructed in this manner. However, I doubt this type of construction covers all possible solutions.
I hope this is at least a start.
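The base solution and both parametric families are easy to sanity-check in Python (not part of the original answer):

```python
# Check solutions of (4x-1)(4y-1) - (2z)^2 = -79
def ok(x, y, z):
    return (4 * x - 1) * (4 * y - 1) - (2 * z)**2 == -79

assert ok(-1, -3, 6)
for k in range(-20, 21):
    assert ok(-1 + 12 * k - 13 * k**2, -3, 6 - 13 * k)
    assert ok(-1, -3 + 12 * k - 5 * k**2, 6 - 5 * k)
```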
Elementary solution to $ \int \frac{1}{x^5+1} \, dx $
With $\phi_{\pm} = \frac{1\pm\sqrt5}{4}$ $$x^5+1= (1+x)(x^2-2\phi_+x+1)(x^2-2\phi_-x+1)$$ and $$\frac{5}{1+x^5}=\frac1{x+1}- \frac{2\phi_+x-2}{x^2-2\phi_+x+1} - \frac{2\phi_-x-2}{x^2-2\phi_-x+1}$$ The integral for the first term is just $\ln(x+1)$, and for the second and third terms \begin{align} I(x,\phi) &= \int \frac{2\phi x-2}{x^2-2\phi x+1}dx =\int \frac{\phi d[(x-\phi)^2] -2(1-\phi^2)dx}{(x-\phi)^2 +(1-\phi^2)} \\ &=\phi\ln\left(x^2-2\phi x+1\right) -2\sqrt{1-\phi^2} \tan^{-1}\frac{x-\phi}{\sqrt{1-\phi^2}} \end{align} Thus $$\int \frac{1}{1+x^5}dx=\frac15\left[\ln(x+1)-I(x,\phi_+)-I(x,\phi_-)\right] + C$$
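The antiderivative can be spot-checked by differentiating it and comparing with the integrand at a few points in $x>-1$ (a numerical sanity check, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
phip = (1 + sp.sqrt(5)) / 4
phim = (1 - sp.sqrt(5)) / 4

def antider(phi):
    # I(x, phi) from the answer
    return (phi * sp.log(x**2 - 2 * phi * x + 1)
            - 2 * sp.sqrt(1 - phi**2) * sp.atan((x - phi) / sp.sqrt(1 - phi**2)))

F = (sp.log(x + 1) - antider(phip) - antider(phim)) / 5
err = sp.diff(F, x) - 1 / (1 + x**5)

# F'(x) - 1/(1+x^5) should vanish; spot-check numerically
for val in [sp.Rational(1, 3), 1, sp.Rational(7, 2)]:
    assert abs(err.subs(x, val).evalf(30)) < 1e-20
```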
Calculus III (or IV): Sketching Solids
When dealing with functions of one variable, for example f(x), you have f(x)=stuff (stuff could be a polynomial in x, a trig function in x, etc.). Often you are told that y=stuff instead of f(x)=stuff, but really y=f(x). In functions of two variables, f(x, y), you can let f(x, y)=stuff, but this stuff is different from the others since stuff can be, for example, 4-x-2y. It's easy for some to let f(x, y) just equal another letter, z for example. Except now you have three orthogonal axes (the x axis, y axis, and f(x, y) axis). To sketch a solid, what I was taught was to let z=0, or simply 0=4-x-2y, and sketch that in the xy-plane, then let x=0 and sketch z=4-2y in the zy-plane, and then finally let y=0 and sketch z=4-x in the xz-plane. Each individual curve you draw is called a trace, and that's how you sketch a solid in Euclidean 3-space. Your drawing should look like a triangle since it is a flat surface.
Finding the number of solutions to $x+2y+4z=400$
My question is how to find the easiest way to find the number of non-negative integer solutions to $$x+2y+4z=400$$ I think the following way is easy (I'm not sure if it's the easiest, though). Since $x+2y+4z=400$, $x$ has to be even. So, setting $x=2m$ gives you $$2m+2y+4z=400\Rightarrow m+y+2z=200.$$ Since $m+y$ has to be even, setting $m+y=2k$ gives you $$2k+2z=200\Rightarrow k+z=100.$$ There are $101$ pairs for $(k,z)$ such that $k+z=100$. For each $k$ such that $m+y=2k$, there are $2k+1$ pairs for $(m,y)$. Hence, the answer is $$\sum_{k=0}^{100}(2k+1)=1+\sum_{k=1}^{100}(2k+1)=1+2\cdot \frac{100\cdot 101}{2}+100=10201.$$
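The final count is small enough to confirm by brute force (not part of the original answer):

```python
# Brute-force count of non-negative integer solutions of x + 2y + 4z = 400
count = 0
for z in range(101):
    for y in range((400 - 4 * z) // 2 + 1):
        # x = 400 - 2y - 4z is then a determined non-negative integer
        count += 1
assert count == 10201
```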
Problem on conditional probability.
You are asked to compute $$ \Pr\left(\text{rejected} \mid \left(A \cup B \right) \right) = \frac{ \Pr\left(\text{rejected} \cap \left(A \cup B \right) \right)}{\Pr\left( A \cup B \right)} = \frac{ \Pr\left( \left(\text{rejected} \cap A \right) \cup \left(\text{rejected} \cap B \right) \right)}{\Pr\left( A \cup B \right)} $$ Since a bottle cannot be bottled by both machines, $A \cap B = \emptyset$, hence $\Pr\left(A \cup B \right) = \Pr(A) + \Pr(B)$, and likewise $$\Pr\left( \left(\text{rejected} \cap A \right) \cup \left(\text{rejected} \cap B \right) \right) = \Pr\left( \text{rejected} \cap A \right) + \Pr\left( \text{rejected} \cap B \right)$$ Hence $$ \Pr\left(\text{rejected} \mid \left(A \cup B \right) \right) = \frac{\frac{2}{5} \cdot \frac{1}{20} + \frac{3}{10} \cdot \frac{1}{25}}{\frac{2}{5} + \frac{3}{10}} = \frac{8}{175} $$
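Exact rational arithmetic confirms the final value (a sanity check, not part of the original answer):

```python
from fractions import Fraction as F

# P(rejected | A or B) with P(A)=2/5, P(B)=3/10,
# P(rejected|A)=1/20, P(rejected|B)=1/25 and A, B disjoint
p = (F(2, 5) * F(1, 20) + F(3, 10) * F(1, 25)) / (F(2, 5) + F(3, 10))
assert p == F(8, 175)
```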
What, intuitively, are supernatural numbers?
The set of supernatural numbers is an extension of the set of natural numbers: the factorization of a natural number must contain a finite number of primes, while the factorization of a supernatural number may contain an infinite number of primes.
$∃x¬(\varphi ∨ \psi) → ∃x(¬\varphi ∨ ¬\psi)$ and $∃y(\varphi ∧ \psi) → (∀x$ $\varphi ∧ ∀y$ $\psi)$
Hint 1st) Consider that $\lnot (\varphi \lor \psi)$ is equivalent to $\lnot \varphi \land \lnot \psi$. 2nd) Consider : "there exists a number that is $=0$ and $\ge 0$".
Find the Jordan canonical form of this matrix
I will just indicate a possible approach: Take a look first at the case $n=2$. Then you have the matrix $\begin{pmatrix} 0 & a_1\\ a_2 & 0 \end{pmatrix}$; you can find the JCF of this matrix by simply dividing the second column by $a_1$ (and correspondingly scaling the second row). Namely you have $P=\begin{pmatrix}1 & 0\\ 0 & a_1\end{pmatrix}$, and $\begin{pmatrix}1 & 0\\ 0 & a_1\end{pmatrix}\cdot \begin{pmatrix} 0 & a_1\\ a_2 & 0 \end{pmatrix}\cdot \begin{pmatrix}1 & 0\\ 0 & a_1\end{pmatrix}^{-1}=\begin{pmatrix}0 & 1\\ a_1a_2 & 0\end{pmatrix}$. Now you can look at the case $n=3$ and use your knowledge of the case $n=2$ to solve this. The matrix is $\begin{pmatrix} 0 & 0 & a_1\\ 0 & a_2 & 0\\ a_3& 0 & 0 \end{pmatrix}. $ Its trace is $a_2$, so you know you want to move the central element to a diagonal position, and then we will have a $2\times 2$ Jordan block with $0$ diagonal. Necessarily it will involve $a_1$ and $a_3$, and you can show that the JCF of this matrix will be $ \begin{pmatrix} a_2 & 0 & 0\\ 0 & 0 & 1\\ 0& a_1a_3 & 0 \end{pmatrix}. $ With some work you can show that the Jordan blocks will be $\begin{pmatrix}0 & 1\\ a_ia_{n-i+1} & 0\end{pmatrix}$ for $i=1,\ldots,\lfloor n/2\rfloor$, together with the diagonal entry $a_{\frac{n+1}{2}}$ when $n$ is odd. EDIT: If you're working over an algebraically closed field like $\mathbb{C}$ you can further simplify your Jordan blocks. Can you see what the answer should be then?
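A SymPy check of the $n=2$ conjugation, plus a consistency check for the claimed $n=3$ form (equal characteristic polynomials; this is only a necessary condition for similarity, so take it as a sanity check):

```python
import sympy as sp

a1, a2, a3, lam = sp.symbols('a_1 a_2 a_3 lam', nonzero=True)

# n = 2: conjugating the antidiagonal matrix by P = diag(1, a1)
A2 = sp.Matrix([[0, a1], [a2, 0]])
P = sp.diag(1, a1)
assert sp.simplify(P * A2 * P.inv()) == sp.Matrix([[0, 1], [a1 * a2, 0]])

# n = 3: the claimed form has the same characteristic polynomial
A3 = sp.Matrix([[0, 0, a1], [0, a2, 0], [a3, 0, 0]])
B3 = sp.Matrix([[a2, 0, 0], [0, 0, 1], [0, a1 * a3, 0]])
assert sp.expand(A3.charpoly(lam).as_expr() - B3.charpoly(lam).as_expr()) == 0
```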
solving $a_{n+2}-3a_{n+1}+2a_n=2n$
Do you know "telescoping"? Here is the deal: put $b_n = a_{n+1} - 2a_n \implies b_{n+1} - b_n = a_{n+2}-3a_{n+1}+2a_n = 2n \implies b_n = b_0 + (b_1 - b_0) + (b_2 - b_1) + \cdots + (b_n - b_{n-1}) = 1 + 2\cdot 0 + 2\cdot 1 + 2\cdot 2 +\cdots + 2(n-1) = 1 + 2(1+2+\cdots + (n-1)) = 1+ (n-1)n$ (using $b_0=1$ from the initial conditions). Repeat this trick again for $a_n$ to get the answer.
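The initial values aren't given in this excerpt; as an illustration, assuming hypothetical values $a_0=0$, $a_1=1$ (so that $b_0 = a_1 - 2a_0 = 1$ as used above), the telescoped formula can be checked against the recurrence:

```python
# Hypothetical initial values chosen so that b_0 = a_1 - 2*a_0 = 1
a = [0, 1]
for n in range(100):
    # recurrence: a_{n+2} = 3*a_{n+1} - 2*a_n + 2*n
    a.append(3 * a[n + 1] - 2 * a[n] + 2 * n)

# telescoped formula: b_n = a_{n+1} - 2*a_n = 1 + n*(n-1)
for n in range(100):
    assert a[n + 1] - 2 * a[n] == 1 + n * (n - 1)
```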
Is $U_{x} = - U_{yy}$ solvable?
Since we are dealing with an infinite domain, I hope you are familiar with the Fourier transform. Let $$\hat{u}(k,x) = \mathcal{F}\{u\}=\int_{-\infty}^\infty u(x,y) e^{-iky} dy.$$ Using the properties of the Fourier transform we arrive at $$ \frac{\partial}{\partial x} \hat{u} = k^2 \hat{u}, $$ whose solution is $$ \hat{u} = C(k) e^{k^2x}. $$ From the initial condition, we have $\hat{u} = \hat{f}(k) e^{k^2x}$, and the solution for $u$ is $$ u(x,y) = -\frac{i}{\sqrt{4\pi x}} \int_{-\infty}^\infty f(y') e^{\frac{(y-y')^2}{4x}} dy'. $$ For the particular case in which the initial condition is the delta function $u(0,y) = \delta(y)$, the solution is $$ u(x,y) = -i\frac{e^{\frac{y^2}{4x}}}{\sqrt{4\pi x}}. $$ What's the meaning of this? The physical interpretation of the problem is precisely heat diffusion, but with time going backward. Let $-x=t$, and the equation is $$ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial y^2}, $$ in the domain $t<0$. Therefore, the solution 'explodes' because it is the opposite of the phenomenon of diffusion; that is, the heat will 'concentrate' in a region instead of dissipating to the entire domain. Furthermore, the solution is imaginary because there is no real solution for what could have come before the Dirac delta. In a regular heat diffusion problem, the heat would be totally concentrated at the origin and then dissipate to the rest of the domain. If the opposite happens, it will concentrate from the entire domain to a single point, but it can't go further than this, and this fact is reflected in the nonexistence of a real solution. Instead of a non-smooth function, let's assume that the initial condition is a Gaussian $u(0,y)=\exp(-y^2)$. In that case, $$ \int_{-\infty}^\infty \exp(-y'^2) e^{\frac{(y-y')^2}{4x}} dy' = \sqrt{\frac{\pi x}{4x-1}} \exp \frac{y^2}{4x-1} \mathrm{erf} \left(\frac{(4x-1)y'+y}{2 \sqrt{x(4x-1)}} \right)_{-\infty}^\infty= $$ $$ -2i\sqrt{\frac{\pi x}{1-4x}} \exp \frac{y^2}{4x-1}, $$ and our solution is $$ u(x,y) = \frac{e^{\frac{y^2}{4x-1}}}{\sqrt{1-4x}}. $$ Note that this is a real solution, which can be explained by the fact that the initial condition can 'un-diffuse' only until it reaches a singular distribution, which occurs at $x=1/4$. Apparently, if the initial condition is smooth enough (perhaps a mathematician could say exactly how smooth) you will have a valid solution for $0<x<x_s$ (in which $x_s$ is the $x$ at which the singularity occurs), for this case in which we assumed (implicitly with the Fourier transform) that $u=0$ as $|y| \to \infty$. As Robert Israel pointed out, solutions like $u(x,y)=\exp(x)\cos(y)$ or $u=\exp(y-x)$ also satisfy the equation and do not have a singularity.
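One can at least verify symbolically that the Gaussian-data solution above satisfies $u_x = -u_{yy}$ with $u(0,y)=e^{-y^2}$; a SymPy sketch (a check of the displayed closed form, not part of the original answer):

```python
import sympy as sp

x, y = sp.symbols('x y')

u = sp.exp(y**2 / (4 * x - 1)) / sp.sqrt(1 - 4 * x)

# The PDE is u_x = -u_yy, so u_x + u_yy should vanish identically
assert sp.simplify(sp.diff(u, x) + sp.diff(u, y, 2)) == 0
# Initial condition at x = 0
assert u.subs(x, 0) == sp.exp(-y**2)
```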
If $TM$ is trivial, then $\Lambda^n(M)$ is also trivial and $M$ is orientable
If $M$ is $n$-dimensional, saying that $TM$ is trivial is equivalent to saying that there exist $n$ vector fields $X_1,\ldots,X_n$ which are linearly independent at every point $x\in M$ (in particular $X_i(x)\neq 0$). Take a smooth Riemannian metric on $M$ and define the $1$-forms $f_i(x)(u)=\langle X_i(x),u\rangle$ where $u\in T_xM$. Then $\Lambda^n(M)_x$ is generated by $f_1(x)\wedge\cdots\wedge f_n(x)$, which is also a volume form on $M$. This implies that $M$ is orientable.
A path-connected graph is connected as a graph
Let $G=(V,E)$ be a graph with vertices $V$ and edges $E$. Let $T(G)$ be its topological realization. I will use the following fact. Lemma. If $G$ is a disjoint union of two subgraphs then $T(G)$ is a (topological) disjoint union of two spaces, each one being a topological realization of the appropriate subgraph of $G$. Furthermore if $v,w\in V$ lie in different (graph) components then $v,w$ lie in different connected components of $T(G)$. Proof. Assume that $G=G_1\sqcup G_2$ as graphs, i.e. $G=G_1\cup G_2$, the two subgraphs share no vertex, and there is no edge between $G_1$ and $G_2$. Consider $T(G_1)$ and $T(G_2)$, which are subsets of $T(G)$. It can be easily seen that $T(G_1)\cap T(G_2)=\emptyset$; otherwise there would be either an edge between $G_1$ and $G_2$ or they would share a vertex. Since both $T(G_1)$ and $T(G_2)$ are closed (as realizations of full subgraphs always are) then $T(G)=T(G_1)\sqcup T(G_2)$. $\Box$ As you've noted, it is trivial to see that every graph path induces a topological path, because edges correspond to subspaces of $T(G)$ homeomorphic to $[0,1]$. Now assume that $\lambda:[0,1]\to T(G)$ is a path between two vertices $v,w$. Assume that there is no graph path from $v$ to $w$. It follows that $G$ is a disjoint union of two subgraphs: one containing $v$ (the graph path component of $v$) and the other containing $w$ (the complement). By the lemma this implies that $T(G)$ is a disjoint union of the appropriate topological realizations. In particular $v,w$ lie in different connected components and thus cannot be connected by a path. Contradiction.
Work Done by the Combined Vectors
$3F_3$ means $3\langle 1,-2,2\rangle = \langle 3,-6,6\rangle$. But they want you to figure out the work done by the force $$F_1 + F_2 + 3F_3 = \langle 2,-4,4\rangle$$ If you're not taking a calculus-based physics course then they must define work as $W=F\cdot d$ where $d$ is the displacement vector. If you are taking a calculus-based physics course then you'll want to know the actual definition (which does involve an integral) even though in this particular problem it'll still reduce to $F\cdot d$.
determine Maclaurin series for function $(1+z)e^{-z} $
$$ \begin{aligned} &\quad \sum_{n=0}^{\infty} \frac {{(-1)^{n}z^{n}}} {n!} + \sum_{n=0}^{\infty} \frac {{(-1)^{n}z^{n+1}}} {n!} \\ &= \sum_{n=0}^{\infty} \frac {{(-1)^{n}z^{n}}} {n!} + \sum_{n=1}^{\infty} \frac {{(-1)^{n-1}z^{n}}} {(n-1)!} \\&= 1+\sum_{n=1}^{\infty} \frac {{(-1)^{n-1}(n-1)z^{n}}} {n!} \\ &=1+\sum_{n=1}^{\infty} \frac {{(-1)^{n+1}(n-1)z^{n}}} {n!} \end{aligned} $$
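The closed form can be compared against the Taylor expansion directly (a sanity check, not part of the original answer):

```python
import sympy as sp

z = sp.symbols('z')

# Maclaurin series of (1+z)e^{-z} vs 1 + sum_{n>=1} (-1)^{n+1}(n-1) z^n / n!
series = sp.series((1 + z) * sp.exp(-z), z, 0, 8).removeO()
closed = 1 + sum((-1)**(n + 1) * sp.Rational(n - 1, sp.factorial(n)) * z**n
                 for n in range(1, 8))
assert sp.expand(series - closed) == 0
```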
Prove for Fibonacci numbers: $3\mid f(n) \iff 4\mid n$
Let $g(n)=f(n)\bmod 3$. Then $g(0)=0$, $g(1)=1$, and $g(n+1)=\big(g(n)+g(n-1)\big)\bmod 3$ for $n>0$. The next few values are $g(2)=1$, $g(3)=2$, $g(4)=(1+2)\bmod 3=0$, and $g(5)=2$. Continuing in that vein, we can compute the following values, where I’ve starred the rows in which $3\mid f(n)$: $$\begin{array}{c|c|cc} n&f(n)&g(n)&\\ \hline 0&0&0&*\\ 1&1&1\\ 2&1&1\\ 3&2&2\\ 4&3&0&*\\ 5&5&2\\ 6&8&2\\ 7&13&1\\ \hline 8&21&0&*\\ 9&34&1\\ \end{array}$$ Can you see why I put a bar between $n=7$ and $n=8$? Remember, each value of $f$ or $g$ depends only on the immediately preceding two values. The induction with $g(n)$ is a little easier to see than the induction with $f(n)$.
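The periodicity visible in the table can be confirmed computationally for many terms (not part of the original answer):

```python
# Check 3 | f(n) <=> 4 | n for the first 500 Fibonacci numbers
a, b = 0, 1  # f(0), f(1)
for n in range(500):
    assert (a % 3 == 0) == (n % 4 == 0)
    a, b = b, a + b
```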
Does $a \uparrow \uparrow (n+1)-a \uparrow \uparrow n$ divide $a \uparrow \uparrow(n+2) - a \uparrow \uparrow(n+1 )$?
Is my proof correct for the cases $n=0$ and $n=1$? Yes, it is correct. Can this process be continued to enable an induction proof? Sure. Let's write $T(a,n) = a \uparrow\uparrow n$. Then $$\begin{align} T(a,n+2) - T(a,n+1) &= a^{T(a,n+1)} - a^{T(a,n)}\\ &= a^{T(a,n)}\left(a^{T(a,n+1)-T(a,n)}-1\right). \end{align}$$ The induction hypothesis gives $T(a,n+1) - T(a,n) = k\bigl(T(a,n) - T(a,n-1)\bigr)$, and monotonicity gives $T(a,n-1) \leqslant T(a,n)$, whence $$\begin{align} \frac{T(a,n+2)-T(a,n+1)}{T(a,n+1)-T(a,n)} &= \frac{a^{T(a,n)}\left(a^{T(a,n+1)-T(a,n)}-1\right)}{a^{T(a,n-1)} \bigl(a^{T(a,n)-T(a,n-1)}-1\bigr)}\\ &= a^{T(a,n) - T(a,n-1)}\cdot\frac{a^{k\bigl(T(a,n) - T(a,n-1)\bigr)}-1}{a^{T(a,n)-T(a,n-1)}-1} \end{align}$$ is recognised as the product of two integers, hence an integer.
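For small towers the divisibility can be tested directly (illustration for $a=2$; higher $a$ or $n$ quickly become astronomically large):

```python
# a ^^ n (tetration), computed iteratively, with a ^^ 0 = 1
def tet(a, n):
    r = 1
    for _ in range(n):
        r = a**r
    return r

# (a^^(n+2) - a^^(n+1)) should be divisible by (a^^(n+1) - a^^n)
for n in range(4):
    d1 = tet(2, n + 1) - tet(2, n)
    d2 = tet(2, n + 2) - tet(2, n + 1)
    assert d2 % d1 == 0
```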
What will happen if we change "limit as $x$ to constant" with "limit as $f(x)$ to constant" or "limit as $f(x)$ to variable"
Those expressions aren't normally used. But if there is some interpretation that makes sense and doesn't lead to contradictions, they might be useful in some contexts, as long as the reader knows what's going on. Let's consider: $$\lim_{2x \to 3} x^2 \quad (a)$$ If we agree (and I mean if!) that $x \to 3/2 \ \text{ as } \ 2x \to 3$ then: $$\lim_{2x \to 3} x^2 = \lim_{x \to 3/2} x^2 \quad (b)$$ Thus you can say, if you were writing a paper, "whenever I use expressions like $(a)$ in this paper, I mean $(b)$." However I cannot think of any way to interpret: $$\displaystyle \lim_{2x \to x} 8x$$ ...but if you think of an interpretation that makes sense, ask some mathematicians for feedback; perhaps you might use it someday when writing a paper, and maybe it will even catch on. Bottom line: the important part of introducing new, non-standard notation is making sure the reader knows what it means. But this assumes the notation has a meaning, and that the meaning doesn't lead to contradictions.
Are infinitesimals, i.e. $dx = ...$, rigorous and correct notation?
There are basically two rigorous ways to deal with differentials. One is to treat them as differential forms. This is kind of an algebraic way of doing things, it sets rules for how you can manipulate differentials without trying to describe them as, say, "limits of small differences". The other way is nonstandard analysis, of which there are at least two incompatible types. One of those is the one from which that name originated, which originally used the idea of a nonstandard model (from model theory) to construct a self-consistent theory containing infinite and infinitesimal "hyperreal" numbers. This originated with Robinson. A different formalism with the same semantics (which is probably easier to understand for non-logicians) was made later by Nelson. An entirely different semantics arises in smooth infinitesimal analysis. SIA is somewhat alien to "mainstream" mathematicians, because it works in a field which has nonzero nilpotent elements (e.g. $dx \neq 0$ but $(dx)^2=0$). Such a thing is a contradiction in terms in classical logic, so this subject requires a weaker logic called intuitionistic logic in order to function (and even then $dx \neq 0$ is really "it cannot be proven that $dx=0$", a weaker statement). Honestly, most mathematicians, scientists, and engineers don't need either one. It is better to learn methods for manipulating differentials in formal (i.e. "regarding only form", which is sort of like "non-rigorous") calculations. Optionally you can also learn proofs in standard analysis (which use finite but arbitrarily small numbers). These never explicitly use differentials.
Find $\sup\{\sin(\theta) | \theta \in [0,\pi]\}$
You are correct. Note that $a\leq1$ for every $a\in A$, i.e. $1$ is an upper bound of $A$; and for every $\epsilon<1$ we have $\epsilon<1$ with $1\in A$, so no $\epsilon<1$ is an upper bound and $A$ has no smaller upper bound. That means that $1$ is the least upper bound of $A$. Notation: $\sup A=1$. Note that $a\geq0$ for every $a\in A$, i.e. $0$ is a lower bound of $A$; and for every $\epsilon>0$ we can find some $a\in A$ with $a<\epsilon$, so $A$ has no larger lower bound. That means that $0$ is the greatest lower bound of $A$. Notation: $\inf A=0$.
How may number pairs $(n - 2, n)$ are there, less than $n$, where $(n – 2)$ is prime and $n$ is composite?
All primes greater than $5$ are congruent to $1,7,11,13,17,19,23$, or $29 \pmod{30}$. Your "$n-2$ prime, $n$ composite" cases could fall in any of these residue classes mod $30$, which means there are at most $\frac{8x}{30}=\frac{4x}{15}$ such cases up to $x$, assuming all cases are possible at once (they aren't). This is also a very weak upper bound for the number of primes.
Prove that a circle can be inscribed iff the given condition is satisfied
For variety, here is a different solution. Let us suppose that the incircles of $\triangle$s $ABD$ and $ACD$ are not tangent to each other. Clearly, as in the figure, there are two distinct points of tangency, $G$ and $K$. Let $AG = x$, $GK = \delta$ and $KD = y$. The line segments of the same color in the figure are equal. Now consider quadrilaterals $ABDC$, $AB_1DC_1$. Let us assume that a circle can be inscribed in $AB_1DC_1$. Join points of tangency to form segments $K'L'$, $M'N'$. Join $JL$, $FH$. The respective dotted segments are parallel, by Thales' Theorem. Let $DL' = a$, $L'B_1 = b$, $M'A = c$ and $K'C_1 = d$. Let $C_1J = q_1$ and $B_1H = q_2$. Using properties of tangents and isosceles $\triangle$s, we have: $$ AJ = x + \delta = c + d - q_1$$ $$ AH = x = c + b - q_2 $$ From these, we get $$ \delta = d - b - q_1 + q_2 $$ Call this equation (1). Also, we have: $$LL' = a + y = d - q_1 $$ $$ FN'= a+ \delta + y = b - q_2 $$ From these, we get: $$ \delta = b-d+q_1-q_2$$ Call this equation (2). Clearly, from (1) and (2), we get $\delta = 0$, a contradiction if the points are distinct. QED Note: this is a case in which $H$ and $J$ lie inside the quadrilateral $AB_1DC_1$, while $F$ and $L$ lie outside it. There exist other cases, which can be proven similarly.
image of convex closed, bounded subspace of H by monotonous continuous operator is closed
This is not true for $K$ being a subspace. Take $F$ to be linear, compact, and monotone. Let $K=H$. Compact operators do not have closed range. It is true if $K$ is assumed to be a convex, closed, bounded subset of $H$. Let $(x_n)$ be a sequence in $K$ such that $F(x_n)\to y$. It remains to show that there exists $x\in K$ with $F(x)=y$. Since $K$ is weakly sequentially compact, we can assume (after possibly extracting a subsequence) that $x_n \rightharpoonup x$ with $x\in K$. Take some $v\in H$. By monotonicity, we have $$ \langle F(x_n)-F(v),x_n-v\rangle\ge0. $$ Passing to the limit shows $$ \langle y-F(v),x-v\rangle\ge0\quad \forall v. $$ Setting $v=x+ tw$ for some $w\in H$, $t>0$ gives $$ \langle y-F(x+ tw),-t w\rangle\ge0 $$ Dividing by $t$, letting $t\searrow 0$ gives $$ \langle y-F(x),-w\rangle\ge0\quad \forall w $$ hence $y=F(x)$.
Is this function invertible?
Given a function $f: X \to Y$ we often abuse the notation $f^{-1}(y)$ to mean the set $\{x \in X : f(x) = y\}$, which we call the inverse image of $y$, even if $f$ does not have an inverse! If it does have an inverse, though, the set will contain only $f^{-1}(y)$ (the element) if $y$ is in the image of $f$, and it will be empty if not. You are correct that your function is not invertible, for exactly the reason you say, but your book is wrong about one thing: in this case we have $f^{-1}(3) = \{\pm \sqrt{2}\}$. If $f$ were instead given the domain, say, $\mathbb{Z}$, then $f^{-1}(3) = \emptyset$.
POISSON Distribution, probability there is no flaw
Convert the rate to $\lambda = 1$ flaw per $10m^2$ and then use the formula $\mathbb{Pr}[X = 0] = \frac{\lambda^0 e^{-\lambda}}{0!} = e^{-\lambda}$
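As a short numerical sketch (assuming, per the hint, a rate of $\lambda=1$ flaw per $10\,\mathrm{m}^2$ and that the inspected region is one such unit; the function name is mine):

```python
import math

# Poisson pmf: P(X = k) = lam**k * exp(-lam) / k!.
# With lam = 1 flaw per 10 m^2, P(no flaw in 10 m^2) = e^{-1}.

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

p_no_flaw = poisson_pmf(0, 1.0)
print(round(p_no_flaw, 4))  # e^{-1}, about 0.3679
```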
Example of a function that is not twice differentiable
Take $f(x)=x^2\sin(x^{-1})$ on $\mathbb{R}-\{0\}$ and set $f(0)=0$. You can prove this is differentiable at zero, but not twice differentiable there. That said, $f(-h)=-f(h)$ and $f(0)=0$ so that $f(h)+f(-h)-2f(0)=0$.
basis for the unit sphere?
Since $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$ all satisfy $x^2+y^2+z^2=1$ and those three vectors span all of $\mathbb{R}^3$, the subspace of $\mathbb{R}^3$ for which a basis is sought is all of $\mathbb{R}^3$. Thus, you need $3$ basis vectors, such as the three given above.
Finding values for $a$ and $b$ such that the function is continuous everywhere.
Note that $$\frac{x^2-1}{x+1}=x-1$$ if $x\ne-1$, so the definition of $f$ can be simplified to $$f(x)=\begin{cases} x-1,&\text{if }x<-4\\ ax^2+2x+b,&\text{if }-1\le x<0\\ |x+a+2|,&\text{if }x\ge 0\;. \end{cases}$$ This function is certainly continuous on the open ray $(\leftarrow,-4)$. We don’t have to worry about continuity at any point of $[-4,-1)$, because those points aren’t in the domain of the function. Thus, the problem really boils down to choosing $a$ and $b$ so that the function $$g(x)=\begin{cases} ax^2+2x+b,&\text{if }-1\le x<0\\ |x+a+2|,&\text{if }x\ge 0 \end{cases}$$ is continuous on the closed ray $[-1,\to)$. That’s not hard to do, but there are infinitely many pairs $\langle a,b\rangle$ that work. Consequently, I strongly suspect that there may be a misprint in the problem, and that the first case of the definition of $f$ was supposed to be $x<-1$.
Find positive values such that $xy = 32$ and the sum $4x+y$ is as small as possible.
The first step is to use the constraint $xy=32$ to eliminate $y$ (or $x$) from the problem. Using $y = \frac{32}{x}$ we can write $4x+y$ as $4x + \frac{32}{x}$, and with this we have reduced the problem to finding the minimum of a function of a single variable. The most common method to solve these types of problems is to resort to derivatives: if the function $f(x)$ has a minimum/maximum point $x_*$ then $f'(x_*) = 0$. With $f(x) = 4x + \frac{32}{x}$ we have $f'(x) = 4 - \frac{32}{x^2}$, so $f'(x) = 0$ when $x^2 = 8 \implies x = \pm 2\sqrt{2}$. Only $x = +2\sqrt{2}$ satisfies $x\geq 0$, which is the region we are interested in, so $f$ has only a single extremal point for $x>0$. Having found the extremal point, the final part is to make sure this is indeed a minimum point (as opposed to a maximum). Since this is the only extremal point for $x > 0$ and $f(x)$ grows without bound as $x\to\infty$ (and also as $x\to 0$), the point has to be a minimum point.
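A numerical cross-check of this calculation (standard library only; the grid search is just a sanity test, not part of the method):

```python
import math

# f(x) = 4x + 32/x should be minimised at x = 2*sqrt(2),
# where y = 32/x = 8*sqrt(2) and the minimum value is 16*sqrt(2).

def f(x):
    return 4 * x + 32 / x

x_star = 2 * math.sqrt(2)

# The derivative 4 - 32/x^2 vanishes at x_star (central difference check):
h = 1e-6
assert abs((f(x_star + h) - f(x_star - h)) / (2 * h)) < 1e-4

# No point on a fine grid over (0, 20] does better:
grid = [0.01 * k for k in range(1, 2001)]
assert min(f(x) for x in grid) >= f(x_star)
```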
Differentiating $\langle Ax,x\rangle$
If $F(x) = f^T(x) g(x)$ the product rule gives $DF(x)(h) = (Df(x)(h))^T g(x) + f^T(x) Dg(x)(h)$. With $f(x) = Ax$, you have $Df(x)(h) = Ah$, and with $g(x) = x$, you have $Dg(x)(h) = h$. Substituting gives $DF(x)(h) = h^T A^T x + x^T A^Th = 2 x^T Ah$ (using $A=A^T$). This is sometimes written as $DF(x) = 2 x^T A$. Aside: This is easy to verify directly by computing $F(x+h)-F(x)$ and identifying the term that is linear in $h$. In particular, $F(x+h)-F(x)= 2 x^T Ah + h^T Ah$.
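The aside is easy to verify numerically; a finite-difference sketch with an arbitrary symmetric $2\times2$ matrix (all values are illustrative choices):

```python
# Finite-difference check that D(x^T A x)(h) = 2 x^T A h for symmetric A.

def quad_form(A, x):
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

A = [[2.0, 1.0], [1.0, 3.0]]   # symmetric
x = [0.7, -1.2]
h = [0.3, 0.5]

t = 1e-6
x_shift = [x[i] + t * h[i] for i in range(2)]
numeric = (quad_form(A, x_shift) - quad_form(A, x)) / t

exact = 2 * sum(x[i] * A[i][j] * h[j] for i in range(2) for j in range(2))
assert abs(numeric - exact) < 1e-4
```

The leftover discrepancy is exactly the quadratic term $t\,h^TAh$ from the aside.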
For what values of $p$ does this series converge?
Limit comparison test. $$\lim \frac{\sin(1/n)/n^p}{1/n^{p+1}}=1,$$ and $\sum\frac{1}{n^{p+1}}$ converges when $p>0$, diverges when $p\leq 0$.
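Note that the ratio simplifies to $n\sin(1/n)$, independent of $p$, which is easy to check numerically:

```python
import math

# The comparison ratio [sin(1/n)/n^p] / [1/n^(p+1)] equals n*sin(1/n) -> 1.

ratios = [n * math.sin(1.0 / n) for n in (10, 100, 1000, 10000)]
assert all(abs(r - 1) < 1e-2 for r in ratios)
assert abs(ratios[-1] - 1) < 1e-8   # the error is about 1/(6 n^2)
```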
Optimization of location to find where 2 particles are closest.
Use a right triangle to visualize this situation. You know certain facts: 1) the speeds of the particles can be represented on the two legs; 2) the hypotenuse can represent the distance between the two particles. Thus, the function representing the distance will be based on the Pythagorean theorem.
Find the orthogonal projection using the given weighted inner product
By definition, if you're projecting something onto the vector $v$, you're going to get a solution in the form $\lambda v$ for some $\lambda\in \mathbb{R}$. You want to find $\lambda$ such that $$ (v, u-\lambda v) = 0. $$ We can rearrange this to $$ (v, u)=\lambda(v,v), $$ i.e. $$ \lambda=\frac{(v, u)}{(v,v)},\qquad \operatorname{proj}_v(u)=\frac{(v, u)}{(v,v)}v. $$ The same trick applies to finding the projection of $u$ onto the span of $v$ and $w$. Now we want to find $\lambda$ and $\mu$ such that $(v,u- \lambda v - \mu w)=0$ and $(w,u-\lambda v - \mu w)=0$. Then just take $\operatorname{proj}_{v,w}(u)=\lambda v+\mu w.$ That means we want to solve the system of equations $$ \pmatrix{(v,u) \\ (w,u)}=\pmatrix{(v,v)&(v,w)\\(w,v)&(w,w)} \pmatrix{\lambda\\\mu}.$$ If the inner product can be represented with a symmetric, positive-definite matrix, say $M$, that turns into $$\pmatrix{v^\top\\w^\top}Mu=\pmatrix{v^\top\\w^\top}M\pmatrix{v&w}\pmatrix{\lambda\\\mu}.$$
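A tiny worked instance of the one-vector formula (the weight matrix $M$ and the vectors $u$, $v$ below are illustrative choices, not the data of the original exercise):

```python
# Projection of u onto v under the weighted inner product (a, b) = a^T M b.

M = [[2.0, 0.0], [0.0, 3.0]]     # symmetric positive definite weights

def ip(a, b):
    return sum(a[i] * M[i][j] * b[j] for i in range(2) for j in range(2))

u = [1.0, 2.0]
v = [1.0, 1.0]

lam = ip(v, u) / ip(v, v)        # = 8/5 here
proj = [lam * v[i] for i in range(2)]

# Defining property: the residual u - proj is M-orthogonal to v.
resid = [u[i] - proj[i] for i in range(2)]
assert abs(ip(v, resid)) < 1e-12
```

The two-vector case is the same computation with the $2\times2$ Gram matrix in place of the scalar $(v,v)$.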
Norm cone is a proper cone
Obviously $K$ is convex and closed, as it is defined by the inequality $g(x,\lambda):=\|x\| - \lambda\le0$ with $g$ continuous and convex. Also it has non-empty interior: take $x_0\ne0$, $\lambda_0>\|x_0\|$; then $\|x\| < \lambda$ in a neighborhood of $(x_0,\lambda_0)$. If $(x,\lambda),(-x,-\lambda) \in K$, then $0\le \|x\| \le \min(\lambda,-\lambda)$, implying $\lambda=0$ and so $x=0$. Hence $K$ is pointed.
Efficient ways to read and learn a new topic
Perhaps the question in general is more concerned with cognitive processes than with mathematics alone. Having said that, there are differences between typing (interpret it as your "blogging") and handwriting: Marieke Longcamp, Marie-Thérèse Zerbato-Poudou, Jean-Luc Velay, "The influence of writing practice on letter recognition in preschool children: A comparison between handwriting and typing", Acta Psychologica, Volume 119, Issue 1, May 2005, Pages 67–79. And I quote part of the abstract: "The results showed that in the older children, the handwriting training gave rise to a better letter recognition than the typing training." (my emphasis). Mathematics is composed not only of letters, but also of many different symbols. The above-mentioned study suggests that if letters are better recognized through handwriting, then mathematical symbols (especially in such a purely abstract field as topology) even more so. And there are several other similar studies, for example: Marieke Longcamp, Céline Boucard, Jean-Claude Gilhodes, Jean-Luc Velay, "Remembering the orientation of newly learned characters depends on the associated writing knowledge: A comparison between handwriting and typing", Human Movement Science, Volume 25, Issues 4–5, October 2006, Pages 646–656. I quote part of their abstract: "Results showed that when the characters had been learned by typing, they were more frequently confused with their mirror images than when they had been written by hand. This handwriting advantage did not appear immediately, but mostly three weeks after the end of the training." Finally, let me quote Janet Emig, "Writing as a Mode of Learning", College Composition and Communication, Vol. 28, No. 2, May 1977. Mind you, Emig does not compare typing into a computer, as opposed to handwriting, but she does imply all along that when she speaks about "writing" she means "handwriting".
Emig says: "what is striking about writing as a process is that by its very nature, all three ways of dealing with actuality [1) enactive - learn by doing; 2) iconic - we learn 'by depiction in an image' and 3) representational or symbolic] are simultaneously or almost simultaneously deployed. That is, the symbolic transformation of experience through the specific symbol system of verbal language is shaped into an icon (the graphic product) by the enactive hand. If the most efficacious learning occurs when learning is re-inforced, then writing through its inherent re-inforcing cycle involving hand, eye, and brain marks a uniquely powerful multi-representational mode of learning".
"Standard reference" for $C_c^\infty(\mathbb R)$ is dense in $C_c(\mathbb R)$
(I am assuming you are equipping $C_c^{\infty}(\mathbb R)$ and $C_c(\mathbb R)$ with the sup-norm). One can use the Stone-Weierstrass Theorem for locally compact Hausdorff spaces to show the result (for references to the Stone-Weierstrass Theorem, see Willard's General Topology Section 44 or Folland Chapter 4). In fact, the Stone-Weierstrass Theorem yields a stronger result: $C_c^{\infty}(\mathbb R^n)$ is dense in $C_0(\mathbb R^n)$ when both spaces are given the topology of uniform convergence. The sum, product, scalar multiple, and complex conjugate of smooth compactly supported functions is easily verified to also be smooth and compactly supported. The fact that $C_c^{\infty}(\mathbb R^n)$ separates points and vanishes nowhere follows from the following theorem in Folland: Theorem (Folland, 8.18). Let $K \subseteq \mathbb R^n$ be nonempty and compact, and let $U$ be an open set with $U \supseteq K$. Then, there is $f \in C_c^{\infty}(\mathbb R^n) $ such that $0 \leq f(x) \leq 1 $ for all $x \in \mathbb R^n$, $f(K)=\{1\}$, and $\text{supp}(f) \subseteq U$. Theorem 8.17 in Folland also proves this in a different way using an approximation of the identity.
Equivalence relations regarding binary relations
No, what you have is not correct. For $R$ to be reflexive, it must contain $(x, x)$ for every $x \in X$. For $R$ to be symmetric, it must contain $(x, y)$ if it contains $(y, x)$. For $R$ to be transitive, it must contain $(x, z)$ if it contains $(x, y)$ and $(y, z)$. In this case, $R$ is not reflexive or symmetric, but it is transitive. Not reflexive: $R$ does not contain $(b, b)$, $(c, c)$ or $(d, d)$. Not symmetric: $R$ contains $(b, c)$ but does not contain $(c, b)$. Similarly, $R$ contains $(c, d)$ but not $(d, c)$ and $(b, d)$ but not $(d, b)$. Transitive: There is no triple of elements $x, y, z \in X$ such that $R$ contains $(x, y)$ and $(y, z)$ but not $(x, z)$. It might help to think of $R$ as defining a binary relation $\sim$: for each pair of elements $x, y \in X$, $x \sim y$ if and only if $(x, y) \in R$. The reflexive property means that every element is related to itself: $x \sim x$ for every $x \in X$. The symmetric property means that you can switch the order of the operands: $x \sim y \iff y \sim x$. The transitive property means that you can "chain" the relation: $x \sim y \text{ and } y \sim z \implies x \sim z$.
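These three checks translate directly into code. A sketch using a stand-in reconstruction of the relation (only the pairs the answer mentions, so `X` and `R` here are illustrative, not the exact original):

```python
# Generic predicate checks for a finite relation R on a set X.

X = {"b", "c", "d"}
R = {("b", "c"), ("c", "d"), ("b", "d")}

def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    return all((x, w) in R
               for (x, y) in R for (z, w) in R if y == z)

assert not is_reflexive(R, X)
assert not is_symmetric(R)
assert is_transitive(R)
```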
Give a recursive definition of the sequence $a_n = 2^n$, $n=2,3,4,\dots$, where $a_1 = 2$
For the first one, write the term $a_{n+1}$ and compare it to $a_n$: $$a_{n+1}=2^{n+1}=2\cdot2^n=2a_n$$ For the second one, repeat the process: $$\begin{align}a_{n+1}&=(n+1)^2-3(n+1)\\ &=n^2+2n+1-3n-3\\ &=n^2-3n+2n-2\\ &=a_n+2(n-1) \end{align}$$
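Both recurrences can be checked against their closed forms with a short loop:

```python
# a_{n+1} = 2 a_n reproduces a_n = 2^n, starting from a_1 = 2:
a = 2
for n in range(1, 20):
    assert a == 2 ** n
    a = 2 * a

# b_{n+1} = b_n + 2(n - 1) reproduces b_n = n^2 - 3n, starting from b_1 = -2:
b = -2
for n in range(1, 20):
    assert b == n * n - 3 * n
    b = b + 2 * (n - 1)
```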
A circle is a set of measure zero. Generalizations?
As others have pointed out, Sard's theorem gives a general result. In case you are interested in a low-brow reasoning I proffer the following. If $C$ is a compact subset of $\Bbb{R}^n$, and $f:C\to\Bbb{R}$ is a continuous function, then the graph of $f$, $G=\{(x,f(x))\in\Bbb{R}^{n+1}\mid x\in C\}$, has measure zero. This is because $f$ is necessarily uniformly continuous, and thus for all $\varepsilon>0$ we can find a finite set of boxes such that A) their union contains $G$, B) their bases cover $C$ with as little extra as desired, C) their heights are all $\le\varepsilon$. This implies that $m(G)=0$. If a set $S$ consists of at most countably infinitely many pieces like the graph $G$ in item 1, then $m(S)=0$. This follows from countable additivity. So for example the unit circle is the union of two graphs of a continuous function defined on $C=[-1,1]$, and thus has measure zero. Similarly the sine curve in the plane has measure zero as the union of the graphs of $\sin x$ restricted to $C_n=[2n\pi,2(n+1)\pi],n\in\Bbb{Z}$.
Polynomials with degree $5$ solvable in elementary functions?
These two are relatively well-known. I. For the DeMoivre quintic: $$x^5+5ax^3+5a^2x+b = 0\tag1$$ $$x = \left(\frac{-b+\sqrt{D}}{2}\right)^{1/5}-a\left(\frac{-b+\sqrt{D}}{2}\right)^{-1/5},\quad D=b^2+4a^5\\ \color{blue}{\text{or}}\\ \\ x_k = 2\sqrt{-a}\;\sin\left(\tfrac{1}{5}\,\arcsin\big(\tfrac{-b}{2\sqrt{-a^5}}\big)-\tfrac{2\pi\,k}{5}\right)\\ \color{blue}{\text{or}}\\ \\ x_k = 2\sqrt{-a}\;\cos\left(\tfrac{1}{5}\,\arccos\big(\tfrac{-b}{2\sqrt{-a^5}}\big)-\tfrac{2\pi\,k}{5}\right)$$ for all five roots $x_k$ with $k =0,1,2,3,4$. Note that all quintics can be reduced, in radicals (using a quadratic Tschirnhausen transformation), to the form, $$x^5+5ax^3+5bx+c=0\tag2$$ so the general quintic is tantalizingly close to, but not quite, solvable. P.S. This is directly analogous to the soln of the depressed cubic which all cubics can be reduced to, $$x^3+3ax+b = 0\tag3$$ where, $$x_k = 2\sqrt{-a}\;\cos\left(\tfrac{1}{3}\,\arccos\big(\tfrac{-b}{2\sqrt{-a^3}}\big)-\tfrac{2\pi\,k}{3}\right)$$ II. For the tangent quintic: $$x^5 + 5a x^4 + 10 b x^3 + 10a b x^2 + 5b^2 x + a b^2 = 0\tag4$$ $$x = \sqrt{b}\;\frac{1+R^{1/5}}{1-R^{1/5}},\quad R=\frac{a+\sqrt{b}}{a-\sqrt{b}}\\ \color{blue}{\text{or}}\\ \\ x_k =\sqrt{-b}\,\tan\left(\tfrac{1}{5}\,\arctan\big(\tfrac{-a}{\sqrt{-b}}\big)-\tfrac{2\pi\,k}{5}\right)$$ There does not seem to be a commonly known quintic with roots that are expressible in terms of elementary functions (trigonometric, logarithmic, hyperbolic, etc.) that cannot be expressed in terms of radicals as well.
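The radical solution of $(1)$ is easy to sanity-check numerically, e.g. with the sample values $a=1$, $b=2$ (chosen so that the fifth root is real):

```python
import math

# x = t**(1/5) - a * t**(-1/5) with t = (-b + sqrt(D))/2, D = b^2 + 4 a^5,
# should solve x^5 + 5 a x^3 + 5 a^2 x + b = 0.

a, b = 1.0, 2.0
D = b * b + 4 * a ** 5
t = (-b + math.sqrt(D)) / 2        # positive here, so the real fifth root exists
u = t ** (1 / 5)
x = u - a / u

residual = x ** 5 + 5 * a * x ** 3 + 5 * a * a * x + b
assert abs(residual) < 1e-9
```

Behind the scenes: with $x=u-a/u$ one gets $x^5+5ax^3+5a^2x=u^5-a^5/u^5$, and $t=u^5$ solves the quadratic $t^2+bt-a^5=0$.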
$X$ is complete iff $\sum_{n=1}^\infty \|x_n\| < \infty \implies \sum_{n=1}^\infty x_n$ converges (Carothers, Theorem $7.12$)
By taking $\epsilon = 2^{-k}$ as you suggest, you find that for every $k$, there exists $N_k$ such that for all $n,m \ge N_k$ we have $\|x_n - x_m\| < 2^{-k}$. Taking $x_{N_k}$ as your subsequence almost works, except that the $N_k$ are not necessarily increasing so it may not actually be a subsequence. To fix this, we can use Korone's idea of "making sure the next index is larger than both the previous index and the threshold!" Define $n_k$ recursively by $n_0 = N_0$, $n_{k+1} = \max(N_{k+1}, 1+n_{k})$. Then you can check that the desired property holds for the subsequence $x_{n_k}$, since $n_{k+1} \ge n_k \ge N_k$.
Prove that $(\mathbf{AB})^{T} = \mathbf B^{T}\mathbf A^{T}$ where $\mathbf A$ and $\mathbf B$ are matrices
It's incorrect; you cannot transpose a specific entry of a matrix (unless you treat it as a $1\times1$ matrix, but that's not going to get you anywhere). To do it correctly you need to write $$ \big(({\bf AB})^T\big)_{i,j} = ({\bf AB})_{j,i}$$ and later, after using the formula for the entries of a product of matrices, you'll go back with $$ {\bf A}_{j,k} {\bf B}_{k,i} = ({\bf A}^T)_{k,j} ({\bf B}^T)_{i,k} = ({\bf B}^T)_{i,k} ({\bf A}^T)_{k,j}$$
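The index computation is mirrored by a direct entry-by-entry check on a concrete example (the matrices below are arbitrary illustrative choices):

```python
# (AB)^T == B^T A^T, checked entrywise on a 3x2 times 2x3 example.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

A = [[1, 2], [3, 4], [5, 6]]      # 3 x 2
B = [[7, 8, 9], [10, 11, 12]]     # 2 x 3

assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
```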
Differentiability of an even function
By the Principle of Explosion, all answers are correct. There are two contradictory statements in the question: It is stated that $f'(0)$ exists (because the limit of the difference quotient exists) and that $f$ is even. Hence \begin{align} f'(0) = \lim_{h\to 0} \frac{f(h)-f(0)}{h} &= \lim_{h\to 0} \frac{f(-h)-f(0)}{h} \\ &= \lim_{-h\to 0} \frac{f(h)-f(0)}{-h} \\ &= -\lim_{h\to 0} \frac{f(h)-f(0)}{h} \\ &= -f'(0). \end{align} Since $f'(0) = -f'(0)$, it must be the case that $f'(0) = 0$. It is also stated that $f'(0)$ exists and $f'(0) > 0$. These two statements are contradictory, therefore any conclusion follows. The correct answer, then, is to mark all of the multiple choice options. ;) While it is impossible to know, it is likely that (as has been pointed out in the comments) the author(s) intended to consider the one-sided limit. It is reasonable to conclude that the question should have read: If $f$ is an even function such that $$ \lim_{h\to 0^+} \frac{f(h)-f(0)}{h} $$ has some finite, non-zero value, then... (multiple choice options). In this case, there is no contradiction. Then, by the reasoning above, we know that the function cannot be differentiable at $x=0$. If it were, then the derivative would be zero, but we know that it is not. On the other hand, suppose that the one-sided limit is $L$, i.e. that $\lim_{h\to 0^+} (f(h)-f(0))/h = L$. Then $$ |f(h) - f(0)| = \left| h \frac{f(h) - f(0)}{h} \right| = |h| \left| \frac{f(h) - f(0)}{h} \right| \approx |h| |L|. $$ We can make this as small as we like by choosing $h$ sufficiently small. The limit from the left will be the same (the sign of the limits will differ, but this is absorbed into the absolute value). This implies that $f$ is continuous at zero. Therefore, assuming that the question was supposed to read as indicated above, the correct answer would be (b).
What is meant by $C(\mathbb{R})$
$C(\mathbb{R})$ typically denotes the space of continuous functions $\mathbb{R}\to \mathbb{R}$. This can also be denoted by $C^0(\mathbb{R})$ in some contexts.
How to define a family of curves from a linear system?
Let $X$ be any variety and $|D|$ a linear system. Then one has the incidence variety $\Gamma\subset X\times |D|$, consisting of pairs $(x,E)$ where $x\in E$. Then take the projection $\Gamma\to |D|$ to get what you want.
Help me to understand a solve of example that I have homework correctly
Part (d) of [1] is incorrect: $A\cap B=\{1,2\}$, and $A\cap C=\{1,d\}$, so $$(A\cap B)\cup(A\cap C)=\{1,2\}\cup\{1,d\}=\{1,2,d\}\;.$$ Note that you can actually deduce this from your correct answer to (c): one of the distributive laws says that $$(A\cap B)\cup(A\cap C)=A\cap(B\cup C)\;.$$ The correct answer to (h) is the empty set, written $\varnothing$ or, if list form is required here as well, $\{\}$. Everything else is correct.
Which graph products are categorical products?
There is a good treatment of this question in "Algebraic Graph Theory" by Ulrich Knauer. If you look at weak graph homomorphisms, which he calls egamorphisms, then the product in that category is what he calls the boxcross product (a weak homomorphism allows edges to be mapped to vertices as long as their endpoints are appropriately mapped). It is like the categorial product in the normal category with the addition of edges from to for every a, and conversely for fixed b. Another way to view this is to notice that the category of graphs under egamorphisms is the same as the category of reflexive graphs under reflexive graph homomorphisms. If instead you have continuous homomorphisms, so edges between the images of two vertices must themselves have a pre-image, then you get another category of graphs wherein the disjunction is now the categorial product. These are the only 3 instances of which I am familiar where this occurs. Knauer also considers a number of other kinds of homomorphisms and shows how the resulting categories do not have products. For more detail I strongly suggest Chapter 4 of the book.
What is wrong in my attempt to expand the definite integral $\int_0^1\ln(1+x^3)dx$ about $x = 1$ through second degree terms?
You are falling victim to your own imprecise notation. After all, what exactly do you mean by the statement $$f(x) = \int_0^1 \log(1+x^3) \, dx?$$ The RHS is a constant. It uses $x$ as the variable of integration, yet you are using the same variable on the left as if somehow you are expecting the RHS to vary with $x$. That is why, in your second approach, you get $f'(x) = 0$, because you just found an elaborate way to show that the derivative of a constant is zero. You don't have this problem in your first approach because you wrote $$f(x) = \int_{\zeta = x}^1 \log(1 + \zeta^3) \, d\zeta.$$ And now the RHS is a function of $x$ because $x$ is the lower limit of integration and you have used a different variable of integration $\zeta$. Well, you didn't really write it exactly this way, instead you wrote the abomination $$f(x) = \int_x^1 \log (1 + x^3) \, dx,$$ but my version is what you intended. Ultimately, were you to use more terms in your series, you would know which approach is correct: we have $$f(x) \approx -(x-1) \log (2) -\frac{3}{4} (x-1)^2 -\frac{1}{8} (x-1)^3 + \frac{5}{32} (x-1)^4 + O((x-1)^5)$$ which, upon evaluating at $x = 0$, yields $$f(0) \approx \log 2 - \frac{15}{32} \approx 0.224397$$ which is reasonably close to the exact value, with even more terms giving better convergence.
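One can also settle this numerically (a sketch, using composite Simpson's rule and the standard library only): the integral $\int_0^1\log(1+t^3)\,dt$ comes out to about $0.2001$, and the four-term series value $\log 2 - \tfrac{15}{32} \approx 0.2244$ is indeed reasonably close to it.

```python
import math

# f(0) = integral of log(1 + t^3) over [0, 1], by composite Simpson's rule.

def f0_numeric(n=1000):               # n must be even
    h = 1.0 / n
    total = math.log(1.0) + math.log(2.0)   # endpoint terms t = 0 and t = 1
    for k in range(1, n):
        total += (4 if k % 2 else 2) * math.log(1 + (k * h) ** 3)
    return total * h / 3

series = math.log(2) - 15 / 32        # the 4-term expansion evaluated at x = 0
assert abs(f0_numeric() - series) < 0.03
```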
How can I prove $d_1(x,y) \leq n d_\infty (x,y)$
As C. Falcon says, you should replace $d_{\infty}$ with $d_{2}$. By Cauchy-Schwarz we have $$\left(\sum_{i=1}^n (|x_i - y_i|\cdot 1)\right)^2\le \left(\sum_{i=1}^n |x_i - y_i|^2\right)(1^2+\cdots+1^2).$$ Then $$\sum_{i=1}^n |x_i - y_i|\le \sqrt{n}\sqrt{\sum_{i=1}^n |x_i - y_i|^2}.$$ Hence $d_{1}(x,y)\le \sqrt{n}\,d_{2}(x,y)$.
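A quick spot-check of the inequality on random vectors (the dimension and coordinate ranges are arbitrary choices):

```python
import math
import random

# d_1(x, y) <= sqrt(n) * d_2(x, y) on random points of R^n.

random.seed(0)
n = 10
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(n)]
    y = [random.uniform(-5, 5) for _ in range(n)]
    d1 = sum(abs(a - b) for a, b in zip(x, y))
    d2 = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    assert d1 <= math.sqrt(n) * d2 + 1e-9
```

Equality holds exactly when all coordinates of $x-y$ have the same absolute value, e.g. $x-y=(1,1,\dots,1)$.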
Writing A as disjoint sum of sets B, C such that derived sets of A, B and C are equal.
Let $A$ be any set in any metric space; I will show that there are disjoint sets $X,Y\subseteq A$ such that $X'=Y'=A'.$ (We do not need to assume that the space is separable, nor that $A$ is disjoint from $A'.$) Let $P$ be the set of all ordered pairs $(X,Y)$ of disjoint subsets of $A,$ partially ordered so that $$(X_1,Y_1)\le(X_2,Y_2)\iff X_1\subseteq X_2\text{ &amp; }Y_1\subseteq Y_2.$$ For $n\in\mathbb N$ let $P_n$ be the set of all pairs $(X,Y)\in P$ such that every $3$-element subset of $X$ or $Y$ has diameter at least $\frac1n.$ By Zorn's lemma, $P_1$ has a maximal element $(X_1,Y_1),$ which can be extended to a maximal element $(X_2,Y_2)$ of $P_2,$ and so on. Thus we can choose for each $n\in\mathbb N$ a maximal element $(X_n,Y_n)$ of $P_n$ so that $X_1\subseteq X_2\subseteq X_3\subseteq\cdots\text{ and }Y_1\subseteq Y_2\subseteq Y_3\subseteq\cdots.$ Let $X=\bigcup_{n\in\mathbb N}X_n$ and $Y=\bigcup_{n\in\mathbb N}Y_n.$ Then $X,Y$ are disjoint subsets of $A;$ I claim that $X'=Y'=A'.$ By symmetry it will be enough to show that $X'=A'.$ Clearly $X'\subseteq A'$ since $X\subseteq A.$ I have to show that $A'\subseteq X'.$ Assume for a contradiction that $A'\not\subseteq X'.$ Choose a point $a\in A'\setminus X'$ and choose $\varepsilon\gt0$ so that $B(a;\varepsilon)\cap X\subseteq\{a\}.$ Choose $n\in\mathbb N$ so that $\frac2n\lt\varepsilon.$ Now $B(a;\frac1{2n})$ contains infinitely many elements of $A,$ but at most two elements of $Y_n.$ Therefore we can choose a point $x\in A\cap B(a;\frac1{2n})$ such that $x\notin Y_n$ and $x\ne a.$ Since $(X_n,Y_n)$ is a maximal element of $P_n,$ and since $x\notin X_n,$ it follows that $(X_n\cup\{x\},Y_n)\notin P_n.$ Thus the set $X_n\cup\{x\}$ must contain a $3$-element set $\{x,y,z\}$ of diameter less than $\frac1n.$ Now $d(y,a)\le d(y,x)+d(x,a)\lt\frac2n$ and $d(z,a)\le d(z,x)+d(x,a)\lt\frac2n,$ and so $\{y,z\}\subseteq B(a;\frac2n)\cap X_n\subseteq B(a;\varepsilon)\cap X,$ contradicting the assumption that 
$B(a;\varepsilon)\cap X\subseteq\{a\}.$ Corollary. For any set $A$ in any metric space, there are infinitely many pairwise disjoint sets $X_1,X_2,X_3,\dots\subseteq A$ such that $X_n'=A'$ for each $n\in\mathbb N.$
Derive asymptotic behavior of inverse of the normal cdf with respect to 2^n
When $z\to+\infty$, $\Phi(-z)\sim1/(z\mathrm e^{z^2/2}\sqrt{2\pi})$. Since $2^{1-n}\to0$, this is the regime of interest. The solution of $\Phi(-z_n)=b\mathrm e^{-cn}$ with $b=2$ and $c=\log2$ solves $b\sqrt{2\pi}z_n\mathrm e^{z_n^2/2}\sim\mathrm e^{cn}$, that is, $$ z_n^2+2\log z_n=2cn-\log(2\pi)-2\log b+o(1), $$ in particular $z_n\sim\sqrt{2cn}$. The question asks about $x_n=-anz_n$ with $a=0.58$ hence $x_n\sim -an\sqrt{2cn}$, in particular $x_n=\Theta(n\sqrt{n})$ hence $x_n=\Omega(n\sqrt{n})$ and $x_n=O(n\sqrt{n})$. If need be, the equivalent in the first paragraph yields more precise estimates, for example, one has $z_n=\sqrt{2cn}-\log n/\sqrt{8cn}+o(\log n/\sqrt{n})$ hence, introducing $\alpha=a\sqrt{2c}$ and $\beta=a/\sqrt{8c}$, $$ x_n=-\alpha n\sqrt{n}+\beta\sqrt{n}\log n+o(\sqrt{n}\log n). $$
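The leading-order behaviour $z_n\sim\sqrt{2cn}$ can be checked by inverting $\Phi$ numerically via bisection, using the identity $\Phi(-z)=\tfrac12\operatorname{erfc}(z/\sqrt2)$ (with $b=2$, $c=\log 2$ as above; the convergence of the ratio to $1$ is slow because of the logarithmic corrections):

```python
import math

# Solve Phi(-z_n) = 2**(1 - n) and compare z_n with sqrt(2 c n), c = log 2.

def phi_neg(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

def z_of(n):
    lo, hi = 0.0, 100.0               # phi_neg is decreasing on this interval
    target = 2.0 ** (1 - n)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi_neg(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = math.log(2)
ratios = [z_of(n) / math.sqrt(2 * c * n) for n in (50, 100, 200)]
assert all(abs(r - 1) < 0.1 for r in ratios)
assert ratios[0] < ratios[1] < ratios[2]   # slowly increasing towards 1
```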
To prove a statement related to mean value property
Your claim as stated is false. Consider $a=1$ and $b=1$; then you have \begin{align} \int^{2\pi}_0 \frac{d\theta}{1+\cos\theta} \end{align} which does not converge.
Is it possible for continuous fourier transform of a function to have values only on finite number of frequencies?
Suppose $g$ is the Fourier transform of some $f$, and that $g$ is zero everywhere except a finite set $\{x_n\}$, where it is finite. Then $g$ is a function (i.e. "not a distribution"*) that is zero a.e.; what happens if you integrate a function that is zero a.e.? In particular, what if you take the inverse Fourier transform of $g$? *: of course, functions are distributions, but hopefully my meaning is understood and you will allow me to be informal here.
"Faster" version of powers.
The operation you describe is called tetration and is sometimes denoted with a leading superscript. For instance, $$ {}^3 5 = 5^{5^5} \text{.} $$ More generally, for real $a \neq 0$ and nonnegative integer $n$, $$ {}^n a = \begin{cases} 1 ,& n = 0 \\ a^{\left( {}^{(n-1)} a \right)} ,& n > 0 \text{.} \end{cases} $$ The generalization you are starting along is the hyperoperation sequence. Continuing on, each operation is just repeated application of the previous operation. Tetration is repeated exponentiation, pentation is repeated tetration, and so on. (The notation for hyperoperations is ... awful, but they don't usually come up, so there is little need to make it better.)
What is the meaning of the function $f:B\to A$ for sets $A$ and $B$?
A function $f:B\to A$ must assign a value to every element of $B$ (the domain), but not every element of $A$ has to occur as a value. If every element of $A$ is in the range, the function is called surjective.
On real roots of a polynomial equation
As $g'(x)=12f(x)$, all local extrema of $g$ occur at roots $r$ of $f$, i.e. where $f(r)=0$. At these local extrema the value $g(r)=-f'(r)^2<0$ is negative; even if $f$ has 3 real roots, the local maximum of $g$ still has a negative value. As the leading term of $g$ is $3x^4$, for large $|x|$ the value of $g$ becomes positive. Thus there are roots of $g$ to the left and right of the root set of $f$. As $g$ is monotonic on those segments, there is exactly one root of $g$ left of the leftmost root of $f$ and one right of the rightmost one. In other words, let $a=\min\{x\in\Bbb R:f(x)=0\}$ and $b=\max\{x\in\Bbb R:f(x)=0\}$. Then $g$ is negative on $[a,b]$ and monotonically decreasing resp. increasing on $(-\infty,a]$ and $[b,\infty)$, with a sign change and thus exactly one root in each of those intervals. Another way, using the degree of $g$ more directly: if $g$ had $4$ real roots $s_1\le s_2\le s_3\le s_4$, then $g(x)=3(x-s_1)(x-s_2)(x-s_3)(x-s_4)$ would take non-negative values on the interval $[s_2,s_3]$, thus also at the local maximum $r$ there. That is impossible by the first observation: $g(r)=-f'(r)^2$ and $f'(r)\ne0$, as $r$ is also a root of $f$, and a simple one at that.
What is $\nabla X$ in Riemannian geometry?
I imagine this is in the context of differential geometry (rather than vector calculus, where I would not know what it stands for). Then it is very simply related to $\nabla_YX $ which you say you are familiar with: \begin{equation} \nabla X (Y) = \nabla _Y X. \end{equation} While the covariant derivative preserves a tensor field rank (that is, the covariant derivative of a vector is a vector, of a 1-form is a 1-form, and so on), the action of $\nabla$ itself increases the covariant (differential form) degree by one. Hence we can characterise it by saying how it acts on vector fields as above.
Optimal Mix / constrained optimization
You can just substitute the $Q_i$ values to get $$\text{max}_{x_i} \big(5+0.5\ln(x_1)+3+0.7\ln(x_2)+2+0.3\ln(x_3)+4+0.9\ln(x_4)+6+1.6\ln(x_5)\big)$$ $$\text{subject to }x_1+x_2+x_3+x_4+x_5\le 1.000.000$$ Since the objective function is increasing w.r.t. every $x_i$, we can reformulate the constraint as $$x_1+x_2+x_3+x_4+x_5= 1.000.000$$ and the Lagrangian formulation becomes $$L=\big(20+0.5\ln(x_1)+0.7\ln(x_2)+0.3\ln(x_3)+0.9\ln(x_4)+1.6\ln(x_5)\big)+\lambda\big(x_1+x_2+x_3+x_4+x_5- 1.000.000\big)$$ which you can solve via the first-order conditions $$\frac{\partial L}{\partial x_i}=0, \qquad \frac{\partial L}{\partial \lambda}=0.$$
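The first-order conditions here give $w_i/x_i=-\lambda$ for each log-coefficient $w_i$, so $x_i$ is proportional to $w_i$ and the constraint pins down the constant: each $x_i$ is the budget times that weight's share of the total weight. A quick check in plain Python (variable names are mine):

```python
# Log-coefficients of x_1..x_5 from the objective, and the budget.
weights = [0.5, 0.7, 0.3, 0.9, 1.6]
budget = 1_000_000

# FOC: w_i / x_i = -lambda for all i, so x_i is proportional to w_i;
# the budget constraint determines the proportionality constant.
total = sum(weights)                       # 4.0
x = [w * budget / total for w in weights]  # [125000, 175000, 75000, 225000, 400000]
```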
How $+\infty$ is identity element for min operation
We say that $0$ is the identity element for the addition operation, because $$0+a = a+0 = a$$ for any $a$. We also say that $1$ is the identity element for the multiplication operation, because $$1 \cdot a = a \cdot 1 = a$$ for all $a$. In the same way, $\infty$ is the identity element for the min operator, because $$\min(a,\infty) = \min(\infty,a) = a$$ for all $a$. In other words, all numbers are smaller than infinity.
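This is exactly why $\infty$ is the natural initial value when folding `min` over a list, just as $0$ is for sums and $1$ is for products: an empty fold then returns the identity. A small Python illustration:

```python
from functools import reduce

# min(a, inf) == a for every a, so inf acts as the identity element
assert min(42, float('inf')) == 42
assert min(float('inf'), -7) == -7

# Consequently inf is the right initial value for folding min,
# just as 0 is for sum and 1 is for product:
assert reduce(min, [7, 3, 9], float('inf')) == 3
assert reduce(min, [], float('inf')) == float('inf')  # empty fold -> identity
```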
A linear subspace of projective space
No hyperplane $l(z)=\sum_j c_jz^j=0$ contains all your points $P_i$: substituting $P_i$ yields $l(P_i)=c_i$, so containment would force every coefficient $c_i$ to vanish.
How to write a math notation for a set generated by a relation?
Possibly you could use state notation as I proposed at this question, where your state = time.
Determining of a family of functions is a normal family
Hint: $f(z) = \int_C f'(w)\; dw$ where $C$ starts at $0$ and ends at $z$. BTW you only asserted (and didn't prove) equicontinuity at $0$, not everywhere in the disk.
probability with percentages. How many tries to get 90% success.
$$\dbinom{30}{14}(0.6)^{14}(0.4)^{16} \approx 0.0489$$ You are correct. $$1-0.4^k>0.9 \Longrightarrow k\ln 0.4< \ln 0.1 \Longrightarrow k>2.51$$ You are correct again.
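Both computations are easy to confirm numerically; a Python sketch (`math.comb` computes the binomial coefficient):

```python
from math import comb, log

# P(exactly 14 successes in 30 trials with p = 0.6)
p14 = comb(30, 14) * 0.6**14 * 0.4**16   # about 0.0489

# Smallest k with 1 - 0.4**k > 0.9: k > ln(0.1)/ln(0.4) ~ 2.51, so k = 3
k_min = log(0.1) / log(0.4)
```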
How to extract factor when expression is with a power
$f(x)=8x^{2}(x-3/2)^{3}$ and not $2x^{2}(x-3/2)^{3}$. This is because $2(x-3/2)=2x-3$, so when taking the third power the factor $2$ becomes $2^3=8$.
Define inl : $σ → σ ∨ τ$
It seems to me that the requested derivation is:

1) $\sigma$ --- assumed [1]

2) $\sigma \to \alpha$ --- assumed [2]

3) $\alpha$ --- from 1) and 2) by $\to$-E

4) $(\tau \to \alpha) \to \alpha$ --- from 3) by $\to$-I

5) $(\sigma \to \alpha) \to (\tau \to \alpha) \to \alpha$ --- from 2) and 4) by $\to$-I, discharging [2]

6) $\forall \alpha [(\sigma \to \alpha) \to (\tau \to \alpha) \to \alpha]$ --- from 5) by $\forall$-I : $\alpha \notin FV(\sigma)$

7) $\sigma \lor \tau$ --- definition of $\lor$

8) $\sigma \to \sigma \lor \tau$ --- from 1) and 7) by $\to$-I, discharging [1].
Area between 3 points in $\mathbb{R^3}$ space (problems with understanding why solution is wrong)
In Solution a, for the $j$-th component of the cross product of $u$ with $v$, you evaluated $-(-28-20)$ as $8$; it should be $48$.
Is there anything wrong with this use of the axiom of choice?
If you really can choose the functions $f_n$ so that they are compatible, then the argument is legitimate. If you can actually define them so that they are compatible, rather than merely show that they exist, you don’t need any choice; otherwise, you probably need the axiom of dependent choice. The hypothesis that $A$ is countably infinite already ensures that it has an enumeration $A=\{a_n:n\in\Bbb N\}$, ordinary recursion (possibly using dependent choice) gives you the sequence of functions $f_n$ for $n\in\Bbb N$, and the desired function $h$ is simply $\bigcup_{n\in\Bbb N}f_n$. However, it is crucial that you be able to define the functions $f_n$ so that $f_n\upharpoonright A_m=f_m$ whenever $m\le n$.
Why is an admissible function from a non-compact surface non-surjective?
We also have, at this point, that $g$ is injective; in particular, $g$ is not constant. A non-constant holomorphic mapping is an open mapping (this holds when the codomain is any Riemann surface); hence $g$ is a homeomorphism between $M$ and $g(M)$.
What is this dot symbol in a vector notation?
It's a wildcard character: $M_{\cdot j}$ means the $j$-th column of $M$, while $M_{j\cdot}$ means the $j$-th row. In the linear algebra literature, it is much more common to use an asterisk (i.e. to write $M_{\ast j}$ and $M_{j\ast}$) than a dot for rows and columns.
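The same convention shows up in code: in NumPy, for instance, the column $M_{\cdot j}$ is `M[:, j]` and the row $M_{j\cdot}$ is `M[j, :]`. With plain Python lists (0-based indices here, unlike the usual 1-based math notation):

```python
M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

j = 1
column_j = [row[j] for row in M]  # M_{.j}: the j-th column -> [2, 5, 8]
row_j = M[j]                      # M_{j.}: the j-th row    -> [4, 5, 6]
```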
Smoothness of total variation norm with weight
Given $\alpha \ge 0$, you have (look up the definition of mixed norms) $$\alpha\|u\|_{TV} = \alpha \|Du\|_{2,1} = \|\alpha D u \|_{2,1}.$$ So, to smooth $g := \alpha \|.\|_{TV}$, simply replace the linear operator $D$ by the scaled version $\alpha D$. In particular, you don't need convex conjugates, etc.
Find $\det(I + a b^\top)$
Using Weinstein-Aronszajn, $$\det \left( {\rm I}_4 + {\rm a} {\rm b}^\top \right) = 1 + {\rm b}^\top {\rm a} = \color{blue}{1 + k_1 + 2 k_2 + 3 k_3 + 4 k_4}$$
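The identity is easy to verify on concrete vectors. The sketch below (plain Python; the sample values for $k_1,\dots,k_4$ are my choice) computes $\det(I_4+ab^\top)$ by cofactor expansion and compares it with $1+b^\top a$:

```python
def det(M):
    """Determinant by Laplace expansion along the first row (fine for a 4x4)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

a = [2, 3, 5, 7]   # sample k_1..k_4 (my choice)
b = [1, 2, 3, 4]
M = [[(1 if i == j else 0) + a[i] * b[j] for j in range(4)] for i in range(4)]

lhs = det(M)                                    # det(I_4 + a b^T)
rhs = 1 + sum(bi * ai for bi, ai in zip(b, a))  # 1 + b^T a
```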
Singularity of Morphism and Its Extension
1) As you said, by the Jacobian criterion $\varphi^{-1}(0)$ is indeed the unique singular fiber, and the singularity type is two tangent parabolas, $(x-y^2)(x+y^2) = 0$. 2) The fiber has homogeneous equation $x^2z^2 = y^4 + az^4$. Its intersection with the line at infinity is given by $z=0$ together with this equation, i.e. by $y^4=0$: this is the point $(1,0,0)$ (with multiplicity $4$). 3) We take $$\psi(x,y,z) = \frac{x^2z^2 - y^4}{z^4}$$ and the domain of $\psi$ is exactly where $z^4 \neq 0$ or $x^2z^2 - y^4 \neq 0$. This means that the domain is $\Bbb P^2 \backslash \{(1,0,0)\}$. This is not surprising, since by the previous question $(1,0,0)$ lies in the closure of every fiber! Finally, the fiber over $\infty$ is simply given by the line $z=0$, minus the point $[1:0:0]$. For completeness, the other projective fibers over $[a:1] \in \Bbb P^1$ are given by $\{ (x,y,1) : x^2 = y^4 + a \}$. Notice that this does not include the point $(1,0,0)$, by what we said, even though that point is in the closure of all the fibers. A final remark: it is possible to find a surface $X$ and a morphism $f : X \to \Bbb P^2$ so that $\psi$ becomes defined everywhere; such a morphism is called a blow-up, and making a rational map defined everywhere is typically why algebraic geometers introduce blow-ups.
diameter on a compact metric space
For the sake of having an answer: Since $F$ is closed in $X$, it is compact. Then $F \times F$ is compact, too, and $\rho: F \times F \to [0,\infty)$ is a continuous function on a compact set, so it attains its maximum. In other words, there is a point $(x_0,y_0) \in F \times F$ such that $\rho(x_0,y_0) = \max{\{\rho(x,y)\,:\,(x,y) \in F \times F\}} = \operatorname{diam}{F}$ which is precisely the statement you ask about. Remarks. We didn't use that $F$ has finite diameter, because it follows from our argument. It would be enough to assume that $F$ is a compact subset of $X$ instead of assuming compactness of $X$ itself. Compactness is necessary: equip a countable set $X = \{x_n\}_{n \geq 2}$ with the metric given by $d(x_n,x_m) = \max{\{1-1/n,1-1/m\}}$ if $n \neq m$. Then $\operatorname{diam}{X} = 1$ but no two points are at distance $1$ to each other.
Proof related to breadth first search
The level of a vertex in such a tree is the number of edges in a shortest path to the root. Any two vertices in $C$ are at most $\lfloor \tfrac{n}{2} \rfloor$ edges apart. So traveling from one vertex in $C$ via another one in $C$ to the root...
Definition of prime element in a Euclidean ring does not make sense. Herstein - Topics in Algebra
Actually, yes: there are no prime elements in $\mathbb Q$. Every element of $\mathbb Q$ is divisible by every nonzero element of $\mathbb Q$. Think about any element of $\mathbb Q$. For example, $3 \in \mathbb Q$ is not prime because $\frac{1}{3} \times 9 = 3$. In general, an element $\alpha \in \mathbb Q$ has the form $\frac{a}{b}$ where $a,b$ are integers and $b \neq 0$. Now $\frac{1}{b} \in \mathbb Q$ and $a \in \mathbb Q$ and $\frac{1}{b} \times a = \frac{a}{b}$, so you can't have a prime element in $\mathbb Q$. Moreover, the definition you quote is actually the definition of an irreducible element. Sometimes irreducibility implies primality; in a Euclidean domain it does, because a Euclidean domain is also a unique factorization domain. In a general integral domain, however, irreducible does not imply prime. The definition of a prime element is the following: $p$ is said to be a prime element if $p$ is a nonzero non-unit and whenever $p \mid ab$, then $p \mid a$ or $p \mid b$. To give you an example of an irreducible element which is not prime, consider the integral domain $\mathbb{Z}[\sqrt{-5}]$. Here $2$ is an irreducible element that divides the product $(1+\sqrt{-5})(1-\sqrt{-5}) = 6$; however, $2$ does not divide either factor, and hence it is not prime.
"optimal" big-O order for $\cos{x}-1+x^2 / 2$
Yes, that is all correct. You're correct that $24=4!$ is not a coincidence here. From Taylor's theorem you can derive a power series expansion for cosine, $$\cos(x)=1-\frac{x^2}{2}+\frac{x^4}{4!}-\frac{x^6}{6!}+\cdots,$$ so that $$\cos(x)-1+\frac{x^2}{2}=\frac{x^4}{4!}-\frac{x^6}{6!}+\cdots,$$ and $$\frac{\cos(x)-1+\frac{x^2}{2}}{x^4}=\frac{1}{4!}-\frac{x^2}{6!}+\frac{x^4}{8!}-\cdots.$$ The series on the right-hand side of the last equation is defined and continuous everywhere, and goes to $\frac{1}{4!}$ at $0$.
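You can sanity-check the limit numerically (Python; avoid taking $x$ extremely small, or catastrophic cancellation in `cos(x) - 1 + x*x/2` dominates the result):

```python
from math import cos

def g(x):
    """(cos x - 1 + x^2/2) / x^4, which tends to 1/24 as x -> 0."""
    return (cos(x) - 1 + x * x / 2) / x ** 4

# g(0.1) differs from 1/24 by roughly 0.1**2 / 720, i.e. about 1.4e-5
```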
algebraic definition of vector product
You turned the product of $|A|$ and $|B|$ into a sum of the square roots. Restoring the product, proceed as follows: there is a number $k$ such that $y_1=kx_1$ and $y_2=kx_2$, since $x_1/y_1=x_2/y_2$. Now your big expression is equal to $\frac{x_1x_2+k^2x_1x_2}{\sqrt{x_1^2+k^2x_1^2}\sqrt{x_2^2+k^2x_2^2}}=\frac{(1+k^2)x_1x_2}{|x_1|\sqrt{1+k^2}|x_2|\sqrt{1+k^2}}=\frac{x_1x_2}{|x_1x_2|}$. Notice that you get $1$ or $-1$, which is normal, because your assumption $x_1/y_1=x_2/y_2$ does not imply that the two vectors have the same direction: they can be opposite.
Assumption about the form of solutions to a recurrence relation
One clean explanation (and a uniform way to solve such recurrences) is to use generating functions. Say you have: $\begin{align} a_{n + k} = c_{k - 1} a_{n + k - 1} + \dotsb + c_0 a_n \end{align}$ Define the generating function $A(z) = \sum_{n \ge 0} a_n z^n$, multiply the recurrence by $z^n$ and sum over $n \ge 0$, noting that e.g.: $\begin{align} \sum_{n \ge 0} a_{n + s} z^n = \frac{A(z) - a_0 - a_1 z - \dotsb - a_{s - 1} z^{s - 1}}{z^s} \end{align}$ to get: $\begin{align} \frac{A(z) - a_0 - \dotsb - a_{k - 1} z^{k - 1}}{z^k} = c_{k - 1} \frac{A(z) - a_0 - \dotsb - a_{k - 2} z^{k - 2}}{z^{k - 1}} + c_{k - 2} \frac{A(z) - a_0 - \dotsb - a_{k - 3} z^{k - 3}}{z^{k - 2}} + \dotsb + c_0 A(z) \end{align}$ Multiply through by $z^k$ and collect terms to get: $\begin{align} A(z) (1 - c_{k - 1} z - \dotsb - c_0 z^k) = b_{k - 1} z^{k - 1} + \dotsb + b_0 \end{align}$ Here the $b_i$ are messy combinations of the initial values $a_0$ through $a_{k - 1}$. The critical point is that: $\begin{align} A(z) = \frac{b_{k - 1} z^{k - 1} + \dotsb + b_0} {1 - c_{k - 1} z - \dotsb - c_0 z^k} \end{align}$ This can be split into partial fractions. By that technique you know that a zero $1/r$ of multiplicity $m$ of the denominator gives rise to terms: $\begin{align} \frac{A_m}{(1 - r z)^m} + \dotsb + \frac{A_1}{1 - r z} \end{align}$ Now, by the generalized binomial theorem, for $s \in \mathbb{N}$: $\begin{align} (1 - r z)^{-s} &= \sum_{n \ge 0} (-1)^n \binom{-s}{n} r^n z^n \\ &= \sum_{n \ge 0} \binom{n + s - 1}{s - 1} r^n z^n \end{align}$ Noting that $\binom{n + s - 1}{s - 1}$ is a polynomial of degree $s - 1$ in $n$, you see that a zero $1/r$ of multiplicity $m$ gives rise to a set of terms that add up to $p(n) r^n$, with $p(n)$ a polynomial of degree (up to) $m - 1$ in $n$ ("up to" as $1 - r z$ might be a factor of the numerator).
In case you have complex zeros, they come in conjugate pairs $r$, $\overline{r}$, and the coefficients of the corresponding terms are also conjugates (otherwise the result wouldn't be real). Thus you get a bunch of terms like: $\begin{align} \alpha n^s r^n + \overline{\alpha} n^s \overline{r}^n = 2 \Re\left(\alpha n^s r^n\right) \end{align}$ These can be put in trigonometric form by writing the complex values as exponentials.
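For a concrete instance with simple roots, take the Fibonacci recurrence $a_{n+2}=a_{n+1}+a_n$: here $A(z)=z/(1-z-z^2)$, the denominator has zeros $1/r$ for $r=(1\pm\sqrt5)/2$, and the partial-fraction step above yields Binet's formula $a_n=(\varphi^n-\psi^n)/\sqrt5$. A quick numerical check in Python:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2   # the two roots of x^2 = x + 1
psi = (1 - sqrt(5)) / 2

def binet(n):
    """Closed form predicted by the partial-fraction expansion."""
    return (phi ** n - psi ** n) / sqrt(5)

# Build the sequence directly from the recurrence for comparison
a = [0, 1]
for _ in range(2, 25):
    a.append(a[-1] + a[-2])
```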