The minimal number of permutation matrices
All the permutations of $n$ elements can be generated by these two permutations (I use cycle notation for readability): the transposition $s = (1,2)$ and the cycle $r = (1,2,\ldots, n-1,n)$. First of all I take as known that each permutation is a composition of transpositions (also called exchanges or 2-cycles) $(a_i, a_j)$. The first step is to construct the $n$ transpositions of the form $(i, i+1)$ with $1 \leq i < n$, together with $(1,n)$. One can easily verify that they are obtained as $r^{-j} \circ s \circ r^j$ with $0 \leq j < n$. The next step is to generate the remaining transpositions $(i,j)$. These can be obtained by the "palindromic" compositions: $(i, i+1) \circ (i+1, i+2) \circ \ldots \circ (j-2, j-1) \circ (j-1,j) \circ (j-2, j-1) \circ \ldots \circ (i+1, i+2) \circ (i, i+1)$. This proves that every permutation can be written as a word using only $r$ and $s$. Note that this construction is only of theoretical importance. In practice, if one wants to construct groups generated by some permutations, it is more practical to use algorithms like the Schreier-Sims algorithm.
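Since the construction is completely explicit, it is easy to check by machine. A minimal sketch in Python (my own illustration, using 0-based tuples for permutations) verifies that $s$ and $r$ generate all of $S_n$ for small $n$:

```python
# Check that s = (1,2) and r = (1,2,...,n) generate the full symmetric group.
# Permutations are tuples p with p[i] = image of i (0-based indexing).
from itertools import permutations

def compose(p, q):                 # (p . q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

n = 5
identity = tuple(range(n))
s = (1, 0) + tuple(range(2, n))    # the transposition (1,2)
r = tuple(range(1, n)) + (0,)      # the n-cycle (1,2,...,n)

generated, frontier = {identity}, {identity}
while frontier:                    # closure of {identity} under s and r
    frontier = {compose(g, p) for p in frontier for g in (s, r)} - generated
    generated |= frontier

assert generated == set(permutations(range(n)))
print(len(generated))              # 120 = 5!
```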
Does a partial k-tree graph must be planar?
Each (finite simple) $2$-tree is planar. We can inductively construct a straight-line plane drawing, placing each new vertex sufficiently close to the midpoint of the edge to whose endpoints it is adjacent. On the other hand, the non-planar graph $K_{3,3}$ is a partial $3$-tree.
Proof of a theorem of Galois in Endliche Gruppen I by Huppert
Ok, I found a solution. It works as follows. Suppose $V$ and $V'$ are complements of $N$ in $G$ and let $Q = K \cap V$, resp. $Q' = K \cap V'$. Since $Q$ and $Q'$ are Sylow $q$-subgroups of $K$ there is some $g \in G$ such that $(Q')^g = Q$. Note that $K$ is normal in $G$. Hence $Q = K \cap V = K \cap (V')^g$. And moreover, $Q \unlhd V$ and $Q \unlhd (V')^g$, again since $K \unlhd G$. Hence $(V')^g \leq N_G(Q) = V$ and so $(V')^g = V$ since they have the same order.
$F^n$ as a direct sum of cyclic submodules
The key result is that we have an isomorphism of $F[X]$-modules $F^n\simeq F[X]^n/(XI_n-A)F[X]^n$ (the quotient by the image of $XI_n-A$). I don't have much time now, so I leave you to find some references for now on the web or in the standard books. If I have time tonight (French time), I will edit my answer and provide a full proof. Now, you just apply the standard procedure: find the Smith normal form of $XI_n-A$: $$\begin{pmatrix} I_{n-r} & & & \cr & P_1 & & \cr & & \ddots & \cr & & & P_r\end{pmatrix},$$ where $P_1,\ldots, P_r\in F[X]$ are monic of degree $\geq 1$ and $P_1\mid P_2\mid\cdots\mid P_r.$ Then $F^n\simeq F[X]/(P_1)\times\cdots \times F[X]/(P_r)$.
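In the meantime, the procedure can be tried out by machine. A hedged sketch with SymPy (assuming your SymPy version's `smith_normal_form` accepts a polynomial domain such as `QQ[x]`; the matrix $A$ below is my own toy example):

```python
# Sketch: Smith normal form of xI - A over Q[x] for a 2x2 example.
from sympy import Matrix, symbols, QQ, eye
from sympy.matrices.normalforms import smith_normal_form

x = symbols('x')
A = Matrix([[0, -1], [1, 0]])       # rotation matrix, characteristic poly x^2 + 1
M = x * eye(2) - A                  # the characteristic matrix XI_n - A
print(smith_normal_form(M, domain=QQ[x]))
# Expected: diag(1, x**2 + 1), so F^2 is the cyclic module F[x]/(x^2 + 1).
```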
Proving that $\sqrt{2}$ is irrational with a math level of a middle school student?
Geometry was taught after elementary algebra at my middle school, and focused heavily on construction with a compass and straight edge, and so we would all have been able to understand this proof by Apostol (although perhaps only after a couple readings of it), which I will quote here: This note presents a remarkably simple proof of the irrationality of $\sqrt{2}$ that is a variation of the classical Greek geometric proof. By the Pythagorean theorem, an isosceles right triangle of edge-length 1 has hypotenuse of length $\sqrt{2}$. If $\sqrt{2}$ is rational, some positive integer multiple of this triangle must have three sides with integer lengths, and hence there must be a smallest isosceles right triangle with this property. But inside any isosceles right triangle whose three sides have integer lengths we can always construct a smaller one with the same property, as shown below. Therefore $\sqrt{2}$ cannot be rational. Construction. A circular arc with center at the uppermost vertex and radius equal to the vertical leg of the triangle intersects the hypotenuse at a point, from which a perpendicular to the hypotenuse is drawn to the horizontal leg. Each line segment in the diagram has integer length, and the three segments with double tick marks have equal lengths. (Two of them are tangents to the circle from the same point.) Therefore the smaller isosceles right triangle with hypotenuse on the horizontal base also has integer sides. -Tom M. Apostol, Irrationality of The Square Root of Two -- A Geometric Proof, American Mathematical Monthly 107, No. 9 (Nov., 2000), pp. 841-842.
First order theory - define a domain with an even number of elements
HINT: Write a sentence that says that $R$ satisfies the following condition: $R$ is symmetric, and for each $x$ there is a unique $y\ne x$ such that $R(x,y)$. (And of course explain why this does what you want.)
Composition of functions is continuous?
$(3)$: Since $f$ is increasing it is injective and hence has a left inverse. Moreover $f$ is given to be continuous, so the left inverse is also continuous, and hence $f^{-1}\circ (f\circ g)=g$ is continuous, which proves $(3)$. $(1)$: Take $f(x)=\begin{cases} x & x\in [0,\frac{1}{2}]\\ \frac{x}{2} & x\in (\frac{1}{2},1]\end{cases}$ and $g(x)=x$; then $f\circ g$ is discontinuous at $\frac{1}{2}$. $(2)$: Take $f(x)=x^2$ and $g(x)=\begin{cases} 1 & x\in \Bbb Q\cap[0,1]\\ 0 & x\in \Bbb Q^c\cap [0,1]\end{cases}$; then $f\circ g$ is discontinuous.
Sufficient conditions for applying Taylor theorem
Twice differentiable at $a$ is sufficient. Have a look at Taylor's theorem. By the way, you even have $$ f(x)=f(a)+f'(a)(x-a)+\frac{1}{2}f''(a)(x-a)^2+o(|x-a|^2)$$ with the square in the $o$.
What does it mean for a function to be Riemann integrable?
The Riemann integral is defined in terms of Riemann sums. Consider this image from the Wikipedia page: We approximate the area under the function as a sum of rectangles. We can see that in this case, the approximation gets better and better as the width of the rectangles gets smaller. In fact, the sum of the areas of the rectangles converges to a number, and this number is defined to be the Riemann integral of the function. Note however that we can draw these rectangles in a number of ways, as shown below (from this webpage). If, no matter how we draw the rectangles, the sum of their areas converges to some number $F$ as the width of the rectangles approaches zero, we say that the function is Riemann integrable and define $F$ as the Riemann integral of the function. For some functions the area will not converge, the canonical example being the indicator function for the rationals $\mathbb{1}_{\mathbb{Q}}(x)$, which is $1$ if $x$ is rational and $0$ otherwise.
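As a small illustration (my own sketch), here are left-endpoint Riemann sums for $f(x)=x^2$ on $[0,1]$ converging to the exact value $1/3$ as the rectangles narrow:

```python
# Left-endpoint Riemann sums of f(x) = x^2 on [0, 1]; the exact integral is 1/3.
def riemann_sum(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
# No such limit exists for the indicator function of the rationals.
```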
Given a directional derivative to find a point
Using the chain rule: $$\frac{ \partial}{ \partial x} f(x,y,x^2 + y) = f_x(x,y,x^2 + y) \cdot \frac{ \partial}{ \partial x} (x) + f_y (x,y, x^2 + y) \cdot \frac{ \partial }{ \partial x}(y) + f_z(x,y,x^2 + y) \cdot \frac{\partial }{\partial x} (x^2 + y) = f_x(x,y,x^2 + y) + 2x \cdot f_z(x,y,x^2 + y).$$ Since $f(x,y,x^2+y)=3x-y$, the left-hand side equals $\frac{\partial}{\partial x}(3x -y) = 3$, so $$f_x(x,y,x^2 +y) = 3- 2x\, f_z(x,y,x^2+y).$$ Notice that $A$ fits perfectly the $(x,y, x^2 + y)$ pattern ($0^2 + 12 = 12 = A_z$). Thus: $$ f_x(0,12,12) = 3- 2 \cdot 0 \cdot f_z(0,12,12) = 3.$$ So we have one of the three components of the gradient at the point $A$. Using the chain rule again, but now differentiating with respect to $y$: $$ \frac{\partial}{\partial y}f(x,y,x^2 + y) = f_x(x,y,x^2 + y) \cdot (x)_y + f_y(x,y,x^2 + y) \cdot (y)_y + f_z(x,y,x^2 + y) \cdot (x^2 +y)_y \\= f_y(x, y,x^2 +y) + f_z(x,y,x^2 + y) = (3x -y)_y = -1 \\ \Rightarrow \color{red}{f_y(0,12,12) + f_z(0,12,12) = -1} $$ Now using the directional derivative: first we need to normalize the direction vector, which gives $$ \frac{1}{\|(1,0,1)\|} \cdot (1,0,1) = \left( \frac{1}{\sqrt{2}} , 0, \frac{1}{\sqrt{2}}\right).$$ Since the function is differentiable, you don't need to compute the directional derivative from the definition; we can just use the known formula (whose value is $3$ according to the question): $$ \nabla f(0,12,12) \cdot \left( \frac{1}{\sqrt{2}} , 0, \frac{1}{\sqrt{2}}\right) = 3.$$ The gradient at the point $A$ is the vector of all partial derivatives at that point: $$ \nabla f(0,12,12) = \left ( f_x(0,12,12), f_y(0,12,12) , f_z(0,12,12) \right ). $$ Expanding the dot product gives us: $$ f_x(0,12,12) \cdot \frac{1}{\sqrt{2}} + 0 \cdot f_y(0,12,12) + f_z(0,12,12) \cdot \frac{1}{ \sqrt{2}} = 3.$$ We already know that $f_x(0,12,12)=3$. Substituting back gives us: $$f_z(0,12,12) \cdot \frac{1}{\sqrt{2}} = 3 - \frac{3}{\sqrt{2}} \\ f_z(0,12,12) = 3 \sqrt{2} - 3.$$ Recalling the red equation, we can substitute $f_z$ back and get: $$ f_y(0,12,12) = -1 -f_z(0,12,12) = -1 -3 \sqrt{2} + 3 = 2 - 3 \sqrt{2}. $$ Thus we can finally say that the answer is: $$ \nabla f(0,12,12) = (3, 2-3\sqrt{2}, 3 \sqrt{2} -3 ). $$
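A quick numerical sanity check of the final answer (plain arithmetic; no knowledge of $f$ itself is needed):

```python
# Verify the claimed gradient reproduces the directional derivative 3
# and the "red equation" f_y + f_z = -1.
import math

grad = (3.0, 2 - 3 * math.sqrt(2), 3 * math.sqrt(2) - 3)  # claimed grad f(0,12,12)
u = (1 / math.sqrt(2), 0.0, 1 / math.sqrt(2))             # normalized (1,0,1)

print(sum(g * c for g, c in zip(grad, u)))  # 3.0, the given directional derivative
print(grad[1] + grad[2])                    # -1.0, matching the red equation
```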
Constructing an explicit isomorphism $\widehat{\Phi}^{-1} : \operatorname{End}(V) \to V \otimes V^*$
Let $f:V\to V$ be a linear map. Let $(v_1,\dots,v_n)$ be a basis of $V$ and let $(\phi_1,\dots,\phi_n)$ be the corresponding dual basis of $V^*$. Consider the element $u=\sum_{i=1}^nf(v_i)\otimes\phi_i$: its image under your map $\Phi$ is equal to $f$. Indeed, for all $v\in V$ we have that $$\Phi(u)(v) = \sum_{i=1}^n\Phi(f(v_i)\otimes\phi_i)(v)=\sum_{i=1}^n\phi_i(v)f(v_i)=f\left(\sum_{i=1}^n\phi_i(v)v_i\right)=f(v).$$
Partial derivative proof of complex numbers
$z=x+iy$ and $\overline z=x-iy$ Let $f=f(z,\overline z)$ Now ${\partial f\over{\partial x}}={\partial f\over{\partial z}} {\partial z\over{\partial{ x}}}+{\partial f\over{\partial{\overline z}}}{\partial {\overline z}\over{\partial{ x}}}={\partial f\over{\partial z}}(1)+{\partial f\over{\partial{\overline z}}}(1)=({\partial \over{\partial z}}+{\partial \over{\partial{\overline z}}})f$ $\implies{\partial\over{\partial x}}\equiv {\partial\over{\partial z}}+{\partial\over{\partial{\overline z}}}$ Similarly, ${\partial f\over{\partial y}}={\partial f\over{\partial z}} {\partial z\over{\partial{ y}}}+{\partial f\over{\partial{\overline z}}}{\partial {\overline z}\over{\partial{ y}}}={\partial f\over{\partial z}}(i)+{\partial f\over{\partial{\overline z}}}(-i)=i({\partial \over{\partial z}}-{\partial \over{\partial{\overline z}}})f$ $\implies{\partial\over{\partial y}}\equiv i({\partial\over{\partial z}}-{\partial\over{\partial{\overline z}}})$
What does inverse of a matrix that is the transpose of a matrix times itself mean in linear regression?
I suspect that you really meant to write $(A^TA)^{-1}$ instead of $(AA^T)^{-1}$. The former is the usual expression that appears in the least-squares approximation for a solution to $Ax=y$. The expression $A^TA$ is called the Gram matrix of $A$ and turns up in many contexts. Its entries are the pairwise dot products of the columns of $A$. It is invertible iff those columns are linearly independent. In this context, it turns up because we are essentially computing the orthogonal projection of $y$ onto the column space of $A$. (Why? Because $A\hat x=\hat y$ can only have a solution when $\hat y$ is in $A$’s column space.) The complete expression for this projection is $A(A^TA)^{-1}A^Ty$; without the leading $A$ factor, what you end up with is the coordinates of this projection relative to the $A$-basis. Comparing this to the orthogonal projection of $y$ onto a single vector $a$, namely ${aa^T\over a^Ta}y = a(a^Ta)^{-1}a^Ty$, we can see that the Gram matrix of $A$ plays an analogous role to the normalizing factor $a^Ta$ (which is just the dot product of $a$ with itself). When projecting onto a subspace that has dimension greater than one, though, there’s more to do than simply normalize the columns of $A$. We also need to deal with the “crosstalk” between basis vectors when they’re not orthogonal. (See this answer for more details of what happens when the basis vectors aren’t orthogonal.) The Gram matrix of $A$ encodes both the norms of the basis vectors and how projections onto them overlap. Inverting this matrix sorts all of this out—for me still somewhat magically, to be honest.
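A short sketch of this computation in NumPy (with made-up data, purely for illustration); note that solving the normal equations is preferable to forming the inverse explicitly:

```python
# Least squares via the Gram matrix A^T A, compared with numpy's lstsq.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))             # 20 observations, 3 regressors
y = rng.normal(size=20)

gram = A.T @ A                           # pairwise dot products of A's columns
beta = np.linalg.solve(gram, A.T @ y)    # solve (A^T A) beta = A^T y
beta_ref = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(beta, beta_ref))       # True

y_hat = A @ beta                         # orthogonal projection of y onto col(A)
```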
Show that $2-2e^{-|x|}\leq C|x|^{r}$ for some constant $C, r>0$.
If $|x| \ge 1$ then since $e^{-|x|} \ge 0$ we have $$2-2e^{-|x|} \le 2 \le 2|x|.$$ On the other hand, if $|x| \le 1$, then $e^{-|x|} \ge 1-|x|$ so $$2-2e^{-|x|} \le 2-2(1-|x|) = 2|x|.$$ Therefore we can take $C = 2$ and $r=1$.
How to intuitively see that $e^{i \pi}+1=0$ is true?
$e^{i \theta} = \cos \theta + i \sin \theta$ Thus, on the complex plane, it makes the equation of a circle, with $\theta$ mapping to a point on the circle with angle $\theta$. With $\theta = \pi$, that maps to the point of the circle on the negative real axis, namely the point $-1$.
If $f$ is holomorphic and $\left| f \right|$ is constant then $f$ is constant
The implication "$u_x=u_y\equiv 0 \implies u(x,y)$ is constant" only works if $D$ is connected! Example: let $D_1$ and $D_2$ be open sets with $D_1\cap D_2 = \emptyset$ and $D:= D_1 \cup D_2$. Then define $f:D \to \mathbb C$ by $f(z)=1$ if $z \in D_1$ and $f(z)=-1$ if $z \in D_2$. Then $f$ is holomorphic on $D$, $|f|$ is constant on $D$, but $f$ is not constant on $D$. So your proof is correct if $D$ is a region (open and connected).
$X_1$ is a set of numbers which are neither prime nor composite. $X_2$ is a set of numbers from 1 to 40 that are multiples of 10. Find ($X_1\cup X_2$)
I believe that this is simply a mistake - someone printed $0$ instead of $1$ in the answers. And then your solution is correct. (But this explanation becomes more improbable if the answer mentions $0$ more than once. Then I believe it's an even bigger mistake by the author - in thinking instead of typing.) The number $1$ is usually excluded from the primes because of the uniqueness of natural number factorization ("Every number can be expressed as a product of primes in only one way, if order is unimportant") - because taking $1$ as a prime would allow many different factorizations like $2=1\cdot 2=1\cdot 1\cdot 2=\ldots$ Also the context - primes and composites - usually doesn't include the number $0$ (except perhaps as a reminder).
Mathematical Induction (power of four exceeds multiple of three by one)
This just reflects the simple fact that $4\cdot x=3\cdot x+x$.
Different results for row reduction in Matlab
This is already documented in MATLAB: "Roundoff errors may cause this algorithm to compute a different value for the rank than rank, orth and null." You need to type format long e. After that you can see the difference if you execute N-eye(2), resulting in

```
N-eye(2)

ans =

  -5.000000000000004e-002   4.500000000000000e-001
   5.000000000000000e-002  -4.500000000000000e-001
```

Here, the trouble is already visible in the (1,1) element of the matrix. But also

```
[1 1]*(N-eye(2))

ans =

  -4.163336342344337e-017   5.551115123125783e-017
```

gives you the error between seemingly identical elements. The reason why you get correct results with an additional zero column is (my personal view) due to the term in the tolerance computation for rref given by (max(size(A))*eps*norm(A,inf)). Here, the first term is 3 instead of 2 and that should make the small difference between selecting a rank 1 and a rank 2 matrix.
Intersection of countably many vector spaces is non empty
It's actually false. Consider the vector space $V$ of polynomials with real coefficients, and define $$V_i=\{P\in V| P(j)=0,\ j=0,\cdots, i\}.$$ Any polynomial in $\bigcap_{i\in \Bbb N} V_i$ must vanish on all natural numbers, thus the intersection is trivial.
Is a bijection mapping connected sets to connected sets a homeomorphism?
Using the converse you mentioned, it is clear that if $f$ or $f^{-1}$ fails to map a connected set to another connected set, $f$ cannot be a homeomorphism. Thus, it suffices to show that Theorem: Fix $n > 1$. Let $f:\mathbb{R}^n \to \mathbb{R}^n$ be a bijection, such that $f$ maps connected sets to connected sets, and $f^{-1}$ maps connected sets to connected sets. Then both $f$ and $f^{-1}$ are continuous. This is a corollary of Theorem 1 in a paper of Tanaka. I reproduce the proof below. Proof: since the hypothesis is symmetric in $f$ and $f^{-1}$, it suffices to prove that $f$ is continuous. We proceed by contradiction. Suppose $p\in \mathbb{R}^n$ is such that $f$ is discontinuous at $p$. Then there exists a sequence of points $(p_n)$ such that $p_n \to p$ and $f(p_n) \not\to f(p)$; without loss of generality we assume that $f(p) = 0$. Thus, up to a subsequence we can assume that there exists $\epsilon > 0$ such that $f(p_n) \in B_\epsilon^c = \mathbb{R}^n \setminus B_\epsilon$. Clearly $\{0\} \cup B_\epsilon^c$ has two connected components. On the other hand, $f^{-1}( \{0\} \cup B_\epsilon^c)$ is connected: by assumption $f^{-1}(B_\epsilon^c)$ is connected since $B_\epsilon^c$ is connected, and we have that every open neighborhood of $p = f^{-1}(0)$ intersects $f^{-1}(B_\epsilon^c)$. Thus we have forced a contradiction. Remark: In the case $n = 1$, $B_\epsilon^c$ is not connected. However, it has two connected components $B_\epsilon^{c+}$ and $B_\epsilon^{c-}$. The proof can be carried through if we replace every instance of $B_\epsilon^c$ with either $B_\epsilon^{c+}$ or $B_\epsilon^{c-}$: at least one of the two must contain infinitely many $f(p_n)$.
Fastest way to determine whether a uni-variate integer polynomial is positive semi-definite or not.
In Maple, you could use the sturm function to determine the number of real roots. The function is positive semidefinite if the value at one point is positive and all real roots have even multiplicities.
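If Maple is not at hand, the same test can be sketched in SymPy (my own illustration; SymPy also exposes a `sturm` function for the Sturm sequence). A real univariate polynomial is positive semidefinite iff it is identically zero, or its leading coefficient is positive, its degree is even, and every real root has even multiplicity:

```python
# Positive semidefiniteness test for a univariate real polynomial.
from collections import Counter
from sympy import Poly, real_roots
from sympy.abc import x

def is_psd(expr):
    p = Poly(expr, x)
    if p.is_zero:
        return True
    mults = Counter(real_roots(p))       # real root -> multiplicity
    return (p.LC() > 0 and p.degree() % 2 == 0
            and all(m % 2 == 0 for m in mults.values()))

print(is_psd((x - 1)**2 * (x**2 + 1)))   # True
print(is_psd((x - 1) * (x - 2)))         # False: simple real roots
```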
What can we say about the function $|f(x)-f(y)|\leq |x-y|^c$ when $c\geq 1$
The function is constant when $c>1$: take $y=x+h$ with $h>0$, then $$ \lvert f(x+h)-f(x) \rvert \leqslant h^c, $$ and dividing both sides by $h$ gives $$ \left\lvert \frac{f(x+h)-f(x)}{h} \right\rvert \leqslant h^{c-1}, $$ which tends to $0$ as $h \to 0$ since $c>1$; hence $f'(x)=0$. But this is true for any $x$, and the only continuous functions with zero derivative everywhere are constant. (Note that $c>1$ is needed here: for $c=1$ the condition merely says $f$ is $1$-Lipschitz, as $f(x)=x$ shows.)
Nonsplitting short exact sequence
The short exact sequences $0\to\mathbb{Z}\to G\to\mathbb{Q}\to0$ are classified by the group $\operatorname{Ext}(\mathbb{Q},\mathbb{Z})$. Two exact sequences define the same element in $\operatorname{Ext}(\mathbb{Q},\mathbb{Z})$ if and only if there exists a homomorphism $G\to G'$ making the diagram $$\require{AMScd} \begin{CD} 0 @>>> \mathbb{Z} @>>> G @>>> \mathbb{Q} @>>> 0 \\ @. @| @VVV @| \\ 0 @>>> \mathbb{Z} @>>> G' @>>> \mathbb{Q} @>>> 0 \end{CD} $$ commutative. Now $\operatorname{Ext}(\mathbb{Q},\mathbb{Z})$ can be computed from an injective resolution of $\mathbb{Z}$, for instance $0\to\mathbb{Z}\to\mathbb{Q}\to\mathbb{Q}/\mathbb{Z}\to0$, applying the $\operatorname{Hom}(\mathbb{Q},-)$ functor and obtaining the long exact sequence \begin{multline} 0\to\operatorname{Hom}(\mathbb{Q},\mathbb{Z}) \to\operatorname{Hom}(\mathbb{Q},\mathbb{Q}) \to\operatorname{Hom}(\mathbb{Q},\mathbb{Q}/\mathbb{Z})\\ \to\operatorname{Ext}(\mathbb{Q},\mathbb{Z}) \to\operatorname{Ext}(\mathbb{Q},\mathbb{Q}) \to\operatorname{Ext}(\mathbb{Q},\mathbb{Q}/\mathbb{Z}) \to0 \end{multline} Since $\operatorname{Hom}(\mathbb{Q},\mathbb{Z})=0$, $\operatorname{Hom}(\mathbb{Q},\mathbb{Q})\cong\mathbb{Q}$ and $\operatorname{Ext}(\mathbb{Q},\mathbb{Q})=0$ (because $\mathbb{Q}$ is injective) this boils down to the exact sequence $$ 0\to\mathbb{Q} \to\operatorname{Hom}(\mathbb{Q},\mathbb{Q}/\mathbb{Z}) \to\operatorname{Ext}(\mathbb{Q},\mathbb{Z})\to0 $$ It's easy to see that each of these groups is in fact a vector space over $\mathbb{Q}$, but the middle group is huge. A way for seeing this is to consider again the exact sequence above and applying to it the functor $\operatorname{Hom}(-,\mathbb{Q}/\mathbb{Z})$, getting the exact sequence $$ 0\to\operatorname{Hom}(\mathbb{Q}/\mathbb{Z},\mathbb{Q}/\mathbb{Z}) \to\operatorname{Hom}(\mathbb{Q},\mathbb{Q}/\mathbb{Z}) \to\operatorname{Hom}(\mathbb{Z},\mathbb{Q}/\mathbb{Z}) \to0 $$ that can be rewritten as $$ 0\to\prod_p \mathbb{Z}_p \to\operatorname{Hom}(\mathbb{Q},\mathbb{Q}/\mathbb{Z}) \to\mathbb{Q}/\mathbb{Z}\to0 $$ where the product runs over all prime numbers $p$ and $\mathbb{Z}_p$ is the ring of $p$-adic integers, which has the same cardinality as $\mathbb{R}$. So also $\operatorname{Ext}(\mathbb{Q},\mathbb{Z})$ has the same cardinality. Any book on homological algebra will have the relevant information. Note that in general the long exact sequence does not stop and one has to consider $\text{Ext}^k$, but in the case of abelian groups the higher order Ext groups vanish.
Second order PDE: what are the restrictions on boundary conditions?
Specifying the terminal value for the heat equation is difficult: one has backward uniqueness, that is, there is at most one solution of the (backward) heat equation. In order to have existence of solutions, the terminal value has to be very smooth, as the heat equation is smoothing. This makes the problem ill-posed.
Energy estimate for $u_{tt} - u_{xx} = 0$
The trick is to multiply your PDE with $u_t$ and integrate over both time and space: $$\int_0^t\int_0^l(u_{tt}u_t-u_{xx}u_t)dxdt=0$$ Now use the fact that $$u_{tt}u_t=\frac12(u_t^2)_t$$ and apply partial integration over the $x$ variable on the second term (use the boundary conditions to deal with the boundary terms appearing from the partial integration) to get a term of the form $$u_xu_{xt}=\frac12(u_x^2)_t,$$ now integrate over time and use the initial condition. If this is not explicit enough, I can add additional explanations, but you should be good. EDIT: The first integral can be rewritten using the above as: \begin{align}\int_0^t\int_0^lu_{tt}u_tdxdt& =\frac12\int_0^l\int_0^t(u_t^2)_tdtdx\\ &=\frac12\int_0^l(u_t^2(t,x)-u_t^2(0,x))dx \\ &=\frac12\int_0^l(u_t^2(t,x)-g^2(x))dx \end{align} The second integral can be dealt with as follows: by partial integration over $x$ one has \begin{align}-\int_0^t\int_0^lu_{xx}u_tdxdt&=-\int_0^tu_xu_t\bigg|_{x=0}^{x=l}dt+\int_0^t\int_0^lu_xu_{xt}dxdt \end{align} From the boundary conditions $u_x(t,0)=0=u_x(t,l)$ the first term is zero. The second can be rewritten using the second formula quoted above as \begin{align}\int_0^t\int_0^lu_xu_{xt}dxdt&=\frac12\int_0^l\int_0^t(u_x^2)_tdtdx\\ & =\frac12\int_0^l(u_x^2(t,x)-u_x^2(0,x))dx\\ &=\frac12\int_0^l(u_x^2(t,x)-f_x^2(x))dx, \end{align} putting these together gives the result, up to a typo ($g_x$ and $f$ have to be replaced with $g$ and $f_x$ respectively).
Coin Toss Experiment Problem
What you are looking for is a direct application of the binomial distribution: $$ P(X=k)={n\choose k}p^k(1-p)^{n-k} $$ In your case, $X$ would count the number of heads, $p=\frac{1}{3}$ is the probability of heads appearing, $n=5$ is the total number of tosses and $k=2$ is the number of heads that we would like to get.
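Plugging in the numbers (a quick check in Python):

```python
# P(X = 2) for n = 5 tosses with p = 1/3.
from math import comb

n, k, p = 5, 2, 1 / 3
print(comb(n, k) * p**k * (1 - p) ** (n - k))   # 10 * (1/9) * (8/27) = 80/243 ~ 0.329
```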
Converting a slope field into a vector field
Note that $f$ in the vector $\langle f(x,y), g(x,y)\rangle$ and the $f(x,y)$ in the linked article are not the same $f$. Your notation gives the vector field in terms of its components, which is a perfectly good way to describe it. The linked article uses $f(x,y)$ to express the slope of the vector field at a point $(x,y)$, which is also fine as long as the vector field is not vertical (in which case your $f(x,y)$ is zero, and the slope is infinite). If you are given a vector field $\langle f(x,y), g(x,y)\rangle$, then the slope field is the same field except that you say you want unit vectors. So the answer is simply $$\frac{1}{\sqrt{f(x,y)^2+g(x,y)^2}}\langle f(x,y), g(x,y)\rangle$$ unless $f(x,y) = g(x,y) = 0$, in which case the answer is $\langle 0, 0\rangle$. If you are given a vector field in the form $y' = f(x,y)$, you can write this as the vector field $\langle 1, f(x,y)\rangle$ (that is, a unit change in $x$ produces a change of $y'$ in $y$). Then by the method above, you get for the unit vector in this direction $$\frac{1}{\sqrt{f(x,y)^2+1}}\langle 1,f(x,y)\rangle$$ as you said. Note that when you are given a vector field in this form you do not have to worry about the slope being vertical since $y'$ is not defined in this case.
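A small helper mirroring the two formulas above (my own sketch, with hypothetical function names):

```python
# Turn a vector field <f, g>, or a slope function y' = f(x, y), into a unit field.
import math

def unit_field(f, g):
    def field(x, y):
        fx, gx = f(x, y), g(x, y)
        norm = math.hypot(fx, gx)
        return (0.0, 0.0) if norm == 0 else (fx / norm, gx / norm)
    return field

def slope_to_unit_field(fprime):         # y' = f(x, y)  ->  <1, f>/sqrt(1 + f^2)
    return unit_field(lambda x, y: 1.0, fprime)

print(slope_to_unit_field(lambda x, y: x * y)(2.0, 1.5))   # unit vector of slope 3
```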
Solving $0.0004<\frac{4,000,000}{d^2}<0.01$
$$0.0004<\dfrac{4,000,000}{d^2}<0.01$$ When we take the reciprocal of each part, the inequality signs reverse: $x<y\implies\dfrac1x>\dfrac1y$. $$\dfrac{1}{0.0004}>\dfrac{d^2}{4,000,000}>\dfrac{1}{0.01}$$ Multiply each term by $4,000,000$: $$\dfrac{4,000,000}{0.0004}>d^2>\dfrac{4,000,000}{0.01}$$ $$10^{10}>d^2>4\times10^8$$ $$4\times10^8<d^2<10^{10}$$ $$\sqrt{4\times10^8}<d<\sqrt{10^{10}}$$ Since $d$ is a distance it can't be negative, so we take only the positive value after the square root: $$2\times10^4<d<10^{5}$$ $$20000<d<100000$$
Variation of Peano Existence Theorem
With the help of user Anonymous, it was possible to solve the exercise and, with the change of hypothesis, arrive at the same conclusion as the stated theorem. Assuming $a\geq b/M$: by hypothesis, we have $|f|\leq M$, thus $|f|<2M$. In this way, the Cauchy problem $$x'=f(t,x) \qquad x(t_0)=x_0$$ satisfies the hypotheses of Peano's theorem. That is, there is a solution $x(t)$ on the interval $[t_0 -b/2M,t_0+b/2M]$. Likewise, the following Cauchy problems have a solution (see the comment below): $$y_+'=f(t,y_+)\qquad y_+(t_1)=x(t_1) \quad\text{where } t_1=t_0+b/2M$$ $$y_-'=f(t,y_-)\qquad y_-(t_2)=x(t_2) \quad\text{where } t_2=t_0-b/2M$$ Therefore $y_+$ is defined on $[t_0,t_0+b/M]$ and $y_-$ on $[t_0 - b/M, t_0]$. Define $$\varphi(t) = \left\{ \begin{array}{ll} y_-(t) & \mbox{if } t\in [t_0 - b/M, t_0 -b/2M) \\ x(t) & \mbox{if } t\in [t_0 -b/2M,t_0+b/2M] \\ y_+(t) & \mbox{if } t\in (t_0+b/2M, t_0 +b/M) \\ \end{array} \right.$$ Thus, $\varphi$ is a solution of $x'=f(t,x)$, $x(t_0)=x_0$ defined on the interval $I_\alpha$ where $\alpha=\min\{a,b/M\}=b/M$. Assuming $a< b/M$ (the remaining case): let $\epsilon=b/a-M>0$, so that $a=b/(M+\epsilon)$. Then $\vert f\vert<M+\epsilon$, so by Peano's theorem there is a solution $x(t)$ over $I_\alpha$ where $\alpha=\min\{a,b/(M+\epsilon)\}=a$. But we also know $a=\min\{a,b/M\}$ since $a<b/M$, so in fact $\alpha=\min\{a,b/M\}$. Comment: I believe that I may have made a mistake in assuming that the solution exists. In Peano's theorem, $t_0$ and $x_0$ are required to be in the domain of $f$. I was able to guarantee that $t_1$ is in $I_a$, because taking $a'=a-b/2M$ it is concluded that $I_{a'}\subset I_{a}$ and $\min\{a',b/2M\}=b/2M$. But the problem is ensuring that $y_+(t_1)=x(t_1) \in B_b$. Is it possible to correct this proof to obtain the conclusion of the theorem?
How to name a matrix with restricted input values?
Let's say you want your matrices to have their entries in a subset $X$ of $R$, where $(R, +, \cdot)$ is a ring (if you're not familiar with the concept of a ring, just think of it as a set with a sum $+$ and a multiplication $\cdot$, like the integers $\Bbb Z$). Then you can denote the set of all matrices whose entries are in $X$ by $\mathcal M(X)$. If you want to specify the size of the matrices you can denote it by $\mathcal M_{m\times n}(X)$, for some $m, n\in \Bbb N$. Now, to state that the entries $[m_{ij}]$ of a certain matrix $M$ respect some condition, you just let $X$ be the set of elements of $R$ that respect that condition and say $M\in \mathcal M(X)$.
Determing which term of the geometric sequence a number is equal to
$$4\cdot3^{n-1}=78732$$ or $$3^{n-1}=19683$$ (here was your mistake) or $$n=1+\frac{\ln19683}{\ln3},$$ which gives $n=10.$ Also, it's better to learn that $$3^9=19683.$$
Compute norm of vector using optimization over inner product
Let $x \neq 0$. The bound $y^{T}x \leq \|x\|_p$ is just Hölder's inequality. Put $y_i=\frac 1 A|x_i|^{p-1} \operatorname{sign}(x_i)$ where $A=\|x\|_p^{p/q}$ to see that the value $\|x\|_p$ is actually attained. The result is trivial when $x=0$.
Probability - Bag of Marbles Puzzle
Perhaps slightly rephrasing the question will help. Instead of asking what the probability is that the remaining marble from the same bag is also white, we may equivalently ask: if we draw a white marble, what is the probability that we drew the marble from bag A? From this slightly different phrasing of the question we arrive at the solution of $\frac{2}{3}$, since $2$ of the $3$ white marbles are in bag A. Equivalence with the Monty Hall Problem: In the Monty Hall problem, one might think that once the first door is opened the probability of choosing the right door becomes $\frac{1}{2}$ because there are only 2 unopened doors left, 1 with a goat and 1 with the car. Much like one might think that the probability the other marble is white in this problem is $\frac{1}{2}$, since there are white marbles in only two bags, and in one bag the marble's partner is black and in the other it is white. But to do so in both cases is a mistake, because it forgets the original probability that the door you pick has a goat behind it, or that the white marble comes from bag A. That is, when you first pick a door, the probability that you pick a door with a goat behind it is $\frac{2}{3}$. The key is to realize that this does not change when one of the other doors is opened. The probability that the door you picked has a goat behind it is still $\frac{2}{3}$. In the same way, the probability of drawing a white marble from bag A is $\frac{2}{3}$ before it is drawn, it is still $\frac{2}{3}$ after it has been drawn, and this implies that the probability the remaining marble is white is also $\frac{2}{3}$. Here is Sal Khan's exposition of the Monty Hall problem: https://www.youtube.com/watch?v=Xp6V_lO1ZKA.
Hyperbolic integration solving
Note that: $$\int \frac{dx}{a^2 - x^2} = \frac{\operatorname{arctanh}(x/a)}{a}$$ Define $a= m / \sqrt{\lambda},\ b = \sqrt{\lambda / 2}$: $$\pm ab(x-x_0) = \operatorname{arctanh}(\phi(x) / a) - \operatorname{arctanh}(\phi(x_0) / a)$$ Since we don't know what $\phi(x_0)$ is, call $\operatorname{arctanh}(\phi(x_0) / a)$ $c$, and get: $$\phi(x) = a \tanh\left(c\pm ab(x-x_0)\right)$$ $$\phi(x) = \frac{m}{\sqrt{\lambda}} \tanh\left(c\pm \frac{m}{\sqrt{2}}(x-x_0)\right)$$ So if you assume that $\phi(x_0)=0$, you can get the desired result.
Methods for solving nonlinear constraints quadratic programming
Nonlinearly constrained quadratic programming? Then you are essentially asking about nonlinear programming. For that, you use a nonlinear programming algorithm such as interior-point algorithms, penalty methods, SQP, filter methods, etc., and their complexity depends on the method, the problem, properties of the problem, the implementation, etc.; i.e., it's impossible to answer generically. The fact that the objective is quadratic is not something you would typically develop solvers for explicitly, once the constraints are general nonlinear. In fact, if you allow nonlinear constraints, there is no loss of generality in assuming the objective to be linear.
Parametric Equation of circle parallel to equator on a sphere
They do seem to have $\sin$ and $\cos$ mixed up, and your Idea 1 does work. But for what it's worth, $\cos \pi / 4 = \sin \pi / 4$ so technically they're not wrong!
What does $[n=1]$ mean?
Yes, this is an Iverson bracket and works exactly the way you deduced: $$[n = 1] = \begin{cases} 1, \text{ if } n=1\\ 0, \text{ if } n \neq 1 \end{cases}$$
Stationarity of ARMA model
We have $$(1-1.3B+0.4B^2)y_t=2+(1+B)z_t$$ where $B$ is the backward shift operator. So for this process to be stationary, the roots of the equation $$1-1.3x+0.4x^2=0$$ must lie outside the unit circle. They are $x=2,1.25$ so indeed it is causal and hence stationary. Thus the autocovariance is $$\begin{align}\gamma_h&=\text{Cov}(y_t,y_{t+h})=\text{Cov}(y_t,2+1.3y_{t+h-1}-0.4y_{t+h-2}+z_{t+h}+z_{t+h-1})\\&=\text{Cov}(2,y_t)+1.3\text{Cov}(y_t,y_{t+h-1})-0.4\text{Cov}(y_t,y_{t+h-2})+\text{Cov}(y_t,z_{t+h})+\text{Cov}(y_t,z_{t+h-1})\\&=0+1.3\gamma_{h-1}-0.4\gamma_{h-2}+0+0\quad\color{red}{(*)}\\&=1.3\gamma_{h-1}-0.4\gamma_{h-2}\end{align}$$ for $h\ge2$ which does not depend on $t$. $\color{red}{(*)}:$ Since $y_{t-i}$ is a linear combination of $z_{t-i}$, $z_{t-i-1}$, ... and since $\mathbb{E}(z_{t-i-j}z_t)=\mathbb{E}(z_{t-i-j})\mathbb{E}(z_t)=0$ due to independence of white noise, we must have that $$\mathbb{E}(y_{t-i}z_t)=\mathbb{E}\left(\sum_{k=0}^\infty\delta_kz_{t-k}z_t\right)=0$$ Can you now finish it by calculating $\gamma_1$?
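The root condition is easy to confirm numerically (my own check):

```python
# Roots of the AR polynomial 1 - 1.3x + 0.4x^2 must lie outside the unit circle.
import numpy as np

roots = np.roots([0.4, -1.3, 1.0])       # coefficients, highest degree first
print(roots)                             # [2.   1.25]
print(all(abs(r) > 1 for r in roots))    # True -> causal, hence stationary
```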
Prove: $2^{a}+ 2^{b}+ 2^{c}\leqq 3$
Your second problem is wrong. Try $b=c=0$ and $a=4\sqrt2+2\sqrt5.$ For your first problem, which was $\sum\limits_{cyc}\log_2a\leq3$: this holds because $$\sum_{cyc}(1-\log_2a)=\sum_{cyc}\left(1-\log_2a+\frac{1}{\sqrt2\ln2}\left(\sqrt2a-\sqrt{a^2+4}\right)\right)\geq0.$$ Let $$f(a)=1-\log_2a+\frac{1}{\sqrt2\ln2}\left(\sqrt2a-\sqrt{a^2+4}\right).$$ Thus, $$f'(a)=\frac{(a-1)\sqrt{2(a^2+4)}-a^2}{2a\ln2\sqrt{a^2+4}}.$$ We see that $f'(a)<0$ for $0<a<1$, but for $a\geq1$ we obtain: $$f'(a)=\frac{2(a-1)^2(a^2+4)-a^4}{2a\ln2\sqrt{a^2+4}\left((a-1)\sqrt{2(a^2+4)}+a^2\right)}=$$ $$=\frac{(a-2)(a^3-2a^2+6a-4)}{2a\ln2\sqrt{a^2+4}\left((a-1)\sqrt{2(a^2+4)}+a^2\right)}>0$$ for $a>2$, which gives $f(a)\geq f(2)=0$ and we are done!
what is the probability of picking an even number out of all natural numbers?
It depends on what probability distribution one defines on $\mathbb N$ (as observed, this cannot be a uniform distribution, but that doesn't mean no distribution exists). Consider the following probability assignments on $\mathbb N$: Prob(1) = 1/2, Prob(2) = 1/4, ..., Prob($n$) = $1/2^n$ [this is a legitimate probability distribution since the sum of all terms is 1]. Here obviously the probability of picking an even number is 1/4 + 1/16 + ..., which is 1/3, not 1/2. On the other hand, we could 'split the distribution in two' thus: Prob(1) = Prob(2) = 1/4, Prob(3) = Prob(4) = 1/8, ..., Prob($2k-1$) = Prob($2k$) = $1/2^{k+1}$. Here obviously the probability of picking an even number is 1/2, since the odd and even terms must separately have the same sum.
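A quick numeric check of the two series (my own sketch):

```python
# P(even) under the two distributions: 1/3 for the first, 1/2 for the second.
even1 = sum(1 / 2**n for n in range(2, 100, 2))      # Prob(n) = 1/2^n, n even
even2 = sum(1 / 2**(k + 1) for k in range(1, 100))   # Prob(2k) = 1/2^(k+1)
print(even1, even2)                                  # 0.333..., 0.5
```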
Compute $\int_{\gamma}|z-1||dz|$.
Note that, by Euler's identity $$\left|i e^{it} \right| = \left|e^{it} \right| = \left| \cos{(t)} +i\sin{(t)} \right| = \sqrt{\cos^2{(t)} +\sin^2{(t)}} = 1.$$ Hence $$\int_{0}^{2 \pi} \left| e^{it} - 1 \right| \left| \gamma ' (t) \right| dt = \int_{0}^{2 \pi} \left| e^{it} - 1 \right| dt.$$ We use Euler's identity a second time to evaluate the modulus in the integrand. One has \begin{align*} \left| e^{it} - 1 \right| & = \left| \cos{(t)} + i\sin{(t)} - 1 \right| = \left| (\cos{(t)} - 1) + i\sin{(t)} \right|\\\\ & =\sqrt{(\cos{(t)} - 1)^2 + \sin^2{(t)}} = \sqrt{\cos^2{(t)} + \sin^2{(t)} + 1 - 2\cos{(t)}}\\\\ &= \sqrt{1 + 1 - 2\cos{(t)}} = \sqrt{2 - 2\cos{(t)}}. \end{align*} Therefore, the final integral is $$\int_{0}^{2 \pi} \left| e^{it} - 1 \right| dt = \int_{0}^{2 \pi} \sqrt{2 - 2\cos{(t)}} dt.$$ This is an elementary integral: since $2 - 2\cos t = 4\sin^2(t/2)$, the integrand equals $2\sin(t/2)$ on $[0,2\pi]$, and $$\int_0^{2\pi} 2\sin(t/2)\,dt = \Big[-4\cos(t/2)\Big]_0^{2\pi} = 8.$$
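A numerical check of the value (my own sketch):

```python
# Riemann-sum check that the arc-length integral equals 8.
import math

n = 100_000
h = 2 * math.pi / n
print(sum(math.sqrt(2 - 2 * math.cos(i * h)) * h for i in range(n)))  # ~ 8.0
```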
Tangent plane through $(1,1,1)$
Your formula does not account for the $z$-dimension. Computing the partial derivatives: $f_1'(x,y,z) = 2xy$, $f_2'(x,y,z) = x^2 + z^2 - 1$, $f_3'(x,y,z) = 2zy - 2$. And evaluating these at $(1, 1, 1)$: $f_1'(1, 1, 1) = 2$, $f_2'(1, 1, 1) = 1$, $f_3'(1, 1, 1) = 0$. The equation of the tangent plane at $(a, b, c)$ is given by $ 0 = (x-a)f_1'(a, b, c) + (y-b)f_2'(a, b, c) + (z-c)f_3'(a, b, c) $, so $ 0 = (x-1)f_1'(1, 1, 1) + (y-1)f_2'(1, 1, 1) + (z-1)f_3'(1, 1, 1) $, i.e. $ 0 = 2(x-1) + y-1 $, i.e. $ 3 = 2x + y $. Note that in general you do have to compute all three partial derivatives at the given point; in this example, since $f_3'(1, 1, 1) = 0$, the normal vector has no $z$-component and the tangent plane is in fact parallel to the $z$-axis.
Why $|z-z_0|^2=r^2|z|^2\iff |z-\frac{z_0}{1-r^2}|=|\frac{rz_0}{1-r^2}|$
We may assume $z_0\in\mathbb{R}$. Then $\left|z-z_0\right|=r|z|$ implies: $$ (z-z_0)(\bar{z}-z_0) = r^2 z\bar{z}, \tag{1}$$ i.e. $(1-r^2)z\bar{z} - z_0(z+\bar{z})+z_0^2=0$. Provided that $r\neq 1$, dividing by $1-r^2$ and completing the square turns this into $\left|z-\frac{z_0}{1-r^2}\right|^2=\frac{r^2z_0^2}{(1-r^2)^2}$, which is the equation of a circle.
Is there a generally accepted name for the described property of arrow $f$?
Such a morphism $f : a \to b$ is said to be $F$-(hyper)cocartesian. This is in connection to Grothendieck opfibrations. You may like to work out what this means concretely in the case of $\mathrm{dom} : [\mathbf{2}, \mathcal{C}] \to \mathcal{C}$ where $\mathcal{C}$ is a category with pushouts.
A variation of the argument to prove that $\{m/n:n \text{ is odd },n,m \in \mathbb{Z}\}$ is a PID
I wish to help you. I will prove that $R$ is a PID using the second approach. $R=\{\frac{m}{n} : m,n\in \mathbb Z,\ 2\nmid n \}=\mathbb Z_{(2)}$ is the ring of fractions $\mathbb Z_S$ where $S=\mathbb Z\setminus(2)$. Every nonzero ideal in $\mathbb Z_{(2)}$ is of the form $(2^n)$ for some $n\in \mathbb N$, hence principal.
How to interpret $1 \to 0$ in ${\bf Set}^\mathrm{op}$, and ${\bf Set}^\mathrm{op}$ itself?
It is a remarkable fact that $\textbf{Set}^\textrm{op}$ is actually a completely concrete category: it is naturally equivalent to the category of complete atomic boolean algebras via the contravariant power set functor. Thus, an object $X$ in $\textbf{Set}^\textrm{op}$ secretly stands for its powerset $P X$, and a morphism $X \to Y$ in $\textbf{Set}^\textrm{op}$ is then a homomorphism of complete boolean algebras $P X \to P Y$. (More precisely, if $f : Y \to X$ is a map in $\textbf{Set}$, then the corresponding homomorphism $P f : P X \to P Y$ is the one that sends a subset $U \subseteq X$ to its preimage $f^{-1} U \subseteq Y$.)
The tensor $\epsilon_{ijk}$ is related to determinants?
If you look at the rule of Sarrus for $3\times3$ matrices, you find that the determinant is a sum of products of the matrix elements: https://en.wikipedia.org/wiki/Rule_of_Sarrus. The formula $a^ib^jc^k\epsilon_{ijk}$ gives exactly that: it is a sum of products of three matrix elements together with the tensor, which is $0$, $-1$ or $1$.
Proving the Schroeder-Bernstein theorem
There are several proofs. I will give you a few hints for a reasonably intuitive one. The first point to grasp is that you have somehow got to construct a bijection out of the two injections. So if the two sets are $A,B$ and the two injections are $f:A\to B$ and $g:B\to A$, you have to decide what subset $X\subset A$ to use for $f$; then you use $g^{-1}$ on $A\setminus X$. Think about that for a while. One way is to "trace backwards". So start, say, with $a_1\in A$ and then try to find $b_1\in B$ such that $g(b_1)=a_1$. If you can't, then you have to put $a_1$ into $X$. Similarly, if you cannot trace back from $b_1$, i.e. find $a_2\in A$ such that $f(a_2)=b_1$, then if $g(b_1)=a_3$ you must put $a_3$ into $A\setminus X$. One of three things must happen: either you can trace back indefinitely, or you end up stuck in one set or the other. If you end up stuck, then that determines how all the elements in that chain (up to getting stuck) must be treated. In the indefinite case you have a choice, but must treat the elements in the chain consistently.
Calculate $\sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!\cdot 2^n}$
First note that $$(2n-1)!!=\frac{(2n)!}{2^nn!}$$ and $(2n)!!=2^nn!$, so $$\frac{(2n-1)!!}{(2n)!!\cdot2^n}=\frac{(2n)!}{2^nn!\cdot2^nn!\cdot2^n}=\frac1{2^{3n}}\binom{2n}n\;.$$ Now use the power series $$\frac1{\sqrt{1-4x}}=\sum_{n\ge 0}\binom{2n}nx^n\;,\tag{1}$$ the generating function for the central binomial coefficients. $\binom{2n}n\le 4^n$ for $n\ge 1$, so $(1)$ certainly converges at $x=\frac18$, and we have $$\sqrt2=\frac1{\sqrt{1-1/2}}=\sum_{n\ge 0}\binom{2n}n\left(\frac18\right)^n=\sum_{n\ge 0}\frac1{2^{3n}}\binom{2n}n=1+\sum_{n\ge 1}\frac{(2n-1)!!}{(2n)!!\cdot2^n}\;.$$
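Numerically, the partial sums indeed approach $\sqrt2-1$; a quick check (my own sketch) using the term ratio $a_{n+1}/a_n=\frac{2n+1}{2(2n+2)}$:

```python
# Partial sums of sum_{n>=1} (2n-1)!!/((2n)!! 2^n), which should equal sqrt(2)-1.
import math

a, total = 0.25, 0.0                       # a_1 = 1!!/(2!! * 2) = 1/4
for n in range(1, 60):
    total += a
    a *= (2 * n + 1) / (2 * n + 2) / 2     # ratio a_{n+1}/a_n
print(total, math.sqrt(2) - 1)             # both ~ 0.4142135623...
```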
tensor, symmetric, exterior power of a module over a PID
Use the following: $S^n(M \oplus N) = \bigoplus_{p+q=n} S^p(M) \otimes S^q(N)$, likewise for $\Lambda^n$. $S^n(R/I) = T^n(R/I)$. $\Lambda^n(R/I)=0$ for $n>1$.
Operator on continuous functions under Alexandroff compactification
[copied from my comment] If $Af=(1/2)f$, the constant function $1$ is mapped by $A^∗$ to the [discontinuous] function that is equal to $1/2$ on $X$ and $1$ at $\infty$.
Polar form of Laplace's equation.
Take the Cauchy-Riemann equations in polar form, that is $$u_r = \frac{1}{r}v_{\theta},\quad \frac{1}{r} u_{\theta} = -v_r .$$ Now, take the partial derivative of the first equation with respect to $r$ and the partial derivative of the second equation with respect to $\theta$; then $$ u_{rr} = \frac{\partial}{\partial r} \left(\frac{1}{r}v_{\theta}\right) = \frac{v_{\theta r}}{r} - \frac{v_{\theta}}{r^2},\\ u_{\theta \theta} = \frac{\partial}{\partial \theta}(-r v_r)= - r v_{r \theta} .$$ Here, I just took the partial derivative with respect to $r$ for the first one and the partial derivative with respect to $\theta$ for the second one, but you can work out the other two partial derivatives. Now, assuming continuity of the partial derivatives, one can say that $v_{r \theta} = v_{\theta r}$, so we can substitute the second equation into the one before, thus $$u_{rr} = -\frac{u_{\theta \theta}}{r^2} - \frac{v_{\theta}}{r^2},$$ but we can again make use of the Cauchy-Riemann equations and substitute $v_{\theta} = r u_r$; using this result in the last equation yields $$ r^2 u_{rr} = -u_{\theta \theta} - r u_r,\\ r^2 u_{rr} + r u_r + u_{\theta \theta} = 0.$$
Derivative of a complex function $y=\operatorname{tg}2x^{\cot\frac x 2}$
Generally you have $$ \frac d {dx} u^v = u^v \log_e u \cdot \frac {dv}{dx} \quad + \quad vu^{v-1} \cdot \frac {du}{dx}. $$ The first term is done just as if $u$ were constant, and the second as if $v$ were constant.
On the sets of injective/surjective linear mappings between Euclidean spaces
Let $\lambda \in \mathcal L(\mathrm R^n,\mathrm R^m)$ be not surjective, $m\le n$, and $(e'_1,\dots,e'_m)$ a basis of $\mathrm R^m$ such that $(e'_1, \dots , e'_k)$ is a basis of $\mathrm {Im}(\lambda)$. Now let's choose a basis $(e_1,\dots,e_n)$ of $\mathrm R^n$ such that $$ \lambda(e_i) = e'_i\quad \forall\, 1\le i \le k $$ and define a linear map $\delta:\mathrm R^n \to \mathrm R^m$ by $$ \delta(e_i) := \begin{cases} e'_i & \text{if $k < i \le m$} \\ 0 & \text{otherwise} \end{cases} $$ For each $\epsilon > 0$, $\lambda_\epsilon := \lambda + \epsilon \delta$ is surjective. For each $1\le i \le k$, $$ \lambda_\epsilon(e_i) = \lambda(e_i) = e'_i. $$ If $k+1 \le i \le m$, let $x$ be a linear combination of $(e_1, \dots, e_k)$ such that $\lambda(x) = \lambda(e_i)$; we have $$ \lambda_\epsilon(e_i - x) = \lambda(e_i) - \lambda(x) + \epsilon \delta(e_i) - \epsilon \delta(x) = \epsilon e'_i. $$ Moreover $$ \lim_{\epsilon\to 0} \lambda_\epsilon = \lambda. $$
Is there any isomorphism between the non-zero complex numbers under multiplication and the complex numbers under addition?
There is no isomorphism between them, and the reason is very simple. In the group of the non zero complex numbers under multiplication there are a lot of non trivial elements of finite order. (think about the roots of unity). On the other hand in the group of complex numbers under addition every non trivial element has infinite order. Isomorphism preserves order of elements, so they can't be isomorphic.
Dimension of schemes over fields
We have $Y=\operatorname {Spec}(A)=\{\eta,M\}$, consisting of the zero ideal $\eta=(0)$ and the maximal ideal $M=(\pi)$, where $\pi$ is a uniformizer. On the other hand, $X=\operatorname {Spec}(K)\sqcup \operatorname {Spec}(k)=\{y\}\sqcup \{m\}$ and since the morphism $f:X\to Y$ satisfies $f(y)=\eta, f(m)=M$, $f$ is bijective. The scheme $X$ has dimension zero because it is discrete, and $Y$ has dimension $1$ since its only two primes are connected by the inclusion relation $(0) \subsetneq M$. Finally $K\times k$ is finitely generated over $A$ as the product of the finitely generated algebras $K=A[\frac 1\pi]$ and $k=A/M$.
Tonelli and Hensel Lemma
To simplify the reasoning, let $p=419, k=5, a=5$; i.e. you want to compute a root $r^2\equiv a \bmod p^k$. For a solution with $k=2$ see my answer https://math.stackexchange.com/a/1895883/61216 where I compute a solution $\bmod p^2$. In the table below I show the lifting steps for the root $r_0\equiv\sqrt{a}\equiv 41 \bmod p$:

```
r = r0 = 41
z = (2r0)^(-1) mod p = 46

j=1  p^j = 419
     x = (a-r^2)/p^j = -4
     x = x*(2r0)^(-1) mod p^j = 235
     r = r + x*p^j = 98506
j=2  p^j = 175561
     x = (a-r^2)/p^j = -55271
     x = x*(2r0)^(-1) mod p^j = 90949
     r = r + x*p^j = 15967195895
j=3  p^j = 73560059
     x = (a-r^2)/p^j = -3465893695780
     x = x*(2r0)^(-1) mod p^j = 19468360
     r = r + x*p^j = 1432109677429135
j=4  p^j = 30821664721
     x = (a-r^2)/p^j = -66542094554315137820
     x = x*(2r0)^(-1) mod p^j = 8060623011
     r = r + x*p^j = 248443251997096924066

p^k = p^5 = 12914277518099
r mod p^k = 8302875642540
```

Now check that $8302875642540^2 \equiv 5 \bmod{419^5}.$ Now repeat the corresponding steps for the other root $-41 \equiv 378 \bmod 419$ to get the solution $4611401875559.$ Note that for greater values of $k$ it is faster to compute the lifted solutions $\bmod p^{2^j}$ (quadratic Hensel lifting). For further references see also the section Powers of odd primes of John Cook's Solving quadratic congruences and the Wikipedia example section.
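For completeness, here is a compact Python sketch of the quadratic lifting mentioned at the end (my own code, assuming Python 3.8+ for the three-argument modular-inverse `pow`); it reproduces both roots above:

```python
# Lift a root r0 of r^2 = a (mod p), p an odd prime not dividing a, to mod p^k
# by Newton/Hensel iteration, roughly doubling the precision at each step.
def sqrt_mod_prime_power(a, r0, p, k):
    r, mod, target = r0, p, p**k
    while mod < target:
        mod = min(mod * mod, target)
        r = (r - (r * r - a) * pow(2 * r, -1, mod)) % mod   # Newton step
    return r

print(sqrt_mod_prime_power(5, 41, 419, 5))    # 8302875642540
print(sqrt_mod_prime_power(5, 378, 419, 5))   # 4611401875559, the other root
```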
Equivalence relation - Equilavence classes explanation
I already gave you an example in the comment. For another one: $$|\{1\}\cap T|=1\implies X\in\left[\,\{1\}\,\right]\iff |X\cap T|=1$$ so for example, we have in this case $$\{1\}\,,\,\{0\}\,,\,\{0,2\}\,,\,\{1,2\}\in\left[\,\{1\}\,\right]$$ Now you try other cases.
Prove that $\mathbb{Q}[x,y]$ contains an ideal $I$ which can be generated by 3 elements, but not by 2 elements.
Take $I=(X^2,XY,Y^2)\subset \mathbb Q[X,Y]$. Since $I$ is homogeneous, if $f(X,Y)\in I$, the homogeneous components of $f$ also belong to $I$. If $I$ could be generated by $2$ elements $f_1,f_2\in I$, then $f_i$ has no homogeneous component of degree $0$ or $1$. So the homogeneous elements of degree $2$ in $I$ would be spanned, as a $\mathbb Q$-vector space, by the degree-$2$ components of $f_1$ and $f_2$, hence by $2$ elements. But $\dim_\mathbb Q I\cap \{aX^2+bXY+cY^2 \ : \ a,b,c \in \mathbb Q \}=3$. So we get a contradiction.
The relation between the square of the integral and the integral of the square of the integrand
Not true even for finite measures. Let $f(x)=1$ for $ 0 \leq x \leq 2$ and $0$ for all other $x$ (Lebesgue measure). The inequality is true for probability measures. This follows by Jensen's inequality applied to the function $x \to x^{2}$.
Space formed by difference between lines
First, I'm going to use the names/notation I used in my comments. Now to simplify things, I'm going to assume that $P$ and $Q$ are chosen so that the segment $PQ$ is perpendicular to the vectors $\mathbf u$ and $\mathbf v$, i.e., they're the points, on the two lines, that are as close as possible. Next, I'm going to shift the origin to be at $(P + Q)/2$, the midpoint of the two. That makes the plane contain points of the form $$ s\mathbf u - t \mathbf v + (P - (-P)) = s\mathbf u - t \mathbf v + 2P. $$ The origin projects onto this plane at the point $R = 2P$. Can you see why? A point $X$ is on your sphere if $$ \|R\|^2 + \|X - R\|^2 = c^2. $$ So let's look at the point $X(s, t) = s\mathbf u - t \mathbf v + 2P$ and see what that equation says about $s$ and $t$, assuming, for the moment, that $\mathbf u$ and $\mathbf v$ are orthogonal. Let $r = \|R\|$. \begin{align} \|R\|^2 + \|X-R\|^2 &= c^2\\ r^2 + (s^2 + t^2) &= c^2 \\ (s^2 + t^2) &= c^2 - r^2 \end{align} [Where, in the computation above, did I use that $\mathbf u$ and $\mathbf v$ were perpendicular?] Now let's look, for such a pair $s, t$, at the points $A = s\mathbf u + P$ and $B = t\mathbf v - P$. The squared distance between them is the squared length of $A - B$, i.e., $$ \|s\mathbf u + P - (t\mathbf v - P)\|^2 = \|s\mathbf u - t\mathbf v + 2P\|^2 = s^2 + t^2 + r^2 $$ which, by the previous calculation, is just $c^2$ as desired. Note: I computed that last squared distance using dot products: $\|\mathbf q\|^2 = \mathbf q \cdot \mathbf q$; in this case, that gave me \begin{align} \|s\mathbf u - t\mathbf v + 2P\|^2 &= (s\mathbf u - t\mathbf v + 2P) \cdot (s\mathbf u - t\mathbf v + 2P)\\ &= s^2 \mathbf u\cdot \mathbf u - 2st\,\mathbf u \cdot \mathbf v + 4s\, \mathbf u\cdot P + t^2 \mathbf v \cdot \mathbf v - 4t\, \mathbf v \cdot P + 4 P \cdot P \end{align} In this situation, the dot products among $\mathbf u$, $\mathbf v$, and $P$ are all zero, because they are three perpendicular vectors. Now let's look at all that again, without the assumption about $\mathbf u$ and $\mathbf v$. The squared distance from $A$ to $B$ is still the squared length of $A - B$, i.e., \begin{align} \|s\mathbf u + P - (t\mathbf v - P)\|^2 &= (s\mathbf u - t\mathbf v + 2P) \cdot (s\mathbf u - t\mathbf v + 2P)\\ &= s^2 \mathbf u\cdot \mathbf u - 2st\,\mathbf u \cdot \mathbf v + 4s\, \mathbf u\cdot P + t^2 \mathbf v \cdot \mathbf v - 4t\, \mathbf v \cdot P + 4 P \cdot P \end{align} but this no longer simplifies as nicely. You want this squared distance to be $c^2$, so let's write that down and simplify a bit: \begin{align} s^2 \mathbf u\cdot \mathbf u - 2st\,\mathbf u \cdot \mathbf v + 4s\, \mathbf u\cdot P + t^2 \mathbf v \cdot \mathbf v - 4t\, \mathbf v \cdot P + 4P \cdot P &= c^2\\ s^2 \mathbf u\cdot \mathbf u - 2st\,\mathbf u \cdot \mathbf v + 4s\, \mathbf u\cdot P + t^2 \mathbf v \cdot \mathbf v - 4t\, \mathbf v \cdot P + r^2 &= c^2\\ s^2 \mathbf u\cdot \mathbf u - 2st\,\mathbf u \cdot \mathbf v + 4s\, \mathbf u\cdot P + t^2 \mathbf v \cdot \mathbf v - 4t\, \mathbf v \cdot P &= c^2-r^2 \end{align} If you compute values for all the dot products, you get an equation of the form \begin{align} s^2 \alpha - 2st\beta + 4s \gamma + t^2 \delta - 4t \epsilon &= c^2 - r^2 \end{align} which is the equation of the ellipse that you should be looking at in the plane you've created. With some algebraic shuffling, you can surely write this in the form $$ (X - R)^t M (X - R) = c^2 - r^2 $$ where $A$ is a matrix whose columns are probably something like $\mathbf u, \mathbf v, P$ and $M$ is something like $A^t A$, but I'm going to leave this last part to you; work by analogy with the case where $\mathbf u$ and $\mathbf v$ are perpendicular.
Prob. 17, Chap. 2 in Baby Rudin: The set of all numbers in $[0,1]$ with only $4$ and $7$ as decimal digits is countable, dense, compact, perfect?
Your arguments for uncountable and not dense look good to me. As far as showing $[0,1]-E$ is open, I think for simplicity, since you know that $d_N\not\in\{4,7\}$ and we're dealing with integers, take $$\delta<\frac{1}{10^{N+2}}.$$ Then if $y$ is such that $|x-y|<\delta$ you know that $y$ has to agree with $x$ at $d_N$, and thus $y\not \in E$. I believe that your argument for $E$ being perfect is correct as well :)
Why is it wrong to consider the highest asymptotic point as an absolute maximum of the function?
There is no issue in calling a value a maximum if it is attained at some $x$. But if that value is not attained at any $x$, then it can't be a maximum. For a value to be called a maximum, say a local or a global maximum, it must be attained at some point $x$ whose value is greater than or equal to the values on a neighborhood of $x$. This is why we define the infimum and the supremum.
Can there be only one extension to the factorial?
First, for a fixed $c\in\mathbb{C}$, let $$F_c(z):=\Gamma(z+1)\cdot\big(1+c\,\sin(2\pi z)\big)$$ for all $z\in\mathbb{C}\setminus \mathbb{Z}_{<0}$, which defines an analytic function $F_c:\mathbb{C}\setminus\mathbb{Z}_{<0}\to\mathbb{C}$ such that $$F_c(z)=z\cdot F_c(z-1)$$ for all $z\in\mathbb{C}\setminus\mathbb{Z}_{\leq 0}$ and that $F_c(0)=F_c(1)=1$ (whence $F_c(n)=n!$ for every $n\in\mathbb{Z}_{\geq 0}$). Excluding the essential singularity at $\infty$, the negative integers are the only singularities of $F_c$, which are simple poles. Here are some results I checked with Mathematica. If $c$ is a positive real number less than $0.022752$, then $F_c'(z)>0$ for all $z>1$ and $F_c''(z)>0$ for all $z>-1$, making $F_c$ monotonically increasing on $(1,\infty)$ and convex on $(-1,\infty)$. It also appears that, with $0<c<0.022752$, $F_c$ is convex on $(-2n,-2n+1)$ and concave on $(-2n-1,-2n)$ for every $n=1,2,\ldots$. (I have checked this with various values of $c$ and with $n\leq 30$.) It would be great if someone can find an actual proof. Hence, it seems to me that the conditions 1-4 do not give a unique factorial function.
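One can check numerically that $F_c$ interpolates the factorial for any $c$ (a quick sketch of mine using math.gamma):

```python
# F_c(n) = Gamma(n+1) * (1 + c*sin(2*pi*n)) equals n! at nonnegative integers,
# since sin(2*pi*n) = 0 there (up to floating-point error).
import math

def F(z, c):
    return math.gamma(z + 1) * (1 + c * math.sin(2 * math.pi * z))

for n in range(6):
    print(n, F(n, 0.02))   # 1, 1, 2, 6, 24, 120
```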
Can this expression be simplified any further with the Laws of logic?
If you distribute the $ \neg q \vee p $, then the $ \neg q $ cancels with the $q \wedge (p \vee r)$ and the $p$ cancels with the $\neg p \wedge r$, so we are left with $ (\neg q \wedge \neg p \wedge r) \vee (p\wedge q \wedge (p \vee r)) $, which is equivalent to $ (\neg q \wedge \neg p \wedge r) \vee (p \wedge q )$.
Goldbach's Weak Conjecture
For 2.: he actually says he proved such a result, but I am not aware whether his result was validated. Here are two of his works: http://arxiv.org/pdf/1404.2224.pdf http://arxiv.org/pdf/1312.7748.pdf However, if $n>4$ is even, then GSC implies that there exist odd primes $p$ and $q$ such that $n=p+q$. Thus, if $m>7$ is odd, then $n=m-3>4$ is even, so $m=p+q+3$.
Points for which $AX^2-BX^2$ is constant
We can use coordinate geometry, letting the two points be $(p,0)$ and $(-p,0)$, and grind it out. Not much grinding! If you prefer (I don't) you can let the points be $(a_1,a_2)$ and $(b_1,b_2)$.
Why is a function like $z^{2.5}$ not holomorph?
The function $f(z)=z^{2.5}$ is only well defined on the positive real numbers. Everywhere else, you need to provide additional information. For example, it is not clear what $(-1)^{2.5}$ would be equal to. You could write the function as $z^{2.5} = e^{2.5\cdot \log(z)}$, but then the function $\log$ is not well defined on the complex numbers. Taking the standard branch, the function $\log$ is undefined on $(-\infty, 0]$, and there is no way you can ever continuously define the logarithm function on the entire complex plane.
Show that a function is in Bergman space
To consider $\langle f, e_n\rangle$, that expression needs to make sense. A priori, considering the inner product requires $f$ to belong to $A^2(\mathbb{D})$, which is just what is to be shown. Since the functions $e_n$ are bounded, we can relax the condition: $f$ being integrable would suffice to make sense of the integral. However, we don't initially know that $f$ is integrable either. Recall the definition of $A^2(\mathbb{D})$. It's the space of holomorphic square integrable functions on the unit disk. So let's look at when a holomorphic function on the unit disk is square integrable. By continuity, every holomorphic function on the unit disk is square integrable over all disks $D_r(0)$ for $0 < r < 1$. Fix an arbitrary $r\in (0,1)$, and consider $g \in \mathscr{O}(\mathbb{D})$. Let $$g(z) = \sum_{n = 0}^{\infty} b_n z^n.$$ Since the power series converges absolutely and uniformly on $D_r(0)$, we have \begin{align} \int_{\lvert z\rvert < r} \lvert g(z)\rvert^2\,d\lambda &= \int_0^{2\pi} \int_0^r \lvert g(\rho e^{i\varphi})\rvert^2\rho\,d\rho\,d\varphi \\ &= \int_0^{2\pi} \int_0^r \Biggl(\sum_{n = 0}^{\infty} b_n \rho^n e^{in\varphi}\Biggr)\Biggl(\sum_{k = 0}^{\infty} \overline{b_k} \rho^k e^{-ik\varphi}\Biggr)\rho\,d\rho\,d\varphi \\ &= \sum_{k,n} \int_0^{2\pi} \int_0^r b_n\overline{b_k} \rho^{n+k+1} e^{i(n-k)\varphi}\,d\rho\,d\varphi \\ &= \sum_{n,k} b_n\overline{b_k}\int_0^{2\pi} \frac{r^{n+k+2}}{n+k+2}e^{i(n-k)\varphi}\,d\varphi \\ &= \sum_{n,k} \frac{b_n\overline{b_k}r^{n+k+2}}{n+k+2} 2\pi \delta_{n,k} \\ &= \sum_{n = 0}^{\infty} \pi \frac{\lvert b_n\rvert^2r^{2(n+1)}}{n+1}. \end{align} Taking the limit $r \to 1$, we see that $$\int_{\mathbb{D}} \lvert g(z)\rvert^2\,d\lambda = \pi\sum_{n = 0}^{\infty} \frac{\lvert b_n\rvert^2}{n+1},$$ i.e. $g$ is square integrable over the unit disk if and only if $$\sum_{n = 0}^{\infty} \frac{\lvert b_n\rvert^2}{n+1} < +\infty.$$ Now apply this with $$b_n = \sqrt{n+1} a_n.$$ We find $$\sum_{n = 0}^{\infty} \frac{\lvert b_n\rvert^2}{n+1} = \sum_{n = 0}^{\infty} \frac{(\sqrt{n+1})^2\lvert a_n\rvert^2}{n+1} = \sum_{n = 0}^{\infty} \lvert a_n\rvert^2.$$ Thus $\sum \lvert a_n\rvert^2 < +\infty$ is precisely the condition for $$\sum_{n = 0}^{\infty} \sqrt{n+1} a_n z^n$$ to be in $A^2(\mathbb{D})$.
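A numerical spot check of the norm formula (my own sketch): for $g(z)=z$ we have $b_1=1$, so the formula predicts $\pi/2$, which midpoint-rule quadrature over the disk confirms:

```python
# Check: integral of |z|^2 over the unit disk equals pi * 1/(1+1) = pi/2.
import numpy as np

n = 800
rho = (np.arange(n) + 0.5) / n                 # midpoints of radial cells
phi = 2 * np.pi * (np.arange(n) + 0.5) / n     # midpoints of angular cells
R, PHI = np.meshgrid(rho, phi)
g = R * np.exp(1j * PHI)                       # g(z) = z in polar coordinates
value = np.sum(np.abs(g)**2 * R) * (1 / n) * (2 * np.pi / n)  # |g|^2 rho drho dphi
print(value, np.pi / 2)                        # both ~ 1.5708
```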
combination problem24
The second digit cannot be $0$, because the problem explicitly specifies that every digit must be positive. So there are only $8$ possibilities for the second digit, and similarly for the third and fourth.
Prove this equation involving Landau Notation
It's between $0$ and $\int_Y^\infty\tfrac{1}{u^2}du=\tfrac1Y$.
A problem solving strategy that appears to work, but doesn't?
Two (similar) examples that come to mind are the "tennis tournament problem" and the "two trains and a fly" problem. The tennis tournament puzzle is as follows: Suppose there are 100 participants in a knock-out style tennis tournament. How many games must be played until there is a winner? A brute force approach would be to calculate how many games are in each round, and how many players are eliminated in each round, all the way to the quarter-finals, semi-finals and the final. On the other hand, you could easily solve the puzzle by noticing that exactly one player is eliminated per game, so there is a winner after exactly 99 games (i.e. when 99 players have been eliminated). The classic two trains and a fly problem is similar in that there is an obvious 'brute force' approach but there is also a trick to solving the problem which one might initially miss. There are, of course, lots of other neat puzzles in which a simple trick/'strategy' gives a solution (e.g. the mutilated chessboard problem), but often these don't have an 'obvious' but inefficient strategy that one could also use to solve the problem. As noted in the comments, your question is a bit vague - I'm not sure if these examples are the sort of thing you are looking for; whilst the 'correct' strategy in each case is making an observation that immediately simplifies the problem, crucially, there is also a more obvious 'brute force' strategy that one may be tempted to use in each case. I've also tried to avoid simply giving examples of fallacious proofs since these really aren't 'problem-solving strategies'.
integrating derivatives
Let me use the notation $(\Delta_h f) (x) = \frac{f(x+h)-f(x)}{h}$, and note that for $h > 0$, $\Delta_h f$ is defined on $[a,b-h]$. Then \begin{eqnarray} \int_a^{b-h} \Delta_h f &=& \frac{1}{h} \int_a^{b-h} (f(x+h)-f(x)) dx \\ &=& \frac{1}{h} \left( \int_a^{b-h} f(x+h) dx - \int_a^{b-h} f(x) dx\right) \\ &=& \frac{1}{h} \left( \int_{a+h}^{b} f(x) dx - \int_a^{b-h} f(x) dx\right) \\ &=& \frac{1}{h} \left( \int_{b-h}^{b} f(x) dx - \int_a^{a+h} f(x) dx\right) \\ \end{eqnarray} Since $f$ is continuous, we see that $\lim_{h \downarrow 0} \int_a^{b-h} \Delta_h f = f(b)-f(a)$. Furthermore, we see that $\lim_{h \downarrow 0} (\Delta_h f)(x) = f'(x)$ a.e. We are only a small technical detail away from using the dominated convergence theorem. Define $\phi_h(x) = \begin{cases} (\Delta_h f) (x), & x \in [a,b-h] \\ 0, & \text{otherwise}\end{cases}$. We see that $\int_a^b \phi_h = \int_a^{b-h} \Delta_h f$, and $\lim_{h \downarrow 0} \phi_h (x) = f'(x)$ a.e. We have $|\phi_{\frac{1}{n}} | \le g$ a.e., hence we have $\lim_{n \to \infty} \int_a^b \phi_{\frac{1}{n}} = \int_a^b \lim_{n \to \infty} \phi_{\frac{1}{n}}(x) dx$, or in other words, $\int_a^b f' = f(b)-f(a)$.
integration of probability distribution
$$ p(y|x,X,Y) = \int^{10}_{-10} \int^{10}_{-10} p(y|x,w_1,w_2)\frac{1}{400}dw_1dw_2 $$ for starters, using the uniform PDFs. Now in $p(y|x)$ you will have a function of the form $e^{-(f_\omega(x)-y)^2/2}$. This will integrate differently depending on your $f(\omega;x)$.
Strict transform of blow up
Point (1) is correctly addressed in the comment by Hoot. As for point (2), your intuition is on the right track. On the other hand, you should keep track of the multiplicities of the loci involved. As an example, let $X$ be the projective plane, $Y$ the cuspidal rational curve, and $Z$ the singular point of $Y$. $Z$ is a regular subvariety of $X$, so the exceptional divisor is just a copy of $\mathbb{P}^1$ (in general, if you blow up something singular, the exceptional locus might be pretty ugly though). The strict transform of $Y$ (i.e. $Y'$ in your notation) is going to be a smooth rational curve tangent to $E$. This reflects the fact that $Y$ has multiplicity 2 along $Z$. This gives you $\pi^*Y= Y'+2E$. As you see, the ingredients are exactly the ones you expected, but, in this case, they are weighted with coefficients depending on the singularities of $Y$ along $Z$. Edit I am reading your answer more carefully now. If both $Y$ and $Z$ are smooth, then claim (2) is fine as well. Addendum Your comment is right. The blow up is an isomorphism over $X \setminus Z$. In particular, if $\widehat{Y}$ is disjoint from $Z$, its strict transform $\widehat{Y}'$ coincides with the pullback $\pi^*(\widehat{Y})$, and it is isomorphic to $\widehat{Y}$. Now, if $Y$ and $\widehat{Y}$ are linearly equivalent, so are their pullbacks (just because the isomorphism between $\mathcal{O}_X(Y)$ and $\mathcal{O}_X(\widehat{Y})$ induces an isomorphism between their pullbacks). On the other hand, this is telling you that the strict transforms of linearly equivalent divisors are not linearly equivalent if just one of the two goes through $Z$. Let me be more explicit. Blow up a point $P$ in $\mathbb{P}^2$. Let $L_1$ be a line through $P$, and $L_2$ a line not containing $P$. Denote by $M_1$ and $M_2$ the respective strict transforms. Then, by what was said above, we have $\pi^*L_1=M_1+E$ and $\pi^*L_2=M_2$. By Bézout's theorem we know that the intersection products $L_1 \cdot L_2=(L_1)^2=(L_2)^2=1$. In particular $L_1$ and $L_2$ meet properly at one point, say $Q$. Now, since $L_2$ does not go through $P$, the pullbacks $M_1+E$ and $M_2$ meet properly at one point (the only preimage of $Q$). Given that these divisors are also linearly equivalent to each other, we get $1=(M_2)^2=M_2 \cdot (M_1+E)=(M_1+E)^2$. In particular, we get $1=(M_1+E)^2=M_1^2+2M_1\cdot E+ E^2$. Since $M_1$ and $E$ meet properly at one point, we know $M_1 \cdot E=1$. Next, $E^2=\deg\mathcal{O}_{X'}(E)_{|E}$. By the description in sections 7 and 8 of chapter II of Hartshorne, we know that $\mathcal{O}_{X'}(-E)_{|E}$ is the relative $\mathcal{O}(1)$ bundle and that $E\cong\mathbb{P}^1$; together these tell us that $\mathcal{O}_{X'}(E)_{|E}\cong \mathcal{O}_{\mathbb{P}^1}(-1)$. Thus that degree is $-1$, so $E^2=-1$. This negative self-intersection is phrased as "$E$ does not deform": there is no other effective divisor equivalent to $E$. Putting this in our previous equation, we get $(M_1)^2=0$. As you see, $(M_1)^2 \neq (M_2)^2$; in particular, they cannot be linearly equivalent.
There is a $9\times 9$ page; how many different rectangles can you draw with odd area?
If the area has to be odd, the length and breadth both have to be odd. Hence, we count the rectangles by first choosing a horizontal and a vertical grid line ($10 \cdot 10$ ways to do this), and then choosing another horizontal and another vertical line, each at an odd distance from the one already chosen ($5 \cdot 5$ ways to do this). But we have counted each rectangle four times -- once for each of the $2\times 2$ ways of labelling which horizontal and which vertical line was chosen first -- so we divide by 4 to get our final answer: $\frac14 \cdot 10 \cdot 10 \cdot 5 \cdot 5 = 625$.
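A brute-force confirmation (my own addition; positions $0$ through $9$ index the grid lines of the page):

```python
# Count axis-aligned rectangles on a 9x9 page whose width and height are both odd.
count = sum(1
            for x1 in range(10) for x2 in range(x1 + 1, 10)
            for y1 in range(10) for y2 in range(y1 + 1, 10)
            if (x2 - x1) % 2 == 1 and (y2 - y1) % 2 == 1)
print(count)  # 625
```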
Intuition and counterexamples for higher-order derivative test
Consider a particle which is at position $x(t) = \frac{t^4}{4!}$ at time $t$. Its acceleration is $\ddot{x}(t) = \frac{t^2}{2!}$, which means that at $t=0$ it is stationary and it does not (at the moment) accelerate. Shouldn't it stay in place forever? No: the thing is that it will accelerate in a moment (precisely, for any positive time), and the reason is that the jounce (i.e. the fourth derivative) is positive. This could be formulated in terms of "promises", where velocity is a promise that the position will change. To go further, consider the following table: $$\begin{array}{|c|c|c|}\hline \textbf{condition} & \textbf{name} & \textbf{description} \\\hline |x| > 0 & \text{position} & \text{the position has changed} \\\hline |\dot{x}| > 0 & \text{velocity} & \text{the position is changing} \\\hline |\ddot{x}| > 0 & \text{acceleration} & \text{promise that } \\ && \text{the position will be changing} \\\hline |\dddot{x}| > 0 & \text{jerk} & \text{promise about promise that }\\ && \text{the position will be changing} \\\hline |\ddddot{x}| > 0 & \text{jounce} & \text{promise about promise about promise }\\ && \text{that the position will be changing} \\\hline \end{array}$$ Although we have no names for higher derivatives, it's easy to generalize the above. So, even if many derivatives are zero, we still have the appropriate promise, hence the function might have an extremum at that point. However, this is not all! There are even funnier examples, like $$x \mapsto \begin{cases}e^{-\frac{1}{x^2}} & \text{ if } x \neq 0 \\ 0 & \text{ otherwise}\end{cases}$$ which also has a minimum at zero, but all its derivatives there are zero (picture courtesy of Wolfram Alpha)! Of course, one could also consider $x \mapsto \mathrm{sgn}(x)\cdot e^{-\frac{1}{x^2}}$ (with the obvious smoothness fix at zero), which has neither a minimum nor a maximum there; i.e. all derivatives being zero tells us very little about how the function might look in the future. You might argue that this is a weird example, but observe that in nature all functions/movements might be just like that. A car might be stationary, but then it starts moving. How did it happen? Is its movement infinitely differentiable? You might say no, because combustion in the engine and other things are not, but what about the tires and ground friction? I shall stop here, as this is not mathematics anymore. I hope this helps $\ddot\smile$
Finding exponential limit
Each term in parentheses is not larger than $n+1$. So $l$ is not greater than the limit of $n^{-n^2}(n+1)^{n}$, which is zero.
Banach fixed-point theorem: Existence of solution
First note that $$G = \{(x_1,x_2)\mid \|(x_1,x_2)-(0.2,1)\|_{\infty}\leq 0.2\} =\{(x_1,x_2)\mid |x_1-0.2|\leq 0.2 \text{ and } |x_2-1|\leq 0.2\} \\= [0,0.4]\times [0.8,1.2].$$ Now, let $$\phi_1(x_1,x_2)=(5+x_1^2+x_2^2)^{-1}\quad\text{ and }\quad \phi_2(x_1,x_2)=(x_1+x_2)^{1/4},$$ so that $\Phi(x_1,x_2)=(\phi_1(x_1,x_2),\phi_2(x_1,x_2))$. For $(x_1,x_2)\in G$ it holds that $$0\leq \phi_1(0.2,1.2)\leq \phi_1(x_1,x_2)\leq \phi_1(0,0.8)\approx 0.1773\leq 0.4$$ and $$0.8 \leq 0.9457\approx \phi_2(0,0.8)\leq \phi_2(x_1,x_2)\leq \phi_2(0.2,1.2)\approx 1.0878 \leq 1.2.$$ It follows that $\Phi(G)\subset G$. Now, in order to apply the Banach fixed point theorem, you want to show that $\Phi$ is a strict contraction on $(G,\|\cdot\|_{\infty})$, i.e. there exists $\alpha <1$ such that $$\|\Phi(x)-\Phi(y)\|_{\infty}\leq \alpha \|x-y\|_{\infty},\qquad \forall x=(x_1,x_2),y=(y_1,y_2)\in G.\tag{1}$$ Let $x,y\in G$. As $\Phi$ is differentiable on $G$, by the MVT there exist $s,t\in [0,1]$ such that $u=s x+(1-s)y$ and $v=t x+(1-t)y$ satisfy $$(x-y)\cdot\nabla \phi_1(u) = \phi_1(x)-\phi_1(y)\quad\text{and}\quad (x-y)\cdot\nabla \phi_2(v) = \phi_2(x)-\phi_2(y).$$ Hence, if we can show that there exists $\alpha<1$ independent of $x,y$ such that $$|(x-y)\cdot\nabla \phi_1(u)|\leq \alpha \|x-y\|_{\infty}\quad\text{and}\quad |(x-y)\cdot\nabla \phi_2(v)|\leq \alpha \|x-y\|_{\infty},$$ then $\alpha$ satisfies $(1)$ and we are done. Now, note that $$\max\{|(x-y)\cdot\nabla \phi_1(u)|,|(x-y)\cdot\nabla \phi_2(v)|\}\leq \max\{\|\nabla\Phi(u)(x-y)\|_{\infty},\|\nabla\Phi(v)(x-y)\|_{\infty}\}\\ \leq \max\{\|\nabla \Phi(u)\|_{\infty,\infty},\|\nabla \Phi(v)\|_{\infty,\infty}\} \|x-y\|_{\infty}$$ where $$\|\nabla \Phi(z)\|_{\infty,\infty}=\max_{\|w\|_{\infty}\leq 1}\|\nabla\Phi(z)w\|_{\infty}.$$ Furthermore, as $u,v\in G$, combining the above arguments, we deduce that we can set $$\alpha = \max_{z\in G}\|\nabla \Phi(z)\|_{\infty,\infty}.$$ Now, for $z\in G$, we have $$\|\nabla \Phi(z)\|_{\infty,\infty}=\max\{\|\nabla \phi_1(z)\|_{1},\|\nabla \phi_2(z)\|_{1}\},$$ and you have computed that $$\|\nabla \phi_1(z)\|_{1}=\frac{2z_1+2z_2}{(z_1^2+z_2^2+5)^2} \qquad\text{and}\qquad\|\nabla \phi_2(z)\|_{1}=\frac{1}{2(z_1+z_2)^{3/4}}.$$ As $z\in G=[0,0.4]\times [0.8,1.2]$, we find that $$\frac{2z_1+2z_2}{(z_1^2+z_2^2+5)^2}\leq\frac{2(0.4+1.2)}{(0^2+(0.8)^2+5)^2}\approx 0.1006 \leq 0.2$$ and $$\frac{1}{2(z_1+z_2)^{3/4}}\leq \frac{1}{2(0+0.8)^{3/4}}\approx 0.5911 \leq 0.6.$$ Therefore, we conclude that with $\alpha=0.6$, $$\|\Phi(x)-\Phi(y)\|_{\infty}\leq 0.6 \|x-y\|_{\infty},$$ which implies that $\Phi\colon G\to G$ is a strict contraction with respect to $\|\cdot\|_{\infty}$, and thus, as $(G,\|\cdot\|_{\infty})$ is a complete metric space, the Banach fixed point theorem implies that $\Phi$ has a unique fixed point $p\in G$. Furthermore, for every $x\in G$, the iterative sequence $\Phi^k(x)$ converges towards $p$. A simple numerical experiment (sketched below) shows that $$p = (0.163190947349524, 1.049361520947913)$$
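Here is that sketch (assuming numpy; by the theorem, any starting point in $G$ works):

```python
import numpy as np

def Phi(x):
    x1, x2 = x
    return np.array([1.0 / (5 + x1**2 + x2**2), (x1 + x2) ** 0.25])

x = np.array([0.2, 1.0])   # start at the centre of G
for _ in range(50):        # contraction factor 0.6, so 50 steps are plenty
    x = Phi(x)

print(x)  # ~ [0.16319095, 1.04936152], matching the fixed point above
```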
What is the smallest n-gon such that there can be an interior point further from all boundary points than the points are from each of their neighbors?
Let $A$ and $B$ be a pair of consecutive vertices and $P$ be the point in question. If $PA$ and $PB$ are both longer than $AB$, then angle $P$ in triangle $ABP$ has to measure less than $60°$. To make up a full revolution there have to be seven or more different angles of this type at $P$, each opposite a different side of the polygon. So there are no candidates with fewer than seven sides. But the center of a regular heptagon is farther from the vertices than the length of any side making seven sides a sharp lower bound.
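A one-line check of the heptagon fact (my own addition):

```python
import math

# Regular heptagon with circumradius 1: each side has length 2*sin(pi/7).
print(2 * math.sin(math.pi / 7))  # ~0.8678 < 1, so the centre is farther
                                  # from every vertex than the side length
```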
Probability distribution from mean time to failure
"Fail uniformly" probably means that they fail at a constant rate (with a different rate for each type of machine). Uniformly is not a good word in this context as it implies the machine failure time follows a uniform distribution (this would be roughly the same number of machines failing in every time period, which is not realistic). Failing at a constant rate $\lambda$ per unit time happens when the number of fails X in the interval from 0 to t (so $\lambda$t fails on average) follows a Poisson distribution $P(X = k) = \frac{{{(\lambda t)}^{k}}\exp (-\lambda t)}{k!}$ If roughly half of these machines have failed up to some time $t_0$ then half of the remaining will fail in the next interval up to $t_0$ and so on, decreasing all the time. If T is the time to failure of a machine, which is a random variable, then the probability that the machine is still working after time t is the same as the probability that X is 0 i.e. there has been no fail up to time t. Then $P(T&gt;t) = P(X = 0) = \exp (-\lambda t)$ This is the Reliability of the machine type, which is the proportion of machines which are still working after time t. The failure rate $\lambda$ is 1/MTTF for that type of machine. So for a given MTTF we can work out the proportion of machines which will fail up to time t using the reliability. e.g If lightbulbs have a MTTF of 300hrs then the proportion which are still working after 200hrs is $exp(-\frac{1}{300}*200) = 0.513$ or 51% The proportion which have failed up to time t is 1 - Reliability. This can be written as an integral up to time t of the probability density for T. Differentiation will then give the density for the variable T, but the Reliability is a more meaningful thing to focus on for practical purposes.
What does RMSD mean?
The $\|\cdot\|$ describes the norm of the vector enclosed, which is basically its length with regard to some definition: $$\| x \| = \sqrt{ x_1^2 + x_2^2 + \cdots + x_n^2 },$$ where $n$ is the number of dimensions of $x$. In your case the norm is squared, which cancels the square root in the norm. In addition, working in 3 dimensions ($x$, $y$ and $z$), you get to your second line, where just the respective dimensions of each vector are used. Just replace the $x$ of the above formula with $(v_i - w_i)$, write out the squared norm formula, and you will see the same result as in Wikipedia. With respect to the overall question (assuming you mean this Wikipedia article): RMSD takes two sets of points $v$ and $w$, which are given as sets of vectors. It computes the squared distance between respective pairs of vectors (that's the norm in the brackets), averages these over all pairs (hence the sum and the $1/n$ in front), and takes the square root of the result.
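A minimal numpy sketch of the formula (my own addition; the example points are made up):

```python
import numpy as np

def rmsd(v: np.ndarray, w: np.ndarray) -> float:
    """Root-mean-square deviation between two (n, 3) arrays of paired points."""
    return np.sqrt(np.mean(np.sum((v - w) ** 2, axis=1)))

v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
print(rmsd(v, w))  # 1.0: each pair of points is unit distance apart
```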
formalizing the theory of real numbers
Since you're curious, here's a curious fact. The computable reals have exactly the same first-order theory as the 'real' reals. And for any real-world (engineering, physics, ...) application one needs (and can manipulate) only computable reals. So arguably we don't need anything more than the first-order theory practically speaking. More mathematically... Notice that if you look carefully at the categorical second-order axiomatization of the reals, it has just one second-order axiom $X$ that states that every bounded set of reals has a least upper bound, but this axiom is useless unless you also have axioms that permit you to construct sets of reals. All $X$ can do by itself is to force the meta-system (say ZFC) to 'see' that all models of the second-order axiomatization are isomorphic, simply because $X$ 'invokes' the meta-system's viewpoint (namely to 'know' what are sets of reals). The meta-system is certainly going to be equivalent to a first-order one (and ZFC already is), because it must have a recursive set of rules, and hence if it is consistent then it has a countable model. So (in the words of André Nicolas) the problem of categoricity just gets transferred upwards. To make it clearer, suppose you believe that ZFC is meaningful. Then you clearly believe that ZFC is consistent. Then by a proof in ZFC you believe that there is a countable model $M$ of ZFC. In $M$ you can find the set $R$ corresponding to the reals as given by a construction (existential statement) in ZFC. $R$ satisfies the second-order axiomatization of reals from the viewpoint of $M$, but $R$ only has countably many elements from the viewpoint of ZFC. Do you consider $R$ to be the reals? No, but what are the reals? You can't just say "as constructed in ZFC", since $M$ is a model of ZFC and $R$ is a model of your chosen axiomatization according to $M$. Next you may try using second-order logic with Henkin semantics to axiomatize the real numbers, so that it is more 'independent' of the foundations. But then as mentioned above you need to add set-existence axioms to even be able to use the second-order supremum axiom $X$. What could you add? The obvious choice would be to permit construction of any set $\{ x : P(x) \}$ where $P$ is some $1$-parameter sentence over the language of real arithmetic. But would you allow $P$ to contain only first-order quantifiers? If so, then the whole thing ends up reducing to (being conservative over) the first-order theory of the reals, because such constructions are equivalent to definitorial expansions, and the existence of the supremum of definable bounded sets of reals is a first-order schema that is true in the reals and hence in any model of its (complete) first-order theory. If not, then you can construct $N = \{ n : \forall S\ ( 0 \in S \land \forall k\ ( k \in S \to k+1 \in S ) \to n \in S ) \}$ in the resulting theory $R_2$. Note that $R_2$ easily proves that $0 \in N$ and also that $\forall k\ ( k \in N \to k+1 \in N )$, so $R_2$ can carry out induction over natural numbers as follows. Given any $1$-parameter sentence $P$ such that $P(0) \land \forall n \in N\ ( P(n) \to P(n+1) )$, we can in $R_2$ construct $Q = \{ n : n \in N \land P(n) \}$ and prove that $0 \in Q \land \forall k\ ( k \in Q \to k+1 \in Q )$, and then prove that $\forall n \in N\ ( n \in Q )$ (by the definition of $N$), which gives $\forall n \in N\ ( P(n) )$. Thus $R_2$ interprets arithmetic.
Note that $R_2$ has a proof verifier program, and hence $R_2$ is essentially syntactically incomplete, unlike the first-order theory of the reals. But $R_2$ has a subtle issue of impredicativity, in that it can construct a set of objects defined using quantification over all sets of objects, including the one being defined. This circularity is precisely what led to Russell's paradox in naive set theory. So one could question whether $R_2$ is meaningful or not. Of course, ZFC proves that the reals (as constructed in ZFC) satisfy $R_2$, but ZFC is itself impredicative, so if you wish you can transfer that question upwards...
Canonical Divisor of Product of Smooth Curves is Ample
Since $\omega_{C_1}$ is ample, there exists a positive integer $m_1$ such that $\omega_{C_1}^{\otimes m_1}$ is very ample, i.e. there exist a positive integer $m_1$ and a closed embedding $\phi_1 : C_1 \hookrightarrow\mathbb P^{n_1}$ such that $\omega_{C_1}^{\otimes m_1} =\phi_1^\star \mathcal O_{\mathbb P^{n_1}} (1)$. Similarly, since $\omega_{C_2}$ is ample, there exist a positive integer $m_2$ and a closed embedding $\phi_2 : C_2 \hookrightarrow \mathbb P^{n_2}$ such that $\omega_{C_2}^{\otimes m_2} = \phi_2^\star \mathcal O_{\mathbb P^{n_2}}(1)$. Recall moreover that $\omega_{C_1 \times C_2} \cong p_1^\star\,\omega_{C_1} \otimes p_2^\star\,\omega_{C_2}$, where $p_1, p_2$ denote the projections of $C_1 \times C_2$ onto its factors. Hence $$ \omega_{C_1 \times C_2}^{\otimes (m_1m_2)}= (\phi_1 \times \phi_2)^\star \left( \pi_1^\star \mathcal O_{\mathbb P^{n_1}}(m_2)\otimes \pi_2^\star \mathcal O_{\mathbb P^{n_2}} (m_1)\right),$$ where $\pi_1 : \mathbb P^{n_1} \times \mathbb P^{n_2} \to \mathbb P^{n_1}$ and $\pi_2 : \mathbb P^{n_1} \times \mathbb P^{n_2} \to \mathbb P^{n_2}$ are the canonical projections. Now define $N: = \binom{n_1 + m_2}{n_1}\binom{n_2 + m_1}{n_2}-1$, and consider the map $\varphi : \mathbb P^{n_1} \times \mathbb P^{n_2} \to \mathbb P^{N} $ defined as the composition of the $m_2$th and $m_1$th Veronese embeddings $$\mathbb P^{n_1} \hookrightarrow \mathbb P^{\binom{n_1 + m_2}{n_1} - 1}, \ \ \ \ \mathbb P^{n_2} \hookrightarrow \mathbb P^{\binom{n_2 + m_1}{n_2} - 1}$$ with the Segre embedding $$ \mathbb P^{\binom{n_1 + m_2}{n_1} - 1} \times \mathbb P^{\binom{n_2 + m_1}{n_2} - 1} \hookrightarrow \mathbb P^{\binom{n_1 + m_2}{n_1}\binom{n_2 + m_1}{n_2}-1} = \mathbb P^{N} $$ (recall that the Segre embedding sends $\mathbb P^{a} \times \mathbb P^{b}$ into $\mathbb P^{(a+1)(b+1)-1}$). It is well known that $$ \pi_1^\star \mathcal O_{\mathbb P^{n_1}}(m_2)\otimes \pi_2^\star \mathcal O_{\mathbb P^{n_2}} (m_1) =\varphi^\star\mathcal O_{\mathbb P^{N}}(1).$$ Thus we have exhibited a closed embedding $\psi : = \varphi \circ (\phi_1 \times \phi_2) : C_1 \times C_2 \hookrightarrow \mathbb P^N$ such that $$ \omega_{C_1 \times C_2}^{\otimes (m_1m_2)}=\psi^\star \mathcal O_{\mathbb P^{N}}(1). $$ Hence $\omega_{C_1 \times C_2}^{\otimes (m_1 m_2)}$ is very ample, and $\omega_{C_1 \times C_2}$ is ample.
How to check if these vectors are normal or orthogonal?
Dot Product and Orthogonal Vectors: Vectors $\vec {a}$ and $\vec{b}$ are orthogonal (or perpendicular) to each other if $\vec{a} \cdot \vec{b} = 0$.
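In practice the test is a one-liner (my own sketch, assuming numpy; the tolerance parameter is there only for floating-point noise):

```python
import numpy as np

def are_orthogonal(a, b, tol=1e-12):
    """Two vectors are orthogonal exactly when their dot product is zero."""
    return abs(np.dot(a, b)) < tol

print(are_orthogonal([1, 2], [-2, 1]))  # True:  1*(-2) + 2*1 = 0
print(are_orthogonal([1, 2], [2, 1]))   # False: 1*2 + 2*1 = 4
```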
Convergence of the sequence $n ((ea)^{1/n} - a^{1/n})$ for positive $a$
Consider the limit $$\lim_{n \to \infty} n\left((ea)^{1 \over n} - a ^{1 \over n}\right) = \lim_{n \to \infty} n\left(e^{1 \over n}a^{1 \over n} - a ^{1 \over n}\right) = \lim_{n \to \infty} n\, a^{1 \over n}\left(e^{1 \over n} - 1\right) = \lim_{n \to \infty} \frac{a^{1 \over n}(e^{1 \over n} - 1)}{1 \over n}. $$ Substitute $x = {1 \over n}$: $$ \lim_{x \to 0^+} \frac{a^{x}(e^{x} - 1)}{x}. $$ Using L'Hospital's rule for the $0 \over 0$ form we get $$ \lim_{x \to 0^+} a^x\left(\ln(a)\cdot(e^{x} - 1) +e^x\right) = 1. $$
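A one-line symbolic confirmation (my own addition, assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)

# The substituted limit from above: a**x * (e**x - 1)/x as x -> 0+.
print(sp.limit(a**x * (sp.exp(x) - 1) / x, x, 0, '+'))  # 1, independent of a
```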
An alternative method for a problem
Since $f(-1)=-1<0$ and $f(-2)=11>0$, by the Intermediate Value Theorem we may conclude that there is at least one zero in $(-2,-1)$. Now notice that $f'(x)=4x^3+3$ and $f''(x)=12x^2\geq 0$, which imply that $f'$ is increasing. Since $f'(-1)=-1$, we have that $f'$ is negative and $f$ is strictly decreasing on $(-\infty,-1]$. It then follows that the zero found above is the unique zero of $f$ in $(-\infty,-1]$.
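A numerical illustration (my own addition): the values quoted above are consistent with $f(x)=x^4+3x+1$, which I assume here, and bisection on the sign change locates the zero.

```python
# Assuming f(x) = x^4 + 3x + 1, consistent with f'(x) = 4x^3 + 3,
# f(-1) = -1 and f(-2) = 11 as stated above.
def f(x):
    return x**4 + 3*x + 1

lo, hi = -2.0, -1.0            # f(lo) > 0 > f(hi)
for _ in range(60):            # bisect on the sign change
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
print(hi)  # ~ -1.3074, the unique zero of f in (-2, -1)
```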
Using the Heaviside function to represent a given graph
I am assuming every square is 1 unit in your picture. The first line is $$\frac 53 (t-2),$$ and the second line is $$-\frac 53 (t-8).$$ This gives: $$\frac{1}{3} (5 (t-2)) (\theta (t-2)-\theta (t-5))-\frac{1}{3} (5 (t-8)) (\theta (t-5)-\theta (t-8))$$
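A quick numerical check of the formula (my own addition; numpy's heaviside plays the role of $\theta$):

```python
import numpy as np

def f(t):
    t = np.asarray(t, dtype=float)
    H = lambda s: np.heaviside(s, 0.5)   # theta in the formula above
    return (5 * (t - 2) / 3) * (H(t - 2) - H(t - 5)) \
         - (5 * (t - 8) / 3) * (H(t - 5) - H(t - 8))

print(f([2, 3.5, 5, 6.5, 8]))  # [0.  2.5 5.  2.5 0. ]: a triangular pulse
                               # rising on [2, 5] and falling on [5, 8]
```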
Change of variables with t=xy as the independent variable.
Since $y''+\frac{2y'}{x}+y=0$, multiplying by $-x$ gives $xy=-x\left(\frac{d}{dx}\left(\frac{dy}{dx}\right)+\frac{2}{x}\frac{dy}{dx}\right)$. $$\because t=xy, \therefore dt=x\,dy+y\,dx, \therefore \frac 1{\frac{dx}{dt}}=\frac{dt}{dx}=x\frac{dy}{dx}+y,\therefore \frac{dx}{dt}=\frac 1{x\frac{dy}{dx}+y}$$ $$\therefore d\frac{dx}{dt}=-\frac{d(x\frac{dy}{dx}+y)}{(x\frac{dy}{dx}+y)^2}=-\frac{x\,d\frac{dy}{dx}+dx\frac{dy}{dx}+dy}{(x\frac{dy}{dx}+y)^2}=-\frac{x\,d\frac{dy}{dx}+2\,dy}{(x\frac{dy}{dx}+y)^2} $$ $$\therefore \frac{d^2x}{dt^2}=\frac{d}{dt}\left(\frac{dx}{dt}\right)=\frac{dx}{dt}\cdot\frac{d}{dx}\left(\frac{dx}{dt}\right)=\frac{dx}{dt}\cdot\left( -\frac{x\frac{d}{dx}\left(\frac{dy}{dx}\right)+2\frac{dy}{dx}}{(x\frac{dy}{dx}+y)^2} \right)= \frac{dx}{dt}\cdot\frac {xy}{(x\frac{dy}{dx}+y)^2}=\frac t{(x\frac{dy}{dx}+y)^3} $$ So $\frac{d^2x}{dt^2}-t\left(\frac{dx}{dt}\right)^3=0$.
Can we use inclusion and exclusion principles on sums $(\sigma)$?
A few comments on the notation: $i\ne j\ne k$ means simply that $i\ne j$ and $j\ne k$. It does not imply $i\ne k$. So the triplet $(0,1,0)$ satisfies the condition. I am assuming you want to sum over the condition $i,j,k$ distinct, i.e. $i\ne j\ne k\ne i$. You can't denote inequalities and the set of triplets that satisfy those inequalities by the same labels $(a),(b)$ or $(c)$. Let $A,B,C$ denote the sets of triples $(i,j,k)\in\{0,1,...,n\}^3$ that satisfy the inequalities $(a),(b),(c)$ as you have defined above, respectively. Now onto the errors in your approach: Since the inequalities are connected by the logical AND operator, i.e. all of them need to be satisfied simultaneously, we are summing over the triplets in $A\cap B\cap C$ and not $A\cup B\cup C$. So you want to find$$\sum_{(i,j,k)\in A\cap B\cap C}3^{-i-j-k}.$$ $A\cap B$ is not the set of $(i,j,k)$ that satisfy $i=k\ne j$. You require $i\ne j$ and $j\ne k$ but not $i=k$. So $(0,1,2)$ and $(0,1,0)$ both belong to $A\cap B$. Similarly, $A\cap B\cap C$ is not the set of triplets with $i=j=k$, but rather the set of triplets with $i\ne j\ne k\ne i$. Your final expression should be$$\sum_{i\ne j\text{ or }j\ne k\text{ or }k\ne i}3^{-i-j-k}=3\sum_{i\ne j}3^{-i-j-k}-3\sum_{i\ne j\ne k}3^{-i-j-k}+\color{red}{\sum_{i\ne j\ne k\ne i}3^{-i-j-k}}$$and we are interested in finding the red term. I am not sure if this is particularly easier to evaluate than the original expression. The original expression can be easily evaluated by converting it into a nested summation:$$\sum_{i\ne j\ne k\ne i}3^{-i-j-k}=\sum_{i=0}^n\sum_{i\ne j=0}^n\sum_{i,j\ne k=0}^n3^{-i-j-k}\\=\sum_{i=0}^n3^{-i}\sum_{i\ne j=0}^n3^{-j}\sum_{i,j\ne k=0}^n3^{-k}$$Focus on the innermost summation. We are summing powers of $3$ except $3^{-i}$ and $3^{-j}$. Also, since $i\ne j$, $3^{-i}$ and $3^{-j}$ are distinct terms. Thus,$$\sum_{i,j\ne k=0}^n3^{-k}=\left(\frac1{3^0}+\frac1{3^1}+...+\frac1{3^n}\right)-\frac1{3^i}-\frac1{3^j}=\frac32(1-3^{-n-1})-\frac1{3^i}-\frac1{3^j}.$$Moving on to the middle summation,$$\sum_{i\ne j=0}^n\frac1{3^j}\left(\frac32(1-3^{-n-1})-\frac1{3^i}-\frac1{3^j}\right)=\left(\frac32(1-3^{-n-1})-\frac1{3^i}\right)\sum_{i\ne j=0}^n\frac1{3^j}-\sum_{i\ne j=0}^n\frac1{9^j}\\=\left(\frac32(1-3^{-n-1})-\frac1{3^i}\right)^2-\left(\frac98(1-9^{-n-1})-\frac1{9^i}\right).$$Expanding the square and summing over $i=0\to n$,$$\left[\frac94(1-3^{-n-1})^2-\frac98(1-9^{-n-1})\right]\sum_{i=0}^n\frac1{3^i}-3(1-3^{-n-1})\sum_{i=0}^n\frac1{9^i}+\sum_{i=0}^n\frac2{27^i}\\=\left[\frac94(1-3^{-n-1})^2-\frac98(1-9^{-n-1})\right]\frac32(1-3^{-n-1})-\frac{27}8(1-3^{-n-1})(1-9^{-n-1})+\frac{27}{13}(1-{27}^{-n-1})\\=\frac{27}8(1-3^{-n-1})^3-\frac{81}{16}(1-9^{-n-1})(1-3^{-n-1})+\frac{27}{13}(1-{27}^{-n-1}).$$You can choose to simplify further, but I will leave it here. You can verify that the summation tends to $81/208$ as $n\to\infty$.
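Since the closed form above is easy to get wrong, here is a brute-force verification (my own addition; exact rational arithmetic via the fractions module):

```python
from fractions import Fraction

def brute(n):
    """Direct sum of 3^(-i-j-k) over distinct triples from {0, ..., n}."""
    return sum(Fraction(1, 3**(i + j + k))
               for i in range(n + 1) for j in range(n + 1) for k in range(n + 1)
               if i != j and j != k and k != i)

def closed(n):
    """The closed form derived above."""
    t3, t9, t27 = Fraction(1, 3**(n+1)), Fraction(1, 9**(n+1)), Fraction(1, 27**(n+1))
    return (Fraction(27, 8) * (1 - t3)**3
            - Fraction(81, 16) * (1 - t9) * (1 - t3)
            + Fraction(27, 13) * (1 - t27))

for n in range(6):
    assert brute(n) == closed(n)
print(float(Fraction(81, 208)))  # 0.3894..., the n -> infinity limit
```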
Is this identity for the Dihedral group correct?
$D_4$ has five subgroups isomorphic to $Z_2$; let us look at two of them. Picture $D_4$ as the group of rotations and reflections of a square, with generator $a$ of order 4 rotating the square by 90 degrees and generator $b$ of order 2 reflecting the square over the y-axis. The subgroup generated by $b$ is not normal: observe that $aba^{-1}$ is the reflection over the x-axis, which is not in the subgroup generated by $b$. On the other hand, the subgroup of order 2 generated by $a^2$ is in fact normal: it's a 180-degree rotation that commutes with the rest of $D_4$. And yes, $D_4/\langle a^2\rangle$ is isomorphic to $D_2=Z_2\times Z_2$ because it's the only group of order 4 without elements of order 4.
How can $\lim_{x\to3}\left(\frac{\sqrt[3]{32x-96}}{x^{2}-2x-3}\right)$ be shown to equal $2$?
Factor the quantity inside the cube root, and factor the denominator: $$\frac{\sqrt[3]{32x-96}}{x^2-2x-3} = \frac{\sqrt[3]{32(x-3)}}{(x+1)(x-3)} = \frac{\sqrt[3]{32}}{(x+1)(x-3)^{\frac23}}$$ Take the limit as $x\to 3$, and we get $\frac{\sqrt[3]{32}}{4}=\frac1{\sqrt[3]{2}}$ times a $\frac1{0}$ form. The quantity $(x-3)^{\frac23}$ we're dividing by is positive on both sides, so the limit is $\infty$. Uh, oops. There's a mistake here, and it looks like it's the textbook's statement of the problem.
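As a quick numerical confirmation that the limit diverges rather than equalling $2$ (my own addition; np.cbrt handles the negative cube roots to the left of $3$):

```python
import numpy as np

# Evaluate the expression near x = 3: the values blow up on both sides.
for x in (2.9, 2.99, 2.999, 3.001, 3.01, 3.1):
    print(x, np.cbrt(32 * x - 96) / (x**2 - 2 * x - 3))
# e.g. ~3.78 at 2.9 but ~79.4 at 2.999: nowhere near 2
```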
Change of basis, confused about bases
The answer as to the values of the basis vectors $u_i'$ that compose the basis $B'$ is that they completely depend on the vectors which define your "original" basis $B$. The values of the $u_i'$ thus calculated correspond to the coordinates of the $u_i'$ vectors, expressed in basis $B$. That's the "raw" answer to your question. But let me discuss a bit for your benefit. Generally, indeed, $B$ is the "canonical" basis (with each vector having a $1$ in some coordinate, and $0$s everywhere else). But this in no way is forced to be the case. However, the formula for change of basis IS universal: that means that this formula works no matter the $B$ and $B'$ involved. So long as you know the way the basis vectors of $B'$ are expressed in basis $B$'s coordinates (that is, your $[a\ b]^T$ and $[c\ d]^T$ above, which define the change-of-basis matrix), you can calculate your new basis vectors. There is one tricky thing though: expressing the basis vectors of $B$, in the $B$ coordinate system, will ALWAYS make it seem like $B$ is the canonical basis! Why? Because the first basis vector of $B$ will have coordinates $(1, 0, 0, 0, ..., 0)$ in basis $B$, etc. No matter the basis $B$. Another way to phrase this same idea: the change-of-basis matrix from $B$ to $B$ is obviously the identity matrix, as nothing changes. And the column vectors of the identity matrix are precisely what the canonical basis "looks like". The distinction you have to make is "what is the geometry of my basis vectors?". If they're not all of length 1, or not all pairwise perpendicular to each other, your basis $B$ is quite probably not the canonical basis (which has to be orthonormal). The reason for the confusion is that when you work in a purely algebraic context (like most textbooks or exercises do), you have to "build up" your geometry from algebraic symbols, and expressing your basis $B$ in the canonical basis (so that you can know the geometry of the vectors of $B$) is sort of the only way to do things intuitively (i.e., transform algebra into geometry through the use of number coordinates, by relying on the convention that you'll always start from the orthonormal 'canonical basis'). Do check out 3blue1brown's YouTube series "Essence of Linear Algebra", where he explains (among other things) change of basis in an intuitive, visual way. PS: Do note that when I say "the formula for change of basis is universal", there is a slight caveat. Notions of covariance vs contravariance enter into play later on. I suggest the YouTube channel eigenchris to see what this means when you get there. In a nutshell, if you want your (column-type) vector to be an absolute, an "invariant", then increasing the length of your basis vectors will force you to reduce the value of the respective coordinates, so that the shape of your vector stays the same "geometrically" speaking. Changing "the same way as the basis vectors" is called covariance; changing "the opposite way as the basis vectors" is called contravariance (and you'll use the matrix inverse of your change-of-basis matrix). This is the other way around for (row-type) covectors. Basis covectors are contravariant, and covector coordinates are covariant.
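To make the coordinate story concrete, here is a tiny numpy sketch (the numbers are made up, my own addition): the columns of $P$ hold the $B$-coordinates of the new basis vectors, and coordinates transform with $P^{-1}$, which is the contravariance mentioned in the PS.

```python
import numpy as np

# Columns are the B-coordinates of the new basis vectors u1', u2'
# (the [a b]^T and [c d]^T of the question; values chosen arbitrarily).
P = np.array([[1.0, 1.0],
              [0.0, 2.0]])

v_B = np.array([3.0, 4.0])          # some vector, expressed in basis B
v_Bprime = np.linalg.solve(P, v_B)  # the same vector, expressed in B'
print(v_Bprime)                     # [1. 2.]: indeed 1*u1' + 2*u2' = v
```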
Integration of a $2$-form
I've never seen a definition of the integral of a $2$-form along a "2D path" $C: I^2 \to \mathbb{R}^2$ (has anyone?), but it seems clear to me that the sensible definition should be $$ \int_C \omega := \int_{I^2} C^*\omega$$ where $C^*\omega$ is the pull-back of $\omega$ by $C$. Recall that it is defined by $$ (C^*\omega)_{|p} (u, v) := \omega_{|C(p)}(dC_{|p} (u), dC_{|p} (v))~.$$ NB: More generally, we could define the same way integrals $\int_C \omega$ where $\omega$ is a $k$-form on a $k$-dimensional manifold $M$ and $C$ is a smooth map $U \subset \mathbb{R}^k \to M$. Let's come back to your problem. Practically, you can compute $C^* \omega$ by letting $(x,y) = C(t_1,t_2)$ in the expression of $\omega$; you get: $$ \begin{align*} x &= (t_1 + 1) \cos (2\pi t_2)\\ y &= (t_1 + 1) \sin (2\pi t_2) \\ \end{align*} $$ hence $$ \begin{align*} x^2 + y^2 &= (t_1+1)^2\\ dx &= \cos (2\pi t_2)\,dt_1 - 2\pi (t_1 + 1)\sin(2\pi t_2)\, dt_2\\ dy &= \sin (2\pi t_2)\,dt_1 + 2\pi (t_1 + 1)\cos(2\pi t_2)\, dt_2\\ dx \wedge dy &= 2\pi (t_1+1)\, dt_1\wedge dt_2\\ \end{align*} $$ thus $$ C^* \omega = {2\pi\, dt_1\wedge dt_2\over t_1 +1}. $$ You can now easily compute your integral: assuming $I = [0,1]$ (you don't say what $I$ is), $$ \int_C \omega = \int_{I^2}{2\pi\, dt_1\wedge dt_2\over t_1 +1} = \int_{I^2}{2\pi\, dt_1\, dt_2\over t_1 +1}, $$ which gives us by Tonelli's theorem $$ \int_C \omega = 2\pi\left(\int_{0}^1{dt_1\over t_1 +1}\right)\left(\int_{0}^1 dt_2\right) = 2\pi \log (2). $$ NB: Another approach ("just for fun") would be to work in polar coordinates; I'll let you try to figure out how to do that.
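A symbolic check of the pull-back computation (my own addition, assuming sympy; for a $2$-form $f\,dx\wedge dy$, the coefficient of the pull-back is $f$ composed with $C$ times the Jacobian determinant of $C$):

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
x = (t1 + 1) * sp.cos(2 * sp.pi * t2)
y = (t1 + 1) * sp.sin(2 * sp.pi * t2)

# Pull back omega = dx ^ dy / (x^2 + y^2): coefficient = Jacobian det / (x^2 + y^2).
J = sp.Matrix([x, y]).jacobian([t1, t2]).det()
integrand = sp.simplify(J / (x**2 + y**2))
print(integrand)                                        # 2*pi/(t1 + 1)
print(sp.integrate(integrand, (t1, 0, 1), (t2, 0, 1)))  # 2*pi*log(2)
```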
How to calculate this efficiently?
For the coefficient of $x$ to be 3, $m-n=3$ as you said, so $m>n$ and $m-n>0$ and we can factor the original product: $$\begin{align} (1+x)^m(1-x)^n&=(1+x)^{m-n+n}(1-x)^n \\ &=(1+x)^{m-n}(1+x)^n(1-x)^n \\ &=(1+x)^{m-n}\left((1+x)(1-x)\right)^n \\ &=(1+x)^{m-n}(1-x^2)^n \\ &=(1+x)^3(1-x^2)^n \end{align}$$ Now, the coefficient of $x^2$ is the sum of the coefficients of $x^2$ in $(1-x^2)^n=1-nx^2+\cdots$ and $(1+x)^3=1+3x+3x^2+x^3$, so $-6=-n+3$ and $n=9$. Since $m-n=3$, $m=12$.
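A quick symbolic check (my own addition, assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.expand((1 + x)**12 * (1 - x)**9)
print(p.coeff(x, 1), p.coeff(x, 2))  # 3 -6, as required
```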
Finding $E[X^2]$ for $X \sim Bin(25,0.61)$
Your calculation is not correct. $X\sim\text{Binomial}(n,p)\iff X=\displaystyle\sum_{i=1}^{n}Y_i,$ where $Y_i\stackrel{iid}{\sim} \text{Bernoulli}(p).$ In this case $n=25$ and $p=0.61.$ $X\neq X^2$, since $X$ need not be $0$ or $1.$ The Bernoulli random variables $Y_i$ are $0$ or $1,$ and so $Y_i=Y_i^2.$ Note that $X^2=\displaystyle\sum_{i=1}^{n}Y_i^2+\underset{i\neq j}{\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}}Y_iY_j=\displaystyle\sum_{i=1}^{n}Y_i+\underset{i\neq j}{\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}}Y_iY_j.$ Now the $Y_i$ are iid with $\mathbb{E}(Y_i)=p,$ and so $\underset{i\neq j}{\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}}\mathbb{E}(Y_iY_j)=n(n-1)\mathbb{E}(Y_i)\mathbb{E}(Y_j)=n(n-1)p^2,$ which gives $$\mathbb{E}(X^2)=np+n(n-1)p^2=238.51$$
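Both the closed form and a simulation confirm the value (my own addition, assuming numpy):

```python
import numpy as np

n, p = 25, 0.61
print(n * p + n * (n - 1) * p**2)     # 238.51, the closed form above

rng = np.random.default_rng(0)
x = rng.binomial(n, p, size=10**6)
print(np.mean(x.astype(float) ** 2))  # ~238.5 by Monte Carlo
```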
Finding the positive/negative domain of a simple expression
You can go from $\frac{900-6X}{X+50}>0$ to $900-6X>0$ only if $X+50$ is positive. If it is negative, you must flip the inequality sign.
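If you want to double-check the full case analysis (my own addition, assuming sympy):

```python
import sympy as sp

X = sp.symbols('X', real=True)
print(sp.solve_univariate_inequality((900 - 6*X) / (X + 50) > 0, X))
# (-50 < X) & (X < 150): positive exactly when both factors share a sign
```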
Expected amount of time of arrivals
In general for a jointly continuous random variable $(X,Y)$ you can compute $E[X \mid X>Y]$ through the joint density: $$E[X \mid X>Y]=\frac{\int_{-\infty}^\infty \int_y^\infty x f_{X,Y}(x,y) dx dy}{\int_{-\infty}^\infty \int_y^\infty f_{X,Y}(x,y) dx dy}.$$ In your particular problem, since you have independence, you can "marginalize" these integrals by writing $$E[X \mid X>Y]=\int_{-\infty}^\infty E[X \mid X>y] f_Y(y) dy.$$ This is useful to do because now that middle expectation (involving conditioning on $X>y$ for a fixed real number $y$) can be computed using the form of the memoryless property that you know.
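As an illustration (my own addition; the exponential distributions and rates are assumptions, since they aren't restated here): a Monte Carlo check that $E[X\mid X>Y]=E[Y\mid X>Y]+1/\lambda_X$ when $X$ is exponential, which is exactly the memoryless shortcut.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_x, lam_y = 1.0, 2.0                 # assumed rates, for illustration only
x = rng.exponential(1 / lam_x, 10**6)
y = rng.exponential(1 / lam_y, 10**6)

mask = x > y
print(np.mean(x[mask]))                 # Monte Carlo estimate of E[X | X > Y]
print(np.mean(y[mask]) + 1 / lam_x)     # memorylessness: E[Y | X > Y] + 1/lam_x
```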
Where does $-b/2a$ come from?
It comes from completing the square. You write $$ax^2+bx+c=a\left(x^2+\frac ba x +\frac ca\right)=a\left(\left(x+\frac b{2a}\right)^2+\frac ca-\frac {b^2}{4a^2}\right)$$ and the maximum or minimum is at $x=-\frac b{2a}$ depending on the sign of $a$
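For a concrete instance (numbers of my own choosing): with $f(x)=2x^2-8x+3$ we have $a=2$ and $b=-8$, so $-\frac b{2a}=2$, and indeed $$2x^2-8x+3=2\left((x-2)^2-4+\tfrac32\right)=2(x-2)^2-5,$$ which attains its minimum $-5$ at $x=2$ since $a=2>0$.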