Autocorrelation problem, regression analysis
Ah got it now, silly me: $corr(u_t, u_{s})=0$ implies $cov(u_t, u_{s})=0$ implies $E(u_t u_s)=0$ $corr(\Delta u_t, \Delta u_{t-1})=\frac{cov(\Delta u_t, \Delta u_{t-1})}{sd(\Delta u_t)sd(\Delta u_{t-1})}=\frac{E[( u_t- u_{t-1})( u_{t-1}- u_{t-2})]}{var(\Delta u_t)}$ Multiplying out the above expected value gives a bunch of $E(u_t u_s)=0$ terms and $-\sigma^2_u$, so we get $corr(\Delta u_t, \Delta u_{t-1})=\frac{-\sigma^2_u}{var(\Delta u_t)}=\frac{-\sigma^2_u}{2\sigma^2_u}=-0.5$ As desired.
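A quick Monte Carlo sanity check of the $-0.5$ value; a minimal sketch assuming i.i.d. errors (numpy assumed, seed and sample size arbitrary):

```python
# Simulate i.i.d. u_t and check corr(Δu_t, Δu_{t-1}) ≈ -0.5.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=1_000_000)             # i.i.d. errors u_t
du = np.diff(u)                            # Δu_t = u_t - u_{t-1}
print(np.corrcoef(du[1:], du[:-1])[0, 1])  # ≈ -0.5
```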
Computing $E[ {\rm Tr}\{(ZZ^T)^2 \}]$ for $Z$ Gaussian.
$Z_i$'s i.i.d $\sim N(0,1)$ then $X=\sum Z_i^2 \sim \chi^2_n$. Now $E[Tr\{(ZZ')^2\}]=E[\{\sum Z_i^2\}^2]=E(X^2)$. Now As $X\sim \chi^2_n$ we have $E(X^2)=Var(X)+E^2(X)=2n+n^2$ (i.e. the answer is neither $3n$ nor $3n^2$)
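A Monte Carlo check of $E(X^2)=2n+n^2$ (numpy assumed; with $n=5$ the target is $35$):

```python
# Empirical E[(Σ Z_i^2)^2] for Z_i ~ N(0,1), compared with 2n + n^2.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000
X = (rng.normal(size=(reps, n))**2).sum(axis=1)  # X ~ chi^2_n
print(np.mean(X**2), 2*n + n**2)                 # both ≈ 35
```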
Diagonalizing a matrix
The question is fair. Fix $T: V \to V$, linear, with $\dim V = n$. Take a basis $B = \{b_i\}_{i=1}^n$. Suppose that $T$ is diagonalizable, and fix $E = \{e_i\}_{i=1}^n$ a basis of eigenvectors. We have that $[T]_{E}$ is a diagonal matrix. But what's the point of computing this matrix? We want to know easily the value of $Tx$, but we usually have the expression for $x$ in the basis $B$, not in the basis $E$. Explicitly: $$[T]_{B} = [{\rm Id}]_{E,B}[T]_{E}[{\rm Id}]_{B,E}.$$ If you don't go through the hassle of computing $P = [{\rm Id}]_{B,E}$, as above, and do: $$[T]_E [x]_B,$$ you won't get anything meaningful. However, we have: $$[Tx]_E = [T]_E [x]_E \quad\text{and}\quad [Tx]_B = [T]_B[x]_B.$$ The point is: $$[Tx]_B =[T]_B[x]_B =[{\rm Id}]_{E,B}[T]_{E}[{\rm Id}]_{B,E}[x]_B =[{\rm Id}]_{E,B}\color{red}{[T]_{E}[x]_E} =[{\rm Id}]_{E,B}\color{red}{[Tx]_{E}} , $$ where the part in red is easy to compute.
Sum of two arbitrary functions is a weak solution to the 1-d wave equation
The key point is that in the new coordinates $u = x+t$, $v=x-t$ the expression $\varphi_{tt} - \varphi_{xx}$ becomes $-4\varphi_{uv}$. Indeed, we have $x = (u+v)/2$ and $t=(u-v)/2$, hence $$\varphi_u = \varphi_x x_u + \varphi_t t_u = \frac12 (\varphi_x + \varphi_t)$$ $$\varphi_{uv} = \frac12 (\varphi_x + \varphi_t)_x x_v + \frac12 (\varphi_x + \varphi_t)_t t_v = \frac14(\varphi_{xx} + \varphi_{tx} - \varphi_{xt} - \varphi_{tt}) = \frac14(\varphi_{xx} - \varphi_{tt}) $$ In the $uv$ coordinates, $f(x+t)$ becomes $g(u)$, independent of $v$. The change of variables contributes the Jacobian factor $\mathrm{d}x\,\mathrm{d}t = \frac12\,\mathrm{d}u\,\mathrm{d}v$, so the integral becomes $$ \iint_{\mathbb{R^2}} f(x +t ) (\varphi_{tt} - \varphi_{xx}) \,\mathrm{d}x \,\mathrm{d}t = -2 \int_\mathbb{R}\left( \int_\mathbb{R} \varphi_{uv}\,dv\right) g(u)\,du $$ where the inner integral is zero because $\varphi$ has compact support.
Section of cone through the rotated plane (with and without offset)
I suspect there are a number of errors in the equations, but it's unclear because the entire idea behind all these transformations of coordinates is unclear. Yes, the ellipse projected onto an oblique plane can be rotated and translated in 3-D space back onto a plane parallel to the circle. But what's the purpose of that? Or you could rotate and translate some ellipse in the parallel plane onto the projected ellipse, but how do you construct the correct ellipse in the parallel plane to begin with? It seems to me a much simpler approach is to write out the equations of the cone and the plane in three dimensions ($x,$ $y,$ and $z$ coordinates) and solve the equations. If the circle is parallel to the $x,y$ plane of the first set of coordinates, it may be easiest to write the equations first in that system and then transform the coordinates (all three coordinates, not just $x$ and $y$) before solving the equations. Here's an attempt via the second approach. I make a few assumptions based on an interpretation of the diagrams of the cone and the intersecting plane. Assume the coordinates of the point $O$ in all three coordinate systems are $(x^r_O,y^r_O,z^r_O) = (x^g_O,y^g_O,z^g_O) = (x^b_O,y^b_O,z^b_O) = (0,0,0).$ Assume the coordinates of $O'$ in the "red" system are $(x^r_{O'},y^r_{O'},z^r_{O'}) = (0,0,z^r_{O'}).$ Assume the circle $CC'$ has radius $R$ and is parallel to the red plane, so the "red" coordinates of points on that circle satisfy the simultaneous equations \begin{align} (x^r)^2 + (y^r)^2 &= R^2,\\ z^r &= z^r_{O'}. \end{align} Now to find an equation of the cone whose vertex is at $P$ and whose sides pass through the circle $CC',$ consider an arbitrary cross-section of the cone parallel to the red plane. The cross-section is a circle with center on the line $PO'$ and radius proportional to the distance from the parallel plane through $P.$ In particular, the center of the cross-section has "red" coordinates $(x^r,y^r,z^r) = \left(h(z^r - z^r_{O'}), k(z^r - z^r_{O'}), z^r \right)$ where $$h = \frac{x^r_P}{z^r_P - z^r_{O'}} \quad\text{and}\quad k = \frac{y^r_P}{z^r_P - z^r_{O'}},$$ and the radius is $\left\lvert\dfrac{z^r_P - z^r}{z^r_P - z^r_{O'}}\right\rvert R.$ The equation of the cone in "red" coordinates is therefore $$ \left(x^r - h(z^r - z^r_{O'})\right)^2 + \left(y^r - k(z^r - z^r_{O'})\right)^2 = \left(\frac{z^r_P - z^r}{z^r_P - z^r_{O'}} R\right)^2. \tag1 $$ Now to find the equation in "blue" coordinates, we need to work out the conversion of coordinates. The point with "blue" coordinates $(x^b,y^b,z^b)_b$ has "green" coordinates $$(x^g,y^g,z^g)_g = (x^b\cos\beta + z^b\sin\beta, y^b, -x^b\sin\beta + z^b\cos\beta)_g.$$ (This assumes that a small positive rotation angle $\beta$ would bring the positive $z$ axis of the blue plane closer to the positive $x$ axis of the green plane; if the positive direction of rotation is in the other direction, just reverse the sign of $\sin\beta$ in the formula.) The point with "green" coordinates $(x^g,y^g,z^g)_g$ has "red" coordinates $$(x^r,y^r,z^r)_r = (x^g, y^g\cos\alpha - z^g\sin\alpha, y^g\sin\alpha + z^g\cos\alpha)_r$$ (assuming the positive direction of rotation takes the positive $y$ axis toward the positive $z$ axis; if it goes the other way, reverse the sign of $\sin\alpha$). 
Now suppose a point on the cone has "blue" coordinates $(x^b,y^b,z^b)_b.$ The "red" coordinates of that point, $(x^r,y^r,z^r)_r,$ have the formulas \begin{align} x^r &= x^g = x^b\cos\beta + z^b\sin\beta,\\[6pt] y^r &= y^g\cos\alpha - z^g\sin\alpha \\ &= y^b\cos\alpha - (-x^b\sin\beta + z^b\cos\beta)\sin\alpha \\ &= x^b\sin\beta\sin\alpha + y^b\cos\alpha - z^b\cos\beta\sin\alpha,\\[6pt] z^r &= y^g\sin\alpha + z^g\cos\alpha \\ &= y^b\sin\alpha + (-x^b\sin\beta + z^b\cos\beta)\cos\alpha \\ &= - x^b\sin\beta\cos\alpha + y^b\sin\alpha + z^b\cos\beta\cos\alpha. \end{align} That is, the "red" coordinates of the point with "blue" coordinates $(x^b,y^b,z^b)_b$ are \begin{align} x^r &= a_{11}x^b + a_{13}z^b, \tag2\\ y^r &= a_{21}x^b + a_{22}y^b + a_{23}z^b, \tag3\\ z^r &= a_{31}x^b + a_{32}y^b + a_{33}z^b \tag4 \end{align} where \begin{align} a_{11} &= \cos\beta, & & & a_{13} &= \sin\beta, \\ a_{21} &= \sin\beta\sin\alpha, & a_{22} &= \cos\alpha, & a_{23} &= -\cos\beta\sin\alpha,\\ a_{31} &= -\sin\beta\cos\alpha, & a_{32} &= \sin\alpha, & a_{33} &= \cos\beta\cos\alpha. \end{align} If $(x^b,y^b,z^b)_b$ are the "blue" coordinates of a point on the cone, then the "red" coordinates of the same point must satisfy Equation $(1),$ above. That is, we can use Equations $(2),$ $(3),$ and $(4)$ to make substitutions for $x^r,$ $y^r,$ and $z^r$ in Equation $(1).$ The resulting equation is \begin{multline} \left(a_{11}x^b + a_{13}z^b - h(a_{31}x^b + a_{32}y^b + a_{33}z^b - z^r_{O'})\right)^2 \\ + \left(a_{21}x^b + a_{22}y^b + a_{23}z^b - k(a_{31}x^b + a_{32}y^b + a_{33}z^b - z^r_{O'})\right)^2 \\ = \left(\frac{z^r_P - (a_{31}x^b + a_{32}y^b + a_{33}z^b)} {z^r_P - z^r_{O'}} R\right)^2. \tag5 \end{multline} But we are only interested in the intersection of the cone with the blue plane, where $z^b = 0.$ So we can substitute $z^b = 0$ in Equation $(5),$ with the result \begin{multline} \left(a_{11}x^b - h(a_{31}x^b + a_{32}y^b - z^r_{O'})\right)^2 + \left(a_{21}x^b + a_{22}y^b - k(a_{31}x^b + a_{32}y^b - z^r_{O'})\right)^2 \\ = \left(\frac{z^r_P - (a_{31}x^b + a_{32}y^b)} {z^r_P - z^r_{O'}} R\right)^2. \end{multline} Now, this may still look daunting, but everything in this equation except $x^b$ and $y^b$ is a known constant. You can multiply out the products and squares of the expressions in parentheses until everything is just individual terms, each of which is some kind of constant times $x^b,$ $y^b,$ $(x^b)^2,$ $(y^b)^2,$ or $x^b y^b.$ Collect all the terms together on one side of the equation so that it looks like $$ Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 ,$$ and then you can find the center, major axis, minor axis, and angle of the ellipse by following one of the procedures in the answers to these questions: Compute center, axes and rotation from equation of ellipse, Determining the major/minor axes of an ellipse from general form, Finding the angle of rotation of an ellipse from its general equation and the other way around, or How to convert the general form of ellipse equation to the standard form?. Note that the center of the ellipse will not usually be at the same point as the projection of $O'$ onto the blue plane.
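If you want to carry out the final expansion mechanically, here is a hedged sympy sketch; the angles, radius, and the coordinates of $O'$ and $P$ are made-up test inputs, not values from the question:

```python
# Expand the intersection equation at z^b = 0 and read off A..F.
import sympy as sp

x, y = sp.symbols('x y')                          # blue coordinates x^b, y^b
alpha, beta = sp.pi/6, sp.pi/8                    # hypothetical tilt angles
R, zO, xP, yP, zP = map(sp.S, (2, 3, 1, 1, 10))   # hypothetical R, z^r_{O'}, P

a11 = sp.cos(beta)
a21, a22 = sp.sin(beta)*sp.sin(alpha), sp.cos(alpha)
a31, a32 = -sp.sin(beta)*sp.cos(alpha), sp.sin(alpha)

h, k = xP/(zP - zO), yP/(zP - zO)
zr = a31*x + a32*y                                # z^r on the blue plane
eq = ((a11*x - h*(zr - zO))**2
      + (a21*x + a22*y - k*(zr - zO))**2
      - (R*(zP - zr)/(zP - zO))**2)
P = sp.Poly(sp.expand(eq), x, y)
print([sp.simplify(P.coeff_monomial(m)) for m in (x**2, x*y, y**2, x, y, 1)])
```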
Singular Value Decomp inequality
A reasonable attempt, but this wasn't quite the correct trick. $$ \begin{align} \left\| Tx - \sum_{j=1}^m \sigma_j \langle x,v_j\rangle u_j \right\|^2 &= \left\| \sum_{j=m+1}^n \sigma_j \langle x,v_j\rangle u_j \right\|^2 = \sum_{j=m+1}^n \sigma_j^2 |\langle x,v_j\rangle|^2 \\ & \leq \sum_{j=m+1}^n \sigma_{m+1}^2 |\langle x,v_j\rangle|^2 = \sigma_{m+1}^2 \sum_{j=m+1}^n |\langle x,v_j\rangle|^2 \\ & \leq \sigma_{m+1}^2 \sum_{j=1}^n |\langle x,v_j\rangle|^2 = \sigma_{m+1}^2 \|x\|^2 \end{align} $$ The conclusion follows.
How to find scaling to get minimum positive integer proportion?
Step 1: Clear denominators) Write your vector $x$ entries in fractions (in simplest form), then $b$ will be the lowest common multiple (LCM) of the denominators. For example write x=[0.5, 0.25, 0.75] as $x=[\frac 12, \frac 14, \frac 34]$. Since LCM(2,4)=4, $b_1=4$. Then $y=[2,1,3]$. Step 2: Simplify) At this stage $y$ should already have all entries integers, then divide throughout by the GCD of all the integers. E.g. if $x=[3,3,3]$, $b_2=1/3$ to get $[1,1,1]$. (As pointed out by peterwhy) Overall, the scalar $b$ will be the product of the $b_1b_2$ used in the two stages.
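A small Python sketch of both steps; `Fraction` takes exact inputs (strings, integers, or fractions), and `math.lcm`/`math.gcd` with several arguments need Python 3.9+:

```python
from fractions import Fraction
from math import gcd, lcm

def smallest_integer_multiple(xs):
    xs = [Fraction(v) for v in xs]
    b1 = lcm(*(f.denominator for f in xs))   # Step 1: clear denominators
    ys = [int(f * b1) for f in xs]
    b2 = gcd(*ys)                            # Step 2: divide out the GCD
    return [v // b2 for v in ys]

print(smallest_integer_multiple(['0.5', '0.25', '0.75']))  # [2, 1, 3]
print(smallest_integer_multiple([3, 3, 3]))                # [1, 1, 1]
```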
How to show that the Volterra operator is not normal
From this answer we know that $$ V^*(x)(t)=\int_t^1 x(s)\,ds $$ Using Fubini's theorem you can check that $$ VV^*(x)(t) =\int_0^t\int_s^1x(\tau)\,d\tau \,ds =\int_0^t\int_0^\tau x(\tau) \,ds\,d\tau + \int_t^1\int_0^t x(\tau) \,ds\,d\tau =\int_0^t \tau x(\tau)\,d\tau + t\int_t^1 x(\tau)\,d\tau $$ $$ V^*V(x)(t) =\int_t^1\int_0^s x(\tau)\,d\tau \,ds =\int_0^t\int_t^1 x(\tau) \,ds \,d\tau + \int_t^1\int_\tau^1 x(\tau) \,ds \,d\tau\\ =(1-t)\int_0^t x(\tau) \,d\tau + \int_t^1(1-\tau) x(\tau) \,d\tau $$ So $VV^*\neq V^*V$: for instance, at $x\equiv 1$ the first gives $t-t^2/2$ and the second gives $\tfrac12-t^2/2$.
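A sympy spot check of these formulas against the defining double integrals, on the test function $x\equiv 1$:

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# closed forms above, evaluated on x = 1
VVs = sp.integrate(tau, (tau, 0, t)) + t*sp.integrate(1, (tau, t, 1))
VsV = (1 - t)*sp.integrate(1, (tau, 0, t)) + sp.integrate(1 - tau, (tau, t, 1))

# the defining double integrals, same test function
VVs_direct = sp.integrate(sp.integrate(1, (tau, s, 1)), (s, 0, t))
VsV_direct = sp.integrate(sp.integrate(1, (tau, 0, s)), (s, t, 1))

print(sp.simplify(VVs - VVs_direct), sp.simplify(VsV - VsV_direct))  # 0 0
print(sp.simplify(VVs - VsV))  # t - 1/2, nonzero, so VV* != V*V
```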
If $S=TX$, then $\sigma\{S_1,\ldots,S_n\}=\sigma\{X_1,\ldots,X_n\}$
Fact 1: A linear map on a finite dimensional space is continuous. Fact 2: If $f$ is a continuous map, and $G$ is an open subset of the range then $f^{-1}(G)$ is an open subset of the domain. Fact 3: If a bijection $f$ is linear, so is $f^{-1}$. Fact 4: For any collection of sets $\mathcal{C}$, and for any function $f$, $f^{-1}(\sigma(\mathcal{C})) = \sigma(f^{-1}(\mathcal{C}))$. Denote by $\tau$ the collection of all open subsets of $\mathbb{R}^n$, and denote by $\mathcal{B}$ the Borel field on $\mathbb{R}^n$. By definition of $\mathcal{B}$, $\mathcal{B} = \sigma(\tau)$. By facts 1-3, $T^{-1}(\tau) = \tau$. Therefore $$ S^{-1}(\mathcal{B}) = X^{-1}\left(T^{-1}(\sigma(\tau))\right) \overset{\text{Fact 4}}{=} X^{-1}\left(\sigma(T^{-1}(\tau))\right) = X^{-1}(\sigma(\tau)) = X^{-1}(\mathcal{B}). $$
No clear analytic method to prove unique maximum? ($2^{-x}+2^{-1/x}$)
Not as pleasing as the AM-GM inequality, but since $f$ and its derivatives are all continuous except for the discontinuity at $x=0$: you can differentiate $f$, then analytically show $f'$ is zero at $x=1$ as well as somewhere around $x\approx 0.2$ (using the intermediate value theorem); obtain $f''$ and show it is positive at $x\approx 0.2$ and negative at $x=1$; conclude $x=1$ is a unique local maximum of this function.
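A numeric sketch of those steps (scipy assumed; the bracketing intervals for the two roots are guesses read off a plot):

```python
import numpy as np
from scipy.optimize import brentq

f  = lambda x: 2**(-x) + 2**(-1/x)
fp = lambda x: np.log(2) * (-2**(-x) + 2**(-1/x) / x**2)   # f'

r1 = brentq(fp, 0.1, 0.5)    # the critical point near 0.2
r2 = brentq(fp, 0.5, 2.0)    # the critical point at x = 1
print(r1, r2, f(r1), f(r2))  # f(1) = 1 is the local maximum
```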
Simple bounding question for an expectation with truncating function
For fixed $\varepsilon>0$, we can choose $N$ sufficiently large such that $\sqrt{N} \varepsilon>C$. Then $$\mathbb{P}(|X_m|>\sqrt{n} \varepsilon) = 0$$ implies $$\mathbb{E}(X_m^2 1_{\{|X_m|>\sqrt{n} \varepsilon\}})=0$$ for all $n \geq N$. Writing $$\begin{align*} \frac{1}{n} \sum_{m=1}^n\mathbb{E}(X_m^2 1_{\{|X_m|>\sqrt{n} \varepsilon\}}) &= \frac{1}{n} \sum_{m=1}^N \mathbb{E}(X_m^2 1_{\{|X_m|>\sqrt{n} \varepsilon\}})+ \frac{1}{n} \sum_{m=N+1}^n \underbrace{\mathbb{E}(X_m^2 1_{\{|X_m|>\sqrt{n} \varepsilon\}})}_{0} \\ &\stackrel{n \to \infty}{\to} 0. \end{align*}$$
Find elements of $\{0,1\}^4$
I'm not sure what (4) means, but to satisfy (2) and (3): [1111], [1111], [1100], [1010], [0100], [0010], [0001], [0001]
I'm not quite sure I understand my book's reasoning for the answer
For the book solution, you have two equations: $3a+4b = 56$ and $a+b =12$; subtracting $4$ times the second from the first yields $-a = 8$, so $a<0$, which cannot be. Your proof cannot assume all degrees are $4$; some could be $3$, so correct it to an inequality: $$\sum \deg v_i \leq 4 \cdot 12 = 48 < 56$$ which yields the desired contradiction.
Applications of Pushouts
You are correct, $X=V_1\oplus V_2$. To see this, suppose that for some vector space $V$ we have morphisms $f_1$ and $f_2$ which make the square commute. Then clearly the map $f:X\to V$ given by $f(u)=f_1 (\pi_1 (u))+f_2 (\pi_2 (u))$, where $\pi_i$ is the projection onto $V_i$, will make the diagram commute. Suppose that we have another morphism $g$ which makes our diagram commute. Then for $i_j:V_j\to X$ the inclusion map, $g\circ i_j=f_j$. Now, by the definition of $X$ we have that any $u\in X$ is such that $u=i_1 (\pi_1 (u))+i_2 (\pi_2 (u))$. Applying $g$ to both sides and using the fact that $g$ is linear then gives that $g=f$, as desired. Finally, I am guessing you are using an inductive definition to get spheres, so that $S^k$, mapping as the common boundary into two copies of $D^{k+1}$, pushes out to $S^{k+1}$. This construction just means that we can get a sphere by "gluing" two disks of lower dimension along their boundary, so that the disks are the hemispheres of the sphere. This way of thinking about spheres is useful when studying algebraic topology, since it gives you a CW structure, and a solid geometric connection to lower dimensional spheres as well.
Evaluation of this line integral $\int z^2\, e^{1/z}\sin\left(\frac{1}{z}\right)dz$
The "quick" formulae for residue only works for simple poles and poles "of finite order". They won't work for essential singularities though. However, Laurent series expansion always works.
Finding the trace of a matrix by its minimal polynomial
$A$ satisfies a squarefree polynomial $\lambda^2 - \lambda - 2 = (\lambda - 2)(\lambda + 1), \;$ so this must be the minimal polynomial, and the eigenvalues are $2,-1.$ Since $A$ is diagonalizable, we might as well assume it is diagonal. The problem says the rank of $A+I$ is three; $A+I$ is diagonal with entries $3$ (where $A$ has eigenvalue $2$) and $0$ (where $A$ has eigenvalue $-1$), so $2$ occurs 3 times, while $-1$ occurs 7 times. $$ 3 \cdot 2 + 7 \cdot (-1) = 6 - 7 = -1 $$
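A quick numpy check with a diagonal representative (three $2$'s and seven $-1$'s):

```python
import numpy as np

A = np.diag([2]*3 + [-1]*7)
assert np.linalg.matrix_rank(A + np.eye(10)) == 3   # rank(A + I) = 3
assert np.allclose(A @ A - A - 2*np.eye(10), 0)     # A^2 - A - 2I = 0
print(np.trace(A))                                  # -1
```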
Is writing "However it is a general fact that..." a valid statement in a proof?
Doing so would be extremely tedious and would greatly slow down the process of learning new things. Most math books tell you who the intended reader is; this is so that you have the appropriate prerequisites to be able to fill gaps in proofs.
Poisson bracket makes $C^\infty(M)$ into a Lie algebra
First of all, maybe it's just a sign convention, but the way I'm doing it, the statement needs a negative sign: $$ X_{\{f,g\}} = -[X_f,X_g] = [X_g,X_f] $$ This is under the convention that $\omega(v,X_f) = df(v) = vf$ for any vector $v$. In particular, this gives that $$ \{f,g\} = \omega(X_f,X_g) = df(X_g) = X_g f $$ Again, using the definition of the Hamiltonian vector field, note that $X_{\{f,g\}}$ is characterized by the fact that for any $v$, $$ \omega(v,X_{\{f,g\}}) = d\{f,g\}(v) = v\{f,g\} = v X_g f \tag{$\star$} $$ In order to prove the desired claim (that $X_{\{f,g\}} = -[X_f,X_g]$), we will show that the vector field $-[X_f,X_g] = [X_g,X_f]$ satisfies this property, and so must be equal to the Hamiltonian vector field $X_{\{f,g\}}$. The flows of Hamiltonian vector fields are symplectic, meaning the Lie derivative $\mathcal{L}_{X_f} \omega$ is zero. In particular $$ \begin {eqnarray} 0 &=& \left( \mathcal{L}_{X_g} \omega \right)(v, X_f) \\ 0 &=& X_g \left( \omega(v,X_f) \right) - \omega([X_g,v],X_f) - \omega(v,[X_g,X_f]) \\ \omega(v,[X_g,X_f]) &=& X_g \left( \omega(v,X_f) \right) - \omega([X_g,v],X_f) \\ \omega(v,[X_g,X_f]) &=& X_g v f - [X_g,v] \, f \\ \omega(v,[X_g,X_f]) &=& v X_g f \end {eqnarray} $$ We have shown that $[X_g,X_f] = -[X_f,X_g]$ satisfies condition $(\star)$, and so it must be that $[X_g,X_f] = X_{\{f,g\}}$.
Question about indices of subgroups
Your argument goes wrong here: "So we get that $I$ is at least as big as $[G:H]$, thus $[G: H \cap K]$ divides $[G: H]$". Since $|I|=[G: H \cap K]$, "$I$ is at least as big as $[G:H]$" means $[G: H \cap K] \ge [G: H]$, which doesn't go well with, let alone implies, "$[G: H \cap K]$ divides $[G: H]$".
Product of $3$ consecutive triangular numbers is a perfect square
You have a Pell-type equation $$y^2-8x^2=9.$$ This implies that $y$ and $x$ are multiples of $3$, so $$(y/3)^2-8(x/3)^2=1$$ which is a genuine Pell equation. Its solution is $$(y+2x\sqrt2)/3=\pm(3+2\sqrt2)^n$$ for $n\in\Bbb Z$.
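A short Python sketch generating solutions from powers of the fundamental unit: multiplying $y+2x\sqrt2$ by $3+2\sqrt2$ induces the recurrence $(x,y)\mapsto(3x+y,\,8x+3y)$:

```python
x, y = 0, 3                      # n = 0: (y + 2x*sqrt(2))/3 = 1
for n in range(1, 6):
    x, y = 3*x + y, 8*x + 3*y    # multiply by the unit 3 + 2*sqrt(2)
    assert y*y - 8*x*x == 9
    print(n, x, y)
```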
On the second Clarkson's inequality
See the article A Note On Clarkson’s Inequality In The Real Case by Hiroyasu Mizuguchi and Kichi-suke Saito in the Journal of Mathematical Inequalities. And see Theorem 1 of Some Uniformly Convex Spaces by R. P. Boas, Jr. This last article answers your question in the more general spaces $L^p$ and $l^p$ instead of $\mathbb{R}^N$. Note that $\mathbb{R}^N$ can be regarded as a subspace of $l^p$.
Calculus derivation of OLS regression formula
$X$ is $n \times k$ where $k\le n$, so $X$ full rank means the rank of $X$ is $k$. Now if $x$ is $k \times 1$ and $y=Xx$, then $Xx=0$ iff $x=0$ by the full rank of $X$ (otherwise the $k$ columns of $X$ would satisfy a nontrivial linear relation, contradicting full rank). Finally, $x^TX^TXx=y^Ty \ge 0$, with equality iff $y = Xx = 0$, i.e. iff $x=0$; so $X^TX$ is positive-definite.
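A numerical illustration (numpy assumed; a random Gaussian $X$ has full column rank almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))                   # n = 50, k = 4
print(np.linalg.eigvalsh(X.T @ X).min() > 0)   # True: X^T X positive-definite
```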
If $T \in \mathcal{L}(V)$ is diagonalizable and V is infinite dimensional, then $V = null (T) \oplus range (T)$.
Hints: As $\;V\;$ contains a basis of eigenvectors of $\;T\;$, $\;V\;$ is the direct sum of the corresponding eigenspaces. But $\;\ker T=\text{null}\,T\;$ is just the eigenspace corresponding to the eigenvalue zero (which, btw, is the zero space if $\;T\;$ is invertible) ...
Point on the left or right side of a plane in 3D space
Call the three points determining the plane $A$, $B$, $C$, and write $X$ for the new point. Form the three differences $B'=B-A$, $C'=C-A$, $X'=X-A$. Now compute the $3\times3$ determinant of the matrix whose columns (or rows, doesn't matter) are $B'$, $C'$, $X'$. The sign of the resulting determinant will be positive for $X$ on one side of the plane and negative on the other side. Now you only need to figure out in general which side you want to call left and which side to call right.
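A direct implementation of the determinant test (numpy assumed; the sample points are arbitrary):

```python
import numpy as np

def side(A, B, C, X):
    """Sign of det[B-A | C-A | X-A]: +1/-1 for the two sides, 0 on the plane."""
    return np.sign(np.linalg.det(np.column_stack([B - A, C - A, X - A])))

A, B, C = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
print(side(A, B, C, np.array([0., 0, 5])), side(A, B, C, np.array([0., 0, -5])))
# 1.0 -1.0
```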
Compute coordinates of a point in 3D-Euclidean Space
This seems to be a common problem in navigation, as it corresponds to distance measurements of an object from different stations: Trilateration The above article argues that three spheres allow one to pinpoint the location down to two candidates, so a fourth sphere might settle the issue. There is an interesting calculation, which uses a coordinate transform, to simplify the equations: all three centers lie in the plane $z=0$; sphere one is at the origin, $x=y=z=0$; sphere two is on the $x$-axis, $y=z=0$. I would study the problem for three spheres, which should give two candidates, and then use the one closest to the fourth sphere (if there is such a point).
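A hedged numpy sketch of the standard three-sphere computation, returning the two mirror-image candidates; the hidden point and station positions are made-up test data:

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)       # local x-axis
    i = ex @ (p3 - p1)
    ey = p3 - p1 - i*ex
    ey /= np.linalg.norm(ey)                       # local y-axis
    ez = np.cross(ex, ey)                          # local z-axis
    d, j = np.linalg.norm(p2 - p1), ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2*d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2*j) - i*x/j
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))     # 0 if the spheres just touch
    base = p1 + x*ex + y*ey
    return base + z*ez, base - z*ez

p = np.array([1., 2., 3.])                         # point to recover
centers = [np.array(c) for c in ([0., 0, 0], [5., 0, 0], [0., 6, 0])]
radii = [np.linalg.norm(p - c) for c in centers]
print(trilaterate(*centers, *radii))               # one candidate equals p
```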
term for distance preserving up to scale
The correct term is homothetic.
Is the union of 2 complex analytic sets still a complex analytic set?
As long as they are closed analytic subsets of some common domain (or manifold), then yes. Locally, as the comment to the question makes clear, you could just multiply the defining functions to get functions that vanish on both. However, if you are talking about local analytic subvarieties (Whitney's terminology), that is, if they are not closed, then no, a union need not be a subvariety. The distinction is as follows. Let $U$ be a domain or a manifold. $X \subset U$ is an analytic subset (a subvariety, sometimes emphasized as "closed subvariety of $U$") if for every $p \in U$ there is a neighborhood $V$ of $p$, and holomorphic functions $f_1,\ldots,f_k$ defined in $V$, such that $X \cap V = \{ z \in V : f_1(z) = 0 , \ldots , f_k(z) = 0 \}$. A definition of a local subvariety is the same except it starts differently: it says "for every $p \in X$" instead of "for every $p \in U$". The distinction may seem trivial but it is important. For example in two dimensions, the set given by $z_1 = 0$, $\text{Re}\ z_2 > 0$ is a local analytic subvariety, but not a closed subvariety of ${\mathbb C}^2$. Its union with the subvariety (actual, honest, closed subvariety of ${\mathbb C}^2$) given by $z_2 = 0$ is not a subvariety in either sense (the origin is a problem!) A local subvariety is a (closed) subvariety of some open subset. A union of subvarieties is a subvariety if they are subvarieties of the same open subset. Otherwise not necessarily. So the upshot is: it depends on the definition of "complex analytic set". I'd bet 90% of the time people mean a closed subvariety of some set or other, especially if they say "subvariety of $U$" or "analytic subset of $U$". But if considering subvarieties as generalizations of submanifolds, then it makes sense to talk about local subvarieties as that's the way we generally think of submanifolds. Of course, the definition of "submanifold" is yet another can of worms for different reasons. The upshot of the upshot? Be careful about what definition the source that you are reading is using. If you are using a book, check carefully what the definition says and then compare with the above.
Discounting and Interest
HINT: you have two equations. If we set $k=r+1$, where $r$ is the rate of interest, the first equation is $Ak-A=336$, the second is $ A- A/k=300$. Now solve the system.
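A sympy check of the resulting system (the names $A$ for the amount and $k=r+1$ follow the hint):

```python
import sympy as sp

A, k = sp.symbols('A k', positive=True)
print(sp.solve([A*k - A - 336, A - A/k - 300], [A, k], dict=True))
# [{A: 2800, k: 28/25}], i.e. r = 12%
```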
Show that if $a^{n-1} \equiv 1 \pmod{n}$ for all $a$ such that $\gcd(n,a) = 1$, then $a^{n} \equiv a \pmod{n}$ for all $a$.
The equivalent definitions of Carmichael numbers are the special case $\,e = n\,$ below. Theorem $\ $ The following are equivalent for integers $\,n,e>1$. $(1)_{\phantom{|_{|_.}}}\ n\mid a^e\ -\ a\ \ $ for all $\,a\in\Bbb Z^{\phantom{|^|}}\!\!,\: $ and $\ (e\!-\!1,n)=1$ $(2)_{\phantom{|_{|_.}}}\ n\mid a^{e-1}\!-1\ $ for all $\,a\in\Bbb Z\,$ with $\, \color{#90f}{(a,n)=1}= (e\!-\!1,n)$ $(3)\ \ \ \:\! n\,$ is squarefree, $ $ prime $\,p\mid n\,\Rightarrow\, \color{#0a0}{p\!-\!1\mid e\!-\!1},\ p\nmid e\!-\!1$ Proof $\ \ (1\Rightarrow 2)\ \ \ (1)\,\Rightarrow\, \color{#90f}{n\mid a}(a^{e-1}-1)\,\Rightarrow\, n\mid a^{e-1}-1\,$ by $\,\color{#90f}{n,a}\,$ coprime & Euclid's Lemma. $(2\Rightarrow 3)\ \ $ Suppose prime $\,p\mid n,\,$ so $\,n = j\, p^k$ with $\,k\ge 1,\ p\nmid j.\,$ Let $\,g\,$ be a primitive root $\!\bmod p^k,\,$ i.e. $\,g\,$ has order $\,(p\!-\!1)p^{k-1}.\,$ By CRT there's $\,a\in\Bbb Z\,$ with $\,\ a\equiv 1\pmod{\!j},\,a\equiv g\pmod{\!p^k},\,$ thus $\,a\,$ is coprime to $\,j,p\,$ so also to $\,n = j\,p^k.\,$ So $\,(2)\Rightarrow\,a^{e-1}\equiv 1\,$ holds $\!\bmod n\,$ so also $\!\bmod p^k,\,$ thus Order Theorem $\Rightarrow\,(\color{#0a0}{p\!-\!1})p^{k-1}\!\mid \color{#0a0}{e\!-\!1}\,\Rightarrow\,k=1\,$ (else $\,p\mid e\!-\!1,n\,$ contra $\,(e\!-\!1,n)=1)$. $(3\Rightarrow 1)\ \ $ Let prime $\,p\mid n.\,$ If $\,p\mid a\,$ then $\,p\mid a^e-a\,$ by $\,e>1.\,$ Else $\,p\nmid a\,$ so by little $\rm\color{#c00}{Fermat}$ $\!\bmod p\!:\ a^{\large\color{#0a0}{e-1}}\equiv \smash[t]{\color{#c00}{(a^{\color{#0a0}{\large p-1}})}}^{\large\color{#0a0} k}\equiv \color{#c00}1^{\large k}\!\equiv 1\,$ so $\,p\mid a^{e-1}-1\mid a^e-a.\,$ So $\, a^e-a\,$ is divisible by all primes $\,p\mid n\,$ so also by their lcm = product = $\,n,\,$ by $\,n\,$ squarefree. $\,(e\!-\!1,n)=1\,$ by $\,p\mid n\,\Rightarrow\,p\nmid e\!-\!1$. Remark $\,\ (3)\, $ for $\,e=n\,$ is known as Korselt's criterion for Carmichael numbers.
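A small Python check of criterion $(3)$ with $e=n$ against the defining property $(1)$, on the smallest Carmichael number $561 = 3\cdot11\cdot17$ (sympy assumed for factoring):

```python
from sympy import primefactors

def korselt(n):   # squarefree, and p-1 | n-1 for every prime p | n
    ps = primefactors(n)
    return all(n % (p*p) for p in ps) and all((n-1) % (p-1) == 0 for p in ps)

n = 561
print(korselt(n), all(pow(a, n, n) == a % n for a in range(n)))  # True True
```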
Question regarding norms of Cauchy-Schwarz inequality
One way to remember and see it is that things need to stay homogeneous. (Yes, like in physics.) Informal explanation: If $u$ and $v$ were in, say, meters, then the inner product $\langle u,v\rangle$ is a product, so in meters squared; and on the RHS, both norms are in meters, and so the product of two norms is also in meters squared. More formally and more to the point, the inequality must remain true if you multiply $u$ by $\alpha u$, for any number $\alpha$. After all, $\alpha u$ is just another vector. Same thing replacing $v$ by $\beta v$. So we need $$ \lvert \langle \alpha u,\beta v\rangle\rvert \leq \lVert \alpha u \rVert\cdot \lVert \beta v \rVert \qquad \forall \alpha,\beta \in\mathbb{R} \tag{1} $$ But by (bi)linearity, the LHS is equal to $\lvert \langle \alpha u,\beta v\rangle\rvert = \lvert \alpha\beta \langle u, v\rangle\rvert = \lvert \alpha\beta \rvert \cdot \lvert \langle u, v\rangle\rvert$, while the RHS is equal (by properties of norms) to $\lVert \alpha u \rVert\cdot \lVert \beta v \rVert = \lvert \alpha\beta \rvert \cdot \lVert u \rVert\cdot \lVert v \rVert$. That's good! The factors $\lvert \alpha\beta \rvert$ cancel on both sides in (1). If you had a square root in the LHS, they would not cancel, and (1) couldn't be true for all $\alpha,\beta$.
Why doesn't the functional equation imply that $\zeta(s)=0$ for positive even integers?
$\Gamma$ has a pole, not a zero, at each negative integer.
Does the internal logic of a topos satisfy propositional, functional, set extensionality?
are the interpretations of the following terms equal to $\top : \mathrm{Hom}(1,\Omega)$? Yes, the three principles you state are valid in the internal language of any topos, and the proofs you provide are correct. In general, the internal language of a topos is an extensional dependent type theory supplemented with further rules governing power types. If you can read German, then you might enjoy this list (page 14) and the remark on page 16. is there a version of the internal language of a topos which allows for "proof terms" as in the Curry-Howard correspondence I don't think so. However, you might want to look at homotopy type theory, which is the internal language of $(\infty,1)$-toposes. Most propositions in HoTT do not satisfy proof irrelevance (this is seen as a feature, not a bug), but the "mere propositions" do. What about definite description Yes, definitely! You can add that to the internal language. Proposition 2.6 of these notes of mine lists a simplification rule for the interpretation of "$\exists!$" using the Kripke–Joyal semantics (this is not quite what you're asking, but it's related, since one gives meaning to the definite description by exploiting this simplification rule).
1 Dimensional ODE with solution in $L^2$
1. Consider the homogeneous case, i.e., $h=0$. The two linearly independent solutions are $e^{\pm \sqrt{ir} t}$, where we take the principal branch of $\sqrt\cdot$. In order to make the solution $L^2[0,\infty)$, you need to discard the exponentially growing solution $e^{\sqrt{ir} t}$ and keep only the decaying one $e^{-\sqrt{ir} t}$. So $\sinh$ and $\cosh$ cannot be used alone, but only in equal-weight pairs in each of your terms. Explicitly $$w(t)=w(0)e^{-\sqrt{ir} t},$$ and $w'(0)=-w(0)\sqrt{ir}$. Substituting in the linear forms of $w(0)$ and $w'(0)$, we easily obtain the condition on $a$, which is $a=-\frac{c_1\sqrt{ir}+c_3}{c_2\sqrt{ir}+c_4}$ if the denominator does not vanish, any $a\in\Bbb C$ if both the denominator and numerator vanish, and no $a$ at all if only the denominator vanishes. 2. Now consider the inhomogeneous case but with homogeneous boundary condition, i.e., $h\not\equiv0$ and $w(0)=w'(0)=0$. There are two interesting cases for two different Green's functions. The difference of these two Green's functions is a homogeneous solution. 2.1 The first Green's function is $$G_1(t)=\frac1{k}\sinh(kt)\Theta(t)$$ where $k=\sqrt{ir}$ and $\Theta$ is the Heaviside step function. $$w(t)=-\int_0^t h(u)G_1(t-u)du.$$ There are many $h$'s (counterexamples sought by the OP) making $w\notin L^2[0,\infty)$ for $G_1$: (1) $h(t)=e^{-at}\in L^2[0,\infty)$ with some positive $a$; (2) any $h\in C[0,\infty)$ compactly supported and positive on $(0,T)$ for some positive $T$. For $G_1$, to produce $w\in L^2[0,\infty)$ with a nonzero $h\in L^2[0,\infty)\cap C[0,\infty)$, both the real part and imaginary part of $h$ have to alternate their signs over $t\in[0,\infty)$. 2.2 The second Green's function is $$G_2(t)=-\frac1{2k}e^{-k|t|}.$$ Then $$w(t)=-\int_0^\infty h(u)G_2(t-u)du+\int_0^\infty h(u)G_2(-u)du=-h*G_2+(h*G_2)(t=0),$$ where $*$ stands for convolution. The second term is to ensure the null initial condition. $G_2\in L^1[0,\infty)$. By Young's convolution inequality, $$\|h*G_2\|_2\le \|h\|_2\|G_2\|_1,$$ and $w\in L^2[0,\infty),\, \forall h\in L^2[0,\infty)$. Note: As indicated before, $G_1-G_2=\frac{e^{kt}}k$ is a solution of the homogeneous equation. Separating out the "explosive" exponentially growing term $e^{kt}$ from $\sinh(kt)$ of $G_1(t)$, we have the following proposition regarding the square integrability of $w$. 3. Proposition 1: $\forall k\in\Bbb C\,\ni \mathbf{Re}(k)>0 \implies \exists h\in L^2[0,\infty) \ni \big(w[h,a](t):=\int_0^t h(u)e^{k(t-u)}\,du-ae^{kt}\notin L^2[0,\infty),\, \forall a\in\Bbb C-\{0\} \big).$ Proof: Take the Laplace transform of $w[h,a]$ \begin{align} \mathscr L[w]&=\mathscr L[h]\mathscr L[e^{kt}]-a\mathscr L[e^{kt}], \tag1\\ \mathscr L[e^{kt}](s)&=\frac1{s-k}. \end{align} $\mathscr L[h]\in H^{2+}$, the Hardy function space on the right half complex plane, iff $h\in L^2[0,\infty)$. Let $$\mathscr L[h](s)=\frac{s-k}{(s+\alpha)(s+\beta)},$$ or equivalently $$h(t)=\frac{(\beta+k)e^{-\beta t}-(\alpha+k)e^{-\alpha t}}{\beta-\alpha},\tag2$$ for some $\alpha,\beta\in\Bbb C$ where $\mathbf{Re}(\alpha)>0,\mathbf{Re}(\beta)>0$. $\mathscr L[h]\in H^{2+}$ since it is a proper rational function and its poles are all in the left half complex plane. By Eq. (1), there is always a simple pole at $k$ in the right half complex plane so long as $a\ne0$. Then $\mathscr L\big[w[h,a]\big]\notin H^{2+}$ and $w[h,a]\notin L^2[0,\infty),\,\forall a\ne0$. This can also be verified by direct computation of $w$ with the chosen $h$ given by the explicit expression Eq. (2).
$\quad\square$ 4. Proposition 2: $$k\in\Bbb C\,\ni \mathbf{Re}(k)>0, h\in L^2[0,\infty) \implies \exists !a\in\Bbb C \ni \big(w[h,a](t):=\int_0^t h(u)e^{k(t-u)}\,du-ae^{kt}\in L^2[0,\infty)\big).$$ Proof: Take the Laplace transform of $w[h,a]$ as in Eq. (1) above. $\mathscr L[h]\in H^{2+}$, the Hardy function space on the right half complex plane, iff $h\in L^2[0,\infty)$. Let $D(\omega;R)$ be the closed disk centered at $\omega$ with radius $0<R<\mathbf{Re}(k)$. For an arbitrary $x\ge 0$, let $\Omega_x:=\{x+iy: y\in\Bbb R\}\setminus D(k;R)$. We consider two cases. (1) $k$ is not one of the zeros of $\mathscr L[h]$. $$\frac{\mathscr L[h]}{s-k}=\frac{\mathscr L[h](s)-\mathscr L[h](k)}{s-k}+\frac{\mathscr L[h](k)}{s-k}$$ Set $\phi(s):=\frac{\mathscr L[h](s)-\mathscr L[h](k)}{s-k}$. $\phi(s)$ is holomorphic on the closed right half complex plane. We will prove that $\phi(s)$ is square integrable and uniformly bounded along all vertical lines in the right half complex plane. $|\phi(s)|$ has a maximum on the compact set $D(k;R)$. $$|\phi(s)|^2\le \frac{|\mathscr L[h](s)|^2}{R^2}+\frac{|\mathscr L[h](k)|^2}{|s-k|^2},\ \forall |s-k|\ge R.$$ We have $$\int_{\Omega_x} |\phi(x+iy)|^2 dy\le \frac1{R^2}\int_{\Omega_x} |\mathscr L[h](x+iy)|^2dy+\frac{\pi |\mathscr L[h](k)|^2}{2\,\max(x,R)}.$$ The integral on the right hand side is bounded uniformly over all $x\ge0$ as $\mathscr L[h]\in H^{2+}$, so the left hand side is too. We conclude $\phi\in H^{2+}$. Then setting, and only setting, $a=\mathscr L[h](k)$ leads to the desired result. (2) $k$ is one of the zeros of $\mathscr L[h]$. Set $\phi(s):=\frac{\mathscr L[h](s)}{s-k}$. Again $\phi$ is holomorphic on the closed right half complex plane, and $|\phi(s)|$ has a maximum on the compact set $D(k;R)$. $$\int_{\Omega_x} |\phi(x+iy)|^2 dy\le \frac1{R^2}\int_{\Omega_x} |\mathscr L[h](x+iy)|^2dy.$$ The integral on the right hand side is bounded uniformly over all $x\ge0$ as $\mathscr L[h]\in H^{2+}$, so the left hand side is too. We conclude $\phi\in H^{2+}$. Setting, and only setting, $a=0$ leads to the desired result. $\quad\square$
Almost every square matrix satisfies Cayley-Hamilton Theorem
Brunton's comment is strange, even in context. He claims someone unspecified pointed out to him there may be exceptions, but settles for claiming "almost every" square matrix satisfies the theorem, as he didn't want to elaborate on edge cases. (This is unfortunate for anyone who hopes they can apply the theorem at some point.) The comments have discussed the fact that matrices not over a commutative ring may be exceptions, but I don't think he had these in mind. If he did, his language should have been more careful, because "almost every" means the set of counterexamples should be of measure $0$. I actually think it's more likely that he and an unnamed colleague are data scientists and not linear algebra experts, leading to sloppiness on their part. What is true is that: over a commutative ring, $n\times n$ diagonalizable matrices "satisfy the theorem" (which I'm using as an unfortunate shorthand for $p_A(A)=O_n$); these are dense in the full space of $n\times n$ matrices over the commutative ring; this implies the non-diagonalizable ones satisfy the theorem too (because the characteristic polynomial is of finite degree, in finitely many entries of the matrix whose characteristic polynomial is computed).
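For the record, a numpy spot check of $p_A(A)=O_n$ on a random real matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
c = np.poly(A)                                    # char. poly coefficients
pA = sum(ci * np.linalg.matrix_power(A, k)
         for k, ci in enumerate(reversed(c)))     # evaluate p_A(A)
print(np.allclose(pA, 0))                         # True
```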
Let $E$ be a normed vector space. Prove that $U+V$ is open in $E$
Notice that $$U + V = \bigcup_{v \in V} (U + \{ v \} ).$$ Since each element in the union is a translate of the open set $U$, it is open. Thus, $U + V$ is open, as it is the union of open sets.
How to get bounds for the remainder of the Binomial Series?
In a case such as $(1+1)^\alpha$ with $\alpha$ a non-integer, the terms do not go to zero exponentially (because this is at the radius of convergence), so neither do the remainders. In particular for $\alpha = 1/2$ the $n$'th term is (according to Maple) asymptotic to $(-1)^{n+1}/(2 \sqrt{\pi} n^{3/2})$. But you don't need precise bounds, because (for $x > 0$) the signs of the terms are alternating: as long as the terms are decreasing in magnitude and go to $0$, the sum is between any two successive partial sums, so the error is at most the first omitted term. So just keep taking more terms until you get a term whose absolute value is less than your desired $\epsilon$.
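A Python sketch of this stopping rule for $\alpha=1/2$, $x=1$ (target $\sqrt2$; the tolerance is arbitrary):

```python
from math import sqrt

def binom_half(n):            # generalized binomial coefficient C(1/2, n)
    c = 1.0
    for k in range(n):
        c *= (0.5 - k) / (k + 1)
    return c

eps, s, n = 1e-3, 0.0, 0
while abs(binom_half(n)) >= eps:    # stop once the next term is below eps
    s += binom_half(n)
    n += 1
print(n, s, abs(s - sqrt(2)) < eps)  # True, by the alternating-series bound
```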
Showing that intersections are not defined
HINT: What is $2\cap\{1,2\}$? I’ve left the answer spoiler-protected. $2\cap\{1,2\}=\{0,1\}\cap\{1,2\}=\{1\}\notin A$.
Probability that a joyride is overbooked
1) Sum the binomial distribution with $p = 0.05, n = 320$ from $0\leq x\leq19$ to get the exact answer. This is the probability that 301, 302,... or 320 people show up (0,1,2,...,19 people don't show up). 2) The Poisson distribution with $\lambda = np = 16$ gives an approximate probability. So $\frac{e^{-16}16^{x}}{x!}$ gives the approximate probability that $x$ people don't show up. Summing this from 0 to 19 gives an approximate probability.
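Both computations in a few lines (scipy assumed):

```python
from scipy.stats import binom, poisson

n, p = 320, 0.05
print(binom.cdf(19, n, p))      # exact: P(at most 19 no-shows)
print(poisson.cdf(19, n * p))   # Poisson approximation, lambda = 16
```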
Expanding a structure while keeping it a model of theory
You can add a new set of axioms. Take a distinguished element $c$ and add $c \neq 0, c \neq f(0), c \neq f(f(0)) \dots$ Any finite set of them has a model, you just take $c$ large enough. So by compactness, the whole set has a model.
Absolute maximum and minimum on the closed disk
First of all, you solve the unconstrained optimization problem on the interior of the disk and find that the origin is the only critical point. Of course, it turns out to be a saddle point. Now consider the Lagrange multiplier problem with the constraint $g(x,y) = x^2+y^2 = 1$. Then $\nabla f = \lambda\nabla g$ if and only if $$2xe^{x^2-y^2} = \lambda (2x) \quad\text{and}\quad -2ye^{x^2-y^2} = \lambda(2y).$$ Assuming $x,y\ne 0$, we deduce that $\lambda=e^{x^2-y^2} = -\lambda$. As you pointed out, this is impossible. Therefore, we are left with the critical points where $x=0$ and $y=0$ on the unit circle (where one of the equations drops out and there is a solution of the other). We have $f(\pm 1,0) = e$ and $f(0,\pm 1) = e^{-1}$, so the absolute maximum points are $(\pm 1,0)$ and the absolute minimum points are $(0,\pm 1)$.
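A quick numerical confirmation on the unit circle, taking $f(x,y)=e^{x^2-y^2}$ as the gradients above suggest (numpy assumed):

```python
import numpy as np

t = np.linspace(0, 2*np.pi, 100_001)
f = np.exp(np.cos(t)**2 - np.sin(t)**2)
print(f.max(), f.min())   # ≈ e and 1/e, at (±1,0) and (0,±1)
```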
What is $\lim_{n \rightarrow\infty} \phi(T^{n}v)$
$T$ will never become the zero operator because it is NOT the zero operator in the first place. Also, $\phi$ being a linear form on $V$ will map vectors onto scalars. Therefore it makes perfect sense to define $\phi(T^nv)$. As you said, because all the eigenvalues of $T$ are less than one in absolute value, it is clear that $\lim_n \|T^nv\|=0$, and therefore $\lim_n\phi(T^nv)=0$.
If $(x_n)$ is monotone and contains a convergent subsequence $(x_{n_i}),$ then $(x_n)$ is convergent.
If $x_n$ did not converge then, by monotonicity, $x_n$ would be unbounded; in particular $x_n - 1 > x = \lim x_{n_i}$ for all $n \geq n_0,$ for some $n_0.$ But there exists $i_0$ such that $n_i \geq n_0$ for all $i \geq i_0,$ and so $x_{n_i} \geq x + 1$ for all $i \geq i_0,$ a contradiction. Q.E.D.
How to solve this recurrence relation? (convolution integral)
Following Emre's hint I write $v_n(t):=u_n(t+nT)$ and arbitrarily assume $v_0(t)\equiv 1$. The $v_n$ satisfy the simpler looking recursion $$v_n(t)=\int_0^t \lambda e^{-\lambda y} v_{n-1}(t-y) dy\ .\qquad(*)$$ Computing the first few $v_n$'s by hand one gets the idea that the $v_n$ might be of the form $v_n(t)=1-q_n(t)e^{-\lambda t}$ with polynomials $q_n(t)$. Entering this "Ansatz" into the recursion $(*)$ one obtains after some calculation the following recursion for the $q_n$: $$q_n(t)=1+\lambda \int_0^t q_{n-1}(y) dy\qquad (n\geq1)$$ with $q_0(t)\equiv0$. Now it is easy to see that the $q_n(t$) are the partial sums $s_{n-1}$ of the series for $e^{\lambda t}$.
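A sympy check that $v_n(t)=1-q_n(t)e^{-\lambda t}$, with $q_n$ the partial sum $s_{n-1}$ of the series for $e^{\lambda t}$, satisfies the recursion $(*)$ for the first few $n$:

```python
import sympy as sp

t, y, lam = sp.symbols('t y lambda', positive=True)

def v(n):
    q = sum((lam*t)**k / sp.factorial(k) for k in range(n))   # q_n = s_{n-1}
    return 1 - q*sp.exp(-lam*t)

for n in range(1, 4):
    rhs = sp.integrate(lam*sp.exp(-lam*y) * v(n-1).subs(t, t - y), (y, 0, t))
    print(n, sp.simplify(rhs - v(n)))   # 0 for each n
```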
Obstruction to the splitting of a short exact sequence in the category of groups
The main problem arising when you deal with exact sequences of non-abelian groups is that you don't have a unique notion of splitting (recall that groups don't form an abelian category). There are two "canonically split" exact sequences: the direct product exact sequence: $$1\longrightarrow G\longrightarrow G\times H\longrightarrow H\longrightarrow 1$$ and the semi-direct product exact sequence: $$1\longrightarrow G\longrightarrow G\rtimes H\longrightarrow H\longrightarrow 1$$ Splitting is usually understood by means of the semi-direct product, since this notion is more general and flexible. The obstructions look the same as in the abelian case (existence of a section or a retract) but there are some subtleties because you need to distinguish when an exact sequence is of "direct product" type or "semi-direct product" type. Here is the main result: Theorem. Let $1\to H\to G\to K\to 1$ be a short exact sequence. The following are equivalent: there exists a morphism $G\to H$ which is a left inverse for $H\to G$; there exists an isomorphism $G\to H\times K$ which fits in the following commutative diagram: Similarly, the following are equivalent: there exists a morphism $K\to G$ which is a right inverse for $G\to K$; there exists an isomorphism $G\to H\rtimes K$ which fits in the following commutative diagram: For more detailed information, you can read this nice introduction: http://www.math.uconn.edu/~kconrad/blurbs/grouptheory/splittinggp.pdf
Convolution of a piecewise function with itself
The function is non-zero only in $[0,1]$. So, for the convolution to be non-zero you need $$0 < x - y < 1,\ \text{i.e. } x-1<y<x, \qquad\text{and}\qquad 0<y<1.$$ If $0<x<1$ then clearly $$0<y<x$$ has to hold by the two inequalities above.
Evaluate $\int_0^{\pi \over 3} \sec x\tan x\sqrt {\sec x + 2} \, dx $ using a substitution of your choice
Let $u=\sec x+2$. Then $du=\sec x\tan x\,dx$, so we are finding $\displaystyle\int_{u=3}^4 u^{1/2}\,du$. Remark: Substitution is a much simpler technique than you make it out to be. The substitution $u=\sec x$ also works nicely, in pretty much the same way, except that we end up integrating $\sqrt{u+2}\,du$. You got to that, after more time than necessary, and then made an algebra slip. Of course you know that $\sqrt{u+2}\ne \sqrt{u}+\sqrt{2}$.
How to integrate the following geometric brownian motion in Black-Scholes framework
You have the right start in that you want to compute $$ e^{-rT}E_Q(g(S_T)) $$ where $Q$ is the risk-neutral probability measure and, under $Q$, $$ S_T = S_0e^{\left(r - \frac{\sigma^2}{2}\right)T + \sigma W_T}. $$ So we have $$ e^{-rT}E_Q(g(S_T)) = e^{-rT}E_Q(P\cdot \mathbb{I}_{S_T \geq K}) = Pe^{-rT}Q(S_T \geq K), $$ where the last equality is because $$ E_Q(\mathbb{I}_{S_T \geq K}) = Q(S_T \geq K). $$ A simplifying step often done in texts is to use only the normal distribution, instead of the more cumbersome lognormal, as follows. Note \begin{align} S_T \geq K & \iff S_0e^{\left(r - \frac{\sigma^2}{2}\right)T + \sigma W_T} \geq K \\ & \iff \left(r - \frac{\sigma^2}{2}\right)T + \sigma W_T \geq \log \frac{K}{S_0} \\ & \iff W_T \geq \frac{1}{\sigma}\left(\log\frac{K}{S_0} - \left(r - \frac{\sigma^2}{2}\right)T\right) \\ & \iff \frac{W_T}{\sqrt{T}} =: Z \geq \frac{1}{\sigma\sqrt{T}}\left(\log\frac{K}{S_0} - \left(r - \frac{\sigma^2}{2}\right)T\right) =: -d_2. \end{align} Note I'm suggestively calling the RHS $-d_2$, and $Z \sim \mathcal{N}(0,1)$. Hence, letting $\phi$ be the standard normal pdf and $\Phi$ the standard normal cdf, \begin{align} Pe^{-rT}Q(S_T \geq K) & = Pe^{-rT}Q(Z \geq -d_2) \\ & = Pe^{-rT}\int_{-d_2}^\infty \phi(x)\, dx \\ & = Pe^{-rT}\int_{-\infty}^{d_2}\phi(x)\, dx \\ & = Pe^{-rT}\Phi(d_2). \end{align}
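A short implementation of the resulting price $Pe^{-rT}\Phi(d_2)$, pure standard library (the parameter values are illustrative, not from the question):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def cash_or_nothing(S0, K, r, sigma, T, P):
    d2 = (log(S0/K) + (r - sigma**2/2)*T) / (sigma*sqrt(T))
    return P * exp(-r*T) * NormalDist().cdf(d2)

print(cash_or_nothing(S0=100, K=100, r=0.05, sigma=0.2, T=1, P=1))
```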
Proof checking: Graph Theory
The proof you provide at the end of your question makes sense to me, but it's not clear to me how it matches the strategy you gave prior to it. Depending on the level of the audience (for example, if it is an undergraduate class in graph theory), it might be better to give some more detail about checking the degree condition, and how exactly you are finding the two disjoint simple paths after the addition of the two dummy vertices.
Norm of a bounded operator on a Hilbert space
You write $$|\langle y,Ax\rangle|=\lambda\,\langle y,Ax\rangle$$ for an appropriate $\lambda\in\mathbb C$ with $|\lambda|=1$. Then $$ |\langle y,Ax\rangle|=\lambda\,\langle y,Ax\rangle =\langle \lambda y,Ax\rangle=\text{Re}\,\langle \lambda y,Ax\rangle. $$ As you go through every possible nonzero $y$, the $\lambda$ is absorbed by $y$.
Limit of $f(n)^{g(n)}$
Notice that $L=0$ and $\log 0$ is undefined. However, we can write $$\lim_{n\to \infty}\left(\frac{n^2+1}{3n^2+1}\right)^n=\lim_{n\to \infty}e^{\log \left(\frac{n^2+1}{3n^2+1}\right)^n}=\lim_{n\to \infty}e^{n\log \left(\frac{n^2+1}{3n^2+1}\right)}=e^{\lim_{n\to \infty}n\log \left(\frac{n^2+1}{3n^2+1}\right)}$$ Noting that the limit in the exponent is $-\infty$, we find $$\lim_{n\to \infty}\left(\frac{n^2+1}{3n^2+1}\right)^n=0$$
Error term for a cubic interpolation
The error term is related to the article on the wiki: http://en.wikipedia.org/wiki/Polynomial_interpolation#Interpolation_error in the section "Interpolation error". Since you are given the original function and the closed interval containing the points that you interpolate the function with, you can apply the formula in the wiki directly to calculate an upper bound for the error at $x=2$. :)
Show that if $f(x)=\int_{0}^x f(t)dt$, then $f=0$
Differentiating both sides gives $f'(x) = f(x)$, which has solutions $f(x) = Ae^x$. But the original statement gives that $f(0) = \int_0^0f(t)dt = 0$, so we have $Ae^0 = 0$, hence $A = 0$ and the function is identically zero.
Simplifying $(n+1)\cdot (n+1)! + (n+1)!-1$
All you're doing is factoring out $(n+1)!$ from the expression. It might be easier if we make a substitution. Let $p=(n+1)!$. Then your original expression is $$(n+1)p + p - 1$$ Factor out $p$ throughout: $$p\left( (n+1) + 1 \right) - 1$$ Simplify the inside: $$p(n+2) - 1$$ Finally, bring back in the fact that $p=(n+1)!$: $$(n+1)! \cdot (n+2) - 1$$
Among $2n$ people there are two who have an even number (including $0$) of friends in common.
You can prove it by contradiction. Assume all vertices of a graph $H$ on $2n$ vertices have different degrees. Since the possible degrees are $0, 1, \ldots, 2n-1$ and there are $2n$ vertices, every value must be attained; in particular $d(v_1)=0$ and $d(v_{2n}) = 2n-1$. But a vertex of degree $2n-1$ is adjacent to every other vertex, including the one of degree $0$. This is a contradiction, hence $H$ can't exist.
Spectrum of $Tu=\int^1_0 (x+y)u(y)dy$
Consider $\lambda=0$. $$\int^1_0 (x+y)u(y)dy = 0 $$ or $$x\int^1_0 u(y)dy+\int^1_0 yu(y)dy = 0 $$ So $u(x)$ is an eigenfunction if both $\langle y,u\rangle=0$ and $\langle 1,u\rangle=0$ (by linear independence of $x$ and $1$). It seems to me that this implies $0$ is an eigenvalue of infinite multiplicity, because any polynomial of degree greater than $2$ can be made orthogonal to both $1$ and $y$.
Software to optimize a quadratic program with quadratic constraints
As $A$ is not positive semidefinite and you have (convex) linear equality and inequality constraints, your problem seems to be NP-hard. There is unfortunately no quick and easy way to solve your problem. I'm not sure if there is commercial software available to solve non-convex quadratic problems yet. One way to go is doing a grid search on all possible values of $\mathbf{x}$. As the dimension of your problem is only 8, this method could deliver the global optimum in reasonable time. Note: Without the linear constraints, the minimization of a non-convex quadratic form subject to one quadratic constraint can be solved easily.
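If an approximate answer is acceptable, here is a hedged multistart sketch with scipy: a local solver run from random starts, with no global-optimality guarantee. The data and the particular constraints (a random symmetric $A$, sum-to-one equality, nonnegativity) are placeholders for your own:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n)); A = (A + A.T) / 2    # indefinite in general
c = rng.normal(size=n)
obj = lambda x: x @ A @ x + c @ x

cons = [{'type': 'eq',   'fun': lambda x: x.sum() - 1},   # placeholder
        {'type': 'ineq', 'fun': lambda x: x}]             # x >= 0, placeholder

best = min((minimize(obj, rng.normal(size=n), constraints=cons)
            for _ in range(50)), key=lambda r: r.fun)
print(best.fun, best.x)
```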
Is this proof method valid?
Note that if $s$ and $t$ are non-negative, then $s\ge t$ if and only if $s^2\ge t^2$. Now $$(|x+y|+|x-y|)^2=2x^2+2y^2+2|x^2-y^2|\ge 2x^2+2y^2\tag{1}$$ and $$(|x|+|y|)^2=x^2+y^2+2|xy|.\tag{2}$$ So it is enough to show that $2x^2+2y^2\ge x^2+y^2+2|xy|$, or equivalently that $x^2+y^2\ge 2|xy|$. But this last inequality is clear, for $x^2+y^2-2|xy|=(|x|-|y|)^2$.
The value of $\lim\limits_{n\to\infty}{\left[\sin((n+1)a)-\sin(na)\right]}$
\begin{align} L &= \lim\limits_{n\to\infty}{\left[\sin((n+1)a)-\sin(na)\right]} = \lim\limits_{n\to\infty}{\left[2\cos\left(na+\frac a2\right)\sin\left(\frac a2\right)\right]} \\ & = 2\sin \left(\frac a2\right)\lim\limits_{n\to\infty}{\cos\left(\frac a2+na\right)}. \end{align} Hence, $a=k\pi, k\in \mathbb Z$ and $L=0$.
Random vector with fixed angle
Let $e_N$ be the $N^{\rm th}$ element of the standard basis of $\mathbb R^N$, and identify $\mathbb R^{N-1}$ with the subspace of $\mathbb R^N$ orthogonal to $e_N$. Let $S$ be the unit sphere in $\mathbb R^{N-1}$. Let $T$ be any orthogonal transformation sending $e_N$ to $u/\|u\|$ (concretely you can find such a transformation using Gram-Schmidt). Then your set is exactly $$ T((\cos\theta)e_N+(\sin\theta)S). $$ So it suffices to sample uniformly from $S$. One way is to sample from an $(N-1)$-dimensional Gaussian and normalize (see this question).
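A sketch of this construction, using a Householder reflection as the orthogonal map $T$ (numpy assumed):

```python
import numpy as np

def sample_fixed_angle(u, theta, rng=np.random.default_rng()):
    N = len(u)
    v = u / np.linalg.norm(u)
    g = rng.normal(size=N - 1)                    # uniform on S in R^{N-1}
    s = np.append(g / np.linalg.norm(g), 0.0)
    e = np.zeros(N); e[-1] = 1.0
    y = np.cos(theta)*e + np.sin(theta)*s
    w = e - v
    if np.linalg.norm(w) < 1e-12:                 # u already along e_N
        return y
    w /= np.linalg.norm(w)
    return y - 2*(w @ y)*w                        # Householder map: e_N -> v

u = np.array([1., 2., 2.])
x = sample_fixed_angle(u, np.pi/4)
print(x @ u / np.linalg.norm(u))                  # cos(pi/4) ≈ 0.7071
```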
Why is it important to extend the trigonometric functions to all angles?
Aside from applications, here are a couple of reasons: Addition formulas, half-angle formulas, etc. in trigonometry would have to have a bunch of separate cases to enforce their parameters being on $[0, 2\pi]$. Periodicity simplifies the situation, and there's no reason not to do so. For most definitions of the trig functions (Taylor series, differential equations, the connection with $e^{iz}$, etc.), there's no need to restrict the definition to $[0, 2\pi]$; the function in question makes sense or converges on a much larger domain. For that matter, the geometric interpretation corresponding to an angle in the plane makes sense over that larger region. Defining trig functions on the entire real line eliminates some technical annoyances with respect to continuity, differentiability, etc. that would occur if they were defined on a closed interval with a boundary. It's generally useful to think of these functions as defined on the circle $S^1 = \mathbb{R}/\mathbb{Z}$ (or the equivalent), and that naturally leads to them being defined on $\mathbb{R}$ and periodic.
Which of the following vectors are in span[v1, v2, v3]?
Solve $$\pmatrix{2&3&-1\\1&-1&0\\3&2&1\\0&5&2}\pmatrix{c_1\\c_2\\c_3}=b_i$$ with $b_1=\pmatrix{9\\0\\11\\12}$ and $b_2=\pmatrix{2\\2\\2\\2}$. The one that has a solution (and presumably only one of them does) is in the span.
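A numpy check via least squares; a near-zero residual means $b_i$ lies in the span:

```python
import numpy as np

M = np.array([[2, 3, -1], [1, -1, 0], [3, 2, 1], [0, 5, 2]], float)
for b in (np.array([9., 0, 11, 12]), np.array([2., 2, 2, 2])):
    c, *_ = np.linalg.lstsq(M, b, rcond=None)
    print(b, np.allclose(M @ c, b))   # True for b_1, False for b_2
```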
When is meet strictly monotone? What is the name?
I don't think there is any such name, because that is not an interesting property. Indeed, assume $x$ and $y$ are incomparable (which can happen exactly when the semilattice is not a chain); it follows that $x \wedge y < x$ and $x \wedge y < y$. If the operation had such a property, then we would have $$x \wedge y > (x\wedge y) \wedge (x \wedge y) = x\wedge y,$$ a contradiction. So the property can only hold for semilattices which are chains, but in that case it is trivial.
Prime ideals and powers of elements...
Suppose $M$ is not prime. Then there is some $x,y \in R$ such that $xy \in M$ but neither $x$ nor $y$ is in $M$. By hypothesis, both ideals $(M,x)$ and $(M,y)$ contain a power of $a$, since they properly contain $M$, and therefore, so does their product. But their product is contained in $M$ (since $xy \in M$), and therefore $M$ contains a power of $a$. Contradiction. The same argument holds if the set of powers of $a$ is replaced by any multiplicatively closed subset of $R$.
Condensed notation for specifying an interval for several variables
Regarding your proposal, I prefer the first one; I think most people would understand it. You can use $(\alpha,\beta)\in(0,\frac{\pi}4)^2$. Generally, for an $n$-tuple, $(x_1,x_2,\cdots,x_n)\in\mathbb R^n$ or $I^n$ for some interval $I$. Since $I^n$ is a notation for the set product $I\times I\cdots\times I$, you can also specify different intervals for different variables: $\, z=re^{i\theta}$ with $(r,\theta)\in[0,3)\times[-\frac{\pi}2,\frac{\pi}2]$; $x\in\mathbb Q\iff x=\frac pq$ with $(p,q)\in\mathbb Z\times\mathbb N^*$.
Is the hypotenuse of a triangle ever divisible by three (for primitive Pythagorean triples)?
The hypotenuse of a primitive triple is never divisible by $3$. For let $(x,y,z)$ be a primitive triple, and suppose $3$ divides $z$. Then $3$ cannot divide $x$ or $y$, else the triple would not be primitive. It follows that $x^2$ and $y^2$ have remainder $1$ on division by $3$, which means $x^2+y^2$ has remainder $2$ on division by $3$; but $x^2+y^2=z^2$ has remainder $0$, a contradiction. Remark: A somewhat more elaborate argument shows that if $p$ is a prime of the form $4k+3$, then $p$ cannot divide the hypotenuse of a primitive triple.
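A quick brute-force scan over primitive triples, generated as $(m^2-n^2,\,2mn,\,m^2+n^2)$ with coprime $m>n$ of opposite parity:

```python
from math import gcd

for m in range(2, 60):
    for n in range(1, m):
        if (m - n) % 2 == 1 and gcd(m, n) == 1:
            assert (m*m + n*n) % 3 != 0   # hypotenuse never divisible by 3
print("checked all primitive triples with m < 60")
```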
What does $\sum_{n=1}^{\infty}\frac{1}{\phi(n)}$ equal?
I reckon $\phi(n)\le n$ and so $$\sum_{n=1}^\infty \frac1{\phi(n)}\ge\sum_{n=1}^\infty\frac1n$$ etc. ADDED IN EDIT I also reckon that if $A=\{\phi(n):n\in\Bbb N\}$ then $$\sum_{m\in A}\frac1m\ge\sum_p\frac1{\phi(p)}>\sum_p\frac1p$$ where $p$ runs through all primes.
Given a yearly interest rate of $5\%$, compounded monthly, what's the present value of £$1000$ in three years' time?
$$PV = \frac{1000}{\left(1 + \dfrac{.05}{12}\right)^{12 \cdot 3}} = 860.98$$
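The same computation in Python, as a quick check:

```python
pv = 1000 / (1 + 0.05/12)**(12*3)
print(round(pv, 2))   # 860.98
```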
Convergence of the series $\sum a_n$
Since $\displaystyle \sum_n \frac{a_n}{a_n+1}$ converges, the sequence $\displaystyle \frac{a_n}{a_n+1}$ converges to $0$, hence $a_n$ converges to $0$ (have a look at the function $x\mapsto \frac x{x+1}$). In particular, $a_n$ is bounded by some $M\geq 0$. Hence $\frac{a_n}{a_n+1}\geq \frac{1}{1+M} a_n$. Comparison test yields convergence of $\sum_n \frac{1}{1+M} a_n$, that is to say convergence of $\sum a_n$.
Equation of locus of points satisfied by $\frac{\left|z+3i\right|}{\left|z-6i\right|}=1$
To cut down on unanswered questions, here we go! The book's answer is certainly wrong, as one readily sees by considering $z=\frac94i.$ Now, we clearly cannot have $z=6i$ as a solution, for then we have $\frac90=1,$ which is nonsensical. Consequently, the given equation is equivalent to $$|z+3i|=|z-6i|,$$ or, put another way, to $$\bigl|z-(-3i)\bigr|=|z-6i|.$$ Since $|z-w|$ is the distance from $z$ to $w$ for all $z,w\in\Bbb C,$ then the equation above says that $z$ is equidistant from $-3i$ and $6i.$ Readily, putting $z=x+iy,$ this is equivalent to saying that $(x,y)$ is equidistant from $(0,-3)$ and $(0,6),$ i.e.: $$\sqrt{x^2+(y+3)^2}=\sqrt{x^2+(y-6)^2}.$$ This is, of course, equivalent to your approach, and solving the preceding equation yields $y=\frac32,$ as you say.
Why is $\alpha = x^2dx^1 - x^1dx^2$ not a differential of a function?
Hint We have $$ \frac{\partial f}{\partial x^1}=x^2 $$ and $$ \frac{\partial f}{\partial x^2}=-x^1 $$ so: $$ \frac{\partial^2 f}{\partial x^1\partial x^2}\ne \frac{\partial^2 f}{\partial x^2\partial x^1} $$
Is this enough to prove that the language L is not context-free? (Pumping lemma for CFL's)
It looks good to me. In general, for pumping lemma proofs by contradiction, you need to show that for all ($\forall$) pumping lengths there exists ($\exists$) a string $s$ (with $|s|\ge p$) that cannot be pumped. In your case, you chose a generic pumping length $p$. And you found a counter-example string $s$ that absolutely cannot be pumped (i.e. you showed that all possible cases cannot be pumped). Therefore, you've showed that for any arbitrary pumping length $p$, you can construct a string $s$ that cannot be pumped by the Pumping Lemma for CFL's. Thus, you've contradicted that $L$ is a CFL.
Surjectivity of floor of harmonic sequence
Proof without the lemma. Suppose that the map isn't surjective. Note that $\phi(1) = 1$. There exists $k\in \mathbb{N}$ such that $$\forall_{n\in \mathbb{N}}\lfloor H_n \rfloor \neq k$$ Since $\phi$ is non-decreasing and unbounded (because $H_n\to\infty$), we must have $\phi(n)<k<\phi(n+1)$ for some $n$. But then we would have that $\phi(n+1)-\phi(n)\geq 2$, which is impossible. Indeed, $$\phi(n+1)-\phi(n) < H_{n+1}-H_n+1\leq 2.$$
Prove the matrix is positive
I tried two examples, using Sylvester's law of inertia: a congruence $Q^T D Q = H$ with $Q$ unit triangular and $D$ diagonal with all positive entries shows that $H$ is positive-definite. $$ Q^T D Q = H $$ $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ \frac{ 1 }{ 2 } & 1 & 0 \\ \frac{ 1 }{ 3 } & 1 & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 60 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & \frac{ 1 }{ 3 } \\ \end{array} \right) \left( \begin{array}{rrr} 1 & \frac{ 1 }{ 2 } & \frac{ 1 }{ 3 } \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 60 & 30 & 20 \\ 30 & 20 & 15 \\ 20 & 15 & 12 \\ \end{array} \right) $$ $$ Q^T D Q = H $$ $$\left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ \frac{ 1 }{ 2 } & 1 & 0 & 0 \\ \frac{ 1 }{ 3 } & 1 & 1 & 0 \\ \frac{ 1 }{ 4 } & \frac{ 9 }{ 10 } & \frac{ 3 }{ 2 } & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 420 & 0 & 0 & 0 \\ 0 & 35 & 0 & 0 \\ 0 & 0 & \frac{ 7 }{ 3 } & 0 \\ 0 & 0 & 0 & \frac{ 3 }{ 20 } \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & \frac{ 1 }{ 2 } & \frac{ 1 }{ 3 } & \frac{ 1 }{ 4 } \\ 0 & 1 & 1 & \frac{ 9 }{ 10 } \\ 0 & 0 & 1 & \frac{ 3 }{ 2 } \\ 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrr} 420 & 210 & 140 & 105 \\ 210 & 140 & 105 & 84 \\ 140 & 105 & 84 & 70 \\ 105 & 84 & 70 & 60 \\ \end{array} \right) $$
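A numpy confirmation that both matrices are positive-definite (all eigenvalues positive):

```python
import numpy as np

H3 = np.array([[60, 30, 20], [30, 20, 15], [20, 15, 12]], float)
H4 = np.array([[420, 210, 140, 105], [210, 140, 105, 84],
               [140, 105, 84, 70], [105, 84, 70, 60]], float)
for H in (H3, H4):
    print(np.linalg.eigvalsh(H).min() > 0)   # True, True
```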
Criteria for Lipschitz continuity
Note that the assumption of continuity is redundant: it follows from the displayed one. Indeed, for each $t\in [0,1]$, we can find $\delta(t)>0$ such that $|f(t)-f(s)|\leq (M+1)|t-s|$ if $|t-s|<\delta$, where $M$ is the supremum involved in the hypothesis. Otherwise, we would be able to find $t\in[0,1]$ and a sequence $\{t_n\}$ converging to $t$ such that $|f(t)-f(t_n)|>(M+1)|t-t_n|$, contradicting the assumption. The problem has been answered at MathOverflow by Misha.
Uniform convergence of composition of functions and integration
Your idea works great. The graph over the second interval has length $\epsilon$ and bounded height, so its contribution goes to $0$, while the first part needs a little more work. Given any $\eta>0$, pick $\delta$ so that $|f(u)-f(0)|<\eta$ whenever $0<u<\delta$. Then pick $n$ so that $(1-\epsilon)^n<\delta$.
Find a change of coordinates matrix
Write $1$ as $a \cdot (1-t)+b \cdot 2t$; then the first column of the matrix is $\begin{pmatrix} a \\ b \end{pmatrix}$. Next write $1+t$ as $c \cdot (1-t)+d \cdot 2t$; the second column of the matrix is $\begin{pmatrix} c \\ d \end{pmatrix}$.
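For concreteness, comparing coefficients of $1$ and $t$ (my own working of the hint): $$1=a(1-t)+b\cdot2t=a+(2b-a)t\implies a=1,\ b=\tfrac12,$$ $$1+t=c(1-t)+d\cdot2t=c+(2d-c)t\implies c=1,\ d=1,$$ so the change-of-coordinates matrix is $\begin{pmatrix}1 & 1\\ \tfrac12 & 1\end{pmatrix}.$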
Why does $\frac{1}{\sin x} = 2\sin x$?
Recall that $2=\sqrt 2\cdot\sqrt 2$ and therefore: $$\sin x=\frac{\sqrt 2}{2}=\frac{\sqrt 2}{\sqrt 2\cdot\sqrt 2} = \frac{1}{\sqrt 2}$$ Now multiplying both sides by $\frac{\sqrt 2}{\sin x}$ gives $\sqrt 2=\frac{1}{\sin x},$ and since $\sqrt 2=2\cdot\frac{1}{\sqrt 2}=2\sin x,$ we conclude $\frac{1}{\sin x}=2\sin x,$ as needed.
How can I construct a homotopy between a constant function and a continuous function?
Let us answer the second question first; the first will follow from it. We call a map homotopic to a constant map nullhomotopic. Suppose every map $f:Y\rightarrow X$ is nullhomotopic; then in particular, for $Y=X,$ the identity map $1_X:X\rightarrow X$ is nullhomotopic. We say a space $X$ is contractible if it has the homotopy type of a point, which means precisely that its identity map is homotopic to a constant map. Conversely, suppose $X$ is contractible. Then $e_{x_0}\simeq 1_X$ via some homotopy $f_t.$ For any map $f:Y\rightarrow X,$ we have $f=1_X\circ f\simeq e_{x_0}\circ f,$ by the homotopy $f_t\circ f;$ but $e_{x_0}\circ f$ is a constant map from $Y$ to $X.$ Hence $f$ is nullhomotopic. So all maps into a space $X$ are nullhomotopic if and only if $X$ is contractible. Some examples of spaces that are not contractible are the circle $S^1$ and the set $\mathbb{R}\setminus\{0\}$ (the latter is quite easy to see intuitively), and so for such spaces there exist maps into them that are not nullhomotopic. For questions 3 and 4, following the comment of @HallaSurvivor: even the one-point set $\{y_0\}$ is compact, connected, linearly ordered, and has the least upper bound property (there may be many more complicated examples), so I don't think you can define an inverse in general. As you see, you can do so for the unit interval $[0,1],$ and so if $Y$ is homeomorphic to $[0,1]$ we can find a reverse path. Suppose you are given the reverse path $\overline{f}(s)=f(1-s)$ of a path $f:[0,1]\rightarrow X$ with $f(0)=x_0.$ Then take the map $H:I\times I\rightarrow X$ given by $$H(s,t)=\left\{\begin{array}{ll} f(2s), & 0\le s\le \frac{1-t}{2}; \\ f(1-t), & \frac{1-t}{2}\le s \le \frac{1+t}{2};\\ f(2-2s), & \frac{1+t}{2}\le s\le 1. \\ \end{array}\right.$$ This map is continuous (the pieces agree where they overlap) and is a homotopy between $f * \overline{f}$ (at $t=0$) and $e_{x_0}$ (at $t=1$): as $t$ grows, one traverses less and less of $f$ before turning back.
Need a result of Euler that is simple enough for a child to understand
How about Euler's theorem on Eulerian paths in graphs, which originated from his solution to the Königsberg bridge problem?
Number of words such that only two $A$'s are together
To remove the confusion, here is a third method: (total permutations) $-$ (all three A's together) $-$ (no two A's together) $$= \frac{8!}{3!} - 6! - 5!\binom63 = 6720 - 720 - 2400 = 3600.$$
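A brute-force confirmation (a sketch assuming the word consists of three A's and five distinct other letters, say AAABCDEF; "exactly two A's together" then means the string contains AA but not AAA):

    from itertools import permutations

    words = set(permutations("AAABCDEF"))    # 8!/3! = 6720 distinct words
    def two_As_together(w):
        s = "".join(w)
        return "AA" in s and "AAA" not in s  # a pair adjacent, but not all three
    print(sum(two_As_together(w) for w in words))  # 3600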
Find the values of $p$ such that $\left( \frac{7}{p} \right )= 1$ (Legendre Symbol)
Hint: Apply quadratic reciprocity to $\bigl(\frac{7}{p}\bigr)$. You will see that the only relevant things affecting the outcome are what $p$ is modulo 4, and what $p$ is modulo 7. By the Chinese remainder theorem, that is the same information as what $p$ is modulo 28.
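An empirical check with sympy (a quick sketch): grouping odd primes $p\ne7$ by their residue mod $28$ shows that $\bigl(\frac{7}{p}\bigr)$ is constant on each class.

    from sympy import primerange
    from sympy.ntheory import legendre_symbol

    table = {}
    for p in primerange(3, 2000):
        if p != 7:
            table.setdefault(p % 28, set()).add(legendre_symbol(7, p))
    print(sorted(table.items()))  # each residue class maps to a single value, +1 or -1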
Find the value of complex expression $\left(\frac{\sqrt{3}+i}{2}\right)^{69}$
$$\left(\dfrac{\sqrt{3}+i}{2}\right)^{69}=\left(\dfrac{\sqrt{3}}{2}+\dfrac{1}{2}i\right)^{69}=\left(\cos\dfrac{\pi}{6}+i\sin\dfrac{\pi}{6}\right)^{69}=\cos\dfrac{69\pi}{6}+i\sin\dfrac{69\pi}{6}$$ by De Moivre's formula, and since $\dfrac{69\pi}{6}=\dfrac{3\pi}{2}+5\cdot2\pi,$ this equals $\cos\dfrac{3\pi}{2}+i\sin\dfrac{3\pi}{2}=-i.$
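A floating-point sanity check (a one-line sketch; the tiny real part in the output is rounding error):

    z = (3 ** 0.5 + 1j) / 2
    print(z ** 69)  # approximately -1j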
Solve the equations for $0\leq x<2\pi$, note $x$ is in radians.
HINT: b) $\sec^2 x=1+\tan^2 x$. Then, solve the resulting quadratic equation. c) $\sin^2 x=1-\cos^2 x$. Then, solve the resulting quadratic equation.
Equivalence of unoriented knots by ambient isotopy
So, to resolve this, I think you need to be a little more careful with your definitions. The definition of equivalent knots you give is what I would say out loud if I were talking about knots, but not exactly what you are thinking of. I would say a knot is the image of an embedding, not the map itself; this is more along the lines of thinking of knots as rope you can hold. So your two knots are the images of $\phi$ and $\psi$, which are exactly the same set, as you pointed out, and the ambient isotopy is the identity map for all $t\in I$. If you instead use the maps, you are forcing an orientation upon your knots, even though you are talking about unoriented knots. In that case your two knots are not equivalent, because you can't make $\phi(t)=\psi(t)$ for all $t$. (Heuristically, you can't turn the knot around in the solid torus.) But you can resolve this by noticing that if you ignore orientation, as you want to anyway, you can reverse the orientation on $S^1$ before applying $\psi$: let $f:S^1\to S^1$ be $f(t)=-t$, let $\bar{\psi}=\psi \circ f$, and this gives you $\phi=\bar{\psi}$, so these two maps are ambient isotopic. Hope this helps.
Basis for (direct) products of Vector Spaces
You can use the Grassmann formula, which states: $$\dim(U+V) = \dim(U)+\dim(V)-\dim(U \cap V)$$ If $V$ and $U$ are in direct sum with each other then $$U\cap V = \{0\},$$ so the formula becomes $$\dim(U+V)=\dim(U)+\dim(V)=\dim(U\oplus V)$$ This can be generalised in the following way: let $V$ and $U$ be finite-dimensional vector spaces over a field $K$; then $$\dim(U\oplus V)=\dim(U)+\dim(V)$$ In your example $$\mathbb{R}^n=\mathbb{R}\oplus\mathbb{R}\oplus\dots\oplus\mathbb{R},\qquad \dim(\mathbb{R}^n)=n\dim(\mathbb{R})=n.$$ For a more in-depth analysis see this forum post.
Tightness, relative compactness and convergence of stochastic processes
There is the following general statement: Let $(X,d)$ be a metric space and $(x_n)_{n \in \mathbb{N}} \subseteq X$. Then $(x_n)$ converges (in $X$) if, and only if, every subsequence of $(x_n)_{n \in \mathbb{N}}$ has a (further) subsequence which converges to a limit $x \in X$, and this limit does not depend on the chosen subsequence. The proof is not difficult; the implication "$\Rightarrow$" is obvious (if the sequence converges, then any subsequence converges and the limit does not depend on the chosen subsequence), and "$\Leftarrow$" can be proved by contradiction. Applying this statement in your framework, we can proceed as follows to prove the weak convergence of a sequence of probability measures, say $(\mu_n)_{n \in \mathbb{N}}$: Fix an arbitrary subsequence $(\mu_{n_k})_{k \in \mathbb{N}}$. Using compactness, show that this subsequence admits a convergent subsequence $(\mu_{n_{k_{\ell}}})_{\ell \in \mathbb{N}}$. Identify the possible limit of the sequence to conclude that the limit of the (convergent) subsequence does not depend on the chosen subsequence.
Relationship between number of trials and complementary CDF of a binomial distribution
"Is the complementary CDF non-decreasing in $n$?" In a binomial law, $n$ is a given parameter; the variable is $k$, and in $k$ the complementary CDF is non-increasing. Indeed, any CDF is non-decreasing (this is evident since, by definition, the CDF is a cumulative probability function), so its complement, $1-\mathrm{CDF}$, is obviously non-increasing in $k$.
$\sim_P$ is an equivalence relation on $A$.
You need to prove the following: $\forall x\in A,\ x\sim_P x$. $\forall x,y\in A,\ x\sim_P y\implies y\sim_P x.$ $\forall x,y,z\in A,\ x\sim_P y\land y\sim_P z\implies x\sim_P z.$ I don't know exactly what you mean by "not sure how to go about it in this instance", but I hope that doing the first one will help. Let $x\in A$. You want to show that $x\sim_P x$, which means, as written in Definition 2.22, that $x$ lies in the same part of the partition as $x$; this is trivial because $x=x$.
Rules in the ring with coefficients in finite field $\mathbb{F}_p$
As has been noted in the comments, for any commutative ring of characteristic $p$ where $p$ is prime, you have $(x+y)^p = x^p + y^p$ for all $x,y$. Where I come from it's called "the undergrad's dream". However, $x^p = x$ does not hold for every $x$. In fact, if $R$ is a commutative integral domain with subring $\mathbb{F}_p$, the only $x$'s such that $x^p = x$ are the elements of $\mathbb{F}_p$: by Euclidean division by $X-k$ in $R[X]$, the polynomial $X^p-X$ has at most $p$ roots in $R$, and the $p$ elements of $\mathbb{F}_p$ already account for all of them. I don't know whether the hypotheses I've given are minimal, but they're enough for what you're asking.
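A symbolic illustration with sympy (a sketch, taking $p=5$): the binomial expansion collapses mod $p$, while $X^p-X$ is visibly not the zero polynomial.

    from sympy import symbols, Poly

    x, y = symbols('x y')
    p = 5
    print(Poly((x + y)**p, x, y, modulus=p).as_expr())  # x**5 + y**5
    print(Poly(x**p - x, x, modulus=p).as_expr())       # x**5 - x, a nonzero polynomial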
Find the fundamental group of $\Bbb C^2 \setminus \{(x,y):xy=0 \}$.
Your space $X$ is exactly equal to $\mathbb C^\ast \times \mathbb C ^\ast$ so that $$\pi_1(X)= \pi_1(\mathbb C^\ast \times \mathbb C ^\ast)=\pi_1(\mathbb C^\ast )\times \pi_1(\mathbb C^\ast )=\mathbb Z\times \mathbb Z$$
Prove the following: $[1-\lambda \operatorname{sum}(A^{-1})][1-\lambda \operatorname{sum}(B^{-1})]=1$
Let us denote by $\sigma(M)$ the sum of the entries of matrix $M$. If we denote by $v^T=(1,1,\cdots, 1)$, then $\sigma(M)= v^TMv$. The way you defined it, $E=vv^T$. Now if $A+B=\lambda E$, you have $$ \begin{aligned} \lambda \sigma(A^{-1})\sigma(B^{-1})&= \lambda(v^TA^{-1}v)(v^TB^{-1}v)= v^TA^{-1}(\lambda E) B^{-1}v\\ &=v^TA^{-1}(A+B)B^{-1}v = v^TA^{-1}v+v^TB^{-1}v=\sigma(A^{-1})+\sigma(B^{-1}). \end{aligned} $$
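Expanding the desired product and substituting the identity just derived (my own concluding step): $$[1-\lambda \sigma(A^{-1})][1-\lambda \sigma(B^{-1})] = 1-\lambda\bigl(\sigma(A^{-1})+\sigma(B^{-1})\bigr)+\lambda^2\sigma(A^{-1})\sigma(B^{-1}) = 1,$$ since multiplying the derived identity by $\lambda$ gives exactly $\lambda^2\sigma(A^{-1})\sigma(B^{-1})=\lambda\bigl(\sigma(A^{-1})+\sigma(B^{-1})\bigr).$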
Maximization of a statistical property of a subset of random numbers
I'm not sure, but I think it suffices to consider only $N$ of the $2^N$ possible index sets: if you order the $\rho_i$ descendingly, the index set $I$ maximizing your quantity will consist of exactly the first $N_I$ elements of the ordering. If $\pi{:}\ [N]\to[N]$ is the ordering operator and $\phi_i=\rho_{\pi(i)}$, you get $\phi_1\ge\phi_2\ge\dots$ Then, for a given $N_I$, $$ \frac{1}{\sqrt{N_I}} \sum_{j\in I} \phi_j$$ is maximized by $I=\{1,2,\dots,N_I\}$, since for a fixed subset size the sum is largest over the top elements. It remains to determine $N_I$, which you can do by scanning the $N$ prefixes (see the sketch below); in this regard you might also find help in the theory of order statistics.
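Here is the resulting $O(N\log N)$ procedure (a numpy sketch; best_subset is a hypothetical name):

    import numpy as np

    def best_subset(rho):
        """Scan the N prefixes of the descending ordering and keep the best
        value of sum/sqrt(size); returns (best value, chosen indices)."""
        order = np.argsort(rho)[::-1]    # the ordering pi, descending
        prefix = np.cumsum(rho[order])   # prefix sums of phi
        scores = prefix / np.sqrt(np.arange(1, len(rho) + 1))
        k = int(np.argmax(scores))
        return scores[k], order[:k + 1]

    rho = np.random.randn(1000)
    value, I = best_subset(rho)
    print(value, len(I))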
Finding the value of $\int_0^{\pi/2} \frac{dx}{1+(\tan x)^{\sqrt{2}}}$
It actually works for $$ I=\int_0^{\pi/2}\frac{dx}{1+(\tan x)^r},\ \ \ r\geq0. $$ Let $y=\pi/2-x$. Then $$ I=\int_{\pi/2}^0\frac{-dy}{1+(\tan(\pi/2-y))^r}=\int_0^{\pi/2}\frac{dy}{1+\left(\frac1{\tan y}\right)^r} =\int_0^{\pi/2}\frac{(\tan y)^r}{1+(\tan y)^r}\,dy. $$ Then $$ 2I=I+I=\int_0^{\pi/2}\frac{dx}{1+(\tan x)^r}+\int_0^{\pi/2}\frac{(\tan x)^r}{1+(\tan x)^r}\,dx=\int_0^{\pi/2}\frac{1+(\tan x)^r}{1+(\tan x)^r}\,dx=\frac\pi2. $$ Then $$ I=\frac\pi4. $$
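A numerical sanity check (a scipy sketch; quad evaluates the integrand only at interior points, so the blow-up of $\tan$ at $\pi/2$ is harmless):

    import numpy as np
    from scipy.integrate import quad

    for r in (0.5, 1.0, 2 ** 0.5, 3.0):
        val, _ = quad(lambda x: 1 / (1 + np.tan(x) ** r), 0, np.pi / 2)
        print(r, val)  # each value is approximately pi/4 = 0.7853...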
About the proof that every real number in the unit interval is the limit of a sequence of dyadic numbers
(1). Let $x_1=0$ if $0<x<1/2$ and $x_1=1$ if $1/2\leq x<1.$ Let $y_1=x_1/2.$ We define the sequences $(x_n)_{n\in \Bbb N}$ and $(y_n)_{n\in \Bbb N}$ as follows: (2). Suppose $y_n\leq x.$ Let $x_{n+1}=0$ if $x<y_n+2^{-(n+1)},$ and let $x_{n+1}=1$ if $x\ge y_n+2^{-(n+1)}.$ Then let $y_{n+1}=y_n+x_{n+1}2^{-(n+1)}.$ In both cases we have $y_n\leq x\implies y_{n+1}\leq x.$ And we have $y_1\leq x.$ So by induction we have $y_n\leq x$ for all $n$. (3). We have $y_n\geq x- 2^{-n}\implies y_{n+1}\geq x-2^{-(n+1)}.$ Proof: Suppose $y_n\geq x-2^{-n}.$ Then $\quad$(i). If $y_n\leq x-2^{-(n+1)}$ then $x_{n+1}=1,$ so $\quad y_{n+1}=y_n+2^{-(n+1)}\geq (x-2^{-n})+2^{-(n+1)}=x-2^{-(n+1)}.$ $\quad$(ii). If $y_n>x-2^{-(n+1)}$ then $x_{n+1}=0,$ so $y_{n+1}=y_n>x-2^{-(n+1)}.$ In both (i) and (ii) we have $y_{n+1}\geq x-2^{-(n+1)}.$ And we have $y_1\geq x-2^{-1}.$ So $y_n\geq x-2^{-n}$ for all $n$ by induction. (4). Since $x-2^{-n}\leq y_n\leq x$ for all $n,$ we have $x=\lim_{n\to \infty}y_n=\sum_{n=1}^{\infty}x_n2^{-n}.$ Remark. In (2) we use recursion and induction together, to define $x_{n+1}$ and $y_{n+1}$ recursively from $y_n$, with the inductive hypothesis that $y_n\leq x.$ If you wanted, you could instead say "If $y_n>x$ then let $x_{n+1}=1=y_{n+1}$" (or some other arbitrary values) for a purely recursive definition, and then prove separately that $y_n\leq x$ for all $n$ by induction.
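A direct implementation of the recursion (a Python sketch; dyadic_approx is a hypothetical name):

    def dyadic_approx(x, N):
        """Greedy binary digits of x in (0,1): returns y_N, with x - 2^-N <= y_N <= x."""
        y = 0.0
        for n in range(1, N + 1):
            if x >= y + 2.0 ** -n:  # this is the digit choice x_n = 1
                y += 2.0 ** -n
        return y

    x = 0.7234
    print(x - dyadic_approx(x, 30))  # nonnegative and smaller than 2^-30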
How many paths would confirm the existence of limit of a two variable function?
Here are two examples (Examples 9.1 and 9.2) taken from Gelbaum and Olmsted's Counterexamples in Analysis that illustrate how badly "path checking" can fail: 1) $$ f(x,y)=\begin{cases} \dfrac{x^2y}{x^4+y^2}, & \text{if } (x,y)\ne(0,0)\\ 0, & (x,y)=(0,0) \end{cases} $$ Here the limit of $f$ as $(x,y)$ makes any straight-line approach to the origin is $0$. Yet $f$ does not have a limit at $(0,0)$, as there are points arbitrarily near the origin at which $f$ takes the value $1/2$ (namely $(a,a^2)$). 2) $$ f(x,y)=\begin{cases} \dfrac{e^{-1/x^2}y}{e^{-2/x^2}+y^2}, & \text{if } x\ne 0\\ 0, & x=0 \end{cases} $$ Here the limit of $f$ as $(x,y)$ makes any approach to the origin along a curve of the form $y=cx^{m/n}$, where $c\ne0$ and $m,n$ are relatively prime positive integers, is $0$. Yet $f$ does not have a limit at $(0,0)$, as there are points arbitrarily near the origin at which $f$ takes the value $1/2$ (namely $(a,e^{-1/a^2})$).
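A numerical illustration of the first example (a quick sketch): along straight lines the values tend to $0$, while along the parabola $y=x^2$ they sit at $1/2$.

    f = lambda x, y: x**2 * y / (x**4 + y**2)

    for a in (0.1, 0.01, 0.001):
        print(f(a, a), f(a, 2 * a), f(a, a**2))
    # the first two columns tend to 0; the last column is identically 0.5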
Eigenvector of Matrix with Duplicate Columns
The eigenvector for the eigenvalue $15$ is the vector $(1,2,3,4,5)^T$; the eigenvectors for the eigenvalue $0$ are the vectors whose entries sum to zero.
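A quick numpy check (a sketch, assuming the matrix in question has five identical columns equal to $(1,2,3,4,5)^T$):

    import numpy as np

    M = np.outer([1, 2, 3, 4, 5], np.ones(5))  # every column is (1,2,3,4,5)^T
    print(np.round(np.linalg.eigvals(M), 8))   # 15 once, 0 with multiplicity 4
    v = np.array([1.0, -1.0, 0.0, 0.0, 0.0])   # entries sum to zero
    print(M @ v)                               # the zero vector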
Expected Value and Variance Question
Your answer for part (b) is correct. For part (a), just some algebraic manipulation. It might be helpful to first rewrite $E\left((2+X)^2\right)=E\left(X^2 + 4X + 4\right)$. Then, using linearity of expectation, you can rewrite $E\left(X^2 + 4X + 4\right)$ as $E\left(X^2\right)+E\left(4X\right)+E\left(4\right)$, and since $E(4X) = 4E(X)$, plug in what you know: $$E\left(X^2\right)+4E\left(X\right)+E\left(4\right) = E\left(X^2\right) + 4\cdot1 + 4 = E\left(X^2\right) + 8.$$ You then want to note the relationship between variance and expectation: $Var(X) = E(X^2) - [E(X)]^2$. Plugging in your given values, $$5 = Var(X) = E(X^2) - 1^2,$$ so $E(X^2)=6$, which gives you $E\left((2+X)^2\right)= 8+6 = 14$. Hope that helps!
What does the solution to the transport equation describe?
The quantity $u(x,t)$ can describe: the height of a wave at point $x$ at time $t$; the temperature at point $x$ at time $t$; the concentration of some substance at point $x$ at time $t$; the number of cars on a 50-meter stretch of a road... PDEs are not tied to particular physical quantities; they describe physical processes. The PDE $u_t+cu_x=0$ describes the process of propagation (of whatever) with constant velocity. The PDE $u_t = ku_{xx}$ describes the process of diffusion, of whatever is able to diffuse. It is not really correct to say that "in $u_t =k u_{xx}$, the function $u$ is temperature". It could be that, or it could be radioactivity level, etc. Conversely, the temperature could be modeled by $u_t=ku_{xx}$, or by $u_t = cu_x$, or by $u_t = cu_x + ku_{xx}$, or by many other equations. It depends on which processes one chooses to focus on.
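For instance, any profile transported rigidly at speed $c$ solves the transport equation; a quick symbolic check (a sympy sketch):

    from sympy import symbols, Function, diff, simplify

    x, t, c = symbols('x t c')
    f = Function('f')
    u = f(x - c * t)  # an arbitrary profile moving right at speed c
    print(simplify(diff(u, t) + c * diff(u, x)))  # 0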
Function linear in one variable and concave quadratic in another: is it jointly concave?
No, you may not. Take $f(x,y,z) = -yz^2$, with $D_y = \mathbb{R}_+$: it is linear in $y$ and concave quadratic in $z$, but its Hessian in $(y,z)$ is $$\begin{pmatrix}0 & -2z \\ -2z & -2y\end{pmatrix},$$ which is indefinite for $z \neq 0$, since the eigenvalues are $-y \pm \sqrt{y^2 + 4z^2}$.
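Checking numerically at a sample point, say $y=z=1$ (a numpy sketch):

    import numpy as np

    y, z = 1.0, 1.0
    H = np.array([[0.0, -2 * z],
                  [-2 * z, -2 * y]])
    print(np.linalg.eigvalsh(H))  # one negative, one positive: indefinite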
Lebesgue measurable $\Rightarrow$ outer measure $= 0$
First assume that $A$ is bounded. By regularity of Lebesgue measure there exist compact sets $K_n \subset A$ such that $m(A) < m(K_n)+\frac 1 n$. Take $A_1=\cup_n K_n$ and $A_2=A\setminus \cup_n K_n$. For the general case apply the result to $A \cap [-N,N]$ for each $N$. I will leave the rest to you.