problem on meromorphic functions
You probably want something about the domain of $f$ being connected, otherwise there are tons of such possible functions. Clearly it is neither $1$ nor $2$, since both $f_1(z)=z$ and $f_2(z)=2z$ are examples of such functions. If $4$ were true, it would imply $3$, so you only need to prove $4$. To prove $4$, consider the function $\frac{z}{f(z)}$ and apply the Schwarz lemma.
Disconnected Zariski open subsets of $\mathbb{C}^n$
No. In the usual topology, this is because if $U$ is connected of dimension $n$ and $Z \subset U$ has real codimension at least two, then $U \backslash Z$ is connected. In the Zariski topology, something stronger holds: $U$ is irreducible since $\Bbb C^n$ is, and in particular connected.
Proving $1+5+9+\cdots+(4n+1) = (n+1)(2n+1)$ by induction (is there a typo?)
It may help you to realize that $$ 1+5+9+\cdots+(4n+1) = (n+1)(2n+1) $$ may actually be rewritten as $$ \sum_{i=0}^n(4i+1)=(n+1)(2n+1). $$ Thus, for $n\geq 0$, let $S(n)$ denote the statement $$ S(n) : \sum_{i=0}^n(4i+1)=(n+1)(2n+1). $$ Base case ($n=0$): $S(0)$ says that $4(0)+1=1=(0+1)(2(0)+1)$, and this is true. Induction step: Fix some $k\geq 0$ and assume that $S(k)$ is true where $$ S(k) : \sum_{i=0}^k(4i+1)=(k+1)(2k+1). $$ To be shown is that $S(k+1)$ follows where $$ S(k+1) : \sum_{i=0}^{k+1}(4i+1)=(k+2)(2k+3). $$ Beginning with the left-hand side of $S(k+1)$, \begin{align} \sum_{i=0}^{k+1}(4i+1) &= (4k+5)+\sum_{i=0}^k(4i+1)\tag{by defn. of $\Sigma$}\\[0.5em] &= (4k+5)+(k+1)(2k+1)\tag{by $S(k)$, the ind. hyp.}\\[0.5em] &= (4k+5)+(2k^2+3k+1)\tag{expand}\\[0.5em] &= 2k^2+7k+6\tag{simplify}\\[0.5em] &= (k+2)(2k+3),\tag{factor} \end{align} we end up at the right-hand side of $S(k+1)$, completing the inductive step. Thus, by mathematical induction, the statement $S(n)$ is true for all $n\geq 0$. $\blacksquare$
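If you want a quick empirical sanity check alongside the proof, a couple of lines of Python will test the identity for many $n$ at once:

```python
# Numerically verify sum_{i=0}^n (4i+1) = (n+1)(2n+1) for small n.
for n in range(1000):
    assert sum(4*i + 1 for i in range(n + 1)) == (n + 1) * (2*n + 1)
print("identity holds for n = 0..999")
```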
Proof of remainder always less than divisor
In a loose sense $S$ is the set of possible remainders: it’s the set of amounts by which $n$ differs from a multiple of $m$. That is, you look at all possible multiples $qm$ of $m$ (for $q\in\Bbb N$) and see how much $n$ differs from each of them; those differences are the members of $S$. As a result $S$ contains the non-negative numbers in the following list: $$\begin{align*} &n-0\cdot m=n\;,\\ &n-1\cdot m=n-m\;,\\ &n-2\cdot m=n-2m\;,\\ &n-3\cdot m=n-3m\;,\\ &\qquad\qquad\;\,\vdots \end{align*}\tag{1}$$ Remember, we’re looking for natural numbers $q$ and $r$ such that $n=qm+r$, which means that $r=n-qm$; thus, if such $q$ and $r$ exist, the remainder $r$ must be one of the numbers in the list $(1)$. We also want $r$ to be non-negative, so $r$ will actually be in $S$ and not just in the list $(1)$. We know that $n\in\Bbb N$, so we know that the list $(1)$ does have at least one non-negative member, and therefore $S\ne\varnothing$. You ask how $n$ can be a candidate for the remainder: what if $m>n$? What if, for instance, $m=5$ and $n=3$? Then dividing $n$ by $m$ yields a quotient $q=0$ and a remainder $r=n=3$: $3=0\cdot 5+3$. Why might $r$ be the smallest element of $S$? I’m not sure what you’re asking here. In the proof we simply let $r$ be the smallest element of $S$ and then prove that this number has the desired properties. Are you asking why we might think of trying this value of $r$? We already know that the remainder has to be in $S$, and we also know that it has to be less than $m$, so we want it to be small rather than large; it makes sense, therefore, to try the smallest member of $S$ to see whether it does the job. The argument in step 8 is by contradiction: we suppose that $r\ge m$ and derive a contradiction, so that we can conclude that $r<m$, just as we want. Recall that $r$ is the smallest member of $S$ and that $q$ has been chosen so that $n=qm+r$. Thus, $n-qm=r$, and since we’ve assumed that $r\ge m$, this implies that $n-qm\ge m$. Subtract $m$ from both sides of the inequality to get $n-qm-m\ge 0$, and then combine the two $m$ terms to get $n-(q+1)m\ge 0$. Now $q\in\Bbb N$, so $q+1\in\Bbb N$, and therefore $n-(q+1)m$ is one of the numbers in the list $(1)$. For convenience let $s=n-(q+1)m$; as we just saw, $s$ is in the list $(1)$. Moreover, we just showed that $s\ge 0$, so $s$ is actually in $S$. (Recall that $S$ is the set of non-negative members of the list $(1)$.) But if you look back at the algebra that we just did, you can see that $$s=n-(q+1)m=(n-qm)-m=r-m\;.$$ Now recall that by hypothesis $m\in\Bbb N^+$, so $m>0$, and therefore $s=r-m<r$. In other words, $s$ is a member of $S$ that’s smaller than $r$, contradicting our choice of $r$ as the smallest member of $S$. This contradiction shows that $r$ cannot be greater than or equal to $m$ and hence that $r<m$.
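The construction in the proof translates directly into code; here is a minimal Python sketch (function name mine) that finds $q$ and $r$ by taking the smallest non-negative member of $S$:

```python
def divide(n, m):
    """Return (q, r) with n = q*m + r and 0 <= r < m, for n >= 0, m >= 1,
    by taking r as the smallest non-negative value of n - q*m."""
    q = 0
    while n - (q + 1) * m >= 0:   # while a smaller member of S still exists
        q += 1
    return q, n - q * m

assert divide(3, 5) == (0, 3)     # the m > n example from the text
assert divide(17, 5) == (3, 2)
```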
Dual of $L^p$ space avoiding reflexivity and Radon-Nikodym Theorem
As far as I can see there is a mistake in 7) and that is the only reason you got stuck. Replace $f$ in 6) by $\min (\frac f {1-h},k)$. You will get 7) with an extra factor of $1-h$ on the left. Now let $k \to \infty$.
Fourier transform of an absolute value
Using the definition we have $$F(f) = \int_{-\infty}^{\infty} f(t) e^{- 2 \pi i t f} \ dt = \int_{-\frac{5T}{4}}^{\frac{5T}{4}} \vert \cos(\frac{2\pi t }{T}) \vert e^{- 2 \pi i t f} \ dt $$ Noting that $\cos(x) \geq 0$ when $- \frac{\pi}{2} \leq x \leq \frac{\pi}{2}$ and negative elsewhere on $[-\pi,\pi]$, with periodicity on neighbouring intervals, and using the fact that $\cos(x)$ is an even function, we get $$F(f) = 2 \Big( \int_{0}^{\frac{T}{4}} \cos(\frac{2\pi t }{T}) e^{- 2 \pi i t f} \ dt - \int_{\frac{T}{4}}^{\frac{3T}{4}} \cos(\frac{2\pi t }{T}) e^{- 2 \pi i t f} \ dt + \int_{\frac{3T}{4}}^{\frac{5T}{4}} \cos(\frac{2\pi t }{T}) e^{- 2 \pi i t f} \ dt \Big)$$ Now to fully find $F(f)$ we need an expression for $\int \cos(\frac{2\pi t }{T}) e^{- 2 \pi i t f} \ dt$. There are many ways to do it. One way is to use Euler's formula, $$\int \cos(\frac{2\pi t }{T}) e^{- 2 \pi i t f} \ dt = \int \cos(\frac{2\pi t }{T}) \cos( 2 \pi t f) \ dt - i \int \cos(\frac{2\pi t }{T}) \sin( 2 \pi t f) \ dt$$ and solve each integral found in the real and imaginary parts above. I'll leave the math part for you ;)
Nilpotent ring example
Start with a field of $3$ elements $F_3$ and take the free algebra $F_3\langle x,y\rangle$ in noncommuting indeterminates, then take the quotient by $(x,y)^3$, and let $R$ be the ideal $(x,y)/(x,y)^3$ in the quotient ring. It's nilpotent because the product of any three elements of $R$ is $0$ (in particular $a^3=0$ for every $a\in R$), and it's not commutative because $xy\neq yx$. It also satisfies $3a=0$ for every $a\in R$, if that is what you meant by "additively nilpotent" (more precisely, every element has additive order dividing $3$).
Is this a familiar distribution?
Yes, this is a Gaussian distribution: just complete the square in the exponent. After doing so, you will find that it is $N(\mu,\sigma)$ where $\mu=-\frac{\alpha}{2\beta}$ and $\sigma=\frac{1}{\sqrt{2\beta}}$ (assuming that $\beta>0$, otherwise your formula doesn't give a probability density).
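As a numerical check, assuming (my reading of the question) that the density is proportional to $e^{-\alpha x-\beta x^2}$, the normalized version matches the stated Gaussian:

```python
import numpy as np

alpha, beta = 1.3, 0.7                        # illustrative values, beta > 0
mu, sigma = -alpha / (2 * beta), 1 / np.sqrt(2 * beta)

x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
p = np.exp(-alpha * x - beta * x**2)
p /= p.sum() * (x[1] - x[0])                  # normalize the assumed density

gauss = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(np.abs(p - gauss).max())                # tiny: the two densities agree
```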
Prove that rational numbers (not just positive) are countable without using axiom of choice.
You don't need the axiom of choice for the following statement: If $X$ is countable, and $f$ is a function whose domain is $X$, then the range of $f$ is countable. You also don't need the axiom of choice for the following statement: $\Bbb{N\times Z}$ is countable. Finally, define $f(n,m)=\frac nm$ or $0$ if $m=0$, and show that this is a surjection onto the rational numbers.
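Here is a small Python sketch of the resulting enumeration: it walks $\Bbb N\times\Bbb Z$ diagonally and applies the surjection $f$, skipping repeats (the helper name is mine):

```python
import itertools
from fractions import Fraction

def rationals():
    """Enumerate all rationals by walking N x Z along diagonals n + |m| = d
    and applying f(n, m) = n/m (with f = 0 when m = 0); repeats are skipped."""
    seen = set()
    d = 0
    while True:
        d += 1
        for n in range(d + 1):
            for m in (d - n, -(d - n)):
                q = Fraction(n, m) if m != 0 else Fraction(0)
                if q not in seen:
                    seen.add(q)
                    yield q

print(list(itertools.islice(rationals(), 10)))
# [0, 1, -1, 1/2, -1/2, 2, -2, 1/3, -1/3, 3]
```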
for a compact manifold $M$, is the dual space of $H^1(M)$ equal to $H^{-1}(M)$?
No: by definition, $H^{-1}(M)$ is the dual of $H_0^1(M)$, not of $H^1(M)$. The dual of $H^1(M)$ does exist, but it is smaller than $H^{-1}(M)$. If you want an intuitive explanation, I recommend reading the definition via the Fourier transform; then you will see that the dual of $H^s$ is $H^{-s}$, not by definition but by computation.
Show that for all $n \in \Bbb{N}^*$, $\frac{1}{n+1} \le u_n$
Taking the $Z$-transform of the equation that you proved, i.e. \begin{equation} u_{n+1}=-\frac{3}{4}+\frac{3}{4}(n+1)u_n \end{equation} \begin{equation} zU(z)-zu(0)=-\frac{3}{4}\frac{z}{z-1}-\frac{3z}{4}\frac{dU(z)}{dz}+\frac{3}{4}U(z) \end{equation} Assuming $u(0)=1$, and after some manipulation of the above equation and taking the inverse $Z$-transform, we get \begin{equation} u_n=\frac{1}{4}\Bigg((4c_1+3)\Big(\frac{3}{4}\Big)^n\Gamma(n+1)-4e^\frac{4}{3}E_{-n}\Big(\frac{4}{3}\Big)\Bigg) \end{equation} To find $c_1$, we substitute $u_0=1$ and use the asymptotic expansion of $E_n(x)$, viz. \begin{equation} E_n(x)= \frac{e^{-x}}{x}\Bigg[1-\frac{n}{x}+\frac{n(n+1)}{x^2}-...\Bigg] \end{equation} \begin{equation} E_0\Big(\frac{4}{3}\Big)=\frac{3}{4}e^{-\frac{4}{3}} \end{equation} Thus $c_1=1$, and so \begin{equation} u_n=\frac{1}{4}\Bigg(7\cdot\Big(\frac{3}{4}\Big)^n\Gamma(n+1)-4e^{\frac{4}{3}}E_{-n}\Big(\frac{4}{3}\Big)\Bigg). \end{equation} Now, replacing $n$ with $-n$ and $x$ by $\frac{4}{3}$ in the exponential integral function $E_n(x)$, we get \begin{equation} u_n=\frac{1}{4}\Bigg(7\cdot\Big(\frac{3}{4}\Big)^nn!-3\displaystyle \sum_{k=0}^{n} \Bigg(\frac{3}{4}\Bigg)^k{}^nP_k\Bigg) \end{equation} This can be represented in another form as \begin{equation} y_n=\frac{4^{-n}\Big(7\cdot 3^{n+1}\cdot (n+1)!-4^{n+1}e^{4/3}E_{-n}\Big(\frac{4}{3}\Big)(n+3)\Big)}{3(n+1)} \end{equation} where $ y_n = 4u_n-\frac{4}{n+1}$. Since $y_n$ is strictly increasing and $y_0=0$, it follows that $u_n\ge\frac{1}{n+1}$. (A WolframAlpha plot of $y_n$ confirms the monotonicity.)
How to find out if a number is a hundred or thousand?
Say $n$ is your number. The exponent of the largest power of $10$ not exceeding $n$ is $\lfloor\log_{10} n\rfloor$ (i.e. the largest integer $d$ s.t. $10^d\leq n$); the greatest power of $10$ not exceeding $n$ is hence $10^{\lfloor\log_{10} n\rfloor}$, and the most significant digit is $\left\lfloor\frac{n}{10^{\lfloor\log_{10} n\rfloor}}\right\rfloor$. Therefore, what you look for can be computed as $$ \left\lfloor\frac{n}{10^{\lfloor\log_{10} n\rfloor}}\right\rfloor \cdot 10^{\lfloor\log_{10} n\rfloor} $$ I would then insert an if statement beforehand just to rule out the case $n=0$. A pseudocode would look like (I extended it for negative numbers using the absolute value abs and sign sign functions) if n == 0, then $~~~~$return 0 end if d = floor(log(abs(n)) / log(10)) $\color{green}{\%~~\text{NB: }{\tt d}\text{ is to be converted to integer type}}$ return sign(n) * floor(abs(n)/10^d) * 10^d Notice that this works also for decimal numbers smaller than $1$: e.g. for $n=0.024$ it would return $0.02$. If this is not the wanted behaviour, edit the first if statement accordingly.
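For concreteness, here is the pseudocode above as a runnable Python function (a direct transcription; the function name is mine):

```python
import math

def leading_round(n):
    """Round n toward zero to its most significant digit,
    e.g. 1234 -> 1000, -567 -> -500, 0.024 -> 0.02."""
    if n == 0:
        return 0
    d = math.floor(math.log10(abs(n)))   # exponent of the leading digit
    p = 10.0 ** d                        # greatest power of 10 <= |n|
    return math.copysign(math.floor(abs(n) / p) * p, n)

print(leading_round(1234), leading_round(-567), leading_round(0.024))
# 1000.0 -500.0 0.02
```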
Prove that the following is a base for $\mathbb R^n$
This may be nuking a mosquito, but the vectors are linearly independent since $$A= [v_1 \,\,\lvert \,\,\cdots \,\,\rvert \,\,v_n]$$ is invertible by Gershgorin’s circle theorem.
Find the smallest value of $a+b^3$, where $a$ and $b$ are positive real numbers satisfying $ab=1$
Hint: The constraint means that $b=a^{-1}$, so you need to minimize $f(a)=a+a^{-3}$ on an appropriate domain. Note: The minimum is attained at an interior critical point, since $f(a)$ increases without bound as $a\to 0^+$ or $a\to\infty$.
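A short sympy computation carries the hint to the end, confirming where the minimum of $f(a)=a+a^{-3}$ sits on $(0,\infty)$:

```python
import sympy as sp

a = sp.symbols('a', positive=True)
f = a + a**-3
crit = sp.solve(sp.diff(f, a), a)        # f'(a) = 1 - 3/a^4 = 0
print(crit)                              # [3**(1/4)]
print(sp.simplify(f.subs(a, crit[0])))   # 4*3**(1/4)/3, about 1.755
```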
For a distribution $p_{\mathbf{X}}$, $p_{\mathbf{X}}(\mathbf{x}) = p(x_1) \cdots p(x_d)$ if $p(\mathbf{x}) = q(r)$?
Edit: After further clarifications. If we do not assume independence of the components of the vector, then functional dependence on the radial component only is not enough to claim that the density factors as independent marginals; consider the bivariate Cauchy distribution $$ p(x,y) = \frac{1}{2\pi} \frac{c}{(c^2 + x^2 + y^2 )^{3/2}}, \quad c > 0. $$ Then you can show that $$ p(Y | X = x) = \frac{(c^2 + x^2)}{2(c^2 + x^2 + y^2 )^{3/2}}. $$ So using the bivariate Cauchy example we have seen that even though the density function depends only on the magnitude $\| \textbf{x} \|$ of its argument, it does not factor into a product of independent marginals. So while the answer to your question as posed is negative, there is an asymptotic approximation that may be of some interest. Note, though, that it is only a statement about the projection of the original distribution onto the unit sphere $S^{n-1} \subset \mathbb{R}^n$: while it says these projections are asymptotically i.i.d. Gaussian, it of course does not say the original distribution in the ambient space $\mathbb{R}^n$ factors as a product. Let $\textbf{X} = (X_1,\ldots,X_n)$ be such that the scaled random variable $\textbf{Y}=(Y_1,\ldots,Y_n)$ with components $$ Y_k = \frac{X_k}{\| \textbf{X} \|} $$ is distributed uniformly on the unit sphere. That is, we take a random variable whose density is a function of the norm $\| \textbf{x} \|$ only and project it to a density defined over the unit sphere. Also let $\Phi(z)$ denote the distribution of the standard normal. Then if $\textbf{Y}_n$ is a sequence of observations of these scaled random variables, as $n \rightarrow \infty$ we have $$ \mathbb{P}( n^{1/2} Y_{n1} \leq y_1, \ldots , n^{1/2}Y_{nn} \leq y_{n} ) \xrightarrow{\mathscr{D}} \prod_{i=1}^{n} \Phi(y_i), $$ where by $Y_{nk}$ we mean the $k$th component of the $n$th random vector $\textbf{Y}$. So in the limit, appropriately scaled, the observation of such a random variable is approximately the i.i.d. product Gaussian case. Original answer, assuming independence. It is a classical result (which can even be extended to infinite sequences) that if the distribution is rotatable, i.e. the probability remains invariant under rotations, meaning that if $x = (x_1,\ldots,x_n)$ and $y = (y_1,\ldots,y_n)$ are two points such that $x \neq y$ but $$ \sqrt{\sum_i x_i^2} = \sqrt{\sum_i y_i^2}, $$ then $p(x) = p(y)$, so that the probability density function depends only on the radius of its argument, then we have: Theorem [Rotatability and independence] Let $X_1,\ldots,X_n$ be independent random variables with $n \geq 2$. Then $(X_1,\ldots,X_n)$ is rotatable if and only if the $X_i$ are i.i.d. centered Gaussians.
Change of Variable for Lebesgue Integral on $\mathbb{R}^n$
As you noticed, we can write $A=P^tDP$, where $D$ is diagonal, with real entries and $P$ orthogonal. The LHS is $$I:=\int_{\Bbb R^n}F(x^tP^tDPx)dx_1\dots dx_n.$$ Make the change of variables $y=Px$; since $P$ is orthogonal the absolute value of the Jacobian is $1$, hence $$I=\int_{\Bbb R^n}F(y^tDy)dy_1\dots dy_n=\int_{\Bbb R^n}F\left(\sum_{j=1}^n\lambda_jy_j^2\right)dy_1\dots dy_n.$$ Since $\lambda_j>0$, let $t_j:=\sqrt{\lambda_j}y_j$ for $1\leq j\leq n$. The inverse of the Jacobian of this transformation is $\frac 1{\sqrt{\det D}}$, which is what we want since $\det D=\det A$.
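A numerical spot check of the final identity, with the illustrative choices $F(t)=e^{-t}$ and a specific symmetric positive definite $A$ (in two variables both sides equal $\pi/\sqrt{\det A}$):

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
lhs, _ = dblquad(
    lambda y, x: np.exp(-(A[0, 0]*x*x + 2*A[0, 1]*x*y + A[1, 1]*y*y)),
    -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
rhs = np.pi / np.sqrt(np.linalg.det(A))  # (1/sqrt(det A)) * int exp(-|t|^2) dt
print(lhs, rhs)                          # both about 1.405
```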
what is asking to do to show that u(x,t)=$\phi(x)\theta(t)$ is a solution to the wave equation?
You just need to show that the function $u(x,t) = \phi(x)\theta(t)$ satisfies the wave equation. That is, show that $$\frac{\partial^2 u(x,t)}{\partial t^2} = c^2 \frac{\partial^2 u(x,t)}{\partial x^2}.$$ Since you have $u(x,t) = \phi(x)\theta(t)$, this reduces to showing that $$ \phi(x)\frac{\partial^2 \theta(t)}{\partial t^2} = c^2 \frac{\partial^2 \phi(x)}{\partial x^2} \theta(t). $$
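For a concrete instance, sympy can verify a separated solution; here $\phi(x)=\sin(kx)$ and $\theta(t)=\cos(ckt)$ are chosen purely for illustration:

```python
import sympy as sp

x, t, k, c = sp.symbols('x t k c')
u = sp.sin(k*x) * sp.cos(c*k*t)          # u(x,t) = phi(x) * theta(t)
residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))             # 0, so u solves u_tt = c^2 u_xx
```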
A bizarre 3D projection in 2D for a sphere
Since $y$ and $z$ are aligned to the axes, they will be mapped in a 1-1 relationship. That leaves the question of what to do with $x$. There are 2 relevant angles: one is the angle that the $x$-axis makes with the horizontal axis on the screen, which I will call $\theta$, and the other is the angle that the axis makes with the screen, which I will call $\phi$. For example, if the $x$-axis is perpendicular to the screen, the $x$-coordinate will be completely suppressed. Looking at the picture, and picking appropriate trig functions yields the transformation: $$(x,y,z) \rightarrow (y-x\cos{\theta}\sin{\phi},z+x\sin{\theta}\sin{\phi})$$
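In code the transformation is a one-liner; a small Python helper (names mine):

```python
import math

def project(x, y, z, theta, phi):
    """Map a 3D point to 2D screen coordinates with y, z axis-aligned
    and the x-axis drawn at angle theta, foreshortened by sin(phi)."""
    return (y - x * math.cos(theta) * math.sin(phi),
            z + x * math.sin(theta) * math.sin(phi))

print(project(1.0, 0.0, 0.0, math.radians(30), math.radians(60)))
```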
Property of contractible spaces
You probably need the "lifting lemma": Suppose that $p: X \to Y$ is a covering map, and let $f : Z \to Y$ be a continuous map. Pick points $x \in X$, $y \in Y$ and $z \in Z$ so $x$ and $z$ are mapped to $y$ under these maps. If $f_{*} \pi_1(Z,z) \subset p_{*} \pi_1(X,x) \subset \pi_1(Y,y)$, then there is a unique lift $\tilde f: Z \to X$ so that $f(z) = x$ and so that the diagram commutes. (The converse is also true, but this is the nontrivial direction.) So the property being used here is that a contractible space has trivial fundamental group, so such a lift always exists.
Analytic Continuation proof questions
1) Because the function $G(\zeta):=\dfrac{F(\zeta)-F(z)}{\zeta-z}$ on $\mathcal N\backslash\{z\}$ is the zero function, i.e. $G(\zeta)=0$ for all $\zeta\in\mathcal N\backslash\{z\}$, its limit for $\zeta\to z$ is $0$. We're not in the $0/0$ situation. It's like asking whether $$\lim_{x\to 0}\dfrac{0}{x}=0$$ is wrong because it's the $0/0$ situation. 2) Let $F^{(n)}$ be the $n$-th derivative of $F$ for any $n\in\mathbb N$ (with $F^{(0)}=F$ by convention). Since $F$ is analytic on $D$, $F^{(n)}$ is continuous on $D$. In particular, for any parametric curve $\gamma(t)$ in $D$ such that $\gamma(1)=w$, we have $\lim_{t\to 1^-}F^{(n)}(\gamma(t))=F^{(n)}(w)$. In particular, for the curve used in the proof we have $F^{(n)}(\gamma(t))=0$ for $t\in (0,1)$ (since $\gamma(t)$ is inside $\mathcal N$ for $t\in (0,1)$). Thus $F^{(n)}(w)=0$.
A "friendly" combinatoric problem
The question is equivalent to the following graph-theoretic question: Let $G$ be a $3$-regular graph, and let $e$ be an edge of $G$; then the number of Hamilton cycles containing $e$ is even. I'll leave this reformulation here as a hint. This is a special case of a more general theorem: it's enough to assume that each vertex has odd degree. That theorem is proved in the answer to this question.
Relation between the order of subgroups
This is quite clear once you show the formula $$|HK| = \frac{|H||K|}{|H \cap K|}$$ since then $$|H \cap K| = \frac{|H||K|}{|HK|} \ge \frac{|H||K|}{|G|} > \frac{|G|}{|G|} = 1,$$ using $|HK| \le |G|$ and the hypothesis $|H||K| > |G|$.
How to show $\mathbb{R}^2/\mathbb{Z}^2$ is homeomorphic to $\mathbb{R}/\mathbb{Z} \times \mathbb{R}/\mathbb{Z}$
Consider two spaces $A,B$ with equivalence relations $R,S$. Then $R\times S$ is an equivalence relation, and there is a natural map $p:(A\times B)/(R\times S)\to A/R\times B/S$. This is always a continuous bijection, but it might not be a homeomorphism. In your case, since the spaces are compact Hausdorff, it is. One can weaken the hypothesis, however; I wouldn't know to what extent.
equivalence of all norms implies normed space is finite-dimensional?
Hint: If $X$ is infinite dimensional there exists a discontinuous linear functional $f$ on it. Define $\|x\|'=\|x\|+|f(x)|$ to get a new norm which is not equivalent to the original norm.
Existential quantifier distribution over imply
You need to find either a structure where $\exists x(p(x)\to q(x))$ is true and $(\exists x\,p(x))\to(\exists x\,q(x))$ is false, or the other way around. It is very difficult to make $\exists x(p(x)\to q(x))$ false -- that would require every $x$ to satisfy $p(x)$ but not $q(x)$, which would make the RHS false too. So we need to make $\exists x(p(x)\to q(x))$ true instead. We can do that without disturbing the quantifiers on the right, simply by having one $x$ where neither $p(x)$ nor $q(x)$ hold. Then, however, $(\exists x\,p(x))\to(\exists x\,q(x))$ must be false, and the only way to do that is for $\exists x\,p(x)$ to be true and $\exists x\,q(x)$ false ... Can you complete the construction of a counterexample from here?
Stochastic Leibniz Rule
I have not been able to find bibliography on this, but I think the result is correct. I also think your proof is fine, you use properties of the Brownian motion very nicely. Here is just an alternative proof which I personally find clearer. Using Fubini (and assuming $f$ is nice enough): \begin{align} g_T - g_0 = & \int_0^T \big(f(s, T) - f(s, s) \big) dW_s + \int_0^T f(t, t) dW_t, \\ = & \int_0^T\int_s^T \frac{\partial f}{\partial t}(s, t) dt dW_s + \int_0^T f(t, t) dW_t, \\ = & \int_0^T \int_0^t \frac{\partial f}{\partial t}(s, t) dW_s dt + \int_0^T f(t, t) dW_t, \end{align} which is just the integral form of your SDE.
Solve $\cos(3x)= \cos(2x)$
Notice, we have $$\cos(3x)=\cos(2x)$$ $$3x=2n\pi\pm 2x$$ where $n$ is any integer. Now, we have the following solutions: $$3x=2n\pi+2x\implies \color{red}{x=2n\pi}$$ or $$3x=2n\pi-2x\implies \color{red}{x=\frac{2n\pi}{5}}$$
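A quick numerical confirmation that every $x=\frac{2n\pi}{5}$ (which includes the $x=2n\pi$ family, for $n$ a multiple of $5$) solves the equation:

```python
import math

for n in range(-10, 11):
    x = 2 * n * math.pi / 5
    assert abs(math.cos(3 * x) - math.cos(2 * x)) < 1e-12
print("cos(3x) = cos(2x) holds at x = 2n*pi/5")
```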
Monte Carlo estimation of a constant?
You need to pick points randomly in $D$. Then work out if each point lies inside or outside the circle (using the distance formula). Repeat and keep track of what fraction are inside the circle. You can then multiply this fraction by the area of $D$ (equal to $4$) to get an estimate of $\pi$. $A_D$ is the area of $D$; in your example $D$ is the $2\times2$ square, so its area is $4$.
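A minimal Python sketch of exactly this procedure, with $D=[-1,1]^2$:

```python
import random

N = 1_000_000
# Count points of D = [-1,1]^2 that land inside the unit circle.
inside = sum(1 for _ in range(N)
             if random.uniform(-1, 1)**2 + random.uniform(-1, 1)**2 <= 1)
print(4 * inside / N)   # fraction inside, times area of D (= 4), is ~pi
```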
Recreational number theory problem
Count them. For example, the multiples of $pq$ are $pq, 2pq, \ldots, rpq$.
If $f^2$ is Riemann Integrable is $f$ always Riemann Integrable?
Yes, such functions exist. Think of $$ f(x) = \begin{cases} 1 & \text{ if } x \in \mathbb Q \\ -1 & \text{ if } x \notin \mathbb Q. \end{cases} $$ It is well-known that $f$ is not Riemann-integrable over any interval $[a,b]$ (just compute the Riemann lower/upper sums). But $f^2 = 1$ is very integrable. =)
Can a scale invariant shape be drawn?
If you want it all to fit on a finite sheet of paper, it must be a single point.
Existence of a $P \in D$ such that $f(P) \int_D g = g(P) \int_D f.$
Try integrating the function $x\mapsto f(x)\int_Dg(y)\, dy - g(x)\int_Df(y)\, dy$ over $D$. Then you can use a variant of the mean value theorem for integrals, which can be stated like this: Let $D\subset\mathbb{R}^n$ be connected and compact, $f:D\rightarrow\mathbb{R}$ continuous. Then there exists a $\xi\in D$ such that $$\int_D f\, dx=f(\xi)\lambda(D).$$ Here $\lambda$ denotes the $n$-dimensional Lebesgue measure. Hint for this statement: $f$ attains its minimal and maximal values, and every value in between, since $D$ is connected. One thing is puzzling me though: my argument does not need $f,g\geq 0$...
Hessian matrix of $g\circ f$
I think your formula switched $f$ and $g$; I found: $(g\circ f)''(x)[u,v]=g''(f(x))[f'(x)[u], f'(x)[v]]+g'(f(x))\circ f''(x)[u,v]$. Write $f=(f^1,...,f^k)$; plugging in $e_i=u, e_j=v$ gives: $\frac{\partial^2 (g\circ f)}{\partial x_i\partial x_j}=\frac{\partial f^h}{\partial x_i}\frac{\partial f^\ell}{\partial x_j}\frac{\partial^2 g}{\partial x_h\partial x_{\ell}}+\frac{\partial g}{\partial x_m }\frac{\partial^2 f^m}{\partial x_i\partial x_j}$ where terms are summed over $1\le h,\ell, m\le k$.
Understanding orientation of simplicial complex
Ah...a consistent choice of surface normals. Well, Alexandroff's book is talking about generalized triangulated surfaces, which may not be embedded (or immersed) in 3-space. But for those that are, I think I can make a pretty decent argument. Let me assume that the surface is at least somewhat smooth, like a soccer ball that's been divided up into pentagons and hexagons: the polygons aren't quite "flat", but the underlying surface is smooth enough to have a normal vector at every point. So suppose you have such a smooth surface in 3-space, and you have a triangulation of the surface by slightly curved triangles. I'm going to assume that the triangles are "small", in the sense that if you drew the normal lines to all points of the surface within the triangle, they'd all point almost in the same direction (e.g., the angle between any two would be no more than, say, 30 degrees). If that's not the case, you can subdivide repeatedly until it is. Suppose that you have such a triangulated surface, and the triangles of the surface are oriented by listing the triangle vertices in a particular order, like $(A, B, C)$ (the letters indicate vertices). Then you can compute $v = (B - A) \times (C - A)$ (that's the cross product of vectors) to get a vector that's more or less "normal" to the triangle. If you look at any normal line at a point of the triangle, you can choose one of two directions; $v$ will be close to one of these and far from the other. Pick the one that $v$ is close to. So given an ordering of the vertices of a triangle on a smooth surface, I've shown how to pick a normal vector to the surface at each point of that triangle. Applying this to each triangle of your triangulation, you get a normal vector at every point of your surface. There's one question that remains, though: do these choices of normal vectors "agree" with each other at adjacent triangles, or might they "flip" as you cross an edge? Let's look at two adjacent small triangles, and let's assume (by further subdivision if necessary) that all normal lines to BOTH triangles lie within 30 degrees of each other. To make things easier, let's suppose that the two triangles are $ABC$ and $ABD$, and put down a coordinate plane $P$ whose origin is at $A$, whose $y = 1$ point is at $B$, and for which $C$ is in the left half-plane ($x < 0$). Then because the normal vectors for $ABC$ and $ABD$ are within 30 degrees of each other, it must be the case that the orthogonal projection $D'$ of $D$ onto the plane $P$ lies in the right half-plane. We can extend the coordinates on $P$ to coordinates on all of space, by drawing the $z$-axis in a right-handed way. When we do, we see that the vector $v = (B - A) \times (C - A)$ points in the positive $z$ direction. Recall that $ABC$ was oriented as $(A, B, C)$, so its boundary consists of oriented edges $(A, B), (B, C), (C, A)$. Since $ABD$ shares the edge $AB$ with this triangle, the orientation of $ABD$ must be $(A, D, B)$, so that its boundary is $(A, D), (D, B), (B, A)$, and the edges $(A,B)$ (from $ABC$) and $(B,A)$ (from $ABD$) cancel. The normal vector for $ABD$ is then computed as $w = (D-A) \times (B - A) = - (B-A) \times (D - A)$. Now $(D - A) \approx (D' - A)$, by the assumption of "normal lines not varying too much", so all we need to check is that $-(B-A) \times (D' - A)$ points in the right direction (i.e., along the positive-$z$ axis in our coordinate system). But that's straightforward if you just draw a picture.
Adding to factorized polynomials
The equation you have written is wrong. Substitute $3$ for $x$ on both sides and see what you get. One way to quickly verify such formulas is to check roots.
An "identity element" in a commutative rng in a module action.
Take any infinite collection of fields $\{F_i\mid i\in I\}$ and form the rng $R=\oplus_{i\in I} F_i$, by which I mean the elements of $\prod F_i$ of finite support with coordinatewise addition and multiplication. As is well known, $R$ lacks an identity. Now, you can take any nontrivial idempotent $e\in R$ at all, and $M=eR$ is a module upon which $e$ acts like the identity. This should provide an ample number of examples.
Inverse Laplace transform of exponentials and Incomplete gamma functions
\begin{equation} A(s)=\frac{1}{s} e^{s^\beta z} \Gamma(0, s^\beta z)= \frac{1}{s} \int_0^{+\infty} \frac{e^{-u}}{s^\beta z+u} du \end{equation} Now we can write: \begin{equation} e^{-u}= \sum_{k=0}^{+\infty} \frac{(-u)^k}{k!} \end{equation} Then \begin{equation} A(s)=\frac{1}{s} \int_0^{+\infty}\sum_{k=0}^{+\infty} \frac{(-u)^k}{k!(s^\beta z+u)} du=\frac{1}{s} \sum_{k=0}^{+\infty} \frac{1}{k!}\int_0^{+\infty} \frac{(-u)^k}{s^\beta z+u} du=\sum_{k=0}^{+\infty} \frac{(-s^\beta z)^k}{s k!} \int_{0}^{+\infty} \frac{x^k}{1+x} dx \end{equation} By using the inverse Laplace transform \begin{equation} a(t)=L^{-1}\{A(s)\}=\sum_{k=0}^{+\infty}\frac{t^{-k\beta} (-z)^k}{k! \Gamma(1-k\beta)} \int_{0}^{+\infty} \frac{x^k}{1+x} dx \end{equation} As Sary pointed out, in my previous calculation I made a mistake. The integral converges to $\frac{-\pi}{\sin{\pi k}}$ if and only if $k$ is a real number in the interval $(-1,0)$. In my previous computation I chose a rectangular contour such that $u= u_r+i u_i$ and I chose $u_i$ to include the pole. Unfortunately, when doing that, the integrals along the sides of the rectangle vanish only if we assume that $k$ ranges between $-1$ and $0$. Thank you Sary. \begin{equation} b(t)=L^{-1}\{B(s)\}=1-\frac{t^{-(1+\beta)}}{\Gamma(-\beta)} \star a(t) \end{equation} where $\star$ is the convolution. Now can we say more about the sum? I'm not familiar with the Hankel contour path so maybe this is the right way to find a compact solution.
Is it known whether the number of Proth-primes is infinite?
Yes, there are infinitely many Proth primes (see the discussion here). The essence of the proof is to use the fact that every arithmetic progression $a+bm$ with $\gcd(a,b)=1$ contains infinitely many primes.
Solving a special rational equation on a very small interval
As said in comments, let $x=\frac y{b_1}$ and $c_i=\frac {b_i}{b_1}$ to make $$\sum_{i=1}^n b_i \left( \frac{a_i}{1+b_i x}\right)^2=b_1\Bigg[\frac{a_1^2}{(1+y)^2}+\sum_{i=2}^n \frac{ a_i^2\, c_i}{(1+ c_i\,y)^2}\Bigg]$$ Now, to get rid of the vertical asymptote at $y=-1$, we shall multiply everything by $(1+y)^2$ and consider the function $$g(y)=b_1\Bigg[{a_1^2}+{(1+y)^2}\sum_{i=2}^n \frac{ a_i^2\, c_i}{(1+ c_i\,y)^2}\Bigg]-\phi(1+y)^2$$ for which $$g(0)=-\phi+b_1 \sum_{i=1}^n a_i^2\qquad \qquad g(-1)=b_1\,a_1^2 >0 \qquad \qquad g'(-1)=0$$ $$g''(-1)=-2\phi+2b_1 \sum_{i=2}^n \frac { a_i^2\, c_i } {(1-c_i)^2 }$$ Assuming $g(0)<0$, using Taylor around $y=-1$, we have $$g(y)=g(-1)+\frac 12 g''(-1) (y+1)^2+ O\left((y+1)^3\right)$$ and then a first estimate $$y_0=-1 +\sqrt {-\frac {b_1 a_1^2} {g''(-1)}}$$ Let us try using $n=5$, $b_i=10^{5-i}$, $a_i=p_{i+5}^2$ and $\phi=12345678987654321$. This gives as an estimate $$y_0=-1+\sqrt{\frac{175914852251400405000}{15208068862859378242920969041} }\sim -0.999892$$ while the exact solution, obtained using Newton's method, is $-0.999848$. Starting with this guess, the Newton iterates would be $$\left( \begin{array}{cc} n & y_n \\ 0 & \color{red}{-0.99989}244905795162394 \\ 1 & \color{red}{-0.9998}3867358692746307 \\ 2 & \color{red}{-0.999847}63616543148912 \\ 3 & \color{red}{-0.999847899}77068160754 \\ 4 & \color{red}{-0.999847899999109}03399 \\ 5 & \color{red}{-0.99984789999910920552} \end{array} \right)$$ Very immodestly, let me point out that this method bears my name. Edit If you are coding, for complete safety, combine the bisection and Newton methods. If you go to this place, on page $359$, you will find subroutine RTSAFE (here and here) which does exactly that. It is very robust. I apologize for giving you a reference to Fortran coding. If you can access the books of Numerical Recipes, you will find the equivalent in C and C++.
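For readers who would rather not chase the Fortran, here is a rough Python sketch of the same idea (my own transcription of the bisection-safeguarded Newton strategy, not the book's code):

```python
def rtsafe(f, df, lo, hi, tol=1e-15, maxit=100):
    """Newton's method safeguarded by bisection on a bracketing
    interval [lo, hi] with f(lo)*f(hi) < 0, in the spirit of RTSAFE."""
    if f(lo) * f(hi) > 0:
        raise ValueError("root not bracketed")
    x = 0.5 * (lo + hi)
    for _ in range(maxit):
        fx, dfx = f(x), df(x)
        step = fx / dfx if dfx != 0 else hi - lo
        if lo < x - step < hi:
            x_new = x - step            # accept the Newton step
        else:
            x_new = 0.5 * (lo + hi)     # fall back to bisection
        if f(lo) * f(x_new) < 0:        # keep the sign change bracketed
            hi = x_new
        else:
            lo = x_new
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```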
Finding Histogram Mean
Use the midpoint of each bar, not the left of each bar. In other words, use $19.5$ rather than $19$; use $20.5$ rather than $20$, etc. Note that this makes sense because the histogram says that there are $20$ observations between $19$ and $20$, so we assume these observations are evenly distributed throughout the class $19 < x < 20$, and so the average of these $20$ observations is $19.5.$ Also, take note that we assume these observations are evenly distributed throughout the class, which is an assumption, i.e. it may be false. But this is what we must do with grouped data: we must make assumptions in order to calculate any statistics (e.g. the mean or interquartile range) from the data.
a problem on prime and maximal ideal from gallian contemporary algebra.
You may want to note that if you consider the surjective morphism $$ \begin{align} \varphi :\ &\Bbb{Z}[x] \to \Bbb{Z}_{2} \\&f(x) \mapsto [f(0)], \end{align}$$ (where I denote by $[a]$ the class of $a \in \Bbb{Z}$ in $\Bbb{Z}_{2}$), then $I = \ker(\varphi)$, so that $\Bbb{Z}[x] / I \cong \Bbb{Z}_{2}$. Since $\Bbb{Z}_{2}$ is a field, you get that $I$ is maximal, thus prime.
Proving a limit is less than or equal to 1
Assume $x>1$; this necessarily implies that, for $n$ big enough, $x_n>1$. So if $x_n\leq 1$ for all $n$, then $x\leq 1$.
Need help formalising simple propositional logic sentences
I can understand that the use of 'cannot' is a bit confusing ... it seems to be stronger than just saying that David and Emily are not both happy: they may not both be happy now, sure, but to say that they cannot both be happy seems to say that they can't ever both be happy, i.e. that it is impossible for both to be happy. In fact, in modal logic you can express these kinds of stronger claims, where: $\square P$ means "It is necessary that P is true" $\Diamond P$ means "It is possible that P is true" Using those symbols, translating "David and Emily cannot both be happy" can be done as: $\neg \Diamond (r \land p)$ or, equivalently: $\square \neg (r \land p)$ But, I assume you are currently not doing any modal logic at all, since you are just starting with propositional logic. As such, you should really just treat the sentence as "David and Emily are not both happy" Good for you for noticing that those two sentences are not quite the same thing though!!
Is $\|T^2\|=\|T S\|$
Counterexample for the statement: Let's look at a $2 \times 2$ matrix example: define $$ S = \pmatrix{1&1\\0&1}, \quad T= \pmatrix{\phi & 0\\0&0} $$ where $\phi := \frac 12(1 + \sqrt5)$. Note that $\|S\| = \|T\| = \phi$, where $\|M\| = \sigma_1(M)$. However: $\|S^2\| = \sqrt{3 + 2\sqrt{2}} = 1 + \sqrt 2$, $\|ST\| = \phi$, $\|TS\| = \sqrt{2}\phi$, $\|T^2\| = \phi^2 = 1 + \phi$.
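The claimed norms are easy to confirm numerically, since the spectral norm is the largest singular value:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
S = np.array([[1.0, 1.0], [0.0, 1.0]])
T = np.array([[phi, 0.0], [0.0, 0.0]])

norm = lambda M: np.linalg.norm(M, 2)         # largest singular value
print(norm(S), norm(T))                       # both phi ~ 1.618
print(norm(S @ S), 1 + np.sqrt(2))            # ~ 2.414
print(norm(S @ T), norm(T @ S), norm(T @ T))  # phi, sqrt(2)*phi, phi**2
```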
Solving Inequalities similar to Nesbitt's
Enforcing the substitution $A=b+c, B=a+c, C=a+b$ the problem boils down to finding the minimum of $$\begin{eqnarray*}&& \frac{B+C-A}{2A}+\frac{A+C-B}{2B}+\frac{A+B-C}{C}\\&=&-2+\left(\frac{B}{2A}+\frac{A}{2B}\right)+\left(\frac{C}{2A}+\frac{A}{C}\right)+\left(\frac{B}{C}+\frac{C}{2B}\right)\end{eqnarray*}$$ By setting $\frac{A}{B}=x$ and $\frac{B}{C}=y$, that boils down to studying $$ f(x,y)= -2+\frac{1}{2x}+\frac{x}{2}+\frac{1}{2xy}+xy+y+\frac{1}{2y} $$ over $\mathbb{R}^+\times\mathbb{R}^+$. Such function has a unique stationary point at $(x,y)=\left(1,\frac{1}{\sqrt{2}}\right)$, hence the minimum of our expression is achieved at $(A,B,C)=(1,1,\sqrt{2})$ and it equals $\color{red}{2\sqrt{2}-1}$.
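A numerical corroboration of the stationary point (scipy's Nelder-Mead, started near the minimizer):

```python
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return -2 + 1/(2*x) + x/2 + 1/(2*x*y) + x*y + y + 1/(2*y)

res = minimize(f, x0=[1.0, 1.0], method='Nelder-Mead')
print(res.x)                       # ~ (1, 1/sqrt(2)) ~ (1.0, 0.7071)
print(res.fun, 2*np.sqrt(2) - 1)   # both ~ 1.8284
```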
In how many different ways can $7$ identical objects be distributed between $3$ ordered boxes?
The solutions are correct, and the numbers you got are not the corresponding $^nP_r$ values, since in general $^nP_r\ge\ ^nC_r$. It seems like you got $$9\times2=18$$ and $$6\times2=12$$ instead of the correct answers. Note that $$\binom{n}{r}=\frac{n!}{r!(n-r)!}$$
Volume of bounded regions rotated about the x axis
You can compute that volume by the disk method:$$\int_1^2\pi f^2(x)\,\mathrm dx=\frac{431}5\pi.$$
Potential Energy in Geometric Mechanics
The phrasing "the potential energy is given by" is very strange. I don't think $V$ is meant to be extended: I believe $\nabla_{\dot q} \dot q = -\nabla V$ is to be interpreted as the equation of motion for a particle with position $q(t)$ at time $t$. Thus all three relevant objects live in $TQ$: $\dot q(t) \in T_{q(t)} Q \subset TQ$ is the velocity at time $t$ $\nabla_{\dot q(t)} q \in T_{q(t)} Q$ is the covariant acceleration at time $t$ $\nabla V|_{q(t)} \in T_{q(t)} Q$ is the gradient vector of the function $V:Q \to \mathbb R$ at the position $q(t)$.
Difference of two regularly open sets, if non-empty, has non-empty interior
The answer is yes. This holds in fact for any topological space (not just metric spaces). First, note that we must have $\bar{A}\subsetneq \bar{B}$, for if $\bar{A}= \bar{B}$, then $A=\mathrm{int}(\bar{A})=\mathrm{int}(\bar{B})=B$, which contradicts $A\subsetneq B$. Now $U:=\bar{A}^c$ (complement of $\bar{A}$) is a non-empty open set, which has non-empty intersection with $\bar{B}$. It follows that also $U\cap B\neq \emptyset$ since $U$ is open. Now $U\cap B\subseteq B\setminus A$ is open, hence $B\setminus A$ has non-empty interior.
The infinite sum of bounded sequence is bound, is the sequence convergent?
Suppose that $\lim_{k\rightarrow\infty}\| x_k - x_{k+1} \| ^2 = \varepsilon > 0$. Fix any $\eta$ with $0 < \eta < \varepsilon$; then $\|x_i - x_{i+1}\|^2 \geq \eta$ for all sufficiently large $i$, so the tail of $\sum_{i = 1}^\infty \|x_i - x_{i+1} \|^2$ dominates a divergent sum of $\eta$'s and the series diverges. But this is a contradiction, as you've already shown that the sum is bounded.
Area bounded by hyperbola
The area below the curve $f(x)=1/x$ from $0$ to $x_0$ is always infinite, because the area below the curve on the interval $[a,x_0]$ is given by: $$\ln(x_0)-\ln(a)$$ Now, what happens when $a \to 0$? You can apply a similar procedure to calculate the area under $y=\ln(x)$; if you are not getting an infinite value again, it's because the function doesn't "explode" as "fast" as the hyperbola, making the analogous limit finite. You may want to compare this case with the following: $$ \sum_{n=1}^{\infty} \frac{1}{n} = \infty $$ $$ \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$ Here you have the sums of two infinite sequences whose terms both converge towards zero, but the second does so faster, making the sum finite. Going back to integrals, check the family of functions $f_p(x)=\frac{1}{x^p}$: you'll see that for some values of $p$ the integral of $f_p$ between, let's say, $0$ and $1$ has a finite value, while for others it doesn't.
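sympy makes the comparison within the family $f_p(x)=x^{-p}$ concrete:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
for p in [sp.Rational(1, 2), 1, 2]:
    print(p, sp.integrate(x**-p, (x, 0, 1)))
# p = 1/2 -> 2 (finite); p = 1 -> oo; p = 2 -> oo
```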
Solving a limit using only precalculus algebraic manipulations.
Put $$a=x^2-1=(x-1)(x+1)$$ and $$b=x^2-4x+3=(x-1)(x-3).$$ Observe that when $ x \to 1 $, both $ a $ and $b $ go to zero, so $$\lim_{x\to 1}\frac{\tan(b)}{b}\frac{a}{\sin(a)}\frac{b}{a}$$ $$=\lim_{x\to 1}\frac ba=\lim_{x\to 1}\frac{x-3}{x+1}=-1$$
Vector-valued forms inside the first jet bundle
An element of $J_1(E)$ can be thought of simply as the value and first derivative of a section of $E$ at a single point. When $E$ is a trivial bundle $M \times V$ so that sections of $E$ are just vector-valued functions $M \to V,$ this is made very explicit by the canonical isomorphism \begin{eqnarray}J_1(E) &=& E \oplus (E \otimes \Lambda^1) \\j^1_x s &\mapsto&(s(x), ds(x)). \end{eqnarray} For more general vector bundles, to make this same explicit identification we need a way of differentiating sections. If we fix a linear connection $\nabla$ on $E$ then we can simply use $j_x^1s \mapsto (s(x),\nabla s(x)).$ You can check that this is an isomorphism by using local coordinates, where jets are simply Taylor polynomials. In general, this isomorphism clearly depends on our choice of connection. When restricted to the jets with target zero (i.e. $j^1_xs$ for sections with $s(x)=0$), however, it does not: in local coordinates we have $(\nabla s)_i^\alpha = \partial_i s^\alpha + \omega_{\beta i}^\alpha s^\beta$ where $\omega_{\beta i}^\alpha$ are the connection coefficients of $\nabla;$ so at a point where $s=0$ the formula is simply $(\nabla s)^\alpha_i = \partial_i s^\alpha.$ Thus we have a canonical inclusion $E \otimes \Lambda^1\subset J_1(E)$ given by restricting the isomorphism discussed above to $\{0\} \times (E \otimes \Lambda^1).$ Composing with $p$ yields an inclusion $$p^* (E \otimes \Lambda^1) \subset p^*J_1(E).$$ I haven't checked that this agrees with the formulae given in the paper - it's possible that there's some other way of doing this that I've missed. To check that this really is what the authors mean, I recommend that you carefully verify the equation immediately following the claim of the inclusion.
how to work out a very confusing integral
We have an integral of a product of a polynomial and a sine/cosine. That is something we can fruitfully apply integration by parts to, the trigonometric functions switch between sine and cosine, and differentiating reduces the degree of the polynomial, until after finitely many steps, we reach a constant and are left with a pure integral over a sine or cosine. Let's ignore the constant factor, and just integrate: $$\begin{align} \int_0^{2\pi} (x-\pi)^2\cos (nx)\,dx &= \left[(x-\pi)^2\frac{\sin (nx)}{n} \right]_0^{2\pi} - \frac2n \int_0^{2\pi} (x-\pi)\sin (nx)\,dx\\ &= \frac2n \int_0^{2\pi} (x-\pi)(-\sin (nx))\,dx\\ &= 2\left[(x-\pi)\frac{\cos (nx)}{n^2}\right]_0^{2\pi} - \frac{2}{n^2}\int_0^{2\pi} \cos (nx)\,dx\\ &= 2\left[\pi\frac{\cos (2n\pi)}{n^2} - (-\pi)\frac{\cos 0}{n^2}\right] - \frac{2}{n^2}\int_0^{2\pi} \cos (nx)\,dx\\ &= \frac{4\pi}{n^2} - \frac{2}{n^2}\int_0^{2\pi}\cos (nx)\,dx\\ &= \frac{4\pi}{n^2} - \left[\frac{2}{n^3}\sin (nx)\right]_0^{2\pi}\\ &= \frac{4\pi}{n^2}. \end{align}$$
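A sympy cross-check of the final value, declaring $n$ a positive integer so that $\sin(2\pi n)=0$ and $\cos(2\pi n)=1$ simplify automatically:

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)
I = sp.integrate((x - sp.pi)**2 * sp.cos(n*x), (x, 0, 2*sp.pi))
print(sp.simplify(I))    # 4*pi/n**2
```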
Show if the region $u=\frac{1}{3}$ is (or not) a limit cycle.
As the constant function $u=\frac13$ satisfies this implicit ODE, this gives a closed orbit. As the sign of the derivative in the explicit form $\dfrac{du}{dϕ}=\pm (u-\frac13)\sqrt{u+\frac16}$ can take both variants, you have as many solutions moving away from that orbit as you have solutions moving toward that orbit.
Application Closed Graph Theorem to Cauchy problem
Consider the map $D\colon F\to E$ given by $$D(u) = \left(u^{(n)} + \sum_{i=0}^{n-1} a_i\cdot u^{(i)}, (u^{(i)}(t_0))\right).$$ It is elementary to verify that $D$ is continuous, and then you can note that $D = T^{-1}$ to conclude. Alternatively, if you want to directly use the closed graph theorem, consider sequences $\bigl((f_k,w_k)\bigr)$ and $(u_k)$ with $u_k = T(f_k,w_k)$ such that $(f_k,w_k) \to (f,w)$, and $u_k\to u$. Then $$u^{(i)}(t_0) = \lim_{k\to\infty} u_k^{(i)}(t_0) = \lim_{k\to\infty} w_k^i = w^i$$ for $0 \leqslant i < n$ and $$u^{(n)} + \sum_{i=0}^{n-1} a_i\cdot u^{(i)} = \lim_{k\to\infty} u_k^{(n)} + \sum_{i=0}^{n-1} a_i\cdot u_k^{(i)} = \lim_{k\to\infty} f_k = f,$$ so $u = T(f,w)$, which shows that the graph of $T$ is closed.
Is $\mathbb{C}[x,y]/(y)$ always isomorphic to $\mathbb{C}[x]$
Consider the map $\mathbb{C}[x,y]\rightarrow \mathbb{C}[x]$ given by $f(x,y)\mapsto f(x,0)$. See that this is a ring homomorphism. Compute its kernel and image.
When is the linear function $\det(A)X-\operatorname{tr}(X)A$ diagonalizable?
To begin with, it's the same as asking when $G=\operatorname{tr}(\bullet)\cdot A=-F+\det(A)\operatorname{id}$ is diagonalisable. If $A=0$, then $F=G=0$, so let's focus on $A\ne0$. Now: $0\ne X$ is an eigenvector of $\operatorname{tr}(\bullet)A$ if and only if $$\operatorname{tr}(X)A=\lambda X$$ in particular, $A$ is an eigenvector of $G$ of eigenvalue $\operatorname{tr}A$. If $X$ and $A$ are linearly independent, then $\lambda X=\operatorname{tr}(X)A$ is satisfied if and only if $\lambda=\operatorname{tr}X=0$. So, we have three cases: if $\operatorname{tr}(A)\ne 0$, then $G$ (hence $F$) is diagonalisable, with eigenbasis $\{A, X_1,\cdots, X_{n^2-1}\}$ for any $X_1,\cdots, X_{n^2-1}$ basis of $\ker\operatorname{tr}$. if $0\ne A$ and $\operatorname{tr}A=0$, then $G$ (hence $F$) is not diagonalisable, because all its eigenvectors must be in $\ker\operatorname{tr}$. if $A=0$, then $G=F=0$ which are diagonalisable. Notice that you could have done the same had there been any functional $\phi\in\mathcal M_n(\Bbb K)^*\setminus\{0\}$ instead of $\operatorname{tr}$.
Divergence Theorem and flux
Recall that for a surface with parameterization $\vec{x}(\phi,\theta)$ we can compute $\vec{N}$ as $-\partial_\phi\vec{x} \times \partial_\theta\vec{x}$. Parameterize the surface by $\theta\in [0,2\pi)$ and $\phi\in [0,\frac{\pi}{2})$ as $$x=a\cos(\theta)\sin(\phi)\\y=a\sin(\theta)\sin(\phi)\\z=\frac{a}{2}\cos(\phi)$$ $$\partial_{\theta }\vec{x}=\begin{pmatrix}-a\sin \left(\theta \right)\sin \left(\phi \right)\\ a\cos \left(\theta \right)\sin \left(\phi \right)\\ 0\end{pmatrix},\:\partial _{\phi }\vec{x}=\begin{pmatrix}a\cos \left(\theta \right)\cos \left(\phi \right)\\ a\sin \left(\theta \right)\cos \left(\phi \right)\\ -\frac{a}{2}\sin\left(\phi \right)\end{pmatrix}$$ $$\partial_\phi\vec{x} \times \partial_\theta\vec{x}=\begin{pmatrix}\frac{-a^2}{2}\cos(\theta)\sin^2(\phi) \\ \frac{-a^2}{2}\sin(\theta)\sin^2(\phi)\\ -a^2\sin(\phi)\cos(\phi)\end{pmatrix} $$ $$\vec{A}=\begin{pmatrix}\frac{a^2}{2}\cos \left(\phi \right)\sin \left(\phi \right)\cos \left(\theta \right)\\ a^2\cos ^2\left(\theta \right)\sin ^2\left(\phi \right)\\ a^3\cos \left(\theta \right)\sin \left(\theta \right)\sin^2 \left(\phi \right)\cos\left(\phi \right)+\frac{a^2}{4}\cos ^2\left(\phi \right)+3\:\:\end{pmatrix}$$ $\vec{A}\cdot \vec{N}dS =(\frac{a^4}{4}\cos(\phi)\cos^2(\theta)\sin^3(\phi)+\frac{a^4}{2}\sin(\theta)\sin^4(\phi)\cos^2(\theta)+a^5\cos(\theta)\sin(\theta)\sin^3(\phi)\cos^2(\phi)+\frac{a^4}{4}\cos^3(\phi)\sin(\phi)+3a^2\sin(\phi)\cos(\phi))d\phi d\theta $. This looks bad, I know! But realize we're integrating from $0$ to $2\pi$ in $\theta$ so we can get rid of all the terms with odd powers of $\cos(\theta)$ or $\sin(\theta)$, and the integrals are easy: $$\int_0^{2\pi}\int_0^\frac{\pi}{2}\left(\frac{a^4}{4}\cos^2(\theta)\cos(\phi)\sin^3(\phi)+\frac{a^4}{4}\cos^3(\phi)\sin(\phi)+3a^2\cos(\phi)\sin(\phi)\right)d\phi d\theta$$ $$=\frac{a^4\pi}{16} + \frac{a^4\pi}{8} + 3a^2\pi$$ Again, this is only the flux through the upper surface of the spheroid. The bottom surface (at $z=0$) has a negative flux of $\int\int_{x^2+y^2\le a^2} (\vec A|_{z=0}\cdot(-\hat{z}))\,dA=-3\pi a^2$.
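The last double integral is easy to hand off to sympy as a check:

```python
import sympy as sp

a, theta, phi = sp.symbols('a theta phi', positive=True)
integrand = (a**4/4 * sp.cos(theta)**2 * sp.cos(phi) * sp.sin(phi)**3
             + a**4/4 * sp.cos(phi)**3 * sp.sin(phi)
             + 3*a**2 * sp.cos(phi) * sp.sin(phi))
flux = sp.integrate(integrand, (phi, 0, sp.pi/2), (theta, 0, 2*sp.pi))
print(sp.expand(flux))
# 3*pi*a**4/16 + 3*pi*a**2, i.e. a^4*pi/16 + a^4*pi/8 + 3*a^2*pi
```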
What's the name of a morphism the morphism category of the category of categories?
$\require{AMScd}$ I think you are confusing two distinct notions (see also the comments). The first notion is the arrow category, defined as follows. Let $\mathscr C$ be a category. The category $\operatorname{Arr}(\mathscr C)$ (also denoted $\mathscr C^{\mathbf 2}$ or $\mathscr C^\rightarrow$) is the category whose objects are the arrows $f$ of $\mathscr C$, whose morphisms $(f\colon a \to b) \to (g \colon c \to d)$ are the commutative squares $$ \begin{CD} a @>f>> b \\ @VVV @VVV \\ c @>>g> d , \end{CD}$$ and whose composition is the concatenation of such squares. You can of course apply that definition with $\mathscr C = \mathsf{Cat}$. The second notion is the enrichment of $\mathsf {Cat}$ over itself. That is, the category $\mathsf{Cat}$ has the property that, for any two objects $A$ and $B$, the hom-set $\hom_{\mathsf{Cat}}(A,B)$ actually carries a category structure in such a way that the composition $$ \hom_{\mathsf{Cat}}(B,C) \times \hom_{\mathsf{Cat}}(A,B) \to \hom_{\mathsf{Cat}}(A,C) $$ is a functor. The short way to say it is: $\mathsf{Cat}$ is enriched over the (cartesian closed) monoidal category $(\mathsf{Cat},\times,\mathbf 1)$ (where $\mathbf 1$ is the final category). The two notions are very distinct and not to be confused!
Examples of sets whose cardinalities are $\aleph_{n}$, or any large cardinal. (not assuming GCH)
There are very few examples where you directly prove in ZFC that a certain set must have size $\aleph_n$. This is because most of the sets we construct are defined in terms of power sets. This means that we can compute the size of these sets in terms of ℶ numbers pretty easily, but we can't compute them in terms of ℵ numbers. The difficulty is related to the unprovability of the continuum hypothesis. It turns out that ZFC can say very, very little about the cardinalities of ℶ numbers. One way to get sets of a fixed cardinality is to talk directly about well orderings. For example, $\aleph_1$ is exactly the set of order types of well orderings of $\omega$ (regardless of what $\beth_1$ is). For large cardinals, there is no way to explicitly compute their cardinality. For example, any inaccessible cardinal number $\kappa$ will have the property that $|\kappa| = \aleph_\kappa$, so you will not be able to make progress by trying to compute its ℵ number.
Change of coordinates and limits of integration
In spherical coordinates, $z = \rho\cos\theta$, $x = \rho\sin\theta\cos\phi$, $y = \rho\sin\theta\sin\phi$. Equating $z^2 = 4 - x^2-y^2$ and $z^2 = x^2+y^2$ gives $x^2+y^2 = 2$. Substituting back, you get $z = \sqrt{2}$, and $\rho^2 = x^2+y^2 + z^2 = 4 \implies \rho =2$, so $\rho$ runs from $0$ to $2$. From $\rho \cos\theta =\sqrt{2}$ with $\rho=2$ we get $\cos\theta = \frac{1}{\sqrt{2}}$, i.e. $\theta = \frac{\pi}{4}$. Thus $\theta$ runs from $0$ to $\frac{\pi}{4}$, and $\phi$ runs from $0$ to $2\pi$.
A twisted version of the modular curve $X_1(N)$
Aha, this is one of my favourites in the "subtle issues that people often overlook" category :-) The point is that this modular curve, let's call it $Y_\mu(N)$ (classifying pairs $(E, \mu_N \hookrightarrow E)$) is canonically isomorphic to $Y_1(N)$ (classifying embeddings of the constant group scheme): given $\alpha: \mu_N \hookrightarrow E$ you can consider the Cartier dual of $\alpha$ as a map $\alpha^\vee: (\mathbf{Z}/N) \hookrightarrow E'$, where $E' = E / \operatorname{image}(\mu)$. So $(E', \alpha^\vee)$ is a point of $Y_1(N)$. Similarly, there's a map going the other way. So there are canonical maps $$Y_1(N) \to Y_\mu(N)$$ and $$Y_\mu(N) \to Y_1(N)$$ whose composite is the identity; and this works over pretty much any base ring where you can make sense of the objects involved. However, the above maps don't commute with the standard complex uniformisations of both $Y_1(N)$ and $Y_\mu(N)$ by the upper half-plane $\mathcal{H}$ modulo $\Gamma_1(N)$. (Actually, the map between them is precisely the Atkin--Lehner involution on the complex points, $z \mapsto -1/Nz$.) So these curves are "the same" as abstract curves; e.g. if $Y_1(N)$ has $\mathbf{Q}$-points, then so does $Y_\mu(N)$, but they aren't given by the images of the same points of $\mathcal{H}$. On the other hand, if you work over a $\mathbf{Z}[1/N, \zeta_N]$-algebra (such as $\mathbf{C}$), you can identify $\mu_N$ with $(\mathbf{Z}/N)$ as group schemes; this gives you a second isomorphism between the two curves, which does commute with the complex uniformisation -- but is not defined over $\mathbf{Z}[1/N]$. So these two objects are "nearly the same" in two different but incompatible ways: you can have an isomorphism between them that respects the Galois action, or one that respects the complex uniformisation, but not both. This causes no end of headaches (because a lot of authors are so sure they know which is the 'right' model of $Y_1(N)$ that they don't even bother to specify which one they're using).
proving that for every integer $x$, if $x$ is odd, then $x + 1$ is even (induction)
You should try instead assuming the theorem is true for all $n \leq k$. (instead of just $n = k$). Now, consider $k+1$. If $k+1$ is even, there is nothing to prove. If $k + 1$ is odd, we need to show $k + 2$ is even. Use your induction hypothesis here: in particular, you know that for $n = k-1$, if it is odd (can you show this?) then $n + 1 = k$ is even. What can you now conclude about $k + 2$? Breaking this down into smaller steps. Do you understand what the difference is in the induction hypothesis? (assuming for $n \leq k$ rather than $n = k$?) Now we have to prove the theorem for the next larger number, $n = k + 1$. Either it is even or odd. What should we do if it is even? If it is odd, we need to show the next bigger number, $n + 1 = k + 2$ is even. At this point, what does the induction hypothesis say about $k -1$? (i.e. fill in the blanks: "if $k - 1$ is odd, then ____") Is $k - 1$ odd? (otherwise the induction hypothesis doesn't say anything much). What does this tell you about $k$? (in particular, is it even/odd?) What does this tell you about $k + 2$? (the thing we're trying to prove something about).
Closed form expression for sequence of values created by differently signed series
Input: $m,n$. Algorithm: Solve $a_k:=ma_{k-1}-2$, $a_1=2$; and $b_k:=mb_{k-1}-1$, $b_1=1$. Output: The set $S_{m,n}$ is given by the fractal sequence $$b_n+a_1+a_2+a_1+a_3+a_1+\cdots+a_{n-1}+\cdots+a_1+a_2+a_1$$ Example: For $m=5$, $a_k=\frac{1}{2}(3\cdot 5^{k-1}+1)=(2,8,38,188,938,\ldots)$. $b_k=\frac{1}{4}(3\cdot 5^{k-1}+1)=(1, 4, 19, 94, 469, 2344,\ldots)$ $S_{5,5}$ is given by $$469+2+8+2+38+2+8+2+188+2+8+2+38+2+8+2$$ $$S_{5,5}=\{469, 471, 479, 481, 519, 521, 529, 531, 719, 721, 729, 731, 769, 771, 779, 781\}$$
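Here is a Python sketch of the algorithm, reproducing the $S_{5,5}$ example; the increment at step $i$ is $a_{\nu_2(i)+1}$, where $\nu_2(i)$ is the 2-adic valuation of $i$, which is what produces the fractal pattern above:

```python
def S(m, n):
    """Generate S_{m,n}: start at b_n and add ruler-sequence increments
    a_{v2(i)+1}, where v2(i) is the 2-adic valuation of i."""
    a = [None, 2]                     # a_1 = 2, a_k = m*a_{k-1} - 2
    b = 1                             # b_1 = 1, b_k = m*b_{k-1} - 1
    for _ in range(2, n + 1):
        a.append(m * a[-1] - 2)
        b = m * b - 1
    out, s = [b], b
    for i in range(1, 2**(n - 1)):
        v = (i & -i).bit_length() - 1  # 2-adic valuation of i
        s += a[v + 1]
        out.append(s)
    return out

print(S(5, 5))   # [469, 471, 479, ..., 781], 16 values as in the example
```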
How do we distinguish between (logical) axioms and other assumptions of a proof?
As far as this concrete definition of "consequence" goes, logical axioms and members of $\Gamma$ indeed play the same role. The reason for distinguishing between them is that there are other contexts where we're only interested in $\Gamma$ and where the logical axioms are considered to be an "internal detail" in the definition of "consequence". As one important example, one can show that your definition is equivalent to this "semantic" definition of consequence: $\mathscr C$ is a consequence of $\Gamma$ if and only if every truth assignment that makes every element of $\Gamma$ true, also makes $\mathscr C$ true. This correspondence would not work if we required all of the necessary logical axioms to be part of $\Gamma$. And even if we just look at "syntactic" definitions there are different proof systems for the propositional calculus that happen to produce the same consequence relation as the Hilbert system that Mendelson is presenting. These proof systems have their own logical axioms (or none at all), so again the different systems only give the same correspondence if we don't insist of having the logical axioms in $\Gamma$. An important case of this is when $\Gamma$ is the empty set. Then you only have the logical axioms to work with, but you can still prove that some formulas are consequences of the empty set, such as $$(A\to B)\lor(A\to C)\to (A\to (B\lor C))$$ You will need several axioms to prove this, and neither of them will be elements of $\Gamma$ in this case. On the other hand, we can also let $\Gamma_2$ consist of $A$ and $D$ and $(A\to B)\lor(A\to C)$, and show that $B\lor C$ is a consequence of $\Gamma_2$. The proof will need to use both some logical axioms and some of the assumptions in $\Gamma_2$, but those assumptions are certainly not axioms. When we distinguish between logical axioms and other assumptions, the idea is that the logical axioms are something that are part of the logic -- that is, they're really there to fix what the logical symbols like $\land$ and $\lor$ and $\to$ mean -- whereas the other assumptions are things you select from case to case when you apply the logic to a particular reasoning. Then you can assume you already know how the logical symbols work, and the additional assumptions can just use them to speak about how the non-logical symbols (which at this level are just the propositional letters) relate to each other. Somewhat confusingly, the "other assumptions" are often also called "axioms" when we view things from a different level of abstraction. One can avoid some of the confusion by calling them "non-logical axioms". If you select a particular $\Gamma$ containing non-logical axioms, these ought to have the same logical consequences no matter which proof system for (classical) propositional calculus your select. From a birds-eye perspective you can then ignore the details of the proof system.
Prove that $C_{A_7}((567))\cong H \times A_4$
An alternative argument would be to count the size of the conjugacy class. Because $A_7$ is triply transitive, all the 3-cycles fall into the same conjugacy class. How many 3-cycles are there? The set of three numbers from $\{1,2,3,4,5,6,7\}$ can be selected in ${7\choose 3}=35$ different ways, and using those three numbers we get two 3-cycles - $(abc)$ and $(acb)$. Hence there are $2\cdot35=70$ 3-cycles altogether. But the conjugates of a given element of a group are in bijective correspondence with the cosets of the centralizer of the said element, so this implies that $C=C_{A_7}((567))$ is of index $70$ in $A_7$. Therefore $$|C|=\frac{|A_7|}{70}=\frac{2520}{70}=36.$$ The group you have described is clearly contained in $C$, and it has $36$ elements. Therefore it must be the centralizer.
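If you want to double-check the count by machine, sympy's permutation groups will do it (note the 0-indexing, so $(5\,6\,7)$ becomes the cycle on $\{4,5,6\}$):

```python
from sympy.combinatorics import Permutation
from sympy.combinatorics.named_groups import AlternatingGroup

G = AlternatingGroup(7)
g = Permutation([0, 1, 2, 3, 5, 6, 4])   # the 3-cycle (5 6 7), 0-indexed
C = G.centralizer(g)
print(C.order())                          # 36 = |A_7| / 70
```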
Newton method iteration
This is not an answer but it is too long for a comment. Solving nonlinear equations using Newton-Raphson can be quite a long iterative process depending on the quality of the initial guesses. When you face systems like the one in your post, you can easily reduce the dimension of the problem, eliminating as many variables as you can from linear or polynomial expressions. For example, in your case $$x^2 y+x^2-x z+6=0\tag 1$$ $$e^x+e^y-z=0 \tag 2$$ $$x^2-2 x z-4=0\tag 3$$ Use $(3)$ to get $$z=\frac{x^2-4}{2 x}\tag 4$$ Plug this in $(1)$ to get $$y=-\frac{x^2+16}{2 x^2}\tag 5$$ So, only equation $(2)$ remains, which writes $$e^{-(\frac{8}{x^2}+\frac{1}{2})}+e^x-\frac{x}{2}+\frac{2}{x}=0$$ Assuming $x\neq 0$, multiply by $x$ and then look for the zero of $$g(x)=x\Big(e^{-(\frac{8}{x^2}+\frac{1}{2})}+e^x\Big)-\frac{x^2}2+2=0$$ which is much better conditioned. Using Newton's method with $x_0=-1$ as in your problem, the successive iterates will be $$x_1=-2.12801532799644$$ $$x_2=-1.82837114694732$$ $$x_3=-1.79536109066658$$ $$x_4=-1.79493929006255$$ $$x_5=-1.79493922117722$$ which is the solution to fifteen significant figures. Now, reuse $(4)$ and $(5)$ to get $y=-2.9830787435267$ and $z=0.21677424591829$.
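The reduced one-dimensional problem is easy to reproduce with scipy, starting from the same $x_0=-1$:

```python
import numpy as np
from scipy.optimize import newton

def g(x):
    return x * (np.exp(-(8 / x**2 + 0.5)) + np.exp(x)) - x**2 / 2 + 2

x = newton(g, -1.0)              # secant iteration (no derivative supplied)
y = -(x**2 + 16) / (2 * x**2)    # back-substitute (5)
z = (x**2 - 4) / (2 * x)         # back-substitute (4)
print(x, y, z)   # ~ -1.794939, -2.983079, 0.216774
```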
Is Brownian bridge a Markov process
Yes, the Brownian bridge process is a Markov process with respect to its own filtration, because $X_t$ is the Ito process with SDE $\mathrm{d} X_t = -\frac{X_t}{1-t} \mathrm{d} t + \mathrm{d} B_t$ with initial condition $X_0 = 0$ on the time domain $0 \le t < 1$. This is exercise 5.11 in Oksendal's book "Stochastic Differential Equations".
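A quick Euler-Maruyama simulation illustrates the pinning at $t=1$ (a sketch; since the drift blows up at $t=1$, the loop stops one step short):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
dt = 1.0 / N
X, t = 0.0, 0.0
for _ in range(N - 1):                   # stop just before t = 1
    X += -X / (1 - t) * dt + np.sqrt(dt) * rng.standard_normal()
    t += dt
print(X)    # close to 0: the bridge is pinned at X_1 = 0
```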
Positivity of Renyi Mutual Information
EDIT. I justify the positivity of the Renyi mutual information using its interpretation as a Renyi divergence. I follow the expositions in T. Cover, J. A. Thomas, "Elements of Information Theory" (chapter 2) and D. Xu, D. Erdogmus, "Renyi's Entropy, Divergence and their Nonparametric Estimators".

Shannon entropy and mutual information. In the setting of "classical" information theory the mutual information $I(X,Y)$ of the random variables $X$ and $Y$ is defined as $$I(X,Y):=D_{KL}(p_{XY}||p_Xq_Y),$$ where $D_{KL}(p_{XY}||p_Xq_Y)$ denotes the Kullback-Leibler divergence (KL divergence) between the joint probability $p_{XY}$ and the product $p_Xq_Y$ of the probability distributions of $X$ and $Y$. Using the Jensen inequality on the KL divergence it follows that $I(X,Y)$ is always nonnegative. I refer to the first reference for the computation in the discrete case. Introducing the Shannon entropies $H(X)$, $H(Y)$ of $X$ resp. $Y$ and the joint entropy $H(X,Y)$ we arrive at the equivalent formulation $$I(X,Y)=H(X)+H(Y)-H(X,Y).$$

Renyi entropy and mutual information. Let us now consider the Renyi $\alpha$-setting. With $$H_{\alpha}(X)=\frac{1}{1-\alpha}\log\int p^{\alpha}_X(x)dx$$ we denote the Renyi entropy of the r.v. $X$. The Renyi divergence of the distribution $g(x)$ from the distribution $f(x)$ is $$D_{\alpha}(f||g):=\frac{1}{\alpha-1}\log\int f(x)\left(\frac{f(x)}{g(x)}\right)^{\alpha-1}dx.$$ It can be proved that (please see the second reference at p. 81) $$D_{\alpha}(f||g)\geq 0 \quad\forall ~f, g~\text{and}~\alpha>0,~~(*)$$ $$\lim_{\alpha\rightarrow 1}D_{\alpha}(f||g)=D_{1}(f||g)=D_{KL}(f||g).~~(**)$$ The Renyi mutual information $I_{\alpha}(X,Y)$ is defined naturally as the Renyi divergence between the joint distribution $p_{XY}$ of $X$ and $Y$ and the product of the marginal distributions $p_X$, $q_Y$, i.e. $$I_{\alpha}(X,Y):=D_{\alpha}(p_{XY}||p_Xq_Y).$$ This is a definition; you can find it, for example, at p. 83 in the second reference. You can justify it through the overall $\alpha$-setting and the limit $$\lim_{\alpha\rightarrow 1}I_{\alpha}(X,Y)=I(X,Y),$$ which follows from property $(**)$ of the Renyi divergence. This limit is parallel to the fundamental $\lim_{\alpha\rightarrow 1}H_{\alpha}(X)=H(X)$. From property $(*)$ one derives the nonnegativity of the Renyi mutual information. For these reasons, I would prove nonnegativity of the Renyi mutual information through the above definition.

At the present stage I haven't been able to prove that $$I_{\alpha}(X,Y)=H_{\alpha}(X)+H_{\alpha}(Y)-H_{\alpha}(X,Y),$$ or to find such a characterization in the literature. Even in the discrete case I got blocked because of the coefficient $\frac{1}{1-\alpha}$ in front of the entropies. The cases $0<\alpha<1$ and $\alpha>1$ must be studied separately, and it seems that a straightforward application of Jensen's inequality is not possible.
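A small numerical illustration of $(*)$ and $(**)$ in the discrete case (the toy joint distribution is my own choice):

```python
import numpy as np

def renyi_divergence(f, g, alpha):
    # D_alpha(f||g) = 1/(alpha-1) * log( sum f^alpha * g^(1-alpha) ), discrete case
    f, g = np.asarray(f, float), np.asarray(g, float)
    return np.log(np.sum(f**alpha * g**(1.0 - alpha))) / (alpha - 1.0)

# a toy joint distribution for (X, Y) and the product of its marginals
p_xy = np.array([[0.3, 0.1],
                 [0.2, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)
prod = np.outer(p_x, p_y)

for alpha in (0.5, 0.999, 2.0):
    d = renyi_divergence(p_xy.ravel(), prod.ravel(), alpha)
    print(alpha, d)   # all nonnegative; alpha near 1 approximates the KL divergence
```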
How to get the RHS from the LHS $\sum_{k=0}^n2^kx^{k+2}=\frac{x^2-2^{n+1}x^{n+3}}{1-2x}\tag{1}$
$$ \sum_{k=0}^{n}{2^k{x}^{k+2}} = x^2\sum_{k=0}^{n}{2^k{x}^{k}}\\ \sum_{k=0}^{n}{2^k{x}^{k+2}} = x^2\sum_{k=0}^{n}{{(2x)}^{k}}\\ \sum_{k=0}^{n}{2^k{x}^{k+2}} = x^2\frac{1-{(2x)}^{n+1}}{1-2x}\\ \sum_{k=0}^{n}{2^k{x}^{k+2}} = \frac{x^2-x^2{2}^{(n+1)}{x}^{n+1}}{1-2x}\\ \sum_{k=0}^{n}{2^k{x}^{k+2}} = \frac{x^2-{2}^{n+1}{x}^{n+3}}{1-2x} $$ I used the formula for summing geometric series: $$ \sum_{k=0}^{n}{t^k} = \frac{1-{t}^{n+1}}{1-t} $$ (valid for $t\ne 1$; here $t=2x$, so $x\ne\frac12$ is needed).
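If you want to sanity-check the identity numerically, here is a quick exact-arithmetic test (sample values of $n$ and $x$ chosen arbitrarily, avoiding $x=\frac12$):

```python
from fractions import Fraction

def lhs(x, n):
    return sum(Fraction(2)**k * x**(k + 2) for k in range(n + 1))

def rhs(x, n):
    return (x**2 - Fraction(2)**(n + 1) * x**(n + 3)) / (1 - 2 * x)

for n in (1, 3, 6):
    for x in (Fraction(1, 3), Fraction(5, 2)):
        assert lhs(x, n) == rhs(x, n)
print("identity checked for sample values")
```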
intuitionistic logic - independence of premise
This is closely related of the principle known as independence of premise. The Wikipedia article gives two heuristic arguments why that principle is not constructively valid. I will sketch them here, in the context of this principle, which is very close but not quite the same. The first argument is based on the BHK interpretation. To prove the bottom formula, you would need to provide a specific $x$ so that, if $Fx$ holds then $Gx$ holds. To prove the top formula, you only need to provide a method that, given a proof of $\forall x Fx$, produces a proof of $\exists x Gx$. There is no obvious way to take a method of that kind and use it to produce a specific value of $x$, which makes us suspicious of the constructive validity of the deduction. The second argument is based on a weak counterexample. Let $Gx$ be such that we do not know whether there is any $x_0$ such that $Gx_0$. For example, $G$ could say that $x$ is a counterexample to Goldbach's conjecture. Let $Fx$ be the formula $\exists z Gz$, so $\forall x Fx$ is $\forall x \exists z Gz$. Then the implication $\forall x \exists z Gz \to \exists x Gx$ is trivial to prove constructively, but $\exists x ( \exists z Gz \to Gx)$ cannot be proved constructively unless we can produce an $x$ such that, if there is any counterexample to Goldbach's conjecture then $x$ is such a counterexample. No such value of $x$ is known at the present time. This weak counterexample can be used to prove more formally that the deduction from the question is not provable in various formal systems of constructive logic. The counterexample shows that, if we work in the language of arithmetic, and we assume the deduction rule from the question, then whenever $Hx$ is a decidable property of natural numbers we would have a way to determine whether $\exists x Hx$ is true. This would contradict the unsolvability of the Halting problem, among other things.
Differential forms, pullbacks and determinants
Suppose the map $f$ can be represented in local coordinates as $$ f : (x_1, \dots, x_n) \mapsto (y_1, \dots, y_n) := (f_1(x_1, \dots, x_n) \ , \ \dots \ , \ f_n(x_1, \dots, x_n)).$$ Then we have $$ f^\star (dy_i) = \sum_{j = 1}^n\frac{\partial f_i}{\partial x_j} dx_j$$ for every $i \in \{1, \dots, n\}$. I believe the $n$-form $\omega$ that you are interested in is the "unit" volume form in this local basis, $$ \omega = \ dy_1 \wedge \dots \wedge dy_n.$$ Taking pull-backs, we have $$ f^{\star}(\omega) = \sum_{j_1 \dots j_n} \frac{\partial f_1}{\partial x_{j_1}}\dots \frac{\partial f_n}{\partial x_{j_n}} \ dx_{j_1} \wedge \dots \wedge dx_{j_n} .$$ The only terms that contribute in the sum are the terms where the $j_1, \dots, j_n$ are all distinct, i.e. where there exists a permutation $\sigma \in S_n$ such that $j_1 = \sigma(1), \ \dots, \ j_n = \sigma(n)$. Hence \begin{align} f^{\star}(\omega) & = \sum_{\sigma \in S_n} \frac{\partial f_1}{\partial x_{\sigma(1)}}\dots \frac{\partial f_n}{\partial x_{\sigma(n)}} \ dx_{\sigma(1)} \wedge \dots \wedge dx_{\sigma(n)} \\ & = \sum_{\sigma \in S_n} \frac{\partial f_1}{\partial x_{\sigma(1)}}\dots \frac{\partial f_n}{\partial x_{\sigma(n)}} \ {\rm sign}(\sigma) \ dx_1 \wedge \dots \wedge dx_n \\ & = (\det Df ) \ dx_1 \wedge \dots \wedge dx_n.\end{align}
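A quick symbolic check of the $n=2$ case (the sample map is my own arbitrary choice):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# an arbitrary sample map f : (x1, x2) -> (y1, y2), purely illustrative
f1 = x1**2 + sp.sin(x2)
f2 = x1 * x2

# permutation-sum formula for n = 2: sign(identity) = +1, sign(transposition) = -1
det_perm = (sp.diff(f1, x1) * sp.diff(f2, x2)
            - sp.diff(f1, x2) * sp.diff(f2, x1))

J = sp.Matrix([[f1, f2]]).jacobian([x1, x2])
print(sp.simplify(det_perm - J.det()))  # 0, i.e. the two agree
```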
I am trying to understand how sets are generally defined using ZF set theory.
ZF set theory is a theory that deals with sets that are determined by their members: the axiom of extensionality states that $A = B \Leftrightarrow (\forall x((x \in A) \Leftrightarrow (x \in B)))$. The axiom of replacement only helps to construct sets with a great number of members and doesn't produce any objects that aren't sets. If you want to have objects that aren't sets, then you need to change the ZF axioms to admit what are known as ur-elements: objects with no members that are nevertheless distinct from the empty set. For your apples and pears, it is probably simpler to stick with ZF and use an encoding: $\mathit{Apple} = 0$ and $\mathit{Pear} = 1$, where $0$ and $1$ are defined in the usual way (as $\{\}$ and $\{\{\}\}$).
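As a toy illustration of this encoding (Python frozensets standing in for hereditarily finite sets; this is an analogy, not ZF itself):

```python
# von Neumann-style encoding with Python frozensets (a toy analogy, not ZF itself)
EMPTY = frozenset()          # 0 = {}
ONE = frozenset({EMPTY})     # 1 = {{}}

Apple, Pear = EMPTY, ONE
basket = frozenset({Apple, Pear})
print(Apple == Pear)   # False: the two "fruits" are distinguishable sets
print(len(basket))     # 2
```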
Prove that if f is continuous in A then |f| is also continuous.
There’s no reason to think that $f$ is either always positive or always negative.

HINT: Prove and use the following facts:

- If $f(a)\ne 0$, then there is a $\delta>0$ such that $f(x)$ and $f(a)$ have the same sign whenever $|x-a|<\delta$.
- If $f(a)=0$, then $|f(x)-f(a)|=\big||f(x)|-|f(a)|\big|$ for all $x$.
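For reference, a standard shortcut that avoids the case split entirely (a well-known inequality, not part of the hint above) is the reverse triangle inequality $$\big||f(x)|-|f(a)|\big|\le|f(x)-f(a)|\;,$$ which shows that any $\delta$ that works for $f$ at $a$ also works for $|f|$.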
Derivative of $t \mapsto \Vert f+tg \Vert_p^p$
It is enough to show that the function $F$ is differentiable on every bounded interval $(-r,r)$. So let $r>0$, and $$ \phi: X\times(-r,r) \to [0,\infty],\ \phi(x,t)=s(f(x)+tg(x))=:\phi^x(t), $$ where $$ s: \mathbb{R} \to [0,\infty),\ s(t)=|t|^p. $$ Since $s$ is differentiable, and $$ s'(t)=\begin{cases} p|t|^{p-2}t &\text{ for } t \ne 0\\ 0 &\text{ for } t=0 \end{cases}, $$ it follows that for every $x$ in $$ \Omega:=\{x \in X:\ |f(x)|<\infty\}\cap\{x \in X:\ |g(x)|<\infty\} $$ the function $\phi^x$ is differentiable and $$ (\phi^x)'(t)=\partial_t\phi(x,t)=g(x)s'(f(x)+tg(x)) \quad \forall\ t \in (-r,r). $$ Therefore $$ |\partial_t\phi(x,t)| \le G_r(x):=p\max(1,r^{p-1})|g(x)|(|f(x)|+|g(x)|)^{p-1} \quad \forall\ (x,t) \in \Omega\times(-r,r). $$ Thanks to Hölder's inequality we have $$ \int_XG_r\,d\mu=p\max(1,r^{p-1})\int_X|g|(|f|+|g|)^{p-1}\,d\mu\le p\max(1,r^{p-1})\|g\|_{L^p(X)}\,\big\||f|+|g|\big\|^{p-1}_{L^p(X)}, $$ i.e. $G_r \in L^1(X)$. Given $t_0 \in (-r,r)$ and a sequence $\{t_n\} \subset (-r,r)$ with $t_n \to t_0$ we set $$ \tilde{\phi}_n(x,t_0)=\frac{\phi(x,t_0)-\phi(x,t_n)}{t_0-t_n} \quad \forall x \in \Omega, n \in \mathbb{N}. $$ Then $$ \lim_n\tilde{\phi}_n(x,t_0)=\partial_t\phi(x,t_0) \quad \forall\ x\in \Omega. $$ Thanks to the MVT, for each $x$ and $n$ there is some $\alpha=\alpha(x,t_0,t_n) \in [0,1]$ such that $$ |\tilde{\phi}_n(x,t_0)|=|\partial_t\phi(x,\alpha t_0+(1-\alpha)t_n)|\le G_r(x) \quad \forall\ x \in \Omega, n \in \mathbb{N}. $$ Applying the dominated convergence theorem to the sequence $\{\tilde{\phi}_n\}$ we get for every $t_0 \in (-r,r)$: $$ \int_X\partial_t\phi(x,t_0)d\mu(x)=\int_X\lim_n\tilde{\phi}_n(x,t_0)d\mu(x)=\lim_n\int_X\tilde{\phi}_n(x,t_0)\,d\mu(x)=\lim_n\frac{F(t_0)-F(t_n)}{t_0-t_n}= F'(t_0). $$ In particular we have $$ F'(0)=\int_X g(x)s'(f(x))\,d\mu(x)=p\int_X fg|f|^{p-2}\,d\mu. $$
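A quick numerical sanity check of the final formula on a discrete (counting) measure, with arbitrary sample data:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3.0
f = rng.normal(size=5)
g = rng.normal(size=5)

def F(t):
    # ||f + t g||_p^p for the counting measure on 5 points
    return np.sum(np.abs(f + t * g)**p)

h = 1e-6
numeric = (F(h) - F(-h)) / (2 * h)                  # symmetric difference quotient
closed_form = p * np.sum(f * g * np.abs(f)**(p - 2))
print(numeric, closed_form)                          # agree to ~1e-6
```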
Passing thresholds with uniform random variables
I am assuming all the r.v.'s are independent and uniform on $[0,1]$. Then $P(X_i \ge a)=1-a.$ Clearly $A$ and $B$ have binomial distributions, so the means are "$np$": $E(A)= n(1-a)$, hence $E(A-B)=n(1-a)-m(1-b),$ and the variance is "$npq$": $Var(A)=na(1-a)$ and $Var(A-B)=na(1-a)+mb(1-b).$ $P(A=k)={n \choose k}(1-a)^ka^{n-k}$ and $P(B\le k)=\sum_{i=0}^{\min(k,m)}{m \choose i}(1-b)^ib^{m-i}.$ By independence, $$P(A\ge B)= \sum_{k=0}^n P(B\le k)P(A=k).$$ For $n,m$ large we can use the normal approximation to the binomial: $P(A-B\le x)\approx \Phi\left( \frac {x-E(A-B)}{\sqrt{Var(A-B)}} \right)$
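A small Monte Carlo check of the exact formula (parameter values are arbitrary illustrative choices):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, m, a, b = 8, 5, 0.3, 0.6
trials = 200_000

A = (rng.random((trials, n)) >= a).sum(axis=1)
B = (rng.random((trials, m)) >= b).sum(axis=1)
print((A >= B).mean())   # Monte Carlo estimate of P(A >= B)

pA = [comb(n, k) * (1 - a)**k * a**(n - k) for k in range(n + 1)]
pBle = [sum(comb(m, i) * (1 - b)**i * b**(m - i) for i in range(min(k, m) + 1))
        for k in range(n + 1)]
print(sum(pA[k] * pBle[k] for k in range(n + 1)))   # exact value, matches closely
```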
Show that $(x,y)=(0,0)$ is the only solution for the system $\left\{\begin{array}{l}{a x+b y=0} \\ {c x+d y=0}\end{array}\right.$ iff $a d-b c \neq 0$
If $ad-bc\ne 0$, try to solve your system by the elimination method. Multiply the first equation by $-c$ and the second one by $a$ and add them to get $$(ad-bc)y=0$$ Since $ad-bc\ne 0$ we must have $y=0$. Similarly you get $x=0$. Conversely, if $ad-bc=0$ the system has a nonzero solution: if $(a,b)\ne(0,0)$ take $(x,y)=(b,-a)$, which satisfies $ax+by=ab-ba=0$ and $cx+dy=cb-da=-(ad-bc)=0$; if $(a,b)=(0,0)$ take $(x,y)=(d,-c)$ (or any nonzero pair if all four coefficients vanish).
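For a quick sanity check with illustrative numbers: taking $a=1$, $b=2$, $c=2$, $d=4$ gives $ad-bc=0$, and indeed $(x,y)=(2,-1)$ is a nonzero solution of both equations.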
The series $\sum_{n=1}^{\infty}\frac{\ln(n)}{n^x}$ converges for $x\in(3/2,\infty)$?
Hint. One may observe that there exists $n_0>1$ such that for all $n\ge n_0$, $$ \left|\frac{\ln n}{n^{1/4}}\right|\le 1 $$ giving $$ \left|\frac{\ln n}{n^{x}}\right|\le \frac{1}{n^{x-1/4}},\qquad n\ge n_0, $$ leading to a convergent series for $x-\dfrac 14>1$, that is for $x>\dfrac 54$; the latter holds in particular for $x>\dfrac32$.
A function with certain properties to determine the price of an item
My suggestion would be to use a convergent series, so that the total increase stays bounded. For example, take the increase to be $\frac{1}{k!}$, where $k$ is the amount of the item remaining, and set $$Price_{k-1} = Price_{k} + increase_{k}.$$ This way the price of the item increases by a small amount each time one is sold. Since $\sum_{k\ge 0}\frac{1}{k!}=e$, the total increase is bounded (you are just summing the series backwards), which should keep the price from being over-inflated.
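A minimal sketch of this pricing rule (the base price and initial stock below are hypothetical placeholders):

```python
from math import factorial

base_price = 10.0   # hypothetical starting price when stock is full
stock = 10          # hypothetical initial amount of the item

price = base_price
for k in range(stock, 0, -1):       # each unit sold moves k -> k-1
    price += 1.0 / factorial(k)     # bounded total increase: sum of 1/k! < e
    print(k - 1, round(price, 6))
```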
Is it possible for a generalized eigenvector to have two different eigenvalues?
Yes, it is necessarily true that $\lambda_1 = \lambda_2$. In particular: suppose that $(T - \lambda_1)^{d_1} v = 0$ and $(T - \lambda_1)^{d_1 - 1}v \neq 0$. Then $w = (T - \lambda_1)^{d_1 - 1} v$ is non-zero and satisfies $Tw = \lambda_1 w$. It follows that if $\lambda_2 \neq \lambda_1$, we have $$ \begin{align} (T - \lambda_1)^{d_1 - 1}[(T - \lambda_2)^{d_2}v] &= (T - \lambda_2)^{d_2}[(T - \lambda_1)^{d_1 - 1}v] \\ & = (T - \lambda_2)^{d_2} w = (\lambda_1 - \lambda_2)^{d_2}w \neq 0. \end{align} $$ It follows that $(T - \lambda_2)^{d_2}v \neq 0$.
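A concrete numerical illustration with a Jordan block (my own example, matching the argument above):

```python
import numpy as np

# Jordan block with eigenvalue 2; v is a generalized eigenvector with d_1 = 2
T = np.array([[2.0, 1.0],
              [0.0, 2.0]])
v = np.array([0.0, 1.0])
I = np.eye(2)

print(np.linalg.matrix_power(T - 2 * I, 2) @ v)   # zero vector
print(np.linalg.matrix_power(T - 3 * I, 5) @ v)   # nonzero, for any exponent
```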
Closure of a set A, Cl(A), is closed
You have already proven that a set is closed if, and only if, $\partial A\subseteq A$. You have defined $\overline A=A\cup\partial A$. First prove that $$\tag 1 \partial(A\cup B)\subseteq \partial A\cup\partial B$$ $$\tag 2 \partial\partial A\subseteq\partial A$$ Then note that $\partial \overline A=\partial(A\cup\partial A)\subseteq\partial A\cup\partial\partial A\subseteq\partial A\subseteq \partial A\cup A=\overline A$. Proofs of $(1)$ and $(2)$ (in both we may take the nbhds involved to be open). $(1)$ Suppose that $x\notin\partial A\cup\partial B$. Then there are nbhds $N_1$, $N_2$ of $x$ such that $N_1$ is disjoint from $A$ or from $X\smallsetminus A$, and $N_2$ is disjoint from $B$ or from $X\smallsetminus B$. Now suppose, toward a contradiction, that $x\in\partial(A\cup B)$; then every nbhd of $x$ meets $X\smallsetminus(A\cup B)=(X\smallsetminus A)\cap(X\smallsetminus B)$, hence meets both $X\smallsetminus A$ and $X\smallsetminus B$. Thus $N_1$ must be disjoint from $A$ and $N_2$ must be disjoint from $B$, so $N_1\cap N_2$ is a nbhd of $x$ disjoint from $A\cup B$, contradicting $x\in\partial(A\cup B)$. Hence $\partial(A\cup B)\subseteq\partial A\cup\partial B$. $(2)$ We prove $\partial A$ is closed. If $x\notin \partial A$ then there exists an open nbhd $N$ of $x$ disjoint from $A$ or from $X\smallsetminus A$. But if $y$ is any point in $N$, then $N$ itself is a nbhd of $y$ disjoint from $A$ or from $X\smallsetminus A$, so $y\notin \partial A$. Thus the complement of $\partial A$ is open, $\partial A$ is closed, and $\partial\partial A\subseteq \partial A$.
Proving a function is quasi-concave but not concave.
If $ 0 \leq x \leq x'$, then for all $\alpha \in [0,1]$ one has $(1-\alpha)x + \alpha x' \geq x$. Because $f : x \mapsto x^2$ is increasing on $\mathbb{R}_+$, then $$f((1-\alpha)x + \alpha x') \geq f(x) = \min \lbrace f(x), f(x') \rbrace$$ The function, however, is not concave, but (strictly) convex. Indeed $f''(x) = 2 > 0$ for all $x \in \mathbb{R}$; since $f$ is not affine, it cannot be concave.
Ordinal arithmetic and limit ordinals
Suppose first that $\xi$ is a limit ordinal, so that $\omega^\xi=\sup_{\eta<\xi}\omega^\eta$. Since $\zeta<\omega^\xi$, there is an $\eta_0<\xi$ with $\zeta<\omega^{\eta_0}$, and by the induction hypothesis $\zeta+\omega^\eta=\omega^\eta$ whenever $\eta_0\le\eta<\xi$; then $$\zeta+\omega^\xi=\sup_{\eta<\xi}(\zeta+\omega^\eta)=\sup_{\eta_0\le\eta<\xi}(\zeta+\omega^\eta)=\sup_{\eta_0\le\eta<\xi}\omega^\eta=\omega^\xi\;.$$ Now suppose that $\xi=\eta+1$. Then $\omega^\xi=\omega^\eta\cdot\omega$, so $\zeta+\omega^\xi=\zeta+\omega^\eta\cdot\omega$. Since $\zeta<\omega^\xi$, there is a unique ordinal $\alpha$ such that $\zeta+\alpha=\omega^\xi$; clearly $\alpha\le\omega^\xi$. Suppose that $\alpha<\omega^\xi$; $\omega^\xi=\sup_{n\in\omega}\omega^\eta\cdot n$, so there is an $n\in\omega$ such that $\zeta<\omega^\eta\cdot n$ and $\alpha<\omega^\eta\cdot n$. But then $$\zeta+\alpha\le\omega^\eta\cdot n+\omega^\eta\cdot n=\omega^\eta\cdot(2n)<\omega^\xi\;,$$ which is impossible. Thus, $\alpha=\omega^\xi$, and $\zeta+\omega^\xi=\omega^\xi$.
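For a concrete instance of the statement (an illustration, not part of the proof): $\omega+\omega^2=\omega^2$, since $\omega<\omega^2$; similarly $5+\omega=\omega$.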
Roulette: Expected payoff
Hmm, I am not sure I follow your formula for expected payoff, but here is how I would calculate it: There is an $\frac{18}{37}$ chance of winning on the first turn. There is a $\frac{19}{37}\cdot\frac{18}{37}$ chance of winning on the second turn. ... There is a $\left(\frac{19}{37}\right)^{i-1}\cdot\frac{18}{37}$ chance of winning on the $i$-th turn (lose the first $i-1$ spins, then win). When you win on turn $i$, you have put in $2^i-1$, and you get a payout of $2^i$, for a net winnings of 1 (of course!) So, since each win is worth a net of $1$: $$ E = \sum_{i=0}^\infty \frac{18}{37}\left(\frac{19}{37}\right)^i = \frac{18}{37}\cdot\sum_{i=0}^\infty \left(\frac{19}{37}\right)^i = \frac{18}{37}\cdot\frac{1}{1-\frac{19}{37}} = \frac{18}{37}\cdot\frac{37}{18} = 1$$ (of course!)
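A quick Monte Carlo confirmation (a sketch; note that every run of the doubling strategy necessarily ends with net winnings $1$, so the sample mean is exactly $1$ despite the unbounded stakes):

```python
import random

def martingale(p_win=18/37):
    # double the stake after each loss; return net winnings at the first win
    stake, lost = 1, 0
    while True:                      # terminates with probability 1
        if random.random() < p_win:
            return stake - lost      # always +1
        lost += stake
        stake *= 2

random.seed(0)
results = [martingale() for _ in range(100_000)]
print(sum(results) / len(results))   # exactly 1.0
```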
Probability task (Find probability that the chosen ball is white.)
In the case of $(w,a)$ or $(a,w)$ you need to take into account that one of these two balls is then chosen at random (as if by a coin toss, we should assume). Therefore these cases have to be weighted by a factor of $\frac 12$.
Taylor Expansion of complex function $\frac{1}{\sqrt{1-2tz+t^2}}=\sum_{n=0}^{+\infty}P_n(z)t^n$
Maybe you could start by writing $$1=\sqrt{1-2tz+t^2}~~\sum_{n=0}^{+\infty}P_n(z)t^n$$ and now consider $$\sqrt {1+x}=1+\frac{x}{2}-\frac{x^2}{8}+\frac{x^3}{16}-\frac{5 x^4}{128}+O\left(x^5\right)$$ Replace $x$ by $(t^2-2tz)$ and use the binomial theorem; matching powers of $t$ then determines the $P_n(z)$ one by one. If I did not make any mistake, the beginning of the sum should look like $$1+t z+t^2 \left(\frac{3 z^2}{2}-\frac{1}{2}\right)+t^3 \left(\frac{5 z^3}{2}-\frac{3 z}{2}\right)+\cdots$$
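Alternatively, one can let a CAS do the matching of coefficients; a quick sympy check (comparing against sympy's built-in Legendre polynomials):

```python
import sympy as sp

t, z = sp.symbols('t z')
gen = 1 / sp.sqrt(1 - 2*t*z + t**2)
series = sp.series(gen, t, 0, 4).removeO()
for n in range(4):
    coeff = sp.expand(series.coeff(t, n))
    print(n, coeff, coeff == sp.expand(sp.legendre(n, z)))  # True for each n
```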
Do canonical forms serve only one purpose?
$$\frac1{1-\sqrt2}=1+\sqrt2\iff \color{red}1=(1-\sqrt2)(1+\sqrt2)=\color{red}{-1}$$ Ok, so that was just goofing around with a trivial typing mistake. To try to actually and seriously address your nice question: No, the equation $\;\frac5x=3+x\;$ is not the same equation as $\;x^2+3x-5=0\;$... not even close: the first equation has a rational, non-polynomial function on the left side, whereas the second expression above is a polynomial. What happens is that we usually are interested in the equations' solutions, and both equations above have the very same solutions, though one of them is defined at one point more than the other one. About your question about fractions: a simple definition, and $$\frac{51}{68}=\frac{21}{28}\iff51\cdot28=21\cdot68\;(=1428)\;\;\;\color{green}\checkmark$$ Of course, you know all this and all the rest of things you "asked" (better: wondered), and I'm not sure I can see what the actual intention of all this could be, yet when we have some uses we go with one thing, and for other ones we may go with another one. For example, if there are $\;68\;$ people and we bought $\;51\;$ pizzas to share among them all, it may be easier and much clearer to actually write $\;\frac{51}{68}\;$ instead of an apparently simpler $\;\frac34\;$. The last fraction tells me very little in the first exposed situation. I insist with my students, whether from university or from high school (when I have them), that if possible and reasonably easy and quick they should write an expression (and I'm thinking of functions now) in several equivalent ways, depending on what the task is, for example $$f(x)=x-\frac1x=\frac{x^2-1}x=\frac{(x-1)(x+1)}x$$ The first form is nice for realizing what the domain is and, more important, for differentiating it in case of necessity. The second and third forms are better for finding out where the function vanishes and also for asymptotes. Finally, whether $\;c+a-b\;$ is better or worse than $\;a+c-b\;$ or $\;a-b+c\;$ is mostly a matter of taste, though I think that alphabetical order usually makes things easier to grasp, so I'd go with the third form unless some conditions are given that may make another form easier to work with.
Compute the splitting field
You want to find the smallest field containing $\mathbb Q$ which contains all the roots of $x^6-1$. For concreteness, we can work in $\mathbb C$. The roots of $x^6-1$ are $e^{k\pi i/3}$ for $k=0,\ldots,5$. It is easy to see that these are all powers of $e^{\pi i/3}$, so $\mathbb Q(e^{\pi i/3})$ contains both $\mathbb Q$ and all roots of $x^6-1$. Thus it contains the splitting field. Conversely, the splitting field contains $\mathbb Q$ and the root $e^{\pi i/3}$, and $\mathbb Q(e^{\pi i/3})$ is by definition the smallest field containing both, so $\mathbb Q(e^{\pi i/3})$ is contained in the splitting field as well. Thus the splitting field is $\mathbb Q(e^{\pi i/3})$. (Concretely, $e^{\pi i/3}=\frac{1+\sqrt{-3}}{2}$, so this field is $\mathbb Q(\sqrt{-3})$, a degree-$2$ extension of $\mathbb Q$.)
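A quick symbolic confirmation (a sketch using sympy; here $e^{\pi i/3}$ is written in closed form):

```python
import sympy as sp

x = sp.symbols('x')
zeta = sp.Rational(1, 2) + sp.sqrt(3) / 2 * sp.I   # e^{i*pi/3} in closed form
print(sp.minimal_polynomial(zeta, x))              # x**2 - x + 1, so the degree is 2
print(sp.factor(x**6 - 1, extension=zeta))         # six linear factors over Q(zeta)
```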
Double Summation Simplification
We are using the hockey-stick identity for binomials. First, $$\sum_{i=1}^k i = \binom{k+1}{2}.$$ Then, by the hockey-stick identity, $$\sum_{k=1}^n \binom{k+1}{2} = \sum_{k=2}^{n+1} \binom{k}{2} = \binom{n+2}{3},$$ and $$\binom{n+2}{3} = \frac{(n+2)!}{3!\,(n-1)!} =\frac{n (n+1)(n+2)}{6}.$$ Finally multiply by $3$ and you will get your result.
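A quick numerical check, assuming (as the steps suggest) that the double sum in question is $3\sum_{k=1}^n\sum_{i=1}^k i$:

```python
n = 10
total = 3 * sum(sum(range(1, k + 1)) for k in range(1, n + 1))
closed = n * (n + 1) * (n + 2) // 2   # i.e. 3 * n(n+1)(n+2)/6
print(total, closed, total == closed)  # 660 660 True
```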
All roots $\lambda$ of $\det(A-\lambda B)=0$ are $\ge1$ when $B$ is p.d and $A-B$ is n.n.d.
By shifting $A$ and $\lambda$, you want to show that if $B$ is positive definite, then $A$ is nonnegative definite iff all the roots of $\det(A-\lambda B)$ are nonnegative. Since $B$ is positive definite, it admits a Cholesky factorization $B=LL^{\dagger}$ where $L$ and $L^{\dagger}$ are invertible. Now, the roots of $\det(A-\lambda B)=\det(A-\lambda LL^{\dagger})$ are precisely the roots of $\det(\lambda-L^{-1}A(L^{\dagger})^{-1})$, and $A$ is nonnegative definite iff $C\stackrel{\text{def}}{=}L^{-1}A(L^{\dagger})^{-1}$ is. Consequently, it suffices to show that $C$ is nonnegative definite iff all the roots of its characteristic polynomial $\det(\lambda-C)$ are at least zero. The roots of the polynomial are exactly the eigenvalues of $C$, so we need to know that $C$ is nonnegative definite iff all its eigenvalues are at least zero. I think this last equivalence is a standard result achieved by considering the quadratic form $x^{\dagger}Cx$ in $C$'s diagonalizing basis.
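A numerical illustration of both the statement and the Cholesky reduction (random test matrices of my own making):

```python
import numpy as np
from scipy.linalg import cholesky, eigh

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
B = M @ M.T + 4 * np.eye(4)          # positive definite
N = rng.normal(size=(4, 4))
A = B + N @ N.T                      # A - B is nonnegative definite

lam = eigh(A, B, eigvals_only=True)  # roots of det(A - lambda*B) = 0
print(lam)                           # all >= 1, as claimed

L = cholesky(B, lower=True)          # B = L L^T
C = np.linalg.inv(L) @ A @ np.linalg.inv(L).T
print(np.linalg.eigvalsh(C))         # same roots, via C = L^{-1} A L^{-T}
```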
Prove that a subset of a linearly independent set is a linearly independent set
Step 1: Write down the definition of linear independence. (Add it here to your question!) Step 2: There is no step 2 ;-) (If you want a nudge: a linear dependence among the vectors of the subset becomes, after padding with zero coefficients, a linear dependence among the vectors of the whole set.)
Assume that at each point of $γ$ the vector field $f$ is either tangent or points toward the interior of $Ω$. Then $f$ has a zero inside $Ω$.
Note that with your new condition the index is still $1$. Indeed, since the vector field can never point outward, if it moves from a "forward" tangent to a "backward" tangent it must return to the original position, since the curve closes up. So any amount that could contribute to a bigger/smaller index is cancelled by an equal change of the opposite sign. Of course, you could instead use the Poincaré-Bendixson theorem.
prove the products of analytic functions are analytic.
Assume $x_0=0$, put $$A_n:=\sum_{k=0}^n a_k x^k, \quad B_n:=\sum_{k=0}^n b_k x^k,\quad c_r:=\sum_{k=0}^r a_{r-k}b_k,\quad C_n:=\sum_{r=0}^n c_r x^r\ .$$ Let $\rho:=\min\{\rho_a,\rho_b\}>0$, where $\rho_a$ and $\rho_b$ are the radii of convergence of the two given series, and assume $|x|<\rho$. Let an $N>0$ be given. Then $A_NB_N-C_N$ contains only terms $a_jb_kx^{j+k}$ where at least one of $j$ and $k$ is $\geq{N\over2}$. It follows that $$|A_NB_N-C_N|\leq \sum_{j>N/2} |a_jx^j|\ \sum_{k=0}^\infty |b_kx^k|+\sum_{j=0}^\infty |a_jx^j|\ \sum_{k>N/2} |b_kx^k|\ .$$ Here the full sums on the right hand side are bounded, and the $j>N/2$, resp. $k>N/2$ sums converge to $0$ when $N\to\infty$. It follows that $$\lim_{N\to\infty}C_N=\lim_{N\to\infty}A_N\ \lim_{N\to\infty}B_N=\sum_{j=0}^\infty a_jx^j\ \sum_{k=0}^\infty b_k x^k\ ,$$ as desired. In particular the product series converges at the chosen $x$, hence has convergence radius at least $\rho$.
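For intuition, here is a numerical instance of the Cauchy product with two familiar series (my own choice of example):

```python
from math import exp, factorial

N = 20
a = [1 / factorial(k) for k in range(N)]   # coefficients of exp(x)
b = [1.0] * N                              # coefficients of 1/(1-x), radius 1
c = [sum(a[r - k] * b[k] for k in range(r + 1)) for r in range(N)]  # Cauchy product

x = 0.3   # inside both discs of convergence
partial = sum(c[r] * x**r for r in range(N))
print(partial, exp(x) / (1 - x))           # the two agree closely
```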
First-order homogenous linear ODE in 2 functions, with information about initial conditions
Set $$\frac{dP}{dt} = -bR.$$ $$\frac{dR}{dt} = 0$$ The solution is $R = k_1$ and $P = -bRt+ c_1 = -bk_1t + c_1$ by separation of variables. Here $k_1, c_1$ are integration constants. Set $$\frac{dR}{dt} = -bR.$$ $$\frac{dP}{dt} = 0$$ The solution is $R = c_2e^{-bt}$ and $P = k_2$.
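Both systems can also be handed to a CAS; a quick sympy check of the second one (the exact output form may vary between sympy versions):

```python
import sympy as sp

t, b = sp.symbols('t b', positive=True)
P, R = sp.symbols('P R', cls=sp.Function)

# second system: dR/dt = -b R, dP/dt = 0
sol = sp.dsolve([sp.Eq(R(t).diff(t), -b * R(t)),
                 sp.Eq(P(t).diff(t), 0)])
print(sol)   # R(t) = C1*exp(-b*t), P(t) = const, as above
```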
Are $\mathbb{C}^2$ and $\mathbb{C}^2/(x,y)\sim(y,x)$ homeomorphic?
The "singular set" $(x,x)$ is codimension 2, not 1, so you should not visualuze this as something with boundary. To convince you "why" this works, let's instead consider $X_n = \Bbb R^n \times \Bbb R^n/(x,y)\sim (y,x)$. By considering the map $(x,y) \mapsto (x+y,x-y)$ we see this is the same as $\Bbb R^n \times \Bbb R^n/(x,y)\sim (x,-y)$. This is the same as $\Bbb R^n \times C(\Bbb{RP}^{n-1})$, the infinite cone on $\Bbb{RP}^{n-1}$. This is a manifold when, and only when, $n=2$ (because $\Bbb{RP}^1$ is a circle!) In the case $n=1$ you get $\Bbb R \times [0,\infty)$, like you expect. It is probably worth adding further that $\Bbb C^k/S_k$, modding out by the action of the symmetric group, is also homeomorphic to $\Bbb C^k$. Taking symmetric powers of complex curves is a powerful tool.
Combinations when repetitions are allowed
Suppose the gifts are lined up in a row, and numbered $1$ to $10$. There are $10$ slips of paper, numbered $1$ to $10$, in a hat. You draw $7$ slips, and get the gifts with those numbers. So, you get four $A$'s if you choose slips $1,2,3,4,x,y,z$ where $5<x<y<z<11$ and also if you choose slips $1,2,3,5,x,y,z$ or slips $1,2,4,5,x,y,z$ and so on.
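If (as the slip examples suggest) gifts $1$ through $5$ are the $A$'s, the count of draws with exactly four $A$'s is then a quick computation:

```python
from math import comb

# assuming 5 of the 10 gifts are of type A (inferred from the slip examples)
ways_four_As = comb(5, 4) * comb(5, 3)   # choose 4 A-slips and 3 non-A slips
total = comb(10, 7)
print(ways_four_As, total, ways_four_As / total)
```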
Variation on the birthday problem?
We could use a binomial distribution. Think of this like pulling $600$ people out of thin air, and one by one, you assign them a September birthday, or a non-September birthday, because if months and days are equally likely to contain birthdays, then the probability of "success" is constant. We have that $n$ is the number of "trials" (600 people) and $p$ is the probability $\displaystyle \frac{30}{365}=0.0822$ (for September). However, this method is seriously inconvenient, because this is a discrete (whole number) variable and individual values have to be calculated for $X=75$, $X=76$, etc. We should use a normal approximation. A normal approximation is good when we have $np>10$ and $n(1-p)>10$, both of which hold here. The standard deviation would be $\sqrt{np(1-p)}=6.73$, and the mean would be $np=49.3$. The probability that $75$ or more people are born in September corresponds to a z-score of $z=\displaystyle \frac{75-49.3}{6.73}=3.82$. Z-scores above $3$ are pretty rare! Using a probability table, we have $P(z>3.82)=\boxed{0.0000667}$, or about $0.0067\%$. (A continuity correction, i.e. using $74.5$ instead of $75$, would be slightly more accurate.)
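For comparison, the exact binomial tail is easy to compute in log-space (avoiding overflow):

```python
from math import exp, lgamma, log

n, p = 600, 30 / 365

def log_pmf(k):
    # log of C(n, k) * p^k * (1-p)^(n-k)
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

tail = sum(exp(log_pmf(k)) for k in range(75, n + 1))
print(tail)   # compare with the normal-approximation value above
```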
Advantage of using Hyperbolic Trigonometric functions?
In the first place, $\cos(z) = \dfrac{e^{iz}+e^{-iz}}{2} = \cosh(iz)$ and $\sin(z) = \dfrac{e^{iz}-e^{-iz}}{2i}=-i\sinh(iz)$. So why don't people stick to the exponential function and forget about all trigonometric or hyperbolic functions? In some cases it is indeed better to go straight to the complex exponential function, but in other cases one might consider it nice to express something using (what were originally) real functions. For example, vibrations on a string can be modeled as a sum of trigonometric functions. One could have solved the differential equations in full generality and obtain complex exponentials as solutions too, but our string has only real positions (in classical mechanics) and trigonometric functions are sufficient to provide a basis for the real solutions we seek.
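These identities are easy to spot-check numerically at an arbitrary complex point:

```python
import cmath

z = 1.3 - 0.7j   # arbitrary test point
print(cmath.cos(z), cmath.cosh(1j * z))          # equal
print(cmath.sin(z), -1j * cmath.sinh(1j * z))    # equal
```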
Are there practical algorithms for computing exact eigenvalues?
This Wikipedia page states: "While there is no simple algorithm to directly calculate eigenvalues for general matrices, there are numerous special classes of matrices where eigenvalues can be directly calculated." https://en.wikipedia.org/wiki/Eigenvalue_algorithm#Direct_calculation One underlying reason: the eigenvalues are the roots of the characteristic polynomial, and by the Abel-Ruffini theorem there is no general formula in radicals for the roots of polynomials of degree $5$ or higher, so for $n\ge 5$ one cannot expect exact closed-form eigenvalues in general. I think this may be what you are asking.
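For small matrices, computer algebra systems do compute exact eigenvalues symbolically; a quick sympy example:

```python
import sympy as sp

M = sp.Matrix([[2, 1],
               [1, 3]])
print(M.eigenvals())   # {5/2 - sqrt(5)/2: 1, 5/2 + sqrt(5)/2: 1}
```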