Ground state energy of Schrödinger energy functional for a potential $U(x) = -{1\over |x|^\alpha} $ with $\alpha >2$
I would suggest trying the following family of functions $$\psi_\epsilon(x)= c_\epsilon\frac{e^{-|x|^2}}{(|x|^2+\epsilon^2)^{(3-\alpha)/4}} $$ with $c_\epsilon$ a normalization constant such that $||\psi_\epsilon||=1$. With that you can observe that $\mathscr{E}(\psi_\epsilon) \to -\infty$ for $\epsilon \to 0$. The reason is the potential term: while the kinetic term stays a finite positive number, the potential term diverges. In particular, we have that $$ -\int_{\mathbb R^3} \frac{1}{|x|^\alpha} |\psi_\epsilon|^2 \,d^3 x = - 4\pi c_\epsilon^2 \int_0^\infty r^{2-\alpha} \frac{e^{-2 r^2}}{(r^2+\epsilon^2)^{(3-\alpha)/2} }dr = O(1) - 4\pi c_\epsilon^2 \underbrace{\int_0^1 \frac{r^{2-\alpha}}{(r^2+\epsilon^2)^{(3-\alpha)/2} }dr}_{\sim |\log \epsilon|}$$ so we find that the infimum is $-\infty$. (The logarithmic divergence as $\epsilon \to 0$ is the behaviour for $2<\alpha<3$; for $\alpha\ge 3$ the integrand already behaves like $r^{2-\alpha}$ near the origin, so the potential term equals $-\infty$ for every fixed $\epsilon>0$ and the conclusion is immediate.)
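A rough numerical sanity check of the claimed logarithmic divergence (a Python/scipy sketch; the value $\alpha=2.5$ is an assumed sample point in the range $2<\alpha<3$):

import numpy as np
from scipy.integrate import quad

alpha = 2.5   # any value in (2, 3); the integrand then behaves like 1/r for eps < r < 1
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    val, _ = quad(lambda r: r**(2 - alpha) / (r**2 + eps**2) ** ((3 - alpha) / 2), 0, 1, limit=200)
    print(f"eps = {eps:.0e}   integral = {val:8.3f}   |log eps| = {abs(np.log(eps)):6.3f}")

The printed integral should track $|\log\epsilon|$ up to a bounded term as $\epsilon$ shrinks.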
Airport queues permutation
Let's first decide how many people stand in each queue. For that purpose, take $k$ identical chairs that we will first put into the queues. The number of ways to arrange those chairs is the same as the number of ways to break $k$ into a sum of $n$ non-negative numbers, which is ${n+k-1\choose k}=\frac{n(n+1)(n+2)\cdots(n+k-1)}{k!}$ (for reference, see e.g. the Wikipedia Stars and Bars article). Now we've set up the chairs: for each setup we have $k!$ ways (all permutations) to put the people on them, so the total number is: $$\frac{n(n+1)(n+2)\cdots(n+k-1)}{k!}\cdot k!=n(n+1)(n+2)\cdots(n+k-1)$$
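Here is a brute-force confirmation of the count (a small Python sketch of the model described above: each of the $k$ distinguishable people is assigned to one of $n$ queues and then the people within each queue are ordered):

from itertools import product
from math import factorial, prod

def arrangements(n, k):
    # assign each of the k people to a queue, then order the people inside each queue
    total = 0
    for assignment in product(range(n), repeat=k):
        counts = [assignment.count(q) for q in range(n)]
        total += prod(factorial(c) for c in counts)
    return total

def rising_factorial(n, k):
    out = 1
    for i in range(k):
        out *= n + i
    return out

for n in range(1, 5):
    for k in range(5):
        assert arrangements(n, k) == rising_factorial(n, k)
print("brute force matches n(n+1)...(n+k-1) for all small n, k tested")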
Can real polynomial in one variable with all coefficients irrational have infinite number of integers as its values?
Any polynomial of degree greater than or equal to one satisfies what you want. Polynomial functions in $\mathbb{R}$ are continuous. Odd-degree polynomials go to $+\infty$ as $x \rightarrow +\infty$ and $-\infty$ as $x \rightarrow - \infty$ (or a swap of these, depending on the leading coefficient), and even-degree polynomials go either to $+\infty$ as $x \rightarrow \pm\infty$ or to $-\infty$ as $x \rightarrow \pm\infty$, depending on the sign of the leading coefficient. Either way, by the intermediate value theorem, we have that all integers above a certain value $M$, or below, depending on how the polynomial behaves at infinity, are reached as a value of the given polynomial.
Finding the sum of arithmetic series when last term and common difference is given .
The $n$th term of an arithmetic series is given by $$T_n=a+(n-1)d$$ where $T_n$ is the $n$th term. If you know $n$, the last term and $d$, you can use this to calculate a value for $a$, the first term. From here you can use the summation formula for an arithmetic series, $$S_n=\frac{n}{2}\big(2a+(n-1)d\big)$$ where $S_n$ is the sum of the series.
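As a small worked illustration of the two formulas (a Python sketch with made-up numbers: last term $20$, common difference $3$ and $n=6$, so the series is $5,8,11,14,17,20$):

def arithmetic_sum(last_term, d, n):
    # recover the first term a from T_n = a + (n-1)d, then apply S_n = n/2 * (2a + (n-1)d)
    a = last_term - (n - 1) * d
    return n * (2 * a + (n - 1) * d) / 2

print(arithmetic_sum(20, 3, 6))              # 75.0
print(sum(5 + 3 * i for i in range(6)))      # 75, direct summation as a cross-check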
How many solutions $x(t)$ of this ODE satisfy $\lim_{t \to \pm \infty}x(t) =0$?
A solution of the homogeneous equation is $$ x(t) = Ce^{t}. $$ Now let $C = C(t)$. Then we obtain an equation for $C(t)$: $$ C'(t)e^{t} = -e^{-t^2} $$ or $$ C'(t) = -e^{-t^2-t} = -e^{\frac{1}{4}}e^{-\left(t+\frac{1}{2}\right)^2}. $$ So $$ C(t) = c - \sqrt[4]{e}\int_0^te^{-\left(u+\frac{1}{2}\right)^2}du $$ or $$ C(t) = c - \sqrt[4]{e}\int_0^{t+\frac{1}{2}}e^{- u^2}du. $$ Finally we get $$ x(t) = ce^t - e^{t+\frac14}\int_0^{t+\frac{1}{2}}e^{- u^2}du. $$ When $t \to \pm\infty$ the integral $$ \int_0^{t+\frac{1}{2}}e^{- u^2}du $$ is bounded. So the asymptotic behaviour of $x(t)$ is that of $Me^t$. When $t \to -\infty$, $x(t) \to 0$ in any case. When $t \to +\infty$, $$ x(t) \to \left\{\begin{align*} +\infty, &\quad M > 0\\ 0, &\quad M = 0\\ -\infty, &\quad M < 0 \end{align*}\right. $$ The expression for $M$ is the limit of $e^{-t}x(t)$ as $t \to +\infty$, namely $$ M = c - e^{\frac14}\int_0^{\infty}e^{- u^2}du = c - \frac{\sqrt{\pi}}{2}\,e^{\frac14}. $$
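For a numerical cross-check, here is a Python/scipy sketch; it takes the inhomogeneous equation to be $\dot x = x - e^{-t^2}$ (the equation the variation-of-constants step above corresponds to) and compares the closed form, written with the error function, against a direct numerical solution:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erf

def closed_form(t, c):
    # x(t) = c e^t - e^{t + 1/4} * int_0^{t+1/2} e^{-u^2} du
    return c * np.exp(t) - np.exp(t + 0.25) * (np.sqrt(np.pi) / 2) * erf(t + 0.5)

c = 1.0                                   # arbitrary constant
x0 = closed_form(0.0, c)                  # matching initial value at t = 0
sol = solve_ivp(lambda t, x: x - np.exp(-t**2), (0.0, 3.0), [x0],
                dense_output=True, rtol=1e-10, atol=1e-12)
ts = np.linspace(0.0, 3.0, 7)
print(np.max(np.abs(sol.sol(ts)[0] - closed_form(ts, c))))   # should be tiny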
Determine all subgroups of $\mathbb{R}^*$ that have finite index.
$\mathbb{R^+}$ (the set of all positive real numbers) is the only proper subgroup of $\mathbb{R^*}$ of finite index. To prove this, let us assume that $\mathbb{R^*}$ has a proper subgroup $H \neq \mathbb{R^+}$ such that $[\mathbb{R^*} : H] = n$ is finite. Thus we have $(xH)^n = x^n H = H$ for each $x\in \mathbb{R^*}$, so $x^n \in H$ for each $x\in \mathbb{R^*}$. Now let $x \in \mathbb{R^+}$; then $ x = (\sqrt[n]{x})^n \in H$. Thus, $\mathbb{R^+} \subset H $. Since $H\neq \mathbb{R^+}$ and $ \mathbb{R^+} \subset H$, we may conclude that $H$ must contain a negative number, say $-y$ for some $y\in \mathbb{R^+}$. Since $\frac{1}{y}\in \mathbb{R^+}\subset H $ and $-y\in H$, and since $H$ is closed under multiplication, we conclude that $-y(\frac{1}{y}) = -1 \in H$. Since $H$ is closed under multiplication, $ \mathbb{R^+} \subset H$, and $-1 \in H$, we conclude that $\mathbb{R^-} \subset H$, where $\mathbb{R^-}$ is the set of all negative real numbers. Since $\mathbb{R^+}\subset H $ and $\mathbb{R^-}\subset H $, we conclude that $H =\mathbb{R^*} $, which is a contradiction since $H$ is a proper subgroup of $\mathbb{R^*} $. Hence $\mathbb{R^+}$ is the only proper subgroup of $\mathbb{R^*}$ of finite index.
A tetrahedron inside another tetrahedron. Could the contained tetrahedron have a greater perimeter than the outside one?
The tetrahedron with vertices $$ (1,0,0) \qquad (\cos\tfrac{2\pi}3,\sin\tfrac{2\pi}3,0) \qquad (\cos\tfrac{4\pi}3,\sin\tfrac{4\pi}3,0) \qquad (0,0,0) $$ has perimeter $3\sqrt3+3$ and contains the tetrahedron with vertices $$ (1,0,0) \qquad (\cos\tfrac{2\pi}3,\sin\tfrac{2\pi}3,0) \qquad (\cos\tfrac{4\pi}3,\sin\tfrac{4\pi}3,0) \qquad (1,0,0) $$ which has perimeter $5\sqrt3$. (Both tetrahedra are degenerate, for ease of calculation; for a nondegenerate example, replace $(0,0,0)$ in the outer tetrahedron with $(0,0,\varepsilon)$ and replace one of the $(1,0,0)$ in the inner tetrahedron with $(1-\delta)(1,0,0) + \delta(0,0,\varepsilon)$, where $\varepsilon$ and $\delta$ are positive and small.)
Continuity at a Point (Local Property)
The statement is correct, and your misunderstanding here is subtle: in your counterexample, $f$ and $g$ are equal at $a$ but not in an open interval containing $a$. When the author says continuity is a local property, they mean that if there is some open interval $(x, y)$ on which the functions are equal at ALL points in that interval (not just at one or a few points), then they are either both continuous or both not continuous at each point $a$ in that interval. The reason the term local is used is that it doesn't matter how small the interval is: as long as you can find some tiny interval containing $a$ for which the functions are equal everywhere in it, their continuity/discontinuity will be the same at $a$. It is "local" because all that matters is the behavior at points which are sufficiently close to $a$, and not points that are "further away". However, it must be some interval around the point, not just the point itself, on which the functions are equal. The term local property is used a lot, and always means "only small intervals around a point matter, not large ones". However, the actual value of a function at the point is usually not that important to its local behavior.
Find The distance covered by the Falling Circle.
Find a common tangent to circles 2 and 3, such as the one shown in the figure. Find a point B on the tangent having x-coordinate = x1. Let circle 3 touch the tangent at point P (which can be easily computed). Calculate BP (using the coordinates) and CP (knowing the radii of the circles and the fact that they touch each other). Then BC = BP - CP. AB can also be calculated using coordinates.
Is this the wrong way to find $\delta$?
You're right that your inequality might not hold for smaller $x$, since the function $x\mapsto x^{n-1}+x^{n-2}a+x^{n-3}a^2+\ldots+a^{n-1}$ is only guaranteed to be increasing when $a$ and $x$ are positive. A much more important error you made is that you should be working with $(a+1)$ not $(a-1)$. In particular, your goal is to ensure that $|x^n-a^n|<\varepsilon$. Your reasoning at the moment is: Since $|x-a|\cdot |k^{n-1}+k^{n-2}a+\ldots+a^{n-1}|<|x-a|\cdot |x^{n-1}+x^{n-2}a+\ldots +a^{n-1}|$ in the desired interval for some $k$, if we bound the former by $\varepsilon$, then the latter will also be bounded by $\varepsilon$. This isn't good, because it's trying to say that if $A<B$ and $A<C$, then $B<C$ - which you should see the issue with. What you want to do is to say: Since $|x-a|\cdot |x^{n-1}+x^{n-2}a+\ldots +a^{n-1}|<|x-a|\cdot |k^{n-1}+k^{n-2}a+\ldots+a^{n-1}|$ in the desired interval for some $k$, if we bound the latter by $\varepsilon$, then the former will also be bounded by $\varepsilon$. This holds if you let $k=(a+1)$ and then $\delta=\min(1,\frac{\varepsilon}{(a+1)^{n-1}+(a+1)^{n-2}a+\ldots+a^{n-1}})$. This, of course, only works when $x\geq 0$, but the argument for negative $x$ is fairly obvious from there (since $x^n$ is either an even or odd function).
Finding an orthonormal basis z of $\Re^n$ with respect to which the matrix $A_z$ is diagonal
The matrix $A_z$ will be diagonal if and only if each basis element of $z$ is an eigenvector of $\bf A$. Including $a$ in the basis $z$ is a good start, since ${\bf A}(a)=(1+l)a$. Note, however, that $a^\perp=\{x\in\Bbb R^n:(a,x)=0\}$ is a whole subspace of dimension $n-1$. But luckily, any vector $x\in a^\perp$ is an eigenvector of $\bf A$, because then ${\bf A}(x)=x$. So we can extend the single vector $a$ by an arbitrary orthonormal basis of $a^\perp$, and this will work.
Finding roots of unity
You're off to a good start. Note that in your simplified expression $(w+z^4)^5$, $w$ and $z^4$ could be any fifth roots of unity. So the problem reduces to showing that the sum of two fifth roots of unity raised to the fifth power yields a real number. So let $a$ and $b$ be any fifth roots of unity. Note that $\bar{a}=a^4$ and $\bar{b}=b^4$. We have $(a+b)^5= a^5 + 5a^4b+10a^3b^2+10a^2b^3+5ab^4+b^5$ Now note that $a^5=1$ and $b^5=1$. So these terms contribute only real values to the sum. But also $a^4b$ and $ab^4$ are conjugates. So $5a^4b+5ab^4$ is real. Similarly, $a^3b^2$ and $a^2b^3$ are conjugates. So $10a^3b^2+10a^2b^3$ is also real.
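A quick numerical confirmation (Python sketch) that $(a+b)^5$ is real for every pair of fifth roots of unity:

import cmath

roots = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
for a in roots:
    for b in roots:
        value = (a + b) ** 5
        assert abs(value.imag) < 1e-9, (a, b, value)
print("(a + b)^5 is real for every pair of fifth roots of unity")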
Partial derivative definition
Partial derivatives are defined in terms of functions. Total derivatives are defined in terms of variables. They are the same concept defined in 2 different languages. Partial derivatives, when used correctly, are always the derivative of a function with respect to one of its parameters. So, for example, suppose you are given to assume: $$f(x, y) = x^2y + \sin(x)$$ And asked to calculate $\frac{\partial f}{\partial x}$. Here, you see that the denominator is $x$, and so you look for that in the definition of $f$, see that it is the first parameter, and take the derivative of $f$ with respect to that first parameter. So $$\frac{\partial f}{\partial x} = 2xy + \cos(x)$$ $\frac{\partial f}{\partial z}$ in this case is meaningless, because $z$ is not used as one of the parameters defining $f$. Suppose you were asked to calculate $$\frac{\partial f(z, {\color{red} x}^2)}{\partial {\color {green} x}}$$ First, realize that the red $x$ and the green $x$ represent 2 different things. The red $x$ is a variable, the green $x$ represents the parameter used to define $f$. They are not interchangeable; it is rude to use the same name for 2 different concepts, but it is not too uncommon. Second, the above is a shorthand for $\left(\frac{\partial f}{\partial x}\right)(z, x^2)$, that is, take the derivative then apply the arguments. Its value is $2zx^2 + \cos(z)$. What is the partial derivative $$\frac{\partial x}{\partial y}$$ when $x$ and $y$ are part of a function $f(x,y)$? It is never correct to take the partial derivative of a variable with respect to another variable, or to take the partial derivative of a parameter with respect to another parameter. It is only meaningful to take the partial derivative of a function with respect to one of its parameters. It should be mentioned that referring to a parameter outside of the definition of a function is an abuse of notation; the name is not in scope at that point. A total derivative is always of a variable with respect to another variable. Sometimes, a total derivative will be written as the derivative of a function with respect to a variable, but there the function is just a shorthand for "the variable representing the output of a function". Suppose you are given $$f(x) = x^2$$ And asked to calculate $\frac{{\rm d}f}{{\rm d}x}$. That is not defined. The $x$ in the definition is a parameter, the $x$ in the denominator is a variable. On the other hand, what if you are asked to calculate $\frac{{\rm d}f(x)}{{\rm d}x}$? Then both the previous $x$s represent variables. To evaluate this, you apply the argument, then calculate the derivative: $$\frac{{\rm d}f(x)}{{\rm d}x}$$ $$\frac{{\rm d}x^2}{{\rm d}x}$$ $$2x$$ Note that this is the opposite order of what is done with partial derivatives. With a partial derivative, you first calculate the derivative, then apply the arguments. With a total derivative, you first apply the arguments, then calculate the derivative. Total derivatives are the type of notation you would expect a scientist to use, because the variables represent concepts like time, temperature, gravitational force, etc. Partial derivatives are what purists use, because functions are easy to define precisely and they make it easier to connect differential calculus to the formal foundations of mathematics. It is common to see these notations confused and used incorrectly even in college textbooks.
Hilbert-C*-Modules and interior tensor products
Have a look in Chapter 4 of Lance's book (Hilbert $C^*$-modules, a toolkit for operator algebraists), in particular page 42 where he shows that $T\otimes 1$ is well defined. Lance explains that you actually need to show something more general to get the well-definedness i.e. what you want to show is that, for any algebraic tensor $\sum_i x_i\otimes y_i$ in $E\otimes_\pi F$, $\|\sum_i Tx_i\otimes y_i\|^2\leq \|T\|^2 \|\sum_i x_i\otimes y_i\|^2$. By definition of $E\otimes_\pi F$, this amounts to showing that $$\|\sum_{i,j} \langle y_i , \pi(\langle Tx_i, Tx_j\rangle) y_j\rangle\|\leq \| T\|^2 \|\sum_{i,j} \langle y_i , \pi(\langle x_i, x_j\rangle) y_j\rangle\| \quad\quad\qquad(1)$$ How to prove this? Let $X=(\langle x_i, x_j\rangle)_{i,j} \in M_n(A)$ and let $W=(\langle Tx_i, Tx_j\rangle)_{i,j} \in M_n(A)$. Lemma 4.2 (page 32/33) states that $W\leq \|T\|^2 X$. By complete positivity of $\pi$, we have $\pi^{(n)}(W)\leq\|T\|^2 \pi^{(n)}(X)$ and hence $0\leq \langle y,\pi^{(n)}(W)y\rangle \leq \|T\|^2\langle y,\pi^{(n)}(X)y\rangle$ where $y=(y_1,\cdots,y_n)\in F^n$- e.g. see proof of Prop 4.5 (page 40). Hence $\|\langle y,\pi^{(n)}(W)y\rangle\| \leq \|T\|^2\|\langle y,\pi^{(n)}(X)y\rangle\|$. This is exactly equivalent to equation $(1)$, so the proof is complete.
Formula for product of matrix exponential, commutator, converging sequence.
Yes. It is straightforward to derive this from the Baker–Campbell–Hausdorff formula: $$e^{tx} \, e^{ty} = e^{t(x+y) + \frac{t^2}{2} [x,y] + o(t^2) }$$ For notational comfort I have written $t = 1/k$. So we can write \begin{align*} e^{tx} \,e^{ty}\, e^{- t(x+y)} & = e^{t(x+y) + \frac{t^2}{2} [x,y] + o(t^2) }\,e^{- t(x+y)} \end{align*} Using the Baker–Campbell–Hausdorff formula again: \begin{align*} e^{tx} \,e^{ty}\, e^{- t(x+y)} &= e^{\left(\frac{t^2}{2} [x,y] + o(t^2) \right) + \frac{1}{2}\left(\left[t(x+y) + \frac{t^2}{2} [x,y] + o(t^2), - t(x+y)\right]\right) + o(t^2)}\\ & = e^{\frac{t^2}{2} [x,y] + o(t^2)} \end{align*} As you probably know, the notation "$o(t^2)$" is a conventional shortcut for "$t^2 \,z_t$ where $z_t$ is some sequence such that $\lim z_t = 0$" (but this sequence can be different every time I write "$o(t^2)$").
application of Ito's lemma
Try this approach: For the partial derivative with respect to $\xi$, $$\partial_{\xi}\phi(t,\xi)=i\mathbb{E}\left[X_t e^{i\xi X_t}\right] \; .$$ For the partial derivative with respect to $t$, I'll take the differential but only varying $t$, so as to connect with the SDE: $$d_t\phi(t,\xi)=\mathbb{E}\left[e^{i\xi (X_t+dX_t)} - e^{i\xi X_t}\right]=\mathbb{E}\left[(i\xi dX_t-\frac{\xi^2}{2}dX_t^2)e^{i\xi X_t}\right] \; .$$ Now $$ dX_t = (X_t - \mu) dt + \sigma \sqrt{X_t}dW_t $$ and $$ dX_t^2 = \sigma^2 X_t dt $$ This implies $$d_t\phi(t,\xi)=i\xi \mathbb{E}\left[dX_t e^{i\xi X_t}\right]-\frac{\xi^2}{2}\mathbb{E}\left[dX_t^2 e^{i\xi X_t}\right] $$ and thus $$d_t\phi(t,\xi)=i\xi \mathbb{E}\left[(X_t-\mu) e^{i\xi X_t}\right]dt+i\xi\sigma \mathbb{E}\left[\sqrt{X_t} dW_t e^{i\xi X_t}\right]-\frac{\sigma^2 \xi^2}{2}\mathbb{E}\left[X_t e^{i\xi X_t}\right]dt $$ Now, the middle term contains $dW_t$; as a consequence, taking its expectation gives zero. You can now recognize the other terms as containing $\phi$ or $\partial_{\xi}\phi$, so you've got yourself a first-order PDE for $\phi$. That should be the aim of the computation.
Proof of Lemma 6.3 in Carothers' Real Analysis
Using your notations, suppose that $\exists z\in E$ such that $z \in B\left(x,\frac{\epsilon_x}{2}\right) \cap B\left(y,\frac{\delta_y}{2}\right)$. It follows that $d(x,z)\lt \epsilon_x/2$ and $d(y,z)\lt \delta_y/2$, which implies $d(x,z)\lt \epsilon_x$ and $d(y,z)\lt \delta_y$. Therefore, $z\in B(x,\epsilon_x) $ and $z\in B(y, \delta_y)$, whence it follows that $z\in B(x,\epsilon_x) \cap B(y, \delta_y) $, which is a contradiction as $E\cap B(x,\epsilon_x) \cap B(y, \delta_y)=\emptyset$.
Prove;$\left|\sum\limits_{n \in I}Re(\lambda(n))\right|\le 1 \implies \sum\limits_{n \in I} \left|Re \lambda(n)\right|\le 2$
What if we take $\lambda(1) = 100 +i$ and $\lambda(2) = -100-\frac12$ and $\lambda(n) =0$ everywhere else? Then the claimed implication fails, since $$\left|\sum_{n \in I} \operatorname{Re}(\lambda(n))\right| =\left|100-100-\frac12\right| =\frac12 \leq 1 $$ and yet $$ 200+\frac12= |100|+\left|-100-\frac12\right| =\sum_{n \in I} \left|\operatorname{Re} \lambda(n)\right|\leq 2,$$ which is heavily absurd.
Schrödinger Kernels on manifolds
Unitarity has rather little to do with it, as the Schrödinger evolution on $\mathbb{R}^n$ is unitary, and for any rapidly decreasing initial data (no regularity assumptions here! just decay ones) we have in fact that the solution is smooth for all positive times. Compactness of the manifold, however, has quite a lot to do with it. This is because compactness implies that every geodesic is trapped, so we cannot have dispersion to infinity. More precisely: Consider first the linear wave equation. We know that this equation has propagation of singularities along null geodesics. Roughly speaking, all frequencies are transported at the same speed, and so if a collection of plane waves adds up to produce a singularity at time $t$, it will continue to do so at later times. For the linear Schrödinger equation, the situation is different: the frequencies are not all traveling at the same speed. So if you have a high frequency wave packet and a low frequency one, some time later their spatial supports will separate and won't constructively add to a singularity. This is why the Schrödinger equation is smoothing for rapidly decaying initial data: if the data is decaying fast, all the action starts out near the origin, and so after some small time the wave packets, which were all originally located near the origin, now burst all over the place and cannot add up to a singularity anymore. However, if you now try to do Schrödinger's equation on a manifold for which the geodesic flow no longer guarantees that wave packets be transported by distance $\approx |\xi|t$, where $\xi$ is the frequency of the wave packet and $t$ is the elapsed time, then the above smoothing heuristic will no longer work. And in fact, this argument can be made rigorous in the case of non-compact, asymptotically flat manifolds. See Craig, Walter, On the microlocal regularity of the Schrödinger kernel. Partial differential equations and their applications (Toronto, ON, 1995), 71--90, CRM Proc. Lecture Notes, 12, Amer. Math. Soc., Providence, RI, 1997. In the case of a compact manifold, no geodesic can "escape" to infinity, so all wave packets will remain within finite distance of each other. By a covering argument, there will necessarily be points where an infinite number of the wave packets can accumulate and potentially cause the solution to be singular. This intuition has been carried out in special cases. For example, it is known that the Schrödinger kernel on the sphere $\mathbb{S}^d$ is a distribution with singular support in all of $\mathbb{R}\times \mathbb{S}^d$. [Edit April 2019: please take the previous sentence with a grain of salt; I cannot relocate the reference that gave me that statement, and in particular cannot double check whether I made a typo in it.] You can find more references in this MathOverflow post of Mazzeo. Edit Let me expand a bit further on my comment, which may give you an answer to your second question. The main issue is the following: when we think of a "convolution kernel" as a solution to an evolutionary partial differential equation, we generally expect the kernel $E_t$ to be in $(C^\infty_c(X))'$, where $X$ is the background manifold. That is to say, we expect $E_t$ to be a distribution for each time $t$. By convolution we can guarantee, for any distribution $v$ with compact support (in notation, $v\in \mathcal{E}'(X)$), that $E_t*v$ is a distribution, and we have a distributional solution to the Cauchy problem.
Now, if for $t > 0$ we have that the singular support of $E_t$ is the empty set, then by the properties of the convolution we have that $E_t*v \in C^\infty(X)$. This is what I think of as "smoothing", and we see that it is immediately tied to the singular support of the convolution kernel. Where compactness enters is the following trivial fact: If $X$ is compact, then $C^\infty(X) = C^\infty_c(X)$, and the space of distributions and the space of distributions with compact support are the same. We know that $L^2(X) \subset (C^\infty_c(X))'$, that is, $L^2$ functions are locally integrable and can be interpreted as distributions. In general, however, $L^2$ functions do not have compact support. But by the above trivial fact, we have that if $X$ is a compact manifold, $L^2(X) \subset \mathcal{E}'(X)$. This implies that a smoothing kernel on a compact manifold will smooth any $L^2(X)$ function. This is what justifies your reasoning that on a compact manifold the Schrödinger kernel cannot be smooth. On the other hand, this argument breaks whenever $X$ is non-compact. As $L^2(\mathbb{R}^n) \setminus \mathcal{E}'(\mathbb{R}^n)$ is non-empty, the originally defined convolution kernel cannot necessarily be applied to all $L^2$ functions (the convolution of two distributions of non-compact support may fail to be a distribution). For the case of the Schrödinger operators, as it turns out, we can take the convolution and still end up with a distribution, but the uniform estimates required for "smooth kernel implies smooth solution" are no longer true on the whole of $L^2(\mathbb{R}^n)$. Hence in general on a non-compact manifold $X$ one cannot conclude "unitary on $L^2(X) \implies $ lack of smoothing on $\mathcal{E}'(X)$".
Change of variable applied over a Binomial distribution
$Y$ need not have a Binomial distribution, since its values need not even be integers. Even if they are, the values need not be $0,1,2,\ldots,m$ for any integer $m$. We can only say that $Y$ also takes exactly $n+1$ values, namely $\xi^{-1}(i)$ for $0\leq i\leq n$, and write down the corresponding probabilities.
Proving $\oint{u \nabla v d\vec{r}}$ = $-\oint{v \nabla u d\vec{r}}$
It follows from the fundamental theorem of line integrals that $\oint \nabla(uv) d\vec{r} = 0$. Since $\nabla(uv) = u \nabla(v) + v \nabla(u)$, you get the result by integrating both sides.
Linear ODE with non-constant coefficients
We have a 1st order linear ODE with varying coefficients $$\dot{x} = (2t-1) \, x - 1$$ where the initial condition is $x_0$. The homogeneous solution is $$x_h (t) = x_0 \, \exp\left(\displaystyle\int (2 t - 1) \, \mathrm{d}t\right) = x_0 \, \exp\left(t^2 - t\right)$$ Hence, we try a solution of the form $$x (t) = \kappa (t) \, \exp\left(t^2 - t\right)$$ which yields a 1st order ODE in $\kappa$ $$\dot{\kappa} (t) = - \exp\left(-t^2 + t\right)$$ Integrating, $$\kappa (t) = \kappa_0 - \displaystyle\int_0^t \exp\left(-\tau^2 + \tau\right) \, \mathrm{d} \tau$$ Hence, the solution to the given ODE is $$x (t) = \left(x_0 - \displaystyle\int_0^t \exp\left(-\tau^2 + \tau\right) \, \mathrm{d} \tau\right) \exp\left(t^2 - t\right)$$ Note that the integrand above is a Gaussian function, whose antiderivative is not an elementary function (it can be written in terms of the error function).
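A symbolic check of the final formula (a sympy sketch): it verifies that the stated $x(t)$ satisfies $\dot x = (2t-1)x-1$ and the initial condition $x(0)=x_0$.

import sympy as sp

t, x0, tau = sp.symbols('t x0 tau')
x = (x0 - sp.integrate(sp.exp(-tau**2 + tau), (tau, 0, t))) * sp.exp(t**2 - t)

# residual of the ODE x' - ((2t-1)x - 1) should simplify to 0
print(sp.simplify(sp.diff(x, t) - ((2*t - 1)*x - 1)))   # 0
print(sp.simplify(x.subs(t, 0)))                        # x0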
Is there a complete set of mutually disjoint trios in a (5,8,24) Steiner system?
Yes, such a family can be constructed. Let us begin by constructing the Steiner system as supports of words of weight 8 in the extended Golay code. Here the $[23,12,7]$ binary Golay code $G$ is the ideal $G:=R\cdot p$ of $R=F_2[x]/\langle x^{23}+1\rangle$ generated by the factor $$p(x)=1+x+x^5+x^6+x^7+x^9+x^{11}\mid x^{23}+1.$$ We view polynomials as strings of 23 bits simply by listing their coefficients, and index bit positions with the exponents. The code $G$ is extended to a code $\overline{G}$ of length 24 by adding an overall parity bit to the end - call this last position $\infty$ with a view of matching the bit positions with points of the projective line over $F_{23}$. So the polynomial $p(x)$ corresponds to a word of weight 7 in $G$, and in the extended code $\overline{G}$ it becomes a word of weight 8 giving the octad $o_1=\{0,1,5,6,7,9,11,\infty\}$. Being an ideal $G$ is stable under multiplication by $x$, i.e. the 23-cycle $\alpha(i)\equiv i+1\pmod{23}$ (acting on the bit positions or the set of exponents of $x$, whichever way you prefer). Being an ideal $G$ is also stable under the Frobenius map = squaring. As the order of $2$ modulo $23$ is equal to $11$ this breaks the bit positions/exponents into two 11-cycles: $$\beta=(0)(1,2,4,8,16,9,18,13,3,6,12)(5,10,20,17,11,22,21,19,15,7,14)(\infty)$$ or $\beta(i)\equiv2i\pmod{23}$. We view both of these as permutations in $Aut(\overline{G})\cong M_{24}$. They are both in the point stabilizer of $\infty$. We see that $\beta\alpha\beta^{-1}=\alpha^2$ ($i\mapsto i+2\pmod{23}$). Therefore $\beta$ normalizes the group $\langle\alpha\rangle$, and together they generate a group $H= \langle \alpha,\beta\rangle\cong C_{23}\rtimes C_{11}$ of size $11\cdot23=253$. It is easy to see that $H$ does not fix any octads. This is because $M_{24}$ acts transitively on the set of $759=3\cdot11\cdot23$ octads, and neither $11^2$ nor $23^2$ divide the order $|M_{24}|$. Therefore the octads are partitioned into three full size orbits of $H$. With the aid of CAS (I used Mathematica) it is easy to generate all the octads and list all those that have supports disjoint from $o_1$. There are exactly 30 of those (15 pairs, as predicted by your data telling that $o_1$ is a member of exactly 15 trios). The method that I used in generating them spewed out $o_2=\{3,4,8,10,16,19,21,22\}$ as the first octad of this kind. Because $H$ stabilizes $\infty$ it is immediately clear that $o_2$ does not belong to the orbit $H\cdot o_1$. It is straightforward to check that the octad complementing this trio $o_3=\{2,12,13,14,15,17,18,20\}$ does not belong to the orbit $H\cdot o_2$. Thus all the octads belong to exactly one of the orbits $H\cdot o_j, j=1,2,3$. For any $\gamma\in H\le M_{24}$, the octads $\gamma(o_j),j=1,2,3$ obviously form a trio. Thus together these 253 trios cover all the octads as requested. This solution is admittedly somewhat unsatisfactory in the sense that at a critical point, $o_3\notin H\cdot o_2$, I used brute force. It would not surprise me, if the same thing happened to any trio containing $o_1$. I verified this for one other pair of octads $o_2',o_3'$ completing $o_1$ to a trio. The simple idea of using the group $H$ is my key input. After finding $o_2$ the rest could probably be done without the aid of a computer. After all, the group $H$ consists of affine mappings from $F_{23}\cup\{\infty\}$ to itself of the form $i\mapsto ui+v$, where $u$ is a quadratic residue modulo $23$ and $v\in F_{23}$ is arbitrary. 
Edit: Ted already noticed that I had jumped to a 'conjecture' on too scant data. Today I checked out all the 15 trios including $o_1$. The exhaustive tally is that 9 out of those work the same way as the example trio $\{o_1,o_2,o_3\}$. For the remaining 6 trios the two complementary octads actually belong to the same orbit of $H$, and hence those trios cannot be used to solve Ted's question. At least not using this particular conjugate of $H$ (another one may work).
A group with certain three distinct cyclic subgroups
Take the quaternion group of order 8: $Q=\{1,-1,i,-i,j,-j,k,-k\}$, and put $a=i$ and $b=j$.
Is it possible to use physics or other form of non-canonical reasoning to study functions?
$% Predefined Typography \newcommand{\paren} [1]{\left({#1}\right)} \newcommand{\bparen}[1]{\bigg({#1}\bigg)} \newcommand{\brace} [1]{\left\{{#1}\right\}} \newcommand{\bbrace}[1]{\bigg\{{#1}\bigg\}} \newcommand{\floor} [1]{\left\lfloor{#1}\right\rfloor} \newcommand{\bfloor}[1]{\bigg\lfloor{#1}\bigg\rfloor} \newcommand{\mag} [1]{\left\vert\left\vert{#1}\right\vert\right\vert} \newcommand{\bmag} [1]{\bigg\vert\bigg\vert{#1}\bigg\vert\bigg\vert} % \newcommand{\labelt}[2]{\underbrace{#1}_{\text{#2}}} \newcommand{\label} [2]{\underbrace{#1}_{#2}} % \newcommand{\setcomp}[2]{\left\{{#1}~~\middle \vert~~ {#2}\right\}} \newcommand{\bsetcomp}[2]{\bigg\{{#1}~~\bigg \vert~~ {#2}\bigg\}} % \newcommand{\iint}[2]{\int {#1}~{\rm d}{#2}} \newcommand{\dint}[4]{\int_{#3}^{#4}{#1}~{\rm d}{#2}} \newcommand{\pred}[2]{\frac{\rm d}{{\rm d}{#2}}#1} \newcommand{\ind} [2]{\frac{{\rm d} {#1}}{{\rm d}{#2}}} % \newcommand{\ii}{{\rm i}} \newcommand{\ee}{{\rm e}} \newcommand{\exp}[1] { {\rm e}^{\large{#1}} } % \newcommand{\red} [1]{\color{red}{#1}} \newcommand{\blue} [1]{\color{blue}{#1}} \newcommand{\green}[1]{\color{green}{#1}} $Using idealized operational amplifiers (opamps), you can create idealized integrating circuits: If the voltage $V_\text{in} - \text{Ground}$ is given by the function $V_\text{in}(t)$, and the output voltage $V_\text{out} - \text{Ground}$ is given by the function $V_\text{out}(t)$, then the circuit maintains: $$V_\text{out}(t) = \dint{V_\text{in}(t')}{t'}{-\infty}{t}$$ as well as idealized differentiating circuits: Similarly, if the voltage $V_\text{in} - \text{Ground}$ is given by the function $V_\text{in}(t)$, and the output voltage $V_\text{out} - \text{Ground}$ is given by the function $V_\text{out}(t)$, then the circuit maintains: $$V_\text{out}(t) = \pred{V_\text{in}(t)}{t}$$ The main problem with this sort of "physics" approach to differential calculus is that the circuit components are idealized. The opamps have their own power supply, so the calculation only works within a certain range. The capacitors will explode if you try to push too much through them. Also the differentiator circuit is very unreliable: any small white noise can cause large spikes (effectively a momentary Dirac delta distribution) on the output. But for your question, this might be a feature, since spikes would indicate discontinuity. Both images are from Wikimedia Commons.
Word problem related to ratios and proportions
The first part of the question states that if Farmer John had 42 hens, exactly one-third or 14 of those hens would be speckled; since the "missing" solid-coloured hen is not speckled, he has 14 speckled hens. 7 of these speckled hens lay speckled eggs, and John needs 132 (11 dozen) speckled eggs. Each hen and a half lays an egg and a half in a day and a half. Therefore, among the speckled hens that lay speckled eggs: one hen lays one egg in a day and a half; seven hens lay seven eggs in a day and a half; seven hens lay 132 eggs in $\frac32\times\frac{132}7=\frac{198}7=28\frac27$ days. Therefore Farmer John needs 29 whole days, a lunar month, to collect 11 dozen speckled eggs for sale.
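The arithmetic can be double-checked with exact fractions (a small Python sketch; the rate of $2/3$ egg per hen per day comes from "a hen and a half lays an egg and a half in a day and a half"):

from fractions import Fraction

eggs_needed = 11 * 12                                               # 11 dozen speckled eggs
rate_per_hen = Fraction(3, 2) / (Fraction(3, 2) * Fraction(3, 2))   # 2/3 egg per hen per day
days = Fraction(eggs_needed, 7) / rate_per_hen                      # 7 hens lay the speckled eggs
print(rate_per_hen, days, float(days))                              # 2/3  198/7  28.28...

Since $198/7$ is a little over $28$, the whole number of days needed is $29$.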
Find $\lim_{x \to \infty} x^3 \left ( \sin\frac{1}{x + 2} - 2 \sin\frac{1}{x + 1} + \sin\frac{1}{x} \right )$
Let $t=\frac1x$. Then, $$\lim_{x \to \infty} x^3 \left ( \sin\frac{1}{x + 2} - 2 \sin\frac{1}{x + 1} + \sin\frac{1}{x} \right ) =\lim_{t \to 0} \frac1{t^3} \left ( \sin\frac{t}{1 + 2t} - 2 \sin\frac{t}{1+t } + \sin t \right )$$ Use $\frac 1{1+a} = 1-a+a^2+O(a^3)$ to expand, $$\sin\frac{t}{1 + 2t} - 2 \sin\frac{t}{1+t } + \sin t$$ $$=\sin(t-2t^2+4t^3)+\sin t - 2 \sin(t-t^2+t^3)+O(t^4)$$ $$=2\sin(t-t^2+2t^3)\cos t^2 - 2 \sin(t-t^2+t^3)+O(t^4)$$ $$=2[\sin(t-t^2+2t^3) - \sin(t-t^2+t^3)]+O(t^4)$$ $$=4\cos t\sin\frac{t^3}2+O(t^4)= 4\cdot 1\cdot \frac{t^3}2+O(t^4)=2t^3+O(t^4)$$ where $\cos t^2 = 1 + O(t^4)$ is applied. Thus, $$\lim_{t \to 0} \frac1{t^3} \left ( \sin\frac{t}{1 + 2t} - 2 \sin\frac{t}{1+t} + \sin t \right )=\lim_{t \to 0} \frac{2t^3+O(t^4)} {t^3}=2$$
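A numerical sanity check of the limit (a Python sketch using mpmath; high working precision is used because the bracket is a tiny difference of sines and suffers from cancellation at large $x$):

import mpmath as mp

mp.mp.dps = 50   # 50 significant digits

def f(x):
    return x**3 * (mp.sin(1/(x + 2)) - 2*mp.sin(1/(x + 1)) + mp.sin(1/x))

for k in (2, 4, 6, 8):
    print(k, f(mp.mpf(10)**k))   # the values approach 2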
Find the modulus of continuity of $f(x)=\frac{1}{x}$, at $I=(0, 1)$
The method is not correct for the following reasons: in the definition of the (global) modulus of continuity, only elements of the domain of definition are used, and $0^+$ is not such an element. As a consequence, if you use limits like $f(0^+)$ to compute the modulus of continuity, you should prove why such a limit can be used. What you can do instead is to compute $$\frac{f(1/n)-f(1/2n)}{1/n -1/2n}=2n^2$$ and deduce that the modulus of continuity is equal to $\infty$, since the arguments $1/n$ and $1/2n$ get arbitrarily close while the values differ by $n$, which is unbounded.
I can prove a Contradiction - Where's my mistake?
To begin with, note that the elements $u,v$ are not in $B$ so your definition of $Q'$ doesn't really make sense. I suppose you mean $Q'=(\bar u,\bar v)\subset B$, and I'll assume that. The key to your paradox is quite simple: the ideal $Q'\subset B$ is not prime, and thus not maximal! Indeed if we consider the projection $q:\mathbb C[x,y,u,v,w]\stackrel {def}{=}R \to B$ we get $q^{-1}(Q')=(x,y,u,v,w^n-2)\subset R$ which is clearly not prime. Actually, if we call $Q(\zeta)\stackrel {def}{=}(\bar x,\bar y,\bar u,\bar v,\bar w-\zeta) \subset B $ , a maximal ideal, we have $Q'=\bigcap \limits_{\zeta^n=2} Q(\zeta)=\prod \limits _{\zeta^n=2} Q(\zeta)$, so that $V(Q')\subset Y$ consists of the $n$ reduced, rational points $Q(\zeta) \in Y$. By the way, the schematic fiber $V(\bar x, \bar y)\subset Y$ of the origin of $\mathbb A^2_{\mathbb C}$ under $\pi $ is not reduced and has as reduction $V(Q')$. Edit: Answer to question in comment. The surface $Y$ is indeed smooth (it is isomorphic to the product of the affine $v$-line with the plane curve $w^n-u^n-2=0$ in the $w,u$-plane). Hence the ideal $Q(\zeta)$ can be described by two generators near the point $Q(\zeta)$ , namely $\bar v$ and $\bar u$. If this seems strange remember that: $\bar x=\bar u^n,\bar y=\bar v^n $ and $\bar w-\zeta=\frac {\bar u^n}{\bar w^{n-1}+\ldots+ \zeta^{n-1}}$ in the local ring of the point $Q(\zeta)\;$ [since $0=\bar w^n-\bar u^n-2=\bar w^n-\bar u^n- \zeta^n=(\bar w-\zeta)(\bar w^{n-1}+\ldots+ \zeta^{n-1})-\bar u^n \;$ ]
Compute the expectation of $\log|X|$ if $X$ is uniformly distributed on the unit ball.
If you want to use that formula, you need to notice that $\log|X| \leq 0$ so instead you would write $$\begin{align*}\mathbb{E}\log|X| &= -\mathbb{E}[-\log|X|] \\&= -\int_0^\infty \mathbb{P}(-\log|X| > x)\,dx \\&= -\int_0^\infty \mathbb{P}(|X| < e^{-x})\,dx\end{align*}$$ Now note that $0 < x < \infty \implies 0 < e^{-x} < 1$ so you can use $(*)$ directly.
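To see where this leads, here is a hedged worked example: assume $X$ is uniform on the unit ball of $\mathbb R^d$, so that $\mathbb{P}(|X| < e^{-x}) = e^{-dx}$ and the last integral evaluates to $-\int_0^\infty e^{-dx}\,dx = -1/d$. A Monte Carlo check in Python for $d=3$:

import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 1_000_000

# sample uniformly from the unit ball by rejection from the enclosing cube
pts = rng.uniform(-1, 1, size=(n, d))
pts = pts[np.linalg.norm(pts, axis=1) <= 1]

print(np.mean(np.log(np.linalg.norm(pts, axis=1))))   # close to -1/3
print(-1 / d)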
Are the terms 'clan' and 'tribe' common in mathematics?
They are not new terms. They are terms originally used in French. Even today we find some texts in French using those terms, although "ring" and "$\sigma$-ring" have become something of an international standard.
If $q:R^n\to R$ is convex and $d^0,\ldots,d^k\in R^n$ are linearly independent, then $R^k\ni l\mapsto q(x^0+\sum_{i=0}^kl_id^i)$ is convex, too
Lemma: If $q$ is strictly convex and $\ker A=0$ then $f(x)=q(Ax+b)$ is strictly convex. Proof: If $x_1\ne x_2$ then $y_1=Ax_1+b\ne y_2=Ax_2+b$ and for $\lambda\in(0,1)$ we have $$ f(\lambda x_1+(1-\lambda)x_2)=q(\lambda y_1+(1-\lambda)y_2)<\lambda q(y_1)+(1-\lambda)q(y_2)=\\=\lambda f(x_1)+(1-\lambda)f(x_2). $$
Improper integral: why $\int_0^1(x^2+ x^{1/3})^{-1}\,dx$ is convergent and not $\int \frac{1}{x^2}\,dx$ ???
For positive $x$ we have $\frac{1}{x^2+x^{1/3}}\lt \frac{1}{x^{1/3}}$, and $\frac{1}{x^{1/3}}$ blows up "slowly" as $x$ approaches $0$ from the right. More formally, we show that the improper integral $$\int_0^1 \frac{dx}{x^{1/3}}$$ converges, by calculating $$\lim_{\epsilon\to 0^+}\int_\epsilon^1\frac{dx}{x^{1/3}}.$$ The integral is equal to $\frac{3}{2}(1-\epsilon^{2/3})$, and approaches $\frac{3}{2}$ as $\epsilon$ approaches $0$ from the right. It follows by Comparison that $\int_0^1 \frac{dx}{x^2+x^{1/3}}$ converges. As to $\int_0^1 \frac{dx}{x^2}$, we calculate $\int_\epsilon^1 \frac{dx}{x^2}$. The integral is $\frac{1}{\epsilon}-1$, and blows up as $\epsilon$ approaches $0$ from the right.
What are the proprieties of $\mathbf{M}$ in order for $\mathbf{pM}=\mathbf{e}$ to be achieved by $\mathbf{p}$ being a probability vector?
This is not true. Take $$\begin{align} p&=\begin{bmatrix}.3&.3&.4\end{bmatrix}\\ M&=\begin{bmatrix} 1.4 & 1.4 & .2\\ 1.3 & 1.2 & .5\\ .475 & .55 & 1.975\end{bmatrix} \end{align}$$ All the conditions are fulfilled, but since the largest elements in columns $1$ and $2$ are both in row $1$, no permutation of the rows can bring them both to the diagonal. EDIT For some reason, I thought you must mean that the diagonal elements are the largest in their column, but now I see that if we swap the first and second rows, the diagonal elements are the largest in their rows. Is that what you meant? I'm fairly sure I could construct a counterexample for that, too.
Number of partitions of $12$
HINT: $$ 1+x+x^2+x^3+\ldots =\frac{1}{1-x} $$ for formal sums. Your first factor is $$ 1+x^2+x^4+\ldots=\frac{1}{1-x^2} $$ You can work with these infinite sums since you are only interested in a certain coefficient, and can therefore ignore the higher-order terms. For more hints check this link out. Hope this helps
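To illustrate why the infinite sums are harmless, here is a Python sketch of coefficient extraction by truncation; it uses the full partition generating function $\prod_{k\ge 1}(1-x^k)^{-1}$ as an assumed example (the factors in your particular product may differ, but the truncation idea is the same):

def mul_trunc(p, q, deg):
    # multiply two coefficient lists, discarding every term of degree > deg
    out = [0] * (deg + 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j > deg:
                    break
                out[i + j] += a * b
    return out

deg = 12
result = [1] + [0] * deg
for k in range(1, deg + 1):
    # truncated expansion of 1/(1 - x^k) = 1 + x^k + x^{2k} + ...
    factor = [1 if i % k == 0 else 0 for i in range(deg + 1)]
    result = mul_trunc(result, factor, deg)

print(result[12])   # 77, the number of unrestricted partitions of 12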
Difficulty proving gauge invariance on an SU(N)-valued potential
Haha ok I've done that thing where I figure out the answer to my own question 10 mins after asking it. May as well give the solution, it might help others. Of course, my transform \begin{equation} \mathcal{A}\rightarrow\mathfrak{g}\mathcal{A}\mathfrak{g}^{-1} \end{equation} is faulty - the correct expression (letting $\mathcal{A}\equiv\mathcal{A}_\mu dx^{\mu}$) is given by \begin{equation} \mathcal{A}_\mu\rightarrow\mathfrak{g}\mathcal{A}_\mu\mathfrak{g}^{-1}-(\partial_\mu\mathfrak{g})\mathfrak{g}^{-1}. \end{equation} This causes the problem terms to vanish, as this instead yields (noting $\mathfrak{g}$ is diagonal) \begin{equation} \begin{split} \mathcal{A}\rightarrow&Adt-\dot{\mathfrak{g}}\mathfrak{g}^{-1}dt+Bdr-\mathfrak{g}^\prime\mathfrak{g}^{-1}dr+\frac{1}{2}\mathfrak{g}(C-C^\dagger)\mathfrak{g}^{-1}d\theta\\&-\frac{i}{2}\left[\mathfrak{g}(C+C^\dagger)\mathfrak{g}^{-1}\sin\theta+D\cos\theta\right]d\phi\\ \rightarrow& Adt+\mathfrak{g}^{-1}\dot{\mathfrak{g}}dt-\dot{\mathfrak{g}}\mathfrak{g}^{-1}dt+Bdr+\mathfrak{g}^{-1}\mathfrak{g}^\prime dr-\mathfrak{g}^\prime\mathfrak{g}^{-1}dr+\frac{1}{2}(C-C^\dagger)d\theta\\ &-\frac{i}{2}\left[(C+C^\dagger)\sin\theta+D\cos\theta\right]d\phi\\ \rightarrow&\mathcal{A}. \end{split} \end{equation}
Taylor series to solve limit equation
$$(1+1/n)^{n+x}=(1+1/n)^n \cdot (1+1/n)^x \to e \cdot1^x=e$$ as $n \to \infty$ for all $x$.
A sample of 10 articles was chosen out of 20. What is the probability of a certain item being among them?
In your denominator, you have counted the total number of ways to select $10$ distinct items out of a group of $20$ items, such that the order of selection is not relevant. Therefore, in your numerator, you should count the total number of such selections in which a specific single item is included among the $10$ selected. To do this, imagine that you have "pre-selected" that item. Now, there are $19$ items remaining, and you have $9$ more items to select. Thus, there are $$\binom{19}{9}$$ such desired outcomes, out of the possible $\binom{20}{10}$. Note that it is important to keep the way you counted such selections consistent when enumerating the desired outcomes versus the possible outcomes--otherwise, you're not counting the same things. The way the professor solved the question is similar but slightly different; this approach instead counts the complementary outcome of avoiding the selection of the desired object. To do this, imagine simply throwing that item away. Now you have $19$ items from which $10$ must be selected, giving $$\binom{19}{10}$$ choices. Then the desired probability is $1$ minus the complementary probability. If all of the above is difficult to conceptualize, suppose you have $20$ balls numbered $1$ through $20$ inclusive. Suppose all of the balls are white, except for ball number $8$, which is black. If you want to count the number of ways to select $10$ of the balls such that one of them is the black $8$-ball, then since it doesn't matter the order in which you select the balls, just pick the $8$-ball first, leaving you with $9$ more balls to choose from the $19$ remaining. Conversely, with the professor's approach, you simply discard the $8$-ball, leaving you with $19$ balls from which you must select $10$. Incidentally, the fact that the resulting probability is $1/2$ also proves that $$\binom{19}{9} = \binom{19}{10},$$ which we also see because $$\binom{n+m}{n} = \frac{(n+m)!}{n! m!} = \binom{n+m}{m}.$$
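Both computations agree, as a one-line Python check confirms:

from math import comb

print(comb(19, 9) / comb(20, 10))        # 0.5, the direct count
print(1 - comb(19, 10) / comb(20, 10))   # 0.5, the professor's complementary count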
Are all convex optimization problems easy to solve?
No, not all convex programs are easy to solve. There are intractable convex programs. Roughly speaking, for an optimization problem over a convex set $X$ to be easy, you have to have some kind of machinery available (an oracle) which can efficiently decide if a given solution $x$ is in $X$. As an example, optimization over the cone of co-positive matrices is convex, but intractable. Given a matrix $A(x)$, it is hard to decide if $A(x)$ is co-positive ($z^TA(x)z\geq 0 ~\forall z\geq 0$). Compare this to the tractable problem of optimizing over the semidefinite cone $z^TA(x)z\geq 0 ~\forall z$.
Construct a calculus which produces exactly all pairs $(S,t)$, such that $free(t)=S$.
Note that in a term all variables are free. So the following calculus will do: $${\over (\{v_n\}, v_n)} $$ where $v_n$ is a variable. $${\over (\emptyset, c)}$$ where $c$ is a constant symbol. $${(S_1, t_1) \\ \dots \\ (S_n, t_n) \over ( \bigcup_i S_i, ft_1\dots t_n) }$$ where $f$ is an $n$-ary function symbol.
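A minimal executable version of this calculus (a Python sketch with an assumed, made-up term representation: a variable is a string starting with 'v', a constant is any other string, and a compound term is a tuple whose first entry is the function symbol):

def free_vars(term):
    # base cases: a variable contributes itself, a constant contributes nothing
    if isinstance(term, str):
        return {term} if term.startswith('v') else set()
    # compound term f t1 ... tn: take the union of the subterms' variable sets
    _, *args = term
    out = set()
    for sub in args:
        out |= free_vars(sub)
    return out

print(free_vars(('f', 'v1', ('g', 'c', 'v2'))))   # {'v1', 'v2'} (set order may vary)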
A new characterization of an annulus in the plane?
This is not an answer to the question, but a proof of the following fact, which I stated in a comment to the OP. Lemma: Let $X\subseteq\mathbb{R}^2$ be a non-empty compact set, and let $L$ be the set of distinct lines about which $X$ is symmetric. Then there is a point $p\in\mathbb{R}^2$ which is common to every line in $L$. Proof: We will proceed in two parts, both by contradiction, using the fact that $X$ is compact. Suppose there are two lines $\ell_1$ and $\ell_2$ in $L$ that have no points in common, i.e. they are parallel. Let $\ell'$ be the line exactly in the middle of $\ell_1$ and $\ell_2$. Then for every point $x\in X$ one of the two lines is closer than the other to $x$, say $\ell_1$ (except if the point is exactly on $\ell'$, in which case choose whichever one, it doesn't matter). Let $x'$ be the point obtained by mirroring $x$ in $\ell_2$; then the distance of $x'$ from $\ell'$ is greater than the distance of $x$ from $\ell'$. By assumption, $x'\in X$, so we can repeat this procedure to construct a sequence of points of $X$ going to infinity, which contradicts the fact that $X$ is compact. It follows that every two lines in $L$ must intersect. Now take three lines $\ell_1,\ell_2$ and $\ell_3$ in $L$ and suppose they don't have a common point. We know by what we have said before that they must bound a triangle in the plane; let $c$ be the geometric center of this triangle. Let $x\in X$ and let $\ell$ be the line passing through $x$ and $c$. Suppose without loss of generality that if you follow $\ell$ starting from $x$ and going towards $c$, the last one of the lines $\ell_1,\ell_2,\ell_3$ you cross is $\ell_1$. Let $x'$ be the point obtained by mirroring $x$ in $\ell_1$; then the distance from $x'$ to $c$ is strictly greater than the distance from $x$ to $c$. Again, this leads to a contradiction with the fact that $X$ is compact. It is straightforward to see that this implies that all lines in $L$ have a common point. QED
Finding the second moment of a control system
The quick way is to note that you're actually studying an Ornstein-Uhlenbeck process whose explicit solution is $$ X_t^u=X_0 e^{-ct}+\int_0^t e^{-c(t-s)}\,dW_s. $$ Now if $X_0$ is independent of $W$ (otherwise your formula doesn't hold), we have $$ E[(X_t^u)^2]=e^{-2ct}E[(X_0)^2]+2e^{-ct}E[X_0]E\left[\int_0^t e^{-c(t-s)}\,dW_s\right]+E\left[\left(\int_0^t e^{-c(t-s)}\,dW_s\right)^2\right]. $$ The second term vanishes and for the last we have $$ E\left[\left(\int_0^t e^{-c(t-s)}\,dW_s\right)^2\right]=\int_0^t e^{-2c(t-s)}\,ds=\frac{1-e^{-2ct}}{2c} $$ by Ito's isometry. This gives your formula.
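A Monte Carlo sanity check of the resulting formula (a Python/numpy sketch; it assumes the underlying SDE is $dX_t=-cX_t\,dt+dW_t$, which is what the quoted explicit solution corresponds to, with a deterministic $X_0$):

import numpy as np

rng = np.random.default_rng(0)
c, t_end, n_steps, n_paths = 0.8, 2.0, 2000, 200_000
dt = t_end / n_steps
x0 = 1.5                                  # deterministic initial value, independent of W

x = np.full(n_paths, x0)
for _ in range(n_steps):
    # Euler-Maruyama step for dX = -c X dt + dW
    x += -c * x * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

theory = np.exp(-2 * c * t_end) * x0**2 + (1 - np.exp(-2 * c * t_end)) / (2 * c)
print(np.mean(x**2), theory)              # the two numbers should be close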
How do I show that there exists only one group of order 2 up to isomorphism?
One can directly check that there is an isomorphism from $(G,e)$ to $(G',e'),$ say $G=\{e,a\}$ and $G'=\{e',a'\}.$ Then clearly $a^2=e$ and $a'^2=e'.$ Consider the map $G\to G'$ by mapping $e$ to $e'$ and $a$ to $a'.$ Then it's a bijection by the construction. What you have to check is that it's a homomorphism(almost trivial). (This may rephrase what you want to convey by your tables!)
Limit operations with viscosity solutions
The two operations apply to different objects, so although they share similarities, they are not strictly equivalent in any way I'm aware of. In the Perron method, one has a family of subsolutions $\mathcal{F}$, and it is the pointwise supremum over this family. In the end, one shows that $w(x)$ is a viscosity solution of the equation of interest, and so $w\in \mathcal{F}$, and the supremum is thus attained (and there is no limit). On the other hand, the limsup operation applies to a sequence of functions. The context here is usually some approximation scheme for the viscosity solution (e.g., vanishing viscosity or finite difference schemes) where there is a natural ordering to the family of functions (e.g., increasing grid resolution, decreasing viscosity parameter). The two operations indeed share a lot of similarities. They are both based on utilizing the maximum principle to pass to limits within the viscosity solution framework. EDIT: To answer your edited question, taking the limsup and * separately gives a different operation. Consider the sequence of functions $u_n(x) = 1_{(0,1/n)}(x)$. Then $\limsup_n u_n(x)=0$, but the combined limsup and * operation gives a value of $1$ at $x=0$.
If $F$ is a distribution function and $t>0$, can we show that $F(F^{-1}(t))\ge t$?
I think I know what question you meant to ask, but you wrote it in a confusing (more precisely, incorrect) way since $M_t$ is treated as both a set and a number in your question. (It's a set in the definition you wrote and in your equations $(3)$ and $(4)$, but it's a number in the highlighted portion of your question.) So here is what I think your question is a special case of: Show that for all $t\in\mathbb R$ and for all right-continuous functions $F\colon\mathbb R\to\mathbb R$ we have the inequality $$ F\bigl(\inf\{x\in\mathbb R\colon F(x)\geq t\}\bigr)\geq t $$ whenever $\inf\{x\in\mathbb R\colon F(x)\geq t\}\in\mathbb R$. Note that $F$ doesn't need to be a distribution function, and $t$ does not have to positive. Why is this true? Well, if we write $y$ for the infimum of the set of $x\in\mathbb R$ satisfying $F(x)\geq t$ then we can find a sequence $x_1,x_2,\ldots$ tending to $y$ from above such that $F(x_n)\in [t,\infty)$ for all $n$. By right continuity, we have that $F(x_n)$ tends to $F(y)$, and thus since $[t,\infty)$ is closed it follows that $F(y)\in [t,\infty)$ as well, proving the inequality.
Solutions to $f'=f$ over the rationals
More generally, for any function $g:\mathbb Q\rightarrow\mathbb Q$ and any point $(x_0,y_0)\in\mathbb Q^2$, there exists $f:\mathbb Q\rightarrow\mathbb Q$ such that $f(x_0)=y_0$ and $f'(x)=g(f(x))$. Choose an enumeration $x_0,x_1,\ldots$ of $\mathbb Q$ starting with $x_0$. Let $Q_n=\{x_0,\ldots,x_n\}$, so $\mathbb Q=\bigcup_n Q_n$. We will inductively construct continuous functions $a_n,b_n:\mathbb Q\rightarrow\mathbb Q$ with the properties: (1) $a_{n-1}(x)\leq a_n(x)\leq b_n(x)\leq b_{n-1}(x)$; (2) if $x\in Q_n$ then $a_n(x)=b_n(x)$ and $a_n'(x)=b_n'(x)=g(a_n(x))$; (3) if $x\in\mathbb Q\setminus Q_n$ then $a_n(x)<b_n(x)$. We'll use the parabolic functions $c(s,t)$ and $d(s,t)$ defined by $$ c(s,t)(x)=t+g(t)(x-s)-(x-s)^2, $$ $$ d(s,t)(x)=t+g(t)(x-s)+(x-s)^2. $$ Note that $c(s,t)(x)<d(s,t)(x)$ for $x\neq s$ and both functions pass through $(s,t)$ with derivative $g(t)$. We can take $a_0=c(x_0,y_0)$ and $b_0=d(x_0,y_0)$. Suppose $n>0$ and $a_{n-1},b_{n-1}$ are constructed. Then $a_{n-1}(x_n)<b_{n-1}(x_n)$, so choose $y_n$ strictly between these. Choose an open interval $I$ containing $x_n$ such that $c(x_n,y_n)>a_{n-1}$ and $d(x_n,y_n)<b_{n-1}$ on $I$. Shrink $I$ so that its closure doesn't intersect $Q_{n-1}$. Let $J$ be an open interval containing $x_n$ whose closure is inside $I$. We define $a_n$ to equal $a_{n-1}$ outside $I$, $c(x_n,y_n)$ inside $J$, and interpolate linearly between $I$ and $J$ so that the result is still continuous. Define $b_n$ similarly. Since $\bigcup_n Q_n=\mathbb Q$, both $a_n$ and $b_n$ converge pointwise to a function $f$ as $n\rightarrow\infty$. For any $x\in\mathbb Q$ we have $x=x_n$ for some $n$, and $a_n\leq f\leq b_n$, so property (2) and the squeeze theorem imply that $f$ satisfies the required equation.
Why are elementary row operations linear transformations?
Let's concentrate on a $3\times 3$ matrix for now. The general theory will become clear. The core idea is that row operations correspond to multiplying by a particular matrix. Given a matrix $$ M \equiv \begin{bmatrix} v_1^1 & v_1^2 & v_1^3 \\ v_2^1 & v_2^2 & v_2^3 \\ v_3^1 & v_3^2 & v_3^3 \\ \end{bmatrix} $$ We can write our matrix $M \equiv \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix}$ where each $r_i$ are the rows, defined as $r_i \equiv \begin{bmatrix} v_i^1 & v_i^2 & v_i^3 \end{bmatrix}$. Now, we can look at the row transformation $R_1 \rightarrow \alpha R_1 + \beta R_2+ \gamma R_3$ on matrix $M$ to yield matrix $M'$ as: $$ \begin{align*} M' &\equiv \begin{bmatrix} \alpha & \beta & \gamma \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} M \\ &= \begin{bmatrix} \alpha & \beta & \gamma \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix} \\ &= \begin{bmatrix} \alpha r_1 + \beta r_2 + \gamma r_3 \\ r_2 \\ r_3 \end{bmatrix} \end{align*} $$ So, the matrix $M'$ (obtained after a row transformation) is a linear transform applied onto the original matrix $M$. For a general transformation, we can create the transformation matrix appropriately, generalising from this example. This explains why row transformations cannot change the span of the rows: all these transformations can do is to take combinations of existing rows, which does not allow one to access vectors outside the subspace spanned by $\{ r_1, r_2, r_3 \}$. (note: you might want to check that it's indeed legal to collapse a matrix into the rows $r_i$, and that the composition rules do work out. They do, but it's a good exercise to check that writing the matrix as $r_i$ and performing transformations is the same as writing the entire $v_i^j$.)
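A quick numpy verification that the row operation $R_1 \rightarrow \alpha R_1 + \beta R_2 + \gamma R_3$ is the same as left multiplication by the matrix above (a sketch with arbitrary numbers):

import numpy as np

rng = np.random.default_rng(1)
M = rng.integers(-5, 6, size=(3, 3)).astype(float)
alpha, beta, gamma = 2.0, -1.0, 3.0

# the row operation done "by hand"
M_by_hand = M.copy()
M_by_hand[0] = alpha * M[0] + beta * M[1] + gamma * M[2]

# the same operation as a left multiplication
T = np.array([[alpha, beta, gamma],
              [0.0,   1.0,  0.0],
              [0.0,   0.0,  1.0]])
print(np.allclose(T @ M, M_by_hand))   # True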
Solving the non exact differential equation
The equation is separable, $$2a^2y=x(y-a)\frac{dy}{dx},$$ $$2a^2\frac{dx}{x}=\frac{y-a}{y}dy,$$ $$2a^2\log x+C=y-a\log y.$$
Conditional probability with two fair dice
There are five outcomes in the joint event "a four shows and the sum is even": $$\{(4,2),(4,4),(4,6),(2,4),(6,4)\}$$ How many outcomes form the conditioning event: "a four shows"?
Dual Spaces and Natural maps
There are similar posts here: "Why are vector spaces not isomorphic to their duals?", "A basis for the dual space of $V$", "Isomorphisms Between a Finite-Dimensional Vector Space and its Dual", and "Dual of a vector space". The moral is: in the finite dimensional case, the two spaces $V$ and $V^{*}$ are isomorphic because they have the same dimension and they are over the same field. In the infinite dimensional case (for example if $V$ has a countable basis), $V^{*}$ is larger than $V$ because the cardinalities of the two bases differ; $V$ has a basis of cardinality $|\mathbb{N}|$, while $V^{*}$ has the cardinality of the set of all maps from $\mathbb{N}$ to $\mathbb{N}$. We know the space of maps from $\mathbb{N}$ to two points has cardinality $c$, so the second one is strictly larger than the first one. In other words, they are not isomorphic.
A question on the greatest common divisor of integers and their divisor sum
From the Wikipedia page on the greatest common divisor, we use the following property: Property G $$\gcd(a, b \cdot c) = 1 \iff \{\gcd(a, b) = 1 \land \gcd(a, c) = 1\}.$$ In particular, we obtain $$\gcd(XY, \sigma(XY)) = 1 \iff \gcd(XY, \sigma(X)\sigma(Y)) = 1$$ (since $\gcd(X,Y) = 1$ and $\sigma$ is weakly multiplicative) $$\iff \{\gcd(XY, \sigma(X)) = 1 \land \gcd(XY, \sigma(Y)) = 1\}$$ (using Property G) $$\iff \{\gcd(X, \sigma(X)) = 1\} \land \{\gcd(Y, \sigma(X)) = 1\} \land \{\gcd(X, \sigma(Y)) = 1\} \land \{\gcd(Y, \sigma(Y)) = 1\}$$ (using Property G) $$\iff \{\gcd(Y, \sigma(X)) = 1\} \land \{\gcd(X, \sigma(Y)) = 1\}$$ (since $\gcd(X,\sigma(X)) = 1$ and $\gcd(Y,\sigma(Y)) = 1$). QED I hope that everything that I have written out is correct! =)
Maximizing directional derivatives?
To evaluate partial derivatives of a function $f(x,y)$, you fix one of the variables as a constant and differentiate with respect to the other variable. For instance, if you're trying to find, say, $\partial_x f$, treat $y$ as a constant; you can temporarily rename it $a$ if you wish, so as to see it more clearly. The maximum value of the directional derivative will occur in the direction along the gradient vector (at a given point). This maximum value will be the norm of the gradient vector (at that point) -- just review the definition of the directional derivative: it's a dot product between the gradient vector and a unit vector that gives the "direction".
Asymptote vertical / horizontal
The function has no asymptote as $x \to +\infty$. The only asymptote is $y=0$ as $x \to -\infty$: $\displaystyle \lim_{x \to -\infty} x^{-3}e^{\frac{x^3}{3}} = \lim_{x \to -\infty} x^{-3} \times \lim_{x \to -\infty} e^{\frac{x^3}{3}}=0 \times 0 =0$. The criterion $\lim_{x \to \infty}{\frac{f(x)}{x}} = k \in \mathbb R$ works also for $k=0$, but if directly $\lim_{x \to -\infty}{f(x)} =0$, this is already sufficient to conclude that $y=0$ is an asymptote at $-\infty$.
Does $\{f'_n(z)\}$ converge uniformly on $D$?
Clearly, $f_n'(z) \to 0$ for each $z \in D$. If $f_n' \to 0$ uniformly, then there exists $m$ such that $|f_n'(z)| <\frac 1 {2e}$ for all $n \geq m$ and all $z \in D$. Put $z=1-\frac 1 n$. We get $(1-\frac 1 n)^{n-1} <\frac 1 {2e}$ for all $n \geq m$. Letting $n \to \infty$ we get $\frac 1 e \leq \frac 1 {2e}$, which is a contradiction. Hence the convergence is not uniform.
Prove or disprove: $ A^2 = I \Longrightarrow A=I \vee A=-I $
No. Here are two counterexamples (with their interpretation in $\Bbb R^2)$: Symmetry with respect to the $x$ axis: $$\begin{pmatrix}1&0\\0&-1\end{pmatrix}$$ Note that any symmetry would work. For example: $$\begin{pmatrix}0&1\\1&0\end{pmatrix}$$ By the way, this would be true in a field because $X^2=I\iff X^2-I=0\iff X^2-I^2=0\iff (X-I)(X+I)=0$ but the matrices only form a ring, not a field so we don't have $(X-I)(X+I)=0 \implies X-I=0\lor X+I=0$
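Both counterexamples are easy to check numerically (a short numpy sketch):

import numpy as np

I = np.eye(2)
for A in (np.array([[1.0, 0.0], [0.0, -1.0]]), np.array([[0.0, 1.0], [1.0, 0.0]])):
    assert np.allclose(A @ A, I)
    assert not np.allclose(A, I) and not np.allclose(A, -I)
print("both matrices square to I without being I or -I")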
$\kappa$-complete, $\lambda$-saturated ideal equivalence
You omitted one hypothesis: you’re trying to prove that if $\lambda\le\kappa$ and $\Bbb I$ is a $\kappa$-complete ideal on $\kappa$ containing all of the singletons, and if there is no pairwise disjoint family $\{X_\alpha:\alpha<\lambda\}\subseteq\wp(\kappa)\setminus\Bbb I$, then $\Bbb I$ is $\lambda$-saturated. Suppose, on the contrary, that there is a family $\{X_\alpha:\alpha<\lambda\}\subseteq\wp(\kappa)\setminus\Bbb I$ such that $X_\alpha\cap X_\beta\in\Bbb I$ whenever $\alpha<\beta<\lambda$. Suppose first that $\lambda<\kappa$. Let $I=\bigcup\{X_\alpha\cap X_\beta:\alpha<\beta<\lambda\}$; $I$ is the union of $\lambda$ elements of $\Bbb I$, so $I\in\Bbb I$. For $\alpha<\lambda$ let $Y_\alpha=X_\alpha\setminus I$; clearly $Y_\alpha\notin\Bbb I$, and $Y_\alpha\cap Y_\beta=0$ if $\alpha<\beta<\lambda$. By hypothesis no such family $\{Y_\alpha:\alpha<\lambda\}$ exists, so we must have $\lambda=\kappa$. For this case we need a slightly more sophisticated version of the same basic idea. For each $\alpha<\kappa$ let $$Y_\alpha=X_\alpha\setminus\bigcup\{X_\alpha\cap X_\beta:\beta<\alpha\}\;;$$ $|\alpha|<\kappa$, so $\bigcup\{X_\alpha\cap X_\beta:\beta<\alpha\}\in\Bbb I$, and therefore $Y_\alpha\notin\Bbb I$. But clearly $Y_\alpha\cap Y_\beta=0$ if $\alpha<\beta<\kappa$, since $Y_\alpha\cap Y_\beta\subseteq X_\alpha\cap \left(X_\beta\setminus\big(X_\beta\cap X_\alpha\big)\right)$, and again we contradict the hypothesis that no such family exists. This completes the proof. Note that I could actually have done it in a single case using the more sophisticated version even when $\lambda<\kappa$; I just thought that it would be a little clearer if I started with the simple version of the idea.
Complex Analysis :Real integral with residues calculation
Let $g(x) = \frac{\sin 2x}{1+\frac{\sin x}{2}} $ so $ \int_{-\pi}^\pi g(x) dx = \Im I.$ Since $h(x) = g(x-\pi/2)$ is odd, and periodic with period $2\pi$, $$ \Im I = \int_{-\pi}^\pi g(x) dx = \int_{-\pi}^\pi h(x) dx = 0.$$
Is $f - 3I$ an isomorphism if $f$ is orthogonal?
Suppose that $f - 3I$ is not an isomorphism. Then the equation $fv = 3v$ has a non-trivial solution $v \neq 0$. This is equivalent to the fact that $f$ has eigenvalue $3$. But since $f$ is orthogonal, all eigenvalues of $f$ lie on the unit circle, i.e., they have absolute value $1$. Thus, $3$ can't be an eigenvalue of $f$. The idea of the proof that all eigenvalues $\lambda$ of an orthogonal $f$ satisfy $\lvert \lambda \rvert = 1$ is discussed e.g. in this question.
Finding a multivariable limit
It is a limit in two variables, but the indeterminate form comes from only one of them. So we calculate: $$\lim \limits_{(x, y) \to (2,0)} \frac{1-\cos y}{xy^2}=\frac{1}{2}\lim_{y\to 0}\frac{1-\cos y}{y^2}$$ This is a well-known limit: $$\lim_{t\to 0} \frac{1-\cos(t)}{t^2} = \frac{1}{2} $$ Then $$\lim \limits_{(x, y) \to (2,0)} \frac{1-\cos y}{xy^2}=\frac{1}{2}\lim_{y\to 0}\frac{1-\cos y}{y^2}=\frac{1}{2}\cdot \frac{1}{2}=\frac{1}{4}$$
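If you want to double-check the value, here is a small Python/SymPy sketch (it only confirms the one-variable reduction used above):

    import sympy as sp

    x, y = sp.symbols('x y')
    expr = (1 - sp.cos(y)) / (x * y**2)

    # near x = 2 the factor 1/x is harmless, so only the limit in y matters
    print(sp.limit(expr.subs(x, 2), y, 0))   # 1/4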
Transversals of Latin Squares
I don't think there's any slick way to determine that this Latin square has exactly $3$ transversals---we just count them. E.g., here's some GAP code:

    L:=[[1,2,3,4,5],[2,4,1,5,3],[3,5,4,2,1],[4,1,5,3,2],[5,3,2,1,4]];;

    ExtendPartialTransversal:=function(T)
      local i,j,TNew;
      # we try to add entry (i,j,L[i][j]) to T without clashing
      # looking at row i
      i:=Size(T)+1;
      # looking at column j
      for j in [1..5] do
        # column already used
        if(ForAny([1..i-1],k->T[k][2]=j)) then continue; fi;
        # symbol already used
        if(ForAny([1..i-1],k->T[k][3]=L[i][j])) then continue; fi;
        # add to partial transversal
        TNew:=Concatenation(T,[[i,j,L[i][j]]]);
        # if transversal complete, then print, otherwise extend
        if(Size(TNew)=5) then Print(TNew,"\n"); else ExtendPartialTransversal(TNew); fi;
      od;
    end;;

    # start with the empty partial transversal
    ExtendPartialTransversal([]);

which returns the three transversals:

    [ [ 1, 1, 1 ], [ 2, 4, 5 ], [ 3, 3, 4 ], [ 4, 5, 2 ], [ 5, 2, 3 ] ]
    [ [ 1, 4, 4 ], [ 2, 1, 2 ], [ 3, 5, 1 ], [ 4, 3, 5 ], [ 5, 2, 3 ] ]
    [ [ 1, 5, 5 ], [ 2, 3, 1 ], [ 3, 4, 2 ], [ 4, 1, 4 ], [ 5, 2, 3 ] ]

and shows there are no others by exhaustive search.
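For comparison, the same exhaustive count can be done in a few lines of Python (a sketch, not the GAP code above; it simply tries all $5!$ column permutations rather than extending partial transversals):

    from itertools import permutations

    L = [[1, 2, 3, 4, 5],
         [2, 4, 1, 5, 3],
         [3, 5, 4, 2, 1],
         [4, 1, 5, 3, 2],
         [5, 3, 2, 1, 4]]

    # a transversal picks one cell in each row and each column with all symbols
    # distinct, i.e. a column permutation p whose selected symbols are pairwise different
    transversals = [p for p in permutations(range(5))
                    if len({L[i][p[i]] for i in range(5)}) == 5]
    print(len(transversals))   # 3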
An example of a representation which is simultaneously of real and quaternionic type
Take any rep $V$ over $\Bbb R$ and consider $\Bbb H\otimes_{\Bbb R}V$ as a complex vector space. Say $V$ is a representation of $G$ over $\Bbb R$. Assume we've picked a basis. Then $G$ acts by real matrices, and if we extend scalars (by tensoring with a larger ring of scalars) $G$ still acts by those real matrices. Since $\Bbb C\subset\Bbb H$ we can interpret $\Bbb H\otimes_{\Bbb R}V$ as a complex vector space. The map $j$ comes from the scalar action of ${\bf j}\in\Bbb H$. Since $\Bbb H=\Bbb C\oplus\Bbb C{\bf j}$ this space is $\Bbb C\otimes_{\Bbb R}(V\oplus{\bf j}V)$ so it's of real type.
Let $f : [0,1] \rightarrow \mathbb{R}$ be continuous. Show that there exists $\psi \in [0,1]$ with $f(\psi) = 0$
Extract from $x_{n}$ a convergent subsequence $x_{n_{k}}$ (which necessarily exists because $[0, 1]$ is compact). Denote the limit of $x_{n_{k}}$ by the symbol $\psi$. This limit lies in $[0, 1]$ because the interval is closed. This proof is not quite complete, but almost... :)
Picking balls with replacement
You can neglect it: since the task only cares about the order of the red and white balls, picking a black ball doesn't affect anything (you can treat a black pick as a null element). For example, picking a red ball 2 times, then a black ball 2 times, then a white ball 2 times $\rightarrow RRBBWW$ is the same as drawing a red ball 2 times and then a white ball 2 times, since only their relative order matters $\rightarrow RRBBWW=RRWW$.
Can you find the lower bounds and upper bounds for $|A|$ ....?
The lower bound will be $0$, which is attainable if you take $\theta_k = k\cdot \frac{2\pi}{13}$. Take, for example, just two values of $\theta$ instead of $13$. If you take $\theta_1=\pi$ and $\theta_2=0$, then the exponentials are $e^0=1$ and $e^{i\pi} = -1$. The average of $1$ and $-1$ is $0$. For all $13$ values, the easiest way to prove that their average is $0$ is to show their sum is $0$. For this, consider that the $13$ values $e^{i\theta_k}$ I proposed are the $13$ distinct roots of the polynomial $p(x) = x^{13} - 1$. What do we know about the sum of the roots of a polynomial?
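A quick numerical check of the proposed choice (a Python/NumPy sketch; it takes the $13$ points $e^{i\theta_k}$ with $\theta_k=2\pi k/13$):

    import numpy as np

    theta = 2 * np.pi * np.arange(13) / 13   # the proposed angles (k = 0, ..., 12)
    z = np.exp(1j * theta)                   # the 13th roots of unity
    print(abs(z.mean()))                     # ~1e-16, i.e. zero up to rounding error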
How to determine if a set is a subspace of the vector space of all complex $2\times 2$ matrices?
In this context, a (nonempty) set $S$ is a subspace iff for all $x,y \in S$ and $a \in \Bbb C$, we have $x + y \in S$ and $a x \in S$. So, for example, I know that the first set (call it $S_1$) is not a subspace because $M = \pmatrix{1&0\\0&1} \in S_1$, but $$ i M = \pmatrix{i&0\\0&i} \notin S_1 $$ On the other hand, I know that the second set (call it $S_2$) is a subspace. If $M,N \in S_2$ and $a \in \Bbb C$, then the sum of the diagonal of $M+N$ will be zero, as will be the sum of the diagonal of $a M$. You are correct in your guess that $S_2$ is the only subspace. The key to "disproving" the rest is to find an element (or elements) of the set that break the rules of a subspace.
Solving for matrix of a Linear transformation
The matrix representation of $T$ will be a $2\times 2$ matrix (because $V$ is 2-dimensional). The most you'll be able to say about it though is that it will have the form $$[T]_{\{b_1,b_2\}} = \begin{bmatrix} | & | \\ [T(b_1)]_{\{b_1,b_2\}} & [T(b_2)]_{\{b_1,b_2\}} \\ | & |\end{bmatrix}$$ Without choosing a specific $T$ there's really not anything else you can say about its matrix representation.
Computing the Inverse of a two dimensional map?
$x \mapsto Ax +By + C$ and $y \mapsto Dx$ can be rewritten as: $$\left(\begin{matrix} x' \\ y' \\ 1 \end{matrix} \right)=M\left(\begin{matrix} x \\y\\1\end{matrix} \right)$$ where $$M=\left(\begin{matrix} A & B & C \\ D & 0 & 0 \\ 0 & 0 &1\end{matrix} \right).$$ If $M$ is invertible, then $\left(\begin{matrix} x \\y\\1\end{matrix} \right)=M^{-1}\left(\begin{matrix} x' \\ y' \\ 1 \end{matrix} \right)$. So the existence of an inverse is guaranteed by the invertibility of the underlying matrix when you have a linear system, which here amounts to $D\ne 0 \ne B$ (expanding $\det M$ along the last row gives $\det M=-BD$). Note that we added a row for the affine part. By doing so, we were able to express your function as a linear system. We could also have written: $$\left(\begin{matrix} x' \\ y' \\ 0 \end{matrix} \right)=M'\left(\begin{matrix} x \\y\\1\end{matrix} \right)$$ But then the invertibility would have been more difficult to express in terms of matrices.
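A small numerical illustration of the homogeneous-matrix trick (a Python/NumPy sketch; the values $A=2$, $B=3$, $C=1$, $D=5$ and the point $(2,-1)$ are made up):

    import numpy as np

    A, B, C, D = 2.0, 3.0, 1.0, 5.0   # made-up coefficients with B != 0 != D
    M = np.array([[A, B, C],
                  [D, 0, 0],
                  [0, 0, 1]])

    p = np.array([2.0, -1.0, 1.0])    # the point (x, y) = (2, -1) in homogeneous form
    q = M @ p                         # forward map, gives (x', y', 1)
    print(np.linalg.solve(M, q))      # recovers (2, -1, 1): the inverse map exists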
Maximizing Profit with two limited products
Calling $f(x,z) = (x-0.5)(z-1)(11-2x-z)$, the relative minimum/maximum/saddle points obey the conditions $$ \frac{\partial f}{\partial x} = 4x (1-z)+z (13 -z)-12 = 0\\ \frac{\partial f}{\partial z} = x (x+z-6.5)-0.5 z+3 = 0 $$ (the second equation is $\partial f/\partial z=0$ after dividing through by $-2$). Those solutions are shown in red, over the level contour map for $f(x,z)$. Now I leave to you the classification of each as a relative minimum/maximum/saddle point.
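If you want to see the stationary points explicitly, here is a Python/SymPy sketch of the computation (it only finds the candidates; the classification is still left to you):

    import sympy as sp

    x, z = sp.symbols('x z', real=True)
    f = (x - sp.Rational(1, 2)) * (z - 1) * (11 - 2*x - z)

    # stationary points: both partial derivatives vanish
    crit = sp.solve([sp.diff(f, x), sp.diff(f, z)], [x, z], dict=True)
    for c in crit:
        print(c, f.subs(c))   # four candidates; {x: 2, z: 4} gives f = 27/2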
If $\lim_{x\to x_{0}} f(x) = L$ and $\lim_{x\to x_{0}}g(x)=\infty $, then $\lim_{x\to x_{0}}(f(x) + g(x)) = \infty$
For it to tend to $\infty$ we need that $\forall\ K>0, \exists\ \delta>0: x\in B_\delta(x_0)\implies f(x)+g(x) > K$. Let $K > 0$. If you're close enough to $x_0$ ($\delta_1(\varepsilon)$), then $f(x)> L-\varepsilon$, and if you're close enough to $x_0$ ($\delta_2(R)$) then $g(x)> R$. So if $x\in B_{\min(\delta_1,\delta_2)}(x_0)$ we have that $f(x)+g(x)> L-\varepsilon+R > K \iff R> K-L+\varepsilon$. Note that you can thus choose $R$ so that it fulfills that. EDIT: Given that my original answer produced some confusion, let's do it a little more carefully. Perhaps this way you can see it more clearly. We need to prove $$\operatorname{lim}_{x\rightarrow x_0} (f(x)+g(x)) = +\infty$$ That means that $\forall \ K>0, \exists\ \delta>0: x\in B_\delta(x_0)\implies f(x)+g(x) > K$. Someone gives you $K>0$, now you have to find a $\delta$ that works for that $K$. Take $\varepsilon=1$ and $R=K-L+2$. Since $g(x)\rightarrow \infty$, there is a $\delta_1>0: x\in B^*_{\delta_1}(x_0) \implies g(x)>R$, by definition of the limit. Now, since $f(x) \rightarrow L$, there is a $\delta_2>0: x\in B^*_{\delta_2}(x_0)\implies |f(x)-L|<\varepsilon \implies f(x)>L-\varepsilon$. Take $\delta = \min \lbrace\delta_1,\delta_2\rbrace$; then $x\in B^*_{\delta}(x_0) \implies f(x)+g(x)>L-\varepsilon+R = K+1 > K$. So we have what we wanted.
Why non-real means only the square root of negative?
If you posit $\log(-x)=\gamma \log(x)$ for all $x$, and if you want to allow the usual operations like division, you are going to be forced to conclude that $\gamma=\log(-x)/\log(x)$ for all $x$, and in particular that $$\frac{\log(-2)}{\log(2)}=\frac{\log(-3)}{\log(3)}$$ But the various logs in this equation already have definitions, and according to those definitions, the equation in question is not true (for any of the various choices of $\log(-2)$ and $\log(-3)$. Therefore, your $\gamma$ can exist only if you either ban division or change the definition of the log. Likewise for your other proposal $x^{\gamma k}=-x^k$. This is why you can't just go adjoining new constants willy-nilly and declaring them to have whatever properties you want. In the case of $i$, the miracle is that you can define it in a way that does not require you to revise the existing rules of arithmetic. Such miracles are rare.
Is it possible to express "any set B composed of any two elements from set A" using set theory notation?
If $A$ is a set and $n\in\Bbb N$, $[A]^n$ is a common notation for the set of $n$-element subsets of $A$. (In fact this notation is used more generally, with any cardinal $\kappa$, finite or infinite.) If you want the family of all $2$-element subsets of $A$, you can write $[A]^2$. If you want to say that $C$ is a member of that family: $C\in[A]^2$. This notation is quite standard, but it’s not universally known, so you should probably define it the first time you use it. Added: In case you find yourself wanting the subsets of $A$ having at most $n$ elements, you can write $[A]^{\le n}$; this is equally standard.
Let G be a finite Abelian group of order $p^nm$, where p is a prime that does not divide m. Then $G=H\times K$ where H and K are the following sets.
You simply have $(x^{sm})^{p^n}=(x^{mp^n})^s=e^s=e$ by the corollary, hence by definition $x^{sm}$ is in $H$. Similarly $x^{tp^n}$ lies in $K$.
Volume of a cube with integration.
The problem is that with $\int_0^x x^2dx$ you are actually calculating the volume of a kind of inverted pyramid. As @Fermat suggested, you have to maintain the side fixed (through $a^2$), otherwise it changes as the third axis does.
Describing all solutions of $Ax=0$ in parametric vector form
So this is a $3\times 6$ matrix with rank $3$. Then the solution should have $n-r=6-3=3$ parameters! All of your work is correct except for the answer. In the answer you provided $x_4=0$ for any given $x_2,x_6$. In reality, $x_4$ is a parameter that can take on ANY value and you can see from your reduced matrix that $x_4$ is completely independent from the other variables! Your final solution should be: $x = $$\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4\\ x_5\\ x_6 \end{bmatrix}$ = $x_2$$\begin{bmatrix} 4\\ 1\\ 0\\ 0\\ 0\\ 0 \end{bmatrix}$ + $x_4$$\begin{bmatrix} 0\\ 0\\ 0\\ 1\\ 0\\ 0 \end{bmatrix}$+ $x_6$$\begin{bmatrix} -9\\ 0\\ 1\\ 0\\ 4\\ 1 \end{bmatrix}$
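As a sanity check, here is a Python/SymPy sketch. The reduced matrix below is a hypothetical reconstruction consistent with your solution vectors (the actual matrix of the exercise is not reproduced here), so take it only as an illustration of how the three free parameters appear:

    import sympy as sp

    # hypothetical reduced row echelon form with pivots in columns 1, 3, 5
    # (chosen so that it reproduces exactly the relations used above)
    R = sp.Matrix([[1, -4, 0, 0, 0,  9],
                   [0,  0, 1, 0, 0, -1],
                   [0,  0, 0, 0, 1, -4]])

    # free variables x2, x4, x6 give three basis vectors of the null space
    for v in R.nullspace():
        print(v.T)   # (4,1,0,0,0,0), (0,0,0,1,0,0), (-9,0,1,0,4,1)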
Cryptography friends: How to prove that a hash function can produces a length-fixed output?
It's a weird chaining mode of RSA, and the output can be seen as a string of the length equal to that of $n$, as we output a number modulo $n$. So yes to Q1, provided you output a fully padded string. It's not easy to calculate, but that's a bit subjective. The cost is two RSA operations essentially (xor is very cheap on computers) and those are relatively expensive compared to real hash functions (like SHA256 or SHA3 and many others). It's easily doable, but relatively expensive. It's very insecure though.
How to simplify the following series for $m=n$
Note that $$\sum_{m\ge 1}\tanh^{2m-1}u=\frac{\tanh u}{1-\tanh u}=\frac{e^{2u}-1}{2}$$ Hence the given sum is $$S=\sum_{n\ge 1}\frac{(-1)^{n-1}}{n^2}\frac{e^{nx}-1}{2}=-\sum_{n\ge 1}\frac{\alpha^{n}}{2n^2}-\frac{\zeta(2)}{4}$$ where $\alpha=-e^{x}$. It remains to evaluate the series $\displaystyle\sum_{n\ge 1}\frac{\alpha^{n-1}}{n^2}$.
What is known about the transformation of a power series in which $z^n$ is replaced with $z^{n^2}$?
If you know a formula for the ordinary generating function of the sequence and its $j^{th}$ derivatives, which must exist for all $j \geq 0$, then this article (2017) provides you with an integral representation of the transformed series in question. In particular, if $G(z)$ is the ordinary generating function of the sequence $\{g_n\}_{n \geq 0}$ and $q \in \mathbb{C}$ is such that $0 < |q| < 1$, then we have proved in the article that $$\sum_{n \geq 0} g_n q^{n^2} z^n = \frac{1}{\sqrt{2\pi}} \int_0^{\infty} \left[\sum_{b = \pm 1} G\left(e^{bt \sqrt{2\log(q)} z}\right)\right] e^{-t^2 / 2} dt. $$ The article terms this general procedure for modifying the original sequence generating function a square series transformation integral, but more generally, some of the most interesting applications of this method include new integral representations for theta functions and classical identities such as the series expansion for Jacobi's triple product.
Rectangular to Spherical and Cylindrical Points
Cylindrical Coordinates: Cylindrical coordinates have a radius $r$ and an angle $\theta$ (usually measured counterclockwise from the positive $x$ axis) corresponding to the Cartesian coordinates $x$ and $y$, and a height $h$ corresponding to the Cartesian coordinate $z$. Basically we convert the first two coordinates to polar form and then add a height component. To find $r$, we need to find the distance of $\left(-1,-\frac{1}{2}\right)$ from the origin $(0,0)$. This is $$r=\sqrt{1^2+\left(\frac{1}{2}\right)^2}=\sqrt{\frac{5}{4}}=\frac{\sqrt{5}}{2}$$ To find $\theta$, we realize that the point $\left(-1,-\frac{1}{2}\right)$ is in the third quadrant and that we can use a right triangle to find the angle in excess of $180^\circ$. Using the fact that the tangent of an angle is opposite over adjacent, we have that the extra angle is $\arctan\left(\frac{\frac{1}{2}}{1}\right)=\arctan\left(\frac{1}{2}\right)$. Hence $\theta=180^\circ+\arctan\left(\frac{1}{2}\right)$, or $\theta=\pi+\arctan\left(\frac{1}{2}\right)$ if you are using radians instead of degrees. Finally, the height is just the same as the $z$-coordinate, so we have $$(r,\theta,h)=\left(\frac{\sqrt{5}}{2},\pi+\arctan\left(\frac{1}{2}\right),-\frac{\sqrt{3}}{2}\right)$$ Spherical Coordinates: Spherical coordinates have a radius $r$ and two angles $\theta$ and $\rho$, where $\theta$ corresponds to our angle $\theta$ in cylindrical coordinates, and $\rho$ corresponds to the angle between the point and the positive $z$ axis as measured from the origin. In this case we have to recompute $r$ because this radius is the distance from the point $\left(-1,-\frac{1}{2},-\frac{\sqrt{3}}{2}\right)$ to the origin $(0,0,0)$ (this is in $3$ dimensions, whereas in cylindrical coordinates we just used the radius in $2$ dimensions). $$r=\sqrt{1^2+\left(\frac{1}{2}\right)^2+\left(\frac{\sqrt{3}}{2}\right)^2}=\sqrt{\frac{8}{4}}=\sqrt{2}$$ $\theta$ is the same as before, so we are just left with the computation of $\rho$. Again we can use a right triangle, except this time one side of it will be the $r$ we found in the cylindrical coordinates, and the other side will be the height. First we note that the angle is greater than $90^\circ$, so we can compute the piece of the angle that exceeds $90^\circ$ and then add it to $90^\circ$. Again we just use the $\tan$ relation and calculate the extra angle as $\arctan\left(\frac{\frac{\sqrt{3}}{2}}{\frac{\sqrt{5}}{2}}\right)=\arctan\left(\sqrt{\frac{3}{5}}\right)$. Hence $\rho=90^\circ+\arctan\left(\sqrt{\frac{3}{5}}\right)$ or $\frac{\pi}{2}+\arctan\left(\sqrt{\frac{3}{5}}\right)$ if you are using radians instead of degrees. Finally we have $$(r,\theta,\rho)=\left(\sqrt{2},\pi+\arctan\left(\frac{1}{2}\right),\frac{\pi}{2}+\arctan\left(\sqrt{\frac{3}{5}}\right)\right)$$
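A numerical cross-check of both conversions (a Python/NumPy sketch):

    import numpy as np

    x, y, z = -1.0, -0.5, -np.sqrt(3)/2

    # cylindrical (r, theta, h)
    r_cyl = np.hypot(x, y)
    theta = np.arctan2(y, x) % (2*np.pi)   # same value as pi + arctan(1/2)
    print(r_cyl, theta, z)                 # 1.118..., 3.605..., -0.866...

    # spherical (r, theta, rho), rho measured from the positive z axis
    r_sph = np.sqrt(x**2 + y**2 + z**2)
    rho = np.arccos(z / r_sph)
    print(r_sph, rho)                      # 1.414..., 2.229...

    # the closed forms derived above give the same angles
    print(np.pi + np.arctan(0.5), np.pi/2 + np.arctan(np.sqrt(3/5)))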
Half-open topology on $\mathbb R$ is separable, and $A \setminus \hat A$ is countable
You're right about the countable dense subset. (every set $[a,b)$ contains an open interval $(a,b)$ which contains a rational etc.) If $a \in A\setminus \hat{A}$, then there is a basic subset $[a,f(a))$ with $f(a) \in \Bbb Q$ such that $[a,f(a)) \cap A = \{a\}$ (this is what not being a limit point entails, plus we use the density of $\Bbb Q$). Suppose that we have $a_1 < a_2$ in $ A\setminus \hat{A}$ and $f(a_1) = f(a_2)$. But then $a_2 \in [a_1, f(a_1))$ (since $a_1 < a_2 < f(a_2) = f(a_1)$), and this contradicts how $f(a_1)$ was chosen. So $f: A\setminus \hat{A} \to \Bbb Q$ is injective and so the domain of $f$ is at most countable.
Can Atlas on S^1 only contain one chart?
If $U$ is any (nonempty) open subset of $\mathbb{R}$, then $U$ minus any point is disconnected. However, $S^1$ minus a point is always still connected.
Probability of streaks
According to this page, there is a closed form expression for just this problem. ...the probability, S, of getting K or more heads in a row in N independent attempts (where p is the probability of heads and q=1-p is the probability of tails) is: $$ S(N,K) = p^K\sum_{T=0}^\infty {N-(T+1)K\choose T}(-qp^K)^T-\sum_{T=1}^\infty {N-TK\choose T}(-qp^K)^T $$ With the unusual convention that ${A\choose B}= 0$ for $A < B$. Numerical evaluation gives me 0.0441372 for the case of $p=1/2$, $N=100$, $K=10$. Edit 1 Reworking it a bit to get rid of that weird convention just changes the upper limit. $$ p^k \sum _{t=0}^{\frac{n-k}{k+1}} \binom{n-k (t+1)}{t} \left(-q p^k\right)^t-\sum _{t=1}^{\frac{n}{k+1}} \binom{n-k t}{t} \left(-q p^k\right)^t $$ The following Mathematica code gives you numbers, just plug in p, n, k in the last substitution bit.

    -Sum[(-p^k*q)^t*Binomial[n - k*t, t], {t, 1, n/(k + 1)}] +
     p^k*Sum[(-p^k*q)^t*Binomial[n - k*(t + 1), t], {t, 0, (n - k)/(k + 1)}] //.
     {k -> 10, n -> 100, q -> 1 - p, p -> 1/2} // N

Edit 2 It has recently been pointed out by Mark L. Stone that the above is for a streak of heads, but not for the case of either streak occurring. I'd recommend reading his post below.
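For an independent check of the value 0.0441372, a short dynamic program over the length of the current run of heads gives the same number (a Python sketch, not part of the original formula):

    # probability of at least one run of K heads in N flips of a p-coin
    def prob_streak(N=100, K=10, p=0.5):
        state = [0.0] * K          # state[j] = P(current run of heads has length j)
        state[0] = 1.0
        hit = 0.0                  # probability already absorbed by a completed K-run
        for _ in range(N):
            new = [0.0] * K
            for j, pr in enumerate(state):
                new[0] += pr * (1 - p)        # tails resets the run
                if j + 1 == K:
                    hit += pr * p             # heads completes a run of K
                else:
                    new[j + 1] += pr * p      # heads extends the run
            state = new
        return hit

    print(prob_streak())   # approximately 0.0441, matching the value above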
Prove the difference between a number and the same number with two digits switched is always divisible by 9
I would drop the $x > y$ requirement. It's clutter, in my opinion. It suffices that $x \neq y$. Take for example $1729$. Switch two digits to get $1927$. Then $1927 - 1729 = 198$. But $1729 - 1927 = -198$. Taking your assertion $(a * 10^x + b*10^y) - (b*10^x + a*10^y) \leftrightarrow $ $a(10^x - 10^y) + b(10^y - 10^x) \leftrightarrow (a - b)(10^x - 10^y)$ and plugging in $7$ and $9$ both ways, we get $$(700 + 9) - (900 + 7) \leftrightarrow (7 - 9)(100 - 1)$$ and $$(900 + 7) - (700 + 9) \leftrightarrow (9 - 7)(1 - 100)$$ Then, ignoring signs, we arrive at the same result.
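A finite sanity check (not a proof) over all four-digit numbers, swapping every pair of digit positions (Python sketch):

    from itertools import combinations

    def all_swaps_divisible_by_9(n):
        d = list(str(n))
        for i, j in combinations(range(len(d)), 2):
            s = d[:]
            s[i], s[j] = s[j], s[i]           # switch the digits in positions i and j
            if (n - int("".join(s))) % 9 != 0:
                return False
        return True

    print(all(all_swaps_divisible_by_9(n) for n in range(1000, 10000)))   # True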
Predicate logic: Two variables - same value allowed?
I'll put the answer here so the question can be marked as answered. "For every $x$" means for every $x$, period. So, yes, there are no restrictions on $x$ not being equal to $y$; any such restrictions would have to be given as predicates (a clause $x\neq y$). I'll note (as I did in the comments) that whether the transformation from $(x\geq y) \wedge \neg(y\geq x)$ is equivalent to $x\gt y$ (and whether $\gt$ is areflexive) depends on the interpretation of $\geq$ and $\gt$ on the universe in question. The specification of the universe should not be only the set, but also the meaning of the relational and functional symbols like $\geq$. Only if there is some working convention can you simply assume that the meaning will be "the usual one".
Positive Operators and Invertibility
If you know that $\|(T-I)x \|\leq \|(T+I)x \|$ for all $x \in H$, then you have that $$\|(T-I)(T+I)^{-1}x \| \leq \|(T+I)(T+I)^{-1}x \|= \|x \|, \ x \in H. $$ Hence $\|(T-I)(T+I)^{-1}\| \leq 1 $. On the other hand, note that if $T+T^*\geq 0$ then $$ 2\mbox{Re}(\langle Tx,x \rangle)=\langle Tx,x \rangle+\langle x,Tx \rangle=\langle Tx,x \rangle+\langle T^*x,x \rangle =\langle Tx+T^*x,x \rangle \geq 0$$ for all $x \in H$. Then we have $$\|(T+I)x \|^2 = \|Tx \|^2+2\mbox{Re}(\langle Tx,x \rangle) + \|x \|^2\geq \|x \|^2, \ x \in H. $$ Hence $$\|(T+I)x \| \geq \|x \|, \ x \in H.$$
Showing that this set satisfies the closed criterion
OK, here's a deal: (a) The ray $\{\lambda v | \lambda \ge 0\}$ is closed. This is rather easy. Nothing to do here. (b) In finite dimensions, the convex hull of a compact set is compact. I've proven this for another question here. (c) The Minkowski sum of a compact set and closed set is again closed. This has been neatly shown here. Conclude that your set $S$ is closed.
Condition for solution of linear system
You know that $A^Tx = b$ has a solution only if $b$ is in the space spanned by the columns of $A^T$ (which are the rows of $A$). So you could say that $b$ must be orthogonal (i.e. the scalar product is $0$) to the orthogonal complement of $\text{Col } A^T$ (if $b$ is orthogonal to the orthogonal complement of $\text{Col }A^T$, it must lie in $\text{Col }A^T$, which is what we want). Now, the orthogonal complement of $\text{Col }A^T$ is $\text{Ker } A$! So in the end you can say that for every $u \in \text{Ker } A$ it must hold that $$b^Tu = 0$$ @A.Chattopadhyay. Is that your comment or are you reporting what your professor told you? Anyhow, you must have misunderstood what I wrote; yours is not a counterexample. In fact, if $A^T = \begin{pmatrix}1 & 0 & 1 \\ 0 & 1 & 1 \\ 1& 1 &2\end{pmatrix}$ and $b = \begin{pmatrix} 1 \\ -1 \\ 0\end{pmatrix}$, we have that $\text{ Ker } A = \left\{u \in \mathbb R^3: u = \begin{pmatrix} -t \\ -t \\ t\end{pmatrix} \text{ for } t \in \mathbb R\right\}$. Hence for every $u \in \text{Ker } A$ we have $$u^Tb = -t\cdot 1 -t\cdot(-1) + t\cdot 0 = 0$$ as expected.
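The example in the last paragraph can also be verified mechanically (a Python/SymPy sketch):

    import sympy as sp

    At = sp.Matrix([[1, 0, 1],
                    [0, 1, 1],
                    [1, 1, 2]])              # this is A^T from the example
    b = sp.Matrix([1, -1, 0])

    # b is orthogonal to every basis vector of Ker A ...
    print([u.dot(b) for u in (At.T).nullspace()])   # [0]

    # ... and consistently, A^T x = b does have solutions
    print(sp.linsolve((At, b), *sp.symbols('x1:4')))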
Finding the elements in $\mathbb{Z}[i]/(2+2i)$
You're not using cancellation in $\mathbb{Z}$ or $\mathbb{Z}[i]$, you're trying to use cancellation in $\mathbb{Z}[i]/(2 + 2i)$. (Indeed, note that $2i + 2 = 0$ is false in $\mathbb{Z}[i]$; it's a true equation only in the quotient ring.) How do you know that this quotient ring is an integral domain? You can't use cancellation if you don't know you're working with an integral domain (or at least that the element you're cancelling is a non-zero-divisor).
Volume of solid of revolution by revolving the region $y=x^2$,$x=0$,$y=9$
We could set it up by integrating the volume of cylindrical shells as well (shell method). As we are rotating the region around the x-axis, at distance $y$ from the x-axis the shell's length along the x-axis is $ \sqrt y \ $, with $0 \leq y \leq 9$. So $ \ V = \displaystyle \int_0^9 2 \pi y \ \sqrt y \ dy = \frac{972 \pi}{5}$. We could also set it up with the washer (disk) method, similar to what you did: $\displaystyle \int_0^3 \int_{x^2}^{9} 2 \pi y \ dy \ dx = \frac{972 \pi}{5}$
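Both setups can be confirmed symbolically (a Python/SymPy sketch):

    import sympy as sp

    x, y = sp.symbols('x y', nonnegative=True)

    shells = sp.integrate(2*sp.pi*y*sp.sqrt(y), (y, 0, 9))
    washers = sp.integrate(sp.integrate(2*sp.pi*y, (y, x**2, 9)), (x, 0, 3))
    print(shells, washers)   # both equal 972*pi/5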
Convergence of Neumann-series
By definition of Neumann series you stated, $B^k=(S^{-1}DS)^k=S^{-1}DS\ \cdot S^{-1}DS\ ...\ S^{-1}DS=S^{-1}D^kS$, Therefore $\sum_{k=0}^n B^k=\sum_{k=0}^n S^{-1}D^kS=S^{-1}(\sum_{k=0}^nD^k)S$. Since you know that $\sum_{k=0}^nD^k$ converges already (actually the limit is $(I-D)^{-1}$), we know $\sum_{k=0}^n B^k=S^{-1}(\sum_{k=0}^nD^k)S$ also converges, and the limit is $S^{-1}(I-D)^{-1}S=(S^{-1}(I-D)S)^{-1}=(I-B)^{-1}$. Let me know if this solves your question.
initial algebra and free monoid
In the right upper part of the diagram, you can reduce $F(\mathsf{free} f) \circ \mathsf{inr}$ to $\mathsf{id} \otimes \mathsf{free} f : A \otimes A^{*} \to A \otimes G$. You end up with $(\mathsf{id} \otimes \mathsf{free} f) \circ (\mathsf{id} \otimes e)$, which is $\mathsf{id} \otimes e_G$. There you can swap the order with $f \otimes \mathsf{id}$ (a.k.a. $- \otimes -$ functoriality) to apply $(G, m_G, e_G)$'s right identity law. For the uniqueness part, I think you can use the initiality of $A^{*}$.
How to describe a function with cases involving both equalities and inequalities
The most common way I know to express such things is with iff (if and only if) in text, not in an equation. So in your first example, we have that $f(n) = n$ if and only if $n = M$. If you want it as an equation: $$f(n) = n \Leftrightarrow n=M$$ About your comment: If you only want partial cases, the cases environment is even worse. In this, we assume that the list of cases is complete. For example, something like $$f(n) = \begin{cases} 5 & n = 7 \\ 2 & n = 3\end{cases}$$ is not where we use cases, as we only give two values, not the whole definition of the function. So if you only have partial information and don't know all the values, I would strongly suggest not to use cases. Otherwise, a reader (as happened to me) would assume that these are all the cases, i.e. that the case $n > M$ is excluded somehow.
Finding $\lim_{x\rightarrow 0}\frac{x}{2}\sqrt{\frac{1+\cos(x)}{1-\cos(x)}}$
To begin with, it's $\sqrt{\frac{1-\cos{x}}{1+\cos{x}}}=\left| \tan{\left( \frac{x}{2}\right)} \right|$. Anyway, you could have answered without expanding into Taylor series. $$ \frac{\frac{x}{2}}{\left| \tan{\left( \frac{x}{2}\right)} \right|} = \frac{\frac{x}{2}\left| \cos{\frac{x}{2}} \right|}{\left| \sin{\left( \frac{x}{2}\right)} \right|} = \frac{\frac{x}{2}}{\left| \sin{\frac{x}{2}} \right|}\left| \cos{\frac{x}{2}} \right|$$ You know that $\lim_{x \to 0}\left| \cos{\frac{x}{2}} \right| = 1$, and it is easy to prove that $\lim_{x \to 0}\frac{x/2}{\left| \sin{x/2} \right|}$ doesn't exist. In fact, its one-sided limits exist and are $1$ from one side and $-1$ from the other!
a function with linear growth is $C^1$ and Lipschitz.
The function $$x \mapsto |x|$$ has linear growth (as you define it) but is not $\mathcal{C}^1$.
Jointly Gaussian and Brownian bridge
If you multiply a Gaussian random vector by a constant matrix (i.e. a non-random matrix) then what you get is always a Gaussian random vector. You have independent Gaussians $B_t$ and $B_1-B_t.$ Independent univariate Gaussians are jointly Gaussian. Your random vector is tuple of a constant linear combinations of these two.
Vector Space confusion
Linearity isn't sufficient in general to prove the sets are vector spaces, but it is necessary. Since you are given the parent set is a vector space, you simply have to check closure under addition, closure under scalar multiplication, and containment of the zero-vector. This is called the subspace test. I would try and find where things break and go from there. For a vector space, we have closure over addition. So let's look at $B$. What happens if $f(0) = 1$ and $f^{\prime}(0) = 0$? Now take $g(0) = 0$ and $g^{\prime}(0) = 1$. Clearly, $f, g \in B$. So if $B$ is a vector space, then $f + g \in B$. However, $(f + g)(0) = 2$. So closure under addition fails. For $A$, let's look at addition. An even function is a function such that $f(x) = f(-x)$. So if I take two even functions and add them, will the result be even? Are the functions commutative over addition? What about associative? You may find properties of addition over $\mathbb{R}$ to be helpful here. By this, I mean that since the functions map to values in $\mathbb{R}$, it suffices to consider commutativity, associativity, and distributivity over $\mathbb{R}$. Now is there an identity in $A$? That is, is there a function $g \in A$ such that $f + g = f$ for all $f \in A$? Again, think about the additive identity on $\mathbb{R}$. Hopefully this will help you get started. I've included more steps than necessary, but I did so in the hopes of clarifying general vector space proofs as well, rather than restricting to the case of subspaces (which you should do here). Please let me know if I can clarify anything.
A set $A$ is finite if, and only if every nonempty set of subsets of $A$ has a maximal element in the sense of $\subset$
I would say your proof of $(\implies)$ is fine, although a rigorous proof should be done by induction on the cardinality of $B$. It's also good to abstract a bit and prove a more general fact: every nonempty finite partial order has a maximal element. This is also proved by induction on the cardinality of the order. In the proof of $(\impliedby)$, the set $$\bigcup \{ A_n : n \in \omega \}$$ is the same as $\{ f(n) : n \in \omega \}$ and is not a family of subsets of $A$, but a subset of $A$ itself. The right family to consider is $\{ A_n : n \in \omega \}$, in which case the rest of the proof is correct. A simple proof of $(\impliedby)$ not using the axiom of choice or any other serious tool is as follows: let $\mathcal{A}$ be the family of all finite subsets of $X$. It is nonempty, since $\varnothing \in \mathcal{A}$. By the assumption, there is a maximal element $A$ in $\mathcal{A}$. We claim that $X = A$. Suppose not. Then there is some $x \in X \setminus A$. The set $A^* = A \cup \{ x \}$ is a finite set with $A \subsetneq A^*$, which contradicts the maximality of $A$.
Formula needed for calculating probability of recurring events
Same as before: if $n$ opportunities are left before the next pick, after the pick there may be $n+X-1$ or $n-1$ left and these occur with probabilities $Y$ and $1-Y$ respectively. Thus the average number of opportunities $Z_n$ starting from $n$ opportunities left is such that $$Z_n=1+YZ_{n+X-1}+(1-Y)Z_{n-1}.$$ Since $Z_n=nZ_1$, this yields $Z_n=n/(1-YX)$, assuming that $YX\lt1$. In particular, $$Z_X=X/(1-YX).$$ If $YX\gt1$, one never goes broke with positive probability hence $Z_n=+\infty$ for every $n\geqslant1$. If $YX=1$, one goes broke with full probability but the time it takes has infinite average.
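A quick simulation of the claim $Z_n=n/(1-YX)$ (a Python sketch; the interpretation of the process, namely that each pick consumes one opportunity and with probability $Y$ creates $X$ new ones, and the parameters $n=5$, $X=3$, $Y=0.2$ are assumptions made for illustration):

    import random

    def picks_until_broke(n, X, Y):
        picks = 0
        while n > 0:
            picks += 1
            n -= 1                        # each pick uses up one opportunity
            if random.random() < Y:       # ... and with probability Y creates X new ones
                n += X
        return picks

    n, X, Y = 5, 3, 0.2                   # made-up parameters with Y*X < 1
    trials = 100_000
    est = sum(picks_until_broke(n, X, Y) for _ in range(trials)) / trials
    print(est, n / (1 - Y*X))             # both close to 12.5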
How to find the sum of two linear subspaces?
One way is to find a basis for each of the individual subspaces. Then combine those bases together and remove any of those vectors that are linearly dependent on the previous ones. What is left is a basis of your sum space. Your $U$ has the single basis vector $(1,1,1)$, and your $W$ has the two basis vectors $(0,1,0)$ and $(0,0,1)$. Check if those are linearly dependent and describe the span of the union of those bases.
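Concretely, for the bases mentioned above (a Python/SymPy sketch):

    import sympy as sp

    u1 = sp.Matrix([1, 1, 1])              # basis vector of U
    w1 = sp.Matrix([0, 1, 0])              # basis vectors of W
    w2 = sp.Matrix([0, 0, 1])

    M = sp.Matrix.hstack(u1, w1, w2)
    print(M.rank())    # 3, so the three vectors are independent and U + W is all of R^3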
Prove that if $\sigma(n)=2n+1$ then $n$ is an odd perfect square.
I had seen this result before, but not the proof, so I looked up Cattaneo's original argument. It's in Italian. What follows is his proof with some details filled in. First, if $\sigma(n) = 2n+1$ then $\sigma(n)$ is odd. If $n = \prod_{i=1}^r p_i^{a_i}$ is the prime factorization of $n$, then we know that $$\sigma(n) = \prod_{i=1}^r (1 + p_i + p_i^2 + \cdots + p_i^{a_i}).$$ Since $\sigma(n)$ is odd, though, each factor $1 + p_i + p_i^2 + \cdots + p_i^{a_i}$ must also be odd. For each odd prime factor $p_i$, then, there must be an odd number of terms in the sum $1 + p_i + p_i^2 + \cdots + p_i^{a_i}$. Thus $a_i$ must be even. This means that if $p_i$ is odd, $p_i^{a_i}$ is an odd perfect square. The product of odd perfect squares is another odd perfect square, and therefore $n = 2^s m^2$, where $m$ is odd. Now we have $\sigma(n) = (2^{s+1}-1)M$, where $M = \prod_{p_i \text{ odd}} (1 + p_i + p_i^2 + \cdots + p_i^{a_i})$. Since $\sigma(n) = 2n+1$, \begin{align*} &(2^{s+1}-1)M = 2^{s+1}m^2 + 1 \\ \implies &(2^{s+1}-1)(M - m^2)-1 = m^2. \end{align*} This means that $-1$ is a quadratic residue for each prime factor of $2^{s+1}-1$. A consequence of the quadratic reciprocity theorem, though, is that $-1$ is a quadratic residue of an odd prime $p$ if and only if $p \equiv 1 \bmod 4$. Thus all prime factors of $2^{s+1}-1$ are congruent to $1 \bmod 4$. The product of numbers congruent to $1 \bmod 4$ is still congruent to $1 \bmod 4$, so $2^{s+1}-1 \equiv 1 \bmod 4$. However, if $s > 0$, then $2^{s+1}-1 \equiv 3 \bmod 4$. Thus $s$ must be $0$. Therefore, $n = m^2$, where $m$ is odd.
Split cab fare proportional to saving made across the board
I'm not sure there's any one right answer, but taking the original individual fares as the basis, and subtracting the savings proportionally, may work. The individual fares, one would hope, would encompass all factors for the cost of the trip for all concerned, and hence would make any joint savings achieved as fair as they could be. Sum of individual fares: $\$13.50$ Combined fare: $\$11.00$ Total savings: $\$2.50$ Rider A's fraction of the savings: $4/13.50 \times \$2.50 = \$0.74$ Rider B's fraction of the savings: $5/13.50 \times \$2.50 = \$0.93$ Rider C's fraction of the savings: $4.5/13.50 \times \$2.50 = \$0.83$ Rider A's discounted fare: $\$3.26$ Rider B's discounted fare: $\$4.07$ Rider C's discounted fare: $\$3.67$ They all save money, and in proportion to what their trip would have cost individually. The ordering of the list doesn't change, either; B still pays the most, and A pays the least.
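The same computation in a few lines of Python (a sketch; the fares are the ones from the example):

    fares = {'A': 4.00, 'B': 5.00, 'C': 4.50}
    combined = 11.00

    total = sum(fares.values())                  # 13.50
    savings = total - combined                   # 2.50
    for rider, fare in fares.items():
        discount = fare / total * savings        # each rider's proportional share
        print(rider, round(fare - discount, 2))  # A 3.26, B 4.07, C 3.67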