Method for coming up with consecutive integers not relatively prime to $(100!)$
It took me much effort, but finally I found an approach leading to large differences! Here is the result:

```
? q=1;while(q>0,n=Mod(0,1);z=vector(252,s,s);p=1;while(p<97,p=nextprime(p+1);merk=[];maxi=0;for(k=0,p-1,x=[];for(j=1,length(z),if(Mod(z[j],p)==k,x=concat(x,z[j])));if(length(x)>maxi,merk=[k];maxi=length(x));if(length(x)==maxi,merk=concat(merk,k)));x=[];k=merk[random(length(merk))+1];n=chinese(n,Mod(-k,p));for(j=1,length(z),if(Mod(z[j],p)==k,x=concat(x,z[j])));z=setminus(z,x);q=length(z)));n=component(n,2);anz=0;for(j=-20,260,if(gcd(n+j,100!)==1,anz=anz+1;print1(j," ")));print;print(anz);print(n)
-20 -16 -14 -10 -4 -2 256 260
8
1561607423896275886962003608302951953
?
```

The displayed number is $N$. The numbers $N-2$ and $N+256$ have no prime factor below $100$, but every number between them has one. So the difference is $258$. If I remember an OEIS entry correctly, this is the maximum possible difference.
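The printed claims can be re-checked independently of PARI/GP. A quick sanity check in Python, using the printed value of $N$ (a number $N+j$ has no prime factor below $100$ exactly when $\gcd(N+j,100!)=1$):

```python
from math import gcd, factorial

# N as printed by the PARI/GP search above.
N = 1561607423896275886962003608302951953

f100 = factorial(100)

# Offsets j in [-20, 260] with gcd(N + j, 100!) = 1,
# i.e. such that N + j has no prime factor below 100.
coprime_offsets = [j for j in range(-20, 261) if gcd(N + j, f100) == 1]
print(coprime_offsets)
```

If the run above is reproduced, the list is exactly the eight offsets printed by the PARI/GP session, confirming the gap of $258$ between $N-2$ and $N+256$.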
Convergence of $x_n(t)=\frac{2nt}{1+n^2t^2}$ in $C_{[0,1]}$ and $C_{[1,\infty]}$
Note that $$\lim_{n\to \infty}\frac{2nt}{1+n^2t^2}=0$$ for all $t$. To examine whether the convergence is uniform, we simply note from the AM-GM inequality that $1+n^2t^2\ge 2|nt|$, so $\left|\frac{2nt}{1+n^2t^2}\right|\le 1$, with equality if and only if $t=1/n$. Hence $$\sup_{t\in[0,1]} \frac{2nt}{1+n^2t^2}= 1$$ and the convergence is not uniform for $t\in [0,1]$. The convergence is uniform on $[\delta,\infty)$ for every $\delta>0$, since there $\frac{2nt}{1+n^2t^2}\le \frac{2}{nt}\le\frac{2}{n\delta}$.
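A quick numerical check that the supremum over $[0,1]$ stays at $1$: it is attained at $t=1/n$, which lies in $[0,1]$ for every $n\ge1$.

```python
# The sup of 2nt/(1+n^2 t^2) over [0,1] is attained at t = 1/n and equals 1,
# so the pointwise limit 0 is not approached uniformly on [0,1].
for n in (1, 10, 100, 1000):
    t_star = 1 / n
    val = 2 * n * t_star / (1 + n**2 * t_star**2)
    print(n, val)  # always (numerically) 1
```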
Sum of squared eigenvalues of $A$ equals $\operatorname{tr}(A^2)$?
Yes, this always holds when $A$ is square (which it must be to have eigenvalues). One can use Jordan Normal Form or a density argument, the key idea in both proofs being that $$ \operatorname{tr}(M) = \operatorname{tr}(UMU^{-1}) $$ for any invertible matrix $U$. JNF proof: Actually, all we need is that any square matrix $A$ is similar to an upper-triangular matrix $B$, $A=UBU^{-1}$. The multiset of eigenvalues is preserved under similarity and the eigenvalues of an upper triangular matrix are the diagonal elements, so the diagonal elements of $B$ are the eigenvalues of $A$, repeated according to multiplicity. The square of an upper-triangular matrix is again upper-triangular, and direct calculation shows that the diagonal elements of the square are the squares of the diagonal elements, so $$ \operatorname{tr}(A^2) = \operatorname{tr}(UBU^{-1}UBU^{-1}) = \operatorname{tr}(B^2) = \sum_i \lambda_i^2 $$ as required. Density proof: The function $f:A \mapsto \operatorname{tr}(A^2)$ is continuous on the vector space of matrices (this space is finite-dimensional, so any norm induces the same topology) so if we can prove something about $f$ on a dense subset of the set of matrices, we can extend it to the whole space by continuity. Matrices with distinct eigenvalues are dense (not a proof, but this is plausible since matrices with two eigenvalues the same have fewer parameters to play with, so one might expect them to form a submanifold of lower dimension), and diagonalisable, and for such matrices $A=UDU^{-1}$, $D$ diagonal, one obviously has $$ \operatorname{tr}(A^2) = \operatorname{tr}(UDU^{-1}UDU^{-1}) = \operatorname{tr}(D^2) = \sum_i \lambda_i^2. $$
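A concrete instance of the identity, using a small symmetric matrix whose eigenvalues are known exactly: $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ has eigenvalues $1$ and $3$, so $\sum_i\lambda_i^2 = 1+9 = 10$ should equal $\operatorname{tr}(A^2)$.

```python
# Check tr(A^2) = sum of squared eigenvalues for A = [[2,1],[1,2]],
# whose eigenvalues are 1 and 3 (eigenvectors (1,-1) and (1,1)).
A = [[2, 1],
     [1, 2]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
trace = sum(A2[i][i] for i in range(2))
print(trace)  # 10 = 1^2 + 3^2
```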
Trouble with trigonometric limits without derivatives
You have $$e^x=1+x+\frac{x^2}{2}(1+\epsilon_1(x))$$ and $$\sin(x)=x+x^2\epsilon_2(x),$$ where $\epsilon_1(x),\epsilon_2(x)\to0$ as $x\to0$; the second expansion holds because $x\mapsto \sin(x)$ is odd, so there is no $x^2$ term. Then the limit is $$\frac{1}{2}.$$
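Presumably the limit in question is $\lim_{x\to0}\frac{e^x-1-\sin x}{x^2}$, which is what the two expansions above combine to give (this is an assumption, since the original problem statement is not quoted here). A numerical check of the value $1/2$:

```python
import math

# Assuming the limit in question is lim_{x->0} (e^x - 1 - sin x)/x^2:
# the expansions give (x^2/2)(1+eps1) - x^2 eps2 over x^2, which tends to 1/2.
for x in (1e-2, 1e-3, 1e-4):
    print(x, (math.exp(x) - 1 - math.sin(x)) / x**2)
```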
What is the definition of a group entirely in first-order predicate logic?
The first-order theory of groups is described as follows. The language of group theory is $\mathfrak{L} = \{\cdot, e\}$, where $\cdot$ is a binary function symbol and $e$ is a constant symbol. The first-order theory $T$ of groups consists of the following axioms: 1) $(\forall x)(e \cdot x = x \,\wedge\, x \cdot e = x)$ 2) $(\forall x)(\forall y)(\forall z)(x \cdot (y \cdot z) = (x \cdot y) \cdot z)$ 3) $(\forall x)(\exists y)(x \cdot y = e)$ (Axiom 1 is written as a conjunction, since the chained equality $e\cdot x = x\cdot e = x$ is an abbreviation, not itself a first-order formula.) A structure in the language $\mathfrak{L}$ is a tuple $(G, \cdot^G, e^G)$ where $G$ is a set, $\cdot^G$ is a function $G \times G \rightarrow G$ and $e^G \in G$. A structure in the language $\mathfrak{L}$ is a model of $T$ if and only if it satisfies all the axioms of $T$. An $\mathfrak{L}$-structure $(G, \cdot^G, e^G)$ that is a model of $T$ is called a group.
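A toy sanity check of the axioms on a concrete $\mathfrak{L}$-structure: $(\Bbb Z/3, +\bmod 3, 0)$ should be a model of $T$.

```python
from itertools import product

# Check that the L-structure (Z/3, + mod 3, 0) satisfies the three axioms of T.
G = range(3)
op = lambda a, b: (a + b) % 3
e = 0

assert all(op(e, x) == x and op(x, e) == x for x in G)             # axiom 1
assert all(op(x, op(y, z)) == op(op(x, y), z)
           for x, y, z in product(G, repeat=3))                    # axiom 2
assert all(any(op(x, y) == e for y in G) for x in G)               # axiom 3
print("(Z/3, + mod 3, 0) is a model of T")
```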
Solve a system of two congruences with moduli not pairwise coprime
$x \equiv 225 \pmod {250}$ so $x=25(9+10j)$ for some $j \in \mathbb{Z} $. $x \equiv 150 \pmod {1225}$ so $x=25(6+49k)$ for some $k \in \mathbb{Z} $. So we need to find $y$ satisfying \begin{eqnarray*} y \equiv 9 \pmod {10} \\ y \equiv 6 \pmod {49} \end{eqnarray*} $10$ and $49$ are coprime so we can use the Chinese remainder theorem. We have $y=349$ and $x=8725$.
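The answer can be verified directly; since $\operatorname{lcm}(250,1225)=12250$, the general solution is $x \equiv 8725 \pmod{12250}$.

```python
from math import gcd

# Verify that x = 8725 satisfies both congruences; the solution is unique
# modulo lcm(250, 1225).
x = 8725
assert x % 250 == 225
assert x % 1225 == 150

lcm = 250 * 1225 // gcd(250, 1225)
print(lcm)  # 12250
```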
Quickest general strategy for village meeting
This is not a complete answer, but a proof that a good enough partial strategy for "few" villagers is sufficient. If the villagers are located only in two groups at two opposite corners of the square, then the best (and only) strategy takes time $2\sqrt 2 < 2+\sqrt 2$. Suppose you have a not necessarily optimal strategy that takes time at most $T \ge 2+\sqrt 2$ to solve any square. (We know that those exist by the divide-and-conquer argument presented in the question.) If you can reach $n^2-1$ people in time $\frac 12 \sqrt 2 + \varepsilon$, then you can separate the square into $n^2$ small squares, use the strategy on the small squares, then return to the center, which gives you a strategy with a total time of $(\frac 12 \sqrt 2 + \varepsilon) + (1- \frac 1{2n})\sqrt 2 + \frac 1n T + (\frac 12 - \frac 1 {2n}) \sqrt 2$. As $n \to \infty$, this converges down to $2\sqrt 2 + \varepsilon$. Under the assumption that given enough people in the square you can make $\varepsilon$ converge to $0$, you can start with any reasonable strategy and turn it into a new strategy whose worst time converges to $2\sqrt 2$ as the number of villagers gets large. And from the previous worst-case examples, we know that $2\sqrt 2$, and the time $\frac 12 \sqrt 2 + \varepsilon$, are the best possible. Now I will prove that if you have enough people in the square, then you can reach $N$ villagers in close to $\frac 12 \sqrt 2$ time. Suppose you have $N$ villagers located in a tiny rectangular band of length $\frac 12 \sqrt 2$ and width $w$ and the headsman is starting in the middle of one of the extremities of the band. Then by visiting every villager in order and branching, you can reach out to them in time at worst $\frac 12 \sqrt 2 + (\frac 12 + \lfloor \log_2(N) \rfloor )w$ : the amount of "zigzagging" needed is spread out over all the villagers thanks to the branching. 
Finally, since the rectangle "covers" an angle of $2\arctan (w/\sqrt 2)$, you can cover a whole circle of radius $\sqrt 2/2$ with $\lceil \pi / \arctan (w / \sqrt 2) \rceil$ rectangles. Therefore, if there is $\lceil \pi / \arctan (w / \sqrt 2) \rceil (N-1)+1$ villagers in total, then by the pigeonhole principle there is a band containing $N$ of them, and thus there is a way to reach out to $N$ villagers in time $\frac 12 \sqrt 2 + (\frac 12 + \lfloor \log_2(N) \rfloor )w$ Picking $n=5$ and $T = 2+\sqrt 2$ we obtain a recursive strategy that works in time $2+\sqrt 2$ as long as we can reach $24$ people in time $(16-5\sqrt 2)/10 = 0.892893\ldots$ To manage this we need bands with width $w = (8/5-\sqrt 2)/(\frac 12+4) = (16/5 - 2\sqrt 2)/9 = 0.041286\ldots$ So we need $108$ bands, and finally $2485$ villagers. If you can devise a strategy that works in time $2+\sqrt 2$ for up to $2484$ villagers, then this method gives you a strategy that always works in this time.
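The constants of the recursive strategy with $n=5$, $T=2+\sqrt2$, $N=24$ can be recomputed mechanically:

```python
import math

# Recompute the constants for n = 5, T = 2 + sqrt(2), N = 24 villagers to reach.
sqrt2 = math.sqrt(2)

budget = (16 - 5 * sqrt2) / 10            # time available to reach the 24 people
w = (8 / 5 - sqrt2) / (1 / 2 + 4)         # solve sqrt2/2 + (1/2 + floor(log2 24)) w = budget
bands = math.ceil(math.pi / math.atan(w / sqrt2))
villagers = bands * (24 - 1) + 1

print(budget, w, bands, villagers)
```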
How can we check the gluing property of sheaf of ideals?
Just a remark: for ideal sheaves, and more generally subsheaves of a sheaf, the sheaf condition simplifies a little bit. If $F$ is a sheaf (say, of abelian groups), then a subsheaf $G$ of $F$ is given by subgroups $G(U) \subseteq F(U)$ for all opens $U$ with the following properties: If $s \in G(U)$ and $V \subseteq U$, then the restriction $s|_V$ (a priori only an element of $F(V)$) already lies in $G(V)$. If $U = \cup_i U_i$ and $s \in F(U)$ has the property that $s|_{U_i} \in G(U_i)$ for all $i$, then it already follows that $s \in G(U)$. If $F$ is the structure sheaf of a ringed space, this gives a description of ideal sheaves which is really useful in practice.
Obtain value of variable through inverse function
Let $S(x):= \frac{e^{2x}}{e^{2x}+1}$. Then we have $0<S(x)<1$ for all $x \in \mathbb R.$ Try to show that $S : \mathbb R \to (0,1)$ is bijective. To determine $S^{-1}$, let $y \in (0,1)$ and consider the equation $S(x)=y.$ Elementary computations give $$x= \frac{1}{2} \ln (\frac{y}{1-y}).$$ Thus $$S^{-1}(y)=\frac{1}{2} \ln (\frac{y}{1-y}).$$
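A quick round-trip check of the computed inverse:

```python
import math

def S(x):
    return math.exp(2 * x) / (math.exp(2 * x) + 1)

def S_inv(y):
    # the inverse derived above: x = (1/2) ln(y / (1 - y)), for y in (0, 1)
    return 0.5 * math.log(y / (1 - y))

for y in (0.1, 0.5, 0.9):
    print(y, S(S_inv(y)))  # recovers y
```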
Showing $P(A\cap C)\geq P(A)P(C)$ from $A \cap B \subset C \subset A \cup B$ without using the independence of $A$ and $B$
It seems like you are using something like $P((A \setminus C) \cup (A \cap C)) = P(A \setminus C)P(A \cap C)$ which is wrong. It is true, however, that $P((A \setminus C) \cup (A \cap C)) = P(A \setminus C) + P(A \cap C)$.
Kuratowski's closure-complement problem for the upper/lower bound functions
There are two main properties of interest for the $\newcommand{\up}{{\uparrow}}\newcommand{\dn}{{\downarrow}}\up,\dn$ functions (note that since these functions are dual to each other there are also dualized versions of the theorems below), which hold for any relation $\le$ at all: $A\subseteq B\to B\up\subseteq A\up$. Suppose that $A\subseteq B$ and $x\in B\up$; then given any $y\in A$ we have $y\in B$ so $y\le x$. Thus $x\in A\up$. $A\subseteq A\dn\up$. Suppose that $x\in A$, and $y\in A\dn$. Then $y\le x$ by definition of $\dn$, so $x\in A\dn\up$ by definition of $\up$. We can combine these to prove that $A\dn\up\dn=A\dn$: on the one hand we have $A\dn\subseteq A\dn\up\dn$ by the second theorem applied to $A\dn$, but on the other hand we also have $A\subseteq A\dn\up\to A\dn\up\dn\subseteq A\dn$. Thus we can eliminate any $\dn\up\dn$ or $\up\dn\up$ sequence. Now consider the case where $\le$ is a bounded partial order. Then $A\up$ contains $1$ for any $A$, while $x\in\{1\}\up\to 1\le x\to x=1$, so $1\in A\up\up\subseteq \{1\}\up\subseteq\{1\}$ and hence $A\up\up=\{1\}$. Dually, $A\dn\dn=\{0\}$. Thus we get the following list (plus the duals of listed sets): $$\begin{array}{ll} A&A\up\\ A\up\up=\{1\}&A\up\dn\\ A\up\up\dn=X&\require{cancel}\cancel{A\up\dn\up=A\up}\\ \cancel{A\up\up\up=A\up\up}&\cancel{A\up\dn\dn=A\dn\dn}\\ \cancel{A\up\up\dn\up=A\up\up}&\cancel{A\up\up\dn\dn=A\dn\dn}\\ \end{array}$$ Thus we have a total of $8$ sets generated (note that $A$ and $A\up\up\dn$ are their own duals), and this bound is saturated even in the complete total order example: Let $X=[0,3]$ and $A=[1,2]$ with the standard order. Then $$\begin{array}{ll} A=[1,2],&A\up=[2,3],&A\up\up=\{3\},&A\up\dn=[0,2],\\ X=[0,3],&A\dn=[0,1],&A\dn\dn=\{0\},&A\dn\up=[1,3]\end{array}$$ are the required distinct sets. If $\le$ is a total order, we can make progress in a different direction. 
If $x\in A\up\up$, then either $A\up$ is empty (so $A\up\up=X$), or there is some $y\in A\up$, and then for any $z\in X$ either $z\le y\le x$ or $y\le z$ hence $z\in A\up$ and $z\le x$, so $x$ is the top element. Thus we have either that $A$ is bounded (and hence follows the same pattern as above), or $A\up\up=\emptyset$ replaces $\{1\}$ above; since $A\up\up\dn=\emptyset\dn=X$ is still true this does not generate any new sets over the BPO case. This leaves open the case of $\le$ an unbounded partial order.
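The saturation of the bound can also be checked on the discrete analogue of the example above ($X=\{0,1,2,3\}$, $A=\{1,2\}$, usual order):

```python
# Upper/lower bound operators for a finite totally ordered set.
def up(A, X):
    return frozenset(x for x in X if all(a <= x for a in A))

def down(A, X):
    return frozenset(x for x in X if all(x <= a for a in A))

# Discrete analogue of X = [0,3], A = [1,2]:
X = frozenset(range(4))
A = frozenset({1, 2})

generated = {A, up(A, X), up(up(A, X), X), down(up(A, X), X),
             X,                      # X = A↑↑↓ = down({3}, X)
             down(A, X), down(down(A, X), X), up(down(A, X), X)}
print(len(generated))  # 8 distinct sets, saturating the bound
```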
Restriction of Continuous Function on Compact Hausdorff Space
We could argue that $K=\emptyset$ solves the problem. This part is wrong, or inconclusive at best: I'll rectify it in the following paragraph. For a non-empty subset, let's preliminarily consider a hypothetical non-empty compact subset $Q$ such that $f[Q]\subseteq Q$. Then, you can consider the family of compact sets $Q_0=Q$ and $Q_{n+1}=f[Q_n]$: in other words, $Q_n=f^n[Q]$. By the hypothesis that $f[Q]\subseteq Q$, we have that $Q_{n+1}\subseteq Q_n$ for all $n$, which implies that this is a decreasing sequence of closed ($X$ is Hausdorff) non-empty compact set. Therefore its intersection $K=\bigcap_{n\in\Bbb N} Q_n$ is non-empty. Moreover, $f[K]=K$ (not necessarily, that I know of: as it's been pointed out, only $f[K]\subseteq K$ is immediate). Rectification of part 1: As it has been pointed out, the procedure used to devise a non-empty subset $K$ such that $f[K]=K$ starting from some non-empty compact $Q$ such that $f[Q]\subseteq Q$ needs to be amended. Let $\kappa$ be an inital ordinal strictly larger than $\lvert X\rvert$ and define by transfinite induction the generalised sequence $Q_\bullet: \kappa+1\to \mathcal \{\text{compact subsets of }X\}$ $$\begin{cases}Q_0=Q\\ Q_{\beta+1}=f[Q_\beta]\\ Q_{\beta}=\bigcap_{\gamma<\beta} Q_\gamma&\text{if }\beta\text{ is a limit ordinal}\end{cases}$$ Notice that the hypothesis that $X$ is Hausdorff is needed for the sequence $Q_\bullet$ to be well-defined, i.e. to guarantee that its range stays in the family of compact subsets of $X$. Now, it is clear that $Q_\bullet$ is weakly decreasing. It cannot be strictly decreasing, because otherwise $\lvert X\setminus Q_\kappa\rvert\ge \kappa>\lvert X\rvert$. Therefore, there is some $\beta$ such that $Q_\beta=Q_{\beta+1}$. $K=Q_\beta$ satisfies $f[K]=K$ by definition, so let's pick the one corresponding to the least such ordinal $\beta$. We just need to prove that $Q_\beta\ne \emptyset$. 
If $\beta=\gamma+1$, for some ordinal, then we are ok, because $f[Q_\gamma]=\emptyset$ implies $Q_\gamma=\emptyset$, against minimality of $\beta$. If $\beta$ is a limit ordinal, then $\bigcap_{\gamma<\beta} Q_\gamma=\emptyset$ implies that $\{X\setminus Q_\gamma\}_{\gamma<\beta}$ is an open (recall that $X$ is Hausdorff) cover of $X$. Therefore there is a finite subcover $\{X\setminus Q_{\gamma_1},\cdots, X\setminus Q_{\gamma_t}\}$, say, with $\gamma_1<\cdots<\gamma_t<\beta$. But since $X\setminus Q_{\gamma_1}\subseteq\cdots\subseteq X\setminus Q_{\gamma_t}$, we have that $Q_{\gamma_t}=\emptyset$ and $f\left[Q_{\gamma_t}\right]=\emptyset$, against minimality of $\beta$. This procedure may be used on the whole space $X$. Another way to find a starting compact is to select a non-empty subset $U$ and consider $U^f:=\bigcup_{n\in\Bbb N} f^n[U]$ and $Q=\overline{U^f}$. Notice that $f\left[U^f\right]\subseteq U^f$, and therefore $f[Q]\subseteq \overline{U^f}=Q$. Remark: The transfinite induction I've made uses choice in its cardinality argument. However, that passage may be avoided by simply having $\kappa$ be an ordinal which does not inject into the family of compact sets of $X$ (for instance, the Hartogs number of $\mathcal P(X)$). Then $Q_\bullet$ cannot be strictly decreasing, because it cannot be injective.
Is traveling east in a positive imaginary direction the same thing as traveling north in a positive real direction?
That's a strange but interesting question. I think that with the right conventions a reasonable answer is "yes". Start by thinking about the complex numbers in the usual way as the Euclidean coordinate plane. Then it's reasonable to think of the four compass directions ENWS as specifying travel parallel to the coordinate axes in the obvious way. "Adding $5$" to any complex number takes you five units to the right (east). "Adding $5i$" to any complex number takes you five units up (north). Then of course "go $5$ miles northeast" is the same as "go $5/\sqrt{2}$ miles east, then $5/\sqrt{ 2}$ miles north", or just "go $5(1+i)/\sqrt{2}$". All in all, somewhat weird but a nice idea.
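The compass arithmetic can be spelled out with Python's built-in complex numbers (east $=+1$, north $=+1j$):

```python
import math

# "Go 5 miles northeast" as a single complex step: east = +1, north = +1j.
step = 5 * (1 + 1j) / math.sqrt(2)
print(abs(step))             # total distance travelled: approximately 5
print(step.real, step.imag)  # equal east and north components, 5/sqrt(2) each
```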
Finite time controllability Gramian
The definition you present is clear and unambiguous. Notice that if the Gramian in positive-definite, then it is positive definite in any interval that contains the 1st one. It is thus straightforward to establish your statements concerning increasing and decreasing endpoints. Alternative definitions of controllability can be made, but they cannot be more clear than the one you presented. I'd say forget them. It is only in the case of time-invariant systems that we can simply say "controllable" without mentioning the interval at all, in all other cases the definition that specifies both endpoints avoids needless complications.
Jumping frog problem
Hint: After $n$ turns the frog is back where it started, by your formula. (If $n$ is odd then $n\mid n(n+3)/2$.) It is also about to make a jump of length $2$ (mod $n$). So in the next set of $n$ jumps it will visit the same locations as the first $n$, in the same order, and this pattern will repeat. So it suffices to show that the frog doesn't visit all the locations in the first $n$ jumps.
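A small simulation of the hint, assuming (as the formula $n(n+3)/2$ suggests) that the $k$-th jump has length $k+1$ and positions are taken mod $n$:

```python
# Hedged sketch: assuming the k-th jump (k = 1, 2, ...) has length k + 1 and
# the n locations sit on a circle (positions mod n), as the hint's formula
# n(n+3)/2 = 2 + 3 + ... + (n+1) suggests.
def visited(n, jumps):
    pos, seen = 0, [0]
    for k in range(1, jumps + 1):
        pos = (pos + k + 1) % n
        seen.append(pos)
    return seen

for n in (5, 7, 9):                       # odd n
    first = visited(n, n)
    assert first[-1] == 0                 # back at the start after n jumps
    assert visited(n, 2 * n)[n:] == first # the next n jumps repeat the pattern
    print(n, sorted(set(first)))          # locations visited in one period
```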
Expression in polar coordinates for a homogeneous smooth function of degree $k$
Let us write $r=|x|$ and $\omega=\frac{x}{|x|}\in S^{n-1}$. Then $$u(x) = u\left(|x|\cdot \frac{x}{|x|}\right)=u(r\omega) = r^k u(\omega)$$ As you can see, no assumptions on $u$ are required. This is valid for every function $u:\mathbb{R}^n\setminus\{0\}\to\mathbb{C}$ homogeneous of degree $k$.
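A concrete illustration with a sample homogeneous function (here $u(x)=x_1^2x_2$ on $\mathbb{R}^2\setminus\{0\}$, which is homogeneous of degree $k=3$):

```python
import math

# u(x) = x1^2 * x2 is homogeneous of degree k = 3 on R^2 \ {0}.
def u(x):
    return x[0]**2 * x[1]

x = (3.0, 4.0)
r = math.hypot(*x)                 # |x| = 5
omega = (x[0] / r, x[1] / r)       # x/|x|, a point of S^1
print(u(x), r**3 * u(omega))       # both 36, up to rounding
```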
Prove $n+1$ items in $n$ buckets implies some bucket has $2$ items.
Induction will do it. If you have $n+1$ items in $n$ buckets, with no more than one item per bucket, remove one nonempty bucket along with the item in it to be left with the exact same situation with $n$ replaced by $n-1$. Clearly, the base case $n=0$ — one item in zero buckets — is impossible.
Norm of multiplication operator by a Fourier transform of an $L^1$ function
As you remarked, for $g∈ L^1$, we have $\|\mathcal{M}_{\widehat g}\|_{L^2\to L^2} = \|\widehat g\|_{L^\infty} ≤ \|g\|_{L^1}$. $\bullet$ If $g≥ 0$, then $\|g\|_{L^1} = ∫ g = \widehat{g}(0) ≤ \|\widehat g\|_{L^\infty}$. $\bullet$ However, in the general case, this reverse inequality is false. See for example the answer of Giuseppe Negro here: Estimate the $L^1$-norm of the Fourier transform, or the answer of David C Ullrich here: Is the inverse of the Fourier transform $L^1(\mathbb R)\to (C_0(\mathbb R),\Vert \cdot \Vert_\infty)$ bounded?.
Why boundary conditions in Sturm-Liouville problem are homogeneous?
If you want a solution with $\gamma_1\ne 0$ and/or $\gamma_2\ne 0$, then you can subtract a function from your solution that satisfies the non-zero endpoint conditions, and you have effectively converted the problem to an inhomogeneous problem with homogeneous endpoint conditions, which can be solved using separation of variables.
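A minimal sketch of the homogenization trick on an interval $[a,b]$ (here $a$, $b$, $\gamma_1$, $\gamma_2$ are placeholder data, not from the original problem): subtract the linear function matching the prescribed endpoint values, so the new unknown has homogeneous endpoint conditions.

```python
# Sketch: subtract the linear function w with w(a) = gamma1, w(b) = gamma2,
# so that v = u - w satisfies v(a) = v(b) = 0. (a, b, gamma1, gamma2 are
# placeholder values for illustration.)
a, b = 0.0, 1.0
gamma1, gamma2 = 3.0, -2.0

def w(x):
    return gamma1 + (gamma2 - gamma1) * (x - a) / (b - a)

assert w(a) == gamma1 and w(b) == gamma2
# Any u with u(a) = gamma1 and u(b) = gamma2 now gives v = u - w
# with v(a) = v(b) = 0, i.e. homogeneous endpoint conditions.
print(w(a), w(b))
```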
resolve for theta when using $2$ cosines
HINT: Draw $\cos(\theta-90^\circ)$. What other function (you know) does it look like? The product of $\cos$ and the other function is pretty common...the wiki page on trigonometric identities is great! One more hint? $\sin(A+B)=\sin A\cos B+\sin B\cos A$
If two planes in $\mathbb{R}^3$ pass by the origin, do they necessarily intersect at multiple points?
Your drawing does not show 2 planes. A plane does not have edges; planes extend infinitely in all directions. You can think of each one as cutting the entire space into two parts. This means that two distinct planes through the origin will actually intersect in a line; only when the planes are equal do they intersect in a plane.
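For two distinct planes through the origin with normal vectors $n_1$ and $n_2$, the intersection line has direction $n_1\times n_2$; a small check:

```python
# Two distinct planes through the origin, given by normal vectors n1 and n2,
# intersect in the line through 0 with direction n1 x n2.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n1 = (1, 0, 0)        # the plane x = 0
n2 = (0, 1, 0)        # the plane y = 0
d = cross(n1, n2)
print(d)              # (0, 0, 1): the z-axis

# every multiple of d lies on both planes
assert all(dot(n1, [t * c for c in d]) == 0 and
           dot(n2, [t * c for c in d]) == 0 for t in (-2, 0, 3))
```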
Prove $f:[0,2] \to \mathbb{R}$ is a continuous function where $f(x) = \sqrt{x}$
If $2\geq a > 0$, $|\sqrt{x}-\sqrt{a}| = \dfrac{|x-a|}{\sqrt{x}+\sqrt{a}}\leq \dfrac{|x-a|}{\sqrt{a}}<\dfrac{\delta}{\sqrt{a}}< \epsilon\iff \delta < \epsilon\cdot \sqrt{a}$. This suggests that choosing $\delta = \dfrac{\epsilon\cdot \sqrt{a}}{2}$ will work. If $a = 0$, $|\sqrt{x}-\sqrt{0}|= \sqrt{|x-0|}<\sqrt{\delta}<\epsilon\iff \delta < \epsilon^2$. Thus choosing $\delta = \dfrac{\epsilon^2}{2}$ will work.
Arrangement of houses with 2 colors
One way to show that $25$ is the maximum would be to observe that the chain $$11\to22\to6\to17\to1\to12\to23\to7\to18\to2\to13\to24\to8\to19\to3\to14\to25\to9\to20\to4\to15\to26\to10\to21\to5\to16$$ where each step in the chain either goes up $11$ or down $16$, accounts for all the numbers from $1$ to $26$. This shows that in any stretch of $26$ houses, all houses have the same color as the $11$th house.
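The two claims about the chain (every step is $+11$ or $-16$, and every number from $1$ to $26$ is hit) are easy to verify mechanically:

```python
chain = [11, 22, 6, 17, 1, 12, 23, 7, 18, 2, 13, 24, 8, 19, 3, 14,
         25, 9, 20, 4, 15, 26, 10, 21, 5, 16]

# every step goes up 11 or down 16 ...
assert all(b - a in (11, -16) for a, b in zip(chain, chain[1:]))
# ... and the chain accounts for each of 1..26 exactly once
assert sorted(chain) == list(range(1, 27))
print(len(chain))  # 26
```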
Unrationalize Denominator
$$\frac{1}{2\sqrt{\frac{1}{2}}}=\frac{1}{\sqrt{4}\sqrt{\frac{1}{2}}}=\frac{1}{\sqrt{\frac{4}{2}}}=\frac{1}{\sqrt{2}}$$ Now you can multiply by "1": $$\frac{1}{\sqrt{2}}\frac{1}{1}=\frac{1}{\sqrt{2}} \frac{\sqrt{2}}{\sqrt{2}}=\frac{\sqrt{2}}{(\sqrt{2})^2}=\frac{\sqrt{2}}{2}$$
Expressing Complex Number in terms of its conjugate
No. All of those are analytic and compositions of analytic functions are analytic.
Number of options to fill matrix with increasing rows and columns
There is a bijection between these matrices and the ways of nesting parentheses in a typical algebra (which are well known to be counted by the Catalan numbers). Take a "valid" nesting like (()(()())), where valid means that the number of opening and closing parentheses is the same, and that reading left-to-right the number of opening parentheses so far is never less than the number of closing parentheses. Now enumerate the parentheses (regardless of type) from left to right with $1,2,\dots,2n$, and put the labels of the opening parentheses in one row and the labels of the closing parentheses in the other. The requirements of monotonicity in both directions of the $2\times n$ matrix are then obviously fulfilled.
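The bijection can be enumerated exhaustively for small $n$; for $n=4$ the count should be the Catalan number $C_4=14$:

```python
from itertools import product

def valid(seq):
    # balanced parenthesis word: prefix counts of '(' never fall below ')'
    depth = 0
    for c in seq:
        depth += 1 if c == '(' else -1
        if depth < 0:
            return False
    return depth == 0

n = 4
matrices = []
for seq in product('()', repeat=2 * n):
    if not valid(seq):
        continue
    top = [i + 1 for i, c in enumerate(seq) if c == '(']     # labels of '('
    bottom = [i + 1 for i, c in enumerate(seq) if c == ')']  # labels of ')'
    assert top == sorted(top) and bottom == sorted(bottom)   # rows increase
    assert all(t < b for t, b in zip(top, bottom))           # columns increase
    matrices.append((tuple(top), tuple(bottom)))

print(len(matrices))  # Catalan number C_4 = 14
```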
Evaluate $\sum_{n=1}^\infty 2^{-\frac{n}{2}}$
We have that $$\sum_{n=1}^\infty 2^{-\frac{n}{2}}=\sum_{n=1}^\infty \left(\frac1{\sqrt 2}\right)^n$$ then refer to the geometric series.
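The geometric series with ratio $r=\frac1{\sqrt2}$ sums to $\frac{r}{1-r}=\frac{1}{\sqrt2-1}=\sqrt2+1$; a numerical check of the partial sums:

```python
from math import sqrt

# Geometric series with ratio r = 1/sqrt(2): sum_{n>=1} r^n = r/(1-r) = sqrt(2)+1.
r = 1 / sqrt(2)
partial = sum(r**n for n in range(1, 60))
print(partial, sqrt(2) + 1)  # agree to many digits
```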
Calculus related rates problem - the relation between distance and time
Start by defining your axes and variables. Let East be $+x$, North be $+y$, measured in meters with time in seconds. If A starts at the origin, B starts at $(350,0)$ As A is riding North, his location at time $t$ is $(0,5t)$ What is B's location after he starts riding? Now find the distance $d$ as a function of time. You are then asked for $\frac {dd}{dt}$ at 25 minutes.
proving $\csc^2 \left( \frac{\pi}{7}\right)+\csc^2 \left( \frac{2\pi}{7}\right)+\csc^2 \left( \frac{4\pi}{7}\right)=8$
$(1)$ Using this, $\sin 7x=7s-56s^3+112s^5-64s^7$ where $s=\sin x$. If $\sin 7x=0$, then $7x=n\pi$ where $n$ is any integer. So, $x=\frac{n\pi}7$ where $n=0,1,2,3,4,5,6$. So, the roots of $7s-56s^3+112s^5-64s^7=0$ are $\sin\frac{n\pi}7$ where $n=0,1,2,\cdots 5,6$. So, the roots of $64s^6-112s^4+56s^2-7=0$ are $\sin\frac{n\pi}7$ where $n=1,2,\cdots 5,6$. So, the roots of $64t^3-112t^2+56t-7=0$ are $\sin^2\frac{n\pi}7$ where $n=1,2,4$ or $3,5,6$. So, the equation whose roots are $\csc^2\frac{n\pi}7$ where $n=1,2,4$ or $3,5,6$ is $64\frac1{t^3}-112\frac1{t^2}+56\frac1t-7=0\iff 7t^3-56t^2+112t-64=0$. So, $\csc^2 \left( \frac{\pi}{7}\right)+\csc^2 \left( \frac{2\pi}{7}\right)+\csc^2 \left( \frac{4\pi}{7}\right)$ is the sum of the roots $=\frac{56}7=8$. $(2)$ $\cos2x=2c^2-1$ where $c=\cos x$, and $\cos4x=2\cos^22x-1=2(2c^2-1)^2-1=8c^4-8c^2+1$. If $\cos4x=0$, then $4x=(2m+1)\frac\pi2$, i.e. $x=\frac{(2m+1)\pi}8$ where $m=0,1,2,3$. So, the equation whose roots are $\cos\frac{(2m+1)\pi}8$ where $m=0,1,2,3$ is $8c^4-8c^2+1=0$. Now, as $$\cos2u=\frac{1-\tan^2u}{1+\tan^2u}\implies \cos\frac{(2r+1)\pi}8=\frac{1-\tan^2\frac{(2r+1)\pi}{16}}{1+\tan^2\frac{(2r+1)\pi}{16}}$$ If $y=\tan^2\frac{(2r+1)\pi}{16}$, then $y=\frac{1-c}{1+c}\implies c=\frac{1-y}{1+y}$. So, $8c^4-8c^2+1=0$ becomes $$8\left(\frac{1-y}{1+y}\right)^4-8\left(\frac{1-y}{1+y}\right)^2+1=0$$ whose roots are $y=\tan^2\frac{(2r+1)\pi}{16}$ where $r=0,1,2,3$, or, $$8(y-1)^4-8(y-1)^2(y+1)^2+(y+1)^4=0$$ On simplification we get $y^4-28y^3+70y^2-28y+1=0$. So, $\sum_{0\le r\le3}\tan^2\frac{(2r+1)\pi}{16}=28$.
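Both sums can be confirmed numerically:

```python
import math

# (1) csc^2(pi/7) + csc^2(2pi/7) + csc^2(4pi/7) should be 8.
s1 = sum(1 / math.sin(k * math.pi / 7)**2 for k in (1, 2, 4))

# (2) sum of tan^2((2r+1)pi/16) for r = 0, 1, 2, 3 should be 28.
s2 = sum(math.tan((2 * r + 1) * math.pi / 16)**2 for r in range(4))

print(s1, s2)  # 8 and 28, up to rounding
```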
Absolute Continuity defined by Necas
This theorem basically tells you that along almost every line parallel to the axes, the function behaves like an absolutely continuous function. In other words, the theorem tells you that the Sobolev function behaves very well at most points, and the bad points have measure $0$ and hence can in some cases be ignored. Maybe you should compare this result with the one-dimensional case. A one-dimensional Sobolev function is absolutely continuous, with no bad points; but when the dimension rises, the Sobolev function is no longer equal to an absolutely continuous function. However, most of it can still be represented by an absolutely continuous function. As for your second question: this is not a new definition of absolute continuity, but rather a theorem that tells you which part of a Sobolev function can be identified with an absolutely continuous function.
Covariation of Wiener processes, $\langle W_1,W_2\rangle_t = \rho t$.
This does not have to be true, this is a (very strong) hypothesis about the joint distribution of two Brownian motions $(W_1(t))_{t\geqslant0}$ and $(W_2(t))_{t\geqslant0}$ defined on a common probability space, which may be true or not, hence, trying to prove it is pointless. At most, one can note that $\rho t=\mathbb E(W_1(t)W_2(t))$ and, since $\mathbb E(W_1(t)^2)=\mathbb E(W_2(t)^2)=t$, the variance-covariance inequality imposes that indeed $|\rho|\leqslant1$.
Open intervals are connected using a continuous function
You can use two facts. The first one is that $\mathbb{R}$ is connected and the second one is that a continuous map preserves the connectedness.
What is the probability that there will not be two adjacent children in the row with green hats?
Hint: Let $N$ denote the number of sums $n_0+n_1+n_2+n_3+n_4+n_5=15$ where $n_0$ and $n_5$ are nonnegative integers and $n_1,n_2,n_3,n_4$ are positive integers. Think of $n_0$ as, e.g., the number of children to the left of the leftmost child that wears a green hat. Let $M$ denote the number of sums $n_0+n_1+n_2+n_3+n_4+n_5=15$ where $n_i$ is a nonnegative integer for $i=0,\dots, 5$. Both problems can be solved with stars and bars. Then the probability takes the value: $$\frac{N}{M}$$
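Assuming (as the gap sums above suggest) a row of $20$ children with $5$ green hats, the stars-and-bars counts can be cross-checked by brute force: the hint's $N$ becomes $\binom{16}{5}$ after substituting $n_i\mapsto n_i-1$ for $i=1,\dots,4$, and $M=\binom{20}{5}$.

```python
from itertools import combinations
from math import comb

# Assumption (inferred from the sums above): 20 children in a row, 5 green
# hats, and we count placements with no two green hats adjacent.
total = comb(20, 5)                                # M = C(20, 5)
favourable = sum(1 for pos in combinations(range(20), 5)
                 if all(b - a >= 2 for a, b in zip(pos, pos[1:])))

print(favourable, total, favourable / total)
assert favourable == comb(16, 5)                   # N by stars and bars
```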
How can we find a new sum of multiplications based on a previous one?
Since it is a finite sum you can multiply out the brackets and split up the sums, to give $$\sum_{k=0}^{2^n-1} (a_k+c)(b_k+d) = \sum_{k=0}^{2^n-1} a_kb_k + d\sum_{k=0}^{2^n-1} a_k + c\sum_{k=0}^{2^n-1} b_k + 2^ncd$$ So we calculate $\sum a_k$ (which is $2^n$ times the arithmetic mean of the $a_k$s) and $\sum b_k$ (analogous), and then it's just simple arithmetic. But I'm not sure what you mean about finding the "fastest way". There is another perspective: you can define $$ \begin{align} \mathbf{a} &= (a_0, a_1, \dots, a_{2^n-1}) \\ \mathbf{b} &= (b_0, b_1, \dots, b_{2^n-1}) \end{align}$$ Then we have $$\sum_{k=0}^{2^n-1} a_kb_k = \mathbf{a} \cdot \mathbf{b}$$ If, further, we define $$\mathbf{1} = (\underbrace{1, 1, \dots, 1}_{2^n})$$ then we get $$\sum_{k=0}^{2^n-1} (a_k+c)(b_k+d) = (\mathbf{a}+c\mathbf{1}) \cdot (\mathbf{b} + d\mathbf{1})$$ I'm not sure if this geometric picture helps, but it's something you could investigate.
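The expanded identity is easy to sanity-check on random data:

```python
import random

# Check sum (a_k + c)(b_k + d) = sum a_k b_k + d sum a_k + c sum b_k + 2^n c d
random.seed(1)
n = 3
a = [random.gauss(0, 1) for _ in range(2**n)]
b = [random.gauss(0, 1) for _ in range(2**n)]
c, d = 1.7, -0.4

lhs = sum((ai + c) * (bi + d) for ai, bi in zip(a, b))
rhs = (sum(ai * bi for ai, bi in zip(a, b))
       + d * sum(a) + c * sum(b) + 2**n * c * d)
print(lhs, rhs)  # equal up to rounding
```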
Finding the centraliser of an element in $S_5$
For $\sigma \in S_5$ we have that $(12)(34) = \sigma^{-1}(12)(34)\sigma = (\sigma(1), \sigma(2))(\sigma(3), \sigma(4))$ holds if and only if $\{\sigma(\{1,2\}),\sigma(\{3,4\})\}=\{\{1,2\},\{3,4\}\}$. Thus we have two possibilities: either $\sigma(\{1,2\})=\{1,2\}$ and $\sigma(\{3,4\})=\{3,4\}$, or $\sigma(\{1,2\})=\{3,4\}$ and $\sigma(\{3,4\})=\{1,2\}$. In the first case, we get the possibilities $\mathrm{id},(12),(34),(12)(34)$; in the second case, we get the possibilities $(13)(24),(14)(23),(1324),(1423)$.
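The eight elements can be confirmed by brute force over all of $S_5$ (acting on $\{0,\dots,4\}$):

```python
from itertools import permutations

# Brute-force the centraliser of (12)(34) in S5, acting on {0,1,2,3,4}.
tau = (1, 0, 3, 2, 4)   # the permutation (12)(34) in zero-based one-line notation

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

centraliser = [p for p in permutations(range(5))
               if compose(p, tau) == compose(tau, p)]
print(len(centraliser))  # 8
```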
Killing fields, about the definition
To act by isometries means that if $\phi_t$ is the flow of $X$, then $\phi_t^*g=g$, where $g$ is the metric; explicitly, $g_x(X,Y)=g_{\phi_{-t}(x)}(d\phi_{-t}(X),d\phi_{-t}(Y))$.
Is a bar over a significant figure needed for rounding?
As I suggested in a comment, you could use scientific notation. Alternatively, since you state your teacher doesn't want you to use this, as Wikipedia's Significant rules explained section of its "Significant figures" article states: An overline, sometimes also called an overbar, or less accurately, a vinculum, may be placed over the last significant figure; any trailing zeros following this are insignificant. For example, $13\bar{0}0$ has three significant figures (and hence indicates that the number is precise to the nearest ten). Less often, using a closely related convention, the last significant figure of a number may be underlined; for example, "$2\underline{0}00$" has two significant figures. In the combination of a number and a unit of measurement, the ambiguity can be avoided by choosing a suitable unit prefix. For example, the number of significant figures in a mass specified as $1300$ g is ambiguous, while if stated as $1.3$ kg it is not. I don't know about the history & reasons for using one option compared to the other, but one small issue I can see with using an overbar is that it may be somewhat confusing in situations where this is also used to indicate a repeating decimal, e.g., $2.3\bar{4} = 2.34444\ldots\;$ . Note that the first two options are also used on other Web sites, e.g., "Significant figures" uses, in its practice problems, an overline in the third & an underline in its fifth. As for whether or not something like this is required at all, the Wikipedia article says: Zeros to the right of the significant figures are significant if and only if they are justified by the precision of their derivation. 
Nonetheless, to be unambiguous & to clearly differentiate your answer from the case of there possibly being $5$ significant digits instead in your particular case of $49,000$, I suggest you should explicitly indicate $9$ is the last significant digit, with the most commonly used options (without scientific notation) being an overbar (i.e., so it's $4\bar{9},000$) or an underbar (i.e., so it's $4\underline{9},000$). Alternatively, as suggested by the third option & which Dan stated in a comment, you can also use a different unit of measurement, in particular, you could say it's $4.9\text{ m}^2$ instead since $10,000\text{ cm}^2 = 1\text{ m}^2$.
Inductive Definition on the set of strings
The set $\Sigma^*$ contains all strings. The set $\Sigma^+$ contains all non-empty strings. Your inductive definition of $\Sigma^*$ will go like this: a string is either empty or of the form $sa$ where $s$ is a string and $a$ is a character. So to form a string, you start with the empty string and keep adding characters at the end.
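The inductive definition can be modelled directly in code: a string is either the empty string or $sa$ for a string $s$ and a character $a$ (here represented as nested pairs, purely for illustration):

```python
# Model the inductive definition: a string is either EMPTY or (s, a)
# for a previously built string s and a character a.
EMPTY = ()

def append(s, a):
    return (s, a)

def to_str(s):
    # unfold the inductive structure back into an ordinary Python string
    return "" if s == EMPTY else to_str(s[0]) + s[1]

w = append(append(append(EMPTY, "a"), "b"), "c")
print(to_str(w))  # abc
```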
Probability and series. (I just need help with the arguments provided )
All the events $[X=2k]$ are pairwise disjoint, hence $$ P(X \text{ is even})=P\left(\bigcup_{k\geq1} [X=2k]\right)=\sum_{k\geq1}P([X=2k]). $$
Is the quotient $X/G$ homeomorphic to $\tilde{X}/G'$?
By definition, $q \circ p(x) = \psi(x)=\psi(y) = q \circ p(y) \in X/G$. So $p(x),p(y)$ are in the same orbit of the action of $G$ on $X$. Pick $g \in G$ such that $g \cdot p(x) = p(y)$. In $\tilde X$, the point $x$ is a lift of $p(x)$ and the point $y$ is a lift of $p(y)$. As shown in my answer to your previous question, there exists $g' \in G'$ which is a lift of $g$ such that $g' \cdot x = y$.
A question about variable substitution
Since your last integral is over an interval of the length of the period ($\pi-x-(-\pi-x)=2\pi$), it is equal to $\int_{-\pi}^\pi f(x-y) \, dy$ and you're done. See here: Integral of periodic function over the length of the period is the same everywhere (add $\pi+x$ to both limits of integration to see it clearly.)
How many elements are there in this quotient ring?
This is how I look at such problems. First of all, $\Bbb Z[\sqrt{2}]\cong \Bbb Z[x]/(x^2-2)$. Then using isomorphism theorems, $\Bbb Z[\sqrt{2}]/(17)\cong \Bbb Z[x]/(x^2-2,17)\cong(\Bbb Z/(17))[x]/(x^2-2)=\Bbb F_{17}[x]/(x^2-2)$. (I'm taking some liberties with the notation: the parentheses around elements denote the ideal generated in the ring in context. That's why even though the $(17)$ all look alike, they're actually different sets, as they are generated in their respective rings.) So the question amounts to figuring out the structure of $\Bbb F_{17}[x]/(x^2-2)$, but quotients of polynomial rings over fields are pretty easy to analyze. The ideal $(x^2-2)$ is going to be prime iff $x^2-2$ is irreducible over $\Bbb F_{17}$, but you'll discover quickly that it has two distinct roots over this field, and is reducible. Given the two roots $\alpha,\beta$, the Chinese remainder theorem says that $\Bbb F_{17}[x]/(x^2-2)\cong \Bbb F_{17}[x]/(x-\alpha)\times \Bbb F_{17}[x]/(x-\beta)\cong \Bbb F_{17}\times \Bbb F_{17}$, so the ring has $289$ elements.
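The "discover quickly" step is a one-liner:

```python
# Find the roots of x^2 - 2 over F_17.
roots = [a for a in range(17) if (a * a - 2) % 17 == 0]
print(roots)  # [6, 11]: so x^2 - 2 is reducible over F_17

# By CRT, F_17[x]/(x^2 - 2) = F_17 x F_17, which has 17 * 17 = 289 elements.
assert len(roots) == 2 and 17 * 17 == 289
```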
Conceptual question of coplanar matrix
To be honest I find the term 'coplanar matrix' a bit confusing (this is not criticism of you, but of the person introducing this term). A set of points in some space (say $\mathbb{R}^n$) is said to be coplanar if they all lie in a single two-dimensional plane. From your question I infer that the definition of coplanar matrix is this: An $n \times n$-matrix is coplanar if the $n$ points in $\mathbb{R}^n$ described by its columns lie in the same plane through the origin or equivalently An $n \times n$-matrix is coplanar if the $n + 1$ points in $\mathbb{R}^n$ described by its columns and the zero vector are coplanar in the ordinary sense. With this cleared up the answer to the question is to look at the column-space of the matrix, that is the space spanned by the columns of the matrix, i.e. the set of all linear combinations of the columns. This is a $k$-dimensional subspace of $\mathbb{R}^n$ for some number $k$ (with $0 \leq k \leq n$). This $k$ is by definition the rank of the matrix. If all columns lie in the same plane through the origin, then all their linear combinations do too and so the entire column space lies in this plane. In other words, we have a $k$-dimensional space that is somehow contained in a plane, that is: in a 2-dimensional space. It is intuitively obvious that this can only be the case if $k \leq 2$. This hopefully answers your first question. For the second question: if $n > 2$ (so the matrix has more than 2 rows) then a coplanar matrix can indeed not be invertible. Conceptually this can be seen by interpreting the matrix as a linear map: this map maps the entire $n$-dimensional space into the plane spanned by its columns. Necessarily this means mapping various points in space to the same point in the plane. Now we can see why it is hard to write an inverse: if many points in space are mapped to one point $x$ in the plane, to which of the original points should the inverse map send $x$? EDIT: here is a concrete example. 
Suppose you have the map $M$ that sends $(x, y, z) \in \mathbb{R}^3$ to $(x, y, 0)$. So after applying the map every point lies in the $x, y$-plane, making the matrix corresponding to this map coplanar. (I leave it to you to actually write down the matrix.) Now the problem with finding an inverse map $M^{-1}$ is clear. $M$ maps $(1, 2, 3)$ to $(1, 2, 0)$. It also maps $(1, 2, 4)$ to $(1, 2, 0)$. Now to which point should the map $M^{-1}$ send $(1, 2, 0)$? To $(1, 2, 3)$? To $(1, 2, 4)$? To something else? This is unanswerable so $M^{-1}$ cannot exist.
integral of square of Brownian motion
The expectation is easy to calculate, using Fubini's theorem, which applies since the integrand is positive: $$ \begin{align} E\left[\int_0^t B(s)^2 ds\right] = \int_0^tE[B(s)^2]ds = \int_0^ts\,ds = \frac{t^2}{2} \end{align} $$
How to find the value of 5 variables
A quick brute-force Python script reveals $$9^5-2+6^3=59263$$
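The script itself isn't shown; here is a minimal reconstruction, under the assumption (mine, since the question isn't quoted) that the task was to find single digits $a,b,c,d,e$ with $a^b - c + d^e = 59263$:

```python
from itertools import product

target = 59263

# Hypothetical reconstruction of the brute-force search: try every
# assignment of the five digit variables.
solutions = [(a, b, c, d, e)
             for a, b, c, d, e in product(range(10), repeat=5)
             if a**b - c + d**e == target]
```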
On the impossibility of proving certain problems using double counting.
According to this question on Math Overflow ("Recursions which define polynomials") there is no known combinatorial interpretation of the numbers $$A(m,n) = \frac{(2m)! (2n)!}{m! n! (m+n)!}.$$ The question does link to Ira Gessel's paper "Super Ballot Numbers" (Journal of Symbolic Computation 14 (1992) 179--194). In Section 6 Gessel calls these "super Catalan numbers" and gives a few proofs of their integrality. Equation (32) consists of the formula $$\sum_n 2^{p-2n} \binom{p}{2n} A(m,n) = A(m,m+p), \:\:\: p \geq 0.$$ Gessel says that this formula, together with $A(0,0) = 1$ and $A(m,n) = A(n,m)$, "in principle... gives a combinatorial interpretation to $A(m,n)$, although it remains to be seen whether [the formula] can be interpreted in a 'natural' way." So "no known combinatorial interpretation, but a recursive formula that might lead to one" appears to be the state of things at this point.
Converting a Riemann sum to an integral
Hint: You are close, a little more manipulation will do it. In the expression $\frac{n}{n+i}$, divide "top" and "bottom" by $n$. We get $$\frac{1}{1+\frac{i}{n}},$$ which is the value of $f$ at $\frac{i}{n}$, with $f(x)=\frac{1}{1+x}$. We can reach the same conclusion in one step, by noting that $$\frac{1}{n+i}=\frac{1}{n}\frac{1}{1+\frac{i}{n}}.$$ In any case, our sum is equal to $$\sum_{i=0}^{n-1}\frac{1}{n}f(i/n), \qquad\qquad(\ast)$$ which is a familiar type of Riemann sum. The simplest kind of Riemann sum has shape $$\sum \frac{L}{n}f(iL/n),$$ where we sum from $i=0$ to $n-1$ (equal-width intervals, evaluation at left endpoints) or from $i=1$ to $n$ (evaluation at right endpoints). This was the motivation for trying to express our terms as $\frac{1}{n}f(i/n)$. If the function $f$ is well-behaved, the limit as $n \to\infty$ of these Riemann sums is $$\int_0^L f(x)\,dx.$$ For another way to identify the interval of integration, note that we are evaluating $f$ at the numbers $\frac{0}{n}$, $\frac{1}{n}$, $\frac{2}{n}$, and so on up to $\frac{n-1}{n}$. What interval are these (equally spaced) division points of?
Invert the softmax function
Note that in your three equations you must have $x+y+z=1$. The general solution to your three equations is $e^a=kx$, $e^b=ky$, and $e^c=kz$ where $k>0$ is any scalar, i.e. $a=\log x+\log k$ and similarly for $b,c$. So if you want to recover $x_i$ from $S_i$, you can only do so up to an additive constant: since the softmax output is unchanged when all inputs are shifted by the same amount (and $\sum_i S_i = 1$), the solution is $x_i = \log (S_i) + c$ for all $i$, for some constant $c$.
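As a sanity check, here's a small numpy sketch (my own illustration, not from the question) showing that $\log S_i$ recovers the inputs up to a shared additive constant:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())      # shift for numerical stability
    return e / e.sum()

x = np.array([1.0, 2.0, 3.0])
s = softmax(x)

recovered = np.log(s)            # equals x_i up to an additive constant
shift = x - recovered            # the same constant in every slot
```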
How do I find the minimum and maximum of a multivariable function given two constraints?
Since the objective function is a function of $y,z$ only, we can rewrite the constraints to eliminate $x$ and have the objective and constraints in the same variables. $3x + z = 5\\ x = \frac {5-z}{3}\\ x^2 + y^2 = 1\\ \left(\frac {5-z}{3}\right)^2 + y^2 = 1$ We have an ellipse. We need to find where the tangent of the ellipse is parallel to the level lines of $y+4z$, i.e. where $\frac{dy}{dz}=-4$: $-2\frac {5-z}{9}\ dz + 2y\ dy = 0\\ \frac {dy}{dz} = \frac {5-z}{9y}\\ \frac {dy}{dz} = -4\\ 5-z = -36y$ and plug this back into our constraint. $145 y^2 = 1\\ y = \pm \frac {1}{\sqrt {145}}\\ z = 5 \pm \frac {36}{\sqrt{145}}\\ f(x,\frac {1}{\sqrt {145}},5+\frac {36}{\sqrt{145}}) = 20 + \sqrt{145}\\ f(x,-\frac {1}{\sqrt {145}},5-\frac {36}{\sqrt{145}}) = 20 -\sqrt{145}$ which is the same as you have above. Otherwise, we could do something with Lagrange multipliers $F(x,y,z,\lambda,\mu) = y+4z + \lambda (x^2 + y^2 - 1) + \mu (3x + z -5)\\ \frac {\partial F}{\partial x} = 2\lambda x + 3\mu = 0\\ \frac {\partial F}{\partial y} = 1 + 2\lambda y = 0\\ \frac {\partial F}{\partial z} = 4 + \mu = 0\\ \frac {\partial F}{\partial \lambda} = x^2+y^2 - 1 = 0\\ \frac {\partial F}{\partial \mu} = 3x+z - 5 = 0$ And solve.
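The Lagrange-multiplier system above can also be handed to a CAS; a sketch with sympy (the symbol names are mine), whose solutions give the objective values $20\pm\sqrt{145}$:

```python
import sympy as sp

x, y, z, lam, mu = sp.symbols('x y z lam mu', real=True)
F = y + 4*z + lam*(x**2 + y**2 - 1) + mu*(3*x + z - 5)

# stationarity in all five variables reproduces the system above
eqs = [sp.diff(F, v) for v in (x, y, z, lam, mu)]
sols = sp.solve(eqs, [x, y, z, lam, mu], dict=True)

extrema = sorted(float(s[y] + 4*s[z]) for s in sols)
```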
Integer solution of $x^3 - x + 9 = 5 y^2$
The substitution $U=5x, V=25y$ turns this equation into that of an elliptic curve with the short Weierstrass form $$ U^3-25U+1125=V^2. $$ According to its LMFDB entry the integer points on this elliptic curve are $(U,V)=(4,\pm33)$. Neither of these has $U$ divisible by $5$ (nor $V$ by $25$), so neither gives an integer $x=U/5$, and the original equation has no integer solutions.
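One can at least verify the absence of small solutions directly (a sanity check only; the proof is the elliptic-curve computation):

```python
import math

def integer_points(bound):
    # search x^3 - x + 9 = 5 y^2 for |x| <= bound, y >= 0
    sols = []
    for x in range(-bound, bound + 1):
        t = x**3 - x + 9
        if t >= 0 and t % 5 == 0:
            y = math.isqrt(t // 5)
            if y * y == t // 5:
                sols.append((x, y))
    return sols

small_solutions = integer_points(1000)   # expected: none
```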
$G$ abelian $p$-group, $x \in G$ of max order. Then cosets in $G/\langle x \rangle$ have representatives with the same order
Your question is just Lemma 8.3 from S.Lang, Algebra, Springer, 3rd ed., 2002. Addendum: Page 43: Lemma 8.3. Let $b$ be an element of $A/A_1$, of period $p^r$. Then there exists a representative $a$ of $\bar{b}$ in $A$ which also has period $p^r$. Here $A_1$ is the subgroup generated by an element of maximal order.
Which results via algebraic or direct manipulations of divergent series can be rigorously justified and why?
The error becomes evident when you consider the analogous finite sum. Let $$S(n) = \sum_{k=1}^n k^2 = 1^2 + 2^2 + \cdots + n^2.$$ Then following your logic, $$4S(n) = \sum_{k=1}^n (2k)^2,$$ and $$-3S(n) = S(n) - 4S(n) = \sum_{k=1}^n k^2 - (2k)^2.$$ When we perform the cancellation of the even terms, the first sum becomes $$\sum_{j=1}^{\lceil n/2 \rceil} (2j-1)^2$$ as you expect, but the number of terms that are cancelled in the second sum is not the entire sum: what is left over is $$\sum_{j=\lfloor n/2 \rfloor+1}^{n} (2j)^2.$$ So $$-3S(n) = \sum_{j=1}^{\lceil n/2 \rceil} (2j-1)^2 - \sum_{j=\lfloor n/2 \rfloor + 1}^n (2j)^2.$$ Then you shift the summation index by $1$ and perform more calculations. For the sake of clarity, assume $n = 2m$ is even, so that we have $$-3S(2m) = \sum_{j=1}^m (2j-1)^2 - \sum_{j=m+1}^{2m} (2j)^2,$$ consequently your shift and subtraction becomes $$\begin{align*} 0 &\overset{?}{=} 3S(2(m+1)) - 1 - 3S(2m) \\ &= \left( \sum_{j=2}^{m+1} (2j-1)^2 - \sum_{j=m+2}^{2m+2} (2j)^2 \right) - \left(\sum_{j=1}^m (2j-1)^2 - \sum_{j=m+1}^{2m} (2j)^2\right) \\ &= \sum_{j=1}^m (2j+1)^2 - (2j-1)^2 \\ & \quad - \left((2(2m+2))^2 + (2(2m+1))^2 - (2(m+1))^2 \right) \\ &= 8\sum_{j=1}^m j - (28m^2 + 40m + 16). \end{align*}$$ And here now is where you see that your reasoning fails, because this remainder term is quadratically increasing in $m$, and is not vanishing as $n \to \infty$.
Intuition of Homogenous Equations in Linear Algebra
Firstly, A is not a function in this context. However, we can describe a certain transformation T such that T(x) = Ax (i.e. defined by the matrix). Q1. What is the equivalent of the horizontal asymptote in linear algebra? My guess is the origin. A1. "The horizontal asymptote" is too ambiguous. Perhaps if you can elaborate there, I can provide an answer. Q2. But geometrically, how can there be two vectors that when modified result in the same zero vector? Does that mean space sort of collapses on itself? A2. There are not always two vectors that result in a zero vector after a transformation. Sometimes there are more, sometimes there are fewer; it depends on a property of the matrix we call the "Null Space". The null space tells which vectors are transformed into the zero vector. (The vectors x such that Ax = 0.) Oftentimes we can think of the null space as the "collapsing" of our initial space, the domain. Loosely speaking, if we "lose" dimensions during the transformation, then we are "losing" vectors to the null space. Also, I should end this portion by saying that it is actually impossible for the null space of a matrix to contain exactly 2 vectors: it contains either exactly one vector (the zero vector: A(0) = 0) or infinitely many. However, a more appropriate way to describe it would be with the dimension of the null space, often called the "nullity". Q3. Basically why there needs to be a free variable is so that the solution set for Ax=b can "shift" along that variable axis to also include b=0, correct? A3. I don't understand what you're asking here, but again elaboration may help. You should look into pre-established analogies for things like these in linear algebra. Q4. Can't we just pick another point to solve for, and translate those solutions to fit the new solution sets for Ax=b? A4. This may be possible in some cases; however, it is neither efficient nor useful (in general). 
The equation Ax = 0 is important because of properties such as nullity, rank (Rank-Nullity Theorem), eigenvalues, eigenvectors, one-to-one, etc... Q5. Linear transformations can collapse the solution set, as in take the vector from R^n to R^n-1 if the columns of A are linearly dependent. But can the solution set be expanded from R^n to R^n+1? A5. Yes, they can collapse with a null space as I describe previously. As I will show, the dimension (of the co-domain!) can increase as well. Allow me to illustrate this final answer with a matrix A whose dimension is k by p. Consider the vectors in its domain (you should see that they are all p by 1 column vectors, or p-dimensional). What dimension is the co-domain? $(k,p) * (p,1) = (k,1)$ The dimension is k! So all that is required for the dimension (from the domain to the co-domain) to increase is to have k > p. Equivalently stated, the matrix A should have more rows than columns. Note: Above I say "co-domain" and not "range". My apologies for the misconstruing of your question. It should be noted that the dimension of the range of a linear transformation can never exceed the dimension of its domain: by the rank-nullity theorem, the rank is at most the dimension of the domain. Thanks to @amd for pointing this out.
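A small numerical illustration of the null space and the rank-nullity theorem (the example matrix is my own):

```python
import numpy as np

# A 2x2 matrix with linearly dependent columns: rank 1, nullity 1
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

rank = np.linalg.matrix_rank(A)
_, _, vt = np.linalg.svd(A)
null_basis = vt[rank:]          # rows spanning the null space
nullity = null_basis.shape[0]

# rank + nullity = number of columns, and A sends the null space to 0
```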
Show that $\lim_{n \to \infty}\ M_n =M$
Here is an outline (hints) for a proof: Since $f$ is continuous on the closed interval $[0,1]$ the extreme value theorem tells us that $f$ attains its maximum $M$ on $[0,1]$. Let $x_*$ be any point where $f(x_*) = M$. Since $f$ is continuous, for any $\epsilon > 0$ there exists a $\delta >0$ s.t. if $|x-x_*| < \delta$ then $|f-M| < \epsilon$. This implies (why?) that $M^n \geq \int_0^1 f^ndx \geq \int_{\text{max}(0,x_*-\delta)}^{\text{min}(1,x_*+\delta)} f^ndx \geq (M-\epsilon)^n\delta $. Take the $n$'th root and then the limit $n\to\infty$ keeping $\epsilon$ and $\delta$ fixed. From the result you get, use the fact that $\epsilon > 0$ was arbitrary to conclude the proof.
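A quick numerical experiment illustrating the result, using the (arbitrarily chosen) example $f(x)=x(1-x)$, whose maximum on $[0,1]$ is $M=1/4$:

```python
import numpy as np

f = lambda x: x * (1 - x)        # example with M = 1/4 at x* = 1/2
M = 0.25

x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]

def M_n(n):
    # n-th root of the integral of f^n, via a Riemann sum
    return (np.sum(f(x) ** n) * dx) ** (1.0 / n)

# M_n increases toward M from below as n grows
```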
Left and right derivatives at a point
Yes. Suppose that $f$ is piecewise linear with a jump discontinuity at $0$, where the slope of $f$ is the same to the left and to the right of $0$.
What is the definition of locally finite random subset?
A random subset $\mathcal{N}$ being (almost surely) locally finite means that $\#(\mathcal{N}\cap B)$ is (almost surely) finite for every compact subset $B$ of the target space. In the setting of point processes, one considers only (almost surely) locally finite random subsets $\mathcal{N}$ so, in a way, one avoids at all cost the power set of the target space, which is much too big for measurability purposes. The distribution of a locally finite random subset $\mathcal{N}$ is defined by the distributions of the finite families of integer valued random variables $\#(\mathcal{N}\cap B)$ for compact subsets $B$, just like the distribution of an infinite sequence $(\xi_n)_n$ indexed by the integers is defined by the distributions of the random vectors $(\xi_n)_{n\in I}$ for every finite $I$, aka the marginals of the process. Note in particular that one assumes that $\#(\mathcal{N}\cap B)$ is measurable for every compact $B$. When $\#(\mathcal{N}\cap B)$ is almost surely finite for every compact $B$, one can identify the random subset $\mathcal{N}$ with the random measure which puts a unit Dirac mass on every point in $\mathcal{N}$, defined formally as the unique measure $N$ such that, at least for every measurable bounded function $f$ with bounded support, $$ \int f(x)\, \mathrm{d}N(x)=\sum_{x\in\mathcal{N}}f(x). $$ This identification goes through the (trivial) remark that, for every measurable subset $B$, the events $[\mathcal{N}\cap B=\emptyset]$ and $[N(B)=0]$ coincide.
Closed form for $c_m = \sum_{n=|m|}^{\infty} \left(\dfrac{1}{2}\right)^{2n} (-1)^{m+n}{2n \choose m+n}$, $m$ integer
If $m \ge 0$ I get $$ \frac{(3-2\sqrt{2})^m}{\sqrt{2}}$$ and if $m < 0$, $$ \frac{(3+2\sqrt{2})^m}{\sqrt{2}}$$
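A numerical check of these closed forms against partial sums of the series; the terms decay only like $1/\sqrt{\pi n}$, so averaging consecutive partial sums is used to accelerate the alternating series:

```python
import math

def c_series(m, terms=100000):
    # partial sum of sum_{n>=|m|} (1/4)^n (-1)^(m+n) C(2n, m+n),
    # updating the term by a ratio to avoid huge binomials
    n = abs(m)
    term = (-1) ** (m + n) * math.comb(2 * n, m + n) / 4.0 ** n
    s = prev = 0.0
    for _ in range(terms):
        prev = s
        s += term
        term *= -(2 * n + 1) * (2 * n + 2) / (4.0 * (n + 1 + m) * (n + 1 - m))
        n += 1
    return 0.5 * (s + prev)   # average of consecutive partial sums

def c_closed(m):
    r = 3 - 2 * math.sqrt(2) if m >= 0 else 3 + 2 * math.sqrt(2)
    return r ** m / math.sqrt(2)
```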
If $p(x)=ax^3 -2x^2 +bx+c$, find $a, b$ and $c$ if $p(0)=12$, $p(-1)=3$ and $p(2)=36$
Use $p(2)=36$: you'll have $8a - 8 + 2b + 12 = 36$. Simplify it, and you'll get $4a+b=16$. Together with the equation you've written down, $a=3$, $b=4$.
solve $x' = t^\alpha +x^\beta$
I don't think you'll get a closed-form solution in general. Maple doesn't find one. Even in the special case $\alpha=2, \beta=3$ it doesn't find one. Nor does Wolfram Alpha. Of course if $\beta = 1$ you have a linear equation. If $\beta = 2$ a solution can be found in terms of Bessel functions.
Why is the Generalization Axiom considered a Pure Axiom?
This axiom is "useful" in proving the Generalization Theorem : If $\Gamma \vdash \varphi$ and $x$ does not occur free in any formula in $\Gamma$, then $\Gamma \vdash ∀x \varphi$. See : Herbert Enderton, A Mathematical Introduction to Logic (2nd ed - 2001), page 117. There are other axiomatizations of first-order logic that avoid this "unnatural" axiom; see : Joseph Shoenfield, Mathematical Logic (1967), page 21 or George Tourlakis, Lectures in Logic and Set Theory. Volume 1 : Mathematical Logic (2003), page 34.
Calculating points of intersection and their multiplicities
First of all, in order to use Bezout's theorem to say we have 4 points of intersection we need to be looking at a projective curve. We can do this by homogenizing the equations to get $yz = x^2$ and $yz = 2x^2$, and the point $p$ we care about is $[0,0,1]$ in projective coordinates. Now we can also easily see that $[0,1,0]$ is also a solution, which is "at infinity" in the affine picture. Moreover if both $y$ and $z$ take nonzero values then we see we can't have any solutions as $x^2 \ne 2x^2$ for $x\ne 0$. So we see that the multiplicities of these two intersection points need to sum to 4. But now we can observe that there is a symmetry between $y$ and $z$ in our equation, in particular if we look at our second intersection in the affine chart where $y = 1$ it looks like $z = x^2$ and $z = 2x^2$ at $x = z = 0$ which obviously looks the same as the intersection point we started with. Hence they both need to have intersection multiplicity 2.
Prove by induction that $S(n,3) > 3^{n-2}$ for all $n≥6$
It doesn't really make sense to have base case $n=2$ if you're proving it for $n \geq 6$. Start with the base cases $n=6$ and $n=7$ and prove each of those. From there, for any $m \geq 6$, you assume the statement true for $n=m$ and $n=m+1$ and prove it for $n=m+2$: use the recurrence relation $S(n+1,k)=k\,S(n,k)+S(n,k-1)$ and then invoke the inductive hypothesis.
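The base cases and the recurrence are easy to check computationally (a small sketch, not part of the induction itself):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, k):
    # Stirling numbers of the second kind via S(n+1,k) = k*S(n,k) + S(n,k-1)
    if n == k:
        return 1
    if k == 0 or n == 0 or k > n:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)
```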
Basis of Domain and Existence of Unique Linear Map
If you want to define a linear map $f \colon V \rightarrow W$ you can do that by just specifying the images of a basis of $V$. This means you get the linearity for free (so you do not need to worry about that) and for many constructions it is very useful to have this kind of control over linear maps. For example, one can show that every subspace $U \subset V$ arises as the kernel of a linear map. How does one prove that? Well... choose a basis of that subspace and extend it to a basis of $V$. Define a linear map by sending the basis of $U$ to $0_W$ and make sure that the rest is not in the kernel (for example by mapping the other basis vectors to themselves). Another nice thing is that this property often helps with counting linear maps with additional properties. For example you can ask yourself the following question: Are there none, one or multiple linear maps $f \colon \mathbb{R}^2 \rightarrow \mathbb{R}^2$, such that $f((1,1)) = (2,3)$ and $f((1,0)) = (1,0)$? Now you can answer that immediately since the vectors whose image we specified are a basis of $\mathbb{R}^2$. There is also another way of defining linear maps by matrices. Actually, matrices correspond to linear maps once you have chosen bases for the involved vector spaces. To get this correspondence you also need your statement to get the uniqueness of the matrix.
Let $f:(-a,a)\setminus\{0\}\to (0,\infty)$ satisfying $\lim_{x\to 0} (f(x)+ \frac{1}{f(x)}) = 2.$ Show that $\lim_{x\to 0} f(x) = 1$
May I guess that you are looking for something like this: Given $\epsilon$, pick the $\delta$ for which $$|x| < \delta \implies\left|\dfrac{(f(x)-1)^2}{f(x)}\right| < \min\{\frac{\epsilon^2}{\epsilon +2} \ , \ \epsilon \} $$ holds. Then $|x| < \delta $ implies $$\left|\dfrac{(f(x)-1)^2}{f(x)}\right| < \frac{\epsilon^2}{\epsilon +2} $$ implies $$|f(x)-1|^2 < \frac{\epsilon^2|f(x)|}{\epsilon +2} < \frac{\epsilon^2(\epsilon+2)}{\epsilon +2} = \epsilon^2$$ since $0<f(x) < \epsilon + 2$. And therefore $$|f(x)-1| < \epsilon$$ And we are done.
Maximise the happiness among children
Construct a graph as follows: Each vertex represents a child. Two vertices share an edge iff the children are exclusive in terms of happiness; that is, if there exists a piece of candy that both children want, then they share an edge. Now your problem is to find the maximum number of vertices which do not share an edge. This is called an "independent set". Unfortunately, this is NP-Hard. No polynomial-time algorithm is known for solving it exactly, but it is a well studied problem so you have options. The problem of finding the maximum independent set is the same as the problem of finding the maximum clique in the complement graph, so you may see the problem described that way. http://en.wikipedia.org/wiki/Maximum_independent_set#Finding_maximum_independent_sets This is the example from your question. The 3 vertices are the circles which represent the 3 children. The red edge represents that the first and second child both want candy number 2. The blue edge represents that the second and third child both want candy 3. The maximum independent set is 1st and 3rd since they do not share an edge.
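For small instances like the example, brute force is perfectly feasible; a sketch (exponential time in general, consistent with the NP-hardness noted above):

```python
from itertools import combinations

def max_independent_set(vertices, edges):
    # try subsets from largest to smallest; return the first edge-free one
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return set(subset)
    return set()

# the example: children 1 and 2 both want candy 2, children 2 and 3
# both want candy 3
best = max_independent_set([1, 2, 3], [(1, 2), (2, 3)])
```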
polynomial everywhere positive property.
If $\lvert x\rvert\geqslant M$ (here $n$ is even and $a_n>0$, so that $x^n\geqslant M^n>0$), then\begin{align}p(x)&=x^n\left(a_n+\frac{a_{n-1}}x+\cdots+\frac{a_1}{x^{n-1}}+\frac{a_0}{x^n}\right)\\&\geqslant x^n\left(a_n-\frac12\lvert a_n\rvert\right)\\&=x^n\left(a_n-\frac12a_n\right)\\&=\frac12a_nx^n\\&\geqslant\frac12a_nM^n.\end{align}
Finding the Binomial Coefficient
HINT: $$\sum_{r=0}^4 x^r=\frac{1-x^5}{1-x}$$ $$\implies(x^0+x^1+x^2+x^3+x^4)^6=(1-x^5)^6(1-x)^{-6}$$ Now $(1-x^5)^6=1-\binom61x^5+\binom62(x^5)^2-\binom63(x^5)^3+\cdots+x^{30}$ and $(1-x)^{-6}=1+(-6)(x)+\frac{(-6)(-6-1)}{2!}x^2+\frac{(-6)(-6-1)(-6-2)}{3!}x^3+\cdots$
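Carrying the hint through, the coefficient of $x^r$ picks up $(-1)^j\binom6j$ from the first factor and $\binom{r-5j+5}{5}$ from the second; a sketch cross-checking this against direct polynomial multiplication:

```python
from math import comb

def coeff_formula(r):
    # [x^r] (1-x^5)^6 (1-x)^{-6}: the x^(5j) term of the first factor
    # pairs with the coefficient comb(r-5j+5, 5) of x^(r-5j) in the second
    return sum((-1) ** j * comb(6, j) * comb(r - 5 * j + 5, 5)
               for j in range(min(6, r // 5) + 1))

def coeff_direct(r):
    poly = [1]
    for _ in range(6):                  # multiply out (1+x+x^2+x^3+x^4)^6
        new = [0] * (len(poly) + 4)
        for i, c in enumerate(poly):
            for k in range(5):
                new[i + k] += c
        poly = new
    return poly[r] if 0 <= r < len(poly) else 0
```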
Image sheaf isomorphism : another proof? (Exercise 2.1.4 in Hartshorne)
If you showed part (a) in exercise 2.1.4, then you are almost done for (b)! And I don't think there is any easier way than this. It might look too long for you now, but in a few months, I believe that you will actually find this proof rather short! Indeed, you have a morphism of presheaves $$\mathrm{im}(f) \to \mathscr G,$$ which is injective on each open set $U$, so by (a) you get an injective morphism of sheaves $$\mathrm{im}(f)^+ \to \scr G^+.$$ But $\scr G$ is a sheaf by assumption, so that $\scr G^+ \cong \scr G$. Therefore, the image sheaf $\mathrm{im}(f)^+$ is isomorphic to a subsheaf of $\scr G$.
The radius of convergence
The limit $\lim_{n\to\infty}\frac{|x|}{|\sin n|}$ does not exist. However, $\limsup_{n\to\infty}\frac{|x|}{|\sin n|}=+\infty$, since $\liminf_{n\to\infty}|\sin n|=0$. Therefore, the radius of convergence of your series is $0$.
What's the meaning of "relatively prime to $p$"?
Two positive integers $x$ and $y$ are relatively prime or coprime if they have no common factor other than $1$. In the case where one of them, say $x$, is prime, this is equivalent to saying that $x$ does not divide $y$.
How can you say it is true or false?
It's not quite right. Saying $p(x)$ has two roots $\alpha$ and $\beta$ does not mean that $p(x)=(x-\alpha)(x-\beta)$. You are forgetting that any constant times $(x-\alpha)(x-\beta)$ will still have roots $\alpha$ and $\beta$. So, what you can conclude is that $$ p(x)=c(x-\alpha)(x-\beta), $$ where $c$ is a constant. From the expansion, it follows, in fact, that $c=a$.
Basis for the Null Space
No; consider $F_{r\times (n-r)}=0$.
Does the operator $A^*A$ have a name?
In the context of quantum mechanics, the operator $a^*a$ with the same characteristics as your given operator is called the number operator (in the framework of creation and annihilation operators). The number in this case accounts for the number of particles in a given quantum state.
Probability of hitting empty bottles
It can be rephrased as: "if I randomly select $b$ of the $n$ bottles then what is the probability that they are all empty?" The answer is:$$\frac{\binom{n-v}b\binom{v}0}{\binom{n}b}=\frac{\binom{n-v}b}{\binom{n}b}$$ Application of hypergeometric distribution.
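In code (the names are my own: $n$ bottles, $v$ full ones, $b$ bottles hit), with a cross-check against the sequential drawing probability:

```python
from math import comb

def prob_all_empty(n, v, b):
    # hypergeometric: choose b of the n bottles, all among the n - v empty ones
    return comb(n - v, b) / comb(n, b)

def prob_sequential(n, v, b):
    # same probability as a product: (n-v)/n * (n-v-1)/(n-1) * ...
    p = 1.0
    for i in range(b):
        p *= (n - v - i) / (n - i)
    return p
```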
Does this recursive sequence have a closed-form?
$$a_{2n}=2^{n+1}-1,\qquad a_{2n+1}=2a_{2n}=2^{n+2}-2.$$
Finding a general formula for moments?
Hint: The moment generating function of a random variable $Y$ is $E(e^{tY})$, which is $$1+\frac{E(Y)}{1!}t+\frac{E(Y^2)}{2!}t^2+\frac{E(Y^3)}{3!}t^3+\frac{E(Y^4)}{4!}t^4+\cdots.\tag{1}$$ Now expand $e^{t^2/2}$ in a power series. We get $$1+\frac{1}{2\cdot 1!}t^2+\frac{1}{2^2\cdot 2!}t^4+\frac{1}{2^3\cdot 3!}t^6+\cdots.\tag{2}$$ Compare (1) and (2) to write down the answers to your question.
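Matching the coefficient of $t^{2k}$ in (1) and (2) gives $E(Y^{2k})=(2k)!/(2^k k!)$, with odd moments vanishing; a small check of the pattern:

```python
from math import factorial

def moment(n):
    # coefficient matching between series (1) and (2):
    # E(Y^n) = n! * [t^n] exp(t^2/2); odd moments are 0
    if n % 2 == 1:
        return 0
    k = n // 2
    return factorial(2 * k) // (2 ** k * factorial(k))
```

These are the familiar double factorials $1, 3, 15, 105, \ldots$ for the even moments.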
Some doubts about interpretation of an atomic formula in predicate calculus
I'm going to start from scratch because your post is very long. Let us start with a language $$ L=\{R_i \,:\, i\in I\}\cup\{f_j \,:\, j\in J\}\cup \{c_k\, :\, k\in K\} $$ The $R_i$ are relation symbols, the $f_j$ function symbols and the $c_k$ constant symbols with $I,J,K$ indexing sets. Now an interpretation/structure/model for this language is a tuple $$ \mathfrak{A}=\langle A,\{\textbf{R}_i \,:\, i\in I\},\{\textbf{f}_j \,:\, j\in J\}, \{\textbf{c}_k\, :\, k\in K\}\rangle$$ where $A$ is a (depending on your conventions, possibly required to be non-empty) set, if the symbol $R_i$ is $n$-ary then the corresponding $\textbf{R}_i\subseteq A^n$. If the symbol $f_j$ is $n$-ary then the corresponding $\textbf{f}_j$ is a function $A^n\rightarrow A$ and $\textbf{c}_k\in A$. The idea of this is that you just interpret a relation symbol as a relation on the domain, a function symbol as a function and a constant symbol as an element. Now we can use this to define a satisfaction relation on the formulae in the language $L$. But we have to be careful as we may have free variables in the formula. So inductively, let $i:\{\text{variables}\}\rightarrow A$. Then inductively we define an interpretation of terms, and use this to handle formulae. $$ c_i^\mathfrak{A}=\textbf{c}_i $$ $$ x_i^\mathfrak{A}=i(x_i) $$ $$ f_i(t_1,t_2,\ldots,t_n)^\mathfrak{A}=\textbf{f}_i(t_1^\mathfrak{A},t_2^\mathfrak{A},\ldots,t_n^\mathfrak{A})$$ So the idea of this definition is that we find out the value of a term by evaluating any constants or variables according to $i$ and the structure $\mathfrak{A}$ and then apply functions as defined in the structure. Now we are ready to define satisfaction of an atomic formula. Atomic formulas look like $t_1=t_2$ where $t_i$ are terms or like $R(t_1,\ldots,t_n)$ where $R$ is an $n$-ary relation symbol and $t_i$ are terms. 
We define $$ (\mathfrak{A},i)\models t_1=t_2 \Leftrightarrow t_1^\mathfrak{A}=t_2^\mathfrak{A} $$ and $$ (\mathfrak{A},i)\models R(t_1,\ldots,t_n) \Leftrightarrow \langle t_1^\mathfrak{A},\ldots,t_n^\mathfrak{A}\rangle\in \textbf{R} $$ So we say that a claim that $t_i=t_j$ is true if they are interpreted as the same object in the domain, and a claim that a relation $R$ holds of some $n$-tuple of terms is true if we have stipulated that the tuple of interpretations of the terms lies in $\textbf{R}$. Hope this clears a few things up.
Strategy for tackling the $\lim_{n\to+\infty}\frac{(-1)^nn}{(1+n)^n}$
For $n\ge 2$ we have $(1+n)^n\ge (1+n)^2\ge n^2$ and already $\frac{(-1)^nn}{n^2}=\frac{(-1)^n}n\to 0$.
Show that if $x\geq 0$ and $n$ is a positive integer, then $\sum_{k=0}^{n-1}\left\lfloor {x+\frac{k}{n}}\right\rfloor=\lfloor {nx}\rfloor$
Here is a solution I saw many many years ago and really love: Let $f(x)=\sum_{k=0}^{n-1}\lfloor x+\frac{k}{n}\rfloor -\lfloor nx\rfloor$. Then $$f(x+\frac{1}{n})= \sum_{k=0}^{n-1}\lfloor x+\frac{k}{n}+\frac{1}{n}\rfloor -\lfloor nx+1\rfloor= \sum_{k=1}^{n}\lfloor x+\frac{k}{n}\rfloor -\lfloor nx\rfloor -1 $$ $$=\sum_{k=1}^{n-1}\lfloor x+\frac{k}{n}\rfloor +\lfloor x+1 \rfloor-\lfloor nx\rfloor -1 =\sum_{k=1}^{n-1}\lfloor x+\frac{k}{n}\rfloor +\lfloor x \rfloor-\lfloor nx\rfloor =f(x) $$ Thus $f$ is periodic with period $\frac{1}{n}$. Moreover, if $x \in [0, \frac{1}{n})$ then $$f(x)=\sum_{k=0}^{n-1}0-0=0 \,.$$ Thus $f$ is periodic with period $T=\frac{1}{n}$ and zero on $[0,T)$, hence $f$ is the zero function.
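The identity can also be checked exactly over the rationals (exact arithmetic avoids floating-point pitfalls near the jumps of the floor function):

```python
from fractions import Fraction
import math

def lhs(x, n):
    # left side of the identity: sum of floor(x + k/n) for k = 0..n-1
    return sum(math.floor(x + Fraction(k, n)) for k in range(n))

ok = all(lhs(Fraction(p, q), n) == math.floor(n * Fraction(p, q))
         for n in range(1, 10)
         for q in range(1, 8)
         for p in range(0, 50))
```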
Boundedness Spectral Triple Axioms for de Rham Complex
Yes and yes. To fix notation, what we're dealing with is the commutative spectral triple $(C^\infty(X),L^2(X,\wedge T^\ast X),d+d^\ast)$, where $(X,g)$ is a compact oriented Riemannian manifold. Recall, in particular, that $\wedge T^\ast X$ is a Hermitian vector bundle with Hermitian metric $(\cdot,\cdot)$ induced by the Riemannian metric $g$, so that the inner product on $L^2(X,\wedge T^\ast X)$ is $$ \langle \xi,\eta \rangle := \int_X (\xi,\eta) \,\mathrm{dVol}_g. $$ Since the underlying manifold is compact, any continuous function on $X$ is bounded, so that for any $f \in C^\infty(X)$ and $\xi \in L^2(X,\wedge T^\ast X)$, $$ \|f \xi \|^2 = \int_X (f \xi,f\xi) \,\mathrm{dVol}_g = \int_X \lvert f\rvert^2 (\xi,\xi) \,\mathrm{dVol}_g \leq \| \lvert f \rvert^2 \|_\infty \int_X (\xi,\xi) \,\mathrm{dVol}_g = \|f\|_\infty^2 \|\xi\|^2, $$ which shows that multiplication by $f \in C^\infty(X)$ is bounded with operator norm $\|f\| \leq \|f\|_\infty$. More generally, you can show that any bundle endomorphism $E$ of $\wedge T^\ast X$ defines a bounded operator $E : L^2(X,\wedge T^\ast X) \to L^2(X,\wedge T^\ast X)$ with operator norm $$ \|E\| \leq \left\|x \mapsto \|E_x\| \right\|_\infty, $$ where $\|E_x\|$ denotes the operator norm of the linear operator $E_x$ on the finite-dimensional inner product space $\wedge T^\ast_x X$; off the top of my head, use compactness of $X$ and your favourite partition of unity to treat $E$ as just a matrix-valued function. Now, let $f \in C^\infty(X)$, let $\xi$, $\eta \in C^\infty(X,\wedge T^\ast X)$. By a straightforward computation, you can show that $$ \langle \xi, [d+d^\ast,f]\eta \rangle = \langle \xi, df \wedge \eta \rangle + \langle d\overline{f} \wedge \xi, \eta \rangle. 
$$ From this, it follows that for all $g \in C^\infty(X)$ and $\xi$, $\eta \in C^\infty(X,\wedge T^\ast X)$, $$ \langle \xi, [d+d^\ast, f]g\eta \rangle = \langle \xi, df \wedge g\eta \rangle + \langle d\overline{f} \wedge \xi, g \eta \rangle = \langle \overline{g} \xi, df \wedge \eta \rangle + \langle d\overline{f} \wedge \overline{g}\xi,\eta \rangle\\ = \langle \overline{g}\xi,[d+d^\ast,f]\eta \rangle = \langle \xi,g[d+d^\ast,f]\eta \rangle, $$ so that $[d+d^\ast,f]$ is $C^\infty(X)$-linear on $C^\infty(X,\wedge T^\ast X)$, i.e., $[d+d^\ast,f]$ is a bundle endomorphism of $\wedge T^\ast X$. Hence, we can apply 1. to conclude that $[d+d^\ast,f]$ is bounded. It's worth noting that the fact that $[d+d^\ast,f]$ is a bundle endomorphism is just another way of saying that $d+d^\ast$ is a first-order differential operator.
Why does $\int_0^\pi \int_0^1 r^2 \cos\theta\, dr\, d\theta \neq 2\int_0^{\pi/2} \int_0^1 r^2 \cos\theta\, dr\, d\theta$
The function $\cos \theta $ is positive in the first quadrant but changes sign in the second quadrant so the function values are not symmetrical.
Counting commuting Pauli Strings of a certain weight
Note that the three operators $X \otimes X$, $Y \otimes Y$, $Z \otimes Z$ mutually commute. So it is not true that "two Pauli strings of length $n$ commute if and only if they don't have different non-identity entries in any slot."
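The counterexample is quick to verify numerically, building the two-qubit operators as Kronecker products:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

XX, YY, ZZ = (np.kron(P, P) for P in (X, Y, Z))

def commute(A, B):
    return np.allclose(A @ B, B @ A)

# X⊗X, Y⊗Y, Z⊗Z differ in every slot, yet mutually commute,
# while e.g. X⊗I and Y⊗I do not commute
```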
Prove expression is not prime
From the context of the question, I assume $n ≥ 0$. Clearly for even $n$, $(n+4)^4+4$ is even. Hence suppose $n$ is odd, so $n = 2k+1$. Then $(n+4)^4+4 = (2k+5)^4+4 = 16 k^4 + 160 k^3 + 600 k^2 + 1000 k + 629 = (4 k^2 + 16 k + 17) (4 k^2 + 24 k + 37)$. Clearly this is composite, as for $n≥0$, this is a product of two natural numbers greater than $1$.
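Both branches are easy to confirm computationally: the factorization holds as a polynomial identity, and every value in a sample range is composite:

```python
def is_composite(m):
    # trial division; fine for the modest sizes checked here
    if m < 4:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return True
        d += 1
    return False

# odd case: the factorization from the answer holds identically in k
factorization_ok = all(
    (2 * k + 5) ** 4 + 4
    == (4 * k * k + 16 * k + 17) * (4 * k * k + 24 * k + 37)
    for k in range(200))

all_composite = all(is_composite((n + 4) ** 4 + 4) for n in range(200))
```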
Check if a positive solution exists of a linear equation with two variables?
In this case, there are always positive solutions. The reason is that the graph of your equation intersects the axes at the points $(0, c/b)$ and $(c/a,0)$. Then every interior point of the line segment connecting these points is a solution pair with both coordinates positive. Specifically, pick any $x$ such that $0 < x < c/a$. Set $y = \frac{c-ax}{b}$. Then $0 < y < c/b$. The result is a solution with positive $x,y$.
Deriving surface area of a sphere from the circumference
Denote by $\theta\in[-\pi/2,\pi/2]$ the geographical latitude on this sphere. Then $z(\theta)=r\sin\theta$, and the radius $\rho$ of the latitude circle at latitude $\theta$ is given by $\rho(\theta)=r\cos\theta$. Consider now an infinitesimal latitude zone $Z:\ [\theta,\theta+\Delta\theta]$ on this sphere. Its area is given by $${\rm area}(Z)\doteq2\pi\rho(\theta)\,(r\,\Delta\theta)=2\pi r^2\cos\theta\,\Delta\theta\ .\tag{1}$$ On the other hand the $z$-coordinates of the two boundary circles differ by $$ \Delta z:=z(\theta+\Delta\theta)-z(\theta)\doteq r\, \cos\theta \>\Delta\theta\ .\tag{2}$$ Combining $(1)$ and $(2)$ we see that $${\rm area}(Z)\doteq2\pi\,r\>\Delta z\ ,$$ so that the total area of the sphere comes to $${\rm area}(S^2_r)=2\pi\,r\int_{z=-r}^{z=r}\Delta z=4\pi\,r^2\ .$$
Uniform convergence...
In fact, $f_n' < 0$ on $[1,\infty)$, so the supremum of $f_n$ over this interval occurs at $x=1$. (The critical point $x = 1/n$, where the global maximum $1$ is attained, lies outside the interval in question.)
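A small numerical illustration (my addition): on $[1,\infty)$ the samples decrease and the supremum $f_n(1)=2n/(1+n^2)$ tends to $0$, so the convergence there is uniform, while the global maximum $1$ still sits at $x=1/n$:

```python
def f(n, x):
    return 2*n*x / (1 + (n*x)**2)

n = 50
# Sample on [1, 10.99]: the function is decreasing there.
samples = [f(n, 1 + 0.01*k) for k in range(1000)]
decreasing = all(a >= b for a, b in zip(samples, samples[1:]))

sup_on_tail = f(n, 1.0)    # 2n/(1+n^2) -> 0: uniform convergence on [1, oo)
peak = f(n, 1/n)           # global max over (0, oo) is 1, at x = 1/n
```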
Finding an ellipse knowing two points and the arc length
First attempt: Going from a sketch in GeoGebra the equation is roughly
$$ \left(\frac{x - 513 }{513}\right)^2 + \left(\frac{y- 100}{877.5}\right)^2 = 1 $$

Algebraic model: Assuming the minor axis is parallel to the $x$-axis, the equation of the ellipse is
$$ \left(\frac{x - b}{b}\right)^2 + \left(\frac{y - 100}{a}\right)^2 = 1 \quad (1) $$
with unknowns $a, b$.

Note: Other orientations parallel to one of the coordinate axes are hardly possible, because the distance between $P$ and the second point $Q$ is $d = 641.6$, while the intended arc length between $P$ and $Q$ is $s = 650$, less than $10$ units more. I tried an orientation parallel to the $y$-axis, but that would need a much larger piece of arc on the ellipse.

Inserting the first point $P = (0, 100)$:
$$ \left(\frac{0 - b}{b}\right)^2 + \left(\frac{100 - 100}{a}\right)^2 = 1 $$
It satisfies equation $(1)$, so the curve goes through $P$. The second point $Q = (145,725)$ gives the equation
$$ 1 = \left(\frac{145-b}{b}\right)^2 + \left(\frac{725 - 100}{a}\right)^2 = \left(1 - \frac{145}{b}\right)^2 + \left(\frac{625}{a}\right)^2 $$
which relates $a$ and $b$; solving for $a$ gives
$$ a = \frac{625}{\sqrt{1 - \left(1 - \frac{145}{b}\right)^2}} $$
and solving for $b$:
$$ \left(1 - \frac{625^2}{a^2}\right) b^2 = (b-145)^2 = b^2 -290 b + 145^2 \iff \\ b^2 = \frac{290}{625^2}a^2 b - \frac{145^2}{625^2}a^2 \iff \\ \left(b - \frac{145}{625^2}a^2 \right)^2 = \frac{145^2}{625^4}a^4 - \frac{145^2}{625^2}a^2 = \frac{145^2}{625^4}a^2 \left(a^2 - 625^2\right) \iff \\ b = \frac{145}{625^2}a^2 + \frac{145}{625^2} a \sqrt{a^2 - 625^2} \iff \\ b = \frac{145}{625^2}a \left(a + \sqrt{a^2 - 625^2} \right) \quad (2) $$

We choose this parameterization of the arc:
$$ x = b - b \cos t \quad y = 100 + a \sin t \\ \dot{x} = b \sin t \quad \dot{y} = a \cos t $$

Note: While $t$ runs from $0$ to $2\pi$, $t$ in general is not the angle $\beta = \angle(R, C, P)$, where $R = (x(t),y(t))$ and $C = (b, 100)$ is the center of the ellipse.
For a circle ($a = b$) it would be, but for a non-circular ellipse the relationship is non-linear:
$$ \beta = \arctan\left(\frac{a \sin t}{b \cos t}\right) = \arctan\left(\frac{a}{b} \tan t\right) \quad t \in [0, \pi/2] $$
The chosen curve leads to the arc length via $ds^2 = dx^2 + dy^2$:
\begin{align} s &= 650 \\ &= \int\limits_{0}^{t^*} \sqrt{b^2 \sin^2 t + a^2 \cos^2 t} \, dt \\ &= a \int\limits_{0}^{t^*} \sqrt{1 - (1- (b/a)^2) \sin^2 t} \, dt \\ &= a \int\limits_{0}^{t^*} \sqrt{1 - \epsilon^2 \sin^2 t} \, dt \\ &= a \, E(t^*, \epsilon) \quad (3) \end{align}
with $t^*$ determined by
$$ 145 = b - b \cos t^* \quad 725 = 100 + a \sin t^* $$
where we use
$$ t^* = \arcsin \frac{625}{a} \quad (4) $$
The above integral is an incomplete elliptic integral of the second kind (Legendre form), which cannot be expressed in elementary functions, only approximated. For the eccentricity $\epsilon$ we get
$$ 1 - \epsilon^2 = \frac{b^2}{a^2} = \frac{145^2}{625^4} \left(a + \sqrt{a^2 - 625^2} \right)^2 \iff \\ \epsilon = \sqrt{1-\frac{145^2}{625^4} \left(a + \sqrt{a^2 - 625^2} \right)^2} \quad (5) $$
Using the above equations $(1)-(5)$ we can formulate an equation in terms of the unknown $a$:
$$ 650 = a E\left(\arcsin \frac{625}{a}, \epsilon(a)\right) \iff \\ F(a) = 650 - a E\left(\arcsin \frac{625}{a}, \epsilon(a)\right) = 0 \quad (6) $$
Solving this numerically (details below), we get the solution
$$ a = 721.384624 \quad b = 289.634475 $$
and the equation
$$ \left(\frac{x - 289.634475}{289.634475}\right)^2 + \left(\frac{y- 100}{721.384624}\right)^2 = 1 $$
which is quite different from the attempt to derive these values from the sketch!
Numerical solution: Looking through my mostly open source toolset I decided to use Maxima, because it features an implementation of the needed incomplete elliptic integral of the second kind. First we define $t^*(a)$:

    (%i) ts(a) := asin(625/a);
    (%o) ts(a) := asin(625/a)

Then we define $m(a) = \epsilon^2(a)$, the second parameter for the elliptic integral implementation `elliptic_e(phi, m)`:

    (%i) m(a) := 1-((145^2)/(625^4))*(a + sqrt(a^2 - 625^2))^2;
    (%o) m(a) := 1 - 145^2/625^4*(a + sqrt(a^2 - 625^2))^2

And then we define $F(a)$:

    (%i) F(a) := 650 - a*elliptic_e(ts(a), m(a));
    (%o) F(a) := 650 - a*elliptic_e(ts(a), m(a))

Now we need a way to find roots $a$ of $F(a)$. First we try it graphically:

    (%i) plot2d(F(a), [a,625,1000]);

So the root is around $725$. Next we try the Newton method:

    (%i) load(newton1);
    (%o) /usr/share/maxima/5.27.0/share/numeric/newton1.mac

then we execute it using $a_0 = 800$ and a precision of $10^{-5}$:

    (%i) an : newton(F(a), a, 800, 1/100000);
    (%o) 721.3846241818675

The result is saved as `an`. From this we calculate `bn`:

    (%i) bn : sqrt(1-m(an)) * an;
    (%o) 289.6344752045375

To test whether these values fulfill equation $(1)$ we define

    (%i) eq(a,b) := ((145-b)/b)^2 + ((725-100)/a)^2;
    (%o) eq(a, b) := ((145 - b)/b)^2 + ((725 - 100)/a)^2

and apply this test function to the calculated values:

    (%i) eq(an, bn);
    (%o) 1.0

And this looks good!
Some more tests:

    (%i) eq(x,y,a,b) := ((x-b)/b)^2 + ((y-100)/a)^2;
    (%o) eq(x, y, a, b) := ((x - b)/b)^2 + ((y - 100)/a)^2
    (%i) eq(0,100,an,bn);
    (%o) 1.0
    (%i) eq(145,725,an,bn);
    (%o) 1.0
    (%i) eq(513,877,an,bn);
    (%o) 1.754880658615442
    (%i) eq(2*bn,100,an,bn);
    (%o) 1.0
    (%i) eq(bn,100-an,an,bn);
    (%o) 1.0
    (%i) eq(bn,100+an,an,bn);
    (%o) 1.0

And finally some looks at the arc length:

    (%i) F(an);
    (%o) - 6.5450649344711564E-9
    (%i) ts(an);
    (%o) 1.047926026242998
    (%i) an * elliptic_e(ts(an), m(an)),numer;
    (%o) 650.0000000065451
    (%i) an * elliptic_e(%pi/2, m(an)),numer;
    (%o) 830.6877077032057
    (%i) an * elliptic_e(0, m(an)),numer;
    (%o) 0.0

Phew.
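The same numbers can be cross-checked outside Maxima; here is a pure-Python sketch (my addition) that verifies $Q$ lies on the ellipse and recomputes the arc length with Simpson's rule instead of `elliptic_e`:

```python
import math

a = 721.384624          # semi-axis along y (from the computation above)
b = 289.634475          # semi-axis along x

# Q = (145, 725) should satisfy ((x-b)/b)^2 + ((y-100)/a)^2 = 1.
on_ellipse = ((145 - b)/b)**2 + ((725 - 100)/a)**2

# Arc length from P = (0,100) to Q via x = b - b cos t, y = 100 + a sin t,
# integrating sqrt(x'^2 + y'^2) with composite Simpson's rule.
t_star = math.asin(625 / a)

def speed(t):
    return math.sqrt((b*math.sin(t))**2 + (a*math.cos(t))**2)

n = 10_000                          # even number of subintervals
h = t_star / n
s = speed(0) + speed(t_star)
for i in range(1, n):
    s += (4 if i % 2 else 2) * speed(i*h)
arc = s * h / 3                     # should be close to 650
```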
Using Poisson distribution to find the probability that the interval between arrivals exceeds some value
You should know that if the number of arrivals in every time interval has a Poisson distribution, then the inter-arrival times are independent and identically distributed with an exponential distribution with mean $1/\lambda$, where $\lambda$ is the mean number of arrivals per unit time. By memorylessness, the waiting time from any fixed instant until the next arrival has this same exponential distribution. In this case $\lambda=3$ per minute. Now take it from there...
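A simulation sketch of these facts (my addition; `LAM` stands for the rate $\lambda = 3$ per minute):

```python
import math
import random

random.seed(0)
LAM = 3.0   # mean arrivals per minute

# Inter-arrival times of a Poisson process are iid Exponential(LAM).
gaps = [random.expovariate(LAM) for _ in range(200_000)]
mean_gap = sum(gaps) / len(gaps)           # should be close to 1/3 minute

# P(gap > t) = exp(-LAM * t); check at t = 1 minute.
frac_long = sum(g > 1.0 for g in gaps) / len(gaps)
theory = math.exp(-LAM * 1.0)              # about 0.0498
```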
Radius and Interval of Convergence $\sum_{n=0}^{\infty}\frac{7^n}{n!}x^n$
Another way to check: the Cauchy–Hadamard theorem gives a formula for the radius of convergence directly: https://en.wikipedia.org/wiki/Cauchy%E2%80%93Hadamard_theorem
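A numeric illustration (my addition) of both the ratio test and the Cauchy–Hadamard root test for this series, each showing the radius is infinite:

```python
import math

# Ratio test: a_{n+1}/a_n = 7/(n+1) -> 0, so the radius of convergence is infinite.
ratios = [7 / (n + 1) for n in range(1, 50)]

# Cauchy-Hadamard: limsup |a_n|^{1/n} = (7^n/n!)^{1/n} -> 0 as well.
def root_term(n):
    # computed via logs (lgamma(n+1) = log n!) to avoid overflow
    return math.exp((n * math.log(7) - math.lgamma(n + 1)) / n)

small = root_term(10_000)   # already tiny, consistent with limit 0
```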
Find a complete metric on $\mathbb{R} \setminus \cup_{n\ge 1}\{\frac{1}{n}\}$ that gives the natural topology
Let's first construct a complete metric on $\mathbb{R}\setminus\{0\}$. This subset of $\mathbb{R}$ is homeomorphic to the hyperbola $\{ (x, y) \mid x y- 1 =0\}$ via the map $x \mapsto (x, \frac{1}{x})$. The restriction of the metric from $\mathbb{R}^2$ makes the hyperbola a complete subspace, since it is closed. Pulling it back, we get a complete metric on $\mathbb{R} \setminus \{0\}$: $$d(x_1, x_2) = \sqrt{(x_1- x_2)^2 + \left(\frac{1}{x_1} - \frac{1}{x_2}\right)^2 }$$ Suppose now that we have a closed subset $A$ of $\mathbb{R}$ and we want a complete metric on $\mathbb{R} \setminus A$. Let $f$ be a continuous function on $\mathbb{R}$ whose zero set is $A$. Then $\mathbb{R} \setminus A$ is homeomorphic to the closed subset $\{ (x,y)\mid f(x) \cdot y - 1 = 0\}$ of $\mathbb{R}^2$ via the map $x \mapsto (x, \frac{1}{f(x)})$. Take the pull-back of the metric from $\mathbb{R}^2$: $$d(x_1, x_2) = \sqrt{(x_1-x_2)^2 + \left(\frac{1}{f(x_1)} - \frac{1}{f(x_2)}\right)^2 } $$ In general, let $(A_n)$ be a family of closed subsets, and let $f_n$ be a continuous real-valued function with zero set $A_n$. The space $\mathbb{R}\setminus \bigcup_n A_n$ is homeomorphic to the closed subset $$\{ (x, y_1, y_2, \ldots ) \mid f_n(x) y_n -1 = 0 \textrm{ for all } n \ge 1 \}$$ of the complete metric space $\mathbb{R}^{\mathbb{N}}$ via the map $$x \mapsto \left( x,\frac{1}{f_1(x)}, \frac{1}{f_2(x)},\ldots \right)$$ The induced metric on $\mathbb{R}\setminus \bigcup_n A_n$ is complete. In our case, taking $f_n(x) = nx - 1$, the metric can be $$d(x_1,x_2) = |x_1-x_2| + \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot\frac{\left|\frac{1}{nx_1-1}- \frac{1}{nx_2-1}\right|}{1+ \left|\frac{1}{nx_1-1}- \frac{1}{nx_2-1}\right|} $$
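A small numerical illustration of why this metric works (my addition): a sequence creeping toward the removed point $1/2$ is close in the usual metric but stays far apart in $d$, because the $n=2$ term sees $1/(2x-1)$ blow up. (The infinite sum is truncated, which only underestimates $d$.)

```python
def d(x1, x2, terms=60):
    """Truncated version of the metric from the answer, with f_n(x) = n*x - 1."""
    total = abs(x1 - x2)
    for n in range(1, terms + 1):
        u = 1.0 / (n * x1 - 1)
        v = 1.0 / (n * x2 - 1)
        t = abs(u - v)
        total += (t / (1 + t)) / 2**n
    return total

# Two points of the sequence x_k = 1/2 + 1/(2k), for k = 10 and k = 1000:
x_small, x_tiny = 0.5 + 1/20, 0.5 + 1/2000
usual = abs(x_small - x_tiny)   # ~0.0495: close in the usual metric
new = d(x_small, x_tiny)        # bounded away from 0: not d-Cauchy
```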
Set of Rotations Cyclic?
Yes: the set of rotations forms a cyclic group, generated by the rotation through angle $2\pi/n$.
Permutations: Dividing 5 pieces of fruit between 2 baskets
Consider filling Basket A first. For this, you have five fruits and must choose three of them. There are $5$ choices for the first fruit; once this is chosen there are $4$ for the next fruit, and finally $3$ choices for the last fruit, so $5\cdot 4\cdot 3$ choices in all. Since the order in which the fruits go into the basket is irrelevant, one must divide by the number of ways of ordering the $3$ fruits to avoid overcounting. There are $3!$ such orderings (the number of permutations of three elements), and so there are \begin{align*} \frac{5\cdot 4\cdot 3}{3!} = \frac{5!}{3!\,2!}=\binom{5}{3} = 10 \end{align*} choices. For Basket B there are only two remaining fruits and both must go into the basket, so there are no further choices, and $\binom{5}{3} = 10$ is the final answer.
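The count can be confirmed by enumeration; a sketch in Python (my addition, with made-up fruit names):

```python
from itertools import combinations, permutations

fruit = ["apple", "banana", "cherry", "date", "elderberry"]

# Unordered choices of 3 fruits for Basket A: C(5,3).
unordered = len(list(combinations(fruit, 3)))

# Ordered choices 5*4*3; dividing by 3! orderings recovers the same count.
ordered = len(list(permutations(fruit, 3)))
```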
Prove that $x_0$ is Lyapunov unstable
Following the structure of Arrowsmith &amp; Place, "Dynamical Systems ...", p. 92: Select some $R>r>0$ and some $x_1\in B(x_0,r)$ with $h(x_1)>0$. Then there is some neighborhood $U$ of $x_0$ where $h(x)<h(x_1)/2$ for all $x\in U$. Since $\frac{d}{dt}h(φ(x_1,t))=\nabla h\cdot X>0$, the function $h(φ(x_1,t))$ is increasing, so this trajectory stays outside $U$. Now $\bar B(x_0,R)\setminus U$ is closed and bounded, thus compact, so the positive continuous function $\nabla h\cdot X$ attains a positive minimum $K$ there. As long as the trajectory $φ(x_1,t)$ stays inside $\bar B(x_0,R)$, we conclude that $h(φ(x_1,t))\ge h(x_1)+Kt$. As $h$ is bounded on the compact set $\bar B(x_0,R)$, the trajectory has to leave this closed ball around $x_0$ of radius $R$. Thus $x_0$ is unstable, as $r>0$ can be arbitrarily small and $R>r$ arbitrarily large.
Probability : Two dice are thrown r times.
Your answer is correct; it is based on De Morgan's law and the inclusion–exclusion expansion of the probability of a union (of $6$ events, in your case). Note that the formula returns zero for $r$ from $0$ to $5$ (check!) and tends to one (rather slowly) as $r$ increases.
How to prove $(b-2)^2 > 12a(5c + 2)$ provided $(3a + b + 5c)(5c + 2) < 0$?
It seems the following. Put $x=3a$, $y=b-2$, and $z=5c+2$. Given $(x+y+z)z<0$, we need to show that $y^2>4xz$. Assume the converse, i.e. $y^2\le 4xz$. Then $(x+z)^2\ge 4xz\ge y^2$, so $|x+z|\ge |y|$. Since $(x+y+z)z<0$ forces $x+y+z\ne 0$ and $z\ne 0$, the sign of $x+y+z$ equals the sign of $x+z$. Then $(x+z)z<0$; but $xz\ge y^2/4\ge 0$ and $z^2>0$, so $(x+z)z=xz+z^2>0$, a contradiction.
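A randomized sanity check of the claim (my addition; the sampling ranges are arbitrary):

```python
import random

random.seed(1)

# Whenever (3a+b+5c)(5c+2) < 0, the inequality (b-2)^2 > 12a(5c+2) should hold.
checked = 0
for _ in range(100_000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    c = random.uniform(-10, 10)
    if (3*a + b + 5*c) * (5*c + 2) < 0:
        assert (b - 2)**2 > 12*a*(5*c + 2)
        checked += 1
```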
Example $x$, $y$ and $z$ values for $x\uparrow^\alpha y=z$ where $\alpha\in \Bbb R-\Bbb N$
note for the unfortunate reader: my native language is not English, so I apologize in advance for errors; the references link seems to be fixed now

On non-integer rank Hyperoperations

When you ask if values of $G(n,-,-)$ can be provided for $n\in\Bbb Z$ or $n\in \Bbb R$, you are actually asking whether it is possible to define an extension of $G$ to the integer, rational or real numbers (or complex) satisfying the hyperoperations recursion over all the domain.

Brief introduction

Notation 1: With $G:\Bbb N\times\Bbb N\times\Bbb N\to\Bbb N$ we mean the Goodstein function [1] or equivalently the Hyperoperations sequence: let's work with $G$ (a $3$-ary function) as if it were an indexed family of binary functions $+_{s\in \Bbb N}:\Bbb N \times \Bbb N \to \Bbb N$, and call the index $s$ the rank of the hyperoperation. $$m+_s n:=G(s,m,n)$$

Notation 2: The infix notation that I'm using ($+_s$) is not common at all, nor is the Goodstein $G$. The most common notations for $+_s$ are $H_s$ (the Wikipedia page's prefix notation), $[s]$ (square-bracket infix notation), sometimes $A_s(m)=2[s]m$ (which comes from the Ackermann function and is used in some recursion papers about the Grzegorczyk hierarchy and in some recent works of D. Kouznetsov [6]), and Knuth's up-arrow notation $\uparrow^n$ (which has a different indexing, starting from $\uparrow^0=\times$) [2] $$H_s(x,y)=x[s]y=x\uparrow^{s-2}y=G(s,x,y)=x+_s y$$

Definition 1 (Hyperoperations sequence): We define the indexed family $\{+_s\}_{s\in \Bbb N}$ recursively over the natural numbers ($b,n, s\in \Bbb N$)

$i)$ $b+_0n=n+1$

$ii)$ $b+_{s+1}0=b_{s+1}$

$iii)$ $b+_{s+1}(n+1)=b+_s(b+_{s+1}n)$

where the base values $b_{s+1}$ that give us the "natural/classical" hyperoperations sequence are the following

$iv)\,\,\,\, b_{s+1}:= \begin{cases} b, & \text{if $s=0$} \\ 0, & \text{if $s=1$} \\ 1, & \text{if $s\gt 1$} \\ \end{cases}$

I will call the argument $b$ the base, $s$ the rank and $n$ the exponent.
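Definition 1 translates directly into code; a naive Python sketch (my addition, usable only for tiny arguments because of the explosive recursion):

```python
# Direct implementation of Definition 1: G(s, b, n) = b +_s n.
def G(s, b, n):
    if s == 0:
        return n + 1                              # i)   successor
    if n == 0:
        return b if s == 1 else (0 if s == 2 else 1)   # ii) base values b_{s+1}
    return G(s - 1, b, G(s, b, n - 1))            # iii) main recursion

# Ranks 1..4 reproduce addition, multiplication, exponentiation, tetration:
add_ok  = G(1, 4, 3) == 4 + 3
mul_ok  = G(2, 4, 3) == 4 * 3
pow_ok  = G(3, 4, 3) == 4 ** 3
tetr_ok = G(4, 2, 3) == 2 ** (2 ** 2)   # ^3 2 = 2^(2^2) = 16
```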
Remark 0: Note that this definition gives us the "standard" hyperoperations

$H_1(x,y)=x+_1y=x+y$

$H_2(x,y)=x+_2y=xy$

$H_3(x,y)=x+_3y=x^y$

$H_4(x,y)=x+_4y={}^{y}x$ (Tetration)

Terminology 1: About the use of the term rank for the argument $s$: I don't think it is official, but at least it makes sense for various reasons. As far as I know the term was introduced by K. A. Rubtsov and G. F. Romerio in different papers/reports, for the first time in 2006 [3] (p. 3), and it has been widely adopted by the Tetration Forum in the subsequent years. Another good reason is that every hyperoperation $H_n$ belongs to the class $\mathcal E^n$ of the Grzegorczyk hierarchy [4], a sub-recursive hierarchy that organizes the primitive recursive functions according to their growth rate. Since the position inside a hierarchy is usually denoted by the term rank (see the $V_\alpha$ hierarchy of sets, for example), it seems to me a perfect choice. About the terms base and exponent I'm not sure; Rubtsov and Romerio tried to introduce, and used systematically, the terms hyperbase for $b$ and hyperexponent for $n$, together with a uniform terminology for right and left inverse hyperoperations [5].

Remark 1: This sequence follows from the original Goodstein definition, and it implies that the $0$-th rank hyperoperation trivially coincides with the successor function $a+_0 n=n+1$, and so do all the negative integer ranks: you can find more about this in David K's good answer here, in my and Ibrahin Tencer's answers here [7], and on the Tetration Forum [8]. Anyway, it is possible to avoid the imposition of $a+_0 n=n+1$ with alternative definitions of the hyperoperations sequence that give us more freedom for the negative ranks: about this there is a large amount of work by Rubtsov and Romerio under the name of Zeration [3], [9], [10] and by Cesco Reale in [11].
The topic is quite controversial and not very well known, so I suggest those two threads on the Tetration Forum [12], [13].

Back to your question: imho it is important to notice that it is unlikely that you can extend the rank to non-integer values without finding a way to extend the base and the exponent too. If you look at the sequence $x_s:=2+_s n$ for a fixed $n$, we have that $x_s\in \Bbb N$ if the rank is a natural number; but if we want it to be continuous or analytic in the variable $s$, thus extending to $s \in \Bbb R$, it is very likely that for most of the non-integer ranks $x_s$ will take non-integer values, making the functions $+_s$ not closed on the naturals for most non-integer ranks even when the base and exponent are natural numbers.

Example: if $2+_{1.5}3=q$ and $q\in \Bbb R\setminus \Bbb Z$, then evaluating $2+_{1.5}4=2+_{0.5}(2+_{1.5}3)=2+_{0.5}q$ would require knowing how to evaluate $2+_{0.5}$ at the non-integer $q$.

That's my impression.

The higher-order function iteration approach

An interesting way to continue [14] is to look at some suitable space of binary functions $\mathcal H$ with $+_s\in\mathcal H$, together with a function $\Sigma:\mathcal H\to \mathcal H$ with the following property $$\Sigma [+_s]=+_{s+1}$$ and continue investigating its dynamics, because it turns out that the hyperoperations are the natural iterations of this map $\Sigma$ applied to the $0$-th rank hyperoperation $$\Sigma^{\circ n}[+_0]=+_n$$ The operator $\Sigma$ increases the rank of the hyperoperations by one, so it is plausible to expect that the fractional/real/complex iteration of this map is going to give us the fractional/real/complex rank hyperoperations: $$\forall \sigma\in \Bbb C\ (\Sigma^{\circ \sigma}[+_0]=+_\sigma)$$

Terminology 2: The iteration we are talking about, $-^{\circ n}$, can be defined recursively in the following way: given a function $f:X\to X$

$i)$ $f^{\circ 0}(\beta)=\beta$ or $f^{\circ 0}={\rm id}_X$

$ii)$ $f^{\circ n+1}(\beta)=f(f^{\circ n}(\beta))$ or $f^{\circ n+1}=f\circ f^{\circ n}$

My naive
opinion is that if one is able to find a good space $\mathcal H$ such that $\Sigma$ is an operator, then we could try to apply the powerful tools of operator theory. This point of view can be used for a larger class of operation sequences. The idea of reducing the non-integer rank problem to a non-integer iteration problem is, again, as far as I know, due to Henrik Trappmann [15] (2008), the founder of the Tetration Forum. Some years later this idea was developed further by James Nixon (2011) with the concept of "meta-superfunctions" [16], and he is still working on this point of view (see later). To explain this better we first have to find what kind of map $\Sigma$, the map that increases the rank by one unit, is or should be: the discourse is a bit long, so I refer you to one of my answers at MSE [17]. As you can read in my answer, $\Sigma$ is closely related to the process of iterating a function, also called "taking the superfunction" in the case of $1$-ary functions, and is also closely related to known problems such as finding the solutions of Abel functional equations of the form $\chi(z)+1=\chi(f(z))$ and Schröder equations $s\cdot\Psi(z)=\Psi(f(z))$. A massive amount of research on finding/building unique superfunctions [20] has been done and is still carried out by D. Kouznetsov [18], [19], and you can find most of his work in this online encyclopedia [23].
Finding a unique $\beta$-based superfunction actually gives us a "higher-order" function (because it sends functions to functions) that maps the function $f(x)$ to the function $F(z)=f^{\circ z}(\beta)$.

Definition 2: Given a function $f$, we define intuitively its $\beta$-based superfunction $F_\beta$ as the function that maps every $z$ to the $z$-th application of $f$ to $\beta$ $$F_\beta(z)=f^{\circ z}(\beta)$$

Proposition 1: The $\beta$-based superfunction of $f$ satisfies these equations

$i)$ $F_\beta(0)=\beta$

$ii)$ $F_\beta(z+1)=f(F_\beta(z))$

Definition 3: Given a suitable collection of functions $H$, we define intuitively the $\beta$-based superfunction map as a function $\mathcal S_\beta:H\to H$ that maps every $f$ to its $\beta$-based superfunction $F_\beta$ $$\mathcal S_\beta:f\mapsto F_\beta$$ $$\mathcal S_\beta[f](z)=f^{\circ z}(\beta)$$

It is easy to see what this has to do with hyperoperations and with $\Sigma$. Let's define the sequence of "hyper-exponentiations".

Definition 4: We define the family $\{H_{b,s}\}_{b,s\in \Bbb N}$ of hyper-exponentiations as follows $$H_{b,s}(n):=b+_s n$$ We have that $H_{b,1}(n)={\rm add}_b(n)=b+n$, $H_{b,2}(n)={\rm mul}_b(n)=bn$ and $H_{b,3}(n)=\exp_b(n)=b^n$.

Proposition 2: From Definition 1 we have the following $$H_{b,s+1}(n+1)=H_{b,s}(H_{b,s+1}(n))$$

In other words, the superfunction map is the map $\Sigma$ we are looking for, because every $(s+1)$-rank hyper-exponentiation is the superfunction of the $s$-rank one $$\mathcal S[H_{b,s}]=H_{b,s+1}$$

At this point it makes sense to ask whether finding the non-integer iterations of $\mathcal S_\beta$ really gives us non-integer rank hyper-exponentiation functions for some bases $b$ $$\mathcal S_{\beta=1}^{\circ \sigma}[H_{b,2}]=H_{b,2+\sigma}$$

Remark 2: In the equation above I have set $H_{b,2}$ and $\beta=1$ because for all the natural ranks $s\ge 3$ we have that $H_{b,s}(0)=1$ by definition.

Question 1: The real question is: is it possible to find the unique... real...
non-integer iterations of $\mathcal S$? As far as I know the answer to this question is still unknown, or at least unknown to me, an amateur mathematician, even if some hot posts by the user JmsNxn appeared in the last two months on the Tetration Forum [21], [22].

References

1 - R. L. Goodstein, Transfinite Ordinals in Recursive Number Theory, The Journal of Symbolic Logic, Vol. 12, No. 4 (Dec. 1947), pp. 123-129

2 - Wikipedia, Hyperoperations - Notations

3 - K. A. Rubtsov, G. F. Romerio, Ackermann's Function and New Arithmetical Operations. Manuscript cited in the bibliography of Stephen Wolfram's book A New Kind of Science (2003)

4 - A. Grzegorczyk, Some classes of recursive functions, Rozprawy Matematyczne, Vol. 4, pp. 1-45 (1953)

5 - G. F. Romerio, Terminology Proposals for a Hyper-operation Environment, uploaded at the Tetration Forum

6 - D. Kouznetsov, Evaluation of holomorphic ackermanns, Applied and Computational Mathematics, Vol. 3, No. 6, 2014, pp. 307-314. doi: 10.11648/j.acm.20140306.14

7 - MSE: Does anything precede incrementation in the operator "hierarchy"? (2013)

8 - Tetration Forum, JmsNxn, Extension of the Ackermann function to operator less than addition (2011)

9 - K. A. Rubtsov, G. F. Romerio, Progress Report on Hyper-operations, Zeration, Wolfram Research Institute, USA NKS Forum IV (2007)

10 - K. A. Rubtsov, G. F. Romerio, New Notes On Zeration, Wolfram Research Institute, USA NKS Forum (2014)

11 - C. Reale, Zeroth-rank operation and non transitive numbers, text in Italian (2012)

12 - Tetration Forum, G. F. Romerio, Zeration (2008)

13 - Tetration Forum, tommy1729, Zeration = Inconsistent? (2014)

14 - Tetration Forum, KingDevyn, Negative, Fractional, and Complex Hyperoperations

15 - Tetration Forum, User:bo198214 aka H. Trappmann, non-natural operation ranks (2008)

16 - Tetration Forum, User:JmsNxn aka J.
Nixon, generalizing the problem of fractional analytic Ackermann functions (2011)

17 - MSE: Notation for function $+\mapsto \times$, MphLee's answer (2013)

18 - Dmitrii Kouznetsov, personal page at the Institute for Laser Science, University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan

19 - Dmitrii Kouznetsov, Research proposals: Superfunctions

20 - TORI encyclopedia, Page: Superfunction

21 - Tetration Forum, User:JmsNxn aka J. Nixon, Bounded Analytic Hyper operators (2015)

22 - Tetration Forum, User:JmsNxn aka J. Nixon, On constructing hyper operations for bases $b>\eta$ (2015)

23 - TORI encyclopedia
Urn I contains 6 whites and 4 blacks balls. Urn II contains 2 white and 2 black balls.
Here's a hint. After the transfer, you'll have either $4W$ and $2B$, $3W$ and $3B$, or $2W$ and $4B$. So, first calculate the probability of each of these states for Urn II. (For example, to get $4W, 2B$ you'll need to draw $2W$ from Urn I. What is the probability of that happening? The probability of drawing $2W$ from Urn I is $\frac{6}{10}\cdot\frac{5}{9} = \frac{1}{3}$.) Then, calculate the probability of drawing $1W, 1B$ from Urn II given each of the configurations, and weight it by the probability of that configuration. For example, the joint probability of transferring $2W$ from Urn I and then getting $1W,1B$ is $$P\big((1W,1B)\cap 2W\big) = \frac{1}{3}\left[\frac{4}{6}\cdot\frac{2}{5} + \frac{2}{6}\cdot\frac{4}{5}\right] = \frac{8}{45}.$$ Then figure out the others.
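Carrying the hint to completion with exact arithmetic (my addition; this assumes, as in the hint, that two balls are transferred from Urn I to Urn II and then two balls are drawn from Urn II):

```python
from fractions import Fraction as F

def draw2(w, b):
    """P(exactly one white) when drawing 2 without replacement from w whites, b blacks."""
    total = w + b
    return F(w, total) * F(b, total - 1) + F(b, total) * F(w, total - 1)

# Transfer of two balls from Urn I (6W, 4B):
p_2w = F(6, 10) * F(5, 9)        # -> Urn II holds 4W, 2B
p_wb = 2 * F(6, 10) * F(4, 9)    # -> Urn II holds 3W, 3B
p_2b = F(4, 10) * F(3, 9)        # -> Urn II holds 2W, 4B

# Total probability of drawing 1W, 1B from Urn II:
p = p_2w * draw2(4, 2) + p_wb * draw2(3, 3) + p_2b * draw2(2, 4)
```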
Extension of Fundamental Theorem of Algebra
Suppose $p(x)$ has $n$ distinct real roots $a_1 < a_2 < \dots < a_n$; then we may write $p(x)=a \prod_{i=1}^n(x-a_i)$. It now suffices to show that the derivative of $q(x)=\prod_{i=1}^n(x-a_i) \in \mathbb{R}[x]$ has $n-1$ distinct real roots. Since $q(a_i)=q(a_{i+1})=0$, on each interval $(a_i,a_{i+1})$ Rolle's theorem gives a number $c$ with $q'(c)=0$. Thus $q'(x)$ has at least $n-1$ distinct real roots, and these are all the roots of $q'(x)$, since it is a polynomial of degree $n-1$.
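A concrete numerical illustration (my addition) with NumPy's legacy polynomial helpers: the critical points of $(x-1)(x-2)(x-4)$ interlace its roots:

```python
import numpy as np

# Polynomial with roots 1, 2, 4; np.poly returns its coefficients.
p = np.poly([1.0, 2.0, 4.0])
dp = np.polyder(p)                 # derivative, degree 2
crit = sorted(np.roots(dp).real)   # its two critical points

# One critical point lies in (1,2), the other in (2,4).
interlaced = 1 < crit[0] < 2 < crit[1] < 4
```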
Two topologies coincide if they have the same convergent nets
More generally: in any topological space, $U$ is an open set iff for every net $x_\lambda$ converging to $x \in U$, $x_\lambda \in U$ eventually. So if $T_1$ and $T_2$ agree with respect to convergence of nets, they have the same open sets and are the same topology.
Decide if sets are equivalent
HINT: Think of each $f\in\{0,1\}^{\Bbb N\times\Bbb N}$ as an infinite matrix $M_f$ of zeros and ones: the entry in row $k$, column $n$ of $M_f$ is $f(k,n)$. Take a matrix $M_f$ with $f\in A$; all of its rows from some point on consist entirely of ones. What does $M_f$ look like when $f\in B$? Can you think of a familiar operation on matrices that relates these two types of matrix?