Isomorphisms between vector space subspaces
You say "I've gathered that an isomorphism should preserve all of the structure, but I have a difficult time understanding how this comes from one-to-one and onto mapping." In algebra, an isomorphism is defined to be a bijective homomorphism. It is the homomorphic quality which is structure preserving. (A bijection being a 1-1, onto mapping.) Further, you state "I want to know if for any two vector spaces with an isomorphism between them, if there exists an isomorphism between each of their subspaces." By definition, an isomorphism between two vector spaces will also act as an isomorphism between subspaces.
General formula for Evaluating $\sum_{n=0}^\infty n^ar^n$ where $ |r|<1 , a\ge0$
$$\sum_{n=0}^\infty n^a r^n=\frac{\sum_{m=0}^{a-1}A(a,m)r^{m+1}}{(1-r)^{a+1}}$$ for $a\ge1$, where the $A(a,m)$ are Eulerian numbers.
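As a quick numerical sanity check (my addition, using the standard explicit formula for the Eulerian numbers):

```python
from math import comb

def eulerian(a, m):
    # Explicit formula: A(a, m) = sum_{k=0}^{m} (-1)^k C(a+1, k) (m+1-k)^a
    return sum((-1) ** k * comb(a + 1, k) * (m + 1 - k) ** a for k in range(m + 1))

def series(a, r, terms=2000):
    # Partial sum of sum_{n>=0} n^a r^n; converges fast for |r| < 1
    return sum(n ** a * r ** n for n in range(terms))

def closed_form(a, r):
    return sum(eulerian(a, m) * r ** (m + 1) for m in range(a)) / (1 - r) ** (a + 1)

for a in (1, 2, 3, 4):
    for r in (0.1, 0.5, -0.7):
        assert abs(series(a, r) - closed_form(a, r)) < 1e-9, (a, r)
print("closed form agrees with the series for a = 1..4")
```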
$A=\lambda I_n\iff (\forall M,N\in M_n(\mathbb{R}),~ MN=A \Rightarrow ~ NM=A)$
Here is an alternative answer: Take $N = \begin{bmatrix} \lambda_{1} & 0 & 0 & \dots & 0 \\ 0 & \lambda_{2} & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & \lambda_{n} \end{bmatrix}$ and $M = \begin{bmatrix} \frac{a_{11}}{\lambda_{1}} & \frac{a_{12}}{\lambda_{2}} & \frac{a_{13}}{\lambda_{3}} & \dots & \frac{a_{1n}}{\lambda_{n}} \\ \frac{a_{21}}{\lambda_{1}} & \frac{a_{22}}{\lambda_{2}} & \frac{a_{23}}{\lambda_{3}} & \dots & \frac{a_{2n}}{\lambda_{n}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{a_{n1}}{\lambda_{1}} & \frac{a_{n2}}{\lambda_{2}} & \frac{a_{n3}}{\lambda_{3}} & \dots & \frac{a_{nn}}{\lambda_{n}} \end{bmatrix}$, where $\lambda_i \in \mathbb{R}^+$ and $\lambda_i \neq \lambda_j$ for $i \neq j$. So we have $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \dots & a_{nn} \end{bmatrix}$ and $MN = A$. But then we must also have $NM = A$. Looking at the elements of $MN - NM = 0$, we see we must have $a_{ij} = 0$ for $i \neq j$. Thus $A$ has nonzero elements only on its diagonal. Finally, by letting $M$ be a permutation matrix, we see that flipping rows $i$ and $j$ of $A$ must be equivalent to flipping columns $i$ and $j$ of $A$, so all the $a_{ii}$ must be equal and so $A$ is a homothety.
Interval of convergence of the power series representation of $\int_0^x \frac{1}{1+t^4}dt$
That integral converges for all $x$, so I guess you mean the series for that function. If so, then the series for the integral has the same radius of convergence as the series for the integrand, which can be found by geometric series. It's the sum of $$1-t^4+t^8 - t^{12} +\cdots$$ with common ratio $-t^4$, which converges for $|t|<1$, so the radius of convergence is $1$. So that's an easier way than your use of the ratio test, but you still get the right answer. For the endpoints, which is your real question, your work is fine and you have the correct answer.
application of strong vs weak law of large numbers
From section 7.4 of Grimmett and Stirzaker's Probability and Random Processes (3rd edition). The independent and identically distributed sequence $(X_n)$, with common distribution function $F$, satisfies $${1\over n} \sum_{i=1}^n X_i\to \mu$$ in probability for some constant $\mu$ if and only if the characteristic function $\phi$ of $X_n$ is differentiable at $t=0$ and $\phi^\prime(0)=i \mu$. For instance, the weak law holds but the strong law fails for $\mu=0$ and symmetric random variables with $1-F(x)\sim 1/(x\log(x))$ as $x\to\infty$.
Solving system of linear inequalities via elimination
Yes, it is valid. You can prove its validity in two steps. First, if $a<b$ then $a+x<b+x$. Secondly, if $x<y$ then $0<y-x$, by adding $-x$ to both sides. Hence $b+x<b+x+y-x=b+y$. Finally, $a+x<b+y$.
Sparse Grid vs. Full Grid
Your sparse grid method does not use uniform grids. Instead, it iteratively chooses grid points based on the value (and rate of change) of the integrand at prior grid points. The strategy for choosing each new echelon of grid points is not the same for all sparse grid methods, but in general these methods will do very well with functions which are roughly uniform over a roughly elliptical region. Your hypersphere volume integral is almost an unfair poster-child for the benefits of sparse grid methods. As an analogy, compare the trapezoid rule and Simpson's rule for integrating $x^3$ over the range $1$ to $3$. The trapezoid rule with spacing $0.1$ uses $21$ samples and has error $0.02$. Simpson's rule with spacing $1$ uses $3$ samples and has error $0.00$. The huge discrepancy is because the chosen problem plays to the cleverness built into the more complicated algorithm. In general, well-designed sparse grid methods will either work much faster or give a much lower error than uniform grid methods, or both.
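The numbers in the analogy are easy to reproduce; here is a small check (mine, not part of the original answer):

```python
# Compare the trapezoid rule (21 samples, h = 0.1) and Simpson's rule
# (3 samples, h = 1) for the integral of x^3 over [1, 3]; exact value is 20.
def f(x):
    return x ** 3

h = 0.1
xs = [1 + i * h for i in range(21)]
trapezoid = h * (f(xs[0]) / 2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1]) / 2)

simpson = (1 / 3) * (f(1) + 4 * f(2) + f(3))  # h = 1, nodes 1, 2, 3

print(f"trapezoid error: {abs(trapezoid - 20):.4f}")  # 0.0200
print(f"simpson   error: {abs(simpson - 20):.4f}")    # 0.0000
```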
Prove that two subspaces of a vector space intersect only at 0
Thanks for the responses, everyone. This is what I came up with: $\forall u \in \mathbb{R}^{m+n}, u=v_i \in V \lor u=w_i \in W$ Therefore, because of the definition of the direct sum and subspaces, we know that $u_i \neq v_i \land u_i \neq w_i$ unless $u_i=0$ therefore $V \cap W = \{0\}$.
Finding a function that is harmonic in an annulus
Hint: To match the $a_n \cos (n\theta)$ and $b_n \cos(n\theta)$, take a suitable combination of the real parts of $z^n$ and $z^{-n}$. EDIT: If this combination is $u_n(r,\theta) = c_n r^n \cos(n \theta) + c_{-n} r^{-n} \cos(n \theta)$, we need $$ \eqalign{c_n + c_{-n} &= a_n \cr 2^n c_n + 2^{-n} c_{-n} &= b_n\cr} $$ Solve this system of equations for $c_n$ and $c_{-n}$. Then show $\sum_{n=-\infty}^{-1} c_n z^n$ converges absolutely and uniformly on $|z| \ge 1$ and $\sum_{n=0}^\infty c_n z^n$ converges absolutely and uniformly on $|z| \le 2$.
Showing a reduction by 30% using dimensional analysis
Doubling $E$ in the formula $E=Cv^2Ap$, where $C,A,p$ remain constant, amounts to multiplying speed $v$ by $\sqrt{2}$. This will result in the time-to-destination divided by $\sqrt{2}$. Since $1/\sqrt{2}\approx 0.7$, the new time to destination is about 70% of the old one; this is the gain of 30% to which the problem referred.
Location of an arbitrary point of an ellipse
It would have been more helpful if you presented the question which contains in its solution this supposition. Anyway, it is because the other cases can be done in a similar manner due to symmetry. $1)$ The part of the ellipse in the $2$nd quadrant is symmetric to that in the first quadrant with respect to the $y$-axis. $2)$ The part in the third quadrant is the symmetric of that in the first w.r.t the origin. $3)$ The part in the fourth quadrant is symmetric to that in the first with respect to the $x$-axis.
Is this set meager? $A = \{x\in \mathbb{R}: \exists c>0, |x-j2^{-k}|\geq c2^{-k}, \forall j\in \mathbb{Z}, k\geq 0 \}$
We have $A=\bigcup A_n$, where $$A_n=\{x\in \mathbb{R}: |x-j2^{-k}|\geq 2^{-n-k}, \forall j\in \mathbb{Z}, k\geq 0 \}.$$ So it suffices to show that each $A_n$ is meager. Since $$A_n=\bigcap_{j\in\Bbb Z,\ k\ge 0} \Bbb R\setminus (j2^{-k}-2^{-n-k}, j2^{-k}+2^{-n-k}),$$ it is closed. Since $A_n$ is disjoint from the set of dyadic rationals, the set $A_n$ is nowhere dense.
Differentiating multivariate equations with respect to one variable
Saying that we leave the other function variables unchanged is not accurate. When taking a partial derivative with respect to one of the variables (of a multivariate function), we treat the remaining variables as constants. And that's not the same as "leaving them unchanged". For example, if $$L(a,y)=y\ln(a)+y,$$ then $$\frac{\partial L}{\partial a}=y\cdot\frac{1}{a}+0=\frac{y}{a}.$$ Note that we didn't leave the second $y$ unchanged: with respect to $a$, it's a constant term whose derivative is zero. (P.S. By the way, you're using the term "term" incorrectly.)
Cramer's rule and inverse in complex numbers.
I think you mean $(a,b) \cdot (c,d) = (ac - bd, ad + bc)$. $\begin {bmatrix} a & -b \\ b & a \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ By Cramer's rule: $x = \frac {\begin{vmatrix} 1 & -b \\ 0 & a \end{vmatrix}} {\begin{vmatrix} a & -b \\ b & a\end{vmatrix}} = \frac {a} {a^2 + b^2}$ $y = \frac {\begin{vmatrix} a & 1 \\ b & 0 \end{vmatrix}} {\begin{vmatrix} a & -b \\ b & a\end{vmatrix}} = \frac {-b} {a^2 + b^2}$
Prove that $f\ast g$ is continuous if $f\in C(\mathbb{T})$ and $g\in R(\mathbb{T})$
To answer the question you asked (more or less, "Can I swap the limit and the integral?"), I have no idea. There are some cases where you can, and I can never remember the rules. I had a student who called such swaps "engineer's prerogative," but I think this is a little unkind to engineers. I'm sure others can point you to theorems. But often it's easier just to do things by hand. If you instead look at \begin{align} C(x) - C(x_0) &= \frac{1}{2\pi} \int_0^{2\pi} f(x-t)g(t)~dt - \frac{1}{2\pi} \int_0^{2\pi} f(x_0-t)g(t)~dt\\ &= \frac{1}{2\pi} \int_0^{2\pi} [f(x-t) - f(x_0 - t)]\cdot g(t)~dt\\ &= \frac{1}{2\pi} \int_0^{2\pi} [f((x-x_0) - s) - f(-s)]\cdot g(x_0 + s)~ds \end{align} you can estimate the integrand: Since $f$ is continuous on the compact set $[0,2\pi]$, it's uniformly continuous. So for $\epsilon > 0$, there's a $\delta$ such that $|p - q| < \delta$ implies $|f(p) - f(q)| < \epsilon$. Furthermore, because $g$ is integrable, so is $|g|$ (explanation). That means that for $|x - x_0| < \delta$, we have \begin{align} | C(x) - C(x_0) | &= \frac{1}{2\pi} \left|\int_0^{2\pi} [f((x-x_0) - s) - f(-s)]\cdot g(x_0 + s)ds\right| \\ &\le \frac{1}{2\pi} \int_0^{2\pi} |[f((x-x_0) - s) - f(-s)]\cdot g(x_0 + s)|ds \\ &\le \frac{1}{2\pi} \int_0^{2\pi} \epsilon \cdot |g(x_0 + s)|ds \\ &= \epsilon \frac{1}{2\pi} \int_0^{2\pi} |g(x_0 + s)| ds \\ &= \epsilon \frac{1}{2\pi} \int_0^{2\pi} |g(s)| ds \text{, by periodicity}\\ &= \epsilon M \end{align} where $M$ is the integral of $|g|$. So as $\epsilon$ goes to zero, the difference goes to $0$ and $C$ is continuous at $x_0$.
Prob. 9, Chap. 6, in Baby Rudin: Which one of these two improper integrals converges absolutely and which one does not?
Observe that on any interval of the form $[(k-1/3)\pi, (k+1/3)\pi]$ (where $k \in \mathbb Z$), we have $|\cos(x)| \geq 1/2$. Therefore the integral $$\int_0^{\infty}\left|\frac{\cos(x)}{1+x}\right|\ dx$$ is at least as large as $$\sum_{k=1}^{\infty}\int_{(k-1/3)\pi}^{(k+1/3)\pi} \frac{1}{2(1+x)}\ dx$$ As the integrand is monotonically decreasing for positive $x$, it follows that on the interval $[(k-1/3)\pi, (k+1/3)\pi]$ (with $k > 0$) we have $$\frac{1}{2(1+x)} \geq \frac{1}{2(1 + (k + 1/3)\pi)}$$ and therefore $$\sum_{k=1}^{\infty}\int_{(k-1/3)\pi}^{(k+1/3)\pi} \frac{1}{2(1+x)}\ dx \geq \sum_{k=1}^{\infty} \frac{\pi}{3(1+(k+1/3)\pi)}$$ which diverges by limit comparison with $\sum 1/(3k)$.
Chromatic polynomial of a $8$-vertex graph
It's easy to see this from first principles: remember that $P(x)$ is the number of colourings of the graph using $x$ colours. If you fix the colours of the two vertices in the middle (which used to be $u$ and $v$), then the remaining graph splits into two separate complete graphs which are very simple to count colourings in. (At this point you should ask yourself: if there are $3$ ways to colour each copy, how many different ways are there to colour both together? Hint: it's not $6$.) The exact number of colourings of each complete graph (as a function of $x$) depends on whether you colour $u$ and $v$ the same colour or distinct colours, so consider these two cases separately, and remember to account for how many ways there are to choose the two initial colours.
Using Uniform Continuity to show that a limit exists
Suppose $f:\mathbb{Q}\to\mathbb{R}$ is a uniformly continuous function on $\mathbb{Q}$, and recall that every $x\in\mathbb{R}$ is a cluster point of $\mathbb{Q}$. Now, suppose that $\displaystyle \lim_{y \to x}f(y)$ fails to exist for some $x\in \mathbb{R}$. Then, according to the Divergence Criterion, there is a sequence $(y_n)$ in $\mathbb{Q}$, with $y_n\neq x$ for all $n\in\mathbb{N}$, such that $(y_n)$ converges to $x$ and the sequence $(f(y_n))$ does not converge in $\mathbb{R}$. Since the sequence $(y_n)$ is convergent, according to the Cauchy Convergence Criterion, $(y_n)$ is a Cauchy sequence. Thus, according to Part (1), $(f(y_n))$ is a Cauchy sequence, so that according to the Cauchy Convergence Criterion, the sequence $(f(y_n))$ converges in $\mathbb{R}$. However, the sequence $(f(y_n))$ does not converge in $\mathbb{R}$. Thus, a contradiction has been reached. Therefore, $\displaystyle \lim_{y \to x}f(y)$ exists for every $x\in \mathbb{R}$. Part (1) refers to the proof that if $f:\mathbb{Q}\to\mathbb{R}$ is a uniformly continuous function and $(x_n)$ is a Cauchy sequence in $\mathbb{Q}$, then $(f(x_n))$ is a Cauchy sequence in $\mathbb{R}$.
How do I solve $\overline{(A \cap B) \cup (\overline{A} \cap C)} = (A \cap \overline{B}) \cup (\overline{A} \cap \overline{C})$?
Hint: To show that $$ (\overline{B} \cap A) \cup (\overline{A} \cap \overline{C}) \cup (\overline{B} \cap \overline{C})$$ is the same as $$ (\overline{B} \cap A) \cup (\overline{A} \cap \overline{C}) $$ it is enough to show that $$(\overline{B} \cap \overline{C})$$ is a subset of $$ (\overline{B} \cap A) \cup (\overline{A} \cap \overline{C})$$ Prove this by writing $E$ as $(E \cap A) \cup (E \cap \overline A)$ where $E$ stands for $$(\overline{B} \cap \overline{C})$$
Metric Space Inquiry on Self Mapping Function
What do the axioms of an equivalence relation have to do with it? Do you mean the axioms here: https://en.wikipedia.org/wiki/Metric_%28mathematics%29 Maybe as a conceptual hint, you can think about the more general situation where $Y$ injects into a metric space $(X,D)$. Then you can try to put a metric on $Y$ by thinking of it as a subset of $X$.
Pre-calc complex roots of unity help
Let $\omega$ be the primitive 5th root of unity $$\omega=e^\frac{2\pi i}{5}=\cos\frac{2\pi}{5}+i\sin\frac{2\pi}{5}= \frac{\sqrt 5-1}{4}+i\frac{\sqrt{10+2 \sqrt{5}}}{4}$$ Then the equation $$z^5=i$$ has the 5 solutions $$z_k=i\omega^{k-1},\quad k=1\ldots 5$$ 2. $$-2-2i=2^{3/2}e^{-\frac{3\pi i}{4}}$$ Then $$z=2^{1/2}e^{-\frac{\pi i}{4}}=2^{1/2}\left(\cos\frac{\pi}{4}-i\sin\frac{\pi}{4}\right)=1-i$$ is the solution of the equation $$z^3 = -2 - 2i$$ in the fourth quadrant.
Hypothesis testing for a Poisson distribution
To get an answer it is necessary to fix a certain $n$, so let's set $n=5$. Per the Neyman–Pearson lemma, the critical region is $$\mathbb{P}[Y\geq k]=0.05$$ where $Y\sim Po(5)$. It is easy to verify with a calculator (or manually in 5 minutes) that $$\mathbb{P}[Y\geq 10]=3.18\%$$ and $$\mathbb{P}[Y\geq 9]=6.81\%$$ It is evident that there's no way to have a non-randomized test which gets exactly a 5% size... thus the test must be randomized in the following way: If the sum of the observations is 10 or higher, I reject $H_0$. If the sum of the observations is 8 or lower, I do not reject $H_0$. If the sum of the observations is exactly 9, I toss a fair coin and reject $H_0$ if the coin shows heads. This can be formalized as follows: $$ \psi(y) = \begin{cases} 1, & \text{if $y>9$} \\ 0.5, & \text{if $y=9$} \\ 0, & \text{if $y<9$} \end{cases}$$ And the total size is $$\alpha=0.5\times P(Y=9)+P(Y>9)=0.5000\times0.0363+0.0318=0.0500$$ as requested.
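The tail probabilities and the resulting size can be confirmed in a few lines (a check of my own, using only the Poisson pmf):

```python
from math import exp, factorial

def pois_pmf(k, lam=5.0):
    return exp(-lam) * lam ** k / factorial(k)

def pois_tail(k, lam=5.0):
    # P[Y >= k] computed as 1 - P[Y <= k-1]
    return 1 - sum(pois_pmf(j, lam) for j in range(k))

print(f"P[Y >= 10] = {pois_tail(10):.4f}")       # 0.0318
print(f"P[Y >=  9] = {pois_tail(9):.4f}")        # 0.0681
alpha = 0.5 * pois_pmf(9) + pois_tail(10)
print(f"size of randomized test = {alpha:.4f}")  # 0.0500
```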
Can I say f(x) = 5x + 2 is bijective?
Note that this function isn't well-defined, as not all real inputs into $f$ yield an integer output. If it were instead defined as $f: \mathbb{R} \to \mathbb{R}$, then this would be fine. Yes, you can say that. Proof of injectivity: Suppose $f(a) = f(b)$. Then $5a + 2 = 5b + 2$, so $5a = 5b$. This yields $a=b$, so $f(x) = f(y)$ only if $x=y$. Proof of surjectivity: Note that for any $n \in \mathbb{Z}$, if $f(x) = n$, then $5x + 2 = n$, so $x = \frac{n-2}{5} \in \mathbb{R}$, so $x$ is a valid input for the function. Thus, $f$ is both injective and surjective, so it is bijective.
Divisibility problem involving the $2015^{th}$ power
Well, the sequence $a_n=(5+2\sqrt6)^n+(5-2\sqrt6)^n-10$ must follow some linear recurrence relation like $a_n=10a_{n-1}-a_{n-2}+80$ with initial conditions $a_1=0,\;a_2=88$, and every other of these seems to be divisible by 960, which should be easy to prove by induction.
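A quick empirical check of the claimed divisibility (mine; it computes $a_n$ exactly through the integer recurrence for the sum of the two conjugate powers):

```python
# a_n = (5 + 2*sqrt(6))^n + (5 - 2*sqrt(6))^n - 10, computed exactly via the
# integer recurrence s_n = 10*s_{n-1} - s_{n-2}, s_0 = 2, s_1 = 10, where
# s_n is the sum of the two conjugate powers and a_n = s_n - 10.
s_prev, s_curr = 2, 10  # s_0, s_1
for n in range(1, 30):
    a_n = s_curr - 10
    if n % 2 == 1:  # every other term: n = 1, 3, 5, ...
        assert a_n % 960 == 0, (n, a_n)
    s_prev, s_curr = s_curr, 10 * s_curr - s_prev
print("a_n divisible by 960 for all odd n up to 29")
```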
unit-speed parametrisation of rational curves
Perhaps I'm missing something very obvious... but assume $$x(s) = \frac{p_1(s)}{q_1 (s)},\quad y(s) = \frac{p_2(s)}{q_2 (s)},\quad\mbox{and}\quad z(s) = \frac{p_3(s)}{q_3 (s)}. $$Put $W(s)= q_1 (s)q_2(s)q_3(s)$, and write $$x(s) = \frac{p_1(s)q_2 (s)q_3 (s)}{W (s)},\quad y(s) = \frac{q_1 (s)p_2(s)q_3 (s)}{W (s)},\quad\mbox{and}\quad z(s) = \frac{q_1 (s)q_2 (s)p_3(s)}{W (s)}. $$Now rename the numerators to $X(s)$, $Y(s)$ and $Z(s)$.
Prove that there exists an automorphism permuting the zeroes of an irreducible polynomial in the splitting field
Let $\alpha$ and $\beta$ be two roots of $p$ in $E$. Then $F(\alpha) \cong F(\beta)$ as field extensions of $F$. If this isn't obvious, note they are both isomorphic to $F[x] / p(x)$. The embedding $F(\alpha) \hookrightarrow E$ makes $E$ the splitting field of $p$ over $F(\alpha)$. The embedding $F(\alpha) \to F(\beta) \hookrightarrow E$ also makes $E$ the splitting field of $p$ over $F(\alpha)$. Because the splitting field of a polynomial over a field is unique, these two different extensions of $F(\alpha)$ must be isomorphic. This gives an isomorphism of $E$ that sends $\alpha$ to $\beta$.
Number of non isomorphic graphs
Think about how many non-isomorphic graphs you can make with $V = 30$ vertices and $E = 3$ edges. You could have a path with three edges, a path with two edges together with an isolated edge, three isolated edges, a triangle, or a star.
Non-computable c.e. sets are Kurtz random
No c.e. set is Kurtz random because any infinite c.e. set has an infinite computable subset and then a test consisting of clopen sets can easily be built to zoom in on that infinite computable subset. I'll provide more details if you'd like.
Is this the projective plane or the Klein bottle? (Fundamental polygon)
Closed surfaces are determined by two properties: orientability and Euler characteristic. As you have already correctly determined, the surface in question is not orientable, because the edge $c$ is glued with the opposite orientation. Thus one has to determine the Euler characteristic $\chi$, which can be calculated by the formula: $$\chi = \#\{\text{vertices}\} - \#\{\text{edges}\} + \#\{\text{2-cells}\}$$ In your example, the number of edges ($=3$) and the number of $2$-cells ($=1$) are easily determined. To determine the number of vertices, one has to understand which of the $6$ vertices in the picture are identified. It turns out that there are $2$ distinct vertices in the quotient. Hence, one has $\chi = 2 - 3 + 1 = 0$, thus the quotient surface is a Klein bottle. (The projective plane has Euler characteristic $\chi= 1 - 1 + 1 = 1$.)
Derivation of the general forms of partial fractions
Your first and second cases are equivalent. You need to understand complex numbers to be able to see that. The complex numbers, written $\mathbb{C}$, are of the form $x+iy$ where $x$ and $y$ are real numbers that you already know and $i$ is a special number with $i^2 = -1$. Usually, we write $i = \sqrt{-1}$. To see that the second case is the same as the first, notice that: $$\frac{2x+3}{(x-1)(x^2+4)} \equiv \frac{A}{x-1}+\frac{B}{x-2i}+\frac{C}{x+2i}$$ where $A=1$, $B=-\frac{1}{2}-i\frac{1}{4}$ and $C=-\frac{1}{2}+i\frac{1}{4}$. The only reason you are taught the second form is that you don't know about $x\pm 2i$, so the two complex fractions are recombined: $$\frac{B}{x-2i} + \frac{C}{x+2i} \equiv \frac{1-x}{x^2+4}$$ The third case isn't quite the same. But notice that the denominator is a cubic. When you multiply it out, you'll have an $x^3$ and lower powers. We couldn't get away with just two fractions because $$\frac{A}{x-a}+\frac{B}{x-b} \equiv \frac{A(x-b)+B(x-a)}{(x-a)(x-b)}$$ and this only has a quadratic expression as the denominator.
Sum of squares finite for two sequences implies sum of products finite?
Note that we have $$\overbrace{\sum_{k=1}^n a_k b_k \leq \sqrt{\sum_{k=1}^n a_k^2} \sqrt{\sum_{k=1}^n b_k^2}}^{\text{Cauchy-Schwarz inequality for finite dimensional vector-space}} \leq \sqrt{\sum_{k=1}^{\infty} a_k^2} \sqrt{\sum_{k=1}^{\infty} b_k^2} = \text{constant}$$ Now let $n \to \infty$, to conclude what you want.
Question about negative value using the ratio convergence test for integrals
Since the integrand is non-negative, it suffices to verify finiteness: $$ \text{For which $p$ is }\int_{\mathbb R^+}x^p\tan^{-1} x\ dx<\infty\text{?} $$ Since $\tan^{-1}x=x+o(x)$, we have the bound $\tan^{-1}x\geq cx$ for $x\in [0,\epsilon)$. Thus $p$ must satisfy $$ \int_{0}^{\epsilon}x^{p+1}<\infty, $$ which means $p>-2$. On the other hand, since $\tan^{-1}x \geq 1$ for $x>N$ we must have $$ \int_{N}^{\infty}x^p<\infty, $$ which means $p<-1$. Therefore $p\in (-2,-1)$. To show that every $p$ in this range works, observe that $$ \int_{\mathbb R^+}x^p\tan^{-1}x\ dx\leq \int_{0}^{N}x^{p+1}+\int_{N}^{\infty}x^p\cdot \frac{\pi}{2}<\infty. $$ We have used the bounds $\tan^{-1}x\leq x$ for $x\geq 0$ and $\tan^{-1}x\leq \frac{\pi}{2}$.
Prove that EXT,TOT and INF are not recursively enumerable
First, a quick comment on extendibility in general. The function $g$ you describe is extendible to a total recursive function, contrary to what you claim - namely, it's extended by the identity function $x\mapsto x$. When we extend a partial recursive function to a total recursive function, we don't need (a priori) to keep track of the original domain, so the fact that $dom(g)$ is complicated in no way directly prevents $g$ from being extendible. You have to work a bit harder to get a non-extendible function. As a partial hint, note that (fixing some $x$) if we have some $s$ such that we know $$\varphi_x(x)\downarrow\iff\varphi_x(x)[s]\downarrow,$$ then we can tell whether $x\in K$ just by running $\varphi_x(x)$ for $s$-many steps; conversely, for $x\in K$ we can find the stage $s$ at which point $\varphi_x(x)\downarrow$. But let's say we've resolved the problem above, and we have a non-extendible function $h$. Then how can we use this to reduce $\overline{K}$ to $EXT$? Well, suppose you're given an $x$ and you want to tell whether $x\in \overline{K}$. To do this, you want to build a function $f_x$ which is in $EXT$ iff $x\in\overline{K}$ - that is, iff we never see $\varphi_x(x)$ converge. The general strategy for doing this sort of thing is to think of $f_x$ in terms of "until" - namely, you want $f_x$ to sound like $$\mbox{"do [blah] until (if ever) $\varphi_x(x)$ converges, after which point do [foo]."}$$ Here [blah] should be some behavior which makes $f_x$ look extendible, and [foo] should be some behavior which makes $f_x$ look non-extendible. Looking extendible is easy - for example, we can simply require $f_x(y)$ to not be defined until we see $\varphi_x(x)$ converge (the everywhere-undefined function is definitely extendible!). Looking non-extendible is harder, but here's where our $h$ - once we have it - comes in: the $f_x$ we want should be "Look like the always-undefined function until we see $\varphi_x(x)$ converge, at which point behave like $h$." Now you just need to make this precise.
general solution of a nonlinear third order partial differential equation
Firstly, rescale the variables as $$ x\rightarrow ax \quad t\rightarrow bt \quad y\rightarrow cy $$ and choose $$ a=\frac{2h_0}{3}\sqrt{\frac{h_0}{g}}\quad b=h_0c\quad \frac{h_0^3c^2}{9}=1, $$ so that the equation takes the form $$ \frac{\partial x}{\partial t}+\lambda x\frac{\partial x}{\partial y}+\frac{\partial x}{\partial y}+\frac{\partial^3 x}{\partial y^3}=0 $$ where $\lambda$ is now an ordering parameter, introduced for convenience and taken to be $1$ at the end of the computation. Then, you can consider two different situations: $\lambda\rightarrow 0$ and $\lambda\rightarrow\infty$. For the former case, take the series $$ x=x_0+\lambda x_1+O(\lambda^2). $$ I limit the computation to first order just to explain the technique. Put this into the equation and you will obtain the set of equations $$ \frac{\partial x_0}{\partial t}+\frac{\partial x_0}{\partial y}+\frac{\partial^3 x_0}{\partial y^3}=0, $$ $$ \frac{\partial x_1}{\partial t}+\frac{\partial x_1}{\partial y}+\frac{\partial^3 x_1}{\partial y^3}=-x_0\frac{\partial x_0}{\partial y}, $$ $$ \vdots $$ Now, you can see that the equation for $x_0$ is linear and can be solved exactly, e.g. with a Fourier series, given the proper boundary and initial conditions. The procedure can be iterated to whatever order you like; just bear in mind that going to higher orders implies more involved computations. Finally, the opposite limit can be evaluated by rescaling the time variable as $t\rightarrow \lambda t$. Then, take the series $$ x=x_0+\frac{1}{\lambda}x_1+O\left(\frac{1}{\lambda^2}\right). $$ This kind of approach is known in fluid dynamics as a boundary layer problem. Then you get the set of equations $$ \frac{\partial x_0}{\partial t}+x_0\frac{\partial x_0}{\partial y}=0, $$ $$ \frac{\partial x_1}{\partial t}+x_0\frac{\partial x_1}{\partial y}+x_1\frac{\partial x_0}{\partial y}=-\frac{\partial x_0}{\partial y}-\frac{\partial^3x_0}{\partial y^3}, $$ $$ \vdots $$ I will not go too deeply into these equations, as in boundary layer problems there is a further complication arising from the boundary conditions. I just note that the leading-order equation can be solved by the method of characteristics. This should give you some starting point to work with.
Simplification of combinatorial formula
If we ignore the binomial $\binom{n}{m}$ in the denominator, because it is constant across the $k$-sum, conversion to $\Gamma$-functions yields $$\binom{n}{m}E(s)= \frac{m}{2m-1}\binom{n-m}{m-1}{}_3F_2(1-2m,1-m,1-m;2-2m,2-2m+n;1).$$ This might have a simpler representation if the hypergeometric series can be reduced with one of the formulas in http://arxiv.org/abs/1105.3126, which I did not check.
Find recurrence relation
Note: my numbers are not matching yours, so possibly what follows contains a blunder. Let $A_n$ be the number of "good" strings of length $n$. Let $B_n$ be the number of strings of length $n$ wherein the last character does not match its predecessor but which are otherwise good. And let $T_n=A_n+B_n$. We note the initial conditions $A_0=1,\;A_1=0,\;B_0=0,\;B_1=7$ Then $$n≥2\implies A_n=A_{n-1}+B_{n-1}=T_{n-1}$$ $$n≥2 \implies B_{n}=6\times A_{n-1}=6T_{n-2}$$ It follows that, for $n≥3$, $T_n$ satisfies the linear recursion: $$T_n=T_{n-1}+6T_{n-2}$$ Can you finish from here? For comparison: I get $$\{A_n\}=\{1,0,7,7,49,91,385,931,3241,8827,\cdots\}$$ Also worth noting: For $n≥3$ we can solve the linear recursion (using the values previously computed to supply initial conditions for $T_3,\;T_4$). We get $$n≥3\implies A_{n+1}=T_n=\frac 75 \times (3^n-(-2)^n)$$
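A short script (my addition) reproduces the table and checks the closed form:

```python
# Reproduce the sequences from the recurrences and check the closed form
# A_{n+1} = (7/5) * (3^n - (-2)^n) for n >= 3.
A = [1, 0]
B = [0, 7]
for n in range(2, 12):
    A.append(A[n - 1] + B[n - 1])
    B.append(6 * A[n - 1])
print(A)  # [1, 0, 7, 7, 49, 91, 385, 931, 3241, 8827, ...]
for n in range(3, 11):
    assert 5 * A[n + 1] == 7 * (3 ** n - (-2) ** n), n
print("closed form verified")
```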
Simplifying $0.300 (1 \pm 0.0633)$
Recall the distributive property: $$a(b+c) = ab + ac$$ or in your case $$0.300(1±0.0633) = (0.300)(1)\pm (0.300)(0.0633)$$ $$\dots$$
Differentiation with help of Frenet Frame
Hint: $\beta(s)=\alpha(s) + \gamma(s)$ where $\gamma$ has constant length. So $\langle\gamma',\gamma\rangle=0$. Now after differentiating $\beta$ you get $\beta'(s)=u(s)\mathbf{B}$. So $\gamma'=u(s)\mathbf{B}-\mathbf{T}$. Combining this with $\langle\gamma',\gamma\rangle=0$ gives $u(s)=0$.
Flattening matrix derivatives
I would proceed like this: consider a basis of the space of matrices $\mathbb R^{p \times p}$. This is pretty easy: take for example $(E_{ij})_{\substack{1 \le i \le p \\ 1 \le j \le p}}$ where $E_{ij}$ is the matrix having all coefficients vanishing except the one at the $i$th row and $j$th column, which is equal to one. $Df(E_{ij})$ is a vector $v_{ij} \in \mathbb R^p$. Define the coefficient $(l,(i,j))$ of your expected matrix $M \in \mathbb R^{p \times p^2}$ to be the $l$th coordinate of $Df(E_{ij})$.
Proving that no number is the successor of itself
In order to answer that, we must look at what lines 3 and 4 say. Line 3: If $s(0)\neq0$ and if, for every natural $n$, $s(n)\neq n\implies s\bigl(s(n)\bigr)\neq s(n)$, then, for every $n$, $s(n)\neq n$. This is just the induction principle applied to the proposition $(\forall n\in\mathbb{N}):s(n)\neq n$. Line 4: $s(0)\neq 0$ This is here to assert that the base case of the induction holds. Then lines 5 to 10 are there to prove that if $s(n)\neq n$, then $s\bigl(s(n)\bigr)\neq s(n)$, that is, to complete the induction proof. So, he starts by picking an $a\in\mathbb N$ and he assumes that $s(a)\neq a$. In order to prove that $s\bigl(s(a)\bigr)\neq s(a)$, he assumes that they are equal and reaches a contradiction. Since such a contradiction is reached, $s\bigl(s(a)\bigr)$ is indeed different from $s(a)$ and the theorem is proved.
How can I convert a truncated p-adic rational number back into its original form?
Just as it is the case with $n$-ary expansions of real numbers, a $p$-adic number is rational iff its $p$-adic expansion is eventually periodic. For your specific example, there is an obvious choice of extending it with the repeated period $111000$, which gives you $1/9$. Although this might be understood from the context, one should still clarify that repeating period, because formally one has infinitely many other choices to extend the truncated expression in an eventually periodic way. ("Keeping" the truncated expression means you extend it with a period of $0$'s; but of course you can also choose the next 50000 digits arbitrarily and then declare that 50012-digit string the period to be repeated from now on. Or, after that, just attach zeroes: this way you see there are even infinitely many natural numbers that extend your expression, namely, all that are congruent to it modulo $p^{\text{length of your truncated expression}}$.)
Existence of an extending measure
Let $\Omega=\{1,2,3\}$. Define $\mu(\{1\})=\mu(\{2\})=\mu(\{1,3\})=1$ and $\mu(\{2,3\})=2$ (and $\mu(\varnothing)=0$). Let $\mathcal A$ consist of just the $5$ sets on which I've just defined $\mu$. This satisfies your hypotheses because the only time disjoint sets in $\mathcal A$ have their union in $\mathcal A$ is when one of the sets is empty. But there is clearly no extension of $\mu$ to a measure on the generated $\sigma$-algebra.
Galois extension with Galois group $S_3$
Your reasoning is missing the fact that only in Galois extensions are you given that $e, f, r$ are independent of the prime chosen. So you can have $p = P_1P_2$ (indeed you must, as you note!). Your assumption of regularity is the main issue, but there are plenty of examples where you do not have something totally split in the decomposition field. In the case where the subextension is Galois this changes, but obviously unless $D_p\trianglelefteq G$ the subextension need not be Galois. Note I spoke earlier about the inertia groups, but the problem with that is that this is of course for local fields. The local information is still useful, as you note that completing or localizing at one of the primes still gives you information, but the splitting information is embedded in the step prior to this when engaging with $D_p$.
Calculate the top area of a truncated cone with known volume, height, and bottom area
Welcome to MSE. First of all, the area of the lake cannot be $4.3\,km^2$, because even if the lake were a cylinder the volume would be $4.3\times 0.01=0.043\,km^3$, not $0.3\,km^3$. So I think the surface area must be $43\,km^2$. The cross section of the lake is a trapezoid. We can consider a rectangle whose width is the average of the diameters of the upper and bottom surfaces; in this case we consider a cylinder instead of a truncated cone. Let the diameter of the surface be $2r$, that of the bottom be $2r_1$, and the corresponding diameter of the cylinder be $2r_a$; we have: $$2r_a=\frac{2r+2r_1}2\rightarrow r_a=\frac{r+r_1}2$$ $$r=\sqrt{\frac {43}{3.14}}\approx 3.7$$ $$\big(\frac{r+r_1}2\big)^2\times 3.14\times 0.01=0.3\rightarrow r_1+r\approx 6.18$$ Hence the radius of the bottom is about $6.18-3.7=2.48$, and its area is: $$A=2.48^2\times 3.14\approx 19.3\,km^2$$
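Redoing the arithmetic with the same inputs (a check of my own):

```python
from math import sqrt

PI = 3.14                      # the value of pi used in the answer
V, h, A_top = 0.3, 0.01, 43.0  # volume (km^3), depth (km), surface area (km^2)

r = sqrt(A_top / PI)           # surface radius, ~3.70 km
r_avg = sqrt(V / (PI * h))     # radius of the equal-volume cylinder
r1 = 2 * r_avg - r             # bottom radius, since r_avg = (r + r1) / 2
print(f"r = {r:.2f}, r + r1 = {2 * r_avg:.2f}, r1 = {r1:.2f}")
print(f"bottom area = {PI * r1 ** 2:.1f} km^2")   # ~19.3 km^2
```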
Number of powers of $2$ having leading digit $1$
If $2^m$ starts with a $1$ then $2^{m-1}$ and $2^{m+1}$ do not. And if $2^m$ has one more decimal digit than $2^{m-1}$ then it must start with a $1$. So all you need to know is how many powers of $10$ are less than or equal to $2^M$. This is $$\lfloor 1+ \log_{10} 2^M \rfloor = 1+\lfloor M\log_{10} 2\rfloor $$ where you need to add $1$ to deal with $2^0=1$. For $M \gt 0$ you could instead write $\lceil M\log_{10} 2\rceil$.
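A quick check of the count against the formula (my addition):

```python
from math import ceil, floor, log10

M = 500
direct = sum(1 for m in range(M + 1) if str(2 ** m).startswith("1"))
formula = 1 + floor(M * log10(2))
assert direct == formula == ceil(M * log10(2))  # the M > 0 variant agrees too
print(direct)  # 151
```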
Solving $A_n A_{n+1}=A_{n}+2 A_{n+1}$ to disagree with a question
This is a Riccati recurrence: $\begin{align*} w_{n + 1} = \frac{a w_n + b}{c w_n + d} \end{align*}$ with $c \ne 0$ and $a d - b c \ne 0$. There are several ways to solve them. Brand, "A Sequence Defined by a Recurrence Relation", AMM 62:7 (1955), pp. 489-492, goes as follows. Define: $\begin{align*} y_{n + 1} &= \alpha - \frac{\beta}{y_n} \\ \alpha &= a + d \\ \beta &= a d - bc \end{align*}$ Substituting $y_n = x_{n + 1} / x_n$ now gives: $\begin{align*} x_{n + 2} - \alpha x_{n + 1} + \beta x_n &= 0 \end{align*}$ We need two starting values; pick $x_0 = 1$ for convenience, giving $x_1 = y_0$, and you are set. Another road is to recognize the recurrence as a Möbius transformation: $\begin{align*} w_{n + 1} &= \frac{a w_n + b}{c w_n + d} \end{align*}$ It turns out those compose just like $2 \times 2$ matrices multiply, so if you define: $\begin{align*} M &= \pmatrix{a & b \\ c & d} \\ M^n &= \pmatrix{a^{(n)} & b^{(n)} \\ c^{(n)} & d^{(n)}} \end{align*}$ then: $\begin{align*} w_n &= \frac{a^{(n)} w_0 + b^{(n)}}{c^{(n)} w_0 + d^{(n)}} \end{align*}$ Yet another way is given by Mitchell, "An Analytic Riccati Solution for Two-Target Discrete-Time Control", Journal of Economic Dynamics and Control 24:4 (2000), pp. 615-622. Define the auxiliary sequence: $\begin{align*} x_n &= \frac{1}{1 + \eta w_n} \end{align*}$ to get: $\begin{align*} x_{n + 1} &= \frac{(d \eta - c) x_n + c} {(b \eta^2 - (a - d) \eta - c) x_n + a \eta + c} \end{align*}$ Picking $\eta$ so that $b \eta^2 - (a - d) \eta - c = 0$, this is a linear recurrence. A bonus is that it is first order, so it can be solved even if the coefficients aren't constant.
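As an illustration (my own sketch, not from the cited papers), here is the Möbius-matrix method applied to the title's recurrence $A_{n+1}=A_n/(A_n-2)$, i.e. $a=1$, $b=0$, $c=1$, $d=-2$, using exact rational arithmetic:

```python
from fractions import Fraction

# The title recurrence A_{n+1} = A_n / (A_n - 2) is the Moebius map with
# matrix M = [[a, b], [c, d]] = [[1, 0], [1, -2]].
def matmul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

M = [[1, 0], [1, -2]]
w0 = Fraction(5)          # arbitrary start (avoid the fixed points 0 and 3)
w_direct = w0
Mn = [[1, 0], [0, 1]]     # identity = M^0
for n in range(1, 10):
    Mn = matmul(M, Mn)    # Mn = M^n
    (a_n, b_n), (c_n, d_n) = Mn
    w_matrix = (a_n * w0 + b_n) / (c_n * w0 + d_n)
    w_direct = w_direct / (w_direct - 2)
    assert w_matrix == w_direct, n
print("Moebius matrix powers reproduce the direct iteration")
```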
Relation between operator norm of a matrix and norm of inverse
$(\mathbb{R}^2, \|\cdot\|_{sup})\to(\mathbb{R}^2, \|\cdot\|_1)$ i.e. $\|(x,y)\|_1=|x|+|y|$ $A= \left[ {\begin{array}{cc} 0 & 1\\ 1 & 0 \\ \end{array} } \right]=A^{-1}$ We have $\|(1,1)^{t}\|_{sup}=1$. Thus $\|A(1,1)^{t}\|_1=\|(1,1)^{t}\|_1=2$, so $\|A\|=\|A^{-1}\|>1$. Finally $\|A\|\cdot\|A^{-1}\|>1$.
Is the Schwarz inequality a special case of the Cauchy-Schwarz inequality?
As written now (with the integrals over an interval $[a,b]$), it isn't. But in a more general context (measure theory), both (1) and (2) are particular cases of a more general theorem: $$ \left|\int_X fg\,d\mu\right|^2\le \left(\int_X f^2\,d\mu\right)\left(\int_X g^2\,d\mu\right) $$ (1) is the particular case where the space is a finite set with a discrete uniform measure. In (2) we have an interval with the Lebesgue measure.
Proof for result of sum of 3 elements of recursive sequence
Using $a_j^2=1$ (each term is $\pm1$): $$a_k=a_{k-1}\cdot a_{k-3}$$ $$=a_{k-2}\cdot a_{k-4}\cdot a_{k-3}$$ $$=a_{k-3}\cdot a_{k-5}\cdot a_{k-4}\cdot a_{k-3}$$ $$=a_{k-4}\cdot a_{k-5}$$ $$=a_{k-5}\cdot a_{k-7}\cdot a_{k-5}$$ $$=a_{k-7}, \quad\forall k>7.$$ Thus the sequence is periodic with period $7$; the rest of the proof is easy.
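A brute-force check of the period (my addition; it assumes the terms are $\pm1$, as in the cancellations above):

```python
from itertools import product

# Check a_k = a_{k-7} for every +-1 choice of the three initial terms
# (the cancellations above rely on a_j^2 = 1).
for init in product([1, -1], repeat=3):
    a = list(init)
    for k in range(3, 40):
        a.append(a[k - 1] * a[k - 3])
    assert all(a[k] == a[k - 7] for k in range(10, 40))
print("period 7 confirmed for all +-1 initial conditions")
```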
How are these expressions $\leq$ and not $=$?
Consider the similar formulation: $x^2$ is minimized when $x=0$, since $0^2\le x^2$ for all $x$.
Inner Product with a Linear Transformation
Here $a$ and $b$ are variable vectors, and you have to find condition(s) on $T$ (not on these vectors) which make(s) $\langle a,b\rangle_1$ an inner product. Apply the definition of an inner product: $\langle a,b\rangle_1$ satisfies all the properties of an inner product for any $T$ except the condition that $\langle a,a\rangle_1=0$ implies $a=0$. In other words, what we need is that $\|Tx\|=0$ implies $x=0$. This is true iff $T$ is injective.
Compound angle formula
Hint: I assume you know the formula for $\tan(A\pm B)$ (otherwise, your teacher is evil :) ). Plug $A=\tan^{-1}a$, $B=\tan^{-1}b$ into that formula and rearrange it to get a formula of the form $\tan^{-1}a - \tan^{-1}b = \tan^{-1}(\text{some expression involving } a \text{ and } b)$. We start with the known $\tan$ compound angle formula: $$\tan(A + B) = \frac{\tan A + \tan B}{1 - \tan A \tan B}$$ Substituting $A=\tan^{-1}a$, $B=\tan^{-1}b$, i.e. $a = \tan A$, $b = \tan B$: $$\tan(\tan^{-1}a + \tan^{-1}b) = \frac{a + b}{1 - ab}$$ $$\tan^{-1}a + \tan^{-1}b = \tan^{-1}\left(\frac{a + b}{1 - ab}\right)$$ Now let $a = 3$, $b = -\frac{1}{2}$. Note that $\tan(-x) \equiv -\tan x$ for all $x$. $$\tan^{-1}3 - \tan^{-1}\left(\frac{1}{2}\right) = \tan^{-1}\left(\frac{3 - \frac{1}{2}}{1 + 3\cdot\frac{1}{2}}\right)$$ $$\tan^{-1}3 - \tan^{-1}\left(\frac{1}{2}\right) = \tan^{-1}\left(\frac{6 - 1}{2 + 3}\right) = \tan^{-1}1 = \frac{\pi}{4}$$
MSE of non-normal variance estimator
You have to use the chi-square distribution for that. $$(n-1)S^2/\sigma^2\sim\chi^2(n-1)$$
ODE with change of variable
Hint: The equation is $$\frac{uu'}{1+u-u^2}=\frac{1}{x}$$ observe that $$\frac{u}{u^2-u-1}=\frac{a}{(u-u_1)}+\frac{b}{(u-u_2)}$$ with $u_1=\frac{1-\sqrt{5}}{2}$ $$u_2=\frac{1+\sqrt{5}}{2}$$ $$b=\frac{-u_2}{u_1-u_2}=\frac{u_2}{\sqrt{5}}$$ $$a=\frac{-u_1}{\sqrt{5}}$$ After integration, we get $$(u-u_1)^a(u-u_2)^b=\frac{\lambda}{x}$$ solve for $u$.
Why do we need the axiom of choice?
As I understand your proposal, the problem is that it requires you to have a condition with which to construct that set. In many situations it's certainly possible to do so, and in those situations you need not resort to the axiom of choice to guarantee the existence of a selector function. A vast amount of mathematics can get by without the axiom of choice. However, if you have a situation where you can't guarantee that existence by a concrete example, you will have a bit more trouble. In some special cases you could perhaps prove the existence without the use of AC, but in general you need the axiom of choice to prove it, and it is known that AC cannot be proven from the other axioms.
Show the equivalence relation and show that the equivalence classes of this relation is closed and connected
Hint: For the transitivity, if $U,V$ are connected and $U\cap V$ is not empty, then $U\cup V$ is connected: let $f:U\cup V\rightarrow\{0,1\}$ be a continuous function; the restriction of $f$ to $U$ is constant, and the restriction of $f$ to $V$ is also constant. This implies that $f_{\mid U\cap V}=f_{\mid U}$ and $f_{\mid U\cap V}=f_{\mid V}$. For the second part, remark that the closure of a connected subset is connected. Let $E$ be a connected subset and $\bar E$ its closure, and let $f:\bar E\rightarrow\{0,1\}$ be continuous. Since $E$ is connected, $f_{\mid E}$ is constant, say $f_{\mid E}=0$. Let $x\in \bar E$ with $x=\lim x_n$, $x_n\in E$; then $f(x)=\lim f(x_n)=0$, which implies that $f$ is constant.
Intervals and polynomials
Generally, the interval range enclosure won't be tight. Therefore, if the resulting interval [enclosure] contains zero [possibly as its endpoint], does it mean there must be one or more roots inside the original interval? Could it happen that there might be no zero? There could be roots. Or not. It requires further (interval) analysis: a monotonicity test using an enclosure for the derivative, and the interval Newton method. But first bisect and calculate enclosures over smaller intervals until you arrive at the following, more favorable, situation: If the resulting interval [enclosure] contains no zero ..., does it mean there is no polynomial root inside the original interval? Yes.
Geometric or binomial distribution?
We solve only the expectation part, in order to introduce an idea, but to make what the monkey types more interesting, let us assume that the monkey has $5$ letters available. Let $X_1$ be the waiting time (the number of key presses) until the first "new" letter. Of course $X_1=1$. Let $X_2$ be the waiting time between the first new letter and the second. Let $X_3$ be the waiting time between the second new letter and the third. Define $X_4$ and $X_5$ similarly. Then the total waiting time $W$ is given by $W=X_1+X_2+X_3+X_4+X_5$. By the linearity of expectation we have $$E(W)=E(X_1)+E(X_2)+\cdots+E(X_5).$$ Clearly $E(X_1)=1$. Once we have $1$ letter, the probability that a key press produces a new letter is $\frac{4}{5}$. So by a standard result about the geometric distribution, $E(X_2)=\frac{5}{4}$. Once we have obtained $2$ letters, the probability that a letter is new is $\frac{3}{5}$. Thus $E(X_3)=\frac{5}{3}$. Similarly, $E(X_4)=\frac{5}{2}$ and $E(X_5)=\frac{5}{1}$. Add up. To make things look nicer, we bring out a common factor of $5$, and reverse the order of summation. We get $$E(W)=5\left(1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}\right).$$
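The expectation is easy to confirm both exactly and by simulation (my addition):

```python
import random
from fractions import Fraction

# Exact expectation: E(W) = 5 * (1 + 1/2 + 1/3 + 1/4 + 1/5)
exact = 5 * sum(Fraction(1, k) for k in range(1, 6))
print(exact, float(exact))  # 137/12, about 11.4167

# Monte Carlo check of the waiting time until all 5 letters have appeared
def one_run():
    seen, presses = set(), 0
    while len(seen) < 5:
        seen.add(random.randrange(5))
        presses += 1
    return presses

trials = 200_000
print(sum(one_run() for _ in range(trials)) / trials)  # about 11.42
```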
I want to know if the Laurent series for $(e^z/z) + e^{1/z}$ is resolved correctly
No, that is not correct. A Laurent series centered at $0$ is an expression of the form $\sum_{n=-\infty}^\infty a_nz^n$ and what you got is not of that form. Your computations are correct though. You should be able to deduce from them that$$(\forall z\in\Bbb C\setminus\{0\}):\frac{e^z}z+e^{1/z}=\sum_{n=-\infty}^{-2}\frac{z^n}{(-n)!}+\frac2z+2+\sum_{n=1}^\infty\frac{z^n}{(n+1)!}.$$
Prove that the function is eventually periodic to the origin.
1. Let us show that $f^n(v)\to 0$, $n\to\infty$, for any $v=(a,b,c,d)\in \mathbb{R}^4$. It is enough to consider non-negative vectors. It is clear that $f$ does not increase the maximum of the numbers, which we denote by $||v||$. Therefore, there exists the limit $\lim_{n\to\infty} ||f^n(v)||$. In particular, the sequence $\{f^n(v),n\ge 1\}$ has a limit point, say, $v_0$. Clearly, $||f^m(v_0)|| = ||v_0||$ for any $m\ge 0$. Also, extracting a convergent subsequence from preimages of elements converging to $v_0$, $v_0 = f(u_0)$. Assume that $v_0\neq 0$. Let $a>0$ be the maximal coordinate of $f(v_0)$, wlog the first. Then $v_0$ has either the form $(a,0,*,*)$ or $(0,a,*,*)$. Moreover, the sum of the remaining coordinates must be $a$ due to the fact that $v_0 = f(u_0)$. Case (a): Both remaining coordinates are less than $a$ (and therefore positive). Then we have either (here $c$ means any number less than $a$) $$(a,0,c,c)\to (a,c,c,c) \to (c,c,c,c),$$ or $$(0,a,c,c)\to (a,c,c,c) \to (c,c,c,c),$$ which contradicts $||f^m(v_0)|| = ||v_0||$. Case (b): One of the remaining coordinates is $a$ (and the other is zero). Then we have essentially two possibilities $$ (a,0,a,0)\to (a,a,a,a)\to (0,0,0,0), $$ and $$ (0,a,a,0)\to (a,0,a,0)\to (a,a,a,a)\to (0,0,0,0), $$ both contradicting the assumption. Therefore, $v_0=0$, hence $\lim_{n\to\infty} f^n(v) = 0$, as claimed. 2. First we prove this for real numbers. Notice that the polynomial $(x-1)(x+1)^3+1$ has a positive root $\lambda \approx 0.83929$. Set $(t,u,v,w) = (1,1+\lambda,(1+\lambda)^2,(1+\lambda)^3)$. Then $$f(t,u,v,w) = (\lambda,\lambda(1+\lambda),\lambda(1+\lambda)^2,(1+\lambda)^3-1)\\ = (\lambda,\lambda(1+\lambda),\lambda(1+\lambda)^2,\lambda(1+\lambda)^3)=\lambda\cdot (t,u,v,w).$$ (I'll not write the vector here, since $\lambda$ is a cumbersome root of a cubic equation, and $(t,u,v,w)$ is even more cumbersome; the numerical values are $(t,u,v,w)\approx (1, 1.8393, 3.383, 6.2223)$.) Therefore, for any $n\ge 1$, $f^n(t,u,v,w) = \lambda^n\cdot (t,u,v,w)\neq 0$. Now, by way of contradiction, assume that there exists $n\ge 1$ such that $f^n(a,b,c,d) = 0$ for any integers $a,b,c,d$. Then, obviously, $f^n(a,b,c,d) = 0$ for any rational $a,b,c,d$. In view of continuity, $f^n(a,b,c,d) = 0$ for any real $a,b,c,d$, which contradicts the previous paragraph. This proof also gives a way to construct integer quadruples leading to arbitrarily long iteration sequences: one just needs to multiply $(t,u,v,w)$ by a large number and take integer parts. Say, the approximate values $(10000,18393,33830,62223)$ lead to a $24$-step iteration sequence. The application of this method to triples (perhaps unsurprisingly) leads to a funny example of long iteration triples: $(F_{n-2},F_{n-1},F_n)$, where $F_n$ is the $n$th Fibonacci number.
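For experimenting with this (my addition): from the transitions used above, $f$ appears to be the map of cyclic absolute differences (the Ducci map); that identification is an assumption on my part, since the question statement is not reproduced here.

```python
# Assumption: f is the Ducci map of cyclic absolute differences, inferred
# from the transitions used above, e.g. (a,0,a,0) -> (a,a,a,a) -> (0,0,0,0).
def f(v):
    return tuple(abs(v[i] - v[(i + 1) % len(v)]) for i in range(len(v)))

def steps_to_zero(v, cap=10_000):
    n = 0
    while any(v) and n < cap:
        v, n = f(v), n + 1
    return n

print(steps_to_zero((10000, 18393, 33830, 62223)))  # the answer claims 24 steps
```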
Partial differential equation with two variables
You differentiate the first equation w.r.t. $x$, writing the derivative of each term: $$\frac{\partial^2 u}{\partial t\partial x}+g\frac{\partial^2 h}{\partial x^2}=0 \quad (I)$$ Now you differentiate the second equation w.r.t. $t$: $$\frac{\partial^2h}{\partial t^2}+H\frac{\partial^2 u}{\partial x \partial t}=0\quad (II)$$ Now you can solve $(I)$ for $\frac{\partial^2 u}{\partial t\partial x}$ and insert the corresponding term into $(II)$.
Hilbert vs Inner Product Space
If you have a vector space $X$ with an inner product $\langle \cdot, \cdot \rangle$, this defines a norm $\|\cdot\|$ by $\|x\|=\sqrt{\langle x, x\rangle}$ (it is a good exercise to prove that this is in fact a norm). Similarly, this defines a metric, $d(x,y)=\|x-y\|$ (it is again a good exercise to prove that this is in fact a metric). This is the case for any inner product space, so yes, an inner product always defines a metric. However, not every metric is defined by an inner product! A sequence of elements $\{x_n\}$ in $X$ is called a Cauchy sequence if $\|x_n-x_m\|\to0$ as $n,m\to\infty$. An inner product space $X$ is called a Hilbert space if it is a complete metric space, i.e. if $\{x_n\}$ is a Cauchy sequence in $X$, then there exists $x\in X$ with $\|x-x_n\|\to0$ as $n\to\infty$.
The smallest positive period of the function $\sin{(k\cos{x})}$
Let $T$ be the minimum period of $f$. Since the period of $\sin(x)$ is $2\pi$, it must be true that $$k\cos(x+T)=k\cos(x)+2\pi n\\ \implies \cos(x+T)=\cos(x)+\frac{2\pi n}{k}$$ for some $n \in \Bbb{Z}$. The range of $\cos(x+T)$ is $[-1,1]$. The range of $\cos(x)+\frac{2\pi n}{k}$ is $\big[-1+\frac{2\pi n}{k}, 1+\frac{2\pi n}{k}\big]$. These can only be equal if $n=0$. Hence, $$\cos(x+T)=\cos(x)$$ But we know the minimum period of $\cos(x)$ is $2\pi$, so $T=2\pi$.
Expanding the series ...
$$x_{2n}-x_n=\sum^{2n}_{k=1}\frac{1}{k}-\sum^{n}_{k=1}\frac{1}{k}=\left(1+\frac{1}{2}+\cdots+\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n}\right)-\left(1+\frac{1}{2}+\cdots+\frac{1}{n}\right)=\frac{1}{n+1}+\cdots+\frac{1}{2n}$$
Showing $\prod\limits_{i<j} \frac{x_i-x_j}{i-j}$ is an integer
If $x_1$, $\ldots$, $x_n$ are integers, then $\prod_{1\le i<j\le n} \frac{x_i-x_j}{i-j}$ is integral because it is the determinant of the integral matrix $$\left(\binom{x_i}{j-1}\right)_{i,j=1,\ldots,n}.$$ You can see this by starting with the formula for the determinant of the Vandermonde matrix, $$\det\left( (x_i^{j-1})_{i,j=1,\ldots,n}\right) = \prod_{1\le i<j\le n} (x_j-x_i);$$ then, divide column $j$ of the Vandermonde matrix by $(j-1)!$ to get the matrix $$\left(\frac{x_i^{j-1}}{(j-1)!}\right)_{i,j=1,\ldots,n}$$ given by Eric Naslund above. Its determinant equals $\prod_{1\le i<j\le n} \frac{x_j-x_i}{j-i}=\prod_{1\le i<j\le n} \frac{x_i-x_j}{i-j}$. Finally, taking $j=n$, $n-1$, $\ldots$, $1$ in succession, add in rational multiples of columns $1$, $\ldots$, $j-1$ to column $j$ to change each $\frac{x_i^{j-1}}{(j-1)!}$ to $\binom{x_i}{j-1}$; this does not change the determinant of the matrix.
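A small exact check of the identity (my addition, using a generalized binomial coefficient so that negative integers are allowed):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial, prod

def binom(x, k):
    # Generalized binomial x(x-1)...(x-k+1)/k!; an integer for integer x,
    # including negative x
    return prod(x - i for i in range(k)) // factorial(k)

def det(M):
    # Leibniz formula; exact over the integers and fine for small n
    n, total = len(M), 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += sign * prod(M[i][p[i]] for i in range(n))
    return total

x = [3, -1, 7, 12]  # arbitrary distinct integers
n = len(x)
M = [[binom(x[i], j) for j in range(n)] for i in range(n)]
target = prod(Fraction(x[i] - x[j], i - j) for i in range(n) for j in range(i + 1, n))
assert det(M) == target
print(det(M), "=", target)
```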
Another ring of integers in a cubic extension
Let $\beta=\frac12(\alpha+\alpha^2)$. Then, since $\alpha^3=\alpha+4$, $$\beta^2=\frac{\alpha^2+2\alpha^3+\alpha^4}4 =\frac{\alpha^2+(2+\alpha)(4+\alpha)}4 =2+\frac{3\alpha+\alpha^2}2=2+\alpha+\beta$$ so $$\alpha=\beta^2-\beta-2.$$
Proof that every closed subset of $\mathbb R$ is finite or countable or continuum.
Let $C$ be the set. First, without loss of generality, assume $C$ is nowhere dense and $C\subseteq [0,1]$. Show that if $C$ is uncountable, there's always an open interval outside of $C$ which divides $C$ into 2 uncountable halves. If you repeat this process in both halves and so on, compactness implies there's a point in every possible branching, so there are continuum-many of them.
Calculate $p_*(\pi_1(\tilde{X},e_i))$
EDIT: Full disclosure! The first definition I gave for regular covering spaces is incorrect. (I've starred the incorrect portion.) Although, the covering space that we've constructed using a covering space action is regular. My bad! A few things to note: ****Incorrect portion: A covering space is regular if and only if its deck transformation group is normal. (There are other equivalent definitions. You should see Hatcher for more details.)**** The correct portion: A covering space is regular if and only if the action of the deck transformation group is transitive. You're trying to construct a two-fold cover. Naturally, the cardinality of the deck transformation group of your covering space should be two. Which group has cardinality two? I like the first picture a lot! You're on the right track. Consider embedding that triple-8 space into the $x$-$y$ plane of $\mathbb{R}^3$ in the following way: note that the embedded space has 180-degree rotational symmetry about the $z$-axis. Thus, $\mathbb{Z}_2$ acts on the triple-8 space by a rotation angle of 180 degrees about this axis. That is, the action of $\mathbb{Z}_2$ identifies two points on the triple-8 space if and only if they're related by a 180-degree rotation. In fact, this action of a group on a topological space describes a covering space action. See Hatcher again to fill in the details. The fundamental group of the covering space given by this covering space action is precisely $2\mathbb{Z} \star \mathbb{Z}$---just as you calculated. You just need to argue that the covering space that you and I have constructed is a covering space by chasing the definitions in Hatcher (i.e. look up covering space action).
Prove a formula of the Fibonacci sequence with induction
$$F_{k} = \frac{\phi^k - \psi^k}{\sqrt{5}}$$ $$F_{k-1} + F_{k-2} = \frac{\phi^{k-1} - \psi^{k-1}}{\sqrt{5}} + \frac{\phi^{k-2} - \psi ^{k-2}}{\sqrt{5}}$$ $$= \frac{1}{\sqrt{5}} \left(\phi^{k-2} + \phi^{k-1} - \psi ^{k-2} - \psi^{k-1}\right)$$ From here see that $$\phi^{k-2} + \phi^{k-1} = \phi^{k-2}(\phi + 1) = \phi^{k-2}\left(\frac{3+\sqrt{5}}{2}\right)$$ $$ = \phi^{k-2}\left(\frac{6+2\sqrt{5}}{4}\right) = \phi^{k-2}\left(\frac{1+2\sqrt{5}+5}{4}\right) = \phi^{k-2}\left(\frac{1+\sqrt{5}}{2}\right)^2 = \phi^{k-2}\phi^2 = \phi^k$$ Similarly $$\psi^{k-2} + \psi^{k-1} = \psi^{k-2}(\psi + 1) = \psi^{k-2}\left(\frac{3-\sqrt{5}}{2}\right) $$ $$ = \psi^{k-2}\left(\frac{6-2\sqrt{5}}{4}\right) = \psi^{k-2}\left(\frac{1-2\sqrt{5}+5}{4}\right) = \psi^{k-2}\left(\frac{1-\sqrt{5}}{2}\right)^2 = \psi^{k-2}\psi^2 = \psi^k$$ Therefore, we get that $$F_{k-1} + F_{k-2} = \frac{\phi^k - \psi^k}{\sqrt{5}}$$
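A numerical check of Binet's formula (my addition):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

F = [0, 1]
for _ in range(28):
    F.append(F[-1] + F[-2])

for k in range(30):
    assert abs((phi ** k - psi ** k) / sqrt(5) - F[k]) < 1e-6, k
print("Binet's formula matches F_0..F_29")
```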
Equivalence of two harmonic problems on different domains
Your approach to the solution of the problem by using the maximum principle for the Laplace operator is correct. However, since the maximum principle for the Laplace operator is a strong maximum principle, it can also be used to prove directly that, if $M=\max_{B_R(0)\setminus B_1(0)}w$ and $m=\min_{B_R(0)\setminus B_1(0)}w$, then $M=m=0$. Let's see how. Maximum principle for the Laplace operator ([1], theorem 2, §2.1 p. 53). Let $$ \Delta u\ge 0\text{ in }D. $$ If $u$ attains its maximum $M$ at any point of $D$, then $u\equiv M$ in $D$. Note that $D$ is a connected domain (i.e. a connected open set) in $\Bbb R^n$, not necessarily bounded nor simply connected (it can have holes, but every two points in it can be joined by a continuous path), with also no requirements on the boundary $\partial D$. As Protter and Weinberger note ([1], §2.1 p. 54), the maximum principle implies a minimum principle, just by considering $-u$ instead of $u$: let $$ \Delta u\le 0\text{ in }D. $$ If $u$ attains its minimum $m$ at any point of $D$, then $u\equiv m$ in $D$. Now, since $\Delta u=0$ implies $\Delta w\ge 0$ and $\Delta w\le 0$, if $w$ has a maximum $M$ in $B_R(0)\setminus B_1(0)$ then $w=M$ on the whole of $B_R(0)\setminus B_1(0)$, and since $w=0$ on $\partial B_R(0)\cup\partial B_1(0)$ this implies $M=0$; the same happens if we assume that $w$ has a minimum, giving $m=0$. Thus $w$ is constant and equal to zero throughout the whole closed domain $\big(B_R(0)\setminus B_1(0)\big)\cup \big(\partial B_R(0)\cup\partial B_1(0)\big)\iff u=v$ on the same closed domain. Final notes Saying that the solutions of a given equation satisfy a "strong maximum principle" means that if one of them reaches its maximum value at a point of the interior of its domain of definition, it is actually constant throughout the domain. Otherwise, when the solutions of a given equation reach their maximum value on the boundary of their domain of definition, it is said that they satisfy a weak maximum principle, since this leaves open the possibility that the same maximum value could be reached at an interior point. The solutions of Laplace's equation satisfy a strong maximum principle, and this stronger statement implies $w=u-v=0$ in our case. The solution of your problem by using the maximum principle is possibly the "right" one, because it works for every connected domain $D$ and every sufficiently regular boundary value $u|_{\partial D}$.
However, in this particular case, due to the spherical symmetry of the domain $B_R(0)\setminus B_1(0)$, we can solve the boundary value problem for $w$ directly by using the expression of $\Delta w$ in spherical coordinates (written below for $n=3$ for the sake of simplicity): $$ \begin{split} \Delta w &= \frac{1}{r^2} \frac{\partial}{\partial r} \left(r^2 \frac{\partial w}{\partial r} \right) + \frac{1}{r^2 \sin \theta} \frac{\partial}{\partial \theta} \left(\sin \theta \frac{\partial w}{\partial \theta} \right) + \frac{1}{r^2 \sin^2 \theta} \frac{\partial^2 w}{\partial \varphi^2} \\ &= \frac{1}{r} \frac{\partial^2}{\partial r^2} (rw) + \frac{1}{r^2 \sin \theta} \frac{\partial}{\partial \theta} \left(\sin \theta \frac{\partial w}{\partial \theta} \right) + \frac{1}{r^2 \sin^2 \theta} \frac{\partial^2 w}{\partial \varphi^2} \end{split} $$ Now, since our boundary values have spherical symmetry, we can assume that all the derivatives of $w$ with respect to the angle variables vanish, and thus Laplace's equation reduces to the following ordinary differential equation in the radial variable $r$: $$ \begin{split} \Delta w=\frac{1}{r} \frac{\partial^2}{\partial r^2} (rw)=0&\iff\frac{\partial^2}{\partial r^2} (rw)=0\\ &\iff \frac{\partial}{\partial r} (rw)=b,\quad b=\mathrm{const.}\\ & \iff rw = a+br,\quad a =\mathrm{const.}\\ & \iff w =\frac{a}{r} +b \end{split} $$ and the given boundary values for $w$ imply $a=b=0$ and thus $w=0$. Reference [1] Protter, Murray H.; Weinberger, Hans F., Maximum principles in differential equations, Corrected reprint, New York-Berlin-Heidelberg-Tokyo: Springer-Verlag, pp. X+261 (1984), MR0762825, Zbl 0549.35002.
If each $a_n >0$ and $\sum a_n$ diverges, prove that $\sum a_n x^n \to +\infty$ as $x\to1^-$.
Let $f_n(x)=\sum_{k=1}^n a_nx^n$. Since this is continuous, $\forall\epsilon&gt;0\,\exists\delta&lt;1$ such that $\delta&lt;x&lt;1$ implies $$\sum_{k=1}^{n} a_k-\sum_{k=1}^{n} a_kx^k&lt;\epsilon$$ Let $\epsilon=\sum a_k-M$. Choose $N$, such that $\sum_{k=1}^{N} a_k&gt;M$. Now, $$\sum_{k=1}^{N} a_kx^k&gt;\sum_{k=1}^{N}a_k-\epsilon=M.$$
Find the units digit in the number $7^{9999}$.
It's not cancelling out. If $(a,n)=1$, then $a^{\phi(n)}\equiv1\pmod n$ by Euler's theorem. Now if $b$ is any integer, $$a^{b\cdot\phi(n)+c}=\left(a^{\phi(n)}\right)^b\cdot a^c\equiv1^b\cdot a^c\equiv a^c\pmod n$$
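In Python this is a one-liner check (my addition):

```python
# Units digit of 7^9999: phi(10) = 4 and 9999 = 4*2499 + 3, so it equals
# the units digit of 7^3 = 343.
print(pow(7, 9999, 10))   # 3 (fast modular exponentiation)
print(7 ** 9999 % 10)     # 3 (brute force, same answer)
```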
Perturbation of initial conditions that preserve constants of motion
My first thought is that transformations from the non-relativistic Poincaré group, the Galilean transformations, conserve $H$, $P$, and $L$.
Why is this version of Bayes theorem correct?
We have that $\Pr (A \vert E) \Pr (E) = \Pr (E \vert A) \Pr (A) = \Pr (A \cap E)$. Let $E = B \cap C$, so that $$\Pr (A \vert B\cap C) \Pr (B\cap C) = \Pr (B\cap C \vert A) \Pr (A) = \Pr (A \cap B \cap C)$$ The left and right expressions give us that $$ \Pr(A \vert B \cap C) = \frac{\Pr (A \cap B \cap C)}{ \Pr(B \cap C)} = \frac{\Pr(A\cap B \vert C) \Pr (C)}{\Pr (B \vert C) \Pr (C)} = \frac{\Pr(A \cap B \vert C)}{\Pr(B \vert C)} $$ where the second equality follows from the definition of conditional probability, similar to the second displayed equation above.
simplify summation with binomial coefficients
Alright, some of the comments have pointed out that it seems pretty difficult to find a closed form, but we can simplify the sum down to something much nicer to handle. We have $$\sum ^n _{k=0} \left(n-k\right)!\binom{n}{k}^2.$$ Multiplying and dividing by $k!$ and $n!$ we get $$n! \sum^n _{k=0} \frac{1}{k!} \cdot \frac{\left(n-k\right)! \cdot k!}{n!} \cdot\binom{n}{k}^2.$$ Since $\frac{(n-k)!\,k!}{n!}=\binom{n}{k}^{-1}$, this is equivalent to $$n! \sum^n _{k=0} \frac{\binom{n}{k}^2}{\binom{n}{k}\,k!},$$ which is $$n! \sum^n _{k=0} \frac{\binom n k}{k!}.$$
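Since no closed form is claimed, an exact check of the simplification for small $n$ may be reassuring (a sketch I added, using exact rational arithmetic):

    from fractions import Fraction
    from math import comb, factorial

    for n in range(8):
        lhs = sum(factorial(n - k) * comb(n, k)**2 for k in range(n + 1))
        rhs = factorial(n) * sum(Fraction(comb(n, k), factorial(k))
                                 for k in range(n + 1))
        assert lhs == rhs, (n, lhs, rhs)
    print("identity verified for n < 8")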
Condition for $n-1$ derivatives of a polynomial to be greater than $0$
If you ask the question at $x=0$, the answer will be "all coefficients must be positive" (this is easy to show by setting $x=0$ in all the derivatives, since $P^{(k)}(0)=k!\,p_k$). If you ask it at an arbitrary point $a$, this translates to "all coefficients of $P(x+a)$ must be positive". But the computation of the coefficients of $P(x+a)$ is tedious: $$\sum_{i=0}^n p_i(x+a)^i=\sum_{i=0}^n p_i\sum_{j=0}^i\binom ija^{i-j}x^j.$$
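The tedious coefficient computation is easy to delegate to a computer algebra system; here is a hypothetical sympy sketch with a concrete sample polynomial of my choosing:

    import sympy as sp

    x, a = sp.symbols('x a')
    P = 5*x**3 + 2*x**2 + x + 7          # an arbitrary sample polynomial
    coeffs = sp.Poly(sp.expand(P.subs(x, x + a)), x).all_coeffs()
    print(coeffs)   # coefficients of P(x + a) in a, highest degree first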
Existence of a "global" sign function associated to the cycles of a simple undirected graph
Consider the graph

    a ------8------ b
    | \             |
    |  \            |
    |   \           1
    |    \          |
    |     \         |
    8      8        c
    |       \       |
    |        \      |
    |         \     1
    |          \    |
    |           \   |
    d --1-- e --1-- f

It has three cycles, each of which admits a $\sigma$ (the two $8$-edges in each cycle must have different signs). However, there is no global $\xi$ function, because a global choice would force at least two of the three $8$-edges to have the same sign.
Little task about matrix I don't quite understand
Always remember that the only matrices that commute with all other matrices are the multiples of the identity. Here $A$ satisfies this condition, so $A \cdot B = B \cdot A$ for every $B \in \mathbb R^{2\times 2}$. Also, in general, matrix multiplication is commutative when both matrices are diagonal and of the same size. Hope it helps.
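A quick numeric demonstration (my own, with numpy) that a multiple of the identity commutes with an arbitrary matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    A = 3 * np.eye(2)                 # a multiple of the identity
    B = rng.random((2, 2))            # an arbitrary 2x2 matrix
    print(np.allclose(A @ B, B @ A))  # True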
Multivariable Calculus: Manifolds
You should prove that $0$ is a regular value of the function $\phi: \mathbb{R}^2 \to \mathbb{R}$; in practice that means that the matrix $$\begin{bmatrix} y^3 + x^3 & 3xy^2 + y^3 \end{bmatrix}$$ is not the zero matrix whenever $\phi(x,y)=0$. $\textbf{Edit:}$ (How to prove that zero is a regular value of the function $\phi$.) Suppose that for some $(x,y)$: $$\begin{bmatrix} y^3 + x^3 & 3xy^2 + y^3 \end{bmatrix}=\begin{bmatrix}0 & 0\end{bmatrix}$$ Then $x=-y$ (by the first coordinate). Next let's look at the second coordinate: if $x=-y$ then $3xy^2+y^3=-3y^3+y^3=-2y^3=0$, so $x=y=0$. Hence $(x,y)=(0,0)$. But we check that if $(x,y)=(0,0)$ then $\phi(x,y) \neq 0$. This way you show that $M$ is a submanifold of $\mathbb{R}^2$, and every submanifold is a manifold. The dimension of $M$ is equal to $\dim \mathbb{R}^2 -\dim \text{Im}\,D\phi=2-1=1$.
Use rational zero theorem to find real zeros of $2x^3-3x^2-x+1$
Hint: By the rational root theorem, $1/2$ is a root of the cubic, and so $2x-1$ is a factor of the cubic. Applying the division algorithm, we get $$2x^3-3x^2-x+1=(2x-1)(x^2-x-1)=0.$$ Now can you solve the quadratic $x^2-x-1=0$?
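For completeness, the factorization and the roots of the remaining quadratic can be checked with sympy (a sketch I added):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.factor(2*x**3 - 3*x**2 - x + 1))  # (2*x - 1)*(x**2 - x - 1)
    print(sp.solve(x**2 - x - 1, x))           # [1/2 - sqrt(5)/2, 1/2 + sqrt(5)/2]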
Are continuous functions on a closed interval nowhere infinite?
A continuous real function on a closed finite interval is bounded there and achieves its maximal and minimal values within the interval. These are the well-known Weierstrass Theorems I and II. I suppose and hope the above answers your question...
If $\mathcal O_P(C)$ is a DVR, then $P$ is non-singular
I thought about splitting this up into a bunch of algebra exercises but decided against it. Let's just plow through it. Let's just assume that $P = (0, 0)$, corresponding to the maximal ideal $\mathfrak{m} = (x, y) \subseteq A = k[x, y]$. $C$ is cut out of $\mathbb{A}^2$ by some polynomial $f \in A$ and of course $f \in \mathfrak{m}$ if $P$ lies on $C$. The local ring $\mathcal{O}_{C, P}$ is obtained from $\mathcal{O}_{\mathbb{A}^2, P}$ by modding out by $f$. Now, $\mathfrak{m}_{C, P} = \mathfrak{m}_{\mathbb{A}^2, P}/(f)$ and $\mathfrak{m}_{C, P}^2 = (\mathfrak{m}_{\mathbb{A}^2, P}^2 + (f))/(f)$ and hence $\mathfrak{m}_{C, P}/\mathfrak{m}_{C, P}^2 \simeq \mathfrak{m}_{\mathbb{A}^2, P}/(\mathfrak{m}_{\mathbb{A}^2, P}^2 + (f))$. This is a really important point: to get the cotangent space to $C$ at $P$ you take the cotangent space to $\mathbb{A}^2$ at $P$ and quotient out by (the residue class of) the defining equation for $C$. All that (introductory, granted) algebra to make an intuitive point. One more bit of algebra: I can identify $\mathfrak{m}_{\mathbb{A}^2,P}/\mathfrak{m}_{\mathbb{A}^2,P}^2$ with $\mathfrak{m}/\mathfrak{m}^2$. The former is the latter localized at $\mathfrak{m}$, but $A/\mathfrak{m}^2$ is already a local ring, so there's no need to localize. Now, you can write $f$ as $$ f(x, y) = ax + by + (\text{higher order terms}) $$ and here, really, $a = (\partial f/\partial x)(P)$ and $b = (\partial f/\partial y)(P)$. The residue class of $f$ mod $\mathfrak{m}^2$ is thus $a\bar{x} + b\bar{y}$. Here $\bar{x}$ and $\bar{y}$ form a basis for $\mathfrak{m}/\mathfrak{m}^2$. If the quotient is going to be one-dimensional then one of $a, b$ has to be nonzero.
Fundamental points of Cremona plane transformation
Let $U = \mathbb{P}^2 \setminus \{P_0, P_1,P_2\}$ where the $P_i$ are the points. We try to extend $\varphi$ past $U$. For points $p \in V_p(x_1)$ we have $$\varphi(p) = \varphi([p_0, 0, p_2]) = [0, p_0 p_2, 0] = [0,1,0].$$ For $p \in V_p(x_2)$ we see that the rational map gives $\varphi(p) = [0,0,1]$. Define $r_\lambda = [1, 0, \lambda]$ and $s_\lambda = [1, \lambda, 0]$. Then $$\lim_{\lambda \to 0} \varphi(r_\lambda) = [0,1,0] \text{ and } \lim_{\lambda \to 0} \varphi(s_\lambda) = [0,0,1].$$ But $\lim_{\lambda \to 0} r_\lambda = \lim_{\lambda \to 0} s_\lambda = [1,0,0]$, so if $\varphi$ extended to $P_0$ we would need $\varphi(P_0) = [0,1,0] = [0,0,1]$, which is a contradiction. By symmetry we see that $\varphi$ does not extend to $P_1, P_2$.
Diner Combinations, Each Pair Sits Together Exactly Once
Before we tackle the general $N^2$ problem, let's give a solution of the $4^2 = 16$ diner problem, grouping them into five courses seated at four tables of four diners each. The $16$ diners are arranged in a $4\times4$ square, and each course is displayed as four $4\times4$ grids side by side, one grid per table, with X marking the diners seated at that table:

First course:

    X X X X   O O O O   O O O O   O O O O
    O O O O   X X X X   O O O O   O O O O
    O O O O   O O O O   X X X X   O O O O
    O O O O   O O O O   O O O O   X X X X

Second course:

    X O O O   O X O O   O O X O   O O O X
    X O O O   O X O O   O O X O   O O O X
    X O O O   O X O O   O O X O   O O O X
    X O O O   O X O O   O O X O   O O O X

Third course:

    X O O O   O O X O   O X O O   O O O X
    O X O O   O O O X   X O O O   O O X O
    O O O X   O X O O   O O X O   X O O O
    O O X O   X O O O   O O O X   O X O O

Fourth course:

    X O O O   O O O X   O X O O   O O X O
    O O X O   O X O O   O O O X   X O O O
    O X O O   O O X O   X O O O   O O O X
    O O O X   X O O O   O O X O   O X O O

Fifth course:

    X O O O   O X O O   O O X O   O O O X
    O O O X   O O X O   O X O O   X O O O
    O O X O   O O O X   X O O O   O X O O
    O X O O   X O O O   O O O X   O O X O

The arrangement of $N^2$ diners at $N$ tables through $N+1$ courses of a meal amounts to what in incidence geometry is called a finite affine plane. An affine plane is a system of points and lines such that:

Any two distinct points lie on a unique line.
Each line has at least two points.
Given a line and a point, there is a unique parallel line containing the point.
There exist three non-collinear points (points not all on the same line).

NB: By parallel lines we mean either disjoint or equal lines. The diners are our points, and the $N$-sets of diners served at separate tables during one course are our parallel lines: each diner is at exactly one of the $N$ tables during a course. If a line of an affine plane contains $n$ points, we say it is a finite affine plane of order $n$. The following deductions can be made:

All lines contain $n$ points.
Every point is contained in $n+1$ lines.
There are $n^2$ points in all.
There are a total of $n^2 + n$ lines in all.

In all the known examples of finite affine planes, $n$ is either a prime or a prime power. A finite affine plane of order $n$ exists if and only if a finite projective plane of order $n$ exists, and thus is equivalent to the existence of $n-1$ mutually orthogonal Latin squares of order $n$. A famous open problem is posed by the conjecture that $n$ not a prime power is impossible. This Prime Power Conjecture remains a topic of active research. The nonexistence of two orthogonal Latin squares of order 6 (Euler/Tarry; see D. Stinson's A Short Proof... for a modern 4 page treatment) implies there is no finite affine plane of that order, and Lam's extensive computer investigations proved there is no finite affine plane of order 10. Many additional nonexistence results ($n=14,21,22,\ldots$) are shown by the Bruck-Ryser-Chowla Theorem. The smallest open case is currently order $n=12$. Linear algebra can be used to construct affine planes or projective planes from finite fields (necessarily of prime power order), and the results are called Galois geometries. It is known that there are affine planes that are not isomorphic to any of these (but so far only for prime power orders). Construction: Let $\mathbb{F}_q$ be the finite field of prime power order $q=p^k$. Partition the Cartesian product $\mathbb{F}_q\times \mathbb{F}_q$ into $q+1$ families of parallel lines, so that every pair of distinct points $(x_1,y_1),(x_2,y_2)$ lies on exactly one line. Each line has $q$ points, so each point will have $q+1 = \frac{q^2-1}{q-1}$ lines through it.
Any $q$ parallel lines must cover the plane, and the required $q+1$ classes of parallel lines are parameterized by their "slope" $s=0,\ldots,q-1,\infty$, where infinite slope means "vertical" lines (first coordinate held constant) and otherwise $s= \frac{y_2-y_1}{x_2-x_1}$. Constructing the finite field of order $q=p^k$ can be done as a quotient ring $\mathbb{Z}_p[X]/f(X)$ where $f(X)\in \mathbb{Z}_p[X]$ is an irreducible polynomial of degree $k$. I had occasion to construct a finite field of order $5^3=125$ by this method in this Answer.
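To make the construction (and the dinner-party interpretation above) concrete, here is a short Python sketch of mine, restricted to prime $q$ so that plain modular arithmetic suffices; it builds the $q+1$ parallel classes of lines in $\mathbb{F}_q\times\mathbb{F}_q$ and checks that every pair of the $q^2$ diners shares a table exactly once:

    from itertools import combinations

    p = 5  # any prime; prime powers would need full GF(q) arithmetic

    points = [(x, y) for x in range(p) for y in range(p)]

    courses = []
    # the class of "vertical" lines: x = const
    courses.append([[(x, y) for y in range(p)] for x in range(p)])
    # for each slope s, the class of lines y = s*x + c (mod p)
    for s in range(p):
        courses.append([[(x, (s*x + c) % p) for x in range(p)]
                        for c in range(p)])

    counts = {pair: 0 for pair in combinations(points, 2)}
    for course in courses:
        for table in course:
            for pair in combinations(sorted(table), 2):
                counts[pair] += 1

    assert all(c == 1 for c in counts.values())
    print(f"{p*p} diners, {len(courses)} courses: every pair met exactly once")

For prime powers one would replace the modular arithmetic with the $\mathbb{Z}_p[X]/f(X)$ construction described above.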
Prove that $\dim \ker(H) = 2^{m} -1 -m$
Basically you need to check that $H$ has full rank $m$. The parity check matrix $H$ determines a mapping $\Bbb{F}^n\to\Bbb{F}^m$. If you know that $H$ has full rank, then that mapping is onto, and the code is its kernel, which, by rank-nullity, has dimension $n-m=2^m-1-m$. Why has $H$ got full rank? You can surely convince yourself that a subset of the columns of $H$ forms an $m\times m$ identity matrix.
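Here is a small self-contained sketch (my addition) that builds the Hamming parity-check matrix whose columns are all nonzero binary vectors of length $m$, and confirms rank $m$ over $\Bbb F_2$, hence kernel dimension $2^m-1-m$:

    def gf2_rank(rows):
        """Rank over GF(2); each row is encoded as an integer bitmask."""
        basis = {}                          # pivot bit -> row with that pivot
        for row in rows:
            cur = row
            while cur:
                pivot = cur.bit_length() - 1
                if pivot in basis:
                    cur ^= basis[pivot]     # eliminate the leading bit
                else:
                    basis[pivot] = cur
                    break
        return len(basis)

    m = 4
    n = 2**m - 1
    # row i of H: bit j-1 is the i-th binary digit of the column index j
    rows = [sum(((j >> i) & 1) << (j - 1) for j in range(1, n + 1))
            for i in range(m)]
    rank = gf2_rank(rows)
    print(rank, n - rank, 2**m - 1 - m)     # m, dim ker H, and the formula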
If $\alpha \in \Omega_{\mathbb Q}^{x^3-2}$ (the splitting field) is such that $\alpha^5 \in \mathbb Q$, then $\alpha \in \mathbb Q$
Let's use your idea about the degrees in a tower of extensions. We'll prove the following thing: If $a$ is a rational number such that the polynomial $X^5 - a$ is reducible over $\mathbb{Q}$, then $a$ is the fifth power of a rational number. Now, we have the following decomposition over $\mathbb{C}$: $$X^5 - a= \prod_{\eta^5 = 1} ( X- \eta \cdot a^{1/5})$$ Assume that $X^5 - a$ is reducible over $\mathbb{Q}$. Then the product of some $k<5$ of the factors on the right will be a polynomial with coefficients in $\mathbb{Q}$. In particular, the free term in that product will be in $\mathbb{Q}$. Taking absolute values we obtain $$(a^{1/5})^k \in \mathbb{Q}$$ and hence $a^{1/5} \in \mathbb{Q}$. Indeed, look at the expansion of $a$ as a product of prime numbers with some integral (positive or negative) exponents. $(a^{k/5})= (a^k)^{1/5} \in \mathbb{Q}$ means that all the exponents of $a^k$ are divisible by $5$, that is, all exponents of $a$, multiplied by $k$, are divisible by $5$. Since $1\le k < 5$ is coprime to $5$, it follows that all the exponents of $a$ are divisible by $5$. This argument works for any prime $p$ instead of $5$. We are done now: the degree of the extension $\mathbb{Q}(\alpha)/ \mathbb{Q}$ is $1$ or $5$, and since it divides $6 = [E : \mathbb{Q}]$ it must be $1$; hence $\alpha \in \mathbb{Q}$.
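The dichotomy in the lemma is easy to observe experimentally; a small sympy sketch (the sample values are mine):

    import sympy as sp

    x = sp.symbols('x')
    for a in (2, 7, 32, 243):   # 32 = 2**5 and 243 = 3**5 are fifth powers
        p = sp.Poly(x**5 - a, x, domain='QQ')
        print(a, p.is_irreducible)   # reducible exactly for the fifth powers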
WolframAlpha and I don't agree on $( xy\sin y )/(3x^2+y^2)$ as $(x,y)\to(0,0)$
Yes, the limit is $0$. You can also prove that using the fact that $$\frac{xy\sin y}{3x^2+y^2}=\frac{xy^2}{3x^2+y^2}\times\frac{\sin y}y,$$ that $$\lim_{(x,y)\to(0,0)}\frac{xy^2}{3x^2+y^2}=0$$ (note that $\left|\frac{xy^2}{3x^2+y^2}\right|\le|x|$, since $3x^2+y^2\ge y^2$), and that $\lim_{y\to0}\frac{\sin y}y=1$.
When does a linear relation in the function parameters imply a linear relation in the derivatives?
Answered by Abel in comments: If $f(x,y)=g(x-by)$ then $\frac{\partial}{\partial x} f(x,y)=g'(x-by)$ and $\frac{\partial}{\partial y} f(x,y)=-bg'(x-by)$, hence the relation $$\frac{\partial}{\partial y} f(x,y)=-b\frac{\partial}{\partial x} f(x,y)$$ holds. Alternative, geometric explanation: the function $f$ is constant on every line of the form $x=by+C$, and the gradient of $f$ is always orthogonal to level curves of $f$.
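Both computations are easy to confirm symbolically; a minimal sympy sketch (my own):

    import sympy as sp

    x, y, b = sp.symbols('x y b')
    g = sp.Function('g')
    f = g(x - b*y)
    # f_y + b f_x should vanish identically
    print(sp.simplify(sp.diff(f, y) + b*sp.diff(f, x)))   # 0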
Is haskell style pattern matching allowed in conventional mathematics and if not, how do you work out the numerator of an arbitrary rational number?
That isn't a well-defined function, unless you specify some further conditions. Since $1/2 = 2/4$, we should have $f(2/4) = f(1/2) = 1$, not $2$. But we can define a function $f$ such that $f(x) = a$, where $x = a/b$ in lowest terms. Because this uniquely specifies what $a$ is, it defines a function. However, I don't think it's possible to express $f$ in terms of $+, -, \times, \div$. Edit: you want $f$ expressed in "mathematical notation". The above is a valid mathematical definition of $f$, but I assume you're looking for a more algorithmic definition. Such a thing will be hard to produce, unless you implement it inside the "rational" object. I would have said impossible earlier today, but I came across a result by Julia Robinson, stating that you can tell whether a rational number is an integer just using $+,\times,<$. That being said, it's wildly impractical to implement it this way. If you're doing a proof, you can simply declare that the function exists, and move on. Julia Robinson, Definability and Decision Problems in Arithmetic. The Journal of Symbolic Logic, Vol. 14, No. 2 (Jun., 1949), pp. 98-114.
Direct sum of completions is faithfully flat
If there is a nonzero element $a\in M$ then there is a maximal ideal $\mathfrak{p}$ containing the annihilator of $a$, and the map $R/\operatorname{ann}(a) \to M$ sending $1$ to $a$ is injective. By flatness, the induced map $R_\mathfrak{p} \otimes R/\operatorname{ann}(a) \to R_\mathfrak{p} \otimes M$ is injective. But $R_\mathfrak{p} \otimes R/\operatorname{ann}(a) = R_\mathfrak{p} / \operatorname{ann}(a)R_\mathfrak{p} \neq 0$, since $\operatorname{ann}(a)R_\mathfrak{p} \subseteq \mathfrak{p}R_\mathfrak{p}$, so $R_\mathfrak{p} \otimes M \neq 0$. Note that this points to a more general fact: to check that a flat module is faithfully flat, it's enough to check that its tensor product with $R/\mathfrak{p}$ is nonzero for all maximal ideals $\mathfrak{p}$.
If $f(n+1)-f(n)=P(n)$, does there exist a polynomial $Q(x)$ such that $Q(n)=f(n)$ for all $n \in \mathbb{Z}$?
We will show that such an $S(x)$ exists; by linearity it suffices to construct one for a single polynomial of each degree $d$. Let $P_d(x)=x(x-1)(x-2)\cdots(x-(d-1))$, so $P_0(x)=1$ and $P_{d+1}(x)=(x-d)P_d(x)$. Then $P_d(x)$ is of degree $d$. Show that $P_{d+1}(x+1)-P_{d+1}(x)=(d+1)P_d(x)$. Thus, letting $S_d(x)=\frac{1}{d+1}P_{d+1}(x)$ we get that $S_d(x+1)-S_d(x)=P_d(x)$. Now, since the $P_d$ are polynomials of degree $d$, we can write any polynomial $P$ as a sum $$P(x)=\sum_{d=0}^D a_dP_d(x)$$ for some real values of $a_d$. Then define $$S(x)=\sum_{d=0}^{D} \frac{a_d}{d+1}P_{d+1}(x)$$ and show $S(x+1)-S(x)=P(x)$.
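The key identity above, and hence the whole construction, can be checked with sympy (a sketch I added):

    import sympy as sp

    x = sp.symbols('x')

    def P(d):
        """Falling factorial x(x-1)...(x-(d-1)), with P(0) = 1."""
        expr = sp.Integer(1)
        for j in range(d):
            expr *= (x - j)
        return expr

    for d in range(5):
        lhs = sp.expand(P(d + 1).subs(x, x + 1) - P(d + 1))
        assert lhs == sp.expand((d + 1) * P(d))
    print("P_{d+1}(x+1) - P_{d+1}(x) = (d+1) P_d(x) holds for d < 5")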
Covariant Contravariant Dot product and Length
The intuition here is as follows: we define the dual basis to "correct for" all the departures from orthonormality of the original basis. So if the angle between two basis vectors in the original basis was acute, the angle in the new basis will be obtuse; if one of the basis vectors was longer in the original basis, it will be shorter in the new basis. Then when we "average out" the two bases by taking the product of the coordinate in one basis with the coordinate in the other basis, all the departures from orthonormality are corrected for, and we get the appropriate length as if we had a single orthonormal basis. Of course, to describe precisely how this "correction" is done and to prove that it works, you have to go through the math, as, e.g., the book Giuseppe Negro pointed you to does. Did you have a particular problem with the proof there? Or were you looking for intuition?
Finding the Moment Generating Function of $X^2$ when $X\sim N(0,1)$
The mgf will only be defined when $t< 1/2$. We need $$\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}e^{-x^2(1/2-t)}\, dx.$$ The exponent can be written as $-(x^2/2)(1-2t)$. Make the change of variable $u=x\sqrt{1-2t}$. Then $dx =\frac{1}{\sqrt{1-2t}}\,du$. Note that as $x$ travels from $-\infty$ to $\infty$, so does $u$. You should end up with something like $$\frac{1}{\sqrt{1-2t}}\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}e^{-u^2/2}\,du.$$ Now we recognize that the integral part is $1$ (it is the total mass of the standard normal density), so $$E\left[e^{tX^2}\right]=\frac{1}{\sqrt{1-2t}},\qquad t<\frac12.$$
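A Monte Carlo sanity check of the resulting formula $E[e^{tX^2}]=(1-2t)^{-1/2}$ (a sketch I added; sample size and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10**6)
    for t in (-1.0, 0.1, 0.3):
        print(t, np.exp(t * x**2).mean(), (1 - 2*t)**-0.5)
    # the two columns agree to a few decimal places for t < 1/2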
How to solve the inequality $\frac {5x+1}{4x-1}\geq1$
Hint: $\frac{5x+1}{4x-1} \geq 1$ is equivalent to $\frac{5x+1}{4x-1} -1 \geq 0$, i.e. $\frac{5x+1-(4x-1)}{4x-1} \geq 0$, which simplifies to $\frac{x+2}{4x-1} \geq 0$. Now compare the signs of the numerator and the denominator.
How to evaluate sequence of operations on an object?
Your first two examples are very different from the third example in more ways than just saying that the third is an arithmetic question while the first two are not. In the first two examples you are asking about conventions regarding notation. Given two functions $f$ and $g$ on a set $X$, you are asking why the composition $f\circ f\circ g$ is denoted $f^2g$ rather than $2f+g$. But again, these are questions of notation. If I use $f^2g$ to denote $f\circ f\circ g$, I am not at all suggesting that there is some kind of multiplication of numbers involved. Similarly, if I were to use $2f+g$ to denote this function instead then, again, this does not mean that addition of numbers is involved. I am simply choosing to denote $f\circ f\circ g$ in a different way. So your question is: Why is the notation $f^2g$ more common than $2f+g$? The answer is that people often use the addition symbol $+$ to denote binary operations that are commutative: $x+y=y+x$ for all objects $x$ and $y$. Since composition of functions is not commutative, people usually don't use the addition symbol in this way. On the flip side, people do use the multiplicative notation for general operations that are not necessarily commutative. So this is why $f^2g$ is more likely to be used than $2f+g$. Now, your third question is not a notation question. It is a mathematical question that is asking about something very different from the first two questions. You are asking why the number of outcomes of flipping 3 coins is 8 and not 6. For one thing, you can count them: HHH HHT HTH HTT THH THT TTH TTT So perhaps the real question is the following. Suppose we have a task that can be broken into two steps. Say there are $m$ ways to do step 1, and $n$ ways to do step $2$. Why is the total number of ways to do the whole task $mn$ and not $m+n$? This question is equivalent to the following: Suppose $|A|=m$ and $|B|=n$. Then why is $|A\times B|=mn$ and not $m+n$? This is an equivalent question because I can think of $A$ as the set of ways to do step $1$ and $B$ as the set of ways to do step 2. So $A\times B$ is the set of ways to do the whole task since I can represent doing the whole task as an ordered pair $(a,b)$ where $a$ comes from $A$ and $b$ comes from $B$. The proof that $|A\times B|=mn$ is not too hard. Write $A\times B=\bigcup_{a\in A}X_a$ where $X_a=\{(a,b):b\in B\}$. If $a\neq a'$ then $X_{a}\cap X_{a'} = \emptyset$. So $|A\times B|=\sum_{a\in A}|X_{a}|$. For any $a\in A$, there is a clear bijection between $X_{a}$ and $B$ in which one sends $(a,b)$ to $b$. So $|X_{a}|=|B|$ for all $a\in A$. So $|A\times B|=\sum_{a\in A}|B|=|A|\cdot |B|=mn$. Your example with coins had three steps instead of two, but you can generalize to any number of steps using induction. In combinatorics, this is called the "multiplication principle". See: https://en.wikipedia.org/wiki/Rule_of_product
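The multiplication principle is also easy to see computationally; a toy check with itertools (my own example):

    from itertools import product

    outcomes = list(product("HT", repeat=3))
    print(len(outcomes))                  # 8 = 2 * 2 * 2, not 2 + 2 + 2

    A, B = range(3), range(4)
    print(len(list(product(A, B))))       # 12 = |A| * |B|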
identity tensor proof
$AI = A$ by the definition of the identity $I$. If $\det$ is a function (which it is), then we have to have $\det (AI) = \det (A)$.
Unique limits in metric spaces
I am not sure what you mean by "the limit of $f$". But we can see that any sequence in a metric space has at most one limit, which hopefully answers your question. If $x_n \rightarrow x$ and $x \ne y$ then by the definition of the metric, $d(x, y) > 0$. Because $x_n \rightarrow x$, there is an $N$ such that for all $k > N$, $d(x_k, x) < \frac{d(x, y)}{2}$. Then by the triangle inequality $d(x, y) \le d(x_k, x) + d(x_k, y)$, we see that $d(x_k, y) > \frac{d(x, y)}{2}$. Thus the sequence $x_n$ doesn't converge to $y$.
mixed permutations and combinations
You’re off on the wrong foot right away with that ${}_6P_6$ for the case in which the team is made up entirely of singles: there is only one such team, and you’re counting $6!=720$. We’re not picking six people and assigning each of them a specific rôle; we’re just picking a group of $6$ people. If we pick them from the singles, we can do it in $\binom66=1$ way. I think that I’d break it down according to how many singles we choose. As we just saw, there is one team consisting of $6$ singles. To form a team with $5$ singles, we can pick the singles in $\binom65=6$ ways and the sixth person in $\binom{10}1=10$ ways for a total of $6\cdot10=60$ teams. To form a team with $4$ singles, we can pick the singles in $\binom64=15$ ways. And since we’re allowed one couple, we can pick any two of the other ten people, so there are $\binom{10}2=45$ ways to pick the other two members of the team. That gives us another $15\cdot45=675$ teams. $3$ singles can be picked in $\binom63=20$ ways, and we can pick any three of the other ten people, so we get another $20\binom{10}3=20\cdot120=2400$ teams. Now it gets a little trickier, since we start running into restrictions on whom we can choose from the couples. $2$ singles can be picked in $\binom62=15$ ways. There are $\binom{10}4=210$ ways to pick $4$ people from the couples, but some of these are forbidden: specifically, we may not pick two couples. Since there are $5$ couples altogether, there are $\binom52=10$ pairs of couples. That’s $10$ sets of four people that we aren’t allowed to choose, leaving $210-10=200$ sets that we are allowed to choose. Thus, we can form $15\cdot200=3000$ teams with $2$ singles. One single can be picked in $6$ ways. There are $\binom{10}5=252$ ways to choose $5$ people from the couples, but here again some are not allowed. Specifically, we have to throw out those groups that consist of two-and-a-half couples. As before, there are $\binom52=10$ ways to pick two couples, and there are then $6$ ways to pick one more person from the remaining three couples. That makes $10\cdot6=60$ forbidden groups, leaving $252-60=192$ allowed ones and giving us another $6\cdot192=1152$ teams. There’s one more case, the teams containing no singles; I’ll let you have a chance to work it out on your own, but feel free to ask if you get stuck.
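The case analysis is small enough to confirm by brute force; here is a sketch of mine, assuming the problem is to choose $6$ people from $6$ singles and $5$ couples with at most one complete couple on the team, as the analysis above indicates:

    from itertools import combinations
    from collections import Counter

    singles = [('s', i) for i in range(6)]
    couples = [[('c', i, j) for j in (0, 1)] for i in range(5)]
    people = singles + [p for pair in couples for p in pair]

    by_singles = Counter()
    for team in combinations(people, 6):
        team_set = set(team)
        complete = sum(1 for pair in couples if set(pair) <= team_set)
        if complete <= 1:
            by_singles[sum(1 for q in team if q[0] == 's')] += 1

    for k in range(6, 0, -1):   # the 0-singles case is left to the reader
        print(k, "singles:", by_singles[k])
    # expected: 1, 60, 675, 2400, 3000, 1152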
Prove the non-existence of integer solutions
Yes, your solution is correct. Here is another method: assume $x$ and $y$ are integers, and write the equation as $3(x+6y) = 1$. The left-hand side is a multiple of $3$, while the right-hand side is not. This leads to the same conclusion as before.
Is there a name for the logical scenario where A does not necessarily imply B, but B implies A?
In context, if $B$ implies $A$ but $A$ does not necessarily imply $B$, we typically say: $B$ implies $A$, but the converse does not hold.