Find Integral Solutions of $m^2+(m+1)^2 = n^4+(n+1)^4$
If $n$ is negative then we can replace $n$ by the non-negative integer $-1-n$, since $n$ and $-1-n$ give the same value of $n^4+(n+1)^4$. So suppose $n$ is non-negative. Similarly we can suppose $m$ is non-negative. Now consider $m=n^2+n$ and $m=n^2+n+1$. The respective values of $m^2+(m+1)^2$ are less than and greater than $n^4+(n+1)^4$, except in the one case $n=0$; since $m^2+(m+1)^2$ is strictly increasing for $m\ge0$, no non-negative $m$ can give equality when $n\ge1$. So all solutions have $n=0$ or $-1-n=0$, i.e. $n\in \{0,-1\}$. Then $m\in \{0,-1\}$ also.
Labelling the edges of a cube with {1, 2, 3,....,12}
If there exists an integer $i$ where $1 \le i \le 12$ such that $4 \mid ( 91 - i)$ then equality of the sums is possible. $91$ is not divisible by $4$ but $92$ is. So perhaps we can rephrase the above in the following manner to make things easier: let $j = i + 1$; if there exists an integer $j$ where $2 \le j \le 13$ such that $4 \mid ( 92 - j)$, then equality of the sums is possible. Clearly $j$ must be a multiple of $4$, so possible candidates are $4, 8, 12$. It then follows that $i$ can be $3, 7, 11$. EDIT: This is in response to the second part of your question where you were asking whether there's a systematic way to construct possible values of $i$.
what is the connected component of $\Bbb{C}^*$
Observe that $\mathbb{C} \setminus \{0\}$ is homeomorphic to $\mathbb{R}^2 \setminus \{0\}$. Now we can easily show that $\mathbb{R}^2 \setminus \{0\}$ is path connected. Pick $x, y \in \mathbb{R}^2 \setminus \{0\}$ with $x \neq y$, and choose $z \in \mathbb{R}^2 \setminus \{0\}$ with $z \neq x$, $z \neq y$, and $z$ not lying on the line through $0$ and $x$ nor on the line through $0$ and $y$ (such a $z$ exists), so that the straight segments from $x$ to $z$ and from $z$ to $y$ avoid the origin. Then define $f : [0, 1] \to \mathbb{R}^2 \setminus \{0\}$ by $f(a) = (1-a)x + az$; then $f$ is the straight line path from $x$ to $z$. Similarly define $g : [0, 1] \to \mathbb{R}^2 \setminus \{0\}$ by $g(a) = (1-a)z + ay$; then $g$ is the straight line path from $z$ to $y$. Now define $h : [0, 1] \to \mathbb{R}^2 \setminus \{0\}$ by $$h(a) = \begin{cases} f(2a) \ \ \ \ \text{if} \ \ a \in [0, \frac{1}{2}]\\ g(2a-1) \ \ \ \ \text{if} \ \ a \in [\frac{1}{2}, 1]\\ \end{cases}$$ Since $f(1) = g(0) = z$, the two pieces agree at $a=\frac12$, so $h$ is continuous by the gluing lemma; moreover $h(0) = f(0) = x$ and $h(1) = g(1) = y$, so $h$ is a path from $x$ to $y$ in $\mathbb{R}^2 \setminus \{0\}$. Thus $\mathbb{R}^2 \setminus \{0\}$ is path-connected and hence connected. So $\mathbb{C} \setminus \{0\}$ must be connected since homeomorphisms preserve connectedness. And thus $\mathbb{C} \setminus \{0\}$ is the only connected component of $\mathbb{C} \setminus \{0\}$.
Question from Putnam '08: Given $F_n(x)$, find $\lim_{n\to\infty}\frac{n!F_n(1)}{\ln(n)}$
I feel like your perception of this problem may be backwards: to my mind, the induction to prove the form of $F_n(x)$ is the 'meat' of the problem, and once you've got that result it's the rest of the problem that's trivial. To answer your specific questions, though: you shouldn't need an epsilon-delta proof for limits on a Putnam; once you have an explicit form for $F_n$ (and note that the first integral is improper so a little justification may help there), you can manipulate the limit quite a bit — as long as you don't do anything improper, you should be fine. In this case, if you didn't know the form of the Harmonic series explicitly (and I would personally take $H_n = \ln n+O(1)$ as well-enough established that it didn't need independent justification, but I wouldn't fault someone for feeling otherwise) then you can use Riemann estimates for $\int_1^n \frac1x dx$ to bound it: just break it up as $\sum_{i=1}^{n-1}\left(\int_i^{i+1}\frac1xdx\right)$ and note that the integral in parentheses is bounded between $\frac1{i+1}$ and $\frac1i$. Summing, this gives $\ln n\leq H_n\leq \ln n+1$, and that's more than enough to give the result: since $F_n(1) = -\frac{H_n}{n!}$ then $-\frac{\ln n+1}{n!}\leq F_n(1)\leq -\frac{\ln n}{n!}$ and so $-\left(1+\frac1{\ln n}\right)\leq \frac{n!F_n(1)}{\ln n}\leq -1$; the squeeze here is trivial, and you don't need L'Hopital's rule at all. In general, rigor in contest problems is to be encouraged, but it should also be the last thing you work on; for an exam like the Putnams where you (almost certainly) won't be able to complete all the problems, putting effort into a new problem is IMHO more likely to bear fruit (and points) than the last few drops of rigor on a problem you've already gotten a result for.
Understanding multi-variable Taylor formula
Denote $V = \Bbb{R}^n$ and $W = \Bbb{R}^m$. If $f:V \to W$ is $k$ times differentiable at a point $a$, then the $k^{th}$ differential at $a$ is a multilinear map $d^kf_a: \underbrace{V \times \cdots \times V}_{k \text{ times}} \to W$. Usually, for convenience, if $\xi \in V$, then by $(\xi)^k$, we mean the $k$-tuple $(\xi, \dots, \xi) \in \underbrace{V \times \cdots \times V}_{k \text{ times}}$. Another thing to note is that $d^kf_a$ is symmetric with respect to all its arguments. More precisely, if $\sigma:\{1, \dots k\} \to \{1, \dots, k\}$ is any bijection (i.e. an element of the symmetric group $S_k$ of permutations of $k$ elements), then for any $\xi_1, \dots \xi_k \in V$ we have \begin{align} d^kf_a(\xi_{\sigma(1)}, \dots, \xi_{\sigma(k)}) = d^kf_a(\xi_1, \dots, \xi_k) \end{align} (this can be seen as the reason for why the mixed partial derivatives of a sufficiently differentiable function are equal). Because of this, we say that $d^kf_a$ is a symmetric, $k$-linear map from $V^k$ into $W$. For your second question, you need to recall some linear algebra. In general, if $V$ and $W$ are real vector spaces (in your particular example, $V = \Bbb{R}^2, W = \Bbb{R}$), and $g: V \times V \to W$ is a bilinear map, and $\xi,\eta \in V$, then to compute the quantity \begin{align} g(\xi,\eta) \in W \end{align} we can do the following: choose a basis $\{e_1, \dots, e_n\}$ for $V$. Then, in terms of this basis, we can "expand" the vectors $\xi$ and $\eta$ \begin{align} \xi = \sum_{i=1}^n \xi_i e_i \qquad \text{and} \qquad \eta = \sum_{i=1}^n \eta_i e_i \end{align} for some $\xi_i, \eta_i \in \Bbb{R}$. So, now using bilinearity of $g$ we can compute things easily: \begin{align} g(\xi,\eta) &= g \left(\sum_{i=1}^n \xi_i e_i, \sum_{j=1}^n \eta_j e_j \right) \\ &= \sum_{i=1}^n \sum_{j=1}^n \xi_i \eta_j \cdot g(e_i, e_j) \\ &= \begin{pmatrix} \xi_1 & \dots & \xi_n \end{pmatrix} \cdot [g] \cdot \begin{pmatrix} \eta_1 \\ \vdots \\ \eta_n \end{pmatrix} \end{align} where $[g]$ is the $n \times n$ matrix whose $ij$ entry is $g(e_i,e_j)$. Hence what this says is that to compute the value of a bilinear map on two vectors, $g(\xi,\eta)$, we can think of it as matrix multiplication: \begin{equation} g(\xi,\eta) = \xi^t \cdot [g] \cdot \eta. \end{equation} Here we think of the vectors $\xi,\eta \in V= \Bbb{R}^n$ as column vectors. Now we have sufficient theory to apply it to your question. Here our bilinear map is $d^2f_a: \Bbb{R}^2 \times \Bbb{R}^2 \to \Bbb{R}$, and $\xi = \begin{pmatrix} x\\y \end{pmatrix} \in \Bbb{R}^2$, and $a = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \in \Bbb{R}^2$, so that $\xi-a = \begin{pmatrix} x\\y-1 \end{pmatrix}$. The second term in the Taylor expansion is (ignoring the $1/2$) \begin{align} d^2f_a(\xi-a, \xi-a) &= (\xi-a)^t \cdot [d^2f_a] \cdot (\xi-a) \\\\ &= (x,y-1) \cdot \begin{pmatrix} \partial_{1,1}f(a) & \partial_{1,2} f(a) \\ \partial_{2,1} f(a) & \partial_{2,2} f(a) \end{pmatrix} \cdot \begin{pmatrix} x \\ y-1 \end{pmatrix} \\\\ &= (x,y-1) \cdot \begin{pmatrix} -1 & -1 \\ -1 & -1 \end{pmatrix} \cdot \begin{pmatrix} x \\ y-1 \end{pmatrix} \end{align} The first equal sign is because of everything I've said above. For the second equal sign, you need to know that $d^2f_a(e_i,e_j) = \partial_{i,j}f(a)$ (the order of $i,j$ doesn't matter because it is a symmetric bilinear map). In words this says, the second differential evaluated on the standard basis vectors gives the corresponding second partial derivatives.
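If it helps to see the matrix picture numerically, here is a minimal sketch (Python/NumPy) that uses only the Hessian entries quoted above, $\partial_{i,j}f(a)=-1$ at $a=(0,1)$; the underlying function $f$ itself is not reproduced here, so the matrix is simply taken as given.

```python
import numpy as np

# Hessian [d^2 f_a] at a = (0, 1), as stated in the answer above
H = np.array([[-1.0, -1.0],
              [-1.0, -1.0]])
a = np.array([0.0, 1.0])

def quadratic_term(x, y):
    """Evaluate (xi - a)^T [d^2 f_a] (xi - a) for xi = (x, y)."""
    v = np.array([x, y]) - a
    return v @ H @ v

# Example: at (x, y) = (0.1, 1.2) this equals -(x + (y - 1))^2 = -0.09
print(quadratic_term(0.1, 1.2))
```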
Find the first four nonzero terms in each of two power series solutions about the origin
Let $y(x)=\sum_{k\geq 0}a_kx^k$, then $$y''(x)=\sum_{k\geq 2}k(k-1)a_kx^{k-2}=\sum_{k\geq 0}(k+2)(k+1)a_{k+2}x^{k}=2a_2+6a_3 x+12a_4 x^2+o(x^2).$$ Moreover $$e^x=\sum_{k\geq 0}\frac{x^k}{k!}=1+x+\frac{x^2}{2}+\frac{x^3}{6}+o(x^3).$$ Plug all these series into $e^xy''(x) + xy(x)$. The coefficients of the resulting series should all be zero. We have that the first few terms of the expansion are: $$e^xy''(x) + xy(x)=2a_2+(6a_3+2a_2+a_0)x+(12a_4+6a_3+a_2+a_1)x^2+o(x^2)$$ which implies that $$\mbox{$a_2=0$, $6a_3+2a_2+a_0=0$, $12a_4+6a_3+a_2+a_1=0$.}$$ The coefficients $a_n$ are not uniquely determined because the differential equation admits infinitely many solutions.
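If you want to double-check the three relations mechanically, here is a small sympy sketch; it truncates the ansatz at $a_4$, which is enough to recover the coefficients of $x^0$, $x^1$ and $x^2$.

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a0:5')  # a0, a1, a2, a3, a4

# Truncated ansatz y = a0 + a1 x + ... + a4 x^4
y = sum(a[k] * x**k for k in range(5))
expr = sp.series(sp.exp(x) * sp.diff(y, x, 2) + x * y, x, 0, 3).removeO()

# The coefficients of x^0, x^1, x^2 must vanish, reproducing the relations above
for k in range(3):
    print(f"coefficient of x^{k}:", sp.expand(expr).coeff(x, k))
```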
Understanding how a differential equation is solved with distributions
The distributional derivative of $h(t)$ is $$h'(t) = \frac{-1}{(RC)^2} e^{-t/RC}Y(t) + \frac{1}{RC}\delta_0 = \frac{-1}{RC}h(t) + \frac{1}{RC}\delta_0.$$ You can check this by noticing that for any smooth function $\phi(t)$ with compact support, $$\int_{-\infty}^\infty h(t)\phi'(t)dt = - \int_{-\infty}^\infty \frac{-1}{(RC)^2}e^{-t/RC}Y(t)\phi(t)dt - \frac{1}{RC}\phi(0).$$ (To compute the integral on the left, note that you can change the bounds to $[0, \infty)$ for free, then integrate by parts. The boundary term at $\infty$ is zero since $\phi$ has compact support, and the boundary term at $0$ contributes $-\frac{1}{RC}\phi(0)$.) Remember that the derivative of a convolution can be put on either factor, so $$RC\,(h(t) \star x(t))' = RC\left(\frac{-1}{(RC)^2} e^{-t/RC}Y(t) + \frac{1}{RC}\delta_0\right) \star x(t),$$ so $$RC y'(t) = RC(h(t) \star x(t))' = -(h(t) \star x(t)) + x(t) = -y(t)+x(t)$$ and $$RC y'(t) + y(t) = -y(t)+x(t)+y(t) = x(t)$$ as desired.
Will the convergence of $\frac{1}{N}\sum_{n=1}^{N}a_n$ imply the convergence of $\sum_{n=1}^{N}\frac{a_n}{n^2}$?
Define $s_0=0, s_n = a_1 + \cdots + a_n, n>0.$ Summing by parts (note Daniel Fischer mentioned this in a comment) gives $$ \sum_{k=1}^{n}\frac{a_k}{k^2}= \sum_{k=1}^{n}\frac{s_k-s_{k-1}}{k^2}= \frac{s_n}{n^2} + \sum_{k=1}^{n-1}s_k\cdot \left ( \frac{1}{k^2}-\frac{1}{(k+1)^2}\right)$$ The first term on the right $\to 0$ (because $\frac{s_n}{n}$ is bounded), so we can ignore it. The remaining sum equals $$\sum_{k=1}^{n-1}s_k\cdot \frac{2k+1}{k^2(k+1)^2}= \sum_{k=1}^{n-1}\frac{s_k}{k}\cdot \frac{2k+1}{k(k+1)^2}.$$ The sequence $\dfrac{s_k}{k}$ is bounded, since by hypothesis it converges. Since $\sum_{k=1}^{\infty}\dfrac{2k+1}{k(k+1)^2} < \infty$, the series $$\sum_{k=1}^{\infty}\frac{s_k}{k}\cdot \frac{2k+1}{k(k+1)^2}$$ converges absolutely. This implies that $\sum_{k=1}^{\infty}\dfrac{a_k}{k^2}$ converges.
What software/website can help me graph and get a diagram of an equation $f(x, y)$
Geogebra 3D is the easiest one; it is free and online. Otherwise you can use Matlab (or Octave), which can do much more than Geogebra, but you have to learn its programming language.
Is double quantifying a variable possible in predicate logic?
Indeed, your intuition is correct about the extra existential quantifier. We have, per your post: $R(x, y)$: “x has read y.” $S(x)$: “x is a student.” Domain for $x$: all people. $\quad$Domain for $y$: all books. $$\forall x \exists y(S(x) \lor \forall y(R(x,y)))\tag{1}$$ Was this exactly how you encountered the problem? If so, are you trying to translate? Or are you trying to express a statement? Assuming that you encountered this, as is, your translation would be correct if there were no $\exists y$ outside the parentheses. However, it may also serve as an example of how the closest quantifier to the quantified variable "overrides" any earlier quantification, in which case you are correct in your translation (with $\forall y$ over-riding $\exists y$ since it is closest to the quantified variable y), so $(1)$ can be expressed by: $$\forall x (S(x) \lor \forall y(R(x,y))) \equiv \forall x \forall y(S(x) \lor R(x,y))\tag{2}$$ So I'd agree that, as is, the statement reads: "Everyone is either a student or has read every book." Note: If the intent is to say (the highly unlikely) "Every student has read every book", we would write: $$\forall x(S(x) \implies \forall y(R(x,y))) \equiv \forall x \forall y(S(x) \implies R(x,y))\tag{3}$$ If the intent is to express (the most likely case) "Every student has read some book", we would write: $$\forall x(S(x) \implies \exists y(R(x,y)))\ \equiv \forall x \exists y(S(x) \implies R(x,y))\tag{4}$$ Finally, we can express the unlikely case: (5)"There is a student who has read every book" or the trivial case (6)"There is a student who has read some book", we could write, respectively $$\exists x(S(x) \land \forall y(R(x,y))) \equiv \exists x \forall y(S(x) \land R(x,y))\tag{5}$$ $$\exists x (S(x) \land \exists y(R(x,y)))\equiv \exists x\exists y(S(x) \land R(x,y))\tag{6}$$ If nothing else, the above demonstrates how the order and placement (scope) of quantifiers and the quantified variables is crucial, as is the choice of quantifier used.
Is a good automatic presentation known for the 'right dodecahedral' honeycomb?
This tiling is the same as the tiling by fundamental domains of the reflection group generated by reflections in the faces of a regular, right-angled dodecahedron. The standard "Coxeter group presentation" has 12 generators and 42 relators, as follows: There are 12 generators, one per face, which represents reflection across that face. There are also 12 relators, one per face, saying that the square of the reflection is the identity. Finally there are 30 relators, one per edge, saying that the two reflections across the two adjacent faces commute with each other (equivalently, the square of the product of those two reflections is the identity). You can then represent each tile uniquely using the ShortLex automatic structure, once you pick an order of the generators: use standard Coxeter group ideas to determine which words represent geodesics; and then represent each tile uniquely by the first element ordered lexicographically. By the way, I'm pretty sure those images were not generated in an ad hoc fashion. Instead, if those images are derived from Geometry Center software then they were indeed generated using ShortLex automatic structures.
Laplace inverse as a double integral
Let $G(s) = \frac{F(s)}{s}$. Then, as you stated (using $y$ as the variable for $g$ and $x$ for $f$), $$\mathcal{L}^{-1}[G(s)](y)= g(y) = \mathcal{L}^{-1}\left[\frac{F(s)}{s}\right](y) = \int^y_0 f(x)\,dx,$$ and applying the same property once more, $$\mathcal{L}^{-1}\left[\frac{G(s)}{s}\right](t) = \mathcal{L}^{-1}\left[\frac{F(s)}{s^2}\right](t) = \int^t_0 g(y)\,dy = \int^t_0\int^y_0 f(x)\,dx\,dy.$$ Thus the property is proved!
limit points of $z^n$, where $z\in\mathbb{C}$,$n\in\mathbb{N}$, and $|z| = 1$.
Let's define $f(\theta)=e^{i \pi \theta}$. When $\theta=\frac{p}{q}$, then $f^{n}(\theta)=f^{n \mod 2q}(\theta)$, so you have finitely many limit points on the unit circle. If $\theta$ is irrational, then you can prove that the set $\{f^n(\theta):n \in \mathbb Z\}$ is dense on the unit circle (e.g. by the usual $\epsilon$ argument, you show that for any $z$ on the unit circle, there exists $N \in \mathbb Z$ such that $|z-f^N(\theta)|< \epsilon$).
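A quick numerical illustration of this dichotomy (only a sanity check, not a proof; the rounding is just a way to compare floating-point points on the circle, and the bounds are arbitrary):

```python
import cmath

def num_distinct(theta, N=20000, digits=3):
    """Count distinct points exp(i*pi*theta*n), n = 0..N-1, up to rounding."""
    pts = set()
    for n in range(N):
        z = cmath.exp(1j * cmath.pi * theta * n)
        pts.add((round(z.real, digits), round(z.imag, digits)))
    return len(pts)

print(num_distinct(1 / 3))       # rational theta = p/q: at most 2q = 6 points
print(num_distinct(2 ** 0.5))    # irrational theta: the count keeps growing with N
```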
Proof of fundamental theorem of integral calculus
This is the third chapter of my "share your knowledge, Q&A style" trilogy: Spectral Theorem, Weak Compactness of the Closed Unit Ball of a Hilbert Space, and now FTIC. I looked around the Web, and all I found were incomplete proofs, in the sense that they assumed results I never knew of, or proved less strong statements. This is why I am posting this. Let me outline the strategy, and then prove every point.

1. Prove the easier direction: the integral function of an $L^1$ function is absolutely continuous; this will come in handy in the next step;
2. Prove the weaker statement that if $f$ is absolutely continuous and a.e. differentiable, then it is the integral of its derivative;
3. Prove an absolutely continuous function has bounded variation;
4. Prove a BV function is the difference of two monotone increasing functions;
5. Prove a version of the Vitali covering theorem;
6. Use step 5 to prove a monotone increasing function is a.e. differentiable;
7. Combine step 6 and step 4 to conclude a BV function is a.e. differentiable, combine this with step 3 to deduce the property for a.c. functions, and use step 2 to conclude the FTIC.

Since this is a huge thing, I will post this in bits.

Step 1: the integral function of an $L^1$ function is absolutely continuous.

If $F(x)=F(a)+\int_a^xf(t)\mathrm{d}t$ with $f\in L^1(a,b)$, then: $$\sum|F(b_i)-F(a_i)|=\sum\left|\int_{a_i}^{b_i}f(t)\mathrm{d}t\right|\leq\sum\int_{a_i}^{b_i}|f(t)|\mathrm{d}t=\int_{\bigcup[a_i,b_i]}|f(t)|\mathrm{d}t.$$ So if we can prove that for all $\epsilon$ there exists $\delta$ such that $m(A)<\delta$ implies $\int_A|f(t)|\mathrm{d}t<\epsilon$, the result follows. That is where the link comes in. If $f$ is bounded, then $|f|\leq M$, hence $\int_A|f|\leq Mm(A)$, hence set $\delta=\frac\epsilon M$. Otherwise, define $|f|_M(t)=\min\{|f|(t),M\}$. By dominated convergence, $\int_a^b(|f|-|f|_M)\to0$ for $M\to\infty$, hence for sufficiently large $M$ we can make it less than $\frac\epsilon2$, and having fixed that $M$ we set $\delta=\frac\epsilon{2M}$, so that: $$\int_A|f|\leq\int_a^b(|f|-|f|_M)+\int_A|f|_M\leq\frac\epsilon2+Mm(A),$$ and $m(A)<\frac\epsilon{2M}$ implies $\int_A|f|\leq\epsilon$. So we are done.

Step 2: an absolutely continuous and almost everywhere differentiable function is an integral function.

For each $n\in\mathbb{N}$ we partition $[a,b]$ into intervals of length $\frac{b-a}{2^n}$ by setting $x_{i,n}=\frac{i}{2^n}(b-a)+a$. Set: $$h_n(x)=\sum_{i=1}^{2^n}\frac{f(x_{i,n})-f(x_{i-1,n})}{x_{i,n}-x_{i-1,n}}\chi_{[x_{i-1,n},x_{i,n})}.$$ On one hand, since $f$ is a.e. differentiable, and the $h_n$ are essentially incremental ratios on thinner and thinner intervals, $h_n\to f'$ a.e. On the other hand, we see that: $$\int_a^bh_n(x)\mathrm{d}x=\sum_{i=1}^{2^n}\int_{x_{i-1,n}}^{x_{i,n}}h_n(x)\mathrm{d}x=\sum_{i=1}^{2^n}[f(x_{i,n})-f(x_{i-1,n})]=f(b)-f(a).$$ So all we have to prove is that the limit passes under the integral. We will prove that in fact the convergence is in $L^1$. Fix $\epsilon>0$. $f$ is a.c., so we find $\delta$ such that $\sum(b_i-a_i)<\delta\implies\sum|f(b_i)-f(a_i)|<\frac\epsilon4$. Since $f'\in L^1$, as shown above, we find $\rho$ such that $m(A)<\rho\implies\int_A|f'|<\frac\epsilon4$. We will later prove the following lemma.

Lemma. For each $\epsilon>0$ there exist $k,n_k\in\mathbb{N}$ such that: $$k\cdot m\left(\left\{x\in I:\sup_{n\geq n_k}|h_n(x)|>k\right\}\right)<\epsilon.$$

We thus choose the $k,n_k$ corresponding to $\min\{\delta,\frac\epsilon4,\rho\}$ in the lemma.
Let us call $A$ the set in the lemma corresponding to those $k,n_k$. What we said above implies: \begin{align*} m(A)<{}&\delta, \\ k\cdot m(A)<{}&\frac\epsilon4, \\ \int_A|f'(x)|dx<\frac\epsilon4. \end{align*} Indeed, by the choice of $k,n_k$ in the lemma we have $k\cdot m(A)<\min\{\delta,\frac\epsilon4,\rho\}$; since $k\geq1$, this gives $m(A)<\delta$ (the first inequality), the second one is immediate, and $m(A)<\rho$ gives the third by the choice of $\rho$. We now remark that: \begin{align*} \int_a^b|h_n(x)-f'(x)|\mathrm{d}x={}&\int_{I\smallsetminus A}|h_n(x)-f'(x)|\mathrm{d}x+\int_A|h_n(x)-f'(x)|\mathrm{d}x< \\ {}<{}&\int_{I\smallsetminus A}|h_n(x)-f'(x)|\mathrm{d}x+\int_A|h_n(x)|\mathrm{d}x+\frac\epsilon4, \end{align*} where the inequality is the triangle inequality plus the third estimate above. By definition of $A$ we have that eventually, i.e. for $n\geq n_k$, $|h_n|\leq k$ for all $x\in I\smallsetminus A$, hence $|h_n-f'|\leq k+|f'|$ on $I\smallsetminus A$ for $n\geq n_k$. Hence, dominated convergence implies that piece tends to zero, so for $n$ big enough it is less than $\frac\epsilon4$. For such a choice of $n$, we have, combining this remark with the inequality above, that: $$\int_a^b|h_n(x)-f'(x)|\mathrm{d}x<\frac\epsilon2+\int_A|h_n(x)|\mathrm{d}x.$$ Now we split $A$ into $B=\{x\in A:|h_n(x)|\leq k\}$ and $C=A\smallsetminus B$, for each fixed $n\geq n_\epsilon$, where $n_\epsilon$ is such that $n\geq n_\epsilon$ implies the above bound on that integral on $I\smallsetminus A$. For the integral over $B$, we have: $$\int_B|h_n(x)|\mathrm{d}x\leq km(B)\leq km(A)<\frac\epsilon4,$$ by the second of the three estimates above. If $C=\varnothing$, certainly the integral on $C$ is bounded by $\frac\epsilon4$. We now use absolute continuity of $f$ to prove this bound actually always holds, and thus conclude this proof. $h_n$ is constant on intervals of the form $[x_{i-1,n},x_{i,n})$, so there exist pairwise distinct indices $i_l$ for $l=1,\dotsc,p$ with $p\leq 2^n$ such that: $$C=\bigcup_{l=1}^p[x_{i_l-1,n},x_{i_l,n}).$$ Using the first inequality of the three above, we get: $$\sum_{l=1}^p(x_{i_l,n}-x_{i_l-1,n})=m(C)\leq m(A)<\delta,$$ and by choice of $\delta$ from absolute continuity of $f$: $$\int_C|h_n(x)|\mathrm{d}x=\sum_{l=1}^p\int_{x_{i_l-1,n}}^{x_{i_l,n}}|h_n(x)|\mathrm{d}x=\sum_{l=1}^p|f(x_{i_l,n})-f(x_{i_l-1,n})|<\frac\epsilon4.$$ So finally our big integral is estimated by $\epsilon$, for any $\epsilon$, hence it tends to 0.

Proof of the lemma. Fix $\epsilon>0$ and choose $\rho>0$ such that $m(E)<\rho\implies\int_E|f'(x)|\mathrm{d}x<\frac\epsilon2$. Let $N\subseteq I$ be such that $h_n\to f'$ pointwise outside $N$. Since $f'\in L^1$, $m(\{x\in I\smallsetminus N:|f'(x)|\geq k\})\to0$ for $k\to\infty$, so we can find $k$ big enough so that $m(\{x\in I\smallsetminus N:|f'(x)|\geq k\})<\rho$. This gives us: $$km(\{x\in I\smallsetminus N:|f'(x)|\geq k\})\leq\int_{\{x\in I\smallsetminus N:|f'(x)|\geq k\}}|f'(x)|\mathrm{d}x<\frac\epsilon2.$$ Let us set: $$E_j=\left\{x\in I\smallsetminus N:\sup_{n\geq j}|h_n(x)|>k\right\},$$ for all $j\in\mathbb{N}$. $m(E_j)$ clearly tends to the measure of $\bigcap_jE_j$. That set is clearly within $\{x\in I\smallsetminus N:|f'(x)|\geq k\}$, so we can find $n_k$ such that: $$m(E_{n_k})\leq m(\{x\in I\smallsetminus N:|f'(x)|\geq k\})+\frac{\epsilon}{2k},$$ so we multiply by $k$, use the inequality before the definition of $E_j$, and deduce $km(E_{n_k})<\epsilon$.

Step 3: an absolutely continuous function is of bounded variation.
By absolute continuity we find $\delta>0$ such that $\sum(b_i-a_i)<\delta\implies\sum|f(b_i)-f(a_i)|<1$. Let $N$ be the least integer such that $N>\frac{b-a}{\delta}$, and let $a_j:=a+j\frac{b-a}{N}$ for $j=0,1,\dotsc,N$. It follows that: $$\bigvee_a^bf=\sum_{j=1}^N\bigvee_{a_{j-1}}^{a_j}f\leq N.$$ Hence, $f$ is BV. This proves the first equality. I originally thought it should be an inequality, and remarked that anyway this works all the same, then I found this link and convinced myself it is an equality.

Step 4: a BV function is the difference of two monotone increasing functions.

I'm lucky in this step since I have LaTeX code (the source is a math SX answer), so I will just copy-paste, with a blockquote.

> Let $f$ be a function of bounded variation. Let $F(x):=\sup \sum_{j=1}^{n-1}|f(x_{j+1})-f(x_j)|=:\operatorname{Var}[a,x]$, where the supremum is taken over the $x_1,\ldots,x_n$ which satisfy $a=x_1<x_2<\ldots<x_n=x$. Since $f$ is of bounded variation, $F$ is bounded, and by definition increasing. Let $G:=F-f$. We have to show that $G$ is bounded and increasing. Boundedness follows from this property for $f$ and $F$; now fix $a\leq x_1<x_2\leq b$. We have $$G(x_2)-G(x_1)=F(x_2)-f(x_2)-F(x_1)+f(x_1)\geq 0$$ because $\operatorname{Var}[a,x_1]+f(x_2)-f(x_1)\leq \operatorname{Var}[a,x_1]+|f(x_2)-f(x_1)|\leq \operatorname{Var}[a,x_2]$.
> If $f$ and $g$ are of bounded variation so is $f-g$. If $f$ is increasing then we have, if $a=x_0<x_1<\ldots<x_n=b$, that $\sum_{j=1}^{n-1}|f(x_{j+1})-f(x_j)|=|f(b)-f(a)|$, so $f$ is of bounded variation. So the difference of two bounded monotonic increasing functions is of bounded variation.

Remark. This proves, in fact, more than the step I need, since it proves BV implies difference of monotone increasing functions, but also that the converse holds, provided the two monotone functions are bounded. Thanks Davide Giraudo for this answer.

Step 5: Vitali Covering Theorem (or some version of it).

Definition. If $E\subseteq\mathbb{R}$, I will call a collection $\Gamma$ of closed intervals in $\mathbb{R}$ a Vitali covering of $E$ if for all $\delta>0$ and all $x\in E$ we can find an interval $I\in\Gamma$ such that $x\in I$ and $\ell(I)<\delta$, where $\ell([a,b])=b-a$.

With that, the precise statement I intend to prove now is the following.

Theorem. Let $E\subseteq\mathbb{R}$ have finite Lebesgue outer measure and let $\Gamma$ be a Vitali covering of $E$. Then, for $\epsilon>0$, we can find a finite disjoint collection $\{I_1,\dotsc,I_N\}$ of intervals in $\Gamma$ such that: $$\lambda^\ast\left(E\smallsetminus\bigcup_{n=1}^NI_n\right)<\epsilon,$$ $\lambda^\ast$ being the Lebesgue outer measure.

To prove this, let $G$ be an open set containing $E$ with finite Lebesgue measure. Since $G$ is open and contains $E$, the intervals of $\Gamma$ contained in $G$ still form a Vitali covering of $E$, so we may assume $G$ contains the union of $\Gamma$. We now choose a sequence $(I_n)_{n\geq1}$ of disjoint intervals of $\Gamma$ recursively. We choose first any $I_1\in\Gamma$. Then, supposing $I_1,\dotsc,I_n$ have been defined, we set $k_n$ to be the supremum of the lengths of those intervals of $\Gamma$ which are disjoint from all the $I_k$: $$k_n:=\sup\{\ell(I):I\in\Gamma,I\cap I_k=\varnothing\,\,\forall k=1,\dotsc,n\}.$$ We choose $I_{n+1}$ from $\Gamma$ such that it is disjoint from the $n$ previously chosen intervals and has length greater than $\frac{k_n}{2}$ (the supremum need not be attained). Since these intervals we have chosen are all disjoint, their union has measure the sum of their lengths (series, in fact, if they are infinite), so that sum/series is finite because the union is contained in $G$.
This implies that $k_n\to0$. Also, since the tails of a convergent series are infinitesimal, we can find $N>0$ such that: $$\sum_{n=N+1}^\infty\ell(I_n)<\frac\epsilon5.$$ So if we can prove that the other intervals leave out at most $\epsilon$ from $E$, we have concluded. For this purpose, we set $J_n:=I_n+2\ell(I_n)[-1,1]$, for all $n\in\mathbb{N}$; note that $\ell(J_n)=5\ell(I_n)$, so the $J_n$ with $n>N$ have total length less than $\epsilon$. If we prove these intervals, from $N+1$ to infinity, cover what the $I_n$'s from 1 to $N$ leave out of $E$, we are done. So let $x\in E\smallsetminus\bigcup_1^NI_n$. $\Gamma$ is a Vitali covering, so we can find $I\in\Gamma$ with $x\in I$ and $I\subseteq G\smallsetminus\bigcup_1^NI_n$. Then $I\cap I_n\neq\varnothing$ for some $n$, otherwise $\ell(I)\leq k_n$ for all $n$, which contradicts that $k_n\to0$. Let $n_0$ be the smallest integer such that $I\cap I_{n_0}\neq\varnothing$. Then $n_0>N$ and $\ell(I)\leq2\ell(I_{n_0})$. It follows that $I\subseteq J_{n_0}$, as desired.

Step 6: a monotone increasing function is almost everywhere differentiable.

Actually, we prove something more than what we need: a sort of FTIC for monotone functions.

Theorem. An increasing real-valued function $f$ on an interval $[a,b]$ is differentiable almost everywhere. Its derivative $f'$ is measurable and: $$\int_a^bf'(x)\mathrm{d}x\leq f(b)-f(a).$$

We set: \begin{align*} D^+f(x)={}&\limsup_{h\to0^+}\frac{f(x+h)-f(x)}{h} & D^-f(x)={}&\limsup_{h\to0^-}\frac{f(x+h)-f(x)}{h} \\ D_+f(x)={}&\liminf_{h\to0^+}\frac{f(x+h)-f(x)}{h} & D_-f(x)={}&\liminf_{h\to0^-}\frac{f(x+h)-f(x)}{h}. \end{align*} So high sign, limsup; low sign, liminf; + sign, $h\to0^+$; - sign, $h\to0^-$. We further set: $$A=\{x\in[a,b]:D^+f(x)>D_-f(x)\} \qquad B=\{x\in[a,b]:D^-f(x)>D_+f(x)\}.$$ For any $x$, we have $D_-f(x)\leq D^-f(x)$ and $D_+f(x)\leq D^+f(x)$. If in addition $x\notin A\cup B$, we have: $$D^+f(x)\leq D_-f(x)\leq D^-f(x)\leq D_+f(x)\leq D^+f(x),$$ implying they are all equal, and hence $f'(x)$ exists. So if we show $A$ and $B$ have measure 0, $f$ is differentiable almost everywhere. We work on $A$, and $B$ is dealt with in much the same way. Set: $$A_{s,t}=\{x\in[a,b]:D^+f(x)>s>t>D_-f(x)\}.$$ Clearly we have: $$A=\bigcup_{\substack{s>t \\ s,t\in\mathbb{Q}}}A_{s,t},$$ and that is a countable union, so if we prove all those sets have measure zero, then we are done. Fix $\epsilon>0$, write $\alpha:=\lambda^\ast(A_{s,t})$, and choose an open set $O\supseteq A_{s,t}$ with $\lambda(O)<\alpha+\epsilon$. By definition of $D_-f(x)$, for all $x\in A_{s,t}$ there exists an arbitrarily small interval $[x-h,x]$ contained in $O$ with $f(x)-f(x-h)<th$. The collection of such intervals is a Vitali covering of $A_{s,t}$. By step 5, we can find disjoint intervals $I_1,\dotsc,I_M$ in finite number such that: $$\lambda^\ast\left(A_{s,t}\smallsetminus\bigcup_{j=1}^MI_j\right)<\epsilon.$$ Let us say $I_j=[x_j-h_j,x_j]$ for all $j=1,\dotsc,M$. Then we have: $$\sum_{j=1}^M[f(x_j)-f(x_j-h_j)]<t\sum_{j=1}^Mh_j\leq t\lambda(O)<t(\alpha+\epsilon).$$ Let: $$G=A_{s,t}\cap\left(\bigcup_{j=1}^M(x_j-h_j,x_j)\right).$$ By definition of $D^+f(x)$, for each $y\in G$ there exists an arbitrarily small interval $[y,y+k]$ contained in some $I_j$ such that $f(y+k)-f(y)>sk$. Again, by step 5 there exists a finite disjoint collection of such intervals $\{J_1,\dotsc,J_K\}$ such that: $$\lambda^\ast\left(G\smallsetminus\bigcup_{i=1}^KJ_i\right)<\epsilon.$$ It follows that: $$\lambda^\ast\left(\bigcup_{i=1}^KJ_i\right)>\lambda^\ast(G)-\epsilon.$$ But $A_{s,t}\smallsetminus G=A_{s,t}\smallsetminus\bigcup_{j=1}^MI_j$ up to the finitely many endpoints of the $I_j$'s, which do not affect outer measure.
Hence: $$\lambda^\ast(A_{s,t})\leq\lambda^\ast(A_{s,t}\smallsetminus G)+\lambda^\ast(G)=\lambda^\ast(G)+\lambda^\ast\left(A_{s,t}\smallsetminus\bigcup_{j=1}^MI_j\right)<\lambda^\ast(G)+\epsilon.$$ Consequently: $$\lambda^\ast\left(\bigcup_{i=1}^KJ_i\right)>\lambda^\ast(G)-\epsilon>\lambda^\ast(A_{s,t})-2\epsilon=\alpha-2\epsilon.$$ Now suppose $J_i=[y_i,y_i+k_i]$ for all $i=1,\dotsc,K$. Each $J_i$ was chosen contained in $I_j$ for some $j$. If we sum over those $i$ for which $J_i\subseteq I_j$, we find: $$\sum_{J_i\subseteq I_j}[f(y_i+k_i)-f(y_i)]\leq f(x_j)-f(x_j-h_j),$$ because $f$ is increasing. Hence: $$s(\alpha-2\epsilon)<s\sum_{i=1}^Kk_i<\sum_{i=1}^K[f(y_i+k_i)-f(y_i)]\leq\sum_{j=1}^M[f(x_j)-f(x_j-h_j)]<t(\alpha+\epsilon).$$ Summing up, for all $\epsilon$ we have: $$s(\alpha-2\epsilon)<t(\alpha+\epsilon),$$ which means $s\alpha\leq t\alpha$. But if $\alpha>0$ we divide and get $s\leq t$, a contradiction by choice of $s,t$. And $\alpha<0$ is not allowed since it is an outer measure. Hence $\alpha=0$, as desired. Now we have that $\frac{f(x+h)-f(x)}{h}$ has a limit for almost every $x$. We define $g(x)$ to be that limit where it exists, and 0 elsewhere. Set $f(x)=f(b)$ for $x>b$ and define: $$g_n(x)=n\left[f\left(x+\frac1n\right)-f(x)\right],$$ for $a\leq x\leq b$. Each $g_n$ is nonnegative since $f$ is increasing, and $g_n$ converges to $g=f'$ almost everywhere. Also: $$\int_a^bg_n(x)\mathrm{d}x=n\left[\int_b^{b+\frac1n}f(x)\mathrm{d}x-\int_a^{a+\frac1n}f(x)\mathrm{d}x\right]\leq f(b)-f(a).$$ By Fatou's lemma: $$\int_a^bf'(x)\mathrm{d}x\leq\liminf_{n\to\infty}\int_a^bg_n(x)\mathrm{d}x\leq f(b)-f(a),$$ which completes our proof.

Step 7: conclusion.

Step 6 tells us a monotone increasing function is almost everywhere differentiable (and a little extra). But step 4 says a BV function $f$ is $g-h$ with $g,h$ monotone increasing. $g$ will be differentiable outside $N_g$, and $h$ outside $N_h$, both zero-measure sets. Hence, their union has measure zero, and outside that union they are both differentiable, which by linearity of the derivative implies $f$ is. So a BV function is a.e.d.. But an a.c. function is BV, hence a.e.d., by step 3. Finally, we can say step 2 does not need the a.e. differentiability hypothesis, which proves the other direction of our statement.

Remarks. My original strategy was with the same first three steps, but then I planned to establish the Simple Vitali Lemma found here on pp. 3-5, taking the definitions of p. 27 (Lebesgue set) and 31 (Regularly shrinking sets) of the same document to plug them into the proof of the theorem at the end (pp. 35-38), to finally prove the monotone case as is done here (Theorem 24, pp. 9-10), and then prove a BV function is the difference of two monotone functions. However, this is rather longer than what I did above, so thanks @Chilango for that reference, it shortened my work (and my post) by a significant amount. The contents of those documents are anyway pretty interesting. As are those of these two: one and two, which probably have much in common with the other two. So these are a couple of extra references for the curious. And finally this post is over. In particular, those two references hide the proof of the Lebesgue differentiation theorem, stating that if $f$ is an integrable (i.e. $L^1$) function, then: $$\lim_{r\to0}\frac{1}{|B_r(x)|}\int_{B_r(x)}f\mathrm{d}\mu=f(x),$$ for a.e. $x$. In particular, for single-variable functions, this means an integral function is a.e. differentiable and its derivative is a.e. equal to the function it is an integral of, if said function is $L^1$.
To establish that:
- a somewhat more sophisticated version of the Vitali theorem above is proved, one that seems much like the Simple Vitali Lemma of my older reference;
- the Hardy-Littlewood theorem is also proved, a theorem giving an estimate related to the "maximal function" of a function;
- lastly, on p. 13 of the first of those two references (all the rest of what I just mentioned is in the second one), the density of $\mathcal{C}_c$ in $L^1$ is proved; this is needed for the differentiation theorem, to approximate with continuous functions, where the maximal function is $0$.
Cokernel in abelian category is epic?
The definition of the cokernel of $f:A \to B$ is the following: a morphism $g : B \to C$ with $g \circ f=0$ such that for any $g^\prime : B \to D$ with $g^\prime \circ f=0$, there exists a $\textbf{unique}$ $\phi : C \to D$ such that $g^\prime= \phi \circ g$. So for any $h:C \to D$, if $h \circ g=0$ then letting $g^\prime=h \circ g: B \to D$, we have $g ^\prime \circ f=0$. Since $g$ is the cokernel of $f$, there is a unique $\phi : C \to D$ such that $\phi \circ g=g^\prime$. But $\phi=0$ and $\phi=h$ are both possible solutions, so they are equal, that is $h=0$. This shows that $g$ is an epimorphism.
Countably additive finite signed measures form a Banach Space.
The argument for $\sigma$-additivity of $\nu$ in the question is wrong. To show that $\nu$ is $\sigma$-additive, I will prove the following: (1) If $(A_n)_{n\in \mathbb N}\subset \mathcal A$ is a sequence such that $A_n\searrow \emptyset$, then $\nu(A_n)\to 0$. Given $\varepsilon >0$ there is a $\mu_{m}$ such that $\|\nu - \mu_{m}\|_\infty<\varepsilon$. Since $\mu_m$ is a measure, $\mu_{m}(A_n) \stackrel{n}{\to} 0$. Hence, there is $n_0\in \mathbb N$ such that: $n\geq n_0 \implies |\mu_m(A_n)| < \varepsilon$. Thus: $n\geq n_0 \implies |\nu(A_n)| \leq |\nu(A_n)-\mu_m(A_n)| + |\mu_m(A_n)| <2\varepsilon$. Since $\nu$ is finitely additive and satisfies (1), $\nu$ is continuous from below: \begin{align*} A_n \nearrow A &\implies A\setminus A_n \searrow \emptyset \\ &\implies \nu(A) - \nu(A_n) \to 0\\ &\implies \nu(A_n) \to \nu(A). \end{align*} Hence $\nu$ is $\sigma$-additive. The convergence $\mu_n \to \nu$ follows from the fact that $(\mu_n)$ is also a Cauchy sequence in the norm $\|\cdot\|_\infty$.
Does $Ax=x$ imply $A^* x=x$, if $A^*$ is the conjugate transpose of $A$?
No, take the matrix $$\begin{pmatrix} 1 & 1\\ 0 & -1\end{pmatrix}$$ which has $x=(1,0)^T$ as an eigenvector with eigenvalue 1. Yet $A^*x=(1,1)^T\neq x$.
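A two-line numerical check of this counterexample (NumPy):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, -1.0]])
x = np.array([1.0, 0.0])

print(A @ x)            # [1. 0.]  -> A x = x
print(A.conj().T @ x)   # [1. 1.]  -> A* x != x
```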
Calculating the expected value of the maximum of some RVs
If $X$ is uniformly distributed on $[0,\theta]$ then $P\{X\leq x\}=\frac x {\theta}$ (for $0 \leq x \leq \theta$) and not $x$; the density is $\frac 1 {\theta}$ on $(0,\theta)$. Once you make this correction you will get the right answer.
Prove that the integral of a monotone function is between a given area interval
If $f(b)> f(a)$ (so the monotone function $f$ is increasing), then $\int_a^b (f(b)-f(x))\, d x \geq 0,$ since the integrand is non-negative, while $\int_a^b (f(a)-f(x))\, d x \leq 0,$ since the integrand is non-positive.
geometric interpretation of analytical hahn-banach theorem
In finite dimensions the picture is quite clear, because we have the inner product. In $\mathbb{R}^2$, each linear functional is specified by taking the inner product with a vector $x = (x_1,x_2)$. The equation $$ \langle x,y\rangle = 1 $$ specifies a line in $\mathbb{R}^2$ at distance $1/\Vert x\Vert$ from $0$. (This is easy geometry.) Regarding $\mathbb{R}^2$ as a subspace of $\mathbb{R}^3$, we can extend $x$ to the linear functional $x = (x_1,x_2,0)$, which obviously has the same norm as $(x_1,x_2)$. The equation $$ \langle x, y\rangle = 1 $$ (keeping in mind this is the inner product in $\mathbb{R}^3$ now) specifies a plane in $\mathbb{R}^3$, and clearly it contains the line $L$ in $\mathbb{R}^2 \subset\mathbb{R}^3$ from earlier and is the same distance from the origin.
How do I show that $\cos(t)+1=2\cos^2(\frac{t}{2})$
Notice that $$ \cos ( 2 \alpha ) = \cos^2 \alpha - \sin^2 \alpha $$ Therefore, with $\alpha = t/2$, one has $$ \cos ( t ) = \cos^2(t/2) - \sin^2(t/2) $$ since $\sin^2 (t/2) = 1 - \cos^2 (t/2)$ , one has the result.
Construct a permutation of the set N of all natural numbers that maps all the multiples of 3 onto the set of all even numbers.
We satisfy the condition of the question first: $f(3k)=2k$ for $k\ge0$. Now just assign the rest of the domain (non-multiples of $3$) to the rest of the codomain (odd numbers) in order, which yields $$f(n)=\begin{cases} 2k&n=3k\\ 4k+1&n=3k+1\\ 4k+3&n=3k+2\end{cases}$$ where $k$ is also a natural number. That this is a permutation can be verified by noting that $\{3k,3k+1,3k+2\}$ and $\{2k,4k+1,4k+3\}$ both define complete residue systems.
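A small sanity check of this formula on an initial segment of $\mathbb N$ (the bound $N$ below is an arbitrary test size, not part of the argument):

```python
def f(n):
    k, r = divmod(n, 3)
    return 2 * k if r == 0 else (4 * k + 1 if r == 1 else 4 * k + 3)

N = 3000                                               # test on {0, ..., N-1}
values = [f(n) for n in range(N)]
assert len(set(values)) == N                           # injective on the segment
assert all(f(3 * k) == 2 * k for k in range(N // 3))   # multiples of 3 -> even numbers
assert set(range(2 * (N // 3))) <= set(values)         # all small numbers get hit
print("consistent with f being a permutation of N")
```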
Simplification of Kampé de Fériet function
Hint: $\int_0^zt^m~_0F_1(;1;-t)~_2F_3(1,1;2,m,m+1;-at)~dt$ $=\int_0^zt^m\sum\limits_{n=0}^\infty\dfrac{(-1)^nt^n}{(n!)^2}\sum\limits_{k=0}^\infty\dfrac{(-1)^ka^kt^k}{(m)_k(m+1)_k(k+1)}dt$ $=\int_0^z\sum\limits_{n=0}^\infty\sum\limits_{k=0}^\infty\dfrac{(-1)^{n+k}a^kt^{n+k+m}}{(n!)^2(m)_k(m+1)_k(k+1)}dt$ $=\left[\sum\limits_{n=0}^\infty\sum\limits_{k=0}^\infty\dfrac{(-1)^{n+k}a^kt^{n+k+m+1}}{(n!)^2(m)_k(m+1)_k(k+1)(n+k+m+1)}\right]_0^z$ $=\sum\limits_{n=0}^\infty\sum\limits_{k=0}^\infty\dfrac{(-1)^{n+k}a^kz^{n+k+m+1}}{(n!)^2(m)_k(m+1)_k(k+1)(n+k+m+1)}$
A noetherian ring $R$ which is commutative integral domain but not a PID?
In the polynomial ring $\mathbb{Z}[x]$, the ideal $$I = \langle 2, x\rangle$$ is not principal.
The sum of two infinite series
By the ratio test, there is an $m$ sufficiently large and there are $r_a,r_b<1$ such that for all $n\ge m$, $$|a_n|\le |a_m|r_a^{n-m}$$ and $$|b_n|\le |b_m|r_b^{n-m}.$$ Then $$|a_n+b_n|\le|a_n|+|b_n|\le |a_m|r_a^{n-m}+|b_m|r_b^{n-m}\le(|a_m|+|b_m|)(\max(r_a,r_b))^{n-m},$$ assuming $r_a\ge r_b$.
Let Q(n) be the sum of digits of n. Prove that Q(n) = Q(2n) implies 9|n
Hint $$Q(n) \equiv n \pmod{9}$$ Therefore $Q(n)=Q(2n)$ implies $$n \equiv 2n \pmod{9}$$
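An empirical check of the implication on a finite range (of course this is no substitute for the congruence argument in the hint):

```python
def Q(n):
    """Digit sum of n."""
    return sum(map(int, str(n)))

for n in range(1, 100000):
    if Q(n) == Q(2 * n):
        assert n % 9 == 0
print("Q(n) == Q(2n) implied 9 | n for all n < 100000")
```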
Question of remainder on dividing by 7
While lab's answer is very elegant, it does rely, in some sense, on luck (as do all elegant answers). Here follows a thorough answer that will let you solve any such problem: First of all, as far as the remainder when divided by $7$ is concerned, there is no difference between $10$ and $3$, so I'm going to work with $$ 3^{10} + 3^{10^2} +3^{10^3} + \cdots + 3^{10^{100}} $$ The remainders when divided by $7$ of successive powers of $3$ goes like this: $$ 3^1 \mapsto 3\\ 3^2 \mapsto 2\\ 3^3 \mapsto 6\\ 3^4 \mapsto 4\\ 3^5 \mapsto 5\\ 3^6 \mapsto 1\\ 3^7 \mapsto 3 $$ and so on. It turns out that the only thing that is important for the $7$-remainder of a power of $3$ is the remainder of the exponent when divided by $6$ (this is what Fermat's little theorem would tell you directly, so you didn't have to check if you knew that one). So we need to find the $6$-remainder of the different powers of $10$. Now, as for the $6$-remainder, there is no difference between $10$ and $4$, so I will be focusing on the $6$-remainder of $4^n$. Now, the $6$-remainder of the different powers of $4$ are: $$ 4^1 \mapsto 4\\ 4^2 \mapsto 4\\ 4^3 \mapsto 4 $$ and we see that the remainder is the same all the way (Euler's theorem would've told us that the only thing that could matter was whether the exponent was even or not, and we see here that even that doesn't matter). So we see that the $7$-remainder of $$ 10^{10} + 10^{10^2} +10^{10^3} + \cdots + 10^{10^{100}} $$ is the same as the $7$-remainder of $$ 3^{10} + 3^{10^2} +3^{10^3} + \cdots + 3^{10^{100}} $$ which again is the same as the seven-remainder of $$ 3^{4} + 3^{4^2} +3^{4^3} + \cdots + 3^{4^{100}} $$ which again is the same as the $7$-remainder of $$ 3^{4} + 3^{4} +3^{4} + \cdots + 3^{4} = 100\cdot 3^4 = 8100 $$ and the remainder of $8100$ when divided by $7$ is $1$.
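For what it's worth, Python's three-argument `pow` confirms the final value in one line, since it reduces the huge exponents by fast modular exponentiation:

```python
# Each exponent 10**k is astronomically large, but pow(10, e, 7) handles it quickly.
total = sum(pow(10, 10 ** k, 7) for k in range(1, 101)) % 7
print(total)  # 1, matching the hand computation above
```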
Use of Cantor Schroder-Bernstein theorem?
The Cantor-Bernstein theorem is probably one of the most useful and easily applied theorems in set theory. Theorem. If there exists an injection $f\colon A\to B$ and an injection $g\colon B\to A$ then there exists a bijection $h\colon A\to B$. That is all. Under the axiom of choice we can replace the injections with surjections, or so, but still constructing injections is often simple enough, especially at the level of this question. So to use the theorem you really just need to come up with two injections $f\colon[0,1]\to[1,\infty)$ and $g\colon[1,\infty)\to[0,1]$. This will be sufficient to conclude that there is a bijection between these two subsets of $\Bbb R$.
Help understanding proof that uses Sylow Theorems
$Q\cong Z_{q}$: there is a theorem says there is only one group of prime order $q$, $Z_{q}$. This theorem follows from Lagrange's theorem. (cor. 10 page 90 in Dummit and Foote.) $QP\leq G$: by a theorem: if $H\leq G$ and $K\unlhd G$, then $HK\leq G$. (cor. 15 page 94 in Dummit and Foote.) $QP\cong Q\times P$: by this theorem: if $H,K\unlhd G$ and $H\cap K=1$, then $HK\cong H\times K$. (thm 9, page 171 in Dummit and Foote.) $Z_{q}\times Z_{p}\cong Z_{qp}$: by proposition: $Z_{m}\times Z_{n}\cong Z_{mn}$ if and only if $(m,n)=1$. (prop. 6, page 163 in Dummit and Foote.) $Q\times P\cong Z_{q}\times Z_{p}$: you can prove this: if $f_{1}:H_{1}\to K_{1}, f_{2}:H_{2}\to K_{2}$ are group isomorphisms, then $f:H_{1}\times H_{2}\to K_{1}\times K_{2}$ defined by $f(h_{1},h_{2})=(f_{1}(h_{1}),f_{2}(h_{2}))$ is a group isomorphism.
Graph of periodic extension of function and its Fourier cosine series
What's the length of the interval where your function is defined? The domain is $(\frac 32,3)$. The length of this interval is $\frac 32$. So you can just shift everything by that value, and your interval will become $(0,\frac 32)$. Then the problem reduces to something that you already know how to solve: $$ f(x)= \left\{ \begin{array}{ll} 1 & ,x \in ( 0, \frac{1}{2}) \\ 3-x & ,x \in [\frac 12,\frac32) \end{array} \right. $$ While this function is periodic, it is not even. You can extend it as: $$ f(x)= \left\{ \begin{array}{ll} x & ,x \in ( 0, 1) \\ 1 &, x\in [1,2)\\ 3-x & ,x \in [2,3) \end{array} \right. $$ Now you can make an even, periodic function out of it.
Approximating $e^{\frac 1 {10}}$ with Taylor expansion
$|e^x-\sum_{i\le k} {x^i \over i!} | = | \sum_{i > k} {x^i \over i!} | \le {1 \over (k+1)! } \sum_{i > k} {|x|^i} = {1 \over (k+1)! }|x|^{k+1} {1 \over 1-|x|}$. With $x={1\over 10}$, a few computations shows that $k=2$ satisfies ${1 \over (k+1)! }|x|^{k+1} {1 \over 1-|x|} < {1 \over 10^3}$.
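A quick numerical check of this bound for $x=\frac1{10}$ and $k=2$:

```python
import math

x = 0.1
k = 2
approx = sum(x**i / math.factorial(i) for i in range(k + 1))        # 1 + x + x^2/2
error = abs(math.exp(x) - approx)
bound = x**(k + 1) / (math.factorial(k + 1) * (1 - x))
print(error, bound, error < bound < 1e-3)                            # True
```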
How Do I Compute the Eigenvalues of a Small Matrix?
Here's a cool way to compute eigenvalues and eigenvectors of matrices. Unfortunately, it requires solving a degree-$n$ polynomial, where the matrix is $n\times n$, so it's not suited for large matrices, but for many problems it is sufficient. Additionally, there are some conditions on the matrix that make it doable for larger matrices. Let's suppose you have a matrix $A$ over some field, $F$. When $v\neq 0$ is an eigenvector, $v$ satisfies $Av=\lambda v$ for some $\lambda\in F$. Thus $Av-\lambda v = 0$ where $0$ is the zero vector, so $(A-\lambda I)v = 0$. If $\det(A-\lambda I)\neq 0$, then $A-\lambda I$ would be invertible. Multiplying both sides by the inverse gives $v=0$, so for eigenvalues we are going to have a determinant of $0$. By considering $\lambda$ as a variable, we can take the determinant and produce a polynomial of degree $n$ over $F$ which is known as the characteristic polynomial of the matrix $A$. It is commonly denoted $p_A(\lambda)$. This polynomial has several interesting properties, but what is relevant to us is that its zeros are exactly the eigenvalues of $A$. For small cases, this gives us a surefire way to find the eigenvalues of a matrix. For larger matrices, this polynomial is not necessarily solvable, but still worth looking at, as some of its roots might be obvious. Additionally, under some circumstances, it will have solutions that we can solve for. Once we have obtained however many eigenvalues as we are able to compute, $\{\lambda_1,\dots,\lambda_m\}$, we can then directly find the corresponding eigenvectors by looking at the equation $Av=\lambda_i v$. This gives rise to a system of equations that has infinitely many solutions (as a scalar multiple of an eigenvector is an eigenvector), but all of them are eigenvectors of $A$ corresponding to $\lambda_i$. The reason why this approach doesn't work in general is that it's not always possible to algebraically solve polynomials of large degree. Here's an example computation (taken from wikipedia). The eigenvectors, $v$, of $A= \begin{bmatrix} 2 & 0 & 1\\0 & 2 & 0\\ 1 & 0 & 2\end{bmatrix}$, satisfy the equation $(A-\lambda I)\mathbf{v}=0$. This means that $$\det\left(\begin{bmatrix} 2-\lambda & 0 & 1\\0 & 2-\lambda & 0\\ 1 & 0 & 2-\lambda\end{bmatrix}\right)=0$$ or that $0=6-11\lambda+6\lambda^2-\lambda^3$. Thus we have that the characteristic polynomial is $p_A(\lambda)=\lambda^3-6\lambda^2+11\lambda-6$. The solutions to this polynomial are $\{1,2,3\}$, so those are the eigenvalues of $A$. They give rise to the eigenvectors $(1,0,-1),(0,1,0),$ and $(1,0,1)$ respectively.
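Here is a short sympy sketch of exactly this computation on the example matrix (the printed factorization and ordering may differ slightly between sympy versions):

```python
import sympy as sp

A = sp.Matrix([[2, 0, 1],
               [0, 2, 0],
               [1, 0, 2]])
lam = sp.symbols('lambda')

p = (A - lam * sp.eye(3)).det()   # characteristic polynomial (up to sign convention)
print(sp.factor(p))               # factors with roots 1, 2, 3
print(sp.solve(p, lam))           # [1, 2, 3]
print(A.eigenvects())             # eigenvalues together with their eigenvectors
```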
Find ${\rm d}y/{\rm d}x$ and simplify as much as possible, $y=x/(2x+5)^3$
You can use the quotient rule directly: $$\frac{{\rm d}y}{{\rm d}x} = \frac{(2x+5)^3-6x(2x+5)^2}{(2x+5)^6} = \frac{2x+5-6x}{(2x+5)^4} = \frac{-4x+5}{(2x+5)^4}.$$ This in fact indicates that you've got a little mistake: it's $(2x+5)^{-4}$ instead of $(2x+5)^2$ in what you have written there. I, particularly, prefer to leave stuff in a single fraction. Another approach: writing $y = x(2x+5)^{-3}$ and using the product rule, we have: $$\frac{{\rm d}y}{{\rm d}x} = (2x+5)^{-3} - x(-3(2x+5)^{-4}2) = (2x+5)^{-3}-6x(2x+5)^{-4}.$$ Your mistake probably was thinking that $(({\rm stuff})^{-3})' = -3({\rm stuff})^{-2}$, because $2 < 3$, and also a sign mistake. You decrease the exponent, so $-3 \to -4$.
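A one-line sympy check of the quotient-rule result (the exact printed form may differ, but it agrees with $\frac{-4x+5}{(2x+5)^4}$):

```python
import sympy as sp

x = sp.symbols('x')
y = x / (2 * x + 5) ** 3
dy = sp.simplify(sp.diff(y, x))
print(dy)
print(sp.simplify(dy - (-4 * x + 5) / (2 * x + 5) ** 4))   # 0, so the forms agree
```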
Computing the rate of decrease of the temperature, which is a function of position in space
The chain rule says that $$\frac d{dt} T(r(t)) = \frac d{dt}T(x(t),y(t),z(t)) = \nabla T(r(t))\cdot \frac{dr}{dt} = \frac{\partial T}{\partial x}\frac{dx}{dt} + \frac{\partial T}{\partial y}\frac{dy}{dt} + \frac{\partial T}{\partial z}\frac{dz}{dt}\,.$$ To complete the problem, we need to know more than the speed (which is the magnitude of the velocity vector). We need to know both the position and the velocity vector at the desired instant $t_0$. Then we can evaluate the gradient vector at the desired point ($r(t_0)$) and dot it with the velocity vector.
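Here is a small sympy sketch of this chain-rule computation; the temperature field $T$ and the path $r(t)$ below are purely made-up placeholders (the actual data come from the original exercise), and the point is only to show the gradient-dot-velocity calculation:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# Hypothetical data, just to illustrate the formula above
T = sp.exp(-(x**2 + y**2 + z**2))
r = (sp.cos(t), sp.sin(t), t)

grad_T = [sp.diff(T, v) for v in (x, y, z)]      # gradient of T
dr_dt = [sp.diff(c, t) for c in r]               # velocity vector
subs = dict(zip((x, y, z), r))

dT_dt = sum(g.subs(subs) * d for g, d in zip(grad_T, dr_dt))
print(sp.simplify(dT_dt))                                     # chain-rule value
print(sp.simplify(dT_dt - sp.diff(T.subs(subs), t)))          # 0: matches direct differentiation
```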
Write down the character of W.
If $A\in{\rm End}(V)$ and $B\in{\rm End}(W)$ then there is an endomorphism $A\otimes B\in{\rm End}(V\otimes W)$ which acts on pure tensors as $(A\otimes B)(v\otimes w)=Av\otimes Bw$. We have ${\rm tr}(A\otimes B)={\rm tr}(A){\rm tr}(B)$. This tells you how to write down the character of $U_5\otimes U_6$. Since characters determine representations, to decompose $U_5\otimes U_6$ as a direct sum of irreducible representations, it suffices to write its character as an integer sum of irreducible characters. By viewing the characters as linearly independent vectors, this becomes a basic linear algebra problem.
Prove that $B \setminus (\bigcup_{i \in I} A_i) = \bigcap_{i \in I} B \setminus A_i$.
As far as I see, your problem is to understand why you can infer $x \in B$ from \begin{align}\tag{1} \forall i \in I \,(x \in B) \end{align} knowing that $I \neq \emptyset$. Your question is legitimate because in $(1)$, $x \in B$ under the hypothesis $i \in I$ (while in the conclusion $x \in B$ there is no further hypothesis). Indeed, a formally proper way to write $(1)$ is the following: \begin{align}\tag{2} \forall i \, (i \in I \to x \in B) \end{align} Intuitively, from $(2)$, or equivalently $(1)$, you can infer $x \in B$ (without any further hypothesis) because the statement $x \in B$ does not depend on $i$, since $i$ does not occur in $x$ or in the definition of $B$. Hence, the hypothesis $i \in I$ does not play any role to conclude $x \in B$ and you can discard it. But you can do it provided that your hypothesis $i \in I$ is true, i.e. $I$ must be non-empty. More formally, since $I$ is non-empty, there exists $i \in I$. According to $(2)$, for such a $i$ we have $i \in I \to x \in B$. By modus ponens (since $i\in I$ and $i \in I \to x \in B$) you can conclude that $x \in B$. Note that the hypothesis that $I$ is non-empty is crucial. If $I = \emptyset$ then $(2)$, or equivalently $(1)$, is vacuously true: since the hypothesis $i \in I$ is false, then the implication $i \in I \to x \in B$ is true regardless of $x \in B$ or $x \notin B$ (for every $i$ in the universe). So, for $I = \emptyset$ you cannot conclude whether $x \in B$ or not. As a consequence, when $I = \emptyset$, we have that $B \setminus (\bigcup_{i \in I} A_i) \neq \bigcap_{i \in I} B \setminus A_i$ (unless $B$ is the whole universe), because it can be easily shown that, for $I = \emptyset$, we have $B \setminus (\bigcup_{i \in I} A_i) = B$ while $\bigcap_{i \in I} B \setminus A_i$ is the whole universe.
Proving an Inequality for Upper Right-Hand Dini Derivatives
First use that $(f+g)(x)=f(x)+g(x)$ by definition. Then $$\limsup_{h \to 0^+} \frac{(f+g)(x_0 + h) - (f+g)(x_0)}{h}=\limsup_{h \to 0^+} \frac{f(x_0+h)+g(x_0 + h) - f(x_0)-g(x_0)}{h}=\limsup_{h \to 0^+}\left[\frac{f(x_0+h)- f(x_0)}{h}+\frac{g(x_0 + h) -g(x_0)}{h}\right]. \tag{$\ast$}$$ Now, using subadditivity of $\limsup$, $$(\ast) \leq \limsup_{h \to 0^+} \frac{f(x_0 + h) - f(x_0)}{h}+\limsup_{h \to 0^+} \frac{g(x_0 + h) - g(x_0)}{h}.$$ And you're done.
Substitute s for cos u and ds for -sin u
So you have: $$I=\frac23\int{\frac{\sin{u}}{\cos{u}}d{u}}$$ Let: $$ s=\cos{u}$$ Therefore: $$ d{s}=-\sin{u}\ d{u}$$ Now $\cos{u}$ is in the denominator so: $$I=\frac23\int-\frac{d{s}}{s}$$
What is the definition of integration $\int{\rm d} z{\rm d} \bar z$?
Start with $z=x+iy$, compute the (absolute value of the) Jacobian to get a Gaussian integral of the type $$ \int_{-\infty}^\infty dx\,\int_{-\infty}^\infty dy \, e^{-(x^2+y^2)}\, . $$ This kind of notation is common in mathematical physics, especially in the study of coherent states, where it is often rewritten as $d^2\alpha$, with $\alpha=x+ip$, as in this wiki page. The notation using $z$ rather than $\alpha$ is favoured in the early papers of Perelomov and the Russian school.
How is $L^p(\partial \Omega)$ defined if $\Omega$ is an interval?
The bug is when you say that you can see $\partial \Omega$ as a subset of $\mathbb R^{n-1}$. Your example shows exactly that. And in all generality, I don't know if you can give $\partial \Omega$ a "natural measure".
Recurring decimal and GIF
Actually this is an interesting question: the discrepancy given by the GIF function you mention (usually called the floor function) on the number $2.9999\dots$ is due to the discontinuity of this function at any integer point. For any number of the sequence $a_0=2$, $a_1=2.9$, $a_2=2.99$, $a_3=2.999$ and so forth the value of the GIF function is indeed $2$ and hence $$\lim_{n}\lfloor a_n \rfloor=\lim_{n\to \infty}\lfloor 2.99\dots9 \rfloor=2$$ but if you exchange the limit and the function, then the limit $\lim_{n\to \infty}a_n=2.99\dots\equiv 3$ thus $$\lfloor\lim_{n\to\infty} a_n\rfloor=\lfloor 3\rfloor=3.$$ Without sequences, this is saying that $$\lim_{x\nearrow 3}\lfloor x\rfloor=2\neq 3=\lim_{x\searrow 3}\lfloor x\rfloor.$$
Does $\lim\limits_{(x,y)\to (0,0)}\frac{x^4}{y}$ exist?
If you really want to do it with a single sequence of points, and want to avoid the line $y=0$, it can be done this way. If $n$ is odd, then $x_n=\frac{1}{n}$ and $y_n=\frac{1}{n}$. If $n$ is even, then $x_n=\frac{1}{n}$ and $y_n=\frac{1}{n^4}$. However, using two different paths is clearer.
Conditional Probability with balls in an urn
The probability that the contents of the urn are two red is indeed $\frac{1}{4}$, as is the probability of two blue, and the probability of mixed is therefore $\frac{1}{2}$. The derivation could have been done more quickly. Question (a) asks for the probability both are red given that the two drawn balls are red. Let $R$ be the event both are red, and $D$ be the event both drawn balls are red. We want $\Pr(R|D)$. By the usual formula this is $\frac{\Pr(R\cap D)}{\Pr(D)}$. To find $\Pr(D)$, note that if both balls are red (probability $\frac{1}{4}$), then the probability of $D$ is $1$, while if one ball is red and the other is not (probability $\frac{1}{2}$) then the probability of $D$ is $\frac{1}{4}$. Thus the probability of $D$ is $\left(\frac{1}{4}\right)(1)+\left(\frac{1}{2}\right)\left(\frac{1}{4}\right)$. This is $\frac{3}{8}$. The probability of $R\cap D$ is $\frac{1}{4}$. So the ratio is indeed $\frac{2}{3}$. For (b), you can use the calculation of (a). With probability $\frac{2}{3}$ we are drawing from a double red, and we will get red with probability $1$. With probability $\frac{1}{3}$ the urn is a mixed one, and the probability of drawing a red is $\frac{1}{2}$, for a total of $\frac{2}{3}\cdot 1+\frac{1}{3}\cdot\frac{1}{2}$. One can also solve (b) without using the result of (a). With, I hope, self-explanatory notation, we want $\Pr(RRR|RR)$. The probabilities needed in the conditional probability formula are easily computed. We have $\Pr(RRR\cap RR)=\Pr(RRR)=\frac{1}{4}\cdot 1+\frac{1}{2}\cdot\frac{1}{8}=\frac{5}{16}$. Divide by $\frac{3}{8}$.
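A Monte Carlo sketch of the model implied by these numbers (two balls, each independently red with probability $\frac12$, and draws made with replacement; that model is inferred from the $\frac14$ used above):

```python
import random

trials = 10**6
both_drawn_red = 0
urn_both_red = 0
third_red = 0

for _ in range(trials):
    urn = [random.random() < 0.5 for _ in range(2)]   # True = red ball in the urn
    draws = [random.choice(urn) for _ in range(3)]     # three draws with replacement
    if draws[0] and draws[1]:
        both_drawn_red += 1
        urn_both_red += all(urn)
        third_red += draws[2]

print(urn_both_red / both_drawn_red)   # ~ 2/3, answer to (a)
print(third_red / both_drawn_red)      # ~ 5/6, answer to (b)
```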
Is it possible to construct probability theory which is not based on measure theory but on logic?
Yes. If you think about it, you need $\Omega$ and $P$ first. Or else for a given $p\in\Omega$, $\neg p$ does not make sense. Since $$\neg p = \{q\in\Omega | q \neq p\}$$ The $\sigma$-algebra $F$ is in some sense "What events in $\Omega$ satisfy conditions of $P$"
Each probability measure on a countable space comes from a weight function
On $\Omega$ define the relation:$$x\sim y\iff \forall A\in\mathcal F[\{x,y\}\subseteq A\vee\{x,y\}\subseteq A^{\complement}]$$ This can be shown to be an equivalence relation. Reflexivity and symmetry are evident, and if $x\sim y\wedge y\sim z$ then the existence of a set $A\in\mathcal F$ with $x\in A\wedge z\notin A$ leads to a contradiction of $x\sim y$ if $y\notin A$ and to a contradiction of $y\sim z$ if $y\in A$. Let $[x]$ denote the equivalence class. For every $y\notin[x]$ there is a set $A_y\in\mathcal F$ that contains $y$ but does not contain $x$, so that $[x]^{\complement}=\bigcup_{y\notin[x]}A_y$. So if $\Omega$ is countable then $[x]\in\mathcal F$ because $\bigcup_{y\notin[x]}A_y$ is then a countable union of elements in $\mathcal F$. So we end up with a partition of $\Omega$ into elements of $\mathcal F$ that are not empty and are such that no non-empty proper subset of such an element is an element of $\mathcal F$. Then $\mathcal F$ will be exactly the collection of the unions of such sets. As described in the comments you can now define $p:\Omega\to\mathbb R$ as a function prescribed by: $$x\mapsto\frac{P([x])}{|[x]|}$$if $[x]$ is a finite set. Next to that it needs to be defined on $\{x\in\Omega\mid [x]\text{ not finite}\}$ as well. If $[x]$ is infinite then just let $p$ be defined on its elements in such a way that: $$\sum_{y\in[x]}p(y)=P([x])$$ Actually that works for finite sets also.
Solve inequality logarithm
Assuming you want to find for which $x$ we have $\log_{(1-|x|)}|3x-1|<1$ (the logarithm with base $1-|x|$), we have the following. Converting the logarithm to its exponential equivalent gives: $$\log_{(1-|x|)}|3x-1|<1\Rightarrow (1-|x|)<(3x-1).$$ Noting that the base of a logarithm must be positive, we immediately see that $x$ is bounded above by $1$: for $\log_{(1-|x|)}$ to make sense we need $1-|x|>0$, i.e. $|x|<1$. A lower bound for $x$ follows from the inequality $(1-|x|)<(3x-1)$: for $x>0$, $(1-x)<(3x-1)\Rightarrow 2<4x\Rightarrow x>\frac 12$. Thus the values of $x$ such that $\log_{(1-|x|)}|3x-1|<1$ are $\frac 12<x<1$.
Unusual Weighted Average Calculation
I think you could do a weighted average as planned, but use (benchmark.timeoffRequests - score.timeoffRequests) in place of score.timeoffRequests in the average.
On a substitution to solve a high order differential equation (exercise).
hint: use the fact that $e^{i\sqrt{2}t}=\cos(\sqrt{2}t)+i\sin(\sqrt{2}t)$
Solutions to $x_1+2x_2+3x_3+4x_4+5x_5+6x_6+7x_7+8x_8+9x_9+10x_{10}\equiv0\mod11$
Rewrite it as $$10x_{10} \equiv -x_1 - \ldots - 9x_9 ~\text{mod}~ 11.$$ Since $10 \in (\mathbb{Z}/11\mathbb{Z})^*$, this equation has exactly one solution for each choice of $x_1,\ldots,x_9$, namely $x_{10} := x_1 + \ldots + 9x_9 ~ \text{mod}~ 11$. So in total there are $10^9$ solutions. Edit: Clarification. $10 \in (\mathbb{Z}/11\mathbb{Z})^*$ means that $10$ is invertible in the ring $\mathbb{Z}/11\mathbb{Z}$. You may think of that ring as "integers modulo 11". Note that $10 \equiv -1 ~\text{mod}~ 11$ and thus $10 \cdot 10 \equiv 1 ~\text{mod}~ 11$. As we see, $10$ is its own inverse in $\mathbb{Z}/11\mathbb{Z}$. Multiplying both sides of the rewritten equation by $10$ yields $$ 10\cdot10x_{10} \equiv 10\cdot( -x_1-\ldots-9x_9)~\text{mod}~11 ~~~\Leftrightarrow~~~ x_{10} \equiv (-1) (-x_1-\ldots-9x_9)~\text{mod}~11.$$
Can you generalize the Triangle group to other polygons?
Concretely, if $P$ is a convex polygon in a plane $X$ (a complete simply connected Riemannian surface of constant curvature, i.e. the Euclidean plane, the hyperbolic plane or a round sphere) with the consecutive angles $\frac{\pi}{n_1},..., \frac{\pi}{n_k}$, then the group $G$ of isometries of $X$ generated by isometric reflections in the edges of $P$ has the presentation $$ \langle s_1,...,s_k\mid s_i^2, (s_i s_{i+1})^{n_i}, i=1,...,k\rangle, $$ where the indices are taken cyclically, so that $s_{k+1}=s_1$.
Power Series in Two Variables and Radius of Convergence
Consider $\sum_{n=1}^{\infty}(2^n\sin y)x^n.$ If $y\in \mathbb R \setminus \pi\mathbb Z,$ then $$\limsup_{n\to \infty}|2^n\sin y|^{1/n} = 2.$$ Hence for those values of $y,$ the radius of convergence is $1/2.$ On the other hand, if $y \in \pi\mathbb Z,$ then the series vanishes identically and the radius of convergence is $\infty.$
Find all ordered pairs $(a,b)$ of positive integers for which $\frac{1}{a} + \frac{1}{b} = \frac{3}{2018}$
Clearing denominators gives $3ab=2018(a+b)$, which rearranges to $(3a-2018)(3b-2018)=2018^2$. So you want to look at factorizations of $2018^2$ where each factor is $\equiv -2018 \equiv 1 \mod 3$. Since $2018 = 2 \times 1009$ with $1009$ prime, there are not too many solutions.
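As a sanity check, here is a minimal Python sketch that searches over the smaller member of each pair (the bounds come from $\frac{3}{2\cdot 2018}\le\frac1a<\frac{3}{2018}$ when $a\le b$) and records both orders:

```python
from fractions import Fraction

target = Fraction(3, 2018)
pairs = set()
# if a <= b then target/2 <= 1/a < target, i.e. 673 <= a <= 1345
for a in range(2018 // 3 + 1, 2 * 2018 // 3 + 1):
    rest = target - Fraction(1, a)
    if rest > 0 and rest.numerator == 1:   # 1/b must be a unit fraction
        b = rest.denominator
        pairs.add((a, b))
        pairs.add((b, a))
print(sorted(pairs))   # one ordered pair per divisor d of 2018^2 with d = 1 (mod 3)
```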
How to solve the "four" variables problem
Hint: $$5(yz-xw)^2+(xy+5zw)^2=5(5^2)+105=230$$ Now $$5y^2z^2+5x^2w^2+x^2y^2+25z^2w^2=(5z^2+x^2)(y^2+5w^2)$$
Is this a Non-homogeneous DE
First solve the homogeneous equation $$\frac{dy}{dt}+6ty(t)=0$$ and after this get a particular solution of the inhomogeneous equation.
Exercises about Distributions
Perhaps you could try Claude Zuily, Problems in Distributions and Partial Differential Equations (North-Holland 1988). Another book that might be of interest for you is Duistermaat J., Kolk J. Distributions: Theory and Applications (Birkhäuser 2010). Solutions to selected (starred) problems are given at the end of the book.
Combinatorial arguments for number of partitions of $n$ into $k$ distinct parts
Is there any bijective map between these two kinds of partitions? Hint. Note that if $0\leq x_1\leq x_2\leq \dots \leq x_k$ with $$x_1+x_2+\dots +x_k=n$$ then $1\leq y_1<y_2< \dots <y_k$ with $$y_1+y_2+\dots +y_k=n+1+2+\dots+k=n+{k+1\choose 2}$$ where $y_i=x_i+i$ for $i=1,2,\dots,k$, and, as you noticed, ${k+1\choose 2}$ is the sum of all positive integers up to $k$.
Ratio of quadratic forms of powers of a matrix.
A symmetric matrix is always diagonalizable (http://control.ucsd.edu/mauricio/courses/mae280a/lecture11.pdf); moreover, $A$ can be diagonalized by an orthogonal matrix $P$, meaning that we can write: $$\tag{1} A=PDP^{-1}=PDP^{T}$$ with $D=\operatorname{diag}(\lambda_1, \ \lambda_2, \ \dots, \ \lambda_n).$ Moreover, $A$ being positive definite, we can assume: $$\tag{*}\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n>0.$$ From (1), we can deduce: $$y^T A^{m} y=y^T PD^{m}P^{T} y= (P^Ty)^T D^{m} (P^{T} y)=z^T D^{m} z=\sum_{k=1}^n \lambda_k^m z_k^2,$$ where $z$ is defined as $z:=P^Ty$, and $z_k \ (k=1,\dots,n)$ are the coordinates of $z.$ Thus: $$\frac{y^T A^{m+1} y}{y^T A^m y}= \frac{\sum_{k=1}^n \lambda_k^{m+1} z_k^2}{\sum_{k=1}^n \lambda_k^m z_k^2}$$ Now, for the sake of simplicity of presentation, let us take $n=2$, and assume $z_1\neq 0$ (i.e. $y$ has a nonzero component along an eigenvector for $\lambda_1$). Let us factor $\lambda_1^{m+1}$ out of the numerator and $\lambda_1^{m}$ out of the denominator: $$\frac{\lambda_1^{m+1} z_1^2+\lambda_2^{m+1} z_2^2}{\lambda_1^{m} z_1^2+\lambda_2^{m} z_2^2}=\frac{\lambda_1^{m+1}z_1^2}{\lambda_1^{m}z_1^2} \times \frac{1+k^2 \rho^{m+1}}{1+k^2 \rho^{m}}$$ where $$\rho:=\tfrac{\lambda_2}{\lambda_1}, \ \ \ k:=\tfrac{z_2}{z_1}.$$ $\lambda_1$ being the largest eigenvalue of the matrix $A$, we have $\rho \leq 1$. Thus: $$\text{as} \ m \to \infty, \ \ \ \frac{\lambda_1^{m+1}z_1^2}{\lambda_1^{m}z_1^2} \times \frac{1+k^2 \rho^{m+1}}{1+k^2 \rho^{m}} \ \to \ \lambda_1.$$ Therefore, the right answer is Answer #3.
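For a quick numerical illustration, here is a small numpy sketch; the matrix and vector are arbitrary test data, with the identity added to $BB^T$ just to keep $A$ comfortably positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)                 # a symmetric positive definite test matrix
y = rng.standard_normal(n)

lam_max = np.linalg.eigvalsh(A).max()

v = y.copy()                            # v holds A^m y
for m in range(50):
    Av = A @ v
    ratio = (y @ Av) / (y @ v)          # y^T A^{m+1} y / y^T A^m y
    v = Av

print(ratio, lam_max)                   # the two numbers should be very close
```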
Find $ \vec{z} \parallel \vec{x}$ and $\vec{y}-\vec{z} \perp \vec{x}$
You've got a good notion, but you've made an arithmetic error. Your corrected equation is $$y_1x_1-\alpha x_1x_1+y_2x_2-\alpha x_2x_2=0$$ or equivalently $$\vec x\cdot\vec y-\alpha(\vec x\cdot\vec x)=0.$$ Can you take it from there?
Identifying combinations removing 2 possiblities from a set
Number of combinations $=5!\times {}^{7}C_{5}$
Cyclotomic fields of finite fields
If you only want to know what the Galois group is, in case the base is a finite field, the answer is easy, when you remember that the groups always are cyclic. In the case $\Bbb F_7$ and $X^5-1$, all you need to do is ask for the least $n$ such that $5\mid(7^n-1)$; here $7^n-1$ is the order of the multiplicative group of $\Bbb F_{7^n}$. The answer is $n=4$ (since $7^4=2401$ and $5\mid 2400$), so the Galois group is cyclic of order $4$.
Prove that $\bar F=F^\text{sep}F^{p^{-\infty}}$
Let $L=F^{p^{-\infty}}$ to save typing and let $a\in\bar F \setminus L$. Then there is no $n$ with $a^{p^n}\in L$ since then for some $m$ we would have $a^{p^{m+n}}\in F$ contradicting $a\notin L$. Now suppose the minimal polynomial $f$ of $a$ over $L$ were not separable. Then $f=g^p$ for some $g\in L[a][X]$. But since the $p^{th}$ powers of the coefficients of $g$ are in $L$, its coefficients are in $L$ (by the definition of $L$). Thus, $g\in L[X]$, and so $f$ is not irreducible in $L[X]$, which is a contradiction, since it was chosen as a minimal polynomial. Thus $a$ is separable over $L$. Since $a$ was arbitrary in $\bar F$, $\bar F$ is separable over $L$. This shows that $\bar F$=$L^{sep}$. It's not immediately obvious to me that it shows $\bar F$ is the compositum of $F^{sep}$ and $L$, but it's been a long time since I studied any of this. Anyway, I'm glad I was able to help with the part that wasn't obvious to you.
On the sequence of positive integers satisfying $\sigma(n)\mid (n(\sigma_0(n))^2)$
OEIS refers to section B2 in Guy, Unsolved Problems in Number Theory, for types of "semi" perfect numbers. They do define the harmonic numbers but do not say much more. Anyway, your sequence is a superset of the harmonic numbers (named by Pomerance in 1973), also called Ore numbers. The harmonic numbers begin $$ 1, 6, 28, 140, 270, 496, 672, 1638, 2970, 6200, 8128, 8190, 18600, 18620, 27846, 30240, 32760, 55860, 105664, $$ and are discussed at https://oeis.org/A001599
Question about unique representation of reals
In this construction reals are equivalence classes, as you understood. What your quote says is that one method to take a representative of a class is to take a base b development of the number you want to get. What you have is indeed a member of $\pi$ (seen as an equivalence class), Wikipedia just gave you some canonical representative.
Limit of an expression as $x$ tends to a particular quantity on a curve
For every point $(x,y)$ on $C$, $(xy+\frac12)^2=x^2y^2+xy+\frac14=x+\frac14$ hence $x\geqslant-\frac14$. Thus, limits when $x\to-\infty$ are undefined on $C$.
Tough Combinatorics Question (possibly related to Stirling numbers)
We can select $i\geq0$ persons to be used once in ${N\choose i}$ ways, and then can select $j\geq0$ persons to be used twice in ${N-i\choose j}$ ways. Necessarily $i+j\leq N$. Assume that the $i$, resp. $j$, persons have been selected. We then produce a clone of each of the $j$ persons and have before us $i+2j$ persons, $j$ of them clones. These $i+2j$ persons can be linearly arranged in $(i+2j)!$ ways, but we have to divide this number by $2^j$ since we cannot distinguish between a real person and its clone. It follows that the total number of admissible arrangements comes to $$\sum_{i=0}^N\sum_{j=0}^{N-i}{N!\over i!\,j!\,(N-i-j)!}\ {(i+2j)!\over 2^j}\ .$$ Maybe this expression can be simplified somewhat.
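Under my reading of the problem (linear arrangements in which each of the $N$ persons appears at most twice, including the empty arrangement, which is the $i=j=0$ term), the double sum checks out against a direct enumeration for small $N$:

```python
from itertools import product
from math import factorial

def by_formula(N):
    total = 0
    for i in range(N + 1):                      # i persons used once
        for j in range(N - i + 1):              # j persons used twice
            total += (factorial(N) // (factorial(i) * factorial(j) * factorial(N - i - j))
                      * factorial(i + 2 * j) // 2 ** j)
    return total

def by_enumeration(N):
    # count all sequences over N labelled persons in which each appears at most twice
    count = 0
    for length in range(2 * N + 1):
        for seq in product(range(N), repeat=length):
            if all(seq.count(p) <= 2 for p in range(N)):
                count += 1
    return count

for N in range(1, 5):
    assert by_formula(N) == by_enumeration(N)
print("formula matches enumeration for N = 1..4")
```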
How to calculate these totient summation sums efficiently?
For the case $j=0$, you can define some auxiliary summations to formulate an algorithm that runs in $O(n^{3/4})$ time: $$F(N) = \lvert \{ (a,b) : 0 < a < b \le N \} \rvert$$ $$R(N) = \lvert \{ (a,b) : 0 < a < b \le N, \gcd(a,b) = 1 \} \rvert$$ You can see that we are looking for $R(N) + 1$. Also, $F(N)$ is $\dfrac{N(N-1)}{2}$. Now observe something nice: $$R\left( \Big\lfloor\dfrac{N}{m}\Big\rfloor \right) = \lvert \{ (a,b) : 0 < a < b \le N, \gcd(a,b) = m \} \rvert$$ Why? This is because you can multiply every coprime pair $(a,b)$ by $m$. This fact lets you write $F$ in terms of $R$: $$F(N) = \sum_{m=1}^N{ R\left(\Big\lfloor\dfrac{N}{m}\Big\rfloor\right) }$$ Since we are looking for $R(N)$, we solve for the first term of the right summation: $$R(N) = F(N) - \sum_{m=2}^N{ R\left(\Big\lfloor\dfrac{N}{m}\Big\rfloor\right) }$$ Note this interesting property of the floor function here: $\Big\lfloor\dfrac{N}{m}\Big\rfloor$ will stay constant for a range of $m$. This lets us calculate the summation in chunks. Example: $\Big\lfloor\dfrac{1000}{m}\Big\rfloor$ is constant for $m$ in the range $[501,1000]$. Here's a program I wrote in C++ that caches $R$ to trade $O(\log n)$ memory for a large speedup
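The C++ code itself isn't shown above, but the recursion is short; here is a minimal memoized Python sketch of the same idea (the helper names are my own), with a small check against a direct totient sum:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def R(N):
    # number of pairs 0 < a < b <= N with gcd(a, b) = 1, via
    # R(N) = F(N) - sum_{m=2}^{N} R(floor(N/m)),  F(N) = N(N-1)/2
    total = N * (N - 1) // 2
    m = 2
    while m <= N:
        q = N // m
        m_next = N // q + 1          # floor(N/m) is constant for m in [m, N//q]
        total -= (m_next - m) * R(q)
        m = m_next
    return total

def totient_sum(N):
    # sum_{k=1}^{N} phi(k) = R(N) + 1  (the +1 accounts for the pair (1,1))
    return R(N) + 1

def phi(n):
    # plain trial-division totient, used only for the sanity check
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

assert totient_sum(1000) == sum(phi(k) for k in range(1, 1001))
print(totient_sum(10**6))
```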
If $P$ is a statistically complete set of distributions, the only sufficient subfield is the trivial one
This is essentially Theorem 3.1 in Lehmann E. L., Scheffé Henry. Completeness, Similar Regions, and Unbiased Estimation: Part I. Sankhyā: The Indian Journal of Statistics (1933-1960), Vol. 10, No. 4 (Nov., 1950), pp. 305-340
Cumulative distribution function and Brownian motion
Note that $\frac {B_t} {\sqrt {T-t}} \to \infty$ on the set $B_T >0$ and $\frac {B_t} {\sqrt {T-t}} \to -\infty$ on the set $B_T <0$. Hence $\lim_{t\to T} M_t =I_{\{B_T >0\}}$ almost surely.
Good reference for values of Ramsey Numbers
"Small Ramsey Numbers" by Stanisław Radziszowski https://www.combinatorics.org/ojs/index.php/eljc/article/view/DS1
How to solve $3(a+1)(b+1)=3^a \times 2^b$?
Note that $a+1\lt3^a$ if $a\gt1$ and $3(b+1)\lt2^b$ if $b\gt3$. Consequently $3(a+1)(b+1)\lt3^a\cdot2^b$ unless either $a=1$ or $0\le b\le3$. (Note, $a$ cannot be $0$, since the left hand side is divisible by $3$.) Thus we have five subcases to consider: (1) $a=1$ and $6(b+1)=3\cdot2^b$; (2) $b=0$ and $3(a+1)=3^a$; (3) $b=1$ and $6(a+1)=3^a\cdot2$; (4) $b=2$ and $9(a+1)=3^a\cdot4$; (5) $b=3$ and $12(a+1)=3^a\cdot8$. Tackling them one at a time.... (1) $2^b\gt2(b+1)$ if $b\gt3$, and among $b=0$, $1$, $2$, $3$ the equation $2^b=2(b+1)$ is solved only by $b=3$, giving the solution $(a,b)=(1,3)$ (the same solution that turns up again in subcase (5)). (2) $3^a\gt3(a+1)$ if $a\gt2$, $3^1\not=3(1+1)$, but $3^2=3(2+1)$, so $(a,b)=(2,0)$ is a solution. (3) Same as (2): $(a,b)=(2,1)$ is a solution. (4) $3^a\gt{9\over4}(a+1)$ if $a\gt1$, but $9(1+1)\not=3\cdot4$. (5) $3^a\gt{12\over8}(a+1)={3\over2}(a+1)$ if $a\gt1$, but $12(1+1)=3^1\cdot8$, so $(a,b)=(1,3)$ is a solution. And that's all. The equation $3(a+1)(b+1)=3^a\cdot2^b$ has exactly three solutions: $(a,b)=(2,0)$, $(2,1)$, and $(1,3)$. Remark: This analysis feels a little clunky, but I don't see any simple way to streamline it. Maybe someone else can. (Update: i707107's $\tau$-based answer streamlines things considerably. I wish I'd thought of it!)
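A brute-force search over a modest range (my own choice of bounds; by the growth argument above nothing outside it can work) turns up exactly these solutions:

```python
solutions = [(a, b) for a in range(0, 40) for b in range(0, 60)
             if 3 * (a + 1) * (b + 1) == 3 ** a * 2 ** b]
print(solutions)   # [(1, 3), (2, 0), (2, 1)]
```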
Bounded Self-adjoint Operator on Hilbert Space
Unless I am missing something, it seems to me that we can get this inequality without the 4 by writing $$1 = \|x\|^2 = \langle A^{1/2} x, A^{-1/2} x \rangle$$ and using Cauchy-Schwarz.
double area integrals over coherence functions on circles
In answer to my own question, the equality can be shown as follows. First, we realize that $\int_0^{2\pi}\int_0^b\int_0^b r_1r_2\frac{J_1\left (\alpha\sqrt{r_1^2+r_2^2-2r_1r_2\cos(\theta)}\right )}{\alpha\sqrt{r_1^2+r_2^2-2r_1r_2\cos(\theta)}} dr_1dr_2d\theta\\=\frac{1}{2\pi}\int_0^{2\pi}\int_0^{2\pi}\int_0^b\int_0^b r_1r_2\frac{J_1\left (\alpha\sqrt{r_1^2+r_2^2-2r_1r_2\cos(\theta_1-\theta_2)}\right )}{\alpha\sqrt{r_1^2+r_2^2-2r_1r_2\cos(\theta_1-\theta_2)}} dr_1dr_2d\theta_1d\theta_2$ From here we notice that $\sqrt{r_1^2+r_2^2-2r_1r_2\cos(\theta_1-\theta_2)}$ is the distance between two particular points in the circle of radius $b$. Let that distance be denoted as $L$. Then we have $\frac{1}{2\pi}\iint_{A_1}\iint_{A_2}\frac{J_1(\alpha L)}{\alpha L} dA_1 dA_2$. We can write this instead as $\frac{(\pi b^2)^2}{2\pi}\int_0^{2b}\frac{J_1(\alpha L)}{\alpha L}p(L) dL$, where $p(L)$ is the probability density of choosing two points of distance $L$ within a circle of radius $b$. This probability density is well known. See for instance equation (5) of Ricardo García-Pelayo 2005 J. Phys. A: Math. Gen. 38 3475 doi:10.1088/0305-4470/38/16/001 and references therein. We have $p(L)=\frac{L}{\pi b^2}\left ( 4\cos^{-1}[L/(2b)]-L\sqrt{4b^2-L^2}/b^2 \right ), \quad 0\leq L \leq 2b.$ The integral can now be easily evaluated and gives the required equality. This technique is usable any time the integrand is a function of only one parameter, like $L$ in this case.
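If it helps, the reduction to a single integral against $p(L)$ can be spot-checked numerically; the sketch below (with arbitrary test values $b=1$, $\alpha=3$) compares a Monte Carlo average of $J_1(\alpha L)/(\alpha L)$ over random point pairs in the disc with the quadrature of the same kernel against $p(L)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

b, alpha = 1.0, 3.0          # arbitrary test values
rng = np.random.default_rng(0)

def p(L):
    # distance density for two independent uniform points in a disc of radius b
    return (L / (np.pi * b**2)) * (4 * np.arccos(L / (2 * b))
                                   - L * np.sqrt(4 * b**2 - L**2) / b**2)

def kernel(L):
    x = alpha * L
    return 0.5 if x == 0 else j1(x) / x   # J1(x)/x -> 1/2 as x -> 0

rhs, _ = quad(lambda L: kernel(L) * p(L), 0, 2 * b)

def disc_points(n):
    r = b * np.sqrt(rng.random(n))
    t = 2 * np.pi * rng.random(n)
    return r * np.cos(t), r * np.sin(t)

n = 200_000
x1, y1 = disc_points(n)
x2, y2 = disc_points(n)
L = np.hypot(x1 - x2, y1 - y2)
lhs = np.mean(j1(alpha * L) / (alpha * L))

print(lhs, rhs)   # should agree to roughly Monte Carlo accuracy
```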
Conditional Independence and probability
If I understand your question correctly, I think that you should also consider the chance that a certain coin was chosen if you got 10 heads in a row. P(0.9 coin | first 10 heads) = P(0.9 coin and first 10 heads)/P(first 10 heads) = $\frac{0.5*0.9^{10}}{0.5*0.9^{10} + 0.5*0.1^{10}} = 0.9999999997132$ (according to Wolfram Alpha). The chance of 11th coin heads is $0.9999999997132*0.9 + (1 - 0.9999999997132)*0.1$ which is super close to 0.9. Hope this helps!
There are n objects and n boxes, how many ways can we place the objects so exactly one box remains empty
If both objects and boxes are distinguishable: There are $n$ ways to select a box that'll be empty. Since the rest of the boxes will have at least 1 object each, there will be one box out of the $n-1$ that will have two objects in it. There are $n-1$ ways to choose this box. Further, we have ${n\choose2}$ ways to put two objects in the selected box and $(n-2)!$ ways to arrange the rest of the objects in the remaining $n-2$ boxes such that each of those boxes gets exactly $1$ object. So the number of ways will be $n(n-1){n\choose2}(n-2)!=n!{n\choose2}$ A) If both objects and boxes are indistinguishable, there will be only $1$ way of placing the objects. B) If the objects are indistinguishable, there will be only $1$ way of placing the objects after choosing the boxes. So there are $n(n-1)$ ways. Another method when both are distinguishable: We first select $2$ objects that will be together in the box containing $2$ objects. There are $n\choose 2$ ways to do this. We now have $1$ "double object", $1$ "empty object" and $n-2$ normal objects. Since these are obviously distinguishable, we need to place $n$ objects in $n$ boxes. So there are $n!$ ways to do this. Therefore our answer will be $n! {n\choose 2}$. This can be done similarly for (B).
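For the fully distinguishable case, a direct enumeration for small $n$ agrees with $n!\binom{n}{2}$; here is a short check (the ranges are just small test sizes):

```python
from itertools import product
from math import comb, factorial

def brute(n):
    # place n distinct objects into n distinct boxes; count the placements
    # that leave exactly one box empty
    count = 0
    for placement in product(range(n), repeat=n):   # placement[k] = box of object k
        if sum(1 for box in range(n) if box not in placement) == 1:
            count += 1
    return count

for n in range(2, 7):
    assert brute(n) == factorial(n) * comb(n, 2)
print("n! * C(n,2) matches the brute-force count for n = 2..6")
```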
Using Lagrange Multipliers with Constraints of a Line and a Parabola
You need to look for points within the region and on the boundary of the region which will be candidates for the absolute maximum and minimum of $f$. In the interior of the region, you should look for critical points by finding what points $P$ give $\nabla f(P) = 0$. However, you also have to check the values of $f$ on the boundary. In this case the boundary above will be the line $y=4$, while the boundary below will be $y=x^2$. You need to check both boundaries! One method to check for the possible candidates for maxima and minima which are on the boundary is to use the method of Lagrange multipliers. For example, if you want to check for extreme points of $f$ on $y=x^2$, you can set $g(x,y) = y-x^2$ and apply the Lagrange multiplier method giving you the system of equations $$ y=x^2 \\ \nabla f = \lambda \nabla g $$ Make sure to pay attention to the fact that the boundary $y=x^2$ only considers points with $-2 \leq x \leq 2$, since otherwise you've gone past your other boundary $y=4$. On the other hand, to look for extreme points (max/min points) of $f$ on $y=4$, you can plug $4$ in for $y$ into the equation for $f$ and decide which values of $x$ between $-2 \leq x \leq 2$ make $f$ the largest and smallest. Once you have made a list of all possible candidates for max and min (the critical points on the interior of the region, and those found on the boundary), you then decide what the absolute max and min are for $f$ by simply checking the value of $f$ evaluated at each of those points.
RSA modulus and order of multiplicative elements
Is $\Bbb Z^*_n$ the multiplicative group of integers coprime to $n$? Yes, if $\alpha \gt p$ (it can't be equal, as then it wouldn't be coprime to $n$) you take $\alpha \pmod p$ and look for its multiplicative order. The multiplicative order is the smallest positive $k$ with $\alpha^k\equiv1 \pmod p$. So if $p=13, \alpha=2$, then $\operatorname {ord}_p(\alpha)=12$, but if $p=13, \alpha=4$, then $\operatorname {ord}_p(\alpha)=6$. For your example, you are correct that $\operatorname {ord}_{15}(4)=2$, but $ \operatorname {ord}_3(4)=1$ and $ \operatorname {ord}_5(4)=2$ with $\operatorname{lcm}$ equal to $2$, so all is well.
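A tiny brute-force helper (the function name is my own) reproduces these orders:

```python
def mult_order(a, m):
    # smallest k >= 1 with a^k = 1 (mod m); assumes gcd(a, m) = 1
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

print(mult_order(2, 13), mult_order(4, 13))                   # 12 6
print(mult_order(4, 15), mult_order(4, 3), mult_order(4, 5))  # 2 1 2
```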
1 to the infinity indeterminate limit
Hint: use $\lim\limits_{x \to 0}\frac{\ln (1+x)}{x}=1$. Then $\lim\limits_{x \to a} \ln f(x) = \lim\limits_{x \to a} \left( \frac{\ln (1+[f(x)-1])}{f(x)-1}\cdot[f(x)-1] \right)=...?$
Diadics and tensors. The motivation for diadics. Nonionic form. Reddy's "Continuum Mechanics."
First let's look at how to think of matrix transformations on ${\mathbf R}^3$. For any 3 x 3 matrix $A$, we can write the function $L({\mathbf x}) = A{\mathbf x}$ as a sum of 3 separate functions built out of the rows of $A$, using dot products. Example. Suppose $$ A = \left( \begin{array}{ccc} 1 & 2 & 3\\ 4 & 7 & 2 \\ 9 & 2 & 5 \end{array} \right) = \left( \begin{array}{c} r_1\\r_2\\r_3 \end{array} \right), $$ where $r_1, r_2$, and $r_3$ are the rows of $A$. Then $$ A{\mathbf x} = (r_1 \cdot {\mathbf x})e_1 + (r_2 \cdot {\mathbf x})e_2 + (r_3 \cdot {\mathbf x})e_3. $$ This expresses the matrix transformation ${\mathbf x} \mapsto A{\mathbf x}$ as a sum of 3 linear transformations ${\mathbf x} \mapsto (r_i \cdot {\mathbf x})e_i$. The matrix way of writing $L({\mathbf x})$ is "nonionic" form (apologies to the chemists, but it doesn't mean "not ionic", but rather "nine-ish"), while the other way, as a sum of three terms with dot products, is the dyadic form. For any two vectors $v$ and $w$ in ${\mathbf R}^3$ we can write down a linear transformation ${\mathbf R}^3 \rightarrow {\mathbf R}^3$ by the rule $L_{v,w}({\mathbf x}) = (v \cdot {\mathbf x})w$. The vectors $v$ and $w$ are fixed, while ${\mathbf x}$ varies. Such linear functions are not the most general linear functions from ${\mathbf R}^3$ to ${\mathbf R}^3$, since the values of $L_{v,w}$ are all scalar multiples of $w$ and thus lie along a line (which is not one of the standard axes if $w$ is not lying along an axis). An example is $L_{e_1,e_3}$: $L_{e_1,e_3}(a_1e_1+a_2e_2+a_3e_3) = a_1e_3$. Do you see the relation of matrix transformations with these dot product linear transformations $L_{v,w}$? We saw above how any matrix transformation can be written as a sum of three $L_{v,w}$'s, where the $w$'s are taken to be the standard basis. But we don't have to use the standard basis for the $w$'s. For example, we can just start with an $L_{v,w}$ where $w$ is not an $e_i$ and then write that linear transformation as a sum of such special functions with $w$ being an $e_i$. For example, if $v = (2,1,0)$ and $w = (1,2,3)$ then for ${\mathbf x} = (a,b,c)$ we have $$ L_{v,w}({\mathbf x}) = (v \cdot {\mathbf x})w = (2a+b)w = \left( \begin{array}{c} 2a+b\\4a+2b\\6a+3b \end{array} \right) = \left( \begin{array}{ccc} 2&1&0\\4&2&0\\6&3&0 \end{array} \right) \left( \begin{array}{c} a\\b\\c \end{array} \right). $$ This last formula expresses $L_{v,w}$ as a matrix transformation, so by the same ideas as in the first example we have $$ L_{v,w}({\mathbf x}) = (r_1 \cdot {\mathbf x})e_1 + (r_2 \cdot {\mathbf x})e_2 + (r_3 \cdot {\mathbf x})e_3 $$ where the $r_i$'s are the rows: $r_1 = (2,1,0)$, $r_2 = (4,2,0)$, and $r_3 = (6,3,0)$. Thus $$ L_{v,w} = L_{r_1,e_1} + L_{r_2,e_2} + L_{r_3,e_3}. $$ Notice in particular that a single $L_{v,w}$ can be a sum of other $L_{v,w}$'s. So far I haven't used any funky words like "dyad". I've shown by examples how any matrix transformation can be written as a sum of $L_{v,w}$'s. Definition: A dyadic is just an $L_{v,w}$. A dyad is any sum of dyadics. In concrete terms, a dyad is just a general linear transformation from ${\mathbf R}^3$ to itself, while a dyadic is a linear transformation whose image is one-dimensional (one of the $L_{v,w}$'s). 
If you know what tensor products of vector spaces are, then a dyadic is the same thing as an elementary tensor in $({\mathbf R}^3)^* \otimes_{\mathbf R} {\mathbf R}^3$, where $({\mathbf R}^3)^*$ is the dual space of ${\mathbf R}^3$ (can be identified with ${\mathbf R}^3$ using the dot product). A dyad is a general tensor in $({\mathbf R}^3)^* \otimes_{\mathbf R} {\mathbf R}^3$. This tensor product can be interpreted as the collection of linear maps ${\mathbf R}^3 \rightarrow {\mathbf R}^3$, which is just the 3 x 3 matrices. A polyad is a member of a tensor product of multiple copies of a vector space and its dual space. A polyadic is an elementary tensor in such a tensor product space. If you want to read a story about this terminology, see the last two paragraphs of http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf.
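The two computations earlier in this answer (the row-by-row decomposition of the $3\times 3$ matrix and the matrix form of $L_{v,w}$) are easy to replay numerically; here is a small numpy sketch with an arbitrary test vector.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 7., 2.],
              [9., 2., 5.]])
e = np.eye(3)
x = np.array([2.0, -1.0, 3.0])          # arbitrary test vector

# nonionic (matrix) form versus dyadic form: A x = sum_i (r_i . x) e_i
dyadic_sum = sum((A[i] @ x) * e[i] for i in range(3))
assert np.allclose(A @ x, dyadic_sum)

# a single dyadic L_{v,w}(x) = (v . x) w, written as a matrix transformation
v = np.array([2.0, 1.0, 0.0])
w = np.array([1.0, 2.0, 3.0])
M = np.outer(w, v)                      # rows (2,1,0), (4,2,0), (6,3,0)
assert np.allclose((v @ x) * w, M @ x)

print("dyadic forms agree with the matrix forms")
```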
Acceleration question
Instructions: Click the start button. Open calculator. Click "view" and select Scientific. Click on the radio button "Radians" Redo your computation. Get $0.1447678757...$
Find the coordinates of the centroid
Assuming I read your statement correctly, here's the region $A$: Its area is $$A = \int\limits_{x=0}^1 f(x)\ dx$$ and center of mass has coordinates: $$\bar{x} = \frac{\int\limits_{x=0}^1 x f(x)\ dx}{A}$$ and $$\bar{y} = \frac{\int\limits_{y=0}^{f(1)} y (1 - f^{-1}(y))\ dy}{A}$$ and $f^{-1}(y) = 1+ \ln (\pi - \sin^{-1} y)$
What distinguished the Möbius strip from the cylinder as fibre bundles?
The cylinder, as a fiber bundle, has a section which is never zero. On the Möbius band, such a section does not exist: if you parametrize the segment $F$, which is the fiber, with $(-1,1)$, then a never-vanishing section that starts positive becomes negative after one turn. In terms of fiber bundles, a section is a continuous map from the base to the total space, that is to say a map $$s:M\to E$$ such that $\pi(s(x))=x$ for every $x\in M$ (so $s(x)$ is in the fiber over $x$). EDIT following a comment: To be precise, one should distinguish between fiber bundles and vector bundles. In the case of fiber bundles, the fiber has no vector-space structure, so the zero of the fiber is not defined, and hence the zero section is not defined. So, to be consistent with the terminology of fiber bundles, one should say that the Möbius strip, as a fiber bundle, does not have two sections that are everywhere different from each other, while the cylinder has plenty of them.
Using bayes's theorem for probability
Hint: Suppose you had $n$ NPN transistors in the left drawer and $p$ PNP transistors in the right drawer. After the shuffling of transistors, there are two possibilities: (1) $n-1$ NPN on left, $1$ NPN and $p$ PNP on the right with probability $\frac2{p+2}$ (2) $n-2$ NPN and $1$ PNP on left, $2$ NPN and $p-1$ PNP on the right with probability $\frac{p}{p+2}$ Assuming that we have chosen the drawers with equal probability, Left drawer and we drew PNP: $\overbrace{\frac2{p+2}0}^{(1)}+\overbrace{\frac{p}{p+2}\frac1{n-1}}^{(2)}=\frac{p}{(p+2)(n-1)}$ Right drawer and we drew PNP: $\overbrace{\frac2{p+2}\frac{p}{p+1}}^{(1)}+\overbrace{\frac{p}{p+2}\frac{p-1}{p+1}}^{(2)}=\frac{p}{p+2}$ We drew PNP: $\frac{p}{(p+2)(n-1)}+\frac{p}{p+2}=\frac{pn}{(p+2)(n-1)}$ However, I don't see how to use the formula you cite. This seems to require $P(X|Y)=\frac{P(X\text{ and }Y)}{P(Y)}$
Irreducible Representation by Restriction
I'm not really sure why you'd insist you don't want to use induced representations here; Frobenius reciprocity means it's the natural approach, and this is an immediate corollary of Frobenius reciprocity... But OK. Suppose there is an irreducible representation $\sigma$ of $H$ which isn't contained in the restriction of an irreducible representation of $G$. Then $\sigma$ doesn't occur in the restriction to $H$ of the regular representation $\Bbb{C}[G]$ of $G$, which contains a copy of $\Bbb{C}[H]$, and hence of $\sigma$, so you're done.
How do I prove that this limit is equal to e without L'Hospital?
Write it as $(1+\frac{1}{n})^n$ and get the binomial expansion $\sum_{k=0}^n\frac{n(n-1)...(n-k+1)}{n^k}\frac{1}{k!}$. As $n\to \infty$, the expansion converges to the infinite series for $e=\sum_{k=0}^\infty\frac{1}{k!}$. Note: this is the way I first learnt it in high school.
Indiscernible sequences in countable complete theory
This is an almost immediate consequence of indiscernibility. The intuitive meaning of $\overline{c}\sim\overline{d}$ is that $\overline{c}$ and $\overline{d}$ "come in the same order" in $I$ relative to the set $J$. In particular, if $\overline{e}$ is a finite tuple from $J$, then $\overline{c}\overline{e}$ and $\overline{d}\overline{e}$ are both finite tuples from $I$ which "come in the same order", so they satisfy the same formulas by indiscernibility. More precisely: Suppose $\varphi(x,\overline{e})\in \text{tp}_\mathcal{M}(t^M(\overline{c})/J)$ and $\overline{c}\sim \overline{d}$. Consider the formula $\psi(\overline{z},\overline{w})$ given by $\varphi(t(\overline{z}),\overline{w})$. We have $\mathcal{M}\models \varphi(t^M(\overline{c}),\overline{e})$, so $\mathcal{M}\models \psi(\overline{c},\overline{e})$. And by indiscernibility, since $\overline{c}\sim\overline{d}$, we also have $\mathcal{M}\models \psi(\overline{d},\overline{e})$. So $\mathcal{M}\models \varphi(t(\overline{d}),\overline{e})$, and $\varphi(x,\overline{e})\in \text{tp}_\mathcal{M}(t^M(\overline{d})/J)$. Thus $\text{tp}_\mathcal{M}(t^M(\overline{c})/J) = \text{tp}_\mathcal{M}(t^M(\overline{d})/J)$.
Sobolev spaces over closed domains.
No, this intuition is not correct, because of the issue of domains with cracks. For an example, consider the unit disk $D\subset\mathbb{R}^2$, and the domain $\Omega$ which is obtained from $D$ by removing several straight lines. The domain $\Omega$ is not connected, even though $\bar{\Omega} = D$. This means that there can be piecewise constant functions in $W^{m,p}(\Omega)$ even though this wouldn't be allowed for functions in $W^{m,p}(D)$. The issue is that the space of test functions on $\Omega$ and $D$ are very different, so they give rise to different notions of distributional derivative and thus different Sobolev spaces. In particular, there are no test functions (smooth functions with compact support) on $\Omega$ which are nonzero on the "cracks" of the domain, so it cannot detect the discontinuities that may form for functions in $W^{m,p}(\Omega)$.
Find $2^{3^{100}}$ (mod 5) and its last digit
Yes, you solve modulo $10$ for the last digit. Note that the number is even and not divisible by $5$, so its last digit is taken from $2, 4, 6, 8$ (it cannot be $0$). Your original solution is a bit off: $3^4\equiv 1\mod 5$ is true, but it is the exponent of $2$ that must be reduced, and every $4$ in the exponent gives $1$ (since $2^4\equiv1\mod 5$), so we want $3^{100}$ modulo $4$; here $3^2\equiv 1\mod 4$, so $3^{100}\equiv 1\mod 4$. Then $$2^{3^{100}}=2^{4k+1}\equiv (16)^k\cdot 2\mod 5$$ which gives the same result, but by sounder methods. Since it is $2$ mod $5$ as you have said, the only choices are $2$ and $7$. $7$ is odd, so it must be $2$.
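Python's exact integers make this easy to confirm (the exponent $3^{100}$ is only about $5\times10^{47}$, and `pow` does fast modular exponentiation):

```python
e = 3 ** 100
print(e % 4)          # 1, so 3^100 = 4k + 1
print(pow(2, e, 5))   # 2, the residue mod 5
print(pow(2, e, 10))  # 2, the last digit
```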
Let $f(x)=\frac{1}{x^3+3x^2+3x+5}$, then what is $f^{(99)}(-1)$?
Note that in a neighbourhood of $x=-1$, $$f(x)=\frac{1}{x^3+3x^2+3x+5}=\frac{1/4}{1+\frac{(x+1)^3}{4}}=\sum_{n=0}^{\infty}\frac{f^{(n)}(-1)}{n!}(x+1)^n.$$ Now recall that for $|t|<1$, $$\frac{1}{1+t}=\sum_{n=0}^{\infty}(-1)^n t^n.$$ Can you take it from here?
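If you want to see the pattern confirmed at a low order before tackling $n=99$, a quick symbolic check (here of the 6th derivative) agrees with the coefficient read off from the geometric series:

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (x**3 + 3*x**2 + 3*x + 5)

# coefficient of (x+1)^6 in the series is (1/4)*(1/4)^2 = 1/64,
# so the 6th derivative at x = -1 should be 6!/64 = 45/4
print(sp.diff(f, x, 6).subs(x, -1))   # 45/4
```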
Can a principal ideal contain a non-principal ideal?
The ideal $R = (1)$ is always a principal ideal of the ring $R$, but $R$ is not necessarily a PID.
Defining a probability distribution
You should check that $v$ satisfies the definition of a probability measure/distribution. That is, you should check that 1) $v$ is a mapping from $\mathcal{F}$ to $[0,1]$. 2) $v(\emptyset)=0$ 3) For any sequence of disjoint sets $(A_n)_{n\in\mathbb{N}} \subseteq \mathcal{F}$ the following should hold: $$ v\left(\bigcup_{n\in\mathbb{N}} A_n\right)=\sum_{n\in\mathbb{N}} v(A_n) $$ 4) $v(\Omega)=1$ Let me know if any of these causes problems.
Approximation Property for Infimum
Yes, you've correctly written the approximation property for infimum and your proof is also correct. Note that we need $S$ to be bounded below to have an infimum so you may want to include this condition of $S$ to the property. Another way of writing the approximation property (using notation $\varepsilon$) is that: If $b$ is the infimum of nonempty set $S$ then for any $\varepsilon>0$, there exists $a \in S$ such that $b \le a<b+\varepsilon$.
What are the algebraic procedures from $0.127mm \times 92^{36-n\over 39}$ to $e^{2.1104-0.11594n}mm$?
The number is $(0.127\cdot 92^{\frac{36}{39}})\, 92^{\frac{-n}{39}}$. If $e^a = 0.127\cdot 92^{\frac{36}{39}}$, then $a = \ln(0.127\cdot 92^{\frac{36}{39}}) \approx 2.11039$, and if $e^{-bn} = 92^{\frac{-n}{39}}$, then $-bn = - \frac{n}{39} \ln 92$, or $b = \frac{1}{39} \ln 92 \approx 0.11594$. Hence $(0.127\cdot 92^{\frac{36}{39}})\, 92^{\frac{-n}{39}} \approx e^{2.11039-0.11594n}$.
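The two constants are quick to reproduce numerically:

```python
import math

a = math.log(0.127 * 92 ** (36 / 39))   # constant term in the exponent
b = math.log(92) / 39                   # coefficient of n
print(a, b)                             # approximately 2.11039 and 0.11594
```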
First Success distribution PMF sum problem
This is a finite geometric series, and finite geometric series can be easily summed: $$a+ar+\dots+ar^n=a\frac{1-r^{n+1}}{1-r}$$ Thus $$1+(1-p)+\dots+(1-p)^{m-2}=\frac{1-(1-p)^{m-1}}{1-(1-p)}$$ and the result follows.
A geometry problem - proving that points are concyclic
Does this work? Let $\Gamma_{P,C(XYZ)}$ denote the power of $P$ wrt the circle through points $X,Y,Z$. $AM \perp FE$ and $IM \perp FE$, thus $I,M,F$ are collinear. Thus $APDI$ is a cyclic quadrilateral $\Leftrightarrow |AM|\cdot |AI| = |AP| \cdot |AD|$. Now $|AP| \cdot |AD| = \Gamma_{A,C(IME)}=|AE|^2$ since $A$ is tangent to $C(IME)$ in $E$ ($IE$ is a diameter since $\angle IME=90^{\circ}$, and $IE \perp AE$). Similarly $|AM|\cdot |AI|=\Gamma_{A,C(IMF)}=|AF|^2$. Clearly $|AF|=|AE|$ so we're done.
prove the limit in a formal way
You want to show that for any $\epsilon>0$, there is some $N$ such that $$n\geq N\implies \left|\frac{n+1}{2^{n\cdot n!}}\right|<\epsilon.$$ In order to do so, we can use the fact that the real numbers are Archimendian, meaning that for any $\epsilon>0$ we have some natural number $m$ such that $m>1/\epsilon$, which implies $\epsilon>1/m$. Thus we need only show that for any natural number $m$, there is some $N$ such that $$n\geq N\implies \left|\frac{n+1}{2^{n\cdot n!}}\right|<\frac1m$$ which is equivalent to showing that for any natural number $m$, there is some $N$ such that $$n\geq N\implies n+1<\frac{2^{n\cdot n!}}m.$$ What if we try $N=m+2$? Well, we can use the fact that $n!> n$ and $2^n> n+1$ when $n>2$ to get that $$n\geq m+2\implies \frac{2^{n\cdot n!}}m\geq 2^n>n+1$$ thus we are done.
S-indexed Family of A's vs. a Family of A's indexed by S
Yes, the two wordings mean the same. In both case $K$ has the shape $(f_1,f_2,f_3)$, where each of the $f_i$s is an operation symbol. Formally, this just means that $K$ is some map* with domain $S$ such that each value of the map is an operation symbol. Writing them as, say, $f_2$ instead of $K(2)$ is just a notational choice that may make formulas that involve the symbols easier to read. In many cases where one says something like "an $S$-indexed family of operation symbols", it is implied that different elements of $S$ correspond to different operation symbols (in other words, that the map is injective), but this is not strictly required by the wording "indexed family" and in practice it is up to the reader to figure out whether such an assumption makes better sense in context than allowing some of the indexed symbols to be identical. In particular, the assumption that the elements of the family are different for different indices will often be in force when it's a family of symbols in logic or formal language theory. *: In some contexts the wording "$S$-indexed family" may be used where $S$ is a proper class rather than a set. Then the family itself must be a proper class, which one may or may not consider to qualify as a "map".
Prove that RX is an ideal of R
Since $R$ is commutative the elements of $RX$ have the form $r_1x_1 + r_2x_2 + \cdots + r_kx_k$. The difference of any two such elements is again such a combination, so $RX$ is a subgroup of the additive group of $R$. Furthermore, if $r \in R$ then the fact that $RX$ is a left ideal is immediate, while the right ideal property follows from the commutativity of $R$.
What range of values can $\int \int |f|dm$ can possibly have?
If $\iint_{\mathbb R^2} |f| \, dm$ was a finite real number, then by Fubini's theorem $$ \int_{\mathbb R} \left( \int_{\mathbb R} f(x,y) \, dx \right) \, dy = \int_{\mathbb R} \left( \int_{\mathbb R} f(x,y) \, dy \right) \, dx = \int_{\mathbb R^2} f \, dm. $$ We have that $\int_{\mathbb R} \left( \int_{\mathbb R} f(x,y) \, dx \right) \, dy \ne \int_{\mathbb R} \left( \int_{\mathbb R} f(x,y) \, dy \right) \, dx$, so $\iint_{\mathbb R^2} |f| \, dm$ must be $+\infty$. (This is just the contrapositive of Fubini's theorem.)
Any elementary proof of the monotonicity of $a_{n} =(1+\frac{1}{n})^{n+\frac{1}{2}}$?
$$ \left(n+\tfrac{1}{2}\right)\log\left(1+\tfrac{1}{n}\right)=\int_{0}^{1}\frac{n+\tfrac{1}{2}}{x+n}\,dx= \int_{-1/2}^{1/2}\frac{1}{1+\frac{2x}{2n+1}}\,dx = \int_{0}^{1/2}\frac{2}{1-\left(\frac{2x}{2n+1}\right)^2}\,dx $$ produces $$ \left(n+\tfrac{1}{2}\right)\log\left(1+\tfrac{1}{n}\right)= \int_{0}^{1}\frac{dx}{1-\left(\frac{x}{2n+1}\right)^2} $$ hence it is clear that the RHS is decreasing, since for any $x\in(0,1)$ and any $N>n$ we have $$ \frac{1}{1-\left(\frac{x}{2N+1}\right)^2} < \frac{1}{1-\left(\frac{x}{2n+1}\right)^2}.$$
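As a quick numerical illustration of the monotonicity:

```python
a = [(1 + 1/n) ** (n + 0.5) for n in range(1, 11)]
print(a)                                        # decreases toward e = 2.718...
print(all(x > y for x, y in zip(a, a[1:])))     # True
```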
Let W be the set of matrices in M22(R) with a trace of 0. Show W is a subspace of M22(R).
You only have to prove it satisfies 2 axioms, not all of them (besides noting that $W$ is nonempty, which is clear since the zero matrix has trace $0$). The first is that the sum of 2 matrices in $W$ is also in $W$. Take 2 matrices with trace $0$, so diagonal $x, -x$ for the first, and diagonal $y, -y$ for the second. The sum has diagonal $x+y, -x-y$, which adds to $0$, so it has trace $0$. You can't prove a matrix in $W$ has trace $0$; you must assume it. The second axiom to check is that a scalar multiple of a matrix in $W$ also lies in $W$.