$f$ is a non-constant polynomial and $A$ is a set of measure zero. Is it true that $m(f^{-1}A)=0$, where $m$ stands for the Lebesgue measure?
Following @AlexBecker's idea: suppose $f$ is a polynomial of degree $n$. Its derivative has at most $n-1$ real roots, so the line splits into at most $n$ intervals on each of which $f$ is strictly monotone. Partition $E$ accordingly, $E=E_1 \cup E_2 \cup \dots \cup E_n$, so that $f$ restricted to each $E_k$ is strictly monotone. Therefore we have $$f^{-1}A=(f|_{E_1})^{-1}A \ \bigsqcup \ (f|_{E_2})^{-1}A \ \bigsqcup \ \dots \ \bigsqcup \ (f|_{E_n})^{-1}A.$$ If I show that for each $1 \leq k \leq n$ the set $C_k=(f|_{E_{k}})^{-1}A$ has zero measure, we are done. Suppose that is not the case, i.e. $mC_k=r_k>0$. After removing from $E_k$ small neighborhoods of its endpoints, where $f'$ may vanish (this removes a set of arbitrarily small measure, so we may still assume $mC_k>0$), we can find $m_k>0$ such that $|f'(x)| \geq m_k$ for all $x \in E_k$. Therefore we have that $$m_k \leq \frac{m \left[f|_{E_{k}}C_k \right]}{m C_k} \Longrightarrow 0 < r_km_k \leq m \left[f|_{E_{k}} C_k \right] \leq mA \quad (\text{by monotonicity}),$$ a contradiction. Therefore $$mf^{-1}A=m(f|_{E_1})^{-1}A \ + \ m(f|_{E_2})^{-1}A \ + \ \dots + \ m(f|_{E_{n}})^{-1}A \ = 0 + 0 + \dots + 0 = 0.$$
Prove that when $n$ is square free, then $a^2b = a^2c \text{ mod }n$ implies that $ab = ac \text{ mod } n$
You are given that $p_1 \cdots p_n | a^2(b-c)$, so for each $i$ you have that $p_i|a^2(b-c)$. Since $p_i$ is prime, this means that either $p_i|a^2$ or $p_i | (b-c)$. In the first case you have $$p_i|a^2 \implies p_i| a \implies p_i | a(b-c).$$ In the second case you have $$p_i | (b-c) \implies p_i | a(b-c).$$ Either way, you conclude that $p_i | a(b-c)$. Now you can use the fact that if $p$ and $q$ are distinct primes, then $p|k \wedge q|k \implies pq|k$. Since $p_i | a(b-c)$ for each $i$ you have $$ \left( \prod_{i=1}^n p_i \right) \bigg| \,a(b-c).$$
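As a sanity check of the statement (not part of the proof), one can verify the cancellation exhaustively for small squarefree moduli, and see that it genuinely fails for a non-squarefree modulus such as $n=4$:

```python
def is_squarefree(n):
    """n is squarefree iff no prime square divides it."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def cancellation_holds(n, limit=12):
    """Check: a^2*b = a^2*c (mod n) implies a*b = a*c (mod n),
    for all a, b, c in range(limit)."""
    for a in range(limit):
        for b in range(limit):
            for c in range(limit):
                if (a * a * b - a * a * c) % n == 0:
                    if (a * b - a * c) % n != 0:
                        return False
    return True

results = [cancellation_holds(n) for n in range(2, 40) if is_squarefree(n)]
print(all(results))                 # expect True
print(cancellation_holds(4))        # expect False (4 is not squarefree)
```

For $n=4$ the witness is $a=2$, $b=0$, $c=1$: $a^2b\equiv a^2c\equiv 0 \pmod 4$, but $ab=0\not\equiv 2=ac \pmod 4$.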
Functions and their Distinct Variants
If we redefine $k$ as the number of equivalent functions with total degree $n$ there is always a function with $n=k$. We can just use $x_1^{n-1}x_2$. The $x_2$ can go in any position in the string, so there are $n$ different functions. If you insist on $n$ different variables, there are $n!$ permutations of them. Your example with two variables and two orders works because $2=2!$. The only other case of that is $1!=1$, so for $n=k=1$ there is a solution $x_1$, which has only one function.
Does $A_n\leq G\leq S_n$ imply either $G=S_n$ or $G=A_n$?
Promoting @ThomasAndrews' comment into an answer . . . Yes, you're correct. In general, if $G_1\le G_2$ are finite groups with $\lvert G_2\rvert=p\lvert G_1\rvert$ for some prime $p$, then for any $G$ such that $G_1\le G\le G_2$, either $G=G_1$ or $G=G_2$: indeed $[G_2:G]\,[G:G_1]=[G_2:G_1]=p$, so one of the two indices must equal $1$.
Proving elements of a polynomial ring are integral over another.
We have $x^2=y^3+1$ and $zx=1$. We might as well write $1/x$ for $z$. Then $t=x+ay+b/x$, so $$ay=t-x-\frac bx.$$ Cubing and using $a^3y^3=a^3x^2-a^3$ gives $$a^3x^2-a^3=\left(t-x-\frac bx\right)^3.$$ Multiplying by $x^3$ gives $$a^3x^5-a^3x^3=(tx-x^2-b)^3.$$ Expanding the right-hand side, its leading term in $x$ is $-x^6$, so the equation can be rewritten as $$x^6+\textrm{ lower terms in }x\textrm{ with coefficients in }k[t]=0,$$ a monic relation exhibiting $x$ as integral over $k[t]$. (This works whether or not $a=0$; for $a=0$ the relation above reduces to $t=x+b/x$, i.e. $x^2-tx+b=0$, so still $x$ is integral over $k[t]$.) As $y$ is integral over $k[x]$ (from $y^3=x^2-1$), $y$ is integral over $k[t]$.
Question Regarding Transition Diagram for a Markov Chain
Based on the diagram, yes, $q$ represents $1-p$; this is pretty standard notation. The example is keeping track of the number of heads, so the state space is $\{0,1,2,...\}$. You should interpret the diagram as follows: suppose we're at state $n$. One of two things happens. Either we flip heads, with probability $p$, and advance to state $n+1$; or we obtain tails, with probability $q$, and stay at state $n$.
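Assuming the chain is exactly as described (advance with probability $p$ on heads, stay with probability $q=1-p$ on tails), one can iterate the transition rule and check that after $k$ steps the state is distributed as $\mathrm{Binomial}(k,p)$, as expected for a count of heads:

```python
from math import comb

def step(dist, p):
    """One step of the chain: from state n, go to n+1 w.p. p, stay w.p. 1-p."""
    q = 1 - p
    new = {}
    for n, mass in dist.items():
        new[n] = new.get(n, 0.0) + mass * q          # tails: stay at n
        new[n + 1] = new.get(n + 1, 0.0) + mass * p  # heads: advance to n+1
    return new

p = 0.3
dist = {0: 1.0}       # start with zero heads
for _ in range(10):   # ten flips
    dist = step(dist, p)

# After k steps the number of heads should be Binomial(k, p).
binom = {n: comb(10, n) * p**n * (1 - p)**(10 - n) for n in range(11)}
max_diff = max(abs(dist[n] - binom[n]) for n in range(11))
print(max_diff)  # tiny (floating-point noise only)
```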
Expand $X(t) = e^{W(t)}$
$$\int_0^t e^{W(s)}dW(s)=\int_0^t e^{W(s)}\frac{dW(s)}{ds}ds$$ making the substitution $u=W(s)$ gives: $$\int_{W(0)}^{W(t)} e^udu=e^{W(t)}-e^{W(0)}$$ I am not sure if this is what you were looking for, so let me know. (Caveat: this computation treats $W$ as if it were differentiable, which it is not; for the Itô integral the ordinary chain rule fails, and that is exactly where the $\frac12\,dt$ correction below comes from.) If we work backwards from the result you are trying to prove: $$dX(t)=\frac12X(t)dt+X(t)dW(t)$$ we are able to rewrite this as: $$\frac{dX(t)}{X(t)}=\frac12dt+dW(t)$$ now integrate both sides and we get: $$\ln X(t)=\frac t2+W(t)+C\tag{1}$$ which seems like a contradiction, as by your definition we have: $$X(t)=e^{W(t)}\Rightarrow \ln X(t)=W(t)$$
Do we need Gröbner bases to study factor rings of polynomials?
Gröbner bases are mainly a computational tool rather than a theoretical one. For example, they are important in solving zero-dimensional systems of polynomial equations or for eliminating redundant variables in underdetermined polynomial systems of equations. Consequently, you can very well study quotients of polynomial rings without any knowledge of Gröbner bases. Conversely, many techniques of Gröbner bases apply to quotients of polynomial rings, i.e. to finitely generated algebras over a field or a ring.
Absolute values and inequalities
Fix $w$ and consider the function $f(z)=\frac{w-z}{1-\bar{w}z}$ (this is the self-inverse variant; note $\left|\frac{w-z}{1-\bar{w}z}\right|=\left|\frac{z-w}{1-\bar{z}w}\right|$, so nothing is lost for the inequality). Now show that $f(f(z))=z$. What kind of functions satisfy the functional equation $f(f(x))=x$?
Given a linear Hilbert-Schmidt embedding $ι$ between Hilbert spaces, prove that $ιι^*$ is a bounded, linear operator with finite trace
You have probably figured it out already, but in case someone else is faced with the same problem, here is an answer to your question: Since the embedding operator $\iota$ is linear and Hilbert-Schmidt (in particular, a bounded linear operator), the same holds for its adjoint $\iota^{*}$. Thus, $Q_{1} = \iota \iota^{*}$ is bounded and linear as the composition of two such operators. That it is also trace-class (an important property for defining cylindrical Wiener processes) follows from the fact that the operator $\iota$ is assumed to be Hilbert-Schmidt. As you seem to be working with the book of Röckner and Prevot (or Röckner/Liu), Proposition B.0.8 in there ensures that $Q_{1}$, as the composition of two Hilbert-Schmidt operators is a nuclear operator, i.e. in $L_{1}(U_{1})$. Since every nuclear operator is trace-class (Remark B.0.4 in the book), you get that $\mathrm{tr}~ Q_{1} < \infty$. If you prefer a concrete calculation over abstract results, you can write \begin{align*} \mathrm{tr}~ Q_{1} &= \sum_{n \in \mathbb{N}} \langle Q_{1} e_{n}, e_{n} \rangle_{1} = \sum_{n \in \mathbb{N}} \langle \iota \iota^{*} e_{n}, e_{n} \rangle_{1} = \sum_{n \in \mathbb{N}} \langle \iota^{*} e_{n}, \iota^{*} e_{n} \rangle_{0} \\ &= \sum_{n \in \mathbb{N}} \| \iota^{*} e_{n} \|_{0}^{2} = \| \iota^{*} \|_{L_{2}(U_{1},U_{0})}^{2} = \| \iota \|_{L_{2}(U_{0},U_{1})}^{2} < \infty \end{align*} where in the penultimate step we used that the adjoint of a Hilbert-Schmidt operator has the same norm as the operator itself (Remark B.0.6 (i)), and that $\iota$ is a Hilbert-Schmidt embedding. Hope that helps, Andre
If $a,b \in \mathbb{C}$ are transcendental over $\mathbb{Q}$ then is $a^b$ necessarily transcendental over $\mathbb{Q}$?
No, for instance $e$ and $\log 2$ are transcendental over $\mathbb Q$, but $$e^{\log 2}=2$$ isn't.
Is it possible to optimize solution of this linear system?
The system can be written in the block form $$\tag{1} Ax:=\begin{bmatrix}L & -e \\ e^T & 0\end{bmatrix}\begin{bmatrix}a\\ p\end{bmatrix}=\begin{bmatrix}f \\ g \end{bmatrix}=:b, $$ where $a:=[a_1,\ldots,a_n]^T$, $e:=[1,\ldots,1]^T$, etc. Note that $L$ is lower triangular. To solve this is simple (no need to factorize anything or, worse, to actually attempt to use Cramer's rule (please don't mention this rule in the numerical linear algebra section)). The first equation in (1) reads $La-ep=f$; hence we have $a=L^{-1}(f+ep)$. Substituting into the second equation in (1), $e^Ta=g$, gives $e^TL^{-1}(f+ep)=g$ and thus $$\tag{2} (e^TL^{-1}e)\,p=g-e^TL^{-1}f. $$ To solve this efficiently, first solve $$ L^Th=e, $$ which can be done simply by back substitution ($L^T$ is upper triangular). This transforms (2) to $$ (h^Te)p=g-h^Tf, $$ which is a scalar equation, and therefore $$ p=\frac{g-h^Tf}{h^Te}. $$ Finally, $a=L^{-1}(f+ep)$ is recovered by one forward substitution.
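The procedure can be sketched in pure Python; the particular $L$, $f$, $g$ below are made up for illustration. We solve $L^Th=e$ by back substitution, get the scalar $p$, recover $a$ by forward substitution, and check the residuals of the original block system:

```python
def forward_sub(L, b):
    """Solve L y = b for lower-triangular L."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def back_sub(U, b):
    """Solve U y = b for upper-triangular U."""
    n = len(b)
    y = [0.0] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * y[j] for j in range(i + 1, n))
        y[i] = (b[i] - s) / U[i][i]
    return y

# Example data (arbitrary, for illustration only).
L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 1.0, 5.0]]
f = [1.0, 2.0, 3.0]
g = 4.0
n = len(f)

# h solves L^T h = e (back substitution, since L^T is upper triangular).
LT = [[L[j][i] for j in range(n)] for i in range(n)]
h = back_sub(LT, [1.0] * n)

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
p = (g - dot(h, f)) / sum(h)            # p = (g - h^T f) / (h^T e)
a = forward_sub(L, [fi + p for fi in f])  # a = L^{-1}(f + e p)

# Residuals of the block system: L a - e p = f and e^T a = g.
res1 = max(abs(sum(L[i][j] * a[j] for j in range(n)) - p - f[i]) for i in range(n))
res2 = abs(sum(a) - g)
print(res1, res2)  # both tiny
```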
Embedding of elliptic curves into $\mathbb{P}^2$ by arbitrary line bundle of degree $3$
AFAICT the answer depends on $x$ a little. If $x$ is one of the half-periods, then $\wp'$ has a simple zero at $x$, the other simple zeros of $\wp'$ being the other two non-trivial half-periods, call them $x_2$ and $x_3$. Then the function $$ f=\frac{(\wp-\wp(x_2))(\wp-\wp(x_3))}{\wp'} $$ has a simple pole at $x_0$, and a simple pole at $x$. The other zeros of the denominator $\wp'$ are cancelled by the factors in the numerator. The pole at $x_0$ is simple, because the pole order of $\wp$ at $x_0$ is two, and that of $\wp'$ is three. OTOH, if $x$ is not one of the half-periods, then the points $\pm x$ are distinct. It is known that $\wp(z)=\wp(x)$, iff $z$ is congruent to $\pm x$ modulo the lattice of periods. Therefore the function $$ f=\frac{\wp'-\wp'(-x)}{\wp-\wp(x)} $$ has no pole at $-x$, because the numerator also vanishes there and the zero of the denominator at $-x$ is simple. This function thus has a simple pole at both $x_0$ and $x$, and can be used. It may be easier to think about this in terms of the Weierstrass form of the elliptic curve, and the points $P(x)=(\wp(x),\wp'(x))=(u,v)$ that lie on the curve $$v^2=4u^3-g_2u-g_3.\qquad(*)$$ In the case of a half-period, the point $P(x)$ on the curve $(*)$ has a vertical tangent. In the other cases $\wp-\wp(x)$ is a local parameter at $x$, and we simply use the factor $\wp'-\wp'(-x)$ to cancel the pole at the other zero of the denominator. A cleaner way of using this may be out there.
multiplicative reduction of a elliptic curve $E$ splits
Here's some more evidence (and slightly alternate interpretations) for my comment that slopes in $k$ is equivalent to the equation defining the node splitting as a product of linear factors in the completed local ring. I am as yet unable to find the definitive history of the term, but I hope this sheds some light on the subject for the asker and bountier: Vakil's Rising Sea, section 29.3, "Defining types of singularities": Singularities are best defined in terms of completions. As an important first example, we finally define "node". 29.3.1. Definition. Suppose $X$ is a dimension $1$ variety over $\overline{k}$, and $p\in X$ is a closed point. We say that $X$ has a node at $p$ if the completion of $\mathcal{O}_{X,p}$ at $\mathfrak{m}_{X,p}$ is isomorphic (as topological rings) to $\overline{k}[[x,y]]/(xy)$. 29.3.B. Exercise. Suppose $k=\overline{k}$ and $\operatorname{char} k\neq 2$, and we have $f(x,y)\in k[x,y]$. Show that $\operatorname{Spec} k[x,y]/(f(x,y))$ has a node at the origin iff $f$ has no terms of degree $0$ or $1$, and the degree $2$ terms are not a perfect square. The definition of node outside the case of varieties over algebraically closed fields is more problematic, and we give some possible ways forward. For varieties over a non-algebraically closed field $k$, one can always base-change to the closure $\overline{k}$. As an alternative approach, if $p$ is a $k$-valued point of a variety over $k$ (not necessarily algebraically closed), then we could take the same definition as 29.3.1; this might reasonably be called a split node, because the branches (or more precisely, the tangent directions) are distinguished. Those singularities that are not split nodes, but which become nodes after base change to $\overline{k}$ (such as the origin in $\operatorname{Spec} \Bbb R[x,y]/(x^2+y^2)$) might reasonably be called non-split nodes. 
Stacks Project Tag 0C46, Nodal Curves: We have already defined ordinary double points over algebraically closed fields as follows: if $x\in X$ is a closed point of a $1$-dimensional scheme over an algebraically closed field $k$, then $x$ is an ordinary double point if $$ \mathcal{O}_{X,x}^\wedge \cong k[[x,y]]/(xy).$$ Definition 0C47. Let $k$ be a field. Let $X$ be a $1$-dimensional locally algebraic $k$-scheme. We say a closed point $x\in X$ is a node if there exists an ordinary double point $\overline{x}\in X_{\overline{k}}$ mapping to $x$. Stacks goes on to prove that if $x\in X$ is a node, then (under mild niceness hypotheses) the completion of the local ring at $x$ is isomorphic to $k[[x,y]]/(q(x,y))$ where $q$ is a nondegenerate quadratic form. Saying that this node is split is then equivalent to $q(x,y)$ being choosable as $xy$, which is the same as saying it splits into distinct linear factors. There's also another characterization - to each $q$, we can associate a degree-two algebra extension of the residue field at $x$, and saying that the node $x$ is split is equivalent to this algebra extension splitting as a direct product of the residue field with itself (see 0CBT + 0CBU).
decomposition of a square matrix
This is actually true if the matrix $A$ is diagonalizable. Such a decomposition (with unitary $U$) always exists for any square complex matrix, but $\Lambda$ is then in general only triangular (the Schur decomposition); its existence is a consequence of the fundamental theorem of algebra, which guarantees that eigenvalues exist over $\mathbb{C}$. $\Lambda$ is guaranteed to be diagonal if and only if $A$ is normal; hermitian matrices are an important special case. Your professor's claim is correct if $A$ is supposed to be symmetric (spectral theorem): among real matrices, the symmetric ones are exactly those that are orthogonally diagonalizable.
Prove either $G=ST$ or |$G|\geq|S|+|T|$
It is enough to show that when $|S|+|T|>|G|$ then $TS=G$. Define $S^{-1}=\{s^{-1}\mid s\in S\}$ and let $g\in G$. Notice that $|gS^{-1}|=|S^{-1}|=|S|$, so $|gS^{-1}|+|T|>|G|$ and by pigeonhole $gS^{-1}$ and $T$ must intersect. Thus $gs^{-1}=t\implies g=ts \implies G=TS$.
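The pigeonhole argument can be checked concretely in a small group, say $\mathbb{Z}_{12}$ written additively (the particular subsets below are arbitrary; any $S,T$ with $|S|+|T|>12$ must work):

```python
def covers(n, S, T):
    """In Z_n (additive), check whether T + S = Z_n."""
    sums = {(t + s) % n for t in T for s in S}
    return sums == set(range(n))

n = 12
S = {0, 1, 2, 5, 7, 8, 10}   # |S| = 7
T = {0, 3, 4, 6, 9, 11}      # |T| = 6, and 7 + 6 > 12
print(covers(n, S, T))       # expect True

# With |S| + |T| <= n the conclusion can fail:
print(covers(12, {0}, {0}))  # expect False
```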
How to show $f(0)\leq \lim_{t\to a} f(t) +Ca$?
By the mean value theorem, $$f(t)-f(0)=tf'(s)$$ for some $s\in(0,t)\subset(0,a)$. Thus $$ f(0)=f(t)-tf'(s)\le f(t)+Ct\le f(t)+Ca, $$ and the inequality is preserved when we pass to the limit $t\to a$.
Math card probability questions
The term without replacement means that the deck is different for each draw so the probability of getting a heart on the first draw is $\frac{13}{52}$ as there are 13 hearts in a full deck of 52 cards. Now given you have already drawn a heart on your first draw the probability of getting a heart on your second draw is $\frac{12}{51}$ as there are now only 12 hearts left in the pack and the pack has 51 cards remaining. So the probability of hearts on both your first and second draw is $\frac{13}{52} \cdot \frac{12}{51} = \frac{156}{2652} = \frac{1}{17}$ I'm sure you can see from here how you would calculate three hearts. The term with replacement means that the card is put back and mixed up again after each draw so the probability of drawing a heart on the second draw is $\frac{13}{52}$ because you are still drawing from a full pack of cards. For two hearts the probability is $\frac{13}{52} \cdot \frac{13}{52} = \frac{169}{2704} = \frac{1}{16}$ and for three hearts ...
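The computations above (and the three-heart cases left to the reader) can be carried out exactly with rational arithmetic:

```python
from fractions import Fraction

# Without replacement: multiply per-draw probabilities as the deck shrinks.
p2_without = Fraction(13, 52) * Fraction(12, 51)
p3_without = Fraction(13, 52) * Fraction(12, 51) * Fraction(11, 50)

# With replacement: every draw sees a full 52-card deck.
p2_with = Fraction(13, 52) ** 2
p3_with = Fraction(13, 52) ** 3

print(p2_without, p3_without, p2_with, p3_with)
# 1/17  11/850  1/16  1/64
```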
The volumes of two similar cylinders.
Yes, that is correct. When stating an example like this I avoid factors of two, to avoid the confusion caused by the fact that $2+2=2 \cdot 2=2^2$. If the radius triples, the volume goes up nine times.
Are there cases in mathematics in which it is important to distinguish material implication ( '$\to$') and logical implication ('$\Rightarrow$')?
The symbols you want are $\to$ (\to) for material implication and $\implies$ (\implies) for logical implication. Insofar as mainstream mathematics distinguishes them, $p\implies q$ means that $p\to q$ is (a) true in all models of a theory of interest (however, in that context we'd usually write $\models$ (\models) instead of $\implies$ to make it clear) or (b) a tautology. And in modal logic, we can rewrite $p\implies q$ as $\Box(p\to q)$ (note the use of \Box). But in practice, $\implies$ is often used in proofs to indicate an inference from what was already known.
Show that the solution of the differential system are periodic.
This is an autonomous differential system and the solutions stay on a bounded closed curve hence either they converge to a point of the curve when $t\to\infty$ without ever reaching it, or they cycle in the sense that there exists some finite $T$ such that $(y(t+T),z(t+T))=(y(t),z(t))$ for every $t$. In the present case, the square of the velocity $(y')^2+(z')^2=y^6+z^6$ is uniformly bounded below by a positive constant on each curve $y^4+z^4=c$ with $c\gt0$, hence the solutions cycle.
Finite order divides maximal finite order?
Let $x_1=x^{p^r}$, $y_1=y^b$. Then $o(x_1) = a$, $o(y_1)=p^s$, which are coprime, so $o(x_1y_1)=ap^s > ap^r$, contradiction.
if $f: (0,\infty) \to (0,\infty)$ is a strictly decreasing then $f \circ f$ is decreasing?
Since $f$ is strictly decreasing on $(0,\infty)$, we have that for any $x_1,x_2 \in (0,\infty)$ such that $x_1 < x_2$, $f(x_1)>f(x_2)$, which in turn means that $f(f(x_1)) < f(f(x_2))$. Hence, in fact $f \circ f$ is strictly increasing.
Solving $\lim_{x\to\infty} \frac{x^{2x}}{x^{5x}}$
$$\frac{x^{2x}}{x^{5x}}=x^{-3x}=\frac{1}{(x^x)^3}$$ No need for L'Hopital's rule, as the denominator goes to $\infty $ as $x\to\infty$. The question in the title is different, but solved the same way.
Show the memoryless property is equivalent to other expressions.
The first is obvious: subtract 1 from both sides of your original equation. For the second, multiply both sides of the original equation by $P(X>s)$ and note that $\{X>s+t\}\subseteq\{X>s\}$: $$\begin{align}P(X>s+t\mid X>s)P(X>s)&=P(X>t)P(X>s)\\ P(X>s+t, X>s)&=P(X>t)P(X>s)\\ P(X>s+t)&=P(X>t)P(X>s)\end{align}$$
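The factored form $P(X>s+t)=P(X>s)P(X>t)$ is easy to check numerically for the exponential distribution, the canonical memoryless law (the rate and the values of $s,t$ below are arbitrary):

```python
from math import exp, isclose

lam = 0.7  # arbitrary rate for X ~ Exponential(lam)

def survival(x):
    """P(X > x) for X ~ Exponential(lam)."""
    return exp(-lam * x)

s, t = 1.3, 2.9

# P(X > s+t) = P(X > s) P(X > t)
lhs = survival(s + t)
rhs = survival(s) * survival(t)
print(isclose(lhs, rhs))  # expect True

# Equivalently, P(X > s+t | X > s) = P(X > t).
cond = survival(s + t) / survival(s)
print(isclose(cond, survival(t)))  # expect True
```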
Show $\sum_{k=1}^n (p_k + \frac{1}{p_k})^2\geq n^3 + 2n + \frac{1}{n}$ ; $p_k\geq 0 \forall k$ and $\sum_kp_k=1$
Let $$np' = \sum p_i$$ Consider $$\sum (p_i - p')^2$$ btw, I presume you meant $(\sum \frac{1}{p_k})^2$ instead of $\sum (\frac{1}{p_k})^2$
Prove monotocity of cubic Bezier's curve under certain restrictions
Your postulated result is not true. Take $x_0 = 1$, $x_1 = 2$, $x_2=0$, $x_3 = 2$. Then the resulting function $t \mapsto x(t)$ is not monotone on $[0,1]$. In fact $x(t) = 1+3t-9t^2+7t^3$. So $x'(0) > 0$, $x'(1) > 0$, and $x'(\tfrac12) < 0$.
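The counterexample can be verified directly from the Bernstein form of the cubic and its derivative:

```python
def bezier_x(t, x0, x1, x2, x3):
    """Cubic Bezier coordinate: (1-t)^3 x0 + 3(1-t)^2 t x1 + 3(1-t) t^2 x2 + t^3 x3."""
    s = 1 - t
    return s**3 * x0 + 3 * s**2 * t * x1 + 3 * s * t**2 * x2 + t**3 * x3

def dx(t, x0, x1, x2, x3):
    """Derivative: 3(1-t)^2 (x1-x0) + 6(1-t) t (x2-x1) + 3 t^2 (x3-x2)."""
    s = 1 - t
    return 3 * s**2 * (x1 - x0) + 6 * s * t * (x2 - x1) + 3 * t**2 * (x3 - x2)

pts = (1, 2, 0, 2)
print(dx(0, *pts), dx(1, *pts), dx(0.5, *pts))  # positive, positive, negative
```

At $t=\tfrac12$ the polynomial form $x(t)=1+3t-9t^2+7t^3$ agrees with the Bernstein form, and the derivative is indeed negative there while positive at both endpoints.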
$\|g\|_{L^{1}(\mathbb R)}=\sup \{ {|\int_{\mathbb R} fg|: f\in C_{c}^{\infty}(\mathbb R), \|f\|_{L^{\infty}(\mathbb R)}=1\}} ?$
Yes, it is true, because $C^\infty_c(\mathbb{R})$ is dense in $L^p(\mathbb{R})$ for any $1\leq p<\infty$ in the strong topology and is dense in $L^\infty(\mathbb{R})$ in the weak-$\star$ topology. Edit: $L^\infty(\mathbb{R})$ is the dual of $L^1(\mathbb{R})$ and $C^\infty_c(\mathbb{R})$ is weak-$\star$ dense in $L^\infty(\mathbb{R})$. For any $f\in L^1(\mathbb{R})$, the pairing $g\mapsto\langle f,g\rangle$ is a linear functional on $L^\infty(\mathbb{R})$ that is continuous also for the weak-$\star$ topology. If $g\in L^\infty$ but not in $C^\infty_c$, then there exists a sequence $(g_n)\subset C^\infty_c(\mathbb{R})$ which converges weak-$\star$ to $g$, and from the continuity argument we get that $\langle f,g_n\rangle\to\langle f,g\rangle$. We conclude that once we have defined the functional on a dense subset we can extend it uniquely to the whole set.
Ring homomorphism: $0 < \mathrm{char}(f(R)) \leq \mathrm{char}(R)$
First of all, you can't conclude that $\mathrm{char}(f(R)) = \mathrm{char}(R) = n$, because you have to show that $n$ is the minimum with that property; so far you have only shown that $n\cdot f(r)=0$ for all $r\in R$. You can reach the conclusion as follows: let $S=\{m\in\mathbb{N} \mid m\cdot f(r)=0 \quad \forall r\in R \}$. As $n\in S$ and $S\subseteq \mathbb{N}$, it follows from the well-ordering principle that $\mathrm{min}(S)=m$ exists, and of course $m\leq n$; by the definition of the characteristic of a ring we have $\mathrm{char}(f(R))=m$ (here $m>0$). Another way is to show that $\mathrm{char}(f(R)) \mid \mathrm{char}(R)$: your proof shows $\mathrm{char}(f(R))\neq 0$; now let $\mathrm{char}(f(R))=m$. By the division algorithm there exist unique $q,r\in \mathbb{Z}$ with $0\leq r<m$ such that $n=mq+r$, but: \begin{align} 0&=nf(x)=(mq+r)f(x)=mqf(x)+rf(x)=qmf(x)+rf(x)=q\cdot 0+rf(x)\\ &=rf(x) \quad \forall x\in R. \end{align} Therefore, by the minimality of $m$, it follows that $r=0$, so $n=mq$, as we wanted to show.
$\mu * \nu$ a finite Borel measure in $\mathbb{R}$?
Let $\left(\Omega_{i},\mathcal{A}_{i}\right)$ be measurable spaces for $i=1,2$ and let $\rho$ be a measure on $\mathcal{A}_{1}$. Every measurable function $f:\Omega_{1}\to\Omega_{2}$ induces a measure on $\mathcal{A}_{2}$ by the prescription $A\mapsto\rho\left(f^{-1}\left(A\right)\right)$. This measure is denoted as $\rho f^{-1}$. Observe that $\rho f^{-1}\left(\Omega_{2}\right)=\rho\left(f^{-1}\left(\Omega_{2}\right)\right)=\rho\left(\Omega_{1}\right)$ showing that every $\rho f^{-1}$ is a finite measure if $\rho$ is a finite measure. Special case: $\Omega_{1}=\mathbb{R}^{2}$, $\Omega_{2}=\mathbb{R}$ and the $\mathcal{A}_{i}$ are the Borel $\sigma$-algebras on these sets. Let $\rho$ be the product measure $\mu\times\nu$ where $\mu,\nu$ are measures on $\left(\Omega_{2}=\mathbb{R},\mathcal{A}_{2}\right)$. Function $f:\mathbb{R}^{2}\to\mathbb{R}$ prescribed by $\left\langle x,y\right\rangle \mapsto x+y$ is measurable so $\rho f^{-1}$ is a well defined measure. For $A\in\mathcal{A}_{2}$ observe that: $$\rho f^{-1}\left(A\right)=\rho\left(f^{-1}\left(A\right)\right)=\mu\times\nu\left(\left\{ \left\langle x,y\right\rangle \mid x+y\in A\right\} \right)=\mu\star\nu\left(A\right)$$ That means exactly that: $$\rho f^{-1}=\mu\star\nu$$ We conclude that $\mu\star\nu$ is a finite measure if $\rho$ is a finite measure, which is evidently the case if $\mu,\nu$ are finite measures.
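For finitely supported (discrete) measures the pushforward under $(x,y)\mapsto x+y$ can be computed directly, and the total-mass identity $\mu\star\nu(\mathbb{R})=\mu(\mathbb{R})\,\nu(\mathbb{R})$ checked; the atoms and weights below are arbitrary:

```python
def convolve(mu, nu):
    """Pushforward of the product measure mu x nu under (x, y) -> x + y."""
    out = {}
    for x, mx in mu.items():
        for y, my in nu.items():
            out[x + y] = out.get(x + y, 0.0) + mx * my
    return out

# Two finite (not necessarily probability) measures on a few atoms.
mu = {0: 0.5, 1: 1.5, 3: 0.25}
nu = {-1: 2.0, 2: 0.75}
conv = convolve(mu, nu)

total = sum(conv.values())
print(total, sum(mu.values()) * sum(nu.values()))  # equal: 6.1875 each
```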
Calculating Fourier expansion without using $\frac{a_o}{2}+\sum_{n=1}^{\infty} (a_n\cos(\frac{2n\pi}{T})t+b_n\sin(\frac{2n\pi}{T})t$
The Fourier expansion of a function is unique, hence all of the methods are equivalent. The first formula is a particular case of the general definition $$f_n=\frac{1}{L}\int_0^L e^{-\frac{2\pi i nx}{L}} f(x)\, dx,$$ $L$ being the period of $f$, and it is only valid for real functions.
Is it true that $b^n-a^n < (b-a)nb^{n-1}$ when $0 < a< b$?
\begin{align} b^n-a^n & = (b-a)(b^{n-1}+ b^{n-2}a + b^{n-3}a^2 + b^{n-4}a^3 + b^{n-5} a^4 +\cdots+a^{n-1}) \\[10pt] & < (b-a)(b^{n-1} + b^{n-2} b + b^{n-3}b^2 + b^{n-4}b^3+ b^{n-5}b^4 + \cdots + b^{n-1}) \\[10pt] & = (b-a)(b^{n-1} + b^{n-1} + b^{n-1} + b^{n-1} + b^{n-1} + \cdots + b^{n-1}) \\[10pt] & = (b-a) n b^{n-1}. \end{align} The only positive integer $n$ for which this does not work is $n=1,$ where the second factor has only one term, which is $1.$ And in that case it works if you say $\text{“}\le\text{''}$ instead of $\text{“}<\text{''}.$ \begin{align} b^2-a^2 & = (b-a)(b+a) < (b-a)(b+b) & & = (b-a)2b. \\[10pt] b^3-a^3 & = (b-a)(b^2 + ba + a^2) < (b-a)(b^2+b^2+b^2) & & = (b-a)3b^2. \\[10pt] b^4 - a^4 & = (b-a)(b^3+b^2a+ba^2+a^3) \\ & < (b-a)(b^3+b^3+b^3+b^3) & & = (b-a)4b^3. \\[10pt] b^5-a^5 & = (b-a)(b^4 + b^3a + b^2 a^2 + ba^3 + a^4) \\ & < (b-a)(b^4+b^4+b^4+b^4+b^4) & & = (b-a)5b^4. \\[10pt] & \qquad\qquad\text{and so on.} \end{align}
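A quick exact-arithmetic check of the inequality (and of the $n=1$ boundary case, where equality holds):

```python
from fractions import Fraction

def strict_holds(a, b, n):
    """b^n - a^n < (b - a) * n * b^(n-1), for 0 < a < b, n >= 2."""
    return b**n - a**n < (b - a) * n * b**(n - 1)

pairs = [(Fraction(1, 3), Fraction(1, 2)),
         (Fraction(2), Fraction(5)),
         (Fraction(7, 4), Fraction(9, 4))]
all_ok = all(strict_holds(a, b, n) for a, b in pairs for n in range(2, 8))
print(all_ok)  # expect True

# For n = 1 the two sides coincide, so only "<=" can hold.
a, b = Fraction(1, 3), Fraction(1, 2)
print(b - a == (b - a) * 1 * b**0)  # expect True
```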
Clarification wrt proof for linear regression cost function being convex
You can gather the definition of positive semi-definiteness from https://en.wikipedia.org/wiki/Definiteness_of_a_matrix We have to prove that $v^T X^T X v \ge 0$. Notice that the above expression can be rewritten as $$v^T X^T X v = (Xv)^T (Xv) = \| X v \|_2^2,$$ the squared Euclidean norm of $Xv$. Since the squared Euclidean norm is a sum of squares, it is nonnegative, and the result follows.
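The identity $v^TX^TXv=\|Xv\|_2^2\ge 0$ is easy to confirm numerically for an arbitrary matrix and a handful of arbitrary vectors:

```python
def quad_form(X, v):
    """Compute v^T X^T X v = ||X v||^2 for X given as a list of rows."""
    Xv = [sum(row[j] * v[j] for j in range(len(v))) for row in X]
    return sum(c * c for c in Xv)  # sum of squares, hence >= 0

X = [[1.0, 2.0],
     [3.0, -1.0],
     [0.5, 4.0]]
vectors = [[1.0, 0.0], [0.0, 1.0], [-2.0, 3.0], [1.5, -0.5]]
print(all(quad_form(X, v) >= 0 for v in vectors))  # expect True
```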
Congruence question, does -1 matter?
Recall that $x\mid y$, the divides relation, says that there is an integer $n$ such that $y=nx$. Then if $x\mid y$ we have $y=nx$ and $-y=(-n)x$, and $-n$ is also an integer, so $x\mid-y$. More generally, if $k$ is an integer, then $x\mid y$ implies $x\mid ky$, and this is the special case when $k=-1$.
Variance of a function of a normal random variable
I think the way you want to go about this is to express the expected value of $f$ as $$E[f(V)] = \frac{\gamma}{\sqrt{2 \pi} \sigma_0} \int_{-\infty}^C dv \: v \exp{\left [ - \frac{(v-v_0)^2}{2 \sigma_0^2} \right ]} + \frac{C}{\sqrt{2 \pi} \sigma_0} \int_{C}^{\infty} dv \: \exp{\left [ - \frac{(v-v_0)^2}{2 \sigma_0^2} \right ]} $$ where $V$ is the normally distributed random variable. Also, $$E[f(V)^2] = \frac{\gamma^2}{\sqrt{2 \pi} \sigma_0} \int_{-\infty}^C dv \: v^2 \exp{\left [ - \frac{(v-v_0)^2}{2 \sigma_0^2} \right ]} + \frac{C^2}{\sqrt{2 \pi} \sigma_0} \int_{C}^{\infty} dv \: \exp{\left [ - \frac{(v-v_0)^2}{2 \sigma_0^2} \right ]} $$ The variance is $$\mathrm{Var}[f(V)] = E[f(V)^2] - E[f(V)]^2$$
How can the derivative of ln(x) be defined for values of x that are undefined for ln(x) itself?
The derivative of $\ln{x}$ is $1/x$ only when $x > 0$. When $x < 0$, $1/x$ is the derivative of $\ln(-x)$. Many calculus books will combine these two cases and say that the derivative of $\ln|x|$ is $1/x$ for $x \neq 0$.
Prove using induction principles
Hint. $$\sum\limits_{k=2^n}^{2^{n+1}-1} \frac{1}{k^a} \leq \sum\limits_{k=2^n}^{2^{n+1}-1}\frac{1}{2^{na}}=\frac{1}{2^{n(a-1)}}$$ and $$ \frac{1 - 2^{n(1-a)}}{1-2^{1-a}} + \frac{1}{2^{n(a-1)}} = \cdots $$ By the way, a circuitous route might be to use the geometric series formula (you certainly don't need to do this, but it's interesting): you know that $$ \sum\limits_{k=0}^{n-1} \frac{1}{2^{k(a-1)}} = \frac{1-2^{n(1-a)}}{1-2^{1-a}} $$ and so $$ \frac{1 - 2^{n(1-a)}}{1-2^{1-a}} + \frac{1}{2^{n(a-1)}} = \sum\limits_{k=0}^{n-1} \frac{1}{2^{k(a-1)}} + \frac{1}{2^{n(a-1)}} = \sum\limits_{k=0}^n \frac{1}{2^{k(a-1)}} = \cdots $$
How to calculate $p$-value in a clinical trial
Without knowing which test they chose, and whether the test was stratified or adjusted by any covariates, it is unlikely that we can replicate the $p$-value. For instance, we could use Fisher's exact test, the chi-squared test (which is the same as the two-sample independent proportion test with a pooled standard error), or a two-sample independent proportion test with an unpooled standard error, or a likelihood ratio test, and these are just different choices of statistic, not considering whether continuity correction is used, or if there was some other adjustment for other prognostic factors that are not stated in the press release. That said, I would not be surprised if there was an error. Small biotech companies don't always perform the analyses correctly. But to really know for sure, one would have to read the study protocol and the statistical analysis plan, neither of which is generally available to the public.
Why is every subobject of a functor a subfunctor?
You're asking if every natural monomorphism is pointwise monic. Since a morphism is a monomorphism exactly when its pullback along itself consists of identities, this is true if $\mathcal D$ has pullbacks, because pullbacks in $\mathcal D^\mathcal C$ are then computed pointwise, and evaluation functors preserve them. In general this is not true, but given the weakness of the sufficient condition, you probably won't find many counterexamples, although some have been constructed specifically for this purpose (see this, for example). However, as a sidenote, you'll find a lot more counterexamples if you're working instead with a subcategory of $\mathcal D^\mathcal C$, especially for the dual claim (an epic natural transformation need not be pointwise epic). This is a famous example.
fitch proof help, don't quite understand the answer
What seems wrong to you? You can get $\textrm {Slithy}(a)\land\textrm{Mimsy}(a)$ by using the rule $\land$ Intro so long as you previously have each of $\textrm {Slithy}(a)$ and $\textrm{Mimsy}(a)$, which you do (8 lines up and 1 line up, respectively). You can pick out $\textrm {Mimsy}(a)$ from $\textrm {Mimsy}(a)\land\textrm{Gyre}(a)$ (which is the previous line) with $\land $ Elim, because that's exactly what $\land$ Elim lets you do. (You could also have picked out $\textrm{Gyre}(a)$). I’m referring to this. Maybe your reference to the rules is not so clear?
Prove that $f(x,y,z)=x^2 y+2xz^2$ is continuous at $(1,1,1)$
Extracting $\delta$, your expression is equal to $$ \delta(|y||x+1| + 2|z||x| + 3) $$ Now, if we say that $\delta$ will at most be chosen to be $1$ (there is nothing special about $1$, any value will do; I just like to use $1$), then we have $$ |y||x+1|\leq 6\\ |z||x| \leq 4 $$ so we get $$ \delta(|y||x+1| + 2|z||x| + 3) \leq \delta(6+8+3) = 17\delta $$ Thus, choosing $\delta = \min\left(\frac{\epsilon}{17}, 1\right)$ will make sure that as long as $$ |x-1|, |y-1|, |z-1| < \delta $$ we have $$ |f(x, y, z) - f(1, 1, 1)| < \epsilon $$ Note, however, that this isn't exactly the definition of continuity. The definition of continuity is that for any $\epsilon>0$ there is a $\delta>0$ such that for any $x, y, z$ with $$ \sqrt{(x-1)^2 + (y-1)^2 + (z-1)^2}<\delta $$ we have $$ |f(x, y, z) - f(1, 1, 1)|<\epsilon $$ You are picking $x, y, z$ from a cube centered around $(1,1,1)$, while the definition wants you to use a ball. This causes no problems here: the ball of radius $\delta$ is contained in the cube, since $\sqrt{(x-1)^2 + (y-1)^2 + (z-1)^2}<\delta$ forces $|x-1|, |y-1|, |z-1|<\delta$, so the estimate above applies verbatim.
Evans-Gariepy proof of $f\in W^{1,\infty}_\text{loc}(U)$ iff $f$ is locally Lipschitz continuous in $U$
The idea is to split the integral and use change of variables in one of them: \begin{align} \int_U f(x)\frac{\phi(x+he_i)-\phi(x)}{h}dx &= \frac{1}{h}\int_U f(x)\phi(x+he_i)dx - \frac{1}{h} \int_U f(x)\phi(x)dx \\ & = \frac{1}{h}\int_{U} f(x)\phi(x+he_i)dx - \frac{1}{h} \int_{U-he_i} f(x+he_i)\phi(x+he_i)dx \\ & = \frac{1}{h}\int_{U} f(x)\phi(x+he_i)dx - \frac{1}{h} \int_{U} f(x+he_i)\phi(x+he_i)dx \\ &= -\int_U g_i^h(x)\phi(x+he_i)dx \end{align} In the second equality we used the change of variables $x \mapsto x+he_i$ and in the third, we used that if $h$ is small enough $\operatorname{supp} \phi \subset U\cap(U-he_i)$. For the last step, it follows as you mentioned.
How to integrate $\int{xe^{x}\sin{x}}~dx$
Hint 1: Use integration by parts with $u = x$ and $dv = e^x \sin(x)\,dx$. Hint 2: The first hint may not be helpful, especially if you haven't already encountered the integral of $e^x \sin(x)$. Finding an antiderivative of that is a nontrivial sub-problem that involves using integration by parts twice and solving an equation for the unknown $\int e^x \sin(x) \, \textrm{d}x$.
Limit of increasing sequence of nonpositive harmonic/subharmonic functions
As Daniel Fischer has pointed out in the comments, you can use the mean (or sub-mean) value properties of harmonic and subharmonic functions on the ball to prove that the limit function would be harmonic in (a) and subharmonic in (b). Thus the statement in (a) is true, and is proved in many texts on potential theory. However, in (b) I believe there is a problem, in that the resulting limit of an increasing sequence of non-positive continuous subharmonic functions need not be continuous. Consider the functions $f_n(x) = \displaystyle{\frac{-1}{n\|x\|}}$. For each $n$, we know that $f_n$ is harmonic on $B(0,1)\setminus \{0\}$, and hence the function $g_n(x) = \max(-1,f_n(x))$ is subharmonic on the whole of the unit ball. However, the limit of the sequence $\{g_n\}$ is the function which is zero throughout the unit ball, except for at the origin, where it is $-1$. Hence the limit is not continuous and the statement in (b) is false.
When will the equality involving inner product of averages of vectors holds
$$\sum\limits_{i=1}^{k} \left\langle x_{i} - \widehat{x} , \widehat{y} \right\rangle=\sum\limits_{i=1}^{k}\left( \left\langle x_{i} , \widehat{y} \right\rangle - \left\langle \widehat{x},\widehat{y} \right\rangle \right) =\sum\limits_{i=1}^{k} \left\langle x_{i} , \widehat{y} \right\rangle -\sum\limits_{i=1}^{k} \left\langle \widehat{x},\widehat{y} \right\rangle $$ and both terms are equal to $k\left\langle \widehat{x},\widehat{y} \right\rangle$, since $\sum_{i=1}^{k}x_i=k\widehat{x}$.
Demonstrating the image of the inverse image of a subset
If $Y \subset F$ and $f : E \to F$, then $f(f^{-1}(Y)) = Y \cap f(E)$. \begin{align} y \in f(f^{-1}(Y)) &\implies \exists x \in f^{-1}(Y),\, y = f(x)\\ &\implies (x \in E) \wedge (f(x) \in Y)\\ &\implies f(x) \in Y \cap f(E)\\ &\implies y \in Y \cap f(E)\\ \end{align} So $f(f^{-1}(Y)) \subseteq Y \cap f(E)$. \begin{align} y \in Y \cap f(E) &\implies (y \in Y) \land (y\in f(E))\\ &\implies \exists x \in E,\, y=f(x) \in Y\\ &\implies \exists x \in f^{-1}(Y),\, y = f(x)\\ &\implies y \in f(f^{-1}(Y))\\ \end{align} So $Y \cap f(E) \subseteq f(f^{-1}(Y))$.
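The identity is easy to test on a small finite example with a non-injective map and a $Y$ containing points outside the image:

```python
def image(f, A):
    return {f(x) for x in A}

def preimage(f, E, Y):
    return {x for x in E if f(x) in Y}

E = {0, 1, 2, 3, 4, 5}
f = lambda x: x % 3          # an arbitrary non-injective map into F = {0,1,2,...}
Y = {0, 2, 7}                # note 7 lies outside f(E)

lhs = image(f, preimage(f, E, Y))
rhs = Y & image(f, E)
print(lhs == rhs)  # expect True: both are {0, 2}
```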
How can I calculate the exact solution to a differential equation?
This is incorrect. The auxiliary equation formed would be $$m^2+am=0$$ $$m(m+a)=0$$ $$m=0,\, -a$$ $$\therefore u=c_0+c_1 e^{-ax}$$ Assuming that $a\gt 0$ and $a\in \mathbb{R}$, $$u(\infty)=c_0=0.5$$ But one cannot determine the exact solution without another boundary condition. So the final solution is then $$u=0.5+c e^{-ax}$$
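As a quick numerical sanity check, one can verify with central finite differences that $u=0.5+ce^{-ax}$ satisfies $u''+au'=0$ (the values $a=2$ and $c=-0.3$ below are arbitrary illustrative choices, since $c$ is undetermined without a second boundary condition):

```python
import math

A, C = 2.0, -0.3  # arbitrary illustrative constants (a > 0, c undetermined)

def u(x):
    # candidate solution u(x) = 0.5 + c*exp(-a*x)
    return 0.5 + C * math.exp(-A * x)

def residual(x, h=1e-3):
    # central finite differences for u'' + a*u'
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    up = (u(x + h) - u(x - h)) / (2 * h)
    return upp + A * up

max_res = max(abs(residual(x)) for x in [0.1, 0.5, 1.0, 2.0, 5.0])
```

The residual vanishes up to discretization error, and $u(x)\to 0.5$ as $x\to\infty$ because $a>0$.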
Prove that the sequence is bounded above.
1. $a_n\in\left[0,\sqrt{x}\right]$. Initially, $a_0=0\in\left[0,\sqrt{x}\right]$. Inductively, suppose $a_k\in\left[0,\sqrt{x}\right]$. We need to show $a_{k+1}\in\left[0,\sqrt{x}\right]$. We have $$ a_{k+1}=a_k+\frac{1}{2}\left(x-a_k^2\right)=-\frac{1}{2}a_k^2+a_k+\frac{x}{2}. $$ Note that the right-hand side can be regarded as a quadratic function of $a_k$, whose global maximum is attained at $a_k=1$. Since $a_k\in\left[0,\sqrt{x}\right]\subseteq\left[0,1\right]$ (recall $0\le x\le 1$), this quadratic is increasing on $\left[0,\sqrt{x}\right]$, so the maximal value of the right-hand side $-a_k^2/2+a_k+x/2$ is attained at the endpoint $a_k=\sqrt{x}$, which gives $$ -\frac{x}{2}+\sqrt{x}+\frac{x}{2}=\sqrt{x}, $$ and the minimal value is attained at the endpoint $a_k=0$, which gives $$ 0+0+\frac{x}{2}=\frac{x}{2}\ge 0. $$ Therefore, $a_{k+1}=-a_k^2/2+a_k+x/2\in\left[0,\sqrt{x}\right]$, as expected. To sum up, $0\le a_n\le\sqrt{x}$ for all $n\ge 0$. 2. $a_{n+1}\ge a_n$. This step does not need mathematical induction. Since $a_n\in\left[0,\sqrt{x}\right]$, we have $a_n^2\le x$, hence $$ a_{n+1}=a_n+\frac{1}{2}\left(x-a_n^2\right)\ge a_n+\frac{1}{2}\left(x-x\right)=a_n. $$ Therefore, the sequence $a_n$ is non-decreasing. 3. $a_n\to\sqrt{x}$. Combining the two steps above, the monotone convergence theorem implies that $a_n$ converges, say $a_n\to y$. We need to show $y=\sqrt{x}$. Taking the limit on both sides of $$ a_{n+1}=a_n+\frac{1}{2}\left(x-a_n^2\right) $$ gives $$ y=y+\frac{1}{2}\left(x-y^2\right), $$ i.e. $y^2=x$. Under the constraint $y\in\left[0,\sqrt{x}\right]$ it follows immediately that $y=\sqrt{x}$, as expected.
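A small numerical experiment illustrates all three steps ($x=0.7$ is an arbitrary choice in $[0,1]$): the iterates stay in $[0,\sqrt{x}]$, increase monotonically, and converge to $\sqrt{x}$.

```python
import math

def sqrt_iterates(x, n_steps=200):
    # a_0 = 0,  a_{k+1} = a_k + (x - a_k^2)/2
    a = 0.0
    seq = [a]
    for _ in range(n_steps):
        a = a + 0.5 * (x - a * a)
        seq.append(a)
    return seq

x = 0.7
seq = sqrt_iterates(x)
```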
Expectation of sum is less than the second moment
The trick is that your random variables $X_i$ are independent, so that for $i\ne j$, $$E[(f(X_i)-m)(f(X_j)-m)]=E[f(X_i)-m]\,E[f(X_j)-m]=0,$$ where $m=E[f(X)]$. This shows that $$E\left[\frac1n\left(\sum_i\big(f(X_i)-E[f(X)]\big)\right)^2\right]=\frac1n\sum_iE\left[\left(f(X_i)-E[f(X)]\right)^2\right]=\mathrm{Var}(f(X)).$$
Finding constants of a given curve
The extremum occurring at $(1,2)$ implies that the equation must be of the form $$y=a(x-1)^2+2.$$ (This is a translated version of $y=ax^2$.) Then plugging the known point, $$3=a(0-1)^2+2\implies a=1$$and $$y=x^2-2x+3.$$
Field automorphism over $\mathbb Q$ not mapping conjugate pairs
Hint: Let $\tau$ be the usual complex conjugation. If $\tau\sigma=\sigma\tau$ for all $\sigma\in Gal(K/\Bbb{Q})$, then the subgroup generated by $\tau$ is normal, and hence its fixed field is Galois over $\Bbb{Q}$. Because $Z(f)$ consists of primitive elements, $\sigma$ is fully determined if we know $\sigma(z)$ for some $z\in Z(f)$.
Question related to chi-square test of independence and likelihood ratio test
The value of $\alpha$, the significance level, is predetermined depending on how likely the tester wants a Type I error to be. The typical values used are $0.1, 0.05,$ or $0.01$. Then, we calculate the likelihood ratio test statistic $\Lambda$, which is a random variable. Now, by the Neyman-Pearson lemma, for the most powerful test, we will always have $\Lambda \leq k$ for some $k \in \mathbb{R}$ as the rejection region, where $k$ is determined by $\alpha$ in the following way: as I have said, $\alpha$ is the probability of a Type I error, or the error of rejecting the null hypothesis when it is in fact true. So, in order to determine $k$, we solve the equation $$P(\Lambda \leq k \, | \, H_0) = \alpha \,\,,$$ where $H_0$ is the null hypothesis of the test. Note that this chi-square test for independence that you are using is derived from this setup, and so it is actually a likelihood ratio test - in other words, it is what this most powerful test reduces to when we compute $\Lambda$. The demonstration of this equivalence can be found in most college-level textbooks on statistics. Thus, the choice between $\chi_{n}^{2}(\frac{\alpha}{2})$ and $\chi_{n}^{2}(\alpha)$, for some $n$ degrees of freedom that depends on the sample size, as the critical value for rejection comes from this construction of the test, fundamentally.
Is $\mathbb{Z} \times \mathbb{R} \subseteq \mathbb{R} \times \mathbb{Z}$ true?
You are right: it is not true. Strictly speaking, $\mathbb{Z}\times\mathbb{R}$ is not a subset of $\mathbb{R}\times\mathbb{Z}$. But both sets are isomorphic in a very obvious way, so you can actually identify both sets (by permuting the coordinates) and then write a subset by abuse of notation (as $A\subseteq A$).
Probability density function in given interval?
If $p(x)$ is the pdf of a random variable on some interval $[a,b]$, then $$ \int_a^b p(x)\ dx=1. $$ In this case $p(x)=1/x^2$ and $b=1$. Thus $$\int_a^1 \frac{dx}{x^2}=\frac1a-1=1,$$ so $a=1/2$.
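As a quick check, $\int_{1/2}^{1}x^{-2}\,dx=\left[-\frac1x\right]_{1/2}^{1}=2-1=1$, which a crude midpoint rule confirms:

```python
def midpoint_integral(f, a, b, n=100_000):
    # simple midpoint rule; more than accurate enough for this smooth integrand
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

total = midpoint_integral(lambda x: 1.0 / x**2, 0.5, 1.0)
```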
Do all equations in three variables represent a two-dimensional surface in $\mathbb{R}^3$?
If you restrict what types of equations you are looking at, the answer is yes. If you don't include such a restriction, then the answer is no. The subspace of $\Bbb R^3$ given by $\{(x,y,z)\in\Bbb R^3~:~ax + by + cz = 0\}$ where $a,b,c$ are real numbers with at least one of them non-zero will indeed always describe a two-dimensional subspace of $\Bbb R^3$ and describes a plane passing through the origin. The subset of $\Bbb R^3$ given by $\{(x,y,z)\in\Bbb R^3~:~ax+by+cz=d\}$ where at least one of $a,b,c$ is nonzero and $d$ is nonzero is a two-dimensional affine space and describes a plane not passing through the origin. If you were to consider other possible equations in forms different than $ax+by+cz=d$ then it will depend on what the equation is. $x^2+y^2+z^2=0$ describes the origin only. $x^2+y^2+z^2=1$ describes the unit sphere, $0=0$ describes the entirety of $\Bbb R^3$, etc...
Passion for Mathematics versus its role in Computer Science
Read through (and work many exercises in!) "Concrete Mathematics" by Graham, Knuth, and Patashnik. This is a wonderful book on discrete math with a slight slant toward matters applicable to computer science. It should definitely satisfy your thirst for more advanced math, and will come in handy if you ever have to consider analyzing algorithms. Plus it is fun (at least if you love math). Knuth, BTW, is a BIG NAME in computer science.
Is every prime element of a commutative ring "veryprime"?
Let $p$ be an element with $p\mid ab\implies p\mid a\lor p\mid b$ and that is not a divisor of zero. Assume $p\|ab\ne p\|a+p\|b$. Then certainly $p\|a$ and $p\|b$ are both finite. So write $a=p^ka'$, $b=p^mb'$ with $k,m\in\mathbb N_0$ and $p\nmid a'$, $p\nmid b'$. Then $ab=p^{k+m}a'b'$ and $p\|ab\ne k+m$ means that $p^{k+m+1} \mid ab$, say $ab=p^{k+m+1}c$. Then $$p^{k+m}(pc-a'b')=0. $$ As $p$ is not a divisor of zero, we conclude $pc=a'b'$, i.e., $p\mid a'b'$, contradicting $p\nmid a'$, $p\nmid b'$. We conclude that $p\|ab= p\|a+p\|b$. Now that I've gotten this far in writing this up, I see that quid presented the counterexample suggested by this finding. At least this elaborates on quid's closing remark. :)
It is the set $V(y-\sin(x))\subset k^2$ a variety?
Suppose $X = V(y-\sin(x))\subset \Bbb A^2$ were a variety. Then $X$ also admits a description as $V(f_1,\cdots,f_n)$ for some finite list of nonzero polynomials $f_i(x,y)$, each of which vanishes on $V(y-\sin(x))$. Restricting to $y=0$, each single-variable polynomial $f_i(x,0)$ must vanish on $V(y-\sin(x))\cap V(y)$. On the other hand, as you (intended to) identify in your post, $V(y-\sin(x))\cap V(y)$ is infinite, which is a problem (Why? Try to identify for yourself before mousing over the following spoiler). Any nonzero polynomial in one variable has at most finitely many roots. As $f_i(x,0)$ has infinitely many zeros, it must be the zero polynomial, which implies that $y|f_i$ for each $f_i$, or that $V(y-\sin(x))$ contains the $x$-axis, which it does not by your identification of $V(y-\sin(x))\cap V(y)$.
Can I get an independent proof of a closed form of these two related infinite series?
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\sum_{m = -\infty}^{\infty}{\pars{-1}^{m} \over x - mN} = {1 \over x} + \sum_{m = 1}^{\infty}\bracks{{\pars{-1}^{m} \over x - mN} + {\pars{-1}^{-m} \over x + mN}} \\[5mm] = &\ {1 \over x} + \sum_{m = 2,\ m\ \mrm{even}}^{\infty}\bracks{{1 \over x - mN} + {1 \over x + mN}} - \sum_{m = 1,\ m\ \mrm{odd}}^{\infty}\bracks{{1 \over x - mN} + {1 \over x + mN}} \\[5mm] = &\ {1 \over x} + \color{#f00}{2}\sum_{m = 2,\ m\ \mrm{even}}^{\infty} \bracks{{1 \over x - mN} + {1 \over x + mN}} - \sum_{m = 1}^{\infty}\bracks{{1 \over x - mN} + {1 \over x + mN}} \label{1}\tag{1} \end{align} because $\ds{\sum_{m = 1}^{\infty}\cdots = \sum_{m = 2\,,\ m\ \mrm{even}}^{\infty}\cdots + \sum_{m = 1\,,\ m\ \mrm{odd}}^{\infty}\cdots}$ which yields the prefactor $\ds{\color{#f00}{2}}$ in expression \eqref{1}.
Then, \begin{align} &\sum_{m = -\infty}^{\infty}{\pars{-1}^{m} \over x - mN} = {1 \over x} + 2\sum_{m = 0}^{\infty}\bracks{% {1 \over x - \pars{2m + 2}N} + {1 \over x + \pars{2m + 2}N}} \\[5mm] - &\ \sum_{m = 1}^{\infty}\bracks{% {1 \over x - mN} + {1 \over x + mN}} \\[1cm] = &\ {1 \over x} + {1 \over N}\sum_{m = 0}^{\infty}\bracks{-\,{1 \over m + 1 - x/\pars{2N}} + {1 \over m + 1 + x/\pars{2N}}} \\[5mm] - &\ {1 \over N}\sum_{m = 0}^{\infty}\bracks{-\,{1 \over m + 1 - x/N} + {1 \over m + 1 + x/N}} \\[1cm] = &\ {1 \over x} + {1 \over N}\bracks{\Psi\pars{1 - {x \over 2N}} - \Psi\pars{1 + {x \over 2N}}} - {1 \over N}\bracks{\Psi\pars{1 - {x \over N}} - \Psi\pars{1 + {x \over N}}} \end{align} where $\ds{\Psi}$ is the Digamma Function. Then, \begin{align} &\sum_{m = -\infty}^{\infty}{\pars{-1}^{m} \over x - mN} \\[5mm] = &\ {1 \over x} + {1 \over N}\bracks{\Psi\pars{-\,{x \over 2N}} - {2N \over x} - \Psi\pars{1 + {x \over 2N}}} \\[5mm] - &\ {1 \over N}\bracks{\Psi\pars{-\,{x \over N}} - {N \over x} - \Psi\pars{1 + {x \over N}}}\qquad\pars{~Recurrence Property~} \\[1cm] = &\ {1 \over N}\bracks{\Psi\pars{-\,{x \over 2N}} - \Psi\pars{1 + {x \over 2N}}} - {1 \over N}\bracks{\Psi\pars{-\,{x \over N}} - \Psi\pars{1 + {x \over N}}} \\[5mm] = &\ {1 \over N}\bracks{-\pi\cot\pars{\pi\bracks{-\,{x \over 2N}}}} - {1 \over N}\bracks{-\pi\cot\pars{\pi\bracks{-\,{x \over N}}}} \quad\pars{~Euler\ Reflection\ Formula~} \\[5mm] = & {\pi \over N}\bracks{\cot\pars{\pi x \over 2N} - \cot\pars{\pi x \over N}} = \bbx{\ds{\pi/N \over \sin\pars{\pi x/N}}} \end{align} The other one can be evaluated in the same fashion.
Proof of a specific Concave Function
Following from your second inequality $$4 - (tx_1 + (1-t)x_2)^2 \geq 4t - x_1^2t + 4 - x_2^2 - 4t + tx_2^2,$$ you get $$ - (tx_1 + (1-t)x_2)^2 \geq - x_1^2t - x_2^2 + tx_2^2.$$ If you expand the LHS and reorder all terms, you obtain $$ (1 - t) t (x_1 - x_2)^2 \geq 0.$$ Obviously, the equality holds for $t=0$, $t=1$ or $x_1=x_2$. In all other cases, the inequality is strict, but also true. Regarding the convexity of $$A=\{(x,y)|x\in D, f(x)\geq y\},$$ let $(x_1,y_1), (x_2,y_2)$ be points of $A$. For $t\in[0,1]$, we define $$(x_t,y_t) := t(x_1,y_1) + (1-t)(x_2,y_2) = (tx_1+(1-t)x_2,ty_1+(1-t)y_2).$$ Thus, $(x_t,y_t)$ belongs to $A$ if and only if $$f(tx_1+(1-t)x_2) \geq ty_1+(1-t)y_2.$$ You just need to apply the concavity of $f$ to conclude that the inequality is true.
Probability that $\text{det}(A)$ is an even number.
Let $A=\left(\matrix{a&b\\c&d}\right)\in\text{Mat}_2(\mathbb{Z})$. $\text{det}(A)=2n$ for some $n\in\mathbb{Z}$ iff $\text{det}(A')=0$, with $$ A'=\left(\matrix{a_2&b_2\\c_2&d_2}\right)\in\text{Mat}_2(\mathbb{Z}_2), $$ where the coefficients are reduced modulo 2. The matrices with zero determinant in $\text{Mat}_2(\mathbb{Z}_2)$ are: \begin{align} \left(\matrix{0&0\\0&0}\right),&&\left(\matrix{0&1\\0&0}\right),&&\left(\matrix{1&0\\0&0}\right),&&\left(\matrix{0&0\\1&0}\right),&&\left(\matrix{0&0\\0&1}\right),\\ \left(\matrix{1&1\\0&0}\right),&&\left(\matrix{1&0\\1&0}\right),&&\left(\matrix{0&1\\0&1}\right),&&\left(\matrix{0&0\\1&1}\right),&&\left(\matrix{1&1\\1&1}\right).\\ \end{align} So the required probability is $$ \frac{10}{|\text{Mat}_2(\mathbb{Z}_2)|}=\frac{10}{|\mathbb{Z}_2|^{2\cdot 2}}=\frac{10}{16}=\frac{5}{8}. $$
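One can confirm the count by brute force: enumerate all $16$ matrices over $\Bbb Z_2$ and count those with zero determinant (a quick sanity-check sketch):

```python
from itertools import product
from fractions import Fraction

# every tuple (a, b, c, d) in Z_2^4 encodes a 2x2 matrix over Z_2
singular = [(a, b, c, d) for a, b, c, d in product((0, 1), repeat=4)
            if (a * d - b * c) % 2 == 0]
prob = Fraction(len(singular), 16)
```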
Interpret a matrix as a function
The answer you've written as "A" is the standard one that you'd find in any linear-algebra textbook.
Random Graphs and connected components
In Diestel's Graph Theory, in section on Random Graphs, he argues (Corollary 11.3.3) that for every constant $p \in (0,1)$ and every integer $k$, almost every graph in $\mathcal{G}(n,p)$ is $k$-connected, which implies that it has one connected component.
Existence and uniqueness of an ODE
Yes, that's essentially right. For a first-order ODE $y' = F(y, t)$ for a nice function $F$ (here nice means Lipschitz continuous and with open domain), and an initial condition $y(t_0) = y_0$ (for which $(y_0, t_0) \in \text{dom}\, F$), there is a solution $y$ of the ODE satisfying the initial condition on some time interval $(a, b)$ containing $t_0$, and any two such solutions agree on the overlap. In your example, the first solution satisfies the initial condition $y(0) = \frac{1}{c}$ (note that this is actually an entire family of solutions, one for each constant value of $c$). The solution $y(t) = 0$ satisfies $y(0) = 0$, but this initial value cannot be written as $\frac{1}{c}$ for any $c$, so there is no contradiction.
Determine for which constants $a$, $b$, $c$ and $d$ it is true that $f \circ g = g \circ f$.
As an alternative approach: compute a few nice test values to obtain conditions. For example, $f(g(0))=g(f(0))$ gives you immediately that $b=cb^2+db=(cb+d)b$, which is true if and only if $b=0$ or $cb+d=1$. Try some other values for $x$, e.g. $x=1$ or $x=-\frac ba$ or $x=-\frac db$. Then combine the conditions obtained.
Krull Dimension of a scheme
If $X$ is a topological space and $X=\bigcup_{i\in I}U_i$ is an open cover, then the Krull dimension of $X$ is equal to the supremum of the Krull dimensions of the $U_i$. Call the latter number (which might be $\infty$) $s$. Since $\dim(U_i)\leq\dim(X)$ for all $i$, $s\leq\dim(X)$. For the reverse, let $Z_0\subsetneq Z_1\subsetneq\cdots\subsetneq Z_n$ be a chain of irreducible closed subsets of $X$. Then $Z_0$ must meet some $U_{i_0}$ (because an irreducible space is non-empty), and intersecting with that $U_{i_0}$ gives a chain $Z_0\cap U_{i_0}\subsetneq Z_1\cap U_{i_0}\subsetneq\cdots\subsetneq Z_n\cap U_{i_0}$ of irreducible closed subsets of $U_{i_0}$. The reason the inclusions remain strict is because $Z_j\cap U_{i_0}$ is dense in $Z_j$, so if $Z_j\cap U_{i_0}=Z_{j+1}\cap U_{i_0}$, then taking closures in $X$ gives $Z_j=Z_{j+1}$, which is not the case. Thus $s\geq\dim(U_{i_0})\geq n$, so $s\geq\dim(X)$.
Is the reciprocal of a vector defined in a way to allow the reverse of scalar or dot products?
Generally, a vector doesn't have an inverse. Consider, for example, $$[0,1] = k[1,0],$$ which clearly has no solution for any $k \in \Bbb R$. But if vectors had inverses, we would be able to solve for $k$: $$k = [0,1][1,0]^{-1}.$$ You might be interested in looking into matrices. In particular, square matrices can have inverses (but not all of them do).
Difference between gradient descent and finding stationary points with calculus?
Because the objective function (the sum of the errors squared, over the data points) is precisely a quadratic function, the method of steepest (gradient) descent will select the perfect direction on the first try, and if you go along the descent line until the minimum, will find the minimum in only one iteration. This is true not only for a one-dimensional line, but for any multivariate linear fit. The calculations needed to do the gradient descent are precisely the same as those needed to solve the simultaneous equations. And indeed, the practical person would use Method 1. However, if your objective function were not a perfect quadratic form, then two things happen. Method 1 becomes impossible, since you can't solve the simultaneous non-linear equations, and the gradient descent method will choose a slightly inferior initial direction, so that multiple iterations will be needed. Here, the practical person is forced to use Method 2 (or better, some method like conjugate gradient that deals with issues like spiralling in to the solution slowly, which often happens in naive steepest descent).
Find an example of a discontinuous positive semi-definite real function
Your thinking is right. From the definition of a positive semi-definite function it is quite easy to see that if $f$ is positive semi-definite, $g(x)=f(x)$ for $x \neq 0$ and $g(0)>f(0)$, then $g$ is also positive semi-definite: for distinct points $t_1,\dots,t_n$, $$\sum\limits_{j,k=1}^{n} c_j\overline {c_k}\, g(t_j-t_k)=\sum\limits_{j,k=1}^{n} c_j\overline {c_k}\, f(t_j-t_k)+ \sum\limits_{j=1}^{n} |c_j|^{2} \left(g(0)-f(0)\right).$$
Converting from one to another numeral system
You can group the digits. As an octal digit accounts for 3 binary bits and a quaternary digit accounts for 2, the LCM of these is 6. So you can take pairs of octal digits and convert them to three quaternary digits (like $33_8=123_4$) or vice versa. In a sense, you are going through binary, but in a hidden way. This still depends upon both bases of interest being powers of the same number (here, 2).
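A short sketch of both routes: direct conversion through the integer value, and the digit-grouping trick through binary (`to_base` and `oct_to_quat` are hypothetical helpers written for this illustration, not standard library functions):

```python
def to_base(n, b):
    # render a non-negative integer as a digit string in base b (b <= 10)
    digits = []
    while n:
        digits.append(str(n % b))
        n //= b
    return "".join(reversed(digits)) or "0"

def oct_to_quat(s):
    # the grouping trick: each octal digit -> 3 bits, each pair of bits -> 1 quaternary digit
    bits = "".join(format(int(d), "03b") for d in s)
    if len(bits) % 2:
        bits = "0" + bits
    quat = "".join(str(int(bits[i:i + 2], 2)) for i in range(0, len(bits), 2))
    return quat.lstrip("0") or "0"
```

Both routes agree, e.g. $33_8 = 27 = 123_4$.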
Looking for a simple interpretation
A function is self-inverse (that is, $f(x)=f^{-1}(x)$) if we have $$f(f(x))=x.$$ This can be easily verified for this function.
Combinations formula shorthand with $2$ as the base.
Assume that you have a set $S$ with $n$ elements. Your combinations correspond to the (non-empty) subsets of $S$. To construct such a subset, for each of the $n$ elements you must decide whether it is to be contained in your subset or not. So, for each of the $n$ elements, you have $2$ possibilities, which makes a total number of $2^n$ possibilities. If your subset must not be the empty set, you have to subtract $1$ from that number.
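The counting argument can be checked directly for a small $n$ (a quick sketch with $n=5$):

```python
from itertools import combinations
from math import comb

n = 5
# all subsets of an n-element set, grouped by size k
subsets = [c for k in range(0, n + 1) for c in combinations(range(n), k)]
nonempty = [c for c in subsets if c]
```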
Show that $\int_0^1\int_0^1\frac{\left[-\ln(x)\right]^s\left[-\ln(y)\right]^t}{1-xy}dxdy=\frac{\Gamma(s+t+1)\zeta(s+t+2)}{s+t \choose t}$
Through the substitutions $x=e^{-u},y=e^{-v}$ we get: $$ I(s,t)=\iint_{(0,+\infty)^2} \frac{u^s v^t}{e^{u+v}-1}\,du\,dv =\iint_{(0,+\infty)^2}u^s v^t\sum_{n\geq 1}e^{-nu}e^{-nv}\,du\,dv$$ so, by Fubini's theorem and the fact that $\int_{0}^{+\infty}t^a e^{-nt}\,dt = \frac{a!}{n^{a+1}}$ we get: $$ I(s,t) = \sum_{n\geq 1}\frac{s! t!}{n^{s+t+2}} =s!\cdot t!\cdot \zeta(s+t+2)$$ as wanted.
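For $s=t=1$ the identity says $I(1,1)=1!\cdot 1!\cdot\zeta(4)=\pi^4/90$, which a crude midpoint rule on the substituted integral $\iint u v/(e^{u+v}-1)\,du\,dv$ reproduces numerically (a rough sketch; the truncation bound $30$ and the grid size are ad hoc choices):

```python
import math

def I11(upper=30.0, n=600):
    # midpoint rule for the double integral of u*v/(e^(u+v)-1);
    # the integrand extends continuously by 0 at the origin
    h = upper / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        for j in range(n):
            v = (j + 0.5) * h
            total += u * v / math.expm1(u + v)
    return total * h * h

approx = I11()
exact = math.pi**4 / 90  # = 1! * 1! * zeta(4)
```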
tough series involving digamma
Note that $$ \frac{1}{2k+1-a}+\frac{1}{2k+1+a}=\frac{4k+2}{(2k+1)^2-a^2}\tag{1} $$ and $$ \frac1a\left(\frac{1}{2k+1-a}-\frac{1}{2k+1+a}\right)=\frac{2}{(2k+1)^2-a^2}\tag{2} $$ Adding $(1)$ and $(2)$ and dividing by $4$ yields $$ \begin{align} \frac{k+1}{(2k+1)^2-a^2} &=\frac{1+a}{8a}\frac{1}{k+(1-a)/2}-\frac{1-a}{8a}\frac{1}{k+(1+a)/2}\\ &=\hphantom{+ }\frac{1-a}{8a}\left(\frac1k-\frac{1}{k+(1+a)/2}\right)\\ &\hphantom{= }-\frac{1+a}{8a}\left(\frac1k-\frac{1}{k+(1-a)/2}\right)\\ &\hphantom{= }+\frac{1}{4k}\tag{3} \end{align} $$ Now, using $$ \psi(a+1)+\gamma=\sum_{k=1}^\infty\frac{1}{k}-\frac{1}{k+a}\tag{4} $$ we get $$ \frac12\psi\left(\frac{a}{2}+1\right)+\frac\gamma2=\sum_{k=1}^\infty\frac{1}{2k}-\frac{1}{2k+a}\tag{5} $$ and subtracting twice $(5)$ from $(4)$ gives $$ \psi(a+1)-\psi\left(\frac{a}{2}+1\right)=\sum_{k=1}^\infty(-1)^{k-1}\left(\frac{1}{k}-\frac{1}{k+a}\right)\tag{6} $$ Furthermore, $$ \log(2)=\sum_{k=1}^\infty(-1)^{k-1}\frac1k\tag{7} $$ Using $(3)$, $(6)$, and $(7)$, we get $$ \begin{align} \sum_{k=1}^\infty(-1)^k\frac{k+1}{(2k+1)^2-a^2} &=-\frac{1-a}{8a}\left(\psi\left(\frac{3+a}{2}\right)-\psi\left(\frac{5+a}{4}\right)\right)\\ &\hphantom{= }+\frac{1+a}{8a}\left(\psi\left(\frac{3-a}{2}\right)-\psi\left(\frac{5-a}{4}\right)\right)\\ &\hphantom{= }-\frac14\log(2)\tag{8} \end{align} $$ Equivalence of Forms: Using $(4)$, $(5)$, and $(7)$, we get $$ \begin{align} \sum_{k=1}^\infty\frac{1}{2k-1}-\frac{1}{2k-1+a} &=\color{green}{\sum_{k=1}^\infty\frac{1}{2k-1}-\frac{1}{2k}}+\color{red}{\sum_{k=1}^\infty\frac{1}{2k}-\frac{1}{2k-1+a}}\\ &=\color{green}{\log(2)}+\color{red}{\frac12\psi\left(\frac{a+1}{2}\right)+\frac\gamma2}\tag{9} \end{align} $$ Adding $(5)$ to $(9)$ yields $$ \begin{align} \psi(a+1)+\gamma &=\hphantom{+}\log(2)+\frac12\psi\left(\frac{a+1}{2}\right)+\frac\gamma2\\ &\hphantom{= }+\frac12\psi\left(\frac{a}{2}+1\right)+\frac\gamma2\tag{10} \end{align} $$ Rearranging $(10)$ shows that $$
\psi(a)=\log(2)+\frac12\psi\left(\frac{a}{2}\right)+\frac12\psi\left(\frac{a+1}{2}\right)\tag{11} $$ Applying $(11)$ gives $$ \psi\left(\frac{3+a}{2}\right)=\log(2)+\frac12\psi\left(\frac{3+a}{4}\right)+\frac12\psi\left(\frac{5+a}{4}\right)\tag{12} $$ and $$ \psi\left(\frac{3-a}{2}\right)=\log(2)+\frac12\psi\left(\frac{3-a}{4}\right)+\frac12\psi\left(\frac{5-a}{4}\right)\tag{13} $$ Plug $(12)$ and $(13)$ into $(8)$ $$ \begin{align} \sum_{k=1}^\infty(-1)^k\frac{k+1}{(2k+1)^2-a^2} &=\hphantom{+}\frac{a-1}{16a}\left(\psi\left(\frac{3+a}{4}\right)-\psi\left(\frac{5+a}{4}\right)\right)\\ &\hphantom{= }+\frac{a+1}{16a}\left(\psi\left(\frac{3-a}{4}\right)-\psi\left(\frac{5-a}{4}\right)\right)\\ &=\hphantom{+}\frac{a-1}{16a}\left(\psi\left(\frac{3+a}{4}\right)-\psi\left(\frac{1+a}{4}\right)-\frac{4}{1+a}\right)\\ &\hphantom{= }+\frac{a+1}{16a}\left(\psi\left(\frac{3-a}{4}\right)-\psi\left(\frac{1-a}{4}\right)-\frac{4}{1-a}\right)\\ &=\hphantom{+}\color{red}{\frac{a-1}{16a}\left(\psi\left(\frac{3+a}{4}\right)-\psi\left(\frac{1+a}{4}\right)\right)}\\ &\hphantom{= }\color{red}{+\frac{a+1}{16a}\left(\psi\left(\frac{3-a}{4}\right)-\psi\left(\frac{1-a}{4}\right)\right)}\\ &\hphantom{= }\color{red}{+\frac{1}{a^2-1}}\tag{14} \end{align} $$
$\textrm{GL}_2(\mathbb{Z}/p^2\mathbb{Z}) \to \textrm{GL}_2(\mathbb{Z}/p\mathbb{Z})$ has no section for $p > 3$
Let $S=\pmatrix{1&1\\0&1}\in\text{GL}_2(\Bbb Z/p\Bbb Z)$. This has order $p$, and one's intuition suggests that any lifting to $\text{GL}_2(\Bbb Z/p^2\Bbb Z)$ should have order $p^2$, meaning that there's no section for $\pi$. But is this true? A lifting of $S$ has the form $S'=I+A$ where $$A=\pmatrix{ap&1+bp\\cp&dp}\in\text{Mat}_2(\Bbb Z/p^2\Bbb Z).$$ Then $$A^2=\pmatrix{cp&(a+d)p\\0&cp},$$ $$A^3=\pmatrix{0&cp\\0&0}$$ and $A^4=0$. For $p\ge5$ then $$S'^p=I+pA+\binom p2A^2+\binom p3A^3=I+\pmatrix{0&p\\0&0}$$ which does mean that $S'$ does not have order $p$ in $\text{GL}_2(\Bbb Z/p^2\Bbb Z)$. This argument breaks down for $p\in\{2,3\}$. For instance with $p=3$ one has $$S'^3=I+3A+3A^2+A^3=I+\pmatrix{0&(c+1)p\\0&0}$$ so one can take $c=-1$. Of course this is somewhat shy of proving that in this case $\pi$ has a section, but it shows that this argument does not refute it.
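Both claims are easy to confirm by machine (a small sketch; matrices are $2\times2$ integer lists and arithmetic is done mod $p^2$):

```python
def matmul(A, B, mod):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % mod
             for j in range(2)] for i in range(2)]

def matpow(A, e, mod):
    # square-and-multiply exponentiation
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = matmul(R, A, mod)
        A = matmul(A, A, mod)
        e >>= 1
    return R

I = [[1, 0], [0, 1]]

# p = 5: every lift S' = I + A of S to GL_2(Z/25Z) satisfies S'^5 != I
p = 5
no_lift_has_order_p = all(
    matpow([[1 + a * p, 1 + b * p], [c * p, 1 + d * p]], p, p * p) != I
    for a in range(p) for b in range(p) for c in range(p) for d in range(p))

# p = 3, c = -1: the lift [[1, 1], [-3, 1]] = [[1, 1], [6, 1]] mod 9 has order 3
order_three = matpow([[1, 1], [6, 1]], 3, 9) == I
```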
Compute the cardinality of a field $K$ and show that $K$ contains a splitting field of $X^{31} - 1$
For part a, once you've shown that $P(X)$ is irreducible you can be done rather quickly; you know that $F[X]$ is a principal ideal domain so $\ker v_a=(A(X))$ for some $A(X)\in F[X]$, and you know that $P(X)\in\ker v_a$ so $P(X)=Q(X)A(X)$ for some $Q(X)\in F[X]$. Since $P(X)$ is irreducible it follows that $A(X)=uP(X)$ or $A(X)=u$ for some unit $u\in F$. If $A(X)=u$ then $$\ker v_a=(A(X))=(u)=F[X],$$ which is clearly false, so $A(X)=uP(X)$ and hence $\ker v_a=(uP(X))=(P(X))$. And then indeed every element of the quotient $F[X]/\ker v_a$ is of the form $aX^2+bX+c$, so $|K|=5^3$. For part b, consider the minimal polynomial of $\beta$ over $F$. This is an irreducible polynomial with a zero in $K$, hence its degree is at most $3$. Show that it cannot be less than $3$. For part c, note that $K-\{0\}$ is a multiplicative (abelian) group of order $5^3-1$. How many elements of order $31$ does it contain?
Prove $\mathrm{span}(T ) = \mathrm{span}(T \cup \{ 0 \} ) $
Yes, the other direction is identical! Alternatively, one could use a well-known formula regarding sums of subspaces, namely $\text{sp}(U\cup W)=\text{sp}(U)+\text{sp}(W)$, with $W=\left \{ 0 \right \}$.
E[X*Y] for the sum and difference of a dice rolling and independence
We have $\Pr(X=2)\ne 0$ and $\Pr(Y=1)\ne 0$. But $\Pr(X=2,\ Y=1)=0$, whereas independence would require it to equal $\Pr(X=2)\Pr(Y=1)>0$; so $X$ and $Y$ are not independent.
How many integer solutions are there to $x_1+x_2+x_3+x_4+x_5=31$
Transformation of variables works best. We define $y_i=x_i-i$, i.e. we count the number of solutions of $$y_1+y_2+y_3+y_4+y_5+(1+2+3+4+5)=31$$ $$y_1+y_2+y_3+y_4+y_5=16$$ where each $y_i\ge 0$.
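Stars and bars then gives $\binom{16+4}{4}=\binom{20}{4}=4845$ solutions, which a brute-force count confirms (quick sketch):

```python
from math import comb

target = 16  # y_1 + ... + y_5 = 16 with each y_i >= 0
count = 0
for y1 in range(target + 1):
    for y2 in range(target - y1 + 1):
        for y3 in range(target - y1 - y2 + 1):
            for y4 in range(target - y1 - y2 - y3 + 1):
                count += 1  # y5 = 16 - y1 - y2 - y3 - y4 is forced and >= 0
```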
Algorithms for factoring multivariate polynomials
There are algorithms for factoring univariate polynomials with integer coefficients, like the Zassenhaus algorithm, which reduces the problem modulo small primes and then uses Hensel lifting to probe for the integer factors (modular factors may have to be combined), or the LLL-based factorization algorithm, which looks for the minimal polynomial of a root computed numerically to high precision. So choose a random small to medium integer $m$ to replace $y$, perform univariate factorization, and apply Hensel lifting modulo powers of $(y-m)$ to the factors. Again, if after the lifting there remain factors with too high a degree, look for combinations of them that are proper factors.
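As a toy illustration of the evaluation idea only (not of Zassenhaus or LLL themselves), take $p(x,y)=(x+y)(x+2y)=x^2+3xy+2y^2$: substituting several integers $m$ for $y$ yields univariate images whose integer roots $-m,-2m$ vary linearly in $m$, revealing the bivariate factors $x+y$ and $x+2y$:

```python
def p(x, y):
    # toy bivariate polynomial with the hidden factorization (x + y)(x + 2y)
    return x * x + 3 * x * y + 2 * y * y

def integer_roots(f, bound=50):
    # brute-force integer root search for the univariate image
    return sorted(r for r in range(-bound, bound + 1) if f(r) == 0)

# evaluate at y = m for several m and "factor" the univariate images
images = {m: integer_roots(lambda x, m=m: p(x, m)) for m in range(1, 6)}
```

From the two root families $r_1(m)=-m$ and $r_2(m)=-2m$ one reads off the candidate factors $x+y$ and $x+2y$; very loosely, this is the role that Hensel lifting modulo powers of $(y-m)$ plays in the real algorithms.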
if $r\in (0,1), 1-r$ are the only sub limits of $a_{n}$ then $f(a_{n})$ converges when
Here's my own attempt at solving this (comments are appreciated!): We denote $b_{n}=f(a_{n})$ and by Bolzano–Weierstrass, $b_{n_{k}}\to b$. The subsequence $a_{n_{k}}$ might not converge, but again by Bolzano–Weierstrass there exists $a_{n_{k_{m}}}$ that converges to either $r$ or $1-r$ (because those are the only sublimits of $a_{n}$). If $a_{n_{k_{m}}}\to r$ then $b_{n_{k_{m}}}=f(a_{n_{k_{m}}})\to f(r)$ (because $f$ is continuous in $[0,1]$). Similarly, if $a_{n_{k_{m}}}\to 1-r$ then $b_{n_{k_{m}}}=f(a_{n_{k_{m}}})\to f(1-r)$. But since $\forall x\in [0,1], \ f(x)=f(1-x)$, we get $f(r)=f(1-r)$. To finish the argument we note that $b_{n_{k_{m}}}$ is a subsequence of $b_{n_{k}}\to b$, thus $b_{n_{k_{m}}}\to b$ and we conclude that $b=f(r)=f(1-r)$. Since $b$ was chosen arbitrarily we get that $b_{n}$ has a single sublimit, which implies $b_{n}\to f(r)$.
Solve $\int_0^x (z^2+1)e^{s\left(\frac{z^3}{3}+z\right)} \ dz$
Note that $\frac{d}{dz}e^{s\left(\frac{z^3}{3}+z\right)}=s(z^2+1)e^{s\left(\frac{z^3}{3}+z\right)}$, so the integrand is $\frac1s$ times this derivative. Once you put $y=\frac{z^3}{3}+z$, the new limits are $0$ and $\frac{x^3}{3}+x$, and $$\int_0^x (z^2+1)e^{s\left(\frac{z^3}{3}+z\right)}\,dz=\int_0^{\frac{x^3}{3}+x}e^{sy}\,dy=\frac{e^{s\left(\frac{x^3}{3}+x\right)}-1}{s}.$$
Undirected Graphs: why do cycles have to be of length $3$ or more?
Though I don't know which book you are using, I shall assume that all graphs are simple (otherwise we may refer to them as multigraphs). The case of cycles of length $2$ comes down to the difference between cycles and closed walks. Cycles are usually assumed to be simple, that is, not use any vertex or edge more than once (except that it starts and ends in the same vertex). For instance, we would not consider $C_4$ to have a cycle of length $6$, even though it does have a closed walk of length $6$. Therefore we also do not consider something like $\langle a,b,a\rangle$ to be a cycle. The case of the single-vertex cycle $\langle a\rangle$ is a bit trickier. Couldn't we say the begin and end point of this cycle are the same, with no edges in between them? (After all, a simple graph is not allowed to have loops.) Well, yes, that would make it a cycle of length $0$. This is a degenerate case: it does not suit any practical use, and it defies the usual properties of cycles that we know (for instance: "the number of vertices on a cycle is the same as the number of edges"). Furthermore, it complicates various definitions (such as: "a forest is a graph without cycles", or "the girth of $G$ is the length of the shortest cycle in $G$"). For this reason the trivial cycle is often excluded. Maybe somebody has a more compelling reason why a trivial cycle should be excluded, but I see it mostly as a matter of convention. In any case, rest assured: the definitions you encountered are common. In graph theory (and mathematics in general), sometimes definitions are modified to exclude degenerate cases or to prevent strange corner cases from turning up further down the line.
Is it covex function?$J_{new}(u)=\int_{\Omega} \sum_{i=1}^{N} \lambda_if(x)u_i(x)dx$
Both $J$ and $J_{new}$ are linear in the argument $u$, and a linear functional is both convex and concave.
Group of homomorphisms
Hint: If $A$ is cyclic, and $a$ generates it, then every homomorphism $f:A\to G$ is completely determined by $f(a)$.
Deciding if a mapping is an isomorphism
Remember that for it to be an isomorphism, it must be a homomorphism, and a bijection. That is to say, it must be surjective and injective. Given that, can you think of two non equal matrices that have the same determinant? If so, then what can you conclude about the mapping? Edit: As Display name points out, ⟨M2(R),∗⟩ is not a group. When answering, I made the assumption that the question was referring to a map between two monoids.
How can i simplify this matricial expression?
$$(B^T X)^T-A((B^{-1}A)^{-1}-B)=0$$ $$ X^T B - A(B^{-1}A)^{-1}+A B=0$$ right-multiply both sides by $B^{-1}A$ $$ X^T A - A + A^2=0$$ right-multiply both sides by $A^{-1}$ $$ X^T - I + A=0$$
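Reading off $X=(I-A)^{T}=I-A^{T}$ from the last line, one can sanity-check the whole chain numerically with a pair of invertible matrices (a sketch; the particular $A$ and $B$ below are arbitrary choices with $B^{-1}A$ invertible):

```python
def mm(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def tr(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def inv(A):
    # closed-form inverse of a 2x2 matrix
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
A = [[2.0, 1.0], [1.0, 1.0]]
B = [[1.0, 2.0], [0.0, 1.0]]

X = sub(I, tr(A))  # so that X^T = I - A
# residual of (B^T X)^T - A((B^{-1}A)^{-1} - B), using (B^T X)^T = X^T B
lhs = sub(mm(tr(X), B), mm(A, sub(inv(mm(inv(B), A)), B)))
```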
Pearson correlation test
The critical value for $r$ as shown in the table, is given by the equation $$r_{\text{crit}} = \frac{t_{n-2,\alpha/2}^*}{\sqrt{n-2 + (t_{n-2,\alpha/2}^*)^2}},$$ where $t_{n-2,\alpha/2}^*$ is the upper $\alpha/2$ quantile of the student's $t$-distribution with $n-2$ degrees of freedom. This equation is the second one in the Wikipedia article subsection under &quot;Testing using Student's $t$-distribution.&quot; In particular, $$\Pr[T_{n-2} &gt; t_{n-2,\alpha/2}^*] = \alpha/2$$ where $T_{n-2}$ is a student's $t$ random variable with $n-2$ degrees of freedom. For instance, $n = 10$ and $\alpha = 0.1$ gives $t_{8,0.05}^* \approx 1.85955$. Then $$r_{\text{crit}} = \frac{1.85955}{\sqrt{10 + (1.85955)^2}} \approx 0.549357,$$ which is the entry for row $8$ column $2$ in the table. For the case in your comment, where $n = 10$ and $\alpha = 0.05$, we have $$t_{8, 0.025}^* \approx 2.306004.$$ This gives $$r_{\text{crit}} = \frac{2.306004}{\sqrt{8 + (2.306004)^2}} \approx 0.63189686.$$ This is row $8$ column $3$ of the table. This hypothesis test is 2-sided. If $r$ is negative, then you need to compare against $-r_{\text{crit}}$ for the hypothesis $$H_0 : r = 0 \quad \text{vs.} \quad H_1 : r \ne 0.$$ That is to say, we reject $H_0$ in favor of $H_1$ if $|r| &gt; r_{\text{crit}}$ where $r$ is the observed correlation from the data. Personally, I don't find the table very useful. Instead, I would directly calculate the test statistic via the first equation in the Wikipedia subsection $$T_{n-2} \mid H_0 = r \sqrt{\frac{n-2}{1-r^2}},$$ which is student $t$-distributed with $df = n-2$, thus making the need for a separate table irrelevant, since you can now just use a regular $t$-table.
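A sketch of the direct computation in pure Python (with `scipy` one would just call `scipy.stats.t.ppf`; here the $t$ quantile is obtained by integrating the $t$ density and bisecting, and the grid size and brackets are ad hoc choices):

```python
import math

def t_pdf(x, df):
    # density of Student's t with df degrees of freedom
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_upper_tail(x, df, n=4000):
    # P(T > x) by midpoint integration; the tail beyond x + 50 is negligible for df = 8
    h = 50.0 / n
    return sum(t_pdf(x + (i + 0.5) * h, df) for i in range(n)) * h

def t_quantile(p, df):
    # upper-tail quantile by bisection: find x with P(T > x) = p
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if t_upper_tail(mid, df) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n, alpha = 10, 0.05
t_star = t_quantile(alpha / 2, n - 2)           # ~ 2.306
r_crit = t_star / math.sqrt(n - 2 + t_star**2)  # ~ 0.632, row 8 column 3 of the table
```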
What is the meaning of exterior here and how to find the solution?
A non-degenerate parabola divides the plane into two regions. Only one of them is convex. The convex region is often called the "interior" of the parabola. Since the area has the $x$-axis as a symmetry axis, you can find the area above it and multiply by $2$. This part can be divided in two: a quarter circle (at left), whose area can be found with the known formula $\frac{\pi r^2}{4}$, or, if you are not allowed to use this kind of formula, with the integral $$\int_{-4}^0f(x)dx$$ and the "curved triangle" at right, between the circle and the parabola, which can be found with the integral $$\int_0^2(f(x)-g(x))dx$$ where $f$ is the function that describes the circle and $g$ is that of the parabola. Note that you must solve the respective equations for $y$ in order to find appropriate expressions for $f$ and $g$.
How to test whether an unknown function is a logarithmic function or not? Can we figure that out using "the fundamental property of a logarithm"?
Note: This depends a lot on exactly how you define "is a logarithm function". In this answer I have assumed that for you a "logarithm function" means any multiple of the natural logarithm, the latter being defined as the integral of $1/x$. Now, rereading the question, I'm not so sure about that anymore, so caveat lector. The fundamental property of a logarithm function is the formula $$ f(ab) = f(a) + f(b). $$ If some function $\mathbb R_+ \to \mathbb R$ satisfies this for all $a$ and $b$, and it is continuous and not identically zero, then it must be a logarithm function to some base, and that base is uniquely the number such that $f(\mathit{base})=1$. To prove this, I would start by showing that the natural logarithm satisfies the property (this would probably be the definition of $e$, in fact), and that every continuous function $f:\mathbb R_+\to \mathbb R$ such that $f(ab)=f(a)+f(b)$ and $f(e)=1$ must equal the natural logarithm at $e^q$ for every rational $q$, and therefore everywhere by continuity. You can also show that if $f(ab)=f(a)+f(b)$ and $f(x_0)=0$ for some $x_0\ne 1$, then $f$ is zero everywhere. (Again, first prove this for rational powers of $x_0$, and then apply continuity.) Now, if you're given some arbitrary continuous function that satisfies $f(ab)=f(a)+f(b)$ and is not zero everywhere, then in particular $f(e)$ must be nonzero. Therefore $x \mapsto \frac{f(x)}{f(e)}$ satisfies exactly the conditions that we've just seen imply it must be the natural logarithm, and therefore $f(x)=f(e)\ln(x)$ everywhere.
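A crude numerical way to screen a black-box function against the fundamental property (a sketch; random spot-checks can only falsify the identity, never prove it):

```python
import math
import random

def satisfies_log_property(f, trials=1000, tol=1e-9):
    """Numerically spot-check f(a*b) == f(a) + f(b) on random positive inputs."""
    random.seed(1)
    for _ in range(trials):
        a = random.uniform(0.1, 10.0)
        b = random.uniform(0.1, 10.0)
        if abs(f(a * b) - (f(a) + f(b))) > tol:
            return False
    return True

assert satisfies_log_property(math.log)                    # natural log passes
assert satisfies_log_property(lambda x: 3 * math.log(x))   # any multiple passes
assert not satisfies_log_property(math.sqrt)               # sqrt fails
```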
Complex integral - exercise
It is important to specify the direction of the contour. $$ \int_\gamma\frac{e^z}{z^2-4}\,\mathrm{d}z =\underbrace{\frac14\int_\gamma\frac{e^z}{z-2}\,\mathrm{d}z}_{I_1} -\underbrace{\frac14\int_\gamma\frac{e^z}{z+2}\,\mathrm{d}z}_{I_2} $$ where $\gamma$ is a counterclockwise circle with center $2$ and radius $\frac14$. $\gamma$ circles the simple pole of the integrand of $I_1$ at $z=2$ once, but does not contain the pole of the integrand of $I_2$ at $z=-2$. We can evaluate $I_1$ using Cauchy's Integral Formula and $I_2$ using Cauchy's Integral Theorem.
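By the residue theorem the value works out to $2\pi i\cdot\frac{e^2}{4}=\frac{\pi i e^2}{2}$; here is a quick numerical check of that value (a sketch using only the standard library, with the circle parametrized as $\gamma(t)=2+\tfrac14 e^{it}$):

```python
import cmath

# Numerically integrate e^z/(z^2-4) over the counterclockwise circle |z-2| = 1/4
N = 5000
dt = 2 * cmath.pi / N
total = 0j
for k in range(N):
    t = (k + 0.5) * dt                      # midpoint rule (spectrally accurate
    z = 2 + 0.25 * cmath.exp(1j * t)        # for smooth periodic integrands)
    dz = 0.25j * cmath.exp(1j * t)          # gamma'(t)
    total += cmath.exp(z) / (z**2 - 4) * dz * dt

expected = 2j * cmath.pi * cmath.exp(2) / 4  # residue theorem: pi*i*e^2/2
assert abs(total - expected) < 1e-8
```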
find the limit $\lim_{n\to\infty}\frac{\lim_{x\to\infty}n^2x}{e^9n}$
Considering the priority between the two limits: for each fixed $n\geq 1$ the inner limit $\lim_{x\to\infty} n^2x$ diverges to $\infty$, so the outer limit is simply $\lim_{n\to\infty}\infty$, which is $\infty$.
In what sense are the linear characters among the irreducible characters
The linear characters are precisely the characters (in the trace sense) of $1$-dimensional representations, which are automatically irreducible.
Understanding how for an infinite set $a$, $h(f)=(g(f\upharpoonright n),f(n))$ defines a bijection from $^{n+1}a$ onto $a\times a$
Note that here $f$ is a function from $n+1$ to $a$. So it's not $g(3)$, but rather $g(a_0,a_1,a_2)$. This is really the "obvious" way of doing it: compose the bijection between $a\times a$ and $a$ with itself inductively.
Duhamel's formula, variation of constants formula, easy differentiation of the right hand side
Here is a verification that the function satisfies the DE. Uniqueness depends on the function $f$ being "nice enough". When you differentiate the integral, you have to treat it like a product, since the variable you are differentiating with respect to ($t$) shows up both in the integrand and in the limits of integration.
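For the standard linear case $u' = Au + f(t)$, $u(0)=u_0$ (assumed here for concreteness, since the original problem statement isn't shown), the product-like bookkeeping is the Leibniz integral rule:

```latex
% Leibniz rule for an integral whose upper limit and integrand both involve t:
\[
  \frac{d}{dt}\int_0^t K(t,s)\,ds \;=\; K(t,t) \;+\; \int_0^t \frac{\partial K}{\partial t}(t,s)\,ds .
\]
% Applied to the Duhamel term K(t,s) = e^{(t-s)A} f(s):
\[
  \frac{d}{dt}\left( e^{tA}u_0 + \int_0^t e^{(t-s)A} f(s)\,ds \right)
  = A e^{tA}u_0 + f(t) + \int_0^t A e^{(t-s)A} f(s)\,ds
  = A u(t) + f(t).
\]
```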
Complex Analysis - Liouville's theorem
The zeros of non-constant analytic functions (with connected domain) are isolated points. So, since $g(0)=0$, there has to be an $r>0$ such that $g$ has no other zero in $D(0,r)$. And then $\frac1g$ is meromorphic there, since it is the quotient of two meromorphic functions.
Solve $xu_x+yu_y+zu_z=4u$
Converting to spherical coordinates, we get $$ru_r = 4u \implies u = f(\theta,\phi)r^4$$ Then plugging in our boundary condition at $r\cos\theta = 1$ (i.e. $z=1$), where $xy=r^2\sin^2\theta\sin\phi\cos\phi$, we can get $$f(\theta,\phi)r^4 = r^2\sin^2\theta\sin\phi\cos\phi\cdot(1)=r^2\sin^2\theta\sin\phi\cos\phi\cdot (r^2\cos^2\theta)$$ $$\implies f(\theta,\phi) = \cos^2\theta\sin^2\theta\sin\phi\cos\phi$$ by canceling out the $r^4$ on both sides. In other words, when we convert back to Cartesian coordinates we get the solution $$u(x,y,z) = xyz^2$$
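A quick symbolic check with SymPy (a sketch; it assumes, as the computation above suggests, that the boundary condition is $u=xy$ on the plane $z=1$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = x * y * z**2  # candidate solution

# PDE: x u_x + y u_y + z u_z = 4 u
lhs = x * sp.diff(u, x) + y * sp.diff(u, y) + z * sp.diff(u, z)
assert sp.simplify(lhs - 4 * u) == 0

# Boundary condition u = x*y on the plane z = 1
assert sp.simplify(u.subs(z, 1) - x * y) == 0
```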
"if K is compact in R^p, then K x {a} is also compact in R^(p+1)" without using the Heine-Borel Theorem
Have you been taught that in $\mathbb R^p$ compactness is the same as being closed and bounded? P.S. I didn't notice that you couldn't use Heine-Borel. In that case, take any open cover $\{U_\alpha\}$ of $K\times\{a\}$. The slices $V_\alpha=\{x\in\mathbb R^p : (x,a)\in U_\alpha\}$ are open in $\mathbb R^p$ and form an open cover of $K$. Choose a finite subcover $V_{\alpha_1},\dots,V_{\alpha_n}$; then $U_{\alpha_1},\dots,U_{\alpha_n}$ cover $K\times\{a\}$.
Is there any real number except 1 which is equal to its own irrationality measure?
Using the property stated in that article: $$\mu(x)=2 + \limsup \frac{\log a_{n+1}}{\log q_n}$$ where the continued fraction expansion for $x$ is $[a_0,a_1,...]$ and the $n$th convergent is $\frac{p_n}{q_n}$. Start with $a_0=2$ and $a_1=2$, so $q_0=1$, $q_1=2$. Now, assume you have a continued fraction $$\frac{p_n}{q_n}=[a_0,...,a_n]$$ Define $a_{n+1}$ to be the least integer such that $2+\frac{\log a_{n+1}}{\log q_n}>\frac{p_n}{q_n}$. Then $x = [a_0,a_1,...] = \lim \frac{p_n}{q_n}$ will satisfy your requirement. It remains to bound $2+\frac{\log a_{n+1}}{\log q_n}-\frac{p_n}{q_n}$. In particular, you can use that $\log (a_{n+1}-1)>(\log a_{n+1}) -1$ to show that if $2+\frac{\log a_{n+1}}{\log q_n}-\frac{p_n}{q_n}>\frac{1}{\log q_n}$, then $$2+\frac{\log (a_{n+1}-1)}{\log q_n}>\frac{p_n}{q_n}$$ which would violate our definition of $a_{n+1}$. So $$\mu(x)=2+\limsup \frac{\log a_{n+1}}{\log q_n} = \lim \frac{p_n}{q_n}= x$$ So there exists such an $x$. You can easily get uncountably many such $x$ by choosing any values $a_{2n}\in\{1,2\}$ and then choosing the $a_{2n+1}$ by the above condition, again making the $\limsup$ equal to the limit of $\frac{p_n}{q_n}$. I think the same argument can be made to show that the set is dense in $[2,\infty)$. Basically, you can make such an $x$ starting with any finite sequence $[a_0,...,a_n]$ with $a_0\geq 2$. Indeed, the set is uncountable in any finite sub-interval $[a,b]$ with $b>a\geq 2$. I don't think this resolves the measurability issue, contrary to my earlier claims. It feels like $\{x:\mu(x)=x\}$ should be measurable, since it feels fairly constructive. On the other hand, it feels like if the set $\{x:\mu(x)=x\}$ has non-zero measure, then $\{x:\mu(x)=x+\alpha\}$ should have non-zero measure for each $\alpha\in\mathbb R$, and thus we'd have uncountably many disjoint sets of positive measure, which I believe is not possible. So my guess is that the set is measurable with measure $0$. But that is all gut, no proof.
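A small computational sketch of this construction (Python; the recurrences for the convergents and the choice of $a_{n+1}$ follow the recipe above, with the greedy condition rewritten as $a_{n+1}>q_n^{\,p_n/q_n-2}$):

```python
from math import log

# Start with a0 = a1 = 2; thereafter a_{n+1} is the least integer exceeding
# q_n**(p_n/q_n - 2), which is exactly 2 + log(a_{n+1})/log(q_n) > p_n/q_n.
a = [2, 2]
p = [2, 5]   # p_0 = a_0, p_1 = a_1*a_0 + 1
q = [1, 2]   # q_0 = 1,   q_1 = a_1
for n in range(1, 10):
    bound = q[n] ** (p[n] / q[n] - 2)
    a_next = int(bound) + 1              # least integer strictly greater
    a.append(a_next)
    p.append(a_next * p[n] + p[n - 1])   # standard convergent recurrences
    q.append(a_next * q[n] + q[n - 1])

# The quantity 2 + log(a_{n+1})/log(q_n) hugs p_n/q_n from above, with gap
# at most 1/log(q_n), so mu(x) = lim p_n/q_n = x.
n = 9
gap = 2 + log(a[n + 1]) / log(q[n]) - p[n] / q[n]
assert 0 < gap < 1 / log(q[n])
```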
CAR-algebra contains a unitary with full spectrum
Using the natural inclusions $M_2(\mathbb C)\subset M_4(\mathbb C)\subset\cdots\subset M_{2^\infty}$ via $$\tag{1} A\longmapsto \begin{bmatrix} A&0\\0&A\end{bmatrix}, $$ construct $$ U_1=\begin{bmatrix} e^{2\pi i 1/2}&0\\0&1\end{bmatrix},\ \ U_2=\begin{bmatrix} e^{2\pi i 1/4} &0&0&0\\ 0 &e^{2\pi i 2/4} &0&0 \\ 0&0& e^{2\pi i 3/4}&0\\ 0&0&0& 1\end{bmatrix}, $$ and $U_k\in M_{2^k}(\mathbb C)$ has diagonal $\{e^{2\pi i r/2^k}\}_{r=1}^{2^k}$. Using the embedding $(1)$, one can check that \begin{align} \|U_{k+1}-U_k\|^2&=\max\{|e^{2\pi i 2r/2^{k+1}}-e^{2\pi i (2r+1)/2^{k+1}} |^2: \ r=1,\ldots,2^{k} \}\\ \ \\ &=|1-e^{2\pi i 1/2^{k+1}} |^2={(1-\cos \pi/2^{k})^2+\sin^2\pi/2^k}\\ \ \\ &=2(1-\cos\pi/2^k)=O(2^{-2k}). \end{align} Then $$ \|U_{k+\ell}-U_k\|=\|\sum_{j=1}^{\ell}(U_{k+j}-U_{k+j-1})\|\leq\sum_{j=1}^{\ell}\|U_{k+j}-U_{k+j-1}\|\leq c\,\sum_{j=1}^\ell 2^{-k-j}\leq c\,2^{-k+1}. $$ So the sequence $\{U_k\}$ converges to a unitary $U\in M_{2^\infty}$. For each $r,k$ we have $\lambda_{k,r}=e^{2\pi i r/2^k}\in\sigma(U_m)$ as long as $m>k$. So $U_m-\lambda_{k,r}I$ is not invertible for all $m>k$, and its limit $U-\lambda_{k,r}I$ cannot be invertible (the set of invertible elements is open, so an invertible element cannot be a limit of non-invertible ones). Thus $\lambda_{k,r}\in\sigma(U)$. As the set $\{\lambda_{k,r}:\ k\in\mathbb N,\ r=1,\ldots,2^k\}$ is dense in $\mathbb T$, we conclude that $\sigma(U)=\mathbb T$.
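A numerical sanity check of the norm estimate (a sketch assuming NumPy; it uses the unitarily equivalent embedding $A\mapsto A\otimes I_2$, which repeats each eigenvalue on adjacent diagonal slots, matching the pairing in the computation above):

```python
import numpy as np

def U(k):
    """Diagonal unitary with eigenvalues exp(2*pi*i*r/2^k), r = 1..2^k."""
    r = np.arange(1, 2**k + 1)
    return np.diag(np.exp(2j * np.pi * r / 2**k))

for k in range(1, 7):
    # Embed U(k) so each eigenvalue occupies two adjacent diagonal slots
    embedded = np.kron(U(k), np.eye(2))
    diff = np.linalg.norm(U(k + 1) - embedded, ord=2)       # operator norm
    predicted = np.sqrt(2 * (1 - np.cos(np.pi / 2**k)))     # the closed form
    assert np.isclose(diff, predicted)
```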