the least value for $m\ge 2005$ such that $a_{m+1}-1\mid a^2_m-1$
Note that $$ a_{n+1}-1=(n+1)(a_n-1) $$ and that $a_m^2-1=(a_m-1)(a_m+1)$, so the question becomes: find the least value $m \geq 2005$ such that $m+1 \mid a_m+1$. With a small amount of effort, the first formula can give you an explicit formula for $a_n$.
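For a quick numerical sanity check of this reduction, here is a small Python sketch; the initial value of the sequence is not quoted above, so $a_1=3$ is assumed purely for illustration:

```python
# Brute-force check of the reduction a_{m+1}-1 | a_m^2-1  <=>  m+1 | a_m+1.
# The initial value a_1 = 3 is an assumption, not taken from the question.
def least_m(a1=3, start=2005, stop=2100):
    seq = [None, a1]                               # seq[n] holds a_n
    for n in range(1, stop + 1):
        seq.append((n + 1) * (seq[n] - 1) + 1)     # a_{n+1} - 1 = (n+1)(a_n - 1)
    for m in range(start, stop):
        divides = (seq[m] ** 2 - 1) % (seq[m + 1] - 1) == 0
        assert divides == ((seq[m] + 1) % (m + 1) == 0)   # the reduction above
        if divides:
            return m

print(least_m())   # 2010 under the assumed a_1 = 3 (m + 1 = 2011 is prime)
```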
Are there real definite integrals which can only be evaluated by contour integration?
The answer to this question is likely no. For the example provided here there are some answers using only real analysis: "Calculating the integral $\int_{0}^{\infty} \frac{\cos x}{1+x^2}\mathrm{d}x$ without using complex analysis", "What are different ways to compute $\int_{0}^{+\infty}\frac{\cos x}{a^2+x^2}dx$?", and so on. My understanding is this: while most if not all of the integrals with a closed form can be evaluated by real methods, the solution would usually be very complicated and require some ad hoc tricks. Complex analysis and contour integration actually give a unified framework for this problem and allow one to evaluate more and more complicated integrals using the same algorithms as for the simple ones.
Sensitivity Analysis, RHS change in some constraints
I think that "the change will be larger or smaller if the right-hand-side is changed further" refers to the objective function difference when the change in RHS is more important. More precisely, assume that when you change your RHS from $b$ to $b+\delta$, and that the objective function increases by $\Delta$. When you change your RHS from $b$ to $b+\delta'$ with $\delta'>\delta$, the objective function increases by $\Delta'$. The question is whether $\frac{\Delta}{\delta} < \frac{\Delta'}{\delta'}$? This may be easier to see what it means by taking $\delta=1$ and $\delta'=2$. You can get the correct answer by recalling that the optimum value of an LP in minimization with $\ge$ constraints is a convex function of its RHS. Thus, the more the RHS increases, the more the marginal cost increase is important.
Does this set of symmetric matrices form a smooth manifold?
(Partial answer) Let me first answer an easier question: Let $M$ be the set of all real $k\times l$ matrices $A$ with rank$(A) = 1$. Then if $A \in M$, $A$ is controlled by one column: Let $M_j$, $j=1, \cdots, l$, be the subset with nonzero $j$-th column. Then $$M_j \cong \big(\mathbb R^{k}\setminus \{0\}\big) \times \mathbb R^{l-1}$$ where $\mathbb R^{k}\setminus \{0\}$ corresponds to the $j$-th column vector $\vec a_j$ and all other columns are $\vec a_i = c_i \vec a_j$, where $c_i \in \mathbb R$ and $i\neq j$. Now consider $M_i \cap M_j$, $i<j$, which is the set of all $A \in M$ with nonzero $i$-th and $j$-th columns. Let's say on $M_j$, $A$ is given by $$(\vec a_j, c_1, \cdots c_{j-1}, c_{j+1} \cdots, c_l).$$ That is, whenever $k\neq j$, we have $\vec a_k = c_k \vec a_j$. So $\vec a_i = c_i \vec a_j$ $$\Rightarrow \vec a_k = \frac{c_k}{c_i} \vec a_i$$ whenever $k\neq j$. Then on $M_i$, $A$ is given by $$\bigg(c_i \vec a_j, \frac{c_1}{c_i}, \cdots, \frac{c_{i-1}}{c_i}, \frac{c_{i+1}}{c_i}, \cdots, \frac{c_{j-1}}{c_i}, \frac{1}{c_i}, \frac{c_{j+1}}{c_i}, \cdots ,\frac{c_l}{c_i}\bigg)$$ (the $\frac{1}{c_i}$ sits in the $j$-th slot). This transition map is obviously differentiable, thus $M$ is a smooth manifold (actually a submanifold of $\mathbb R^{kl}$) of dimension $k+l-1$. Now note that your space can be identified with some subspace of $M$. (Fixing some $i$ and $j$, then consider $j-i = k$ and $l-k = "l"$ (sorry for the bad choice of notation)). Indeed, the set of all such $A$ can be given as the zero set of several smooth functions on $M$. For example, let $n=4$ and suppose you are considering the submatrix with $i=1, j=2, k = 1, l = 3$. Then $M$ is the $(2 + 3-1)$-dimensional manifold constructed above, and the space you are considering is the zero set of $f (A) = a_{12} - a_{21}$ in $M$. I think it will not be difficult (but messy) then to check that your space is really a smooth manifold.
Value of $a$ for real roots and finding who has a higher chance of success when one picks $a$ from $N(0,1)$ and another from an $U$?
HINT: for the uniform distribution, you can easily determine the probability that a number $a$ randomly chosen in the range from $-2\sqrt{2\pi}$ to $2\sqrt{2\pi}$ has absolute value $\geq 2\sqrt{\log 2}$, thus falling in the intervals that you have correctly identified. Simply calculate the proportion of the whole range that is occupied by such intervals. For the standard normal distribution, even without calculating the exact probability of getting a number $a$ that satisfies the conditions above, you can recall the proportions of the values that are included within $1,2,3,\ldots$ standard deviations of the mean. This is sufficient to solve the problem.
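If you want to confirm the hint numerically, here is a short check (scipy assumed; the threshold $2\sqrt{\log 2}$ and the range $[-2\sqrt{2\pi},\,2\sqrt{2\pi}]$ are taken from the question as quoted):

```python
import math
from scipy.stats import norm

t = 2 * math.sqrt(math.log(2))   # real roots require |a| >= t
L = 2 * math.sqrt(2 * math.pi)   # the uniform distribution lives on [-L, L]
p_uniform = (L - t) / L          # proportion of the range with |a| >= t
p_normal = 2 * norm.sf(t)        # standard normal: P(|a| >= t)
print(p_uniform, p_normal)
```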
Modular arithmetic $a=bq+r$
Definition of congruence modulo $b$: $a\equiv r\mod b$ iff $b\mid a-r$ (divisibility) iff there exists $q$ such that $bq = a-r$. Now if $a=bq+r$, then $b\mid a-r$ and so $a\equiv r\mod b$.
Combinatorial problem on counting binary sequences
Your problem is equivalent to the following one. Start at $\langle 0,0\rangle$ on the integer grid, and take $n$ steps, where each step is either an up-step (from $\langle a,b\rangle$ to $\langle a+1,b+1\rangle$) or a down-step (from $\langle a,b\rangle$ to $\langle a+1,b-1\rangle$). How many ways are there to reach $\langle n,k\rangle$ while staying below the line $y=k$ for the first $n-1$ steps? For any such path the last step must be an up-step from $\langle n-1,k-1\rangle$, so we can equivalently count paths of length $n-1$ from $\langle 0,0\rangle$ to $\langle n-1,k-1\rangle$ that never rise above the line $y=k-1$. Let $\mathscr{P}$ be the set of all paths from the origin to $\langle n-1,k-1\rangle$, and let $\mathscr{P}_0$ be the subset of paths that do not rise above the line $y=k-1$. Clearly $n-k$ must be even, so let $n=2m+k$. Clearly any path in $\mathscr{P}$ must have $m+k-1$ up-steps and $m$ down-steps, and any combination of $m+k-1$ up-steps and $m$ down-steps is a path in $\mathscr{P}$, so $|\mathscr{P}|=\binom{2m+k-1}m=\binom{n-1}m$. Now suppose that $P\in\mathscr{P}\setminus\mathscr{P}_0$; then $P$ hits the line $y=k$, so there is a least $\ell$ such that $\langle\ell,k\rangle$ is in $P$. Reflect the part of $P$ from $\langle\ell,k\rangle$ to $\langle n-1,k-1\rangle$ in the line $y=k$, converting each down-step into an up-step and vice versa, to get a new path $P'$. That part of $P$ has a net fall of $1$ unit, so its reflection has a net rise of $1$ unit, and $P'$ therefore ends at $\langle n-1,k+1\rangle$. Thus, $P'$ has $$\frac{(n-1)-(k+1)}2=\frac{2m-2}2=m-1$$ down-steps and $(m-1)+(k+1)=m+k$ up-steps. Clearly there are $\binom{n-1}{m-1}$ such paths. Moreover, every path from the origin to $\langle n-1,k+1\rangle$ crosses the line $y=k$ at some point, and reflecting the part of it to the right of that point in the line $y=k$ produces a path in $\mathscr{P}\setminus\mathscr{P}_0$, so $|\mathscr{P}\setminus\mathscr{P}_0|=\binom{n-1}{m-1}$. It follows that $$\begin{align*} |\mathscr{P}_0|&=\binom{n-1}m-\binom{n-1}{m-1}\\ &=\frac{(n-1)!}{m!(n-m-1)!}-\frac{(n-1)!}{(m-1)!(n-m)!}\\ &=\frac{\big((n-m)-m\big)(n-1)!}{m!(n-m)!}\\ &=\frac{k}m\binom{n-1}{m-1}\,. \end{align*}$$ As a quick sanity check, note that when $k=1$ the paths in $\mathscr{P}_0$ are just the reflections in the $x$-axis of the Dyck paths from $\langle 0,0\rangle$ to $\langle n-1,0\rangle$, and it is well known that there are $$\begin{align*} C_m&=\frac1{m+1}\binom{2m}m=\frac{(2m)!}{m!(m+1)!}\\ &=\frac1m\binom{2m}{m-1}=\frac{k}m\binom{n-1}{m-1}\,. \end{align*}$$
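A brute-force check of the final formula on a small instance (pure Python):

```python
from itertools import product
from math import comb

def count_paths(n, k):
    # Count paths of n up/down steps from height 0 to height k that stay
    # strictly below y = k for the first n - 1 steps.
    total = 0
    for steps in product((1, -1), repeat=n):
        heights = [sum(steps[:i + 1]) for i in range(n)]
        if all(h < k for h in heights[:-1]) and heights[-1] == k:
            total += 1
    return total

n, k = 11, 3
m = (n - k) // 2
print(count_paths(n, k), k * comb(n - 1, m - 1) // m)   # both 90
```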
Generalized Feedback Shift Registers
I just read "Generalized Feedback Shift Register Pseudorandom Number Algorithm" by T. G. Lewis and W. H. Payne. I think that paper settles the question I was raising (going to the source, right?). In essence, the question is "What is the correct procedure to use the Generalized Feedback Shift Register Algorithm (GFSR)?". 1.- Start with a sequence and a primitive polynomial $x^{p}+x^{q}+1$. For example, $a_{0}=a_{1}=a_{2}=a_{3}=a_{4}=1$ and $x^{5}+x^{2}+1$. 2.- Elements of the sequence follow $a_{k}=a_{k-p+q}\bigoplus a_{k-p}$ with $k=p, p+1,...$. In this example, since we have the first 5 elements of the sequence and according to the polynomial, we are given that $p=5, q=2$. Therefore, we can know the next elements of the sequence \begin{matrix} a_{6}=a_{3}\bigoplus a_{1}=0 \\ a_{7}=a_{4}\bigoplus a_{2}=0 \\ a_{8}=a_{5}\bigoplus a_{3}=1 \\ a_{9}=a_{6}\bigoplus a_{4}=1 \\ ... \\ \end{matrix} So, in this way we construct the rest of the sequence: $\{a_{i}\}_{0}^{30}={1111100011011101010000100101100}$ In order to produce a better random sequence, we apply Kendall's algorithm. Although there are several variations of Kendall's algorithm, the point is to shift the original sequence $1111100011011101010000100|101100$ forwards by 6 bits, that is, $1011001111100011011101010|000100$. And again three times more (until we are back with the original sequence). This process gives the following sequence \begin{matrix} \text{Key} & \text{Sequence} \\ 0 & \|11111\|00011011101010000100|101100\\ 1 & 1011001111100011011101010|000100\\ 2 & 0001001011001111100011011|101010\\ 3 & 101010000100101100111100|011011\\ 4 & 0110111010100001001011001|111100 \end{matrix} Finally, we take n-tuples (in this example, 5-tuples are used) which are positioned as the columns of a new array: \begin{matrix} W_{0}: & \|1\|1010 & W_{10}: & 01001& W_{20}: & 00111\\ W_{1}: & \|1\|0001 & W_{11}: & 10000& W_{21}: & 01111\\ W_{2}: & \|1\|1011 & W_{12}:& 10110& W_{22}: & 10010\\ W_{3}: & \|1\|1100 & W_{13}:& 10100& W_{23}: & 01100\\ W_{4}: & \|1\|0011 & W_{14}:& 01110& W_{24}: & 00101\\ W_{5}: & 00001 & W_{15}:& 11111& W_{25}: & 10101\\ W_{6}: & 01101 & W_{16}:& 00100& W_{26}: & 00011\\ W_{7}: & 01000 & W_{17}:& 11000& W_{27}: & 10111\\ W_{8}: & 11101 & W_{18}:& 01011& W_{28}: & 11001\\ W_{9}: & 11110 & W_{19}:& 01010& W_{29}: & 00110 \end{matrix} Each $W_{i}$ is called a 'word'. Since each column obeys the recurrence $a_{k}=a_{k-p+q}\bigoplus a_{k-p}$, each word must also obey $W_{k}=W_{k-p+q}\bigoplus W_{k-p}$. As far as I know, that's the correct procedure for using the GFSR algorithm. Corrections or comments will be appreciated.
Boundary is equal to its closure
Yes, you are right: the boundary of $S$ is defined as $$Bd(S)=\overline{S}\setminus Int(S)=\overline{S}\cap (X\setminus Int(S)),$$ which is the intersection of two closed sets and thus closed. Therefore, $Cl(Bd(S))=Bd(S)$, as you claimed.
Ultrafilter closed under negative shift
No. Exactly one of $2\mathbb N$, $1+2\mathbb N$ must be in $\mathcal U$; shifting $1+2\mathbb N$ down by $1$ gives $2\mathbb N$ (and shifting $2\mathbb N$ gives the odd numbers), so closure under the negative shift would force $\mathcal U$ to contain two disjoint sets, which no filter does.
$\{x\in\mathbb R:x\sin x\le 1,x\cos x\le 1\}$ closed or open $?$
Take $f,g : \mathbb{R} \to \mathbb{R}$ be the functions $$ f(x)= x\sin(x) \text{ and } g(x) = x\cos(x) $$ Then the set in question is $$ S = \{x\in \mathbb{R} : f(x)\leq 1, g(x) \leq 1\} $$ Since $f$ and $g$ are continuous, if $(x_n) \subset S$ such that $x_n \to x$, then $$ f(x_n) \to f(x) \text{ and } g(x_n) \to g(x) $$ Since $f(x_n) \leq 1$ for all $n\in \mathbb{N}$, it must follow (why?) that $f(x) \leq 1$. Similarly, $g(x) \leq 1$, and so $$ x\in S $$ Edit: As indicated by a commenter, I include an explanation as to why this implies that $S$ contains all its limit points: If $x\in \mathbb{R}$ is a limit point of $S$, then for each $n \in \mathbb{N}$, the open set $$ (x-1/n,x+1/n) $$ must intersect $S$ non-trivially. Choose a point $x_n \in (x-1/n,x+1/n)\cap S$. Now note that $|x_n - x| < 1/n$, so $x_n \to x$. Now complete the argument as above to conclude that $x\in S$.
Proving $\frac{x}{\sqrt{y^2+yz+z^2}}+\frac{y}{\sqrt{z^2+zx+x^2}}+\frac{z}{\sqrt{x^2+xy+y^2}} \ge \sqrt{3}$
The function $f(x) = \frac 1 {\sqrt x}$ is convex, so by Jensen's inequality $$ \dfrac{x}{\sqrt{y^2+yz+z^2}}+\dfrac{y}{\sqrt{z^2+zx+x^2}}+\dfrac{z}{\sqrt{x^2+xy+y^2}} \geq \\ \frac {(x + y + z)^{3/2}} { \sqrt{x(y^2 + yz + z^2) + y(z^2 + zx + x^2) + z(x^2 + xy + y^2)} } $$ Our inequality becomes $$ x(y^2 + yz + z^2) + y(z^2 + zx + x^2) + z(x^2 + xy + y^2) \leq \frac 1 3 (x + y + z)^3 $$ Expanding both sides, it is reduced to $$ xyz\leq \frac {x^3 + y^3 + z^3} {3}, $$ which is true by AM-GM.
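A quick symbolic check of this algebraic reduction (sympy):

```python
from sympy import symbols, expand, Rational

x, y, z = symbols('x y z')
lhs = x*(y**2 + y*z + z**2) + y*(z**2 + z*x + x**2) + z*(x**2 + x*y + y**2)
print(expand(Rational(1, 3)*(x + y + z)**3 - lhs))
# x**3/3 + y**3/3 + z**3/3 - x*y*z (up to term order), nonnegative by AM-GM
```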
Are these two grammars equivalent?
That is correct. More specifically, the language that is generated by the first grammar is $\{a^n b^n \mid n \in \mathbb{N}\}$, while the second grammar generates $\{a^{n+ 1} b^{n + 1} \mid n\in \mathbb{N}\}$.
Evaluating $\sum _{n=1}^{\infty } \sinh ^{-1}\left(\frac{1}{\sqrt{2^{n+1}+2}+\sqrt{2^{n+2}+2}}\right)$
Note that $$\sinh^{-1} x-\sinh^{-1} y= \sinh^{-1} \left( x\sqrt{1+y^2}-y\sqrt{1+x^2} \right)$$ \begin{align*} \frac{1}{\sqrt{2^{n+2}+2}+\sqrt{2^{n+1}+2}} &= \frac{\sqrt{2^{n+2}+2}-\sqrt{2^{n+1}+2}} {(2^{n+2}+2)-(2^{n+1}+2)} \\ &= \frac{\sqrt{2^{n+2}+2}-\sqrt{2^{n+1}+2}} {2^{n+1}} \\ &= \sqrt{\frac{1}{2^{n}} \left( 1+\frac{1}{2^{n+1}} \right)}- \sqrt{\frac{1}{2^{n+1}} \left( 1+\frac{1}{2^{n}} \right)} \\ \sinh^{-1} \left( \frac{1}{\sqrt{2^{n+1}+2}+\sqrt{2^{n+2}+2}} \right) &= \sinh^{-1} \frac{1}{\sqrt{2^{n}}}-\sinh^{-1} \frac{1}{\sqrt{2^{n+1}}} \\ \sum_{n=1}^{\infty} \sinh^{-1} \left( \frac{1}{\sqrt{2^{n+1}+2}+\sqrt{2^{n+2}+2}} \right) &= \sinh^{-1} \frac{1}{\sqrt{2}} \\ &= \ln \left( \frac{1+\sqrt{3}}{\sqrt{2}} \right) \\ &= \ln \sqrt{2+\sqrt{3}} \end{align*}
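A numerical sanity check of the telescoping and the closed form:

```python
import math

partial = sum(math.asinh(1 / (math.sqrt(2**(n + 1) + 2) + math.sqrt(2**(n + 2) + 2)))
              for n in range(1, 100))
# all three values agree
print(partial)
print(math.asinh(1 / math.sqrt(2)))
print(math.log(math.sqrt(2 + math.sqrt(3))))
```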
prove $\lim\limits_{x\to-e^{+}}(x+e)^{1/2}\int_{0}^{+\infty}\frac{1}{e^t+xt}dt=\pi\sqrt{\frac{2}{e}}$
Rewrite the integral as $$I=\int_0^{\infty}\frac{dt}{e^t-et+(x+e)t}.$$ It is easy to check that $e^t\geq e t$ for all $t>0$ and the only point where the equality is achieved is $t=1$. Therefore the main asymptotic contribution to the integral will come from the vicinity of this point (note that $y=et$ is the tangent line to $y=e^t$ at $t=1$). Taylor expanding $$e^t-et=\frac{e}{2}(t-1)^2+O((t-1)^3)$$ and making the change of variables $t-1=\sqrt{x+e}\cdot\sqrt{\frac{2}{e}}\cdot s$, we then get \begin{align} I\simeq\int_{1-\Delta}^{1+\Delta}\frac{dt}{\frac{e}{2}(t-1)^2+(x+e)+\ldots}\simeq\frac{1}{\sqrt{x+e}}\sqrt{\frac{2}{e}}\int_{-\infty}^{\infty}\frac{ds}{ s^2+1}\simeq \frac{\pi}{\sqrt{x+e}}\sqrt{\frac{2}{e}}. \end{align}
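The asymptotics can be checked numerically (scipy assumed), splitting the integral so the quadrature resolves the peak at $t=1$:

```python
import numpy as np
from scipy.integrate import quad

def scaled_integral(eps):                    # eps = x + e > 0
    f = lambda t: 1.0 / (np.exp(t) - np.e * t + eps * t)
    I = quad(f, 0, 10, points=[1.0])[0] + quad(f, 10, np.inf)[0]
    return np.sqrt(eps) * I

# agree up to the O(sqrt(eps)) correction
print(scaled_integral(1e-4), np.pi * np.sqrt(2 / np.e))
```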
find when matrix is not diagonalizable
Hint: A non-diagonalizable matrix must have a repeated eigenvalue.
study a sequence for increasing/decreasing
We define $f(x)=(1+k/x)^{x+1},$ and want to show this is eventually decreasing if $k=1,2$ while it is eventually increasing if $k\ge 3.$ If by $\exp(a)$ we mean $e^a,$ then we have $f(x)=\exp(g(x))$ where $$g(x)=(x+1)\ln(1+k/x).$$ Note that since $\exp(u)$ is strictly increasing, we know that $f$ is increasing/decreasing iff $g$ is so. The derivative of $g(x)$ is now $$g'(x)=\ln(1+k/x)-\frac{k(x+1)}{x(x+k)},\tag{1}$$ after simplifying it. Now we can apply the series for $\ln(1+t)=t-t^2/2+t^3/3 \cdots$ to this, and note since we're only interested in eventually large $x$ that the value $k/x$ is eventually less than $1$ so that the log series will converge when $t=k/x.$ Furthermore note that then the log series is an alternating series whose terms have strictly monotone decreasing absolute values. This means that "tail ends" of the log series have definite sign, specifically a tail starting with a positive term has a positive sum, and a tail starting at a negative term has a negative sum. Define $h(k)$ as the term $k(x+1)/[x(x+k)]$ which is the fraction subtracted from the log term on the right side of $(1).$ We first assume that $k\ge 3$ and we take the first two terms of the log series, namely $k/x-(1/2)(k/x)^2,$ and subtract off $h(k)$ which then gives $$\frac{k[(k-2)x-k^2]}{2x^2(x+k)}.$$ Since $k \ge 3$ this is eventually positive. Now putting the rest of the log series back we're beginning at a positive term $+(1/3)(k/x)^3,$ so that the total sum remains positive and we have shown eventually $g'(x)>0$ in case $k \ge 3.$ In the two cases for $k=1,2$ we need to use the first three terms of the log series, and then subtract $h(k)$ as above, this time wanting the result to be eventually negative, because now the first term in the tail is the negative value $-(1/4)(k/x)^4$ making the tail cause the total series to give a negative sum and so get $g'(x)<0$ for large $x.$ What we find is that for $k=1$ our first three terms with $h$ subtracted is $(2-3x)/(6x^3),$ while in case $k=2$ this is $-4(x-4)/(3x^3(x+2)),$ in each case eventually negative as desired in order to show $g'(x)<0$ for large $x.$
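A numerical spot-check of the claimed eventual monotonicity (using log1p for accuracy at large $x$):

```python
import math

def f(k, x):
    # (1 + k/x)^(x+1), computed stably as exp((x+1) * log(1 + k/x))
    return math.exp((x + 1) * math.log1p(k / x))

for k in (1, 2, 3, 4):
    print(k, "decreasing" if f(k, 1e4) > f(k, 1e5) else "increasing")
# k = 1, 2: decreasing;  k = 3, 4: increasing
```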
Find the orthogonal projection of the function $\cos t$ on the kernel space $V = \operatorname{Span} \{\sin t, 1-\cos t\}$
Hint: For a map $f \in V = \text{Lin}\{1, \sin t, \cos t\}$, the projection on $W= \text{Lin}\{\sin t, 1-\cos t \}$ is a map $p(f)$ such that $p(f)(t)=a \sin t+b (1-\cos t)$, where $a,b \in \mathbb R$ must be determined by the fact that $$\begin{cases} \langle f-p(f), \sin t\rangle &=0\\ \langle f-p(f), 1-\cos t\rangle &=0 \end{cases}$$ That gives two equations for the two unknowns $a,b$. Note: if you know about orthonormal bases and the Gram–Schmidt process, there is a more general approach.
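A sketch of this computation in sympy, assuming the inner product $\langle u,v\rangle=\int_0^{2\pi}uv\,dt$ (the inner product is not specified in the hint, so this choice is an assumption):

```python
from sympy import symbols, sin, cos, pi, integrate, solve

t, a, b = symbols('t a b')
f = cos(t)
p = a * sin(t) + b * (1 - cos(t))
ip = lambda u, v: integrate(u * v, (t, 0, 2 * pi))   # assumed inner product
print(solve([ip(f - p, sin(t)), ip(f - p, 1 - cos(t))], [a, b]))
# {a: 0, b: -1/3} under this choice of inner product
```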
Expectation inequality of absolute values
We have that $$|EX| \leq E|X|,$$ which is true in general. By the conditional Jensen inequality, $$E(|X+Y| \mid X) \geq |E(X+Y\mid X)| = |X + E(Y\mid X)| = |X|,$$ where the last equality uses $E(Y\mid X)=0$ (as in the setup of the question). Take the expected value (and use the tower rule, a.k.a. the law of total expectation): $$E\left[E(|X+Y|\mid X)\right] = E|X+Y| \geq E|X|.$$
Sum of this sequence
A sequence yielding $199$ is $$11,1,12,2,13,3,14,4,15,5,16,6,17,7,18,8,19,9,20,10.$$ As to why $199$ is indeed the maximum: It is clear that in every even spot you need a number in $\{1,\ldots,10\}$ while you need a number in $\{11,\ldots,20\}$ in every odd spot to maximize the sum (or vice versa, of course). Hence if the sum would be circular (add $|a_{20} - a_1|$ to the sum), every number in $\{11,\ldots,20\}$ gets added twice and every number in $\{1,\ldots,10\}$ gets subtracted twice. This results in $$20 \cdot 19 - 2 \cdot 10 \cdot 9 = 200$$ for the circular sum. Making $|a_{20} - a_1|$ smallest possible, i.e. 1, yields a maximum of $199$ for the non-circular sum. So as long as you put $a_1 = 11$ and $a_{20} = 10$ and alternate between large and small numbers, you will always get $199$.
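The same argument can be brute-forced on a smaller instance to build confidence (here $n=8$ in place of $20$):

```python
from itertools import permutations

n, half = 8, 4
best = max(sum(abs(p[i] - p[i + 1]) for i in range(n - 1))
           for p in permutations(range(1, n + 1)))
circular = 2 * (sum(range(half + 1, n + 1)) - sum(range(1, half + 1)))
print(best, circular - 1)   # both 31: circular maximum minus the smallest gap of 1
```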
Finding the angle of $\angle x$ and $ \angle y $
Since $ABCD$ is a rhombus, $\angle A = \angle DCB$, and $DC=CB$. Thus, $\triangle DCB$ is isosceles, and you can figure out $y$ from that. Once you find $y$, you should be able to figure out what the degree measure of $\angle ADC$ is, and from there, you can get $\angle EDC$. From there, you can get $x$.
Finding basis for vector space of polynomials with complex coefficients
You made a mistake. If $p(z)=p(-z)$, then $bz^3+dz$ is the null polynomial, and therefore$$p(z)=az^4+cz^2+e.$$and$$p(-2)=0\iff16a+4c+e=0\iff e=-16a-4c.$$So, your space is$$\{az^4+cz^2-16a-4c\mid a,c\in\Bbb C\},$$of which $\{z^4-16,z^2-4\}$ is a basis.
Arc length vs plane measure
The question is legitimate, but the answer is no: clearly any shape of non-empty area contains infinitely many disjoint segments. Consequently, if segments had non-zero measure, then by additivity the measure of the shape would be infinite.
Expected value and memoryless property of Geometric
Based on your clarification of the rules in the comments, if you guess more than $1$, and always update on a miss by adding more than $1$, then as you noted, you will never win. It follows that in order to win, at some point, you must wait it out. Based on that, it's clear that no strategy can yield a probability of winning more than $1/6$. If your initial guess is $1$, then you win with probability $1/6$. If your initial guess is more than $1$, then if a $6$ appears on the first roll, you lose, hence no strategy with an initial guess of more than $1$ has a winning probability more than $(5/6){\,\cdot\,}(1/6)=5/36$. It follows that the unique optimal strategy is to make an initial guess of $1$ (and then you win or lose based on the result of the first roll).
Successive localizations of a module
Hint: Let $B$ be a (commutative) $A$-algebra, $C$ a $B$-algebra, and $M$ an $A$-module. There is a canonical isomorphism: $$(M\otimes_A B)\otimes_B C\simeq M\otimes_A C.$$
show the sequence has the limit 0, $x_n$=$\frac{10^{3n}}{n!}$
For all $n > 10^3$, you have $$x_n = \frac{10^{3n}}{n!} = \frac{10^3 \times 10^3 \times ... \times 10^3}{1 \times 2 \times ... \times 10^3} \times \frac{10^3 \times ... \times 10^3}{(10^3+1) \times ... \times n}$$ so $$x_n \leq \frac{10^3 \times 10^3 \times ... \times 10^3}{1 \times 2 \times ... \times 10^3} \times \left( \frac{10^3}{10^3+1} \right)^{n-10^3}$$ Now you have a geometric sequence that converges to $0$. By comparison, $(x_n)$ tends to $0$.
diagram of short exact sequence
A much simpler diagram suffices (or the symmetrical subdiagram with $Z$ instead of $T$): $$\begin{matrix} &&&0\\ &&&\downarrow\\ 0\to&H&\to &W_1&\to&T\\ &\downarrow&\searrow&\downarrow\rlap{\scriptstyle\subseteq}&&\|\\ 0\to &W_2&\stackrel\subseteq\to &V&\to &T \end{matrix} $$ where all meshes commute; the middle column is exact at $W_1$ (which just rephrases that we view $W_1$ as a subspace of $V$); the top row is exact; and the lower row is a complex which is exact at $W_2$ (again, this just rephrases that we view $W_2$ as a subspace of $V$). Assume $w\in W_1\cap W_2$. Then $w\mapsto 0$ under $W_2\to V\to T$, hence also under $W_1\to V\to T$, hence by exactness of the top row $w$ "comes" from $H$. On the other hand, the diagonal arrow maps any element of $H$ to an element of both $W_1$ and $W_2$ (namely along $H\to W_1\hookrightarrow V$ and $H\to W_2\hookrightarrow V$). The two maps $H\leftrightarrow W_1\cap W_2$ obtained this way are mutually inverse.
Do The Eigenvectors of a Positive Semi-Definite matrix span the column space.
Let $$A=\pmatrix{0&0\cr1&2\cr}$$ Then $$B=A^tA=\pmatrix{0&1\cr0&2\cr}\pmatrix{0&0\cr1&2\cr}=\pmatrix{1&2\cr2&4\cr}$$ $B$ has eigenvalues $0$ and $5$ with eigenvectors $(2,-1)$ and $(1,2)$, respectively, so it is positive semi-definite. Those eigenvectors span all of ${\bf R}^2$, but the column space of $B$ is one-dimensional, spanned by $(1,2)$.
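Verifying this counterexample numerically:

```python
import numpy as np

A = np.array([[0, 0], [1, 2]])
B = A.T @ A
eigenvalues, eigenvectors = np.linalg.eigh(B)
print(B)                           # [[1 2], [2 4]]
print(eigenvalues)                 # [0. 5.]
print(np.linalg.matrix_rank(B))    # 1: the column space is one-dimensional
```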
Does the boundary of metric balls have measure $0$ in a metric measure space with Radon measure?
No. For a trivial example consider a finite metric space. For a less trivial example, consider $\Bbb R$ with Lebesgue measure, but with the metric $d(x,y)=\min(|x-y|,1)$, which generates the usual topology.
How would you do this algebra question?
i) Consider the leading coefficient of $f(x)g(x)$. ii) Yes, by point i).
Does the series converge/converge absolutely/diverge
The alternating series test says that $$ \sum_{n=2}^\infty\frac{(-1)^n}{n^a\log(n)} $$ converges since $\frac1{n^a\log(n)}$ monotonically decreases to $0$. For absolute convergence, by comparison to $$ \sum_{n=3}^\infty\frac1{n^a} $$ the series converges absolutely for $a\gt1$. For $a=1$, the integral test shows that the series does not converge absolutely since $$ \int_2^M\frac1{x\log(x)}\,\mathrm{d}x=\log(\log(M))-\log(\log(2)) $$ diverges as $M\to\infty$. If you can't use the integral test, you can use the condensation test for $a=1$: $$ \sum_{n=1}^\infty2^n\frac1{2^n\log(2^n)}=\frac1{\log(2)}\sum_{n=1}^\infty\frac1n $$ diverges since the harmonic series diverges. Since the series does not converge absolutely for $a=1$, the comparison test with $$ \sum_{n=2}^\infty\frac1{n\log(n)} $$ shows that the series does not converge absolutely for $a<1$.
Why is the set $\{u\in C^1(\overline{\Omega}): \|\nabla u\|_\infty<1, u=0\text{ on } \partial\Omega\}$ open?
The question as clarified in comments is to check that it is open in $C^1_0(\overline{\Omega})$. Define $F: C^1_0 \to \mathbb R, \ F(u):=\|\nabla u\|_\infty$. Note that this function is Lipschitz with constant 1, since $$ |F(u) - F(v)| = | \|\nabla u \|_\infty - \|\nabla v\|_\infty| \leq \|\nabla(u-v)\|_\infty \leq \|u-v\|_{C^1}, $$ and your set can be written as $F^{-1}((-\infty,1))$, which is open as the preimage of an open set under a continuous map.
Finding $a_{-1}$ in the Laurent series of $f(z)=z^{3}\cdot\cos(\frac{1}{z})\cdot e^{\frac{1}{z^{2}}}$
Looks fine to me. You could also consider \begin{align} e^{w^2}\cos w &=\frac{e^{w^2+iw}+e^{w^2-iw}}{2} \\ &=\frac{1}{2}\left(1+(w^2+iw)+\frac{(w^2+iw)^2}{2}+\frac{(w^2+iw)^3}{6}+\frac{(w^2+iw)^4}{24}\right)+\\ &\phantom{{}={}} \frac{1}{2}\left(1+(w^2-iw)+\frac{(w^2-iw)^2}{2}+\frac{(w^2-iw)^3}{6}+\frac{(w^2-iw)^4}{24}\right)+o(w^4)\\ &=1+w^2+\frac{w^4-w^2}{2}-\frac{w^4}{2}+\frac{w^4}{24}+o(w^4)\\ &=1+\frac{w^2}{2}+\frac{1}{24}w^4+o(w^4) \end{align} Thus $$ z^3e^{1/z^2}\cos\frac{1}{z}=z^3+\frac{z}{2}+\frac{1}{24}z^{-1}+o(z^{-1}), $$ so again $a_{-1}=\frac{1}{24}$.
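A one-line sympy check of this expansion:

```python
from sympy import symbols, exp, cos, series

w = symbols('w')
print(series(exp(w**2) * cos(w), w, 0, 5))   # 1 + w**2/2 + w**4/24 + O(w**5)
```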
Limit: How to Conclude
As pointed out by David Mitra in comments, the function $$f(x) = \left\{1 + 6\left(\frac{\sin x}{x^{2}}\right)^{x}\frac{\log(1 + 10^{x})}{x}\right\}$$ is not well defined whenever $x \to \pm \infty$ because $\sin x$ becomes negative. The same holds when $x \to 0$. However if we restrict ourselves to $x \to 0^{+}$ then the function is well defined near $0$ and hence we may try to evaluate its limit as $x \to 0^{+}$. Clearly we can see that when $x \to 0^{+}$ then $\log(1 + 10^{x}) \to \log 2$ so that we ideally need to take care of the part $$g(x) = \frac{(\sin x)^{x}}{x^{2x + 1}}$$ which is better handled by taking logarithm. We have $$\begin{aligned}\log g(x) &= x\log\sin x - (2x + 1)\log x\\ &= x\log\left(\frac{\sin x}{x}\right) + x\log x - 2x\log x - \log x\\ &= x\log\left(\frac{\sin x}{x}\right) - x\log x - \log x\end{aligned}$$ Now as $x \to 0^{+}$ we can see that $(\sin x)/x \to 1$ so that the first term tends to $0$. The second term $x\log x$ also tends to $0$ and the last term $-\log(x)$ tends to $\infty$. So $\log g(x)$ tends to $\infty$ as $x \to 0^{+}$ and hence $g(x)$ also tends to $\infty$ as $x \to 0^{+}$. Since $f(x) = 1 + 6g(x)\log(1 + 10^{x})$ it follows that $f(x) \to \infty$ as $x \to 0^{+}$.
How can I prove that I am using valid mathematical induction?
Instead of an equals sign with a question mark showing what you want to prove, you should start with one side of the equation and produce the other. Given your assumption, you can then write $$\sum_{k=1}^{n+1}2k-1=\left(\sum_{k=1}^{n}2k-1\right)+2n+1\\ =n^2+2n+1\\ =(n+1)^2$$ You should not go from what you hope to prove to what you know, because some steps may not be reversible. Here they are, so you can just do the steps in reverse order. You should either start with an equation you know to be true, like your last, and finish with what you want to prove, or start with an expression and compute the other side of the equation. I did the second of these above.
Lower bounding over probability distributions
Define $s=|\mathcal{X}|$ and assume $s/k\leq 1$, $n>2$. Define $h=1-(s-1)/k$ and note that, by assumption, $0<1/k\leq p(x) \leq h$ for all $x \in \mathcal{X}$. This is a partial answer that shows, sometimes, the optimal solution is to allocate probability $1/k$ on $s-1$ of the alphabet symbols in $\mathcal{X}$, and $h$ on the remaining symbol. Define $(p^*(x))_{x \in \mathcal{X}}$ as this mass function. Other times this solution is not optimal and I suspect that the equal allocation $p_{equal}(x) = 1/s$ for all $x \in\mathcal{X}$ is likely optimal in such cases. Overall, Lagrange multipliers can help for this problem. Below I show one use; another use is via Karush-Kuhn-Tucker conditions, see here: https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions Constrained problem: \begin{align} \mbox{Minimize:} \quad & \sum_{x \in \mathcal{X}} p(x)^2(1-p(x))^{n-2} \\ \mbox{Subject to:} \quad & \sum_{x \in \mathcal{X}} p(x) = 1\\ \quad & 1/k \leq p(x) \leq h \quad \forall x \in\mathcal{X} \end{align} Unconstrained problem: Fix $\lambda \in \mathbb{R}$ and call it a "Lagrange multiplier." \begin{align} \mbox{Minimize:} \quad & \sum_{x \in \mathcal{X}} p(x)^2(1-p(x))^{n-2} + \lambda \sum_{x \in \mathcal{X}} p(x) \\ \mbox{Subject to:} \quad & 1/k \leq p(x) \leq h \quad \forall x \in\mathcal{X} \end{align} Claim (Lagrange multipliers): Fix $\lambda\in \mathbb{R}$. If $(p(x))_{x \in \mathcal{X}}$ is a solution to the unconstrained problem, and if $\sum_{x \in \mathcal{X}} p(x)=1$, then $(p(x))_{x \in \mathcal{X}}$ is also a solution to the constrained problem. Proof: Let $(p(x))_{x \in \mathcal{X}}$ be a solution to the unconstrained problem that satisfies $\sum_{x \in \mathcal{X}} p(x)=1$. Then it satisfies all constraints of the constrained problem. Let $(w(x))_{x \in \mathcal{X}}$ be another vector that satisfies all constraints of the constrained problem. We want to show that $p$ yields an objective value for the constrained problem that is less than or equal to that of $w$. Since $1/k \leq w(x) \leq h$ for all $x \in \mathcal{X}$ we have: $$ \sum_{x \in \mathcal{X}} p(x)^2(1-p(x))^{n-2} + \lambda\underbrace{\sum_{x \in \mathcal{X}} p(x)}_{1} \leq \sum_{x \in \mathcal{X}} w(x)^2(1-w(x))^{n-2} + \lambda\underbrace{\sum_{x \in \mathcal{X}} w(x)}_{1}$$ and so $$\sum_{x \in \mathcal{X}} p(x)^2(1-p(x))^{n-2} \leq \sum_{x \in \mathcal{X}} w(x)^2(1-w(x))^{n-2} $$ Thus, $p$ is optimal for the constrained problem. $\Box$ Define $\lambda \in \mathbb{R}$ to satisfy: $$ (1/k)^2(1-(1/k))^{n-2} + \lambda (1/k) = h^2(1-h)^{n-2} + \lambda h$$ The unconstrained minimization separates over each $x \in \mathcal{X}$. For a given $x \in \mathcal{X}$ the unconstrained minimization is: \begin{align} \mbox{Minimize:} \quad & p(x)^2(1-p(x))^{n-2} + \lambda p(x) \\ \mbox{Subject to:} \quad & 1/k \leq p(x) \leq h \end{align} The function to be minimized is differentiable in $p(x)$, so the minimum is at a critical point, being either an endpoint $1/k$ or $h$, or a point in between that has zero derivative. I chose the above value $\lambda$ so that both endpoints $x=1/k$ and $x=h$ achieve the same value for the expression: $$p(x)^2(1-p(x))^{n-2} + \lambda p(x)$$ In certain cases, these two endpoints $1/k$ and $h$ tie for minimizing this expression.
Hence, in these cases, the mass function $(p^*(x))_{x \in \mathcal{X}}$, which uses only values $1/k$ or $h$, solves the unconstrained problem and satisfies $\sum_{x \in \mathcal{X}} p(x) = 1$, so it also solves the constrained problem. Specifically, $p^*$ is optimal when we evaluate the following expression over $1/k \leq x \leq h$: $$p(x)^2(1-p(x))^{n-2} + \lambda p(x)$$ and when this expression is optimized at the endpoints (both endpoints of this expression will always have the same value by definition of $\lambda$). I tested specific $(s,k,n)$ values and plotted $p^2(1-p)^{n-2}+\lambda p$ in matlab over the interval $[1/k,h]$. I get: $(s,k,n)=(5,10,4)$: Picture shows endpoints optimal, suggesting $p^*$ optimal. $p^*$ beats equal allocation. $(s,k,n)=(3,10,4)$: Picture shows endpoints optimal, suggesting $p^*$ optimal. $p^*$ beats equal allocation. $(s,k,n)=(15,80,4)$: Picture shows endpoints optimal, suggesting $p^*$ optimal. $p^*$ beats equal allocation. $(s,k,n) = (8,10,4)$: Picture shows endpoints not optimal. Equal allocation is better than $p^*$. $(s,k,n) = (100,200,4)$: Picture shows endpoints not optimal. Equal allocation is better than $p^*$.
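For reference, here is a small Python reproduction of this endpoint test for two of the cases above (numpy in place of the matlab plots; the grid-based minimum is a sketch, not a proof):

```python
import numpy as np

def endpoints_optimal(s, k, n, grid=100001):
    h = 1 - (s - 1) / k
    obj = lambda p: p**2 * (1 - p)**(n - 2)
    lam = (obj(h) - obj(1 / k)) / (1 / k - h)   # makes the two endpoint values tie
    p = np.linspace(1 / k, h, grid)
    vals = obj(p) + lam * p
    return vals.min() >= vals[0] - 1e-12        # True iff no interior dip below

for s, k, n in [(5, 10, 4), (8, 10, 4)]:
    print((s, k, n), endpoints_optimal(s, k, n))   # True, then False
```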
Find the area of a triangle inscribed in the ellipse
$A=(0,3)$ is a vertex of the given ellipse. If $ABC$ is equilateral, by symmetry we have that $B$ and $C$ share the same $y$-coordinate, hence they are points of the form $$ B=\left(-2\sqrt{1-\frac{y^2}{9}},y\right),\qquad C=\left(2\sqrt{1-\frac{y^2}{9}},y\right) $$ with $y\in(-3,3)$. In order that $ABC$ really is equilateral, we must have $$ 3-y = \sqrt{3}\cdot 2\sqrt{1-\frac{y^2}{9}} $$ hence $y=-\frac{3}{7}$ and $BC=\frac{16}{7}\sqrt{3}$. It follows that $[ABC]=\color{red}{\frac{192}{49}\sqrt{3}}$.
$\mu(E_{\epsilon})<+\infty$ such that if $F \cap E_{\epsilon} = \emptyset$, then $\left \| f \chi_{F} \right \|_{p} < \epsilon$.
Let $E_n=\{x\in X: |f(x)|\geq \frac{1}{n}\}$ and observe that: $$\infty > \int_{E_n} |f|^p d\mu \geq \frac{1}{n^p} \mu (E_n), $$ thus $\mu(E_n) < \infty$ for each $n\in\mathbb{N}$. We show that: $$\lim_{n\rightarrow\infty} \int_{E_n} |f|^p d\mu = \int |f|^p d\mu $$ We have that $|f|^p\chi_{E_n}\leq|f|^p$ for each $n\in\mathbb{N}$ and, also, $|f|^p\chi_{E_n}\rightarrow|f|^p$. Thus, by the dominated convergence theorem we get the above expression. Finally, given $\epsilon>0$, there exists $N\in\mathbb{N}$ such that: $$\epsilon^p + \int_{E_N} |f|^p d\mu > \int |f|^p d\mu $$ by taking $E_{\epsilon}=E_N$, if $F\cap E_{\epsilon}=\emptyset$, then: $$\epsilon^p + \int_{E_\epsilon} |f|^p d\mu > \int |f|^p d\mu \geq \int_{E_{\epsilon}\cup F} |f|^p d\mu = \int_F |f|^p d\mu + \int_{E_{\epsilon}} |f|^p d\mu \implies$$ $$\epsilon^p > \int_F |f|^p d\mu$$
Elliptic Curves over Finite Fields as Two Cyclic Groups
If $E$ is an elliptic curve over $\mathbb{F}_q$, then the Weil pairing $E(\mathbb{F}_q)\times E(\mathbb{F}_q)\rightarrow \mathbb{F}_q^*$ shows that there exist positive integers $m_1,m_2$ such that $$ E(\mathbb{F}_q)\cong \mathbb{Z}/m_1 \mathbb{Z} \times \mathbb{Z}/m_2 \mathbb{Z}, $$ with $m_1\mid \gcd(m_2,q-1)$, see Chapter III, Corollary $8.1.1$ in Silverman's book "The Arithmetic of Elliptic Curves". So the surjectivity of the Weil pairing should help.
At least one eigenvalue among all roots
You have $p(x)=k(x-c_1)^{\alpha_1}\cdots(x-c_m)^{\alpha_m}$ for some $k\in K\setminus\{0\}$ and some natural $\alpha_j$'s (you can't conclude that the $\alpha_j$'s are equal to $1$). And$$p(f)=0\iff k(f-c_1\operatorname{Id})^{\alpha_1}\circ\cdots\circ(f-c_m\operatorname{Id})^{\alpha_m}=0.$$But then some $f-c_k\operatorname{Id}$ is not invertible (otherwise the composition would be invertible, hence nonzero), and therefore $c_k$ is an eigenvalue of $f$.
Elementary trigonometry: $\tan$
Since $\tan^2 2\theta =\frac{4\tan^2\theta}{(1-\tan^2\theta)^2}$, a solution is: $a=\frac{-1}{\tan \theta}$, $b=\frac{1}{\tan \theta}$, $c=\tan \theta$, $d=-\tan \theta$.
Meaning of multiple free variables in a traffic flow simulation using linear algebra.
One way to look at it is as having a "basic" solution $x$ and a family of all solutions $x+N$ s.t. $x\perp N$, where $N$ is the $k$-dimensional nullspace of your matrix. It just means that your data has nothing to say about the vectors in $N$. $x$ is the "unavoidable" part of the solution that is common to all solutions. If you were to solve this more generally using the pseudo-inverse (i.e., Moore-Penrose inverse), which also allows for over-determination, you would get a solution that doesn't touch the nullspace $N$.
Is there a way to extend operations as integration for summation?
This extension has been studied and is known as product calculus. Look here for product integral.
How to prove that for any real n*n matrix, the eigenvalues are real or are a complex conjugate pair?
The eigenvalues are the roots of the characteristic polynomial, and the coefficients of the characteristic polynomial are real since they are polynomial expressions in the (real) entries of the matrix. Since the non-real roots of a polynomial with real coefficients come in conjugate pairs, the complex eigenvalues come in conjugate pairs.
The linear map $ T: \mathbb R^3{\rightarrow} \mathbb R^3$ with given matrix is a rotation about some line. Find the line.
Edit: As Lubin points out in his/her comment, your matrix $A$ does not represent a rotation. One can easily see that $AA^T\ne I$ and $\det(A)\ne1$. The $(2,2)$-th entry is probably wrong and it may be $-2/7$. If this is really the case, it's not hard to see that all row sums of the corrected $A$ are equal to $1$. Therefore the axis of rotation is the line spanned by $x=(1,1,1)^T$ (because $Ax=x$). In general, for a non-diagonal rotation matrix $A$, you can read off the axis from the skew-symmetric part of $A$ directly. See my answer to q766565 "Find the axis of rotation of a rotation matrix by INSPECTION (NOT by solving $Kv=v$)". In your example, the skew-symmetric part (up to a factor) of $A$ is equal to $$ W=A-A^T=\frac37\pmatrix{0&1&-1\\ -1&0&1\\ 1&-1&0}, $$ therefore the rotation axis is the span of $(w_{23},w_{31},w_{12})^T$, which is the line spanned by $(1,1,1)^T$.
Importance of Noether normalisation lemma
If you have found the Noether Normalization Lemma in a commutative algebra book, just read on. You will see many applications, for example in dimension theory. The Lemma implies for example the fundamental formula $\dim(X)=\mathrm{trdeg}(K(X)/k)$ for affine varieties $X$ over a field $k$, and that $\dim(X \times_k Y) = \dim(X) + \dim(Y)$ if $X,Y$ are affine varieties over $k$. The Lemma is the main ingredient in the proof of Zariski's Lemma, which in turn implies Hilbert's Nullstellensatz. By the way, the Lemma has a nice geometric interpretation: Every affine variety over a field has a finite map to some affine space $\mathbb{A}^n$ (and this $n$ is the dimension of the variety). See SE/986279 for a specific example. In Eisenbud's book on commutative algebra you will also find a finer version which starts with a sequence of subvarieties which then corresponds to the sequence $\mathbb{A}^0 \subseteq \mathbb{A}^1 \subseteq \dotsc \subseteq \mathbb{A}^n$.
How do I prove $\log(x^n)=n\log|x|$?
It is only true when $x^n> 0$, so we assume it. We'll use the following definition, which is how Wikipedia and Wolfram define it: $$\log_b x=k\iff b^k=x$$ together with the exponentiation rule: $\,\displaystyle{b^{xy}=\left(b^y\right)^x}$ $$\log_b(x^n)=n\log_b |x|\iff b^{n\log_b |x|}=x^n$$ $$\iff \left(b^{\log_b |x|}\right)^n=x^n\iff |x|^n=x^n$$ $$\iff |x^n|=x^n,$$ which is true.
How can I prove this sequence converges to 1?
I guess they've meant "to compare to a geometric sum", here's what you can do: \begin{align} (1+a^n)^n &= \sum_{k=0}^n \binom{n}{k} a^{nk} = \sum_{k=0}^n \frac{n(n-1)\dots(n-k+1)}{k!} a^{nk} \le \\ &\le \sum_{k=0}^n \frac{n^k}{k!}a^{nk} \le \sum_{k=0}^n n^ka^{nk} \le \sum_{k=0}^\infty (na^n)^k = \frac{1}{1-na^n}\end{align} For $0<a<1$ we have $$\lim_{n\rightarrow\infty} na^n = 0$$so $$ \lim_{n\rightarrow\infty} \frac{1}{1-na^n} = 1 $$ and using the squeezing $$ 1\le a_n \le \frac{1}{1-na^n}$$ we get $$ \lim_{n\rightarrow\infty} a_n = 1$$
Schauder estimates of weak solutions of elliptic PDEs of 2nd order
A reference I have found very useful is the book "Second Order Elliptic Equations and Elliptic Systems" by Ya-Zhe Chen and Lan-Cheng Wu (English translation). The proofs are complete and well organized. In chapter 9, theorem 2.6 the theorem is proven when $k=\nabla \cdot F$ and $F$ is $C^{0,\alpha}$. The result also holds when $k\in L^p$ and $p>n$, and you can think about it as there being some $F\in W^{1,p}$ that is Hölder continuous by embedding. Very similar material is in the lecture notes (2012 in English and available online) by Mariano Giaquinta and Luca Martinazzi, "An Introduction to the Regularity Theory for Elliptic Systems, Harmonic Maps and Minimal Graphs." I think the results in chapter 5 encompass the result you ask about, and they also carry the results out to the boundary. This is all in the context of systems of equations, which is natural as the result does not require an application of the maximum principle. I am interested in how general such results hold; for example, for the Stokes equations similar results are shown in a 1982 paper by M. Giaquinta and G. Modica, "Non linear systems of the type of the stationary Navier-Stokes system." I believe the methods for this version of the Schauder estimates are very much due to S. Campanato in the 1960's; however, I have not seen any translations of his works. The iteration arguments and embedding inequalities developed by C. Morrey also play a crucial role.
How to calculate the Fourier transform of the Kaiser-Bessel window?
Using a parity property, the Fourier integral can be written as \begin{equation} K=\frac{2}{I_0(\pi\alpha)}\int_0^{L/2}I_0\left(\pi \alpha \sqrt{1-{(2x/L)}^2}\right)\cos(2\pi fx)\,dx \end{equation} We use the quoted series expansion for the modified Bessel function to obtain, after swapping integration and summation, \begin{align} K&=\frac{2}{I_0(\pi\alpha)}\sum_{m=0}^\infty \frac{1}{(m!)^2}\left( \frac{\pi \alpha}{2} \right)^{2m}\int_0^{L/2}\left( 1-(2x/L)^2 \right)^{m}\cos(2\pi fx)\,dx\\ &=\frac{L}{I_0(\pi\alpha)}\sum_{m=0}^\infty \frac{1}{(m!)^2}\left( \frac{\pi \alpha}{2} \right)^{2m}\int_0^1\left( 1-t^2 \right)^{m}\cos(t\pi fL)\,dt \end{align} This cosine transform is tabulated in Erdélyi (TI 1.3.8) or can be related to an integral representation of the Bessel function: \begin{equation} J_{\nu}\left(z\right)=\frac{2(\tfrac{1}{2}z)^{\nu}}{\pi^{\frac{1}{2}} \Gamma\left(\nu+\tfrac{1}{2}\right)}\int_{0}^{1}(1-t^{2})^{\nu-\frac{1}{2}} \cos\left(zt\right)\mathrm{d}t \end{equation} With $\nu=m+1/2,z=\pi fL$ one obtains \begin{equation} \int_0^1\left( 1-t^2 \right)^{m}\cos(t\pi fL)\,dt=m!\sqrt{\pi}\,2^{m-1/2}(\pi fL)^{-m-1/2}J_{m+1/2}(\pi f L) \end{equation} Then, after some simplifications, \begin{equation} K=\frac{L}{I_0(\pi\alpha)\sqrt{2fL}}\sum_{m=0}^\infty \frac{1}{m!}\left( \pi \alpha\right)^{2m}2^{-m}(\pi fL)^{-m}J_{m+1/2}(\pi f L) \end{equation} Such a series looks similar to the multiplication theorem for the Bessel functions: \begin{equation} J_{\nu}\left(\lambda Z\right)=\lambda^{\nu}\sum_{m=0}^{\infty} \frac{(-1)^{m}(\lambda^{2}-1)^{m}(\tfrac{1}{2}Z)^{m}}{m!}J_{\nu+ m}\left(Z\right) \end{equation} which is valid for any complex value of $\lambda$. We use $\nu=1/2,Z=\pi fL$ and $\lambda=\sqrt{1-\frac{\alpha^2}{f^2L^2}}$ with $\Im\lambda\ge0$ to write \begin{equation} K=\frac{L}{I_0(\pi\alpha)\sqrt{2fL}}\frac{1}{\left( 1-\frac{\alpha^2}{f^2L^2} \right)^{1/4}}J_{1/2}\left( \pi fL\sqrt{1-\frac{\alpha^2}{f^2L^2}} \right) \end{equation} and with the explicit expression for $J_{1/2}$, \begin{equation} K=\frac{L}{I_0(\pi\alpha)}\frac{\sin\left( \pi\sqrt{f^2L^2-\alpha^2} \right)}{\pi\sqrt{f^2L^2-\alpha^2} } \end{equation} which is valid for all the values of $f$. In particular, for $f<\alpha/L$, it is convenient to write the above expression as \begin{equation} K=\frac{L}{I_0(\pi\alpha)}\frac{\sinh\left( \pi\alpha\sqrt{1-f^2L^2/\alpha^2} \right)}{\pi\alpha\sqrt{1-f^2L^2/\alpha^2} } \end{equation} which is the proposed expression for the Fourier transform.
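Since the derivation is long, a numerical comparison of the first integral against the final closed form is reassuring (scipy assumed; $L$, $\alpha$, $f$ are arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

L, alpha, f = 2.0, 3.0, 1.1
integrand = lambda x: i0(np.pi * alpha * np.sqrt(1 - (2 * x / L)**2)) \
                      * np.cos(2 * np.pi * f * x)
K_numeric = 2 / i0(np.pi * alpha) * quad(integrand, 0, L / 2)[0]

arg = np.lib.scimath.sqrt(f**2 * L**2 - alpha**2)   # imaginary when fL < alpha
K_formula = (L / i0(np.pi * alpha)) * np.real(np.sin(np.pi * arg) / (np.pi * arg))
print(K_numeric, K_formula)   # the two values agree
```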
Find the asymptotes of the following curve: $2r^2=\tan (2\theta)$
You have $\tan(2\theta) \to+\infty$ as $\theta\uparrow \pi/4.$ So $r\to+\infty$ as $\theta\uparrow\pi/4.$ So the line $\theta=\pi/4$ is an asymptote. That's the same as $x=y.$ Then think about periodicity of the trigonometric function.
Definition of Random Sample in Estimation
I understand your confusion. What happens is that there is all this talk about the sample space in the first part of a stats class, and then they introduce the idea that a random variable "maps" the sample space to a new sample space. However, either the people, as people, or their associated heights can validly be called the sample space. It just happens that one of the sample spaces has "people" as its outcomes, whilst the other has numbers. It's really rather irrelevant which is the "real" sample space. Of course, you are actually sampling people, but you might as well just skip a step and say you are sampling heights. The bottom line is that either can be a sample space or population; it depends on what level of abstraction you are working from.
Removable singularities in Sobolev spaces
The equality $$ \int_{\Omega \setminus F} (\dots) \, dx = \int_{\Omega^{(i)}} \int_{ \{x_i \in \mathbb{R} \, : \, (x_1, \dots, x_i, \dots, x_N) \in \Omega \setminus F \}} (\dots) \, dx_i \, dx_1 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_N $$ is Fubini's theorem. It has nothing to do with the functions being integrated, or with the assumption on $F$. It's just doing integration in order: over $x_i$ first, then over the other variables. The following step uses the fact that integrating something over $ \Omega^{(i)}$ gives the same result as integrating over $\Omega^{(i)}\setminus F^{(i)}$, since $F^{(i)}$ is negligible. So we can write $$\cdots = \int_{\Omega^{(i)}\setminus F^{(i)}} \int_{ \{x_i \in \mathbb{R} \, : \, (x_1, \dots, x_i, \dots, x_N) \in \Omega \setminus F \}} (\dots) \, dx_i \, dx_1 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_N $$ But now the domain of integration over $x_i$ can be simplified: since the projection of $F$ is removed, there is no danger of $(x_1, \dots, x_i, \dots, x_N)$ being in $F$. This leads to $$\cdots = \int_{\Omega^{(i)}\setminus F^{(i)}} \int_{ \{x_i \in \mathbb{R} \, : \, (x_1, \dots, x_i, \dots, x_N) \in \Omega \}} (\dots) \, dx_i \, dx_1 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_N $$
How to solve this equation using $ \log $?
$$3^{4x}-3^{2x}\cdot3^{\log_312}+3^3=0$$ Here $3^{\log_312}=12$, by a basic logarithmic rule. Then substitute $3^{2x} = t$ and solve the quadratic equation in $t$. I think you can finish it yourself :)
When to include Jacobian to find surface area of a double integral that involves polar coordinates?
$\textbf{Comment}$: A very simple answer to your question is, once you switched to polar coordinates to describe the domain of $(x,y)$, you switched coordinate systems. I give a more elaborate answer below; however, here is an example in which no coordinate change occurs. $\textbf{Problem}$: Calculate the area of the closed unit disk $E$. We can parametrize the disk using the domain $D = \{(r, \theta): r \in [0,1], \theta \in [0, 2 \pi]\}$. Thus, the area is given by the integral below. $$\int_{r=0}^1 \int_{\theta = 0}^{2\pi} 1 \ d\theta \ dr = 2\pi$$ Pay close attention to this calculation. Observe that we can use $\sigma(r, \theta) = (r, \theta)$ (the identity) as a "parametrization" of $E$ since $\|\sigma_r \times \sigma_{\theta}\| = 1$ and $$\int_{\partial D} \|\sigma_r \times \sigma_{\theta}\| \ dr \ d\theta = 0.$$ Your teacher is correct. If you already parametrize a surface with some smooth map $\sigma(u,v)$ where $(u,v) \in D$ then we have, $$\textbf{Surface Area}(S) = \int_D \|\sigma_u \times \sigma_v\| \ du \ dv$$ $\textbf{Integrals using parametrizations}$: In most calculus books, they allow parametrizations to be non-injective on $\partial D$, i.e. the boundary of $D$. In essence this would mean $\sigma(D) = S$ where $S$ is your surface, but no compact surface arises in this fashion. However, most compact surfaces in these calculus texts will have parametrizations whose boundary contribution vanishes, i.e. $$\int_{\partial D} \|\sigma_u \times \sigma_v\| \ du \ dv = 0$$ and so $\sigma(\textbf{int}(D)) \subset S$ and $\sigma(D) = S$. The above now says that for special surfaces (almost all the ones in a calculus book) you can use $\sigma$ to parametrize all of $S$ since the boundary won't contribute to the integral. \begin{align*} \int_D \|\sigma_u \times \sigma_v\| \ du \ dv &=\int_{\partial{D}} \|\sigma_u \times \sigma_v\| \ du \ dv + \int_{\textbf{int}(D)} \|\sigma_u \times \sigma_v\| \ du \ dv \\ \\ &=\int_{\textbf{int}(D)} \|\sigma_u \times \sigma_v\| \ du \ dv\end{align*} $\textbf{Using Jacobian}$: Let us define $\textbf{n}(u,v) = \sigma_u \times \sigma_v$, i.e. $\textbf{n}: D \to \mathbb{R}^3$. Suppose we start with $\sigma(u,v)$ as a parametrization of $S$ and want to switch to another $\psi: V \to S$ which also parametrizes $S$. The change of variables map is given by $\phi:=\psi^{-1}\circ\sigma : (u,v) \mapsto (x,y)$, i.e. $\phi: D \to V$, and hence by the change of variables formula, $$\textbf{Surface Area}(S) = \int_{V} \|\textbf{n} \circ \phi^{-1}\| \ \left|\det(D\phi^{-1})\right| \ dx \ dy$$
A connected open set is path connected
Hint: Show that if $U$ is an open set in a normed space, then for each $x \in U$ the set of points that are path-connected to $x$ is open and closed (relative to $U$).
Projection to the plane in the direction of the vector?
I'll try to do it without getting away too much from your point of view. You have a plane $T : x + 2y - 3z + 3 = 0$, which I can rewrite as $$ \begin{bmatrix} x & y & z \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ -3 \end{bmatrix} = -3. $$ Now you are projecting in the direction $\begin{bmatrix} 1 & 1 & -1 \end{bmatrix}$, so you want to find $\alpha$ such that for arbitrary real $x,y,z$, we have $$ \left( \begin{bmatrix} x & y & z \end{bmatrix} + \alpha \begin{bmatrix} 1 & 1 & -1 \end{bmatrix} \right) \begin{bmatrix} 1 \\ 2 \\ -3 \end{bmatrix} = \begin{bmatrix} x + \alpha & y + \alpha & z - \alpha \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ -3 \end{bmatrix} = -3. $$ But the last equation can be re-written as $$ (x+\alpha) + 2(y + \alpha) -3(z - \alpha) = x + 2y - 3z + 6\alpha = -3. $$ Therefore, $$ \alpha = -\frac{x+2y-3z+3}6 $$ gives you the unique $\alpha$ for which this is possible. If you actually need to compute the algorithm (matrix) which finds the projection as a function of $x$, $y$ and $z$, just compute the vector $[x+\alpha, y+\alpha, z-\alpha]$ and read the matrix off. This would give $$ \begin{bmatrix} x + \alpha & y + \alpha & z - \alpha \end{bmatrix} = \begin{bmatrix} \frac{5x - 2y + 3z - 3}6 & \frac{-x+4y+3z-3}6 & \frac{x+2y+3z+3}6 \end{bmatrix} $$ or, written in column form (as standard) $$ \begin{bmatrix} x + \alpha \\ y + \alpha \\ z - \alpha \end{bmatrix} = \begin{bmatrix} \frac{5x - 2y + 3z - 3}6 \\ \frac{-x+4y+3z-3}6 \\ \frac{x+2y+3z+3}6 \end{bmatrix} = \frac16 \begin{bmatrix} 5 & -2 & 3 \\ -1 & 4 & 3 \\ 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} + \frac16 \begin{bmatrix} -3 \\ -3 \\ 3 \end{bmatrix} = P \left( \begin{bmatrix} x \\ y \\ z \end{bmatrix} \right). $$ where $P$ would be the "projection map". Hope that helps,
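A quick numpy verification of this map (any point should land on the plane, displaced along $(1,1,-1)$):

```python
import numpy as np

P = np.array([[5, -2, 3], [-1, 4, 3], [1, 2, 3]]) / 6
t = np.array([-3, -3, 3]) / 6
x = np.random.randn(3)
y = P @ x + t
print(np.isclose(y @ np.array([1, 2, -3]) + 3, 0))   # y lies on the plane
print(np.allclose(np.cross(x - y, [1, 1, -1]), 0))   # x - y is parallel to (1,1,-1)
```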
Show that $\frac{a^x-1}{x}\to\log(a)$ monotonically as $x\searrow0$
Let $\phi(x):=a^x=e^{x\log a}$. The limit you are after is $\phi'(0)=\lim_{x\rightarrow0}\frac{\phi(x)-\phi(0)}{x}$. Assuming $a>0$, $\phi$ is convex: $\phi''(x)=(\log a)^2a^x>0$. Recall that a function $\varphi:(\alpha,\beta)\rightarrow\mathbb{R}$, $-\infty\leq \alpha<\beta\leq \infty$, is convex if $$\begin{align} \varphi((1-t) x+ t y)\leq (1-t)\varphi(x)+t \varphi(y)\tag{1}\label{convex} \end{align}$$ for any $\alpha<x<y<\beta$ and $0\leq t\leq 1$. If strict inequality holds in $\eqref{convex}$ with $0<t<1$, then $\varphi$ is strictly convex. Geometrically, if $\varphi$ is convex and $\alpha<x<u<y<\beta$, then the point $(u,\varphi(u))$ on the graph of $\varphi$ lies below the straight line joining $(x,\varphi(x))$ and $(y,\varphi(y))$. Let $u=(1-t)x+ty$. It is easy to check that $\eqref{convex}$ is equivalent to any of the inequalities $$ \begin{align} \frac{\varphi(u)-\varphi(x)}{u-x}\leq\frac{\varphi(y)-\varphi(x)}{y-x}\leq \frac{\varphi(y)-\varphi(u)}{y-u}\tag{2}\label{convex-equiv} \end{align} $$ For fixed $\alpha<x<\beta$, inequalities $\eqref{convex-equiv}$ show that the map $u\mapsto \tfrac{\varphi(u)-\varphi(x)}{u-x}$ decreases as $u\searrow x$ and increases as $u\nearrow x$. In your case $$ \frac 1x (a^{x}-1)=\frac{\phi(x)-\phi(0)}{x-0} $$
One-sided transient Markov Chain
Note that $$\prod_{j=0}^\infty (1-\alpha_j) = \mathbb P\left(\bigcap_{j=0}^\infty \{X_{j+1}=j+1\mid X_j=j\} \right)$$ and $$\bigcap_{j=0}^\infty \{X_{j+1}=j+1\mid X_j=j\}= \bigcap_{j=1}^\infty \{X_j\ne0\mid X_0=0 \},$$ so $\prod_{j=0}^\infty (1-\alpha_j)>0$ if and only if the chain is transient. Suppose the chain is transient. Let $j,k$ be nonnegative integers with $j<k$ and set $$\tau_k = \inf\{n>0:X_n=k \}.$$ Since each state is visited finitely many times, it follows that $$ \mathbb P(\tau_k<\infty\mid X_0=j)=1. $$
Prove that $f([r],[s])=[r+s], g([r],[s])=[r \cdot s]$ are well-defined functions
Let $r_1,r_2,s_1,s_2\in\mathbb{Z}$ such that $[r_1]=[r_2]$ and $[s_1]=[s_2]$. So there exist integers $i,j$ such that $r_1=r_2+i\cdot m$ and $s_1=s_2+j\cdot m$. Thus: \begin{align*} f([r_1],[s_1])&=[r_1+s_1]=[(r_2+i\cdot m)+(s_2+j\cdot m)]=[(r_2+s_2)+(i+j)\cdot m] \\ &=[r_2+s_2]=f([r_2],[s_2])\\ g([r_1],[s_1])&=[r_1\cdot s_1]=[(r_2+i\cdot m)\cdot(s_2+j\cdot m)]\\ &=[r_2\cdot s_2+r_2\cdot j\cdot m+s_2\cdot i\cdot m+i\cdot j\cdot m^2]=[r_2\cdot s_2+m(r_2\cdot j+s_2\cdot i+i\cdot j\cdot m)]=[r_2\cdot s_2]\\ &=g([r_2],[s_2]). \end{align*} Thus, $f$ and $g$ are well defined.
$A$ and $B$ nonempty subsets of $\mathbb{R}$, if $\sup(A)=\sup(B)$ then $\forall a \in A, \exists b \in B$ such that $a<b$
By contradiction. Assume $\exists a \in A$ such that $\forall b \in B$, $b\leq a$. Then $\sup(B) \leq a < \sup(A)$, which is a contradiction.
Formal symbol for the integer division operation
$\lfloor a/b \rfloor$ (@barakmanos's comment suggestion) is probably the only formal way to express this without creating new definitions. Alternatively, you could follow @Henry's suggestion to explicitly define a symbol for the integer division operation. The common ones are \ (backslash, as opposed to the forward slash / used for regular division), or div. The advantages of using $\lfloor a/b \rfloor$ are that: (1) you don't need a new definition; (2) you often need to combine integer division with +1 for various indexing problems, which can be easily achieved with $\lceil a/b \rceil = \lfloor a/b \rfloor + 1$ (edit: for $a/b \notin \mathbb{Z}$). The edit suggested in the comment (@Tonyk) arguably invalidates the second advantage, since $a/b$ will likely also be an integer in indexing problems. I would personally still prefer the $\lfloor a/b \rfloor$ notation, simply for the first point.
How to solve a limit of a complex integral over part of the real axis?
I think a branch of the logarithm is sufficient: $$\lim_{\epsilon\to 0}\int_{-1}^1 \frac{1}{x-i\epsilon} dx = \lim_{\epsilon\to 0} Log(1-i\epsilon)-Log(-1-i\epsilon) $$ $$= \lim_{\epsilon\to 0} \ln|1-i\epsilon|-\ln|-1-i\epsilon|+i\,\big(Arg(1-i\epsilon)-Arg(-1-i\epsilon)\big)=0-0+i\big(0-(-\pi)\big)=\pi i.$$
How to show $A=\begin{bmatrix}A_1\\A_2\end{bmatrix}$ is non-singular when $N(A_1)=R(A_1^T)$?
Well, $A_2x=0$ is also given by $Ax=0$. You should prove $x=0$. Also, the correct form of the identity is $$R(B^T) \ =\ N(B)^\perp$$ and the exercise is basically equivalent to it. But the adjoint property enables a quick proof for $R(B^T)^\perp =N(B)$: Both conditions are equivalent to that $$0=\langle x, B^Ty\rangle =x^TB^Ty=\langle Bx, y\rangle$$ holds for all $y$.
Are vector bundles on $\mathbb{P}_{\mathbb{C}}^n$ of any rank completely classified? (main interest $n=3$)
This doesn't answer your question, but maybe is worth a look. This paper classifies vector bundles on smooth affine threefolds. The methods are highly sophisticated.
What about image of infinity?
You can deduce the image of $\infty$ by continuity of the map; there is no need for an extra argument. In other words, if the image of the real line without the point at infinity is a circle minus a point, then the image of the real line with infinity will be the circle with that point included. The key argument is really that Moebius transformations are invertible maps of the Riemann sphere preserving those "generalized circles" (which, when you see the Riemann sphere as a geometrical sphere embedded in three-dimensional real space, are simply hyperplane sections of that sphere). I really recommend this "visualisation" proof; it is extremely well done. When you know that these are Moebius transformations, it is clear that they map hyperplane sections to hyperplane sections since they correspond to rigid transformations of three-dimensional real space. Hope that helps,
Conditional Probability calculation
1) Yes.
2) The first of your two suggestions.
3) No. What you have suggested is the joint probability that both Martin and Norman are late. To get the conditional probability, you would need to divide this by the probability that Norman is late, i.e. by P(Norman Late=T), which is P(Train Strike=T)*P(Norman Late=T|Train Strike=T) + P(Train Strike=F)*P(Norman Late=T|Train Strike=F).
Linear Programming Word Problem: Theater
Yes, y <= 2x. The easiest way to test this is with some data points and a sketch of the graph:

y | x
--|----------
2 | 1,2,3,4,5…
4 | 2,3,4,5,6…
6 | 3,4,5,6,7…

Hence: y <= 2x gives 2 <= 2*1, true; and 2y >= x gives 2*2 >= 5, false. [Graph of y <= 2x]
Question about the properties of an ideal in the polynomial ring over a field
By definition, irreducible elements are different from $0$, so your counterexample is wrong. Coming back to the problem: $F[x]$ is a PID, so $(f,g)$ is a principal ideal generated by the greatest common divisor (GCD) of $f$ and $g$. Assume that $f$ and $g$ are both irreducible. Since $\deg f\neq\deg g$ and $f$, $g$ are irreducible, we get that their GCD is $1$. Thus $(f,g)=(1)$, that is, $(f,g)=F[x]$, a contradiction.
Find the interval over which this function is greater than 1
Starting with $$\lvert w\sigma'(wa+b)\rvert \geq 1$$ Using the definition of the sigmoid and its derivative, $$\lvert w\rvert \frac{e^{wa+b}}{(1+e^{wa+b})^2} \geq 1$$ Let $u \equiv e^{wa+b}$. Then, $$\lvert w\rvert \frac{u}{(1+u)^2} \geq 1$$ Rearranging, $$u^2+(2-\lvert w\rvert)u+1 \leq 0$$ Solving for $u$ gives $$\frac{(\lvert w\rvert-2)-\sqrt{w^2-4\lvert w\rvert}}{2} \leq u \leq \frac{(\lvert w\rvert-2)+\sqrt{w^2-4\lvert w\rvert}}{2}$$ This is where the interval comes from. Before we get the width of that interval, we must finish solving for $a$. Note that we know both interval bounds are real-valued because $\lvert w \rvert \geq 4$. Plugging in for the definition of $u$ and doing some rearranging to better match the answer, $$\frac{\lvert w\rvert\left(1-\sqrt{1-4/\lvert w\rvert}\right)}{2}-1 \leq e^{wa+b} \leq \frac{\lvert w\rvert\left(1+\sqrt{1-4/\lvert w\rvert}\right)}{2}-1$$ From here you can quickly solve for $a$: $$\frac{1}{\lvert w\rvert}\ln\left(\frac{\lvert w\rvert\left(1-\sqrt{1-4/\lvert w\rvert}\right)}{2}-1\right) - \frac{b}{\lvert w\rvert}\leq a \leq \frac{1}{\lvert w\rvert}\ln\left(\frac{\lvert w\rvert\left(1+\sqrt{1-4/\lvert w\rvert}\right)}{2}-1\right)-\frac{b}{\lvert w\rvert}$$ So you can see $a$ lies on that interval. To get the width of the interval, simply subtract the lower bound from the upper bound: $$\left[ \frac{1}{\lvert w\rvert}\ln\left(\frac{\lvert w\rvert\left(1+\sqrt{1-4/\lvert w\rvert}\right)}{2}-1\right)-\frac{b}{\lvert w\rvert}\right]-\left[\frac{1}{\lvert w\rvert}\ln\left(\frac{\lvert w\rvert\left(1-\sqrt{1-4/\lvert w\rvert}\right)}{2}-1\right) - \frac{b}{\lvert w\rvert}\right]$$ $$=\frac{1}{\lvert w\rvert}\ln{\left(\frac{\lvert w\rvert\left(1+\sqrt{1-4/\lvert w\rvert}\right)-2}{\lvert w\rvert\left(1-\sqrt{1-4/\lvert w\rvert}\right)-2}\right)}$$ Notice we lose the dependence on $b$. Multiplying the numerator and denominator inside the logarithm by $\lvert w\rvert\left(1+\sqrt{1-4/\lvert w\rvert}\right)-2$ and simplifying: $$=\frac{1}{\lvert w\rvert}\ln{\left(\left[\frac{\lvert w\rvert\left(1+\sqrt{1-4/\lvert w\rvert}\right)}{2}-1\right]^2\right)}$$ $$=\frac{2}{\lvert w\rvert} \ln{\left(\frac{\lvert w\rvert(1+\sqrt{1-4/\lvert w\rvert})}{2}-1\right)}$$ This is the width of the interval. From our solution to $a$, you can see that the interval is centered about $-b/\lvert w\rvert$.
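A numeric spot-check of the interval and its width (any $\lvert w\rvert \ge 4$ works; $w=8$, $b=1.5$ are chosen arbitrarily):

```python
import math

w, b = 8.0, 1.5
r = math.sqrt(1 - 4 / abs(w))
lo = (math.log(abs(w) * (1 - r) / 2 - 1) - b) / abs(w)
hi = (math.log(abs(w) * (1 + r) / 2 - 1) - b) / abs(w)
width = (2 / abs(w)) * math.log(abs(w) * (1 + r) / 2 - 1)
print(hi - lo, width)                 # identical

a = (lo + hi) / 2                     # a point inside the interval
u = math.exp(w * a + b)
print(abs(w) * u / (1 + u)**2 >= 1)   # True: |w sigma'(wa+b)| >= 1 there
```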
Let $a, b, c ∈ R$, $a^{2}+b^{2}+c^{2}=1$and $A = ab + bc + ca$. Then $A$:
It is obvious that $A=1$ can be reached, simply set $a=b=c$. This already gets rid of options (A) and (B). Since (D) includes (C) (that is, if (C) is true then (D) must also be true), if exactly one option is correct, it must be (D). The question now is: can $A=-\frac12$ occur? Well, in the reasoning \begin{align} (a+b+c)^2&\geq 0\\ a^2+b^2+c^2+2(ab+bc+ca)&\geq 0\\ 1+2A&\geq 0\\ A&\geq -\tfrac12\\ \end{align} what do we need for equality? We merely need the first expression to be an equality, that is, $a+b+c=0$. Now we need to find $a,b,c$ with $a+b+c=0$ and $a^2+b^2+c^2=1$. Substitution yields $a^2+b^2+(-a-b)^2=1$ or $$a^2+ab+(b^2-\tfrac12)=0$$ which is simply a quadratic in $a$ and thus solvable, making $A=-\frac12$ possible. The only answer can be (D). To give explicit $a,b,c$ with $A=-\frac12$: solving the quadratic yields \begin{align} a&=\tfrac12-\tfrac 16\sqrt{3}\\ b&=\tfrac13\sqrt{3}\\ c&=-\tfrac12-\tfrac 16\sqrt{3} \end{align} so that $a^2+b^2+c^2=1$ and $A=ab+bc+ca=-\frac12$.
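A quick numeric check of the explicit values, just verifying the arithmetic above:

```python
from math import sqrt

a = 0.5 - sqrt(3) / 6
b = sqrt(3) / 3
c = -0.5 - sqrt(3) / 6

print(a**2 + b**2 + c**2)   # 1.0
print(a*b + b*c + c*a)      # -0.5
```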
If $f\in L^1(\mathbb{R})$ and $M>0$ is it true that $\mathcal{F}\left( \chi_{[-M,M]}\mathcal{F}(f) \right)\in L^1(\mathbb{R})$?
Fix a function $g$ from the Schwartz class such that $g = 1$ on $|x| \leq 1$, and define $f$ to be the inverse Fourier transform of $g$, namely $f = \mathcal{F}^{-1}(g)$. Then $f$ is again from the Schwartz class, in particular from $L^1(\mathbb{R})$, and taking $M=1$ we get $$ \mathcal{F}(\chi_{[-1,1]}\mathcal{F}(f)) = \mathcal{F}(\chi_{[-1,1]}g) = \mathcal{F}(\chi_{[-1,1]}), $$ which is not in $L^1$: up to normalization it is the function $\xi\mapsto \frac{2\sin\xi}{\xi}$, which decays too slowly to be absolutely integrable.
Additional Assumption in Munkres Problem on Continuous Functions
If $i$ is continuous, then $i^{-1}U = U \in \tau'$ for all $U \in \tau$, hence $\tau \subset \tau'$. If $\tau \subset \tau'$ and $U \in \tau$, then $i^{-1}U = U \in \tau'$, hence $i$ is continuous.
Why isn't the fundamental theorem of line integrals applicable here?
"V is conservative, except at (0,1) where it is not defined, but the first curve doesn't pass through this point." The issue is whether the curve surrounds the point, not whether it passes through it. $V$ contributes a fixed amount ($2\pi$) to the integral for every time the integration path winds around the singular point $(0,1)$. The number of times winding occurs is measured taking orientation into account, so that clockwise and anticlockwise loops cancel each other out.
how to integrate this simple looking integral
You can use a simple substitution as follows: let $a = u^{25} + 1$, so that $\frac{da}{du} = 25u^{24}$, i.e. $$du = \frac{da}{25u^{24}}$$ $$ \int\frac{u^{24}}{u^{25} + 1}\,du =\frac{1}{25} \int\frac{1}{a}\,da$$ I'm sure you can finish off from here.
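If you want to cross-check the substitution, SymPy reproduces the antiderivative directly (a sketch):

```python
import sympy as sp

u = sp.symbols('u')
print(sp.integrate(u**24 / (u**25 + 1), u))   # log(u**25 + 1)/25
```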
sigma algebra preimage
Here are two general results you can use. Let $f:X\to Y$ be any function. Let $S\subseteq X$. If $f$ is injective, then $S=f^{-1}\big(f(S)\big)$. Let $T\subseteq Y$. If $f$ is surjective, then $f\big(f^{-1}(T)\big)=T$.
Prove that $\int_{0}^\pi x^{2k} \cos(h x) dx\geq 0$.
Let $h=2n$, and let $f$ be convex on $[0,\pi]$. Then $g(x)=\frac1{2n}\,f\!\left(\frac{x}{2n}\right)$ is convex on $[0,2\pi n]$. $$ \begin{align} \int_0^\pi f(x)\cos(hx)\,\mathrm{d}x &=\int_0^\pi f(x)\cos(2nx)\,\mathrm{d}x\tag{1}\\ &=\int_0^{2\pi n}\frac1{2n}\,f\!\left(\frac{x}{2n}\right)\cos(x)\,\mathrm{d}x\tag{2}\\ &=\int_0^{2\pi n}g(x)\cos(x)\,\mathrm{d}x\tag{3}\\ &=\sum_{j=0}^{n-1}\int_0^{2\pi}g(x+2\pi j)\cos(x)\,\mathrm{d}x\tag{4}\\ \end{align} $$ Explanation $(1)$: substitute $h=2n$ $(2)$: substitute $x\mapsto\frac{x}{2n}$ $(3)$: $g(x)=\frac1{2n}\,f\!\left(\frac{x}{2n}\right)$ $(4)$: $\cos(x+2\pi j)=\cos(x)$ If $\varphi$ is convex, then for $z\ge y$, we have $\varphi(z+\pi)-\varphi(y+\pi)\ge \varphi(z)-\varphi(y)$. Therefore, $$ \begin{align} \int_0^{2\pi}\varphi(x)\cos(x)\,\mathrm{d}x &=\int_0^{\pi/2}\left(\varphi(x)-\varphi(\pi-x)-\varphi(\pi+x)+\varphi(2\pi-x)\right)\cos(x)\,\mathrm{d}x\\ &=\int_0^{\pi/2}\left(\left[\varphi(2\pi-x)-\varphi(\pi+x)\right]-\left[\varphi(\pi-x)-\varphi(x)\right]\right)\cos(x)\,\mathrm{d}x\\[6pt] &\ge0\tag{5} \end{align} $$ Combining $(4)$ and $(5)$ (applied to the convex functions $\varphi(x)=g(x+2\pi j)$) and noting that $f(x)=x^{2k}$ is convex on $[0,\pi]$ gives the desired result.
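A numeric spot check of the conclusion for a few small parameter values (a sketch; the pairs $(k,n)$ are arbitrary test values and mpmath's quadrature handles the oscillatory integrand at this scale):

```python
import mpmath as mp

# Check that the integral is nonnegative for several k and even h = 2n.
for k in (1, 2, 3):
    for n in (1, 2, 5):
        val = mp.quad(lambda x, k=k, n=n: x**(2*k) * mp.cos(2*n*x), [0, mp.pi])
        assert val > -mp.mpf('1e-12'), (k, n, val)
        print(k, n, val)
```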
Why is this binomial coefficient bounded thus?
For every $0\leqslant i\leqslant n$, ${n\choose i}\leqslant\sum\limits_{t=0}^n{n\choose t}=2^n$. Use this for $n=2k-2$ and $i=k-1$.
Complement of multiplicative set is a (prime) ideal.
Let $S$ be a maximal multiplicative subset of $R\setminus\{0\}$ and $\mathfrak{p}:=R\setminus{S}.$ As you mentioned above, it's enough to prove that $\mathfrak p$ is an ideal. Clearly, $0\in\newcommand{\p}{\mathfrak{p}}\p.$ Let $x,y\in \p$. If we can show that $s(x+y)=0$ for some $s\in S$, then $x+y\in \p$ (because $s(x+y)=0\notin S$ implies that $s\notin S$ or $x+y\notin S$, and the only possibility is $x+y\notin S$). With that in mind, consider the smallest multiplicatively closed set containing $S$ and $x$; it is the set $\tilde S=\{sx^n\mid s\in S, n\geq0\}.$ (Note that $1\in S$ by maximality, since $S\cup\{1\}$ is still multiplicative; hence $x\in\tilde S$ and $\tilde S$ properly contains $S$.) Since $S$ is a maximal multiplicative subset of $R\setminus\{0\}$ and $\tilde S$ properly contains $S$, we must have $0\in\tilde S$, that is, $sx^n=0$ for some $s$ and $n$. Similarly, we get $ty^m=0$ for some $t\in S$ and $m$. Thus, for a large enough number, say $N\geq n+m-1$, we have $st(x+y)^N=0$, since every term in the binomial expansion of $(x+y)^N$ contains $x^n$ or $y^m$. (OK, this is not what we wanted, but we are close.) Since $st\in S$, we see that $(x+y)^N\in\p$. Write $(x+y)^N=(x+y)(x+y)^{N-1}$. If $x+y\in\p$, then we are done. Otherwise, $x+y\in S$ and by the above argument, $(x+y)^{N-1}\in\p$. So after a finite number of steps, we'll see that $x+y\in\p$. Similarly, you can show that $rx\in\p$ for all $r\in R$.
20 blue balls and 11 yellow, drawing 6 times with no replacement, what is the chance that at least one is yellow OR the first two draws are the same?
OR in mathematics is the inclusive or, so if either condition or both is satisfied the sentence is satisfied. OR in English is ambiguous, it can be inclusive or exclusive. Given the problem, I would take it as inclusive here. As you say, that makes the probability $1$. Either the first two balls are blue and hence the same color or at least one of them is yellow. We win either way.
How I can solve a system of this type
Note that the first equation does not depend on $x_2$. Hence, you can solve it first, then substitute the solution into the second equation, and then solve the second equation. The first equation can be solved by the substitution $x_1' = p(x_1)$, where $p$ is an unknown function. Then we have $x_1'' = \frac{d}{ds}p(x_1(s)) = p' p$. Therefore, your first equation reduces to $$ p'(x_1) p(x_1)=x_1 p^2(x_1). $$ Hence, either $p(x_1) = 0$, or $p'(x_1)=x_1 p(x_1)$. Both of these equations are integrable, and you can find $p$ explicitly; for instance, the second one separates to give $p(x_1)=Ce^{x_1^2/2}$. Then you integrate the equation $x'=p(x)$ and find $x_1(s)$ explicitly.
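For the separable case, SymPy confirms the explicit form of $p$ (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')
# p'(x) = x p(x)  =>  p(x) = C1 * exp(x**2 / 2)
print(sp.dsolve(sp.Eq(p(x).diff(x), x * p(x))))
```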
Find the maximum possible dimension of $W:={\rm Span}\{v_1,\dots,v_{16}\}$
Well, the maximum POSSIBLE is $\min(\dim V,15)$, since you can indeed pick them all independently, and the minimum would be $8$, since you don't know the rank of the first family of vectors.
Identifying and sketching $\operatorname{Re}(z^3)=1$
Writing $z=re^{it}$ in polar coordinates gives us $\operatorname{Re}(z^3)=r^3\cos(3t)$. There are a couple of interesting things to note about the equation $$r^3\cos(3t) = 1.$$ First of all, if $\cos(3t) = 0$, $r$ explodes to infinity. Since the roots of $\cos$ are $\pi/2 + k\pi,\ k\in\mathbb Z$, we get that $r$ explodes to infinity for angles $\pi/6 + k \pi/3,\ k\in\mathbb Z$. Visually: The red lines will be asymptotes for our graph. They also happen to be places where $\cos(3t)$ changes sign. This is important since $r$ must be positive. Our graph will, thus, lie in the sectors with $+$ signs: Let us now exploit the symmetries of the equation $\operatorname{Re}(z^3)=1$ (they should be visible from the previous picture already). Rotational symmetry: If $\omega = e^{i\frac{2\pi}3}$, then $\omega^3 = 1$ and multiplication by $\omega$ is rotation by $120^\circ$. If $z_0$ is a solution to the equation, then $$\operatorname{Re}((\omega z_0)^3) = \operatorname{Re}(\omega^3z_0^3) = \operatorname{Re}(z_0^3) =1,$$ so $\omega z_0$ is a solution as well. We conclude that our graph has rotational symmetry. (Alternatively, use $\cos(3(t + 2\pi/3)) = \cos(3t + 2\pi) = \cos(3t)$.) Reflection symmetry: If $z_0$ is a solution, then so is $\overline{z_0}$: $$\operatorname{Re}(\overline{z_0}^3) = \operatorname{Re}(\,\overline{z_0^3}\,) = \operatorname{Re}(z_0^3) = 1.$$ Thus, our graph is symmetric with respect to the real axis. (Side note: if you know some group theory, these symmetries generate the dihedral group $D_3$.) Finally, note that $0<\cos(3t)\leq 1$ implies that $r^3\geq 1$. Also, $z_0 = 1$ is an obvious solution. Using all of this information, we get that our graph looks like this:
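Since the original figures are not reproduced here, the curve and its asymptotes can be regenerated with a short script (a sketch using matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

# r^3 cos(3t) = 1  =>  r = cos(3t)^(-1/3), defined only where cos(3t) > 0
t = np.linspace(0, 2 * np.pi, 4000)
c = np.cos(3 * t)
r = np.full_like(t, np.nan)
mask = c > 1e-6
r[mask] = c[mask] ** (-1.0 / 3.0)
r[r > 4] = np.nan                      # clip near the asymptotes

plt.plot(r * np.cos(t), r * np.sin(t))
for k in range(6):                     # asymptotes at t = pi/6 + k*pi/3
    ang = np.pi / 6 + k * np.pi / 3
    plt.plot([0, 4 * np.cos(ang)], [0, 4 * np.sin(ang)], 'r--', linewidth=0.5)
plt.gca().set_aspect('equal')
plt.show()
```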
Pigeonhole Principle and Sets
Imagine the following array: $$\begin{array}{cccccccc} 1 & 2 & 3 & 4 & \ldots & n-2 & n-1 & n \\ 2n & 2n-1 & 2n-2 & 2n-3 & \ldots & n+3 & n+2 & n+1\end{array}$$ Notice that each column sums to $2n+1$ and all of the numbers from $1$ to $2n$ are used in the array. There are $n$ columns. What you want to prove is that if you were to highlight $n+1$ numbers in this array (i.e. the elements of $T$), there would be a whole column highlighted, and that pair would sum to $2n+1$. The pigeonhole principle essentially says that we cannot possibly highlight $n+1$ numbers such that no two lie in the same column, if there are but $n$ columns. If you want to see this, then just take a small array, like for $n=3$: $$\begin{array}{ccc} 1 & 2 & 3\\ 6 & 5 & 4\end{array}$$ Now, let's start highlighting some numbers, trying to avoid putting two in a column. Our goal is to highlight $4$ numbers, as that is the size of the set $T$. We could start by putting $1$ in $T$: $$\begin{array}{ccc} \color{red}1 & 2 & 3\\ 6 & 5 & 4\end{array}$$ but now we know we can't put $6$ in $T$ too, because that would sum to $2n+1$. So we might choose $5$ as our next number, forbidding $2$, and we might choose $4$ as the number after that: $$\begin{array}{ccc} \color{red}1 & 2 & 3\\ 6 & \color{red}5 & \color{red}4\end{array}$$ So, now we have a highlighted number in every column, and adding any further number to the set $T$ would create a pair summing to $7$. But this means that we can't have a fourth element in $T$, at least given how we started, and the pigeonhole principle guarantees that we can never choose a set of size $4$ without putting two elements in one column. The key point here is that we should imagine that, as we're creating $T$, we're not choosing numbers to put in it, we're choosing which column to take the numbers from. There are $n$ columns, and we need to make $n+1$ choices; thus we will, at some point, choose the same column twice, and in this context, that means we need to have both elements of some column in $T$, and this forms a pair summing to $2n+1$.
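The column picture is easy to play with in code; this small demo (a sketch for $n=3$) checks exhaustively that every $4$-element subset of $\{1,\dots,6\}$ contains a pair summing to $7$:

```python
from itertools import combinations

n = 3
total = 2 * n + 1   # = 7; each "column" pairs i with 2n+1-i
# Every (n+1)-element subset of {1,...,2n} must contain a full column.
for T in combinations(range(1, 2 * n + 1), n + 1):
    pairs = [(a, b) for a, b in combinations(T, 2) if a + b == total]
    assert pairs, T   # the pigeonhole principle guarantees a pair exists
print("every 4-element subset contains a pair summing to 7")
```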
Is $(\mathbb{Z}/1\mathbb{Z}, + , \cdot)$ a field?
Here is one of the field axioms as listed in Wikipedia: Additive and multiplicative identity: there exist two different elements $0$ and $1$ in $F$ such that [for all $a\in F$] $a + 0 = a$ and $a · 1 = a$. $\mathbb Z/\mathbb Z$ doesn't satisfy this axiom because it doesn't have two different elements. The convention that fields (and integral domains more generally) have at least two elements, or equivalently, that $0\neq 1$, is analogous to the convention of defining a prime number to not be $1$, and defining a prime ideal to not be the entire ring. It avoids having to frequently make exceptions for the trivial case.
Intersections of Hamming balls and "circles"
To get started, let us work on the circle intersection where the distances are the same: $|C(x,d)\cap C(y,d)|$. Let the distance between $x$ and $y$ be $m$. Then we pick $i$ places of the disagreement where we agree with $x$, have to have $i$ places of the disagreement where we agree with $y$, and $d-i$ of the others where we have to disagree with both to get the correct distance. Each of those $d-i$ can have $s-2$ choices, as they have to disagree with $x$ and $y$, as can the $m-2i$ that we didn't choose to agree with $x$ or $y$. So $$|C(x,d)\cap C(y,d)|=\sum_{i=0}^{\lfloor m/2\rfloor} \binom{m}{i} \binom{m-i}{i} \binom{n-m}{d-i} (s-2)^{d+m-3i}$$ If you can sum this, you are better than I. For the ball, we can just sum over $m$.
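The count is easy to stress-test by brute force for small parameters; here is a sketch that tallies $|C(x,d)\cap C(y,d)|$ directly over an alphabet of size $s$ (the specific test values are arbitrary), which one could compare against the formula:

```python
from itertools import product

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def circle_intersection(x, y, d, s):
    """Brute-force |C(x,d) ∩ C(y,d)| over the alphabet {0,...,s-1}."""
    n = len(x)
    return sum(1 for z in product(range(s), repeat=n)
               if hamming(z, x) == d and hamming(z, y) == d)

x = (0, 0, 0, 0)
y = (1, 1, 0, 0)            # distance m = 2 between the centers
print(circle_intersection(x, y, 2, 3))
```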
Finding vector length for high dimensions
The same way:$$\bigl\|(x_1,\ldots,x_n)\bigr\|=\sqrt{{x_1}^2+{x_2}^2+\cdots+{x_n}^2}.$$This is the usual norm in $\mathbb{R}^n$.
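In code this is a one-liner (a sketch using NumPy):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(np.sqrt(np.sum(x**2)))   # sqrt(55) ~ 7.416, the formula applied directly
print(np.linalg.norm(x))       # same value via the library routine
```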
How to evaluate $\int_{-\infty}^{\infty}\frac{x\arctan\frac1x\ \log(1+x^2)}{1+x^2}dx$
Integration by parts reduces the original problem to the evaluation of $$ \int_{0}^{+\infty}\frac{\log^2(1+x^2)}{1+x^2}\,dx $$ which is pretty straightforward: since $$ \int_{0}^{+\infty}(1+x^2)^{s-1}\,dx =\frac{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{1}{2}-s\right)}{2\,\Gamma(1-s)}$$ by applying $\frac{d^2}{ds^2}$ to both sides, then considering $\lim_{s\to 0^+}$, we get: $$ \int_{0}^{+\infty}\frac{\log^2(1+x^2)}{1+x^2}\,dx =\frac{\pi^3}{6}+2\pi\log^2(2).$$ You may find another example of this technique at page 81 of my notes.
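A quick numeric confirmation of the closed form (a sketch using mpmath):

```python
import mpmath as mp

val = mp.quad(lambda x: mp.log(1 + x**2)**2 / (1 + x**2), [0, mp.inf])
print(val)                                       # ~8.1865
print(mp.pi**3 / 6 + 2 * mp.pi * mp.log(2)**2)   # same value
```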
Set theory: equivalence relation proof question
Recall that $$G\circ G=\{\,(x,z)\mid \exists y\in A\colon (x,y), (y,z)\in G\,\}.$$ Showing $G\subseteq G\circ G$ is easy, as $(x,y)\in G$ and $(y,y)\in G$ (by reflexivity) implies $(x,y)\in G\circ G$. Showing $G\circ G\subseteq G$ is also easy: Assume $(x,z)\in G\circ G$. Then there exists $y\in A$ with $(x,y)\in G$ and $(y,z)\in G$. By transitivity, $(x,z)\in G$, as was to be shown. Note that we did not use symmetry at all in the proof. Indeed the claim holds also for reflexive transitive relations, such as $\le$.
Reference request: Real analysis with infinity
Berberian's book Fundamentals of Real Analysis defines $a+\infty$ on page 74; $\displaystyle\sum_{n=1}^\infty a_n$, where one of the terms can be infinity, on page 76; and $\displaystyle\lim_{n\to\infty} a_n$, where $(a_n)$ is a sequence of extended real numbers, on page 82.
using the Milne Thomson theorem to calculate complex potential
Let $f_1(z) = k\ln(z-3a)$ and $f_2(z)$ be the complex potentials for the line sources at $x=3a$ and $x=-3a$ respectively (without the cylinder). The complex potential $f(z)$ for the two line sources together (without the cylinder) is just the sum of the individual complex potentials, i.e. $$ f(z) = f_1(z) + f_2(z) = k\ln\left(z^2 - 9a^2\right). $$ By virtue of the Milne-Thomson circle theorem, we just need to compute $\overline{f\left(\dfrac{a^2}{\bar z}\right)}$: \begin{align*} f\left(\frac{a^2}{\bar z}\right) & = k\ln\left(\frac{a^4}{\bar z^2} - 9a^2\right) \\ \overline{f\left(\frac{a^2}{\bar z}\right)} & = k\ln\left(\frac{a^4}{z^2} - 9a^2\right). \end{align*} The desired expression for $w(z)$ follows by adding $f(z)$ and $\overline{f\left(\dfrac{a^2}{\bar z}\right)}$ together, and I will leave this computation to you.
Calculating the diameter of planets when sun's diameter is scaled down to 50px
Yes, you have determined that the scale factor is $27828 \frac {\text{km}}{\text{pixel}}$ so you should divide any value in km by $27828$ to get the number of pixels. The earth would be $\frac {12742}{27828}$ pixels in diameter, which is about $\frac 12$ of a pixel. How will you show it? Its distance to the sun would be $\frac {149\ 598\ 000}{27828} \approx 5376$ pixels. At a screen resolution of $72$ pixels per inch that is almost $75$ inches. I hope you have a big monitor. Space is big.
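The same arithmetic as a tiny script (a sketch; the sizes in km match the figures quoted above):

```python
SCALE_KM_PER_PX = 1_391_400 / 50      # sun's diameter mapped onto 50 px -> 27828

def to_px(km):
    return km / SCALE_KM_PER_PX

print(to_px(12_742))        # Earth's diameter: ~0.46 px
print(to_px(149_598_000))   # Earth-sun distance: ~5376 px
```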
Can I prove it in this way? (convergence of infinite sum)
What you've written doesn't appear to make sense. Here's how I would do it: Let $(s_n)$ be the sequence of partial sums. Then $\sum a_n$ converges iff $(s_n)$ converges iff $(s_n)$ is Cauchy iff for all $\epsilon>0$ there's an $N$ such that whenever $n>m \geq N$ we have $\left|s_n-s_m \right|=\left|\sum_{k=m+1}^{n} a_k \right|<\epsilon$.
Epsilon-Delta Proof $ \lim\limits_{n\to\infty}\frac{n+\sin(n)}{n+1} = 1$
Hint: Use the triangle inequality for the numerator and the boundedness of $\sin(n)$, namely $|\sin(n)|\leq 1$: $$\biggl|\frac{\sin(n)-1}{n+1}\biggr|\leq \frac{|\sin(n)|+1}{|n+1|}\leq \frac{1+1}{|n+1|}=\frac{2}{n+1}$$ So, given $\epsilon>0$, it suffices to take $N>\frac{2}{\epsilon}$.
Sum of the sets in $\mathbb R^2$
Your solution to part a is not correct. We can write the sum of sets as a union: $$A+B = \bigcup_{b\in B}(A+b)$$ where $A+b=\{a+b|a\in A\}$ is the set $A$ translated in the plane. So the sum is a union of translations of either set. You can use this to work out what each of the given sums looks like, and from that work out if they are open or closed. For the $W+X$ case you don't even need to work out what it looks like. As $W$ is open, any translation of it is. Thus $W+X$ being the union of open sets is open. For part b, take $(x,y)=(x_1,y_1)+(x_2,y_2)\in X+Y$, where $(x_1,y_1)\in X$ and $(x_2,y_2)\in Y$. Then we know that $y_1=0$ and as $1=x_2y_2$, $y_2\not=0$. Thus $y=y_1+y_2=y_2\not=0$. As $x_1$ has free rein over $\mathbb R$, $x$ does also. Thus $X+Y=\mathbb R^2\setminus X$. That is, it is the plane without the $x$ axis. This is fairly clearly not closed; any point on the $x$ axis is a limit point not in the set. Try to employ a similar method to work out what the set $Y+Z$ looks like, and from that decide if it's open or closed. If you need more help, I can give more hints.
Convergence of factor
You made a simple mistake in the last step of the limit calculation, where you forgot that $\frac{n!}{(n+1)!}=\frac{n!}{n!\cdot (n+1)}=\frac{1}{n+1}$, meaning that $$\lim_{n\to\infty} \frac{n!}{(n+1)!}=\lim_{n\to\infty}\frac{1}{n+1}=0\neq \infty.$$
Recurrence of Log function
Assuming $\log$ means logarithm to base 2, you are already pretty close to a solution; you just need to apply some basic summation formulas: $$\sum_{i=0}^{k-1} x^i = \frac{1-x^k}{1-x}, \qquad \sum_{i=0}^{k-1} i x^i = \frac{x-kx^k+(k-1)x^{k+1}}{(1-x)^2}.$$ Then, since $4^i\cdot 2^{-i}n = 2^i n$ and $\log(2^{-i}n)=\log(n)-i$, $$T(n) = \sum_{i=0}^{k-1} 4^i(2^{-i}n + \log(2^{-i}n)) + 4^k T(0) = n\sum_{i=0}^{k-1} 2^i + \log(n) \sum_{i=0}^{k-1} 4^i - \sum_{i=0}^{k-1} i 4^i + 4^kT(0).$$ Calculating each sum separately, we get $n\sum_{i=0}^{k-1} 2^i = n(2^k-1) = n(n-1)$, $\sum_{i=0}^{k-1} 4^i = \frac{4^k-1}{4-1} = \frac{(2^k+1)(2^k-1)}{3} = \frac{(n+1)(n-1)}{3} $, $\sum_{i=0}^{k-1} i4^i = \frac{4-k4^k+(k-1)4^{k+1}}{(1-4)^2}=\frac{4-4^k(4-3k)}{9} = \frac{4-n^2(4-3 \log n)}{9}$. The answer is just putting these together.
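A numeric cross-check of the assembled closed form against the recurrence (a sketch; it assumes the recurrence is $T(n)=4T(n/2)+n+\log_2 n$ for $n$ a power of two, bottoming out at a base value taken here at $n=1$, with the $T(0)$ in the answer read as that base value):

```python
from math import log2

c = 1.0   # base value of the recursion, hypothetical

def T_rec(n):
    return c if n == 1 else 4 * T_rec(n // 2) + n + log2(n)

def T_closed(n):
    k = int(log2(n))
    return (n * (n - 1)                              # n * sum of 2^i
            + log2(n) * (n * n - 1) / 3              # log(n) * sum of 4^i
            - (4 - n * n * (4 - 3 * log2(n))) / 9    # minus sum of i*4^i
            + 4**k * c)                              # base term

for k in range(1, 10):
    n = 2**k
    assert abs(T_rec(n) - T_closed(n)) < 1e-6 * T_rec(n)
print("closed form matches the recursion")
```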
Can we say classical logic has DNE axiom as well because it's equivalent to LEM?
Having just seen this unanswered question, I shall make a few comments to clarify. There are two sets of logic lecture notes being compared here: Lectures on Philosophical Logic and Lectures on Constructive Logic. The first set of lectures describes the history of logic, and here "Classical" means the (philosophers') understanding of Classical Greek thinking and classical mathematics. This is a two-valued theory, and the supplementary inference rules listed on that page are all derivable in classical logic. However, this page, and indeed this course, is not presenting a formal theory of mathematical logic. It is primarily concerned with "methods of argumentation". So the term "axiom" is a non-formal phrase here (although the axioms correctly describe such Classical Logic). The second set of lecture notes is about a form of "Constructive Logic" used in describing proofs computationally. Its earlier chapters are on formal theories like Martin-Löf Type Theory. These theories are formally presented and do not assume axioms like $P \vee \lnot P$ (the Law of Excluded Middle, LEM) or $\lnot\lnot P \to P$ (Double Negation Elimination, DNE). The missing link between the two sets of lecture notes, from a modern logic perspective, is the development in the 20th century of Intuitionistic Logic (https://en.wikipedia.org/wiki/Intuitionistic_logic) (under Brouwer) and its eventual linking with the recursion and computation theory of Gödel, Turing, etc. This intuitionistic logic has axioms from which neither LEM nor DNE can be derived; hence the need to explicitly add one or the other of these axioms in Chapter 7 of the notes to emulate "Classical Logic" in these systems.
Solving equation-systems so it's understandable by an 11 year old
How about this approach: If the second number is twice as large as the first one, and the third number is three times as large, then all three numbers together are six times as large as the first one alone. Knowing that all three numbers together also equal $7.2$, an attentive student's intuition should lead him to the solution without any formal arithmetic at all. In particular, it should be immediately apparent what the first number is.
How to justify this limit?
Let's formalize the question as: Let $f$ be a continuous function and let $g$ be any function such that $\lim_{x\rightarrow a}g(x) = \infty$. Suppose $\lim_{x\rightarrow a}f(g(x)) = b$ and for some function $p(x)$, we have $\lim_{x\rightarrow a}p(x) = 1$. Under what circumstances does $\lim_{x\rightarrow a}f(p(x)g(x))$ exist, and in particular, when does it equal $b$? If $\lim_{x\rightarrow a}g(x)$ were finite, then the result follows from the composition and product laws for limits. If $g$ is continuous, then the result follows. Indeed, $g$ takes on every value larger than some threshold value $L$ while in the vicinity of $x=a$. In other words, if $y_n$ is any sequence which grows without limit, then we can find a sequence $x_n\rightarrow a$ such that for all but finitely many terms of the sequence, $g(x_n) = y_n$ and for such points, $y_i \leq y_j$ implies that $x_i \leq x_j \leq a$. It follows that $f(y_n)$ and $f(g(x_n))$ have the same limit, which must be $b$ because $\lim_{x\rightarrow a}f(g(x)) = b$. In particular, pick any $u_n\rightarrow a$ and define $y_n \equiv p(u_n)\cdot g(u_n)$. Then $y_n$ grows without limit, and we can find an increasing sequence $x_n\uparrow a$ such that $y_n = g(x_n)$ for all but finitely many terms, so the limit of $f(y_n)=f(p(u_n)\cdot g(u_n))$ is the limit of $f(g(x_n))$, namely $b$. Because this is true for any sequence $u_n\rightarrow a$, it establishes $\lim_{x\rightarrow a}f(p(x)\cdot g(x)) = b$. If $g$ is not continuous, then the result does not necessarily follow. For example, take a continuous function $f(x) = x\sin{(2\pi x)}$, and take $g(x) = \lfloor 1/x^2 \rfloor$. Then $\lim_{x\rightarrow 0}g(x) = \infty$ as required. Moreover, $(f\circ g) = 0$ because $f$ is zero on integer values. Hence $\lim_{x\rightarrow 0}f(g(x)) = 0$. But we can find a function like $p(x) = \frac{\lfloor 1/x^2\rfloor + 0.25}{\lfloor 1/x^2 \rfloor}$ which approaches 1 slowly enough that $f(p(x)g(x))$ approaches a value much different from $f(g(x))$: we have that $\lim_{x\rightarrow 0}p(x) = 1$. But $p(x)g(x) = \lfloor 1/x^2\rfloor + 0.25$. Hence $f(p(x)g(x)) = (\lfloor 1/ x^2 \rfloor + 0.25) \cdot \sin(2\pi \lfloor 1/x^2\rfloor + \pi/2) = (\lfloor 1/x^2 \rfloor + 0.25)$. Hence $\lim_{x\rightarrow 0} f(p(x)g(x)) = \infty \neq \lim_{x\rightarrow 0} f(g(x)) = 0$. The problem is apparently that, because $g$ is not continuous in the region of interest, we can choose $p(x)$ so that $p(x)g(x)$ takes on values that $g(x)$ never takes, even if $p(x)\rightarrow 1$. As a result, we can perversely choose $f(x)$ so that $f(p(x)g(x))$ consists of a different set of values and approaches a different limit than $f(g(x))$.
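The counterexample at the end is easy to see numerically (a sketch; floating-point rounding makes $f(g(x))$ only approximately zero, but the contrast with $f(p(x)g(x))$ is stark):

```python
import math

def f(t): return t * math.sin(2 * math.pi * t)
def g(x): return math.floor(1 / x**2)
def p(x): return (g(x) + 0.25) / g(x)

for x in (0.1, 0.01, 0.001):
    print(x, f(g(x)), f(p(x) * g(x)))
# f(g(x)) stays near 0 while f(p(x)*g(x)) blows up like floor(1/x^2)
```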