title | upvoted_answer |
---|---|
probability of distribution | I read your question as follows: what are, when throwing a fair die twice, the probabilities...
P(2 4s) = $\frac{1}{6} \frac{1}{6} = \frac{1}{36}$
P(1 4) = P(4 in first but not second throw) + P(4 in second but not first throw) = (by symmetry) $2 \frac{1}{6} \frac{5}{6} = \frac{10}{36}$
P(no 4) = $\frac{5}{6} \frac{5}{6} = \frac{25}{36}$,
noting that the $3$ probabilities sum to $1$, as they should. $4$ is not special, so the same result holds for any of the $6$ numbers. |
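A quick Monte Carlo sanity check of the three probabilities (a Python sketch; nothing here beyond the computation above):

```python
import random

def four_counts(trials=1_000_000):
    """Estimate P(no 4), P(one 4), P(two 4s) when throwing a fair die twice."""
    counts = [0, 0, 0]
    for _ in range(trials):
        fours = (random.randint(1, 6) == 4) + (random.randint(1, 6) == 4)
        counts[fours] += 1
    return [c / trials for c in counts]

print(four_counts())                  # ~ [0.694, 0.278, 0.028]
print(25 / 36, 10 / 36, 1 / 36)       # exact values for comparison
```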
Boundedness of inverse in Evans Chapter 6 (why is the sequence bounded in Sobolev spaces?) | You don't need to use (i); (ii) is enough and gives
\begin{align}
\|u\|_{H^1_0(U)}^2
&\le \frac1\beta B[u,u] + \frac\gamma\beta \|u\|_{L^2(U)}^2
\\&= \frac1\beta (u,f)_{L^2(U)} + \frac\gamma\beta\|u\|^2_{L^2(U)}
\\&\le \frac1\beta\|u\|_{L^2(U)}\|f\|_{L^2(U)}+\frac\gamma\beta\|u\|_{L^2(U)}^2
\\&=\frac1\beta\|f\|_{L^2(U)}+\frac\gamma\beta
\end{align}
where we used the definition of a weak solution $B[u,v]=(f,v)_{L^2(U)}$, valid for all $v\in H^1_0(U)$ (in particular valid for $v=u$); the last equality uses the normalization $\|u\|_{L^2(U)}=1$ from the contradiction argument. |
probability of arrivals of buses and waiting time | The following figure depicts $12$ different possibilities to arrive:
The coloring tells which bus will be caught. In the case of $1,3,4,6,7,9,10,12$ the average waiting time will be $2.5$ minutes. In the case of $2,5,8,11$ the average waiting time will be $7.5$ minutes. The probability of arriving in any of the intervals is $\frac1{12}$.
Now, the average waiting time is
$$2.5\frac8{12}+7.5\frac4{12}=4\text{ minutes}.$$
Furthermore there are $4$ positions to get to the blue location and there are $8$ positions to get to the red location. The corresponding probabilities are
$$P_{\text{blue}}=\frac4{12}=\frac13,\ P_{\text{red}}=\frac8{12}=\frac23.$$ |
What is the fastest technique to find complex roots of a function? | In your case, the equation in $x$ is quadratic and you found both solutions. So now, you must solve $y^3 = 8$ and $y^3 = -1$.
Start with $y = re^{ia}$ so $y^3 = r^3 e^{i (3a)}$ and find $r$ for both equations and then find the angles. Update your answer or comment here and I will be glad to help further if needed. |
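If you want to check the roots numerically, here is a small Python sketch of the polar-form recipe, with $r=|c|^{1/3}$ and angles $a=(\arg c+2\pi k)/3$:

```python
import numpy as np

for c in (8, -1):
    r = abs(c) ** (1 / 3)                      # modulus of each cube root
    roots = [r * np.exp(1j * (np.angle(complex(c)) + 2 * np.pi * k) / 3)
             for k in range(3)]                # one root per angle choice k
    print(c, np.round(roots, 4))               # e.g. 2 and -1 ± i*sqrt(3) for c = 8
```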
How is $\frac{\big(\frac{3}{2}\big)^{99}-1}{\big(\frac{3}{2}\big)^{100}-1}\approx\frac{1}{\big(\frac{3}{2}\big)}$ | $(3/2)^{99}$ and $(3/2)^{100}$ are both really big compared to the $-1$ term in the numerator and the denominator. So the idea is that you can ignore the $-1$ terms and evaluate the fraction as $(3/2)^{99} / (3/2)^{100} = 1/(3/2)$. |
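A quick exact check of how good the approximation is (Python sketch using rational arithmetic):

```python
from fractions import Fraction

r = Fraction(3, 2)
lhs = (r**99 - 1) / (r**100 - 1)    # exact value of the original fraction
print(float(lhs), float(1 / r))     # 0.66666666... for both
```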
A problem on almost sure convergence | The sequence $\{X_n\}$ actually converges pointwise to $0$, because for each $\omega$, there is an integer $N(\omega)$ for which $X_n(\omega)=0$ whenever $n\geqslant N(\omega)$.
In particular, it does converge to $0$ in probability. |
What is the actual geometric meaning of trigonometric operations such as adding cos,sine,tan | We have
$$a\,\sin\theta+b\,\cos\theta=\sqrt{a^2+b^2}\,\sin(\theta+\alpha)$$
where $\alpha$ is the unique angle such that $\cos\alpha=a/\sqrt{a^2+b^2}$ and $\sin\alpha=b/\sqrt{a^2+b^2}$, in case $a^2+b^2 >0$.
Note that $\alpha$ is the angle of the vector $(a,b)$, measured from the positive half of the $x$-axis, and $r:=\sqrt{a^2+b^2}$ is its length.
This means that $a\,\sin\theta+b\,\cos\theta$ is the $y$ coordinate of the rotation of $(\cos\theta,\,\sin\theta)$ by $\alpha$, multiplied by $r$.
In your example $a=b=1$ so $r=\sqrt2$ and $\alpha=\pi/4$. Then the rotated and stretched vector will have angle $\pi/4+\pi/4=\pi/2$ and length $\sqrt2$. Its $y$ coordinate is indeed $\sqrt2$. |
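A quick numerical check of the identity and of the example (Python sketch):

```python
import numpy as np

a, b = 1.0, 1.0
r, alpha = np.hypot(a, b), np.arctan2(b, a)   # r = sqrt(2), alpha = pi/4
theta = np.linspace(0, 2 * np.pi, 100)
print(np.allclose(a * np.sin(theta) + b * np.cos(theta),
                  r * np.sin(theta + alpha)))              # True
print(np.sin(np.pi / 4) + np.cos(np.pi / 4), np.sqrt(2))   # both 1.4142...
```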
How is the volume of a cross-polytope in $\mathbb{R}^n = \frac{2^n}{n!}$? | These are fairly intuitive inductions if you go from $1$ to $2$ dimensions and from $2$ to $3$ dimensions. Similar steps in higher dimensions are essentially the same.
For $n=1$ you are measuring the length from $-1$ to $+1$ which is $2=\frac{2^1}{1!}$
For the induction step you start at $n$ dimensions with a cross-polytope in black with hypervolume $V_{n} = \frac{2^n}{n!}$. You then introduce an orthogonal vertical line in red in the $(n+1)$th dimension with height $h$ from $-1$ to $+1$ and see that similar copies in grey of the cross-polytope have linear proportion $1-|h|$ and so $n$-dimensional hypervolume proportion $(1-|h|)^n$; it is easier just to consider the top half and double the result. So integrating over the slices to find the next hypervolume: $$V_{n+1} = 2\int\limits_{h=0}^1 \frac{2^n}{n!}(1-h)^n\,dh = 2\left[\frac{2^n}{n!} \times \frac{-1}{n+1}(1-h)^{n+1}\right]^1_0=\frac{2^{n+1}}{(n+1)!}$$ |
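A Monte Carlo check of the formula for small $n$ (a Python sketch: the cross-polytope is $\|x\|_1\le1$, sampled inside the cube $[-1,1]^n$):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
for n in range(1, 6):
    pts = rng.uniform(-1, 1, size=(200_000, n))       # uniform points in [-1,1]^n
    frac = np.mean(np.abs(pts).sum(axis=1) <= 1)      # fraction inside ||x||_1 <= 1
    print(n, frac * 2**n, 2**n / factorial(n))        # estimate vs 2^n / n!
```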
If $\quad f(x)+xf'(x)+f''(x)=0, $ prove that $f(x)=e^{\frac{-x^2}{2}}$ | You are very close. From $(e^{x^2/2}f(x))'=c_1e^{x^2/2}$, just integrate and you find
\begin{align*}
&\int_0^x(e^{t^2/2}f(t))'\, dt=c_1\int_0^xe^{t^2/2}\, dt \\
&\implies e^{x^2/2}f(x)-f(0)=c_1\int_0^xe^{t^2/2}\, dt \\
&\implies f(x)=f(0)e^{-x^2/2}+c_1e^{-x^2/2}\int_0^xe^{t^2/2}\, dt
\end{align*}
Finally, we know $f(0)=1$. Moreover, the equation $f(x)+xf'(x)+f''(x)=0$ implies that $f''(0)=-1$. This fact should allow you to solve for $c_1$. You could invoke theorems, but this way you arrive at the answer by seeing the calculations for yourself. |
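You can also let a CAS confirm that $e^{-x^2/2}$ satisfies the equation (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x**2 / 2)
residual = f + x * sp.diff(f, x) + sp.diff(f, x, 2)
print(sp.simplify(residual))   # 0, so f(x) = exp(-x^2/2) solves f + x f' + f'' = 0
```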
Find the matrix of the transformation with respect to the given basis | You don’t proceed from here. Finding the rank and nullity of a matrix doesn’t really tell you anything that you’d need to know to perform a change of basis on it.
Recall that the columns of a transformation matrix are the images of the domain basis vectors expressed relative to the basis of the codomain. So, for each basis vector $\alpha_j$, find the expression of $\phi\alpha_j$ as a linear combination $\sum_{i=1}^4 c_{ij}\alpha_i$. The required transformation matrix is then the coefficient matrix $[c_{ij}]$.
This computation can be accomplished all at once via matrix multiplication. Let $B$ be the matrix with the vectors $\alpha_k$ (expressed relative to the standard basis) as its columns. The columns of $A_\phi B$ are then the images of these vectors, also expressed relative to the standard basis. Observe that $B$ can be interpreted as converting from the $\{\alpha_k\}$ basis to the standard one (why?), so $B^{-1}$ converts from the standard basis to the $\{\alpha_k\}$ basis. Thus, $B^{-1}A_\phi B$ is the required matrix. This operation is known as a similarity transformation or conjugation of $A_\phi$. |
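Here is a small numerical sketch of the similarity transformation (the matrices are made up for illustration):

```python
import numpy as np

A_phi = np.array([[1., 2.],      # matrix of phi in the standard basis (example)
                  [3., 4.]])
B = np.array([[1., 1.],          # columns: alpha_1 = (1,0), alpha_2 = (1,1)
              [0., 1.]])

M = np.linalg.inv(B) @ A_phi @ B          # matrix of phi relative to {alpha_k}
# Check: column j of M expresses phi(alpha_j) in the alpha basis.
print(np.allclose(B @ M[:, 0], A_phi @ B[:, 0]))   # True
```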
Reference for unbounded operators | I would like to suggest Reed/Simon: Methods of Modern Mathematical Physics 1, chapter VIII. It gives a quite careful treatment and emphasizes connections to e.g. quantum mechanics, where unbounded operators occur very naturally (e.g. because they have arbitrarily high eigenvalues, such as $$-\frac{d^2}{dx^2}+x$$ defined on Schwartz space $\mathscr{S}(\mathbf{R})$). For the same reason it devotes considerable attention to the spectral theorem for unbounded operators. |
An example of differentiable functions with discontinuous partial derivatives | Away from the origin, one can use the standard differentiation formulas to calculate that
Both of these derivatives oscillate wildly near the origin. For example, the derivative with respect to $x$ along the $x$-axis is
for $x\neq 0$, where $\operatorname{sign}(x)$ is $\pm1$ depending on the sign of $x$. In this case, the sine term goes to zero near the origin but the cosine term oscillates rapidly between $1$ and $−1$, as it is not multiplied by anything small.
This is an example of differentiable functions with discontinuous partial derivatives. See the analysis in the nice linked article. And also this old question on the site:
Can "being differentiable" imply "having continuous partial derivatives"? |
How many ways to multiply n matrices? | We see in OP's example all $5$ different ways to multiply four matrices according to the associative law. This corresponds to the Catalan number
$$C_3=\frac{1}{4}\binom{6}{3}=5.$$
We write these $5$ variants explicitly with dots and obtain
\begin{align*}
&(((A \cdot B)\cdot C)\cdot D)\\
&((A\cdot (B\cdot C))\cdot D)\\
&((A\cdot B)\cdot (C\cdot D))\\
&(A\cdot ((B\cdot C)\cdot D))\\
&(A\cdot (B\cdot (C\cdot D)))\\
\end{align*}
We can bijectively transform this representation into strings of valid pairs of open and closed parentheses. We do so by skipping the matrices and all open parentheses and replacing the dots with opening parentheses.
\begin{align*}
&(\ )\ (\ )\ (\ )\\
&(\ (\ )\ )\ (\ )\\
&(\ )\ (\ (\ )\ )\\
&(\ (\ )\ (\ )\ )\\
&(\ (\ (\ )\ )\ )\\
\end{align*}
In general we consider strings of length $2n$ consisting of $n$ open and $n$ closed parentheses. Valid sequences can be characterized as follows: parsing a string from left to right, starting with $0$ and adding $1$ when reading an open parenthesis and subtracting $1$ when reading a closed parenthesis, we always get a non-negative number, and at the end we get $0$.
Now let's count the number $C_n$ of all valid sequences of length $2n$. The number of all sequences is
\begin{align*}
\binom{2n}{n}
\end{align*}
A bad sequence contains $n$ open and $n$ closed parentheses, but reaches the value $-1$ at a certain step for the first time during parsing. When we have reached the value $-1$ we have parsed precisely one more closing parenthesis than opening parentheses.
We now reverse from that point on all parentheses, i.e. we exchange all open with closed parentheses and vice-versa. This results in a sequence with two more closed parentheses than open parentheses. So we have a total of $n+1$ closed parentheses and $n-1$ open parentheses.
It follows, the number of bad sequences is
\begin{align*}
\binom{2n}{n+1}
\end{align*}
We conclude the number $C_n$ of all valid sequences of length $2n$ is
\begin{align*}
C_n=\binom{2n}{n}-\binom{2n}{n+1}=\frac{1}{n+1}\binom{2n}{n}\qquad \qquad n\geq 1
\end{align*}
In OP's example $n\geq 2$ matrices imply $n-1$ dots for multiplication. These dots can be substituted with $n-1$ open parentheses giving $C_{n-1}=\frac{1}{n}\binom{2(n-1)}{n-1}$ different valid arrangements. |
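A short Python sketch that enumerates the parenthesizations recursively and compares the count with the Catalan number:

```python
from math import comb

def parenthesizations(ms):
    """All full parenthesizations of the product of the matrices in ms."""
    if len(ms) == 1:
        return [ms[0]]
    out = []
    for i in range(1, len(ms)):                       # position of the outermost dot
        for left in parenthesizations(ms[:i]):
            for right in parenthesizations(ms[i:]):
                out.append(f"({left}*{right})")
    return out

ways = parenthesizations(list("ABCD"))
print(len(ways), ways)                    # 5 variants, as listed above
n = 4
print(comb(2 * (n - 1), n - 1) // n)      # C_{n-1} = 5
```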
Two dimensional valuation domain with value group $\Bbb Z \oplus \Bbb Q$ | I suppose the order on $\mathbb Z\oplus\mathbb Q$ is the lexicographical order.
Denote by $v$ a (surjective) valuation whose value group is $\mathbb Z\oplus\mathbb Q$, and set $R=R_v$.
Then $(m,q)\in\mathbb Z\oplus\mathbb Q$ is such that $(m,q)>(0,0)$ iff $m\ge1$, or $m=0$ and $q>0$. In the first case $(m,q)=(m,q-1)+(0,1)$, while in the second $(0,q)=(0,q/2)+(0,q/2)$. This shows that $M=M^2$.
We can identify $P$ with the set $\{x\in R:v(x)\in\mathbb Z_{>0}\oplus\mathbb Q\}$, and it's easily seen that if $v(x_0)=(1,0)$ then $x_0\in P-P^2$. Moreover, $x_0^n\in P^n-P^{n+1}$. |
Combinatorics error correcting code | The calculation can be done without a calculator. Since $56=3\cdot 17+5$, we have $56\equiv 5\pmod{17}$. We have $36\equiv 2\pmod{17}$, so $36^4\equiv 2^4\equiv -1\pmod{17}$. But $(-1)(-55)=55\equiv 4\pmod{17}$. Thus the first part of our expression is $\equiv 20\equiv 3\pmod{17}$.
A similar calculation shows that the second part is $\equiv 8\pmod{17}$.
For $35\equiv 1\pmod{17}$, and $67\equiv 16\pmod{17}$. We have $-14\equiv 3\pmod{17}$, so $(-14)^2\equiv 9\pmod{17}$. It remains to calculate $(16)(9)$ modulo $17$. One can work directly, multiplying to get $144$, and finding the remainder. But here is a useful trick: $16\equiv -1\pmod{17}$, so $(16)(9)\equiv (-1)(9)=-9\equiv 8\pmod{17}$.
Now add. The sum is $\equiv 11\pmod{17}$. |
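The original expression is not quoted above; assuming it was $56\cdot36^4\cdot(-55)+35\cdot67\cdot(-14)^2$, a one-line Python check reproduces the pieces:

```python
# Assumed reconstruction of the expression from the computations above.
first = 56 * 36**4 * (-55)
second = 35 * 67 * (-14)**2
print(first % 17, second % 17, (first + second) % 17)   # 3 8 11
```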
if monic polynomial divides product, then it must divide at least one of them | Hint:
If $x-c$ divides $f(x)g(x)$ then $c$ is a root of $f(x)g(x)$.
So $f(c)g(c)=0$ and consequently $f(c)=0$ or $g(c)=0$. |
Prove that the equation $3^k = m^2 + n^2 + 1$ has infinitely many solutions in positive integers. | EDIT(ELABORATION)
Note that $$(a^2+b^2)(c^2+d^2)=(ad+bc)^2+(ac-bd)^2 \tag{1}$$ Thus a product of two numbers that are a sum of $2$ squares is also a sum of two squares.
CLAIM
For all $t \in \mathbb{N}$, we have that $3^{2^{t}}-1$ is a sum of two squares.
PROOF
It is true when $t=1$ since $$3^{2}-1=2^2+2^2$$
Assume it is true when $t=a$. Note that for $t=a+1$, $$3^{2^{a+1}}-1=\left(3^{2^a} -1 \right) \left(3^{2^a}+1 \right)$$
By the inductive hypothesis, $3^{2^a} -1 $ is a sum of two squares. Also, $3^{2^a}+1$ is a sum of two squares, since $3^{2^a}+1=\left(3^{2^{a-1}}\right)^2+1^2$. Thus, the claim is true for $t=a+1$ by $(1)$. We are done. The result follows, as $k=2^{t}$. |
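A brute-force confirmation for small $t$ (Python sketch):

```python
from math import isqrt

# Check that 3^(2^t) - 1 is a sum of two squares, i.e. that
# 3^k = m^2 + n^2 + 1 is solvable with k = 2^t.
for t in (1, 2, 3):
    N = 3 ** (2 ** t) - 1
    sols = [(m, isqrt(N - m * m)) for m in range(1, isqrt(N) + 1)
            if isqrt(N - m * m) ** 2 == N - m * m]
    print(2 ** t, sols[:3])   # t = 1: 8 = 2^2 + 2^2, etc.
```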
Is gradient descent nothing other than discretized gradient flow? | What you are missing is that $\lambda$ "is" the time scale $\Delta t$. So you should rather interpret the first formula as
$$\xi_{t+\Delta t}:=\xi_t - (\Delta t)\nabla_{\xi_t}f,$$
and with this you get what you want. |
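A tiny sketch making the correspondence concrete for $f(\xi)=\xi^2/2$, where the exact gradient flow is $\xi(t)=\xi(0)e^{-t}$:

```python
import numpy as np

dt, xi = 0.01, 1.0
for _ in range(100):                # integrate up to time t = 1
    xi = xi - dt * xi               # xi_{t+dt} = xi_t - dt * grad f(xi_t)
print(xi, np.exp(-1.0))             # 0.3660... vs 0.3678...: close for small dt
```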
Calculating transition probabilities | I said in comments that I thought you do not have information from the long term distribution about moving left or right, and only partial information about moving up or down. But you can say that the transition probability of moving from the bottom to the middle row is double the transition probability of moving from the middle row to the bottom row, while the transition probability of moving from the middle to the top row is $1.5$ times the transition probability of moving from the top row to the middle row
I am still not clear about the question, but let's suppose any answer meeting the condition will do, so then you could have for example
$\Pr(1 \to 2)= \Pr(1 \to 4) = \Pr(2 \to 1)= \Pr(2\to 3)=\Pr (2 \to 5) = \Pr(3 \to 2)$ $=\Pr(3 \to 6) = 0.3$
$\Pr(4 \to 1)= \Pr(4 \to 5) = \Pr(4 \to 7)= \Pr(5\to 2)=\Pr (5 \to 4) = \Pr(5 \to 6)$ $=\Pr(5 \to 8) =\Pr(6 \to 3) =\Pr(6 \to 5) =\Pr(6 \to 9) = 0.15$
$\Pr(7 \to 4)= \Pr(7 \to 8) = \Pr(8 \to 5)= \Pr(8\to 7)=\Pr (8 \to 9) = \Pr(9 \to 6)$ $=\Pr(9 \to 8) = 0.1$
implying probabilities of no movement in a particular time step of
$\Pr(1 \to 1) = 0.4$, $\Pr(2 \to 2) = 0.1$, $\Pr(3 \to 3) = 0.4$, $\Pr(4 \to 4) = 0.55$, $\Pr(5 \to 5) = 0.4$, $\Pr(6 \to 6) = 0.55$, $\Pr(7 \to 7) = 0.8$, $\Pr(8 \to 8) = 0.7$, $\Pr(9 \to 9) = 0.8$
If you simulate this with any starting position, I would expect that after say $100$ steps you would find the probability of each of the positions $1$ to $3$ having probability close to $\frac1{18}$, of each of the positions $4$ to $6$ having probability close to $\frac1{9}$, and of each of the positions $7$ to $9$ having probability close to $\frac1{6}$, adding up by row to $\frac16$, $\frac13$ and $\frac12$, which is what the question asked for. |
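A sketch of that simulation, building the transition matrix from the probabilities above and iterating it:

```python
import numpy as np

P = np.zeros((9, 9))
moves = {(1,2):.3,(1,4):.3,(2,1):.3,(2,3):.3,(2,5):.3,(3,2):.3,(3,6):.3,
         (4,1):.15,(4,5):.15,(4,7):.15,(5,2):.15,(5,4):.15,(5,6):.15,(5,8):.15,
         (6,3):.15,(6,5):.15,(6,9):.15,(7,4):.1,(7,8):.1,(8,5):.1,(8,7):.1,
         (8,9):.1,(9,6):.1,(9,8):.1}
for (i, j), p in moves.items():
    P[i - 1, j - 1] = p
P += np.diag(1 - P.sum(axis=1))            # probabilities of no movement

dist = np.linalg.matrix_power(P, 100)[0]   # start in position 1, run 100 steps
print(np.round(dist, 4))                   # ~[1/18]*3, [1/9]*3, [1/6]*3
```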
how to proceed ? Have I correctly transformed the question to an equation? | $\dfrac{M}{W}=\dfrac{5}{2}=t \implies W=2t$
$\dfrac{4}{7}W=56 \implies W=98$
$\therefore t=49 \implies M=490$ |
Topology $\text{i})$ What is a topology? $\text{ii})$ What does a topology induced by a metric mean? | A topology is any collection of sets that satisfies the given three conditions. That means that most sets have more than one possible topology that can be defined on them. For example, there are several possible topologies you can define on $\mathbb R$.
That means that your comment of
I cannot find a topology for any specific given set.
makes no sense, since if I simply give you a set, there is no way for you to discover the topology.
To improve your intuition of topology, it's easiest to first look at topologies induced by metric spaces. If $(X,d)$ is a metric space, then the set of all open sets of $X$ (where open is defined using only $d$), is a topology on $X$. Remember, a set $A$ is open in a given metric if for every element $a\in A$, there exists a ball that contains $a$ and is itself contained in $A$.
For example, if $X$ is equipped with a discrete metric, then every singleton set $\{x_0\}$ is an open set because it is actually an open ball:
$$\{x_0\}=\{x\in X: d(x,x_0) < \frac12\}.$$
Furthermore, that means that every set $A\subseteq X$ is open, because for each $a\in A$ you can find a ball that contains it and is contained in $A$. Namely, that ball is $\{a\}$.
That means that the set of all open sets given the discrete metric is just the set of all sets. Therefore, the topology induced by the discrete metric is the discrete topology.
And finally, you ask what a topological space is. It is simply a pair $(X,\tau)$. Where in metric spaces $d$ is a function that tells you the distances between elements of $X$, here $\tau$ is a family of subsets of $X$ that tells you which subsets are open. |
Definition of distribution function for a Lebesgue-Stiljes measure | The definition works whenever intervals of the form $]-\infty, x]$ have finite measure, so it includes cases where $\mu(\mathbb R)=\infty$. If such intervals have infinite measure a distribution function doesn't seem to make much sense.
p.s. I have just read your comment. You would not need a distribution function but e.g. the function $\mathbb R \to \mathbb R$, $x\mapsto -\mu(]x,0])$ if $x<0$ and $x\mapsto\mu([0,x])$ if $x\geq 0$. Distribution functions wouldn't yield the bijection anyway, as they do not cover all increasing, right continuous functions. |
Linking Probability and tensor calculus. | Suppose you have a perfect cube in the same shape as some ideal die, and assume rigid body mechanics, and that its inertia tensor is a multiple of the identity, and its COM (center of mass) is at its center; then its motion must be exactly the same as that of the fair die, in which case (by symmetry) all faces are equally likely: if some toss comes up "3", then by rotating the die before tossing, you can make any other face occupy the place where "3" was, and thus make that face come up. Why? Since nothing else changes -- the geometry, inertia tensor, and COM are all identical in both cases, and those are the only things involved in the equations of rigid body mechanics, the equations of motion for the re-oriented die are the same as those of the originally-oriented die.
Thus, by a contrapositive argument, if you have a die that's perfectly cubical, and it's NOT a fair die, then it must in fact either have a messed up COM or inertia tensor.
Post-comment addition: Consider a "die" made from a thin square piece of lead that forms one side of the die, with the rest made from say, a very light hollow shell of carbon fiber, with the "dots" painted in lightweight paint. Then (assuming the cube has side 2, is centered at the origin, and the "heavy" face is at $x = +1$), we get (approximately):
$$
COM = (1,0,0)\\
I = \begin{bmatrix}
2K & 0 & 0 \\
0 & K & 0 \\
0 & 0 & K
\end{bmatrix}
$$
for some positive value $K$. This die is surely biased, in the sense that it's very likely to land with the lead side down. And that means that any die whose COM and $I$ are similar enough to these is also biased. |
Is the field characteristic necessary for this proof? | The eigenvalues are still $\pm1$ when $\operatorname{char}(\mathbb F)=2$. Of course, since $-1=1$ in this case, all eigenvalues of $A$ are equal to $1$.
The characteristic matters when eigenspaces or diagonalisation are concerned. When $\operatorname{char}(\mathbb F)\ne2$, since the annihilating polynomial $x^2-1=(x-1)(x+1)$ is a product of distinct linear factors, $A$ must be diagonalisable.
However, when $\operatorname{char}(\mathbb F)=2$, the annihilating polynomial $x^2-1=(x-1)^2$ has repeated factors. Therefore we cannot infer that $A$ is diagonalisable. In fact, since all eigenvalues of $A$ are equal to $1$ but $A$ is not the identity map, it cannot possibly be diagonalisable.
Put it another way, when $\operatorname{char}(\mathbb F)\ne2$, the eigen "vectors" of $A$ corresponding to the eigenvalue $-1$ are the skew-symmetric matrices in $M_{n\times n}(\mathbb F)$, while the eigen "vectors" corresponding to the eigenvalue $1$ are the symmetric matrices. Since every square matrix is the sum of a skew-symmetric matrix and a symmetric matrix, you can construct an eigenbasis of $M_{n\times n}(\mathbb F)$ from the eigenvectors of $A$.
The situation is different when $\operatorname{char}(\mathbb F)=2$. In this case, since all eigenvalues are equal to $1$, the only eigen "vectors" of $A$ are the symmetric matrices. You cannot build an eigenbasis from them because there are matrices in $M_{n\times n}(\mathbb F)$ that are not symmetric (such as most upper triangular matrices). |
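To see the characteristic-$0$ picture concretely (a numpy sketch over $\mathbb R$, with $A$ the transpose map on $2\times2$ matrices in the basis $E_{11},E_{12},E_{21},E_{22}$):

```python
import numpy as np

T = np.array([[1, 0, 0, 0],     # transpose swaps the E12 and E21 coordinates
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)
vals, vecs = np.linalg.eig(T)
print(np.sort(vals))            # [-1.  1.  1.  1.]: skew-symmetric vs symmetric split
```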
Using Newton's Method to solve finite volume PDEs | I assume that $\bar{u}$ must be defined from other values on the grid (i.e. sum average of surrounding values at a point in time). Then, you do not get $2N$ additional values because each of $\bar{u}$. Anyhow, you cannot have $2N$ variables for $N$ equations and expect a unique solution.
Assuming I am right, the solution to your system of equations (SoE) still demands that you solve an implicit system. Here, it is optimal (I think) to use the Newton-Raphson method to find the solution at every time step. Since your SoE is linear you can obtain the Jacobian exactly by finite differences (this is the only computing intensive part). The Newton-Raphson will then require a single iteration. |
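A minimal sketch of why one Newton-Raphson iteration suffices for a linear system $F(u)=Au-b$ (the Jacobian is the constant matrix $A$):

```python
import numpy as np

A = np.array([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
u = np.zeros(2)                               # any initial guess
u = u - np.linalg.solve(A, A @ u - b)         # one Newton step
print(np.allclose(A @ u, b))                  # True: converged in one iteration
```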
Integral formula for polar coordinates | I recently discovered a very nice treatment of spherical integration in the paper Integration over spheres and the Divergence Theorem by John A. Baker (American Mathematical Monthly 104 (1997), 36-47). The story goes as follows. Let $n \geq 2$ and $g \colon S^{n-1} \to \mathbb{R}$ be continuous.
Define $\hat{g} \colon \mathbb{R}^n \to \mathbb{R}$ by
$$
\hat{g}(x)=
\begin{cases}
g(|x|^{-1}x) &\hbox{if $x \neq 0$} \\
0 &\hbox{if $x=0$}.
\end{cases}
$$
Now define
$$
\int_{S^{n-1}} g\, d\sigma_{n-1} = n\int_{B(0,1)} \hat{g}(x)\, dx.
$$
The following result can be proved ($B(a,b)$ is the spherical shell with radii $a$ and $b$).
Theorem. Suppose $0 \leq a < b$ and $f \colon B(a,b) \to \mathbb{R}$ is continuous. Then
$$
\int_{a \leq |x| \leq b} f(x)\, dx = \int_a^b r^{n-1} \left( \int_{S^{n-1}} f(rs)\, d\sigma_{n-1}(s)\right)dr.
$$
The proof is very nice, and uses the differentiability properties of the map
$$
\varphi(r) = \int_{a \leq |x| \leq r} f(x)\, dx,
$$
since it turns out that
$$
\frac{d\varphi}{dr} = r^{n-1} \int_{S^{n-1}} f(rs)\, d\sigma_{n-1}(s).
$$ |
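A quick Monte Carlo check of the definition for $n=2$ with $g(s)=s_1^2$, whose exact integral over $S^1$ is $\pi$ (Python sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(1_000_000, 2))
inside = pts[(pts ** 2).sum(axis=1) <= 1]              # samples in the unit disk
ghat = inside[:, 0] ** 2 / (inside ** 2).sum(axis=1)   # ghat(x) = (x_1 / |x|)^2
area = 4.0 * len(inside) / len(pts)                    # ~ pi, area of B(0,1)
print(2 * area * ghat.mean(), np.pi)                   # n * integral ~ 3.14
```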
Find integers $x$ and $y$ such that $8^x-9^y=431$ | We suspect that $512-81$ (that is, $x=3$, $y=2$) gives the largest solution. Proof by contradiction:
Giving new names to $x,y$ (replacing $x$ by $x+3$ and $y$ by $y+2$), we say
$$ 512(8^x - 1) = 81 (9^y - 1) $$
We ASSUME both $x \geq 1, y \geq 1.$
First, $8^x \equiv 1 \pmod {81}.$ A calculation shows that $x$ must be divisible by $18$.
Next,
$$ 8^{18} - 1 = 2^{54} - 1 = 3^4 \cdot 7 \cdot 19 \cdot 73 \cdot 87211 \cdot 262657 $$
The final prime factor is $262657,$ and this must divide $9^y - 1,$ or
$$ 9^y \equiv 1 \pmod{262657} $$
Yet another calculation (keyword order) tells us that $y$ is divisible by $$ 2^7 \cdot 3 \cdot 19.$$ Then $9^y - 1$ is divisible by $9^{128} - 1,$ and
$$ 9^{128} - 1 = 3^{256} - 1 = 2^{10} \cdot 5 \cdot 17 \cdot 41 \cdot 193 \cdot 257 \cdot 275201 \cdot \mbox{BIG} $$
That's all we needed. We find $9^y-1$ divisible by $1024.$ Therefore $512(8^x - 1) $ is divisible by $1024,$ which is a contradiction of $x \geq 1.$
=============================
The large prime factors of $3^{256} - 1$ are shown in:
? factor( 3^2 + 1)
%3 =
[2 1]
[5 1]
? factor( 3^4 + 1)
%4 =
[ 2 1]
[41 1]
? factor( 3^8 + 1)
%5 =
[ 2 1]
[ 17 1]
[193 1]
? factor( 3^16 + 1)
%6 =
[ 2 1]
[21523361 1]
? factor( 3^32 + 1)
%7 =
[ 2 1]
[926510094425921 1]
? factor( 3^64 + 1)
%8 =
[ 2 1]
[1716841910146256242328924544641 1]
? factor( 3^128 + 1)
%9 =
[ 2 1]
[ 257 1]
[ 275201 1]
[ 138424618868737 1]
[ 3913786281514524929 1]
[153849834853910661121 1]
?
========================== |
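A brute-force search confirming that $(3,2)$ is the only small solution (Python sketch):

```python
sols = [(x, y) for x in range(1, 60) for y in range(1, 60)
        if 8**x - 9**y == 431]
print(sols)   # [(3, 2)]
```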
calculate $\int_{0}^{2\pi}\sum_{k=n}^{\infty}e^{ik\theta}d\theta$ | You are starting with this expression: $$\int_{0}^{2\pi}\sum_{k=n}^{\infty}e^{ik\theta}d\theta$$
The way it's written, it only makes sense if
$$\sum_{k=n}^{\infty}e^{ik\theta}$$
is defined. But that is an infinite sum whose terms do not approach $0$, so that sum/limit does not exist. So the original expression doesn't have a value in the first place. |
Show $f$ is continuous on $(a,b)$, if $\forall C$ : closed subinterval, $\forall x,y : \in C, \exists M(C) >0 \ \ s.t. |f(x) - f(y) | < M(C) |x-y|$ | Fix $x \in (a,b)$. Then, there is $\delta_1 > 0$ such that $(x-2\delta_1, x+2\delta_1) \subset (a,b)$. Then, $[x-\delta_1, x+\delta_1] =: C \subset (a,b)$, so there is $M(C) > 0$ such that $|f(y) - f(z)| < M(C) |z-y|$ for all $z \neq y \in C$. In particular, $|f(y) - f(x)| < M(C) |y-x|$ for all $y \in C$ with $y \neq x$. Note that, since $\delta_1$ depends only on $x$, $C$ (hence $M(C)$ as well) depends only on $x$. Let $\epsilon > 0$ be arbitrary and take $\delta = \min\{ \epsilon/M(C), \delta_1 \}$. Then $|y-x| < \delta \implies |f(y) - f(x)| < \epsilon$. Again $\delta$ depends only on $x$ and $\epsilon$ because $M(C)$ and $\delta_1$ depend only on $x$. So $f$ is continuous (recall that, in the definition of continuity [not uniform continuity], $\delta$ is allowed to depend both on $x$ and $\epsilon$). |
The order of element in $\mathbb{Z} / 2^{2014}\mathbb{Z}$ | What you are doing is exactly correct; I do not understand the confusion.
We find that if $2^{2014} | 17^n -1$, then $v_2(17^n -1) \ge 2014$. But by LTE, we find that $v_2(17^n -1) = 4 + v_2(n)$. Plugging this in,
$$ \begin{align}
4 + v_2(n)&\ge 2014 \\
\implies v_2(n)&\ge 2010\\
\implies n&\ge 2^{2010}
\end{align}$$
As desired. |
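A quick check of the LTE formula used above (Python sketch, $v_2$ = 2-adic valuation):

```python
def v2(m):
    k = 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return k

print(all(v2(17**n - 1) == 4 + v2(n) for n in range(1, 200)))   # True
```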
About the matrix representation of group algebra | Let $C_3 = \langle\, g \mid g^3 = 1 \,\rangle$ and consider the representation $\rho \colon C_3 \to GL_3(\mathbb{R})$ defined by
$$ \rho \colon g \mapsto \begin{bmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
1 & 0 & 0
\end{bmatrix}.$$
This is a faithful representation of $C_3$. You might want to read about the regular representation of a group to see why it works in general. |
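A quick numerical check that this really is a faithful representation (Python sketch):

```python
import numpy as np

rho_g = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]])
I = np.eye(3, dtype=int)
powers = [np.linalg.matrix_power(rho_g, k) for k in (1, 2, 3)]
print([np.array_equal(p, I) for p in powers])   # [False, False, True]: order 3
```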
How to prove $U = \mathbb{R}$? | Not true.
$$
V\subset \mathbb R\setminus\{0\}=U
$$
Clearly, $U$ is open! |
Matrices, transposes and ranks | Hint: For example: if $A$ is invertible, then $O(A) = O(A^T) = \{0\}$. Find an invertible matrix for which $A \neq A^T$. |
Every left ideal is complemented on the regular representation module | This is certainly not true without any hypotheses on $R$. For instance, if $R$ is an integral domain and $I,J<R$ are non-zero ideals, then we have $IJ\neq\{0\}$ and so $I\cap J\geqslant IJ\neq\{0\}$. Thus $R$ cannot be decomposed into a direct sum of any two non-zero ideals.
However, if $_RR$ is semisimple, then the result does hold. In fact, we have the following:
Lemma: For any ring $R$ and any (left) $R$-module $M$, if $M$ is semisimple then $M$ is "completely reducible": i.e., for any $A\leqslant M$, there is $B\leqslant M$ such that $A\oplus B=M$.
Proof: Let $A\leqslant M$. Since $M$ is semisimple, we can write $M=\bigoplus_{i\in I}M_i$, where each $M_i\leqslant M$ is simple. Now, let $S=\{N\leqslant M:A\cap N=\{0\}\}$, partially ordered by inclusion. $S$ contains the trivial submodule, and thus is non-empty. Also, if $(N_j)_{j\in J}$ is a chain in $S$, then clearly $N:=\bigcup_{j\in J}N_j$ lies in $S$; indeed, if we have $n\in N\cap A$, then $n\in N_j$ for some $j$, whence $n\in N_j\cap A=\{0\}$ and so $n=0$. Thus we may apply Zorn's lemma to find $B\in S$ maximal with respect to inclusion.
We claim that $C:=A\oplus B=M$. Since $M=\bigoplus_{i\in I}M_i$, it suffices to show $M_i\leqslant C$ for every $i$, so let $i\in I$ and suppose for contradiction that $M_i\nleqslant C$. Since $M_i$ is simple, this means that $C\cap M_i=\{0\}$. In particular, $M_i\cap B=\{0\}$, so $B':=B\oplus M_i$ strictly contains $B$. We claim that $B'\cap A=0$, which will contradict maximality of $B$ in $S$. Indeed, suppose we have $x\in B'\cap A$. Then we have $x=b+m$ and $x=a$ for some $b\in B$, $m\in M_i$, and $a\in A$. This means $a-b=m\in C\cap M_i=\{0\}$, whence $a=b$. Since $A\cap B=\{0\}$ by construction, this means $a=b=0$, and so $x=0$, giving the desired contradiction. $\square$
In fact, the converse of this lemma holds as well; as an exercise, try to prove it! (Hint: first show that a module is semisimple if and only if it is a sum (not necessarily direct) of simple submodules, using Zorn's lemma.) This characterization of semisimple modules is so common that it is sometimes taken as definition! In any case, the (left) submodules of $R$ are precisely the left ideals, so your desired result then follows immediately from the lemma. |
Jacobian of parametrized ellipsoid with respect to parametrized sphere | The map $(x,y,z)\mapsto (ax,by,cz) = (X,Y,Z)$ takes three variables to three variables, rather than the two variables of your parametrization. The Jacobian matrix of this three-dimensional transformation is
$$J = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix}.$$
(So the answer to your first question is, "both." The off-diagonal entries are $0$.)
What maybe is confusing you is that this is a three-dimensional transformation and makes no reference to $\theta$ or $\phi$. (Is this why you expect there to be off-diagonal entries?) To see what's going on with the ellipsoid, in particular, you need to find the Jacobian of your parametrization (which tells you how the $\theta$ and $\phi$ directions are distorted by embedding them in three-space) and then compose it with the matrix $J$. This will be equivalent to differentiating the composition $(X(\theta,\phi), Y(\theta,\phi), Z(\theta,\phi))$.
Just for kicks, here's another attack that doesn't make any explicit use of calculus. Since the unit sphere is defined implicitly by $x^2 + y^2 + z^2 = 1$, the tangent space to the sphere at the point $p$, is all the vectors perpendicular to $(x,y,z)$. Multiply these vectors by $J$ and you have the vectors to the ellipsoid at $(X,Y,Z)$. |
Understanding integration with delta function | I think there are some mistakes here. The point of the first step is to make the argument of the delta function just be the variable of integration. So (using cleaner notation) we have
$$\int_{-\infty}^x f(y) \delta(ax-by) dy$$
and we change variables to $z=ax-by$. This turns the lower limit into $+\infty$ and the upper limit into $(a-b)x$. (I assume $b>0$.) Then $y=-\frac{z-ax}{b}$ and $\frac{dy}{dz}=-\frac{1}{b}$ So we have
$$-\frac{1}{b} \int_{+\infty}^{(a-b)x} f \left ( \frac{ax-z}{b} \right ) \delta(z) dz$$
Reversing the limits (so that the orientation is correct) changes it to:
$$\frac{1}{b} \int_{(a-b)x}^{+\infty} f \left ( \frac{ax-z}{b} \right ) \delta(z) dz.$$
Finally the integral is either $f \left ( \frac{ax}{b} \right )$ if $(a-b)x<0$ or $0$ if $(a-b)x>0$. It is badly defined if $(a-b)x=0$.
Generally speaking, if $f$ is a continuously differentiable function which only has simple roots $r_i$, then $\delta(f(x))=\sum_i \frac{1}{|f'(r_i)|} \delta(x-r_i)$, and so $\int_a^b g(x) \delta(f(x)) dx = \sum_{i : r_i \in (a,b)} \frac{1}{|f'(r_i)|} g(r_i)$. (Again we must require that none of the $r_i$ be exactly equal to $a$ or $b$, otherwise things become badly defined again.) |
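A numerical sanity check, replacing $\delta$ by a narrow Gaussian (Python sketch; the case $(a-b)x<0$, where the answer is $\frac1b f(ax/b)$):

```python
import numpy as np
from scipy import integrate

a, b, x = 1.0, 2.0, 1.0                       # (a - b) x = -1 < 0
f = np.cos
eps = 1e-3                                     # width of the Gaussian "delta"
delta = lambda s: np.exp(-s**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
val, _ = integrate.quad(lambda y: f(y) * delta(a * x - b * y),
                        -50, x, points=[a * x / b])
print(val, f(a * x / b) / b)                   # both ~ 0.4388
```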
How to write this in perfect square form? | Use $\sec^2x = \tan^2 x + 1$.
Then we have
$$
e^{-2x}\left( \tan^2 x + 1 -2\tan x \right) = e^{-2x}\left( \tan^2 x -2\tan x + 1 \right) = e^{-2x} (\tan x -1)^2
$$ |
Direct proof of the existence of Strong Induction using the Well Ordering Principle | Sometimes when you want to prove something is true, you can approach it from many different angles, and both direct and indirect (contradiction) methods are possible.
But the PMI and WO are core primal concepts; each principle when held up to a 'carnival math mirror' looks like the other statement. The PMI is about 'higher-higher-higher' while the WO is about 'lower-lower-lower'.
Think of a penny as representing a truth. You can see a penny by looking at it face up - heads, or face down - tails. Maybe we are introduced to the 'penny truth' by looking at it heads-up. When we look at the tail side, we want to know if it is a penny. So, suppose it is not. Then when flipped over it will still not be a penny. But when you do flip it, you see the heads side. A contradiction.
You might find the link, minimal counterexample, of interest. |
Finding a largest chain | The maximum cardinality of a chain in $P(\omega)$ is at least $2^{\aleph_0}$ as you showed using Dedekind cuts, and it's no bigger than that because that's the cardinality of $P(\omega)$ itself, so it's exactly $2^{\aleph_0}$.
The same problem for $P(\omega_1)$ is much harder. The maximum cardinality of a chain in $P(\omega_1)$ is at least $2^{\aleph_0}$ because $P(\omega)\subset P(\omega_1)$, and it's at most $2^{\aleph_1}$ because $|P(\omega_1)|=2^{\aleph_1}$, but there could be lots of cardinals between $2^{\aleph_0}$ and $2^{\aleph_1}$. Actually, what I loosely referred to as "the maximum" may not exist: there is certainly a least cardinal $\lambda$ such that $P(\omega_1)$ does not contain a chain of cardinality $\lambda$, but I see no obvious reason why $\lambda$ can't be a limit cardinal. At least, the cofinality of $\lambda$ must be greater than $\omega_1$: it's easy to see that, if $P(\omega_1)$ contains a chain of cardinality $\kappa_\alpha$ for each $\alpha\lt\omega_1$, then it also contains a chain of cardinality $\kappa=\sum_{\alpha\lt\omega_1}\kappa_\alpha$.
If $2^{\aleph_0}=\aleph_1$, then there is a chain of cardinality $2^{\aleph_1}$ in $P(\omega_1)$. Hint: it's like the construction you used for $P(\omega)$, but with $\{0,1\}^{\omega_1}$ (ordered lexicographically) playing the role of $\mathbb R$, and the elements with countably many nonzero coordinates playing the role of rational numbers.
Therefore, we can prove in ZFC that there is a chain of cardinality $\aleph_2$ in $P(\omega_1)$. It's an odd proof by cases, where we use one construction if $2^{\aleph_0}=\aleph_1$, and another construction if $2^{\aleph_0}\ge\aleph_2$. I wonder if there's a more elegant proof.
So, if either $2^{\aleph_0}=\aleph_1$ or $2^{\aleph_0}=2^{\aleph_1}$, then there is a chain of cardinality $2^{\aleph_1}$ in $P(\omega_1)$. If $\aleph_1\lt 2^{\aleph_0}\lt2^{\aleph_1}$. we seem to be in a gray area. According to comments by Ashutosh on this question at Math Overflow, William Mitchell constructed a model of set theory in which $P(\omega_1)$ does not contain a chain of cardinality $2^{\aleph_1}$, in his paper "Aronszajn trees and the independence of the transfer property", Ann. Math. Logic 5 (1972), 21-46.
There may be some relevant information in James E. Baumgartner's paper "Almost-disjoint sets, the dense set problem and the partition calculus", Ann. Math. Logic 9 (1976), 401-439. |
$ABCD$ right angle trapezoid | As $AB // EO // CD$, $\hat{CEO} = \hat{ECD}$ and $\hat{BEO} = \hat{EBA}$, so it suffices to prove that $\hat{EBA} = \hat{ECD}$...
Let $\hat{EBA} = \alpha$, $\hat{ECD} = \beta$ and $F$ the intersection between $EO$ and $CB$. Notice that $ABO\sim CDO$ ($AB // CD$ and $\hat{AOB} = \hat{COD}$), so $$AB:CD=AO:OC=BO:OD$$
By applying Thales' theorem you obtain that $$AE:ED=AO:OC$$ $$\to AO:OC=AB:CD=AE:ED$$ $$\to ED=\frac{AE\cdot CD}{AB}$$
Now, $$\tan\beta = \frac{ED}{CD} = \frac{AE\cdot CD}{AB}\frac{1}{CD} = \frac{AE}{AB} = \tan\alpha$$ $$\to \beta = \alpha$$ It can't be $\beta = \alpha \pm \pi$ because both $\beta$ and $\alpha$ are $\lt \frac{\pi}{2}$
$Q.E.D.$ |
Find all real numbers x, y, z, u and v in $\sqrt{x}+\sqrt{y}+2\sqrt{z-2}+\sqrt{u}+\sqrt{v}=x+y+z+u+v$ | HINT. You always have $2\sqrt{z-2}-z\le -1$, and $x-\sqrt x$ takes its minimum at $x=\frac 14$, this minimum being equal to $-\frac 14$. Since
$$2\sqrt{z-2}-z=(x-\sqrt x)+(y-\sqrt y)+(u-\sqrt u)+(v-\sqrt v)$$ it follows that the only solution is $$(x,y,z,u,v)=(\frac14,\frac14,3,\frac14,\frac14)$$ |
Smallest integer $n$ such that $\left(1-\frac{n}{365}\right)^n < \frac{1}{2}$ | To find an integer $n$ such that this holds, rewrite the inequality as $1-(n/365)<\exp(-(\ln2)/n)$ and use the fact that $\exp(-x)>1-x$ for every $x$. Then $n$ will do as soon as
$1-(n/365)<1-(\ln2/n)$, that is, $n^2>365\cdot\ln2$. Since $\ln2<.7$, one knows that $365\cdot\ln2<365\cdot.7=255.5<256=2^8$, hence every $n\geqslant2^4=16$ will do.
The numerical values the reasoning above requires to know to be performed without a calculator are the fact that $\ln2$ is (just) below $.7$ and the first powers of $2$.
It happens that the inequality does not hold for $n=15$ hence $n=16$ is the correct answer but at the moment I do not know how to prove this part without a calculator, except using the (alternating) expansion of the exponential at the second order to lower bound it. This is cumbersome, but here we go.
Since $\exp(-x)<1-x+\frac12x^2$ for every nonnegative $x$, it is enough to check that for $n=15$, $n^2<365\cdot\ln2\cdot(1-\ln2/n)$, which is true if $\ln2>\frac{n}2\left(1-\sqrt{1-\frac{4n}{365}}\right)$. Using $\sqrt{1+x}<1+\frac12x$ for $x=\frac{4n}{365-4n}$ yields $\sqrt{1-\frac{4n}{365}}>\frac{365-4n}{365-2n}$. Hence $\ln2>\frac{n^2}{365-2n}$ is enough, that is, for $n=15$, $\ln2>\frac{45}{67}$. Since $\frac{45}{67}\approx.672$, this proves the thing if one knows that $\ln2$ is greater than $.68$.
The numerical value the reasoning above requires to know to be performed without a calculator is the fact that $\ln2$ is about $.69$. |
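An exact computer check of the threshold (Python sketch with rational arithmetic, so no floating-point doubts):

```python
from fractions import Fraction

n = 1
while Fraction(365 - n, 365) ** n >= Fraction(1, 2):
    n += 1
print(n)   # 16, confirming that n = 15 fails and n = 16 works
```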
Linear Transformation, if $T(x) \neq 0$ for some $x \in V_1$, prove that $Tx \neq 0 \iff x \neq 0$ | This is false. All you need for a counterexample is a nonzero linear transformation which is not injective. For instance, the one whose matrix relative to the standard basis for $\Bbb R^3$ is $\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix}$. |
The definition of the factorial using continuations in the Lambda calculus | $Y$ is Haskell Curry's Y-combinator, which is used here to express the recursive factorial function as a closed lambda expression, rather than as a non-closed lambda expression that contains a symbol that represents a self-reference. But this has no relevance for continuation passing, it's just a standard trick when dealing formally with recursion.
$k$ is a continuation function which is supplied as an extra argument. Instead of reducing to $n!$ as a straight-forward factorial function would, this function reduces to $k(n!)$. So instead of returning the result $n!$ to where the function was applied, the continuation function $k$ controls what happens next with the result. It could for example use the result in some further computation which invokes yet another continuation function, and so on. |
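Here is a sketch of the same construction in Python, using the strict-evaluation variant of the combinator (often called $Z$) so it terminates under eager evaluation:

```python
# Z combinator, eta-expanded for a two-argument (value, continuation) function.
Z = lambda f: (lambda x: f(lambda v, k: x(x)(v, k)))(
              lambda x: f(lambda v, k: x(x)(v, k)))

# CPS factorial: instead of returning n!, it calls the continuation k on n!.
fact = Z(lambda self: lambda n, k:
         k(1) if n == 0 else self(n - 1, lambda r: k(n * r)))

print(fact(5, lambda result: result))   # 120
```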
Estimating an integral in terms of a parameter | Let me make the change of variable $y=x-l$, so that the integral becomes
$$
\int_{|y|\le |l|/2}\frac{|y+l|^2}{1+|y+l|^2\,|y|^2}\,dy.
$$
You have shown that
$$
\int_{1\le|y|\le |l|/2}\frac{|y+l|^2}{1+|y+l|^2\,|y|^2}\,dy\lesssim\log|l|.
$$
Now, if $|l|>1$, then
\begin{align}
\int_{|y|\le 1}\frac{|y+l|^2}{1+|y+l|^2\,|y|^2}\,dy&\le
\int_{|y|\le 1}\frac{(|l|+1)^2}{1+(|l|-1)^2\,|y|^2}\,dy\\
&=2\,\pi\int_0^1\frac{(|l|+1)^2r\,dr}{1+(|l|-1)^2\,r^2}\\
&=\pi\,\frac{(|l|+1)^2}{(|l|-1)^2}\log\bigl(1+(|l|-1)^2\bigr)\\
&\lesssim\log|l|.
\end{align} |
What group do these types of numbers fall into? | They are called polygonal numbers, see here also for formulas http://en.wikipedia.org/wiki/Polygonal_number |
Need a hint on problem 5-*20 from Spivak | Hint: Construct a sequence of fractions $a_n$ with limit $a$. Then take the same sequence and add $\sqrt{2}/n$ to it. You might have to look at a rational and an irrational also.
So if $a$ is rational, then $a_n=a+1/n\in \mathbb Q$ and $b_n=a+\sqrt{2}/n\in \mathbb R\setminus\mathbb Q$. The limit is different for the two cases. Hence, the limit does not exist. Similarly for $a$ irrational. But you might need to consider the rational approximants to your irrational number for $a_n$. |
Determine the values of a, b and c, for which the systems have (1) exactly one solution, (2) no solutions, (3) infinitely many solutions. | I have the same result as you: If $$a-2b+c=0$$ then we get infinitely many solutions. If $$a-2b+c\ne 0$$ then we get no solutions, since the last two equations are $$x_2+2x_3=\frac{4a-b}{3}$$ and $$x_2+2x_3=\frac{7a-c}{6}$$ |
Largest Odd Divisor Sum Help | Notice that
$$d(2n)=d(n)$$
since $2$ cannot be an odd divisor. Now let us separate your sum into even and odd terms:
$$\sum_{n=1}^{2^{99}} d(n)$$
$$=\sum_{n=1}^{2^{98}} d(2n)+\sum_{n=1}^{2^{98}} d(2n-1)$$
and now, since $d(2n)=d(n)$, we have
$$=\sum_{n=1}^{2^{98}} d(n)+\sum_{n=1}^{2^{98}} d(2n-1)$$
Now we can split it up again:
$$=\sum_{n=1}^{2^{97}} d(2n)+\sum_{n=1}^{2^{97}} d(2n-1)+\sum_{n=1}^{2^{98}} d(2n-1)$$
or
$$=\sum_{n=1}^{2^{97}} d(n)+\sum_{n=1}^{2^{97}} d(2n-1)+\sum_{n=1}^{2^{98}} d(2n-1)$$
If we continue this process, we end up with
$$=1+\sum_{k=0}^{98}\sum_{n=1}^{2^k}d(2n-1)$$
And since you seem to know how to calculate the sum of odd terms, this sum should be manageable for you.
EDIT: I had a bit of a brain fart and didn't realize that
$$d(2n-1)=2n-1$$
so now we have
$$=1+\sum_{k=0}^{98}\sum_{n=1}^{2^k} 2n-1$$
and since the sum of the first $a$ odd numbers is $a^2$,
$$=1+\sum_{k=0}^{98} (2^k)^2$$
$$=1+\sum_{k=0}^{98} 4^{k}$$
and this is just a geometric sequence that, using the formula
$$\sum_{k=0}^n a^k=\frac{1-a^{n+1}}{1-a}$$
sums to
$$=1+\frac{4^{99}-1}{3}$$
$$=\color{green}{\frac{4^{99}+2}{3}}$$
Can you find the last three digits of this? |
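A quick check of the closed form against brute force, plus the requested digits (Python sketch):

```python
def d(n):                      # largest odd divisor of n
    while n % 2 == 0:
        n //= 2
    return n

small = sum(d(n) for n in range(1, 2**10 + 1))
print(small == (4**10 + 2) // 3)       # True: formula matches brute force
print(((4**99 + 2) // 3) % 1000)       # the last three digits
```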
How can I estimate the exponent of the Floating Point Arithmetic representation of a decimal number? | Excerpt of facts from https://mathworks.com/help/matlab/ref/log2.html
[F,E] = log2(X)
returns a pair of a real and an integer number (or arrays of these if X is an array)
For real X, the pair E,F satisfies the equation X = F.*2.^E. E is integer, F real, usually in the range 0.5 <= abs(F) < 1.
This function corresponds to the ANSI® C function frexp() and the IEEE floating-point standard function logb().
Note however that as (1.ddd)_2 = 2 * (0.1ddd)_2, the result you are looking for is E-1. (Which may be irrelevant, as you want to compare sizes.) |
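For what it's worth, Python exposes the same frexp (a sketch):

```python
import math

m, e = math.frexp(10.0)    # 10 = 0.625 * 2**4, with 0.5 <= |m| < 1
print(m, e, e - 1)         # 0.625 4 3: the exponent in (1.ddd)_2 form is e - 1
```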
Linearity of the supremum | For simplicity, let's say you have a finite set of functions $f_k$, $k = 1 \ldots K$, and a finite set of $n$'s, $n=1..N$, and you want
$$ \sup_{k=1 \ldots K}\sum_{n=1}^N f_k(n) = \sum_{n=1}^N \sup_{k=1\ldots K} f_k(n) \tag{1}$$
Note that you always have $$f_j(n) \le \sup_{k=1\ldots K} f_k(n)\tag{2}$$ so
$$ \sum_{n=1}^N f_j(n) \le \sum_{n=1}^N \sup_{k=1\ldots K} f_k(n) \tag{3}$$
and therefore
$$ \sup_{k=1\ldots K}\sum_{n=1}^N f_k(n) \le \sum_{n=1}^N \sup_{k=1\ldots K} f_k(n) \tag{4}$$
Moreover, in order to have equality in (4), you need equality in (3) for some $j$, and that in turn requires equality in (2) for all $n$. That is, the
necessary and sufficient condition is that there is some $j$ where the maximum is attained for all $n$.
An infinite set of $n$'s doesn't change anything, as long as the sums always converge.
For an infinite set of $k$'s, things can be rather more complicated. |
Show that if $Y$ hasn't isolated points the graph of any function $f: X \rightarrow Y$ has empty interior in $X\times Y$ | It looks fine, but you could have justified the existence of $y$ and $z$ such that $y\ne z$ and that $(a,y),(a,z)\in G(f)$. That's easy though: you take $y=f(a)$. Since $\{f(a)\}$ is not open and it is a subset of $U$, which is open, $\{f(a)\}\varsubsetneq U$, and therefore you can take $z\in U\setminus\{f(a)\}$. |
Choosing Players for a Card Game | Yes, your reasoning is correct! Your answer, $5$, is the minimum number of games in which each player can be with every other person. However, the minimum number of games in which one player can be with every other person is $4$. If you are still unsure about what you've done, try listing all of the teams out. Let's represent the players as a set $\{ a, b, c, d, e \}$. Here's all of the teams:
$\{ a, b \}$
$\{ c, d \}$
$\{ a, c \}$
$\{ d, e \}$
$\{ a, d \}$
$\{ b, e \}$
$\{ a, e \}$
$\{ b, c \}$
$\{ c, e \}$
$\{ b, d \}$
If you make each game the first and second pair as listed, and then the next game the third and fourth pair as listed, and so on until you've gone through $5$ games, you will have gone through all of the pairs. |
probability of not occurring either A or B | I am facing problem with the language. I don't understand what "not occurring either A or B" means
Yes, it is awkward wording, but does seem to intend to say "neither A nor B occurring."
Which is $(A\cup B)^\complement$, making your calculations correct. |
Linear operator $X \to X$ with dim $X= \infty$ | An infinite dimensional subspace $E$ of an infinite dimensional vector space $F$ is not necessarily equal to $F$. So $dim R(T) =dim X$ does not implies that $R(T)=X$. |
Equalizing percentages | Your statement that
He needs 80 dollars more to be back to his original salary
is incorrect; recall that his original salary wasn't just \$400, but also included a commission of 8%. If $d$ is the number of dollars in sales he needs to make so that his total salary would be the same under both his new plan and his old plan, instead of solving
$$320+(0.1\times d)=400$$
(which is what you proposed), we need to solve
$$320+(0.1\times d)=400+(0.08\times d)$$
This equation becomes
$$(0.1\times d)-(0.08\times d)= 400-320$$
$$0.02\times d = 80$$
$$d=80\times\frac{1}{0.02}=80\times 50=4000,$$
which is the correct answer. |
Is this function $f: \mathbb{R}^{n+1}\rightarrow{S^{n}}$ continuous? | The continuity of $g$ is straight forward. The map $g$ is simply the inclusion map. Fix $x_0 \in S^n$, $\varepsilon > 0$, and let $\delta = \varepsilon$. Then
$$\|x - x_0\| < \delta \implies \|g(x) - g(x_0)\| = \|x - x_0\| < \delta = \varepsilon.$$
The continuity of $f$ is less straightforward. Fix $x_0 \in \Bbb{R}^n \setminus \{0\}$, and $\varepsilon > 0$. We wish to find a $\delta > 0$ such that
$$\|x - x_0\| < \delta \implies \|f(x) - f(x_0)\| = \left\|\frac{x}{\|x\|} - \frac{x_0}{\|x_0\|}\right\| < \varepsilon.$$
In order to figure out such a $\delta$, it is typically prudent to start with the $\|f(x) - f(x_0)\| < \varepsilon$, and work backwards.
Remember: we are always allowed to replace $\|f(x) - f(x_0)\|$ by something larger. If we can make this larger quantity smaller than $\varepsilon$, then $\|f(x) - f(x_0)\|$ will be smaller than $\varepsilon$ too.
We have
\begin{align*}
\|f(x) - f(x_0)\| &= \left\|\frac{x}{\|x\|} - \frac{x_0}{\|x_0\|}\right\| \\
&= \left\|\frac{x}{\|x\|} - \frac{x}{\|x_0\|} + \frac{x}{\|x_0\|} - \frac{x_0}{\|x_0\|}\right\| \\
&\le \left\|\frac{x}{\|x\|} - \frac{x}{\|x_0\|}\right\| + \left\|\frac{x}{\|x_0\|} - \frac{x_0}{\|x_0\|}\right\| \\
&= \left|\frac{1}{\|x\|} - \frac{1}{\|x_0\|}\right|\|x\| + \frac{1}{\|x_0\|}\|x - x_0\|.
\end{align*}
In order to make this less than $\varepsilon$, we can make each of the terms in the sum less than $\varepsilon / 2$. The right term is easy; if we force $\delta \le \frac{\varepsilon \|x_0\|}{2}$, then
$$\|x - x_0\| < \delta \implies \|x - x_0\| < \frac{\varepsilon \|x_0\|}{2} \implies \frac{1}{\|x_0\|} \|x - x_0\| < \frac{\varepsilon}{2}$$
as needed.
The other term is more tricky for a couple of reasons. Firstly, the $\|x\|$ term is not constant; we can't just treat it the same way we did with the $\|x_0\|$ term. It's not even bounded, meaning that $\|x\|$ could be arbitrarily large! That is, unless we limit $\delta$. If we force, say $\delta \le \frac{\|x_0\|}{2}$, then we can say for sure that
$$\|x - x_0\| < \delta \implies \|x - x_0\| < \frac{\|x_0\|}{2} \implies \|x\| \le \|x - x_0\| + \|x_0\| < \frac{3\|x_0\|}{2}.$$
Note that I could have replaced $\frac{\|x_0\|}{2}$ with any positive number (e.g. $1$). I've chosen $\frac{\|x_0\|}{2}$ specifically because I know I'll want (very soon) for $x$ to be bounded away from $0$. The closer to $0$ that $x$ becomes, the larger $\frac{1}{\|x\|}$ becomes, and the larger $\left|\frac{1}{\|x\|} - \frac{1}{\|x_0\|}\right|$ becomes. By bounding $\delta$ by $\frac{\|x_0\|}{2}$, I ensure that
\begin{align*}
\|x - x_0\| < \delta &\implies \|x_0\| - \|x\| \le \|x - x_0\| < \frac{\|x_0\|}{2} \\
&\implies \|x\| > \|x_0\| - \frac{\|x_0\|}{2} = \frac{\|x_0\|}{2} \\
&\implies \frac{1}{\|x\|} < \frac{2}{\|x_0\|}.
\end{align*}
Under this assumption of $\delta \le \frac{\|x_0\|}{2}$, note that
$$\left|\frac{1}{\|x\|} - \frac{1}{\|x_0\|}\right| = \frac{1}{\|x\| \cdot \|x_0\|} \cdot \Big|\|x\| - \|x_0\|\Big| \le \frac{2}{\|x_0\|^2}\|x - x_0\|.$$
From the working above, we further have, under this same assumption
$$\left|\frac{1}{\|x\|} - \frac{1}{\|x_0\|}\right| \|x\| \le \frac{2}{\|x_0\|^2}\|x - x_0\| \cdot \frac{3\|x_0\|}{2} = \frac{3}{\|x_0\|}\|x - x_0\|.$$
We can make this less than $\frac{\varepsilon}{2}$ by making $\delta \le \frac{\varepsilon\|x_0\|}{6}$.
So, in summary, in order for our $\delta$ to work, it must be less than or equal to $\frac{\|x_0\|}{2}$, $\frac{\varepsilon\|x_0\|}{6}$, and $\frac{\varepsilon\|x_0\|}{2}$. Note that the latter is redundant, so our choice for $\delta$ is (at long last)
$$\delta = \min \left\{\frac{\|x_0\|}{2}, \frac{\varepsilon\|x_0\|}{6}\right\}.$$
This is not the proof. To write out the proof, you now have to write all of the above out in a sensible order. You start by assuming that $\|x - x_0\| < \delta$, and you must conclude (using the above working) that $\|f(x) - f(x_0)\| < \varepsilon$. I'm going to leave it to you. |
A question on $dis(x,A)$ | Hint: Check both inclusions separately. Use the fact that for every $\lambda \in \mathbb{C}$ there is an $x \in F$ so that $d(\lambda, x) = d(\lambda, F)$ [this follows from compactness]. |
Is there any intuitive relationship between $A A^{T}$ and $A^{T} A$? | Here's a nice relationship between the two matrices: all matrices $A$ have polar decompositions
$$
A = P_1U = UP_2
$$
where $P_i = \sqrt{S_i}$ (here $S_1 = AA^T$ and $S_2 = A^TA$) and $U$ is an orthogonal matrix (in fact, if $A$ is invertible, then $U$ is uniquely determined to be the nearest orthogonal matrix to $A$). With this orthogonal matrix $U$, we have
$$
AA^T = U(A^TA)U^T
$$ |
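A numerical sketch of this relationship using scipy's polar decomposition:

```python
import numpy as np
from scipy.linalg import polar

A = np.array([[2., 1.],
              [0., 3.]])
U, P2 = polar(A, side='right')                      # A = U P2, P2 = sqrt(A^T A)
print(np.allclose(A @ A.T, U @ (A.T @ A) @ U.T))    # True
```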
Prove that if $f(n)=\omega(g(n))$ then $f(n)-g(n)$ = $\Theta f(n).$ | Note that $f(n)-g(n) \le f(n)$ (for nonnegative $g$), therefore $f(n)-g(n) = O(f(n))$. Now you need to prove $f(n) - g(n) = \Omega(f(n))$.
Since $f(n) = \omega(g(n))$, how do you compare $f(n)-g(n)$ to $0.9f(n)$? |
Reducing Spaces: Evolution | I'm assuming your notation means that, for every $h \in \mathcal{D}(H)$, one has $Ph \in \mathcal{D}(H)$ and $HPh = PHh$. If that is the case, let $R(\lambda)=(H-\lambda I)^{-1}$ for $\lambda\not\in\sigma(H)$ and note that
$$
P(H-\lambda I)h = (H-\lambda I)Ph,\;\;\; h \in \mathcal{D}(H) \\
R(\lambda)Pg = PR(\lambda)g,\;\;\; g \in \mathcal{H}.
$$
By Stone's theorem, $E(S)P=PE(S)$ for all Borel subsets $S$ of $\mathbb{R}$, where $E$ is the spectral measure for $H$. Therefore, using the Borel functional calculus gives $e^{itH}P=Pe^{itH}$.
On the other hand, if $e^{itH}P=Pe^{itH}$, and if $h\in\mathcal{D}(H)$, then $e^{itH}h$ is strongly differentiable in $t$ and, hence, $e^{itH}Ph$ is strongly differentiable in $t$, which gives $Ph \in \mathcal{D}(H)$ and
$$
\frac{d}{dt}e^{itH}Ph|_{t=0}= P\frac{d}{dt}e^{itH}h|_{t=0} \\
HPh = PHh.
$$ |
Counting number of valid strings | Since you could compute $|S_i|=7^n-6^n$, it's easy to apply the same reasoning to find $|S_1 \cap S_2|=7^n-5^n$ and so on. So if you want to count the number of strings "At least one of them is included":
$$|S_A \cup S_B \cup S_C \cup S_D|=4(7^n-6^n)-6(7^n-5^n)+4(7^n-4^n)-(7^n-3^n)$$
But if you want to count the number of strings "At least one of each of them is included":
$$|S_A \cap S_B \cap S_C \cap S_D|=7^n-|S_A' \cup S_B' \cup S_C' \cup S_D'|=7^n-4\times6^n+6\times5^n-4\times4^n+3^n$$
In the above we first use complements and De Morgan's law, and then expand it like the previous part using PIE. We use the fact that the number of strings of length $n$ that avoid $k$ specific elements of an alphabet with $m$ elements is obviously $(m-k)^n$. At last the coefficients are $\binom{k}{i}$, where $k$ is the number of selected elements, in this case $4$, and $i$ is the parameter of PIE. |
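A brute-force confirmation of the second count for a small $n$ (Python sketch):

```python
from itertools import product

n, alphabet = 5, 'ABCDEFG'
brute = sum(all(c in s for c in 'ABCD')             # contains each of A, B, C, D
            for s in product(alphabet, repeat=n))
formula = 7**n - 4 * 6**n + 6 * 5**n - 4 * 4**n + 3**n
print(brute, formula)                                # equal
```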
Combinatorics of given alphabet | You can use $$\displaystyle \sum_{s=0}^3 \sum_{e=0}^2 \sum_{a=0}^1 \sum_{i=0}^1 \sum_{d=0}^1 \dfrac{(s+e+a+i+d)!}{s!\,e!\,a!\,i!\,d!}$$ and if you like ignore the $ a!\,i!\,d!$ which is always $1$. This will give $9859$ possibilities. You may want to subtract $1$ if you want to exclude $0$-letter anagrams, leaving $9858$ possibilities.
For the full $8$-letter anagrams, each index takes its maximum value, giving $\frac{8!}{3! \, 2!}=3360$ possibilities. It seems there are another $3360$ $7$-letter anagrams (is this a coincidence?), so between them more than two-thirds of the total. |
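The double sum is easy to evaluate by machine (Python sketch; the factorials for $a,i,d$ are all $1$ as noted):

```python
from math import factorial

total = sum(factorial(s + e + a + i + d) // (factorial(s) * factorial(e))
            for s in range(4) for e in range(3)
            for a in range(2) for i in range(2) for d in range(2))
print(total, total - 1)    # 9859, and 9858 excluding the 0-letter anagram
```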
Fractal geometry on the circle, where area exponent and cross section exponent differ by less than 1 | You seem to require that the intersections with diameters and scaled disks are Lebesgue measurable (with respect to 2- or 1-dimensional measure, respectively).
Let $f(\phi)$ denote the measure of the diameter in direction $\phi$ intersected with the fractal.
For $r_1<r_2\le 1$ we find that for each diameter, the measure of intersection with the annulus with radii $r_1$ and $r_2$ is $(r_2^n-r_1^n)f(\phi)$.
Then the area $A(r_1,r_2)$ within the annulus is bounded as follows:
$$r_1\int_\phi (r_2^n-r_1^n)f(\phi)\,\mathrm d\phi \le A(r_1,r_2)\le r_2\int_\phi (r_2^n-r_1^n)f(\phi)\,\mathrm d\phi.$$
On the other hand, by the proportionality required for areas, we find
$$ A(r_1,r_2)=A(0,r_2)-A(0,r_1)=(r_2^m-r_1^m)A(0,1).$$
We conclude
$$ \frac{r_2^m-r_1^m}{r_1(r_2^n-r_1^n)}\ge\frac{\int_\phi f(\phi)\,\mathrm d\phi}{A(0,1)}\ge\frac{r_2^m-r_1^m}{r_2(r_2^n-r_1^n)}.$$
If $r_1\to r_2$, both bounds tend to $\frac mnr_2^{\epsilon-1} $, but the expression in the middle is a constant! |
How can changes between coordinate systems create functions from non-functions? | The circle intersects the vertical line through a given $x$ either twice or not at all (apart from the two tangent points). Functions, strictly interpreted, have to be single-valued, thus the circle cannot be $y=f(x)$.
But, a circle around the origin (0,0) intersects the positive $r$ axis only once. Hence, it can be described by $r(\theta)=R=$constant.
This is the key difference: does the shape intersect the "dependent axis" ($y$ or $r$) at most once? Yes: function, No: not function.
Your example of $r(\theta)=\sin(\theta)$ is a degenerate case. It seems to intersect the $r$ axis twice, once at $r=0$ and once at $r=\sin(\theta)$. It is tangential to the $x$ axis, and if you move it upwards even slightly, it is no longer a function even in polar coordinates. |
Prime number Stone-Weierstrass-looking problem | The key point is the Müntz-Szász theorem, which states that for a sequence $(\lambda_n)_{n\geqslant 1}$ of positive numbers, the vector space generated by the constant functions and $\{x\mapsto x^{\lambda_n},n\geqslant 1\}$ is dense in $C[0,1]$ endowed with the uniform norm if and only if $\sum_{n\geqslant 1}1/\lambda_n$ is divergent. Then we conclude, since the sum of the reciprocals of the primes diverges. |
How to test the weak solution to hyperbolic conservation law? | To show that $u$ is a weak solution of this initial-value problem, we show that
$$
\int_{0}^{\infty}\int_{-\infty}^{\infty} \left[ \phi_t u + \phi_x f(u)\right] \mathrm{d}x\, \mathrm{d}t = -\int_{-\infty}^{\infty} \phi(x,0)\, u (x,0)\,\mathrm{d}x
$$
is satisfied for all $\phi$ in $C_0^1(\mathbb{R}\times \mathbb{R}^+)$. Let us prove this identity in the case
$$ u(x,t) = u_1(x,t) = \left\lbrace\begin{aligned}&0 &&\text{if } x < t/2\, ,\\ &1 &&\text{if } x > t/2\, ,\end{aligned}\right. $$
with the flux function of Burgers' equation $f(u) = \frac{1}{2}u^2$.
To do so, we split the integral in two parts, and switch the integrals according to the Fubini theorem:
\begin{aligned}
\int_{0}^{\infty}\!\int_{-\infty}^{\infty} \left[ \phi_t u + \phi_x f(u)\right] \mathrm{d}x\, \mathrm{d}t &= \int_{-\infty}^{\infty}\int_{0}^{\infty} \phi_t u\, \mathrm{d}t\, \mathrm{d}x + \int_{0}^{\infty}\!\int_{-\infty}^{\infty} \phi_x f(u)\, \mathrm{d}x\, \mathrm{d}t \\
&= \int_{0}^{\infty}\int_{0}^{2x} \phi_t \, \mathrm{d}t\, \mathrm{d}x + \frac{1}{2}\int_{0}^{\infty}\!\int_{t/2}^{\infty} \phi_x \, \mathrm{d}x\, \mathrm{d}t \\
&= \int_{0}^{\infty}\left[\phi(x,2x) - \phi(x,0)\right]\mathrm{d}x - \frac{1}{2}\int_{0}^{\infty} \phi(t/2,t)\, \mathrm{d}t \\
&= -\int_{0}^{\infty}\phi(x,0)\,\mathrm{d}x \\
&= -\int_{-\infty}^{\infty}\phi(x,0)\,u(x,0)\,\mathrm{d}x \, .
\end{aligned}
In the case where $u(x,t) = u_2(x,t)$, the proof is similar.
This is a particular case of exercise 3.4 p 29 of the book [1].
[1] R.J. LeVeque: Numerical Methods for Conservation Laws, 2nd ed., Birkhäuser, 1992. |
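The identity can also be checked numerically for one concrete test function; here is a sketch with $\phi(x,t)=e^{-x^2-t}$ (rapidly decaying, so effectively compactly supported for quadrature):

```python
import numpy as np
from scipy import integrate

phi_t = lambda x, t: -np.exp(-x**2 - t)
phi_x = lambda x, t: -2 * x * np.exp(-x**2 - t)

# u1 = 1 for x > t/2 and 0 otherwise, f(u) = u^2/2, so the
# integrand phi_t*u + phi_x*f(u) is supported on x > t/2.
lhs, _ = integrate.dblquad(lambda x, t: phi_t(x, t) + 0.5 * phi_x(x, t),
                           0, 30, lambda t: t / 2, lambda t: 30)
rhs, _ = integrate.quad(lambda x: -np.exp(-x**2), 0, 30)
print(lhs, rhs)    # both ~ -0.8862 = -sqrt(pi)/2
```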
The transition from the residue classes modulo to the elements of classes | No, this is not necessarily true. For instance, suppose $K=\mathbb{Z}$, $R=2\mathbb{Z}$, and $[\cdot]_1=[\cdot]_2=[\cdot]_3=2\mathbb{Z}$. Then $[\cdot]_1\cdot[\cdot]_2=[\cdot]_3$, but if $a\in[\cdot]_1$ and $b\in[\cdot]_2$, then $ab$ must be divisible by $4$, not just by $2$. In particular, $2\in[\cdot]_3$ but cannot be written as such a product $ab$. |
The union of a set with a set that has the finite intersection property has F.I.P | I’ll use $\mathscr{C}$ instead of $C$ for the family with the finite intersection property.
Yes, if $A\in\mathscr{C}$, then $\mathscr{C}\cup\{A\}=\mathscr{C}$, so it has the finite intersection property. However, $A\notin\mathscr{C}$ does not imply that $I\setminus A\in\mathscr{C}$. Suppose, for instance, that $A_n=\{k\in\Bbb N:k\ge n\}$ for each $n\in\Bbb N$, and $\mathscr{C}=\{A_n:n\in\Bbb N\}$; it’s easy to check that $\mathscr{C}$ has the finite intersection property. Let $A=\{2n:n\in\Bbb N\}$, the set of even natural numbers, so that $\Bbb N\setminus A$ is the set of odd natural numbers; then neither $A$ nor $\Bbb N\setminus A$ belongs to $\mathscr{C}$.
HINT: I suggest trying to prove the contrapositive: show that if neither $\{A\}\cup\mathscr{C}$ nor $\{I\setminus A\}\cup\mathscr{C}$ has the finite intersection property, then $\mathscr{C}$ does not have the finite intersection property. Start by noticing that if $\{A\}\cup\mathscr{C}$ does not have the finite intersection property, then there is a finite $\mathscr{C}_0\subseteq\mathscr{C}$ such that $A\cap\bigcap\mathscr{C}_0=\varnothing$. There’s a further hint in the spoiler box if you need it.
If $A\cap X=\varnothing=B\cap X$, then $$(A\cup B)\cap X=(A\cap X)\cup(B\cap X)=\varnothing\,.$$ |
In 3D printing does the Bernouilli Equation mean that 1.75mm filament drive has less force on the extruder gear than 3.0mm? | You are missing a drive term in Bernoulli's equation. For our purposes it is better to include an energy-input term on the left side that represents the drive gear's work. Note that the mechanical power delivered by the gear is $P = \tau \omega$, where $\tau$ is the torque of the gear and $\omega$ is the rotational speed.
1) $Q = Av$ where $Q$ is the volumetric flow rate, $A$ is the cross sectional area of the pipe, $v$ is flow velocity. Since the outlet and inlet volumetric flow is conserved, we have:
$$A_1v_1 = A_2v_2$$
$$\frac{A_1}{A_2} = \frac{v_2}{v_1}$$
$$\frac{r^2_1}{r^2_2} = \frac{v_2}{v_1}$$
2) You can get the pressure difference pretty easily by rearranging Bernoulli's equation, but the pressure ratio can only be found if you know one of the pressures. I remember there are ways to do this with extrusion equations, so let me do some research and come back to this.
3) With the change made to the equation, torque should now be easy to find. The problem is that the torque is a free parameter: you can set it to anything you like, so there may be some missing constraints here. One big issue is that the system is not lossless, and the losses may be significant; it may be worth computing these.
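As a quick numerical illustration of the continuity relation in 1) (the diameters are the standard filament sizes; a minimal sketch, not a full extruder model):

    def speed_ratio(d_in_mm, d_out_mm):
        # v_out / v_in = A_in / A_out = (d_in / d_out)^2 for equal Q = A*v
        return (d_in_mm / d_out_mm) ** 2

    # pushing the same volumetric flow Q through 1.75 mm instead of 3.0 mm filament
    print(speed_ratio(3.0, 1.75))  # ~2.94: the thinner filament moves ~3x faster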
An attempt at proving that $A=(0,1)$ is not compact on the real line with the usual topology. | I would note that $G_1=(1/2,3/2)$, which covers $(1/2,1)$, and then note that
$$\frac{3}{2^{n+1}} \ge \frac{1}{2^n} \iff \frac{3}{2} \ge 1 \mbox{, for all }n\in \mathbb{N}$$
which is true.
This gives us that two consecutive intervals have non-empty intersection so that
$$\bigcup_{n\le N}G_{n}=\left(\frac{1}{2^N},\frac{3}{2}\right) \mbox{, for all } N\in \mathbb{N}$$
Now if we have $x\in (0,1)$, then $x >0$, and since $\frac{1}{2^n}$ converges to $0$, taking $\epsilon =x$ in the definition of the limit gives an $N_0$ such that $\frac{1}{2^n}<x$ for all $n\ge N_0$.
So $x\in \bigcup_{n\le N_0}G_{n}$ (a finite union), hence there exists an interval $G_k$ with $k\le N_0$ such that $x \in G_k$; the $G_n$ therefore cover $(0,1)$.
Finally, no finite subfamily can cover $(0,1)$: by the formula above, any finite union of the $G_n$ is contained in $\left(\frac{1}{2^N},\frac{3}{2}\right)$ for some $N$, which misses the point $\frac{1}{2^{N+1}}\in(0,1)$.
Hope I made no mistakes.
Existence of a like affine function | Let's exploit the structure of $f$ and argue by contradiction. Suppose all three properties are met. Let $c=f(0,0)$ and $d=f(1,0)$. If either $c$ or $d$ is non-positive, the third condition is violated and we're done, so we will assume $c>0$ and $d>0$. The convexity property of your function immediately determines the values of $f$ in the triangle $T$ with vertices $(0,0), (1,0), (0,1)$.
However, there is more! Now, pick any point $p$ in $D$ which isn't in $T$ (for instance, the corner $(1,1)$). To find the value of $f(p)$, we pick an auxiliary point $q$ in the interior of $T$. Because the interior of $T$ is an open set, we are able to pick another point $r$ which lies in the segment $p$-$q$ but still lies in the interior of $T$; this means we know $f(q)$ and $f(r)$, and we can write $r = \alpha p + \beta q$, and the convexity condition of your function gives us a linear equation for the value of $f(p)$. In this manner, the values of $f$ everywhere in $D$ are determined uniquely by the choice of $c$ and $d$.
Now that we know $f$ everywhere, write
\begin{align}
f(x, y)
&= f(x, y\times 1 + (1-y) \times 0)
\\&= yf(x, 1) + (1-y)f(x,0),
\\&= y[f(x\times 1+(1-x)\times 0, 1)] + (1-y)[f(x\times 1+(1-x)\times 0, 0)],
\\&= y[xf(1,1) + (1-x)f(0,1)] + (1-y)[xf(1,0) + (1-x)f(0,0)],
\\&= xyf(1,1) + (1-x)yf(0,1) + x(1-y)f(1,0) + (1-x)(1-y)f(0,0),
\\&= xyf(1,1) + x(1-y)d + (1-x)(1-y)c,
\end{align}
for any $(x,y)\in D$. We don't know $f(1,1)$ explicitly, but we can use convexity to find its value. $(0.5,0.5)$ is the midpoint of $(1,0)$ and $(0,1)$, so $f(0.5,0.5)=d/2$. Applying the same argument to the midpoint of $(0,0)$ and $(1,1)$, $(f(1,1)+c)/2 = d/2$, which yields $f(1,1)=d-c$. If $c\geq d$, then $f(1,1)\le 0$ and we're done! If not, we write, at last:
\begin{align}
f(x, y)
&= xy(d-c) + x(1-y)d + (1-x)(1-y)c,
\\&= d[xy + x(1-y)] + c[(1-x)(1-y) -xy],
\\&= dx + c(1-x-y).
\end{align}
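A quick symbolic check of this last simplification, purely to confirm the algebra (a sketch using sympy):

    import sympy as sp

    x, y, c, d = sp.symbols('x y c d')
    f = x*y*(d - c) + x*(1 - y)*d + (1 - x)*(1 - y)*c
    print(sp.simplify(f - (d*x + c*(1 - x - y))))  # prints 0, so f = dx + c(1-x-y)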
Just for a moment, let's extend $f$ to the whole plane, and let's find its zeroes:
\begin{align}
0
&= dx + c(1-x-y),
\end{align}
which yields
\begin{align}
y=\frac{d-c}{c}x+1.
\end{align}
This line goes through the point (0,1), as expected. However, since we assumed $c<d$, the line has a positive slope, and thus intersects $D$ in the $x<0$ region, which gives the final contradiction.
I hope this helps! |
Removing brackets and negative values within equations | The gist of the matter is that $$-(q-2)=-q+2$$ That is, it's the same as $$(-1)\times(q-2)$$ and by the same token, $$-3(q-2)=(-3)(q-2)=(-3)(q)+(-3)(-2)=-3q+6$$
In your evaluation of $$10-2(3-1),$$ you insert a second minus sign, making it $$10--2(3-1).$$ There is no justification for this. The two expressions are not equal. |
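For instance, evaluated correctly, $$10-2(3-1)=10-2\cdot 2=10-4=6,$$ whereas the inserted extra sign would compute $$10-(-2)(3-1)=10+4=14\ne 6.$$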
Universal Quantifier Representation (Conditional Statement): Implication, Semantic Consequence, Syntactic Consequence | None of them.
$∀x [P(x)→Q(x)]$ reads: "every $P$ is $Q$". See Categorical proposition.
The "syntactic consequence" relation: $P(x)⊢Q(x)$ reads: "from (formula) $P(x)$, (the formula) $Q(x)$ is derivable". See Proof calculus (or Proof system).
The "semantic consequence" relation: $P(x)⊨Q(x)$ reads: "(formula) $P(x)$ logically implies (the formula) $Q(x)$". See Logical Consequence.
$⇒$ is ambiguous; sometimes it is used for (logically) implies (i.e. semantic consequence), sometimes for the connective "if..., then..." (i.e. the conditional: →). |
Criteria for being a true martingale | Here you are:
From Protter's book "Stochastic Integration and Differential Equations", Second Edition (pages 73 and 74).
First:
Let $M$ be a local martingale. Then $M$ is a martingale with
$E(M_t^2) < \infty, \forall t > 0$, if and only if $E([M,M]_t) < \infty, \forall t > 0$. If $E([M,M]_t) < \infty$, then $E(M_t^2) = E([M,M]_t)$.
Second:
If $M$ is a local martingale and $E([M, M]_\infty) < \infty$, then $M$ is a
square integrable martingale (i.e. $\sup_{t>0} E(M_t^2) = E(M_\infty^2) < \infty$). Moreover $E(M_t^2) = E([M, M]_t), \forall t \in [0,\infty]$.
Third:
From George Lowther's fantastic blog, for positive local martingales that are (shall I say) weakly unique solutions of some SDEs.
Take a look at it yourself:
http://almostsure.wordpress.com/category/stochastic-processes/
Fourth:
For a positive continuous local martingale $Y$ that can be written as the Doléans-Dade exponential of a (continuous) local martingale $M$: if $E(e^{\frac{1}{2}[M,M]_\infty})<\infty$ (that's Novikov's condition for $M$), then $Y$ is a uniformly integrable martingale. (I think there are some variants around the same theme.)
I think I can remember I read a paper with another criterion, but I don't have it with me right now. I'll try to find it and add it here when I do.
Regards |
Valid form of using Big O notation | The big O notation sometimes denotes a class of functions. In this sense it is true that
$$O(n^2+5n+5)=O(n^2),$$ i.e. the two classes are equal.
But this is a statement about the classes themselves and does not mention a particular function. The membership statement
$$f(n)=n^2+5n+5\in O(n^2)$$
is a separate assertion (in the class equality above there is no member "$f$" at all).
Showing there is no such bounded linear functional | Let's show that $F$ is not bounded on $M$.
Consider $f_n \in M$ defined as $f_n(x) = \sin\left(n(x-\frac12)\right), \forall x\in [0,1]$ for $n \in \mathbb{N}$.
We have $\|f_n\|_\infty = 1$ for large enough $n \in \mathbb{N}$ because $f_n\left(\frac12 + \frac{\pi}{2n}\right) = 1$ and $|f_n |\le 1$.
However, $f_n'(x) = n\cos\left(n(x-\frac12)\right)$ so $$F(f_n) = f_n'\left(\frac12\right) = n\cos 0 =n\xrightarrow{n\to\infty} \infty$$
We conclude that there cannot exist $C > 0$ such that $|F(f_n)| \le C\|f_n\|$ for all $n \in \mathbb{N}$. Thus, $F$ is not bounded on $M$, so in particular it cannot be extended to a bounded functional on $C[0,1]$.
is $f$ integrable (measurable) | Typically, to show $f$ is measurable you need to show that
$$
\{x: f(x)< c\}
$$
is measurable for all $c$. (Equivalently, one can change $<$ to $\leq$, $\geq$, or $>$). It should be pretty easy to break down $c$ into cases and show that these sets amount to intervals, which are measurable. |
Proof by induction that $3^n - 1$ is an even number | Hint: Start by showing that $3^n$ is odd. What's an odd minus an odd? |
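If it helps to see the shape of the step, one way to package it (my own phrasing, not necessarily the intended route) is $$3^{n+1}-1=3\cdot 3^n-1=3\,(3^n-1)+2,$$ so if $3^n-1$ is even, then $3^{n+1}-1$ is a sum of two even numbers, hence even.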
Can we have a continuous bijective function that maps closed intervals to open intervals? | You do not need continuity in the usual sense. If for all $i$, you can find
$$\tag{1} f^{-1} ([-i, i]) = (-k, k)$$
for some $k$, then $f$ is continuous between the topological spaces $(\mathbb R, \tau_1), (\mathbb R,\tau_2)$.
So all you need is a bijective $f$ satisfying (1). We can actually construct such an $f$ easily: define $f$ to be a bijection
\begin{align}
(-1, 1) &\to [-1, 1], \\
(-2, -1] \cup [1, 2) &\to [-2, -1) \cup (1, 2], \\
\vdots \ \ \ \ \ \ & \ \ \ \ \ \ \ \ \ \ \ \ \ \vdots \\
(-n,-n+1] \cup [n-1, n) & \to [-n, -n+1) \cup (n-1, n]
\end{align}
and so on (to construct a bijection $(-1, 1)\to [-1, 1]$, see here for a similar construction). Then $f$ is bijective and
$$ f^{-1}([-n, n]) = (-n, n)$$
for all $n\in \mathbb N$.
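For concreteness, here is a minimal sketch (in Python; the particular scheme, shifting the points $\pm 1/2^n$, is my own choice) of a bijection $(-1,1)\to[-1,1]$ of the kind referred to above:

    from fractions import Fraction

    def g(x):
        # send 1/2 -> 1, -1/2 -> -1, and 1/2^n -> 1/2^(n-1) for n >= 2; fix the rest
        if x == 0:
            return x
        if abs(x) == Fraction(1, 2):
            return 2 * x            # +-1/2 -> +-1
        ax, n = abs(x), 2
        while Fraction(1, 2**n) >= ax:
            if ax == Fraction(1, 2**n):
                return 2 * x        # +-1/2^n -> +-1/2^(n-1), same sign
            n += 1
        return x                    # every other point is fixed

    print(g(Fraction(1, 2)), g(Fraction(1, 4)), g(Fraction(3, 7)))  # 1 1/2 3/7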
Remark: I am assuming $0\notin \mathbb N$. If instead $0\in \mathbb N$, then in $\tau_2$ we have the set $\{0\} = [-0,0]$, which has only one element. Thus $(\mathbb R, \tau_1)$ is not homeomorphic to $(\mathbb R, \tau _2)$ since in $\tau_1$ there is no open $\tau_1$-open set with exactly one element. |
Given MNC is a straight line,find the value of k. | AB = 6b- 6a
therefore BC = 6b-6a
MN = MA +AC
= 3a + 12b - 12a
= 12b-9a
since MNC is on a straight line
MN = kb - 3a
MC = 12b - 9a
therefore k = 4 |
Def. 4.1-1 in Kreyszig's functional analysis book: Any example of a subset of a poset with more than one upper bound? | Let $m \in M$ be a maximal element. Consider $W := \{ n \in M :~~ n \leq_M m\}$; then $m \in W$, so $W$ is non-empty. Moreover $m$ is an upper bound for $W$, since it is comparable with any $w \in W$ and greater than or equal to all of them.
About your second question: consider $M := \mathbb{R}$ with the usual order and $W := [0,1]$. Then any real number greater than or equal to $1$ is an upper bound for $W$, i.e. any $r \in [1, \infty)$ is comparable with every $w \in W$ and satisfies $w \leq r$.
Your last two comments are OK. You are right. |
'Bounds' on the Covariance Matrix | For $E[(X-E[X])^2]$, the minimality of $c=E[X]$ can be understood from a number of vantage points. On the one hand, it is directly related to the abstract definition of conditional expectation, which asks to minimize $E[(X-Y)^2]$ over all $\mathcal F$-measurable $Y$, where $\mathcal F$ is the trivial sigma-algebra (whose measurable functions are the constants). On the other hand,
$$E[(X-c)^2]=E[(X-E[X]+E[X]-c)^2]=E[(X-E[X])^2]+(E[X]-c)^2,$$
from which $c=E[X]$ gives minimality.
For the multidimensional case, we have:
\begin{align*}
E[(X-c)(X-c)^T]&=E[(X-E[X]+E[X]-c)(X-E[X]+E[X]-c)^T]\\
&=E[(X-E[X])(X-E[X])^T]+(E[X]-c)(E[X]-c)^T.
\end{align*}
The problem is that $(E[X]-c)(E[X]-c)^T$ is not necessarily positive in all its entries, as the example of $yy^T$ shows for $y^T=(1,-2)$. It is however positive semidefinite, since $z^T(E[X]-c)(E[X]-c)^Tz=\left(z^T(E[X]-c)\right)^2\ge 0$. Consequently, for any $z$ and $c$:
$$z^TE[(X-E[X])(X-E[X])^T]z\leq z^TE[(X-c)(X-c)^T]z.$$
Now try to work out the result for the covariance matrix $E[(X-c)(Y-d)^T]$. |
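A numerical illustration of the matrix decomposition, with sample averages in place of expectations (the mixing matrix, sample size and shift $c$ below are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 3)) @ np.array([[1., .5, 0.],
                                                 [0., 1., .2],
                                                 [0., 0., 1.]])
    c = np.array([0.3, -1.0, 2.0])
    mu = X.mean(axis=0)

    lhs = (X - c).T @ (X - c) / len(X)                          # E[(X-c)(X-c)^T]
    rhs = (X - mu).T @ (X - mu) / len(X) + np.outer(mu - c, mu - c)
    print(np.allclose(lhs, rhs))                                # True: exact identity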
Covering group $Aut(\tilde{X},p)\cong NH/H $ | I am not sure if I understand the proof sketched in your textbook but here's a proof:
Let $\gamma$ be a loop in $X$ based at $x_0$ which lifts to a path $\tilde{\gamma}$ between $\tilde{x_0}$ and $\tilde{x_1}$ in $\tilde{X}$. Then $[\gamma]$ belongs to $N(H)$, where $H = p_*(\pi_1(\tilde{X}, \tilde{x_0}))$, iff $p_*(\pi_1(\tilde{X}, \tilde{x_0})) = p_*(\pi_1(\tilde{X}, \tilde{x_1}))$. Thus, if $[\gamma]$ is in $N(H)$ then there is a deck transformation $f : \tilde{X} \to \tilde{X}$ such that $f(\tilde{x_0}) = \tilde{x_1}$.
Thus define a homomorphism $g : N(H) \to \text{Aut}(\tilde{X})$ by sending $[\gamma]$ to $f$.
This is a homomorphism: if $[\gamma], [\gamma']$ are two classes in $N(H)$ such that the lift $\tilde{\gamma}'$ of $\gamma'$ has endpoints $\tilde{x_0}$ and $\tilde{x_1}'$ and corresponds to the deck transformation $f'$, then $\gamma * \gamma'$ lifts to the path $\tilde{\gamma} * f(\tilde{\gamma}')$ between $\tilde{x_0}$ and $f(\tilde{x_1}') = f(f'(\tilde{x_0}))$. Thus, $[\gamma * \gamma']$ is sent to $f \circ f'$ by $g$.
It's easy to see $g$ is surjective: given a deck transformation $f$ of $\tilde{X}$ taking $\tilde{x_0}$ to $x \in p^{-1}(x_0)$, we have $g[p\circ\sigma] = f$ where $\sigma$ is a path in $\tilde{X}$ joining $\tilde{x_0}$ and $x$. The kernel of $g$ consists of classes of loops in $X$ based at $x_0$ which lift to loops at $\tilde{x_0}$ in $\tilde{X}$. Such classes form precisely the group $H$.
Hence, the quotient map $\tilde{g} : N(H)/H \to \text{Aut}(\tilde{X})$ is an isomorphism, as desired. |
Lagrange's Multiplier | Work with the square of the distance, that is, let $f(x,y)=x^2+y^2$. So, you get the system$$\left\{\begin{array}{l}2x=\lambda(4x+6y)\\2y=\lambda(6x+10y)\\2x^2+6xy+10y^2=1\end{array}\right.$$The first two equation form a system of linear equations dependent upon $\lambda$. So, choose $\lambda$ such that the determinant of the matrix of the coefficients of the system is $0$; otherwise, the only solution will be $(0,0)$, which is not a solution of the third equation. |
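If it helps, the same system can be handed to a computer algebra system; a minimal sketch with sympy (symbol names are my own):

    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)
    eqs = [sp.Eq(2*x, lam*(4*x + 6*y)),
           sp.Eq(2*y, lam*(6*x + 10*y)),
           sp.Eq(2*x**2 + 6*x*y + 10*y**2, 1)]
    for s in sp.solve(eqs, [x, y, lam], dict=True):
        print(s, '  distance^2 =', sp.simplify(s[x]**2 + s[y]**2))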
Sharpshooter Binomial Distribution Problem | Since the first question seems to have been answered by the comment of nicola, I'll answer the second, which calls for the geometric distribution, since order matters.
With $p=0.04$, we have $$(1-p)^5p=0.0326\dots$$ |
How many ways of putting 4 distinct pens in 3 identical boxes are there? | Since this has had a number of false (or at least misleading) answers posted, let me try to clarify.
The difficulty lies in trying to get the assumptions down. The problem tells us that the pens are distinct but the boxes are not. Thus "permuting the boxes" does not change any given solution. To be precise, putting all the pens in one box gives a single solution. It makes no difference which box we choose, as they are all the same.
To stress: the assumptions are critical here. If you change them, the answer will be very different. Here we are assuming that the pens are distinct but the boxes are identical.
If we ignore, for the moment, the fact that the pens are distinct, we see that there are exactly $4$ possible patterns. These correspond to those partitions of $4$ of length at most $3$. They are $$4\quad 3+1\quad 2+2\quad 2+1+1$$
Now we have to deal with the fact that the pens are distinct. How many ways can we populate each pattern? We'll do them one at a time.
Pattern $\underline 4$: There's only one way to put all the pens in one box, so $\boxed 1$.
Pattern $\underline {3+1}$: This allocation is entirely determined if you specify which pen is off on its own. Thus $\boxed 4$.
Pattern $\underline {2+2}$: Here, we get a wrinkle. There are $\binom 42=6$ ways to specify a pair of pens but there is a symmetry. The allocation $(AB,CD,0)$ is the same as the allocation $(CD,AB,0)$. Thus we need to divide by $2$, so $\boxed 3$.
Note: another way to see that the answer is $3$ in that case is to remark that the allocation is entirely determined by specifying which pen is paired with pen $A$.
Pattern $\underline {2+1+1}$: Now it is enough to just pick the pair which comprises the $2$, so $\boxed 6$.
The answer, then, is $$1+4+3+6=\boxed {14}$$
As the list is so short, let's just write them all out. Letting the pens be $\{A,B,C,D\}$ we have $$(ABCD,0,0)$$ $$ (ABC,D,0)\quad (ABD,C,0)\quad (ACD,B,0)\quad (BCD,A,0)$$ $$(AB,CD,0)\quad (AC,BD,0)\quad (AD,BC,0)$$
$$(AB,C,D)\quad (AC,B,D)\quad (AD,B,C)\quad (BD,A,C)\quad (BC,A,D)\quad (CD,A,B)$$ |
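A brute-force confirmation of the count (a small Python sketch; it treats the boxes as interchangeable by recording only the set of nonempty blocks):

    from itertools import product

    pens, boxes = 'ABCD', range(3)
    seen = set()
    for assignment in product(boxes, repeat=len(pens)):
        blocks = [frozenset(p for p, b in zip(pens, assignment) if b == i)
                  for i in boxes]
        seen.add(frozenset(b for b in blocks if b))  # drop empty boxes, ignore order
    print(len(seen))  # 14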
Tensor product of a module with an ideal is isomorphic to their standard product | An $A$-module $M$ is flat if and only if for every finitely generated ideal $I$ of $A$, $M\otimes_AI\rightarrow IM$ is an isomorphism. This is proved, e.g., as Theorem 1.2.4 of Liu's algebraic geometry textbook. He actually states it with all ideals $I$, but a direct limit argument reduces to the case of finitely generated ideals. |
Why is there only one point at infinity in the extended complex plane, but one in each direction in the real projective plane? | There is a general construction that produces for any field $k$ and any $n\geq1$ the $n$-dimensional projective space $P(k,n)$. In the case $n=1$ this construction "adds a point $\infty$" to $k$. If $k={\mathbb R}$ we obtain the real projective line, where there is no distinction between $+\infty$ and $-\infty$. When $n=1$ and $k={\mathbb C}$ we obtain the extended complex plane $P({\mathbb C},1)={\mathbb C}\cup\{\infty\}$ with just one point $\infty$.
The definition of the real projective plane $P({\mathbb R},2)$ fully requires explaining the "general construction" referred to above: The projective plane $P({\mathbb R},2)$ is by definition the set of all one-dimensional subspaces of ${\mathbb R}^3$, i.e. the set of all lines through the origin of ${\mathbb R}^3$. This set is in bijective correspondence with the points of $S^2$ with antipodal points $x$ and $-x$ identified. When "doing geometry" in $P({\mathbb R},2)$ we rather work in ${\mathbb R}^2$ with a "line at infinity" added: To each (unoriented) direction in ${\mathbb R}^2$ corresponds a point on this line.
Expansion and orthogonality | It's not orthogonality that lets us expand. What orthogonality does is let us expand easily, because the coefficients for a function $f$ must be exactly
$$
c_m = \frac{\langle f, P_m(\cos \theta) \rangle}{\langle P_m(\cos \theta), P_m(\cos \theta) \rangle}
$$
The fact that a function $f$ can be expanded at all comes from these functions forming a basis for the set of all functions. (Except that's not quite true: you need to limit to integrable functions, and they probably need to be continuous almost everywhere, and equality holds only on a set whose complement has measure zero, and ... a whole lot of other technical constraints.) More precisely, the existence of an expansion comes from the functions spanning the relevant space of functions. The uniqueness of the expansion comes because they're a basis. Orthogonality just makes writing down the coefficients easy.
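To illustrate how orthogonality hands us the coefficients, here is a small sketch expanding $\cos^2\theta$ (i.e. $x^2$ with $x=\cos\theta$) in Legendre polynomials, using the standard normalization $\int_{-1}^1 P_m^2\,dx = \frac{2}{2m+1}$:

    from numpy.polynomial.legendre import Legendre
    from scipy.integrate import quad

    f = lambda x: x**2                  # x plays the role of cos(theta)
    for m in range(4):
        Pm = Legendre.basis(m)
        c_m = (2*m + 1) / 2 * quad(lambda x: f(x) * Pm(x), -1, 1)[0]
        print(m, round(c_m, 6))         # 1/3, 0, 2/3, 0: x^2 = P0/3 + 2*P2/3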
Statements on function with finite integral over $[0, \inf[$ | Make up the function $f$ the following way (for $n\in\mathbb N, n\ge 2$):
$f(n)=n$
Linear in $[n-\frac{1}{n^3}, n]$ so that $f(n-\frac{1}{n^3})=0$
Linear in $[n, n+\frac{1}{n^3}]$ so that $f(n+\frac{1}{n^3})=0$
$f(x)=0$ otherwise.
In other words, at each $n\ge 2$, the function has a "spike" of height $n$, width $\frac{2}{n^3}$, and area $\frac{1}{n^2}$.
Such a function is an obvious counterexample for all three statements, yet $\int_0^{\infty}f(t)\,dt=\sum_{n=2}^{\infty}\frac{1}{n^2}$, which is (absolutely) convergent.
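To make the construction concrete, a minimal sketch (the evaluation points are arbitrary):

    def f(x):
        n = round(x)
        if n >= 2 and abs(x - n) <= 1 / n**3:
            return n * (1 - n**3 * abs(x - n))  # linear spike of height n at x = n
        return 0.0

    print(f(10), f(10 + 5e-4), f(10.5))          # 10, 5.0, 0.0 -> f is unbounded
    print(sum(1/n**2 for n in range(2, 10**6)))  # total spike area ~ pi^2/6 - 1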
Prove F[X]/p(x) contains all roots of p(x) | It is false.
Hint for a counter-example:
Take $F=\mathbf Q$, $p(X)=X^3-2$ and consider the complex roots of this polynomial.
$$\mathbf Q[X]/(p(X))\simeq\mathbf Q(\sqrt[3]2).$$
Does this field contain all the roots of $p(X)$? |
A set problem (inspired by geometry) | Expanding on my comment, once we see that we can't use (abcdef) at all, of the remaining sets (abc) is the only one that has a 'c', and (abde) is the only one that has an 'e', rendering both of those useless.
The remaining set is (ad) and obviously we can't get (b) by itself. |
The quotient group of the Heisenberg group and its center is isomorphic to $(\mathbb{R}^2, +)$. | Your map $\Phi$ is not an isomorphism. For example let $h_1 \in H$ have coordinates $(x_1,y_1,z_1)=(1,0,0)$, and let $h_2$ have coordinates $(x_2,y_2,z_2) = (0,1,0)$. A short computation shows that $h_3 = h_1 h_2 \in H$ has $z_3=1$ whereas $h_4 = h_2 h_1 \in H$ has $z_4 = 0$. Therefore $h_1 h_2 \ne h_2 h_1$, whereas $\Phi(h_1) + \Phi(h_2) = \Phi(h_2) + \Phi(h_1)$.
What you should do instead is to construct a map $\Phi : H \to \mathbb R^2$ (not $\mathbb R^3$) which you can prove is a homomorphism, is surjective, and has kernel $Z(H)$. |
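To see the computation explicitly, here is a small sketch with $3\times 3$ matrices, assuming the common convention that $(x,y,z)$ denotes the upper triangular matrix with $x$, $y$ off the diagonal and $z$ in the corner:

    import numpy as np

    def H(x, y, z):
        # (x, y, z) -> [[1, x, z], [0, 1, y], [0, 0, 1]]
        return np.array([[1, x, z], [0, 1, y], [0, 0, 1]])

    h1, h2 = H(1, 0, 0), H(0, 1, 0)
    print((h1 @ h2)[0, 2])  # z-coordinate of h1*h2: 1
    print((h2 @ h1)[0, 2])  # z-coordinate of h2*h1: 0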
Nef and semiample divisor | An example is given in Example 2.3.1 of "Positivity in Algebraic
Geometry, I" by Robert Lazarsfeld.
It is available free on line at https://cims.nyu.edu/~rodion/lib/R.%20K.%20Lazarsfeld.%20Positivity%20in%20Algebraic%20Geometry,%20I.%20Classical%20Setting:%20Line%20Bundles%20and%20Linear%20Series%20-%202003.pdf. |
Boundary points of union of a semi-closed and an open interval | Recall:
Let $A \subseteq \Bbb R$ . A point $x$ in $\Bbb R$ is called a boundary point of $A$ if every neighborhood of $x$ intersect both $A$ and its complement. With this definition, $\partial S=\{1,2,3,4\}$
Note that $\overline{S}=S \cup \partial S$ is correct. But $S$ and $\partial S$ need not be disjoint.
So removing $S$ from $\overline{S}$ need not give $\partial S$!
Proving the set of subsequences of a sequence are uncountable | First lets fix notation:
A sequence of reals is a function $s:\mathbb{N}\to\mathbb{R}$
A subsequence $a\circ f$ of a given sequence $a$ is obtained by composing $a$ with some strictly increasing function $f:\mathbb{N}\to\mathbb{N}$.
In general the statement is not true: a constant sequence has only one subsequence. Let us assume that there are infinitely many different elements in the sequence $(a_i)_{i\in\mathbb{N}}$. Thus (by passing to an appropriate subsequence) we can safely assume that the sequence is strictly increasing: $a_0<a_1<\dots$
Now to prove that there are uncountably many subsequences of $a$, it is enough to show that for any two different strictly increasing functions $f,g:\mathbb{N}\to\mathbb{N}$ the subsequences $a\circ f$ and $a\circ g$ are distinct. Let $n$ be the least natural number such that $f(n)\neq g(n)$; we may assume that $f(n)<g(n)$. Obviously $(a\circ f)(n)$ is not an element of the image of $a\circ g$, thus the subsequences are distinct.
EDIT: There is an easier and more general way. If the sequence $a$ does not become stationary, i.e. there is no $n\in\mathbb{N}$ such that $(a_i)_{i>n}$ is constant, then there are already uncountably many subsequences of $a$, as basically all "binary sequences" can be realized as subsequences of $a$.
Question Mean Value Theorem for Integrals | Some elements below in the case $\int g =0$, to prove that the equality still holds.
If $g$ is non-negative with a vanishing integral, then $g$ is equal to zero almost everywhere. Therefore $f\cdot g$ is also equal to zero almost everywhere, and hence $\int fg=0$, so the equality holds trivially.
Note: I’m using Lebesgue integral here.
And if you want to use the Riemann integral, you can consider an upper sum $U$ for $\int g$ that can be made as small as you like, since $\int g=0$. Then notice that $f$ is bounded, say by $M$, on $[a,b]$, being continuous on this interval. Based on that you’ll find an upper sum for $\int fg$ smaller than $MU$.
What is the smallest prime of the form $n^n+5$? | $444^{444}+5$ is prime, and is the smallest of that form. The next is $3948^{3948}+5$.
perl -Mntheory=:all -Mbigint -E 'for (1..1e5) { say if is_prime((0+$_)**$_+5); }'
It's a little faster using -Mbigint=lib,GMP or -MMath::GMP=:constant. A bit under 0.3 seconds to find the first, albeit this uses a robust PRP test rather than doing a proof.
See: factordb entry for a primality certificate. |
Volume of a solid in $xyz$-space | Let $x=r \sin \theta$, $y=r \cos \theta$. Since the solid is unchanged by rotations about the $z$ axis, the $\theta$ integration contributes a factor $2\pi$, and the volume is
$$V=\int_{r \ge 0}2 \pi r\left(\frac{1}{r^2+1}-\frac{1}{r^2+4} \right)dr= \int_{u \ge 0} \pi \left(\frac{1}{u+1}-\frac{1}{u+4} \right)du= \pi \int_1^4\frac{1}{u}du=\pi\ln(4),$$
where we substituted $u=r^2$; shifting the two terms ($u+1\mapsto u$ and $u+4\mapsto u$) makes the tails beyond $4$ cancel, leaving $\int_1^4 du/u$.
Note that I've used that the volume of a cylindrical shell with height $z$, thickness $dr$ and radius $r$ is $2 \pi r z\, dr$.
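A quick numerical cross-check (scipy handles the infinite range):

    import numpy as np
    from scipy.integrate import quad

    V, _ = quad(lambda r: 2*np.pi*r*(1/(r**2 + 1) - 1/(r**2 + 4)), 0, np.inf)
    print(V, np.pi*np.log(4))  # both ~ 4.3552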