Urn problem- distribution after all balls of x randomly selected colours are removed | I doubt you'll find a nice expression for this. The probability of drawing a ball of colour $j$ is
$$
\frac{p_j}{\binom kt}\sum_{\scriptstyle T\subseteq[1,k]-\{j\}\atop\scriptstyle|T|=t}\frac1{1-\sum_{i\in T}p_i}\;.
$$ |
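A quick numerical sanity check of the formula is easy, though: enumerate the $t$-subsets directly and compare with a Monte Carlo simulation. This is my own sketch, not part of the answer; the colour proportions are made-up example values.

```python
# Check the displayed formula: P(draw colour j after all balls of t randomly
# chosen colours are removed), exact subset sum vs. Monte Carlo simulation.
import itertools, random
from math import comb

p = [0.1, 0.2, 0.3, 0.4]     # colour proportions (assumed example values)
k, t, j = len(p), 2, 0

# Exact: (p_j / C(k,t)) * sum over t-subsets T not containing j of 1/(1 - sum_{i in T} p_i)
others = [i for i in range(k) if i != j]
exact = p[j] / comb(k, t) * sum(
    1 / (1 - sum(p[i] for i in T)) for T in itertools.combinations(others, t))

hits, trials = 0, 200_000
for _ in range(trials):
    removed = random.sample(range(k), t)          # the t colours whose balls are removed
    if j in removed:
        continue                                  # colour j was wiped out
    rest = [i for i in range(k) if i not in removed]
    if random.choices(rest, weights=[p[i] for i in rest])[0] == j:
        hits += 1

print(exact, hits / trials)                       # the two numbers should agree closely
```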
Consider the linear transformation $T:R^n\to R^m$ | (a)
The easiest way to determine $m$ is to look at the matrix $A$. It is a $3 \times 4$ matrix which means that it will multiply by a $4 \times 1$ vector to give a $3 \times 1$ vector as the output. Since $m$ references the output we have $m = 3$.
(b)
From the previous logic we have $n = 4$ since it is referencing the input.
(c)
You are mostly right about what you said. The only thing is that typically we choose the basis from the already given vectors. From the reduced matrix we can see that columns 1 and 3 present us with linearly independent vectors. Now you also have to consider the fact that row operations preserve the linear dependence relations among the columns, so the pivot columns of the original matrix give the basis we want:
$$\Bigg\{ \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 3 \\ -1 \end{pmatrix} \Bigg\}$$
(d)
To determine the kernel, or null space, we want to solve the system:
$$T(v) = 0$$
$$\begin{pmatrix} 1 & 2 & 0 & -3 \\ 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
which will yield the solution:
$$\begin{pmatrix} 3s - 2t \\ t \\ -4s \\ s \end{pmatrix}$$
where $s,t \in \mathbb{R}$. To form the basis of the kernel, we just want to identify the vector that corresponds to just $t$ and another one for $s$. Then put these two vectors in a set:
$$\Bigg\{ \begin{pmatrix} 3 \\ 0 \\ -4 \\ 1 \end{pmatrix}, \begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix} \Bigg\}$$
(e)
I am going to assume that by rank equation it is referring to the rank-nullity theorem. We know the rank of the transformation is given by the dimension of the range, giving $\operatorname{rank} = 2$. The nullity is defined as the dimension of the kernel, giving $\operatorname{nullity} = 2$. The rank-nullity theorem states:
$$\operatorname{rank} + \operatorname{nullity} = n$$
where $n$ is still the same from part (b). Now in our case:
$$2 + 2 = 4$$
the rank-nullity theorem checks out. |
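For anyone who wants to double-check parts (c)-(e) mechanically, here is a small sympy sketch using the reduced matrix from the answer (the column-space basis must still be read off from the original matrix's pivot columns, as explained above):

```python
import sympy as sp

R = sp.Matrix([[1, 2, 0, -3],
               [0, 0, 1, 4],
               [0, 0, 0, 0]])          # the reduced matrix from part (d)

print(R.rank())                        # 2, so rank = 2
print(R.nullspace())                   # two vectors spanning the kernel
print(R.rank() + len(R.nullspace()))   # 2 + 2 = 4 = n, the rank-nullity check
```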
Covering $R^2$ by open half planes | It is not always true; but if they cover $\mathbb R^2 \setminus \{(0,0)\}$, then there does exist a finite subcover.
There is a quotient map $\mathbb R^2 \setminus \{(0,0)\} \to S^1$ obtained by identifying vectors on a common half-line ($s \sim t$ if there is $\alpha > 0$ such that $s = \alpha t$). It turns out that your open half-planes are compatible with $\sim$ (if $s \sim s'$, the sign of their inner product with $s_\iota$ is the same, so one of them is in the half-plane if and only if the other is too), and they correspond to open intervals of $S^1$ (open semi-circles, even). But $S^1$ is compact, so from any open cover of $S^1$ you can extract a finite cover.
However, if they don't cover the whole circle, there may not be a finite covering.
For an example, pick $s_\iota = (1,1/\iota)$ for $\iota \in I = (0,1]$. Any finite subcover will be included in $\cup_{\iota> \epsilon} U_\iota$ for some $\epsilon > 0$, which is strictly smaller than $\cup_{\iota \in I} U_\iota$: the vector $(\epsilon,-1)$ is in the latter but not in the former. |
There are no semisimple Lie algebras of dimension $4$, $5$, or $7$ | One little fact that will help resolve this is that $\dim(H)=\operatorname{rank}(\Phi)$, the rank of the root system.
Now the rest is done by a look at the classification of root systems. If we don't have that ready, we can even get by with just the low-dimension / low-rank cases by hand:
A root system of rank $\ge 3$ must contain at least six roots (a basis and their negatives), so in this case we would already have $\operatorname{rank}(\Phi)+\lvert \Phi\rvert \ge 9$ (usually much larger in fact, but dimension $9$ indeed occurs for $\mathfrak{sl}_2\oplus \mathfrak{sl}_2 \oplus \mathfrak{sl}_2$).
Root systems of rank $2$ are implicitly classified at the beginning of every lecture on root systems, when one discusses the relations which two roots can have to each other. It turns out the possibilities are $A_1 \times A_1, A_2, B_2=C_2$, and $G_2$; while the first indeed contains four roots and describes the semisimple, six-dimensional $\mathfrak{sl}_2 \oplus \mathfrak{sl}_2$, all the others contain $\ge 6$ roots and thus correspond to Lie algebras of dimension $\ge 2+6 =8$ (actually, $A_2$ corresponds to the $8$-dimensional $\mathfrak{sl}_3$, and $B_2=C_2$ to the $10$-dimensional $\mathfrak{so}_5 \simeq \mathfrak{sp}_4$; the dimension of the exceptional Lie algebra of type $G_2$ is $14=2+12$).
There's only one root system of rank $1$: $A_1$, which corresponds to the $3$-dimensional $\mathfrak{sl}_2$. So there's nothing that could make up a semisimple Lie algebra of dimension $4,5,$ or $7$ (or dimension $1$ or $2$, for that matter). I'd have thought the next non-occurring dimension is $11$; edit: as Jason DeVito points out (thanks!), it looks like all higher dimensions do occur.
The above implicitly assumed that we work over an algebraically closed field of characteristic $0$. As YCor points out in a comment, this is enough to conclude for any base field $k$ of characteristic $0$. Namely, if $L$ is a semisimple Lie algebra over $k$ of $k$-dimension $n$, and $K\vert k$ is any field extension, then the scalar extension $L\otimes_k K$ is a semisimple Lie algebra of $K$-dimension $n$. Apply to an algebraic closure of $k$. |
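The dimension count can also be verified computationally. The sketch below (my own addition) lists which dimensions up to $30$ arise as finite sums of dimensions of simple Lie algebras over an algebraically closed field of characteristic $0$:

```python
# Dimensions of the simple Lie algebras: A_n has dim n(n+2); B_n and C_n have
# dim n(2n+1); D_n has dim n(2n-1); plus the five exceptional types.
simple = set()
for n in range(1, 10):
    simple.add(n * (n + 2))                    # A_n
    if n >= 2: simple.add(n * (2 * n + 1))     # B_n (and C_n for n >= 3)
    if n >= 4: simple.add(n * (2 * n - 1))     # D_n
simple |= {14, 52, 78, 133, 248}               # G_2, F_4, E_6, E_7, E_8

# Semisimple = finite direct sums of simples: take the additive closure up to 30.
achievable = {0}
changed = True
while changed:
    new = {a + d for a in achievable for d in simple if a + d <= 30}
    changed = not new <= achievable
    achievable |= new

print(sorted(a for a in achievable if a > 0))
# [3, 6, 8, 9, 10, 11, 12, ...]: the dimensions 1, 2, 4, 5, 7 are indeed missing
```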
Representation of Cyclic Group over Finite Field | The question of how to classify all finite-dimensional representations of $C_n$ over an arbitrary field $F$ can be studied using the structure theorem for finitely-generated modules over a principal ideal domain, in this case $F[x]$. The structure theorem asserts that any finitely-generated torsion module is uniquely a finite direct sum of modules of the form $F[x]/p(x)^r$ where $p \in F[x]$ is irreducible and $r$ is a positive integer.
If $T$ is an operator acting on $F^k$ for some $k$, then $F^k$ becomes a finitely-generated module over $F[x]$ with $x$ acting by $T$. $T$ gives a representation of the cyclic group $C_n$ if and only if $T^n = 1$, in which case the summands $F[x]/q(x)^r$ in the decomposition of $F^k$ must have the property that $q(x)^r | x^n - 1$.
If $F$ has characteristic $0$ or has characteristic $p$ and $p \nmid n$, then $x^n - 1$ is separable over $F$, hence $r \le 1$ and $F^k$ is a direct sum of irreducible representations, all of which are of the form $F[T]/q(T)$ where $q$ is an irreducible factor of $x^n - 1$ over $F$.
If $F$ has characteristic $p$ and $p | n$, then writing $n = p^s m$ where $p \nmid m$ we have
$$x^n - 1 = (x^m - 1)^{p^s}$$
from which it follows that $r \le p^s$ (but now it is possible to have $r > 1$). If $r > 1$, then the corresponding representation $F[T]/q(T)^r$ is indecomposable and not irreducible, where $q$ is an irreducible factor of $x^m - 1$ over $F$. The irreducible representations occur precisely when $r = 1$. In other words,
The irreducible representations of $C_{p^s m}$, where $p \nmid m$, over a field of characteristic $p$ all factor through the quotient $C_{p^s m} \to C_m$.
One can also see this more directly as follows. If $V$ is an irreducible representation of $C_{p^s m}$ over a field of characteristic $p$ and $T : V \to V$ is the action of a generator, then
$$T^{p^s m} - 1 = (T^m - 1)^{p^s} = 0.$$
Thus $T^m - 1$ is an intertwining operator which is not invertible, so by Schur's lemma it is equal to zero. |
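As a concrete illustration of the characteristic-$p$ picture, one can factor $x^n-1$ over $\mathbb{F}_p$ with sympy. This example, with $p=3$ and $n=18=3^2\cdot 2$, is my own addition:

```python
from sympy import factor_list
from sympy.abc import x

p, n = 3, 18                             # n = 3^2 * 2, so s = 2 and m = 2
print(factor_list(x**n - 1, x, modulus=p))
# x^18 - 1 = (x - 1)^9 (x + 1)^9 over F_3: each irreducible factor of
# x^2 - 1 appears with multiplicity p^s = 9, exactly as described above.
```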
Problem with cosh and sinh | To simplify the problem, substitute $x=\cosh(\theta)$. Then we have $x\ge 1$,
$u(t)=\frac{\sinh\frac{xt}{2}}{x}$, $A(t)=\frac{\cosh\frac{tx}{2}-1}{x^2}-\cosh \frac{t}{2}+1$, and $f(t)=-\ln (1-g(t))$, where $$g(t)=\frac{2A(t)}{u(t)+\sinh t+A(t)}=$$
$$\frac{2\frac{\cosh\frac{tx}{2}-1}{x^2}-2\cosh \frac{t}{2}+2}{\frac{\sinh\frac{xt}{2}}{x}+\sinh t+\frac{\cosh\frac{tx}{2}-1}{x^2}-\cosh \frac{t}{2}+1}=$$
$$\frac{2\cosh\frac{tx}{2}-2-2x^2\cosh \frac{t}{2}+2x^2}{x\sinh\frac{xt}{2}+x^2\sinh t+\cosh\frac{tx}{2}-1-x^2\cosh \frac{t}{2}+x^2}.$$
Now given $xt\le 1-\delta$ we want to show that $f(t)=O(x^2t^3)$. Since $x\ge 1$, $t$ is bounded from above, so we investigate the behavior of the function $f(t)$ when $x$ is constant and $t$ tends to zero. In this case for each constant $c$ we have $\sinh ct=ct+O(t^3)$, $\cosh ct=1+\frac {c^2t^2}2+O(t^3)$, so
$$g(t)=\frac{2+\left(\frac{tx}{2}\right)^2-2-x^2\left(2-\left(\frac{t}{2}\right)^2\right)+2x^2+O(t^3)}{x\frac{xt}{2}+x^2t+1+\frac 12\left(\frac{tx}{2}\right)^2-1-x^2\left(1+\frac 12\left(\frac{t}{2}\right)^2\right) +x^2+O(t^3)}=$$ $$\frac {t^2+O(t^3)}{3t+O(t^3)}=\frac t3+O(t^2).$$
Thus when $t$ tends to $0$ the function $f(t)$ tends to $0$ too, but not as fast as we want. Namely,
$$f(t)= -\ln (1-g(t))= -\ln\left(1-\frac t3+O(t^2)\right)=\frac t3+o(t)\ne O(x^2t^3).$$ |
Countability (Mathematical analysis) | Hint/partial proof: Consider the point $x$. You want to draw an edge so that it misses all of the points of your countable set $S$ that you are trying to avoid.
How many directions can that edge go? (Think of the direction as an angle, and note that the angle can be anywhere in the continuum $[0,2\pi)$.)
How many of those directions could have been bad directions to go?
Look at all of the "good directions" for $x$, and move over to $y$ instead. How many of the good directions for $x$ will be bad directions for $y$?
So, you have a line leaving $x$ and a parallel line leaving $y$. Can you repeat this process again? What shape does this make? |
Solve the inequality on the number line? | Hint: solve for
$x^2+10x+ 24 = 0$, find the roots, and determine when the factors are positive:
Solve: $$x^2 + 10 x + 24 = (x+4)(x+6) > 0$$
When is $(x+4)(x + 6)$ positive?:
$\quad$When both factors are positive, or when both factors are negative.
$$
(x+4)(x+6) > 0 \implies \begin{cases}(x+4) > 0, & (x+6) > 0 \longrightarrow x>-4\\ \\(x+4) < 0, & (x+6) < 0 \longrightarrow x<-6 \end{cases}
$$
Your task is to plot the intervals on which $x$ satisfies the inequality.
Edit: if you want to confirm the solution graphically, plot $y = (x+4)(x+6)$ and check where the parabola lies above the $x$-axis. |
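If you'd like a machine check of the final answer, sympy can solve the inequality directly (a sketch, not part of the original hint):

```python
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)
print(solve_univariate_inequality(x**2 + 10*x + 24 > 0, x))
# ((-oo < x) & (x < -6)) | ((-4 < x) & (x < oo)), matching the two cases above
```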
By mapping the generators $s_{i}$ into $S_{n}$ appropriately, find a well-defined epimorphism $\theta :G_{n}\rightarrow S_{n}$ . | Define
\begin{align}
\theta: \ & G_{n} \to S_{n}\\
& s_i \longmapsto (i,i+1) \tag{1}
\end{align}
Since the adjacent transpositions $(i,i+1)$ generate $S_n$, the mapping defined in $(1)$ is surjective; checking that it respects the defining relations of $G_n$ shows that it is a well-defined epimorphism. |
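Surjectivity can even be checked computationally for small $n$: the adjacent transpositions generate all of $S_n$. A sympy sketch of my own, with $n=5$ and $0$-based indices:

```python
from sympy.combinatorics import Permutation, PermutationGroup
from math import factorial

n = 5
gens = [Permutation(i, i + 1, size=n) for i in range(n - 1)]   # images of the s_i
G = PermutationGroup(gens)
print(G.order() == factorial(n))   # True: adjacent transpositions generate S_n
```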
Prove that the set $(Z_p,+_p,•_p)$ is a field | Let us denote by $\bar{n}$ the elements of $\mathbb{Z}_{p}$ (I think that such notation is the most common one).
You have to check that if $\bar{n}\in \mathbb{Z}_{p}$ is non-zero, then there exists another element of $\mathbb{Z}_{p}$, $\bar{m}$, such that $\bar{n}\bar{m}=\bar{1}$ in $\mathbb{Z}_{p}$.
Hence, let $\bar{n}\in \mathbb{Z}_{p}$ be non-zero. By definition, that means that $n$ is not divisible by $p$ in $\mathbb{Z}$. Then, $\gcd(n,p)=1$, because $p$ is a prime number and $p$ does not divide $n$.
By Bezout, there exist $a,b\in \mathbb{Z}$ such that $1=\gcd(n,p)=an+bp$ in $\mathbb{Z}$.
Now, reducing that equality mod $p$: $\bar{1}=\overline{an+bp}=\overline{an}+\overline{bp}=\bar{a}\bar{n}+\bar{b}\bar{p}$. Now, observe that $\bar{p}=\bar{0}$ in $\mathbb{Z}_{p}$, so $$\bar{1}=\bar{a}\bar{n},$$ so $\bar{a}$ is the inverse element of $\bar{n}$ in $\mathbb{Z}_{p}$. |
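The Bézout coefficients are exactly what the extended Euclidean algorithm produces, so the inverse can be computed explicitly. A small sketch of my own, with example values $p=13$, $n=5$:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

p, n = 13, 5
g, a, b = extended_gcd(n, p)      # a*n + b*p = 1, since gcd(n, p) = 1
print(g, (a * n) % p)             # 1 1: so a-bar is the inverse of n-bar in Z_p
```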
How do I find the measure of the sides of an equilateral triangle inscribed in a 30-60-90 triangle? | With your diagram labeled (angles $\theta$ and $\alpha$, side $x$), here are a few hints:
Explain why $\theta=\alpha$.
Inspect the right triangle at the lower left to deduce:
$$\cos \alpha =\frac1x$$
Use the law of sines on the triangle at the lower right to deduce:
$$
\frac{\sin 60^\circ}x=\frac{\sin\alpha}1
$$
This gives two equations in two unknowns $\alpha$ and $x$. Now solve for $x$! |
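Eliminating $\alpha$ via $\sin^2\alpha+\cos^2\alpha=1$ turns the two equations into a single equation for $x$, which sympy solves immediately (a sketch based on the labels above):

```python
from sympy import symbols, Eq, sin, pi, solve

x = symbols('x', positive=True)
# cos(a) = 1/x and sin(a) = sin(60 deg)/x, so (1/x)^2 + (sin(60 deg)/x)^2 = 1:
print(solve(Eq((1/x)**2 + (sin(pi/3)/x)**2, 1), x))   # [sqrt(7)/2]
```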
Vector functions - 2 functions meeting, sharing t value | The system of equations
$$\begin{split} 2t&= 2\cos t \\ \frac{6}{t+4}&=2\sin t \end{split}$$
has no solutions. Indeed, by squaring the equations and adding them, we see that a solution would satisfy
$$t^2+\frac{9}{(t+4)^2}=1\tag{1}$$ which is an algebraic equation. On the other hand, $t=\cos t$ is a transcendental equation; its only root is not an algebraic number.
Okay, I don't actually know a proof of the preceding claim, but it's easy enough to check that neither of the two real roots of (1) satisfies $\cos t=t$. One of them is $\approx -0.51$, which obviously fails. The other, $\approx 0.78$, comes close, but its cosine is about $0.71$. |
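The numerical claim is easy to reproduce, e.g. by bisection (my own quick sketch):

```python
import math

h = lambda t: t**2 + 9 / (t + 4)**2 - 1     # equation (1), moved to one side

def bisect(f, lo, hi, iters=60):
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

for lo, hi in [(-1.0, 0.0), (0.0, 1.0)]:    # brackets around the two real roots
    r = bisect(h, lo, hi)
    print(f"root {r:+.4f}, cos(root) = {math.cos(r):+.4f}")
# prints roots near -0.51 and 0.78; the cosines there are about 0.87 and 0.71
```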
Finite simple group $G$ containing no elements of order $5$ | Let $P \in {\rm Syl}_7(G)$, then $|G:N_G(P)|=8$ and $G$ embeds isomorphically ($G$ is simple!) in $A_8$. Also, since $7^1$ is the highest power of $7$ dividing $|A_8|=20160$, we have $P=\langle (1\,2\,3\,4\,5\,6\,7) \rangle$ or a conjugate.
Now it is not hard to see that the centralizer in $S_8$ of $P$ is $P$, and so $N_G(P)/P$ must be isomorphic to a subgroup of ${\rm Aut}(P)$, which is cyclic of order $6$. So $|N_G(P)|$ is not divisible by $5$, and hence neither is $G$.
In fact there is a unique (up to isomorphism) simple group that satisfies these conditions, and that is ${\rm PSL}(2,7)$, which is defined naturally as a subgroup of $A_8$. In that example $|N_G(P)|=21$ and $|G|=168$. |
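For the curious, the centralizer claim can be verified directly in sympy (a sketch; indices are $0$-based there):

```python
from sympy.combinatorics import Permutation
from sympy.combinatorics.named_groups import SymmetricGroup

S8 = SymmetricGroup(8)
c = Permutation(0, 1, 2, 3, 4, 5, 6, size=8)   # a 7-cycle generating P
print(S8.centralizer(c).order())               # 7: the centralizer of P in S_8 is P itself
```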
Looking for ONLY a hint on how to do this question | Hint: Technically, the polytope is one of its own "faces"; I'll refer to the rest of the faces as "proper" faces.
Each proper face corresponds to a set $S$ of vertices. Note that if the convex hull of $S$ intersects the interior of the polytope, then $S$ does not lie within a face. Note that if the affine subspace "spanned" by a subset is the same for sets $S_1,S_2$, then these subsets correspond to the same face.
With that said: consider pairs of vertices. Then consider subsets of $3$ non-collinear vertices. Then consider subsets of $4$ non-coplanar vertices. Among these sets, which have a convex hull that intersects the interior? Which sets correspond to the same face? |
Show that this metric is not complete | Looks mostly fine to me. The thing to keep in mind is that $f(x)$ or $f_n (x)$ refers to the value of the function at the point $x$, which are real numbers. By contrast, $f_n$ or $f$ by themselves represent the function as an object unto itself.
However, the point-wise limit seen above is not continuous, since ${f_n}→f(x)$.
This should technically read $f_n\rightarrow f$ since the sequence approaches $f$ itself, not just a value of $f$.
Similarly, your display equation should technically read $$f(x)=\lim_{n\rightarrow\infty}f_n(x)=\begin{cases} 1 &\mbox{if } x\in[0,1)\\
0 &\mbox{if } x=1\end{cases}$$ since you are giving a pointwise definition of $f$.
Hope this helps! |
Limits - Indeterminate forms | I think you just missed one indeterminate form which is $\infty^0$, an example of this type of indeterminate form could be:
$\lim_{n \to 0^+} \left(\frac{1}{n}\right)^n = 1$. |
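A quick numerical look confirms the limit (my own sketch):

```python
for n in [0.1, 0.01, 0.001, 1e-6]:
    print(n, (1 / n) ** n)       # the values tend to 1 as n -> 0+
```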
Is $M=\{(x,y)\in (0,\infty )\times\mathbb{R} : y=\sin(\frac{1}{x}) \}$ a closed set in space $((0,\infty )\times\mathbb{R} ,\rho_{e})$? | Another way of directly seeing that $M$ is closed in $(0,\infty)\times \mathbb{R}$ is by considering $f:(0,\infty)\times \mathbb{R}\to \mathbb{R}$ so that $f(x,y)=\sin(\frac{1}{x})-y$ (which is clearly continuous) and observing that $M=f^{-1}\{0\}$ whence $M$ is closed as a preimage of a closed set under a continuous function.
The definition of $M$ cannot be extended to $[0,\infty)\times \mathbb{R}$ as it stands, since $\sin(\frac{1}{0})$ is not defined; but you can observe that its closure in $\mathbb{R}^{2}$ contains all points $(0,y)\in\mathbb{R}^{2}$ with $y\in [-1,1]$, since every open ball around such a point contains a point $(x,\sin(\frac{1}{x}))$ for some $x>0$.
And to the question in your first paragraph: note that induction is a proof technique over the set of natural numbers, not the reals. |
Which of the following is the best way to define a new predicate in first-order logic? | I, personally, would do $(2)$ out of those options. Though I'd probably say something like "Define $P(x,y)$ ...".
$(3)$ doesn't make sense at all. It reads as you are picking a particular $x$ and $y$ and asserting that $P(x,y)$ is equivalent to $x=y$ for that particular $x$ and $y$. It would not at all follow that $P(1,1)$ should (or should not) hold.
The most natural thing when defining a predicate (or function) is to view it as a binding form.¹ Saying "Define $P(x,y)$ to be ..." binds $x$ and $y$ in the body of the definition. As such, you could just say, "Define $P(x,y)$ to be $x=y$." Depending on what you mean by "where $x$ and $y$ are real numbers", this may be appropriate in a multi-sorted logic or a type theory to indicate the sort/type of $P$. In typical set theories, it would appear that what you'd actually want is to define $P(x,y)$ as $x\in\mathbb R\land y\in\mathbb R\land x = y$. Alternatively, you may mean for $P(x,y)$ to be defined as $x=y$ for real numbers but arbitrary for other inputs. This should be handled differently since you are not actually (fully) defining $P$, see below.
From this perspective, $(1)$ doesn't make sense because $P(x,y)$ binds $x$ and $y$. The $\forall$ would be talking about different $x$ and $y$. Also, this isn't a formula, so using formal notation is misleading. Typically definitions are happening at a meta-logical level.
Now in understanding definitions, we can think of them as adding predicate symbols ($P$ in this case) and then axioms (or assumptions), e.g. $\forall x,y.P(x,y)\leftrightarrow x = y$. Here $\forall x,y.P(x,y)\leftrightarrow x = y$ is a formula. Stating this formula as an axiom or assuming it presupposes $P$ is a predicate symbol, so is not in itself definitional.
This two-step process can be useful. Let's say we did want $P(x,y)$ to be $x=y$ when $x\in\mathbb R$ and $y\in\mathbb R$ but otherwise unconstrained. You could go about that as follows. First, assert $P$ is a predicate: "Let $P$ be a binary predicate symbol." Then constrain it, but with an axiom/assumption that doesn't fully specify it: "Assume $\forall x,y\in\mathbb R.P(x,y)\leftrightarrow x=y$."
(As a minor note, I'd prefer $P(x,y):\equiv\dots$ or $P(x,y):\leftrightarrow\dots$ to $P(x,y) : \dots$ [and $f(x,y):=\dots$ for functions]. Definition is an asymmetric operation, so it's nice for the notation to indicate that. As indicated above, the definitions lead to assertions of equivalences/equalities and this notation suggests that as well and makes it clear which. Most commonly, definitions are just written as equivalence/equalities and the surrounding text explains that they are definitional.)
¹ This is how definitions of functions and other things work in virtually all programming languages. |
Characterizing discontinuous derivatives | There is no everywhere differentiable function $f$ on $[0,1]$ such that $f'$ is discontinuous at each irrational there. That's because $f',$ being the everywhere pointwise limit of continuous functions, is continuous on a dense $G_\delta$ subset of $[0,1].$ This is a result of Baire. Thus $f'$ can't be continuous only on a subset of the rationals, a set of the first category.
But there is a differentiable function whose derivative is discontinuous on a set of full measure.
Proof: For every Cantor set $K\subset [0,1]$ there is a "Volterra function" $f$ relative to $K,$ which for the purpose at hand means a differentiable function $f$ on $[0,1]$ such that i) $|f|\le 1$ on $[0,1],$ ii) $|f'|\le 1$ on $[0,1],$ iii) $f'$ is continuous on $[0,1]\setminus K,$ iv) $f'$ is discontinuous at each point of $K.$
Now we can choose disjoint Cantor sets $K_n \subset [0,1]$ such that $\sum_n m(K_n) = 1.$ For each $n$ we choose a Volterra function $f_n$ as above. Then define
$$F=\sum_{n=1}^{\infty} \frac{f_n}{2^n}.$$
$F$ is well defined by this series, and is differentiable on $[0,1].$ That's because each summand above is differentiable there, and the sum of derivatives converges uniformly on $[0,1].$ So we have
$$F'(x) = \sum_{n=1}^{\infty} \frac{f_n'(x)}{2^n}\,\, \text { for each } x\in [0,1].$$
Let $x_0\in \cup K_n.$ Then $x_0$ is in some $K_{n_0}.$ We can write
$$F'(x) = \frac{f_{n_0}'(x)}{2^{n_0}} + \sum_{n\ne n_0}\frac{f_n'(x)}{2^n}.$$
Now the sum on the right is continuous at $x_0,$ being the uniform limit of functions continuous at $x_0.$ But $f_{n_0}'/2^{n_0}$ is not continuous at $x_0.$ This shows $F'$ is not continuous at $x_0.$ Since $x_0$ was an arbitrary point in $\cup K_n,$ $F'$ is discontinuous on a set of full measure as desired. |
$f>0$ on $[0,1]$ implies $\int_0^1 f >0$ | Yes, your proof is correct. The two comments on that old answer are wrong. The first comment says:
The above proof is wrong: if $f(x)>C$ then $\int_0^1 f(x) \mathbb{d}x \geq C$. Notice that the inequality becomes non-strict, because integration is just passing to the limit and limits do not preserve the strictness of inequalities.
It is generally a good rule that limits do not preserve strict inequalities. But this user in his or her comment is wrong that this holds for integrals in particular. In fact, for the Lebesgue (or Riemann!) integral, if $f > g$ on a set $A$, and both are integrable, then $\int_A f > \int_A g$.
Specifically, let $A \subseteq \mathbb{R}$ be a Lebesgue measurable set, and suppose $f, g$ are measurable, with $f(x) > g(x)$ for all $x \in A$. Then $\int_A f > \int_A g$, with just a few exceptions:
If $A$ has measure $0$, this won't hold.
If $\int_A f = -\infty$ or if $\int_A g = \infty$, this won't hold.
This covers the Riemann case as well, since Riemann integrable functions are also Lebesgue integrable. (Except for some improper Riemann integrals -- I'm not sure if it holds in the case of improper integrals or not.)
This has also been covered on mathSE a lot of times. Some examples: 1, 2, 3, 4, 5. |
Combinations problem textbook wrong? | This is a pretty standard way to ask for the number of $3$-element subsets of the set of $9$ balls that contain at least one black ball.
I’d be somewhat surprised if the intended answer were not $\binom93-\binom63=84-20=64$ unless the larger context (e.g., text material that immediately precedes the question) clearly suggested either that the balls are distinguishable only by color or that the balls are to be drawn one at a time, and different orders are to be counted as distinct draws.
In any case, if you take the balls to be indistinguishable apart from color, there are only $6$ distinguishable outcomes with at least one black ball if order is not taken into account, namely, $BWW$, $BWR$, $BRR$, $BBW$, $BBR$, and $BBB$, and $19$ if order is taken into account. As Michal Adamaszek said, your answer is inconsistent, since you treat the black balls as indistinguishable but the white and red balls as distinguishable. |
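Brute-force enumeration confirms all three counts. Since only the number of black balls (three) is pinned down by $\binom63$, the split of the other six into white and red below is an assumption (three of each):

```python
from itertools import combinations, permutations

balls = ['B'] * 3 + ['W'] * 3 + ['R'] * 3          # assumed composition
labeled = list(enumerate(balls))                    # distinguishable balls

subsets = [s for s in combinations(labeled, 3) if any(c == 'B' for _, c in s)]
print(len(subsets))                                 # 64 = C(9,3) - C(6,3)

unordered = {tuple(sorted(c for _, c in s)) for s in subsets}
print(len(unordered))                               # 6 colour patterns, order ignored

ordered = {seq for s in subsets for seq in permutations([c for _, c in s])}
print(len(ordered))                                 # 19 colour patterns, order counted
```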
Differential Equation using Laplace transformation. | With the Laplace transform, we have $$\mathcal{L}\{y'\}(s) =sY(s) - y(0)$$
And $$\mathcal{L}\{y''\}(s) =s^2Y(s)- sy(0) - y'(0)$$
This transforms the differential equation
$$y''−9y=0, \quad y(0)=1, \;y'(0)=0$$
Into
$$(s^2Y(s)- sy(0) - y'(0))−9Y(s)= 0$$
$$\Leftrightarrow (s^2Y(s)- s\cdot 1 - 0)−9Y(s)= 0$$
$$\Leftrightarrow (s^2Y(s)- s)−9Y(s)= 0$$
$$\Leftrightarrow Y(s)(s^2-9)-s= 0$$
$$\Leftrightarrow Y(s)(s^2-9)= s$$
$$\Leftrightarrow Y(s) = \frac{s}{s^2-9}$$
The right term can be identified as the Laplace transform of the $\cosh(at)$ function (http://tutorial.math.lamar.edu/pdf/Laplace_Table.pdf), since
$$\mathcal{L}\{\cosh(at)\}(s) = \frac{s}{s^2-a^2}$$
And hence here $a=3$, and the inverse transform of $Y(s)$ is
$$\mathcal{L}^{-1}\{Y(s)\}(t) = \cosh(3t)$$
And the differential equation is solved. |
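sympy reproduces both the ODE solution and the inverse transform (my own sketch):

```python
from sympy import symbols, Function, Eq, dsolve, inverse_laplace_transform

t, s = symbols('t s', positive=True)
y = Function('y')

ode = Eq(y(t).diff(t, 2) - 9 * y(t), 0)
print(dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0}))
# y(t) = exp(-3t)/2 + exp(3t)/2, i.e. cosh(3t)

print(inverse_laplace_transform(s / (s**2 - 9), s, t))
# cosh(3t) (possibly times Heaviside(t), which is 1 for t > 0)
```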
Can I derive the sum of squares formula without induction and through the formula for series? | hint
Your approach
$$1^2=1$$
$$2^2=1+3$$
$$3^2=1+3+5$$
$$4^2=1+3+5+7$$
$$n^2=1+3+5+...+(2n-1)$$
thus
$$1^2+2^2+...+n^2=n+3(n-1)+5(n-2)+...+2(2n-3)+(2n-1)$$
which complicates things.
Another approach:
$$(n+1)^3=n^3+3n^2+3n+1$$
$$n^3=(n-1)^3+3(n-1)^2+3(n-1)+1$$
$$(n-1)^3=(n-2)^3+3(n-2)^2+3(n-2)+1$$
...
$$2^3=1^3+3+3+1$$
$$1^3=1$$
by summing,
$$(n+1)^3=3(1^2+2^2+...+n^2)+3(1+2+...+n)+(n+1)$$
the result is
$$\frac{(n+1)((n+1)^2-1-\frac 32 n)}{3}$$ |
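A quick exact check of the closed form (my own sketch):

```python
from fractions import Fraction

for n in range(1, 10):
    lhs = sum(k * k for k in range(1, n + 1))
    rhs = Fraction(n + 1) * ((n + 1)**2 - 1 - Fraction(3, 2) * n) / 3
    assert lhs == rhs, n
print("matches the usual n(n+1)(2n+1)/6 for n = 1..9")
```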
Compute the integral $\int_{0}^{2\pi}|\cos^n t|\ dt $ for $n \in \mathbb{Z}$ | HINT:
$$I=\int_{0}^{2\pi} |\cos^n t|\ dt$$
$$=\int_0^\frac\pi2 |\cos^n t| dt+\int_\frac\pi2^\pi |\cos^n t| dt+\int_\pi^\frac{3\pi}2 |\cos^n t| dt+\int_\frac{3\pi}2^{2\pi} |\cos^n t| dt $$
Now $\displaystyle |x|=\begin{cases} x &\mbox{if } x\ge0 \\-x & \mbox{if } x<0 \end{cases} $
For even $n$, the integrand in each integral is just $\cos^nt$
For odd $n$ we know, $\cos t\ge0\iff 0\le t\le\frac\pi2$ or $\frac{3\pi}2\le t\le2\pi$
Set $u=t-\frac\pi2$ in the second integral, $v=t-\pi$ in the third and $w=t-\frac{3\pi}2$ in the fourth integral
Now using the reduction formula $\left(\displaystyle I_n=\int_0^{\frac\pi2}\cos^nx\,dx=\frac{n-1}nI_{n-2}\right)$,
can you derive the recursive formula for $I_n$?
You can use $$\int_0^{\frac\pi2}\cos^nxdx=\int_0^{\frac\pi2}\sin^nxdx $$ which can be derived applying $\displaystyle\int_a^bf(x)dx=\int_a^bf(a+b-x)dx,$ |
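For $n\ge 0$ the whole approach can be sanity-checked numerically: by the symmetry above, $\int_0^{2\pi}|\cos^n t|\,dt = 4I_n$ with $I_n$ from the reduction formula (my own sketch):

```python
import math

def I(n):                          # I_n = integral of cos^n x over [0, pi/2]
    if n == 0: return math.pi / 2
    if n == 1: return 1.0
    return (n - 1) / n * I(n - 2)  # the reduction formula

def riemann(n, steps=200_000):     # crude numerical integral of |cos t|^n over [0, 2pi]
    h = 2 * math.pi / steps
    return h * sum(abs(math.cos(k * h))**n for k in range(steps))

for n in range(6):
    print(n, 4 * I(n), riemann(n))   # the two columns agree
```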
Good sources on studying fractals (the mathematical, and not just the pretty pictures version)? | I recommend the following book by Kenneth Falconer (not mentioned in the other answer): "Techniques in Fractal Geometry".
http://www.amazon.com/Techniques-Fractal-Geometry-Kenneth-Falconer/dp/0471957240/ref=sr_1_4?ie=UTF8&qid=1374247568&sr=8-4&keywords=falconer++fractal |
how to prove uniqueness of the solution for a DE using Lipschitz condition | Disclaimer: I have never done a problem like this and below is my take.
Note that a differentiable function is Lipschitz if and only if it has bounded derivative.
You then need to verify that the RHS of $y''=(1+(y')^2)^{3/2}=g(y')$ is Lipschitz. A change of variables, maybe taking $f=y'$ should convince you that modulo needing more initial conditions, this set up is exactly what Picard's theorem allows you to tackle.
So differentiating, we have
$$
g'(y')=\frac{3}{2}(1+(y')^2)^{1/2}\cdot 2y' = 3y'(1+(y')^2)^{1/2}\leq 3(1+(1+\delta))(1+\delta)=3(2+\delta)(1+\delta)
$$
and is thus locally Lipschitz around $y'(0)=1$, implying that a solution to the IVP is unique. Verifying that what you have is in fact a solution involves differentiating a few times and plugging stuff in. |
Fourier series of $f$ in the interval $[-\pi, \pi]$ and evaluation of a series | $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}$
\begin{align}
\sum_{n = 1}^{\infty}{\pars{-1}^{n} \over 2n - 1} & =
\ic\sum_{n = 1}^{\infty}{\ic^{2n - 1} \over 2n - 1} =
\ic\sum_{n = 1}^{\infty}{\ic^{n} \over n} -
\ic\sum_{n = 1}^{\infty}{\ic^{2n} \over 2n} =
\ic\sum_{n = 1}^{\infty}{\ic^{n} \over n} -
{1 \over 2}\,\ic\sum_{n = 1}^{\infty}{\pars{-1}^{n} \over n}
\\[5mm] & =
-\ic\ln\pars{1 - \ic} - {1 \over 2}\,\ic\braces{-\ln\pars{1 - \bracks{-1}}} =
-\ic\bracks{{1 \over 2}\,\ln\pars{2} - {\pi \over 4}\,\ic} +
{1 \over 2}\,\ic\ln\pars{2}
\\[5mm] & = \bbx{-\,{\pi \over 4}}
\end{align} |
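A numeric check of the boxed value (my own sketch):

```python
import math

partial = sum((-1)**n / (2 * n - 1) for n in range(1, 200_001))
print(partial, -math.pi / 4)   # the alternating series creeps toward -pi/4
```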
A change of variable to express as Lower incomplete Gamma function | The mistake occurred in the application of the $t'$-substitution. Since $t' = t^p$, we have $t = (t')^{1/p}$ and thus $dt = \frac{1}{p}(t')^{\frac{1}{p} - 1}\, dt'$. So,
\begin{equation}
\int_x^\infty e^{-t^p}\, dt = \frac{1}{p}\int_{x^p}^\infty e^{-t'} (t')^{\frac{1}{p} - 1}\, dt' = \frac{1}{p}\Gamma\Bigl(\frac{1}{p}, x^p\Bigr).
\end{equation} |
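The corrected identity is easy to confirm with scipy (a sketch of mine; example values $p=3$, $x=0.7$):

```python
import math
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

p, x = 3.0, 0.7
lhs, _ = quad(lambda t: math.exp(-t**p), x, math.inf)
# gammaincc is the *regularized* upper incomplete gamma, so multiply by gamma:
rhs = gammaincc(1 / p, x**p) * gamma(1 / p) / p
print(lhs, rhs)   # the two values agree
```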
Integral of a function with two parts (piecewise defined) | Since your function is $-\sin x$ for $x \le 0$ and $2x$ for $x > 0$, you must split your integral.
Your function is defined as two separate expressions on specific domains, so we satisfy the conditions by integrating each piece within the right bounds.
As we can see:
$$\int_{-\pi}^0 -\sin xdx + \int_{0}^2 2xdx$$
Now we integrate,
$$\int_{-\pi}^0 -\sin xdx =[\cos x]_{-\pi}^0 = 2$$
And
$$\int_{0}^2 2xdx = [x^2]_{0}^2 = 4$$
So our final answer is $$2 + 4 = 6$$ |
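A one-line check with scipy (my own sketch):

```python
import math
from scipy.integrate import quad

left, _ = quad(lambda x: -math.sin(x), -math.pi, 0)
right, _ = quad(lambda x: 2 * x, 0, 2)
print(left, right, left + right)   # 2.0 4.0 6.0
```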
Whats the odds of getting my letters? | HINT:
Here is a demonstration of a simpler configuration: we begin with the pattern ABCD on the top rows and EFGH on the bottom rows. In this case the approach is straightforward, but tracing takes some patience. Hope this provides some insight.
Let P(A) and P(B) represent the probabilities of each one of two players being able to take A, B, C, D, E in that order. The probability for both to be successful is P(A)$\cdot $P(B), where P(A)=P(B).
In this picture, F, G, H can be treated the same, so we can use F, F, F to simplify the picture. Every time you draw from the top row, you only have a $\frac{1}{4}$ chance to be correct. Pay attention to the separate move-ups of E and F as illustrated.
For example, for the 1st step, to take A out from the first row the probability is $\frac{1}{4}$. When you move a letter up from the second row to fill A's position, the probability that it is E is $\frac{1}{4}$, while for F it is $\frac{3}{4}$. With some analysis, one concludes there are three paths leading to a successful game. For each path, trace the success probabilities for every step and multiply them; this gives the success rate for that path.
Adding everything together:
$P(A)=P(B)=(\frac{1}{4})^6+\frac{1}{4}\frac{3}{4}\frac{1}{4}\frac{1}{3}\frac{1}{4}\frac{1}{2}\frac{1}{4}\frac{1}{1}\frac{1}{4}+\frac{1}{4}\frac{3}{4}\frac{1}{4}\frac{2}{3}\frac{1}{4}\frac{1}{2}\frac{1}{4}\frac{1}{1}\frac{1}{4}=(\frac{1}{4})^6(1+\frac{1}{2}+1)$
$P(A)P(B)=(\frac{1}{4})^{12}(\frac{5}{2})^2=25\cdot (\frac{1}{4})^{13}\approx 3.7252902985×10^{−7}$
What a chance! |
How to prove the expression is not a square in the following question | I think that the numbers $a$ and $b$ must be distinct.
My idea is to consider the map $f$ from $\{2,5,13,d\} \times \{2,5,13,d\}$ to $\mathbb{Z}/4\mathbb{Z}$ which sends $(a,b)$ to the class of $ab-1$, and to prove that for every integer $d$, the fibre $f^{-1}(\{\overline{2}\})$ or the fibre $f^{-1}(\{\overline{3}\})$ is non-empty (a residue of $\overline{2}$ or $\overline{3}$ mod $4$ cannot be a square). |
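To see how far the mod-$4$ idea reaches, one can tabulate $ab-1 \pmod 4$ over the pairs (my own sketch; recall squares are $\equiv 0$ or $1 \pmod 4$):

```python
for d_mod in range(4):                       # only d mod 4 matters
    pairs = [(2, 5), (2, 13), (5, 13), (2, d_mod), (5, d_mod), (13, d_mod)]
    witnesses = [(a, b) for a, b in pairs if (a * b - 1) % 4 in (2, 3)]
    print(d_mod, witnesses)
# d = 0, 2, 3 (mod 4) always yield a product ab - 1 congruent to 2 or 3, hence a
# non-square; for d = 1 (mod 4) every product is 0 or 1, so mod 4 alone is not
# enough there and a finer modulus would be needed.
```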
Quantitative version of equality in Jensen’s inequality | Let $m = EX$ and $Y = X - m$, so that $EY = 0$. One has
$E(X^2) = m^2 + E(Y^2)$, so that $(1-\epsilon)(m^2 + E(Y^2))\le m^2$ which implies $(1-\epsilon)E(Y^2)\le \epsilon m^2$. Finally, assuming $\epsilon\in (0,1)$,
$$(E|Y|)^2 \le E(Y^2)\le \frac{\epsilon}{1-\epsilon}m^2$$
It follows that
$$E|X-EX| \le \sqrt{\frac{\epsilon}{1-\epsilon}} |EX|$$
Edit If you want $E(X^2)$, you can write
$$(E|Y|)^2 \le E(Y^2) = E(X^2)- m^2\le \epsilon E(X^2)$$
It follows that
$$E|X-EX| \le \sqrt{\epsilon}\sqrt{E(X^2)} $$ |
How to prove this function is coercive? | You are starting out the right way and you are on a good path, but your desired estimate cannot be correct. To see this, let $c = A^{-1}b$ and set $x = -c$. Then the left hand side of your estimate is $0$.
Modify your estimate along the following lines:
$$
\|Ax + b\| = \|A(x + c)\| \ge \lambda_{min} \|x + c\|
$$
where as before $c = A^{-1}b$. The right hand side can be estimated from below using the triangle inequality, which should give you everything you need. |
what does drawing samples from a density mean? | I am sure you are referring to random sampling.
The random variables $X_1,X_2,...X_n$ are called a random sample of size $n$ from the population with pdf $f(x)$ if $X_1,X_2,...X_n$ are mutually independent and the marginal pdf of each of them is exactly $f(x)$. |
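In code, "drawing $n$ samples from a density $f$" just means generating $n$ i.i.d. realizations whose common pdf is $f$, e.g. with numpy (my own sketch, using the standard normal as the example density):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(size=5)   # X_1, ..., X_5: i.i.d. draws from the standard normal pdf
print(sample)
```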
Let $K_n$ is a nonempty compact subset of $X$ and that $K_{n+1} \subset K_n$ for each $n \in \Bbb N$. Show that $f(K) = \bigcap _n f(K_n)$. | As Jonathan Y. says, so long as $\{y\}$ is closed for each $y$ in your space - in particular, if your space is a metric space - then this is good.
On the other hand, without that assumption, the statement need not be true! Consider the topology on $\mathbb{N}$ where $X$ is open iff $X$ is a final segment of $\mathbb{N}$, that is, $a\in X, b>a\implies b\in X$. Then:
The whole space, and each of its subsets, are compact.
The constant function $x\mapsto 1$ is continuous (the preimage of any set is either empty or everything).
Now take $K_n=\{m: m>n\}$. Then $\bigcap_n K_n=\varnothing$, so $f\left(\bigcap_n K_n\right)=\varnothing$, while $\bigcap_n f(K_n)=\{1\}$. |
Maximal ideal of $\Bbb Z$ that is not maximal in $\Bbb Z[X]$ | For a maximal ideal $(p)$ of ${\mathbb Z}$, the ideal $(p)$ of ${\mathbb Z}[X]$ is never maximal, since ${\mathbb Z}[X]/(p) \cong {\mathbb F}_p[X]$ which is not a field.
For a maximal ideal $(p)$ of ${\mathbb Z}$, the ideal $(p, X)$ is always a maximal ideal of ${\mathbb Z}[X]$ for very much the same reason: ${\mathbb Z}[X]/(p,X) \cong {\mathbb F}_p$, which is a field.
In general, the maximal ideals of ${\mathbb Z}[X]$ are of the form $(p,f(X))$, where $p \in {\mathbb Z}$ is prime and $f(X) \in {\mathbb Z}[X]$ is irreducible modulo $p$. |
Extension of previous problem, involving $\ell^p$ norm circles | The superellipses would seem to fit your bill, as long as $p < \infty$. |
Functions that preserve measure | The proof is straightforward:
First calculate the inverse image of any measurable subset $A\subset [0,1]$. Fairly easy to see that $f^{-1}(A) = A/2 \cup (A/2 + 1/2)$, where the union is disjoint.
Then calculating $$\lambda(f^{-1}(A)) = \lambda(A/2 \cup (A/2 + 1/2)) = \lambda(A)/2 + \lambda(A)/2 = \lambda(A)$$
If you are worried about proving the identity $\lambda(A/2) = \lambda(A)/2$, you can show it for intervals and then use measure-theoretic induction to extend it to arbitrary measurable sets. A similar argument handles the translation invariance also used in the calculation.
Hope this helps! |
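A quick empirical illustration, assuming (as the computation of $f^{-1}(A)$ above indicates) that $f$ is the doubling map $f(x)=2x \bmod 1$: measure preservation means the push-forward of Lebesgue measure under $f$ is again Lebesgue measure, so histograms of $x$ and $f(x)$ should match.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(size=100_000)          # Lebesgue measure on [0, 1]
fx = (2 * x) % 1                       # the doubling map

print(np.histogram(x,  bins=10, range=(0, 1))[0])
print(np.histogram(fx, bins=10, range=(0, 1))[0])   # roughly equal bin counts
```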
Question on proof regarding unique homomorphism in finitely generated vector spaces | This is not correct, because you assume that the set $\{v_1,\ldots,v_n\}$ has as many elements as $\dim V$. That's an extra assumption. However, there's nothing wrong in applying your hypothesis in a single case.
So, suppose that $\{v_1,\ldots,v_n\}$ generates $V$ and that for every $K$-vector space $W$ and every family of vectors $w_1,\ldots,w_n\in W$ exactly one homomorphism $f:V \longrightarrow W$ with $f(v_i)=w_i$ (for each $i\in\{1,2,\ldots,n\}$) exists. Suppose that $\{v_1,\ldots,v_n\}$ is not a basis of $V$. Then some $v_j$ is a linear combination of all the others. We can assume without loss of generality that $j=n$. So, if $f$ is a linear map from $V$ into $K$ such that $f(v_1)=f(v_2)=\cdots=f(v_{n-1})=0$, then $f(v_n)=0$ too, since $f$ is linear and $v_n$ is a linear combination of $v_1,v_2,\ldots,v_{n-1}$. So, if you take $w_1,\ldots,w_n\in K$ with$$w_j=\begin{cases}0&\text{ if }j<n\\1&\text{ otherwise,}\end{cases}$$then there is no linear map $f\colon V\longrightarrow K$ such that $(\forall j\in\{1,2,\ldots,n\}):f(v_j)=w_j$, which goes against our assumption.
Now, I will prove that $\{v_1,\ldots,v_n\}$ generates $V$. Let $W$ be the subspace of $V$ generated by them and let $W^\star$ be a subspace of $V$ such that $V=W\oplus W^\star$. Then, for each linear map $F\colon W^\star\longrightarrow W$, there is one and only one linear map $f\colon V\longrightarrow W$ such that $f|_W=\operatorname{Id}$ and that $f|_{W^\star}=F$. Note that then $f$ has the property$$(\forall k\in\{1,2,\ldots,n\}):f(v_k)=v_k.$$But we are assuming that there's only one such map. That can only happen when $W^\star=\{0\}$, but\begin{align}W^\star=\{0\}&\iff W=V\\&\iff\langle v_1,\ldots,v_n\rangle=V.\end{align} |
Show that T is a vector space over C. | Try showing that the axioms of vector spaces hold in your example:
associativity of vector addition
commutativity of vector addition
distributivity of scalar multiplication over vector addition
distributivity of scalar multiplication over field addition
existence of vector additive identity
existence of scalar multiplicative identity
existence of vector additive inverse
compatibility of scalar multiplication with field multiplication i.e. for scalars $a,b$ and vector $\mathbf {v}$, $a(b \mathbf {v}) = (ab)\mathbf {v}$ |
When is a group isomorphic to a non-trivial quotient group of itself? | Appropriate answers are already given in the comments, so let me just summarize them and add only a little bit.
1) Clearly such a group must be infinite (for a finite group this cannot be true by a simple cardinality argument).
2) $\mathbb{C}^*$ is an example. (comment of @Watson)
3) $\mathbb{Z}^\mathbb{N}$ is an example. Or more generally $G^\mathbb{N}$ for any non-trivial group $G$.
4) $BS(2,3)$ is a finitely generated example (a Baumslag–Solitar group).
5) Such groups are called non-Hopfian. (comment of @H.Durham)
6) As far as I know they are called Hopfian/non-Hopfian because the famous mathematician Hopf asked whether such finitely generated groups exist.
7) There is a famous theorem of Mal'cev which states that a finitely generated residually finite group is Hopfian. |
summing roots of unity elementary question (complex numbers) | It's different from $0$ because $0$ is not a root of unity; but in both respects you use the fact that $w \not= 1$, which comes directly from the definition of $w = \exp(\frac{2\pi i}{n})$. A number $\exp(2\pi i y)$ is equal to $1$ if and only if $y$ is an integer; this is easy to see from the trigonometric form of such a number (the one with $\sin$ and $\cos$). |
Intersection of events in Coupon Collector Problem $ P(A_i \cap. A_j) $ | While I understand the idea of defining $A_i = \{\text{coupon i did not come up in the n tries}\}$, I don't understand why $P(A_i \cap A_j) = (1 - (p_i+p_j))^n$. Shouldn't it be
$P(A_i \cap A_j)= ((1-p_i)(1-p_j))^n$. This because each extraction is independent from the other, as is each coupon from the other.
First of all,
$$P(A_i \cap A_j)= ((1-p_i)(1-p_j))^n$$
is wrong, because on a given turn, getting Coupon-i and getting Coupon-j are not independent events. That is, on a given turn, when you don't get Coupon-i, it is slightly more likely than normal that you do get Coupon-j.
Then,
$$(1 - (p_i+p_j))$$
represents the chance of not getting either Coupon-i or Coupon-j on 1 specific turn. Therefore,
$$(1 - (p_i+p_j))^n$$
represents the chance of not getting either Coupon-i or Coupon-j on $n$ consecutive turns. |
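A short simulation makes the distinction concrete (my own sketch; the example probabilities are assumptions):

```python
import random

p = [0.2, 0.3, 0.5]                      # coupon probabilities (assumed)
i, j, n, trials = 0, 1, 5, 200_000

misses = sum(
    all(random.choices(range(len(p)), weights=p)[0] not in (i, j) for _ in range(n))
    for _ in range(trials))
print(misses / trials)                   # about 0.031
print((1 - (p[i] + p[j]))**n)            # 0.03125: matches the simulation
print(((1 - p[i]) * (1 - p[j]))**n)      # about 0.055: does not
```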
intersection of three planes different cases, algebraic and geometric explanations | Some of the ideas:
Generically (which to a mathematician means something like "most of the time"), any two planes will intersect at a line. In fact two planes either intersect at a line or are parallel, and we know that two rows have to be proportional (in the non-augmented matrix) for the two planes described by those rows to be parallel.
The rank $r$ is the number of planes that are linearly independent (in a sense). You have to try imagine adding planes together to get another plane in a way very similar to how you add vectors.
Then only if $r=3$ will you have a unique solution. This is the case where all three planes are linearly independent (again, in a sense).
This also should help you understand that if $r=1$ then all $3$ planes will be parallel.
If $r'>r$ then there will be no solution. I can't think of a geometric explanation (at least one in terms of planes) for this, but algebraically its because the target vector $b$ is outside the image of the transformation $x\mapsto Ax$.
Also, algebraically, $r'$ can't be more than $1$ higher than $r$ because you're only adding one more column. That column might be linearly independent of the other columns ($r'=r+1$) or linearly dependent on them ($r'=r$) but those are the only choices. |
How to convert this equation to telescoping series | Essentially,
\begin{align*}
\sum_{n=1}^{\infty} \frac{(\ln 3)^{n}}{n!}\left(\sum_{0}^{n} k^{2}{{n}\choose{k}}\right) &= \sum_{n=1}^{\infty} \frac{(\ln 3)^{n}}{n!}\left[ n(n-1)2^{n-2}+n2^{n-1}\right] \\[1em]
&= \sum_{\color{red}{n=2}}^{\infty}\frac{(\ln 3)^{n}\;2^{n-2}}{(n-2)!} + \sum_{n=1}^{\infty}\frac{(\ln 3)^{n}\;2^{n-1}}{(n-1)!} \\[1em]
&= (\ln 3)^{2}\sum_{n=2}^{\infty}\frac{(2\ln 3)^{n-2}}{(n-2)!} + (\ln 3)\sum_{n=1}^{\infty}\frac{(2\ln 3)^{n-1}}{(n-1)!} \\[1em]
&= (\ln 3)^{2}\sum_{n=0}^{\infty}\frac{(2\ln 3)^{n}}{n!} + (\ln 3)\sum_{n=0}^{\infty}\frac{(2\ln 3)^{n}}{n!} \\[1em]
&= \left( (\ln 3)^{2} + \ln 3 \right)\left( \sum_{n=0}^{\infty}\frac{(2\ln 3)^{n}}{n!} \right) \\[1em]
&= \left( (\ln 3)^{2} + \ln 3 \right)e^{2\ln 3}.
\end{align*}
If we use the fact that $2\ln 3 = \ln 9$ then our answers simplifies to $9\left( (\ln 3)^{2} + \ln 3 \right)$. |
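A numeric confirmation of the final value (my own sketch):

```python
import math

total = sum(
    math.log(3)**n / math.factorial(n) * sum(k**2 * math.comb(n, k) for k in range(n + 1))
    for n in range(1, 40))
print(total, 9 * (math.log(3)**2 + math.log(3)))   # both approximately 20.75
```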
Is a transitive asymmetric relation a partial order? | First, to terminology, partial order is usually interpreted as a transitive, anti-symmetric, reflexive relation, with strict partial order being used for a transitive, asymmetric, irreflexive relation. (So, ironically but in a common twist, a strict partial order is not a partial order. This is like how a manifold with boundary is not a manifold.)
A relation being transitive and asymmetric is equivalent to it being transitive and irreflexive, and so either can be used as a definition.
If $R$ is asymmetric, meaning $R(a,b)\implies \neg R(b,a)$ for all $a$ and $b$, then $R(a,a)\implies \neg R(a,a)$ for all $a$ and thus $R(a,a)$ must not hold for any $a$, hence $R$ is irreflexive. We can establish this without even transitivity.
Conversely, if $R$ is transitive and irreflexive, then we have $R(a,b)\land R(b,c) \implies R(a,c)$ and $\neg R(a,a)$ for all $a$, $b$, and $c$. Choosing $a = c$, we get $R(a,b)\land R(b,a) \implies R(a,a)$ for all $a$ and $b$, and thus it can't be the case that both $R(a,b)$ and $R(b,a)$ are true, and so $R(a,b)\implies \neg R(b,a)$, hence $R$ is asymmetric.
Your conclusion is correct and your reasoning is almost correct. The only issue is a relation being asymmetric is not equivalent to it not being symmetric. Asymmetry is a stronger condition which means that it's never the case that both $R(a,b)$ and $R(b,a)$ hold. Symmetry means it's always the case that $R(a,b)$ holds if $R(b,a)$ does. The negation of this is that sometimes it's the case that $R(a,b)$ doesn't hold when $R(b,a)$ does, but this allows that $R(a,b)$ and $R(b,a)$ do sometimes hold too. For example, $R = \{(1,2),(2,1),(1,3)\}$ is a relation that is not symmetric but also not asymmetric. |
$\lim_{(x,y)\to(0,0)} \frac{xy}{\sqrt{x^2+y^2}}$ | Related problems: I, II. Here is how you advance
$$ \Bigg| \frac{xy}{\sqrt{x^2+y^2}}-0 \Bigg |\leq \frac{|x||y|}{\sqrt{x^2+y^2}} \leq \frac{ \sqrt{x^2+y^2} \sqrt{x^2+y^2} }{\sqrt{x^2+y^2}}=\sqrt{x^2+y^2}< \epsilon =\delta .$$
Note:
$$ |x| \leq \sqrt{x^2+y^2},\quad |y| \leq \sqrt{x^2+y^2}. $$ |
Derivative of a function of trace | Let's define
$$g:\mathbb{R}^{m\times m}\rightarrow \mathbb{R}, \quad g(X) = f\left(\mathop{\textrm{Tr}}(X)\right) = f\left(\textstyle\sum_i X_{ii}\right).$$
Notice that this function is constant for all off-diagonal elements of $X$, so any partial derivative that involves an off-diagonal element must be zero. So even though I've actually defined $g$ on all $m\times m$ matrices and not just the diagonal matrices, the Hessian still involves just the diagonal.
Now consider a partial derivative with respect to a single diagonal element. A simple application of the chain rule gives us
$$\frac{\partial g(X)}{\partial X_{jj}} = f'\left(\sum_i X_{ii}\right) \cdot
\frac{\partial}{\partial X_{jj}} \sum_i X_{ii} = f'\left(\sum_i X_{ii}\right)$$
For the second derivative, then, we get:
$$\frac{\partial^2 g(X)}{\partial X_{kk}\partial X_{jj}} =
\frac{\partial}{\partial X_{kk}} f'\left(\sum_i X_{ii}\right) =
f''\left(\sum_i X_{ii}\right) \cdot
\frac{\partial}{\partial X_{kk}} \sum_i X_{ii} = f''\left(\sum_i X_{ii}\right).$$
The second partial derivative is the same for all combinations of the diagonal elements.
So far so good. But here's the problem: what does the Hessian of a matrix function look like? For a function defined on $\mathbb{R}^n$, we generally treat the Hessian as a symmetric matrix. But in this case, we can't do that. A generalizable notion is that the Hessian is a symmetric linear mapping from the input space back onto itself. If we do this right, it fits in nicely into a Taylor expansion:
$$g(X+tZ) \approx g(X) + t \langle \nabla g(X), Z \rangle + \tfrac{1}{2} t^2 \langle Z, \nabla^2 g(X) [Z] \rangle$$
In our case, then, we get this:
$$\nabla^2 g(X)[Z] = f''(\mathop{\textrm{Tr(X)}}) \cdot \mathop{\textrm{Tr}}(Z) \cdot I$$
where $Z$ is the search direction and $t$ is a scalar.
How do we confirm this? Well, let's look at partial derivatives. For a function $h(x)$ defined on $\mathbb{R}^n$, we have
$$\frac{\partial^2 h(x)}{\partial x_i \partial x_j} = \left(\nabla^2 h(x)\right)_{ij} = \langle e_i, \nabla^2 h(x) e_j \rangle$$
where $e_i$, $e_j$ are vectors with ones at positions $i$ and $j$,
respectively, and zeros everywhere else. For our matrix function, we have
$$\frac{\partial^2 g(X)}{\partial X_{ij} \partial X_{kl}} = \langle E_{ij},
\nabla^2 g(X) [E_{kl}] \rangle = \left\langle E_{ij}, f''(\mathop{\textrm{Tr(X)}}) \cdot \mathop{\textrm{Tr}}(E_{kl}) \cdot I \right\rangle$$
where $E_{ij}$, $E_{kl}$ are matrices with zeros everywhere except for
a one in positions $ij$ and $kl$, respectively. Simplifying,
$$\begin{aligned}
&\left\langle E_{ij}, f''(\mathop{\textrm{Tr(X)}}) \cdot \mathop{\textrm{Tr}}(E_{kl}) \cdot I \right\rangle = f''(\mathop{\textrm{Tr(X)}}) \cdot \mathop{\textrm{Tr}}(E_{kl}) \langle E_{ij}, I \rangle \\&\qquad = f''(\mathop{\textrm{Tr(X)}}) \cdot \mathop{\textrm{Tr}}(E_{kl}) \cdot \mathop{\textrm{Tr}}(E_{ij}) =
\begin{cases}
f''(\mathop{\textrm{Tr(X)}}) & i=j, k=l \\
0 & \text{otherwise}
\end{cases}
\end{aligned}$$
and that's what we were expecting: zero for any partial derivative involving an off-diagonal element, and the same for all others. |
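Here is a finite-difference sketch of my own, with $f=\exp$ so that $f'=f''=\exp$, confirming that the Hessian acts as $\nabla^2 g(X)[Z] = f''(\operatorname{Tr}X)\operatorname{Tr}(Z)\,I$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
X, Z = rng.normal(size=(m, m)), rng.normal(size=(m, m))

grad = lambda M: np.exp(np.trace(M)) * np.eye(m)    # grad g(M) = f'(Tr M) I, f = exp

h = 1e-6
hess_Z = (grad(X + h * Z) - grad(X - h * Z)) / (2 * h)        # directional derivative of grad g
predicted = np.exp(np.trace(X)) * np.trace(Z) * np.eye(m)     # f''(Tr X) Tr(Z) I
print(np.max(np.abs(hess_Z - predicted)))                     # tiny (finite-difference error)
```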
Rates of change for functions dependent on same variable | Your first displayed equation is wrong. It should read: $$\frac{dx}{dy} = \frac{\dot{x}}{\dot{y}}$$ Note that, in your example, $\dot x = 3t^2$ and $\dot y = 4t^3$, so indeed $$\frac{\dot x}{\dot y} = \frac{3}{4t} = \frac{3}{4} y^{-1/4} = \frac{dx}{dy}.$$ |
Determinant of a random matrix | If you exchange two rows of your matrix, the determinant will change the sign, while this operation preserves the probability measure. Together with the fact that the probability of $\det H_n =0$ is $0$ (the measure is $0$ as it is of codimension $1$) you get that the probability of $\det H_n\geq0$ is $1/2$. |
How did my textbook find the interval of convergence? | No. The $a$ value is $0$ and since you have $-1\lt x\lt 1$ you have $R=1$. |
What really is ''orthogonality''? | To expand a bit on Daniel Fischer’s comment, coming at this from a different direction might be fruitful. There are, as you’ve seen, many possible inner products. Each one determines a different notion of length and angle—and so orthogonality—via the formulas with which you’re familiar. There’s nothing inherently coordinate-dependent here. Indeed, it’s often possible to define inner products in a coordinate-free way. For example, for vector spaces of functions on the reals, $\int_0^1 f(t)g(t)\,dt$ and $\int_{-1}^1 f(t)g(t)\,dt$ are commonly-used inner products. The fact that there are many different inner products is quite useful. There is, for instance, a method of solving a large class of interesting problems that involves orthogonal projection relative to one of these “non-standard” inner products.
Now, when you try to express an inner product in terms of vector coordinates the resulting formula is clearly going to depend on the choice of basis. It turns out that for any inner product one can find a basis for which the formula looks just like the familiar dot product.
You might also want to ask yourself what makes the standard basis so “standard?” If your vector space consists of ordered tuples of reals, then there’s a natural choice of basis, but what about other vector spaces? Even in the Euclidean plane, there’s no particular choice of basis that stands out a priori. Indeed, one often chooses an origin and coordinate axes so that a problem takes on a particularly simple form. Once you’ve made that choice, then you can speak of a “standard” basis for that space. |
Find the value: $\int_{0}^{π/6}\ 4\sin^{2}xdx$ | We have the following identity: $\operatorname{cos}(2x) = 1 - 2\operatorname{sin}^2(x)$. Therefore, $4\operatorname{sin}^2(x) = 2(1 - \operatorname{cos}(2x))$.
$$\begin{aligned}\int_0^{\frac{\pi}{6}}4\operatorname{sin}^2(x) dx &= \int_0^{\frac{\pi}{6}}2(1 - \operatorname{cos}(2x)) dx\\ &=\int_0^{\frac{\pi}{6}}2 dx - \int_0^{\frac{\pi}{6}}2\operatorname{cos}(2x)dx\\ &= 2x|^{\frac{\pi}{6}}_0 - \frac{2\operatorname{sin}(2x)}{2}\vert^{\frac{\pi}{6}}_0\\&=2\cdot\left(\frac{\pi}{6} - 0 \right) - \left[ \operatorname{sin}\left(2\cdot \frac{\pi}{6}\right) - \operatorname{sin}\left(2\cdot 0\right)\right] \\ &=\frac{\pi}{3} - \operatorname{sin}\left( \frac{\pi}{3}\right)=\frac{\pi}{3}-\frac{\sqrt3}{2}\end{aligned}$$ |
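Numeric check (my own sketch):

```python
import math
from scipy.integrate import quad

val, _ = quad(lambda x: 4 * math.sin(x)**2, 0, math.pi / 6)
print(val, math.pi / 3 - math.sqrt(3) / 2)   # both approximately 0.1812
```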
Normal subgroup $N$, subgroup $U$, then $UN/N = U/N$. | As far as I have always understood this, it is completely possible and common for $N$ to not be contained in $U$ at all. So, it would not make sense to mod out by $N$. But, taking $UN$ first, will be the smallest subgroup that contains both $U$ and $N$, which then gives us what we need to make sense of the quotient.
Moreover, the second Isomorphism theorem gives us that $UN/N \cong U/(U\cap N)$. |
Error in evaluating this limit? | There's not so much a typo as there is a seeming abuse of notation. It seems that White is using $$\lim_{dt->0}\left(\arctan\frac{\frac{\partial v}{\partial x}dx\,dt}{dx+\frac{\partial u}{\partial x}dx\,dt}\right) = \frac{\partial v}{\partial x}dt$$ to mean that $$\arctan\left(\frac{\frac{\partial v}{\partial x}dx\,dt}{dx+\frac{\partial u}{\partial x}dx\,dt}\right)\sim \frac{\partial v}{\partial x}dt\quad\text{ as } dt\to 0.$$ This is relatively straightforward to show. First, rewrite
\begin{align}
\arctan\left(\frac{\frac{\partial v}{\partial x}dx\,dt}{dx+\frac{\partial u}{\partial x}dx\,dt}\right) = \arctan\left(\frac{\frac{\partial v}{\partial x}dt}{1+\frac{\partial u}{\partial x}dt}\right).
\end{align}
Then, using $$\tan(\theta) \sim \theta\quad\text{ as }\theta \to 0 \implies \arctan(\theta) \sim \theta\quad\text{ as }\theta \to 0,$$ as well as
$$\frac{ax}{1+bx}\sim ax\quad\text{ as }x\to 0,
$$
we have that
\begin{alignat}{2}
\arctan\left(\frac{\frac{\partial v}{\partial x}dt}{1+\frac{\partial u}{\partial x}dt}\right) &\sim \frac{\frac{\partial v}{\partial x}dt}{1+\frac{\partial u}{\partial x}dt} &&\quad\text{ as }dt\to 0\\
&\sim \frac{\partial v}{\partial x}dt&&\quad\text{ as } dt\to 0.
\end{alignat} |
Does there Exist a Set Such That $\mathcal{P} (A) \subseteq A $? | You can follow the lines of Russell's paradox, as your instructor suggested.
Assume that $\mathscr{P}(A)\subseteq A.$
Let $W=\{x\in A \mid x\not\in x\}.$ Then $W$ is a subset of $A,$ so, by our assumption, $W$ is a member of $A.$
Now proceed just as in Russell's paradox to get a contradiction:
We know that for all $x\in A,$ $x\in W$ iff $\dots.$ |
$\frac{b^2-a^2}{c+a} + \frac{c^2-b^2}{a+b} + \frac{a^2-c^2}{b+c} \ge 0$ Proof | Since the expression is cyclic, we can, WLOG, reduce it into two cases:
Suppose $a\ge b\ge c > 0$. We have
$$\frac{a^2-c^2}{b+c} = \frac{a^2-b^2}{c+b} + \frac{b^2-c^2}{c+b} \ge \frac{a^2-b^2}{c+a} + \frac{b^2-c^2}{a+b}$$
Now suppose $a \ge c \ge b > 0$. We then have
$$\frac{c^2-b^2}{a+b} + \frac{a^2-c^2}{b+c} \ge \frac{c^2-b^2}{a+c} + \frac{a^2-c^2}{a+c} =\frac{a^2-b^2}{c+a}$$ |
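A random stress test of the inequality (my own sketch):

```python
import random

for _ in range(100_000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    lhs = (b*b - a*a)/(c + a) + (c*c - b*b)/(a + b) + (a*a - c*c)/(b + c)
    assert lhs >= -1e-9, (a, b, c)
print("no counterexample found")
```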
The number of digits in the bijective base-k numeral for n is ⌈logk((n+1)(k−1))⌉(k≥2, n≥0). Why? | The relevant passage in full:
However, taking advantage of the fact that $n$ is an integer, we can also observe that as $n$ runs over the set $$\left\{\frac{k^\ell-1}{k-1},\ldots,\frac{k^{\ell+1}-1}{k-1}-1\right\}$$ of integers of length $\ell$, $(k-1)n$ runs over the set of multiples of $k-1$ between $k^\ell-1$ and $k^{\ell+1}-k$ inclusive, and therefore $(k-1)(n+1)$ runs over the set of multiples of $k-1$ between $k^\ell+k-2$ and $k^{\ell+1}-1$ inclusive. These are precisely the multiples $(k-1)m$ of $k-1$ that satisfy $k^\ell\le(k-1)m<k^{\ell+1}$ ...
The claim in question is that the set of multiples of $k-1$ between $k^\ell+k-2$ and $k^{\ell+1}-1$ inclusive is equal to the set of multiples $(k-1)m$ of $k-1$ such that
$$k^\ell\le(k-1)m<k^{\ell+1}\;.$$
We know that $k^\ell-1$ is a multiple of $k-1$; say $k^\ell-1=(k-1)m_0$. Similarly,
$$k^{\ell+1}-1=(k-1)m_1$$
for some integer $m_1$. Then
$$k^\ell+k-2=(k-1)(m_0+1)\;,$$
so the set of multiples of $k-1$ between $k^\ell+k-2$ and $k^{\ell+1}-1$ inclusive is the set of multiples $(k-1)m$ of $k-1$ such that $m_0+1\le m\le m_1$: it’s the set
$$M=\{(k-1)m:m_0+1\le m\le m_1\}\;.$$
Now $(k-1)m_0=k^\ell-1<k^\ell$, but
$$(k-1)(m_0+1)=(k^\ell-1)+(k-1)=k^\ell+k-2\ge k^\ell\;,$$
since $k\ge 2$, so $m_0+1$ is the smallest $m$ such that $k^\ell\le(k-1)m$. This means that $m_0+1\le m$ if and only if $k^\ell\le(k-1)m$. And $(k-1)m_1=k^{\ell+1}-1$, so $m_1$ is clearly the largest $m$ such that $(k-1)m<k^{\ell+1}$, meaning that $m\le m_1$ if and only if $(k-1)m<k^{\ell+1}$.
Thus, $m_0+1\le m\le m_1$ if and only if $k^\ell\le(k-1)m<k^{\ell+1}$, and therefore $M$ is just the set of multiples $(k-1)m$ of $k-1$ such that $k^\ell\le(k-1)m<k^{\ell+1}$. |
Proof of mutual information property that $I((1-\beta)Z + \beta X; X) \geq I(Z; X)$ | I don't think that this is true. The conditional mutual information
$$I((1-\beta)Z+\beta X;X|\beta)$$
certainly exceeds $I(Z;X)$, but the same may not hold for mutual information. Consider the following example where $X$ and $\beta$ are independent Bernoulli-$1/2$ random variables taking values in $\{0,1\}$. Let $Z$ take values in $\{0,1\}$ and suppose that $Z=0$ if and only if $X=1$. We hence have $I(Z;X)=1$. However, $(1-\beta)Z+\beta X$ is independent from $X$, hence $I((1-\beta)Z+\beta X;X)=0$. |
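The counterexample is small enough to check by exhaustive computation (my own sketch):

```python
import itertools, math
from collections import Counter

joint = Counter()
for xx, bb in itertools.product([0, 1], repeat=2):   # X and beta: uniform, independent
    z = 1 - xx                                       # Z = 0 iff X = 1
    w = (1 - bb) * z + bb * xx                       # W = (1 - beta) Z + beta X
    joint[(w, xx)] += 0.25

def mutual_information(joint):
    pw, px = Counter(), Counter()
    for (w, xx), pr in joint.items():
        pw[w] += pr; px[xx] += pr
    return sum(pr * math.log2(pr / (pw[w] * px[xx])) for (w, xx), pr in joint.items())

print(mutual_information(joint))   # 0.0, even though I(Z;X) = 1 bit
```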
What does the subscript $SS$ mean in this context? | When you have a linear transformation $T$ from a vector space $V$ to a vector space $W$, you can represent that transformation by a matrix, but first you have to pick a basis for $V$ and a basis for $W$. The same is true if $V$ and $W$ are the same vector space – you can use the same basis for both $V$-as-domain and $V$-as-codomain, but you don't have to.
In the example you give, the author is choosing to use the same basis – the standard basis – for the domain and the codomain. |
What should $\aleph_2$ mean? | $\aleph_0$ is the cardinality of the set of all the finite ordinals, which coincide with our usual interpretation of the natural numbers. $\aleph_1$ is the cardinality of the set of all the countable ordinals.
$\aleph_2$ is the cardinality of the set of all the ordinals whose size is $\leq\aleph_1$ (and in fact the ordinals whose size equals $\aleph_1$ would already suffice).
If we assume the generalized continuum hypothesis, then $\aleph_2=|\mathcal P(\Bbb R)|$. |
Placing points in $\Bbb R^n$ | The matrix has rank $\le r$ if and only if all the points lie in an $r$-dimensional subspace of $\mathbb R^n$. |
Inertia field of a compositum. | Judging by the way the question is phrased (and this is certainly the case in the question in Marcus' textbook to which the OP refers, where $K/\mathbb{Q}$ is abelian) we may assume that $K/\mathbb{Q}$ is Galois. Also assume that $q\geq3$ to avoid trivialities.
Let $(KL)^{I_U}$ denote the inertia field of $U$, where in turn $I_U$ is the inertia group as defined in the question.
Now $q$ is totally ramified in $\mathbb{Q}(\zeta_q)/\mathbb{Q}$, hence in $L/\mathbb{Q}$, and so in particular $I_U$ is non-trivial and $(KL)^{I_U}$ cannot contain $L$ (see for example Ramification in a tower of extensions). Also $q$ is ramified in $K$ by hypothesis, and so once again the action of $I_U$ upon the sub-extension $K$ must be non-trivial.
So $(KL)^{I_U}$ contains neither $K$ nor $L$. However it can definitely be a subfield of one or both of them.
Here is an illustrative (though far from universally representative!) example:
Let $p=5$, $q=11$ and let $C_{p^2q}$ be the cyclotomic field obtained from the 275-th roots of unity. Notice $p|(q-1)$ which is essential here.
Consider the fixed field $K$ of the Sylow-2-subgroup of the Galois group Gal$(C_{p^2q}\mid\mathbb{Q})$. This has degree $p^2=25$: it has Galois group equal to the product of two cyclic groups of order $5$ and is ramified of degree $e=5$ over $q=11$. For completeness we mention it is ramified of degree $5$ over $p=5$ as well, and that the (unique because it is an abelian extension) inertia groups over $p=5$ and $q=11$ are distinct.
$L$ is the fixed field of the cyclotomic field $C_q$ of $q$-th roots of unity under the action of its Sylow-2-subgroup, an extension of $\mathbb{Q}$ of degree $e=5$. By construction in this case $L\subseteq K$ and so $KL=K$. The inertia group $I_U$ therefore is just the inertia group of $K$ at $Q=U$, which from above is a cyclic group of order $5$ isomorphic to Gal$(L/\mathbb{Q})$.
So finally we see that $(KL)^{I_U}$ is the maximal subextension of $K$ which is unramified above $q$, which MAGMA gives as the (totally real) splitting field of $x^5-10x^3-5x^2+10x-1$ over $\mathbb{Q}$, ramified only over $5$. It is clear this contains neither $K$ nor $L$, though it is a subfield of $K$. |
Does $x_n=\frac{1}{n}$ for each $n \in \Bbb{N}$ converge in $(X=(0, \infty),d)$ where $d(x,y)=|ln(\frac{x}{y})|$? | Let us assume that there exists $l>0$ such that $x_n \to l$ in the topology given by $d$. This would mean that $d(l, x_n)=|\ln(nl)| \to 0$; but $\ln(nl)\to\infty$, so this is impossible, and therefore the sequence is not convergent in the topology of $d$.
Prove $\sum^{k}_{i=0}{F(i)} + 1 = F(k+2)$ without induction | Since you're trying to prove infinitely many statements, we either use induction or properties of the objects we're working with (in this case Fibonacci numbers). Hence, here is a proof using the latter. You may find it unsatisfying, but there aren't that many alternatives here.
We take as given the following two identities:
$$\sum_{i=0}^{n-1} F_{2i+1}=F_{2n}$$
and
$$\sum_{i=1}^n F_{2i}=F_{2n+1}-1$$
We add the two identities to get $$\sum_{i=1}^{2n}F_i=F_{2n}+F_{2n+1}-1=F_{2n+2}-1$$
Then we add $1$ to both sides, to get the desired identity for even $k$. To get it for odd $k$, we replace the second identity by $\sum_{i=1}^{n-1}F_{2i}=F_{2n-1}-1$, and proceed similarly. |
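As a quick sanity check (not a proof, and using the convention $F(0)=0,\ F(1)=1$), the identity can be verified numerically, for instance in R:

fib <- function(n) if (n < 2) n else fib(n - 1) + fib(n - 2)  # naive recursion, fine for small n
k <- 10
sum(sapply(0:k, fib)) + 1 == fib(k + 2)  # TRUE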
Find range of values for p in equation of circle | It should be $-3p-2$ rather than just $-3p+2$, and you are on the right path.
What is the difference between eigenfunctions and eigenvectors of an operator? | An eigenfunction is an eigenvector that is also a function. Thus, an eigenfunction is an eigenvector but an eigenvector is not necessarily an eigenfunction.
For example, the eigenvectors of differential operators, which act on spaces of functions, are eigenfunctions, but the eigenvectors of linear operators on $\mathbb{R}^n$, being column vectors, are not.
Show that coefficients of $f$ are integral over $R$, if $f(t)$ is integral over $R[t]$ | Hint: You are correct so far. Here we use the first observation you have, that if $P(t) = 0$ then $P$ is identically 0. Note that $g(f(t)) = 0.$ So we need to build a polynomial in $K[t]$ out of $g(f(t)).$
After we simplify $g(f(t))$ we obviously get some polynomial $P \in K[t]$, and so that polynomial must kill $t$ and ergo be identically 0.
$P$ has constant coefficient $b_0 + b_1a_0^1 + b_2a_0^2 + \cdots + b_{m-1}a_0^{m-1}+ a_0^m,$ which must thus be equal to 0. Therefore $a_0$ is integral.
Can you figure out how to turn the fact that $P$ has all coefficients $0$ into a polynomial relation for $a_1$? Try inducting, proving the $a_i$ are integral one at a time.
Writing a sampled differential equation as a difference equation? | Correspondences between recurrence relations and differential equations are considered in this Wikipedia article:
Relationship to difference equations narrowly defined.
Relationship between homogeneous linear recurrence relations with constant coefficients and linear differential equations.
Relationship between first-order non-homogeneous recurrence relations with variable coefficients and first order linear differential equations with variable coefficients
Relationship to differential equations. |
Analog of the Chebyshev's inequality | If I understand correctly, your D is an expectation here.
Here are the key steps:
1) $\max(a,b) = \frac{a+b+|a-b|}{2}$
2) $|c^2-d^2| = |c+d||c-d|$
3) Cauchy Schwarz gives $E|UV| \leq \sqrt{E[U^2]}\sqrt{E[V^2]}$
This is a sketch. Just substitute appropriately to get your answer. |
Series of random numbers on a continuous function | Using Uniform Random Numbers to Simulate Various Distributions
Usually, by 'random number' or 'pseudorandom number', we mean an observation $U$ from a uniform distribution on the interval (0, 1). Other distributions are simulated by transforming $U$ to match some other distribution. A histogram of many observations $U$ would have several bars with bases in (0, 1) and
heights nearly equal. Altogether, the histogram has a roughly rectangular
shape, and sometimes uniform distributions are called 'rectangular'.
The simplest case might be to use $60U$ to simulate events randomly
spread over an hour's time. This is another rectangular distribution
but the base of the rectangle is the interval (0, 60).
If you wanted observations confined to the interval (60, 70), you
could use the transformation $10U + 60$. Another rectangular
distribution with base (60, 70).
Other transformations are nonlinear. For example,
many observations made with the square root transformation $\sqrt{U}$
would give a histogram that is roughly a right triangle with base (0,1)
and the tallest end of the hypotenuse on the right side. Admittedly, it is not
immediately obvious why taking the square root gives a triangular-shaped
histogram, but there is a mathematical proof for that.
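That proof is one line: for $0<x<1$, $P(\sqrt{U} \le x) = P(U \le x^2) = x^2$, so the density of $\sqrt{U}$ is $2x$, which is exactly a right triangle on (0, 1). A quick R illustration (the sample size of 100,000 is arbitrary):

u <- runif(100000)          # 'random numbers' from (0, 1)
hist(sqrt(u), breaks = 50)  # roughly a right triangle, tallest at the right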
Your specification, that you just want values crowded rather closely together, mainly clustering around 45, is pretty vague. Most statistical
software has pre-programmed transformations (some of them quite
elaborate) to turn uniform $U$'s into various distributional shapes.
So it is not necessary for the user to figure out the transformation needed to get a
desired shape. If you can be more specific what you want, I might be
able to tell you how to simulate the kind of data you want.
Below is an example of 50 observations from a normal population with
population mean 45, population standard deviation 8, and rounded to
integer values. This illustration was done with R statistical software, but
many kinds of software would do the same job about as simply.
Please understand that this is a random procedure, so the next
time I type 'round(rnorm(50, 45, 8))' into R, I will get entirely
different numbers, but still integers in the 'general vicinity' of 45.
Also, 50 observations is not nearly enough to yield a smooth histogram that closely matches the classical bell-shaped curve; 1000 would be better for that. (Numbers in brackets give the index, out of 50, of the first number in each row.)
> round(rnorm(50, 45, 8))
[1] 46 50 51 51 50 51 29 50 48 55 35 41 31 43
[15] 49 53 53 48 37 48 33 43 40 37 46 43 46 60
[29] 35 64 57 48 53 48 39 61 44 43 41 33 46 39
[43] 40 58 45 45 57 29 41 44 |
Limit point symmetric? | Consider the Sierpinski space $S=\{0,1\}$ where $\emptyset, S,$ and $\{0\}$ are open but $\{1\}$ is not open. Then $1$ is a limit point of $\{0\}$ but $0$ is not a limit point of $\{1\}.$
Problem with finding the constant in a non linear ODE equation | Start with the one from the book, for instance.
$$x = \frac{1}{3}\sqrt{219 + 6e^{3t}} = \frac{\sqrt{219 + 6e^{3t}}}{3} = \sqrt{\frac{219 + 6e^{3t}}{9}},$$
simply because $\sqrt{9} = 3$. Now, $\dfrac{6}{9} = \dfrac{2}{3}$, so that
$$x = \sqrt{\frac{219}{9} + \frac{2}{3}e^{3t}}.$$
Dividing $219$ by $3$, you get $73$. So $\dfrac{219}{9} = \dfrac{73}{3} = \dfrac{146}{6}$. Thus,
$$x = \sqrt{\frac{219}{9} + \frac{2}{3}e^{3t}} = \sqrt{\frac{146}{6} + \frac{2}{3}e^{3t}}.$$ |
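If in doubt, a numeric spot check in R (the value of $t$ is arbitrary) confirms that the two forms of the constant describe the same function:

t <- 1.3
sqrt(219 + 6*exp(3*t))/3 - sqrt(146/6 + (2/3)*exp(3*t))  # 0, up to rounding error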
Choosing a team of $8$ from $11$ men and $7$ women | In your approach, you count some events more than once. Let's say the names of the women are $A,B,C,D,E,F,G$.
Then, selecting $A,B,C$ and $5$ men is counted as:
selecting $A,B$, and then selecting $C$ among the $6$ remaining choices
selecting $A,C$, and then selecting $B$ among the $6$ remaining choices
selecting $B,C$, and then selecting $A$ among the $6$ remaining choices
So, you count this event three times, but it is a single event. |
Calculate the integral $\int_0^1 \sum_{r_n \leq x} 2^{-n} dx$ | After a discussion with my friends, we conclude that $$ \int_0^1\chi_{(r_n \leq x)}dx = \int_{r_n}^1dx = 1 - r_n$$ and thus, we have $$ \int_0^1\sum_{n=1}^{\infty}\frac{1}{2^n}\chi_{(r_n \leq x)}dx = \lim_{N \rightarrow \infty} \sum_{n=1}^{N}\frac{1}{2^n} (1 - r_n) = \sum_{n=1}^{\infty}\frac{1 - r_n}{2^n}$$
which can't be further simplified, since the value of $r_n$ depends on how we order the rational numbers. |
Would every half angle of an angle in each quadrant be in the previous quadrant? | No. Here are two counterexamples:
$\frac{7\pi}{4}$ is in the fourth quadrant and $\frac{7\pi}{8}$ is in the second quadrant.
$\frac{\pi}{3}$ is in the first quadrant and $\frac{\pi}{6}$ is also in the first quadrant.
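Such examples can be checked mechanically; here is a small R helper (hypothetical, written for this illustration) that returns the quadrant of an angle given in radians:

quadrant <- function(theta) floor((theta %% (2*pi)) / (pi/2)) %% 4 + 1
quadrant(c(7*pi/4, 7*pi/8))  # 4 2
quadrant(c(pi/3, pi/6))      # 1 1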
Geometric progression, min $x^n > 100$ | $\lfloor x \rfloor =1$ as already shown (in comments). Let $x=1+y$
Let the three numbers be $\{y, 1, 1+y\}$. Then $y\times(1+y) = 1^2=1$. Solving this, we get $y = \frac{-1\pm \sqrt{1+4}}{2}$. Since $y\in[0,1]$, we must take the positive root: $y=\frac{-1+\sqrt{5}}{2} \simeq 0.6180 = \phi-1$, where $\phi$ is the golden ratio.
Thus the numbers in geometric progression are $\{\phi-1, 1, \phi\}$=$\{\phi', 1, \phi\}$ where $\phi'=\phi-1=\frac{1}{\phi}$ is also known as the conjugate of the golden ratio
Now, $\phi^9\approx 76.0 < 100 < \phi^{10}\approx 123.0$, so $\phi^{10}$ crosses $100$ for the first time and the required $n=10$.
See here for more information on the golden ratio |
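A quick numeric check of the threshold, in R:

phi <- (1 + sqrt(5)) / 2
phi^(9:10)  # 76.01316 122.99187, so n = 10 is the first power exceeding 100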
What is the image of the following mapping for $|w|\leq 1$ | $\operatorname{Im}\bigl(i\frac {w-1}{w+1}\bigr)=\operatorname{Im}\frac {i(w-1)(\overline {w} +1)} {|w+1|^{2}}=\frac {|w|^{2}-1} {|w+1|^{2}} <0$. So every point in the image has negative imaginary part. Conversely, given any $\zeta$ with $\operatorname{Im}\zeta <0$, take $w=\frac {i+\zeta} {i-\zeta}$ and verify that the image of $w$ is $\zeta$. Hence the image is exactly the lower half of plane. |
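A numeric spot check in R (a few arbitrary points with $|w| < 1$; R handles complex arithmetic natively):

w <- c(0.3 + 0.4i, -0.5i, 0.9, -0.2 - 0.7i)
Im(1i * (w - 1) / (w + 1))  # all negative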
Example of conformal map that is not holomorphic? | $$ f(z) = \bar{z} $$
Complex conjugation preserves the magnitude of angles (though it reverses their orientation, so it is conformal only in the orientation-insensitive sense) and is nowhere holomorphic, since it fails the Cauchy-Riemann equations.
Prove: minimal origin-to-ellipse $|z+a|+|z-a|=2r$ lies on minor axis, using complex numbers | Turns out the example I made up is correct: all the important points have integer real and imaginary parts. Draw $a = 20 + 15i$ and $r=65$ |
Differential of rotation matrix at the north pole of sphere | On $\mathbb{R}^3$, because $A_z$ is linear and in $\mathbb{R}^3$ there is a natural identification of tangent spaces with the entire space, $dA_z = A_z$. Therefore, you know exactly what the action of $dA_z$ is on $T_{(0,0,1)}\mathbb{R}^3$.
Since $T_{(0,0,1)}S^2\subset T_{(0,0,1)}\mathbb{R}^3$, how can you deduce the action of $dA_z$ on it? |
Basic questions concerning sample means and distributions | 1. I suppose you mean to ask if the $expected$ value of $\bar X$ is $\mu.$ The answer is Yes, provided $\mu$ exists.
Let $X_1, X_2, \dots X_n$ be a random sample from a population with
mean $\mu$. Then $E(X_i) \equiv \mu.$ Then
$$E(\bar X) = E[(1/n) \sum_{i=1}^n X_i] = (1/n) E[\sum_{i=1}^n X_i]
= (1/n)\sum_{i=1}^n E(X_i) = (1/n)n\mu = \mu.$$
While we're at it, random sampling implies independence of the $X_i$
so that
$$V(\bar X) = V[(1/n) \sum_{i=1}^n X_i] = (1/n)^2 V[\sum_{i=1}^n X_i]
= (1/n)^2\sum_{i=1}^n V(X_i) = \sigma^2/n,$$
provided that the population variance $\sigma^2$ exists.
Note: The second equation requires independence. If $X_1$ and
$X_2$ are independent, then $V(X_1 + X_2) = V(X_1) + V(X_2).$
But this does $not$ work without independence. As an extreme example,
if $X_1 \equiv X_2$ then $V(X_1 + X_2) = V(2X_1) = 4V(X_1).$
2. Yes.
If $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2),$ then $X_1 + X_2 \sim N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2).$ Thus adding two normal random variables gives
another normal random variable. Also, if $X \sim N(\mu,\sigma^2),$ and $a >0$ and $b$ are real numbers,
then $aX + b \sim N(a\mu + b, a^2 \sigma^2).$
In the case of random sampling of two observations from a population, the
means and variances are equal:
$\mu_1 = \mu_2 = \mu$ and $\sigma_1^2 = \sigma_2^2 = \sigma^2.$
Thus $\bar X = (X_1 + X_2)/2 \sim N(\mu, \sigma^2/2).$
Notice that $V[(X_1 + X_2)/2] = (1/4)V(X_1 + X_2) = (1/4)(2\sigma^2) = \sigma^2/2.$
This generalizes to $n$ independent observations from a normal
distribution, so that $E(\bar X) = \mu,\,V(\bar X) = \sigma^2/n,$
and $\bar X \sim N(\mu, \sigma^2/n).$
Note: The Central Limit Theorem says that, for large $n$,
$\bar X$ has $approximately$ the distribution $N(\mu, \sigma^2/n),$
even if the data $X_1, X_2, \dots, X_n$ are randomly sampled
from a non-normal distribution that has mean $\mu$ and variance $\sigma^2$.
3. Yes. Because each observation has $V(X_i) = \sigma^2,$
whereas the sample mean of $n$ observations has $V(\bar X) = \sigma^2/n.$ So as the sample size $n$ increases, the variance of the sample mean decreases. |
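Both the unbiasedness of $\bar X$ and the $\sigma^2/n$ variance are easy to see in simulation; here is a brief R illustration with arbitrarily chosen parameters:

set.seed(2023)
xbars <- replicate(100000, mean(rnorm(25, 45, 8)))  # 100,000 sample means with n = 25
mean(xbars)  # close to mu = 45
var(xbars)   # close to sigma^2/n = 64/25 = 2.56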
Why do we need any test function to be infinitely many times differentiable? | I would say that it also allows us to have a largest space: the smaller the space of test functions, the bigger the dual space. For example, bounded measures can be seen as the dual of continuous functions, and so there are fewer measures than distributions. But yes, it also allows you to differentiate as many times as you want; this is often useful in applications, such as partial differential equations. If you prefer, knowing that the set of distributions is stable under differentiation tells you that distributions can be as irregular as you want.
Take for example the Dirac delta. You can see the Dirac delta as a functional over continuous functions: $\delta_0\in (C^0)'$ defined for every $\varphi\in C^0$ by
$$
\langle \delta_0,\varphi\rangle = \varphi(0)
$$
If you do not need to take derivatives of it, this approach is sufficient. If you need to take its first derivative, you can look at the same definition with $\varphi\in C^1$, and then you can define its derivative $\delta_0'$. One of the goals of distributions is to have one big space to put all distributions in, so as not to have to care about the precise space. Since the space is bigger, you will have more objects available in it, and so it will be easier to prove existence theorems, for PDE for example. However, in exchange, you lose the knowledge about the regularity of your solution.
Then, to study regularity, one often uses Sobolev spaces $W^{s,p}$ (or more refined scales). The set of distributions of order $n$ (the distributions for which you just need $\varphi\in C^n$) contains the spaces $W^{-n,p}$.
This is also useful to generalize the Fourier transform. The Fourier transform of a function as simple as $\mathbf{1}_{\mathbb{R}_+}$ is a distribution of order $1$, and in general it is not simple to know in exactly what space the Fourier transform of a function will lie when the function is not in $L^p$ with $p\leq 2$, so it is good not to have to worry about the regularity in this case.
need help setting up a derivative using logarithm differentiation, picture in body | This should be enough to get you on the right track...
$$y = \sqrt{ \frac{x(x+2)}{(2x+1)(3x+2)} }$$
$$\ln y = \ln \sqrt{ \frac{x(x+2)}{(2x+1)(3x+2)} }$$
$$\ln y = \ln \bigg( \frac{x(x+2)}{(2x+1)(3x+2)} \bigg)^\frac{1}{2}$$
$$\ln y = \frac{1}{2} \cdot \ln \bigg( \frac{x(x+2)}{(2x+1)(3x+2)} \bigg)$$
$$\frac{d}{dx} \big[ \ln y \big]= \frac{d}{dx} \Bigg[ \frac{1}{2} \cdot \ln \bigg( \frac{x(x+2)}{(2x+1)(3x+2)} \bigg) \Bigg]$$
$$\frac{d}{dx} \big[ \ln y \big]= \frac{1}{2} \cdot \frac{d}{dx} \Bigg[ \ln \bigg( \frac{x(x+2)}{(2x+1)(3x+2)} \bigg) \Bigg]$$
$$\frac{1}{y} \cdot \frac{dy}{dx}= \frac{1}{2} \cdot \frac{d}{dx} \Bigg[ \ln \bigg( \frac{x(x+2)}{(2x+1)(3x+2)} \bigg) \Bigg]$$
$$\frac{dy}{dx}= \frac{y}{2} \cdot \frac{d}{dx} \Bigg[ \ln \bigg( \frac{x(x+2)}{(2x+1)(3x+2)} \bigg) \Bigg]$$
From here, utilize the chain rule as well as the quotient rule. |
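For reference, here is one way to finish (which the answer deliberately leaves to the reader); expanding the logarithm into a sum of logarithms first means only the chain rule is needed:
$$\ln y = \frac{1}{2} \big[ \ln x + \ln(x+2) - \ln(2x+1) - \ln(3x+2) \big],$$
so that differentiating term by term gives
$$\frac{dy}{dx} = \frac{y}{2} \left[ \frac{1}{x} + \frac{1}{x+2} - \frac{2}{2x+1} - \frac{3}{3x+2} \right].$$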
Is there a way to solve equation of this form? | We have
$$f'(y)^2 + g(y) f'(y) + h(y) = 0$$
Solving the quadratic,
$$f'(y) = \frac{-1}{2}g(y) \pm \frac{1}{2}\sqrt{g(y)^2 - 4 h(y)}$$
Integrating both sides,
$$f(y) = C -\frac{1}{2} \int_{y_0}^y g(s) \, ds \pm \frac{1}{2} \int_{y_0}^y \sqrt{g(s)^2 - 4 h(s)} \, ds$$ |
Why characteristic function is primitive recursive | What you cited was the definition of PR subsets of $\mathbb N$. What it means is that in order to see whether a set $S\subset\mathbb N$ is PR, you must check whether its characteristic function is PR.
There is no why here, since this is the definition.
The set $\{1,2,5\}$ is, by this definition, primitive recursive if and only if the function $$f(n) = \begin{cases}1 & \text{if } n=1,2,5\\0&\text{otherwise}\end{cases}$$
is primitive recursive. |
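As a concrete (if trivial) illustration, the characteristic function can be written out, e.g. in R; primitive recursiveness is of course a property of the formal definition, not of this code:

chi <- function(n) as.integer(n %in% c(1, 2, 5))  # characteristic function of {1, 2, 5}
chi(0:6)  # 0 1 1 0 0 1 0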
incircle tanget to triangle at D and incirles of ADC ADB | Let touch-points of the incircles of $\Delta ACD$ and $\Delta ADB$ to $AD$ be $P$ and $Q$ respectively.
Thus, in the standard notation we obtain:
$$DP=\frac{AD+CD-AC}{2}=\frac{AD+\frac{a+b-c}{2}-b}{2}=\frac{AD+\frac{a+c-b}{2}-c}{2}=DQ$$ and we are done! |
Number of triangles sharing all vertices but no sides with a given octagon | If two of the vertices are $A$ and $C$, what are the possible third vertex? Look at the whole list $A,...,H$ |
Is there an easy way to see associativity or non-associativity from an operation's table? | Have you seen Light's associativity test? According to Wikipedia, "Direct verification of the associativity of a binary operation specified by a Cayley table is cumbersome and tedious. Light's associativity test greatly simplifies the task."
If nothing else, the existence of Light's algorithm seems to rule out the possibility that anyone knows an easy way to do it just by looking at the original Cayley table.
Note also that, in general, one cannot do better than the obvious method of just checking all $n^3$ identities of the form $(a\ast b)\ast c = a\ast (b\ast c)$. This is because it is possible that the operation could be completely associative except for one bad triple $\langle a,b,c\rangle$. So any method that purports to do better than this must only be able to do so in limited circumstances. |
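For small tables the obvious $n^3$ check is perfectly practical. A minimal R sketch (the encoding of the Cayley table as a matrix of indices in $1,\dots,n$ is my own convention here):

is_associative <- function(op) {
  n <- nrow(op)
  for (a in 1:n) for (b in 1:n) for (c in 1:n)
    if (op[op[a, b], c] != op[a, op[b, c]]) return(FALSE)
  TRUE
}
op <- (outer(0:2, 0:2, "+") %% 3) + 1  # addition mod 3, encoded on {1, 2, 3}
is_associative(op)  # TRUE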
Consider three nonzero matrices $A, B, C$ such that $ABB^t=CBB^t$.Then which property $A$ and $C$ share | Items (a) and (b) are certainly false. Take $A=B=\begin{pmatrix}1&0\\ \:0&0\end{pmatrix},C=\begin{pmatrix}1&0\\ \:0&2\end{pmatrix}$. Item (c) is true, and it relies on the fact that $B$ and $BB^t$ have the same column space. To see this, assume $B$ is $m\times n$, and fix $x\in \mathbb{R}^n$. Since $\text{Col}(B)=\text{Col}(BB^t)$ we can find $y\in \mathbb{R}^m$ such that $Bx=BB^ty$. So we have $$ABx=ABB^ty=CBB^ty=CBx$$ Since $x\in \mathbb{R}^n$ was chosen arbitrarily we must have $AB=CB$. |
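Both the counterexample and the conclusion can be spot-checked in a few lines of R:

A <- diag(c(1, 0)); B <- A; C <- diag(c(1, 2))
all(A %*% B %*% t(B) == C %*% B %*% t(B))  # TRUE: the hypothesis holds
all(A %*% B == C %*% B)                    # TRUE: item (c)
all(A == C)                                # FALSE: A and C still differ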
What is the value of the sum $\sum_{n=3}^N(\mu(n))^22^{\nu(n)}=\sum_{p=3}^N2+\sum_{pq=15}^N4+\sum_{pqr=105}^N8+\dots$ | The comment by Matthew Conroy (https://math.stackexchange.com/users/2937/matthew-conroy) supplied the answer to my question, and also reminded me of the usual notation for the quantity which I labeled $\nu(n)$ in my question. This usual notation is $\omega(n)$ (http://oeis.org/wiki/Omega(n),_number_of_prime_factors_of_n_(with_multiplicity)) and represents the number of distinct prime factors of a number $n$.
The sequence listed at http://oeis.org/A069201 lists the asymptotic formula as $Cn\log n+O(n)$, where $C$ is the constant given by
$$C=\prod_p\left(1-\frac 1p\right)^2\left(1+\frac 2p\right).$$ |
Matrices defining an isomorphism on their image space | I do not know if it has a name. However, what I see is a more general form of projection. So, we have the matrix map $AP$ where $P$ is a projection matrix and $A$ is a matrix of rank at least that of $P$ that keeps the input vectors coming from $P$ in the same space. It is not hard to prove that $(AP)^2$ has the same rank as $AP$.
Also, a linear transform that is bijective from one space to another has the isomorphism property; here that property should be supplied by $A$.
Continuous non-decreasing image of a measure zero set | The Cantor-Lebesgue function $f$, also known as “the devil's staircase”, is continuous and monotonically increasing. It maps the unit interval onto the unit interval. Since $f$ is constant on each component of the complement $[0,1] \smallsetminus C$ of the Cantor set $C$, the image $f([0,1] \smallsetminus C)$ is countable, and so $f(C)$ must have measure one.
How to define exact sequences in a semi-abelian category | By definition, a semi-abelian category (or homological) is regular, so every arrow $f:A\to B$ factorizes as a regular epimorphism $p_f:A\to Im(f)$ followed by a monomorphism $m_f:Im(f)\to B$. This $I$, or more precisely the subobject $m_f:Im(f)\to B$, is by definition the image of $f$. Then if you have a sequence
$$A\stackrel{f}{\longrightarrow} B \stackrel{g}{\longrightarrow} C$$
such that $g\circ f=0$, your factorization $f=k_g\circ \widetilde{f}$ shows that $Im(f)\subset Ker(g)$, in the sense that you must have a morphism $j:Im(f)\to Ker(g)$ such that $m_f=k_g\circ j$ (you can just take $j=m_{\widetilde{f}}$). Then the sequence is exact at $B$ if this $j$ is an isomorphism, which is equivalent to the condition that $\widetilde{f}$ is a regular epimorphism (because the factorization is unique up to a unique appropriate isomorphism) and that $m_f$ is the kernel of $g$.
In a homological category, one can prove that every regular epimorphism is the cokernel of its kernel, which implies that your $\overline{f}$ is always a monomorphism, and thus also that a morphism has zero kernel if and only if it is a monomorphism. So the image is really what you call the coimage; what you call the image, i.e. the kernel of the cokernel of $f$, is generally less useful, because not every monomorphism in a semi-abelian category is a kernel. In fact your image is the smallest kernel containing $m_f$, so if $m_f$ is a kernel then it coincides with your definition of image. |
Boundary of $L^1$ space | $L^1$ is a Banach space, so your set is empty. You can also prove it in this way: for every $m,n$ we have
$$
\int_\mathbb{R}|f_n-f_m| \le \int_\mathbb{R}|f_n-f|+\int_\mathbb{R}|f-f_m|,
$$
i.e. $(f_n)$ is a Cauchy sequence, and since $L^1$ is a Banach space, there is some $g \in L^1$ such that $f_n \to g$. By uniqueness of the limit we conclude that $f=g \in L^1$.
Sectioning sets in product measures | The step you highlighted only proves that $A_{\omega_2}$ is in $\mathcal F_1$ in the special case where $A$ is a rectangle (i.e. when $A$ can be written as a product, $A_1 \times A_2$, where $A_1 \in \mathcal F_1$ and $A_2 \in \mathcal F_2$). But many measurable sets $A$ in $\mathcal F$ are not rectangles!
[For example, consider endowing $\mathbb R^2 = \mathbb R \times \mathbb R$ with the product measure induced by the Lebesgue measure on each of the two $\mathbb R$'s. Then the disk $\{(x,y) \in \mathbb R^2 : \sqrt{x^2 + y^2 } < 1 \}$ is measurable w.r.t. the product measure, but it is not a rectangle.]
So to go from the easy special case of rectangles to the general case of all measurable sets, the authors structure their argument as follows:
(i) $A_{\omega_2} \in \mathcal F_1$ in the special case where $A$ is a "rectangle" (i.e. a set of the form $A_1 \times A_2$ where $A_1 \in \mathcal F_1$ and $A_2 \in \mathcal F_2$).
(ii) The collection of sets $A\in\mathcal F$ such that $A_{\omega_2} \in \mathcal F_1$ is a sigma-algebra, i.e. it contains the empty set, and it is closed under taking complements and countable unions.
(iii) Since $\mathcal F$ is, by definition, the smallest sigma-algebra containing all rectangles, it must be the case that the collection of sets $A \in\mathcal F$ such that $A_{\omega_2} \in \mathcal F_1$ is the whole of $\mathcal F$. |
What is the idea to integrate this equation from G&R? | Well, you usually don't do that step naturally (and that is a reason to keep such an entry in a table). But here is why it is true. First, simplify the integrand on the left-hand side minus the integrand on the right-hand side, writing $z=a+bx$:
$$
\frac{1}{xz^m}-\frac{1}{axz^{m-1}}=-\frac{b}{az^m}.
$$
Then just integrate,
$$
\int -\frac{b}{a(a+b x)^m}\,dx=\frac{1}{a(m-1)(a+bx)^{m-1}}+C.
$$
The constant $C$ can be omitted if one rearranges as in the table entry. |
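A numeric spot check of the simplification step, with arbitrary values of the parameters (R):

a <- 2; b <- 3; m <- 4; x <- 0.7
z <- a + b * x
1/(x * z^m) - 1/(a * x * z^(m - 1)) + b/(a * z^m)  # 0, up to rounding error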
Function order of $n\times \ln(1+{2\over n})$ | $a_n=n\ln (1+\frac 2 n) \to 2$. Now consider $a_n-2=n[\ln (1+\frac 2 n) -\frac 2 n]\sim (-n)\frac 2 {n^{2}}=-\frac 2 n$ from the Taylor expansion of $\ln (1+x)$.
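The rate is easy to confirm numerically, e.g. in R:

n <- 10^(1:7)
n * (n * log(1 + 2/n) - 2)  # tends to -2, consistent with a_n - 2 ~ -2/n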
Questions about the surface integral | $(1)$
we are integrating over the region of projection of $P_k$ onto the $xy$ plane,
$(2)$ For the last line of $=$ we just change discrete summation into continuous integral by making $\Delta A_k$ infinitesimal.
$(3)$ $\hat n = \hat k$ because $\hat k$ is the unit normal of the $xy$ plane, and we are taking the projection of the surface onto the $xy$ plane, so we take the dot product of the normal of the surface with the normal of the $xy$ plane.
Added: in $(2)$ I made a slight error. It is a dot product of the vector function with the normal of $P_k$, which points along $\frac{\nabla F}{|\nabla F|}$, i.e. the unit normal vector.
See this image |
If $X$ is a $CW-$complex. Are $C_*(X)$ and $C^{CW}_*(X)$ weakly equivalent? | First of all, as you point out, if you're dealing with triangulated spaces, the whole matter becomes easier. I suppose that's because you impose that the map $\partial \Delta^n \to X_{n-1}$ respects the triangulated structure, so everything works well when you look at the complexes in question.
For "plain" CW-complexes, as we saw in the comments, I'm not sure there's a geometric proof. However, here's an algebraic proof that the two are weakly equivalent, in fact, they're homotopy equivalent. The proof is "stupid" in that it only relies on the fact that they're complexes of free abelian groups and have the same homology; and the map you get is not natural in $X$ in any reasonable sense (even with respect to cellular maps, while you could hope that it would be)
The proof is as follows: $C_*^{CW}(X)$ (resp. $C_*(X)$) are complexes of abelian groups, so they are weakly equivalent (in the sense of a zigzag of morphisms) to their homology (look for instance at the accepted answer here), therefore they are weakly equivalent to one another.
This means they are isomorphic in the derived category $D_{\geq 0}(\mathbf{Ab})$. However, they are both chain complexes of free modules, so $\hom_{D_{\geq 0}(\mathbf{Ab})}(C_*^{CW}(X),C_*(X))$ is just the quotient of $\hom_{Ch_{\geq 0}(\mathbf{Ab})}(C^{CW}_*(X), C_*(X))$ by the homotopy relation, and similarly in the other direction. It follows that they are homotopy equivalent.
Here's a possible geometric approach, to yield naturality : the category we will consider is a slight modification on CW-complexes : we will want to record how cells are attached, and morphisms will have to respect this (note that I'm not entirely sure that what I'm writing is correct, you should especially double check this bit - I'm writing it and correcting it at the same time as thinking about it. Also, at the end, I don't get an actual conclusion, just a wild guess)
So an object in our category $C$ will be a CW-complex $X$ together with its "history" of construction, that is, for each $n$, a set $I_n$ and a family $\phi_i : S^n\to X^{(n)}$ of attaching maps. So essentially : a CW-complex, together with its CW-structure
A morphism between two such things will be in particular a cellular map, but actually the requirement will be stronger : a map $f: X\to Y$ will be a cellular map such that for all $n$, the map $X^{(n+1)}\to Y^{(n+1)}$ is induced by the map $X^{(n)}\to Y^{(n)}$ and a map $I_n\times D^{n+1}\to Y^{(n+1)}$ such that the composite with $Y^{(n+1)}\to Y^{(n+1)}/Y^{(n)}\cong \bigvee_{j\in J_n}S^{n+1}$ is, for each $i\in I_n$, just the quotient map $D^{n+1}\to S^{n+1} $ followed by the inclusion $S^{n+1}\to \bigvee_{j\in J_n}S^{n+1}$, for exactly one $j\in J_n$; and also such that $I_n\times D^{n+1}\to Y^{(n+1)}$ restricts to $I_n\times S^n\to J_n\times S^n$ with the induced map $I_n\to J_n$ and the identity $S^n\to S^n$
The goal is then to use the acyclic models theorem. For notation, I will be following this statement. Our functor $F$ is $C_*^{CW}$; I think its definition is pretty clear (given that the maps are cellular but in fact send cells to cells, it's easy to see how it is defined on morphisms). Now I claim that $C_k^{CW}$ is free on $\{D^k\}$, with the usual cell-decomposition: one $0$-cell, one $k-1$-cell to produce a $k-1$-sphere, and then one $k$-cell to fill it.
Indeed, what is a map $D^k\to X$ in $C$? I claim it is the same data as a $k$-cell in $X$. Clearly such a map determines a $k$-cell in $X$: indeed, looking in degree $k$, you have, by definition of $C$, that $D^k\to X^{(k)}/X^{(k-1)}$ corresponds to picking precisely one $k$-cell (and since we required that this map be the quotient map, there is no additional data). Conversely, a $k$-cell of $X$ determines (obviously) a map $D^k\to X$.
One may check that these two applications are inverse to one another (I think this uses the last requirement in my definition of $C$, that is, that a map $D^k\to X$ must respect the boundary : it clearly preserves the interior, because of the condition on quotient maps; and so to make sure we don't lose information one must impose that it preserves the boundary).
In any case $C_k^{CW}$ is free on $\{D^k\}$ (with the given cell-decomposition)
Then we put $V= C_*$, which is defined in the obvious way. We need to check that it is $k$ and $k+1$-acyclic at these models, which means that $H_k^{sing}(D^k),H_{k+1}^{sing}(D^k), H_k^{sing}(D^{k+1}), H_{k+1}^{sing}(D^{k+1})$ must be $0$ for $k>0$. Well this is just a classical fact about singular homology, and contractibility of $D^k$.
It then follows that any natural transformation $H_0\circ C^{CW}_* \to H_0\circ C_*$ extends (uniquely up to homotopy) to a natural chain map $C_*^{CW}\to C_*$. It shouldn't be hard to show that the isomorphism $H_0^{CW}(X)\to H_0(X)$ is natural, so we get our unique chain map that does this.
My guess is that this chain map is a weak equivalence, but I'm not quite sure how to prove that. Note that this would provide some amount of naturality (although in a sense restricted : the maps of $C$ are quite restrictive) |