Fokker-Planck equation applied to $\lvert x\rvert^2$
The integral is well defined because we assumed that our measures of interest have finite second moment, so $$ \varphi \mapsto \int_{[0,T]\times\mathbb{R}^n} \varphi(x)|x|^2\,du_t $$ is finite for any test function; hence it defines a distribution, and we can take its weak derivative. As for the equality, we use the Fokker-Planck equation: $$ \frac{d}{dt}\int |x|^2\,du_t = \int |x|^2\,d(\Delta_x u_t) + \int |x|^2\,d(\nabla \cdot xu_t) $$ Then integrating by parts we arrive at $$ \int \Delta|x|^2\,du_t - \int (\nabla|x|^2)\cdot x\,du_t = \int 2n\,du_t - \int 2x\cdot x\,du_t = 2n - 2\int |x|^2\,du_t, $$ since these are probability measures. Hope I helped.
Finding the limits of integration for the volume of a region inside a cube
We need to find the volume of the region inside the cube $x, y, z \in [f, 1]$, $0 \leq f \lt 1$, satisfying $y^2 \geq 4xz$ (outside the cone). For the cube, the upper limit for $x, y, z$ is $1$. For the cone, note that when $x = z = 1$ we would need $y = 2$ (I am only considering the first octant, given that our cube is in the first octant), but as we are restricted by $y = 1$ for the cube, we need $4xz \leq 1 \implies x, z \leq \min (\frac{1}{4f}, 1)$ for $f \ne 0$. At the same time, for points outside the cone, $y^2 \geq 4f^2 \implies y \geq 2f$ for $x = z = f$, and $y \geq 2 \sqrt f \gt f$ (as $f \lt 1$) for $x = 1, z = f$ or $z = 1, x = f$. These are important observations for finding the right limits of our volume integral. What they indicate is that if $f \leq 0.25$, the following $3$ vertices of the cube are outside the cone and the other $5$ are inside: $(f,1,f), (f,1,1), (1,1,f)$. In fact at $f = 0.25$, $(f,1,1)$ and $(1,1,f)$ are on the cone and only $(f,1,f)$ is outside. Also note that at $f = 0.5$, $(f,1,f)$ is on the cone and the rest are inside, so for $f \geq 0.5$ there is no volume outside the cone (all points of the cube satisfy $y^2 \leq 4xz$). So here are the integrals for the desired volume: i) $0 \leq f \leq 0.25$ (the first integral is zero at $f = 0.25$): $\displaystyle \int_{f}^{0.25} \int_{f}^{1} \int_{2\sqrt{xz}}^{1} dy \ dx \ dz + \int_{0.25}^{1} \int_{f}^{1/(4z)} \int_{2\sqrt{xz}}^{1} dy \ dx \ dz$. And your answer is indeed correct for $f = 0$. ii) $0.25 \leq f \leq 0.5$ (zero for $f = 0.5$): $\displaystyle \int_{f}^{1/(4f)} \int_{f}^{1/(4z)} \int_{2\sqrt{xz}}^{1} \ dy \ dx \ dz$. To your other specific questions: no, there does not seem to be anything wrong with your approach, but it did not extend to the other cases, probably because you did not visualize the surface well enough. A $3D$ sketch using an online tool helps. I literally had to keep a cube in front of me while answering the question.
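A quick numeric cross-check of case i) (a sketch, assuming Python with scipy/numpy; the value $f=0.1$ and the sample size are arbitrary choices): the two iterated integrals should agree with a Monte Carlo estimate of the volume of $\{(x,y,z)\in[f,1]^3 : y^2\ge 4xz\}$.
```python
import numpy as np
from scipy import integrate

f = 0.1
# Case i): outer variable z, middle x, inner y, exactly as in the two integrals above.
V1, _ = integrate.tplquad(lambda y, x, z: 1.0, f, 0.25,
                          lambda z: f, lambda z: 1.0,
                          lambda z, x: 2*np.sqrt(x*z), lambda z, x: 1.0)
V2, _ = integrate.tplquad(lambda y, x, z: 1.0, 0.25, 1.0,
                          lambda z: f, lambda z: 1/(4*z),
                          lambda z, x: 2*np.sqrt(x*z), lambda z, x: 1.0)

# Monte Carlo estimate of the same volume.
rng = np.random.default_rng(1)
pts = rng.uniform(f, 1.0, size=(10**6, 3))   # columns: x, y, z
mc = np.mean(pts[:, 1]**2 >= 4*pts[:, 0]*pts[:, 2]) * (1 - f)**3
print(V1 + V2, mc)   # the two estimates should agree to ~3 decimal places
```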
Linear Algebra - determinant and linear independence
It might be easiest to just multiply it out without the row operations and find the roots: $a^3 -3a + 2 = (a-1)(a^2+a-2) = (a-1)^2(a + 2)$. But there are some ways you might guess your way to an answer. If $a = 1$ then all of the rows are identical and hence linearly dependent. If $a = -2$ all of the rows sum to $0$, which means that the vector $(1,1,1)$ is in the kernel, and the matrix is singular. If you know about eigenvalues: what are the eigenvalues of $\begin{bmatrix} 0&1&1\\1&0&1\\1&1&0\end{bmatrix}$? The sum of the eigenvalues equals the trace of the matrix, and the product of the eigenvalues equals the determinant. We know that $-1, 2$ are eigenvalues, the trace is $0$ and the determinant is $2$, so the last eigenvalue must be $-1$.
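A quick symbolic check (a sketch, assuming sympy, and assuming the matrix in question is the one with $a$ on the diagonal and $1$ elsewhere, which is consistent with the determinant $a^3-3a+2$ above):
```python
import sympy as sp

a = sp.symbols('a')
M = sp.Matrix([[a, 1, 1], [1, a, 1], [1, 1, a]])
print(sp.factor(M.det()))   # (a - 1)**2*(a + 2): singular exactly at a = 1 and a = -2
```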
Can $\operatorname{Spec}(R[X])$ ever be finite?
Suppose that $R$ is non-zero. Then, we can find a surjection $R\to k$ for some field $k$, which induces a surjection $R[x]\to k[x]$. From this we obtain a closed embedding $\text{Spec }k[x]\to\text{Spec }R[x]$, and thus it suffices to prove that $\text{Spec }k[x]$ is infinite for $k$ a field. To do this we can proceed as follows. If $k$ is infinite then $(x-a)\in\text{Spec }k[x]$ for all $a\in k$. If $k$ is finite, say $k=\mathbb{F}_q$, then we have infinitely many elements of $\text{MaxSpec }k[x]$, corresponding to the field extensions $\mathbb{F}_{q^n}$ for every $n\in\mathbb{N}$. Note that the above actually shows that even $\text{MaxSpec }R[x]$ is infinite if $R\ne 0$.
determining whether a set is convex
Let $x$ denote the variable $x_1$, and $y$ denote the variable $x_2$. Then: The set $x \ge 1$ is convex. The set $x - y \le 1$ is convex. The set $x^3 - x^2 + y^2 - 2xy \le 0$ is convex for $x \ge 1$. To show the third of these, write $(y - x)^2 \le x^2(2 - x)$, implying that $x \le 2$ and $x - x\sqrt{2-x} \le y \le x + x\sqrt{2-x}$. Take the second derivative of the lower and upper bounds on $y$ and show that each has the correct sign. Then, the intersection of convex sets is convex so we're done.
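A symbolic check of the last step (a sketch, assuming sympy): the lower bound on $y$ should be convex and the upper bound concave on $[1,2]$.
```python
import sympy as sp

x = sp.symbols('x', positive=True)
g = x*sp.sqrt(2 - x)
g2 = sp.simplify(sp.diff(g, x, 2))
print(g2)   # equivalent to (3*x - 8)/(4*(2 - x)**(3/2)), negative on [1, 2)
# So upper = x + g has nonpositive second derivative (concave),
# and lower = x - g has nonnegative second derivative (convex) there.
```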
Inverse Laplace Transform of $\ln[\frac{s^2+a^2}{s^2+b^2}]$
$\mathcal{L}^{-1}\left\{\ln\dfrac{s^2+a^2}{s^2+b^2}\right\}$ $=\mathcal{L}^{-1}\left\{\int_s^\infty\left(\dfrac{2u}{u^2+b^2}-\dfrac{2u}{u^2+a^2}\right)du\right\}$ $=\dfrac{1}{t}\mathcal{L}^{-1}\left\{\dfrac{2s}{s^2+b^2}-\dfrac{2s}{s^2+a^2}\right\}$ $=\dfrac{2\cos bt-2\cos at}{t}$ (Note the order of the terms: since $\ln\dfrac{s^2+a^2}{s^2+b^2}\to 0$ as $s\to\infty$, it equals $\int_s^\infty\left(\dfrac{2u}{u^2+b^2}-\dfrac{2u}{u^2+a^2}\right)du$, the integral of minus its own derivative.)
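A numeric sanity check of the result (a sketch, assuming mpmath; the values $a=2$, $b=3$, $s=1.5$ are arbitrary):
```python
import mpmath as mp

a, b, s = 2, 3, 1.5
# Laplace transform of (2cos(bt) - 2cos(at))/t, evaluated at s:
lhs = mp.quad(lambda t: (2*mp.cos(b*t) - 2*mp.cos(a*t))/t * mp.exp(-s*t), [0, mp.inf])
rhs = mp.log((s**2 + a**2)/(s**2 + b**2))
print(lhs, rhs)   # the two values should agree
```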
if $f_1(x,y) \leq f_2(x,y)$ regular surfaces show that the mean curvature $H_1 \leq H_2$
This is certainly the most difficult way I can imagine to prove that there is no compact minimal surface in $\Bbb R^3$. But the answer to your question is to write out the second-degree Taylor polynomials at $(0,0)$ for $f_1$ and $f_2$. It might help to choose the axes in the directions of the principal directions for one of the surfaces.
Rotate object around a fixed coordinate axis
I think your problem is related to the frame of reference. Your global frame is A, and the other frames obtained after rotation in A are local frames (such as B). So you want all your rotations in the global frame, but they are taking place in the local frame. For rotation in the global frame of reference you need to pre-multiply the transformation matrix, and for rotation in the local frame, post-multiply. For example, if you want rotation by $\theta_1$ in A, for which the transformation matrix is $R_{\theta_1}$, and then rotation by $\theta_2$ in A (rotation matrix $R_{\theta_2}$), then: $$V_f = R_{\theta_2}*R_{\theta_1}*V_i$$ where $V_i$ is the initial position of the point (or object) and $V_f$ is the final one. For further reference, check this link: Maths - frame-of-reference for combining rotations
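A small numeric illustration (a sketch, assuming numpy): two successive rotations about the global $z$-axis compose by pre-multiplication.
```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the (global) z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

v_i = np.array([1.0, 0.0, 0.0])
# Rotate by theta_1, then by theta_2, both in the global frame: pre-multiply.
v_f = rot_z(np.pi/3) @ rot_z(np.pi/6) @ v_i
print(np.round(v_f, 6))   # [0. 1. 0.]: same as one global rotation by pi/2
```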
What is the difference between d/dx and dz/dx in Calculus?
There are two cases: The surface is defined by an implicit expression of the form $$F(x,y,z)=0$$ In this case, the tangent plane at the point $ (x_0,y_0,z_0) $ will have the equation you wrote: $$F_x(x-x_0)+F_y(y-y_0)+F_z(z-z_0)=0$$ with $$F_x=\frac{\partial F}{\partial x}(x_0,y_0,z_0)$$ and so on for $ F_y $ and $ F_z$. The second case is when the surface is defined by an explicit expression of the form $$z=f(x,y)$$ or $$G(x,y,z)=z-f(x,y)=0$$ In this case, the equation of the tangent plane will be $$G_x(x-x_0)+G_y(y-y_0)+G_z(z-z_0)=0$$ or $$z-z_0=\frac{\partial f}{\partial x}(x_0,y_0)(x-x_0)+\frac{\partial f}{\partial y}(x_0,y_0)(y-y_0)$$ $$=\frac{\partial z}{\partial x}(x_0,y_0)(x-x_0)+\frac{\partial z}{\partial y}(x_0,y_0)(y-y_0)$$
Prove a trigonometric series is positive
$$\overline{f(x)}=\sum_{n=-\infty}^\infty\frac{e^{-inx}}{1+n^2}\stackrel{m:=-n}=\sum_{m=\infty}^{-\infty}\frac{e^{imx}}{1+m^2}=f(x)$$ Now, writing $\;e^{inx}=\cos nx+i\sin nx\;$, we get that (by absolute convergence) $$\sum_{n=-\infty}^\infty\frac{\sin nx}{1+n^2}=\sum_{n=-\infty}^{-1}\frac{\sin nx}{1+n^2}+\sum_{n=1}^\infty\frac{\sin nx}{1+n^2}=$$ $$=-\sum_{n=-\infty}^{-1}\frac{\sin(-nx)}{1+n^2}+\sum_{n=1}^\infty\frac{\sin nx}{1+n^2}=-\sum_{n=1}^\infty\frac{\sin nx}{1+n^2}+\sum_{n=1}^\infty\frac{\sin nx}{1+n^2}=0$$ and, of course, we got another proof of the fact that the function is real. The problem now is just to evaluate $$\sum_{n=-\infty}^\infty\frac{\cos nx}{1+n^2}=1+2\sum_{n=1}^\infty\frac{\cos nx}{1+n^2}$$ Positivity: $$1+2\sum_{n=1}^\infty\frac{\cos nx}{1+n^2}\ge1+2\sum_{n=1}^\infty\frac{\cos n\pi}{1+n^2}=1+2\sum_{n=1}^\infty\frac{(-1)^n}{1+n^2}>1+2\left(-\frac12\right)=0$$ using the standard estimate for the sum of an alternating series (the first inequality uses the fact that the sum, as a function of $x$, attains its minimum at $x=\pi$).
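A quick numeric check of the positivity and of the location of the minimum (a sketch, assuming numpy; the truncation at $2\cdot 10^5$ terms is arbitrary):
```python
import numpy as np

n = np.arange(1, 200001)
for x in np.linspace(0, np.pi, 8):
    s = 1 + 2*np.sum(np.cos(n*x)/(1 + n**2))
    print(f"x = {x:.3f}: s = {s:.6f}")   # all positive
# The smallest value occurs at x = pi, about 0.272 (numerically pi/sinh(pi)).
```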
Does every equational theory have an independent equational axiomatization?
No. Finite algebras with no independent basis of identities, I. M. ISAEV, Algebra univers. 37 (1997) 440-444 describes a finite algebra whose equational theory has no independent equational axiomatization. The algebra is a finite dimensional vector space over a finite field equipped with a certain nonassociative bilinear multiplication.
Algebraic geometry in representation theory?
Many modern representation theorists are interested in "geometric representation theory". One of the goals in this field is to realize a representation (e.g. a representation of a Lie algebra) geometrically. What this means is to realize the underlying vector space as the (co)homology of some algebraic variety and the action (e.g. the action of the Lie algebra) via some geometrically defined operations, such as cup products or convolution. There are several reasons why one would want to do this. One of the most important (in my opinion) is that the geometric approach often yields very nice bases in the representation, e.g., bases whose structure coefficients are positive integers (i.e. when you write the product of two basis elements as a linear combination of the basis elements, the coefficients are positive integers). These bases can be hard to define from a purely algebraic viewpoint.
Derivative of $y = \log_{\sqrt[3]{x}}(7)$.
Yes, you are right. Simplify as follows $$y=\log_{\sqrt[3]{x}}(7)=\frac{\ln 7}{\ln (\sqrt[3]{x})}=\frac{\ln (7)}{\frac13\ln x}=\frac{3\ln (7)}{\ln x}$$ $$\therefore \frac{dy}{dx}=3\ln (7)\left(\frac{-1}{(\ln x)^2}\frac1x\right)=-\frac{3\ln (7)}{x(\ln x)^2}$$
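A quick symbolic check (a sketch, assuming sympy):
```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.log(7, x**sp.Rational(1, 3))          # log base x^(1/3) of 7
print(sp.simplify(sp.diff(y, x)))            # -3*log(7)/(x*log(x)**2)
```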
calculate centroid of triangle on a graph
Given three points in the coordinate plane $p_1 = (x_1,y_1)$, $p_2=(x_2,y_2)$, and $p_3=(x_3,y_3)$, the coordinates of the centroid $q$ is simply the average of the coordinates of the three points (actually, this is sometimes how the centroid is defined): $$q = \left(\frac{x_1+x_2+x_3}{3},\frac{y_1+y_2+y_3}{3}\right)$$
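In code this is a one-liner (a minimal sketch in Python; the sample triangle is arbitrary):
```python
def centroid(p1, p2, p3):
    """Centroid of a triangle: the coordinate-wise average of its three vertices."""
    return ((p1[0] + p2[0] + p3[0]) / 3, (p1[1] + p2[1] + p3[1]) / 3)

print(centroid((0, 0), (6, 0), (0, 3)))   # (2.0, 1.0)
```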
Voltage divider, need help simplifying fractions
The impedance seen from the output is $$Z_{out}=\frac1{10+i4\omega}$$ and that from the input $$Z_{in}=\frac1{i\omega}+Z_{out}.$$ Then the transmittance $$\frac{Z_{out}}{Z_{in}}=\frac{\dfrac1{10+i4\omega}}{\dfrac1{i\omega}+\dfrac1{10+i4\omega}}=\frac{i\omega}{10+i4\omega+i\omega}=\frac1{10}\frac{i\omega}{1+i\dfrac\omega2}.$$
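A symbolic check of the simplification (a sketch, assuming sympy):
```python
import sympy as sp

w = sp.symbols('omega', positive=True)
Zout = 1/(10 + 4*sp.I*w)
Zin = 1/(sp.I*w) + Zout
H = sp.simplify(Zout/Zin)
print(H)                                             # I*omega/(10 + 5*I*omega)
print(sp.simplify(H - sp.I*w/(10 + 5*sp.I*w)))       # 0
```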
Cube numbers ending on number X
Note: in base 10 any digit can be the final digit of a cube. Work on the least significant digits first. $0^3=0;\ 1^3=1;\ 2^3=8;\ 3^3=27;\ 4^3=64;\ 5^3=125;\ 6^3=216;\ 7^3=343;\ 8^3=512;\ 9^3=729$ Choose a digit $a$ so that $a^3$ ends with the digit you need; this will be unique. Then use $(10b+a)^3 = \dots +30a^2b+a^3$ to choose $b$ to fix the final two digits - which may not be possible, and it is possible that there will be more than one $b$ to test. Then $(100c+10b+a)^3= \dots + 300c(10b+a)^2+(10b+a)^3$ will deal with the hundreds digit ... etc. Exploring the various possibilities will give you some clues as to what works and why.
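A brute-force illustration of the digit-by-digit idea (a sketch in Python; `cube_ending` is a hypothetical helper name): since $n^3 \bmod 10^k$ depends only on $n \bmod 10^k$, searching the residues suffices.
```python
def cube_ending(suffix: str) -> int:
    """Smallest n >= 0 whose cube ends in the given digits, or -1 if impossible."""
    m = 10 ** len(suffix)
    for n in range(m):           # n^3 mod m depends only on n mod m
        if n**3 % m == int(suffix):
            return n
    return -1

print(cube_ending("7"))    # 3, since 3^3 = 27
print(cube_ending("37"))   # 33, since 33^3 = 35937
print(cube_ending("10"))   # -1: as noted above, not every two-digit ending works
```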
Prove that the correspondance is an isomorphism from $O_2$ to a subgroup of $SO_3$
Let $\phi\colon G\to H,\ \phi'\colon G\to H'$ be two group homomorphisms. Then $\psi\colon G\to H\times H'$ defined as $\psi(g) = \big(\phi(g),\phi'(g)\big)$ is also a group homomorphism. (routine verification) Now take $G=H=O(2)$, $\phi$ as the identity, $H'=\{\pm1\}$ and $\phi'$ as the determinant homomorphism. And note that $H\times H'$ is a subgroup of $O(3)$. You should be able to complete the details now.
What is the derivative of $\dot{x} = f(x(t))$?
$$\ddot{x}(t) = -\sin(x(t))\cdot \dfrac{dx}{dt}(t)$$ is the correct derivation. Since $\dot{x}(t) = \frac{dx}{dt}(t)$, you can substitute the original equation $\dot{x}(t)=\cos(x(t))$ back in and improve this to $$\ddot{x}(t) = -\sin(x(t))\cdot \cos(x(t))$$
What does $\sin x \cdot \sin 2x \cdot \sin 3x \cdot ... \cdot \sin nx$ equal to?
Yes, correct. Euler's famous identity: $$e^{i\phi} = \cos \phi + i \sin \phi $$ can also be written (replace $\phi$ with $-\phi$) $$e^{-i\phi} = \cos \phi - i \sin \phi $$ Now take the difference of these to see that $$\sin \phi = \frac{ e^{i \phi } - e^{-i\phi} }{2i}$$ which leads to your expression $(\phi = kx, \quad k=1,2,\cdots,n).$ Similarly, adding leads to $$\cos \phi = \frac{ e^{i \phi } + e^{-i\phi} }{2}.$$ You could multiply these binomials and see where that leads. I don't know if you get something 'simpler' other than putting the result back into a form involving only sines and cosines. UPDATE Here are the results when applying the expression for $n=2$ and $n=3$ and then factoring to get the terms to involve only $\cos x$ and $\sin x$: $$\sin x \cdot \sin 2x=\frac{1}{2} \cos x - \frac{1}{2} \cos^3 x + \frac{3}{2} \cos x \sin^2 x.$$ $$\sin x \cdot \sin 2x \cdot \sin 3x =\frac{1}{2} \cos x \sin x +\cos^3 x \sin x - \frac{3}{2} \cos^5 x \sin x - \cos x \sin^3 x + 5 \cos^3 x \sin^3 x - \frac{3}{2} \cos x \sin^5 x.$$ To get these forms, let $p=e^{ix}$ and $q=e^{-ix}$ to obtain a polynomial in $p$ and $q$. Now factor to get terms involving only $p-q$ and $p+q$. These terms are (up to a constant) $\sin x$ and $\cos x$. Simpler than your expression for $A$?
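A symbolic check of the $n=3$ expansion above (a sketch, assuming sympy):
```python
import sympy as sp

x = sp.symbols('x', real=True)
lhs = sp.sin(x)*sp.sin(2*x)*sp.sin(3*x)
rhs = (sp.Rational(1, 2)*sp.cos(x)*sp.sin(x) + sp.cos(x)**3*sp.sin(x)
       - sp.Rational(3, 2)*sp.cos(x)**5*sp.sin(x) - sp.cos(x)*sp.sin(x)**3
       + 5*sp.cos(x)**3*sp.sin(x)**3 - sp.Rational(3, 2)*sp.cos(x)*sp.sin(x)**5)
print(sp.simplify(sp.expand_trig(lhs) - rhs))   # 0
```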
Evaluating the integral $\int_0^\infty \frac{\ln(x)}{e^x+1}$
For $s\approx1$, $\zeta(s)\approx\frac{1}{s-1}+\gamma$ and $\zeta^\prime(s)\approx\frac{-1}{(s-1)^2}$, so$$\begin{align}\eta^\prime(s)&=2^{1-s}\ln2\cdot\zeta(s)+(1-2^{1-s})\zeta^\prime(s)\\&\approx(1+(1-s)\ln 2)\ln2\left(\frac{1}{s-1}+\gamma\right)-\frac{\ln 2}{s-1}+\frac12\ln^22\\&=\gamma\ln 2-\frac12\ln^22.\end{align}$$Note we only need to expand $2^{1-s}$ to $O((1-s)^2)$, viz.$$2^{1-s}=1+(1-s)\ln 2+\frac12(1-s)^2\ln^22+o((1-s)^2)$$in the $\zeta^\prime$ term.
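Putting the pieces together: since $\int_0^\infty \frac{x^{s-1}}{e^x+1}dx=\Gamma(s)\eta(s)$, differentiating at $s=1$ gives $\int_0^\infty \frac{\ln x}{e^x+1}dx = \Gamma'(1)\eta(1)+\eta'(1) = -\gamma\ln 2 + \gamma\ln 2 - \frac12\ln^2 2 = -\frac12\ln^2 2$. A numeric check (a sketch, assuming mpmath):
```python
import mpmath as mp

val = mp.quad(lambda x: mp.log(x)/(mp.exp(x) + 1), [0, mp.inf])
print(val, -mp.log(2)**2/2)   # both approximately -0.2402265
```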
Is this proper notation: $\;\mathbb Z^{+-}\;$?
Using $\mathbb Z^{+-}$ would have given me little clue as to what you are denoting, were it not for the equality you use to define it. (Okay, I could have probably guessed, but why make your readers have to guess?) I would strictly use $ \mathbb Z - \{0\}$, or better yet, $\mathbb Z \setminus \{0\}$. I'd even prefer it be spelled out in words: "all non-zero integers", rather than trying to guess what your notation denotes. (I have never seen the notation $\mathbb Z^{+-}$, to be honest.) I don't think it makes sense to write $ \{0\}$ as $\mathbb Z - \mathbb Z^{+-}$. (I'm not clear why you'd want to avoid using $\{0\}$.) In any case, using $\;\{0\}$ is much more straightforward, and easier to write and read.
How do I simplify $\tan(\alpha-\beta)$ into $\frac{\tan\alpha-\tan\beta}{1+\tan\alpha\tan\beta}$?
From $$ \frac{\sin \alpha \cos \beta - \cos \alpha \sin \beta}{\cos \alpha \cos \beta + \sin \alpha \sin \beta} $$ divide the numerator and denominator by $\cos \alpha \cos \beta$.
Convolution of sine and unit step function
Neither of the two functions you are trying to convolve is integrable. The natural domain of the convolution is $L^1(\mathbf R)$, and neither of your two functions belongs to this class. The convolution can still be done, but you need to view both $H(t)$ and $g(t) = \sin(t)$ as distributions. I do not want to go into detail about the existence of the convolution in this case (this requires a careful analysis of the domains of definition of $H$ and $g$). If we accept that the two distributions can be convolved, we need three ingredients (note that all the operations are now on distributions and they need to be well defined and explained in that context): $\partial f * g = f*\partial g$ $\delta * f = f$, where $\delta$ is the Dirac distribution $\partial H = \delta$ Now we can perform the following computation: $$ H * \sin = H * \partial (-\cos) = \partial H * (-\cos) = \delta * (-\cos) = -\cos $$
Why is this counter-example valid?
$P\Leftrightarrow Q$ is true if $P$ and $Q$ are both false. We have $\forall x \sim \!\!Ax $. So to make the "iff" in the first premiss true, it is enough to find a $y$ where $By$ is false. And $y=1$ will serve, because $1\notin B$. To prove $\exists y\;By\rightarrow Ax$, it is enough to find a $y$ where either $By$ is false or $Ax$ is true. It doesn't matter if there is also a different $y$ where $By$ is true.
About triviality of a path in Residue Theorem
First we want to prove this lemma: Lemma: Consider two concentric circumferences in the plane centered at the origin, parametrized by $A(t)=ae^{2\pi it}$ and $B(t)=be^{2\pi i (1-t)}$ with $a,b\in \mathbb R^+$, $a>b$ and $t\in [0,1]$. Consider the segment $l(t)=bt + (1-t)a$ with $t\in [0,1]$. Then $\gamma(t) = A(t) * l(t) * B(t) * l(1-t)$ is homotopically trivial in $\mathbb R^2 \setminus \{(0,0)\}$. Proof: The idea is to rotate around the origin. At each moment $s$ you don't want all of the annulus, but just a piece of it. To this end, define the following paths: \begin{gather} A_s(t)=ae^{2\pi i [(1-s)t+s]}\\ B_s(t)=be^{2\pi i [(1-s)(1-t)+s]}\\ l_s(t)=tae^{2\pi i s} + (1-t)be^{2\pi i s} \end{gather} They are respectively the parametrization of a part of the largest circumference (from the point $ae^{2\pi i s}$ to $a$), the parametrization of a part of the smallest circumference (from the point $b$ to $be^{2\pi i s}$), and the segment that joins $be^{2\pi i s}$ and $ae^{2\pi i s}$. Consider now the path $\gamma_s = A_s(t) * l(t) * B_s(t) * l_s(t)$, and define the map: $$ F:I^2\rightarrow \mathbb R^2 \quad F(t,s) = \gamma_s(t) $$ This is a homotopy between $\gamma_0 = \gamma$ in our hypothesis, and the path $\gamma_1(t)$, which is: $$ \gamma_1(t) = ae^{2\pi i} * l(t) * be^{2\pi i} * l(1-t) $$ and this is homotopically trivial since $ae^{2\pi i}$ and $be^{2\pi i}$ are constant and $l(1-t)$ is the inverse of $l(t)$. Observe that you can choose $\gamma$ as a concatenation of $k$ paths where the path $\eta_i$ goes only around the point $z_i$ and is a simple curve (without self-intersections). This is clearly possible because the points $z_i$ form a discrete set. Now take the region that $\eta_i$ bounds, with boundary $\eta_i$: this portion of the plane is homeomorphic to a closed disc minus a point. Using the lemma and pulling back through the homeomorphism, you obtain the statement.
To find $\beta_1, \beta_2$ in order to satisfy $2^2 p_1^{a_1+1-\beta_1}p_2^{a_2+1-\beta_2}p_3^{a_3}p_4^{a_4}=(p_1-1)(p_2-1)$
"Please check it and kindly tell me if I have made any mistake or is there anything extra I need to cover up but forgot." You are correct except for one typo: "Thus $\gamma_{21}=0$" should be $\gamma_{31}=0$. You can write your idea simply as follows: $$2^2 p_1^{a_1+1-\beta_1}p_2^{a_2+1-\beta_2}p_3^{a_3}p_4^{a_4}=(p_1-1)(p_2-1)$$ Since $p_1-1\lt p_1\lt p_3$ and $p_2-1\lt p_2\lt p_3$, the RHS is not divisible by $p_3$ while the LHS is divisible by $p_3$.
Parametrization of pythagorean-like equation
Yes, there are. You can put $$A=ms+nt\\B=nt-ms\\C=ms-nt\\D=mt+ns$$ where $m,n,s,t$ are arbitrary.
Logic of numerical series
Last week, I posted this answer: You ask, "Can anybody say how this series is continued and what's the logic to calculate it?" The answer is yes; the colleague who wrote it on the whiteboard can do both of those things. That answer was deleted by a moderator. Since then, no one here has been able to say how the series is continued, etc. (as OP correctly rejects anything based on Lagrange interpolation). So I think it's time to post a modified version of my deleted answer: The answer is yes; only the colleague who wrote it on the whiteboard can do both of those things.
Prove that $f(x) = ax^3 + bx^2 + cx + d$ is monotonous
From where you left off: $\triangle' = b^2 - 3ac \le 0$ by assumption, so $f'$ does not change sign, and its sign is the same as that of $a$. So $f$ is monotonically increasing if $a > 0$, and decreasing if $a < 0$. The other part is $f''(x) = 6ax + 2b = 0 \implies x = -\dfrac{b}{3a}$, and this is the inflection point, showing the two parts.
FTC Double Derivative of Two Integrals
By the fundamental theorem of calculus, if $g(u)$ is a "nice" function and you define the function $F(x)$ by $$ F(x) = \int_0^x g(u) du, $$ then $$ \frac{d}{dx} F(x) = g(x). $$ In your problem, you are differentiating a function that looks like $$ (F \circ \sin)(x) = F(\sin x) = \int_0^{\sin x} g(u) du. $$ To differentiate functions like this, you would use the chain rule. And in particular, $$ \frac{d}{dx} F(\sin x) = F'(\sin x) \cos x. $$ Fortunately, as we noted above, you know the derivative of $F$. In your case, you happen to have an annoying function $g(u)$ and you will be wanting to compute a second derivative --- you'll need to use the fundamental theorem of calculus (twice total), the chain rule (twice total), and the product rule in the second derivative.
Reduced row-echelon form of a matrix with variables
Because $a,b,c \ne 0$ then $$ \left( {\begin{array}{*{20}{c}} p & 0 & a \\ b & 0 & 0 \\ q & c & r \\ \end{array}} \right) \to \left( {\begin{array}{*{20}{c}} b & 0 & 0 \\ q & c & r \\ p & 0 & a \\ \end{array}} \right)\mathop \to \limits_{ - \frac{p}{b}{\rho _1} + {\rho _3}}^{ - \frac{q}{b}{\rho _1} + {\rho _2}} \left( {\begin{array}{*{20}{c}} b & 0 & 0 \\ 0 & c & r \\ 0 & 0 & a \\ \end{array}} \right)\mathop \to \limits^{ - \frac{r}{a}{\rho _3} + {\rho _2}} \left( {\begin{array}{*{20}{c}} b & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & a \\ \end{array}} \right)\mathop \to \limits^{\frac{1}{b}{\rho _1},\frac{1}{c}{\rho _2},\frac{1}{a}{\rho _3}} \left( {\begin{array}{*{20}{c}} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array}} \right). $$
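A one-line consistency check (a sketch, assuming sympy): the reduction goes through precisely because the determinant is $abc \neq 0$.
```python
import sympy as sp

a, b, c, p, q, r = sp.symbols('a b c p q r')
M = sp.Matrix([[p, 0, a], [b, 0, 0], [q, c, r]])
print(M.det())   # a*b*c: nonzero whenever a, b, c are, so the rref is the identity
```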
Monotone map on a Preset
Yes, they can be different in nature: Consider e.g. the identity map for $f$ from $R=(\Bbb N,\,|\,)$ to $S=(\Bbb N,\le)$.
How to calculate $\frac{1}{2} \int_0^1\ 1.5 e^{-ik\pi \ t} \ \ dt, \, k \in \mathbb{Z} $
The computation is straightforward (for $k \neq 0$; for $k = 0$ the integrand is constant and the integral is $\frac34$): \begin{align} \int_0^1 \frac34 e^{-ik\pi t}\ \mathsf {dt} &= \left.\frac{3i}{4k\pi}e^{-ik\pi t}\right|_0^1\\ &=\frac{3i}{4k\pi}(e^{-ik\pi}-1)\\ &=\frac3{4k\pi}((-1)^k-1)i. \end{align}
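A symbolic check (a sketch, assuming sympy):
```python
import sympy as sp

t = sp.symbols('t', real=True)
k = sp.symbols('k', integer=True, nonzero=True)
res = sp.integrate(sp.Rational(3, 4)*sp.exp(-sp.I*k*sp.pi*t), (t, 0, 1))
print(sp.simplify(res))   # equivalent to 3*((-1)**k - 1)*I/(4*pi*k)
```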
Morphism of morphism
As it turns out, $\Gamma(f)$ is simply isomorphic to $A$. Let $\varphi: A \to \Gamma(f)$ be defined by $\varphi(a) = (a, f(a))$. Then $$\varphi(a*b) = (a*b, f(a*b)) = (a,f(a)) * (b,f(b)) = \varphi(a) * \varphi(b).$$ And $\varphi$ has an obvious inverse, namely the projection on the first coordinate. Finally $A \cong \Gamma(f)$. If you want to consider "morphisms of morphisms", there are various possibilities. One of them is the arrow category $\mathsf{Ar}(\mathsf{C})$ for example, where objects of $\mathsf{Ar}(\mathsf{C})$ are morphisms $f : A \to B$ in $\mathsf{C}$, and morphisms of $\mathsf{Ar}(\mathsf{C})$ are commutative diagrams. For example a morphism from $(f : A \to B)$ to $(g : C \to D)$ looks like: $$\require{AMScd} \begin{CD} A @>{f}>> B \\ @VVV @VVV \\ C @>{g}>> D \end{CD}$$ This isn't really related to higher category theory, though.
Considering quantity which a random variable depends on as own random variable
It feels like what you wrote makes sense. $P(X^{(N)}=x)=f(N)$ is valid, and in this statement $N$ is random, so $f(N)$ is some random density (but I think it would be better still to write $f(x,N)$). However, if you condition on a particular setting of $N$ it becomes a deterministic density function, so that you now arrive at $P(X^{(N)}=x|N=n)=f(n)$ (and I would still prefer to write $f(x,n)$). The conditioning on a particular setting of $N$ is important. "probability is deterministic, not random" This is not a very clear statement. The evaluation of a probability mass function at a particular point is not random, but a probability measure is meant to help encode a sense of randomness via the random variable you define. "So I thought maybe they mean $P(X(N)=x|N)=f(N)$." This does not make sense to me. As I said before, you need to condition on an actual value. You need to be given a value, hence you should condition on $N = n$. An event must have occurred. I think it could be insightful to understand the difference between the parameterisation of a density and the dummy variable used to define the density. A common example is something like $\mathcal{N}(x;\mu,\sigma^2)$, where $x$ is the density variable, and the parameters are non-random quantities. Under your notation this could be expressed as $P(X^{(\mu,\sigma^2)}\leq x) = \int_{-\infty}^x \mathcal{N}(x;\mu,\sigma^2) dx$, where $X\sim\mathcal{N}(x;\mu,\sigma^2)$, since remember the probability for a continuous R.V. requires an integration over the pdf.
Given $AB=0$. Prove that every column of $B$ is a solution for $Ax=0$
Here is a formal proof. Let $e_j$ be the column vector with all entries zero, except for the $j$th one, which is $1$. Then $AB=0$ implies $0=0e_j=(AB)e_j=A(Be_j)=AB_j$, where $B_j$ is the $j$th column of $B$. The proof relies on the associativity of matrix multiplication.
finding a vector of specific length that is perpendicular to two other vectors
$$ n = 3 \cdot \frac{u \times v}{\|u \times v\|}$$
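For concreteness (a minimal sketch in Python with numpy; $u$, $v$ arbitrary):
```python
import numpy as np

u, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
w = np.cross(u, v)
n = 3*w/np.linalg.norm(w)
print(n, n @ u, n @ v, np.linalg.norm(n))   # [0. 0. 3.] 0.0 0.0 3.0
```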
Solving a problem on differentiation
Well, $G*f(x)$ is just the derivative of $f(x)^m$, so $G*f(x) = mf(x)^{m-1}f'(x)$; call this eqn (1). Likewise $G*h(x) = mh(x)^{m-1}h'(x)$, so just substitute in $h(x)=f(x)+g(x)$: $$G*(f(x)+g(x)) $$ $$= m(f(x)+g(x))^{m-1}(f'(x)+g'(x))$$ $$= (mf'(x)+mg'(x))(f(x)+g(x))^{m-1}$$ From eqn (1), we get $mf'(x) = (G*f(x))f(x)^{1-m}$ and $mg'(x) = (G*g(x))g(x)^{1-m}$. Plugging these back into our main equation: $$=[(G*f(x))f(x)^{1-m}+(G*g(x))g(x)^{1-m}](f(x)+g(x))^{m-1}$$ $$=(G*f(x))\left[\frac{f(x)+g(x)}{f(x)}\right]^{m-1}+(G*g(x))\left[\frac{f(x)+g(x)}{g(x)}\right]^{m-1}$$
Proof of families of sets.
Your reasoning is not quite correct: you say $x \in \bigcup_{A \in C} A$ implies $x \in A$ and $x \in C$. No, the correct conclusion is: there exists some $A \in C$ such that $x \in A$. Then (as $x \in M$ as well) for this same $A \in C$: $x \in M \cap A$, so by definition of the union, $x \in \bigcup_{A \in C} (M \cap A)$. The reverse inclusion is similar, try it.
taylor expansion of function with a vector as variable
Then what you're looking for is the Taylor expansion of a scalar field--a function $f$ that maps $\mathbb R^n$ to $\mathbb R$. An easy way to build up intuition about this is to do the expansion only in one direction. Let $e_i$ be an element of a basis $\lbrace e_1,\dots, e_n \rbrace$ of $\mathbb R^n$ and $t$ a scalar parameter. Let $x_0$, the point you want to expand around, be given by $x_0 = x - t e_i$ or $x = x_0 + t e_i$. There is only one direction connecting $x$ and $x_0$, and the magnitude can always be calculated (which fixes $t$). Then you can say $$ f(x) = f(x_0) + Df(x_0)(te_i) + \frac{1}{2}D^2f(x_0)(te_i,te_i) + o(t^2)\,, $$ namely $$f(x) = f(x_0 + t e_i) = f(x_0) + \left. \frac{\partial f}{\partial x_i} \right|_{x_0} t + \frac{1}{2} \left. \frac{\partial^2 f}{\partial x_i^2} \right|_{x_0} t^2 + o(t^2)\,.$$ Now, identify $\partial f/\partial x_i$ as $e_i \cdot \nabla f$. In addition, see that $t e_i = x - x_0$. Some clever recombining of terms gives $$f(x) = f(x_0) + (x-x_0) \cdot \nabla f|_{x_0} + \frac{1}{2} ((x - x_0) \cdot \nabla)^2 f|_{x_0} + o(t^2)\,.$$ This is suitably general to cover any point $x$.
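A quick numeric illustration of the one-direction expansion (a sketch, assuming numpy; the function $f(x,y)=e^x\sin y$ and the base point are arbitrary choices):
```python
import numpy as np

f = lambda v: np.exp(v[0])*np.sin(v[1])
x0 = np.array([0.0, 0.5])
t = 1e-2                                   # step along e_1
df_dx = np.exp(x0[0])*np.sin(x0[1])        # first partial in the e_1 direction
d2f_dx2 = np.exp(x0[0])*np.sin(x0[1])      # second partial in the e_1 direction
approx = f(x0) + df_dx*t + 0.5*d2f_dx2*t**2
exact = f(x0 + np.array([t, 0.0]))
print(exact, approx, abs(exact - approx))  # error of order t^3
```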
A set is open in two metric spaces?
Prove that $$ d \le \bar d \le 2d. $$ Then notice that every ball in one metric contains a ball for the other metric, and vice-versa.
For a standard Brownian Motion the events $\{W_{1}>0\}$ and $\{W_{2}>1\}$ are not independent
From the definition of Brownian motion, you know that $Z_1=W_1$ and $Z_2 = W_2-W_1$ are independent $N(0,1).$ So $$ P(W_2>1\mid W_1>0) = P(Z_2>1-Z_1\mid Z_1>0).$$ It should be clear that this is larger than $P(W_2>1)=P(Z_2>1-Z_1)$ since guaranteeing $Z_1>0$ makes $Z_2>1-Z_1$ easier to satisfy (this is water-tight since they are independent). If we really need to clinch it, we can write $$ P(Z_2>1-Z_1) = \int_{-\infty}^\infty \phi(z_1) (1-\Phi(1-z_1))dz_1 < 2\int_0^\infty \phi(z_1)(1-\Phi(1-z_1))dz_1$$ where $\phi$ and $\Phi$ are the normal PDF and CDF. The inequality holds since $\Phi(1-|z_1|)\le \Phi(1-z_1).$ But on the other hand if $Z_1'$ is a random variable independent of $Z_2$ that has the distribution of $Z_1$ conditional on $Z_1>0,$ we have $$ P(Z_2>1-Z_1\mid Z_1>0) =P(Z_2>1-Z_1') =2\int_0^\infty \phi(z_1)(1-\Phi(1-z_1))dz_1.$$ But this is really no more than a glorified rephrasing of the last paragraph. I'm not sure if there's a good way to compute these exactly... we can easily do $P(W_1>0)=1/2$ and $P(W_2>1) = 1-\Phi(1/\sqrt{2}),$ but the computation of $P(W_2>1,W_1>0) = P(Z_2>1-Z_1,Z_1>0)$ seems an awkward region to integrate over. Rotating by 45 degrees seems the way to go. EDIT Indeed, rotating by 45 degrees to the independent standard normals $Z_1' = \frac{Z_1+Z_2}{\sqrt{2}}$ and $Z_2' = \frac{Z_2-Z_1}{\sqrt{2}}$ (sorry, not the same $Z_1'$ I defined before) gives an integral $$ P(W_2>1,W_1>0) = P(Z_1'>1/\sqrt{2}, Z_2' < Z_1') \\= \int_{1/\sqrt{2}}^\infty \phi(z_1') \int_{-\infty}^{z_1'} \phi(z_2')dz_2'dz_1' \\= \int_{1/\sqrt{2}}^\infty \Phi(x)\phi(x)dx \\ = \int_{\Phi(1/\sqrt{2})}^1 u\, du \\= \frac{1}{2}(1-\Phi(1/\sqrt{2})^2)\\= P(W_1>0)P(W_2>1)(1+\Phi(1/\sqrt{2})).$$
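A Monte Carlo check of the closed form (a sketch, assuming numpy/scipy; sample size arbitrary):
```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal(10**7), rng.standard_normal(10**7)
w1, w2 = z1, z1 + z2                          # (W_1, W_2) of a standard BM
mc = np.mean((w1 > 0) & (w2 > 1))
closed = 0.5*(1 - norm.cdf(1/np.sqrt(2))**2)
indep = 0.5*(1 - norm.cdf(1/np.sqrt(2)))      # P(W_1>0)P(W_2>1)
print(mc, closed, indep)                      # ~0.211, 0.211, 0.120: not independent
```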
How do I solve a differential equation using a change of variable?
Hint: For the DE $$\frac{dy}{dx}+\frac{1}{x^2-1}y=x\tag{1}$$ an integrating factor is \begin{align*} \exp\left[\int\frac{1}{x^2-1}dx\right]&=\exp\left[\frac{1}{2}\ln\frac{x-1}{x+1}\right]\\ &=\left(\frac{x-1}{x+1}\right)^{1/2} \end{align*}
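For what it's worth, sympy reproduces this (a sketch; the printed general solution should be equivalent to what the integrating factor above gives):
```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(y(x).diff(x) + y(x)/(x**2 - 1) - x, y(x)))
# General solution; equivalent to integrating with the factor sqrt((x-1)/(x+1)).
```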
Closed form expression of $\int_{-\infty}^{+\infty}dx \exp[-\alpha(x^2-a^2)^2]$
If we define $$ I(a,b) = \int_{-\infty}^{+\infty}\exp\left[-b(x^2-a^2)^2\right]\,dx $$ for $a,b>0$, by setting $c=ba^4$ and $x=az$ we get: $$ I(a,b) = a \int_{-\infty}^{+\infty}\exp\left[-c(z^2-1)^2\right]\,dz = a\int_{0}^{+\infty}\exp\left[-c(z-1)^2\right]\,\frac{dz}{\sqrt{z}}\stackrel{\text{def}}{=} a\,J(c)$$ and: $$ J(c) = \int_{-1}^{+\infty}\frac{\exp(-c z^2)}{\sqrt{z+1}}\,dz =\color{blue}{\int_{-1}^{0}\frac{\exp(-cz^2)}{\sqrt{z+1}}\,dz}+\color{red}{\int_{0}^{+\infty}\frac{\exp(-cz^2)}{\sqrt{z+1}}\,dz}$$ where the blue integral can be approximated by expanding the integrand function as a Taylor series and the red integral can be studied by switching to Laplace transforms and getting values of Bessel functions. In any case, the behaviour depends on the magnitude of $\color{green}{ba^4}$. In terms of modified Bessel functions of the first kind, $$ I(a,b) = \frac{\pi a}{2 \exp(ba^4/2)}\left[I_{-1/4}(ba^4/2)+I_{1/4}(ba^4/2)\right].$$ It follows that if $ba^4$ is large we have $$ I(a,b) \approx \frac{\pi a }{\sqrt{\pi b a^4}}=\sqrt{\frac{\pi}{b a^2}}$$ while if $ba^4$ is close to zero we have $$ I(a,b) \approx \frac{\pi}{\sqrt{2}\,\Gamma(3/4)\,b^{1/4}},$$ consistent with $I(0,b)=\int_{-\infty}^{+\infty}e^{-bx^4}\,dx=\frac{\Gamma(1/4)}{2\,b^{1/4}}$.
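A numeric check of the Bessel closed form (a sketch, assuming mpmath; $a$, $b$ arbitrary):
```python
import mpmath as mp

a, b = 1.3, 0.8
direct = mp.quad(lambda x: mp.exp(-b*(x**2 - a**2)**2), [-mp.inf, mp.inf])
c = b*a**4/2
bessel = mp.pi*a/2*mp.exp(-c)*(mp.besseli(-0.25, c) + mp.besseli(0.25, c))
print(direct, bessel)   # the two values should agree
```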
Prove that there is an irrational number and a rational number between any two distinct real numbers
Nice attempt, but unfortunately your proof is wrong. $y-10^{-n}y=y(1-10^{-n})$. Since $1-10^{-n}$ is rational and $y$ is irrational, $y(1-10^{-n})$ is irrational. Also, as pointed out by Mees de Vries in comments, $\frac{x+y}{2}$ may be rational. In this link, you can find a proof by joeA that there is a rational between two real numbers. He uses the Archimedean Property of the real numbers, which can be stated as follows: for every number $x\in\mathbb{R}$, there exists a natural number $n$ such that $x<n$. (Even the intuitive fact that there is an integer between two numbers $x,y$ satisfying $y-x>1$ can be proved using this property.) Then you can conclude the result for irrationals; indeed if $x,y\in\mathbb{R}$, take any irrational number of your choice, say $\sqrt{2}$. Suppose that $x<y$. Then $x-\sqrt{2}<y-\sqrt{2}$. Thus there exists a rational number $q$ such that $x-\sqrt{2}<q<y-\sqrt{2}$, that is, $x<q+\sqrt{2}<y$, and $q+\sqrt{2}$ is irrational.
If $E(X)=15$, $P(X\le11)=0.2$, and $P(X\ge19)=0.3$, what can be $V(X)$?
Notice that you have $P(|X-E(X)|\geqslant 4)$ and try to use Chebyshev’s inequality: $$\forall\alpha>0,\ P(|X-E(X)|\geqslant \alpha)\leqslant\dfrac{V(X)}{\alpha^2}$$
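For completeness, the arithmetic this hint leads to: $\{X\le 11\}$ and $\{X\ge 19\}$ are disjoint and both imply $|X-E(X)|\ge 4$, so $P(|X-E(X)|\ge 4)\ge 0.2+0.3=0.5$. Chebyshev's inequality then forces $0.5\le V(X)/4^2$, i.e. $V(X)\ge 8$.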
Uniqueness of probability measures
Considering $\mathcal{A}=\{A\in\mathcal{B}_{\mathbb{R}}\colon\mathbb{P}_1(A)=\mathbb{P}_2(A)\}$ is a good idea. Note two things: $\mathcal{A}$ contains $\mathbb{R}$ and is closed under complements and countable disjoint unions (check this). You have $\mathbb{P}_1((-\infty,x])=\mathbb{P}_2((-\infty,x])$ for all $x\in\mathbb{R}$ and $(-\infty,b]\setminus(-\infty,a]=(a,b]$ for $a<b$. Note also that $\mathbb{P_1}=\mathbb{P_2}$ is equivalent to $\mathcal{A}=\mathcal{B}_{\mathbb{R}}$. Can you use this to derive the result?
Subjects studied in number theory
Number theorists study a range of different questions that are loosely inspired by questions related to integers and rational numbers. Here are some basic topics:
1. Distribution of primes: The archetypal result here is the prime number theorem, stating that the number of primes $\leq x$ is asymptotically $x/\log x$. Another basic result is Dirichlet's theorem on primes in arithmetic progression. More recently, one has the results of Ben Green and Terry Tao on solving linear equations (with $\mathbb Z$-coefficients, say) in primes. Important open problems are Goldbach's conjecture, the twin prime conjecture, and questions about solving non-linear equations in primes (e.g. are there infinitely many primes of the form $n^2 + 1$). The Riemann hypothesis (one of the Clay Institute's Millennium Problems) also fits in here.
2. Diophantine equations: The basic problem here is to solve polynomial equations (e.g. with $\mathbb Z$-coefficients) in integers or rational numbers. One famous problem here is Fermat's Last Theorem (finally solved by Wiles). The theory of elliptic curves over $\mathbb Q$ fits in here. The Birch-Swinnerton-Dyer conjecture (another one of the Clay Institute's Millennium Problems) is a famous open problem about elliptic curves. Mordell's conjecture, proved by Faltings (for which he got the Fields medal), is a famous result. One can also study Diophantine equations mod $p$ (for a prime $p$). The Weil conjectures were a famous problem related to this latter topic, and both Grothendieck and Deligne received Fields medals in part for their work on proving the Weil conjectures.
3. Reciprocity laws: The law of quadratic reciprocity is the beginning result here, but there were many generalizations worked out in the 19th century, culminating in the development of class field theory in the first half of the 20th century. The Langlands program is in part about the development of non-abelian reciprocity laws.
4. Behaviour of arithmetic functions: A typical question here would be to investigate the behaviour of functions such as $d(n)$ (the function which counts the number of divisors of a natural number $n$). These functions often behave quite irregularly, but one can study their asymptotic behaviour, or their behaviour on average.
5. Diophantine approximation and transcendence theory: The goal of this area is to establish results about whether certain numbers are irrational or transcendental, and also to investigate how well various irrational numbers can be approximated by rational numbers. (This latter problem is the problem of Diophantine approximation.) Some results are Liouville's construction of the first known transcendental number, transcendence results about $e$ and $\pi$, and Roth's theorem on Diophantine approximation (for which he got the Fields medal).
6. The theory of modular (or more generally automorphic) forms: This is an area which grew out of the development of the theory of elliptic functions by Jacobi, but which has always had a strong number-theoretic flavour. The modern theory is highly influenced by ideas of Langlands.
7. The theory of lattices and quadratic forms: The problem of studying quadratic forms goes back at least to the four-squares theorem of Lagrange, and binary quadratic forms were one of the central topics of Gauss's Disquisitiones. In its modern form, it ranges from questions such as representing integers by quadratic forms, to studying lattices with good packing properties.
8. Algebraic number theory: This is concerned with studying properties and invariants of algebraic number fields (i.e. finite extensions of $\mathbb Q$) and their rings of integers.
There are more topics than just these; these are the ones that came to mind. Also, these topics are all interrelated in various ways. For example, the prime counting function is an example of one of the arithmetic functions mentioned in (4), and so (1) and (4) are related. As another example, $\zeta$-functions and $L$-functions are basic tools in the study of primes, and also in the study of Diophantine equations, reciprocity laws, and automorphic forms; this gives a common link between (1), (2), (3), and (6). As a third, a basic tool for studying quadratic forms is the associated theta-function; this relates (6) and (7). And reciprocity laws, Diophantine equations, and automorphic forms are all related, not just by their common use of $L$-functions, but by a deep web of conjectures (e.g. the BSD conjecture, and Langlands's conjectures). As yet another example, Diophantine approximation can be an important tool in studying and solving Diophantine equations; thus (2) and (5) are related. Finally, algebraic number theory was essentially invented by Kummer, building on old work of Gauss and Eisenstein, to study reciprocity laws, and also Fermat's Last Theorem. Thus there have always been, and continue to be, very strong relations between topics (2), (3), and (8). A general rule in number theory, as in all of mathematics, is that it is very difficult to separate important results, techniques, and ideas neatly into distinct areas. For example, $\zeta$- and $L$-functions are analytic functions, but they are basic tools not only in traditional areas of analytic number theory such as (1), but also in areas thought of as being more algebraic, such as (2), (3), and (8). Although some of the areas mentioned above are more closely related to one another than others, they are all linked in various ways (as I have tried to indicate). [Note: There are Wikipedia entries on many of the topics mentioned above, as well as quite a number of questions and answers on this site. I might add links at some point, but they are not too hard to find in any event.]
At what points on the curve $4(y^2 - 2y - x)(y^2 - 2y + x + 2) = 1$ does $\frac{dy}{dx}$ not exist?
That would be where the curve is vertical. Do implicit differentiation, like usual. Then find $dy/dx$ as a function of $x$ and $y$. The curve is vertical when BOTH the denominator is zero AND the original equation is true.
Constructing the inverse of a surjective homomorphism $g\otimes \operatorname{id}\colon B\otimes G \to C\otimes G$
Now the solution says it's sufficient to prove that $$(B\otimes G)\big/ I \cong C\otimes G$$ I'd like to be extra pedantic here: this is not actually sufficient! What we need to do precisely is show that the map $(B \otimes G)/I \to C \otimes G$ induced by $g \otimes \operatorname{id}$ is an isomorphism. Of course, this implies that $(B\otimes G)\big/ I \cong C\otimes G$, but it's important that the isomorphism actually comes from this induced map! For example, the sequence of abelian groups $\mathbb{Z} \xrightarrow{0} \mathbb{Z} \xrightarrow{0} \mathbb{Z} \to 0$ is not exact, but $\mathbb{Z}/\operatorname{img}(0) \cong \mathbb{Z}$. In this example, the induced map $\mathbb{Z}/\operatorname{img}(0) \to \mathbb{Z}$ is $0$, which is not an isomorphism. Also, in your problem, of course it is also important to show exactness at $C \otimes G$, which amounts to showing that $g \otimes \operatorname{id}$ is surjective: hopefully you've already seen this part of the proof. Now I'll try to answer your actual questions: You're right that a priori all we know is that $I \subseteq K$, but the whole point of this argument is to prove that $I = K$ (this is what it means for the sequence to be exact at $B \otimes G$)! By constructing the promised inverse, we will conclude that $I = K$. Also, just to be precise, $g \otimes \operatorname{id}$ will not be injective (because $I$ might not be trivial); rather, the map induced by $g \otimes \operatorname{id}$, which goes $(B \otimes G)/I \to C \otimes G$, will be injective. Let me call this induced map $\gamma$ for convenience. You also note: "for any $c \in C$ there might be various $b_c$ such that $b_c \mapsto c$" This is absolutely true, and indeed $g$ will not be invertible (in general). But this doesn't matter in the proof; we only aim to construct an inverse to $\gamma$. So, for each $c \in C$, we fix some $b_c \in B$ such that $b_c \mapsto c$ ahead of time, and we don't worry about the fact that these choices were non-unique until it matters later. Perhaps an easier way to understand the map $\varphi$ is to think of it as a function $C \times G \to (B \otimes G)/I$. Then the function is very simple to define: $\varphi(c,g) = [b_c \otimes g]$ (where square brackets mean "equivalence class of"). The proof explains why the choices of $b_c$'s don't affect the equivalence classes of $b_c \otimes g$ in $(B \otimes G)/I$, therefore $\varphi$ is well-defined independently of our choices of $b_c$'s (while we had to make these choices to construct $\varphi$ in the first place, any choices we made would have resulted in the exact same function). Now you can prove directly that $\varphi$ is bilinear. For example, we have $$\varphi(c_1 + c_2, g) = [b_{c_1 + c_2} \otimes g].$$ Since $b_{c_1} + b_{c_2} \mapsto c_1 + c_2$, and the choices of $b_c$'s don't matter, we can assume that $b_{c_1 + c_2} = b_{c_1} + b_{c_2}$. Therefore, $$\varphi(c_1 + c_2, g) = [b_{c_1 + c_2} \otimes g] = [(b_{c_1} + b_{c_2}) \otimes g] = [(b_{c_1} \otimes g) + (b_{c_2} \otimes g)]\\ = [b_{c_1} \otimes g] + [b_{c_2} \otimes g] = \varphi(c_1, g) + \varphi(c_2,g).$$ Once you prove furthermore that $\varphi(\alpha c, g) = \alpha \varphi(c,g) = \varphi(c,\alpha g)$ and $\varphi(c, g_1 + g_2) = \varphi(c,g_1) + \varphi(c,g_2)$, you'll conclude that $\varphi$ is bilinear. By the universal property of tensor products, $\varphi$ induces a homomorphism $\overline{\varphi} : C \otimes G \to (B \otimes G)/I$ such that $\overline{\varphi}(c \otimes g) = \varphi(c,g)$.
You can then check directly that $\gamma \circ \overline{\varphi} = \operatorname{id}_{C \otimes G}$ and $\overline{\varphi} \circ \gamma = \operatorname{id}_{(B \otimes G)/I}$, so $\gamma$ is an isomorphism.
Position function of a point moving in a circle with velocity $v_0$ and initial position $r_0$ with reflection.
As far as I see, it is the ever-interesting circular billiard problem. I will provide some hints, so as not to ruin the fun. Thanks to circular symmetry, the trajectory on a circular billiard table is fully determined by the angle $\theta$. The ball trajectory is marked in blue, and is symmetric with respect to the dotted red line, normal to the circumference. One could then, for example, simply flip the circle, and superimpose the pre- and post-bounce trajectories. There are then immediate consequences for the length of each sub-path between reflections, which dramatically simplify the problem (check https://www.math.psu.edu/tabachni/Books/billiardsgeometry.pdf, Chapter 2, for many interesting insights). Moreover, if $\theta$ is a rational multiple of $\pi$, the path is periodic. If it is an irrational multiple of $\pi$, then paths are dense (see again the above reference for a rigorous definition and proof). If your problem is to know whether the ball crosses the centre at any point in time, armed with the information above I believe you are already there. To cross the centre, the trajectory must have a definite angle with respect to the circumference.
Diophantine solution to a fraction
Manipulate the equation as follows: $y = \dfrac{x^2-1085}{14718-2x}$ $14718y-2xy = x^2-1085$ $y^2+14718y + 7359^2 = x^2+2xy+y^2+7359^2-1085$ $(y+7359)^2 = (x+y)^2+(7359^2-1085)$ $(y+7359)^2 - (x+y)^2 = 54153796$ $(x+2y+7359)(7359-x) = 2^2 \cdot 1993 \cdot 6793$ Since $x+2y+7359$ and $7359-x$ have the same parity (their sum $2y+14718$ is even), either both are even or both are odd. We quickly see that they can't both be odd. So, we have just $8$ cases to check: $(x+2y+7359,7359-x) = (2,2\cdot 1993 \cdot 6793)$ $(x+2y+7359,7359-x) = (2\cdot 1993,2 \cdot 6793)$ $(x+2y+7359,7359-x) = (2\cdot 6793,2 \cdot 1993)$ $(x+2y+7359,7359-x) = (2\cdot 1993 \cdot 6793,2)$ $(x+2y+7359,7359-x) = (-2,-2\cdot 1993 \cdot 6793)$ $(x+2y+7359,7359-x) = (-2\cdot 1993,-2 \cdot 6793)$ $(x+2y+7359,7359-x) = (-2\cdot 6793,-2 \cdot 1993)$ $(x+2y+7359,7359-x) = (-2\cdot 1993 \cdot 6793,-2)$ Solve each of these $8$ systems to get the solutions.
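The case check is easy to automate (a sketch in Python; it walks the $8$ factor pairs and verifies each solution against the original equation):
```python
N = 54153796                       # 7359**2 - 1085 = 2**2 * 1993 * 6793
sols = []
for d in (2, 2*1993, 2*6793, 2*1993*6793,
          -2, -2*1993, -2*6793, -2*1993*6793):
    e = N // d                     # d = x + 2y + 7359, e = 7359 - x
    x, y = 7359 - e, (d + e - 14718)//2
    assert y*(14718 - 2*x) == x**2 - 1085   # check against the original equation
    sols.append((x, y))
print(sols)
```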
Conditional probability thoughts
Since $A_1\subset B$, $P(A_1\cap B)=P(A_1)=\frac1{10}$. The rest is correct.
Sum of sinx+sin3x+sin5x+......sin(2n-1)x
Hint: let $z=\cos\theta+i\sin\theta$, and use $$\operatorname{Im} \{ 1+z+z^2+...+z^{2n}\}=\operatorname{Im}\left(\frac{1-z^{2n+1}}{1-z}\right)$$ and $$\operatorname{Im} \{1+z^2+z^4+...+z^{2n}\}=\operatorname{Im}\left(\frac{1-z^{2n+2}}{1-z^2}\right)$$ so that $$\sin\theta+\sin3\theta+\sin5\theta+...+\sin(2n-1)\theta=\operatorname{Im}\left(\frac{1-z^{2n+1}}{1-z}\right)-\operatorname{Im}\left(\frac{1-z^{2n+2}}{1-z^2}\right)$$
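Carrying the hint through, the sum collapses to $\dfrac{\sin^2 n\theta}{\sin\theta}$; a quick numeric check (a sketch, assuming numpy; $n$ and $x$ arbitrary):
```python
import numpy as np

n, x = 5, 0.9
lhs = sum(np.sin((2*k - 1)*x) for k in range(1, n + 1))
print(lhs, np.sin(n*x)**2/np.sin(x))   # the two values agree
```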
Show that the following defines an inner product on $C[-1,1]$
As someone pointed out in comments, you have to check the axioms of the inner product. That it is linear in both components is clear from the linearity of the integral. Symmetry also easily follows from commutativity of the real numbers, namely $f(t)g(t)=g(t)f(t)$. We have to prove that $\langle f,f \rangle = 0$ if and only if $f(t)=0$ for all $t \in [-1,1]$. If $f$ is identically zero, then $\langle f,f \rangle = 0$ obviously. Suppose now that $\langle f,f \rangle = 0$ but that there exists $t_0 \in [-1,1]$ such that $f(t_0) \neq 0$. By continuity of $f$, we have that $f(t) \neq 0$ for all $t \in (t_0-\epsilon, t_0+\epsilon)\cap[-1,1]$, for some $\epsilon >0$ small enough. But then $\int_{-1}^{1} |t|f(t)^2 dt \geq \int_{t_0-\epsilon}^{t_0+\epsilon} |t|f(t)^2 dt>0$, a contradiction.
What are the interest of the moments of a random variable?
... but in what moments of order r is interesting? One example: in statistics, moments of higher order may be needed in the method of moments. Why such a definition, and not simply ... The moment generating function of a random variable is not defined merely for calculating the moments of a random variable. It has other important properties, such as $\phi_{X+Y}(t)=\phi_X(t)\phi_Y(t)$ when $X$ and $Y$ are independent. (Maybe) most importantly, it characterizes a distribution! Even in the study of infinite sequences, exponential generating functions may be generally more convenient than ordinary generating functions in some situations. ... what is the interest of the moment generating function? You could first read the Wikipedia article on moment generating functions. Again, this is not simply a tool for calculating moments. You may also want to take a look at a more often used cousin: the characteristic function, which is essentially the Fourier transform of a random variable. A classical proof of the central limit theorem uses the notion of characteristic functions.
What is the permutation of choosing the just 3 balls in a pool of 16 balls?
I assume that order matters. Then the correct answer is $16 \times 15 \times 14$, because for the first ball I have 16 possible choices, then for the second ball, when one is removed, I have 15 choices (i.e. for each first ball chosen I have 15 options remaining). And then, for the third ball, I have 14 options remaining. $3!=6$ would be correct if I had a total of 3 balls. For example, say I have balls a, b, c. Then the possible choices are abc acb bac bca cab cba But if I add a fourth ball, d, you will see that I have to consider other possibilities also, for example, acd.
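A brute-force confirmation (a sketch, assuming Python's itertools):
```python
from itertools import permutations

print(len(list(permutations(range(16), 3))))   # 3360 = 16*15*14
```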
proving that an abelian group is a $\mathbb{Z_{n}}$-module.
Your proofs look right but lack an important point: that the action is well defined. You need to prove that $\bar k_1 = \bar k_2 \implies k_1 a = k_2 a$ for all $a$, so that the action of a residue class does not depend on the representative chosen in the definition. Here is where you use the hypothesis $na=0$. Here is an alternative proof (which is just the same proof in a different guise): The action of $\mathbb Z$ on $A$ is equivalent to a ring homomorphism $\phi: \mathbb Z \to \operatorname{End}(A)$. Now if $\alpha \in \operatorname{End}(A)$, then $(n\alpha)(a)=\alpha(na)=\alpha(0)=0$ and so $\ker\phi \supseteq n\mathbb Z$. This gives $\bar\phi :\mathbb Z/n \mathbb Z \to \operatorname{End}(A)$ and an action of $\mathbb Z/n \mathbb Z $ on $A$.
A function from a matrix to its characteristic polynomial.
The function that takes you from an $n\times n$ matrix to its characteristic polynomial is simply $$ \begin{align} \phi:\mathbb R^{n\times n} &\to \mathbb R[x]\\ A &\mapsto \det(A-x I_n) \end{align} $$ Where $I_n$ is the $n\times n$ identity matrix. As for whether this function is "continuous", well, how do you define continuity on such a map? What topology do we impose on $\mathbb R[x]$?
Eigenvalues of the sum of Laplacian matrix and the all ones matrix
The $0$ eigenspace of $J$ (the usual name for the matrix of all ones) is orthogonal to its non-zero eigenspace. In particular, every eigenvector of $L$ orthogonal to the all-ones vector is also in the kernel of $J$, so $(L+J)v = Lv + Jv = Lv$ and those eigenvalues don't change. (The all-ones vector itself is an eigenvector of both: $L\mathbf{1} = 0$ and $J\mathbf{1} = n\mathbf{1}$, so its eigenvalue moves from $0$ to $n$.)
How to find the Green's function
The correct extension is $$\nabla^2G = \delta(\mathbf{x}-\xi) + \delta(\mathbf{x}-\xi_1) - \delta(\mathbf{x}-\xi_2) - \delta(\mathbf{x}-\xi_3)$$ To see why, interpret $G$ as electric potential. Then, the delta functions correspond to point charges, and we have the conditions that (i) the potential is symmetric across $x=0$, and (ii) the potential is zero along $y=0$. The first condition implies that the sign on the delta function at $\xi_1$ should be the same as the sign on the delta function at $\xi$, while the second condition implies that the sign on delta function at $\xi_2$ should be opposite the sign at $\xi$. Then, since $\xi_3$ is the reflection of $\xi_2$ over $x=0$, the delta function there should have the same sign as $\xi_2$ (alternatively, since it is the reflection of $\xi_1$ over $y=0$, it should have the opposite sign from the delta function at $\xi_1$). Solving for $G$ yields $$G = \ln\frac{1}{|\mathbf{x} - \xi|} + \ln\frac{1}{|\mathbf{x} - \xi_1|} - \ln\frac{1}{|\mathbf{x} - \xi_2|} - \ln\frac{1}{|\mathbf{x} - \xi_3|}$$ and we can verify that $G$ satisfies the required boundary conditions.
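A numeric sanity check of the image construction (a sketch in Python; the source point $\xi=(0.7,\,1.2)$ is arbitrary):
```python
import numpy as np

def G(x, y, xi=(0.7, 1.2)):
    a, b = xi
    # Signs as above: + at xi and its reflection across x=0, - at the two below y=0.
    images = [((a, b), +1), ((-a, b), +1), ((a, -b), -1), ((-a, -b), -1)]
    return sum(sgn*np.log(1/np.hypot(x - u, y - v)) for (u, v), sgn in images)

print(G(0.3, 0.0))                   # 0.0: G vanishes on y = 0
print(G(0.5, 0.8), G(-0.5, 0.8))     # equal: G is symmetric across x = 0
```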
Is the Library of Babel random? Does it contain information?
I think the issue is that while the Shannon entropy could be considered an "objective" property of the library, the "algorithmic complexity" of the program required to generate the content of the library is not an objective property, but depends on the specifics of the Turing machine available. Since the library has a finite number of unique texts within it, it's trivially easy to design a Turing machine that generates the entire unique content of the library in $O(1)$ - simply hardwire the whole content into the "instruction set architecture" of the machine. But you can imagine other types of very limited Turing machines where the "simple program" you describe to generate the library might not be so simple at all. The purportedly objective characterization of the complexity/randomness of a single string of symbols given by algorithmic information theory turns out to be objective only insofar as one specifies a Turing machine. Algorithmic complexity simpliciter is no more an objective property of a string than velocity is an objective property of a physical object. See this paper for more details: Objectivity, Information, and Maxwell's Demon
What exactly is a pumping lemma and how do you do one
You want to prove that $\mathcal L=\{www\mid w\in\{a,b\}^*\}$ is not regular. The word that you chose: $x=a^pba^pba^pb$. $|x|=3p+3>p$, so we can use the pumping lemma, so $x=uvw$ etc... Note that $x$ is $\underbrace{aaaaa...b}_{\text{p+1 letters}}\underbrace{aaaaa...b}_{\text{p+1 letters}}\underbrace{aaaaa...b}_{\text{p+1 letters}}$ So, since $|uv|\leq p$, $uv$ can be here: $\overbrace{aaa...}^{\text{uv}}b,aaaa....b,aaaa....b$ Or here: $aaaa....b,aaaa....b,\overbrace{aaa...}^{\text{uv}}b$ Or here: $aaaa....b,\overbrace{aaa...}^{\text{uv}}baaaa....b$ Or here: $aaaa...\overbrace{.b,aa}^{\text{uv}}aa...b,aaaa....b$ Or here: $aaaa....b,aaaa..\overbrace{.b,a}^{\text{uv}}aaa...b$ Now if you choose $i=2$ the pumped word is not in $\mathcal L$.
Given some arbitrary function $y = f(x)$, if you only know $y$ when given the associated $x$, what is the fastest way of finding $x$ s.t. $f(x) = 0$?
You can't find a unique function under such conditions without a second point or without the graph. There could be hundreds of functions that might yield the same $y$ at the same $x$ as in your question. However, if some conditions are added as hints as in here, you can do at least something algebraic to get hold of a possible function.
Two definitions of a connection
$\Gamma(E \otimes T^*M)$ is the set of sections of the vector bundle $E \otimes T^*M = \bigcup_{p \in M} E_p \otimes T_p^*M$. The usual second definition (that I will be using) is Definition: A connection on a vector bundle $\pi:E \to M$ is a map $$D: \Gamma(E) \to \Gamma(T^*M \otimes E)$$ such that for any $s_1, s_2 \in \Gamma(E)$, $D(s_1 + s_2) = Ds_1 + Ds_2$ and for any section $s \in \Gamma(E)$ and $\alpha \in C^\infty(M)$, $D(\alpha s) = d\alpha \otimes s + \alpha Ds$. Now I will sketch one way to get between the two definitions. ($D \to \nabla$): For fixed $X \in \Gamma(TM)$, define $\nabla_X(s) := i_X(Ds)$ where, for $\omega \otimes s \in \Gamma(T^*M \otimes E)$, $i_X(\omega \otimes s) = \omega(X) s \in \Gamma(E)$. Then it's a straightforward exercise to check that this has the desired properties. ($\nabla \to D$): Let $x^1, \dots, x^n$ be local coordinates for some chart $U \subseteq M$. Define in local coordinates $$D(s) = \sum_{i=1}^n dx^i \otimes \nabla_{\frac{\partial}{\partial x^i}} s.$$ Then you can check that this definition has the right behaviour under a change of coordinates to define a global object. Finally, it is an easy coordinate based calculation to check the desired properties. Finally, by a calculation in local coordinates, you can check that these constructions are inverse to each other so that the definitions are equivalent.
Prove that there exists a countable collection of measurable sets $\{A_n: n=0,1,2,3...\}$ such that $f_n\rightarrow f$ on $A_n$ and $m(\cup A_n)=1$.
For each $k \in \Bbb N_{\ge 0}$, there is a measurable set $A_k$ with $m(A_k^c) < \frac{1}{2^k}$ such that $f_n \to f$ on $A_k$. If $S = \cap_{k = 0}^\infty A_k^c$ then $m(S) \le m(A_k^c) < \frac{1}{2^k}$ for every $k$. Hence $m(S) = 0$. Since $\cup A_k = S^c$, then $m(\cup A_k) = 1 - m(S) = 1$.
If a signed measure $m$ is nonnegative on a generating semiring of the underlying $\sigma$-algebra $\mathcal A$, is $m$ nonnegative on $\mathcal A$?
Let $\Omega = \{a,b\}$, $\mathcal{A} = 2^\Omega$. Let $\mathcal{E} = \{\emptyset, \{a\}\}$ which is a semiring that generates $\mathcal{A}$. Define $\mu$ by $\mu(\{a\})=0$, $\mu(\{b\})=1$. Take $\nu = 0$ and you have a counterexample. If you like probability measures you may also take $\nu(\{a\}) = \nu(\{b\}) = 1/2$. If you add the assumption $\Omega \in \mathcal{E}$ then it becomes true. Let $\mathcal{F}$ be the collection of all finite disjoint unions of sets in $\mathcal{E}$. Verify that $\mathcal{F}$ is an algebra and that $\mu(E) \le \nu(E)$ for all $E \in \mathcal{F}$. Now show that the collection $\mathcal{M} = \{E \in \mathcal{A} : \mu(E) \le \nu(E)\}$ is a monotone class. We just showed $\mathcal{F} \subset \mathcal{M}$. By the monotone class theorem, we conclude $\sigma(\mathcal{F}) \subset \mathcal{M}$. But $\sigma(\mathcal{F}) \supset \sigma(\mathcal{E}) = \mathcal{A}$.
A group of people who have birthdays in distinct months
HINT Choose $9$ of the $12$ months to "fill" and permute the $9$ people to determine who is in which month. Finally divide by $12^{9}$, which represents all possible assignments. drhab has given a simple solution by directly multiplying probabilities; I thought my hint would help you in boning up on combinations and permutations. My hint amounts to $\dfrac {\binom{12}9\,9!}{12^{9}}$. Or you could, using permutations, write $\dfrac{_{12}P_9}{12^{9}}$.
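A simulation check of the hint (a sketch in Python; sample size arbitrary):
```python
import math, random

p_formula = math.comb(12, 9)*math.factorial(9)/12**9   # = math.perm(12, 9)/12**9

trials = 10**6
hits = sum(len({random.randrange(12) for _ in range(9)}) == 9 for _ in range(trials))
print(p_formula, hits/trials)   # both ≈ 0.0155
```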
Uncoupled projections.
Basically, it’s another way of saying that the orthogonal projections onto the axes are independent of each other when the axes are orthogonal: varying the length of the component of $b$ along one axis doesn’t affect its components along any other axis. If basis vectors aren’t orthogonal, there’s “cross talk” between them and the projection of $b$ onto one vector can pick up some of the components of $b$ in other directions. I emphasize orthogonal here because it’s possible to find uncoupled projections for any basis; it’s just that they won’t necessarily be orthogonal projections (as defined by the inner product $(a,b)=b^Ta$). See this answer for a more detailed explanation, including diagrams, in $\mathbb R^2$.
Derivative under integral mixed with...
Use the Leibniz integral rule to get the result. That is, $$ \frac{d}{dt}\int_{a(t)}^{b(t)}F(w,t)\,dw = \int_{a(t)}^{b(t)}\frac{\partial F}{\partial t}(w,t)\,dw + F(b(t),t)\frac{d b}{dt}(t)-F(a(t),t)\frac{d a}{dt}(t) $$ In this case you have $$ b(t)=\log^3(x(t))=4t^2 \cos(2+6t) $$ $$ a(t) = \exp(4y(t))=\exp(4\log(2r+7\exp(5t))) $$ and $$ F(w,t) = \frac{\sin(w)}{w} $$ Notice that the last is independent of the time variable $t$: the variable appearing in the integrand participates in the integration, and is in fact a different variable from the $t$ you are differentiating with respect to.
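As a sanity check of the rule itself, a small sympy sketch with the same integrand but simpler illustrative limits (the limits are my choice, not those of the question):

```python
import sympy as sp

t, w = sp.symbols('t w')
a, b = t**2, sp.sin(t)                 # illustrative limits a(t), b(t)
F = sp.sin(w) / w                      # integrand, independent of t

lhs = sp.diff(sp.integrate(F, (w, a, b)), t)       # differentiate the integral
rhs = sp.integrate(sp.diff(F, t), (w, a, b)) \
      + F.subs(w, b) * sp.diff(b, t) - F.subs(w, a) * sp.diff(a, t)
print(sp.simplify(lhs - rhs))          # 0, as the Leibniz rule predicts
```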
Arrange letters of the word COMPUTE so that vowels OR consonants are in alphabetical order OR at least two vowels are next to each other.
Start by counting the number of ways to choose the positions of the vowels: $\binom{7}{3}=35$ Then find how many of these do not have vowels in consecutive positions. With four consonants, there are five places to insert vowels: _c_c_c_c_ Choose three of them: $\binom{5}{3}=10$ So there are $35-10=25$ patterns that meet the condition of having consecutive vowels. For each of these there are $4!$ permutations of the consonants and $3!$ permutations of the vowels. This gives a total of $25\times 4!\times 3!=3600$ permutations with at least two vowels next to each other. For the $10$ patterns that do not have consecutive vowels, there are $4!$ permutations with the vowels in order and $3!$ with the consonants in order. The permutation with both groups in order is counted twice, so there are $4!+3!-1=29$ permutations with at least one group in alphabetical order. This gives a total of $10\times 29=290$ additional permutations that meet the given conditions. The total is therefore $3890$ arrangements.
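Since $7!=5040$ is small, the total is easy to confirm by brute force; a Python check (my code):

```python
from itertools import permutations

vowels = set("OUE")
count = 0
for p in permutations("COMPUTE"):
    v = [c for c in p if c in vowels]          # vowels in order of appearance
    k = [c for c in p if c not in vowels]      # consonants in order of appearance
    adj = any(p[i] in vowels and p[i + 1] in vowels for i in range(6))
    if adj or v == sorted(v) or k == sorted(k):
        count += 1
print(count)  # 3890
```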
If a compact subset is contained in an open subset in $\mathbb{R}^n$, is a small cylinder of this compact subset also contained in the open set?
To avoid excessive typing, assume, without loss of generality, that $a=0$. For each point $x\in K$, fix a neighborhood $U_x$ of $x$ in $\mathbb R^{n-1}$ and an $\epsilon_x>0$ such that $(-\epsilon_x,\epsilon_x)\times U_x\subseteq O$. Such $U_x$ and $\epsilon_x$ exist because $O$ is open and contains $(0,x)$. By compactness, finitely many of the $U_x$'s cover $K$; say these are $U_{x_1},\dots,U_{x_m}$. Let $\epsilon$ be the smallest of the corresponding $\epsilon_{x_1},\dots,\epsilon_{x_m}$. Because this is the minimum of only finitely many positive numbers, $\epsilon$ is positive. For each $(t,y)\in (-\epsilon,\epsilon)\times K$, there is an $x_i$ such that $y\in U_{x_i}$, and therefore $(t,y)\in (-\epsilon_{x_i},\epsilon_{x_i})\times U_{x_i}\subseteq O$.
Explanation of a mathematical phenomenon?
Without group theory: Look at your 7, 3, 2 example. Note how the numbers go 6, 5, 3, 6, 5, 3; you get the distinct numbers 6, 5, 3, and then they repeat. Well, that always happens: you get a string of distinct numbers, and then that string repeats, exactly, until you get to the end. Since you write down $p-1$ numbers in total, and the string repeats exactly until you have written down the $p-1$ numbers, the number of numbers in the string must be a factor of $p-1$. Well, there are a few assertions in that paragraph that need to be proved. You don't need group theory to prove them; you can find the topic discussed (although not in exactly the terms I've used) in any introductory Number Theory text. I'm sorry, but I'm not up to writing it all out here.
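If the 7, 3, 2 example means starting from $3$ and repeatedly multiplying by $2$ modulo $7$ (an assumption on my part, but it reproduces the 6, 5, 3, 6, 5, 3 pattern), here is a short Python experiment showing the repeating-string behaviour:

```python
def cycle(start, mult, p, steps):
    """Repeatedly multiply by mult modulo the prime p, recording each value."""
    x, out = start, []
    for _ in range(steps):
        x = x * mult % p
        out.append(x)
    return out

print(cycle(3, 2, 7, 6))  # [6, 5, 3, 6, 5, 3]: period 3, a factor of 7 - 1 = 6
print(cycle(1, 3, 7, 6))  # [3, 2, 6, 4, 5, 1]: period 6
```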
Calculate the finite value of $E(\left|X-Y\right|)$ where $X$ and $Y$ are standard uniform random variables
$E|X-Y| =\int_0^{1}\int_x^{1} (y-x)\, dy\, dx+\int_0^{1}\int_0^{x} (x-y)\, dy\, dx$. I will let you do the calculations. Symmetry can be used to get the value of the second term from the first: the two terms are equal.
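If you want to check your calculations afterwards, a sympy sketch (my code) evaluates both terms:

```python
import sympy as sp

x, y = sp.symbols('x y')
first = sp.integrate(y - x, (y, x, 1), (x, 0, 1))   # inner integral over y first
second = sp.integrate(x - y, (y, 0, x), (x, 0, 1))
print(first, second, first + second)                 # 1/6, 1/6, 1/3
```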
Find position on line given start and end points
Hint:
1. Use the section formula: you can assume that the point $(x_2,y_2)$ divides the line joining $(x_1,y_1)$ and $(a_1,b_1)$ in the ratio $T_d:M_d$, externally.
2. Use the distance formula: $\dfrac{a_1-x_1}{\cos\theta}=\dfrac{b_1-y_1}{\sin\theta}=M_d$, where $\theta$ is the angle the line joining $(x_1,y_1)$ and $(x_2,y_2)$ makes with the positive $x$-axis.

Let us take $(x_1,y_1)$ and $(x_2,y_2)$ to be $(0,0)$ and $(3,3)$ respectively, and $M_d$ equal to $2\sqrt2$, for example. Here $\theta$ is $45^{\circ}$; using the formula, $(a_1,b_1)$ comes out to be $(2,2)$.
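In code form, the same recipe reads (the function name is mine):

```python
import math

def point_at_distance(p1, p2, d):
    """Point at distance d from p1 along the ray from p1 toward p2."""
    x1, y1 = p1
    x2, y2 = p2
    theta = math.atan2(y2 - y1, x2 - x1)   # angle of the line, as in the hint
    return (x1 + d * math.cos(theta), y1 + d * math.sin(theta))

print(point_at_distance((0, 0), (3, 3), 2 * math.sqrt(2)))  # approximately (2.0, 2.0)
```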
Subrings of $\mathbb{Q}$
HINT: If $p\nmid m$, then $\frac{n}m\in R$.
Convergence of integral of quantiles
I don't know if this solves your problem, but if the distributions were continuous, i.e. $F_X(x-)=F_X(x)$, then the argument may not be complicated, since $q(\cdot;\mu_n)=X_n$ in law and then, through a subsequence, $q(\cdot;\mu_n)$ converges weakly to $q(\cdot;\mu)$, which is the same as $X$ in law. The function $\mathbb{1}_{[\alpha,1]}(\beta)=\mathbb{1}(q(\beta;\mu)\geq q(\alpha;\mu))$ has a discontinuity only at $\alpha$, which under the law of $q(\cdot;\mu)$ is of measure zero. You also need to handle things by uniform integrability. The fact that $X_n$ converges to $X$ in $L_2$ (and so in $L_1$) implies that $\{X_n,X\}$ is a uniformly integrable family, and that $X_n$ also converges weakly to $X$. Uniform integrability implies that the measures $\nu_n(A):=\int_{X_n^{-1}(A)} X_n\,dP$ converge weakly to $\nu(A):=\int_{X^{-1}(A)}X\,dP$. From this and the continuous mapping theorem you may be able to get what you need. Here is a short proof that $\nu_n$ converges weakly to $\nu$: Let $f\in\mathcal{C}_b(\mathbb{R})$; the continuity of $x\mapsto xf(x)$ implies that $X_nf(X_n)\Rightarrow X f(X)$. Since $|f(X_n) X_n|\leq \|f\|_\infty |X_n|$ and $X_n$ converges to $X$ in $L_1$, $\{f(X_n) X_n\}$ is uniformly integrable. Hence $\int f(X_n) X_n\,dP\rightarrow\int f(X) X\,dP$.
Evaluating Definite Integral $\int_1^2\arcsin\left(\frac{4-3\sqrt{x^2-1}}{5x}\right)dx$
Let $\arcsin\left(\dfrac{4-3\sqrt{x^2-1}}{5x}\right) = t$. We then have $$\dfrac{4-3\sqrt{x^2-1}}{5x} = \sin(t)\implies(4-5x \sin(t))^2 = 9(x^2-1)$$ This gives us $$25x^2 \sin^2(t)-9x^2 - 40x \sin(t) + 25 = 0 \implies (4x \sin(t)-5)^2 = (3x \cos(t))^2$$ Hence, now let us set $$x = \dfrac5{3\cos(t) + 4 \sin(t)} = \sec\left(t+\phi\right)$$ where $\cos(\phi) = \dfrac35$ and $\sin(\phi) = -\dfrac45$. Hence, $$\int tdx = tx - \int xdt$$ I trust you can now plug in the appropriate limits for $t$ and obtain the answer.
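As a numerical sanity check of the substitution and the integration by parts (assuming scipy; the variable names are mine), one can compare the original integral with $tx\big|_{x=1}^{2}-\int x\,dt$:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.arcsin((4 - 3 * np.sqrt(x**2 - 1)) / (5 * x))
lhs, _ = quad(f, 1, 2)                       # the original integral

phi = np.arctan2(-4 / 5, 3 / 5)              # cos(phi) = 3/5, sin(phi) = -4/5
x_of_t = lambda t: 1 / np.cos(t + phi)       # x = sec(t + phi)
t1, t2 = f(1), f(2)                          # t at the endpoints x = 1, 2
rhs = (2 * t2 - 1 * t1) - quad(x_of_t, t1, t2)[0]   # tx minus integral of x dt
print(lhs, rhs)                              # the two agree numerically
```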
How can I prove that F2 is a Field?
I don't know what you're supposed to know or not. If you write $\mathbb F_2= \mathbb Z /2 \mathbb Z$, then the result is clear: $\mathbb F_2$ is the quotient of a ring by a maximal ideal and is therefore a field. By the way, you're almost forced to have this background; how do you define $\mathbb F_2$ without it?
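Alternatively, with only two elements you can verify all the field axioms by brute force. A Python sketch, representing $\mathbb F_2$ as $\{0,1\}$ with arithmetic mod $2$ (my encoding):

```python
F = (0, 1)
add = lambda a, b: (a + b) % 2
mul = lambda a, b: (a * b) % 2

# associativity, commutativity, distributivity
for a in F:
    for b in F:
        assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
        for c in F:
            assert add(add(a, b), c) == add(a, add(b, c))
            assert mul(mul(a, b), c) == mul(a, mul(b, c))
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))

# identities and inverses: every element has a negative,
# and every nonzero element has a multiplicative inverse
assert all(add(a, 0) == a and mul(a, 1) == a for a in F)
assert all(any(add(a, b) == 0 for b in F) for a in F)
assert all(any(mul(a, b) == 1 for b in F) for a in F if a != 0)
print("all field axioms hold")
```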
Equation of tangent and normal for translated conics
What basically happens during translation is a shift of the origin: $(0,0)$ shifts to $(h,k)$. Thus every $y$-coordinate changes to $y-k$ and every $x$-coordinate to $x-h$. Think of it like this: you have a circular rubber band, and on it you have a net. When you 'move' the net while keeping the circle's position under it fixed, you are, in coordinate-geometry terms, shifting the origin. With reference to its previous position, the net has shifted by a distance $h$ horizontally and $k$ vertically. Thus the new origin, with respect to the old, has 'shifted' to $(h,k)$, and accordingly the $x$- and $y$-coordinates 'shift' by $h$ and $k$.
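Concretely (my example): the tangent to $x^2+y^2=r^2$ at a point $(x_1,y_1)$ on it is $xx_1+yy_1=r^2$. After translating the centre to $(h,k)$, replace $x$ by $x-h$ and $y$ by $y-k$ throughout: the tangent to $(x-h)^2+(y-k)^2=r^2$ at $(x_1,y_1)$ is $$(x-h)(x_1-h)+(y-k)(y_1-k)=r^2,$$ and the normal is translated in exactly the same way.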
Fields, closed under two operations
In the positive integers, multiplication can be viewed as repeated addition. In fact, this is almost certainly the way humans first thought of multiplication when it was invented. Carrying this over to all integers (including negative integers) is a bit trickier, especially for one negative number times another, but one can make sense of it. But there are simple fields where this thinking doesn't work. For example, take the ring of all polynomials over $\mathbb{Z}_2$. (In other words, the ring of all polynomials with coefficients 0 and 1.) Now mod out by $x^2+x+1$, which is irreducible over $\mathbb{Z}_2$. The resulting field has four elements: $$ 0, 1, x, 1+x $$ Addition is normal polynomial addition where the coefficients are mod 2, so any element added to itself is 0. (You can think of this as XOR if you're a CS person.) But multiplication is multiplication mod $x^2+x+1$, which cannot be thought of as repeated addition, since it's not even clear how to interpret $x$ times $x+1$ as repeated addition. (You don't need a field to realize this..., but a field is the context you asked about.) In case you're interested, $x \cdot (x+1) = 1$ in this field. But if you take (say) $x$ and repeatedly add it to itself (in an attempt to multiply), you will get $x, 0, x, 0, x, \cdots$ and you will never get 1.
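A tiny Python sketch of this field's multiplication, representing $a_1x+a_0$ by the two-bit integer $2a_1+a_0$ (the encoding is mine), confirms $x\cdot(x+1)=1$:

```python
def gf4_mul(a, b):
    """Multiply two elements of GF(4), with polynomials over Z_2 as 2-bit ints."""
    p = 0
    for i in range(2):               # carry-less (XOR) polynomial product
        if (b >> i) & 1:
            p ^= a << i
    if (p >> 2) & 1:                 # reduce modulo x^2 + x + 1 (binary 111)
        p ^= 0b111
    return p

x, x_plus_1 = 0b10, 0b11
print(gf4_mul(x, x_plus_1))          # 1, i.e. x * (x+1) = 1
```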
basic analysis: showing $e^{-1/x^2}$ is continuous at $0$
As was noted above, your original function $f(x) = e^{-\frac{1}{x^{2}}}$ is not defined at $x=0$. However, whenever the limit of your function exists at the point where it isn't defined, you can extend your function continuously by taking the value of the limit as the value of your function at that point. Here $\lim_{x\to 0} e^{-1/x^{2}} = 0$, since $-1/x^{2} \to -\infty$ as $x \to 0$, so setting $f(0)=0$ gives a continuous extension.
Finding All Integers in such that $\phi(n)=80$
Hint: If the prime factorization of $n$ is $p_1^{a_1}\cdots p_r^{a_r}$, then $$\phi(n)=\phi(p_1^{a_1})\cdots\phi(p_r^{a_r})=p_1^{a_1-1}(p_1-1)\,p_2^{a_2-1}(p_2-1)\cdots p_r^{a_r-1}(p_r-1).$$ The prime factorization of $80$ is $2^4\cdot 5$. What are the possible ways the $2$'s and the $5$ can be distributed among the factors on the right side of this equation?
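A brute-force search with a totient sieve (my code) will list the solutions for comparison; since $\varphi(n)>\sqrt n$ for $n>6$, it suffices to search up to $80^2$:

```python
N = 80**2                      # phi(n) > sqrt(n) for n > 6 bounds the search
phi = list(range(N + 1))
for p in range(2, N + 1):
    if phi[p] == p:            # p is prime: apply the factor (1 - 1/p)
        for k in range(p, N + 1, p):
            phi[k] -= phi[k] // p
print([n for n in range(1, N + 1) if phi[n] == 80])
```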
what is a left nilpotent Leibniz algebra
An algebra $A$ is called left Leibniz if its multiplication satisfies the left Leibniz identity $$ (ab)c = a(bc) - b(ac) $$ for all $a, b, c \in A$. The $2$-sided ideals $A^i$ and $A_i$ are defined by $A^1=A$ and $A^{i+1}=AA^i$ (multiplying by $A$ from the left), and $A_1=A$, $A_{i+1}=\sum_{p=1}^iA_pA_{i+1-p}$. Then $A$ is called left nilpotent if $A_n=0$ for some $n$. One can show that $A_n=A^n$ for all $n\ge 1$.
How to find joint PDF of Z and Y where Z = X + Y
A standard idea in this type of exercise is to start by finding the cdf of $(Z,Y)$: \begin{align*} P(Z \leq z_0, Y \leq y_0) &= P(X + Y \leq z_0, Y \leq y_0) \\ &= \int_{-\infty}^{y_0}\int_{-\infty}^{z_0-y}f_{X,Y}(x,y)\,dx\,dy \end{align*} Conclude that \begin{align*} f_{Z,Y}(z_0,y_0) &= \frac{\partial^2 P(Z \leq z_0, Y \leq y_0)}{\partial z_0 \partial y_0} \\ &= f_{X,Y}(z_0-y_0,y_0) \end{align*}
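In particular, if $X$ and $Y$ are independent with densities $f_X$ and $f_Y$, this reads $f_{Z,Y}(z,y)=f_X(z-y)\,f_Y(y)$, and integrating out $y$ recovers the familiar convolution formula $$f_Z(z)=\int_{-\infty}^{\infty}f_X(z-y)\,f_Y(y)\,dy.$$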
Discrete maths onto and one to one question
You have given one example of a one-to-one function $X\to Z$. There are others, but one is sufficient. There are $540$ onto functions $X\to Y$; I'm not sure where you got $4!$ from. Again, it seems as if you need only find one example. If you're writing the functions in terms of sets, you need some set of $5$ pairs such that each element of $Y$ is the second entry of at least one of those pairs. The example you added satisfies this.
Formula to derive angle and radius from Bezier circular curve control points
One way ... Compute the point on the curve at $t=\tfrac12$. If the four control points are $P_0$, $P_1$, $P_2$, $P_3$, then this point is $$ P_m = \tfrac18 P_0 + \tfrac38 P_1 + \tfrac38 P_2 + \tfrac18 P_3 $$ The three points $P_0$, $P_m$, $P_3$ define a circle whose center and radius you can compute fairly easily. These answers show you how.
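Here is a Python sketch of the whole recipe (the helper name and the circumcenter formula are my additions), tested on the standard cubic approximation of a quarter of the unit circle, whose control offset is $k=\tfrac43\tan(\pi/8)\approx 0.5523$:

```python
import numpy as np

def circle_from_bezier(P0, P1, P2, P3):
    """Center, radius, and swept angle of the circle through the curve
    points at t = 0, 1/2, 1 of a cubic Bezier arc."""
    P0, P1, P2, P3 = map(np.asarray, (P0, P1, P2, P3))
    Pm = (P0 + 3 * P1 + 3 * P2 + P3) / 8          # point at t = 1/2

    # circumcenter of P0, Pm, P3
    (ax, ay), (bx, by), (cx, cy) = P0, Pm, P3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    radius = np.linalg.norm(P0 - center)
    # angle swept from P0 to P3, as seen from the center (up to multiples of 2*pi)
    v0, v3 = P0 - center, P3 - center
    angle = np.arctan2(v3[1], v3[0]) - np.arctan2(v0[1], v0[0])
    return center, radius, angle

k = 4 / 3 * np.tan(np.pi / 8)                     # quarter-circle control offset
print(circle_from_bezier((1, 0), (1, k), (k, 1), (0, 1)))
# center ~ (0, 0), radius ~ 1, angle ~ pi/2
```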
Dominated convergence theorem query
The sequence of functions $$f_n = \frac{1}{x} \chi_{(1/n, 1)}(x)$$ converges to $1/x$ pointwise, but the differences $f - f_n$ aren't bounded for any $n$. And although every $f_n \in L^1$, the limit function is not integrable. Alternatively, the functions $$g_n = n \chi_{(0, 1/n)}$$ are all in $L^1$, and converge to $0$ pointwise everywhere. But $$\lim_n \int g_n = 1 \ne 0 = \int \lim_n g_n$$ There is no $L^1$ dominating function for all $g_n$.
Calculate the surface if we know the volume and ratio of the edges
Assuming the object to be a cuboid, we have ${\rm V}= abc$. Also, using the given ratios, $b= \frac {7a}{3}$ and $c=3a$. Thus, $${\rm V}=a \times \frac{7a}{3} \times 3a=7a^3=5103 \implies a^3=729 \implies a=9$$ Now you have all the edges: $a=9 \; ; b=21 \; ; c=27$. Can you calculate the surface area of the cuboid now?
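For reference, if you want to check your result: $$S=2(ab+bc+ca)=2(9\cdot 21+21\cdot 27+27\cdot 9)=2(189+567+243)=1998.$$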
Prove that $R$ has $2^{k}$ elements
The ring $R$ must have characteristic two (because $-1=1$), and hence be an algebra over the field with two elements. Being a finite vector space over $\mathbb F_2$, say of dimension $k$, it must therefore have exactly $2^k$ elements.
Determining if a language is Recursively Enumerable
The hypothesis of the exercise is that $L$ is RE and $-L$ is not RE. The argument is that if $L'$ were RE, then $-L$ would also be RE, contradicting this hypothesis.
Proof of tracelessness of $\mathfrak{su}(n)$ generators
If $\xi \in \mathfrak{su}(n)$ is an infinitesimal generator (i.e. $\exp(t \xi) \in SU(n)$), then $\det \exp(t \xi) = 1$ for each $t$ and thus $\frac d{dt}|_{t = 0}\det \exp(t \xi) = 0$. Applying Jacobi's formula we can write this derivative as $$0 = \mathrm{tr}\left( \frac d{dt}\Big|_{t=0}\exp(t \xi)\right)=\mathrm{tr} (\xi).$$
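A quick numerical illustration, assuming numpy and scipy are available (the construction of $\xi$ is my own): generate a random traceless anti-Hermitian matrix and check that $\det\exp(t\xi)=1$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
xi = (A - A.conj().T) / 2                  # anti-Hermitian part
xi -= (np.trace(xi) / 3) * np.eye(3)       # remove the trace: xi is now in su(3)
for t in (0.1, 1.0, 2.5):
    print(np.linalg.det(expm(t * xi)))     # each ~ 1, up to rounding
```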
When two digit numbers in base $5$ are multiplied the result is $4103_5$. What are the numbers in base $5$?
Notice that $\;4103_5 = x^2 - 4x + 3\;$ where $\;x = 5^2.\;$ Now factor it as $\;(x-3)(x-1).$
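A brute-force confirmation in Python (my sketch); two-digit base-$5$ numbers run from $10_5=5$ to $44_5=24$:

```python
target = int("4103", 5)            # 528 in base ten
for a in range(5, 25):             # two-digit base-5 numbers: 5..24
    for b in range(a, 25):
        if a * b == target:
            print(a, b)            # 22 and 24, i.e. 42_5 and 44_5
```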
$\sum_{n=1}^{\infty } \left ( 1 - \cos(\frac{1}{n}) \right )$ converges or diverges?
Hint: $\cos x \approx 1-\dfrac{x^2}2$ for small $x$, so $1 - \cos\dfrac1n \sim \dfrac1{2n^2}$; now compare with $\sum \dfrac1{n^2}$.
Let $R$ be a domain. Prove that if a polynomial in $R[x]$ is a unit, then it is a nonzero constant (the converse is true if $R$ is a field)
Well, a proof by contradiction is not necessary. Suppose $f$ is a unit with inverse $g$. Then $fg=1$. Using degrees, we obtain $$0 = {\rm deg}(1) = {\rm deg}(fg) = {\rm deg}(f) + {\rm deg}(g).$$ The last equality holds since $R$ has no zero divisors. As the degree is a nonnegative function, it follows that both $f$ and $g$ have degree zero and so are constants (elements of $R$).
Graphs of a functions: $ e^{x^2} , e^{1/x} $
You should generally look for asymptotes and known values. For example, $\frac{1}{x-2}$ has asymptotes at $x=2$ and $y=0$. Useful values here: $\lim_{x\rightarrow\infty}\arctan(x) = \pi/2 \approx 1.6$ and $\lim_{x\rightarrow-\infty}\arctan(x) = -\pi/2 \approx -1.6$; $\arctan(0) = 0$ and $\arctan(1) = \pi/4 \approx 3/4$; $e^0 = 1$, $e^1 \approx 2.7$, and $\lim_{x\rightarrow\infty}e^{-x} = 0$; ... Extrema (minima/maxima) and points of inflection can also help when applicable.
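For the two functions in the title, a quick matplotlib sketch (my code) makes the key features visible: $e^{x^2}$ is even with minimum value $1$ at $x=0$, while $e^{1/x}$ tends to $0$ as $x\to 0^-$, blows up as $x\to 0^+$, and approaches the horizontal asymptote $y=1$ as $x\to\pm\infty$.

```python
import numpy as np
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

x = np.linspace(-2, 2, 400)
ax1.plot(x, np.exp(x**2))
ax1.set_title(r"$e^{x^2}$")

xl = np.linspace(-2, -0.05, 200)           # avoid the singularity at x = 0
xr = np.linspace(0.05, 2, 200)
ax2.plot(xl, np.exp(1 / xl), "C0")
ax2.plot(xr, np.exp(1 / xr), "C0")
ax2.axhline(1, ls="--", c="gray")          # horizontal asymptote y = 1
ax2.set_ylim(0, 10)
ax2.set_title(r"$e^{1/x}$")
plt.show()
```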
Summing $M(n)$ and more
$$M(x) = \sum_{n \le x} \mu(n)$$ For $|x| < 1$, $(1+x)^{-s} = \sum_{k = 0}^\infty {-s\choose k} x^k$. By Abel summation, $$\frac{1}{\zeta(s)} = \sum_{n=1}^\infty \mu(n) n^{-s} = s \int_1^\infty M(x) x^{-s-1}\,dx = \sum_{n=1}^\infty M(n) \left(n^{-s}-(n+1)^{-s}\right) = -\sum_{k=1}^\infty {-s \choose k} \sum_{n=1}^\infty M(n)\, n^{-s-k}.$$ If $M(x) = O(x^{1/2+\epsilon})$ then $\sum_{n=1}^\infty M(n) (n^{-s}-(n+1)^{-s})=\sum_{n=1}^\infty \mu(n) n^{-s}$ converges (so is analytic) for $\Re(s) > 1/2$ and the RH is true. The converse is a PNT-like Tauberian theorem: if the RH is true then $M(x) = O(x^{1/2+\epsilon})$ and $\sum_{n=1}^\infty M(n) n^{-s-1}$ converges for $\Re(s) > 1/2$. Note the Mellin transform of $f(x)=\sum_{n=1}^\infty \mu(n) e^{-nx}$ is $\frac{\Gamma(s)}{\zeta(s)}$, and $\sum_{n=1}^\infty M(n)e^{-nx}=\frac{f(x)}{1-e^{-x}} \approx \frac{f(x)}{x}$. There are explicit formulas for $f(x)$ and $\frac{f(x)}{1-e^{-x}}$ in terms of the zeros.
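If you want to see the conjectured $O(x^{1/2+\epsilon})$ growth numerically, a short sieve for $\mu$ and $M$ (my code, not part of the argument):

```python
import numpy as np

N = 10**5
mu = np.ones(N + 1, dtype=np.int64)
sieve = np.ones(N + 1, dtype=bool)
for p in range(2, N + 1):
    if sieve[p]:                       # p is prime
        sieve[2 * p::p] = False
        mu[p::p] *= -1                 # one factor of p flips the sign
        mu[p * p::p * p] = 0           # a square factor kills mu
M = np.cumsum(mu[1:])                  # M[x-1] = sum of mu(n) for n <= x
for x in (10**3, 10**4, 10**5):
    print(x, M[x - 1], abs(M[x - 1]) / x**0.5)   # the ratio stays small
```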
If there exists a vertex of $ \Gamma_{2}(R)\setminus J(R) $ which is adjacent to every other vertex then $ R \cong \mathbb{Z}_{2}\times F$
If $x^2\ne x$ then $xR+x^2R=R$, so $x$ is invertible, a contradiction. Let $y\in J(R)$. If $y\ne 0$ then $x+y\ne x$. Since $xR\ne R$ there is a maximal ideal $M$ such that $x\in M$. Then $x+y\in M$, so $x+y$ is not invertible. Also $x+y\notin J(R)$, since otherwise $x\in J(R)$. Thus $xR+(x+y)R=R$; but $xR,(x+y)R\subseteq M$, a contradiction. If $2x\ne 0$, then $2x\in R\setminus(U(R)\cup J(R))$, so $xR+2xR=R$, which is false. So $2x=0$. Similarly we get $rx=0$ or $rx=x$ for any $r\in R$. This shows that $m=\{0,x\}$ is an ideal. If it is not maximal, there is a maximal ideal $M$ strictly containing it, and for $y\in M\setminus m$ we must have $xR+yR=R$, a contradiction. Since $x$ is idempotent we have $R\simeq xR\times (1-x)R$. But $xR=\{0,x\}\simeq\mathbb Z_2$. The unity of $(1-x)R$ is $1-x$. A nonzero element of $(1-x)R$ can be written as $(1-x)s$ with $s\notin xR$, and from $(1-x)sR=(1-x)R$ we get $1-x=(1-x)st$ for some $t\in R$, so $(1-x)s$ is invertible. Hence $(1-x)R$ is a field $F$, and $R\simeq\mathbb Z_2\times F$.
Tiling a rectangle with rectangles, leaving a non-moveable hole
Interesting problem (especially if you can formulate it so that you don't have to exclude so many cases). Here is another scheme (which, as I am sure you can see, can be extended to allow for various numbers of holes): The tiles can be halved and quartered to give more schemes. Here is another: Again, you can find more schemes by bisecting or quartering the tiles. All these solutions employ a ring around the holes, but the rings overlap rather than stack.
Find the distribution of $X=\frac{\max\{U,V\}}{\min\{U,V\}}$
Hint: If you consider $(U,V)$ as uniformly distributed on the unit square, then computing $\mathbb P\left(\dfrac{\max\{U,V\}}{\min\{U,V\}} \le x\right)$ involves finding the area of the square less two congruent right-angled triangles.
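Working out the hint, the two triangles each have area $\frac1{2x}$, so the answer should be $\mathbb P\left(\frac{\max\{U,V\}}{\min\{U,V\}}\le x\right)=1-\frac1x$ for $x\ge 1$. A Monte Carlo sketch (my code) to check:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.random(10**6), rng.random(10**6)
r = np.maximum(u, v) / np.minimum(u, v)
for x in (1.5, 2.0, 5.0):
    print(x, (r <= x).mean(), 1 - 1 / x)   # empirical CDF vs 1 - 1/x
```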