# Developing normal pdf from symmetry & independence

When I was in the third grade of middle school, I used my town bookstore as a standing library. A series of six math-story books there, by Zhang Yuan-Nan, impressed me a lot. When I later taught the normal distribution, I cited a case from one of them in my slides -- the normal pdf can be derived from simple symmetry and independence conditions.

Today I can even google up a pirated pdf of its new edition to verify the case (2005, p. 89). I have in fact bought the new-edition series (now three books) and lent them to students. The conditions are as intuitive as the following:

1. For white-noise errors in 2-D, independence means the pdf at $(x,y)$ is the product of the 1-D pdfs, that is, $\phi\left(x\right)\phi\left(y\right)$.

2. Symmetry means the pdf at $(x,y)$ is a function of $x^{2}+y^{2}$ alone, with no dependence on direction. That is, $\phi\left(x\right)\phi\left(y\right)=f\left(x^{2}+y^{2}\right)$.

So, since $f\left(x^{2}+0\right)=\phi\left(x\right)\phi\left(0\right)$ and $f\left(0+y^{2}\right)=\phi\left(0\right)\phi\left(y\right)$, we get $f\left(x^{2}\right)f\left(y^{2}\right)=\phi^{2}\left(0\right)\phi\left(x\right)\phi\left(y\right)=\phi^{2}\left(0\right)f\left(x^{2}+y^{2}\right)$.

For middle-school students, the book left a gap here and jumped to the final result $f\left(x^{2}\right)=ke^{bx^{2}}$, which gives $\phi\left(x\right)=\frac{1}{\phi\left(0\right)}f\left(x^{2}\right)=\frac{k}{\phi\left(0\right)}e^{bx^{2}}$ (with $b<0$ so that $\phi$ is integrable).
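As a quick sanity check (my own sketch with arbitrary illustrative constants, not from the book), any $\phi\left(x\right)=ke^{bx^{2}}$ does satisfy both conditions: the product $\phi(x)\phi(y)$ depends only on $x^{2}+y^{2}$, and the derived relation $f\left(x^{2}\right)f\left(y^{2}\right)=\phi^{2}\left(0\right)f\left(x^{2}+y^{2}\right)$ holds.

```python
import math

# Sketch check (mine, not the book's): with phi(x) = k*exp(b*x^2),
# phi(x)*phi(y) is a function of x^2 + y^2 alone, and
# f(a)*f(b) = phi(0)^2 * f(a+b) follows.
k, b = 0.5, -0.5  # arbitrary illustrative constants (b < 0 for a real pdf)

def phi(x):
    return k * math.exp(b * x * x)

def f(s):
    # f(x^2 + y^2) = phi(x) * phi(y) = k^2 * exp(b * (x^2 + y^2))
    return k * k * math.exp(b * s)

for x, y in [(0.3, 1.2), (-2.0, 0.7), (1.5, 1.5)]:
    assert abs(phi(x) * phi(y) - f(x * x + y * y)) < 1e-12
    a2, b2 = x * x, y * y
    assert abs(f(a2) * f(b2) - phi(0) ** 2 * f(a2 + b2)) < 1e-12
print("symmetry & independence checks passed")
```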

I think interested non-math graduate students can close the gap by themselves with the following small hints.

Denote $\alpha=x^{2},\beta=y^{2}$.
We have
$\log f\left(\alpha\right)+\log f\left(\beta\right)=\log\phi^{2}\left(0\right)+\log f\left(\alpha+\beta\right)$,
or
$\;\;\left[\log f\left(\alpha\right)-\log\phi^{2}\left(0\right)\right]+\left[\log f\left(\beta\right)-\log\phi^{2}\left(0\right)\right]=\left[\log f\left(\alpha+\beta\right)-\log\phi^{2}\left(0\right)\right]$.
Denote $g\left(\alpha\right)=\log f\left(\alpha\right)-\log\phi^{2}\left(0\right)$.
That is, $g\left(\alpha\right)+g\left(\beta\right)=g\left(\alpha+\beta\right)$.

Now prove $g\left(\frac{m}{n}\right)=\frac{m}{n}g\left(1\right),\forall m,n\in\mathbb{N}$. With continuity, it follows that $g\left(\alpha\right)=\alpha g\left(1\right)$ for all $\alpha\ge0$ (recall $\alpha=x^{2}\ge0$).
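One standard way to close this gap (my own sketch, not the book's wording) is the usual Cauchy functional-equation argument:

```latex
% Setting \alpha=\beta=0 in g(\alpha)+g(\beta)=g(\alpha+\beta) gives g(0)=0;
% induction on the same relation gives g(m\alpha)=m\,g(\alpha) for m\in\mathbb{N}.
g(1)=g\!\left(n\cdot\tfrac{1}{n}\right)=n\,g\!\left(\tfrac{1}{n}\right)
\;\Longrightarrow\;
g\!\left(\tfrac{1}{n}\right)=\tfrac{1}{n}\,g(1),
\qquad
g\!\left(\tfrac{m}{n}\right)=m\,g\!\left(\tfrac{1}{n}\right)=\tfrac{m}{n}\,g(1).
% Continuity then extends the identity from the rationals to all \alpha \ge 0.
```

Writing $b=g\left(1\right)$ then gives $f\left(\alpha\right)=\phi^{2}\left(0\right)e^{b\alpha}$, i.e. $k=\phi^{2}\left(0\right)$; normalizing $\int\phi=1$ forces $b<0$ and recovers the familiar normal pdf.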