
Around 2010, I realized that computer speed and memory were no longer serious limitations on animating many particles and their interactions. So I began to program visualizations of on the order of 1000 particles in a container. When I derived the equations for simple elastic collisions between particle pairs, I found that angular functions like sine and cosine were not needed: simple dot (inner) products of the particle positions and velocities were sufficient, and very fast compared to evaluating sines and cosines. Energy is the most important parameter for understanding physics, and my first goal was to demonstrate that the Maxwell-Boltzmann (MB) energy distribution arises directly from the collisions between particle pairs, no matter what the initial energy distribution is. As an initial energy distribution, I usually choose all particles to have the same energy but random directions and positions in space. Starting with all particles at the same energy makes it very clear that the distribution quickly evolves toward a more uniform one. Regardless of the number of degrees of freedom of particle or molecule motion, I show that the MB distribution is achieved.
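As a sketch of the idea that only dot products are needed, here is a minimal equal-mass elastic collision update (illustrative code, not the program behind the animations; the function name `collide` and the sample values are mine):

```python
import numpy as np

def collide(x1, v1, x2, v2):
    """Elastic collision of two equal-mass particles in contact.
    Only dot products are used -- no sine or cosine."""
    n = x1 - x2                                  # line of centers
    k = np.dot(v1 - v2, n) / np.dot(n, n)
    return v1 - k * n, v2 + k * n                # exchange normal velocity components

# two particles meeting along the x axis
x1, v1 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
x2, v2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
w1, w2 = collide(x1, v1, x2, v2)

# momentum and kinetic energy are conserved by construction
assert np.allclose(w1 + w2, v1 + v2)
assert np.isclose(np.dot(w1, w1) + np.dot(w2, w2),
                  np.dot(v1, v1) + np.dot(v2, v2))
```

The velocity component along the line of centers is exchanged while the tangential components are untouched, which is exactly the equal-mass elastic collision rule.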

Although elastic collisions are a very good approximation for the energy evolution of a gas, I also explored the use of a potential between interacting particles. The potential I chose was that of Lennard-Jones (LJ), which is the one involved in Van der Waals (VdW) forces. The VdW force is strongly repulsive at close range (two particles cannot continuously occupy the same space) and more weakly attractive at somewhat longer range. As such, the LJ force model can be used for the attractive and repulsive forces between the atoms of a molecule. The LJ force also provides the stable distances between those atoms at absolute zero (0 K) as well as the vibrational frequencies of the atoms with respect to the molecule's center of mass. The LJ force can also model some of the behavior of solids, but that is not part of gas physics.
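The LJ pair interaction is easy to state in code. A small sketch in reduced units (`epsilon` and `sigma` set to 1; the helper names are mine, not from the book):

```python
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones potential: V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 * s6 - s6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Radial force -dV/dr; positive means repulsive."""
    s6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * s6 * s6 - s6) / r

r_min = 2.0 ** (1.0 / 6.0)                     # stable separation at absolute zero
assert np.isclose(lj_force(r_min), 0.0)        # zero net force there
assert np.isclose(lj_potential(r_min), -1.0)   # well depth is -epsilon
assert lj_force(0.9) > 0 > lj_force(1.5)       # repulsive inside, attractive outside
```

The zero-force separation `r_min = 2^(1/6) * sigma` is the stable inter-atomic distance mentioned above, and small oscillations about it give the vibrational frequencies.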

I soon discovered, by fitting the energy distributions, that their forms vary with the number of dimensions permitted for the particles:

1D: `(dN_(1D)(E))/(dE)=C_1sqrt("<"E">")/sqrt(E)exp(-E/(2"<"E">"))`

2D: `(dN_(2D)(E))/(dE)=C_2exp(-E/("<"E">"))`

3D: `(dN_(3D)(E))/(dE)=C_3sqrt(E/("<"E">"))exp(-(3E)/(2"<"E">"))`

where `E` is the particle energy and `"<"E">"` denotes the average energy of all the particles. Note that `"<"E">"` takes the place of the MB parameter `k_BT`, where `k_B` is the Boltzmann constant and `T` is the absolute temperature.

I couldn't come up with a good explanation of the energy factors in front of these three distributions until I started to investigate the modes of quantum physics. Discussions of this tend to concentrate on the effects of temperature, but there are actually two separate issues involved. First is the number of available states, which involves the volume in configuration space, `deltapdeltax`, for each state. Second is the fraction of those states that are occupied, which involves the temperature. The explanation for the volume is that no particles are free; all have boundaries that they cannot escape. Because of this containment, their momentum modes are quantized, and the volume of the momentum cell depends on the dimension, `deltax`, of the container. For each dimension, the uncertainty principle requires that the product of the momentum variance, `deltap`, and the position variance, `deltax`, be greater than a constant:

`deltapdeltax>=ℏ/2`

where `ℏ` is Planck's constant `h` divided by `2pi`. As is clearly stated in the reference above, this principle applies to all wave-like phenomena and has nothing to do with the presence of an observer. For the 1D, 2D, and 3D versions of the state energy distribution, see Fermi Surface.

For 1D, the modes form a simple linear array and the number of modes is the integer length of the array. The momentum increases as the integer denoting the mode number. Since the energy is proportional to the momentum squared, the energy increases as the square of that integer:

`E_(1D)=aN_(1D)(E)^2`

where `a` is a constant. Solving the equation for `N_(1D)` we have:

`N_(1D)(E)=sqrt(E_(1D)/a)`

with the result for absolute zero:

`(dN_(1D)(E))/(dE)=1/(2sqrt(aE_(1D)))`
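The 1D counting argument can be checked directly: with mode energies `E_n=an^2`, the number of modes below a cutoff `E` should match `N_(1D)(E)=sqrt(E/a)`. A small numerical sketch (the constant `a` is arbitrary here):

```python
import numpy as np

# mode energies E_n = a*n^2 for n = 1, 2, 3, ...
a = 1.0
n = np.arange(1, 100001)
E_n = a * n ** 2

# the count of modes with energy below E matches N(E) = sqrt(E/a)
for E in (1.0e4, 1.0e6, 1.0e8):
    counted = int(np.sum(E_n <= E))
    assert abs(counted - np.sqrt(E / a)) <= 1.0
```

The derivative of this count with respect to `E` then falls off as `1/sqrt(E)`, which is the 1D density-of-states factor above.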

For 2D, the momentum increases as the integer denoting a radial mode number, `n_r`. The total number of modes out to a radial mode number of `n_r` is:

`N(n_r)=pi n_r^2\ \ \ \ (1)`

The energy also increases as the square of that mode number:

`E=1/(2m)n_r^2deltap_0^2=n_r^2E_0\ \ \ \ (2)`

where `deltap_0` is the momentum width given by the uncertainty principle and `E_0` is the kinetic energy associated with that momentum. Using equation 2 in equation 1, we can solve for the total number of states out to `n_r`:

`N_(2D)(E)=bpiE\ \ \ \ (3)`

where `b` is a constant. Taking the derivative of equation 3 we find that:

`(dN_(2D)(E))/(dE)=bpi\ \ \ \ (4)`

which is not a function of `E`.

For 3D, the momentum increases as the integer denoting a radial mode number, `n_r`. The total number of modes out to a radial mode number of `n_r` is:

`N(n_r)=(4pi)/3 n_r^3\ \ \ \ (5)`

The energy increases as the square of that mode number:

`E(n_r)=1/(2m)n_r^2deltap_0^2=n_r^2E_0\ \ \ \ (6)`

where `deltap_0` is the momentum width given by the uncertainty principle and `E_0` is the kinetic energy associated with that momentum. Using equation 6 in equation 5, we can solve for the total number of states out to `n_r`:

`N_(3D)(E)=c(4pi)/3E^(3/2)\ \ \ \ (7)`

where `c` is a constant. Taking the derivative of equation 7 we find that:

`(dN_(3D)(E))/(dE)=2cpiE^(1/2)\ \ \ \ (8)`

which is proportional to the square root of `E`.

The particles which the Gas Physics book addresses are Bose particles, so to convert to a finite temperature we need to multiply the distribution by the Bose-Einstein occupation factor. This brings up the concept of the number of states, `g(E)`, for any given energy value and the number, `n(E)`, of those states that are occupied. The ratio of these will be called the fractional occupation number. From the Bose-Einstein temperature function, the fractional occupation number `(n(E))/(g(E))` for each state is

`(n(E))/(g(E)) =1/(exp((E)/(k_BT_K))-1)`

where, to make the expression look more familiar, we've used `k_BT_K` in place of `"<"E">"`. Unless the gas is greatly compressed or at a very low temperature, the state occupation fraction is much less than 1, so the denominator is much greater than 1 and we can drop the -1 in the denominator; this becomes the Boltzmann distribution. Then:

`(dN_(1D)(E))/(dE)=1/(2sqrt(aE_(1D)))exp(-(E)/("<"E">"))`

which is the same as the 1D Boltzmann distribution.
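The size of the dropped -1 is easy to quantify. A quick numerical check (writing `x` for the ratio `E//(k_BT_K)`, a variable name of my own):

```python
import numpy as np

# Bose-Einstein occupation vs. its Boltzmann approximation
x = np.array([3.0, 5.0, 10.0])      # x = E/(k_B*T_K)
bose = 1.0 / (np.exp(x) - 1.0)
boltz = np.exp(-x)

# already within about 5% at E = 3*k_B*T_K, and far closer beyond
assert np.all(np.abs(bose / boltz - 1.0) < 0.06)
assert np.abs(bose[2] / boltz[2] - 1.0) < 1e-4
```

So for a dilute gas, where the typical occupied states have energies of at least a few `k_BT_K`, the Boltzmann form is an excellent approximation.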