John F. McGowan on Math-blog.com has an interesting article on audio pitch shifting by manipulating the Fourier transform of the voice, plus some additional mathematical acrobatics. The result is a recognizable and relatively smooth pitch-shifted voice, similar to the output of analog processing.

Traditional pitch-shifting algorithms give the shifted voice an artificial quality. The improved algorithms can create more natural-sounding pitch-shifted voices. These voices can be used for humor, entertainment, or emphasis in movies, television, video games, video advertisements for small businesses, personal and home video, and many other applications.

This video is President Obama’s original introduction from his April 2, 2011 speech on the energy crisis.

This video is President Obama speaking with his pitch doubled by shifting the Fourier components but without the mathematical acrobatics to compensate for un-centered frequency components:

This video is President Obama speaking with a chipmunked voice; his pitch has been doubled.

This video is President Obama speaking with a deep voice; his pitch has been reduced to seventy percent of normal.

This video is President Obama speaking with a voice similar to the voice of Mickey Mouse:
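The basic "shift the Fourier components" step can be sketched in a few lines of NumPy. This is my own rough illustration of the naive approach, not the article's actual code: it scales each FFT bin index by the pitch factor, frame by frame, without the compensation for un-centered frequency components that the article adds, so it corresponds to the rougher output above. The function name and the frame/hop/window choices are mine.

```python
import numpy as np

def pitch_shift_naive(signal, factor, frame_size=2048):
    """Pitch-shift by moving each FFT bin i to bin round(i * factor).

    A naive spectral shift: no phase compensation, so frame-to-frame
    phase mismatches make the result rougher than the article's method.
    """
    out = np.zeros(len(signal))
    window = np.hanning(frame_size)
    hop = frame_size // 4  # 75% overlap for smooth overlap-add
    for start in range(0, len(signal) - frame_size, hop):
        frame = signal[start:start + frame_size] * window
        spectrum = np.fft.rfft(frame)
        shifted = np.zeros_like(spectrum)
        for i in range(len(spectrum)):
            j = int(round(i * factor))  # bin i now represents frequency i*factor
            if j < len(shifted):
                shifted[j] += spectrum[i]
        out[start:start + frame_size] += np.fft.irfft(shifted) * window
    return out
```

With `factor=2.0` this doubles the pitch (the "chipmunk" effect); `factor=0.7` gives the deep voice.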

Interesting videos document current ideas about infinity in mathematics and the observable universe (and here is an interesting article about the story of infinity and beyond, from Georg Cantor to Hugh Woodin, from the ‘infinite hierarchy of infinite sets’ to the ‘Ultimate L’ 🙂).

Here’s a simple algorithm for self-organizing neural networks (SONN) applied to 2D clustering problems, using a simple decremented gain.

The number of clusters for input data $x$ is $N$, and $M$ is the number of features in each cluster. To represent the amount of change in the weights as a function of the distance from the winning cluster $j^*$, I use a window function $\Lambda(j, j^*)$, and the goal is to decrement the gain $\alpha$ used for updating the weights at each iteration.

Step 1: Set the weights in all clusters to random values:

$w_{ij}(0) = \text{random}$,

for $i = 1, \dots, M$ and $j = 1, \dots, N$,

and set the initial gain $\alpha(0)$.

Step 2: For each input pattern $x_k$,

where $k = 1, \dots, K$,

(a). Identify the cluster $j^*$ that is closest to the $k$-th input:

$j^* = \arg\min_{j} \lVert x_k - w_j \rVert$.

(b). Update the weights of the clusters in the neighborhood of cluster $j^*$ according to the rule:

$w_j(t+1) = w_j(t) + \alpha(t)\, \Lambda(j, j^*)\,\bigl(x_k - w_j(t)\bigr)$, for $j \in \mathcal{N}(j^*)$,

where $\Lambda(j, j^*)$ is the window function.

Step 3: Decrement the gain term used for adapting the weights:

$\alpha(t+1) = \eta\, \alpha(t)$,

where $\eta$ ($0 < \eta < 1$) is the learning rate.

Step 4: Repeat from Step 2 until convergence.
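The four steps above can be sketched in NumPy as follows. This is a minimal illustration under my own assumptions for the details left open: the window function is a simple rectangular one (1 for the winner and its immediate index neighbors, 0 elsewhere), the gain decrement is multiplicative, and the function name and hyperparameter values are mine.

```python
import numpy as np

def train_sonn(data, n_clusters=4, alpha=0.5, eta=0.99, n_epochs=50, seed=0):
    """Train a small chain of clusters on (K, M) data.

    alpha is the gain; eta is the learning rate that decrements it.
    """
    rng = np.random.default_rng(seed)
    K, M = data.shape
    # Step 1: set the weights in all clusters to random values.
    w = rng.uniform(data.min(), data.max(), size=(n_clusters, M))
    for _ in range(n_epochs):
        # Step 2: present each input pattern (in random order).
        for x in data[rng.permutation(K)]:
            # (a) winning cluster j* = nearest weight vector.
            j_star = np.argmin(np.linalg.norm(w - x, axis=1))
            # (b) rectangular window: winner and immediate neighbors only.
            window = (np.abs(np.arange(n_clusters) - j_star) <= 1).astype(float)
            w += alpha * window[:, None] * (x - w)
            # Step 3: decrement the gain for the next iteration.
            alpha *= eta
    return w  # Step 4: loop above repeats until the gain has decayed
```

On 2D data drawn from two well-separated blobs, points from different blobs end up with different winning clusters, which is the clustering behavior the algorithm is after.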