Audio pitch shifting with Fourier transform

John F. McGowan at Math-blog.com has an interesting article on audio pitch shifting by manipulating the Fourier transform of the voice, plus some additional mathematical acrobatics. The method produces a recognizable and relatively smooth pitch-shifted voice, similar to the output of analog processing.
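For readers who want to experiment, here is a minimal Python sketch of the basic spectral-shift idea: take short windowed frames, move each FFT bin to a new bin scaled by the pitch factor, and overlap-add the results. The frame length, hop size, and Hann window are illustrative assumptions, and the additional mathematical acrobatics the article applies to compensate for un-centered frequency components are not reproduced here:

```python
import numpy as np

def shift_spectrum(frame, factor):
    # Move each FFT bin k to bin round(k * factor). This is the plain
    # spectral shift, with no compensation for un-centered frequency
    # components, so some artifacts are expected.
    spectrum = np.fft.rfft(frame)
    shifted = np.zeros_like(spectrum)
    for k in range(len(spectrum)):
        j = int(round(k * factor))
        if j < len(shifted):
            shifted[j] += spectrum[k]
    return np.fft.irfft(shifted, n=len(frame))

def naive_pitch_shift(signal, factor, frame_len=2048, hop=512):
    # Overlap-add processing with a Hann analysis/synthesis window.
    window = np.hanning(frame_len)
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * window
        out[start:start + frame_len] += shift_spectrum(frame, factor) * window
    return out

# factor=2.0 doubles the pitch (chipmunk); factor=0.7 deepens the voice.
```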

Traditional pitch-shifting algorithms give the pitch-shifted voice an artificial quality. Improved algorithms can create more natural-sounding pitch-shifted voices, which can be used for humor, entertainment, or emphasis in movies, television, video games, video advertisements for small businesses, personal and home video, and many other applications.

This video is President Obama’s original introduction from his April 2, 2011 speech on the energy crisis.

This video is President Obama speaking with his pitch doubled by shifting the Fourier components, but without the mathematical acrobatics to compensate for un-centered frequency components (the naive shift sketched above).

This video is President Obama speaking with a chipmunked voice; his pitch has been doubled.

This video is President Obama speaking with a deep voice; his pitch has been reduced to seventy percent of normal.

This video is President Obama speaking with a voice similar to the voice of Mickey Mouse.

© 2011 John F. McGowan. Source: math-blog.com

The story of infinity and beyond

Interesting videos document current ideas about infinity in mathematics and the observable universe (and here is an interesting article about the story of infinity and beyond, from Georg Cantor to Hugh Woodin, from the ‘infinite hierarchy of infinite sets’ to the ‘Ultimate L’ 🙂).

2D SONN algorithm with decremental gain

Here’s a simple algorithm for training a self-organizing neural network (SONN) on 2D clustering problems with a simple decremental gain.

The clusters of the input data x are arranged in an N_n\times N_m grid, and N_F is the number of features in each cluster. A window function \lambda represents the amount of change in the weights as a function of the distance from the winning cluster (n_0,m_0), and the gain g(t) used for updating the weights is decremented at each iteration to g(t+1).

Step 1: Set the weights in all clusters to random values:

w_i^{n,m}=random,

for n=0,1,2,...,N_n; m=0,1,2,...,N_m; and i=0,1,2,...,N_F

Set the initial gain g(0)=1.

Step 2: For each input pattern

x^t, where t = 1, 2, ..., k:

(a) Identify the cluster that is closest to the t-th input:

(n_0, m_0)=\displaystyle \arg\min_{j,l} ||x^t-w^{j,l}||.

(b) Update the weights of the clusters in the neighborhood N of the winning cluster (n_0,m_0) according to the rule:

w_i^{n,m}(t+1) \leftarrow w_i^{n,m}(t)+g(t)\lambda(n,m)[x_i^t-w_i^{n,m}(t)], for (n,m) \in N,

where \lambda(n,m) is the window function, which decays with the grid distance of cluster (n,m) from the winning cluster (n_0,m_0).
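The algorithm doesn’t fix a particular window; as one concrete (assumed) example, a Gaussian that decays with grid distance from (n_0,m_0) is a common choice:

```python
import numpy as np

def window(n, m, n0, m0, sigma=1.0):
    # Gaussian decay with grid distance from the winning cluster
    # (n0, m0); the width sigma is an assumed, tunable parameter.
    d2 = (n - n0) ** 2 + (m - m0) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```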

Step 3: Decrement the gain term used for adapting the weights:

g(t+1)=\mu g(t),

where \mu is the decay rate of the gain (0 < \mu < 1), so the gain shrinks on each pass.

Step 4: Go back to Step 2 and repeat until convergence.
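Putting the four steps together, here is a minimal NumPy sketch of the whole procedure. The Gaussian window, the values of \mu and \sigma, and a fixed number of passes standing in for a convergence test are all illustrative assumptions:

```python
import numpy as np

def sonn_2d(X, N_n, N_m, mu=0.99, sigma=1.0, passes=50):
    # X is a (k, N_F) array of input patterns x^t.
    k, N_F = X.shape
    rng = np.random.default_rng(0)

    # Step 1: random initial weights, one N_F-vector per grid cell,
    # and initial gain g(0) = 1.
    w = rng.random((N_n, N_m, N_F))
    g = 1.0

    # Grid coordinates, used by the window function.
    nn, mm = np.meshgrid(np.arange(N_n), np.arange(N_m), indexing="ij")

    for _ in range(passes):          # Step 4: repeat until convergence
        for x in X:                  # Step 2: for each input pattern
            # (a) winning cluster: closest weight vector to the input
            d = np.linalg.norm(w - x, axis=2)
            n0, m0 = np.unravel_index(np.argmin(d), d.shape)

            # (b) Gaussian window (assumed form); distant clusters get
            # a near-zero weight, standing in for a hard neighborhood N
            lam = np.exp(-((nn - n0) ** 2 + (mm - m0) ** 2) / (2 * sigma ** 2))
            w += g * lam[:, :, None] * (x - w)

        g *= mu                      # Step 3: decrement the gain
    return w

# Toy usage: map 200 random 2-feature patterns onto a 5x5 grid.
X = np.random.default_rng(1).random((200, 2))
weights = sonn_2d(X, N_n=5, N_m=5)
```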

For a similar 1D SONN algorithm, see here.

SOINN Robot

This video shows a robot that uses a technology called SOINN (Self-Organising Incremental Neural Network).

The SOINN robot uses its past experiences to make an educated guess as to what to do. It does this by “self-organising the input data it is supplied with.” Here’s a recorded Java Applet of SOINN from Hasegawa Lab.

Source & Reading:

wired.co.uk: Robot taught to think for itself
ubergizmo.com: Robots that can learn and think for themselves
Hasegawa Lab.: SOINN-e and HasegawaLab’s Channel on YouTube