
A simple synth in Android: step by step guide using the Java SDK

October 18, 2012

In this post, I will provide a step-by-step guide to building a simple audio app with Eclipse and the Android SDK. Although I would encourage readers to check out the NDK for audio development, this is meant as a Java-only introduction to the topic.

This guide expects you to have installed the Android SDK, Eclipse Classic and its ADT plugin. You can run the examples on the emulator or on an Android device. If the latter is chosen, make sure that the device is set for development (debug enabled in the Android settings), and of course connected to the development computer.

Step 1

Create a new blank App: File -> New -> Android App

Follow the instructions, add the App name, and to simplify things, uncheck “create custom launcher icon”. Then follow the next steps, create a BlankActivity, using all the default settings and finish.

Step 2

Congratulations: you have created your (first?) App. At this point you can press the ‘play’ button to run it (either on the emulator or on a connected device). But does it do anything?

Let’s build the synth part of the app. We want to locate the MainActivity.java source file. Use the Package Explorer window (the application is called SoundTest in this example):

Now we can go and edit the code. In the MainActivity class, we want to first add some data members:

public class MainActivity extends Activity {
    Thread t;    
    int sr = 44100;
    boolean isRunning = true;

The first is a Thread object that will hold our audio processing thread, the second the sampling rate, and the third a means of switching the audio on and off.

Inside the onCreate() method, provided by the template code, we will instantiate the thread object. This is done by defining its run() function, which will hold the code to process audio.

// start a new thread to synthesise audio
t = new Thread() {
    public void run() {
        // set process priority
        setPriority(Thread.MAX_PRIORITY);

We set the priority to max so that we can achieve good performance. Now we need to create an output audio object, which will be an AudioTrack instance. First we query the minimum buffer size, which determines the size of the audio block to be output. Then we instantiate the AudioTrack object:

int buffsize = AudioTrack.getMinBufferSize(sr, AudioFormat.CHANNEL_OUT_MONO, 
                                               AudioFormat.ENCODING_PCM_16BIT);
// create an audiotrack object
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sr, 
                                       AudioFormat.CHANNEL_OUT_MONO, 
                                       AudioFormat.ENCODING_PCM_16BIT, 
                                       buffsize, 
                                       AudioTrack.MODE_STREAM);

Eclipse should take care of adding any needed import lines for AudioTrack and AudioFormat, but if there is trouble, just follow the suggestions from the editor on how to fix the errors by allowing it to add the given ‘import …’ lines automatically.

Now we’re ready to take care of the synthesis side. We create a signal buffer and define some parameters:

short samples[] = new short[buffsize];
int amp = 10000;                   // amplitude
double twopi = 8.*Math.atan(1.);   // 2*pi
double fr = 440.0;                 // frequency in Hz
double ph = 0.0;                   // phase

Start audioTrack running:

// start audio
audioTrack.play();

The next thing is to define the synthesis loop:

// synthesis loop
while(isRunning){

   for(int i=0; i < buffsize; i++){ 
     samples[i] = (short) (amp*Math.sin(ph));
     ph += twopi*fr/sr;
   }
   audioTrack.write(samples, 0, buffsize);
}

and then take care of closing the audio device when the synthesis is stopped:

audioTrack.stop();
audioTrack.release();
}
};

This closes the definition of the run() function and the Thread object that holds it. The final thing we need to do in the onCreate() method is to start the thread:

t.start();

Finally, we need to add a method to deal with switching off the audio when the App is closed. This is done by overriding the onDestroy() method:

@Override
public void onDestroy(){
   super.onDestroy();
   isRunning = false;
   try {
     t.join();
   } catch (InterruptedException e) {
     e.printStackTrace();
   }
   t = null;
}

Step 3

Now you can run your (first) audio App. Switch it on and you get a constant tone, which works well as a tuning fork. But we want controls, and this is what we do next. Open the activity_main.xml resource file.

Click on the “hello world” label and delete it. Then drag a slider (SeekBar) onto the App layout.

Right-click on the slider in the App layout and select ‘Edit ID’. Rename the ID to “frequency”. Save the file.
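If you prefer editing the XML directly, the resulting entry in activity_main.xml would look roughly like this (the layout attributes are illustrative; the graphical editor may generate slightly different ones):

```xml
<SeekBar
    android:id="@+id/frequency"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:max="100" />
```

The important part is the android:id, which is how the Java code will find the widget.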

Now go back to the MainActivity.java source code. Add these two other data members to the class (after the boolean isRunning… line):

    SeekBar fSlider;
    double sliderval;

The SeekBar object will handle the slider, and the sliderval member will pick up the slider value so we can use it.

Now we need to connect the fSlider object to its ‘view’, which is the actual slider widget:

// point the slider to the GUI widget
fSlider = (SeekBar) findViewById(R.id.frequency);

and create a ‘listener’ for it, so that changes in the slider position are detected:

// create a listener for the slider bar
OnSeekBarChangeListener listener = new OnSeekBarChangeListener() {
    public void onStopTrackingTouch(SeekBar seekBar) { }
    public void onStartTrackingTouch(SeekBar seekBar) { }
    public void onProgressChanged(SeekBar seekBar, int progress,
                                  boolean fromUser) {
        if(fromUser) sliderval = progress / (double)seekBar.getMax();
    }
};

The listener has to implement three methods, but we only need the data from onProgressChanged(). The sliderval will hold the slider value in the range 0 – 1.

We now tell the fSlider object to use this listener:

// set the listener on the slider
fSlider.setOnSeekBarChangeListener(listener);

The final changes are in the synthesis loop. We need to update the sine wave frequency by reading the value of sliderval; the mapping below sweeps the frequency from 440 Hz (slider at minimum) to 880 Hz (slider at maximum):

// synthesis loop
while(isRunning){
   fr = 440 + 440*sliderval;
   for(int i=0; i < buffsize; i++){
     samples[i] = (short) (amp*Math.sin(ph));
     ph += twopi*fr/sr;
   }
   audioTrack.write(samples, 0, buffsize);
}

And there you go. You can now control the pitch of the sound with a slider. This concludes this short step-by-step tutorial.

Full code

The full MainActivity.java code is shown below:

package com.example.soundtest;
import com.example.soundtest.R;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;
import android.widget.SeekBar;
import android.widget.SeekBar.OnSeekBarChangeListener;

public class MainActivity extends Activity {
    Thread t;
    int sr = 44100;
    boolean isRunning = true;
    SeekBar fSlider;
    double sliderval;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        // point the slider to the GUI widget
        fSlider = (SeekBar) findViewById(R.id.frequency);

        // create a listener for the slider bar
        OnSeekBarChangeListener listener = new OnSeekBarChangeListener() {
            public void onStopTrackingTouch(SeekBar seekBar) { }
            public void onStartTrackingTouch(SeekBar seekBar) { }
            public void onProgressChanged(SeekBar seekBar, int progress,
                                          boolean fromUser) {
                if(fromUser) sliderval = progress / (double)seekBar.getMax();
            }
        };

        // set the listener on the slider
        fSlider.setOnSeekBarChangeListener(listener);

        // start a new thread to synthesise audio
        t = new Thread() {
            public void run() {
                // set process priority
                setPriority(Thread.MAX_PRIORITY);
                // set the buffer size
                int buffsize = AudioTrack.getMinBufferSize(sr,
                        AudioFormat.CHANNEL_OUT_MONO,
                        AudioFormat.ENCODING_PCM_16BIT);
                // create an AudioTrack object
                AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                        sr, AudioFormat.CHANNEL_OUT_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, buffsize,
                        AudioTrack.MODE_STREAM);

                short samples[] = new short[buffsize];
                int amp = 10000;                   // amplitude
                double twopi = 8.*Math.atan(1.);   // 2*pi
                double fr = 440.0;                 // frequency in Hz
                double ph = 0.0;                   // phase

                // start audio
                audioTrack.play();

                // synthesis loop
                while(isRunning){
                    fr = 440 + 440*sliderval;
                    for(int i=0; i < buffsize; i++){
                        samples[i] = (short) (amp*Math.sin(ph));
                        ph += twopi*fr/sr;
                    }
                    audioTrack.write(samples, 0, buffsize);
                }
                audioTrack.stop();
                audioTrack.release();
            }
        };
        t.start();
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.activity_main, menu);
        return true;
    }

    @Override
    public void onDestroy(){
        super.onDestroy();
        isRunning = false;
        try {
            t.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        t = null;
    }
}
9 Comments
  1. Erich permalink

    Dear Victor,
    Thank you so much for this beginner’s tutorial. Do you have any more tutorials of this nature? I would like to add a second voice to our simple synth to make it polyphonic, but I’m not sure how to go about doing that. Also, can you point me to a reference tutorial or video that explains the math portion of your “isRunning” synthesis loop? I’m a musician who knows theory and how to play, but not a lot about how to create waves mathematically. Let’s say I wanted to have a triangle wave or sawtooth wave at 440Hz – how would I go about implementing the math, or getting the overtones correct?
    I would greatly appreciate any help or directions to webpages you might have.
    Thanks again and keep up the great blog!
    -Erich

  2. The mathematics is pretty simple: just a call to sine with a steadily increasing argument.
    The argument values depend on the frequency, which effectively determines how much you increment them each time. The larger the increment, the higher the frequency, meaning you are producing more complete cycles per second.

    To add another voice you will need another oscillator, basically another line with a sine function call, with independent parameters to the original one. Then, you just mix the two oscillator outputs together, before writing to output.

    To do another type of wave, you could use a wavetable instead of a call to the sine function. You can then fill the table, a float array, with a single cycle of your waveform choice.
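    To make the reply above concrete, here is a minimal sketch (the class and method names are hypothetical, not part of the tutorial code) of mixing two sine oscillators into one buffer, and of filling a wavetable with one cycle of a sawtooth:

    ```java
    public class TwoOscDemo {
        static final int SR = 44100;                 // sampling rate
        static final double TWOPI = 2.0 * Math.PI;

        // mix two sine oscillators into one short buffer;
        // each is scaled by 0.5 so the sum cannot exceed amp (no clipping)
        static short[] mixTwoSines(int n, double fr1, double fr2, int amp) {
            short[] samples = new short[n];
            double ph1 = 0.0, ph2 = 0.0;             // independent phases
            for (int i = 0; i < n; i++) {
                samples[i] = (short) (0.5*amp*Math.sin(ph1)
                                    + 0.5*amp*Math.sin(ph2));
                ph1 += TWOPI * fr1 / SR;
                ph2 += TWOPI * fr2 / SR;
            }
            return samples;
        }

        // fill a wavetable with a single cycle of a sawtooth, from -1 to 1
        static float[] sawTable(int size) {
            float[] table = new float[size];
            for (int i = 0; i < size; i++)
                table[i] = 2.f * i / size - 1.f;
            return table;
        }
    }
    ```

    In the tutorial’s synthesis loop, the call to Math.sin() would then be replaced by an indexed read from the table (incrementing the index in proportion to the desired frequency, modulo the table size).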

  3. Hi there, I’ve been watching the video on the audio latency improvements in API 17 from Google I/O. Just wondering if your code takes advantage of these improvements, and have you measured the output latency?

    • Any improvements can only exist in the OpenSL code (afaik), but I will look again. The OpenSL code in this blog should be able to take advantage of this, but I will have to see whether there is anything else that needs to be implemented.

      • Apparently, choosing the correct sample rate etc. for each device is important. I will try to do some tests with your code and measure the latency. A good way to test could be to turn on the indicator LED at the same time you play a sound; this will cut out the touch latency. We can then get a crude measurement by filming the device, capturing the audio, and measuring the delay between the time the LED turns on and when the audio plays.

  4. CicK permalink

    Hey Victor, I have a question: what exactly is the ph parameter in the synthesis loop?

    And how exactly does this math work: samples[i] = (short) (amp*Math.sin(ph));
    ph += twopi*fr/sr;

    • ph is the phase index, which tells the sin() function what value to produce. The code synthesises a series of samples
      of a sine wave at fr Hz. sr is the number of samples per second that are to be produced (so that fr can be given in
      cycles per second), and ph is incremented proportionally to the desired frequency. If the frequency were 1 Hz, you’d
      see that ph increments by twopi every sr samples.
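      That last point can be checked numerically: at 1 Hz, after sr increments the accumulated phase is exactly one full cycle (a small standalone check, not part of the App code):

      ```java
      public class PhaseDemo {
          public static void main(String[] args) {
              int sr = 44100;                  // samples per second
              double twopi = 2.0 * Math.PI;
              double fr = 1.0;                 // 1 Hz
              double ph = 0.0;
              // accumulate one second's worth of phase increments
              for (int i = 0; i < sr; i++)
                  ph += twopi * fr / sr;
              // ph is now one full cycle, 2*pi, up to rounding
              System.out.println(ph);          // ~6.2832
          }
      }
      ```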

