
Android audio streaming with OpenSL ES and the NDK.

March 3, 2012

Audio streaming on Android is a topic that has not been covered in much detail in the Android documentation or in programming examples. To help fill that gap, I would like to discuss the use of the OpenSL ES API through the Android Native Development Kit (NDK). For those of you who are new to Android programming, it is worth explaining briefly how the various components of the development system work together.

First we have the top-level application programming environment, the Android SDK, which is Java-based. It supports audio streaming via the AudioTrack API, which is part of the SDK. There are various examples of AudioTrack applications around, including the pd-android and SuperCollider for Android projects.

In addition to the SDK, Android also provides a slightly lower-level programming environment, the NDK, which allows developers to write C or C++ code that can be used in the application via the Java Native Interface (JNI). Since Android 2.3, the NDK has included the OpenSL ES API, which at the time of writing has not been widely used. One project currently employing it is Csound for Android. This note discusses the use of the OpenSL API and the NDK environment for the development of audio streaming apps.

Setting up the development environment

For this, you will need to go to the Google Android development site and download all the tools. These include the SDK, the NDK and the Eclipse plugin. You will also need to get the Eclipse IDE; the ‘classic’ version is probably the most suitable for this work. Instructions for installing these packages are very clear, and there is plenty of information on the internet to help you if things go wrong.

Another useful tool for Android development is SWIG, which is used to generate the Java code that wraps the C functions we will write. It is not strictly required, because you can use the JNI directly. However, it is very handy, as the JNI is not the easiest piece of development software around (some would call it ‘a nightmare’). SWIG wraps C code very well and simplifies the process immensely. We will use it in the example discussed here.
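
The SWIG wrapping is driven by an interface file, opensl_example_interface.i, which the build script passes to swig. As a rough, hypothetical sketch (the actual file in the project may differ, and the header name is an assumption), such an interface file only needs to name the module and declare the functions to be exposed to Java:

%module opensl_example

%{
/* header assumed to declare the exported C functions */
#include "opensl_example.h"
%}

/* the two entry points the Java activity will call */
void start_process();
void stop_process();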

An example project

The example project we will be discussing can be obtained via git with the following command:

$ git clone https://bitbucket.org/victorlazzarini/android-audiotest

Alternatively, these sources can be obtained from the same location as an archive, via the web page interface.

The project consists of an NDK project for the OpenSL streaming IO module and an Eclipse project for the example application. The NDK part is built first by running the top-level script

$ sh build.sh

This simple script first sets the location of the downloaded NDK (you will need to change this to match your system’s location)

export ANDROID_NDK_ROOT=$HOME/work/android-ndk-r7

and then calls SWIG to build the Java interface code that will link our C OpenSL example module to the app. It creates both a C++ file wrapping the C code and the Java classes we need to run it.

swig -java -package opensl_example -includeall -verbose \
  -outdir src/opensl_example -c++ -I/usr/local/include \
  -I/System/Library/Frameworks/JavaVM.framework/Headers \
  -I./jni -o jni/java_interface_wrap.cpp opensl_example_interface.i

When this is done, it calls the NDK build script,

$ANDROID_NDK_ROOT/ndk-build TARGET_PLATFORM=android-9 V=1

that will build a dynamically-loadable module (.so) containing our native code. This script is hardwired to use the Android.mk file in the ./jni directory.
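
The Android.mk file itself is not reproduced here, but a minimal makefile for a module like this would look roughly as follows (a sketch only; the source file names are taken from the project layout and the exact flags may differ). The important detail is linking against libOpenSLES:

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE    := opensl_example
LOCAL_SRC_FILES := opensl_io.c opensl_example.c java_interface_wrap.cpp
# link against the OpenSL ES and Android logging libraries
LOCAL_LDLIBS    := -llog -lOpenSLES
include $(BUILD_SHARED_LIBRARY)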

Once the NDK part is built, we can turn to Eclipse. After starting it, we should import the project using File->Import and the ‘Import into existing workspace’ option. It will ask for the project directory, and we just browse to and select the top-level one (android-audiotest). If everything goes according to plan, you can plug in your device and build and run the project as an Android application. The application will be built and run on the device. At this point you will be able to talk into the mic and hear your voice over the speakers (or, more appropriately, a pair of headphones).

The native interface code

Two source files make up the native part of this project: opensl_io.c, which has all the audio streaming functions, and opensl_example.c, which uses these to implement the simple audio processing example. A reference for the OpenSL API can be found in the OpenSL ES 1.0.1 specification, which is also distributed in the Android NDK docs/opensl directory. There we also find some specific documentation on the Android implementation of the API, which is available online as well.
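
All of the streaming state is kept in a single structure, OPENSL_STREAM, which the functions in opensl_io.c pass around as a handle. The exact declaration lives in the project sources; the sketch below only lists the fields that appear in the snippets that follow, so the code is easier to read (the field names and types shown here are assumptions based on that usage):

/* requires <SLES/OpenSLES.h> and <SLES/OpenSLES_Android.h> */
typedef struct opensl_stream {
  /* engine and output mix */
  SLObjectItf engineObject;
  SLEngineItf engineEngine;
  SLObjectItf outputMixObject;
  /* player object and interfaces */
  SLObjectItf bqPlayerObject;
  SLPlayItf bqPlayerPlay;
  SLAndroidSimpleBufferQueueItf bqPlayerBufferQueue;
  /* recorder object and interfaces */
  SLObjectItf recorderObject;
  SLRecordItf recorderRecord;
  SLAndroidSimpleBufferQueueItf recorderBufferQueue;
  /* double buffers, indices and thread locks */
  short *outputBuffer[2], *inputBuffer[2];
  int currentOutputBuffer, currentInputBuffer;
  int currentOutputIndex, currentInputIndex;
  int outBufSamples, inBufSamples;
  void *inlock, *outlock;
  /* stream parameters */
  int inchannels, outchannels, sr;
  double time;
} OPENSL_STREAM;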

Opening the device for audio output

The entry point into OpenSL is through the creation of the audio engine, as in

result = slCreateEngine(&(p->engineObject), 0, NULL, 0, NULL, NULL);

This initialises an engine object of type SLObjectItf (which in the example above is held in a data structure pointed to by p). Once an engine is created, it needs to be realised (this is a common pattern with OpenSL objects: creation followed by realisation). An engine interface is then obtained, which will be used subsequently to open and initialise the input and output devices (with their sources and sinks):

result = (*p->engineObject)->Realize(p->engineObject, SL_BOOLEAN_FALSE);
...
result = (*p->engineObject)->GetInterface(p->engineObject,
                                     SL_IID_ENGINE, &(p->engineEngine));

Once the interface to the engine object is obtained, we can use it to create other API objects. In general, for all API objects, we:

  1. create the object (instantiation)
  2. realise it (initialisation)
  3. obtain an interface to it (to access any features needed), via the GetInterface() method
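
Each of these calls returns an SLresult. The snippets below assign it to a variable named result but, for brevity, do not show the checks; in real code every step would be guarded along these lines (a sketch, not taken verbatim from the example):

if (result != SL_RESULT_SUCCESS) {
  /* creation, realisation or interface query failed:
     clean up any objects created so far and report the error */
  return result;
}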

In the case of playback, the first object to be created is the Output Mix (also an SLObjectItf), which is then realised:

const SLInterfaceID ids[] = {SL_IID_VOLUME};
const SLboolean req[] = {SL_BOOLEAN_FALSE};
result = (*p->engineEngine)->CreateOutputMix(p->engineEngine,
                                    &(p->outputMixObject), 1, ids, req);
...
result = (*p->outputMixObject)->Realize(p->outputMixObject,
                                                 SL_BOOLEAN_FALSE);

As we will not need to manipulate it, we do not need to get its interface. Next, we configure the source and sink of the player object we are about to create. For output, the source is going to be a buffer queue, which is where we will send our audio data samples. We configure it with the usual parameters: data format, channels, sampling rate (sr), etc.:

SLDataLocator_AndroidSimpleBufferQueue loc_bufq =
                           {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM,channels,sr,
               SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
               speakers, SL_BYTEORDER_LITTLEENDIAN};
SLDataSource audioSrc = {&loc_bufq, &format_pcm};

and the sink is the Output Mix we created above:

SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX,
                                                p->outputMixObject};
SLDataSink audioSnk = {&loc_outmix, NULL};
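
The channels, sr and speakers variables above are filled in by the device-opening code. One detail that is easy to miss is that SLDataFormat_PCM expects the sampling rate in milliHertz, and the channel mask has to be consistent with the channel count. A sketch of plausible settings for this example (the variable names are only for illustration):

/* sampling rate is given in milliHertz */
SLuint32 sr = SL_SAMPLINGRATE_44_1;   /* 44100000, i.e. 44.1 kHz */
SLuint32 channels = 2;
/* front left/right for stereo, front centre for mono */
SLuint32 speakers = (channels > 1) ?
      (SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT) :
      SL_SPEAKER_FRONT_CENTER;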

The audio player object then gets created with this source and sink, and realised:

const SLInterfaceID ids1[] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
const SLboolean req1[] = {SL_BOOLEAN_TRUE};
result = (*p->engineEngine)->CreateAudioPlayer(p->engineEngine,
                    &(p->bqPlayerObject), &audioSrc, &audioSnk,
                     1, ids1, req1);
...
result = (*p->bqPlayerObject)->Realize(p->bqPlayerObject, 
                                             SL_BOOLEAN_FALSE);

Then we get the player object interface,

result = (*p->bqPlayerObject)->GetInterface(p->bqPlayerObject, 
                                 SL_IID_PLAY,&(p->bqPlayerPlay));

and the buffer queue interface (of type SLAndroidSimpleBufferQueueItf in the Android implementation)

result = (*p->bqPlayerObject)->GetInterface(p->bqPlayerObject,
       SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &(p->bqPlayerBufferQueue));

The OpenSL API provides a callback mechanism for audio IO. However, unlike other asynchronous audio IO implementations, such as those in CoreAudio or Jack, the callback does not pass the audio buffers for processing as one of its arguments. Instead, the callback is only used to signal the application, indicating that the buffer queue is ready for another buffer.

The buffer queue interface obtained above will be used to set up a callback (bqPlayerCallback, which is passed p as context):

result = (*p->bqPlayerBufferQueue)->RegisterCallback(
                      p->bqPlayerBufferQueue,bqPlayerCallback, p);

Finally, the player interface is used to start audio playback:

result = (*p->bqPlayerPlay)->SetPlayState(p->bqPlayerPlay,
                                            SL_PLAYSTATE_PLAYING);

Opening the device for audio input

The process of starting the recording of audio data is very similar to playback. First we set our source and sink, which will be the Audio Input and a buffer queue, respectively:

SLDataLocator_IODevice loc_dev = {SL_DATALOCATOR_IODEVICE,
                      SL_IODEVICE_AUDIOINPUT,
                      SL_DEFAULTDEVICEID_AUDIOINPUT, NULL};
SLDataSource audioSrc = {&loc_dev, NULL};
...
SLDataLocator_AndroidSimpleBufferQueue loc_bq =
                      {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, channels, sr,
          SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
          speakers, SL_BYTEORDER_LITTLEENDIAN};
SLDataSink audioSnk = {&loc_bq, &format_pcm};

Then we create an audio recorder, realise it and get its interface:

const SLInterfaceID id[1] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
const SLboolean req[1] = {SL_BOOLEAN_TRUE};
result = (*p->engineEngine)->CreateAudioRecorder(p->engineEngine,
                              &(p->recorderObject), &audioSrc,
                               &audioSnk, 1, id, req);
...
result = (*p->recorderObject)->Realize(p->recorderObject,
                                          SL_BOOLEAN_FALSE);
...
result = (*p->recorderObject)->GetInterface(p->recorderObject,
                           SL_IID_RECORD, &(p->recorderRecord));

The buffer queue interface is obtained and the callback set:

result = (*p->recorderObject)->GetInterface(p->recorderObject,
     SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &(p->recorderBufferQueue));
...
result = (*p->recorderBufferQueue)->RegisterCallback(
                   p->recorderBufferQueue, bqRecorderCallback,p);

We can now start audio recording:

result = (*p->recorderRecord)->SetRecordState(
                      p->recorderRecord,SL_RECORDSTATE_RECORDING);

Audio IO

Streaming audio to and from the device is done with the Enqueue() method of the buffer queue interface (SLAndroidSimpleBufferQueueItf in the Android implementation):

SLresult (*Enqueue) (SLAndroidSimpleBufferQueueItf self,
                     const void *pBuffer, SLuint32 size);

This should be called whenever the buffer queue is ready for a new data buffer (either for input or output). As soon as the player or recorder object is set into the playing or recording state, the buffer queue will be ready for data. After this, the callback mechanism is responsible for signalling the application that the buffer queue is ready for another block of data. We can call the Enqueue() method in the callback itself, or elsewhere. If we opt for the former, then to get the callback mechanism running we need to enqueue a buffer as we start recording or playing; otherwise the callback will never be called.
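
For instance, if we were enqueueing from the callback, we could prime the queue with a block of silence right after setting the play state, along these lines (a sketch, assuming the double buffers from the structure sketched earlier and <string.h> for memset):

/* prime the queue so that the player callback starts firing */
memset(p->outputBuffer[0], 0, p->outBufSamples*sizeof(short));
(*p->bqPlayerBufferQueue)->Enqueue(p->bqPlayerBufferQueue,
           p->outputBuffer[0], p->outBufSamples*sizeof(short));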

An alternative is to use the callback only to notify the application, which waits on it whenever it has a full buffer to deliver (or an empty one to fill). In this case we employ double buffering, so that while one buffer is enqueued, the other is being filled or consumed by our application. This allows us to create a simple interface that takes a block of audio to be written to the output buffer, or returns a block of samples read from the input buffer.

Here is what we do for input. The callback is very minimal: it just notifies our main processing thread that the buffer queue is ready:

void bqRecorderCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
  OPENSL_STREAM *p = (OPENSL_STREAM *) context;
  notifyThreadLock(p->inlock);
}
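
The waitThreadLock() and notifyThreadLock() helpers are small wrappers around a mutex and a condition variable, defined in opensl_io.c. Their actual implementation is not shown here; a minimal sketch of the idea, using pthreads, might look like this:

#include <pthread.h>

typedef struct threadLock_ {
  pthread_mutex_t m;
  pthread_cond_t  c;
  unsigned char   s;   /* 1 = notified */
} threadLock;

void waitThreadLock(void *lock) {
  threadLock *p = (threadLock *) lock;
  pthread_mutex_lock(&p->m);
  while (!p->s) pthread_cond_wait(&p->c, &p->m);
  p->s = 0;                      /* consume the notification */
  pthread_mutex_unlock(&p->m);
}

void notifyThreadLock(void *lock) {
  threadLock *p = (threadLock *) lock;
  pthread_mutex_lock(&p->m);
  p->s = 1;                      /* mark as notified */
  pthread_cond_signal(&p->c);
  pthread_mutex_unlock(&p->m);
}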

Meanwhile, the processing loop calls the audio input function to get a block of samples. When the current input buffer is exhausted, we wait for the notification, enqueue that buffer to be refilled by the device, and switch to the other one:

int android_AudioIn(OPENSL_STREAM *p, float *buffer, int size){
  short *inBuffer;
  int i, bufsamps, index;
  if(p == NULL || p->inBufSamples == 0) return 0;
  bufsamps = p->inBufSamples;
  index = p->currentInputIndex;

  inBuffer = p->inputBuffer[p->currentInputBuffer];
  for(i=0; i < size; i++){
    if (index >= bufsamps) {
      waitThreadLock(p->inlock);
      (*p->recorderBufferQueue)->Enqueue(p->recorderBufferQueue,
                     inBuffer,bufsamps*sizeof(short));
      p->currentInputBuffer = (p->currentInputBuffer ? 0 : 1);
      index = 0;
      inBuffer = p->inputBuffer[p->currentInputBuffer];
    }
    buffer[i] = (float) inBuffer[index++]*CONVMYFLT;
  }
  p->currentInputIndex = index;
  if(p->outchannels == 0) p->time += (double) size/(p->sr*p->inchannels);
  return i;
}

For output, we do the reverse. The callback is exactly the same, but now it notifies us that the device has consumed our buffer. In the processing loop, we call the function below, which fills the output buffer with the blocks we pass to it. When the buffer is full, we wait for the notification so that we can enqueue the data and switch buffers:

int android_AudioOut(OPENSL_STREAM *p, float *buffer, int size){
  short *outBuffer;
  int i, bufsamps, index;
  if(p == NULL || p->outBufSamples == 0) return 0;
  bufsamps = p->outBufSamples;
  index = p->currentOutputIndex;
  outBuffer = p->outputBuffer[p->currentOutputBuffer];

  for(i=0; i < size; i++){
    outBuffer[index++] = (short) (buffer[i]*CONV16BIT);
    if (index >= bufsamps) {
      waitThreadLock(p->outlock);
      (*p->bqPlayerBufferQueue)->Enqueue(p->bqPlayerBufferQueue,
                     outBuffer, bufsamps*sizeof(short));
      p->currentOutputBuffer = (p->currentOutputBuffer ? 0 : 1);
      index = 0;
      outBuffer = p->outputBuffer[p->currentOutputBuffer];
    }
  }
  p->currentOutputIndex = index;
  p->time += (double) size/(p->sr*p->outchannels);
  return i;
}

The interface

The code discussed above is structured into a minimal API for audio streaming with OpenSL. It contains five functions (and one opaque data structure):

/*
  Open the audio device with a given sampling rate (sr), input and
  output channels and IO buffer size in frames.
  Returns a handle to the OpenSL stream.
*/
OPENSL_STREAM* android_OpenAudioDevice(int sr, int inchannels,
                                int outchannels, int bufferframes);
/*
  Close the audio device.
*/
void android_CloseAudioDevice(OPENSL_STREAM *p);
/*
  Read a buffer from the OpenSL stream *p, of size samples.
  Returns the number of samples read.
*/
int android_AudioIn(OPENSL_STREAM *p, float *buffer, int size);
/*
  Write a buffer to the OpenSL stream *p, of size samples.
  Returns the number of samples written.
*/
int android_AudioOut(OPENSL_STREAM *p, float *buffer, int size);
/*
  Get the current IO block time in seconds.
*/
double android_GetTimestamp(OPENSL_STREAM *p);

Processing

The example is completed by a trivial processing function, start_process(), which we will wrap in Java so that it can be called by the application. It employs the API described above:

p = android_OpenAudioDevice(SR,1,2,BUFFERFRAMES);
...
while(on) {
   samps = android_AudioIn(p,inbuffer,VECSAMPS_MONO);
   for(i = 0, j=0; i < samps; i++, j+=2)
     outbuffer[j] = outbuffer[j+1] = inbuffer[i];
   android_AudioOut(p,outbuffer,VECSAMPS_STEREO);
}
android_CloseAudioDevice(p);

A stop_process() function is also supplied, so that we can stop the streaming when closing the application.
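
A sketch of how the two fit together, assuming a module-level flag (the actual variable name in opensl_example.c may differ):

static volatile int on = 0;  /* set by start_process(), cleared by stop_process() */

void stop_process() {
  on = 0;  /* the while(on) loop above exits and the device is closed */
}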

The application code

Finally, completing the project, we have a small Java class, based on the Eclipse auto-generated application code, with the addition of a secondary thread and calls to the two wrapped native functions described above:

public class AudiotestActivity extends Activity {
    Thread thread;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        thread = new Thread() {
            public void run() {
                setPriority(Thread.MAX_PRIORITY);
                opensl_example.start_process();
            }
        };
        thread.start();   
    }
    @Override
    public void onDestroy(){
        super.onDestroy();
        opensl_example.stop_process();
        try {
            thread.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        thread = null;
    }
}

Final Words

I hope these notes have helped shed some light on the development of audio processing applications using native code and OpenSL. While it does not (yet) offer a lower-latency option than the SDK’s AudioTrack, it may still deliver better performance, as native code is not subject to virtual machine overheads such as garbage collection pauses. I believe this is the way forward for audio development on Android. In the NDK document on the Android OpenSL implementation, we read that

as the Android platform and specific device implementations continue to evolve, an OpenSL ES application can expect to benefit from any future system performance improvements.

This suggests that Android audio developers should take some notice of OpenSL. The lack of examples has not been very encouraging, but I hope this post is a step towards changing that.

72 Comments
  1. Wow! Thanks for the information! I have been digging a lot for OPENSL ES info!

  2. Neeraj permalink

    Hi Victor,

    Thanks for the informative post. I have been doing some openSl ES xploration on my own and built a sample app where I modified the nativeAudio sample in the NDK for streaming. The idea was to get a deterministic value for audio latency on Android using OpenSL ES. My app estimates the latency to be around 270 ms on ICS. I would be grateful if you could reply with the latency that you experience with this app that you have developed. It would go a long way in confirming if I am using OpenSL ES correctly or missing out on something.
    Thanks in advance.

    Regards,
    Neeraj

    • with my Galaxy tablet, Android 3.2, the latency is similar, between 200 and 300 ms. I was hoping to upgrade to ICS when it becomes available from Samsung, but from what you report, it appears that will not make any difference to the latency. It appears to be device dependent, as a friend who has a Kindle Fire has reported even longer latencies.

      • neeraj permalink

        Hi Victor,

        Thanks for the response. Yes, I have noted that the latencies are device dependent, since the minimum buffer size returned by the ALSA layer is device dependent. I have been trying to tweak ALSA but without much success yet. The differences in latencies do smooth out on ICS, though, at least with the Nexus S & Nexus Prime.

        Thanks again.

        Regards,
        Neeraj

  3. Is there a way to access the Alsa layer via the NDK (without any customisation)?

  4. neeraj permalink

    Unfortunately no. Hope android addresses this issue and exposes low level audio in future releases.

  5. Yes. I think eventually they should think of supporting Jack. It’s the best technology for the job.

  6. athos permalink

    thank you for this, I read the post and I’m studying your code to understand how everything works. I can’t make the application work on an android 3.2 device emulator. It fails loading the library, which is built with ndk-build with no error (I also specify android-9 as the target platform as you indicated). Is the audio IO supposed to work only on real devices, or am I missing anything?

    • I have not tested on the emulator, but I would suggest you use 4.0, because it is the only emulator using armeabi-v7a, which is what I am building the NDK code for. Most likely, this is the cause of the crash. Also, I do not think the emulator has audio in, but it has audio out.

  7. khiner permalink

    Victor, thanks so much for the thoughtful tutorial here. I’m semi-convinced that this is the only one of its kind on the entire internet! 🙂 I’m blown away that there are not more resources on this. Although Android is clearly not in the game/audio app market as much as Apple is (BECAUSE of these limitations, not the Android culture as some have it), there is definitely a fair swath of audio applications that do nontrivial audio DSP, which means a lot of people have gone through these trials and tribulations without sharing their experiences with the rest of us, as you have. I can only assume that most audio developers are still doing DSP directly on AudioTrack byte buffers, through the JVM. But I just can’t imagine, say, chaining multiple effects like a reverb, delay and a decent filter using that method… with 5 tracks at the same time.

    Of course, my main concern in using OpenSL is limiting my audience to Android versions > 2.3. Do you have any opinions on this matter? That is, do you think that in mid-2012 it is worth creating a separate (much slower) effects implementation using, say, an AudioTrack architecture, and dynamically using this architecture for users with Android versions 4 simultaneous tracks, >2 nontrivial DSP effects each) using AudioTracks, or any Android-sound-API? Or is something like this achievable today at all in Android?

    • It should be easy enough to put in a conditional compilation directive to select between OpenSL and AudioTrack. Even if you are using AudioTrack, I would still suggest you do the processing in C and then wrap it in JNI calls (I found that using SWIG is the simplest way).
      Good luck with your projects.

  8. Victor, thanks so much for the thoughtful tutorial here. I’m semi-convinced that this is the only one of its kind on the entire internet! 🙂 I’m blown away that there are not more resources on this. Although Android is clearly not in the game/audio app market as much as Apple is (BECAUSE of these limitations, not the Android culture as some have it), there is definitely a fair swath of audio applications that do nontrivial audio DSP, which means a lot of people have gone through these trials and tribulations without sharing their experiences with the rest of us, as you have. I can only assume that most audio developers are still doing DSP directly on AudioTrack byte buffers, through the JVM. But I just can’t imagine, say, chaining multiple effects like a reverb, delay and a decent filter using that method… with 5 tracks at the same time.

    Of course, my main concern in using OpenSL is limiting my audience to Android versions > 2.3. Do you have any opinions on this matter? That is, do you think that in mid-2012 it is worth creating a separate (much slower) effects implementation using, say, an AudioTrack architecture, and dynamically using this architecture for users with Android versions 4 simultaneous tracks, >2 nontrivial DSP effects each) using AudioTracks, or any Android-sound-API? Or is something like this achievable today at all in Android?

    • It doesn’t look like I can edit my post. There was an editing error. Here’s the last part again, missing sentence included:
      do you think that in mid-2012 it is worth creating a separate (much slower) effects implementation using, say, an AudioTrack architecture, and dynamically using this architecture for users with Android versions 4 simultaneous tracks, >2 nontrivial DSP effects each) using AudioTracks, or any Android-sound-API? Or is something like this achievable today at all in Android?

      • Okay, it thinks my greater/less-thans are HTML, and it’s escaping an entire sentence. One more time: should I dynamically use a different architecture for Android versions less than 2.3? Is it even possible to achieve continuous audio for a situation as complex as having more than 4 simultaneous tracks, each with more than 2 nontrivial DSP effects each using AudioTracks, or any Android-sound-API? Or is something like this achievable today at all in Android?
        Whew! Come on WordPress, can we get an edit feature?

  9. Thanks for a great write-up, Victor! Starting with your sample code, I was able to implement an OpenSL-based branch of Pd for Android. It’s still experimental (in the sense that I haven’t yet had time to test it much), but so far it’s working nicely: https://github.com/libpd/pd-for-android/tree/opensl

    As an intermediate step, I put together a very simple API that lets you use OpenSL by providing a JACK-style processing callback: https://github.com/libpd/libpd/blob/opensl/jni/opensl_io.h

    • Good job, Peter. With libpd and Csound supporting OpenSL, there are good alternatives for audio programming in Android. Now let’s hope the latency issue gets fixed…

  10. I notice that, as the Android version of Csound uses this for the backend, on my 2.3 and 4.03 devices I get flawless audio playback. As good as my much more powerful Intel laptop. Very exciting, as I don’t have to touch the stuff myself and can focus on writing a PhoneGap plugin to route channel messages from JavaScript function calls into the Csound backend with good enough performance to be useful.

  11. Yes, it is very similar code; the idea, after doing the Csound backend, was to share the experience, as there was at the time very little information on how to do audio streaming with OpenSL. I am glad Csound is working well for you. I am looking forward to trying it on the new Google tablets once I get my hands on one.

  12. rpmchaos permalink

    Excellent post! I’m hoping you can help me out with something… I am building an app that records audio using the AudioRecord API. I have a test device that plugs into the headset port of a cell phone and emits a known pattern. By always using the same test device/pattern, I can record on an Android cell phone and evaluate how true the signal is perceived (I record the waveform on Android, then analyze visually using Audacity on my PC). I noticed on the new Samsung Galaxy S3 that my waveform looks a little funky (still similar, but funky). I tried using the excellent recording app Tape Machine available at the Play Store and the signal looks excellent. Looking at the Android log info, I see that my app causes the following:
    D/alsa_ucm(227): snd_use_case_set(): uc_mgr 0x63c878 identifier _verb value HiFi Rec
    D/alsa_ucm(227): set_use_case_ident_for_all_devices(): HiFi Rec
    D/alsa_ucm(227): Set mixer controls for HiFi Rec enable 1

    While the Tape Machine app causes:
    D/alsa_ucm(227): snd_use_case_set(): uc_mgr 0x63c878 identifier _enamod value Capture Music
    D/alsa_ucm(227): set_use_case_ident_for_all_devices(): Capture Music
    D/alsa_ucm(227): Set mixer controls for Capture Music enable 1

    I’m speculating that the difference here is setting some configuration which is affecting my signal.

    Looking at Tape Machine’s website, they state they use libsndfile and libavcodec libraries. I’m guessing this means they must be using the NDK to work with those libraries. Still, I haven’t found how to configure these settings manually. Anybody have some insight here?

  13. thanks for the informative tips for android..
    i have used this in my project…
    you are done a great job….

  14. I need help.
    I want to develop a radio streaming app using the OGG file format.
    I am new to Android development.
    What do I need to do?
    Do I have to use the NDK for OGG, or is it possible to stream OGG audio in the SDK?
    Please share your code with me.
    Thank you in advance.

  15. ksam permalink

    Hi, Victor

    Thank you for your valuable insights into how to use native audio with OpenSL ES. I am developing an application in which I need to have both capture and playback at the same time.
    Currently only one of them is happening at a time. How can I do this? And if I want to use an extra capture or playback device connected to my Android device, how do I enumerate that device? And what would the minimum delay be if we use this? Because with the other APIs the delay is longer, which is causing some problems in my applications.

    Eagerly waiting for your reply.
    Thanking you in advance.
    sam

  16. Ritz permalink

    Hi All,

    I am trying to implement audio capture using OpenSL ES. I have written the code, but while compiling it I am getting
    undefined reference to `slCreateEngine'
    undefined reference to SL_IID_RECORD
    undefined reference to SL_IID_ENGINE
    undefined reference to SL_IID_ANDROIDSIMPLEBUFFERQUEUE
    collect2: ld returned 1 exit status
    make: *** [/home/ubuntu-desktop/Desktop/ritun/OpenSL_ES/obj/local/armeabi/libopenaltest.so] Error 1

    I am using TARGET_PLATFORM := android-9

    Can anyone please suggest where I am going wrong.

    • Sounds like you are not adding the OpenSL ES library to the build. Make sure your Android.mk has the line

      LOCAL_LDLIBS := -llog -lOpenSLES

      so that the app is correctly linked

  17. Ritz permalink

    Ohh thanks a lot. I had missed it; now I am able to capture audio perfectly.
    But while playing audio, I am getting result = SL_RESULT_BUFFER_INSUFFICIENT when calling the enqueue function. Actually I am a bit confused about when to call the enqueue function.

    Can anyone shed some light on this issue? I would be really thankful. Waiting eagerly for any reply.

  18. Ritz permalink

    The audio played is coming like robot voice. Quality is not at all good.

    Please put your comments in it.

  19. It looks like it could be a buffer size issue. Did you try building and running the code from the GIT repository? (https://bitbucket.org/victorlazzarini/android-audiotest)

  20. Ritz permalink

    Does anyone have an idea regarding the minimum buffer size to enqueue?

    • Sizes below 1024 bytes are not recommended for streaming audio in general. But the answer to your question is device specific. On the Java end, the AudioRecord interface has a method called getMinBufferSize() that obviously returns the minimum buffer size for record buffer enqueueing for the calling device. Again, the number that this method returns is device-dependent, but typical sizes are in the range of 4096.

      • Ritz permalink

        I am playing audio using 8192 bytes as returned by getMinBufferSize() and it plays audio perfectly. I am also trying to capture audio using 8192 bytes at a time. It is capturing audio, but sometimes there is data loss and sometimes glitches in the audio are observed. I do not understand why this is happening.

  21. Ritz permalink

    Thanks a lot for the information. I used the function getMinBufferSize() and queried the minimum buffer size for an Idolian tablet. It showed 8192 bytes. After using 8192 as the buffer size, I am able to play audio fine. The audio sounds super. Thanks again for helping to solve the problem.

  22. Ritz permalink

    Hello Everyone,

    Actually, I want to know whether we are able to play audio without using a threading concept in the callback function. Currently I am able to play audio by using Enqueue in the callback function, and also once outside the callback function (only for the 1st time). Can I avoid that by any other method?

    Any idea regarding this will be appreciated.

  23. I think I see what you mean. There are a few ways to answer your question. The callback function is called whenever the audio is done being enqueued to the device’s audio buffer. We have the opportunity at this point in time to provide more audio data to be enqueued, which will result in a continuous, unbroken stream of audio being written.

    In practice, there is nothing magic about using the Enqueue method in this callback function other than timing. In this way, the callback can be looked at as a hint as to when we should provide another buffer of audio such that we get a continuous audio stream. Theoretically, if we could predict exactly how long it will take to enqueue and write each buffer of audio, we could write our own method using a simple timer, and enqueue audio at fixed intervals. The problem with this approach is that the time it takes for OpenSL and the OS to do this job is mildly nondeterministic, and can be slightly different for two Enqueues even with the same buffer size, due to OS scheduling, minimum hardware buffer writes, etc.

    Long story short: use the hint! It’s right every time, and very helpful. Of course, you can get fancy and do things like calculate your next buffer in a different thread while the OS is writing and playing your last enqueued buffer. The bottom line is, when your OpenSL output object is asking for more audio, give it what it wants!

  24. Regarding the buffer sizes, I have noticed on ICS and Jelly Bean that OpenSL can operate with smaller buffer sizes than what AudioTrack gives in getMinBufferSize(). In particular, I can use something like 512 samples without drops. The latency is still considerably higher than what this would indicate (about 11ms @44.1k, versus a real latency of about 200ms), but it is shorter than using the larger sizes returned by getMinBufferSize().

    • Ritz permalink

      I am trying to capture audio using 8192 bytes at a time. It is capturing audio, but sometimes there is data loss and sometimes a click sound is observed in the audio. I am using only one buffer, which is enqueued every 256 ms. I cannot see where I am going wrong.

  25. Are you using a callback to notify when the buffer is supposed to be enqueued? I would suggest using a double buffer, where the buffer you pass to the enqueue is not the one you are currently processing. Look at the source code in GIT, which uses this principle.

    • Ritz permalink

      Actually I am trying to use the enqueue function to enqueue another buffer inside the callback, but it crashes whenever I try to call the enqueue function inside the recorder callback. So currently I am enqueuing buffers in a separate function, and in the callback function I write the captured data in the processed buffer to a WAV file.

  26. jul permalink

    I followed your instructions and get:

    opensl_example cannot be resolved AudiotestActivity.java /com.audiotest.AudiotestActivity/src/com/audiotest line 46 Java Problem

    What did I miss?

  27. jul permalink

    I missed swig 😦

  28. awesome post regarding Android audio streaming with OpenSL ES and the NDK, very informative post, actually i am searching program how to code android audio streaming?.. here is very useful solution for me.. thanks a lot

  29. Hi Victor, thanks for the great tutorial, there’s not a lot of support for android opensl es out there!

    When trying to compile the test app in eclipse it fails to compile and the console cites this:

    Language subdirectory: java
    Search paths:
    ./
    /usr/local/include/
    /System/Library/Frameworks/JavaVM.framework/Headers/
    ./jni/
    ./swig_lib/java/
    /usr/local/share/swig/2.0.8/java/
    ./swig_lib/
    /usr/local/share/swig/2.0.8/
    Preprocessing…
    Starting language-specific parse…
    Processing types…
    C++ analysis…
    Generating wrappers…
    rm -f ./libs/armeabi/lib*.so ./libs/armeabi-v7a/lib*.so ./libs/mips/lib*.so ./libs/x86/lib*.so
    rm -f ./libs/armeabi/gdbserver ./libs/armeabi-v7a/gdbserver ./libs/mips/gdbserver ./libs/x86/gdbserver
    rm -f ./libs/armeabi/gdb.setup ./libs/armeabi-v7a/gdb.setup ./libs/mips/gdb.setup ./libs/x86/gdb.setup
    Compile++ thumb : opensl_example libs/armeabi-v7a/libopensl_example.so
    mkdir -p ./libs/armeabi-v7a
    install -p ./obj/local/armeabi-v7a/libopensl_example.so ./libs/armeabi-v7a/libopensl_example.so
    /Users/neilc/Documents/android-ndk-r8b/toolchains/arm-linux-androideabi-4.6/prebuilt/darwin-x86/bin/arm-linux-androideabi-strip --strip-unneeded ./libs/armeabi-v7a/libopensl_example.so

    I’m completely new to Android development and Eclipse both, could anyone illuminate on what’s going wrong here?

    • the console messages seem correct. That’s the NDK library build (from build.sh).
      The errors are probably to do with resources or java code that is not up-to-date (as both SWIG and Eclipse generate java code). You might need to check in the Eclipse explorer where the error is, there is generally a red cross icon showing where the problem is. It will probably tell you also how to fix it.

      I’ve also been told that there was a project directory (.externalToolBuilders) missing in the GIT repository. I’ve added this; you might want to try updating your local copy.

      Let me know whether this helped.

  30. Hi – great article – I have a small question. I am writing an application which uses the audio recorder (I need to capture each record buffer and do some work on it). When I use a headset with a microphone plugged into the phone and try to record sound using the audio recorder library (functions), I find that when the microphone picks up a sound it stops the audio coming from the speakers. When I use the same setting but with the media player library (functions) it works great, but with that option I cannot work on each received buffer and need to wait for the recording to finish in order to do the work. Is there a way of configuring the microphone not to stop the speaker output when recording with the audio record functions? In principle I am connecting the speaker output directly to the microphone and, as I said above, with the media player functions it works great but there are problems with audio record. I need an urgent answer as I am stuck now, and I am new to audio programming; you may know the problems and the workaround for that. Thanks.

    • Not sure about this, but maybe have a look at the OpenSL headers to see if there is a distinction for audio sinks – speaker and headphones. I have not seen it there, but maybe there is.

  31. Ritz permalink

    Hi Victor,

    Finally I am able to capture and play audio using your code. It’s working very well. Later, I modified it into a class version, removing the OPENSL_STREAM structure, so that I can do separate capture and playback operations. But when I create two separate instances of the same class, and therefore two engines, I get the error “slCreateEngine while another engine is active”. And when deleting the objects it also gives the error “Destroy for engine ignored”. Any light on this issue will really be appreciated.

    • athos permalink

      @Ritz: you can have only one Engine object, so you might want to create it once and give a reference to it to each instance that needs to work with it.

  32. triathlon permalink

    make: *** No rule to make target `jni/opensl_io2.c’, needed by `obj/local/armeabi-v7a/objs/opensl_example/opensl_io2.o’. Stop.

  33. I stumble opun a problem quite similar to that one:

    make: *** No rule to make target `jni/opensl_io3.c’, needed by `obj/local/armeabi-v7a/objs/opensl_example/opensl_io3.o’. Stop.

    • yes, I was playing around with some new code and the thing got committed by mistake. It should be fixed in GIT now, just update from the repo. Thanks for alerting me.

  34. Thanks for the fast help!

  35. Thank you very much for this great article, and useful information!

  36. I have no idea what’s wrong. I changed the NDK path and the target SDK version to 9 in the script, ran it and this is what I’m getting:

    C++ analysis…
    Generating wrappers…
    rm -f ./libs/armeabi/lib*.so ./libs/armeabi-v7a/lib*.so ./libs/mips/lib*.so ./libs/x86/lib*.so
    rm -f ./libs/armeabi/gdbserver ./libs/armeabi-v7a/gdbserver ./libs/mips/gdbserver ./libs/x86/gdbserver
    rm -f ./libs/armeabi/gdb.setup ./libs/armeabi-v7a/gdb.setup ./libs/mips/gdb.setup ./libs/x86/gdb.setup
    Compile thumb : opensl_example <= opensl_example.c
    /home/sergio/Downloads/android-ndk-r8d/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/arm-linux-androideabi-gcc -MMD -MP -MF ./obj/local/armeabi-v7a/objs/opensl_example/opensl_example.o.d -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -no-canonical-prefixes -march=armv7-a -mfloat-abi=softfp -mfpu=vfpv3-d16 -mthumb -Os -g -DNDEBUG -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -Ijni -I/home/sergio/Downloads/android-ndk-r8d/sources/cxx-stl/gnu-libstdc++/4.6/include -I/home/sergio/Downloads/android-ndk-r8d/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi-v7a/include -Ijni -DANDROID -O3 -Wa,--noexecstack -I/home/sergio/Downloads/android-ndk-r8d/platforms/android-9/arch-arm/usr/include -c jni/opensl_example.c -o ./obj/local/armeabi-v7a/objs/opensl_example/opensl_example.o
    Assembler messages:
    Fatal error: invalid -march= option: `armv7-a'
    make: *** [obj/local/armeabi-v7a/objs/opensl_example/opensl_example.o] Error 1

    Seems like the "Fatal error: invalid -march= option: `armv7-a'" is what I should worry about, but I'm clueless as to how to fix it =/ any help would be greatly appreciated.

      • Maybe it’s armv7a instead of armv7-a?
      What OS are you building on? I know there was a problem with the r8c version of the NDK and OSX 10.6.
      Maybe try reverting to an older version of the NDK?

      • I’m on Linux (latest Ubuntu, 32-bit), and the NDK is r8d. I doubt it’s a typo, since that seems to be a command executed by one of the build tools (and the error seems to be somewhat common, although in other contexts – a Google search seems unfruitful).

      • I checked in my build and it’s armv7-a alright, so forget that comment. But this is the r8c NDK.

        By the way, according to your messages, the NDK you are using is r8d, not 9:

        /home/sergio/Downloads/android-ndk-r8d

      • Oh I phrased that wrong, I meant I fixed the NDK path (“export ANDROID_NDK_ROOT=$HOME/Downloads/android-ndk-r8d”) *AND* changed the target SDK version to 9 (“$ANDROID_NDK_ROOT/ndk-build TARGET_PLATFORM=android-9 V=1”). It doesn’t seem to be related to that. I guess I’ll try using NDK r8c.

      • In any case, the error
        Fatal error: invalid -march= option: `armv7-a’

        sounds like a problematic NDK. The gcc compiler should accept -march=armv7-a alright. This is how my console shows the compilation of that particular file:

        Compile thumb : opensl_example <= opensl_example.c
        /Users/victor/src/android-ndk-r8c/toolchains/arm-linux-androideabi-4.6/prebuilt/darwin-x86/bin/arm-linux-androideabi-gcc -MMD -MP -MF ./obj/local/armeabi-v7a/objs/opensl_example/opensl_example.o.d -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -march=armv7-a -mfloat-abi=softfp -mfpu=vfpv3-d16 -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -Ijni -I/Users/victor/src/android-ndk-r8c/sources/cxx-stl/gnu-libstdc++/4.6/include -I/Users/victor/src/android-ndk-r8c/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi-v7a/include -Ijni -DANDROID -O3 -Wa,--noexecstack -O2 -DNDEBUG -g -I/Users/victor/src/android-ndk-r8c/platforms/android-14/arch-arm/usr/include -c jni/opensl_example.c -o ./obj/local/armeabi-v7a/objs/opensl_example/opensl_example.o

  37. Maxi permalink

    Hi Victor,

    Thanks for the excellent post.
    I would like to capture and playback audio with external hardware via USB (self developed).
    The corresponding app should be an effect processor intended to perform amp modeling + equalization + reverb + echo + chorus on one audio channel. (similar to iRig for iPhone)

    1. Assuming “zero latency” of the codec+USB, do you think that programming C++ with the NDK could achieve a latency below 20ms?

    2. Do you know if the large latency (above 100ms) reported in your post is mainly due to the phone codec and the routines that manage audio buffering?

    Kind regards,
    Maxi

    • as far as I know, if you are not thinking of rooting your device and making changes to the OS, then nope. You will need to write your USB driver, probably using whatever toolkit is offered for it (I do not know details) in the NDK, and this will be hooked onto the audio infrastructure of Android. That might allow lower latencies, but I do not see it getting closer to 20ms.

  38. Reblogged this on Blog Music Radio and commented:
    Interesting

  39. Kevin permalink

    Thanks for doing this tutorial. This is new to me so I am confused about a few things. Primarily, I’d like to know what device is being used for input. What audio will end up being processed in this example (and the delay-effect example that follows)?

    • The input used is the default audio input. In my tests, it’s the tablet mic, but I guess you might be able to plug audio in with the correct cable, depending on the device.

  40. Vinay permalink

    Actually I’m trying to implement a music player app on Android, so I installed the Android NDK, which contains the OpenSL audio libraries. I used it and tried to apply an environmental reverb effect, but there is no change in the song. So could anyone please let me know what the problem is?

    note:
    Added permissions in manifest file.

  41. SHINODA permalink

    Thanks for your helpful tutorial. I successfully built the project. But when I run it on my Android 2.3.1 phone, the application can’t load the OpenSL ES lib. I got this error:
    “native code library failed to load.
    java.lang.UnsatisfiedLinkError: Cannot load library: link_image[1962]: 60 could not load needed library ‘libOpenSLES.so’ for ‘libopensl_example.so’ (reloc_library[1311]: 60 cannot locate ‘_ZN7android16NuHTTPDataSource7connectEPKcPKNS_11KeyedVectorINS_7String8ES4_EEl’…”

    Please help me. what wrong here?

    • That’s a new one for me. Maybe your Android phone does not have OpenSL installed. I had the impression it would be there in 2.3.1. Could you try upgrading Android?

