Android 4.4 KitKat AudioRecord Flow Analysis
  Android's architecture has three layers:
Bottom layer: the Linux kernel
Middle layer: implemented mostly in C++ (about 60% of the Android source is C++)
Application layer: applications written mainly in Java
  A request runs roughly as follows: the Java application issues an operation (say, play or stop music), crosses into the middle layer through JNI, and executes C++ code there. If the middle layer decides that hardware must act, it passes the operation on down to the Linux kernel, which invokes the relevant driver; if no hardware action is needed, the middle layer handles the request and returns directly.
  One point worth being clear about: Android uses only the kernel from Linux. Even common libraries such as pthread are re-implemented by Android itself in C/C++/assembly.
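To make that Java-to-C++ crossing concrete, here is a minimal JNI sketch (all names here, com.example.Player and nativePlay, are made up for illustration; the real AudioRecord bridge is shown later in this article):
// C++ side of a hypothetical JNI bridge. The Java side would declare
//     private static native int nativePlay();
// in a class com.example.Player and load the library with System.loadLibrary().
#include <jni.h>

// Exported under the name-mangled form Java_<package>_<Class>_<method>
extern "C" JNIEXPORT jint JNICALL
Java_com_example_Player_nativePlay(JNIEnv* env, jclass clazz) {
    // Middle-layer C++ work happens here; when hardware must act, the call
    // continues down into a kernel driver (e.g. through ioctl()).
    return 0; // status code returned to the Java caller
}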
  Because setting up the audio path involves Android IPC and the system service manager, both are briefly summarized first:
  ① Android IPC follows a client/server model: the client (AudioRecord) calls methods on server-side objects (AudioFlinger, AudioFlinger::RecordThread, and so on) through an interface (IAudioRecord) and gets back the results. AudioRecord.cpp implements the AudioRecord class, and AudioFlinger.cpp implements the AudioFlinger class. In the low-level audio path, AudioRecord is the IPC client and AudioFlinger is the server. Once AudioRecord holds the server-side interface (mAudioRecord), it can call AudioFlinger's methods as if they were its own.
  ② At startup Android creates a service-manager process, and every system service must be registered with it. A handle is obtained with sp<IServiceManager> sm = defaultServiceManager(), and a service is registered through its addService method: sm->addService(String16("media.audio_flinger"), new AudioFlinger()); only a service registered this way can be used by other processes:
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.audio_flinger"));
When the audio system starts, it creates two services, the AudioFlinger service used in the example above and the AudioPolicyService, and registers both with the service manager; other processes can then use the methods they provide.
Below, AudioFlingerService is abbreviated to AudioFlinger, and AudioPolicyService to AudioPolicy.
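As a condensed sketch of the publishing side (modeled on how mediaserver registers AudioFlinger; includes and error handling trimmed, so treat this as illustrative rather than the verbatim AOSP main()):
#include <binder/ProcessState.h>
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <utils/String16.h>
// AudioFlinger's own header is omitted here for brevity.

using namespace android;

int main()
{
    sp<ProcessState> proc(ProcessState::self());      // open the binder driver
    sp<IServiceManager> sm = defaultServiceManager();
    // Publish the service under the name clients will look up.
    sm->addService(String16("media.audio_flinger"), new AudioFlinger());
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();          // serve incoming binder calls
    return 0;
}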
Core flow:
AudioSystem::getInput(...) -> aps->getInput(...) -> AudioPolicyService::getInput(...) -> mpPolicyManager->getInput(...) -> AudioPolicyService's mpClientInterface->openInput(...) -> AudioFlinger::openInput(...)
Recording flow analysis
Application-level recording
  The main job of the AudioRecord class is to let Java applications manage audio resources so they can record the sound collected by the platform's audio input hardware. This works by "pulling" (synchronously reading) sound data from the AudioRecord object. During recording, all the application has to do is fetch the recorded data in time through one of the read methods. AudioRecord provides three of them: read(byte[], int, int), read(short[], int, int), and read(ByteBuffer, int). Whichever one is used, the storage format of the sound data must be chosen in advance to suit the caller.
  When recording starts, an AudioRecord needs an associated sound buffer in which new sound data is kept. Its size can be specified at construction time, and it determines how much an AudioRecord can record before its data has been read back (i.e., the capacity of one recording pass). Sound data is read out of the audio hardware in chunks no larger than the total recorded data (it may be read in several passes), that is, at most one buffer's worth per read. A typical recording flow looks like this:
Create a data stream.
Construct an AudioRecord object; the minimum recording buffer size can be obtained from getMinBufferSize. Too small a buffer makes construction fail.
Allocate a buffer at least as large as the one the AudioRecord object writes sound data into.
Start recording.
Read sound data from the AudioRecord into the buffer, then push the buffer's contents into the data stream.
Stop recording.
Close the data stream.
Example:
// Create a DataOutputStream to write the audio data into the saved file.
OutputStream os = new FileOutputStream(file);
BufferedOutputStream bos = new BufferedOutputStream(os);
DataOutputStream dos = new DataOutputStream(bos);
// Create a new AudioRecord object to record the audio.
int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration,
        audioEncoding);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        11025, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);
short[] buffer = new short[bufferSize];
audioRecord.startRecording();
isRecording = true;
while (isRecording) {
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
    for (int i = 0; i < bufferReadResult; i++)
        dos.writeShort(buffer[i]);
}
audioRecord.stop();
dos.close();
1. getMinBufferSize
  getMinBufferSize was introduced earlier, so only a quick note here: reading the source shows that it calls the JNI function native_get_min_buff_size, which lands in android_media_AudioRecord_get_min_buff_size in frameworks/base/core/jni/android_media_AudioRecord.cpp.
  The mapping from native_get_min_buff_size to android_media_AudioRecord_get_min_buff_size is defined by the method table in android_media_AudioRecord.cpp:
static JNINativeMethod gMethods[] = {
    // name,                        signature,  funcPtr
    {"native_start",                "(II)I",    (void *)android_media_AudioRecord_start},
    {"native_stop",                 "()V",      (void *)android_media_AudioRecord_stop},
    {"native_setup",                "(Ljava/lang/Object;IIIII[I)I",
                                                (void *)android_media_AudioRecord_setup},
    {"native_finalize",             "()V",      (void *)android_media_AudioRecord_finalize},
    {"native_release",              "()V",      (void *)android_media_AudioRecord_release},
    {"native_read_in_byte_array",   "([BII)I",  (void *)android_media_AudioRecord_readInByteArray},
    {"native_read_in_short_array",  "([SII)I",  (void *)android_media_AudioRecord_readInShortArray},
    {"native_read_in_direct_buffer","(Ljava/lang/Object;I)I",
                                                (void *)android_media_AudioRecord_readInDirectBuffer},
    {"native_set_marker_pos",       "(I)I",     (void *)android_media_AudioRecord_set_marker_pos},
    {"native_get_marker_pos",       "()I",      (void *)android_media_AudioRecord_get_marker_pos},
    {"native_set_pos_update_period","(I)I",     (void *)android_media_AudioRecord_set_pos_update_period},
    {"native_get_pos_update_period","()I",      (void *)android_media_AudioRecord_get_pos_update_period},
    {"native_get_min_buff_size",    "(III)I",   (void *)android_media_AudioRecord_get_min_buff_size},
};
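How does the Java runtime learn about this table? At library load time a registration helper hands gMethods to JNI's RegisterNatives. The sketch below shows the usual shape of that helper (condensed; in AOSP the wiring actually goes through AndroidRuntime::registerNativeMethods):
static int register_android_media_AudioRecord(JNIEnv* env)
{
    jclass clazz = env->FindClass("android/media/AudioRecord");
    if (clazz == NULL) return -1;
    // After this call, Java's native_get_min_buff_size dispatches to
    // android_media_AudioRecord_get_min_buff_size, and so on for each row.
    return env->RegisterNatives(clazz, gMethods,
                                sizeof(gMethods) / sizeof(gMethods[0]));
}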
  android_media_AudioRecord_get_min_buff_size looks like this:
// returns the minimum required size for the successful creation of an AudioRecord instance.
// returns 0 if the parameter combination is not supported.
// return -1 if there was an error querying the buffer size.
static jint android_media_AudioRecord_get_min_buff_size(JNIEnv *env, jobject thiz,
        jint sampleRateInHertz, jint nbChannels, jint audioFormat) {

    ALOGV(">> android_media_AudioRecord_get_min_buff_size(%d, %d, %d)",
            sampleRateInHertz, nbChannels, audioFormat);

    size_t frameCount = 0;  // frameCount is returned through this out-parameter.
    status_t result = AudioRecord::getMinFrameCount(&frameCount,
            sampleRateInHertz,
            (audioFormat == ENCODING_PCM_16BIT ? AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT),
            audio_channel_in_mask_from_count(nbChannels));

    if (result == BAD_VALUE) {
        return 0;
    }
    if (result != NO_ERROR) {
        return -1;
    }
    return frameCount * nbChannels * (audioFormat == ENCODING_PCM_16BIT ? 2 : 1);
}
  The minimum buffer size is computed from the minimum frame count. The frame is the most common unit in audio: one frame is the byte size of one sample point multiplied by the channel count. Why introduce frames at all? Because with multiple channels, the byte size of a single sample point doesn't tell the whole story: on playback every channel's data has to come out. Speaking in frames abstracts away the channel count while still conveying the full picture. Once getMinBufferSize returns, we have a buffer size that satisfies the minimum requirement, which gives the caller a basis for allocating its own buffer.
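A worked example (values chosen for illustration; a real caller gets the frame count from getMinFrameCount): with 16-bit PCM stereo, one frame is 2 bytes * 2 channels = 4 bytes, so a minimum frame count of 1024 gives 1024 * 4 = 4096 bytes, the same multiplication the JNI function above returns:
#include <cstdio>

int main()
{
    const int bytesPerSample = 2;     // ENCODING_PCM_16BIT
    const int nbChannels     = 2;     // stereo input
    const int frameSize      = bytesPerSample * nbChannels; // 4 bytes per frame
    const size_t frameCount  = 1024;  // assume getMinFrameCount() reported this
    // Mirrors: return frameCount * nbChannels * (16-bit ? 2 : 1);
    printf("min buffer size = %zu bytes\n", frameCount * frameSize); // 4096
    return 0;
}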
2. new AudioRecord
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes) throws IllegalArgumentException {
    mRecordingState = RECORDSTATE_STOPPED;
    // remember which looper is associated with the AudioRecord instantiation
    // Grab the current thread's Looper (see the Looper write-up elsewhere).
    if ((mInitializationLooper = Looper.myLooper()) == null) {
        mInitializationLooper = Looper.getMainLooper();
    }
    audioParamCheck(audioSource, sampleRateInHz, channelConfig, audioFormat);
    audioBuffSizeCheck(bufferSizeInBytes);
    // native initialization
    int[] session = new int[1];
    session[0] = 0;
    //TODO: update native initialization when information about hardware init failure
    //      due to capture device already open is available.
    // Call the native layer's native_setup, passing in a WeakReference to this object.
    int initResult = native_setup(new WeakReference<AudioRecord>(this),
            mRecordSource, mSampleRate, mChannelMask, mAudioFormat, mNativeBufferSizeInBytes,
            session);
    if (initResult != SUCCESS) {
        loge("Error code " + initResult + " when initializing native AudioRecord object.");
        return; // with mState == STATE_UNINITIALIZED
    }
    mSessionId = session[0];
    mState = STATE_INITIALIZED;
}
  The constructor calls native_setup and enters android_media_AudioRecord_setup in frameworks/base/core/jni/android_media_AudioRecord.cpp:
static int android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jint source, jint sampleRateInHertz, jint channelMask,
                // Java channel masks map directly to the native definition
        jint audioFormat, jint buffSizeInBytes, jintArray jSession)
{
    //ALOGV(">> Entering android_media_AudioRecord_setup");
    //ALOGV("sampleRate=%d, audioFormat=%d, channel mask=%x, buffSizeInBytes=%d",
    //     sampleRateInHertz, audioFormat, channelMask, buffSizeInBytes);

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", channelMask);
        return AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
    }
    // popcount counts how many bits of an integer are set to 1
    uint32_t nbChannels = popcount(channelMask);

    // compare the format against the Java constants
    if ((audioFormat != ENCODING_PCM_16BIT) && (audioFormat != ENCODING_PCM_8BIT)) {
        ALOGE("Error creating AudioRecord: unsupported audio format.");
        return AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
    }

    int bytesPerSample = audioFormat == ENCODING_PCM_16BIT ? 2 : 1;
    audio_format_t format = audioFormat == ENCODING_PCM_16BIT ?
            AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;

    if (buffSizeInBytes == 0) {
        ALOGE("Error creating AudioRecord: frameCount is 0.");
        return AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
    }
    int frameSize = nbChannels * bytesPerSample;
    size_t frameCount = buffSizeInBytes / frameSize;

    if ((uint32_t(source) >= AUDIO_SOURCE_CNT) && (uint32_t(source) != AUDIO_SOURCE_HOTWORD)) {
        ALOGE("Error creating AudioRecord: unknown source.");
        return AUDIORECORD_ERROR_SETUP_INVALIDSOURCE;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        ALOGE("Can't find %s when setting up callback.", kClassPathName);
        return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
    }

    if (jSession == NULL) {
        ALOGE("Error creating AudioRecord: invalid session ID pointer");
        return AUDIORECORD_ERROR;
    }

    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        return AUDIORECORD_ERROR;
    }
    int sessionId = nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    // create an uninitialized AudioRecord object
    sp<AudioRecord> lpRecorder = new AudioRecord();

    // create the callback information:
    // this data will be passed with every AudioRecord callback
    audiorecord_callback_cookie *lpCallbackData = new audiorecord_callback_cookie;
    lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioRecord object can be garbage collected.
    lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
    lpCallbackData->busy = false;

    lpRecorder->set((audio_source_t) source,
        sampleRateInHertz,
        format,          // word length, PCM
        channelMask,
        frameCount,
        recorderCallback,// callback_t
        lpCallbackData,  // void* user
        0,               // notificationFrames
        true,            // threadCanCallJava
        sessionId);

    if (lpRecorder->initCheck() != NO_ERROR) {
        ALOGE("Error creating AudioRecord instance: initialization check failed.");
        goto native_init_failure;
    }

    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioRecord in case a new session was created during set()
    nSession[0] = lpRecorder->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    {   // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioRecordCallBackCookies.add(lpCallbackData);
    }

    // save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field of the
    // Java object; the Java layer fetches it back later through getAudioRecord()
    setAudioRecord(env, thiz, lpRecorder);

    // save our newly created callback information in the "nativeCallbackCookie" field
    // of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
    env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, (int)lpCallbackData);

    return AUDIORECORD_SUCCESS;

    // failure:
native_init_failure:
    env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
    env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
    delete lpCallbackData;
    env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);

    return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
}
The key call is lpRecorder->set; tracing its implementation:
status_t AudioRecord::set(
        audio_source_t inputSource,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        int frameCountInt,
        callback_t cbf,
        void* user,
        int notificationFrames,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        audio_input_flags_t flags)
{
    switch (transferType) {
    case TRANSFER_DEFAULT:
        if (cbf == NULL || threadCanCallJava) {
            transferType = TRANSFER_SYNC;
        } else {
            transferType = TRANSFER_CALLBACK;
        }
        break;
    case TRANSFER_CALLBACK:
        if (cbf == NULL) {
            ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL");
            return BAD_VALUE;
        }
        break;
    case TRANSFER_OBTAIN:
    case TRANSFER_SYNC:
        break;
    default:
        ALOGE("Invalid transfer type %d", transferType);
        return BAD_VALUE;
    }
    mTransfer = transferType;

    // FIXME "int" here is legacy and will be replaced by size_t later
    if (frameCountInt < 0) {
        ALOGE("Invalid frame count %d", frameCountInt);
        return BAD_VALUE;
    }
    size_t frameCount = frameCountInt;

    ALOGV("set(): sampleRate %u, channelMask %#x, frameCount %u", sampleRate, channelMask,
            frameCount);

    AutoMutex lock(mLock);

    if (mAudioRecord != 0) {
        ALOGE("Track already in use");
        return INVALID_OPERATION;
    }

    if (inputSource == AUDIO_SOURCE_DEFAULT) {
        inputSource = AUDIO_SOURCE_MIC;
    }
    mInputSource = inputSource;

    if (sampleRate == 0) {
        ALOGE("Invalid sample rate %u", sampleRate);
        return BAD_VALUE;
    }
    mSampleRate = sampleRate;

    // these below should probably come from the audioFlinger too...
    if (format == AUDIO_FORMAT_DEFAULT) {
        format = AUDIO_FORMAT_PCM_16_BIT;
    }

    // validate parameters
    if (!audio_is_valid_format(format)) {
        ALOGE("Invalid format %d", format);
        return BAD_VALUE;
    }
    // Temporary restriction: AudioFlinger currently supports 16-bit PCM only
    if (format != AUDIO_FORMAT_PCM_16_BIT) {
        ALOGE("Format %d is not supported", format);
        return BAD_VALUE;
    }
    mFormat = format;

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Invalid channel mask %#x", channelMask);
        return BAD_VALUE;
    }
    mChannelMask = channelMask;
    uint32_t channelCount = popcount(channelMask);
    mChannelCount = channelCount;

    // Assumes audio_is_linear_pcm(format), else sizeof(uint8_t)
    mFrameSize = channelCount * audio_bytes_per_sample(format);

    // validate framecount
    size_t minFrameCount = 0;
    status_t status = AudioRecord::getMinFrameCount(&minFrameCount,
            sampleRate, format, channelMask);
    if (status != NO_ERROR) {
        ALOGE("getMinFrameCount() status %d", status);
        return status;
    }
    ALOGV("AudioRecord::set() minFrameCount = %d", minFrameCount);

    if (frameCount == 0) {
        frameCount = minFrameCount;
    } else if (frameCount < minFrameCount) {
        ALOGE("frameCount %u < minFrameCount %u", frameCount, minFrameCount);
        return BAD_VALUE;
    }
    mFrameCount = frameCount;

    mNotificationFramesReq = notificationFrames;
    mNotificationFramesAct = 0;

    if (sessionId == 0 ) {
        mSessionId = AudioSystem::newAudioSessionId();
    } else {
        mSessionId = sessionId;
    }
    ALOGV("set(): mSessionId %d", mSessionId);

    mFlags = flags;

    // create the IAudioRecord
    status = openRecord_l(0 /*epoch*/);
    if (status) {
        return status;
    }

    if (cbf != NULL) {
        mAudioRecordThread = new AudioRecordThread(*this, threadCanCallJava);
        mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
    }

    mStatus = NO_ERROR;

    // Update buffer size in case it has been limited by AudioFlinger during track creation
    mFrameCount = mCblk->frameCount_;

    mActive = false;
    mCbf = cbf;
    mRefreshRemaining = true;
    mUserData = user;
    // TODO: add audio hardware input latency here
    mLatency = (1000*mFrameCount) / sampleRate;
    mMarkerPosition = 0;
    mMarkerReached = false;
    mNewPosition = 0;
    mUpdatePeriod = 0;
    AudioSystem::acquireAudioSessionId(mSessionId);
    mSequence = 1;
    mObservedSequence = mSequence;
    mInOverrun = false;

    return NO_ERROR;
}
Tracing openRecord_l:
// must be called with mLock held
status_t AudioRecord::openRecord_l(size_t epoch)
{
    status_t status;
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    if (audioFlinger == 0) {
        ALOGE("Could not get audioflinger");
        return NO_INIT;
    }

    IAudioFlinger::track_flags_t trackFlags = IAudioFlinger::TRACK_DEFAULT;
    pid_t tid = -1;

    // Client can only express a preference for FAST.  Server will perform additional tests.
    // The only supported use case for FAST is callback transfer mode.
    if (mFlags & AUDIO_INPUT_FLAG_FAST) {
        if ((mTransfer != TRANSFER_CALLBACK) || (mAudioRecordThread == 0)) {
            ALOGW("AUDIO_INPUT_FLAG_FAST denied by client");
            // once denied, do not request again if IAudioRecord is re-created
            mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
        } else {
            trackFlags |= IAudioFlinger::TRACK_FAST;
            tid = mAudioRecordThread->getTid();
        }
    }

    mNotificationFramesAct = mNotificationFramesReq;
    if (!(mFlags & AUDIO_INPUT_FLAG_FAST)) {
        // Make sure that application is notified with sufficient margin before overrun
        if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount/2) {
            mNotificationFramesAct = mFrameCount/2;
        }
    }

    audio_io_handle_t input = AudioSystem::getInput(mInputSource, mSampleRate, mFormat,
            mChannelMask, mSessionId);
    if (input == 0) {
        ALOGE("Could not get audio input for record source %d", mInputSource);
        return BAD_VALUE;
    }

    int originalSessionId = mSessionId;
    sp<IAudioRecord> record = audioFlinger->openRecord(input,
                                                       mSampleRate, mFormat,
                                                       mChannelMask,
                                                       mFrameCount,
                                                       &trackFlags,
                                                       tid,
                                                       &mSessionId,
                                                       &status);
    ALOGE_IF(originalSessionId != 0 && mSessionId != originalSessionId,
            "session ID changed from %d to %d", originalSessionId, mSessionId);

    if (record == 0 || status != NO_ERROR) {
        ALOGE("AudioFlinger could not create record track, status: %d", status);
        AudioSystem::releaseInput(input);
        return status;
    }

    sp<IMemory> iMem = record->getCblk();
    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    void *iMemPointer = iMem->pointer();
    if (iMemPointer == NULL) {
        ALOGE("Could not get control block pointer");
        return NO_INIT;
    }

    if (mAudioRecord != 0) {
        mAudioRecord->asBinder()->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }
    mAudioRecord = record;
    mCblkMemory = iMem;
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    mCblk = cblk;

    // FIXME missing fast track frameCount logic
    mAwaitBoost = false;
    if (mFlags & AUDIO_INPUT_FLAG_FAST) {
        if (trackFlags & IAudioFlinger::TRACK_FAST) {
            ALOGV("AUDIO_INPUT_FLAG_FAST frameCount %u", mFrameCount);
            mAwaitBoost = true;
            // double-buffering is not required for fast tracks, due to tighter scheduling
            if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount) {
                mNotificationFramesAct = mFrameCount;
            }
        } else {
            ALOGV("AUDIO_INPUT_FLAG_FAST frameCount %u", mFrameCount);
            // once denied, do not request again if IAudioRecord is re-created
            mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
            if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount/2) {
                mNotificationFramesAct = mFrameCount/2;
            }
        }
    }

    // starting address of buffers in shared memory
    void *buffers = (char*)cblk + sizeof(audio_track_cblk_t);

    // update proxy
    mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
    mProxy->setEpoch(epoch);
    mProxy->setMinimum(mNotificationFramesAct);

    mDeathNotifier = new DeathNotifier(this);
    mAudioRecord->asBinder()->linkToDeath(mDeathNotifier, this);

    return NO_ERROR;
}
Tracing AudioSystem::getInput:
audio_io_handle_t AudioSystem::getInput(audio_source_t inputSource,
                                        uint32_t samplingRate,
                                        audio_format_t format,
                                        audio_channel_mask_t channelMask,
                                        int sessionId)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return 0;
    return aps->getInput(inputSource, samplingRate, format, channelMask, sessionId);
}
The relevant part of AudioSystem.cpp:
// client singleton for AudioPolicyService binder interface
sp<IAudioPolicyService> AudioSystem::gAudioPolicyService;
sp<AudioSystem::AudioPolicyServiceClient> AudioSystem::gAudioPolicyServiceClient;

// establish binder interface to AudioPolicy service
const sp<IAudioPolicyService>& AudioSystem::get_audio_policy_service()
{
    gLock.lock();
    if (gAudioPolicyService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.audio_policy"));
            if (binder != 0)
                break;
            ALOGW("AudioPolicyService not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
        if (gAudioPolicyServiceClient == NULL) {
            gAudioPolicyServiceClient = new AudioPolicyServiceClient();
        }
        binder->linkToDeath(gAudioPolicyServiceClient);
        gAudioPolicyService = interface_cast<IAudioPolicyService>(binder);
        gLock.unlock();
    } else {
        gLock.unlock();
    }
    return gAudioPolicyService;
}
// establish binder interface to AudioFlinger service
const sp<IAudioFlinger>& AudioSystem::get_audio_flinger()
{
    Mutex::Autolock _l(gLock);
    if (gAudioFlinger == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.audio_flinger"));
            if (binder != 0)
                break;
            ALOGW("AudioFlinger not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
        if (gAudioFlingerClient == NULL) {
            gAudioFlingerClient = new AudioFlingerClient();
        } else {
            if (gAudioErrorCallback) {
                gAudioErrorCallback(NO_ERROR);
            }
        }
        binder->linkToDeath(gAudioFlingerClient);
        gAudioFlinger = interface_cast<IAudioFlinger>(binder);
        gAudioFlinger->registerClient(gAudioFlingerClient);
    }
    ALOGE_IF(gAudioFlinger==0, "no AudioFlinger!?");
    return gAudioFlinger;
}
3. startRecording
startRecording -> native_start -> android_media_AudioRecord_start -> lpRecorder->start():
static int android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return AUDIORECORD_ERROR;
    }

    return android_media_translateRecorderErrorCode(
            lpRecorder->start((AudioSystem::sync_event_t)event, triggerSession));
}

status_t AudioRecord::start(AudioSystem::sync_event_t event, int triggerSession)
{
    ALOGV("start, sync event %d trigger session %d", event, triggerSession);

    AutoMutex lock(mLock);
    if (mActive) {
        return NO_ERROR;
    }

    // reset current position as seen by client to 0
    mProxy->setEpoch(mProxy->getEpoch() - mProxy->getPosition());

    mNewPosition = mProxy->getPosition() + mUpdatePeriod;
    int32_t flags = android_atomic_acquire_load(&mCblk->mFlags);

    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        ALOGV("mAudioRecord->start()");
        status = mAudioRecord->start(event, triggerSession);
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    if (flags & CBLK_INVALID) {
        status = restoreRecord_l("start");
    }

    if (status != NO_ERROR) {
        ALOGE("start() status %d", status);
    } else {
        mActive = true;
        sp<AudioRecordThread> t = mAudioRecordThread;
        if (t != 0) {
            t->resume();
        } else {
            mPreviousPriority = getpriority(PRIO_PROCESS, 0);
            get_sched_policy(0, &mPreviousSchedulingGroup);
            androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
        }
    }

    return status;
}
4. read
read -> native_read_in_byte_array -> android_media_AudioRecord_readInByteArray (the short[] overload goes through native_read_in_short_array -> android_media_AudioRecord_readInShortArray in the same way):
static jint android_media_AudioRecord_readInByteArray(JNIEnv *env, jobject thiz,
                                                      jbyteArray javaAudioData,
                                                      jint offsetInBytes, jint sizeInBytes) {
    jbyte* recordBuff = NULL;
    // get the audio recorder from which we'll read new audio samples
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL) {
        ALOGE("Unable to retrieve AudioRecord object, can't record");
        return 0;
    }

    if (!javaAudioData) {
        ALOGE("Invalid Java array to store recorded audio, can't record");
        return 0;
    }

    // get the pointer to where we'll record the audio
    // NOTE: We may use GetPrimitiveArrayCritical() when the JNI implementation changes in such
    // a way that it becomes much more efficient. When doing so, we will have to prevent the
    // AudioSystem callback to be called while in critical section (in case of media server
    // process crash for instance)
    recordBuff = (jbyte *)env->GetByteArrayElements(javaAudioData, NULL);
    if (recordBuff == NULL) {
        ALOGE("Error retrieving destination for recorded audio data, can't record");
        return 0;
    }

    // read the new audio data from the native AudioRecord object
    ssize_t recorderBuffSize = lpRecorder->frameCount()*lpRecorder->frameSize();
    ssize_t readSize = lpRecorder->read(recordBuff + offsetInBytes,
                                        sizeInBytes > (jint)recorderBuffSize ?
                                            (jint)recorderBuffSize : sizeInBytes );
    env->ReleaseByteArrayElements(javaAudioData, recordBuff, 0);

    if (readSize < 0) {
        readSize = AUDIORECORD_ERROR_INVALID_OPERATION;
    }
    return (jint) readSize;
}
5. stop
stop -> native_stop -> android_media_AudioRecord_stop -> lpRecorder->stop():
static void android_media_AudioRecord_stop(JNIEnv *env, jobject thiz)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return;
    }

    lpRecorder->stop();
    //ALOGV("Called lpRecorder->stop()");
}