Problem description:

I'm making an iPhone app which lets the user design an audio filter and test it on some recorded sound. I try to do the following:

  1. I create two audio files called "recordedAudio.aiff" and "filteredAudio.aiff"
  2. I record sound with the mic and save it in "recordedAudio.aiff"
  3. I copy the audio data from "recordedAudio.aiff" into a buffer
  4. Later I will perform some audio filtering on the data in this buffer. For testing purposes, I just want to reduce the value of each sample by half (which should simply halve the volume), so I can verify that I'm able to manipulate individual samples
  5. I write the result into a second buffer
  6. I write the data of that buffer into the second file "filteredAudio.aiff"
  7. I play the second file

The problem is the following: as long as I just copy the data from one buffer into the other and write it into the second audio file, everything works fine. But as soon as I perform any kind of operation on the samples (such as dividing them by 2), the result is just random noise. This makes me suspect that I'm not interpreting the sample values correctly, but I've been trying for five days now and I just can't figure it out. If you have any idea how to access and manipulate individual audio samples, please help me; I would really appreciate it!

Thanks!

This is the code that will perform the filtering later (for now it should just divide all audio samples by 2):

OSStatus status = noErr;

UInt32 propertySizeDataPacketCount;
UInt32 writabilityDataPacketCount;
UInt32 numberOfPackets;
UInt32 propertySizeMaxPacketSize;
UInt32 writabilityMaxPacketSize;
UInt32 maxPacketSize;
UInt32 numberOfBytesRead;
UInt32 numberOfBytesToWrite;
UInt32 propertySizeDataByteCount;
SInt64 currentPacket;
double x0;
double x1;

status = AudioFileOpenURL(audioFiles->recordedFile,
                          kAudioFileReadPermission,
                          kAudioFileAIFFType,
                          &audioFiles->inputFile);

status = AudioFileOpenURL(audioFiles->filteredFile,
                          kAudioFileReadWritePermission,
                          kAudioFileAIFFType,
                          &audioFiles->outputFile);

status = AudioFileGetPropertyInfo(audioFiles->inputFile,
                                  kAudioFilePropertyAudioDataPacketCount,
                                  &propertySizeDataPacketCount,
                                  &writabilityDataPacketCount);

status = AudioFileGetProperty(audioFiles->inputFile,
                              kAudioFilePropertyAudioDataPacketCount,
                              &propertySizeDataPacketCount,
                              &numberOfPackets);

status = AudioFileGetPropertyInfo(audioFiles->inputFile,
                                  kAudioFilePropertyMaximumPacketSize,
                                  &propertySizeMaxPacketSize,
                                  &writabilityMaxPacketSize);

status = AudioFileGetProperty(audioFiles->inputFile,
                              kAudioFilePropertyMaximumPacketSize,
                              &propertySizeMaxPacketSize,
                              &maxPacketSize);

SInt16 *inputBuffer = (SInt16 *)malloc(numberOfPackets * maxPacketSize);
SInt16 *outputBuffer = (SInt16 *)malloc(numberOfPackets * maxPacketSize);

currentPacket = 0;
status = AudioFileReadPackets(audioFiles->inputFile,
                              false,
                              &numberOfBytesRead,
                              NULL,
                              currentPacket,
                              &numberOfPackets,
                              inputBuffer);

for (int i = 0; i < numberOfPackets; i++) {
    x0 = (double)inputBuffer[i];
    x1 = 0.5 * x0; // This is supposed to reduce the value of the sample by half
    //x1 = x0;     // This just copies the value of the sample and works fine
    outputBuffer[i] = (SInt16)x1;
}

numberOfBytesToWrite = numberOfBytesRead;
currentPacket = 0;
status = AudioFileWritePackets(audioFiles->outputFile,
                               false,
                               numberOfBytesToWrite,
                               NULL,
                               currentPacket,
                               &numberOfPackets,
                               outputBuffer);

status = AudioFileClose(audioFiles->inputFile);
status = AudioFileClose(audioFiles->outputFile);

For creating the audio files I use the following code:

#import "AudioFiles.h"

#define SAMPLE_RATE        44100
#define FRAMES_PER_PACKET  1
#define CHANNELS_PER_FRAME 1
#define BYTES_PER_FRAME    2
#define BYTES_PER_PACKET   2
#define BITS_PER_CHANNEL   16

@implementation AudioFiles

- (void)setupAudioFormat:(AudioStreamBasicDescription *)format {
    format->mSampleRate       = SAMPLE_RATE;
    format->mFormatID         = kAudioFormatLinearPCM;
    format->mFramesPerPacket  = FRAMES_PER_PACKET;
    format->mChannelsPerFrame = CHANNELS_PER_FRAME;
    format->mBytesPerFrame    = BYTES_PER_FRAME;
    format->mBytesPerPacket   = BYTES_PER_PACKET;
    format->mBitsPerChannel   = BITS_PER_CHANNEL;
    format->mReserved         = 0;
    format->mFormatFlags      = kLinearPCMFormatFlagIsBigEndian |
                                kLinearPCMFormatFlagIsSignedInteger |
                                kLinearPCMFormatFlagIsPacked;
}

- (id)init
{
    self = [super init];
    if (self) {
        char path[256];
        NSArray *dirPaths;
        NSString *docsDir;

        dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        docsDir = [dirPaths objectAtIndex:0];
        NSString *recordedFilePath = [docsDir stringByAppendingPathComponent:@"/recordedAudio.aiff"];
        [recordedFilePath getCString:path maxLength:sizeof(path) encoding:NSUTF8StringEncoding];
        recordedFile = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), false);
        recordedFileURL = [NSURL fileURLWithPath:recordedFilePath];

        dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        docsDir = [dirPaths objectAtIndex:0];
        NSString *filteredFilePath = [docsDir stringByAppendingPathComponent:@"/filteredAudio.aiff"];
        [filteredFilePath getCString:path maxLength:sizeof(path) encoding:NSUTF8StringEncoding];
        filteredFile = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), false);
        filteredFileURL = [NSURL fileURLWithPath:filteredFilePath];

        AudioStreamBasicDescription audioFileFormat;
        [self setupAudioFormat:&audioFileFormat];

        OSStatus status = noErr;
        status = AudioFileCreateWithURL(recordedFile,
                                        kAudioFileAIFFType,
                                        &audioFileFormat,
                                        kAudioFileFlags_EraseFile,
                                        &inputFile);
        status = AudioFileCreateWithURL(filteredFile,
                                        kAudioFileAIFFType,
                                        &audioFileFormat,
                                        kAudioFileFlags_EraseFile,
                                        &outputFile);
    }
    return self;
}

@end

For recording I use an AVAudioRecorder with the following settings:

NSDictionary *recordSettings =
    [[NSDictionary alloc] initWithObjectsAndKeys:
        [NSNumber numberWithFloat:8000.0], AVSampleRateKey,
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
        [NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
        [NSNumber numberWithInt:16], AVEncoderBitRateKey,
        [NSNumber numberWithBool:YES], AVLinearPCMIsBigEndianKey,
        [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
        [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:YES], AVLinearPCMIsNonInterleaved,
        nil];

NSError *error = nil;
audioRecorder = [[AVAudioRecorder alloc] initWithURL:audioFiles->recordedFileURL
                                            settings:recordSettings
                                               error:&error];
if (error) {
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    [audioRecorder prepareToRecord];
}

Answer:

Your input data is big-endian, but you are treating it as if it were little-endian.

One way to handle this would be:

SInt16 inVal = OSSwapBigToHostInt16(inputBuffer[i]);
SInt16 outVal = inVal / 2;
outputBuffer[i] = OSSwapHostToBigInt16(outVal);