How to convert a CMSampleBufferRef to NSData, or an iOS UIImage to NSData

Converting between CMSampleBufferRef and UIImage
Reposted from: /archives/34496
(The original page has plenty of other useful information worth a look.)
Once you have a CMSampleBufferRef, a chain of conversions is still needed to arrive at a UIImage: CMSampleBufferRef → CVImageBufferRef → CGContextRef → CGImageRef → UIImage. You can place the following implementation in either of the two internal callback functions mentioned above to obtain a UIImage from the continuous stream of video frames.
// Create the CVImageBufferRef
CVImageBufferRef buffer;
buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(buffer, 0);
// Read the image details from the CVImageBufferRef
void *base;
size_t width, height, bytesPerRow;
base = CVPixelBufferGetBaseAddress(buffer);
width = CVPixelBufferGetWidth(buffer);
height = CVPixelBufferGetHeight(buffer);
bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);
// Use those image details to configure a CGContextRef
CGColorSpaceRef colorSpace;
CGContextRef cgContext;
colorSpace = CGColorSpaceCreateDeviceRGB();
cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
// Convert the CGContextRef into a UIImage by way of a CGImageRef
CGImageRef cgImage;
cgImage = CGBitmapContextCreateImage(cgContext);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(cgContext);
CVPixelBufferUnlockBaseAddress(buffer, 0);
// Successfully converted to a UIImage
//[myImageView setImage:image];
Finally, if you want to change the orientation of the captured image, call setVideoOrientation: on the AVCaptureConnection inside the callback to rotate the image, or setVideoMirrored: to mirror it.
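For reference, a minimal sketch of what that might look like (the variable `stillImageOutput` is a placeholder for an already-configured AVCaptureOutput, not something from the original post, and connectionWithMediaType: is assumed available, i.e. iOS 5 or later):

// Fetch the video connection from the output and adjust it before capturing.
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
if ([connection isVideoOrientationSupported]) {
    [connection setVideoOrientation:AVCaptureVideoOrientationPortrait]; // rotate
}
if ([connection isVideoMirroringSupported]) {
    [connection setVideoMirrored:YES]; // mirror
}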
ios - convert CMSampleBufferRef to UIImage - Stack Overflow
I'm capturing video with AVCaptureSession.
But I would like to convert the captured image to an UIImage.
I found some code on Internet:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
NSLog(@"imageFromSampleBuffer: called");
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
But I got some errors:
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 2560 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextCreateImage: invalid context 0x0
17:39:25.896 ThinkOutsideTheBox[] image <UIImage: 0x1d553f00>
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGContextDrawImage: invalid context 0x0
Jan 17 17:39:25 iPhone-4-de-XXX ThinkOutsideTheBox[2363] <Error>: CGBitmapContextGetData: invalid context 0x0
I also use the UIImage to get the rgb color:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
    UIImage* image = [self imageFromSampleBuffer:sampleBuffer];
    unsigned char* pixels = [image rgbaPixels];
    double totalLuminance = 0.0;
    for(int p=0; p<image.size.width*image.size.height*4; p+=4)
    {
        totalLuminance += pixels[p]*0.299 + pixels[p+1]*0.587 + pixels[p+2]*0.114;
    }
    totalLuminance /= (image.size.width*image.size.height);
    totalLuminance /= 255.0;
    NSLog(@"totalLuminance %f",totalLuminance);
}
Your best bet will be to set the capture video data output's videoSettings to a dictionary that specifies the pixel format you want, which you'll need to set to some variation on RGB that CGBitmapContext can handle.
The documentation has a list of all the pixel formats that Core Video can process. Only a tiny subset of those are supported by CGBitmapContext. The format that the code you found on the internet is expecting is kCVPixelFormatType_32BGRA, but that might have been written for Macs—on iOS devices, kCVPixelFormatType_32ARGB (big-endian) might be faster. Try them both, on the device, and compare frame rates.
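A minimal sketch of what the answer suggests, assuming `output` is the AVCaptureVideoDataOutput attached to the session (the variable name is illustrative, not from the question):

// Ask the data output for BGRA frames so CGBitmapContextCreate can read the
// pixel buffer directly with kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst.
output.videoSettings = [NSDictionary dictionaryWithObject:
                            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];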
Topic: How To Convert CMSampleBufferRef to NSData
I have a CMSampleBufferRef that contains H.264-compressed frames, but I don't know how to convert that CMSampleBufferRef into NSData so I can send it over the network as an H.264 stream. Does anyone know how to do this?
You can refer to this link:
CMSampleBufferRef ref = [output copyNextSampleBuffer];
// NSLog(@"%@", ref);
if (ref == NULL)
    break;   // stop the read loop when there are no more sample buffers
// copy data to file
// read next one
AudioBufferList audioBufferList;
NSMutableData *data = [[NSMutableData alloc] init];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
// NSLog(@"%@", blockBuffer);
for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    Float32 *frame = (Float32 *)audioBuffer.mData;
    [data appendBytes:frame length:audioBuffer.mDataByteSize];
}
CFRelease(blockBuffer);
CFRelease(ref);
ref = NULL;
blockBuffer = NULL;
[data release];
Reply to #1 (kaili)
Hi ^_^, thanks for your reply. I found that link too, but it only works for audio sample buffers; I'm looking for the video sample buffer case and haven't found a relevant example.
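No answer for the video case was posted in the thread. One possible approach for a compressed (e.g. H.264) video sample buffer is sketched below; it assumes the payload sits in the sample buffer's CMBlockBuffer and that the block buffer is contiguous, and it yields the raw AVCC-framed data rather than an Annex-B stream, so treat it as a starting point only:

#import <CoreMedia/CoreMedia.h>

// Sketch: copy the compressed bytes of a video CMSampleBufferRef into NSData.
static NSData *DataFromCompressedSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (blockBuffer == NULL) {
        return nil;
    }
    size_t lengthAtOffset = 0;
    size_t totalLength = 0;
    char *dataPointer = NULL;
    OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, &lengthAtOffset,
                                                  &totalLength, &dataPointer);
    if (status != kCMBlockBufferNoErr) {
        return nil;
    }
    // If the block buffer were not contiguous, copying with
    // CMBlockBufferCopyDataBytes into a malloc'd buffer would be safer.
    return [NSData dataWithBytes:dataPointer length:totalLength];
}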
[iOS Development] Custom photo, video, and frame capture with AVCaptureSession: a summary - liangzhimy - 博客园
[iOS Development] Photo, video, and frame capture: a summary
1 Create the session
2 Add inputs
3 Add outputs
4 Start capturing
5 Show the current recording state to the user
7 Stop capturing
1 Create the session
1.1 Declare the session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
[session startRunning];
1.2 Set the capture quality
Preset                        Resolution   Comments
AVCaptureSessionPresetHigh    High         Highest recording quality. This varies per device.
AVCaptureSessionPresetMedium  Medium       Suitable for WiFi sharing. The actual values may change.
AVCaptureSessionPresetLow     Low          Suitable for 3G sharing. The actual values may change.
AVCaptureSessionPresetPhoto   Photo        Full photo resolution. This is not supported for video output.
if ([session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
    session.sessionPreset = AVCaptureSessionPresetMedium;
}
else {
    // Handle the failure.
}
1.3 Reconfiguring the session
[session beginConfiguration];
// Remove an existing capture device.
// Add a new capture device.
// Reset the preset.
[session commitConfiguration];
2 Add inputs
2.1 Configure a device (finding the front and back cameras)
NSArray *devices = [AVCaptureDevice devices];
for (AVCaptureDevice *device in devices) {
    NSLog(@"Device name: %@", [device localizedName]);
    if ([device hasMediaType:AVMediaTypeVideo]) {
        if ([device position] == AVCaptureDevicePositionBack) {
            NSLog(@"Device position : back");
        }
        else {
            NSLog(@"Device position : front");
        }
    }
}
2.2 Switching between the front and back devices
AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];
2.3 Add the input device to the current session
AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
}
else {
    // Handle the failure.
}
3 Add output devices to the session
AVCaptureMovieFileOutput: to output to a movie file
AVCaptureVideoDataOutput: if you want to process frames from the video being captured
AVCaptureAudioDataOutput: if you want to process the audio data being captured
AVCaptureStillImageOutput: if you want to capture still images with accompanying metadata
3.1 Add an output to the session
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}
3.2 Saving video to a movie file
3.2.1 Declare an output
AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;
3.2.2 Configure writing to a specified file
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
3.2.3 Check whether the file was written successfully
Implement this method:
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
        didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
        fromConnections:(NSArray *)connections
        error:(NSError *)error {
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...
}
3.3 Grabbing frames from the capture
3.3.1 Set the pixel format for captured images
To be honest, I only half understand the pixel-format discussion below; my impression is that the chosen pixel format has some effect on image quality.
You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel format choices for iPhone 4 are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_32BGRA; for iPhone 3G the recommended pixel format choices are kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_32BGRA. Both Core Graphics and OpenGL work well with the BGRA format:
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
[session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Specify the pixel format
output.videoSettings =
    [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];
3.3.2 Capturing still images
The AVCaptureStillImageOutput class captures still images.
Pixel and Encoding Formats
Different devices support different image formats:
Device             Formats
iPhone 3GS         yuvs, 2vuy, BGRA, jpeg
iPhone 4 (Back)    420f, 420v, BGRA, jpeg
iPhone 4 (Front)   420f, 420v, BGRA, jpeg
You can specify the format you want to capture; the code below requests JPEG images:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG};
[stillImageOutput setOutputSettings:outputSettings];
If you use the JPEG image format, you should not specify any additional compression; the output compresses the image automatically, and that compression is hardware accelerated. When you need the image data, call jpegStillImageNSDataRepresentation: to get the corresponding NSData; this method does not re-compress the image.
jpegStillImageNSDataRepresentation:
Returns an NSData representation of a still image data and metadata attachments in a JPEG sample buffer.
+ (NSData *)jpegStillImageNSDataRepresentation:(CMSampleBufferRef)jpegSampleBuffer
Parameters
jpegSampleBuffer
The sample buffer carrying JPEG image data, optionally with Exif metadata sample buffer attachments.
This method throws an NSInvalidArgumentException if jpegSampleBuffer is NULL or not in the JPEG format.
Return Value
An NSData representation of jpegSampleBuffer.
Discussion
This method merges the image data and Exif metadata sample buffer attachments without re-compressing the image.
The returned NSData object is suitable for writing to disk.
Capturing an Image
When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}
The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer containing the image data, and an error. The sample buffer itself may contain metadata, such as an Exif dictionary, as an attachment. You can modify the attachments should you want, but note the optimization for JPEG images discussed in “Pixel and Encoding Formats.”
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
    ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments =
            CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments.
        }
        // Continue as appropriate.
    }];
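The text above stops at reading the Exif attachments. A hedged extension of the same completion handler (assuming the JPEG output settings shown in 3.3.2) that pulls the data out with jpegStillImageNSDataRepresentation: might look like this:

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
    ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (imageSampleBuffer == NULL) {
            return;
        }
        // No re-compression happens here; the buffer already holds JPEG data.
        NSData *jpegData =
            [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *stillImage = [[UIImage alloc] initWithData:jpegData];
        // Use stillImage, or write jpegData to disk, as appropriate.
    }];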
5 Show the current recording state to the user
5.1 Recording preview
AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];
Video Gravity Modes
The preview layer supports three gravity modes that you set using videoGravity:
AVLayerVideoGravityResizeAspect: This preserves the aspect ratio, leaving black bars where the video does not fill the available screen area.
AVLayerVideoGravityResizeAspectFill: This preserves the aspect ratio, but fills the available screen area, cropping the video when necessary.
AVLayerVideoGravityResize: This simply stretches the video to fill the available screen area, even if doing so distorts the image.
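For example (a short sketch reusing the captureVideoPreviewLayer and viewLayer from 5.1):

// Fill the view, cropping the video edges if the aspect ratios differ.
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = viewLayer.bounds;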
Below is a complete walkthrough.
Putting it all Together: Capturing Video Frames as UIImage Objects
This brief code example illustrates how you can capture video and convert the frames you get to UIImage objects. It shows you how to:
Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output
Find the AVCaptureDevice object for the input type you want
Create an AVCaptureDeviceInput object for the device
Create an AVCaptureVideoDataOutput object to produce video frames
Implement a delegate for the AVCaptureVideoDataOutput object to process video frames
Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object
Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.
Create and Configure a Capture Session
You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium-resolution video frames.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
Create and Configure the Device and Device Input
Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration.
Find a video capture device, then create a device input with the device and add it to the session.
AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];
Create and Configure the Data Output
You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property, and cap the frame rate by setting the minFrameDuration property.
Create and configure an output for video data, and cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second:
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings =
        @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
output.minFrameDuration = CMTimeMake(1, 15);
The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output's delegate, you must also provide a queue on which callbacks should be invoked.
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
You use the queue to modify the priority given to delivering and processing the video frames.
Implement the Sample Buffer Delegate Method
In the delegate class, implement the method (captureOutput:didOutputSampleBuffer:fromConnection:) that is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffers, so you need to convert from the CMSampleBuffer to a UIImage object. The function for this operation is shown below.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image.
}
Remember that the delegate method is invoked on the queue you specified; if you want to update the user interface, you must invoke any relevant code on the main thread.
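A minimal sketch of that main-thread hop, using GCD instead of the performSelectorOnMainThread: approach the author uses further down, and reusing the imageFromSampleBuffer: method defined later (`self.imageView` is a hypothetical property, not part of the original code):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit must only be touched on the main thread.
        self.imageView.image = image;   // hypothetical UIImageView property
    });
}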
Starting and Stopping Recording
After configuring the capture session, you send it a startRunning message to start the recording.
[session startRunning];
To stop recording, you send the session a stopRunning message.
One issue came up while running this demo: in - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection, after grabbing a frame and converting it to a UIImage, the image would never display once passed out of the callback. After some investigation, converting it to NSData first and passing that out worked fine, which is worth pointing out.
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice
                               defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
    [NSDictionary dictionaryWithObject:
     [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Add the on-screen preview
    AVCaptureVideoPreviewLayer *previewLayer = nil;
    previewLayer = [[[AVCaptureVideoPreviewLayer alloc] initWithSession:session] autorelease];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    CGRect layerRect = [[[self view] layer] bounds];
    [previewLayer setBounds:layerRect];
    [previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
    [[[self view] layer] addSublayer:previewLayer];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    //    output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    sessionGlobal = session;
    // Assign session to an ivar.
    //  [self setSession:session];

    isCapture = FALSE;

    UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 300, 300)];
    v.backgroundColor = [UIColor blueColor];
    v.layer.masksToBounds = YES;
    v1 = [v retain];
    [self.view addSubview:v];
    // [v release];

    start = [[NSDate date] timeIntervalSince1970];
    before = start;
    num = 0;
}
- (NSTimeInterval)getTimeFromStart
{
    NSDate *dat = [NSDate dateWithTimeIntervalSinceNow:0];
    NSTimeInterval now = [dat timeIntervalSince1970]*1;
    NSTimeInterval b = now - start;
    return b;
}
- (void)showImage:(NSData *)topImageData
{
    if (num > 5)
    {
        [sessionGlobal stopRunning];
        return;
    }
    num++;
    NSString *numStr = [NSString stringWithFormat:@"%d.jpg", num];
    NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:numStr];
    NSLog(@"PATH : %@", path);
    [topImageData writeToFile:path atomically:YES];

    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    imageView.layer.masksToBounds = YES;
    imageView.backgroundColor = [UIColor redColor];
    UIImage *img = [[UIImage alloc] initWithData:topImageData];
    imageView.image = img;
    [img release];
    [self.view addSubview:imageView];
    [imageView release];
    [self.view setNeedsDisplay];
    //    [v1 setNeedsDisplay];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSDate *dat = [NSDate dateWithTimeIntervalSinceNow:0];
    NSTimeInterval now = [dat timeIntervalSince1970]*1;
    NSLog(@" before: %f  num: %f", before, now - before);
    if ((now - before) > 5)
    {
        before = [[NSDate date] timeIntervalSince1970];
        // Create a UIImage from the sample buffer data
        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
        if (image != nil)
        {
            // NSTimeInterval t = [self getTimeFromStart];
            NSData *topImageData = UIImageJPEGRepresentation(image, 1.0);
            [self performSelectorOnMainThread:@selector(showImage:) withObject:topImageData waitUntilDone:NO];
        }
    }
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace)
    {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        return nil;
    }
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize,
                                                              NULL);
    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage =
    CGImageCreate(width,
                  height,
                  8,
                  32,
                  bytesPerRow,
                  colorSpace,
                  kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                  provider,
                  NULL,
                  true,
                  kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}
7 Stop capturing
- (void)stopVideoCapture:(id)arg
{
    // Stop the camera capture
    if (self->avCaptureSession) {
        [self->avCaptureSession stopRunning];
        self->avCaptureSession = nil;
        [labelState setText:@"Video capture stopped"];
    }
}
Reference: the Media Capture chapter of the documentation bundled with Xcode.
A fairly complete capture example:
Two more picked at random: