How to find the RTSP URL for accessing a Hikvision camera

How to Convert an IP Camera's RTSP Network Stream into an M3U8 Stream and Embed It in a WeChat Official Account (vivianlinux's blog, CSDN)
Most IP surveillance cameras today output an RTSP network stream, and some now also output RTMP directly, which makes them easier to pair with mainstream streaming-server systems such as Adobe FMS, Wowza, or 800Li Media Server. On cost, cameras that output RTSP run roughly 100-500 RMB, so they offer good value and reach the widest audience.
Many people who buy a surveillance camera want to embed its feed in their own website. Many IPCam vendors now offer cloud live-streaming services, but to contain bandwidth costs they impose limits on bitrate, traffic, and so on. As a result, many people are looking for a surveillance live-streaming system they can embed in their own site or WeChat official account, or at least an affordable cloud service.
Let's first look at the steps needed to turn RTSP into M3U8 (a stream format that mobile browsers can play):
1. Find out which protocol your camera outputs (RTSP or RTMP).
2. Each vendor defines its own URL format for the output protocol, so check the manual or ask the vendor directly. (This is the crucial step.)
3. Find software that can convert the RTSP stream to RTMP and actively push the RTMP stream to a streaming server. (I use the 800Li external-signal adapter; you can also search, e.g. on Baidu, for other software with the same function — a generic ffmpeg sketch follows this list.)
4. Find a streaming-server system. (I recommend the 800Li live-streaming system, because it outputs an m3u8 stream URL and also generates embeddable HTML code containing the web player and the live content, which can go straight into a WeChat official account.)
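For step 3, if you don't want a dedicated adapter product, ffmpeg can serve as a generic stand-in: it pulls the RTSP stream and re-publishes it as RTMP. A minimal sketch — the camera URL reuses the Hikvision example below, while the server address and stream name are placeholders:

ffmpeg -rtsp_transport tcp -i "rtsp://admin:12345@192.0.0.64:554/h264/ch1/main/av_stream" -c copy -f flv "rtmp://your-media-server/live/cam1"

Here -c copy forwards the camera's compressed H.264 stream without re-encoding, and -rtsp_transport tcp avoids the packet loss that UDP transport sometimes shows on weak networks.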
Key points to note:
1. A camera that outputs RTMP directly can usually push straight to the streaming server; no relay software is needed to pull the stream.
2. Here are the RTSP URL formats I currently know for several camera brands:
Hikvision cameras:
rtsp://[username]:[password]@[ip]:[port]/[codec]/[channel]/[subtype]/av_stream
username: user name, e.g. admin.
password: password, e.g. 12345.
ip: the device IP, e.g. 192.0.0.64.
port: defaults to 554; it can be omitted when unchanged.
codec: one of h264, MPEG-4, mpeg4.
channel: channel number, starting from 1; channel 1 is ch1.
subtype: stream type; the main stream is main, the sub stream is sub.
For example, requesting the main stream of channel 1 with the values above:
rtsp://admin:12345@192.0.0.64:554/h264/ch1/main/av_stream
rtsp://admin:12345@192.0.0.64:554/MPEG-4/ch1/main/av_stream
and the sub stream of channel 1, omitting the default port:
rtsp://admin:12345@192.0.0.64/mpeg4/ch1/sub/av_stream
rtsp://admin:12345@192.0.0.64/h264/ch1/sub/av_stream
Dahua cameras:
rtsp://username:password@ip:port/cam/realmonitor?channel=1&subtype=0
username: user name, e.g. admin.
password: password, e.g. admin.
ip: the device IP, e.g. 10.7.8.122.
port: defaults to 554; it can be omitted when unchanged.
channel: channel number, starting from 1; channel 2 is channel=2.
subtype: stream type; the main stream is 0 (subtype=0), the sub stream is 1 (subtype=1).
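Assembled from the fields above, a concrete request for the sub stream of channel 2 on the example device would look like this (illustrative values only, not from a real deployment):

rtsp://admin:admin@10.7.8.122:554/cam/realmonitor?channel=2&subtype=1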
Foscam cameras:
rtsp://admin:fulinoil@59.127.79.88:88/videoMain
(with username and password)
rtsp://59.127.79.88:88/videoMain
(without username and password)
Other brands:
宏视 surveillance cameras:
rtsp://0.0.0.0/live/ch00_0
中维世纪 (Jovision) surveillance cameras:
rtsp://0.0.0.0:8554/live1.264
(sub stream)
rtsp://0.0.0.0:8554/live0.264
(main stream)
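Whatever the brand, it pays to verify a candidate URL before wiring it into a conversion pipeline. ffplay (bundled with ffmpeg) opens RTSP URLs directly; the URL below is just the Hikvision example from above:

ffplay -rtsp_transport tcp "rtsp://admin:12345@192.0.0.64:554/h264/ch1/main/av_stream"

VLC's Media > Open Network Stream dialog accepts the same URL if you prefer a GUI.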
A related Q&A: given an RTSP URL, how can the ffmpeg.exe tool receive it? I want to use it for decoding — I can see decoding support in the code, but how do I invoke it?
Suggested answer: see my summary [总结]FFMPEG视音频编解码零基础学习方法 (FFmpeg audio/video encoding and decoding, learning from zero).
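For most purposes no custom code is needed to receive the stream: the ffmpeg command line accepts an RTSP URL as a normal input. A minimal sketch — the URL is the earlier Hikvision example, and the output file name is arbitrary:

ffmpeg -i "rtsp://admin:12345@192.0.0.64:554/h264/ch1/main/av_stream" -t 10 -c copy capture.mp4

This records ten seconds into an MP4 without re-encoding; substitute an encoder (e.g. -c:v libx264) for -c copy if you need the frames actually decoded and re-encoded.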
Background: RTSP (Real Time Streaming Protocol) is an application-level protocol for controlling the delivery of data with real-time properties. As RFC 2326 puts it, RTSP provides an extensible framework to enable the controlled, on-demand delivery of real-time data such as audio and video. The media itself normally travels over RTP; RTSP handles session setup and playback control, and its default port is 554.
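To make "controls the delivery" concrete: an RTSP session is a short, HTTP-like text exchange. An illustrative fragment following RFC 2326 (the URL is the Hikvision example from earlier; response headers abbreviated):

OPTIONS rtsp://192.0.0.64:554/h264/ch1/main/av_stream RTSP/1.0
CSeq: 1

RTSP/1.0 200 OK
CSeq: 1
Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE

The client then issues DESCRIBE to fetch the SDP media description, SETUP to open the RTP transport, and PLAY to start delivery.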
Video Surveillance and Live Streaming: an RTSP Media Server Based on OpenCV, libx264, and live555 (zc301P camera)
My month-long, step-by-step learning journey is recorded in my earlier posts; now the final step has finally arrived.
Without further ado, here is the code. If something is unclear, look at my earlier posts for the step-by-step process; if it is still unclear, ask in the comments.
If anything is wrong or could be done better, I'd appreciate suggestions from readers.
#ifndef _DYNAMIC_RTSP_SERVER_HH
#define _DYNAMIC_RTSP_SERVER_HH

#ifndef _RTSP_SERVER_SUPPORTING_HTTP_STREAMING_HH
#include <RTSPServerSupportingHTTPStreaming.hh>
#endif

// An RTSP server that creates a live H.264 session on demand for any
// requested stream name ending in ".264".
class DynamicRTSPServer: public RTSPServerSupportingHTTPStreaming {
public:
  static DynamicRTSPServer* createNew(UsageEnvironment& env, Port ourPort,
                                      UserAuthenticationDatabase* authDatabase,
                                      unsigned reclamationTestSeconds = 65);

protected:
  DynamicRTSPServer(UsageEnvironment& env, int ourSocket, Port ourPort,
                    UserAuthenticationDatabase* authDatabase, unsigned reclamationTestSeconds);
  // called only by createNew();
  virtual ~DynamicRTSPServer();

protected: // redefined virtual functions
  virtual ServerMediaSession*
  lookupServerMediaSession(char const* streamName, Boolean isFirstLookupInSession);
};

#endif

DynamicRTSPServer.hh
#include "DynamicRTSPServer.hh"
#include "H264LiveVideoServerMediaSubssion.hh"
#include &liveMedia.hh&
#include &string.h&
DynamicRTSPServer* DynamicRTSPServer::createNew(
UsageEnvironment& env, Port ourPort,
UserAuthenticationDatabase* authDatabase,
unsigned reclamationTestSeconds)
int ourSocket = setUpOurSocket(env, ourPort);
if (ourSocket == -1) return NULL;
return new DynamicRTSPServer(env, ourSocket, ourPort, authDatabase, reclamationTestSeconds);
DynamicRTSPServer::DynamicRTSPServer(UsageEnvironment& env, int ourSocket, Port ourPort,
UserAuthenticationDatabase* authDatabase, unsigned reclamationTestSeconds)
: RTSPServerSupportingHTTPStreaming(env, ourSocket, ourPort, authDatabase, reclamationTestSeconds) {}
DynamicRTSPServer::~DynamicRTSPServer() {}
static ServerMediaSession* createNewSMS(UsageEnvironment& env, char const* fileName/*, FILE* fid*/); // forward
ServerMediaSession* DynamicRTSPServer::lookupServerMediaSession(char const* streamName, Boolean isFirstLookupInSession)
// Next, check whether we already have a "ServerMediaSession" for this file:
ServerMediaSession* sms = RTSPServer::lookupServerMediaSession(streamName);
Boolean smsExists = sms != NULL;
// Handle the four possibilities for "fileExists" and "smsExists":
if (smsExists && isFirstLookupInSession)
// Remove the existing "ServerMediaSession" and create a new one, in case the underlying
// file has changed in some way:
removeServerMediaSession(sms);
sms = NULL;
if (sms == NULL)
sms = createNewSMS(envir(), streamName/*, fid*/);
addServerMediaSession(sms);
static ServerMediaSession* createNewSMS(UsageEnvironment& env, char const* fileName/*, FILE* fid*/)
// Use the file name extension to determine the type of "ServerMediaSession":
char const* extension = strrchr(fileName, ‘.‘);
if (extension == NULL) return NULL;
ServerMediaSession* sms = NULL;
Boolean const reuseSource = F
if (strcmp(extension, ".264") == 0) {
// Assumed to be a H.264 Video Elementary Stream file:
char const* descStr = "H.264 Video, streamed by the LIVE555 Media Server";
sms = ServerMediaSession::createNew(env, fileName, fileName, descStr);
OutPacketBuffer::maxSize = 100000; // allow for some possibly large H.264 frames
sms-&addSubsession(H264LiveVideoServerMediaSubssion::createNew(env, fileName, reuseSource));
DynamicRTSPServer.cpp
#include <BasicUsageEnvironment.hh>
#include "DynamicRTSPServer.hh"
#include "H264FramedLiveSource.hh"
#include <opencv/highgui.h>

//"version"
#ifndef _MEDIA_SERVER_VERSION_HH
#define _MEDIA_SERVER_VERSION_HH
#define MEDIA_SERVER_VERSION_STRING "0.85"
#endif

Cameras Camera; // global camera + encoder instance; declared extern in H264FramedLiveSource.cpp

int main(int argc, char** argv) {
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  UserAuthenticationDatabase* authDB = NULL;
#ifdef ACCESS_CONTROL
  // To implement client access control to the RTSP server, do the following:
  authDB = new UserAuthenticationDatabase;
  authDB->addUserRecord("username1", "password1"); // replace these with real strings
  // Repeat the above with each <username>, <password> that you wish to allow
  // access to the server.
#endif

  // Create the RTSP server.  Try first with the default port number (554),
  // and then with the alternative port number (8554):
  RTSPServer* rtspServer;
  portNumBits rtspServerPortNum = 554;
  Camera.Init(); // open the camera and set up the x264 encoder before serving
  rtspServer = DynamicRTSPServer::createNew(*env, rtspServerPortNum, authDB);
  if (rtspServer == NULL) {
    rtspServerPortNum = 8554;
    rtspServer = DynamicRTSPServer::createNew(*env, rtspServerPortNum, authDB);
  }
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }

  *env << "LIVE555 Media Server\n";
  *env << "\tversion " << MEDIA_SERVER_VERSION_STRING
       << " (LIVE555 Streaming Media library version "
       << LIVEMEDIA_LIBRARY_VERSION_STRING << ").\n";

  char* urlPrefix = rtspServer->rtspURLPrefix();
  *env << "Play streams from this server using the URL\n\t"
       << urlPrefix << "<filename>\nwhere <filename> is a file present in the current directory.\n";
  *env << "Each file's type is inferred from its name suffix:\n";
  *env << "\t\".264\" => a H.264 Video Elementary Stream file\n";

  // Also, attempt to create a HTTP server for RTSP-over-HTTP tunneling.
  // Try first with the default HTTP port (80), and then with the alternative HTTP
  // port numbers (8000 and 8080).
  if (rtspServer->setUpTunnelingOverHTTP(80) || rtspServer->setUpTunnelingOverHTTP(8000) || rtspServer->setUpTunnelingOverHTTP(8080)) {
    *env << "(We use port " << rtspServer->httpServerPortNum() << " for optional RTSP-over-HTTP tunneling, or for HTTP live streaming (for indexed Transport Stream files only).)\n";
  } else {
    *env << "(RTSP-over-HTTP tunneling is not available.)\n";
  }

  env->taskScheduler().doEventLoop(); // does not return

  Camera.Destory();
  return 0; // only to prevent compiler warning
}

live555MediaServer.cpp
#ifndef _H264_LIVE_VIDEO_SERVER_MEDIA_SUBSESSION_HH
#define _H264_LIVE_VIDEO_SERVER_MEDIA_SUBSESSION_HH

#include <liveMedia/H264VideoFileServerMediaSubsession.hh>
#include <UsageEnvironment/UsageEnvironment.hh>

// Reuses H264VideoFileServerMediaSubsession's framing logic but replaces the
// file source with a live, camera-fed source.
class H264LiveVideoServerMediaSubssion : public H264VideoFileServerMediaSubsession {
public:
  static H264LiveVideoServerMediaSubssion*
  createNew(UsageEnvironment& env,
            char const* fileName,
            Boolean reuseFirstSource);

protected:
  H264LiveVideoServerMediaSubssion(UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource);
  ~H264LiveVideoServerMediaSubssion();

protected:
  FramedSource* createNewStreamSource(unsigned clientSessionId,
                                      unsigned& estBitrate);
  char fFileName[100];
};

#endif

H264LiveVideoServerMediaSubssion.hh
#include "H264LiveVideoServerMediaSubssion.hh"
#include "H264FramedLiveSource.hh"
#include <H264VideoStreamFramer.hh>
#include <string.h>

H264LiveVideoServerMediaSubssion* H264LiveVideoServerMediaSubssion::createNew(UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource)
{
  return new H264LiveVideoServerMediaSubssion(env, fileName, reuseFirstSource);
}

H264LiveVideoServerMediaSubssion::H264LiveVideoServerMediaSubssion(UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource)
  : H264VideoFileServerMediaSubsession(env, fileName, reuseFirstSource)
{
  strcpy(fFileName, fileName);
}

H264LiveVideoServerMediaSubssion::~H264LiveVideoServerMediaSubssion()
{
}

FramedSource* H264LiveVideoServerMediaSubssion::createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate)
{
  estBitrate = 1000; // kbps, estimate

  // Create the live camera source, then wrap it in a framer that parses NAL units:
  H264FramedLiveSource* liveSource = H264FramedLiveSource::createNew(envir(), fFileName);
  if (liveSource == NULL)
    return NULL;
  return H264VideoStreamFramer::createNew(envir(), liveSource);
}

H264LiveVideoServerMediaSubssion.cpp
#ifndef _H264FRAMEDLIVESOURCE_HH
#define _H264FRAMEDLIVESOURCE_HH

#include <FramedSource.hh>
#include <UsageEnvironment.hh>
#include <opencv/highgui.h>
extern "C"
{
#include "encoder.h"
}

// A FramedSource that delivers H.264 NAL units freshly encoded from the camera.
class H264FramedLiveSource : public FramedSource
{
public:
  static H264FramedLiveSource* createNew(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize = 0, unsigned playTimePerFrame = 0);
  x264_nal_t *my_nal;

protected:
  H264FramedLiveSource(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize, unsigned playTimePerFrame); // called only by createNew()
  ~H264FramedLiveSource();

  // redefined virtual functions:
  virtual void doGetNextFrame();
  int TransportData(unsigned char* to, unsigned maxSize);
};

// Wraps the OpenCV capture device and the x264 encoder state.
class Cameras
{
public:
  void Init();
  void GetNextFrame();
  void Destory();

  CvCapture *cap;
  my_x264_encoder *encoder;
  x264_picture_t pic_out;
  int n_nal;
  IplImage *img;
  unsigned char *RGB1;
};

#endif

H264FramedLiveSource.hh
#include "H264FramedLiveSource.hh"
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>

extern class Cameras Camera; // defined in live555MediaServer.cpp

#define WIDTH 320
#define HEIGHT 240
#define widthStep 960 // bytes per row of the captured BGR image (WIDTH * 3)

#define ENCODER_TUNE       "zerolatency"
#define ENCODER_PROFILE    "baseline"
#define ENCODER_PRESET     "veryfast"
#define ENCODER_COLORSPACE X264_CSP_I420

#define CLEAR(x) (memset((&x),0,sizeof(x)))

void Convert(unsigned char *RGB, unsigned char *YUV, unsigned int width, unsigned int height);

H264FramedLiveSource::H264FramedLiveSource(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize, unsigned playTimePerFrame) : FramedSource(env)
{
  //fp = fopen(fileName, "rb");
}

H264FramedLiveSource* H264FramedLiveSource::createNew(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize /*= 0*/, unsigned playTimePerFrame /*= 0*/)
{
  H264FramedLiveSource* newSource = new H264FramedLiveSource(env, fileName, preferredFrameSize, playTimePerFrame);
  return newSource;
}

H264FramedLiveSource::~H264FramedLiveSource()
{
  //fclose(fp);
}

void H264FramedLiveSource::doGetNextFrame()
{
  fFrameSize = 0;
  // For some reason the picture looks slightly smoother when a couple of
  // frames are encoded and sent together; maybe it's just my imagination.
  for (int i = 0; i < 2; i++)
  {
    Camera.GetNextFrame();
    // Copy every NAL unit produced by this encode into the output buffer:
    for (my_nal = Camera.encoder->nal; my_nal < Camera.encoder->nal + Camera.n_nal; ++my_nal) {
      memmove((unsigned char*)fTo + fFrameSize, my_nal->p_payload, my_nal->i_payload);
      fFrameSize += my_nal->i_payload;
    }
  }
  nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
      (TaskFunc*)FramedSource::afterGetting, this); // run afterGetting after a 0-second delay
}

void Cameras::Init()
{
  int ret;

  // Open the first camera:
  cap = cvCreateCameraCapture(0);
  if (!cap) {
    fprintf(stderr, "Can not open camera1.\n");
    exit(EXIT_FAILURE);
  }
  cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, WIDTH);
  cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, HEIGHT);

  encoder = (my_x264_encoder *)malloc(sizeof(my_x264_encoder));
  if (!encoder){
    printf("cannot malloc my_x264_encoder !\n");
    exit(EXIT_FAILURE);
  }
  CLEAR(*encoder);

  strcpy(encoder->parameter_preset, ENCODER_PRESET);
  strcpy(encoder->parameter_tune, ENCODER_TUNE);

  encoder->x264_parameter = (x264_param_t *)malloc(sizeof(x264_param_t));
  if (!encoder->x264_parameter){
    printf("malloc x264_parameter error!\n");
    exit(EXIT_FAILURE);
  }

  /* initialize the encoder parameters */
  CLEAR(*(encoder->x264_parameter));
  x264_param_default(encoder->x264_parameter);
  if ((ret = x264_param_default_preset(encoder->x264_parameter, encoder->parameter_preset, encoder->parameter_tune)) < 0){
    printf("x264_param_default_preset error!\n");
    exit(EXIT_FAILURE);
  }

  /* threading: auto lookahead threads, so a drained buffer can be reused without deadlock */
  encoder->x264_parameter->i_threads = X264_SYNC_LOOKAHEAD_AUTO;

  /* video options */
  encoder->x264_parameter->i_width = WIDTH;   // width of the frames to encode
  encoder->x264_parameter->i_height = HEIGHT; // height of the frames to encode
  encoder->x264_parameter->i_frame_total = 0; // total frames to encode; use 0 when unknown
  encoder->x264_parameter->i_keyint_max = 25;

  /* stream parameters */
  encoder->x264_parameter->i_bframe = 5;
  encoder->x264_parameter->b_open_gop = 0;
  encoder->x264_parameter->i_bframe_pyramid = 0;
  encoder->x264_parameter->i_bframe_adaptive = X264_B_ADAPT_TRELLIS;

  /* log level; comment this out if you don't need encoder diagnostics */
  encoder->x264_parameter->i_log_level = X264_LOG_DEBUG;

  encoder->x264_parameter->i_fps_num = 25; // fps numerator
  encoder->x264_parameter->i_fps_den = 1;  // fps denominator
  encoder->x264_parameter->b_intra_refresh = 1;
  encoder->x264_parameter->b_annexb = 1;   // Annex-B start codes, as the live555 framer expects

  strcpy(encoder->parameter_profile, ENCODER_PROFILE);
  if ((ret = x264_param_apply_profile(encoder->x264_parameter, encoder->parameter_profile)) < 0){
    printf("x264_param_apply_profile error!\n");
    exit(EXIT_FAILURE);
  }

  /* open the encoder */
  encoder->x264_encoder = x264_encoder_open(encoder->x264_parameter);
  encoder->colorspace = ENCODER_COLORSPACE;

  /* initialize the input picture */
  encoder->yuv420p_picture = (x264_picture_t *)malloc(sizeof(x264_picture_t));
  if (!encoder->yuv420p_picture){
    printf("malloc encoder->yuv420p_picture error!\n");
    exit(EXIT_FAILURE);
  }
  if ((ret = x264_picture_alloc(encoder->yuv420p_picture, encoder->colorspace, WIDTH, HEIGHT)) < 0){
    printf("ret=%d\n", ret);
    printf("x264_picture_alloc error!\n");
    exit(EXIT_FAILURE);
  }
  encoder->yuv420p_picture->img.i_csp = encoder->colorspace;
  encoder->yuv420p_picture->img.i_plane = 3;
  encoder->yuv420p_picture->i_type = X264_TYPE_AUTO;

  /* allocate the YUV buffer (I420: 3/2 bytes per pixel) */
  encoder->yuv = (uint8_t *)malloc(WIDTH*HEIGHT * 3 / 2);
  if (!encoder->yuv){
    printf("malloc yuv error!\n");
    exit(EXIT_FAILURE);
  }
  memset(encoder->yuv, 0, WIDTH*HEIGHT * 3 / 2);
  // point the picture's planes at our own contiguous buffer:
  encoder->yuv420p_picture->img.plane[0] = encoder->yuv;
  encoder->yuv420p_picture->img.plane[1] = encoder->yuv + WIDTH*HEIGHT;
  encoder->yuv420p_picture->img.plane[2] = encoder->yuv + WIDTH*HEIGHT + WIDTH*HEIGHT / 4;

  n_nal = 0;
  encoder->nal = (x264_nal_t *)calloc(2, sizeof(x264_nal_t));
  if (!encoder->nal){
    printf("malloc x264_nal_t error!\n");
    exit(EXIT_FAILURE);
  }
  CLEAR(*(encoder->nal));

  RGB1 = (unsigned char *)malloc(HEIGHT * WIDTH * 3);
}

void Cameras::GetNextFrame()
{
  img = cvQueryFrame(cap);

  // OpenCV captures BGR; swap to RGB while copying into the packed buffer:
  for (int i = 0; i < HEIGHT; i++)
    for (int j = 0; j < WIDTH; j++)
    {
      RGB1[(i*WIDTH + j) * 3]     = img->imageData[i * widthStep + j * 3 + 2];
      RGB1[(i*WIDTH + j) * 3 + 1] = img->imageData[i * widthStep + j * 3 + 1];
      RGB1[(i*WIDTH + j) * 3 + 2] = img->imageData[i * widthStep + j * 3];
    }
  Convert(RGB1, encoder->yuv, WIDTH, HEIGHT);
  encoder->yuv420p_picture->i_pts++;

  if (x264_encoder_encode(encoder->x264_encoder, &encoder->nal, &n_nal, encoder->yuv420p_picture, &pic_out) < 0){
    printf("x264_encoder_encode error!\n");
    exit(EXIT_FAILURE);
  }
  /*for (my_nal = encoder->nal; my_nal < encoder->nal + n_nal; ++my_nal){
    write(fd_write, my_nal->p_payload, my_nal->i_payload);
  }*/
}

void Cameras::Destory()
{
  free(RGB1);
  cvReleaseCapture(&cap);
  free(encoder->yuv);
  free(encoder->yuv420p_picture);
  free(encoder->x264_parameter);
  x264_encoder_close(encoder->x264_encoder);
  free(encoder);
}

H264FramedLiveSource.cpp
#ifndef _ENCODER_H
#define _ENCODER_H

#include <stdint.h>
#include <x264.h>

// Bundles everything the x264 encoder needs for one session.
typedef struct my_x264_encoder{
  x264_param_t *x264_parameter;
  char parameter_preset[20];
  char parameter_tune[20];
  char parameter_profile[20];
  x264_t *x264_encoder;
  int colorspace;
  x264_picture_t *yuv420p_picture;
  uint8_t *yuv;
  x264_nal_t *nal;
} my_x264_encoder;

#endif

encoder.h
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <iostream>

// RGB -> YUV conversion matrix (full-range BT.601 coefficients)
#define MY(a,b,c) (( a*0.2989 + b*0.5866 + c*0.1145))
#define MU(a,b,c) (( a*(-0.1688) + b*(-0.3312) + c*0.5000 + 128))
#define MV(a,b,c) (( a*0.5000 + b*(-0.4184) + c*(-0.0816) + 128))
// video-range (16-235) alternative:
//#define MY(a,b,c) (( a*0.257 + b*0.504 + c*0.098 + 16))
//#define MU(a,b,c) (( a*(-0.148) + b*(-0.291) + c*0.439 + 128))
//#define MV(a,b,c) (( a*0.439 + b*(-0.368) + c*(-0.071) + 128))

// clamp results to [0, 255]
#define DY(a,b,c) (MY(a,b,c) > 255 ? 255 : (MY(a,b,c) < 0 ? 0 : MY(a,b,c)))
#define DU(a,b,c) (MU(a,b,c) > 255 ? 255 : (MU(a,b,c) < 0 ? 0 : MU(a,b,c)))
#define DV(a,b,c) (MV(a,b,c) > 255 ? 255 : (MV(a,b,c) < 0 ? 0 : MV(a,b,c)))
#define CLIP(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))

// RGB to YUV (I420: full-size Y plane followed by quarter-size U and V planes)
void Convert(unsigned char *RGB, unsigned char *YUV, unsigned int width, unsigned int height)
{
  unsigned int i, x, y, j;
  unsigned char *Y = NULL;
  unsigned char *U = NULL;
  unsigned char *V = NULL;

  Y = YUV;
  U = YUV + width*height;
  V = U + ((width*height) >> 2);

  for (y = 0; y < height; y++)
    for (x = 0; x < width; x++)
    {
      j = y*width + x;
      i = j * 3;
      Y[j] = (unsigned char)(DY(RGB[i], RGB[i + 1], RGB[i + 2]));
      // subsample U and V: average each 2x2 block of pixels
      if (x % 2 == 1 && y % 2 == 1)
      {
        j = (width >> 1) * (y >> 1) + (x >> 1);
        // i from above is still valid
        U[j] = (unsigned char)
          ((DU(RGB[i], RGB[i + 1], RGB[i + 2]) +
            DU(RGB[i - 3], RGB[i - 2], RGB[i - 1]) +
            DU(RGB[i - width * 3], RGB[i + 1 - width * 3], RGB[i + 2 - width * 3]) +
            DU(RGB[i - 3 - width * 3], RGB[i - 2 - width * 3], RGB[i - 1 - width * 3])) / 4);
        V[j] = (unsigned char)
          ((DV(RGB[i], RGB[i + 1], RGB[i + 2]) +
            DV(RGB[i - 3], RGB[i - 2], RGB[i - 1]) +
            DV(RGB[i - width * 3], RGB[i + 1 - width * 3], RGB[i + 2 - width * 3]) +
            DV(RGB[i - 3 - width * 3], RGB[i - 2 - width * 3], RGB[i - 1 - width * 3])) / 4);
      }
    }
}

RGB2YUV.cpp
That is all of the code. Some of it modifies the live555 sources and some adapts other people's code, so there may be a few unused variables or steps; just ignore them — they don't affect compilation.
g++ -c *.cpp -I /usr/local/include/groupsock -I /usr/local/include/UsageEnvironment -I /usr/local/include/liveMedia -I /usr/local/include/BasicUsageEnvironment -I .
g++ *.o /usr/local/lib/libliveMedia.a /usr/local/lib/libgroupsock.a /usr/local/lib/libBasicUsageEnvironment.a /usr/local/lib/libUsageEnvironment.a /usr/local/lib/libx264.so /usr/local/lib/libopencv_highgui.so /usr/local/lib/libopencv_videoio.so /usr/lib/x86_64-linux-gnu/libx264.so.142 -ldl -lm -lpthread -ldl -g
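If you rebuild often, the two commands fold naturally into a small script. A sketch under the same assumptions (paths as above; it assumes a single libx264 install, so the duplicate .so.142 entry is dropped, and the output stays a.out):

#!/bin/sh
# build.sh - compile and link the live555/OpenCV/x264 RTSP server
set -e
g++ -c *.cpp -I /usr/local/include/groupsock -I /usr/local/include/UsageEnvironment \
    -I /usr/local/include/liveMedia -I /usr/local/include/BasicUsageEnvironment -I .
g++ *.o /usr/local/lib/libliveMedia.a /usr/local/lib/libgroupsock.a \
    /usr/local/lib/libBasicUsageEnvironment.a /usr/local/lib/libUsageEnvironment.a \
    /usr/local/lib/libx264.so /usr/local/lib/libopencv_highgui.so \
    /usr/local/lib/libopencv_videoio.so -lm -lpthread -ldl -g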
These are my build commands; because I accidentally installed two versions of libx264, both copies of the library are linked.
Switch to root and run ./a.out (binding the default RTSP port 554 requires root; without it the server falls back to port 8554).
Besides VLC, the MX Player app on a phone works even better; I always watch on my phone — connect to your own Wi-Fi and enter the stream URL.
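Whichever player you use, the URL follows the pattern the server prints at startup: rtsp://<board-ip>:554/<name>.264, where any stream name ending in .264 triggers the live camera session (see createNewSMS above). With a placeholder LAN address of 192.168.1.100:

rtsp://192.168.1.100:554/live.264

If the server could not bind port 554 and fell back, use port 8554 in the URL instead.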
The frame rate set in the code is 25 fps; the actual rate is... honestly I can't tell by eye. In any case it is quite smooth: 320x240 resolution (adjust to taste), with latency within about one second.
I have read many papers on video surveillance and live streaming; the authors either never build a server, or just push JPEG images, or reach only 6-7 fps — none of which felt up to the requirements.
Back then I expected to find suitable papers to learn from, or the right blog posts through search, but there weren't any.
Plenty of people know how to do this but simply won't share it. Spending a little time on a write-up is worth it: beginners get to learn from it, and the world only advances faster when knowledge is passed on from one generation to the next.
I hope my readers will share the knowledge they worked hard to acquire, and spread that habit around them, so that everyone learns to share.
Thanks for reading~ Please raise any suggestions or questions.
Once your tests pass, the remaining step is porting the code to your development board; the details depend on the board, but the process is not complicated.