OpenGL ES Learning 5 - Textures

1. What is a texture?

Textures in OpenGL ES 2.0 come in two forms: 2D textures and cube map textures.

2D textures: A 2D texture is the most basic and common form of texture in OpenGL ES. A 2D texture is—as you might guess—a two-dimensional array of image data. The individual data elements of a texture are known as texels. A texel is a shortened way of describing a texture pixel. Textures are typically applied to a surface by using texture coordinates that can be thought of as indices into texture array data. When rendering with a 2D texture, a texture coordinate is used as an index into the texture image.
In other words, a 2D texture is just a two-dimensional array of image data, and each piece of texture data is fetched through texture coordinates. Texture coordinates are written as (s, t) or (u, v); the bottom-left corner of the texture is the origin (0, 0) and the top-right corner is (1, 1). For example, the coordinate (0.5, 0.5) samples the center of the image.
Texture images can use several base formats in OpenGL ES 2.0: GL_RGB, GL_RGBA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, and GL_ALPHA.

2. How to use a texture

The first step in the application of textures is to create a texture object. A texture object is a container object that holds the texture data that is needed for rendering such as image data, filtering modes, and wrap modes.

  • First, create a texture object. A texture object is a container that holds the image data, filtering modes, and wrap modes needed for rendering: glGenTextures.
  • Next, bind the newly created texture object to a texture target such as GL_TEXTURE_2D. Once bound, subsequent operations on that target apply to this texture object: glBindTexture.
  • Finally, load the image data: glTexImage2D (see the sketch below).
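A minimal sketch of those three steps (width, height, and pixels are assumed to describe RGBA image data already in memory; error handling omitted):

GLuint texId;
glGenTextures(1, &texId);                      // 1. create the texture object
glBindTexture(GL_TEXTURE_2D, texId);           // 2. bind it to the GL_TEXTURE_2D target
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,        // 3. upload the image data
             width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);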

3. Example code

self.ctx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:self.ctx];
//------ 1. Configure the framebuffer and renderbuffer ---------
// 1.1 framebuffer
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
// 1.2 renderbuffer
glGenRenderbuffers(1, &colorRboId);
glBindRenderbuffer(GL_RENDERBUFFER, colorRboId);
[self.ctx renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
// 1.3 attach the renderbuffer to the framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRboId);
// 1.4 query the render width and height
GLint renderWidth = 0, renderHeight = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderHeight);
//----- 2. Prepare vertex and index data -----
glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
const GLvoid *data = NULL;
data = text2DSquare;
glBufferData(GL_ARRAY_BUFFER, sizeof(text2DSquare), text2DSquare, GL_STATIC_DRAW);
const GLvoid *indicts = NULL;
indicts = squareIndicts;
// An element array buffer must be bound before index data can be uploaded.
GLuint iboId;
glGenBuffers(1, &iboId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(squareIndicts), indicts, GL_STATIC_DRAW);
//------- 3. Set up the shader program -------
GLuint vShaderId = [self addShader:GL_VERTEX_SHADER];
GLuint fShaderId = [self addShader:GL_FRAGMENT_SHADER];
programId = glCreateProgram();
glAttachShader(programId, vShaderId);
glAttachShader(programId, fShaderId);
glBindAttribLocation(programId, 0, "a_position");
glBindAttribLocation(programId, 1, "a_texCoord");
glLinkProgram(programId);
//------ 4. Use the shader program ------
// 4.1 set the clear color
glClearColor(0.3,0.3, 0.3, 1);
// 4.2 clear the color buffer
glClear(GL_COLOR_BUFFER_BIT);
// 4.3 set the viewport
glViewport(0, 0, renderWidth, renderHeight);
// 4.4 use the shader program
glUseProgram(programId);
// 4.5 compute the transform matrices
GLuint modelViewLoc = glGetUniformLocation(programId, "u_modelViewMat4");
GLuint projectionLoc = glGetUniformLocation(programId, "u_projectionMat4");
VYTransforms *trans = self.currentTransforms;
trans.modelTransform = VYSTTransformSetPosition(trans.modelTransform, trans.PositionVec3Make(0,-0.5,-3));
trans.modelTransformMat4 = VYSTTransformMat4Make(trans.modelTransform);
trans.viewTransformMat4 = VYSTTransformMat4Make(trans.viewTransform);
trans.lookAtMat4 = VYLookAtMat4Make(trans.lookAt);
trans.perspectiveProjMat4 = VYPerspectivePerspectiveMat4Make(trans.perspectiveProj);
trans.baseCamera = VYCameraMake(trans.lookAtMat4, trans.perspectiveProjMat4);
trans.baseCameraMat4 = VYCameraMat4Make(trans.baseCamera);
trans.mvpTransfrom = VYMVPTransformMake(trans.modelTransformMat4, trans.viewTransformMat4, trans.baseCameraMat4);
// 4.6 assign the uniform variables in the shader
VYMVPTransform mvp = trans.mvpTransfrom;
glUniformMatrix4fv(modelViewLoc, 1, GL_FALSE, VYMVPTransformModelViewMat4Make(mvp).m);
glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, mvp.cameraProjMat4.m);
// 4.7 assign the attribute variables
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(ZYVextex), (const GLvoid*)offsetof(ZYVextex, postion));
// GLfloat *d = text2DSquare;
// glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(ZYVextex),d);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(ZYVextex), (const GLvoid *)offsetof(ZYVextex, texCoord));
// d = d + 3;
// glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(ZYVextex),d);
//------ 5. Create the texture -----
// Note: glEnable(GL_TEXTURE_2D) does not exist in OpenGL ES 2.0;
// whether a texture is sampled is controlled entirely by the shader.
glGenTextures(1, &textureId);
glActiveTexture(GL_TEXTURE0);            // select texture unit 0
glBindTexture(GL_TEXTURE_2D, textureId); // bind the new texture to the 2D target
// 5.1 assign the sampler uniform
GLuint textureSourceLoc = glGetUniformLocation(programId, "us2d_texture");
glUniform1i(textureSourceLoc, 0);
// 5.2 load the texture data
[self p_loadTextureImg:@"512_512" completion:^(NSMutableData *data, size_t newWidth, size_t newHeight) {
    // the sixth argument (border) must be 0 in OpenGL ES
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)newWidth, (GLsizei)newHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, data.bytes);
}];
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//----- 6. Draw ------
glBindRenderbuffer(GL_RENDERBUFFER, colorRboId);
// sizeof must be applied to the index array itself, not to a pointer; with an
// element array buffer bound, the last argument is a byte offset into it.
glDrawElements(GL_TRIANGLES, sizeof(squareIndicts)/sizeof(squareIndicts[0]), GL_UNSIGNED_BYTE, 0);
// glDrawArrays(GL_TRIANGLES, 0, 4);
[self.ctx presentRenderbuffer:GL_RENDERBUFFER];

Fragment shader code:

uniform sampler2D us2d_texture;
varying highp vec2 v_texCoord;
void main(void){
    gl_FragColor = texture2D(us2d_texture, v_texCoord);
}

GLuint textureSourceLoc = glGetUniformLocation(programId, "us2d_texture"); fetches the location of the sampler uniform in the shader.
glUniform1i(textureSourceLoc, 0); binds that sampler to texture unit 0.
glActiveTexture(GL_TEXTURE0); activates texture unit 0.
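The matching vertex shader is not shown in the post; a plausible sketch, with the names taken from the glBindAttribLocation and glGetUniformLocation calls above:

attribute vec4 a_position;
attribute vec2 a_texCoord;
uniform mat4 u_modelViewMat4;
uniform mat4 u_projectionMat4;
varying vec2 v_texCoord;
void main(void){
    v_texCoord = a_texCoord;                                   // pass the texture coordinate through
    gl_Position = u_projectionMat4 * u_modelViewMat4 * a_position;
}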

Reference: OpenGL ES 2.0 (iOS) 06-1: 基础纹理 - 简书

OpenGL ES Learning 4 - Cube

This time we implement a simple rotating cube, following the version by xxx.

- (void)prepare{
// 1. Basic setup
EAGLContext *ctx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:ctx];
self.ctx = ctx;
glDeleteFramebuffers(1, &fboId);
glDeleteBuffers(1, &colorRboId);
glDeleteBuffers(1, &depthRboId);
fboId = colorRboId = depthRboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &colorRboId);
glBindRenderbuffer(GL_RENDERBUFFER, colorRboId);
[self.ctx renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
GLint w,h;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &w);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &h);
self.renderSize = CGSizeMake(w, h);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRboId);
[self chkFBOStatus];
// --- depth render buffer
glGenRenderbuffers(1, &depthRboId);
glBindRenderbuffer(GL_RENDERBUFFER, depthRboId);
// Once a renderbuffer object is bound, we can specify the dimensions and format of the image stored in the renderbuffer.
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, self.renderSize.width, self.renderSize.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRboId);
[self chkFBOStatus];
// 2. Set the clear color
glClearColor(0.423, 0.43, 0.87, 1);
// 3. Set the vertex data and index data
glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
//The vertex array data or element array data storage is created and initialized using the glBufferData command.
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// An element array buffer must be bound before index data can be uploaded.
GLuint iboId;
glGenBuffers(1, &iboId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// 4. Shader setup
[self setShader];
projUnif = glGetUniformLocation(programId, "u_Projection");
modelUnif = glGetUniformLocation(programId, "u_ModelView");
// 5. Point the shader attributes at the vertex data
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(VFVertex), (const GLvoid *)offsetof(VFVertex, position));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(VFVertex), (const GLvoid *)offsetof(VFVertex, color));
}
- (void)drawAndRender{
// 1. use program
glUseProgram(programId);
// 2. transform
//FIXME: not sure what these parameter values mean or how they were derived
self.modelPostion = GLKVector3Make(0, -0.5, -5);
[self tansform];
glViewport(0, 0, self.renderSize.width, self.renderSize.height);
//FIXME: not sure why this is set (glDepthRangef maps NDC depth into window depth; 0..1 is the default)
glDepthRangef(0, 1);
// 3. clear the old buffer contents
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// 4. enable depth testing and face culling
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
// 5. draw
glBindRenderbuffer(GL_RENDERBUFFER, colorRboId);
//The glBindBuffer command is used to make a buffer object the current array buffer object or the current element array buffer object.
// glBindBuffer(GL_ARRAY_BUFFER, vboId);
// the indices live in the element array buffer bound in -prepare; the last argument is a byte offset
glDrawElements(GL_TRIANGLES, sizeof(indices)/sizeof(indices[0]), GL_UNSIGNED_BYTE, 0);
// 6. present
[self.ctx presentRenderbuffer:GL_RENDERBUFFER];
}

Remaining questions

  1. How do the coordinate-space transforms work?

  2. Why is the shader code written the way it is?

OpenGL ES Learning 3 - FBO and RBO

1. Framebuffer

1.1 Why do we need FBOs?

The framebuffer provided by the window system is not suitable for every situation. For example:
many applications need to render to a texture, and for this using the window system provided framebuffer as your drawing surface is usually not an ideal option. Examples of where render to texture is useful are dynamic reflections and environment-mapping, multipass techniques for depth-of-field, motion blur effects, and post-processing effects.

1.2 What is an FBO?

A framebuffer object (often referred to as an FBO) is a collection of color, depth, and stencil buffer attachment points; state that describes properties such as the size and format of the color, depth, and stencil buffers attached to the FBO; and the names of the texture and renderbuffer objects attached to the FBO. Various 2D images can be attached to the color attachment point in the framebuffer object. These include a renderbuffer object that stores color values, a mip-level of a 2D texture or a cubemap face, or even a mip-level of a 2D slice in a 3D texture. Similarly, various 2D images containing depth values can be attached to the depth attachment point of an FBO. These can include a renderbuffer, a mip-level of a 2D texture or a cubemap face that stores depth values. The only 2D image that can be attached to the stencil attachment point of an FBO is a renderbuffer object that stores stencil values.

  • The FBO API supports the following operations:

• Creating and using multiple framebuffer objects within a single EGL context; that is, without requiring a rendering context per framebuffer.
• Creating off-screen color, depth, or stencil renderbuffers and textures, and attaching these to a framebuffer object.
• Sharing color, depth or stencil buffers across multiple framebuffers.
• Attaching textures directly to a framebuffer as color or depth and avoiding the need to do a copy operation.

1.3 How to use an FBO

  1. Creation: glGenFramebuffers
void glGenFramebuffers(GLsizei n, GLuint *ids)
n:number of framebuffer object names to return
ids:pointer to an array of n entries, where allocated framebuffer objects are returned

The returned ids are greater than 0; a value of 0 indicates failure.

  2. glBindFramebuffer
void glBindFramebuffer(GLenum target, GLuint framebuffer)
target:must be set to GL_FRAMEBUFFER
framebuffer:framebuffer object name

Before using a framebuffer, you must bind it as the current FBO.
The first time a framebuffer object name is bound by calling glBindFramebuffer, the framebuffer object is allocated with appropriate default state, and if the allocation is successful, this allocated object is bound as the current framebuffer object for the rendering context.

  3. glDeleteFramebuffers
void glDeleteFramebuffers(GLsizei n, GLuint *framebuffers)
n:number of framebuffer object names to delete
framebuffers:pointer to an array of n framebuffer object names to be deleted

Once a framebuffer object is deleted, it has no state associated with it and is marked as unused and can later be reused as a new framebuffer object. When deleting a framebuffer object that is also the currently bound framebuffer object, the framebuffer object is deleted and the current framebuffer binding is reset to zero. If framebuffer object names specified in framebuffers are invalid or zero, they are ignored and no error will be generated.
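Putting the three calls together, a minimal sketch of the FBO lifecycle (attachments and completeness checks elided):

GLuint fbo = 0;
glGenFramebuffers(1, &fbo);                 // allocate one framebuffer name
glBindFramebuffer(GL_FRAMEBUFFER, fbo);     // first bind creates the object with default state
/* ... attach color/depth buffers, check completeness, render ... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);       // restore the default framebuffer
glDeleteFramebuffers(1, &fbo);              // mark the name unused again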

2. Renderbuffer

2.1 Why do we need RBOs?

2.2 What is an RBO?

A renderbuffer object is a 2D image buffer allocated by the application. The renderbuffer can be used to allocate and store color, depth, or stencil values and can be used as a color, depth, or stencil attachment in a framebuffer object. A renderbuffer is similar to an off-screen window system provided drawable surface, such as a pbuffer. A renderbuffer, however, cannot be directly used as a GL texture.

2.3 How to use an RBO
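A minimal sketch of the usual flow, mirroring the depth-renderbuffer code in the cube example earlier (width and height are assumed drawable dimensions):

GLuint rbo = 0;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
// allocate storage for the bound renderbuffer
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
// attach it to the currently bound framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo);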

OpenGL ES Learning 2 - Vertex attributes, vertex arrays, and VBOs

1. Vertex attributes

A vertex typically carries position, color, normal, and texture-coordinate data.
Vertex attributes come in two kinds: constant attributes, where all vertices share the same value (for example, every vertex is white); and per-vertex attributes, where values differ from vertex to vertex, so an array is needed to store each vertex's data.

1.1 Constant vertex attributes

Setting a constant attribute (a usage sketch follows below):

void glVertexAttrib1f(GLuint index, GLfloat x);
void glVertexAttrib2f(GLuint index, GLfloat x, GLfloat y);
void glVertexAttrib3f(GLuint index, GLfloat x, GLfloat y, GLfloat z);

stride (a parameter of glVertexAttribPointer, covered in the next subsection): the step between consecutive vertices' data. The components of a vertex attribute specified by size are stored sequentially for each vertex. stride specifies the delta between data for vertex index I and vertex (I + 1). If stride is 0, attribute data for all vertices are stored sequentially. If stride is > 0, the stride value is used as the pitch to get the vertex data for the next index.
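A minimal usage sketch, the same pattern the line-drawing example later in this post uses with glVertexAttrib4fv:

glDisableVertexAttribArray(1);               // attribute 1 uses the constant value
glVertexAttrib4f(1, 1.0f, 1.0f, 1.0f, 1.0f); // constant white color for every vertex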

1.2 vertex arrays

glVertexAttribPointer(GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const void *ptr)

There are two ways to store arrays of vertex data:

  • Store all attributes of each vertex together in one struct, and keep an array of these structs: an array of structures.
  • Store the same attribute of all vertices in one array, and keep a struct holding one such array per attribute: a structure of arrays.

Suppose each vertex has only position, normal, and two texture-coordinate attributes.
The code for the two layouts differs as follows:

  • array of structs
// Size of each attribute (number of floats)
#define VERTEX_POS_SIZE 3
#define VERTEX_NORMAL_SIZE 3
#define VERTEX_TEXCOORD0_SIZE 2
#define VERTEX_TEXCOORD1_SIZE 2
// Attribute indices
#define VERTEX_POS_INDX 0
#define VERTEX_NORMAL_INDX 1
#define VERTEX_TEXCOORD0_INDX 2
#define VERTEX_TEXCOORD1_INDX 3
// The following 4 defines are used to determine the location of the various
// attributes if vertex data is stored as an array of structures.
#define VERTEX_POS_OFFSET 0
#define VERTEX_NORMAL_OFFSET 3
#define VERTEX_TEXCOORD0_OFFSET 6
#define VERTEX_TEXCOORD1_OFFSET 8
// Size of one vertex struct, in floats (parenthesized so the macro expands
// safely inside expressions such as VERTEX_ATTRIB_SIZE * sizeof(float))
#define VERTEX_ATTRIB_SIZE (VERTEX_POS_SIZE + \
VERTEX_NORMAL_SIZE + \
VERTEX_TEXCOORD0_SIZE + \
VERTEX_TEXCOORD1_SIZE)
// array of structures: allocate one buffer holding all attributes
float *p = malloc(numVertices * VERTEX_ATTRIB_SIZE * sizeof(float));
// set up each attribute
// position is vertex attribute 0
glVertexAttribPointer(VERTEX_POS_INDX, VERTEX_POS_SIZE,GL_FLOAT, GL_FALSE, VERTEX_ATTRIB_SIZE * sizeof(float), p);
// normal is vertex attribute 1
glVertexAttribPointer(VERTEX_NORMAL_INDX, VERTEX_NORMAL_SIZE, GL_FLOAT, GL_FALSE, VERTEX_ATTRIB_SIZE * sizeof(float), (p + VERTEX_NORMAL_OFFSET));
// texture coordinate 0 is vertex attribute 2
glVertexAttribPointer(VERTEX_TEXCOORD0_INDX, VERTEX_TEXCOORD0_SIZE, GL_FLOAT, GL_FALSE, VERTEX_ATTRIB_SIZE * sizeof(float), (p + VERTEX_TEXCOORD0_OFFSET));
// texture coordinate 1 is vertex attribute 3
glVertexAttribPointer(VERTEX_TEXCOORD1_INDX, VERTEX_TEXCOORD1_SIZE, GL_FLOAT, GL_FALSE, VERTEX_ATTRIB_SIZE * sizeof(float), (p + VERTEX_TEXCOORD1_OFFSET));
  • struct of arrays
// one array holding the position attribute of every vertex
float *position = malloc(numVertices * VERTEX_POS_SIZE * sizeof(float));
// one array holding the normal attribute of every vertex
float *normal = malloc(numVertices * VERTEX_NORMAL_SIZE * sizeof(float));
// one array holding the texcoord0 attribute of every vertex
float *texcoord0 = malloc(numVertices * VERTEX_TEXCOORD0_SIZE * sizeof(float));
// one array holding the texcoord1 attribute of every vertex
float *texcoord1 = malloc(numVertices * VERTEX_TEXCOORD1_SIZE * sizeof(float));
// position is vertex attribute 0
glVertexAttribPointer(VERTEX_POS_INDX, VERTEX_POS_SIZE,GL_FLOAT, GL_FALSE, VERTEX_POS_SIZE * sizeof(float), position);
// normal is vertex attribute 1
glVertexAttribPointer(VERTEX_NORMAL_INDX, VERTEX_NORMAL_SIZE,GL_FLOAT, GL_FALSE, VERTEX_NORMAL_SIZE * sizeof(float), normal);
// texture coordinate 0 is vertex attribute 2
glVertexAttribPointer(VERTEX_TEXCOORD0_INDX, VERTEX_TEXCOORD0_SIZE, GL_FLOAT, GL_FALSE, VERTEX_TEXCOORD0_SIZE * sizeof(float), texcoord0);
// texture coordinate 1 is vertex attribute 3
glVertexAttribPointer(VERTEX_TEXCOORD1_INDX, VERTEX_TEXCOORD1_SIZE, GL_FLOAT, GL_FALSE, VERTEX_TEXCOORD1_SIZE * sizeof(float), texcoord1);

1.3 Choosing between constant vertex attributes and vertex arrays

void glEnableVertexAttribArray(GLuint index);
void glDisableVertexAttribArray(GLuint index);

When the vertex attribute array at an index is enabled, the per-vertex data supplied via glVertexAttribPointer is used; when it is disabled, the constant value set with glVertexAttrib* is used instead.

2. VBO

Why do we need VBOs? Vertex data and vertex arrays live in application memory and have to be copied into GPU memory on every draw call, which hurts performance.
So it would be better if the vertex data could be placed directly in GPU memory, and that is exactly what VBOs provide.

2.1 So what is a VBO?

There are two types of buffer objects in OpenGL ES:

  • array buffer object

GL_ARRAY_BUFFER is used to store vertex data.

  • element buffer object

GL_ELEMENT_ARRAY_BUFFER is used to store the indices of primitives.

2.2 How to use VBOs?

// create the buffer objects
glGenBuffers(2, vboIds);
// make vboIds[0] the current array buffer
glBindBuffer(GL_ARRAY_BUFFER, vboIds[0]);
glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(vertex_t), vertexBuffer, GL_STATIC_DRAW);
// bind buffer object for element indices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIds[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,numIndices * sizeof(GLushort),indices, GL_STATIC_DRAW);

Now compare the code for drawing primitives without VBOs and with VBOs:

// Size of each attribute (number of floats)
#define VERTEX_POS_SIZE 3
#define VERTEX_NORMAL_SIZE 3
#define VERTEX_TEXCOORD0_SIZE 2
// Attribute indices
#define VERTEX_POS_INDX 0
#define VERTEX_NORMAL_INDX 1
#define VERTEX_TEXCOORD0_INDX 2
void drawPrimitiveWithoutVBOs(GLfloat *vertices, GLint vtxStride, GLint numIndices, GLushort *indices) {
GLfloat *vtxBuf = vertices;
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glEnableVertexAttribArray(VERTEX_POS_INDX);
glEnableVertexAttribArray(VERTEX_NORMAL_INDX);
glEnableVertexAttribArray(VERTEX_TEXCOORD0_INDX);
glVertexAttribPointer(VERTEX_POS_INDX, VERTEX_POS_SIZE, GL_FLOAT, GL_FALSE, vtxStride, vtxBuf);
vtxBuf += VERTEX_POS_SIZE;
glVertexAttribPointer(VERTEX_NORMAL_INDX, VERTEX_NORMAL_SIZE, GL_FLOAT, GL_FALSE, vtxStride, vtxBuf);
vtxBuf += VERTEX_NORMAL_SIZE;
glVertexAttribPointer(VERTEX_TEXCOORD0_INDX,VERTEX_TEXCOORD0_SIZE, GL_FLOAT, GL_FALSE, vtxStride, vtxBuf);
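// NOTE: glBindAttribLocation takes effect only at the next glLinkProgram call;
// binding attribute locations here at draw time has no effect on the current link.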
glBindAttribLocation(program, VERTEX_POS_INDX, "v_position");
glBindAttribLocation(program, VERTEX_NORMAL_INDX, "v_normal");
glBindAttribLocation(program, VERTEX_TEXCOORD0_INDX, "v_texcoord");
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);
}
void drawPrimitiveWithVBOs(GLint numVertices, GLfloat *vtxBuf, GLint vtxStride, GLint numIndices, GLushort *indices)
{
GLuint offset = 0;
GLuint vboIds[2]; // vboIds[0] – vertex attribute data; vboIds[1] – element indices
glGenBuffers(2, vboIds);
glBindBuffer(GL_ARRAY_BUFFER, vboIds[0]);
glBufferData(GL_ARRAY_BUFFER, vtxStride * numVertices, vtxBuf, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIds[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * numIndices, indices, GL_STATIC_DRAW);
glEnableVertexAttribArray(VERTEX_POS_INDX);
glEnableVertexAttribArray(VERTEX_NORMAL_INDX);
glEnableVertexAttribArray(VERTEX_TEXCOORD0_INDX);
glVertexAttribPointer(VERTEX_POS_INDX, VERTEX_POS_SIZE, GL_FLOAT, GL_FALSE, vtxStride, (const void*)offset);
offset += VERTEX_POS_SIZE * sizeof(GLfloat);
glVertexAttribPointer(VERTEX_NORMAL_INDX, VERTEX_NORMAL_SIZE, GL_FLOAT, GL_FALSE, vtxStride, (const void*)offset);
offset += VERTEX_NORMAL_SIZE * sizeof(GLfloat);
glVertexAttribPointer(VERTEX_TEXCOORD0_INDX,VERTEX_TEXCOORD0_SIZE, GL_FLOAT, GL_FALSE, vtxStride, (const void*)offset);
glBindAttribLocation(program, VERTEX_POS_INDX, "v_position");
glBindAttribLocation(program, VERTEX_NORMAL_INDX, "v_normal");
glBindAttribLocation(program, VERTEX_TEXCOORD0_INDX, "v_texcoord");
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
glDeleteBuffers(2, vboIds);
}

OpenGL Learning 1 - Drawing lines on iOS

//
// ZYGLView.m
// esgl_1
//
// Created by zhangyun on 2017/10/9.
// Copyright © 2017 zhangyun. All rights reserved.
//
#import "ZYGLView.h"
#import <GLKit/GLKit.h>
#import "VFMatrix.h"
typedef struct{
CGFloat red;
CGFloat green;
CGFloat blue;
CGFloat alpha;
}RGBAColor;
// vertex attribute struct
typedef struct {
GLfloat Position[3];
GLfloat Color[4];
}VFVertex;
// white
static const GLfloat whiteColor[] = {1,1,1,1};
static const RGBAColor kDefaultColor = {0.4,0.7,0.9,1.f};
// data for the 4 vertices
static const VFVertex crossLinesVertices[] = {
// line one
{0.5f,0.5f,0.f},
{-0.5f,-0.5f,0.f},
// line two
{-0.53f,0.48f,0.f},
{0.55f,-0.4f,0.f}
};
@interface ZYGLView(){
GLint programID;
}
@property (nonatomic,strong) EAGLContext *ctx;
@property (nonatomic,assign) GLfloat windowScale;
@end
@implementation ZYGLView
+ (Class)layerClass{
return [CAEAGLLayer class];
}
- (instancetype)initWithFrame:(CGRect)frame{
if (self = [super initWithFrame:frame]) {
CAEAGLLayer *layer = (CAEAGLLayer *)self.layer;
layer.drawableProperties = @{kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8,kEAGLDrawablePropertyRetainedBacking:@(YES)};
layer.contentsScale = [UIScreen mainScreen].scale;
layer.opaque = YES;
}
return self;
}
- (void)prepare{
// 1. Set up the context
EAGLContext *ctx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:ctx];
self.ctx = ctx;
// 2. Set the clear color
glClearColor(kDefaultColor.red, kDefaultColor.green, kDefaultColor.blue, kDefaultColor.alpha);
// 3. Configure the RBO
GLuint rboId;
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
// 4. Configure the FBO
GLuint fboId;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
// 5. Attach the RBO to the FBO
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rboId);
// 6. Allocate the renderbuffer's storage from the drawable layer
[ctx renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
// 7. Compile the shaders
GLuint vertexId = [self addShader:GL_VERTEX_SHADER];
GLuint fragmentId = [self addShader:GL_FRAGMENT_SHADER];
if (vertexId ==0 || fragmentId == 0) {
return;
}
// 8. Create the program
GLuint programId = glCreateProgram();
// 9. Attach the shaders to the program
glAttachShader(programId, vertexId);
glAttachShader(programId, fragmentId);
// 10. Bind the attribute indices
glBindAttribLocation(programId, 0, "v_Position");
glBindAttribLocation(programId, 1, "v_Color");
// 11. link program
glLinkProgram(programId);
GLint linkSuccess;
// check whether linking succeeded
glGetProgramiv(programId, GL_LINK_STATUS, &linkSuccess);
if (linkSuccess == GL_FALSE) {
GLint infoLen;
glGetProgramiv(programId, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen > 0) {
GLchar *msg = malloc(sizeof(GLchar) * infoLen); // sizeof(GLchar), not sizeof(GLchar *)
glGetProgramInfoLog(programId, infoLen,NULL, msg);
NSString *str = [NSString stringWithUTF8String:msg];
NSLog(@"**---->shader link error:%@",str);
free(msg);
}
NSLog(@"**---->shader link error return");
return;
}
programID = programId;
// 12. Clear the render buffer
glClear(GL_COLOR_BUFFER_BIT);
// 13. Set the viewport
GLint renderbufW,renderbufH;
// query the width and height of the current drawable
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderbufW);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderbufH);
glViewport(0, 0, renderbufW, renderbufH);
// aspect ratio, used for the coordinate-system transform
self.windowScale = ((GLfloat)renderbufW / (GLfloat)renderbufH);
// 14. Load the vertex data
GLuint vboId;
glGenBuffers(1, &vboId);
const GLvoid *dataPtr;
GLsizeiptr dataSize;
GLsizei verticesIndicesCount;
dataSize = sizeof(crossLinesVertices);
dataPtr = crossLinesVertices;
verticesIndicesCount = (GLsizei)(sizeof(crossLinesVertices) / sizeof(crossLinesVertices[0]));
// vbo
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, dataSize, dataPtr, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
// feed the position data to the shader
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(VFVertex), (const GLvoid *)offsetof(VFVertex,Position));
// feed the color data to the shader as a constant attribute
glDisableVertexAttribArray(1);
glVertexAttrib4fv(1, whiteColor);
}
- (void)display{
// 1. use the program
glUseProgram(programID);
// 2. Coordinate-system transform. This part is complex and I have not fully understood it; the code is taken from https://github.com/huangwenfei/OpenGLES2Learning
VFMatrix4 scaleMat4 = VFMatrix4MakeScaleY(self.windowScale);
VFMatrix4 transMat4 = VFMatrix4Identity;
glUniformMatrix4fv(0, // location of the uniform variable (0 and 1 are assumed; normally obtained via glGetUniformLocation)
1, // a single uniform, not an array -> 1
GL_FALSE, // must be GL_FALSE in ES
(const GLfloat *)scaleMat4.m1D); // pointer to the first element
glUniformMatrix4fv(1, // location of the uniform variable
1, // a single uniform, not an array -> 1
GL_FALSE, // must be GL_FALSE in ES
(const GLfloat *)transMat4.m1D); // pointer to the first element
glLineWidth(10);
// 3. draw
glDrawArrays(GL_LINES, 0, 4);
// 4. present the renderbuffer contents
[self.ctx presentRenderbuffer:GL_RENDERBUFFER];
}
- (GLuint)addShader:(GLenum)type{
NSString *fileName;
if (type == GL_VERTEX_SHADER) {
fileName = @"vertex.glsl";
}else{
fileName = @"fragment.glsl";
}
// 1. load the shader source from a file
NSString *path = [[NSBundle mainBundle] pathForResource:fileName ofType:nil];
NSString *shaderSource = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
const GLchar *stringDatas = [shaderSource UTF8String];
GLint stringLen = (GLint)shaderSource.length;
// 2. create the shader
GLuint shaderId = glCreateShader(type);
// 3. hand the source to GL
glShaderSource(shaderId, 1, &stringDatas, &stringLen);
// 4. compile
glCompileShader(shaderId);
GLint compileSuccess;
// 5. check whether compilation succeeded
glGetShaderiv(shaderId, GL_COMPILE_STATUS, &compileSuccess);
if (compileSuccess == GL_FALSE) {
GLint infoLen;
glGetShaderiv(shaderId, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen > 0) {
GLchar *msg = malloc(sizeof(GLchar) * infoLen); // sizeof(GLchar), not sizeof(GLchar *)
glGetShaderInfoLog(shaderId, infoLen, NULL, msg);
NSString *msgS = [NSString stringWithUTF8String:msg];
NSLog(@"&&&-->shader erro: %@",msgS);
free(msg);
}
return 0;
}
return shaderId;
}
@end

VideoToolbox Compression

VTCompressionSession

A session that manages the compression of incoming video data.
A compression session compresses a sequence of video frames. The flow is as follows (a sketch follows the list):

  • VTCompressionSessionCreate creates the session.
  • Use VTSessionSetProperty / VTSessionSetProperties to configure the session's properties.
  • Use VTCompressionSessionEncodeFrame to encode video frames, and receive the compressed frames in the VTCompressionOutputCallback.
  • VTCompressionSessionCompleteFrames signals that no more video data will be supplied.
  • VTCompressionSessionInvalidate tears down the session and releases its resources.
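A minimal sketch of that flow, assuming 1280x720 H.264 and a callback named compressionOutputCallback (our own names; error handling mostly omitted):

#import <VideoToolbox/VideoToolbox.h>

// our assumed output callback, invoked once per compressed frame
static void compressionOutputCallback(void *refCon, void *sourceFrameRefCon,
                                      OSStatus status, VTEncodeInfoFlags infoFlags,
                                      CMSampleBufferRef sampleBuffer) {
    if (status == noErr && sampleBuffer) {
        // handle the compressed frame here
    }
}

VTCompressionSessionRef session = NULL;
OSStatus status = VTCompressionSessionCreate(kCFAllocatorDefault,
                                             1280, 720,                 // width, height
                                             kCMVideoCodecType_H264,    // codec
                                             NULL, NULL, NULL,          // encoder spec, source attrs, data allocator
                                             compressionOutputCallback,
                                             NULL,                      // callback refCon
                                             &session);
if (status == noErr) {
    VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    // ... call VTCompressionSessionEncodeFrame() once per CVPixelBuffer ...
    VTCompressionSessionCompleteFrames(session, kCMTimeInvalid); // no more input
    VTCompressionSessionInvalidate(session);
    CFRelease(session);
}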

  • Create the session

    VTCompressionSessionCreate
  • Configure the session

Compression Properties
Properties used to configure a VideoToolbox compression session.
  • Encode frames

    VTCompressionSessionPrepareToEncodeFrames
    Optionally lets the encoder allocate needed resources before encoding begins.
    VTCompressionSessionEncodeFrame
    Supplies a frame to the session.
    VTCompressionSessionEncodeFrameWithOutputHandler
    Supplies a frame to the session and invokes a block when compression finishes.
    VTCompressionSessionCompleteFrames
    Forces the compression of pending frames to complete.
  • Inspect the session

    VTCompressionSessionGetPixelBufferPool
    Returns a pool of pixel buffers ideally suited for supplying frames to the compression session.
    VTCompressionSessionGetTypeID
    Retrieves the Core Foundation type identifier for the compression session.
  • Multipass compression

    VTCompressionSessionBeginPass
    Marks the beginning of a compression pass.
    VTCompressionSessionEndPass
    Marks the end of a compression pass.
    VTCompressionSessionGetTimeRangesForNextPass
    Retrieves the time ranges for the next pass.
  • End the session

VTCompressionSessionInvalidate
Tears down a compression session.
  • Data types
VTCompressionSessionRef
A reference to a VideoToolbox compression session.
VTCompressionOutputCallback
Prototype for the callback invoked when frame compression is complete.
VTCompressionOutputHandler
Prototype for the block invoked when frame compression is complete.
  • Enumerations
    VTCompressionSessionOptionFlags
VTEncodeInfoFlags
kVTEncodeInfo_Asynchronous
Encoding proceeds asynchronously.
kVTEncodeInfo_FrameDropped
A frame was dropped during encoding.

ffmpeg transcoding

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/avcodec.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
static AVFormatContext *ifmt_ctx;
static AVFormatContext *ofmt_ctx;
typedef struct FilteringContext {
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
} FilteringContext;
static FilteringContext *filter_ctx;
static int open_input_file(const char *filename)
{
int ret;
unsigned int i;
ifmt_ctx = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *stream;
AVCodecContext *codec_ctx;
stream = ifmt_ctx->streams[i];
codec_ctx = stream->codec;
/* Reencode video & audio and remux subtitles etc. */
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* Open decoder */
ret = avcodec_open2(codec_ctx,
avcodec_find_decoder(codec_ctx->codec_id), NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
return ret;
}
}
}
av_dump_format(ifmt_ctx, 0, filename, 0);
return 0;
}
static int open_output_file(const char *filename)
{
AVStream *out_stream;
AVStream *in_stream;
AVCodecContext *dec_ctx, *enc_ctx;
AVCodec *encoder;
int ret;
unsigned int i;
ofmt_ctx = NULL;
//Allocate an AVFormatContext for an output format.
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
if (!ofmt_ctx) {
av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
return AVERROR_UNKNOWN;
}
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
/*
Add a new stream to a media file.
When demuxing, it is called by the demuxer in read_header(). If the flag AVFMTCTX_NOHEADER is set in s.ctx_flags,
then it may also be called in read_packet().When muxing, should be called by the user before avformat_write_header().
User is required to call avcodec_close() and avformat_free_context() to clean up the allocation by avformat_new_stream().
*/
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
return AVERROR_UNKNOWN;
}
in_stream = ifmt_ctx->streams[i];
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* in this example, we choose transcoding to same codec */
encoder = avcodec_find_encoder(dec_ctx->codec_id);
if (!encoder) {
av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
return AVERROR_INVALIDDATA;
}
/* In this example, we transcode to same properties (picture size,
* sample rate etc.). These properties can be changed for output
* streams easily using filters */
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
enc_ctx->height = dec_ctx->height;
enc_ctx->width = dec_ctx->width;
enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
/* take first format from list of supported formats */
enc_ctx->pix_fmt = encoder->pix_fmts[0];
/* video time_base can be set to whatever is handy and supported by encoder */
enc_ctx->time_base = dec_ctx->time_base;
} else {
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
/* take first format from list of supported formats */
enc_ctx->sample_fmt = encoder->sample_fmts[0];
enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
}
/* Third parameter can be used to pass settings to encoder */
ret = avcodec_open2(enc_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
return ret;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
return AVERROR_INVALIDDATA;
} else {
/* if this stream must be remuxed */
ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec,
ifmt_ctx->streams[i]->codec);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
return ret;
}
}
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
} // for
av_dump_format(ofmt_ctx, 0, filename, 1);
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
//Create and initialize a AVIOContext for accessing the resource indicated by url.
//When the resource indicated by url has been opened in read+write mode, the AVIOContext can be used only for writing.
ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
return ret;
}
}
/* init muxer, write output file header */
/*
Allocate the stream private data and write the stream header to an output media file.
s Media file handle, must be allocated with avformat_alloc_context().
Its oformat field must be set to the desired output format; Its pb field must be set to an already opened AVIOContext.
options An AVDictionary filled with AVFormatContext and muxer-private options.
On return this parameter will be destroyed and replaced with a dict containing options that were not found. May be NULL.
*/
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
return ret;
}
return 0;
}
static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
AVCodecContext *enc_ctx, const char *filter_spec)
{
char args[512];
int ret = 0;
AVFilter *buffersrc = NULL;
AVFilter *buffersink = NULL;
AVFilterContext *buffersrc_ctx = NULL;
AVFilterContext *buffersink_ctx = NULL;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVFilterGraph *filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
buffersrc = avfilter_get_by_name("buffer");
buffersink = avfilter_get_by_name("buffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
dec_ctx->time_base.num, dec_ctx->time_base.den,
dec_ctx->sample_aspect_ratio.num,
dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
(uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
buffersrc = avfilter_get_by_name("abuffer");
buffersink = avfilter_get_by_name("abuffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout =
av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt),
dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
(uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
(uint8_t*)&enc_ctx->channel_layout,
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
(uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
}
} else {
ret = AVERROR_UNKNOWN;
goto end;
}
/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if (!outputs->name || !inputs->name) {
ret = AVERROR(ENOMEM);
goto end;
}
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
/* Fill FilteringContext */
fctx->buffersrc_ctx = buffersrc_ctx;
fctx->buffersink_ctx = buffersink_ctx;
fctx->filter_graph = filter_graph;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
static int init_filters(void)
{
const char *filter_spec;
unsigned int i;
int ret;
filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
if (!filter_ctx)
return AVERROR(ENOMEM);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
filter_ctx[i].buffersrc_ctx = NULL;
filter_ctx[i].buffersink_ctx = NULL;
filter_ctx[i].filter_graph = NULL;
if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|| ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
continue;
if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
filter_spec = "null"; /* passthrough (dummy) filter for video */
else
filter_spec = "anull"; /* passthrough (dummy) filter for audio */
ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
ofmt_ctx->streams[i]->codec, filter_spec);
if (ret)
return ret;
}
return 0;
}
//
static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
int ret;
int got_frame_local;
AVPacket enc_pkt;
/*
Encode a frame of video.
Takes input raw video data from frame and writes the next output packet, if available, to avpkt.
The output packet does not necessarily contain data for the most recent frame,
as encoders can delay and reorder input frames internally as needed.
avpkt: output AVPacket. The user can supply an output buffer by setting avpkt->data and avpkt->size prior to calling the function,
but if the size of the user-provided data is not large enough, encoding will fail.
All other AVPacket fields will be reset by the encoder using av_init_packet().
If avpkt->data is NULL, the encoder will allocate it. The encoder will set avpkt->size to the size of the output packet.
The returned data (if any) belongs to the caller, he is responsible for freeing it.
*/
/*
Encode a frame of audio.
Takes input samples from frame and writes the next output packet, if available, to avpkt.
The output packet does not necessarily contain data for the most recent frame,
as encoders can delay, split, and combine input frames internally as needed.
*/
int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
(ifmt_ctx->streams[stream_index]->codec->codec_type ==
AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;
if (!got_frame)
got_frame = &got_frame_local;
av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
/* encode filtered frame */
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = enc_func(ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
filt_frame, got_frame);
av_frame_free(&filt_frame);
if (ret < 0)
return ret;
if (!(*got_frame))
return 0;
/* prepare packet for muxing */
enc_pkt.stream_index = stream_index;
av_packet_rescale_ts(&enc_pkt,
ofmt_ctx->streams[stream_index]->codec->time_base,
ofmt_ctx->streams[stream_index]->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* mux encoded frame */
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
return ret;
}
// Filter the frame, then encode it and write it to the output file
static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
{
int ret;
AVFrame *filt_frame;
av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
/* push the decoded frame into the filtergraph */
ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
frame, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
return ret;
}
/* pull filtered frames from the filtergraph */
while (1) {
filt_frame = av_frame_alloc();
if (!filt_frame) {
ret = AVERROR(ENOMEM);
break;
}
av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
filt_frame);
if (ret < 0) {
/* if no more frames for output - returns AVERROR(EAGAIN)
* if flushed and no more frames for output - returns AVERROR_EOF
* rewrite retcode to 0 to show it as normal procedure completion
*/
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
ret = 0;
av_frame_free(&filt_frame);
break;
}
filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
ret = encode_write_frame(filt_frame, stream_index, NULL);
if (ret < 0)
break;
}
return ret;
}
// Flush the final buffered data out of the encoder.
static int flush_encoder(unsigned int stream_index)
{
int ret;
int got_frame;
if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &
AV_CODEC_CAP_DELAY))
return 0;
while (1) {
av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
ret = encode_write_frame(NULL, stream_index, &got_frame);
if (ret < 0)
break;
if (!got_frame)
return 0;
}
return ret;
}
int main(int argc, char **argv)
{
int ret;
AVPacket packet = { .data = NULL, .size = 0 };
AVFrame *frame = NULL;
enum AVMediaType type;
unsigned int stream_index;
unsigned int i;
int got_frame;
int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
if (argc != 3) {
av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
}
av_register_all();
avfilter_register_all();
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if ((ret = open_output_file(argv[2])) < 0)
goto end;
if ((ret = init_filters()) < 0)
goto end;
/* read all packets */
while (1) {
if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
break;
stream_index = packet.stream_index;
type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
stream_index);
if (filter_ctx[stream_index].filter_graph) {
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
//Convert valid timing fields (timestamps / durations) in a packet from one timebase to another.
av_packet_rescale_ts(&packet,
ifmt_ctx->streams[stream_index]->time_base,
ifmt_ctx->streams[stream_index]->codec->time_base);
//Decode the video frame of size avpkt->size from avpkt->data into picture.
// Decode the audio frame of size avpkt->size from avpkt->data into frame
dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
avcodec_decode_audio4;
ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
&got_frame, &packet);
if (ret < 0) {
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
// decoded audio/video frame -> filter graph -> encoder -> muxer -> output file
if (got_frame) {
frame->pts = av_frame_get_best_effort_timestamp(frame);
ret = filter_encode_write_frame(frame, stream_index);
av_frame_free(&frame);
if (ret < 0)
goto end;
} else {
av_frame_free(&frame);
}
} else {
/* remux this frame without reencoding */
// Convert valid timing fields (timestamps / durations) in a packet from one timebase to another.
av_packet_rescale_ts(&packet,
ifmt_ctx->streams[stream_index]->time_base,
ofmt_ctx->streams[stream_index]->time_base);
// Write a packet to an output media file ensuring correct interleaving.
/*
This function will buffer the packets internally as needed to make sure the packets in the output file are properly interleaved
in the order of increasing dts.Callers doing their own interleaving should call av_write_frame() instead of this function.
Using this function instead of av_write_frame() can give muxers advance knowledge of future packets, improving e.g.
the behaviour of the mp4 muxer for VFR content in fragmenting mode.
*/
ret = av_interleaved_write_frame(ofmt_ctx, &packet);
if (ret < 0)
goto end;
}
av_free_packet(&packet);
} // while
/* flush filters and encoders */
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
/* flush filter */
if (!filter_ctx[i].filter_graph)
continue;
ret = filter_encode_write_frame(NULL, i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
goto end;
}
/* flush encoder */
ret = flush_encoder(i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
goto end;
}
}
/*
Write the stream trailer to an output media file and free the file private data.
May only be called after a successful call to avformat_write_header.
*/
av_write_trailer(ofmt_ctx);
end:
av_free_packet(&packet);
av_frame_free(&frame);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
avcodec_close(ifmt_ctx->streams[i]->codec);
if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
avcodec_close(ofmt_ctx->streams[i]->codec);
if (filter_ctx && filter_ctx[i].filter_graph)
avfilter_graph_free(&filter_ctx[i].filter_graph);
}
av_free(filter_ctx);
avformat_close_input(&ifmt_ctx);
if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));
return ret ? 1 : 0;
}

ffmpeg avfilter

  1. Scale

Scale keeps the DAR unchanged while changing the SAR. In addition, if the input stream's pixel format differs from the input format required by the next filter, scale converts the format automatically.

scale=w=200:h=100 scales to 200x100
scale=200:100 does the same in shorthand
scale=qcif scales to QCIF (quarter-CIF) resolution

  2. format
    Converts the input video to the specified pixel format.
    ffmpeg -i xxx.mp4 -vf format=yuv444p out7.mp4
    converts a 420p video to 444p.

  3. movie, amovie
    Read video (movie) or audio (amovie) from a movie container for use as a filter source.

  4. setpts, asetpts
    Modify the PTS (presentation timestamp) of the input frames, as in the example below.
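For instance, a typical usage (our example, not from the original notes):
ffmpeg -i test.mp4 -vf setpts=0.5*PTS out.mp4
halves every PTS, which doubles the playback speed.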

  5. crop
    Crop the input video to the given width and height.

  • w, h (out_w, out_h): the output video's width and height. Defaults: iw, ih. Evaluated only once, at the start.
  • x, y: the position in the input video where cropping starts. Evaluated for every frame.
  • keep_aspect: whether to keep the display aspect ratio unchanged.
  • exact: whether to use exact cropping; off by default, in which case approximate values are used.
    ffmpeg -i xxx.mp4 -vf crop=100:100:12:34 out11.mp4

ffmpeg -i test.mp4 -vf "crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2 +((in_h-out_h)/2)*sin(t*13)" out12.mp4 这个输出的视频会有摄像头晃动的效果。

  6. split
    Split the input into several identical outputs. Takes a parameter that specifies the number of outputs; the default is 2.
    [in] split [out0][out1]

ffmpeg -i test.mp4 -filter_complex asplit=5 11.mp4. The output file of this command contains 5 audio streams.

  7. hflip
    Horizontally flip the input video.
    ffmpeg -i test.mp4 -vf hflip 11.mp4. This is much like the mirror effect when recording.

  8. pad
    Pad the input image with a border and place the original input at the position given by the x, y parameters.

ffmpeg -i test.mp4 -vf "pad=2*iw:2*ih:ow-iw:oh-ih" 11.mp4 doubles the canvas in both dimensions and places the input video in the bottom-right corner. Pretty neat.

  9. overlay
    Overlay one video on top of another. Takes two inputs and produces one output; the first input is the main video and the second is laid over it.
  • x, y: position of the overlaid video.
  • eof_action: what to do when the second input ends. repeat: repeat its last frame; endall: end both streams; pass: let the main video continue.
  • shortest: force the output to end when the shortest input ends. Default 0.
  • format: pixel format of the output: yuv420, yuv422, yuv444, rgb, gbrp, auto; the default is yuv420.
  • repeatlast: keep drawing the overlay's last frame until the end. Default 1.

  • Add a watermark:
    ffmpeg -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output

  • Add two watermarks:
    ffmpeg -i input -i logo1 -i logo2 -filter_complex 'overlay=x=10:y=H-h-10,overlay=x=W-w-10:y=H-h-10' output

Setting up an FFmpeg development environment on macOS


#2017/ffmpeg#

  1. Install FFmpeg on the Mac
    The easiest way: brew install ffmpeg

After the installation completes, run ffmpeg:
(screenshot of the ffmpeg version output)

That means the installation succeeded.
The dynamic library files can then be found under /usr/local/Cellar/ffmpeg/3.3.3/lib.
(screenshot of the library files)

  2. Create a command-line project with Xcode
  • Import the ffmpeg dynamic libraries: add third-party libraries, choosing the dylib files under /usr/local/Cellar/ffmpeg/3.3.3/lib.

  • Set the Header Search Paths
    to /usr/local/include, with recursive search enabled.
    (screenshot of the Header Search Paths setting)

  3. Test it
#include <stdio.h>
#include <libavcodec/avcodec.h> // for avcodec_register_all()

int main(int argc, const char * argv[]) {
    // insert code here...
    avcodec_register_all();
    printf("Hello, World!\n");
    return 0;
}

Runs perfectly.
