What are shaders in OpenGL and what do we need them for?

> I'm not a native English speaker, and when I try to get through the OpenGL wiki and the tutorials on www.learnopengl.com, it never ends up intuitively understandable how the whole concept works. Can someone maybe explain to me in a more abstract way how it works? What are the vertex shader and the fragment shader, and what do we use them for?

Solution

The OpenGL wiki gives a good definition. In short:

In the past, graphics cards were non-programmable pieces of silicon which performed a fixed set of algorithms:

- inputs: 3D coordinates of triangles, their colors, light sources
- output: a 2D image

all computed with a single fixed parametrized algorithm, typically similar to the Phong reflection model. Image from Wiki.

But that was too restrictive for programmers who wanted to create many different complex visual effects. So as semiconductor manufacturing technology advanced, and GPU designers were able to cram more transistors per square millimeter, vendors started allowing some parts of the rendering pipeline to be programmed with programming languages like the C-like GLSL. Those languages are then compiled to semi-undocumented instruction sets that run on the small "CPUs" built into those newer GPUs. In the beginning, those shader languages were not even Turing complete!

The term General Purpose GPU (GPGPU) refers to this increased programmability of modern GPUs.

In the OpenGL 4 model, only the blue stages of the following diagram are programmable. Image source.

Shaders take their input from the previous pipeline stage (e.g. vertex positions, colors, and rasterized pixels) and customize their output to the next stage. The two most important ones are:

- vertex shader:
  - input: the position of a point in 3D space
  - output: the 2D projection of the point (computed with a 4D matrix multiplication)

  This related example shows more clearly what a projection is: How to use glOrtho() in OpenGL?

- fragment shader:
  - input: the 2D positions of all pixels of a triangle + (the color of the edges or a texture image) + lighting parameters
  - output: the color of every pixel of the triangle (unless it is occluded by a closer triangle), usually interpolated between the vertices

  The fragments are discretized from the previously calculated triangle projections, as explained at: How fragment shader determines the number of fragments from vertex shader output? (Minimal sketches of both stages follow this list.)
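As a concrete illustration of the vertex stage, here is a minimal GLSL 330 vertex shader sketch. The names aPos, aColor, uProjection, and vColor are assumptions chosen for this example, not fixed OpenGL names:

```glsl
#version 330 core

// Per-vertex inputs, read straight from a vertex buffer
// (names are assumptions for this sketch).
layout (location = 0) in vec3 aPos;   // point in 3D space
layout (location = 1) in vec3 aColor; // per-vertex color

// 4x4 projection matrix supplied by the application.
uniform mat4 uProjection;

// Passed on to the rasterizer, which interpolates it per fragment.
out vec3 vColor;

void main()
{
    // The "4D matrix multiplication" mentioned above: extend the
    // 3D point with w = 1.0 and project it to clip space.
    gl_Position = uProjection * vec4(aPos, 1.0);
    vColor = aColor;
}
```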
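And a matching minimal fragment shader sketch: it receives vColor from the vertex shader above, already interpolated between the three vertices of the triangle this fragment belongs to, and writes the final color of the fragment:

```glsl
#version 330 core

// Interpolated per-vertex color produced by the rasterizer.
in vec3 vColor;

// Output: the color of this fragment (pixel candidate).
out vec4 FragColor;

void main()
{
    FragColor = vec4(vColor, 1.0);
}
```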
But "shaders" in GLSL now also manage vertex positions as is the case for the vertex shader, not to mention OpenGL 4.3 GL_COMPUTE_SHADER, which allows for arbitrary calculations completely unrelated to rendering, much like OpenCL.Some cool "non 3D" applications of GPU fragment shaders include:image processing: Is it possible to build a heatmap from point data at 60 times per second?plotting: Is it possible to build a heatmap from point data at 60 times per second?TODO could OpenGL be efficiently implemented with OpenCL alone, i.e., making all stages programmable? Of course, there must be a performance / flexibility trade-off.The first GPUs with shaders used different specialized hardware for vertex and fragment shading, since those have quite different workloads. Current architectures however, use multiple passes of a single type of hardware (basically small CPUs) for all shader types, which saves some hardware duplication. This design is known as an Unified Shader Model:Image source.To truly understand shaders and all they can do, you have to look at many examples and learn the APIs. https://github.com/JoeyDeVries/LearnOpenGL for example is a good source.In modern OpenGL 4, even hello world triangle programs use super simple shaders, instead of older deprecated immediate APIs like glBegin and glColor. Here is an example: https://stackoverflow.com/a/36166310/895245One classic cool application of a non-trivial shader are dynamic shadows:Image source. 这篇关于OpenGL中的着色器是什么,我们需要它们吗?的文章就介绍到这了,希望我们推荐的答案对大家有所帮助,也希望大家多多支持!