In this part of the series I explain how I implemented the Camera, Renderer, Mesh, and Shader classes, along with the corresponding transformation matrices and functions.

Camera Class

The camera implementation requires adding some functions to the Math and Vector3 classes.

In the Math class, a LookAt function is added instead of using the GLU library, because I don't want to depend too much on other libraries, and I already have vector and matrix classes implemented; there is no need to include another library for just a few of its functions. The LookAt function basically calculates the view matrix of the camera, which will be explained in the following section. I also added transformation operations (translation, rotation, and scaling) to the Vector3 class, which is useful beyond the camera implementation.
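As a sketch of what such a function computes, here is a minimal row-major LookAt in the style of gluLookAt. The Vec3 type and helper names are placeholders for illustration, not the engine's actual classes:

```cpp
#include <array>
#include <cmath>

// Hypothetical minimal vector type; the engine's own Vector3 differs.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

static Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

static Vec3 Cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Row-major 4x4 view matrix, built the way gluLookAt does it:
// a rotation into the camera basis followed by a translation by -eye.
static std::array<float, 16> LookAt(const Vec3& eye, const Vec3& target, const Vec3& up) {
    Vec3 forward = Normalize(target - eye);        // camera looks down -z
    Vec3 right = Normalize(Cross(forward, up));
    Vec3 camUp = Cross(right, forward);
    return {
        right.x,    right.y,    right.z,    -Dot(right, eye),
        camUp.x,    camUp.y,    camUp.z,    -Dot(camUp, eye),
        -forward.x, -forward.y, -forward.z,  Dot(forward, eye),
        0.f,        0.f,        0.f,         1.f
    };
}
```

With the camera at the origin looking down the negative z-axis and +y as up, this returns the identity matrix, which is a handy sanity check.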

The Camera class holds every necessary variable and function, such as the position, forward, up, and right vectors, the near and far distances, and so on. Three rotation functions (yaw, pitch, and roll) are also added. Perhaps the most important concept in the camera implementation is the view and projection matrices, so it is better to give them their own section.
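The yaw, pitch, and roll functions boil down to rotating the camera's basis vectors around one of its own axes. A minimal sketch using Rodrigues' rotation formula; the names here are hypothetical, not the engine's actual API:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate v around a unit-length axis by angle radians
// (Rodrigues' rotation formula).
static Vec3 RotateAroundAxis(const Vec3& v, const Vec3& axis, float angle) {
    float c = std::cos(angle);
    float s = std::sin(angle);
    Vec3 cross = {axis.y * v.z - axis.z * v.y,
                  axis.z * v.x - axis.x * v.z,
                  axis.x * v.y - axis.y * v.x};
    float dot = axis.x * v.x + axis.y * v.y + axis.z * v.z;
    return {v.x * c + cross.x * s + axis.x * dot * (1.f - c),
            v.y * c + cross.y * s + axis.y * dot * (1.f - c),
            v.z * c + cross.z * s + axis.z * dot * (1.f - c)};
}

// Yaw: rotate the forward vector around the camera's up vector.
// Pitch and roll work the same way around the right and forward axes.
static Vec3 Yaw(const Vec3& forward, const Vec3& up, float angle) {
    return RotateAroundAxis(forward, up, angle);
}
```

Yawing a forward vector of (0, 0, -1) by 90° around the +y up axis turns it to (-1, 0, 0), as expected.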

View and Projection Matrices

The view matrix of a camera is used to bring every vertex from world space to view space, i.e. every vertex is transformed according to the view matrix. You may think that moving or rotating the camera has nothing to do with the objects, since the camera is transformed rather than the objects, but that is not the case in computer graphics. In CG there is no such thing as rotating or moving the camera; only vertices are movable and rotatable. The object we call a “camera” merely holds a view and a projection matrix: when we transform our view, every object in the scene is transformed while the camera itself keeps all of its properties unchanged.
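To make that concrete, here is a small sketch: a camera sitting at (0, 0, 5) with no rotation has a view matrix that is just a translation by -eye, and multiplying a world-space vertex by it moves the vertex, not the camera (the matrices are row-major and the names are illustrative only):

```cpp
#include <array>

// View matrix of a camera at eye = (0, 0, 5) with no rotation (row-major):
// a pure translation by -eye.
static const std::array<float, 16> kViewMatrix = {
    1.f, 0.f, 0.f,  0.f,
    0.f, 1.f, 0.f,  0.f,
    0.f, 0.f, 1.f, -5.f,
    0.f, 0.f, 0.f,  1.f
};

// Multiply a homogeneous point by a row-major 4x4 matrix.
static std::array<float, 4> TransformPoint(const std::array<float, 16>& m,
                                           const std::array<float, 4>& p) {
    std::array<float, 4> out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += m[row * 4 + col] * p[col];
    return out;
}
```

A vertex at the world origin ends up at (0, 0, -5) in view space: five units in front of the camera, exactly as if the scene had moved instead of the camera.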

As we all know, screens are 2D, but our scenes are 3D, and we need to calculate where each fragment (pixel-to-be) projects onto the screen. For this purpose there is a matrix called the projection matrix. In computer graphics there are two different projection types: perspective and orthographic. In orthographic projection, closer and farther objects have the same scale, because depth does not affect size: a box placed 1 meter in front of the camera appears the same size as an identical box placed 1 kilometer away. Orthographic projection is generally used in 2D and isometric games, or in professional areas such as architecture. In perspective projection, on the other hand, the 3D world is projected the way we see the real world with our eyes: closer objects appear bigger, and their projections shrink as they move farther away.

Indeed, these projection types are very easy to implement in computer graphics. We only need six values describing the view volume: the coordinates of its left, right, top, and bottom edges, and the distances of the near and far planes. The near plane is the closest distance that can be projected and the far plane is the farthest one; only fragments between these distances appear on the screen.
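Those six values map directly onto the classic glOrtho/glFrustum matrices. A row-major sketch of both (the engine's own matrix class and conventions may differ):

```cpp
#include <array>

// Orthographic projection from the six view-volume values:
// left, right, bottom, top, near, far (same matrix glOrtho builds).
static std::array<float, 16> Orthographic(float l, float r, float b, float t,
                                          float n, float f) {
    return {
        2.f / (r - l), 0.f,           0.f,            -(r + l) / (r - l),
        0.f,           2.f / (t - b), 0.f,            -(t + b) / (t - b),
        0.f,           0.f,           -2.f / (f - n), -(f + n) / (f - n),
        0.f,           0.f,           0.f,            1.f
    };
}

// Perspective projection from the same six values
// (same matrix glFrustum builds).
static std::array<float, 16> Perspective(float l, float r, float b, float t,
                                         float n, float f) {
    return {
        2.f * n / (r - l), 0.f,               (r + l) / (r - l),  0.f,
        0.f,               2.f * n / (t - b), (t + b) / (t - b),  0.f,
        0.f,               0.f,               -(f + n) / (f - n), -2.f * f * n / (f - n),
        0.f,               0.f,               -1.f,               0.f
    };
}
```

For a symmetric volume with near = 1 and far = 3, both matrices map a point on the near plane (z = -1 in view space) to z = -1 in normalized device coordinates, which is a quick way to check the math.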

Shader Class

I implemented a basic Shader class where shader binding/unbinding, uniform setting operations, etc. are handled.

Mesh Class

The Mesh class is the base class rendered by the renderer. It consists of two fundamental classes, Face and VertexData: Face is simply a container holding the vertex indices that together form a face, and VertexData holds every property of a vertex: its position, normal, and UV coordinates.
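A minimal sketch of that layout; the engine's actual classes certainly carry more functionality than this:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical minimal math types for illustration.
struct Vector2 { float x, y; };
struct Vector3 { float x, y, z; };

// Every per-vertex property: position, normal, and UV coordinates.
struct VertexData {
    Vector3 position;
    Vector3 normal;
    Vector2 uv;
};

// A Face is simply the three vertex indices that form a triangle together.
struct Face {
    uint32_t indices[3];
};

struct Mesh {
    std::vector<VertexData> vertices;
    std::vector<Face> faces;

    size_t GetVertexCount() const { return vertices.size(); }
    size_t GetFaceCount() const { return faces.size(); }
};
```

A quad, for instance, would hold four VertexData entries and two Face entries sharing two of the indices.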

For now, the vertex and fragment shaders are assigned as strings. This will change in an upcoming article, where the shaders will be generated by a class named ShaderBuilder.

Renderer Class

I explained the basic Renderer class in the previous article. In this part of the project, I added the buffer data setter and render functions.

Rendering Multiple Meshes

In the Init function I sum up every mesh's vertex and face counts, and in the SetBufferData function the vertex and index buffers are generated and bound using these totals. Then each mesh's data is transferred into the buffers using glBufferSubData.
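The byte offsets handed to glBufferSubData for each mesh can be accumulated in a single pass over the meshes. A sketch of that bookkeeping with hypothetical names (the GL calls themselves are left out so the logic stands alone):

```cpp
#include <cstddef>
#include <vector>

// Per-mesh sizes, as counted in Init (hypothetical names).
struct MeshCounts {
    size_t vertexCount;
    size_t faceCount;  // 3 indices per face
};

// Byte offsets at which each mesh's data is written into the shared
// vertex and index buffers, i.e. the offsets passed to glBufferSubData.
struct BufferOffsets {
    size_t vertexByteOffset;
    size_t indexByteOffset;
};

static std::vector<BufferOffsets> ComputeOffsets(const std::vector<MeshCounts>& meshes,
                                                 size_t vertexStride,  // sizeof(VertexData)
                                                 size_t indexSize) {   // sizeof index type
    std::vector<BufferOffsets> offsets;
    size_t vertexOffset = 0;
    size_t indexOffset = 0;
    for (const MeshCounts& mesh : meshes) {
        offsets.push_back({vertexOffset, indexOffset});
        vertexOffset += mesh.vertexCount * vertexStride;
        indexOffset += mesh.faceCount * 3 * indexSize;
    }
    return offsets;
}
```

Each mesh's vertex data would then be uploaded with something like glBufferSubData(GL_ARRAY_BUFFER, offsets[i].vertexByteOffset, size, data), and likewise for the index buffer.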

In the Render function, OpenGL's glDrawElementsBaseVertex is used to render every object with its own vertex indices. Because all vertices are packed into the buffer contiguously, using glDrawElements would require offsetting each object's indices by the vertex counts of all the objects before it; glDrawElementsBaseVertex frees us from this limitation.
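A sketch of how the per-mesh arguments for glDrawElementsBaseVertex can be accumulated (names are hypothetical): the base vertex is the running sum of the preceding meshes' vertex counts, so each mesh's indices can stay zero-based:

```cpp
#include <cstddef>
#include <vector>

// Arguments a Render loop would pass to glDrawElementsBaseVertex for
// each mesh packed into the shared buffers.
struct DrawCall {
    size_t indexCount;       // "count" argument
    size_t indexByteOffset;  // "indices" argument: byte offset into the index buffer
    int baseVertex;          // "basevertex" argument
};

static std::vector<DrawCall> BuildDrawCalls(const std::vector<size_t>& vertexCounts,
                                            const std::vector<size_t>& indexCounts,
                                            size_t indexSize) {
    std::vector<DrawCall> calls;
    size_t indexOffset = 0;  // running count of indices already in the buffer
    int baseVertex = 0;      // running count of vertices already in the buffer
    for (size_t i = 0; i < vertexCounts.size(); ++i) {
        calls.push_back({indexCounts[i], indexOffset * indexSize, baseVertex});
        indexOffset += indexCounts[i];
        baseVertex += static_cast<int>(vertexCounts[i]);
    }
    return calls;
}
```

For example, with a 4-vertex quad followed by an 8-vertex mesh, the second draw call gets a base vertex of 4, and the driver adds that to every index of the second mesh at draw time.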

You can check the GitHub repository for source files.

I have now added a project for Goknar Engine on GitHub. You can follow the status of the project there.

In the next article I will talk about static/dynamic lighting implementations. Until then, take care 🙂