I am making a game in modern OpenGL and C++, but I am having problems with optimization. Every time I render the sprites, CPU usage starts to increase.
Code:
void GameController::Render() {
    shaders.Use();
    glBindVertexArray(vao);
    for (const auto& chunk : chunks) {
        for (const auto& layer : chunk) {
            // Cull sprites whose bounding box falls outside the camera rectangle
            const bool isVisible =
                (layer.position.x + layer.size.x >= camera->pixelRect.min().x) &&
                (layer.position.x - layer.size.x <= camera->pixelRect.max().x) &&
                (layer.position.y + layer.size.y >= camera->pixelRect.min().y) &&
                (layer.position.y - layer.size.y <= camera->pixelRect.max().y);
            if (isVisible) {
                textures[layer.id - 1]->Use(GL_TEXTURE0);
                shaders.SetUniform("textureSampler", 0);
                shaders.SetUniform("projection", camera->projectionMatrix);
                shaders.SetUniform("translate", layer.position);
                shaders.SetUniform("scale", layer.size);
                glDrawArrays(GL_QUADS, 0, 4);
                textures[layer.id - 1]->Unbind();
            }
        }
    }
    glBindVertexArray(0);
    shaders.Unuse();
}
CPU usage is getting unusually high. I have considered using separate threads, one for updating and one for rendering, but I don't know whether that would actually be efficient.