tornavis/source/blender/draw/intern/draw_cache_extract_mesh.cc

/*
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 *
 * The Original Code is Copyright (C) 2017 by Blender Foundation.
 * All rights reserved.
 */
/** \file
 * \ingroup draw
 *
 * \brief Extraction of Mesh data into VBO to feed to GPU.
 */
#include "MEM_guardedalloc.h"
#include <optional>
#include "atomic_ops.h"
#include "DNA_mesh_types.h"
#include "DNA_scene_types.h"
#include "BLI_array.hh"
#include "BLI_math_bits.h"
#include "BLI_task.h"
#include "BLI_vector.hh"
#include "BKE_editmesh.h"
#include "GPU_capabilities.h"
#include "draw_cache_extract.h"
#include "draw_cache_extract_mesh_private.h"
#include "draw_cache_inline.h"
// #define DEBUG_TIME
#ifdef DEBUG_TIME
# include "PIL_time_utildefines.h"
#endif
#define CHUNK_SIZE 1024
namespace blender::draw {
/* ---------------------------------------------------------------------- */
/** \name Mesh Elements Extract Struct
 * \{ */
using TaskId = int;
using TaskLen = int;
struct ExtractorRunData {
  /* The extractor this run data belongs to. */
  const MeshExtract *extractor;

  /* During iteration the VBO/IBO that is being built. */
  void *buffer = nullptr;
  uint32_t data_offset = 0;

  ExtractorRunData(const MeshExtract *extractor) : extractor(extractor)
  {
  }

#ifdef WITH_CXX_GUARDEDALLOC
  MEM_CXX_CLASS_ALLOC_FUNCS("DRAW:ExtractorRunData")
#endif
};
class ExtractorRunDatas : public Vector<ExtractorRunData> {
 public:
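  /* Copy into `result` the run data of all extractors that implement any of the requested
   * iteration types. */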
  void filter_into(ExtractorRunDatas &result, eMRIterType iter_type) const
  {
    for (const ExtractorRunData &data : *this) {
      const MeshExtract *extractor = data.extractor;
      if ((iter_type & MR_ITER_LOOPTRI) && extractor->iter_looptri_bm) {
        BLI_assert(extractor->iter_looptri_mesh);
        result.append(data);
        continue;
      }
      if ((iter_type & MR_ITER_POLY) && extractor->iter_poly_bm) {
        BLI_assert(extractor->iter_poly_mesh);
        result.append(data);
        continue;
      }
      if ((iter_type & MR_ITER_LEDGE) && extractor->iter_ledge_bm) {
        BLI_assert(extractor->iter_ledge_mesh);
        result.append(data);
        continue;
      }
      if ((iter_type & MR_ITER_LVERT) && extractor->iter_lvert_bm) {
        BLI_assert(extractor->iter_lvert_mesh);
        result.append(data);
        continue;
      }
    }
  }
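
  /* Copy into `result` the extractors that are safe to run from multiple threads. */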
  void filter_threaded_extractors_into(ExtractorRunDatas &result)
  {
    for (const ExtractorRunData &data : *this) {
      const MeshExtract *extractor = data.extractor;
      if (extractor->use_threading) {
        result.append(extractor);
      }
    }
  }
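
  /* Union of the iteration types required by all extractors in this vector. */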
  eMRIterType iter_types() const
  {
    eMRIterType iter_type = static_cast<eMRIterType>(0);
    for (const ExtractorRunData &data : *this) {
      const MeshExtract *extractor = data.extractor;
      iter_type |= mesh_extract_iter_type(extractor);
    }
    return iter_type;
  }
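
  /* Number of distinct iteration types required, i.e. the number of bits set in iter_types(). */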
  const uint iter_types_len() const
  {
    const eMRIterType iter_type = iter_types();
    uint bits = static_cast<uint>(iter_type);
    return count_bits_i(bits);
  }
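
  /* Union of the data types required by all extractors in this vector. */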
  eMRDataType data_types()
  {
    eMRDataType data_type = static_cast<eMRDataType>(0);
    for (const ExtractorRunData &data : *this) {
      const MeshExtract *extractor = data.extractor;
      data_type |= extractor->data_type;
    }
    return data_type;
  }
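
  /* Total size in bytes of the task-local data of all extractors. The caller is expected to use
   * this to size the shared data stack. */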
  size_t data_size_total()
  {
    size_t data_size = 0;
    for (const ExtractorRunData &data : *this) {
      const MeshExtract *extractor = data.extractor;
      data_size += extractor->data_size;
    }
    return data_size;
  }

#ifdef WITH_CXX_GUARDEDALLOC
  MEM_CXX_CLASS_ALLOC_FUNCS("DRAW:ExtractorRunDatas")
#endif
};
/** \} */
/* ---------------------------------------------------------------------- */
/** \name ExtractTaskData
 * \{ */
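
/* Groups everything a single extraction task needs: the render data, batch cache, the
 * extractors to run and the buffers they fill. */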
struct ExtractTaskData {
  const MeshRenderData *mr = nullptr;
  MeshBatchCache *cache = nullptr;
  ExtractorRunDatas *extractors = nullptr;
  MeshBufferCache *mbc = nullptr;
  eMRIterType iter_type;
  bool use_threading = false;

  ExtractTaskData(const MeshRenderData *mr,
                  struct MeshBatchCache *cache,
                  ExtractorRunDatas *extractors,
                  MeshBufferCache *mbc,
                  const bool use_threading)
      : mr(mr), cache(cache), extractors(extractors), mbc(mbc), use_threading(use_threading)
  {
    iter_type = extractors->iter_types();
  }

  ExtractTaskData(const ExtractTaskData &src) = default;

  ~ExtractTaskData()
  {
    delete extractors;
  }

#ifdef WITH_CXX_GUARDEDALLOC
  MEM_CXX_CLASS_ALLOC_FUNCS("DRW:ExtractTaskData")
#endif
};
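
/* Free an ExtractTaskData, used as a task free callback. */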
static void extract_task_data_free(void *data)
{
  ExtractTaskData *task_data = static_cast<ExtractTaskData *>(data);
  delete task_data;
}
/** \} */
/* ---------------------------------------------------------------------- */
/** \name Extract Init and Finish
 * \{ */
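
/* Initialize each extractor: resolve the VBO/IBO it fills and call its `init` callback with a
 * pointer to its slice of the shared data stack. */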
BLI_INLINE void extract_init(const MeshRenderData *mr,
                             struct MeshBatchCache *cache,
                             ExtractorRunDatas &extractors,
                             MeshBufferCache *mbc,
                             void *data_stack)
{
  uint32_t data_offset = 0;
  for (ExtractorRunData &run_data : extractors) {
    const MeshExtract *extractor = run_data.extractor;
    run_data.buffer = mesh_extract_buffer_get(extractor, mbc);
    run_data.data_offset = data_offset;
    extractor->init(mr, cache, run_data.buffer, POINTER_OFFSET(data_stack, data_offset));
    data_offset += (uint32_t)extractor->data_size;
  }
}
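
/* Call the optional `finish` callback of every extractor, passing the same buffer and data
 * slice that were given to `init`. */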
BLI_INLINE void extract_finish(const MeshRenderData *mr,
                               struct MeshBatchCache *cache,
                               const ExtractorRunDatas &extractors,
                               void *data_stack)
{
  for (const ExtractorRunData &run_data : extractors) {
    const MeshExtract *extractor = run_data.extractor;
    if (extractor->finish) {
      extractor->finish(
          mr, cache, run_data.buffer, POINTER_OFFSET(data_stack, run_data.data_offset));
    }
  }
}
/** \} */
/* ---------------------------------------------------------------------- */
/** \name Extract In Parallel Ranges
 * \{ */
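
/* User data for the parallel range iterators below: the extractors to run and the mesh
 * elements they iterate over. */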
struct ExtractorIterData {
  ExtractorRunDatas extractors;
  const MeshRenderData *mr = nullptr;
  const void *elems = nullptr;
  const int *loose_elems = nullptr;
#ifdef WITH_CXX_GUARDEDALLOC
  MEM_CXX_CLASS_ALLOC_FUNCS("DRW:MeshRenderDataUpdateTaskData")
#endif
};
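
/* Reduce callback for `BLI_task_parallel_range`: merge one thread-local data chunk into
 * another for every extractor that defines `task_reduce`. */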
static void extract_task_reduce(const void *__restrict userdata,
                                void *__restrict chunk_to,
                                void *__restrict chunk_from)
{
  const ExtractorIterData *data = static_cast<const ExtractorIterData *>(userdata);
  for (const ExtractorRunData &run_data : data->extractors) {
    const MeshExtract *extractor = run_data.extractor;
    if (extractor->task_reduce) {
      extractor->task_reduce(POINTER_OFFSET(chunk_to, run_data.data_offset),
                             POINTER_OFFSET(chunk_from, run_data.data_offset));
    }
  }
}
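
/* Visit the BMesh loop-triangle at index `iter` with every extractor. */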
static void extract_range_iter_looptri_bm(void *__restrict userdata,
                                          const int iter,
                                          const TaskParallelTLS *__restrict tls)
{
  const ExtractorIterData *data = static_cast<ExtractorIterData *>(userdata);
  void *extract_data = tls->userdata_chunk;
  const MeshRenderData *mr = data->mr;
  BMLoop **elt = ((BMLoop * (*)[3])data->elems)[iter];
  for (const ExtractorRunData &run_data : data->extractors) {
    run_data.extractor->iter_looptri_bm(
        mr, elt, iter, POINTER_OFFSET(extract_data, run_data.data_offset));
  }
}
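
/* Visit the Mesh loop-triangle at index `iter` with every extractor. */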
static void extract_range_iter_looptri_mesh(void *__restrict userdata,
                                            const int iter,
                                            const TaskParallelTLS *__restrict tls)
{
  void *extract_data = tls->userdata_chunk;
  const ExtractorIterData *data = static_cast<ExtractorIterData *>(userdata);
  const MeshRenderData *mr = data->mr;
  const MLoopTri *mlt = &((const MLoopTri *)data->elems)[iter];
  for (const ExtractorRunData &run_data : data->extractors) {
    run_data.extractor->iter_looptri_mesh(
        mr, mlt, iter, POINTER_OFFSET(extract_data, run_data.data_offset));
  }
}
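
/* Visit the BMesh face at index `iter` with every extractor. */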
static void extract_range_iter_poly_bm(void *__restrict userdata,
                                       const int iter,
                                       const TaskParallelTLS *__restrict tls)
{
  void *extract_data = tls->userdata_chunk;
const ExtractorIterData *data = static_cast<ExtractorIterData *>(userdata);
const MeshRenderData *mr = data->mr;
const BMFace *f = ((const BMFace **)data->elems)[iter];
for (const ExtractorRunData &run_data : data->extractors) {
run_data.extractor->iter_poly_bm(
mr, f, iter, POINTER_OFFSET(extract_data, run_data.data_offset));
}
}
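
/* `BLI_task_parallel_range` callback: runs every scheduled extractor on the `MPoly` at
 * index `iter`, each writing into its own slice of the thread-local data chunk. */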
static void extract_range_iter_poly_mesh(void *__restrict userdata,
const int iter,
const TaskParallelTLS *__restrict tls)
{
void *extract_data = tls->userdata_chunk;
const ExtractorIterData *data = static_cast<ExtractorIterData *>(userdata);
const MeshRenderData *mr = data->mr;
const MPoly *mp = &((const MPoly *)data->elems)[iter];
for (const ExtractorRunData &run_data : data->extractors) {
run_data.extractor->iter_poly_mesh(
mr, mp, iter, POINTER_OFFSET(extract_data, run_data.data_offset));
}
}
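
/* Loose-edge iteration for BMesh: `iter` indexes the loose-edge map, which yields the
 * index of the `BMEdge` in the edge table. */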
static void extract_range_iter_ledge_bm(void *__restrict userdata,
const int iter,
const TaskParallelTLS *__restrict tls)
{
void *extract_data = tls->userdata_chunk;
const ExtractorIterData *data = static_cast<ExtractorIterData *>(userdata);
const MeshRenderData *mr = data->mr;
const int ledge_index = data->loose_elems[iter];
const BMEdge *eed = ((const BMEdge **)data->elems)[ledge_index];
for (const ExtractorRunData &run_data : data->extractors) {
run_data.extractor->iter_ledge_bm(
mr, eed, iter, POINTER_OFFSET(extract_data, run_data.data_offset));
}
}
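
/* Loose-edge iteration for Mesh: same mapping as the BMesh variant, resolving into the
 * `MEdge` array instead. */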
static void extract_range_iter_ledge_mesh(void *__restrict userdata,
const int iter,
const TaskParallelTLS *__restrict tls)
{
void *extract_data = tls->userdata_chunk;
const ExtractorIterData *data = static_cast<ExtractorIterData *>(userdata);
const MeshRenderData *mr = data->mr;
const int ledge_index = data->loose_elems[iter];
const MEdge *med = &((const MEdge *)data->elems)[ledge_index];
for (const ExtractorRunData &run_data : data->extractors) {
run_data.extractor->iter_ledge_mesh(
mr, med, iter, POINTER_OFFSET(extract_data, run_data.data_offset));
}
}
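
/* Loose-vertex iteration for BMesh: `iter` indexes the loose-vertex map, which yields
 * the `BMVert` in the vertex table. */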
static void extract_range_iter_lvert_bm(void *__restrict userdata,
const int iter,
const TaskParallelTLS *__restrict tls)
{
void *extract_data = tls->userdata_chunk;
const ExtractorIterData *data = static_cast<ExtractorIterData *>(userdata);
const MeshRenderData *mr = data->mr;
const int lvert_index = data->loose_elems[iter];
const BMVert *eve = ((const BMVert **)data->elems)[lvert_index];
for (const ExtractorRunData &run_data : data->extractors) {
run_data.extractor->iter_lvert_bm(
mr, eve, iter, POINTER_OFFSET(extract_data, run_data.data_offset));
}
}
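
/* Loose-vertex iteration for Mesh: resolves the loose-vertex index into the `MVert`
 * array. */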
static void extract_range_iter_lvert_mesh(void *__restrict userdata,
const int iter,
const TaskParallelTLS *__restrict tls)
{
void *extract_data = tls->userdata_chunk;
const ExtractorIterData *data = static_cast<ExtractorIterData *>(userdata);
const MeshRenderData *mr = data->mr;
const int lvert_index = data->loose_elems[iter];
const MVert *mv = &((const MVert *)data->elems)[lvert_index];
for (const ExtractorRunData &run_data : data->extractors) {
run_data.extractor->iter_lvert_mesh(
mr, mv, iter, POINTER_OFFSET(extract_data, run_data.data_offset));
}
}
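
/* Select the element array, per-element callback and range length for the given
 * iteration type, keep only the extractors that implement that iteration, and
 * dispatch the range to `BLI_task_parallel_range`. */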
BLI_INLINE void extract_task_range_run_iter(const MeshRenderData *mr,
ExtractorRunDatas *extractors,
const eMRIterType iter_type,
bool is_mesh,
TaskParallelSettings *settings)
{
ExtractorIterData range_data;
range_data.mr = mr;
TaskParallelRangeFunc func;
int stop;
switch (iter_type) {
case MR_ITER_LOOPTRI:
range_data.elems = is_mesh ? mr->mlooptri : (void *)mr->edit_bmesh->looptris;
func = is_mesh ? extract_range_iter_looptri_mesh : extract_range_iter_looptri_bm;
stop = mr->tri_len;
break;
case MR_ITER_POLY:
range_data.elems = is_mesh ? mr->mpoly : (void *)mr->bm->ftable;
func = is_mesh ? extract_range_iter_poly_mesh : extract_range_iter_poly_bm;
stop = mr->poly_len;
break;
case MR_ITER_LEDGE:
range_data.loose_elems = mr->ledges;
range_data.elems = is_mesh ? mr->medge : (void *)mr->bm->etable;
func = is_mesh ? extract_range_iter_ledge_mesh : extract_range_iter_ledge_bm;
stop = mr->edge_loose_len;
break;
case MR_ITER_LVERT:
range_data.loose_elems = mr->lverts;
range_data.elems = is_mesh ? mr->mvert : (void *)mr->bm->vtable;
func = is_mesh ? extract_range_iter_lvert_mesh : extract_range_iter_lvert_bm;
stop = mr->vert_loose_len;
break;
default:
BLI_assert(false);
return;
}

extractors->filter_into(range_data.extractors, iter_type);
BLI_task_parallel_range(0, stop, &range_data, func, settings);
}
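
/* Entry point of an extraction task: allocates a single data chunk shared by all
 * extractors, initializes it through `extract_init`, then runs every iteration type
 * requested by `iter_type`. Worker threads operate on thread-local copies of the
 * chunk, which are merged back by `extract_task_reduce`. */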
static void extract_task_range_run(void *__restrict taskdata)
{
ExtractTaskData *data = (ExtractTaskData *)taskdata;
const eMRIterType iter_type = data->iter_type;
const bool is_mesh = data->mr->extract_type != MR_EXTRACT_BMESH;

TaskParallelSettings settings;
BLI_parallel_range_settings_defaults(&settings);
settings.func_reduce = extract_task_reduce;
settings.min_iter_per_thread = CHUNK_SIZE;
settings.use_threading = data->use_threading;
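
/* One buffer holds the data of all extractors; each extractor addresses its own
 * region through its `data_offset` (see the `POINTER_OFFSET` calls above). */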
size_t chunk_size = data->extractors->data_size_total();
char *chunk = new char[chunk_size];
extract_init(data->mr, data->cache, *data->extractors, data->mbc, (void *)chunk);
settings.userdata_chunk = chunk;
settings.userdata_chunk_size = chunk_size;
if (iter_type & MR_ITER_LOOPTRI) {
extract_task_range_run_iter(data->mr, data->extractors, MR_ITER_LOOPTRI, is_mesh, &settings);
}
if (iter_type & MR_ITER_POLY) {
extract_task_range_run_iter(data->mr, data->extractors, MR_ITER_POLY, is_mesh, &settings);
}
if (iter_type & MR_ITER_LEDGE) {
extract_task_range_run_iter(data->mr, data->extractors, MR_ITER_LEDGE, is_mesh, &settings);
}
if (iter_type & MR_ITER_LVERT) {
extract_task_range_run_iter(data->mr, data->extractors, MR_ITER_LVERT, is_mesh, &settings);
}
extract_finish(data->mr, data->cache, *data->extractors, (void *)chunk);
delete[] chunk;
}
/** \} */
/* ---------------------------------------------------------------------- */
/** \name Extract Single Thread
* \{ */
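/* Create a task-graph node that runs the given extractors over the mesh. The node executes
 * `extract_task_range_run`, which (per the D11558 refactor) dispatches the iteration over a
 * `BLI_task_parallel_range` when `use_threading` is true, and frees the task data once the
 * node is done. */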
static struct TaskNode *extract_task_node_create(struct TaskGraph *task_graph,
const MeshRenderData *mr,
MeshBatchCache *cache,
ExtractorRunDatas *extractors,
MeshBufferCache *mbc,
const bool use_threading)
{
ExtractTaskData *taskdata = new ExtractTaskData(mr, cache, extractors, mbc, use_threading);
struct TaskNode *task_node = BLI_task_graph_node_create(
task_graph,
extract_task_range_run,
taskdata,
(TaskGraphNodeFreeFunction)extract_task_data_free);
return task_node;
}
/** \} */
/* ---------------------------------------------------------------------- */
/** \name Task Node - Update Mesh Render Data
* \{ */
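/* Task data for the render-data update node. Owns the `MeshRenderData` and frees it in the
 * destructor when the task graph destroys the node. */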
struct MeshRenderDataUpdateTaskData {
MeshRenderData *mr = nullptr;
eMRIterType iter_type;
eMRDataType data_flag;
MeshRenderDataUpdateTaskData(MeshRenderData *mr, eMRIterType iter_type, eMRDataType data_flag)
: mr(mr), iter_type(iter_type), data_flag(data_flag)
{
}
~MeshRenderDataUpdateTaskData()
{
mesh_render_data_free(mr);
}
#ifdef WITH_CXX_GUARDEDALLOC
MEM_CXX_CLASS_ALLOC_FUNCS("DRW:MeshRenderDataUpdateTaskData")
#endif
};
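/* Free callback passed to the task graph; destroys the task data once the node is released. */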
static void mesh_render_data_update_task_data_free(void *data)
{
MeshRenderDataUpdateTaskData *taskdata = static_cast<MeshRenderDataUpdateTaskData *>(data);
BLI_assert(taskdata);
delete taskdata;
}
static void mesh_extract_render_data_node_exec(void *__restrict task_data)
{
MeshRenderDataUpdateTaskData *update_task_data = static_cast<MeshRenderDataUpdateTaskData *>(
task_data);
MeshRenderData *mr = update_task_data->mr;
const eMRIterType iter_type = update_task_data->iter_type;
const eMRDataType data_flag = update_task_data->data_flag;
mesh_render_data_update_normals(mr, data_flag);
mesh_render_data_update_looptris(mr, iter_type, data_flag);
}
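/* Create the task-graph node that updates the mesh render data (normals and loop triangles)
 * before any extraction node runs. */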
static struct TaskNode *mesh_extract_render_data_node_create(struct TaskGraph *task_graph,
MeshRenderData *mr,
const eMRIterType iter_type,
const eMRDataType data_flag)
{
MeshRenderDataUpdateTaskData *task_data = new MeshRenderDataUpdateTaskData(
mr, iter_type, data_flag);
struct TaskNode *task_node = BLI_task_graph_node_create(
task_graph,
mesh_extract_render_data_node_exec,
task_data,
(TaskGraphNodeFreeFunction)mesh_render_data_update_task_data_free);
return task_node;
}
/** \} */
/* ---------------------------------------------------------------------- */
/** \name Extract Loop
* \{ */
static void mesh_buffer_cache_create_requested(struct TaskGraph *task_graph,
MeshBatchCache *cache,
MeshBufferCache *mbc,
MeshBufferExtractionCache *extraction_cache,
Mesh *me,
const bool is_editmode,
const bool is_paint_mode,
const bool is_mode_active,
const float obmat[4][4],
const bool do_final,
const bool do_uvedit,
const bool use_subsurf_fdots,
const Scene *scene,
const ToolSettings *ts,
const bool use_hide)
{
/* For each mesh whose batches need to be updated, a sub-graph will be added to the task_graph.
* This sub-graph starts with an extract_render_data_node. This fills/converts the required
* data from Mesh.
*
* Small extractions and extractions that can't be multi-threaded are grouped in a single
* `extract_single_threaded_task_node`.
*
* Other extractions will create a node for each loop range exceeding 8192 items. These nodes
* are linked to the `user_data_init_task_node`. The `user_data_init_task_node` prepares the
* user_data needed for the extraction based on the data extracted from the mesh.
* Counters are used to check if the finalize of a task has to be called.
*
* Mesh extraction sub graph
*
* +----------------------+
* +-----> | extract_task1_loop_1 |
* | +----------------------+
* +------------------+ +----------------------+ +----------------------+
* | mesh_render_data | --> | | --> | extract_task1_loop_2 |
* +------------------+ | | +----------------------+
* | | | +----------------------+
* | | user_data_init | --> | extract_task2_loop_1 |
* v | | +----------------------+
* +------------------+ | | +----------------------+
* | single_threaded | | | --> | extract_task2_loop_2 |
* +------------------+ +----------------------+ +----------------------+
* | +----------------------+
* +-----> | extract_task2_loop_3 |
* +----------------------+
*/
const bool do_hq_normals = (scene->r.perf_flag & SCE_PERF_HQ_NORMALS) != 0 ||
GPU_use_hq_normals_workaround();
const bool override_single_mat = mesh_render_mat_len_get(me) <= 1;
/* Create an array containing all the extractors that need to be executed. */
ExtractorRunDatas extractors;
#define EXTRACT_ADD_REQUESTED(type, type_lowercase, name) \
do { \
if (DRW_##type_lowercase##_requested(mbc->type_lowercase.name)) { \
const MeshExtract *extractor = mesh_extract_override_get( \
&extract_##name, do_hq_normals, override_single_mat); \
extractors.append(extractor); \
} \
} while (0)
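/* For illustration, the first invocation below, `EXTRACT_ADD_REQUESTED(VBO, vbo, pos_nor)`,
 * expands to roughly this (the `do { ... } while (0)` wrapper omitted):
 *
 *   if (DRW_vbo_requested(mbc->vbo.pos_nor)) {
 *     const MeshExtract *extractor = mesh_extract_override_get(
 *         &extract_pos_nor, do_hq_normals, override_single_mat);
 *     extractors.append(extractor);
 *   }
 */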
EXTRACT_ADD_REQUESTED(VBO, vbo, pos_nor);
EXTRACT_ADD_REQUESTED(VBO, vbo, lnor);
EXTRACT_ADD_REQUESTED(VBO, vbo, uv);
EXTRACT_ADD_REQUESTED(VBO, vbo, tan);
EXTRACT_ADD_REQUESTED(VBO, vbo, vcol);
EXTRACT_ADD_REQUESTED(VBO, vbo, sculpt_data);
EXTRACT_ADD_REQUESTED(VBO, vbo, orco);
EXTRACT_ADD_REQUESTED(VBO, vbo, edge_fac);
EXTRACT_ADD_REQUESTED(VBO, vbo, weights);
EXTRACT_ADD_REQUESTED(VBO, vbo, edit_data);
EXTRACT_ADD_REQUESTED(VBO, vbo, edituv_data);
EXTRACT_ADD_REQUESTED(VBO, vbo, edituv_stretch_area);
EXTRACT_ADD_REQUESTED(VBO, vbo, edituv_stretch_angle);
EXTRACT_ADD_REQUESTED(VBO, vbo, mesh_analysis);
EXTRACT_ADD_REQUESTED(VBO, vbo, fdots_pos);
EXTRACT_ADD_REQUESTED(VBO, vbo, fdots_nor);
EXTRACT_ADD_REQUESTED(VBO, vbo, fdots_uv);
EXTRACT_ADD_REQUESTED(VBO, vbo, fdots_edituv_data);
EXTRACT_ADD_REQUESTED(VBO, vbo, poly_idx);
EXTRACT_ADD_REQUESTED(VBO, vbo, edge_idx);
EXTRACT_ADD_REQUESTED(VBO, vbo, vert_idx);
EXTRACT_ADD_REQUESTED(VBO, vbo, fdot_idx);
EXTRACT_ADD_REQUESTED(VBO, vbo, skin_roots);
EXTRACT_ADD_REQUESTED(IBO, ibo, tris);
if (DRW_ibo_requested(mbc->ibo.lines)) {
const MeshExtract *extractor;
if (mbc->ibo.lines_loose != nullptr) {
/* Update #lines_loose ibo. */
extractor = &extract_lines_with_lines_loose;
}
else {
extractor = &extract_lines;
}
extractors.append(extractor);
}
else if (DRW_ibo_requested(mbc->ibo.lines_loose)) {
/* Note: #ibo.lines must have been created first. */
const MeshExtract *extractor = &extract_lines_loose_only;
extractors.append(extractor);
}
EXTRACT_ADD_REQUESTED(IBO, ibo, points);
EXTRACT_ADD_REQUESTED(IBO, ibo, fdots);
EXTRACT_ADD_REQUESTED(IBO, ibo, lines_paint_mask);
EXTRACT_ADD_REQUESTED(IBO, ibo, lines_adjacency);
EXTRACT_ADD_REQUESTED(IBO, ibo, edituv_tris);
EXTRACT_ADD_REQUESTED(IBO, ibo, edituv_lines);
EXTRACT_ADD_REQUESTED(IBO, ibo, edituv_points);
EXTRACT_ADD_REQUESTED(IBO, ibo, edituv_fdots);
#undef EXTRACT_ADD_REQUESTED
if (extractors.is_empty()) {
return;
}
#ifdef DEBUG_TIME
double rdata_start = PIL_check_seconds_timer();
#endif
eMRIterType iter_type = extractors.iter_types();
eMRDataType data_flag = extractors.data_types();
MeshRenderData *mr = mesh_render_data_create(me,
extraction_cache,
is_editmode,
is_paint_mode,
is_mode_active,
obmat,
do_final,
do_uvedit,
ts,
iter_type);
mr->use_hide = use_hide;
mr->use_subsurf_fdots = use_subsurf_fdots;
mr->use_final_mesh = do_final;
#ifdef DEBUG_TIME
double rdata_end = PIL_check_seconds_timer();
#endif
struct TaskNode *task_node_mesh_render_data = mesh_extract_render_data_node_create(
task_graph, mr, iter_type, data_flag);
/* Simple heuristic: threading only pays off when there is more than one chunk's worth of
 * elements to iterate. */
const bool use_thread = (mr->loop_len + mr->loop_loose_len) > CHUNK_SIZE;
if (use_thread) {
/* First run the requested extractors that do not support asynchronous ranges. */
for (const ExtractorRunData &run_data : extractors) {
const MeshExtract *extractor = run_data.extractor;
if (!extractor->use_threading) {
ExtractorRunDatas *single_threaded_extractors = new ExtractorRunDatas();
single_threaded_extractors->append(extractor);
struct TaskNode *task_node = extract_task_node_create(
task_graph, mr, cache, single_threaded_extractors, mbc, false);
BLI_task_graph_edge_create(task_node_mesh_render_data, task_node);
}
}
/* Distribute the remaining extractors into ranges per core. */
ExtractorRunDatas *multi_threaded_extractors = new ExtractorRunDatas();
extractors.filter_threaded_extractors_into(*multi_threaded_extractors);
if (!multi_threaded_extractors->is_empty()) {
struct TaskNode *task_node = extract_task_node_create(
task_graph, mr, cache, multi_threaded_extractors, mbc, true);
BLI_task_graph_edge_create(task_node_mesh_render_data, task_node);
}
else {
/* No tasks were created; free the extractors list. */
delete multi_threaded_extractors;
}
}
else {
/* Run all requests on the same thread. */
ExtractorRunDatas *extractors_copy = new ExtractorRunDatas(extractors);
struct TaskNode *task_node = extract_task_node_create(
task_graph, mr, cache, extractors_copy, mbc, false);
BLI_task_graph_edge_create(task_node_mesh_render_data, task_node);
}
/* Trigger the sub-graph for this mesh. */
BLI_task_graph_node_push_work(task_node_mesh_render_data);
#ifdef DEBUG_TIME
BLI_task_graph_work_and_wait(task_graph);
double end = PIL_check_seconds_timer();
static double avg = 0;
static double avg_fps = 0;
static double avg_rdata = 0;
static double end_prev = 0;
if (end_prev == 0) {
end_prev = end;
}
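/* Exponential moving averages (weight 0.05 on the new sample) smooth the reported timings
 * across frames. */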
avg = avg * 0.95 + (end - rdata_end) * 0.05;
avg_fps = avg_fps * 0.95 + (end - end_prev) * 0.05;
avg_rdata = avg_rdata * 0.95 + (rdata_end - rdata_start) * 0.05;
printf(
"rdata %.0fms iter %.0fms (frame %.0fms)\n", avg_rdata * 1000, avg * 1000, avg_fps * 1000);
end_prev = end;
#endif
}
} // namespace blender::draw
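/* C entry point, forwarding to the C++ implementation in `blender::draw`. */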
extern "C" {
void mesh_buffer_cache_create_requested(struct TaskGraph *task_graph,
MeshBatchCache *cache,
MeshBufferCache *mbc,
MeshBufferExtractionCache *extraction_cache,
Mesh *me,
const bool is_editmode,
const bool is_paint_mode,
const bool is_mode_active,
const float obmat[4][4],
const bool do_final,
const bool do_uvedit,
const bool use_subsurf_fdots,
const Scene *scene,
const ToolSettings *ts,
const bool use_hide)
{
blender::draw::mesh_buffer_cache_create_requested(task_graph,
cache,
mbc,
extraction_cache,
me,
is_editmode,
is_paint_mode,
is_mode_active,
obmat,
do_final,
do_uvedit,
use_subsurf_fdots,
scene,
ts,
use_hide);
}
} // extern "C"
/** \} */