/* SPDX-FileCopyrightText: 2023 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */
#pragma once

/** \file
 * \ingroup fn
 *
 * This file provides the `Params` and `ParamsBuilder` structures.
 *
 * `ParamsBuilder` is used by a function caller to prepare all parameters that are passed into
 * the function. `Params` is then used inside the called function to access the parameters.
 */

#include <mutex>
#include <variant>

#include "BLI_generic_pointer.hh"
#include "BLI_generic_vector_array.hh"
#include "BLI_generic_virtual_vector_array.hh"
#include "BLI_resource_scope.hh"

#include "FN_multi_function_signature.hh"

namespace blender::fn::multi_function {
class ParamsBuilder {
 private:
  std::unique_ptr<ResourceScope> scope_;
  const Signature *signature_;
  const IndexMask &mask_;
  int64_t min_array_size_;
  Vector<std::variant<GVArray, GMutableSpan, const GVVectorArray *, GVectorArray *>>
      actual_params_;

  friend class Params;
|
2020-06-16 16:35:57 +02:00
|
|
|
|
BLI: refactor IndexMask for better performance and memory usage
Goals of this refactor:
* Reduce memory consumption of `IndexMask`. The old `IndexMask` uses an
`int64_t` for each index which is more than necessary in pretty much all
practical cases currently. Using `int32_t` might still become limiting
in the future in case we use this to index e.g. byte buffers larger than
a few gigabytes. We also don't want to template `IndexMask`, because
that would cause a split in the "ecosystem", or everything would have to
be implemented twice or templated.
* Allow for more multi-threading. The old `IndexMask` contains a single
array. This is generally good but has the problem that it is hard to fill
from multiple-threads when the final size is not known from the beginning.
This is commonly the case when e.g. converting an array of bool to an
index mask. Currently, this kind of code only runs on a single thread.
* Allow for efficient set operations like join, intersect and difference.
It should be possible to multi-thread those operations.
* It should be possible to iterate over an `IndexMask` very efficiently.
The most important part of that is to avoid all memory access when iterating
over continuous ranges. For some core nodes (e.g. math nodes), we generate
optimized code for the cases of irregular index masks and simple index ranges.
To achieve these goals, a few compromises had to made:
* Slicing of the mask (at specific indices) and random element access is
`O(log #indices)` now, but with a low constant factor. It should be possible
to split a mask into n approximately equally sized parts in `O(n)` though,
making the time per split `O(1)`.
* Using range-based for loops does not work well when iterating over a nested
data structure like the new `IndexMask`. Therefor, `foreach_*` functions with
callbacks have to be used. To avoid extra code complexity at the call site,
the `foreach_*` methods support multi-threading out of the box.
The new data structure splits an `IndexMask` into an arbitrary number of ordered
`IndexMaskSegment`. Each segment can contain at most `2^14 = 16384` indices. The
indices within a segment are stored as `int16_t`. Each segment has an additional
`int64_t` offset which allows storing arbitrary `int64_t` indices. This approach
has the main benefits that segments can be processed/constructed individually on
multiple threads without a serial bottleneck. Also it reduces the memory
requirements significantly.
For more details see comments in `BLI_index_mask.hh`.
I did a few tests to verify that the data structure generally improves
performance and does not cause regressions:
* Our field evaluation benchmarks take about as much as before. This is to be
expected because we already made sure that e.g. add node evaluation is
vectorized. The important thing here is to check that changes to the way we
iterate over the indices still allows for auto-vectorization.
* Memory usage by a mask is about 1/4 of what it was before in the average case.
That's mainly caused by the switch from `int64_t` to `int16_t` for indices.
In the worst case, the memory requirements can be larger when there are many
indices that are very far away. However, when they are far away from each other,
that indicates that there aren't many indices in total. In common cases, memory
usage can be way lower than 1/4 of before, because sub-ranges use static memory.
* For some more specific numbers I benchmarked `IndexMask::from_bools` in
`index_mask_from_selection` on 10.000.000 elements at various probabilities for
`true` at every index:
```
Probability Old New
0 4.6 ms 0.8 ms
0.001 5.1 ms 1.3 ms
0.2 8.4 ms 1.8 ms
0.5 15.3 ms 3.0 ms
0.8 20.1 ms 3.0 ms
0.999 25.1 ms 1.7 ms
1 13.5 ms 1.1 ms
```
Pull Request: https://projects.blender.org/blender/blender/pulls/104629
2023-05-24 18:11:41 +02:00
|
|
|
ParamsBuilder(const Signature &signature, const IndexMask &mask)
|
2021-09-14 14:52:44 +02:00
|
|
|
: signature_(&signature), mask_(mask), min_array_size_(mask.min_array_size())
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2023-01-07 16:51:26 +01:00
|
|
|
actual_params_.reserve(signature.params.size());
|
2020-06-16 16:35:57 +02:00
|
|
|
}

 public:
  /**
   * The indices referenced by the #mask have to live longer than the params builder. This is
   * because it might have to destruct elements for all masked indices in the end.
   */
  ParamsBuilder(const class MultiFunction &fn, const IndexMask *mask);
|
2020-06-16 16:35:57 +02:00
|
|
|
|
2021-08-20 11:43:54 +02:00
|
|
|
template<typename T> void add_readonly_single_input_value(T value, StringRef expected_name = "")
|
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_current_param_type(ParamType::ForSingleInput(CPPType::get<T>()), expected_name);
|
2023-01-06 11:50:56 +01:00
|
|
|
actual_params_.append_unchecked_as(std::in_place_type<GVArray>,
|
|
|
|
varray_tag::single{},
|
|
|
|
CPPType::get<T>(),
|
|
|
|
min_array_size_,
|
|
|
|
&value);
|
2021-08-20 11:43:54 +02:00
|
|
|
}
  template<typename T> void add_readonly_single_input(const T *value, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForSingleInput(CPPType::get<T>()), expected_name);
    actual_params_.append_unchecked_as(std::in_place_type<GVArray>,
                                       varray_tag::single_ref{},
                                       CPPType::get<T>(),
                                       min_array_size_,
                                       value);
  }
  void add_readonly_single_input(const GSpan span, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForSingleInput(span.type()), expected_name);
    BLI_assert(span.size() >= min_array_size_);
    actual_params_.append_unchecked_as(std::in_place_type<GVArray>, varray_tag::span{}, span);
  }
  void add_readonly_single_input(GPointer value, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForSingleInput(*value.type()), expected_name);
    actual_params_.append_unchecked_as(std::in_place_type<GVArray>,
                                       varray_tag::single_ref{},
                                       *value.type(),
                                       min_array_size_,
                                       value.get());
  }
  void add_readonly_single_input(GVArray varray, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForSingleInput(varray.type()), expected_name);
    BLI_assert(varray.size() >= min_array_size_);
    actual_params_.append_unchecked_as(std::in_place_type<GVArray>, std::move(varray));
  }

  void add_readonly_vector_input(const GVectorArray &vector_array, StringRef expected_name = "")
  {
    this->add_readonly_vector_input(
        this->resource_scope().construct<GVVectorArray_For_GVectorArray>(vector_array),
        expected_name);
  }
  void add_readonly_vector_input(const GSpan single_vector, StringRef expected_name = "")
  {
    this->add_readonly_vector_input(
        this->resource_scope().construct<GVVectorArray_For_SingleGSpan>(single_vector,
                                                                        min_array_size_),
        expected_name);
  }
  void add_readonly_vector_input(const GVVectorArray &ref, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForVectorInput(ref.type()), expected_name);
    BLI_assert(ref.size() >= min_array_size_);
    actual_params_.append_unchecked_as(std::in_place_type<const GVVectorArray *>, &ref);
  }

  template<typename T> void add_uninitialized_single_output(T *value, StringRef expected_name = "")
  {
    this->add_uninitialized_single_output(GMutableSpan(CPPType::get<T>(), value, 1),
                                          expected_name);
  }
  void add_uninitialized_single_output(GMutableSpan ref, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForSingleOutput(ref.type()), expected_name);
    BLI_assert(ref.size() >= min_array_size_);
    actual_params_.append_unchecked_as(std::in_place_type<GMutableSpan>, ref);
  }
|
2021-09-14 14:52:44 +02:00
|
|
|
void add_ignored_single_output(StringRef expected_name = "")
|
|
|
|
{
|
|
|
|
this->assert_current_param_name(expected_name);
|
|
|
|
const int param_index = this->current_param_index();
|
2023-01-07 17:32:28 +01:00
|
|
|
const ParamType ¶m_type = signature_->params[param_index].type;
|
|
|
|
BLI_assert(param_type.category() == ParamCategory::SingleOutput);
|
2023-05-10 04:06:27 +02:00
|
|
|
const DataType data_type = param_type.data_type();
|
|
|
|
const CPPType &type = data_type.single_type();
|
2023-01-14 15:35:44 +01:00
|
|
|
|
|
|
|
if (bool(signature_->params[param_index].flag & ParamFlag::SupportsUnusedOutput)) {
|
|
|
|
/* An empty span indicates that this is ignored. */
|
|
|
|
const GMutableSpan dummy_span{type};
|
|
|
|
actual_params_.append_unchecked_as(std::in_place_type<GMutableSpan>, dummy_span);
|
|
|
|
}
|
|
|
|
else {
|
|
|
|
this->add_unused_output_for_unsupporting_function(type);
|
|
|
|
}
|
2021-09-14 14:52:44 +02:00
|
|
|
}

  void add_vector_output(GVectorArray &vector_array, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForVectorOutput(vector_array.type()),
                                    expected_name);
    BLI_assert(vector_array.size() >= min_array_size_);
    actual_params_.append_unchecked_as(std::in_place_type<GVectorArray *>, &vector_array);
  }

  void add_single_mutable(GMutableSpan ref, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForMutableSingle(ref.type()), expected_name);
    BLI_assert(ref.size() >= min_array_size_);
    actual_params_.append_unchecked_as(std::in_place_type<GMutableSpan>, ref);
  }

  void add_vector_mutable(GVectorArray &vector_array, StringRef expected_name = "")
  {
    this->assert_current_param_type(ParamType::ForMutableVector(vector_array.type()),
                                    expected_name);
    BLI_assert(vector_array.size() >= min_array_size_);
    actual_params_.append_unchecked_as(std::in_place_type<GVectorArray *>, &vector_array);
  }

  int next_param_index() const
  {
    return actual_params_.size();
  }

  GMutableSpan computed_array(int param_index)
  {
    BLI_assert(ELEM(signature_->params[param_index].type.category(),
                    ParamCategory::SingleOutput,
                    ParamCategory::SingleMutable));
    return std::get<GMutableSpan>(actual_params_[param_index]);
  }

  GVectorArray &computed_vector_array(int param_index)
  {
    BLI_assert(ELEM(signature_->params[param_index].type.category(),
                    ParamCategory::VectorOutput,
                    ParamCategory::VectorMutable));
    return *std::get<GVectorArray *>(actual_params_[param_index]);
  }

 private:
  void assert_current_param_type(ParamType param_type, StringRef expected_name = "")
  {
    UNUSED_VARS_NDEBUG(param_type, expected_name);
#ifndef NDEBUG
    int param_index = this->current_param_index();

    if (expected_name != "") {
      StringRef actual_name = signature_->params[param_index].name;
      BLI_assert(actual_name == expected_name);
    }

    ParamType expected_type = signature_->params[param_index].type;
    BLI_assert(expected_type == param_type);
#endif
  }

  void assert_current_param_name(StringRef expected_name)
  {
    UNUSED_VARS_NDEBUG(expected_name);
#ifndef NDEBUG
    if (expected_name.is_empty()) {
      return;
    }
    const int param_index = this->current_param_index();
    StringRef actual_name = signature_->params[param_index].name;
    BLI_assert(actual_name == expected_name);
#endif
  }

  int current_param_index() const
  {
    return actual_params_.size();
  }

  ResourceScope &resource_scope()
  {
    if (!scope_) {
      scope_ = std::make_unique<ResourceScope>();
    }
    return *scope_;
  }

  void add_unused_output_for_unsupporting_function(const CPPType &type);
};

class Params {
 private:
  ParamsBuilder *builder_;

 public:
  Params(ParamsBuilder &builder) : builder_(&builder) {}

  template<typename T> VArray<T> readonly_single_input(int param_index, StringRef name = "")
  {
    const GVArray &varray = this->readonly_single_input(param_index, name);
    return varray.typed<T>();
  }
|
2021-03-21 19:31:24 +01:00
|
|
|
const GVArray &readonly_single_input(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_correct_param(param_index, name, ParamCategory::SingleInput);
|
2023-01-14 14:16:51 +01:00
|
|
|
return std::get<GVArray>(builder_->actual_params_[param_index]);
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
|
|
|
|
2021-09-14 14:52:44 +02:00
|
|
|
/**
|
|
|
|
* \return True when the caller provided a buffer for this output parameter. This allows the
|
|
|
|
* called multi-function to skip some computation. It is still valid to call
|
|
|
|
* #uninitialized_single_output when this returns false. In this case a new temporary buffer is
|
|
|
|
* allocated.
|
|
|
|
*/
|
|
|
|
bool single_output_is_required(int param_index, StringRef name = "")
|
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_correct_param(param_index, name, ParamCategory::SingleOutput);
|
2023-01-14 14:16:51 +01:00
|
|
|
return !std::get<GMutableSpan>(builder_->actual_params_[param_index]).is_empty();
|
2021-09-14 14:52:44 +02:00
|
|
|
}
|
|
|
|
|
2020-06-16 16:35:57 +02:00
|
|
|
template<typename T>
|
2020-07-20 12:16:20 +02:00
|
|
|
MutableSpan<T> uninitialized_single_output(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
|
|
|
return this->uninitialized_single_output(param_index, name).typed<T>();
|
|
|
|
}
|
2020-07-20 12:16:20 +02:00
|
|
|
GMutableSpan uninitialized_single_output(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_correct_param(param_index, name, ParamCategory::SingleOutput);
|
2023-01-14 15:35:44 +01:00
|
|
|
BLI_assert(
|
|
|
|
!bool(builder_->signature_->params[param_index].flag & ParamFlag::SupportsUnusedOutput));
|
2023-01-14 14:16:51 +01:00
|
|
|
GMutableSpan span = std::get<GMutableSpan>(builder_->actual_params_[param_index]);
|
2023-01-14 15:35:44 +01:00
|
|
|
BLI_assert(span.size() >= builder_->min_array_size_);
|
|
|
|
return span;
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
|
|
|
|
2021-09-20 13:12:25 +02:00
|
|
|
/**
|
|
|
|
* Same as #uninitialized_single_output, but returns an empty span when the output is not
|
|
|
|
* required.
|
|
|
|
*/
|
|
|
|
template<typename T>
|
|
|
|
MutableSpan<T> uninitialized_single_output_if_required(int param_index, StringRef name = "")
|
|
|
|
{
|
|
|
|
return this->uninitialized_single_output_if_required(param_index, name).typed<T>();
|
|
|
|
}
|
|
|
|
GMutableSpan uninitialized_single_output_if_required(int param_index, StringRef name = "")
|
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_correct_param(param_index, name, ParamCategory::SingleOutput);
|
2023-01-14 15:35:44 +01:00
|
|
|
BLI_assert(
|
|
|
|
bool(builder_->signature_->params[param_index].flag & ParamFlag::SupportsUnusedOutput));
|
2023-01-14 14:16:51 +01:00
|
|
|
return std::get<GMutableSpan>(builder_->actual_params_[param_index]);
|
2021-09-20 13:12:25 +02:00
|
|
|
}
|
|
|
|
|
2021-03-21 19:31:24 +01:00
|
|
|
template<typename T>
|
|
|
|
const VVectorArray<T> &readonly_vector_input(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2021-03-21 19:31:24 +01:00
|
|
|
const GVVectorArray &vector_array = this->readonly_vector_input(param_index, name);
|
2023-01-14 15:56:43 +01:00
|
|
|
return builder_->resource_scope().construct<VVectorArray_For_GVVectorArray<T>>(vector_array);
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
2021-03-21 19:31:24 +01:00
|
|
|
const GVVectorArray &readonly_vector_input(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_correct_param(param_index, name, ParamCategory::VectorInput);
|
2023-01-14 14:16:51 +01:00
|
|
|
return *std::get<const GVVectorArray *>(builder_->actual_params_[param_index]);
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
|
|
|
|
2021-03-21 19:31:24 +01:00
|
|
|
template<typename T>
|
|
|
|
GVectorArray_TypedMutableRef<T> vector_output(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2021-03-21 19:31:24 +01:00
|
|
|
return {this->vector_output(param_index, name)};
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
2020-07-20 12:16:20 +02:00
|
|
|
GVectorArray &vector_output(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_correct_param(param_index, name, ParamCategory::VectorOutput);
|
2023-01-14 14:16:51 +01:00
|
|
|
return *std::get<GVectorArray *>(builder_->actual_params_[param_index]);
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
|
|
|
|
2020-07-20 12:16:20 +02:00
|
|
|
template<typename T> MutableSpan<T> single_mutable(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
|
|
|
return this->single_mutable(param_index, name).typed<T>();
|
|
|
|
}
|
2020-07-20 12:16:20 +02:00
|
|
|
GMutableSpan single_mutable(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_correct_param(param_index, name, ParamCategory::SingleMutable);
|
2023-01-14 14:16:51 +01:00
|
|
|
return std::get<GMutableSpan>(builder_->actual_params_[param_index]);
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
|
|
|
|
2021-03-21 19:31:24 +01:00
|
|
|
template<typename T>
|
|
|
|
GVectorArray_TypedMutableRef<T> vector_mutable(int param_index, StringRef name = "")
|
2020-06-22 15:48:08 +02:00
|
|
|
{
|
2021-03-21 19:31:24 +01:00
|
|
|
return {this->vector_mutable(param_index, name)};
|
2020-06-22 15:48:08 +02:00
|
|
|
}
|
2020-07-20 12:16:20 +02:00
|
|
|
GVectorArray &vector_mutable(int param_index, StringRef name = "")
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
2023-01-07 17:32:28 +01:00
|
|
|
this->assert_correct_param(param_index, name, ParamCategory::VectorMutable);
|
2023-01-14 14:16:51 +01:00
|
|
|
return *std::get<GVectorArray *>(builder_->actual_params_[param_index]);
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
private:
|
2023-01-07 17:32:28 +01:00
|
|
|
void assert_correct_param(int param_index, StringRef name, ParamType param_type)
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
|
|
|
UNUSED_VARS_NDEBUG(param_index, name, param_type);
|
2023-12-04 15:13:06 +01:00
|
|
|
#ifndef NDEBUG
|
2023-01-07 16:51:26 +01:00
|
|
|
BLI_assert(builder_->signature_->params[param_index].type == param_type);
|
2020-06-16 16:35:57 +02:00
|
|
|
if (name.size() > 0) {
|
2023-01-07 16:51:26 +01:00
|
|
|
BLI_assert(builder_->signature_->params[param_index].name == name);
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2023-01-07 17:32:28 +01:00
|
|
|
void assert_correct_param(int param_index, StringRef name, ParamCategory category)
|
2020-06-16 16:35:57 +02:00
|
|
|
{
|
|
|
|
UNUSED_VARS_NDEBUG(param_index, name, category);
|
2023-12-04 15:13:06 +01:00
|
|
|
#ifndef NDEBUG
|
2023-01-07 16:51:26 +01:00
|
|
|
BLI_assert(builder_->signature_->params[param_index].type.category() == category);
|
2020-06-16 16:35:57 +02:00
|
|
|
if (name.size() > 0) {
|
2023-01-07 16:51:26 +01:00
|
|
|
BLI_assert(builder_->signature_->params[param_index].name == name);
|
2020-06-16 16:35:57 +02:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
2023-01-07 17:32:28 +01:00
|
|
|
} // namespace blender::fn::multi_function
|