tornavis/source/blender/blenlib/BLI_generic_virtual_array.hh


/* SPDX-FileCopyrightText: 2023 Blender Authors
 *
 * SPDX-License-Identifier: GPL-2.0-or-later */
#pragma once
/** \file
 * \ingroup bli
 *
 * A generic virtual array is the same as a virtual array, except for the fact that the data type
 * is only known at runtime.
 */
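
/* Illustrative sketch only (not part of this header's API): when the element type is only known
 * at runtime, values are passed around as untyped pointers and manipulated through #CPPType. The
 * function below is hypothetical; it assumes the #GVArray type declared further down, the
 * #BUFFER_FOR_CPP_TYPE_VALUE helper from BLI_cpp_type.hh and #IndexRange from blenlib.
 *
 *   void process_each_value_generically(const GVArray &varray)
 *   {
 *     const CPPType &type = varray.type();
 *     BUFFER_FOR_CPP_TYPE_VALUE(type, buffer);
 *     for (const int64_t i : IndexRange(varray.size())) {
 *       // Copy-construct the i-th value into the stack buffer without knowing its C++ type.
 *       varray.get_to_uninitialized(i, buffer);
 *       // ... use the value through `type` ...
 *       type.destruct(buffer);
 *     }
 *   }
 */
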
#include "BLI_generic_array.hh"
#include "BLI_generic_span.hh"
#include "BLI_timeit.hh"
#include "BLI_virtual_array.hh"
namespace blender {
/* -------------------------------------------------------------------- */
/** \name #GVArrayImpl and #GVMutableArrayImpl.
 * \{ */
class GVArray;
class GVArrayImpl;
class GVMutableArray;
class GVMutableArrayImpl;
/* A generically typed version of #VArrayImpl. */
class GVArrayImpl {
 protected:
  const CPPType *type_;
  int64_t size_;

 public:
  GVArrayImpl(const CPPType &type, int64_t size);
  virtual ~GVArrayImpl() = default;

  const CPPType &type() const;
  int64_t size() const;

  virtual void get(int64_t index, void *r_value) const;
  virtual void get_to_uninitialized(int64_t index, void *r_value) const = 0;

  virtual CommonVArrayInfo common_info() const;

  virtual void materialize(const IndexMask &mask, void *dst) const;
  virtual void materialize_to_uninitialized(const IndexMask &mask, void *dst) const;

  virtual void materialize_compressed(const IndexMask &mask, void *dst) const;
  virtual void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const;

  virtual bool try_assign_VArray(void *varray) const;
};
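
/* A minimal sketch of a #GVArrayImpl subclass, to show how the virtual methods map to #CPPType
 * operations. The class below is hypothetical (the implementations actually used, e.g. for spans
 * and single values, live elsewhere in blenlib); only #get_to_uninitialized has to be overridden,
 * because the remaining virtual methods have generic fallbacks.
 *
 *   class RawBufferGVArrayImplSketch : public GVArrayImpl {
 *    private:
 *     const void *data_;
 *
 *    public:
 *     RawBufferGVArrayImplSketch(const CPPType &type, const int64_t size, const void *data)
 *         : GVArrayImpl(type, size), data_(data)
 *     {
 *     }
 *
 *     void get_to_uninitialized(const int64_t index, void *r_value) const override
 *     {
 *       // `r_value` points to uninitialized memory, so the element has to be copy-constructed.
 *       const void *elem = static_cast<const char *>(data_) + type_->size() * index;
 *       type_->copy_construct(elem, r_value);
 *     }
 *
 *     void get(const int64_t index, void *r_value) const override
 *     {
 *       // `r_value` already holds a constructed value of the right type, so assignment is enough.
 *       const void *elem = static_cast<const char *>(data_) + type_->size() * index;
 *       type_->copy_assign(elem, r_value);
 *     }
 *   };
 */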
/* A generic version of #VMutableArrayImpl. */
class GVMutableArrayImpl : public GVArrayImpl {
 public:
  GVMutableArrayImpl(const CPPType &type, int64_t size) : GVArrayImpl(type, size) {}

  virtual void set_by_copy(int64_t index, const void *value);
  virtual void set_by_relocate(int64_t index, void *value);
  virtual void set_by_move(int64_t index, void *value) = 0;

  virtual void set_all(const void *src);

  virtual bool try_assign_VMutableArray(void *varray) const;
};
/** \} */
/* -------------------------------------------------------------------- */
/** \name #GVArray and #GVMutableArray
* \{ */
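
/**
 * Extra type info stored in the #Any used by #GVArrayCommon below (see #GVArrayCommon::Storage).
 * The stored function pointer maps the buffer of the #Any back to the #GVArrayImpl contained in
 * it, or to null when the buffer does not hold an implementation directly, so the owning virtual
 * array can recover its implementation from the type-erased storage.
 */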
namespace detail {
struct GVArrayAnyExtraInfo {
const GVArrayImpl *(*get_varray)(const void *buffer) =
[](const void * /*buffer*/) -> const GVArrayImpl * { return nullptr; };
template<typename StorageT> static constexpr GVArrayAnyExtraInfo get();
};
} // namespace detail
class GVMutableArray;
/**
* Utility class to reduce code duplication between #GVArray and #GVMutableArray.
* It pretty much follows #VArrayCommon. Don't use this class outside of this header.
*/
class GVArrayCommon {
protected:
/**
* See #VArrayCommon for more information. The inline buffer is a bit larger here, because
* generic virtual array implementations often require a bit more space than typed ones.
*/
using Storage = Any<detail::GVArrayAnyExtraInfo, 40, 8>;
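
  /**
   * Points to the active implementation. The implementation typically lives in #storage_ (either
   * in its inline buffer or heap-allocated by it); it may also be an external implementation that
   * is only referenced and not owned.
   */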
const GVArrayImpl *impl_ = nullptr;
Storage storage_;
protected:
GVArrayCommon() = default;
GVArrayCommon(const GVArrayCommon &other);
GVArrayCommon(GVArrayCommon &&other) noexcept;
GVArrayCommon(const GVArrayImpl *impl);
GVArrayCommon(std::shared_ptr<const GVArrayImpl> impl);
~GVArrayCommon();
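
  /**
   * Constructs a new implementation of type `ImplT` for this virtual array. A sketch of the
   * intended behavior: the implementation is placed in the inline buffer of #storage_ when it
   * fits there, and heap-allocated through #storage_ otherwise; #impl_ is updated to point at it.
   */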
template<typename ImplT, typename... Args> void emplace(Args &&...args);
void copy_from(const GVArrayCommon &other);
void move_from(GVArrayCommon &&other) noexcept;
const GVArrayImpl *impl_from_storage() const;
public:
const CPPType &type() const;
operator bool() const;
int64_t size() const;
bool is_empty() const;
IndexRange index_range() const;
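
  /**
   * #try_assign_VArray attempts to expose this generic array as a typed #VArray<T> without
   * copying the values; it returns true when the underlying implementation supports that, and
   * `T` is expected to match #type(). #may_have_ownership returns true when this virtual array
   * may own (part of) the memory it references, which callers can use for optimization purposes.
   */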
template<typename T> bool try_assign_VArray(VArray<T> &varray) const;
bool may_have_ownership() const;
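
  /**
   * Copy the referenced values into `dst`, which is assumed to be a buffer of this array's
   * #type() that is large enough for the indices that are written. The `_to_uninitialized`
   * variants construct into uninitialized memory instead of assigning over existing values, and
   * the `_compressed` variants pack the selected values tightly at the beginning of `dst`
   * instead of writing them at their original indices.
   */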
void materialize(void *dst) const;
void materialize(const IndexMask &mask, void *dst) const;
void materialize_to_uninitialized(void *dst) const;
void materialize_to_uninitialized(const IndexMask &mask, void *dst) const;
void materialize_compressed(const IndexMask &mask, void *dst) const;
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const;
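/*
 * Illustrative sketch (not from the original header): gathering the selected elements into a
 * densely packed buffer. Assumes a `GVArray varray` holding floats and an `IndexMask mask`
 * selecting the wanted indices; `Array` comes from BLI_array.hh.
 *
 *   Array<float> dst(mask.size());
 *   varray.materialize_compressed(mask, dst.data());
 *   // dst[i] now contains the element at the i-th index stored in the mask.
 */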
CommonVArrayInfo common_info() const;
/**
* Returns true when the virtual array is stored as a span internally.
*/
bool is_span() const;
/**
* Returns the internally used span of the virtual array. This invokes undefined behavior if the
* virtual array is not stored as a span internally.
*/
GSpan get_internal_span() const;
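/*
 * Illustrative sketch (not from the original header): fast-path access when the data is a span.
 * Assumes a `GVArray varray` holding floats; `GSpan::typed<T>()` is assumed to be available from
 * BLI_generic_span.hh.
 *
 *   if (varray.is_span()) {
 *     const GSpan span = varray.get_internal_span();
 *     const Span<float> values = span.typed<float>();
 *     // Elements can now be read directly, without any virtual function calls.
 *   }
 */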
/**
* Returns true when the virtual array returns the same value for every index.
*/
bool is_single() const;
/**
* Copies the value that is used for every element into `r_value`, which is expected to point to
* initialized memory. This invokes undefined behavior if the virtual array would not return the
* same value for every index.
*/
void get_internal_single(void *r_value) const;
/**
* Same as `get_internal_single`, but `r_value` is expected to point to uninitialized memory.
*/
void get_internal_single_to_uninitialized(void *r_value) const;
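/*
 * Illustrative sketch (not from the original header): reading the shared value of a "single"
 * virtual array. Assumes a `GVArray varray` holding floats.
 *
 *   if (varray.is_single()) {
 *     float value;
 *     varray.get_internal_single_to_uninitialized(&value);
 *     // `value` is the element returned for every index.
 *   }
 */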
void get(int64_t index, void *r_value) const;
/**
* Returns a copy of the value at the given index. Usually a typed virtual array should
* be used instead, but sometimes this is simpler when only a few indices are needed.
*/
template<typename T> T get(int64_t index) const;
void get_to_uninitialized(int64_t index, void *r_value) const;
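/*
 * Illustrative sketch (not from the original header): copying out individual elements when only
 * a few indices are needed. Assumes a `GVArray varray` holding floats.
 *
 *   const float first = varray.get<float>(0);
 *   float second;
 *   varray.get_to_uninitialized(1, &second);
 */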
};
/** Generic version of #VArray. */
class GVArray : public GVArrayCommon {
private:
friend GVMutableArray;
public:
GVArray() = default;
GVArray(const GVArray &other);
GVArray(GVArray &&other) noexcept;
GVArray(const GVArrayImpl *impl);
GVArray(std::shared_ptr<const GVArrayImpl> impl);
GVArray(varray_tag::span /*tag*/, GSpan span);
GVArray(varray_tag::single_ref /*tag*/, const CPPType &type, int64_t size, const void *value);
GVArray(varray_tag::single /*tag*/, const CPPType &type, int64_t size, const void *value);
template<typename T> GVArray(const VArray<T> &varray);
template<typename T> VArray<T> typed() const;
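/*
 * Illustrative sketch (not from the original header): switching between the generic and the
 * typed representation. Assumes the element type is known to be float at this point.
 *
 *   const VArray<float> typed_varray = gvarray.typed<float>();
 *   const float x = typed_varray[0];
 *   const GVArray generic_again(typed_varray);
 */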
template<typename ImplT, typename... Args> static GVArray For(Args &&...args);
static GVArray ForSingle(const CPPType &type, int64_t size, const void *value);
static GVArray ForSingleRef(const CPPType &type, int64_t size, const void *value);
static GVArray ForSingleDefault(const CPPType &type, int64_t size);
static GVArray ForSpan(GSpan span);
static GVArray ForGArray(GArray<> array);
static GVArray ForEmpty(const CPPType &type);
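/*
 * Illustrative sketches (not from the original header) for the constructors above. Assumes
 * `CPPType::get<T>()` from BLI_cpp_type.hh and `Array` from BLI_array.hh.
 *
 *   const float value = 1.0f;
 *   GVArray single = GVArray::ForSingle(CPPType::get<float>(), 100, &value);
 *
 *   Array<float> data(100, 0.0f);
 *   GVArray from_span = GVArray::ForSpan(data.as_span());
 */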
GVArray slice(IndexRange slice) const;
GVArray &operator=(const GVArray &other);
GVArray &operator=(GVArray &&other) noexcept;
const GVArrayImpl *get_implementation() const
{
return impl_;
}
};
/** Generic version of #VMutableArray. */
class GVMutableArray : public GVArrayCommon {
public:
GVMutableArray() = default;
GVMutableArray(const GVMutableArray &other);
GVMutableArray(GVMutableArray &&other) noexcept;
GVMutableArray(GVMutableArrayImpl *impl);
GVMutableArray(std::shared_ptr<GVMutableArrayImpl> impl);
template<typename T> GVMutableArray(const VMutableArray<T> &varray);
template<typename T> VMutableArray<T> typed() const;
template<typename ImplT, typename... Args> static GVMutableArray For(Args &&...args);
static GVMutableArray ForSpan(GMutableSpan span);
operator GVArray() const &;
operator GVArray() && noexcept;
GVMutableArray &operator=(const GVMutableArray &other);
GVMutableArray &operator=(GVMutableArray &&other) noexcept;
GMutableSpan get_internal_span() const;
template<typename T> bool try_assign_VMutableArray(VMutableArray<T> &varray) const;
void set_by_copy(int64_t index, const void *value);
void set_by_move(int64_t index, void *value);
void set_by_relocate(int64_t index, void *value);
void fill(const void *value);
/**
* Copy the values from the source buffer to all elements in the virtual array.
*/
void set_all(const void *src);
GVMutableArrayImpl *get_implementation() const;
private:
GVMutableArrayImpl *get_impl() const;
};
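
/* Usage sketch (illustrative only, not part of the API above). The buffer name `values` is
 * made up, and the exact #GMutableSpan constructor signature used here is an assumption taken
 * from the generic-span header:
 *
 *   int values[3] = {0, 0, 0};
 *   GVMutableArray varray = GVMutableArray::ForSpan(
 *       GMutableSpan(CPPType::get<int>(), values, 3));
 *   const int seven = 7;
 *   varray.fill(&seven);            // Every element becomes 7.
 *   varray.set_by_copy(0, &seven);  // Write a single element from an existing value.
 *   GVArray read_only = varray;     // Uses the conversion operator declared above.
 */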
/** \} */
/* -------------------------------------------------------------------- */
/** \name #GVArraySpan and #GMutableVArraySpan.
* \{ */
/* A generic version of VArraySpan. */
class GVArraySpan : public GSpan {
private:
GVArray varray_;
void *owned_data_ = nullptr;
public:
GVArraySpan();
GVArraySpan(GVArray varray);
GVArraySpan(GVArraySpan &&other);
~GVArraySpan();
GVArraySpan &operator=(GVArraySpan &&other);
};
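
/* Usage sketch (illustrative only): read every element of a virtual array through a contiguous
 * span. `varray` stands for any existing #GVArray. When the virtual array is not already backed
 * by a plain array, the values are expected to be copied into the internal `owned_data_` buffer.
 *
 *   GVArraySpan span{varray};
 *   for (int64_t i = 0; i < span.size(); i++) {
 *     const void *value = span[i];
 *     // ... interpret `value` using span.type() ...
 *   }
 */
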
/* A generic version of MutableVArraySpan. */
class GMutableVArraySpan : public GMutableSpan, NonCopyable, NonMovable {
private:
GVMutableArray varray_;
void *owned_data_ = nullptr;
bool save_has_been_called_ = false;
bool show_not_saved_warning_ = true;
public:
GMutableVArraySpan();
GMutableVArraySpan(GVMutableArray varray, bool copy_values_to_span = true);
GMutableVArraySpan(GMutableVArraySpan &&other);
~GMutableVArraySpan();
GMutableVArraySpan &operator=(GMutableVArraySpan &&other);
const GVMutableArray &varray() const;
void save();
void disable_not_applied_warning();
};
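
/* Usage sketch (illustrative only): write every element of a mutable virtual array through a
 * span and apply the changes. `varray` stands for any existing #GVMutableArray and `src_value`
 * for a pointer to a value of the matching type.
 *
 *   GMutableVArraySpan span{varray};
 *   const CPPType &type = span.type();
 *   for (int64_t i = 0; i < span.size(); i++) {
 *     type.copy_assign(src_value, span[i]);
 *   }
 *   span.save();  // Writes the (possibly copied) buffer back into the virtual array.
 *
 * Skipping save() presumably triggers the warning controlled by disable_not_applied_warning(). */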
/** \} */
/* -------------------------------------------------------------------- */
/** \name Conversions between generic and typed virtual arrays.
* \{ */
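
/* Usage sketch (illustrative only): wrapping a typed virtual array as a generic one. The
 * converting constructor and the typed() accessor shown here are assumptions about the public
 * #GVArray interface declared earlier in this file; only the implementation-side wrappers are
 * declared below.
 *
 *   VArray<float> typed_varray = ...;
 *   GVArray generic_varray(typed_varray);                 // Assumed converting constructor.
 *   VArray<float> again = generic_varray.typed<float>();  // Assumed typed() accessor.
 */
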
/* Used to convert a typed virtual array into a generic one. */
template<typename T> class GVArrayImpl_For_VArray : public GVArrayImpl {
protected:
VArray<T> varray_;
public:
GVArrayImpl_For_VArray(VArray<T> varray)
: GVArrayImpl(CPPType::get<T>(), varray.size()), varray_(std::move(varray))
{
}
protected:
void get(const int64_t index, void *r_value) const override
{
*static_cast<T *>(r_value) = varray_[index];
}
void get_to_uninitialized(const int64_t index, void *r_value) const override
{
new (r_value) T(varray_[index]);
}
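/* Copy the values at the masked indices into `dst`, which must point to an initialized buffer of
 * at least `mask.min_array_size()` elements. */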
void materialize(const IndexMask &mask, void *dst) const override
{
varray_.materialize(mask, MutableSpan(static_cast<T *>(dst), mask.min_array_size()));
}
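/* Same as #materialize, but constructs the copied values in uninitialized memory. */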
void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override
{
varray_.materialize_to_uninitialized(
mask, MutableSpan(static_cast<T *>(dst), mask.min_array_size()));
}
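/* Copy the masked values into consecutive slots of `dst` (buffer size `mask.size()`) instead of
 * keeping their original indices. */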
void materialize_compressed(const IndexMask &mask, void *dst) const override
{
varray_.materialize_compressed(mask, MutableSpan(static_cast<T *>(dst), mask.size()));
}
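/* Compressed variant that constructs the values in uninitialized memory. */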
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const override
{
varray_.materialize_compressed_to_uninitialized(
mask, MutableSpan(static_cast<T *>(dst), mask.size()));
}
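/* The wrapped array is already a typed #VArray, so it can be handed out directly without an
 * extra wrapper layer. */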
bool try_assign_VArray(void *varray) const override
{
*(VArray<T> *)varray = varray_;
return true;
}
CommonVArrayInfo common_info() const override
{
return varray_.common_info();
}
};
/* Used to convert any generic virtual array into a typed one. */
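/* This lets code that expects a typed virtual array consume a #GVArray whose element type matches
 * `T`, e.g. data read through a generic interface. Element access goes through the generic
 * interface, so it is typically slower than accessing a natively typed array. */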
template<typename T> class VArrayImpl_For_GVArray : public VArrayImpl<T> {
protected:
GVArray varray_;
public:
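/* The wrapped generic array must reference an implementation and its element type must match
 * `T`; both are asserted below. */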
VArrayImpl_For_GVArray(GVArray varray) : VArrayImpl<T>(varray.size()), varray_(std::move(varray))
{
BLI_assert(varray_);
BLI_assert(varray_.type().template is<T>());
}
protected:
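/* Read the element through the generic interface into a local value of the typed element type. */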
T get(const int64_t index) const override
{
T value;
varray_.get(index, &value);
return value;
}
CommonVArrayInfo common_info() const override
{
return varray_.common_info();
}
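/* The wrapped generic array can be handed out as-is, avoiding a double wrapper when converting
 * back to a #GVArray. */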
bool try_assign_GVArray(GVArray &varray) const override
{
varray = varray_;
return true;
}
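/* Materialization is forwarded to the wrapped generic array, which writes values of type `T`
 * into `dst` directly. */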
void materialize(const IndexMask &mask, T *dst) const override
{
varray_.materialize(mask, dst);
}
void materialize_to_uninitialized(const IndexMask &mask, T *dst) const override
{
varray_.materialize_to_uninitialized(mask, dst);
}
void materialize_compressed(const IndexMask &mask, T *dst) const override
{
varray_.materialize_compressed(mask, dst);
}
void materialize_compressed_to_uninitialized(const IndexMask &mask, T *dst) const override
{
varray_.materialize_compressed_to_uninitialized(mask, dst);
}
};
/* Used to convert any typed virtual mutable array into a generic one. */
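/* A rough usage sketch (illustrative assumption, not a guaranteed API shape): wrapping a typed
 * virtual array allows code that only knows the element type at runtime to read and write it
 * through `void *` pointers:
 *
 *   VMutableArray<int> typed_varray = ...;
 *   GVMutableArray generic_varray(std::move(typed_varray));
 *   BLI_assert(generic_varray.type().is<int>());
 */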
template<typename T> class GVMutableArrayImpl_For_VMutableArray : public GVMutableArrayImpl {
protected:
VMutableArray<T> varray_;
public:
GVMutableArrayImpl_For_VMutableArray(VMutableArray<T> varray)
: GVMutableArrayImpl(CPPType::get<T>(), varray.size()), varray_(std::move(varray))
{
}
protected:
void get(const int64_t index, void *r_value) const override
{
*static_cast<T *>(r_value) = varray_[index];
}
void get_to_uninitialized(const int64_t index, void *r_value) const override
{
new (r_value) T(varray_[index]);
}
CommonVArrayInfo common_info() const override
{
return varray_.common_info();
}
void set_by_copy(const int64_t index, const void *value) override
{
    const T &value_ = *static_cast<const T *>(value);
varray_.set(index, value_);
}
void set_by_relocate(const int64_t index, void *value) override
{
T &value_ = *static_cast<T *>(value);
varray_.set(index, std::move(value_));
value_.~T();
}
void set_by_move(const int64_t index, void *value) override
{
T &value_ = *static_cast<T *>(value);
varray_.set(index, std::move(value_));
}
void set_all(const void *src) override
{
varray_.set_all(Span(static_cast<const T *>(src), size_));
}
void materialize(const IndexMask &mask, void *dst) const override
{
varray_.materialize(mask, MutableSpan(static_cast<T *>(dst), mask.min_array_size()));
}
void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override
{
varray_.materialize_to_uninitialized(
mask, MutableSpan(static_cast<T *>(dst), mask.min_array_size()));
}
void materialize_compressed(const IndexMask &mask, void *dst) const override
{
varray_.materialize_compressed(mask, MutableSpan(static_cast<T *>(dst), mask.size()));
}
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const override
{
varray_.materialize_compressed_to_uninitialized(
mask, MutableSpan(static_cast<T *>(dst), mask.size()));
}
bool try_assign_VArray(void *varray) const override
{
    *static_cast<VArray<T> *>(varray) = varray_;
return true;
}
bool try_assign_VMutableArray(void *varray) const override
{
*(VMutableArray<T> *)varray = varray_;
return true;
}
};
/* Used to convert a generic mutable virtual array into a typed one. */
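/* A hedged usage sketch (names such as the `typed<T>()` accessor are assumed from elsewhere in
 * this ecosystem and are not declared in this snippet): this implementation is normally created
 * indirectly through the typed accessor on the generic array rather than instantiated directly:
 *
 *   GVMutableArray g_varray = ...;                         // runtime type is float
 *   VMutableArray<float> varray = g_varray.typed<float>(); // typed view over the same data
 *   varray.set(0, 1.0f);
 */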
template<typename T> class VMutableArrayImpl_For_GVMutableArray : public VMutableArrayImpl<T> {
protected:
GVMutableArray varray_;
public:
VMutableArrayImpl_For_GVMutableArray(GVMutableArray varray)
: VMutableArrayImpl<T>(varray.size()), varray_(varray)
{
BLI_assert(varray_);
BLI_assert(varray_.type().template is<T>());
}
private:
T get(const int64_t index) const override
{
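    /* #GVArray::get copies into initialized storage (in contrast to #get_to_uninitialized),
     * hence the default-constructed local value. */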
T value;
varray_.get(index, &value);
return value;
}
void set(const int64_t index, T value) override
{
varray_.set_by_relocate(index, &value);
}
CommonVArrayInfo common_info() const override
{
return varray_.common_info();
}
bool try_assign_GVArray(GVArray &varray) const override
{
varray = varray_;
return true;
}
bool try_assign_GVMutableArray(GVMutableArray &varray) const override
{
varray = varray_;
return true;
}
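  /* Materialization is forwarded to the wrapped generic virtual array, so bulk copies can use
   * whatever optimized path the underlying implementation provides instead of per-index #get. */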
void materialize(const IndexMask &mask, T *dst) const override
{
varray_.materialize(mask, dst);
}
void materialize_to_uninitialized(const IndexMask &mask, T *dst) const override
{
varray_.materialize_to_uninitialized(mask, dst);
}
void materialize_compressed(const IndexMask &mask, T *dst) const override
{
varray_.materialize_compressed(mask, dst);
}
void materialize_compressed_to_uninitialized(const IndexMask &mask, T *dst) const override
{
varray_.materialize_compressed_to_uninitialized(mask, dst);
}
};
/** \} */
/* -------------------------------------------------------------------- */
/** \name #GVArrayImpl_For_GSpan.
* \{ */
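/* Generic virtual array implementation backed by a contiguous block of memory described by a
 * #GMutableSpan: a raw pointer plus the element size taken from the span's #CPPType.
 *
 * A hedged construction sketch (helper names such as #GVMutableArray::ForSpan are assumed from
 * elsewhere in this header and are not declared in this snippet):
 *
 *   Array<float> values(16, 0.0f);
 *   GMutableSpan span{CPPType::get<float>(), values.data(), values.size()};
 *   GVMutableArray varray = GVMutableArray::ForSpan(span);
 */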
class GVArrayImpl_For_GSpan : public GVMutableArrayImpl {
protected:
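  /* Start of the underlying memory and the size of a single element in bytes, taken from the
   * span's #CPPType; the element-wise accessors declared below are defined in the translation
   * unit. */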
void *data_ = nullptr;
const int64_t element_size_;
public:
GVArrayImpl_For_GSpan(const GMutableSpan span)
: GVMutableArrayImpl(span.type(), span.size()),
data_(span.data()),
element_size_(span.type().size())
{
}
protected:
GVArrayImpl_For_GSpan(const CPPType &type, int64_t size)
: GVMutableArrayImpl(type, size), element_size_(type.size())
{
}
public:
void get(int64_t index, void *r_value) const override;
void get_to_uninitialized(int64_t index, void *r_value) const override;
void set_by_copy(int64_t index, const void *value) override;
void set_by_move(int64_t index, void *value) override;
void set_by_relocate(int64_t index, void *value) override;
CommonVArrayInfo common_info() const override;
virtual void materialize(const IndexMask &mask, void *dst) const override;
virtual void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override;
virtual void materialize_compressed(const IndexMask &mask, void *dst) const override;
virtual void materialize_compressed_to_uninitialized(const IndexMask &mask,
void *dst) const override;
};
class GVArrayImpl_For_GSpan_final final : public GVArrayImpl_For_GSpan {
public:
using GVArrayImpl_For_GSpan::GVArrayImpl_For_GSpan;
private:
CommonVArrayInfo common_info() const override;
};
template<> inline constexpr bool is_trivial_extended_v<GVArrayImpl_For_GSpan_final> = true;
/** \} */
/* -------------------------------------------------------------------- */
/** \name #GVArrayImpl_For_SingleValueRef.
* \{ */
class GVArrayImpl_For_SingleValueRef : public GVArrayImpl {
protected:
const void *value_ = nullptr;
public:
GVArrayImpl_For_SingleValueRef(const CPPType &type, const int64_t size, const void *value)
: GVArrayImpl(type, size), value_(value)
{
}
protected:
GVArrayImpl_For_SingleValueRef(const CPPType &type, const int64_t size) : GVArrayImpl(type, size)
{
}
void get(const int64_t index, void *r_value) const override;
void get_to_uninitialized(const int64_t index, void *r_value) const override;
CommonVArrayInfo common_info() const override;
void materialize(const IndexMask &mask, void *dst) const override;
void materialize_to_uninitialized(const IndexMask &mask, void *dst) const override;
void materialize_compressed(const IndexMask &mask, void *dst) const override;
void materialize_compressed_to_uninitialized(const IndexMask &mask, void *dst) const override;
};
class GVArrayImpl_For_SingleValueRef_final final : public GVArrayImpl_For_SingleValueRef {
public:
using GVArrayImpl_For_SingleValueRef::GVArrayImpl_For_SingleValueRef;
private:
CommonVArrayInfo common_info() const override;
};
template<>
inline constexpr bool is_trivial_extended_v<GVArrayImpl_For_SingleValueRef_final> = true;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Inline methods for #GVArrayImpl.
* \{ */
inline GVArrayImpl::GVArrayImpl(const CPPType &type, const int64_t size)
: type_(&type), size_(size)
{
BLI_assert(size_ >= 0);
}
inline const CPPType &GVArrayImpl::type() const
{
return *type_;
}
inline int64_t GVArrayImpl::size() const
{
return size_;
}
/** \} */
/* -------------------------------------------------------------------- */
/** \name Inline methods for #GVMutableArrayImpl.
* \{ */
inline void GVMutableArray::set_by_copy(const int64_t index, const void *value)
{
BLI_assert(index >= 0);
BLI_assert(index < this->size());
this->get_impl()->set_by_copy(index, value);
}
inline void GVMutableArray::set_by_move(const int64_t index, void *value)
{
BLI_assert(index >= 0);
BLI_assert(index < this->size());
this->get_impl()->set_by_move(index, value);
}
inline void GVMutableArray::set_by_relocate(const int64_t index, void *value)
{
BLI_assert(index >= 0);
BLI_assert(index < this->size());
this->get_impl()->set_by_relocate(index, value);
}
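/* Illustrative sketch of the difference between the three setters above; `varray`, `index` and
 * `value` are hypothetical and not part of this header. Which setter to use depends on what
 * should happen to the caller's value:
 *
 *   int value = 42;
 *   varray.set_by_copy(index, &value);      // `value` stays valid and owned by the caller.
 *   varray.set_by_move(index, &value);      // `value` is moved-from but must still be destructed.
 *   varray.set_by_relocate(index, &value);  // `value` is moved and destructed; don't use it again.
 */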
template<typename T>
inline bool GVMutableArray::try_assign_VMutableArray(VMutableArray<T> &varray) const
{
BLI_assert(impl_->type().is<T>());
return this->get_impl()->try_assign_VMutableArray(&varray);
}
inline GVMutableArrayImpl *GVMutableArray::get_impl() const
{
return const_cast<GVMutableArrayImpl *>(static_cast<const GVMutableArrayImpl *>(impl_));
}
/** \} */
/* -------------------------------------------------------------------- */
/** \name Inline methods for #GVArrayCommon.
* \{ */
template<typename ImplT, typename... Args> inline void GVArrayCommon::emplace(Args &&...args)
{
static_assert(std::is_base_of_v<GVArrayImpl, ImplT>);
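  /* Prefer the inline buffer of #storage_ when the implementation fits and is copyable;
   * otherwise fall back to a heap allocation owned by a #std::shared_ptr. */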
if constexpr (std::is_copy_constructible_v<ImplT> && Storage::template is_inline_v<ImplT>) {
impl_ = &storage_.template emplace<ImplT>(std::forward<Args>(args)...);
}
else {
std::shared_ptr<const GVArrayImpl> ptr = std::make_shared<ImplT>(std::forward<Args>(args)...);
impl_ = &*ptr;
storage_ = std::move(ptr);
}
}
/* Copies the value at the given index into the provided storage. The `r_value` pointer is
* expected to point to initialized memory. */
inline void GVArrayCommon::get(const int64_t index, void *r_value) const
{
BLI_assert(index >= 0);
BLI_assert(index < this->size());
impl_->get(index, r_value);
}
template<typename T> inline T GVArrayCommon::get(const int64_t index) const
{
BLI_assert(index >= 0);
BLI_assert(index < this->size());
BLI_assert(this->type().is<T>());
T value{};
impl_->get(index, &value);
return value;
}
/* Same as `get`, but `r_value` is expected to point to uninitialized memory. */
inline void GVArrayCommon::get_to_uninitialized(const int64_t index, void *r_value) const
{
BLI_assert(index >= 0);
BLI_assert(index < this->size());
impl_->get_to_uninitialized(index, r_value);
}
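/* Illustrative sketch of the difference between #get and #get_to_uninitialized; `varray` and
 * `index` are hypothetical and the element type is assumed to be `float`:
 *
 *   float value = 0.0f;
 *   varray.get(index, &value);                 // Assigns into already initialized memory.
 *
 *   alignas(float) std::byte buffer[sizeof(float)];
 *   varray.get_to_uninitialized(index, buffer);  // Constructs into raw memory; the caller
 *   varray.type().destruct(buffer);              // is responsible for destruction.
 */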
template<typename T> inline bool GVArrayCommon::try_assign_VArray(VArray<T> &varray) const
{
BLI_assert(impl_->type().is<T>());
return impl_->try_assign_VArray(&varray);
}
inline const CPPType &GVArrayCommon::type() const
{
return impl_->type();
}
inline GVArrayCommon::operator bool() const
{
return impl_ != nullptr;
}
inline CommonVArrayInfo GVArrayCommon::common_info() const
{
return impl_->common_info();
}
inline int64_t GVArrayCommon::size() const
{
if (impl_ == nullptr) {
return 0;
}
return impl_->size();
}
inline bool GVArrayCommon::is_empty() const
{
return this->size() == 0;
}
/** \} */
/** To be used with #call_with_devirtualized_parameters. */
template<typename T, bool UseSingle, bool UseSpan> struct GVArrayDevirtualizer {
const GVArrayImpl &varray_impl;
template<typename Fn> bool devirtualize(const Fn &fn) const
{
const CommonVArrayInfo info = this->varray_impl.common_info();
const int64_t size = this->varray_impl.size();
if constexpr (UseSingle) {
if (info.type == CommonVArrayInfo::Type::Single) {
return fn(SingleAsSpan<T>(*static_cast<const T *>(info.data), size));
}
}
if constexpr (UseSpan) {
if (info.type == CommonVArrayInfo::Type::Span) {
return fn(Span<T>(static_cast<const T *>(info.data), size));
}
}
return false;
}
};
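/* A minimal usage sketch (the function name `sum_floats` is hypothetical): the devirtualizer can
 * be invoked with a generic lambda, which is instantiated for the single-value and the span fast
 * path. The callback should return true when it handled the array, so that false is only returned
 * when no fast path applies.
 *
 *   static bool sum_floats(const GVArrayImpl &varray_impl, float &r_sum)
 *   {
 *     const GVArrayDevirtualizer<float, true, true> devirtualizer{varray_impl};
 *     const int64_t size = varray_impl.size();
 *     return devirtualizer.devirtualize([&](const auto elements) {
 *       for (int64_t i = 0; i < size; i++) {
 *         r_sum += elements[i];
 *       }
 *       return true;
 *     });
 *   }
 */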
/* -------------------------------------------------------------------- */
/** \name Inline methods for #GVArray.
* \{ */
inline GVArray::GVArray(varray_tag::span /*tag*/, const GSpan span)
{
  /* Use const_cast because the underlying virtual array implementation is shared between const
   * and non-const data. */
GMutableSpan mutable_span{span.type(), const_cast<void *>(span.data()), span.size()};
this->emplace<GVArrayImpl_For_GSpan_final>(mutable_span);
}
inline GVArray::GVArray(varray_tag::single_ref /*tag*/,
const CPPType &type,
const int64_t size,
const void *value)
{
this->emplace<GVArrayImpl_For_SingleValueRef_final>(type, size, value);
}
namespace detail {
template<typename StorageT> constexpr GVArrayAnyExtraInfo GVArrayAnyExtraInfo::get()
{
static_assert(std::is_base_of_v<GVArrayImpl, StorageT> ||
is_same_any_v<StorageT, const GVArrayImpl *, std::shared_ptr<const GVArrayImpl>>);
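  /* Return a different getter function depending on how the virtual array implementation is
   * stored inside the #Any: as an inline/embedded value, as a raw pointer, or as a
   * #std::shared_ptr. */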
if constexpr (std::is_base_of_v<GVArrayImpl, StorageT>) {
return {[](const void *buffer) {
return static_cast<const GVArrayImpl *>((const StorageT *)buffer);
}};
}
else if constexpr (std::is_same_v<StorageT, const GVArrayImpl *>) {
return {[](const void *buffer) { return *(const StorageT *)buffer; }};
}
else if constexpr (std::is_same_v<StorageT, std::shared_ptr<const GVArrayImpl>>) {
return {[](const void *buffer) { return ((const StorageT *)buffer)->get(); }};
}
else {
BLI_assert_unreachable();
return {};
}
}
} // namespace detail
template<typename ImplT, typename... Args> inline GVArray GVArray::For(Args &&...args)
{
static_assert(std::is_base_of_v<GVArrayImpl, ImplT>);
GVArray varray;
varray.template emplace<ImplT>(std::forward<Args>(args)...);
return varray;
}
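/* Illustrative sketch (the values are hypothetical): construct a #GVArray from one of the
 * implementation classes declared above. Small implementations are stored inline, larger ones on
 * the heap (see #GVArrayCommon::emplace). Note that the referenced value must outlive the virtual
 * array, since only a pointer to it is stored.
 *
 *   static const int value = 42;
 *   const GVArray varray = GVArray::For<GVArrayImpl_For_SingleValueRef_final>(
 *       CPPType::get<int>(), 100, &value);
 */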
template<typename T> inline GVArray::GVArray(const VArray<T> &varray)
{
if (!varray) {
return;
}
const CommonVArrayInfo info = varray.common_info();
if (info.type == CommonVArrayInfo::Type::Single) {
*this = GVArray::ForSingle(CPPType::get<T>(), varray.size(), info.data);
return;
}
/* Need to check for ownership, because otherwise the referenced data can be destructed when
* #this is destructed. */
if (info.type == CommonVArrayInfo::Type::Span && !info.may_have_ownership) {
*this = GVArray::ForSpan(GSpan(CPPType::get<T>(), info.data, varray.size()));
return;
}
if (varray.try_assign_GVArray(*this)) {
return;
}
*this = GVArray::For<GVArrayImpl_For_VArray<T>>(varray);
}
template<typename T> inline VArray<T> GVArray::typed() const
{
if (!*this) {
return {};
}
BLI_assert(impl_->type().is<T>());
const CommonVArrayInfo info = this->common_info();
if (info.type == CommonVArrayInfo::Type::Single) {
return VArray<T>::ForSingle(*static_cast<const T *>(info.data), this->size());
}
/* Need to check for ownership, because otherwise the referenced data can be destructed when
* #this is destructed. */
if (info.type == CommonVArrayInfo::Type::Span && !info.may_have_ownership) {
return VArray<T>::ForSpan(Span<T>(static_cast<const T *>(info.data), this->size()));
}
VArray<T> varray;
if (this->try_assign_VArray(varray)) {
return varray;
}
return VArray<T>::template For<VArrayImpl_For_GVArray<T>>(*this);
}
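/* Illustrative sketch of converting between generic and typed virtual arrays (the values are
 * hypothetical). Cheap representations such as single values and non-owning spans are unwrapped
 * directly instead of being wrapped in an extra layer of indirection:
 *
 *   const VArray<float> typed = VArray<float>::ForSingle(1.0f, 10);
 *   const GVArray generic = typed;                              // VArray<T> -> GVArray.
 *   const VArray<float> typed_again = generic.typed<float>();   // GVArray -> VArray<T>.
 */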
/** \} */
/* -------------------------------------------------------------------- */
/** \name Inline methods for #GVMutableArray.
* \{ */
template<typename ImplT, typename... Args>
inline GVMutableArray GVMutableArray::For(Args &&...args)
{
static_assert(std::is_base_of_v<GVMutableArrayImpl, ImplT>);
GVMutableArray varray;
varray.emplace<ImplT>(std::forward<Args>(args)...);
return varray;
}
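/**
 * Wrap a typed mutable virtual array in a generic one. A span that is not owned by the typed
 * array is wrapped directly; otherwise the typed implementation gets a chance to provide its own
 * generic version before falling back to a generic wrapper around #varray.
 */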
template<typename T> inline GVMutableArray::GVMutableArray(const VMutableArray<T> &varray)
{
if (!varray) {
return;
}
const CommonVArrayInfo info = varray.common_info();
if (info.type == CommonVArrayInfo::Type::Span && !info.may_have_ownership) {
*this = GVMutableArray::ForSpan(
GMutableSpan(CPPType::get<T>(), const_cast<void *>(info.data), varray.size()));
return;
}
if (varray.try_assign_GVMutableArray(*this)) {
return;
}
*this = GVMutableArray::For<GVMutableArrayImpl_For_VMutableArray<T>>(varray);
}
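/**
 * Get a typed mutable virtual array that refers to the same data as this generic one. The
 * element type has to match #T. The same fast paths as in the constructor above are used to
 * avoid unnecessary wrapping.
 */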
template<typename T> inline VMutableArray<T> GVMutableArray::typed() const
{
if (!*this) {
return {};
}
BLI_assert(this->type().is<T>());
const CommonVArrayInfo info = this->common_info();
if (info.type == CommonVArrayInfo::Type::Span && !info.may_have_ownership) {
return VMutableArray<T>::ForSpan(
MutableSpan<T>(const_cast<T *>(static_cast<const T *>(info.data)), this->size()));
}
VMutableArray<T> varray;
if (this->try_assign_VMutableArray(varray)) {
return varray;
}
return VMutableArray<T>::template For<VMutableArrayImpl_For_GVMutableArray<T>>(*this);
}
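
/* A minimal usage sketch of the conversions implemented above (illustrative only; the variable
 * names are hypothetical and the snippet is not part of this header):
 *
 *   Array<int> data(10, 0);
 *   VMutableArray<int> typed_varray = VMutableArray<int>::ForSpan(data.as_mutable_span());
 *   GVMutableArray generic_varray(typed_varray);
 *   VMutableArray<int> typed_again = generic_varray.typed<int>();
 */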
/** \} */
} // namespace blender