tornavis/source/blender/blenkernel/BKE_cryptomatte.h

/* SPDX-FileCopyrightText: 2020 Blender Authors
*
* SPDX-License-Identifier: GPL-2.0-or-later */
/** \file
* \ingroup bke
*/
#pragma once
#include "BLI_sys_types.h"
#include "DNA_layer_types.h"
#ifdef __cplusplus
extern "C" {
#endif
/* Forward declarations. */
struct CryptomatteSession;
struct Material;
struct Object;
struct RenderResult;
struct Scene;
struct CryptomatteSession *BKE_cryptomatte_init(void);
struct CryptomatteSession *BKE_cryptomatte_init_from_render_result(
    const struct RenderResult *render_result);
struct CryptomatteSession *BKE_cryptomatte_init_from_scene(const struct Scene *scene);
struct CryptomatteSession *BKE_cryptomatte_init_from_view_layer(
    const struct ViewLayer *view_layer);
void BKE_cryptomatte_free(struct CryptomatteSession *session);
void BKE_cryptomatte_add_layer(struct CryptomatteSession *session, const char *layer_name);
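/* Illustrative usage sketch (not part of the original header), based only on
 * the declarations above; the layer name "CryptoObject" is an example value:
 *
 *   struct CryptomatteSession *session = BKE_cryptomatte_init();
 *   BKE_cryptomatte_add_layer(session, "CryptoObject");
 *   // ... accumulate hashes for the layer while rendering ...
 *   BKE_cryptomatte_free(session);
 */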
uint32_t BKE_cryptomatte_hash(const char *name, int name_len);
uint32_t BKE_cryptomatte_object_hash(struct CryptomatteSession *session,
                                     const char *layer_name,
                                     const struct Object *object);
uint32_t BKE_cryptomatte_material_hash(struct CryptomatteSession *session,
                                       const char *layer_name,
                                       const struct Material *material);
uint32_t BKE_cryptomatte_asset_hash(struct CryptomatteSession *session,
                                    const char *layer_name,
                                    const struct Object *object);
float BKE_cryptomatte_hash_to_float(uint32_t cryptomatte_hash);
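/* Illustrative sketch (not part of the original header): mapping an object to
 * the float value a renderer would write into a cryptomatte pass, assuming a
 * valid `session`, `layer_name` and `object` supplied by the caller:
 *
 *   uint32_t hash = BKE_cryptomatte_object_hash(session, layer_name, object);
 *   float encoded = BKE_cryptomatte_hash_to_float(hash);
 *   // `encoded` is the per-pixel value stored in the pass for this object.
 */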
/**
 * Find the name in the given session that matches the given encoded hash.
 */
bool BKE_cryptomatte_find_name(const struct CryptomatteSession *session,
                               float encoded_hash,
                               char *r_name,
                               int name_maxncpy);
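/* Illustrative sketch (not part of the original header): resolving a picked
 * pass value back to a name, e.g. for the compositor eyedropper. The buffer
 * size below is an assumption, not taken from this header:
 *
 *   char name[64];
 *   if (BKE_cryptomatte_find_name(session, encoded_hash, name, sizeof(name))) {
 *     // `name` now holds the object/material/asset name behind the hash.
 *   }
 */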
char *BKE_cryptomatte_entries_to_matte_id(struct NodeCryptomatte *node_storage);
void BKE_cryptomatte_matte_id_to_entries(struct NodeCryptomatte *node_storage,
                                         const char *matte_id);
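/* Illustrative sketch (not part of the original header): round-tripping the
 * matte id string of a compositor Cryptomatte node. How the returned string
 * must be freed is an assumption; check the implementation:
 *
 *   char *matte_id = BKE_cryptomatte_entries_to_matte_id(node_storage);
 *   BKE_cryptomatte_matte_id_to_entries(node_storage, matte_id);
 */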
void BKE_cryptomatte_store_metadata(const struct CryptomatteSession *session,
                                    struct RenderResult *render_result,
                                    const ViewLayer *view_layer);
#ifdef __cplusplus
}
#endif