tornavis/source/blender/makesdna/DNA_scene_types.h

/* SPDX-FileCopyrightText: 2001-2002 NaN Holding BV. All rights reserved.
*
* SPDX-License-Identifier: GPL-2.0-or-later */
/** \file
* \ingroup DNA
*/
#pragma once
#include "DNA_defs.h"
/* XXX(@ideasman42): temp feature. */
#define DURIAN_CAMERA_SWITCH
/**
* Check for cyclic set-scene.
* Libraries can cause this case, which is normally prevented; see #42009.
*/
#define USE_SETSCENE_CHECK
#include "DNA_ID.h"
#include "DNA_color_types.h" /* color management */
#include "DNA_customdata_types.h" /* Scene's runtime custom-data masks. */
#include "DNA_layer_types.h"
#include "DNA_listBase.h"
#include "DNA_scene_enums.h"
#include "DNA_vec_types.h"
#include "DNA_view3d_types.h"
2002-10-12 13:37:38 +02:00
struct AnimData;
struct Brush;
struct Collection;
struct ColorSpace;
struct CurveMapping;
struct CurveProfile;
struct CustomData_MeshMasks;
struct Editing;
struct Image;
struct MovieClip;
struct Object;
struct Scene;
struct World;
struct bGPdata;
struct bNodeTree;
/* -------------------------------------------------------------------- */
/** \name FFMPEG
* \{ */
typedef struct AviCodecData {
/** Save format. */
void *lpFormat;
/** Compressor options. */
void *lpParms;
/** Size of lpFormat buffer. */
unsigned int cbFormat;
/** Size of lpParms buffer. */
unsigned int cbParms;
/** Stream type, for consistency. */
unsigned int fccType;
/** Compressor. */
unsigned int fccHandler;
/** Keyframe rate. */
unsigned int dwKeyFrameEvery;
/** Compress quality 0-10,000. */
unsigned int dwQuality;
/** Bytes per second. */
unsigned int dwBytesPerSecond;
/** Flags... see below. */
unsigned int dwFlags;
/** For non-video streams only. */
unsigned int dwInterleaveEvery;
char _pad[4];
char avicodecname[128];
} AviCodecData;
typedef enum eFFMpegPreset {
FFM_PRESET_NONE = 0,
#ifdef DNA_DEPRECATED_ALLOW
/* Previously used by h.264 to control encoding speed vs. file size. */
FFM_PRESET_ULTRAFAST = 1, /* DEPRECATED */
FFM_PRESET_SUPERFAST = 2, /* DEPRECATED */
FFM_PRESET_VERYFAST = 3, /* DEPRECATED */
FFM_PRESET_FASTER = 4, /* DEPRECATED */
FFM_PRESET_FAST = 5, /* DEPRECATED */
FFM_PRESET_MEDIUM = 6, /* DEPRECATED */
FFM_PRESET_SLOW = 7, /* DEPRECATED */
FFM_PRESET_SLOWER = 8, /* DEPRECATED */
FFM_PRESET_VERYSLOW = 9, /* DEPRECATED */
#endif
/* Used by WEBM/VP9 and h.264 to control encoding speed vs. file size.
* WEBM/VP9 uses these values directly, whereas h.264 maps them to the
* MEDIUM, SLOWER, and SUPERFAST presets respectively. */
/** The default and recommended for most applications. */
FFM_PRESET_GOOD = 10,
/** Recommended if you have lots of time and want the best compression efficiency. */
FFM_PRESET_BEST = 11,
/** Recommended for live / fast encoding. */
FFM_PRESET_REALTIME = 12,
} eFFMpegPreset;
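
/* Illustrative sketch, not part of the DNA declarations: one way the generic
 * presets above could translate to h.264 encoder preset names, following the
 * mapping in the comment (GOOD -> medium, BEST -> slower, REALTIME ->
 * superfast). The helper name is hypothetical; the actual mapping lives in
 * the FFmpeg writer code, not in this header. */
static const char *ffmpeg_preset_to_h264_name(const eFFMpegPreset preset)
{
  switch (preset) {
    case FFM_PRESET_GOOD:
      return "medium";
    case FFM_PRESET_BEST:
      return "slower";
    case FFM_PRESET_REALTIME:
      return "superfast";
    default:
      return ""; /* No explicit preset requested, or a deprecated value. */
  }
}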
/**
* Mapping from easily-understandable descriptions to CRF values.
* Assumes 8-bit video output; values need to be remapped if 10-bit is output.
* We use a slightly wider range than the "subjectively sane range" suggested
* by https://trac.ffmpeg.org/wiki/Encode/H.264#a1.ChooseaCRFvalue
*/
typedef enum eFFMpegCrf {
FFM_CRF_NONE = -1,
FFM_CRF_LOSSLESS = 0,
FFM_CRF_PERC_LOSSLESS = 17,
FFM_CRF_HIGH = 20,
FFM_CRF_MEDIUM = 23,
FFM_CRF_LOW = 26,
FFM_CRF_VERYLOW = 29,
FFM_CRF_LOWEST = 32,
} eFFMpegCrf;
typedef enum eFFMpegAudioChannels {
FFM_CHANNELS_MONO = 1,
FFM_CHANNELS_STEREO = 2,
FFM_CHANNELS_SURROUND4 = 4,
FFM_CHANNELS_SURROUND51 = 6,
FFM_CHANNELS_SURROUND71 = 8,
} eFFMpegAudioChannels;
typedef struct FFMpegCodecData {
int type;
int codec;
int audio_codec;
int video_bitrate;
int audio_bitrate;
int audio_mixrate;
int audio_channels;
float audio_volume;
int gop_size;
/** Only used if FFMPEG_USE_MAX_B_FRAMES flag is set. */
int max_b_frames;
int flags;
int constant_rate_factor;
/** See eFFMpegPreset. */
int ffmpeg_preset;
int rc_min_rate;
int rc_max_rate;
int rc_buffer_size;
int mux_packet_size;
int mux_rate;
void *_pad1;
} FFMpegCodecData;
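
/* Illustrative sketch, not part of the DNA declarations: `constant_rate_factor`
 * stores one of the #eFFMpegCrf values, and anything other than FFM_CRF_NONE
 * means constant-rate-factor encoding is requested, in which case the fixed
 * bit-rate fields above are not the driving rate control. The helper name is
 * hypothetical. */
static int ffmpeg_codec_uses_crf(const FFMpegCodecData *codec)
{
  return codec->constant_rate_factor != FFM_CRF_NONE;
}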
/** \} */
/* -------------------------------------------------------------------- */
/** \name Audio
* \{ */
typedef struct AudioData {
int mixrate; /* 2.5: now in FFMpegCodecData: audio_mixrate. */
float main; /* 2.5: now in FFMpegCodecData: audio_volume. */
float speed_of_sound;
float doppler_factor;
int distance_model;
short flag;
char _pad[2];
float volume;
char _pad2[4];
} AudioData;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Render Layers
* \{ */
/** Render Layer. */
typedef struct SceneRenderLayer {
struct SceneRenderLayer *next, *prev;
/** MAX_NAME. */
char name[64] DNA_DEPRECATED;
/** Converted to ViewLayer setting. */
struct Material *mat_override DNA_DEPRECATED;
/** Converted to LayerCollection cycles camera visibility override. */
unsigned int lay DNA_DEPRECATED;
/** Converted to LayerCollection cycles holdout override. */
unsigned int lay_zmask DNA_DEPRECATED;
unsigned int lay_exclude DNA_DEPRECATED;
/** Converted to ViewLayer layflag and flag. */
int layflag DNA_DEPRECATED;
/* Pass_xor has to be after passflag. */
int passflag DNA_DEPRECATED;
/** Converted to ViewLayer passflag and flag. */
int pass_xor DNA_DEPRECATED;
/** Converted to ViewLayer setting. */
int samples DNA_DEPRECATED;
/** Converted to ViewLayer pass_alpha_threshold. */
float pass_alpha_threshold DNA_DEPRECATED;
/** Converted to ViewLayer id_properties. */
IDProperty *prop DNA_DEPRECATED;
/** Converted to ViewLayer freestyleConfig. */
struct FreestyleConfig freestyleConfig DNA_DEPRECATED;
} SceneRenderLayer;
/** #SceneRenderLayer::layflag */
enum {
SCE_LAY_SOLID = 1 << 0,
SCE_LAY_UNUSED_1 = 1 << 1,
SCE_LAY_UNUSED_2 = 1 << 2,
SCE_LAY_UNUSED_3 = 1 << 3,
SCE_LAY_SKY = 1 << 4,
SCE_LAY_STRAND = 1 << 5,
SCE_LAY_FRS = 1 << 6,
SCE_LAY_AO = 1 << 7,
SCE_LAY_VOLUMES = 1 << 8,
SCE_LAY_MOTION_BLUR = 1 << 9,
/* Flags between (1 << 9) and (1 << 15) are set to 1 already, for future options. */
SCE_LAY_FLAG_DEFAULT = ((1 << 15) - 1),
SCE_LAY_UNUSED_4 = 1 << 15,
SCE_LAY_UNUSED_5 = 1 << 16,
SCE_LAY_DISABLE = 1 << 17,
SCE_LAY_UNUSED_6 = 1 << 18,
SCE_LAY_UNUSED_7 = 1 << 19,
};
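
/* Illustrative sketch, not part of the DNA declarations: SCE_LAY_FLAG_DEFAULT
 * has bits 0..14 set, so any layflag option in that range (including the
 * future-option bits mentioned above) starts out enabled. The helper name is
 * hypothetical. */
static int scene_layflag_enabled_by_default(const int layflag_bit)
{
  return (SCE_LAY_FLAG_DEFAULT & layflag_bit) != 0;
}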
/** #SceneRenderLayer::passflag */
typedef enum eScenePassType {
SCE_PASS_COMBINED = (1 << 0),
SCE_PASS_Z = (1 << 1),
SCE_PASS_UNUSED_1 = (1 << 2), /* RGBA */
SCE_PASS_UNUSED_2 = (1 << 3), /* DIFFUSE */
SCE_PASS_UNUSED_3 = (1 << 4), /* SPEC */
SCE_PASS_SHADOW = (1 << 5),
SCE_PASS_AO = (1 << 6),
SCE_PASS_POSITION = (1 << 7),
SCE_PASS_NORMAL = (1 << 8),
SCE_PASS_VECTOR = (1 << 9),
SCE_PASS_UNUSED_5 = (1 << 10), /* REFRACT */
SCE_PASS_INDEXOB = (1 << 11),
SCE_PASS_UV = (1 << 12),
SCE_PASS_UNUSED_6 = (1 << 13), /* INDIRECT */
SCE_PASS_MIST = (1 << 14),
SCE_PASS_UNUSED_7 = (1 << 15), /* RAYHITS */
SCE_PASS_EMIT = (1 << 16),
SCE_PASS_ENVIRONMENT = (1 << 17),
SCE_PASS_INDEXMA = (1 << 18),
SCE_PASS_DIFFUSE_DIRECT = (1 << 19),
SCE_PASS_DIFFUSE_INDIRECT = (1 << 20),
SCE_PASS_DIFFUSE_COLOR = (1 << 21),
SCE_PASS_GLOSSY_DIRECT = (1 << 22),
SCE_PASS_GLOSSY_INDIRECT = (1 << 23),
SCE_PASS_GLOSSY_COLOR = (1 << 24),
SCE_PASS_TRANSM_DIRECT = (1 << 25),
SCE_PASS_TRANSM_INDIRECT = (1 << 26),
SCE_PASS_TRANSM_COLOR = (1 << 27),
SCE_PASS_SUBSURFACE_DIRECT = (1 << 28),
SCE_PASS_SUBSURFACE_INDIRECT = (1 << 29),
SCE_PASS_SUBSURFACE_COLOR = (1 << 30),
SCE_PASS_ROUGHNESS = (1u << 31u),
} eScenePassType;
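
/* Illustrative sketch, not part of the DNA declarations: pass types are
 * single bits, so a #SceneRenderLayer::passflag style mask can hold several
 * of them and be tested per pass. The helper name is hypothetical. */
static int scene_pass_enabled(const int passflag, const eScenePassType pass)
{
  return (passflag & pass) != 0;
}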
#define RE_PASSNAME_DEPRECATED "Deprecated"
#define RE_PASSNAME_COMBINED "Combined"
#define RE_PASSNAME_Z "Depth"
#define RE_PASSNAME_VECTOR "Vector"
#define RE_PASSNAME_POSITION "Position"
#define RE_PASSNAME_NORMAL "Normal"
#define RE_PASSNAME_UV "UV"
#define RE_PASSNAME_EMIT "Emit"
#define RE_PASSNAME_SHADOW "Shadow"
#define RE_PASSNAME_AO "AO"
#define RE_PASSNAME_ENVIRONMENT "Env"
#define RE_PASSNAME_INDEXOB "IndexOB"
#define RE_PASSNAME_INDEXMA "IndexMA"
#define RE_PASSNAME_MIST "Mist"
#define RE_PASSNAME_DIFFUSE_DIRECT "DiffDir"
#define RE_PASSNAME_DIFFUSE_INDIRECT "DiffInd"
#define RE_PASSNAME_DIFFUSE_COLOR "DiffCol"
#define RE_PASSNAME_GLOSSY_DIRECT "GlossDir"
#define RE_PASSNAME_GLOSSY_INDIRECT "GlossInd"
#define RE_PASSNAME_GLOSSY_COLOR "GlossCol"
#define RE_PASSNAME_TRANSM_DIRECT "TransDir"
#define RE_PASSNAME_TRANSM_INDIRECT "TransInd"
#define RE_PASSNAME_TRANSM_COLOR "TransCol"
#define RE_PASSNAME_SUBSURFACE_DIRECT "SubsurfaceDir"
#define RE_PASSNAME_SUBSURFACE_INDIRECT "SubsurfaceInd"
#define RE_PASSNAME_SUBSURFACE_COLOR "SubsurfaceCol"
#define RE_PASSNAME_FREESTYLE "Freestyle"
#define RE_PASSNAME_BLOOM "BloomCol"
#define RE_PASSNAME_VOLUME_LIGHT "VolumeDir"
#define RE_PASSNAME_TRANSPARENT "Transp"
#define RE_PASSNAME_CRYPTOMATTE_OBJECT "CryptoObject"
#define RE_PASSNAME_CRYPTOMATTE_ASSET "CryptoAsset"
#define RE_PASSNAME_CRYPTOMATTE_MATERIAL "CryptoMaterial"
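/**
* Illustrative sketch, not part of the original header: the RE_PASSNAME_* strings are the
* names render passes are identified by (for example as layer/channel names in render
* results). A plain string compare is enough to recognize one; `pass_name` is hypothetical.
*
* \code{.c}
* #include <string.h>
*
* const char *pass_name = RE_PASSNAME_MIST;
* if (strcmp(pass_name, RE_PASSNAME_MIST) == 0) {
*   // Handle the mist pass.
* }
* \endcode
*/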
/** \} */
/* -------------------------------------------------------------------- */
/** \name Multi-View
* \{ */
/** View (Multi-view). */
typedef struct SceneRenderView {
struct SceneRenderView *next, *prev;
/** MAX_NAME. */
char name[64];
/** MAX_NAME. */
char suffix[64];
int viewflag;
char _pad2[4];
} SceneRenderView;
/** #SceneRenderView::viewflag */
enum {
SCE_VIEW_DISABLE = 1 << 0,
};
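/**
* Illustrative sketch, not part of the original header: #SceneRenderView items are kept in
* a ListBase (the render data's list of views) and walked via the next/prev pointers,
* skipping entries with #SCE_VIEW_DISABLE set. The function name is hypothetical.
*
* \code{.c}
* static int example_count_enabled_views(const ListBase *views)
* {
*   int count = 0;
*   for (const SceneRenderView *srv = views->first; srv; srv = srv->next) {
*     if ((srv->viewflag & SCE_VIEW_DISABLE) == 0) {
*       count++;
*     }
*   }
*   return count;
* }
* \endcode
*/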
/** #RenderData::views_format */
enum {
SCE_VIEWS_FORMAT_STEREO_3D = 0,
SCE_VIEWS_FORMAT_MULTIVIEW = 1,
};
/** #ImageFormatData::views_format (also used for #Sequence::views_format). */
enum {
R_IMF_VIEWS_INDIVIDUAL = 0,
R_IMF_VIEWS_STEREO_3D = 1,
R_IMF_VIEWS_MULTIVIEW = 2,
};
typedef struct Stereo3dFormat {
short flag;
/** Encoding mode. */
char display_mode;
/** Anaglyph scheme for the user display. */
char anaglyph_type;
/** Interlace type for the user display. */
char interlace_type;
char _pad[3];
} Stereo3dFormat;
/** #Stereo3dFormat::display_mode */
typedef enum eStereoDisplayMode {
S3D_DISPLAY_ANAGLYPH = 0,
S3D_DISPLAY_INTERLACE = 1,
S3D_DISPLAY_PAGEFLIP = 2,
S3D_DISPLAY_SIDEBYSIDE = 3,
S3D_DISPLAY_TOPBOTTOM = 4,
} eStereoDisplayMode;
/** #Stereo3dFormat::flag */
typedef enum eStereo3dFlag {
S3D_INTERLACE_SWAP = (1 << 0),
S3D_SIDEBYSIDE_CROSSEYED = (1 << 1),
S3D_SQUEEZED_FRAME = (1 << 2),
} eStereo3dFlag;
/** #Stereo3dFormat::anaglyph_type */
typedef enum eStereo3dAnaglyphType {
S3D_ANAGLYPH_REDCYAN = 0,
S3D_ANAGLYPH_GREENMAGENTA = 1,
S3D_ANAGLYPH_YELLOWBLUE = 2,
} eStereo3dAnaglyphType;
/** #Stereo3dFormat::interlace_type */
typedef enum eStereo3dInterlaceType {
S3D_INTERLACE_ROW = 0,
S3D_INTERLACE_COLUMN = 1,
S3D_INTERLACE_CHECKERBOARD = 2,
} eStereo3dInterlaceType;
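/**
* Illustrative sketch, not part of the original header: a #Stereo3dFormat is configured by
* choosing a display mode plus the sub-options that apply to it. The variable and the
* particular values below are only an example.
*
* \code{.c}
* Stereo3dFormat s3d = {0};
* s3d.display_mode = S3D_DISPLAY_INTERLACE;  // One of #eStereoDisplayMode.
* s3d.interlace_type = S3D_INTERLACE_ROW;    // Only meaningful for interlaced display.
* s3d.flag |= S3D_INTERLACE_SWAP;            // Swap the left/right assignment.
* \endcode
*/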
/** \} */
/* -------------------------------------------------------------------- */
/** \name Image Format Data
* \{ */
/**
* Generic image format settings,
* this is used for #NodeImageFile and IMAGE_OT_save_as operator too.
*
* NOTE: it's a bit strange that even though this is an image format struct
* the imtype can still be used to select video formats.
* RNA ensures these enums are only selectable for render output.
*/
typedef struct ImageFormatData {
/**
* R_IMF_IMTYPE_PNG, R_...
* \note Video types should only ever be set from this structure when used from #RenderData.
*/
char imtype;
/**
* Bits per channel, #R_IMF_CHAN_DEPTH_8 -> 32;
* not a flag, only one value is set at a time. */
char depth;
/** R_IMF_PLANES_BW, R_IMF_PLANES_RGB, R_IMF_PLANES_RGBA. */
char planes;
/** Generic options for all image types, alpha Z-buffer. */
char flag;
/** (0 - 100), e.g. JPEG quality. */
char quality;
/** (0 - 100), e.g. PNG compression. */
char compress;
/* --- format specific --- */
/** OpenEXR. */
char exr_codec;
/** CINEON. */
char cineon_flag;
short cineon_white, cineon_black;
float cineon_gamma;
/** Jpeg2000. */
char jp2_flag;
char jp2_codec;
/** TIFF. */
char tiff_codec;
char _pad[4];
/** Multi-view. */
char views_format;
Stereo3dFormat stereo3d_format;
/* Color management members. */
char color_management;
char _pad1[7];
ColorManagedViewSettings view_settings;
ColorManagedDisplaySettings display_settings;
ColorManagedColorspaceSettings linear_colorspace_settings;
} ImageFormatData;
/** #ImageFormatData::imtype */
enum {
R_IMF_IMTYPE_TARGA = 0,
R_IMF_IMTYPE_IRIS = 1,
// R_HAMX = 2, /* DEPRECATED */
// R_FTYPE = 3, /* DEPRECATED */
R_IMF_IMTYPE_JPEG90 = 4,
// R_MOVIE = 5, /* DEPRECATED */
R_IMF_IMTYPE_IRIZ = 7,
R_IMF_IMTYPE_RAWTGA = 14,
R_IMF_IMTYPE_AVIRAW = 15,
R_IMF_IMTYPE_AVIJPEG = 16,
R_IMF_IMTYPE_PNG = 17,
// R_IMF_IMTYPE_AVICODEC = 18, /* DEPRECATED */
// R_IMF_IMTYPE_QUICKTIME = 19, /* DEPRECATED */
R_IMF_IMTYPE_BMP = 20,
R_IMF_IMTYPE_RADHDR = 21,
R_IMF_IMTYPE_TIFF = 22,
R_IMF_IMTYPE_OPENEXR = 23,
R_IMF_IMTYPE_FFMPEG = 24,
// R_IMF_IMTYPE_FRAMESERVER = 25, /* DEPRECATED */
R_IMF_IMTYPE_CINEON = 26,
R_IMF_IMTYPE_DPX = 27,
R_IMF_IMTYPE_MULTILAYER = 28,
R_IMF_IMTYPE_DDS = 29,
R_IMF_IMTYPE_JP2 = 30,
R_IMF_IMTYPE_H264 = 31,
R_IMF_IMTYPE_XVID = 32,
R_IMF_IMTYPE_THEORA = 33,
R_IMF_IMTYPE_PSD = 34,
R_IMF_IMTYPE_WEBP = 35,
R_IMF_IMTYPE_AV1 = 36,
R_IMF_IMTYPE_INVALID = 255,
};
/** #ImageFormatData::flag */
enum {
// R_IMF_FLAG_ZBUF = 1 << 0, /* DEPRECATED, and cleared. */
R_IMF_FLAG_PREVIEW_JPG = 1 << 1,
};
/* */
/**
* #ImageFormatData::depth
*
* Return values from #BKE_imtype_valid_depths, note these are bit depths per channel.
*/
typedef enum eImageFormatDepth {
/** 1 bit (unused). */
R_IMF_CHAN_DEPTH_1 = (1 << 0),
/** 8 bits (default). */
R_IMF_CHAN_DEPTH_8 = (1 << 1),
/** 10 bits (uncommon, Cineon/DPX support). */
R_IMF_CHAN_DEPTH_10 = (1 << 2),
/** 12 bits (uncommon, jp2/DPX support). */
R_IMF_CHAN_DEPTH_12 = (1 << 3),
/** 16 bits (TIFF, half float EXR). */
R_IMF_CHAN_DEPTH_16 = (1 << 4),
/** 24 bits (unused). */
R_IMF_CHAN_DEPTH_24 = (1 << 5),
/** 32 bits (full float EXR). */
R_IMF_CHAN_DEPTH_32 = (1 << 6),
} eImageFormatDepth;
/** #ImageFormatData::planes */
enum {
R_IMF_PLANES_RGB = 24,
R_IMF_PLANES_RGBA = 32,
R_IMF_PLANES_BW = 8,
};
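/**
* Illustrative sketch, not part of the original header: a minimal #ImageFormatData setup
* for 8-bit RGBA PNG output, picking one value from each of the enums above. In Blender
* these fields are normally filled via RNA / the UI; the variable is hypothetical.
*
* \code{.c}
* ImageFormatData imf = {0};
* imf.imtype = R_IMF_IMTYPE_PNG;
* imf.depth = R_IMF_CHAN_DEPTH_8;  // Bits per channel: a single value, not a mask.
* imf.planes = R_IMF_PLANES_RGBA;  // Keep the alpha channel.
* imf.compress = 15;               // PNG compression, 0-100.
* \endcode
*/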
/** #ImageFormatData::exr_codec */
enum {
R_IMF_EXR_CODEC_NONE = 0,
R_IMF_EXR_CODEC_PXR24 = 1,
R_IMF_EXR_CODEC_ZIP = 2,
R_IMF_EXR_CODEC_PIZ = 3,
R_IMF_EXR_CODEC_RLE = 4,
R_IMF_EXR_CODEC_ZIPS = 5,
R_IMF_EXR_CODEC_B44 = 6,
R_IMF_EXR_CODEC_B44A = 7,
R_IMF_EXR_CODEC_DWAA = 8,
R_IMF_EXR_CODEC_DWAB = 9,
R_IMF_EXR_CODEC_MAX = 10,
};
/** #ImageFormatData::jp2_flag */
enum {
/** When disabled use RGB. */
R_IMF_JP2_FLAG_YCC = 1 << 0, /* Was `R_JPEG2K_YCC`. */
R_IMF_JP2_FLAG_CINE_PRESET = 1 << 1, /* Was `R_JPEG2K_CINE_PRESET`. */
R_IMF_JP2_FLAG_CINE_48 = 1 << 2, /* Was `R_JPEG2K_CINE_48FPS`. */
};
/** #ImageFormatData::jp2_codec */
enum {
R_IMF_JP2_CODEC_JP2 = 0,
R_IMF_JP2_CODEC_J2K = 1,
};
/** #ImageFormatData::cineon_flag */
enum {
R_IMF_CINEON_FLAG_LOG = 1 << 0, /* Was `R_CINEON_LOG`. */
};
/** #ImageFormatData::tiff_codec */
enum {
R_IMF_TIFF_CODEC_DEFLATE = 0,
R_IMF_TIFF_CODEC_LZW = 1,
R_IMF_TIFF_CODEC_PACKBITS = 2,
R_IMF_TIFF_CODEC_NONE = 3,
};
/** \} */
/* -------------------------------------------------------------------- */
/** \name Render Bake
* \{ */
/** #ImageFormatData::color_management */
enum {
R_IMF_COLOR_MANAGEMENT_FOLLOW_SCENE = 0,
R_IMF_COLOR_MANAGEMENT_OVERRIDE = 1,
};
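/**
* Illustrative sketch, not part of the original header: `color_management` chooses whether
* the view/display settings embedded in #ImageFormatData are used, or the scene's color
* management is followed. The helper below is hypothetical; `bool` is assumed available.
*
* \code{.c}
* static bool example_uses_own_color_management(const ImageFormatData *imf)
* {
*   return imf->color_management == R_IMF_COLOR_MANAGEMENT_OVERRIDE;
* }
* \endcode
*/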
typedef struct BakeData {
struct ImageFormatData im_format;
/** FILE_MAX. */
char filepath[1024];
short width, height;
short margin, flag;
float cage_extrusion;
float max_ray_distance;
int pass_filter;
char normal_swizzle[3];
char normal_space;
char target;
char save_mode;
char margin_type;
char view_from;
char _pad[4];
struct Object *cage_object;
} BakeData;
/** #BakeData::margin_type (char). */
typedef enum eBakeMarginType {
R_BAKE_ADJACENT_FACES = 0,
R_BAKE_EXTEND = 1,
} eBakeMarginType;
/** #BakeData::normal_swizzle (char). */
typedef enum eBakeNormalSwizzle {
R_BAKE_POSX = 0,
R_BAKE_POSY = 1,
R_BAKE_POSZ = 2,
R_BAKE_NEGX = 3,
R_BAKE_NEGY = 4,
R_BAKE_NEGZ = 5,
} eBakeNormalSwizzle;
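/**
* Illustrative sketch, not part of the original header: `normal_swizzle` holds one
* #eBakeNormalSwizzle value per output channel; a post-process can map each value to an
* axis index and a sign, roughly as below. The helper is hypothetical.
*
* \code{.c}
* static void example_swizzle_to_axis(char swizzle, int *r_axis, float *r_sign)
* {
*   switch (swizzle) {
*     case R_BAKE_POSX: *r_axis = 0; *r_sign = +1.0f; break;
*     case R_BAKE_POSY: *r_axis = 1; *r_sign = +1.0f; break;
*     case R_BAKE_POSZ: *r_axis = 2; *r_sign = +1.0f; break;
*     case R_BAKE_NEGX: *r_axis = 0; *r_sign = -1.0f; break;
*     case R_BAKE_NEGY: *r_axis = 1; *r_sign = -1.0f; break;
*     case R_BAKE_NEGZ: *r_axis = 2; *r_sign = -1.0f; break;
*     default: *r_axis = 0; *r_sign = +1.0f; break;
*   }
* }
* \endcode
*/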
/** #BakeData::target (char). */
typedef enum eBakeTarget {
R_BAKE_TARGET_IMAGE_TEXTURES = 0,
R_BAKE_TARGET_VERTEX_COLORS = 1,
} eBakeTarget;
/** #BakeData::save_mode (char). */
typedef enum eBakeSaveMode {
R_BAKE_SAVE_INTERNAL = 0,
R_BAKE_SAVE_EXTERNAL = 1,
} eBakeSaveMode;
/** #BakeData::view_from (char). */
typedef enum eBakeViewFrom {
R_BAKE_VIEW_FROM_ABOVE_SURFACE = 0,
R_BAKE_VIEW_FROM_ACTIVE_CAMERA = 1,
} eBakeViewFrom;
/** #BakeData::pass_filter */
typedef enum eBakePassFilter {
R_BAKE_PASS_FILTER_NONE = 0,
R_BAKE_PASS_FILTER_UNUSED = (1 << 0),
R_BAKE_PASS_FILTER_EMIT = (1 << 1),
R_BAKE_PASS_FILTER_DIFFUSE = (1 << 2),
R_BAKE_PASS_FILTER_GLOSSY = (1 << 3),
R_BAKE_PASS_FILTER_TRANSM = (1 << 4),
R_BAKE_PASS_FILTER_SUBSURFACE = (1 << 5),
R_BAKE_PASS_FILTER_DIRECT = (1 << 6),
R_BAKE_PASS_FILTER_INDIRECT = (1 << 7),
R_BAKE_PASS_FILTER_COLOR = (1 << 8),
} eBakePassFilter;
#define R_BAKE_PASS_FILTER_ALL (~0)
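/**
 * A minimal sketch (not part of the DNA definitions) of how calling code might
 * combine #eBakePassFilter bits into #BakeData::pass_filter. The `use_direct`,
 * `use_indirect` and `use_color` parameters are hypothetical inputs, not fields
 * defined in this header.
 */
static inline int bake_pass_filter_sketch(int use_direct, int use_indirect, int use_color)
{
  int pass_filter = R_BAKE_PASS_FILTER_NONE;
  if (use_direct) {
    pass_filter |= R_BAKE_PASS_FILTER_DIRECT;
  }
  if (use_indirect) {
    pass_filter |= R_BAKE_PASS_FILTER_INDIRECT;
  }
  if (use_color) {
    pass_filter |= R_BAKE_PASS_FILTER_COLOR;
  }
  return pass_filter;
}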
/** \} */
/* -------------------------------------------------------------------- */
/** \name Render Data
* \{ */
typedef struct RenderData {
struct ImageFormatData im_format;
struct AviCodecData *avicodecdata;
struct FFMpegCodecData ffcodecdata;
/** Frames as in 'images'. */
int cfra, sfra, efra;
/** Sub-frame offset from `cfra`, in 0.0-1.0. */
float subframe;
/** Start+end frames of preview range. */
int psfra, pefra;
int images, framapto;
short flag, threads;
float framelen, blurfac;
/** Frames to jump during render/playback. */
int frame_step;
char _pad10[2];
/** For the dimensions presets menu. */
short dimensionspreset;
/** Size in %. */
short size;
char _pad6[2];
/* From buttons: */
/**
* The desired number of pixels in the x direction
*/
int xsch;
/**
* The desired number of pixels in the y direction
*/
int ysch;
/**
 * Render tile dimensions (deprecated).
*/
int tilex DNA_DEPRECATED;
int tiley DNA_DEPRECATED;
short planes DNA_DEPRECATED;
short imtype DNA_DEPRECATED;
short subimtype DNA_DEPRECATED;
short quality DNA_DEPRECATED;
char use_lock_interface;
char _pad7[3];
/**
* Flags for render settings. Use bit-masking to access the settings.
*/
int scemode;
/**
 * Flags for render settings. Use bit-masking to access the settings
 * (a masking sketch follows this struct).
*/
int mode;
short frs_sec;
/**
* What to do with the sky/background.
* Picks sky/pre-multiply blending for the background.
*/
char alphamode;
char _pad0[1];
/** Render border to render sub-regions. */
rctf border;
/* Information on different layers to be rendered. */
/** Converted to Scene->view_layers. */
ListBase layers DNA_DEPRECATED;
/** Converted to Scene->active_layer. */
short actlay DNA_DEPRECATED;
char _pad1[2];
/**
 * Adjustment factors for the aspect ratio in the x and y directions (was a short in 2.45).
*/
float xasp, yasp;
float frs_sec_base;
/**
* Value used to define filter size for all filter options.
*/
float gauss;
/** Color management settings - color profiles, gamma correction, etc. */
int color_mgt_flag;
/** Dither noise intensity. */
float dither_intensity;
/* Bake Render options. */
short bake_mode, bake_flag;
short bake_margin, bake_samples;
short bake_margin_type;
char _pad9[6];
float bake_biasdist, bake_user_scale;
/* Path to render output. */
/** 1024 = FILE_MAX. */
/* NOTE: Excluded from `BKE_bpath_foreach_path_` / `scene_foreach_path` code. */
char pic[1024];
/** Stamps flags. */
int stamp;
/** Select one of Blender's bitmap fonts. */
short stamp_font_id;
char _pad3[2];
/** Stamp info user data. */
char stamp_udata[768];
/* Foreground/background color. */
float fg_stamp[4];
float bg_stamp[4];
/** Sequencer options. */
char seq_prev_type;
/** UNUSED. */
char seq_rend_type;
/** Flag use for sequence render/draw. */
char seq_flag;
char _pad5[3];
/* Render simplify. */
short simplify_subsurf;
short simplify_subsurf_render;
short simplify_gpencil;
float simplify_particles;
float simplify_particles_render;
float simplify_volumes;
float simplify_shadows;
float simplify_shadows_render;
/** Freestyle line thickness options. */
int line_thickness_mode;
/** In pixels. */
float unit_line_thickness;
/** Render engine. */
char engine[32];
char _pad2[2];
/** Performance Options. */
short perf_flag;
/** Cycles baking. */
struct BakeData bake;
int _pad8;
short preview_pixel_size;
short _pad4;
/* MultiView. */
/** SceneRenderView. */
ListBase views;
short actview;
short views_format;
/* Hair Display. */
short hair_type, hair_subdiv;
/** Motion blur shutter. */
struct CurveMapping mblur_shutter_curve;
} RenderData;
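/**
 * A minimal sketch (not part of the DNA definitions) of the bit-masking
 * mentioned for #RenderData::mode and #RenderData::scemode above; `mode_flag`
 * is a hypothetical caller-provided bit from those flag sets.
 */
static inline int render_mode_flag_test_sketch(const RenderData *rd, int mode_flag)
{
  return (rd->mode & mode_flag) != 0;
}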
/** #RenderData::quality_flag */
typedef enum eQualityOption {
SCE_PERF_HQ_NORMALS = (1 << 0),
} eQualityOption;
/** #RenderData::hair_type */
typedef enum eHairType {
SCE_HAIR_SHAPE_STRAND = 0,
SCE_HAIR_SHAPE_STRIP = 1,
} eHairType;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Render Conversion/Simplification Settings
* \{ */
/** Control render convert and shading engine. */
typedef struct RenderProfile {
struct RenderProfile *next, *prev;
char name[32];
short particle_perc;
short subsurf_max;
short shadbufsample_max;
char _pad1[2];
float ao_error;
char _pad2[4];
} RenderProfile;
/* UV Paint. */
/** #ToolSettings::uv_sculpt_settings */
enum {
UV_SCULPT_LOCK_BORDERS = 1,
UV_SCULPT_ALL_ISLANDS = 2,
};
/** #ToolSettings::uv_relax_method */
enum {
UV_SCULPT_TOOL_RELAX_LAPLACIAN = 1,
UV_SCULPT_TOOL_RELAX_HC = 2,
UV_SCULPT_TOOL_RELAX_COTAN = 3,
};
/* Stereo Flags. */
#define STEREO_RIGHT_NAME "right"
#define STEREO_LEFT_NAME "left"
#define STEREO_RIGHT_SUFFIX "_R"
#define STEREO_LEFT_SUFFIX "_L"
/** #View3D::stereo3d_camera / #View3D::multiview_eye / #ImageUser::multiview_eye */
typedef enum eStereoViews {
STEREO_LEFT_ID = 0,
STEREO_RIGHT_ID = 1,
STEREO_3D_ID = 2,
STEREO_MONO_ID = 3,
} eStereoViews;
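/**
 * A minimal sketch (not part of the DNA definitions) of mapping an
 * #eStereoViews eye to the per-view filename suffix defined above; for brevity
 * anything other than the left eye falls back to the right suffix here.
 */
static inline const char *stereo_view_suffix_sketch(eStereoViews eye)
{
  return (eye == STEREO_LEFT_ID) ? STEREO_LEFT_SUFFIX : STEREO_RIGHT_SUFFIX;
}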
/** \} */
/* -------------------------------------------------------------------- */
/** \name Time Line Markers
* \{ */
typedef struct TimeMarker {
struct TimeMarker *next, *prev;
int frame;
char name[64];
unsigned int flag;
struct Object *camera;
struct IDProperty *prop;
} TimeMarker;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Paint Mode/Tool Data
* \{ */
#define PAINT_MAX_INPUT_SAMPLES 64
typedef struct Paint_Runtime {
/** Avoid having to compare with scene pointer everywhere. */
unsigned int tool_offset;
unsigned short ob_mode;
char _pad[2];
} Paint_Runtime;
/** We might want to store other things here. */
typedef struct PaintToolSlot {
struct Brush *brush;
} PaintToolSlot;
/** Paint Tool Base. */
typedef struct Paint {
struct Brush *brush;
/**
 * Each tool has its own active brush; the currently active tool is defined
 * by the current `brush`. (A lookup sketch follows this struct.)
*/
struct PaintToolSlot *tool_slots;
int tool_slots_len;
char _pad1[4];
struct Palette *palette;
/** Cavity curve. */
struct CurveMapping *cavity_curve;
/** WM Paint cursor. */
void *paint_cursor;
unsigned char paint_cursor_col[4];
/** Enum #ePaintFlags. */
int flags;
/** Paint stroke can use up to PAINT_MAX_INPUT_SAMPLES inputs to smooth the stroke. */
int num_input_samples;
/** Flags used for symmetry. */
int symmetry_flags;
float tile_offset[3];
char _pad2[4];
struct Paint_Runtime runtime;
} Paint;
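/**
 * A minimal sketch (not part of the DNA definitions) of the slot/brush
 * relation described above: look up the brush stored in a tool slot and fall
 * back to the active #Paint::brush. `slot_index` is a hypothetical
 * caller-provided tool offset, bounds-checked against #Paint::tool_slots_len.
 */
static inline struct Brush *paint_tool_slot_brush_sketch(const Paint *paint, int slot_index)
{
  if (paint->tool_slots && slot_index >= 0 && slot_index < paint->tool_slots_len &&
      paint->tool_slots[slot_index].brush)
  {
    return paint->tool_slots[slot_index].brush;
  }
  return paint->brush;
}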
/** \} */
/* -------------------------------------------------------------------- */
/** \name Image Paint
* \{ */
/** Texture/Image Editor. */
typedef struct ImagePaintSettings {
Paint paint;
short flag, missing_data;
/** For projection painting only. */
short seam_bleed, normal_angle;
/** Capture size for re-projection. */
short screen_grab_size[2];
/** Mode used for texture painting. */
int mode;
/** Workaround until we support true layer masks. */
struct Image *stencil;
/** Clone layer for image mode for projective texture painting. */
struct Image *clone;
/** Canvas when the explicit system is used for painting. */
struct Image *canvas;
float stencil_col[3];
/** Dither amount used when painting on byte images. */
float dither;
/** Display texture interpolation method. */
int interp;
char _pad[4];
} ImagePaintSettings;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Paint Mode Settings
* \{ */
typedef struct PaintModeSettings {
/** Source to select canvas from to paint on (#ePaintCanvasSource). */
char canvas_source;
char _pad[7];
/** Selected image when canvas_source=PAINT_CANVAS_SOURCE_IMAGE. */
Image *canvas_image;
ImageUser image_user;
} PaintModeSettings;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Particle Edit
* \{ */
/** Settings for a Particle Editing Brush. */
typedef struct ParticleBrushData {
/** Common setting. */
short size;
/** For specific brushes only. */
short step, invert, count;
int flag;
float strength;
} ParticleBrushData;
/** Particle Edit Mode Settings. */
typedef struct ParticleEditSettings {
short flag;
short totrekey;
short totaddkey;
short brushtype;
ParticleBrushData brush[7];
/** Runtime. */
void *paintcursor;
float emitterdist;
char _pad0[4];
int selectmode;
int edittype;
int draw_step, fade_frames;
struct Scene *scene;
struct Object *object;
struct Object *shape_object;
} ParticleEditSettings;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Sculpt
* \{ */
/** Sculpt. */
typedef struct Sculpt {
Paint paint;
/** For rotating around a pivot point. */
// float pivot[3]; XXX not used?
int flags;
/** Transform tool. */
int transform_mode;
int automasking_flags;
// /* Control tablet input. */
// char tablet_size, tablet_strength; XXX not used?
int radial_symm[3];
/** Maximum edge length for dynamic topology sculpting (in pixels). */
float detail_size;
/** Direction used for `SCULPT_OT_symmetrize` operator. */
int symmetrize_direction;
/** Gravity factor for sculpting. */
float gravity_factor;
/* Scale for constant detail size. */
/** Constant detail resolution (Blender unit / constant_detail). */
float constant_detail;
float detail_percent;
int automasking_cavity_blur_steps;
float automasking_cavity_factor;
char _pad[4];
float automasking_start_normal_limit, automasking_start_normal_falloff;
float automasking_view_normal_limit, automasking_view_normal_falloff;
struct CurveMapping *automasking_cavity_curve;
/** For use by operators. */
struct CurveMapping *automasking_cavity_curve_op;
struct Object *gravity_object;
} Sculpt;
typedef struct CurvesSculpt {
Paint paint;
} CurvesSculpt;
typedef struct UvSculpt {
Paint paint;
} UvSculpt;
/** Grease pencil drawing brushes. */
typedef struct GpPaint {
Paint paint;
int flag;
/** Mode of paint (Materials or Vertex Color). */
int mode;
} GpPaint;
/** #GpPaint::flag */
enum {
GPPAINT_FLAG_USE_MATERIAL = 0,
GPPAINT_FLAG_USE_VERTEXCOLOR = 1,
};
/** Grease pencil vertex paint. */
typedef struct GpVertexPaint {
Paint paint;
int flag;
char _pad[4];
} GpVertexPaint;
/** Grease pencil sculpt paint. */
typedef struct GpSculptPaint {
Paint paint;
int flag;
char _pad[4];
} GpSculptPaint;
/** Grease pencil weight paint. */
typedef struct GpWeightPaint {
Paint paint;
int flag;
char _pad[4];
} GpWeightPaint;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Vertex Paint
* \{ */
/** Vertex Paint. */
typedef struct VPaint {
Paint paint;
char flag;
char _pad[3];
/** For mirrored painting. */
int radial_symm[3];
} VPaint;
/** #VPaint::flag */
enum {
/** Weight paint only. */
VP_FLAG_VGROUP_RESTRICT = (1 << 7),
};
/** \} */
/* -------------------------------------------------------------------- */
/** \name Grease-Pencil Stroke Sculpting
* \{ */
/** #GP_Sculpt_Settings::lock_axis */
typedef enum eGP_Lockaxis_Types {
GP_LOCKAXIS_VIEW = 0,
GP_LOCKAXIS_X = 1,
GP_LOCKAXIS_Y = 2,
GP_LOCKAXIS_Z = 3,
GP_LOCKAXIS_CURSOR = 4,
} eGP_Lockaxis_Types;
/** Settings for a GPencil Speed Guide. */
typedef struct GP_Sculpt_Guide {
char use_guide;
char use_snapping;
char reference_point;
char type;
char _pad2[4];
float angle;
float angle_snap;
float spacing;
float location[3];
struct Object *reference_object;
} GP_Sculpt_Guide;
/** GPencil Stroke Sculpting Settings. */
typedef struct GP_Sculpt_Settings {
/** Runtime. */
void *paintcursor;
/** #eGP_Sculpt_SettingsFlag. */
int flag;
/** #eGP_Lockaxis_Types lock drawing to one axis. */
int lock_axis;
/** Threshold for intersections. */
float isect_threshold;
char _pad[4];
/** Multi-frame edit falloff effect by frame. */
struct CurveMapping *cur_falloff;
/** Curve used for primitive tools. */
struct CurveMapping *cur_primitive;
/** Guides used for paint tools. */
struct GP_Sculpt_Guide guide;
} GP_Sculpt_Settings;
/** #GP_Sculpt_Settings::flag */
typedef enum eGP_Sculpt_SettingsFlag {
/** Enable falloff for multi-frame editing. */
GP_SCULPT_SETT_FLAG_FRAME_FALLOFF = (1 << 0),
/** Apply primitive curve. */
GP_SCULPT_SETT_FLAG_PRIMITIVE_CURVE = (1 << 1),
/** Scale thickness. */
GP_SCULPT_SETT_FLAG_SCALE_THICKNESS = (1 << 3),
/** Stroke Auto-Masking for sculpt. */
GP_SCULPT_SETT_FLAG_AUTOMASK_STROKE = (1 << 4),
/** Stroke Layer Auto-Masking for sculpt. */
GP_SCULPT_SETT_FLAG_AUTOMASK_LAYER_STROKE = (1 << 5),
/** Stroke Material Auto-Masking for sculpt. */
GP_SCULPT_SETT_FLAG_AUTOMASK_MATERIAL_STROKE = (1 << 6),
/** Active Layer Auto-Masking for sculpt. */
GP_SCULPT_SETT_FLAG_AUTOMASK_LAYER_ACTIVE = (1 << 7),
/** Active Material Auto-Masking for sculpt. */
GP_SCULPT_SETT_FLAG_AUTOMASK_MATERIAL_ACTIVE = (1 << 8),
} eGP_Sculpt_SettingsFlag;
/** #GP_Sculpt_Settings::gpencil_selectmode_sculpt */
typedef enum eGP_Sculpt_SelectMaskFlag {
/** Only affect selected points. */
GP_SCULPT_MASK_SELECTMODE_POINT = (1 << 0),
/** Only affect selected strokes. */
GP_SCULPT_MASK_SELECTMODE_STROKE = (1 << 1),
/** Only affect selected segments. */
GP_SCULPT_MASK_SELECTMODE_SEGMENT = (1 << 2),
} eGP_Sculpt_SelectMaskFlag;
/** #GP_Sculpt_Settings::gpencil_selectmode_vertex */
typedef enum eGP_vertex_SelectMaskFlag {
/** Only affect selected points. */
GP_VERTEX_MASK_SELECTMODE_POINT = (1 << 0),
/** Only affect selected strokes. */
GP_VERTEX_MASK_SELECTMODE_STROKE = (1 << 1),
/** Only affect selected segments. */
GP_VERTEX_MASK_SELECTMODE_SEGMENT = (1 << 2),
} eGP_Vertex_SelectMaskFlag;
/** Settings for GP Interpolation Operators. */
typedef struct GP_Interpolate_Settings {
/** Custom interpolation curve (for use with GP_IPO_CURVEMAP). */
struct CurveMapping *custom_ipo;
} GP_Interpolate_Settings;
/** #GP_Interpolate_Settings::flag */
typedef enum eGP_Interpolate_SettingsFlag {
/** Apply interpolation to all layers. */
GP_TOOLFLAG_INTERPOLATE_ALL_LAYERS = (1 << 0),
/** Apply interpolation to only selected. */
GP_TOOLFLAG_INTERPOLATE_ONLY_SELECTED = (1 << 1),
/** Exclude breakdown keyframe type as extreme. */
GP_TOOLFLAG_INTERPOLATE_EXCLUDE_BREAKDOWNS = (1 << 2),
} eGP_Interpolate_SettingsFlag;
/** #GP_Interpolate_Settings::type */
typedef enum eGP_Interpolate_Type {
/** Traditional Linear Interpolation. */
GP_IPO_LINEAR = 0,
/** CurveMap Defined Interpolation. */
GP_IPO_CURVEMAP = 1,
/* Easing Equations. */
GP_IPO_BACK = 3,
GP_IPO_BOUNCE = 4,
GP_IPO_CIRC = 5,
GP_IPO_CUBIC = 6,
GP_IPO_ELASTIC = 7,
GP_IPO_EXPO = 8,
GP_IPO_QUAD = 9,
GP_IPO_QUART = 10,
GP_IPO_QUINT = 11,
GP_IPO_SINE = 12,
} eGP_Interpolate_Type;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Unified Paint Settings
* \{ */
/**
* These settings can override the equivalent fields in the active
 * Brush for any paint mode; the `flag` field controls whether these
 * values are used. (A usage sketch follows #eUnifiedPaintSettingsFlags below.)
*/
typedef struct UnifiedPaintSettings {
/** Unified radius of brush in pixels. */
int size;
/** Unified radius of brush in Blender units. */
float unprojected_radius;
/** Unified strength of brush. */
float alpha;
/** Unified brush weight, [0, 1]. */
float weight;
/** Unified brush color. */
float rgb[3];
/** Unified brush secondary color. */
float secondary_rgb[3];
/** User preferences for sculpt and paint. */
int flag;
/* Rake rotation. */
/** Record movement of mouse so that rake can start at an intuitive angle. */
float last_rake[2];
float last_rake_angle;
int last_stroke_valid;
float average_stroke_accum[3];
int average_stroke_counter;
float brush_rotation;
float brush_rotation_sec;
/*******************************************************************************
* all data below are used to communicate with cursor drawing and tex sampling *
*******************************************************************************/
int anchored_size;
/**
* Normalization factor due to accumulated value of curve along spacing.
* Calculated when brush spacing changes to dampen strength of stroke
* if space attenuation is used.
*/
float overlap_factor;
char draw_inverted;
/** Check is there an ongoing stroke right now. */
char stroke_active;
char draw_anchored;
char do_linear_conversion;
/**
* Store last location of stroke or whether the mesh was hit.
* Valid only while stroke is active.
*/
float last_location[3];
int last_hit;
float anchored_initial_mouse[2];
/**
* Radius of brush, pre-multiplied with pressure.
* In case of anchored brushes contains the anchored radius.
*/
float pixel_radius;
float initial_pixel_radius;
float start_pixel_radius;
/** Drawing pressure. */
float size_pressure_value;
/** Position of mouse, used to sample the texture. */
float tex_mouse[2];
/** Position of mouse, used to sample the mask texture. */
float mask_tex_mouse[2];
/** ColorSpace cache to avoid locking up during sampling. */
struct ColorSpace *colorspace;
} UnifiedPaintSettings;
/** #UnifiedPaintSettings::flag */
typedef enum {
UNIFIED_PAINT_SIZE = (1 << 0),
UNIFIED_PAINT_ALPHA = (1 << 1),
UNIFIED_PAINT_WEIGHT = (1 << 5),
UNIFIED_PAINT_COLOR = (1 << 6),
/** Only used if unified size is enabled, mirrors the brush flag #BRUSH_LOCK_SIZE. */
UNIFIED_PAINT_BRUSH_LOCK_SIZE = (1 << 2),
UNIFIED_PAINT_FLAG_UNUSED_0 = (1 << 3),
UNIFIED_PAINT_FLAG_UNUSED_1 = (1 << 4),
} eUnifiedPaintSettingsFlags;
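/**
 * A minimal sketch (not part of the DNA definitions) of how the unified flags
 * gate between #UnifiedPaintSettings values and per-brush values; `brush_size`
 * stands in for the active brush's own radius, which lives in #Brush rather
 * than in this header.
 */
static inline int unified_paint_size_sketch(const UnifiedPaintSettings *ups, int brush_size)
{
  return (ups->flag & UNIFIED_PAINT_SIZE) ? ups->size : brush_size;
}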
typedef struct CurvePaintSettings {
char curve_type;
char flag;
char depth_mode;
char surface_plane;
char fit_method;
char _pad;
short error_threshold;
float radius_min, radius_max;
float radius_taper_start, radius_taper_end;
float surface_offset;
float corner_angle;
} CurvePaintSettings;
/** #CurvePaintSettings::flag */
enum {
CURVE_PAINT_FLAG_CORNERS_DETECT = (1 << 0),
CURVE_PAINT_FLAG_PRESSURE_RADIUS = (1 << 1),
CURVE_PAINT_FLAG_DEPTH_STROKE_ENDPOINTS = (1 << 2),
CURVE_PAINT_FLAG_DEPTH_STROKE_OFFSET_ABS = (1 << 3),
};
/** #CurvePaintSettings::fit_method */
enum {
CURVE_PAINT_FIT_METHOD_REFIT = 0,
CURVE_PAINT_FIT_METHOD_SPLIT = 1,
};
/** #CurvePaintSettings::depth_mode */
enum {
CURVE_PAINT_PROJECT_CURSOR = 0,
CURVE_PAINT_PROJECT_SURFACE = 1,
};
/** #CurvePaintSettings::surface_plane */
enum {
CURVE_PAINT_SURFACE_PLANE_NORMAL_VIEW = 0,
CURVE_PAINT_SURFACE_PLANE_NORMAL_SURFACE = 1,
CURVE_PAINT_SURFACE_PLANE_VIEW = 2,
};
/** \} */
/* -------------------------------------------------------------------- */
/** \name Mesh Visualization
* \{ */
/** Stats for Meshes. */
typedef struct MeshStatVis {
char type;
char _pad1[2];
/* Overhang. */
char overhang_axis;
float overhang_min, overhang_max;
/* Thickness. */
float thickness_min, thickness_max;
char thickness_samples;
char _pad2[3];
/* Distort. */
float distort_min, distort_max;
/* Sharp. */
float sharp_min, sharp_max;
} MeshStatVis;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Sequencer Tool Settings
* \{ */
typedef struct SequencerToolSettings {
/** #eSeqImageFitMethod. */
int fit_method;
short snap_mode;
short snap_flag;
/** #eSeqOverlapMode. */
int overlap_mode;
/**
* When there are many snap points,
* 0-1 range corresponds to resolution from bound-box to all possible snap points.
*/
int snap_distance;
int pivot_point;
} SequencerToolSettings;
typedef enum eSeqOverlapMode {
SEQ_OVERLAP_EXPAND,
SEQ_OVERLAP_OVERWRITE,
SEQ_OVERLAP_SHUFFLE,
} eSeqOverlapMode;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Tool Settings
* \{ */
/** #CurvePaintSettings::surface_plane */
enum {
AUTO_MERGE = 1 << 0,
AUTO_MERGE_AND_SPLIT = 1 << 1,
};
typedef struct ToolSettings {
/** Vertex paint. */
VPaint *vpaint;
/** Weight paint. */
VPaint *wpaint;
Sculpt *sculpt;
/** Uv smooth. */
UvSculpt *uvsculpt;
/** Gpencil paint. */
GpPaint *gp_paint;
/** Gpencil vertex paint. */
GpVertexPaint *gp_vertexpaint;
/** Gpencil sculpt paint. */
GpSculptPaint *gp_sculptpaint;
/** Gpencil weight paint. */
GpWeightPaint *gp_weightpaint;
/** Curves sculpt. */
CurvesSculpt *curves_sculpt;
/** Vertex group weight - used only for editmode, not weight paint. */
float vgroup_weight;
/** Remove doubles limit. */
float doublimit;
char automerge;
char object_flag;
/** Selection Mode for Mesh. */
char selectmode;
/* UV Calculation. */
char unwrapper;
char uvcalc_flag;
char uv_flag;
char uv_selectmode;
char uv_sticky;
float uvcalc_margin;
/* Auto-IK. */
/** Runtime only. */
short autoik_chainlen;
/* Grease Pencil. */
/** Flags/options for how the tool works. */
char gpencil_flags;
/** Stroke placement settings: 3D View. */
char gpencil_v3d_align;
/** General 2D Editor. */
char gpencil_v2d_align;
char _pad0[2];
/* Annotations. */
/** Stroke placement settings - 3D View. */
char annotate_v3d_align;
/** Default stroke thickness for annotation strokes. */
short annotate_thickness;
/** Stroke selection mode for Edit. */
char gpencil_selectmode_edit;
/** Stroke selection mode for Sculpt. */
char gpencil_selectmode_sculpt;
/** Grease Pencil Sculpt. */
struct GP_Sculpt_Settings gp_sculpt;
/** Grease Pencil Interpolation Tool(s). */
struct GP_Interpolate_Settings gp_interpolate;
/** Image Paint (8 bytes aligned please!). */
struct ImagePaintSettings imapaint;
/** Settings for paint mode. */
struct PaintModeSettings paint_mode;
/** Particle Editing. */
struct ParticleEditSettings particle;
/** Transform Proportional Area of Effect. */
float proportional_size;
/** Select Group Threshold. */
float select_thresh;
/* Auto-Keying Mode. */
/** Defines in DNA_userdef_types.h. */
short autokey_flag;
char autokey_mode;
/** Keyframe type (see DNA_curve_types.h). */
char keyframe_type;
/** Multi-resolution meshes. */
char multires_subdiv_type;
/** Edge tagging, store operator settings (no UI access). */
char edge_mode;
char edge_mode_live_unwrap;
/* Transform. */
char transform_pivot_point;
char transform_flag;
/** Snap elements (per space-type), #eSnapMode. */
char snap_node_mode;
short snap_mode;
short snap_uv_mode;
short snap_anim_mode;
/** Generic flags (per space-type), #eSnapFlag. */
short snap_flag;
short snap_flag_node;
short snap_flag_seq;
short snap_flag_anim;
short snap_uv_flag;
char _pad[4];
/**
 * Default snap source, #eSnapSourceOP.
 *
 * TODO(@gfxcoder): Rename `snap_target` to `snap_source` to avoid previous ambiguity of
 * "target" (now, "source" is geometry to be moved and "target" is geometry to which moved
 * geometry is snapped).
 */
char snap_target;
/** Snap mask for transform modes, #eSnapTransformMode. */
char snap_transform_mode_flag;
/** Steps to break transformation into with face nearest snapping. */
short snap_face_nearest_steps;
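/*
 * Sketch of the documented stepping behavior (illustrative only; the snap helper below is
 * hypothetical): the total translation of a point is applied in `snap_face_nearest_steps`
 * fractions, snapping to the nearest surface after each fraction.
 *
 * \code{.c}
 * void translate_with_face_nearest_snap(float co[3], const float delta[3], int steps)
 * {
 *   float step_delta[3];
 *   mul_v3_v3fl(step_delta, delta, 1.0f / (float)steps);
 *   for (int i = 0; i < steps; i++) {
 *     add_v3_v3(co, step_delta);
 *     snap_point_to_nearest_surface(co); // Hypothetical nearest-surface snap call.
 *   }
 * }
 * \endcode
 */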
char proportional_edit, prop_mode;
/** Proportional edit, object mode. */
char proportional_objects;
/** Proportional edit, mask editing. */
char proportional_mask;
/** Proportional edit, action editor. */
char proportional_action;
/** Proportional edit, graph editor. */
char proportional_fcurve;
/** Lock marker editing. */
char lock_markers;
/** Auto normalizing mode in wpaint. */
char auto_normalize;
/** Present weights as if all locked vertex groups were
* deleted, and the remaining deform groups normalized. */
char wpaint_lock_relative;
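/*
 * Illustrative sketch only (assumes weights are kept normalized so that all deform groups sum
 * to 1.0): the lock-relative display weight of an unlocked group removes the locked portion
 * and re-normalizes the remainder.
 *
 * \code{.c}
 * static float wpaint_lock_relative_display_weight(float weight, float locked_sum)
 * {
 *   return (locked_sum < 1.0f) ? weight / (1.0f - locked_sum) : 1.0f;
 * }
 * \endcode
 */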
/** Paint multiple bones in wpaint. */
char multipaint;
char weightuser;
/** Subset selection filter in wpaint. */
char vgroupsubset;
/** Stroke selection mode for Vertex Paint. */
char gpencil_selectmode_vertex;
/* UV painting. */
char uv_sculpt_settings;
char uv_relax_method;
char workspace_tool_type;
/**
* XXX: these `sculpt_paint_*` fields are deprecated, use the
* unified_paint_settings field instead!
*/
short sculpt_paint_settings DNA_DEPRECATED;
int sculpt_paint_unified_size DNA_DEPRECATED;
float sculpt_paint_unified_unprojected_radius DNA_DEPRECATED;
float sculpt_paint_unified_alpha DNA_DEPRECATED;
/** Unified Paint Settings. */
struct UnifiedPaintSettings unified_paint_settings;
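/*
 * Usage sketch (illustrative only): brush parameters are typically read either from the brush
 * or from the unified settings, depending on the matching unified flag. Treat the exact flag
 * test below as an assumption.
 *
 * \code{.c}
 * static int paint_brush_size_get(const ToolSettings *ts, const struct Brush *brush)
 * {
 *   const UnifiedPaintSettings *ups = &ts->unified_paint_settings;
 *   return (ups->flag & UNIFIED_PAINT_SIZE) ? ups->size : brush->size;
 * }
 * \endcode
 */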
struct CurvePaintSettings curve_paint_settings;
struct MeshStatVis statvis;
/** Normal Editing. */
float normal_vector[3];
char _pad6[4];
/**
* Custom Curve Profile for bevel tool:
* Temporary until there is a proper preset system that stores the profiles or maybe stores
* entire bevel configurations.
*/
struct CurveProfile *custom_bevel_profile_preset;
struct SequencerToolSettings *sequencer_tool_settings;
short snap_mode_tools; /* If SCE_SNAP_TO_NONE, use #ToolSettings::snap_mode. #eSnapMode. */
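/*
 * Sketch of the fallback described above (illustrative only): a tool's own snap mode is used
 * unless it is #SCE_SNAP_TO_NONE, in which case the global transform snap mode applies.
 *
 * \code{.c}
 * static short tool_snap_mode_get(const ToolSettings *ts)
 * {
 *   return (ts->snap_mode_tools != SCE_SNAP_TO_NONE) ? ts->snap_mode_tools : ts->snap_mode;
 * }
 * \endcode
 */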
char plane_axis; /* X, Y or Z. */
char plane_depth; /* #eV3DPlaceDepth. */
char plane_orient; /* #eV3DPlaceOrient. */
char use_plane_axis_auto;
char _pad7[2];
} ToolSettings;
/** \} */
/* Assorted Scene Data. */
/* -------------------------------------------------------------------- */
/** \name Unit Settings
* \{ */
/** Display/Editing unit options for each scene. */
typedef struct UnitSettings {
/** Scale factor applied to lengths (maybe have other unit conversions?). */
float scale_length;
/** Imperial, metric etc. */
char system;
/** Not implemented as a proper unit system yet. */
char system_rotation;
short flag;
char length_unit;
char mass_unit;
char time_unit;
char temperature_unit;
char _pad[4];
} UnitSettings;
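/*
 * Minimal usage sketch (illustrative only; the helper name is hypothetical): `scale_length`
 * is the factor applied to lengths when converting Blender units to the scene's unit system.
 * Formatting values into unit strings is handled elsewhere (see BKE_unit.h).
 *
 * \code{.c}
 * static double length_blender_to_scene_units(const UnitSettings *unit, double value)
 * {
 *   return value * (double)unit->scale_length;
 * }
 * \endcode
 */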
/** \} */
/* -------------------------------------------------------------------- */
/** \name Global/Common Physics Settings
* \{ */
typedef struct PhysicsSettings {
float gravity[3];
int flag, quick_cache_step;
char _pad0[4];
} PhysicsSettings;
/**
* Safe Area options used in Camera View & Sequencer.
*/
typedef struct DisplaySafeAreas {
/* Each value represents the (x,y) margins as a multiplier.
* 'center' in this context is just the name for a different kind of safe-area. */
/** Title Safe. */
float title[2];
/** Image/Graphics Safe. */
float action[2];
/* Use for alternate aspect ratio. */
float title_center[2];
float action_center[2];
} DisplaySafeAreas;
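/*
 * Illustrative sketch of turning the margins into a rectangle. Assumption: each value is read
 * here as the total fraction of the frame trimmed along that axis, split evenly between both
 * sides; the exact convention used by the drawing code may differ.
 *
 * \code{.c}
 * static void safe_rect_from_margins(
 *     const float margin[2], float width, float height, float r_rect[4])
 * {
 *   const float inset_x = 0.5f * margin[0] * width;
 *   const float inset_y = 0.5f * margin[1] * height;
 *   r_rect[0] = inset_x;          // xmin
 *   r_rect[1] = inset_y;          // ymin
 *   r_rect[2] = width - inset_x;  // xmax
 *   r_rect[3] = height - inset_y; // ymax
 * }
 * \endcode
 */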
/**
 * Scene Display - used to store scene-specific display settings for the 3D view.
*/
typedef struct SceneDisplay {
/** Light direction for shadows/highlight. */
float light_direction[3];
float shadow_shift, shadow_focus;
/** Settings for Cavity Shader. */
float matcap_ssao_distance;
float matcap_ssao_attenuation;
int matcap_ssao_samples;
/** Method of AA for viewport rendering and image rendering. */
char viewport_aa;
char render_aa;
char _pad[6];
/** OpenGL render engine settings. */
View3DShading shading;
} SceneDisplay;
/**
* Ray-tracing parameters.
*/
typedef struct RaytraceEEVEE {
/** Higher values use shorter trace strides, giving less blurry intersections. */
float screen_trace_quality;
/** World-space thickness each surface is assumed to have during screen-space tracing. */
float screen_trace_thickness;
/** Maximum roughness before using horizon scan. */
float screen_trace_max_roughness;
/** Resolution downscale factor. */
int resolution_scale;
/** Maximum intensity a ray can have. */
float sample_clamp;
/** #RaytraceEEVEE_Flag. */
int flag;
/** #RaytraceEEVEE_DenoiseStages. */
int denoise_stages;
char _pad0[4];
} RaytraceEEVEE;
typedef struct SceneEEVEE {
int flag;
int gi_diffuse_bounces;
int gi_cubemap_resolution;
int gi_visibility_resolution;
float gi_irradiance_smoothing;
float gi_glossy_clamp;
float gi_filter_quality;
int gi_irradiance_pool_size;
float gi_cubemap_draw_size;
float gi_irradiance_draw_size;
int taa_samples;
int taa_render_samples;
int sss_samples;
float sss_jitter_threshold;
float ssr_quality;
float ssr_max_roughness;
float ssr_thickness;
float ssr_border_fade;
float ssr_firefly_fac;
float volumetric_start;
float volumetric_end;
int volumetric_tile_size;
int volumetric_samples;
float volumetric_sample_distribution;
float volumetric_light_clamp;
int volumetric_shadow_samples;
int volumetric_ray_depth;
float gtao_distance;
float gtao_factor;
float gtao_quality;
float gtao_thickness;
float gtao_focus;
float bokeh_overblur;
float bokeh_max_size;
float bokeh_threshold;
float bokeh_neighbor_max;
float bokeh_denoise_fac;
float bloom_color[3];
float bloom_threshold;
float bloom_knee;
float bloom_intensity;
float bloom_radius;
float bloom_clamp;
int motion_blur_samples DNA_DEPRECATED;
int motion_blur_max;
int motion_blur_steps;
int motion_blur_position;
float motion_blur_shutter;
float motion_blur_depth_scale;
int shadow_method DNA_DEPRECATED;
int shadow_cube_size;
int shadow_cascade_size;
int shadow_pool_size;
int shadow_ray_count;
int shadow_step_count;
float shadow_normal_bias;
int ray_split_settings;
int ray_tracing_method;
struct RaytraceEEVEE reflection_options;
struct RaytraceEEVEE refraction_options;
struct RaytraceEEVEE diffuse_options;
struct LightCache *light_cache DNA_DEPRECATED;
struct LightCache *light_cache_data;
/* Need a 128-byte string to fit translated versions of some messages. */
char light_cache_info[128];
float overscan;
float light_threshold;
} SceneEEVEE;
typedef struct SceneGpencil {
float smaa_threshold;
char _pad[4];
} SceneGpencil;
typedef struct SceneHydra {
int export_method;
int _pad0;
} SceneHydra;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Transform Orientation
* \{ */
typedef struct TransformOrientationSlot {
int type;
int index_custom;
char flag;
char _pad0[7];
} TransformOrientationSlot;
/** Indices when used in #Scene::orientation_slots. */
enum {
SCE_ORIENT_DEFAULT = 0,
SCE_ORIENT_TRANSLATE = 1,
SCE_ORIENT_ROTATE = 2,
SCE_ORIENT_SCALE = 3,
};
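/*
 * Usage sketch (illustrative only, given a `Scene *scene`): transform code reads the slot
 * matching the current transform mode and falls back to #SCE_ORIENT_DEFAULT when that slot
 * has no override. The `SELECT` flag test below is an assumption about how the override is
 * stored.
 *
 * \code{.c}
 * const TransformOrientationSlot *slot = &scene->orientation_slots[SCE_ORIENT_ROTATE];
 * if ((slot->flag & SELECT) == 0) {
 *   slot = &scene->orientation_slots[SCE_ORIENT_DEFAULT];
 * }
 * \endcode
 */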
/** \} */
/* -------------------------------------------------------------------- */
/** \name Scene ID-Block
* \{ */
typedef struct Scene {
ID id;
/** Animation data (must be immediately after id for utilities to use it). */
struct AnimData *adt;
/**
* Engines draw data, must be immediately after AnimData. See IdDdtTemplate and
* DRW_drawdatalist_from_id to understand this requirement.
*/
DrawDataList drawdata;
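/*
 * The fixed `id` / `adt` / `drawdata` layout above is what lets generic code treat any such
 * ID uniformly, roughly as sketched here (see #IdDdtTemplate in DNA_ID.h; the member access
 * is an assumption about that template's layout):
 *
 * \code{.c}
 * IdDdtTemplate *idt = (IdDdtTemplate *)scene; // Only valid because of the layout above.
 * DrawDataList *ddl = &idt->drawdata;
 * \endcode
 */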
struct Object *camera;
struct World *world;
struct Scene *set;
ListBase base DNA_DEPRECATED;
/** Active base. */
struct Base *basact DNA_DEPRECATED;
void *_pad1;
/** 3d cursor location. */
View3DCursor cursor;
/** Bit-flags for layer visibility (deprecated). */
unsigned int lay DNA_DEPRECATED;
/** Active layer (deprecated). */
int layact DNA_DEPRECATED;
char _pad2[4];
/** Various settings. */
short flag;
char use_nodes;
char _pad3[1];
struct bNodeTree *nodetree;
/** Sequence editor data is allocated here. */
struct Editing *ed;
/** Default allocated now. */
struct ToolSettings *toolsettings;
void *_pad4;
struct DisplaySafeAreas safe_areas;
/* Migrate or replace? Depends on some internal things... */
/* No, it is in the right place (ton). */
struct RenderData r;
struct AudioData audio;
ListBase markers;
ListBase transform_spaces;
/** Slots in order: [default (scene), translate, rotate, scale], see #SCE_ORIENT_* indices. */
TransformOrientationSlot orientation_slots[4];
void *sound_scene;
void *playback_handle;
void *sound_scrub_handle;
void *speaker_handles;
/** (runtime) info/cache used for presenting playback frame-rate info to the user. */
void *fps_info;
/** None of the dependency graph vars is meant to be saved. */
struct GHash *depsgraph_hash;
char _pad7[4];
/* User-Defined KeyingSets. */
/**
 * Index of the active KeyingSet:
 * the first KeyingSet has index 1, 0 means none is active, and -1 means 'add new'.
 */
int active_keyingset;
/** KeyingSets for this scene. */
ListBase keyingsets;
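/*
 * Lookup sketch for the 1-based `active_keyingset` convention documented above (illustrative
 * only, given a `Scene *scene`; #BLI_findlink is declared in BLI_listbase.h):
 *
 * \code{.c}
 * struct KeyingSet *ks = (scene->active_keyingset > 0) ?
 *     (struct KeyingSet *)BLI_findlink(&scene->keyingsets, scene->active_keyingset - 1) :
 *     NULL;
 * \endcode
 */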
/* Units. */
struct UnitSettings unit;
/** Grease Pencil - Annotations. */
struct bGPdata *gpd;
/* Movie Tracking. */
/** Active movie clip. */
struct MovieClip *clip;
/** Physics simulation settings. */
struct PhysicsSettings physics_settings;
void *_pad8;
/**
* XXX: runtime flag for drawing, actually belongs in the window,
* only used by #BKE_object_handle_update()
*/
struct CustomData_MeshMasks customdata_mask;
/** XXX: same as above but for temp operator use (viewport renders). */
struct CustomData_MeshMasks customdata_mask_modal;
/* Color Management. */
ColorManagedViewSettings view_settings;
ColorManagedDisplaySettings display_settings;
ColorManagedColorspaceSettings sequencer_colorspace_settings;
/** RigidBody simulation world+settings. */
struct RigidBodyWorld *rigidbody_world;
struct PreviewImage *preview;
/** ViewLayer, defined in DNA_layer_types.h */
ListBase view_layers;
/** Not an actual data-block, but memory owned by scene. */
struct Collection *master_collection;
  /** Settings to be overridden by work-spaces. */
IDProperty *layer_properties;
/**
* Frame range used for simulations in geometry nodes by default, if SCE_CUSTOM_SIMULATION_RANGE
   * is set. Individual simulations can override this, however.
*/
int simulation_frame_start;
int simulation_frame_end;
struct SceneDisplay display;
struct SceneEEVEE eevee;
struct SceneGpencil grease_pencil_settings;
struct SceneHydra hydra;
} Scene;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Render Data Enum/Flags
* \{ */
/** #RenderData::flag. */
enum {
/** Use preview range. */
SCER_PRV_RANGE = 1 << 0,
SCER_LOCK_FRAME_SELECTION = 1 << 1,
/** Show/use sub-frames (for checking motion blur). */
SCER_SHOW_SUBFRAME = 1 << 3,
};
/** #RenderData::mode. */
enum {
R_MODE_UNUSED_0 = 1 << 0, /* dirty */
R_MODE_UNUSED_1 = 1 << 1, /* cleared */
R_MODE_UNUSED_2 = 1 << 2, /* cleared */
R_MODE_UNUSED_3 = 1 << 3, /* cleared */
R_MODE_UNUSED_4 = 1 << 4, /* cleared */
R_MODE_UNUSED_5 = 1 << 5, /* cleared */
R_MODE_UNUSED_6 = 1 << 6, /* cleared */
R_MODE_UNUSED_7 = 1 << 7, /* cleared */
R_MODE_UNUSED_8 = 1 << 8, /* cleared */
R_BORDER = 1 << 9,
R_MODE_UNUSED_10 = 1 << 10, /* cleared */
R_CROP = 1 << 11,
/** Disable camera switching: runtime (DURIAN_CAMERA_SWITCH) */
R_NO_CAMERA_SWITCH = 1 << 12,
R_MODE_UNUSED_13 = 1 << 13, /* cleared */
R_MBLUR = 1 << 14,
/* unified was here */
R_MODE_UNUSED_16 = 1 << 16, /* cleared */
R_MODE_UNUSED_17 = 1 << 17, /* cleared */
R_MODE_UNUSED_18 = 1 << 18, /* cleared */
R_MODE_UNUSED_19 = 1 << 19, /* cleared */
R_FIXED_THREADS = 1 << 19,
R_MODE_UNUSED_20 = 1 << 20, /* cleared */
R_MODE_UNUSED_21 = 1 << 21, /* cleared */
R_NO_OVERWRITE = 1 << 22, /* Skip existing files. */
R_TOUCH = 1 << 23, /* Touch files before rendering. */
R_SIMPLIFY = 1 << 24,
R_EDGE_FRS = 1 << 25, /* R_EDGE reserved for Freestyle */
R_PERSISTENT_DATA = 1 << 26, /* Keep data around for re-render. */
R_MODE_UNUSED_27 = 1 << 27, /* cleared */
};
/** #RenderData::seq_flag */
enum {
R_SEQ_UNUSED_0 = (1 << 0), /* cleared */
R_SEQ_UNUSED_1 = (1 << 1), /* cleared */
R_SEQ_UNUSED_2 = (1 << 2), /* cleared */
R_SEQ_UNUSED_3 = (1 << 3), /* cleared */
R_SEQ_UNUSED_4 = (1 << 4), /* cleared */
R_SEQ_OVERRIDE_SCENE_SETTINGS = (1 << 5),
};
/** #RenderData::filtertype (used for nodes) */
enum {
R_FILTER_BOX = 0,
R_FILTER_TENT = 1,
R_FILTER_QUAD = 2,
R_FILTER_CUBIC = 3,
R_FILTER_CATROM = 4,
R_FILTER_GAUSS = 5,
R_FILTER_MITCH = 6,
R_FILTER_FAST_GAUSS = 7,
};
/** #RenderData::scemode */
enum {
R_DOSEQ = 1 << 0,
R_BG_RENDER = 1 << 1,
/* Passepartout is camera option now, keep this for backward compatibility. */
R_PASSEPARTOUT = 1 << 2,
R_BUTS_PREVIEW = 1 << 3,
R_EXTENSION = 1 << 4,
R_MATNODE_PREVIEW = 1 << 5,
R_DOCOMP = 1 << 6,
R_COMP_CROP = 1 << 7,
R_SCEMODE_UNUSED_8 = 1 << 8, /* cleared */
R_SINGLE_LAYER = 1 << 9,
R_SCEMODE_UNUSED_10 = 1 << 10, /* cleared */
R_SCEMODE_UNUSED_11 = 1 << 11, /* cleared */
R_NO_IMAGE_LOAD = 1 << 12,
R_SCEMODE_UNUSED_13 = 1 << 13, /* cleared */
R_NO_FRAME_UPDATE = 1 << 14,
R_SCEMODE_UNUSED_15 = 1 << 15, /* cleared */
R_SCEMODE_UNUSED_16 = 1 << 16, /* cleared */
R_SCEMODE_UNUSED_17 = 1 << 17, /* cleared */
R_TEXNODE_PREVIEW = 1 << 18,
R_SCEMODE_UNUSED_19 = 1 << 19, /* cleared */
R_EXR_CACHE_FILE = 1 << 20,
R_MULTIVIEW = 1 << 21,
};
/** #RenderData::stamp */
enum {
R_STAMP_TIME = 1 << 0,
R_STAMP_FRAME = 1 << 1,
R_STAMP_DATE = 1 << 2,
R_STAMP_CAMERA = 1 << 3,
R_STAMP_SCENE = 1 << 4,
R_STAMP_NOTE = 1 << 5,
/** Draw in the image space. */
R_STAMP_DRAW = 1 << 6,
R_STAMP_MARKER = 1 << 7,
R_STAMP_FILENAME = 1 << 8,
R_STAMP_SEQSTRIP = 1 << 9,
R_STAMP_RENDERTIME = 1 << 10,
R_STAMP_CAMERALENS = 1 << 11,
R_STAMP_STRIPMETA = 1 << 12,
R_STAMP_MEMORY = 1 << 13,
R_STAMP_HIDE_LABELS = 1 << 14,
R_STAMP_FRAME_RANGE = 1 << 15,
R_STAMP_HOSTNAME = 1 << 16,
};
#define R_STAMP_ALL \
(R_STAMP_TIME | R_STAMP_FRAME | R_STAMP_DATE | R_STAMP_CAMERA | R_STAMP_SCENE | R_STAMP_NOTE | \
R_STAMP_MARKER | R_STAMP_FILENAME | R_STAMP_SEQSTRIP | R_STAMP_RENDERTIME | \
R_STAMP_CAMERALENS | R_STAMP_MEMORY | R_STAMP_HIDE_LABELS | R_STAMP_FRAME_RANGE | \
R_STAMP_HOSTNAME)
/** #RenderData::alphamode */
enum {
R_ADDSKY = 0,
R_ALPHAPREMUL = 1,
};
/** #RenderData::color_mgt_flag */
enum {
  /** Deprecated, should only be used in versioning code. */
R_COLOR_MANAGEMENT = (1 << 0),
R_COLOR_MANAGEMENT_UNUSED_1 = (1 << 1),
};
/* bake_mode: same as RE_BAKE_xxx defines. */
/** #RenderData::bake_flag */
enum {
R_BAKE_CLEAR = 1 << 0,
// R_BAKE_OSA = 1 << 1, /* Deprecated. */
R_BAKE_TO_ACTIVE = 1 << 2,
// R_BAKE_NORMALIZE = 1 << 3, /* Deprecated. */
R_BAKE_MULTIRES = 1 << 4,
R_BAKE_LORES_MESH = 1 << 5,
// R_BAKE_VCOL = 1 << 6, /* Deprecated. */
R_BAKE_USERSCALE = 1 << 7,
R_BAKE_CAGE = 1 << 8,
R_BAKE_SPLIT_MAT = 1 << 9,
R_BAKE_AUTO_NAME = 1 << 10,
};
/** #RenderData::bake_normal_space */
enum {
R_BAKE_SPACE_CAMERA = 0,
R_BAKE_SPACE_WORLD = 1,
R_BAKE_SPACE_OBJECT = 2,
R_BAKE_SPACE_TANGENT = 3,
};
/** #RenderData::line_thickness_mode */
enum {
R_LINE_THICKNESS_ABSOLUTE = 1,
R_LINE_THICKNESS_RELATIVE = 2,
};
/* Sequencer: #RenderData::seq_prev_type and #RenderData::seq_rend_type. */
/** #RenderData::engine (scene.cc) */
extern const char *RE_engine_id_BLENDER_EEVEE;
extern const char *RE_engine_id_BLENDER_EEVEE_NEXT;
extern const char *RE_engine_id_BLENDER_WORKBENCH;
extern const char *RE_engine_id_CYCLES;
/** \} */
/* -------------------------------------------------------------------- */
/** \name Scene Defines
* \{ */
/* Note that much higher max-frames give imprecise sub-frames, see: #46859. */
/* Current precision is 16 for the sub-frames closer to MAXFRAME. */
/* For general use. */
#define MAXFRAME 1048574
#define MAXFRAMEF 1048574.0f
#define MINFRAME 0
#define MINFRAMEF 0.0f
/** (Minimum frame number for current-frame). */
#define MINAFRAME -1048574
#define MINAFRAMEF -1048574.0f
/** \} */
/* -------------------------------------------------------------------- */
/** \name Scene Related Macros
* \{ */
#define BASE_VISIBLE(v3d, base) BKE_base_is_visible(v3d, base)
#define BASE_SELECTABLE(v3d, base) \
(BASE_VISIBLE(v3d, base) && \
((v3d == NULL) || (((1 << (base)->object->type) & (v3d)->object_type_exclude_select) == 0)) && \
(((base)->flag & BASE_SELECTABLE) != 0))
#define BASE_SELECTED(v3d, base) (BASE_VISIBLE(v3d, base) && (((base)->flag & BASE_SELECTED) != 0))
#define BASE_EDITABLE(v3d, base) \
(BASE_VISIBLE(v3d, base) && !ID_IS_LINKED((base)->object) && \
(!ID_IS_OVERRIDE_LIBRARY_REAL((base)->object) || \
((base)->object->id.override_library->flag & LIBOVERRIDE_FLAG_SYSTEM_DEFINED) == 0))
#define BASE_SELECTED_EDITABLE(v3d, base) \
(BASE_EDITABLE(v3d, base) && (((base)->flag & BASE_SELECTED) != 0))
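/* Usage sketch (illustrative only, not part of this header): the BASE_* macros are
 * typically evaluated per base while iterating a view layer, e.g. with LISTBASE_FOREACH
 * from BLI_listbase.h:
 *
 *   LISTBASE_FOREACH (Base *, base, &view_layer->object_bases) {
 *     if (BASE_SELECTED_EDITABLE(v3d, base)) {
 *       // `base->object` is visible, selected, not linked and not a system override.
 *     }
 *   }
 *
 * Here `v3d` is the active #View3D, or NULL when no 3D viewport is involved
 * (BASE_SELECTABLE checks for NULL explicitly). */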
/* deprecate this! */
#define OBEDIT_FROM_OBACT(ob) ((ob) ? (((ob)->mode & OB_MODE_EDIT) ? ob : NULL) : NULL)
#define OBPOSE_FROM_OBACT(ob) ((ob) ? (((ob)->mode & OB_MODE_POSE) ? ob : NULL) : NULL)
#define OBWEIGHTPAINT_FROM_OBACT(ob) \
((ob) ? (((ob)->mode & OB_MODE_WEIGHT_PAINT) ? ob : NULL) : NULL)
#define V3D_CAMERA_LOCAL(v3d) ((!(v3d)->scenelock && (v3d)->camera) ? (v3d)->camera : NULL)
#define V3D_CAMERA_SCENE(scene, v3d) \
((!(v3d)->scenelock && (v3d)->camera) ? (v3d)->camera : (scene)->camera)
#define PRVRANGEON (scene->r.flag & SCER_PRV_RANGE)
#define PSFRA ((PRVRANGEON) ? (scene->r.psfra) : (scene->r.sfra))
#define PEFRA ((PRVRANGEON) ? (scene->r.pefra) : (scene->r.efra))
#define FRA2TIME(a) ((((double)scene->r.frs_sec_base) * (double)(a)) / (double)scene->r.frs_sec)
#define TIME2FRA(a) ((((double)scene->r.frs_sec) * (double)(a)) / (double)scene->r.frs_sec_base)
#define FPS (((double)scene->r.frs_sec) / (double)scene->r.frs_sec_base)
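/* Worked example (illustrative only): an NTSC scene stores `frs_sec = 30` and
 * `frs_sec_base = 1.001`, so:
 *
 *   FPS              == 30 / 1.001          ~= 29.97
 *   FRA2TIME(48)     == (1.001 * 48) / 30   ~= 1.6016 seconds
 *   TIME2FRA(1.6016) ~= 48                  (inverse of FRA2TIME)
 *
 * PSFRA/PEFRA resolve to the preview range only while SCER_PRV_RANGE is set,
 * otherwise to the full scene range (`sfra`..`efra`). */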
/** \} */
/* -------------------------------------------------------------------- */
/** \name Scene Enum/Flags
* \{ */
/* Base.flag is in `DNA_object_types.h`. */
/** #ToolSettings::transform_flag */
enum {
SCE_XFORM_AXIS_ALIGN = (1 << 0),
SCE_XFORM_DATA_ORIGIN = (1 << 1),
SCE_XFORM_SKIP_CHILDREN = (1 << 2),
};
/** #ToolSettings::object_flag */
enum {
SCE_OBJECT_MODE_LOCK = (1 << 0),
};
/** #ToolSettings::workspace_tool_flag */
enum {
SCE_WORKSPACE_TOOL_FALLBACK = 0,
SCE_WORKSPACE_TOOL_DEFAULT = 1,
};
/** #ToolSettings::snap_flag */
typedef enum eSnapFlag {
SCE_SNAP = (1 << 0),
SCE_SNAP_ROTATE = (1 << 1),
SCE_SNAP_PEEL_OBJECT = (1 << 2),
// SCE_SNAP_PROJECT = (1 << 3), /* DEPRECATED, see #SCE_SNAP_INDIVIDUAL_PROJECT. */
/** Was `SCE_SNAP_NO_SELF`, but self should be active. */
SCE_SNAP_NOT_TO_ACTIVE = (1 << 4),
SCE_SNAP_ABS_GRID = (1 << 5),
  /* Same value with a different name, to make it easier to understand in time-based code. */
SCE_SNAP_ABS_TIME_STEP = (1 << 5),
SCE_SNAP_BACKFACE_CULLING = (1 << 6),
SCE_SNAP_KEEP_ON_SAME_OBJECT = (1 << 7),
/** see #eSnapTargetOP */
SCE_SNAP_TO_INCLUDE_EDITED = (1 << 8),
SCE_SNAP_TO_INCLUDE_NONEDITED = (1 << 9),
SCE_SNAP_TO_ONLY_SELECTABLE = (1 << 10),
} eSnapFlag;
ENUM_OPERATORS(eSnapFlag, SCE_SNAP_TO_ONLY_SELECTABLE)
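/* Illustrative sketch (assumes a valid `scene->toolsettings`): the flags combine
 * bitwise, and ENUM_OPERATORS keeps that well-typed for C++ users of this enum:
 *
 *   ToolSettings *ts = scene->toolsettings;
 *   ts->snap_flag |= SCE_SNAP | SCE_SNAP_BACKFACE_CULLING;
 *   if (ts->snap_flag & SCE_SNAP_ROTATE) {
 *     // Rotation alignment to the snap target is enabled.
 *   }
 */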
/** See #ToolSettings::snap_target (to be renamed `snap_source`) and #TransSnap.source_operation */
typedef enum eSnapSourceOP {
SCE_SNAP_SOURCE_CLOSEST = 0,
SCE_SNAP_SOURCE_CENTER = 1,
SCE_SNAP_SOURCE_MEDIAN = 2,
SCE_SNAP_SOURCE_ACTIVE = 3,
} eSnapSourceOP;
ENUM_OPERATORS(eSnapSourceOP, SCE_SNAP_SOURCE_ACTIVE)
/**
* #TransSnap::target_operation and #ToolSettings::snap_flag
* (#SCE_SNAP_NOT_TO_ACTIVE, #SCE_SNAP_TO_INCLUDE_EDITED, #SCE_SNAP_TO_INCLUDE_NONEDITED,
* #SCE_SNAP_TO_ONLY_SELECTABLE).
*/
typedef enum eSnapTargetOP {
SCE_SNAP_TARGET_ALL = 0,
SCE_SNAP_TARGET_NOT_SELECTED = (1 << 0),
SCE_SNAP_TARGET_NOT_ACTIVE = (1 << 1),
SCE_SNAP_TARGET_NOT_EDITED = (1 << 2),
SCE_SNAP_TARGET_ONLY_SELECTABLE = (1 << 3),
SCE_SNAP_TARGET_NOT_NONEDITED = (1 << 4),
} eSnapTargetOP;
ENUM_OPERATORS(eSnapTargetOP, SCE_SNAP_TARGET_NOT_NONEDITED)
/** #ToolSettings::snap_mode */
typedef enum eSnapMode {
SCE_SNAP_TO_NONE = 0,
/** #ToolSettings::snap_node_mode */
SCE_SNAP_TO_NODE_X = (1 << 0),
SCE_SNAP_TO_NODE_Y = (1 << 1),
/** #ToolSettings::snap_anim_mode */
SCE_SNAP_TO_FRAME = (1 << 0),
SCE_SNAP_TO_SECOND = (1 << 1),
SCE_SNAP_TO_MARKERS = (1 << 2),
/** #ToolSettings::snap_mode and #ToolSettings::snap_node_mode and #ToolSettings::snap_uv_mode */
SCE_SNAP_TO_POINT = (1 << 0),
SCE_SNAP_TO_EDGE_MIDPOINT = (1 << 1),
SCE_SNAP_TO_EDGE_ENDPOINT = (1 << 2),
SCE_SNAP_TO_EDGE_PERPENDICULAR = (1 << 3),
SCE_SNAP_TO_EDGE = (1 << 4),
SCE_SNAP_TO_FACE = (1 << 5),
SCE_SNAP_TO_VOLUME = (1 << 6),
SCE_SNAP_TO_GRID = (1 << 7),
SCE_SNAP_TO_INCREMENT = (1 << 8),
/** For snap individual elements. */
SCE_SNAP_INDIVIDUAL_NEAREST = (1 << 9),
SCE_SNAP_INDIVIDUAL_PROJECT = (1 << 10),
} eSnapMode;
/* Due to dependency conflicts with Cycles, header cannot directly include `BLI_utildefines.h`. */
/* TODO: move this macro to a more general place. */
#ifdef ENUM_OPERATORS
ENUM_OPERATORS(eSnapMode, SCE_SNAP_INDIVIDUAL_PROJECT)
#endif
#define SCE_SNAP_TO_VERTEX (SCE_SNAP_TO_POINT | SCE_SNAP_TO_EDGE_ENDPOINT)
#define SCE_SNAP_TO_GEOM \
(SCE_SNAP_TO_VERTEX | SCE_SNAP_TO_EDGE | SCE_SNAP_TO_FACE | SCE_SNAP_TO_EDGE_MIDPOINT | \
SCE_SNAP_TO_EDGE_PERPENDICULAR)
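/**
 * Illustrative sketch (not part of this header): #eSnapMode reuses the same
 * bit values for different #ToolSettings fields (`snap_mode`, `snap_node_mode`,
 * `snap_anim_mode`), so a value is only meaningful together with the field it
 * was read from. Because #SCE_SNAP_TO_VERTEX is a composite mask, vertex
 * snapping counts as enabled when either of its bits is set. The helper below
 * is hypothetical:
 *
 * \code
 * static bool snap_to_vertex_is_enabled(const ToolSettings *ts)
 * {
 *   return (ts->snap_mode & SCE_SNAP_TO_VERTEX) != 0;
 * }
 * \endcode
 */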
/** #SequencerToolSettings::snap_mode */
enum {
SEQ_SNAP_TO_STRIPS = 1 << 0,
SEQ_SNAP_TO_CURRENT_FRAME = 1 << 1,
SEQ_SNAP_TO_STRIP_HOLD = 1 << 2,
};
/** #SequencerToolSettings::snap_flag */
enum {
SEQ_SNAP_IGNORE_MUTED = 1 << 0,
SEQ_SNAP_IGNORE_SOUND = 1 << 1,
SEQ_SNAP_CURRENT_FRAME_TO_STRIPS = 1 << 2,
};
/** #ToolSettings::snap_transform_mode_flag */
typedef enum eSnapTransformMode {
SCE_SNAP_TRANSFORM_MODE_TRANSLATE = (1 << 0),
SCE_SNAP_TRANSFORM_MODE_ROTATE = (1 << 1),
SCE_SNAP_TRANSFORM_MODE_SCALE = (1 << 2),
} eSnapTransformMode;
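/**
 * Illustrative sketch (not part of this header): these bits are stored in
 * #ToolSettings::snap_transform_mode_flag, so each transform mode typically
 * tests its own bit before snapping. The helper below is hypothetical:
 *
 * \code
 * static bool snap_use_for_rotation(const ToolSettings *ts)
 * {
 *   return (ts->snap_transform_mode_flag & SCE_SNAP_TRANSFORM_MODE_ROTATE) != 0;
 * }
 * \endcode
 */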
/** #ToolSettings::selectmode */
enum {
SCE_SELECT_VERTEX = 1 << 0, /* for mesh */
SCE_SELECT_EDGE = 1 << 1,
SCE_SELECT_FACE = 1 << 2,
};
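/**
 * Illustrative sketch (not part of this header): the mesh select-mode flags
 * can be combined, so callers use bit-tests rather than equality checks.
 * The helper below is hypothetical:
 *
 * \code
 * static bool select_mode_uses_verts(const ToolSettings *ts)
 * {
 *   return (ts->selectmode & SCE_SELECT_VERTEX) != 0;
 * }
 * \endcode
 */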
/** #MeshStatVis::type */
enum {
SCE_STATVIS_OVERHANG = 0,
SCE_STATVIS_THICKNESS = 1,
SCE_STATVIS_INTERSECT = 2,
SCE_STATVIS_DISTORT = 3,
SCE_STATVIS_SHARP = 4,
};
/** #ParticleEditSettings::selectmode for particles */
enum {
SCE_SELECT_PATH = 1 << 0,
SCE_SELECT_POINT = 1 << 1,
SCE_SELECT_END = 1 << 2,
};
/** #ToolSettings::prop_mode (proportional falloff) */
enum {
PROP_SMOOTH = 0,
PROP_SPHERE = 1,
PROP_ROOT = 2,
PROP_SHARP = 3,
PROP_LIN = 4,
PROP_CONST = 5,
PROP_RANDOM = 6,
PROP_INVSQUARE = 7,
PROP_MODE_MAX = 8,
};
/** #ToolSettings::proportional_edit & similarly named members. */
enum {
PROP_EDIT_USE = (1 << 0),
PROP_EDIT_CONNECTED = (1 << 1),
PROP_EDIT_PROJECTED = (1 << 2),
};
/** #ToolSettings::weightuser */
enum {
OB_DRAW_GROUPUSER_NONE = 0,
OB_DRAW_GROUPUSER_ACTIVE = 1,
OB_DRAW_GROUPUSER_ALL = 2,
};
/* object_vgroup.cc */
#define WT_VGROUP_MASK_ALL \
((1 << WT_VGROUP_ACTIVE) | (1 << WT_VGROUP_BONE_SELECT) | (1 << WT_VGROUP_BONE_DEFORM) | \
(1 << WT_VGROUP_BONE_DEFORM_OFF) | (1 << WT_VGROUP_ALL))
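/**
 * Illustrative sketch (not part of this header): #WT_VGROUP_MASK_ALL turns
 * the #WT_VGROUP_* subset indices (declared elsewhere) into a bit-mask, so
 * membership of a given subset is tested by shifting its index, e.g.:
 *
 * \code
 * const bool bone_deform_in_mask = (WT_VGROUP_MASK_ALL & (1 << WT_VGROUP_BONE_DEFORM)) != 0;
 * \endcode
 */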
/** #Scene::flag */
enum {
SCE_DS_SELECTED = 1 << 0,
SCE_DS_COLLAPSED = 1 << 1,
SCE_NLA_EDIT_ON = 1 << 2,
SCE_FRAME_DROP = 1 << 3,
SCE_KEYS_NO_SELONLY = 1 << 4,
SCE_READFILE_LIBLINK_NEED_SETSCENE_CHECK = 1 << 5,
SCE_CUSTOM_SIMULATION_RANGE = 1 << 6,
};
/* Return flags for the BKE_scene_base_iter_next functions. */
enum {
// F_ERROR = -1, /* UNUSED. */
F_START = 0,
F_SCENE = 1,
F_DUPLI = 3,
};
/** #AudioData::flag */
enum {
AUDIO_MUTE = 1 << 0,
AUDIO_SYNC = 1 << 1,
AUDIO_SCRUB = 1 << 2,
AUDIO_VOLUME_ANIMATED = 1 << 3,
};
/** #FFMpegCodecData::flags */
enum {
#ifdef DNA_DEPRECATED_ALLOW
/* DEPRECATED: you can choose none as audio-codec now. */
FFMPEG_MULTIPLEX_AUDIO = (1 << 0),
#endif
FFMPEG_AUTOSPLIT_OUTPUT = (1 << 1),
FFMPEG_LOSSLESS_OUTPUT = (1 << 2),
FFMPEG_USE_MAX_B_FRAMES = (1 << 3),
};
/** #Paint::flags */
typedef enum ePaintFlags {
PAINT_SHOW_BRUSH = (1 << 0),
PAINT_FAST_NAVIGATE = (1 << 1),
PAINT_SHOW_BRUSH_ON_SURFACE = (1 << 2),
PAINT_USE_CAVITY_MASK = (1 << 3),
PAINT_SCULPT_DELAY_UPDATES = (1 << 4),
} ePaintFlags;
/**
* #Paint::symmetry_flags
* (for now just a duplicate of sculpt symmetry flags).
*/
typedef enum ePaintSymmetryFlags {
PAINT_SYMM_NONE = 0,
PAINT_SYMM_X = (1 << 0),
PAINT_SYMM_Y = (1 << 1),
PAINT_SYMM_Z = (1 << 2),
PAINT_SYMMETRY_FEATHER = (1 << 3),
PAINT_TILE_X = (1 << 4),
PAINT_TILE_Y = (1 << 5),
PAINT_TILE_Z = (1 << 6),
} ePaintSymmetryFlags;
ENUM_OPERATORS(ePaintSymmetryFlags, PAINT_TILE_Z);
#define PAINT_SYMM_AXIS_ALL (PAINT_SYMM_X | PAINT_SYMM_Y | PAINT_SYMM_Z)
#ifdef __cplusplus
inline ePaintSymmetryFlags operator++(ePaintSymmetryFlags &flags, int)
{
flags = ePaintSymmetryFlags(char(flags) + 1);
return flags;
}
#endif
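/**
 * Illustrative sketch (not part of this header, C++ only): the post-increment
 * above steps through raw flag combinations, which allows visiting every
 * mirror-axis combination from #PAINT_SYMM_NONE up to #PAINT_SYMM_AXIS_ALL:
 *
 * \code
 * for (ePaintSymmetryFlags symm = PAINT_SYMM_NONE; symm <= PAINT_SYMM_AXIS_ALL; symm++) {
 *   if (symm & PAINT_SYMM_X) {
 *     // Mirror the work done in this pass across the X axis.
 *   }
 * }
 * \endcode
 */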
/**
* #Sculpt::flags
* These can eventually be moved to paint flags?
*/
typedef enum eSculptFlags {
SCULPT_FLAG_UNUSED_0 = (1 << 0), /* cleared */
SCULPT_FLAG_UNUSED_1 = (1 << 1), /* cleared */
SCULPT_FLAG_UNUSED_2 = (1 << 2), /* cleared */
SCULPT_LOCK_X = (1 << 3),
SCULPT_LOCK_Y = (1 << 4),
SCULPT_LOCK_Z = (1 << 5),
SCULPT_FLAG_UNUSED_6 = (1 << 6), /* cleared */
SCULPT_FLAG_UNUSED_7 = (1 << 7), /* cleared */
SCULPT_ONLY_DEFORM = (1 << 8),
// SCULPT_SHOW_DIFFUSE = (1 << 9), /* deprecated */
/** If set, the mesh will be drawn with smooth-shading in dynamic-topology mode. */
SCULPT_FLAG_UNUSED_8 = (1 << 10), /* deprecated */
/** If set, dynamic-topology brushes will subdivide short edges. */
SCULPT_DYNTOPO_SUBDIVIDE = (1 << 12),
/** If set, dynamic-topology brushes will collapse short edges. */
SCULPT_DYNTOPO_COLLAPSE = (1 << 11),
/** If set, dynamic-topology detail size will be constant in object space. */
SCULPT_DYNTOPO_DETAIL_CONSTANT = (1 << 13),
SCULPT_DYNTOPO_DETAIL_BRUSH = (1 << 14),
/* unused = (1 << 15), */
SCULPT_DYNTOPO_DETAIL_MANUAL = (1 << 16),
} eSculptFlags;
/** #Sculpt::transform_mode */
typedef enum eSculptTransformMode {
SCULPT_TRANSFORM_MODE_ALL_VERTICES = 0,
SCULPT_TRANSFORM_MODE_RADIUS_ELASTIC = 1,
} eSculptTransformMode;
/** #PaintModeSettings::mode */
typedef enum ePaintCanvasSource {
/** Paint on the active node of the active material slot. */
PAINT_CANVAS_SOURCE_MATERIAL = 0,
/** Paint on a selected image. */
PAINT_CANVAS_SOURCE_IMAGE = 1,
/** Paint on the active color attribute (vertex color) layer. */
PAINT_CANVAS_SOURCE_COLOR_ATTRIBUTE = 2,
} ePaintCanvasSource;
/** #ImagePaintSettings::mode */
/* Defines to let old texture painting use the new enum. */
/* TODO(jbakker): rename usages. */
#define IMAGEPAINT_MODE_MATERIAL PAINT_CANVAS_SOURCE_MATERIAL
#define IMAGEPAINT_MODE_IMAGE PAINT_CANVAS_SOURCE_IMAGE
/** #ImagePaintSettings::interp */
enum {
IMAGEPAINT_INTERP_LINEAR = 0,
IMAGEPAINT_INTERP_CLOSEST = 1,
};
/** #ImagePaintSettings::flag */
enum {
IMAGEPAINT_DRAWING = 1 << 0,
// IMAGEPAINT_DRAW_TOOL = 1 << 1, /* Deprecated. */
// IMAGEPAINT_DRAW_TOOL_DRAWING = 1 << 2, /* Deprecated. */
};
/* Projection painting only. */
/** #ImagePaintSettings::flag */
enum {
IMAGEPAINT_PROJECT_XRAY = 1 << 4,
IMAGEPAINT_PROJECT_BACKFACE = 1 << 5,
IMAGEPAINT_PROJECT_FLAT = 1 << 6,
IMAGEPAINT_PROJECT_LAYER_CLONE = 1 << 7,
IMAGEPAINT_PROJECT_LAYER_STENCIL = 1 << 8,
IMAGEPAINT_PROJECT_LAYER_STENCIL_INV = 1 << 9,
};
/** #ImagePaintSettings::missing_data */
enum {
IMAGEPAINT_MISSING_UVS = 1 << 0,
IMAGEPAINT_MISSING_MATERIAL = 1 << 1,
IMAGEPAINT_MISSING_TEX = 1 << 2,
IMAGEPAINT_MISSING_STENCIL = 1 << 3,
};
/** #ToolSettings::uvcalc_flag */
enum {
UVCALC_FILLHOLES = 1 << 0,
/** Would call this UVCALC_ASPECT_CORRECT, except aspect correction should stay the default for old files (hence the inverted flag). */
UVCALC_NO_ASPECT_CORRECT = 1 << 1,
/** Adjust UVs while transforming with Vert or Edge Slide. */
UVCALC_TRANSFORM_CORRECT_SLIDE = 1 << 2,
/** Use mesh data after subsurf to compute UVs. */
UVCALC_USESUBSURF = 1 << 3,
/** Adjust UVs while transforming to avoid distortion */
UVCALC_TRANSFORM_CORRECT = 1 << 4,
/** Keep equal values merged while correcting custom-data. */
UVCALC_TRANSFORM_CORRECT_KEEP_CONNECTED = 1 << 5,
};
/** #ToolSettings::uv_flag */
enum {
UV_SYNC_SELECTION = 1,
UV_SHOW_SAME_IMAGE = 2,
};
/** #ToolSettings::uv_selectmode */
enum {
UV_SELECT_VERTEX = 1 << 0,
UV_SELECT_EDGE = 1 << 1,
UV_SELECT_FACE = 1 << 2,
UV_SELECT_ISLAND = 1 << 3,
};
/** #ToolSettings::uv_sticky */
enum {
SI_STICKY_LOC = 0,
SI_STICKY_DISABLE = 1,
SI_STICKY_VERTEX = 2,
};
/** #ToolSettings::gpencil_flags */
typedef enum eGPencil_Flags {
/** Enables multi-frame editing. */
GP_USE_MULTI_FRAME_EDITING = (1 << 0),
/** When creating new frames, the last frame gets used as the basis for the new one. */
GP_TOOL_FLAG_RETAIN_LAST = (1 << 1),
/** Add the strokes below all strokes in the layer. */
GP_TOOL_FLAG_PAINT_ONBACK = (1 << 2),
/** Show compact list of colors. */
GP_TOOL_FLAG_THUMBNAIL_LIST = (1 << 3),
/** Generate weight data for new strokes. */
GP_TOOL_FLAG_CREATE_WEIGHTS = (1 << 4),
/** Auto-merge with last stroke. */
GP_TOOL_FLAG_AUTOMERGE_STROKE = (1 << 5),
} eGPencil_Flags;
/** #Scene::r.simplify_gpencil */
typedef enum eGPencil_SimplifyFlags {
/** Simplify. */
SIMPLIFY_GPENCIL_ENABLE = (1 << 0),
/** Simplify on play. */
SIMPLIFY_GPENCIL_ON_PLAY = (1 << 1),
/** Simplify fill on viewport. */
SIMPLIFY_GPENCIL_FILL = (1 << 2),
/** Simplify modifier on viewport. */
SIMPLIFY_GPENCIL_MODIFIER = (1 << 3),
/** Simplify Shader FX. */
SIMPLIFY_GPENCIL_FX = (1 << 5),
/** Simplify layer tint. */
SIMPLIFY_GPENCIL_TINT = (1 << 7),
/** Simplify Anti-aliasing. */
SIMPLIFY_GPENCIL_AA = (1 << 8),
} eGPencil_SimplifyFlags;
/** `ToolSettings.gpencil_*_align` - Stroke Placement mode flags. */
typedef enum eGPencil_Placement_Flags {
/** New strokes are added in viewport/data space (i.e. not screen space). */
GP_PROJECT_VIEWSPACE = (1 << 0),
// /** Viewport space, but relative to render canvas (Sequencer Preview Only) */
// GP_PROJECT_CANVAS = (1 << 1), /* UNUSED */
/** Project into the screen's Z values. */
GP_PROJECT_DEPTH_VIEW = (1 << 2),
GP_PROJECT_DEPTH_STROKE = (1 << 3),
/** "Use Endpoints". */
GP_PROJECT_DEPTH_STROKE_ENDPOINTS = (1 << 4),
GP_PROJECT_CURSOR = (1 << 5),
GP_PROJECT_DEPTH_STROKE_FIRST = (1 << 6),
} eGPencil_Placement_Flags;
/** #ToolSettings::gpencil_selectmode */
typedef enum eGPencil_Selectmode_types {
GP_SELECTMODE_POINT = 0,
GP_SELECTMODE_STROKE = 1,
GP_SELECTMODE_SEGMENT = 2,
} eGPencil_Selectmode_types;
/** #ToolSettings::gpencil_guide_types */
typedef enum eGPencil_GuideTypes {
GP_GUIDE_CIRCULAR = 0,
GP_GUIDE_RADIAL = 1,
GP_GUIDE_PARALLEL = 2,
GP_GUIDE_GRID = 3,
GP_GUIDE_ISO = 4,
} eGPencil_GuideTypes;
/** #ToolSettings::gpencil_guide_references */
typedef enum eGPencil_Guide_Reference {
GP_GUIDE_REF_CURSOR = 0,
GP_GUIDE_REF_CUSTOM = 1,
GP_GUIDE_REF_OBJECT = 2,
} eGPencil_Guide_Reference;
/** #ToolSettings::particle flag */
enum {
PE_KEEP_LENGTHS = 1 << 0,
PE_LOCK_FIRST = 1 << 1,
PE_DEFLECT_EMITTER = 1 << 2,
PE_INTERPOLATE_ADDED = 1 << 3,
PE_DRAW_PART = 1 << 4,
PE_UNUSED_6 = 1 << 6, /* cleared */
PE_FADE_TIME = 1 << 7,
PE_AUTO_VELOCITY = 1 << 8,
};
/** #ParticleEditSettings::brushtype */
enum {
PE_BRUSH_NONE = -1,
PE_BRUSH_COMB = 0,
PE_BRUSH_CUT = 1,
PE_BRUSH_LENGTH = 2,
PE_BRUSH_PUFF = 3,
PE_BRUSH_ADD = 4,
PE_BRUSH_SMOOTH = 5,
PE_BRUSH_WEIGHT = 6,
};
/** #ParticleBrushData::flag */
enum {
PE_BRUSH_DATA_PUFF_VOLUME = 1 << 0,
};
/** #ParticleBrushData::edittype */
enum {
PE_TYPE_PARTICLES = 0,
PE_TYPE_SOFTBODY = 1,
PE_TYPE_CLOTH = 2,
};
/** #PhysicsSettings::flag */
enum {
PHYS_GLOBAL_GRAVITY = 1,
};
Unified effector functionality for particles, cloth and softbody * Unified scene wide gravity (currently in scene buttons) instead of each simulation having it's own gravity. * Weight parameters for all effectors and an effector group setting. * Every effector can use noise. * Most effectors have "shapes" point, plane, surface, every point. - "Point" is most like the old effectors and uses the effector location as the effector point. - "Plane" uses the closest point on effectors local xy-plane as the effector point. - "Surface" uses the closest point on an effector object's surface as the effector point. - "Every Point" uses every point in a mesh effector object as an effector point. - The falloff is calculated from this point, so for example with "surface" shape and "use only negative z axis" it's possible to apply force only "inside" the effector object. * Spherical effector is now renamed as "force" as it's no longer just spherical. * New effector parameter "flow", which makes the effector act as surrounding air velocity, so the resulting force is proportional to the velocity difference of the point and "air velocity". For example a wind field with flow=1.0 results in proper non-accelerating wind. * New effector fields "turbulence", which creates nice random flow paths, and "drag", which slows the points down. * Much improved vortex field. * Effectors can now effect particle rotation as well as location. * Use full, or only positive/negative z-axis to apply force (note. the z-axis is the surface normal in the case of effector shape "surface") * New "force field" submenu in add menu, which adds an empty with the chosen effector (curve object for corve guides). * Other dynamics should be quite easy to add to the effector system too if wanted. * "Unified" doesn't mean that force fields give the exact same results for particles, softbody & cloth, since their final effect depends on many external factors, like for example the surface area of the effected faces. Code changes * Subversion bump for correct handling of global gravity. * Separate ui py file for common dynamics stuff. * Particle settings updating is flushed with it's id through DAG_id_flush_update(..). Known issues * Curve guides don't yet have all ui buttons in place, but they should work none the less. * Hair dynamics don't yet respect force fields. Other changes * Particle emission defaults now to frames 1-200 with life of 50 frames to fill the whole default timeline. * Many particles drawing related crashes fixed. * Sometimes particles didn't update on first frame properly. * Hair with object/group visualization didn't work properly. * Memory leaks with PointCacheID lists (Genscher, remember to free pidlists after use :).
/* UnitSettings */
#define USER_UNIT_ADAPTIVE 0xFF
/** #UnitSettings::system */
enum {
USER_UNIT_NONE = 0,
USER_UNIT_METRIC = 1,
USER_UNIT_IMPERIAL = 2,
};
/** #UnitSettings::flag */
enum {
USER_UNIT_OPT_SPLIT = 1,
USER_UNIT_ROT_RADIANS = 2,
};
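/**
 * Illustrative sketch (not part of this header): a per-dimension unit value of
 * 0xFF (#USER_UNIT_ADAPTIVE) means "pick the most readable unit automatically".
 * The helper and the `length_unit` field usage below are assumptions:
 *
 * \code
 * static bool unit_length_is_adaptive(const UnitSettings *unit)
 * {
 *   return unit->length_unit == USER_UNIT_ADAPTIVE;
 * }
 * \endcode
 */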
/** #SceneEEVEE::flag */
enum {
// SCE_EEVEE_VOLUMETRIC_ENABLED = (1 << 0), /* Unused */
SCE_EEVEE_VOLUMETRIC_LIGHTS = (1 << 1),
SCE_EEVEE_VOLUMETRIC_SHADOWS = (1 << 2),
// SCE_EEVEE_VOLUMETRIC_COLORED = (1 << 3), /* Unused */
SCE_EEVEE_GTAO_ENABLED = (1 << 4),
SCE_EEVEE_GTAO_BENT_NORMALS = (1 << 5),
SCE_EEVEE_GTAO_BOUNCE = (1 << 6),
// SCE_EEVEE_DOF_ENABLED = (1 << 7), /* Moved to camera->dof.flag */
SCE_EEVEE_BLOOM_ENABLED = (1 << 8),
SCE_EEVEE_MOTION_BLUR_ENABLED = (1 << 9),
SCE_EEVEE_SHADOW_HIGH_BITDEPTH = (1 << 10),
SCE_EEVEE_TAA_REPROJECTION = (1 << 11),
// SCE_EEVEE_SSS_ENABLED = (1 << 12), /* Unused */
// SCE_EEVEE_SSS_SEPARATE_ALBEDO = (1 << 13), /* Unused */
SCE_EEVEE_SSR_ENABLED = (1 << 14),
SCE_EEVEE_SSR_REFRACTION = (1 << 15),
SCE_EEVEE_SSR_HALF_RESOLUTION = (1 << 16),
SCE_EEVEE_SHOW_IRRADIANCE = (1 << 17),
SCE_EEVEE_SHOW_CUBEMAPS = (1 << 18),
SCE_EEVEE_GI_AUTOBAKE = (1 << 19),
SCE_EEVEE_SHADOW_SOFT = (1 << 20),
SCE_EEVEE_OVERSCAN = (1 << 21),
SCE_EEVEE_DOF_HQ_SLIGHT_FOCUS = (1 << 22),
SCE_EEVEE_DOF_JITTER = (1 << 23),
SCE_EEVEE_SHADOW_ENABLED = (1 << 24),
SCE_EEVEE_RAYTRACE_OPTIONS_SPLIT = (1 << 25),
};
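/* Usage sketch (illustrative only): these bits are assumed to be stored in #SceneEEVEE::flag,
 * so engine code would typically test them with a bitwise AND, e.g.
 *
 *   const bool use_dof_jitter = (scene->eevee.flag & SCE_EEVEE_DOF_JITTER) != 0;
 *
 * Variable names above are hypothetical; the real call sites live in the EEVEE engine code. */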
typedef enum RaytraceEEVEE_Flag {
RAYTRACE_EEVEE_USE_DENOISE = (1 << 0),
} RaytraceEEVEE_Flag;
typedef enum RaytraceEEVEE_DenoiseStages {
RAYTRACE_EEVEE_DENOISE_SPATIAL = (1 << 0),
RAYTRACE_EEVEE_DENOISE_TEMPORAL = (1 << 1),
RAYTRACE_EEVEE_DENOISE_BILATERAL = (1 << 2),
} RaytraceEEVEE_DenoiseStages;
typedef enum RaytraceEEVEE_Method {
RAYTRACE_EEVEE_METHOD_NONE = 0,
RAYTRACE_EEVEE_METHOD_SCREEN = 1,
/* TODO(fclem): Hardware ray-tracing. */
// RAYTRACE_EEVEE_METHOD_HARDWARE = 2,
} RaytraceEEVEE_Method;
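/* Usage sketch (illustrative only): assuming the ray-tracing options struct exposes `flag` and
 * `denoise_stages` members holding the bits above, a denoise stage would be gated like:
 *
 *   if ((options->flag & RAYTRACE_EEVEE_USE_DENOISE) &&
 *       (options->denoise_stages & RAYTRACE_EEVEE_DENOISE_TEMPORAL)) {
 *     ... run the temporal denoise stage ...
 *   }
 *
 * Member names here are assumptions for the example, not part of this header. */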
/** #SceneEEVEE::shadow_method */
enum {
SHADOW_ESM = 1,
/* SHADOW_VSM = 2, */ /* UNUSED */
/* SHADOW_METHOD_MAX = 3, */ /* UNUSED */
};
/** #SceneEEVEE::motion_blur_position */
enum {
SCE_EEVEE_MB_CENTER = 0,
SCE_EEVEE_MB_START = 1,
SCE_EEVEE_MB_END = 2,
};
/** #SceneDisplay->render_aa and #SceneDisplay->viewport_aa */
enum {
SCE_DISPLAY_AA_OFF = 0,
SCE_DISPLAY_AA_FXAA = 1,
SCE_DISPLAY_AA_SAMPLES_5 = 5,
SCE_DISPLAY_AA_SAMPLES_8 = 8,
SCE_DISPLAY_AA_SAMPLES_11 = 11,
SCE_DISPLAY_AA_SAMPLES_16 = 16,
SCE_DISPLAY_AA_SAMPLES_32 = 32,
};
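/* Note (sketch): the `SCE_DISPLAY_AA_SAMPLES_*` values double as the sample count, so viewport
 * code can usually treat any value above `SCE_DISPLAY_AA_FXAA` as the number of AA samples:
 *
 *   const int samples = (aa_mode > SCE_DISPLAY_AA_FXAA) ? aa_mode : 1;
 *
 * where `aa_mode` stands for #SceneDisplay->viewport_aa or #SceneDisplay->render_aa. */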
/** #SceneHydra->export_method */
enum {
SCE_HYDRA_EXPORT_HYDRA = 0,
SCE_HYDRA_EXPORT_USD = 1,
};
/** \} */