Even though the brush rotation is computed as a 2D angle (based on the mouse
movement), it currently gets applied by rotating the projected X direction
around the normal in 3D.
This patch ensures that rotation gets applied first, and only then does the
motion direction get projected into the tangent plane. A potential caveat is
that the random perturbations are also applied in 2D, but per discussions with
@JulienKaspar and @Sergey this seems to be fine.
Also, there was an error where the location should probably be converted *to*
world coordinates.
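The ordering described above, rotate the 2D direction first and only then project it into the tangent plane, can be sketched roughly as follows. This is plain Python with hypothetical helpers in a simplified frame where the view plane is the XY plane; it is not the actual Blender code:

```python
import math

def rotate_2d(v, angle):
    """Rotate a 2D vector counter-clockwise by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def project_to_plane(v, n):
    """Project a 3D vector into the plane with unit normal `n`: v - (v . n) n."""
    d = sum(a * b for a, b in zip(v, n))
    return tuple(a - d * b for a, b in zip(v, n))

# Rotate the 2D mouse direction first, then lift it to 3D and project it
# into the tangent plane of the surface.
mouse_dir = (1.0, 0.0)
rotated = rotate_2d(mouse_dir, math.radians(90.0))
normal = (0.0, 0.0, 1.0)
tangent_dir = project_to_plane((rotated[0], rotated[1], 0.0), normal)
```

Because the rotation happens in 2D, the random perturbations mentioned above would be applied to `mouse_dir` before projection.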
All these changes seem to fix the issue described in #116418.
I also noticed some minor "inconsistencies" with how the rotation is applied:
For curve strokes, the direction of the curve corresponded to the upward
direction of the brush. For view plane, area plane mapping and anchored strokes,
the mouse motion indicated the downward direction of the brush.
For compatibility, I tried my best to enforce the latter convention throughout,
but I'm not fully confident there are no oversights.
Pull Request: https://projects.blender.org/blender/blender/pulls/116539
Along with regular fixes and improvements,
this removes a warning on startup for Linux, see:
https://github.com/kcat/openal-soft/issues/554
Some deprecated CMake variables were removed upstream and had to be renamed.
Note that the previous URL at http://openal-soft.org/
is no longer responsive, so the GitHub URL is used instead.
Ref !116877
Whenever a custom `filter_items` is used, everything regarding filtering
and sorting has to be implemented "by hand" here. This was simply not done
for the sorting, which is now added.
Pull Request: https://projects.blender.org/blender/blender/pulls/116629
Regression in [0] & [1]. Resolve by reverting most of [0],
restoring the original logic from 3.6.
The only significant changes kept are the use of selected UV faces
when called from the UV editor.
[0]: e0e3650495
[1]: 7b3e1cbb96
Even if related, they don't have the same performance
impact.
To avoid any performance hit, we replace the Diffuse
closure with a Subsurface closure for legacy EEVEE, and
use the subsurface closure only where needed for
EEVEE-Next, leveraging its random sampling.
This improves compatibility with Cycles, which
no longer modulates the subsurface radius.
This change is only present in EEVEE-Next.
This commit changes the principled BSDF code so that
it is easier to follow the flow of data.
For legacy EEVEE, the SSS switch is moved to a
`radius == -1` check.
Regression from [0] based on the incorrect assumption that X11's
Time was a uint64. Despite the `Time` type being 8 bytes,
the value wraps at UINT32_MAX.
Details:
- GHOST_SystemX11::m_start_time now uses CLOCK_MONOTONIC instead of
gettimeofday(..) since event times should always increase.
- Event timestamps now accumulate the uint32 rollover.
[0]: efef709ec7
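The rollover accumulation described above can be sketched as follows. This is an illustrative helper, not the actual GHOST code; class and method names are assumptions:

```python
UINT32_MAX = 0xFFFFFFFF

class TimestampAccumulator:
    """Turn wrapping 32-bit X11 event times into monotonically
    increasing 64-bit timestamps."""

    def __init__(self):
        self.last = 0
        self.offset = 0

    def convert(self, t32):
        # A smaller value than the previous one means the 32-bit clock wrapped.
        if t32 < self.last:
            self.offset += UINT32_MAX + 1
        self.last = t32
        return self.offset + t32
```

Note this assumes event times never decrease between events except at a wrap, which is why using `CLOCK_MONOTONIC` for the start time matters.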
Update the command line help message to reflect the actual image output
formats available. Remove mention of IRIZ and DDS, rename MPEG to
FFMPEG. HDR and TIFF are always valid now.
Pull Request: https://projects.blender.org/blender/blender/pulls/115987
Before 111e586424, the normals were written back to the mesh
from the values stored in the undo step. Now, due to caching of
normals and better const-correctness, this is not so simple or safe.
That still may be a nice optimization to apply in the future, but for
now the simplicity of just recalculating them is much more feasible.
Since the recalculation is localized to the brush action, performance
should be okay. And maybe normals won't have to be stored in undo steps
at all in the future, since they're derived data.
`set_default_remaining_node_outputs` didn't work because the mapping
between the original node group sockets and the lazy function outputs
wasn't set up during the construction of the node type, as done by the
bake node and others.
Instead of storing a boolean "update tag" for every vertex, just
recalculate the normals of all the faces and vertices in affected PBVH
nodes. This avoids the overhead of tracking position updates at the
vertex level, and simplifies the normal calculation, since we can
just calculate values for all the normals in a node.
The main way we gain performance is avoiding building a `VectorSet`
for all vertices and faces that need updates in the entire mesh. That
process had to be single threaded, and was a large bottleneck when many
vertices were affected at a time.
That `VectorSet` was necessary for thread-safe deduplication of work
though, because neighboring nodes can't calculate the normals of the
same faces or vertices at the same time. (Normals need to be calculated
for all faces connected to moved vertices and all vertices connected to
those faces.) In this PR, we only build a set of the *boundary* faces
and vertices. We calculate those in a separate step, which duplicates
work from the non-boundary calculations, but at least it's thread-safe.
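The interior/boundary split described above can be illustrated with toy data structures (this is a conceptual sketch, not the PBVH code; each "node" is just a list of faces, and a face is a tuple of vertex indices):

```python
def split_boundary(nodes):
    """Classify each node's faces as interior (vertices used by this node
    only) or boundary (sharing a vertex with another node)."""
    # Map each vertex to the set of node indices that reference it.
    vert_nodes = {}
    for i, faces in enumerate(nodes):
        for face in faces:
            for v in face:
                vert_nodes.setdefault(v, set()).add(i)

    interior = []
    boundary = set()
    for i, faces in enumerate(nodes):
        own = [f for f in faces if all(vert_nodes[v] == {i} for v in f)]
        interior.append(own)
        boundary.update(f for f in faces if f not in own)
    return interior, boundary

# Interior faces can then be recalculated per node in parallel; the (usually
# much smaller) boundary set is handled in one separate pass.
```

Two nodes sharing a vertex classify both touching faces as boundary, which is exactly the work that must be serialized or duplicated for thread safety.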
I measured normal recalculation timing when sculpting on a 16 million
vertex mesh. The removal of the `vert_bitmap` array also saves 16 MB
of memory.
| Nodes | Affected Vertices | Before (ms) | After (ms) |
| ----- | ----------------- | ----------- | ---------- |
| 4     | 15625             | 0.208       | 0.304      |
| 35    | 136281            | 2.98        | 0.988      |
| 117   | 457156            | 15.0        | 3.21       |
| 2455  | 9583782           | 566         | 84.0       |
Pull Request: https://projects.blender.org/blender/blender/pulls/116209
When a mesh light is shadow-linked to something with a specific
shader network, it was possible that the emission_sd_storage was
not big enough for sampling.
The shade_dedicated_light kernel only evaluates emission of either
a light or a mesh emitter and then resumes the regular path tracer,
so there is no need to store closures.
This change makes it so a smaller storage is used, and also
passes a flag to the shader evaluation function indicating that
closures are not to be stored.
Pull Request: https://projects.blender.org/blender/blender/pulls/116907
Code inside `IMB_transform` (which is pretty much only used inside the VSE
to translate/rotate/scale image or movie strips) was not
correctly mapping between pixel and texel spaces. This is similar
to e.g. GPU rasterization rules and has to do with whether a
coordinate refers to a pixel/texel "corner" or "center". It's a long
topic, but the short summary is:
- Coordinates refer to pixel/texel corner,
- Do sampling at pixel centers,
- Bilinear filter should use floor(x-0.5) and floor(x-0.5)+1 texels.
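The bilinear rule above can be shown in 1D (a hedged sketch, assuming clamped edge handling; the real `IMB_transform` code works in 2D and supports several filters):

```python
import math

def bilinear_1d(texels, x):
    """Sample a 1D texture at coordinate x, where texel corners sit at
    integers and texel centers at i + 0.5."""
    xf = x - 0.5                 # shift so centers land on integers
    i0 = math.floor(xf)          # floor(x - 0.5)
    i1 = i0 + 1                  # floor(x - 0.5) + 1
    t = xf - i0                  # interpolation weight

    def tex(i):
        # Clamp at the edges (an assumption for this sketch).
        return texels[min(max(i, 0), len(texels) - 1)]

    return tex(i0) * (1.0 - t) + tex(i1) * t
```

Sampling exactly at a texel center returns that texel's value, and sampling at a shared corner returns the midpoint of the two neighbors, which is the behavior the corner/center convention above is meant to guarantee.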
Also, there was a sign error introduced in the Subsampling 3x3 filter, in
commit b3fd169259.
After making the PR, I found out that this seems to fix #90785, #112923
and possibly some others.
Long explanation with lots of images is in the PR.
Pull Request: https://projects.blender.org/blender/blender/pulls/116628
This exposes the internal `ID.session_uuid` in the Python API as `ID.session_uid`.
The exposed name is not `session_uuid`, because it's not actually universally unique,
and we want to change the internal name too.
The reason for exposing this is to allow Python scripts to call operators that take the
session ID as input. A fair number of operators do this, as you can see when searching
for `WM_operator_properties_id_lookup`.
Pull Request: https://projects.blender.org/blender/blender/pulls/116888
As discussed in #105407, it can be useful to support returning
a fallback value specified by the user instead of failing the driver
if a driver variable cannot resolve its RNA path. This especially
applies to context variables referencing custom properties, since
when the object with the driver is linked into another scene, the
custom property may simply not exist there.
This patch adds an optional fallback value setting to properties
based on RNA path (including ordinary Single Property variables
due to shared code and similarity). When enabled, RNA path lookup
failures (including invalid array index) cause the fallback value
to be used instead of marking the driver invalid.
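The fallback behavior described above can be sketched with a dotted attribute path standing in for an RNA path (hypothetical helper, not the actual driver evaluation code):

```python
from types import SimpleNamespace

def resolve_with_fallback(data, path, fallback=None, use_fallback=False):
    """Resolve a dotted property path; on failure, return the fallback
    value if enabled, instead of failing outright.

    Returns (value, fallback_was_used) so callers can set a flag for the
    UI, mirroring the tracking flag described above.
    """
    try:
        value = data
        for part in path.split("."):
            value = getattr(value, part)
        return value, False
    except AttributeError:
        if use_fallback:
            return fallback, True
        raise  # without a fallback the driver would be marked invalid

# Stand-in for a data-block where a custom property is missing.
datablock = SimpleNamespace(location=SimpleNamespace(x=3.0))
value, used = resolve_with_fallback(datablock, "location.x")
fb_value, fb_used = resolve_with_fallback(
    datablock, "location.missing", fallback=1.5, use_fallback=True)
```

The second return value is what drives the red highlight on the path field while still letting the driver as a whole evaluate successfully.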
A flag is added to track when this happens for UI use. It is
also exposed to Python for lint-type scripts.
When the fallback value is used, the input field containing
the property RNA path that failed to resolve is highlighted in red
(identically to the case without a fallback), and the driver
can be included in the With Errors filter of the Drivers editor.
However, the channel name is not underlined in red, because
the driver as a whole evaluates successfully.
Pull Request: https://projects.blender.org/blender/blender/pulls/110135
In order to prepare for the introduction of fallbacks, refactor the
function to return its status as an enumeration value. Also, move
the index range check inside the function from Python-specific code.
F-Curves with a broken rna path are highlighted with a red underline
in the channel list of the animation editors. I think it makes sense
to also apply this to drivers that failed to evaluate and were disabled.
Otherwise, it is not apparent which drivers are broken without checking
every one manually, or applying the errors filter.
`armature.collections.move(x, y)` now retains the active collection. Due to
the move, the active collection index can change. The index is now updated
to account for this.
Fix the 'Show All' bone collection operator by making it operate on all
collections instead of just the roots. It also now ensures that all
ancestors of the soloed collection are shown (otherwise it would be the only
visible one, but since its parent is hidden, it would still not be
visible).
This also fixes a related issue, where calling the operator without
passing the `name` parameter would do nothing. Now it soloes the active
bone collection.
Ensure that the active bone collection doesn't change when adding a new
bone collection via low-level functions (for example
`armature.collections.new()` in Python).
The change to the active bone collection happened because adding a new
child may change the index of the active bone collection.
The graph editor supports an Extend transformation mode, which is
essentially Move, but only affecting keyframes either before or
after the current frame, based on the mouse cursor position.
This fixes a bug where this mode didn't respect the snap setting.
Converting a near-identity matrix to Euler angles could be off by over
4.5 degrees when the hypotenuse calculated from the
matrix was small (around 2e-6).
Resolve by increasing the epsilon ~20x to 3.75e-05, which results
in ~1/20th the error.
This was tested against many generated near-identity matrices
(which tend to cause arithmetic error) to ensure this change doesn't
cause worse results in other cases (see report for details).
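To show where the epsilon enters, here is a sketch of a standard XYZ Euler extraction guarded by the hypotenuse check. This is illustrative only; Blender's actual function (`mat3_normalized_to_eul2` and friends) may differ in axis conventions and matrix indexing:

```python
import math

EPSILON = 3.75e-05  # the raised epsilon from the fix above

def mat3_to_euler(m):
    """Extract XYZ Euler angles from a 3x3 rotation matrix `m`
    (nested lists, m[row][col] assumed here)."""
    cy = math.hypot(m[0][0], m[0][1])
    if cy > EPSILON:
        ex = math.atan2(m[1][2], m[2][2])
        ey = math.atan2(-m[0][2], cy)
        ez = math.atan2(m[0][1], m[0][0])
    else:
        # Gimbal-lock branch: cy is too small for the atan2 inputs above
        # to be numerically reliable, which is exactly the case the larger
        # epsilon is meant to catch for near-identity matrices.
        ex = math.atan2(-m[2][1], m[1][1])
        ey = math.atan2(-m[0][2], cy)
        ez = 0.0
    return (ex, ey, ez)
```

With a too-small epsilon, a near-identity matrix takes the first branch and the tiny, noise-dominated off-diagonal terms produce the multi-degree errors described above.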
The GPU Directional Blur node does not match the CPU. This is because the GPU
accumulates the scale, while the CPU increments it. This patch
increments the scale on the GPU to make them match.
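The accumulate-versus-increment difference can be shown schematically (this is an illustration of the two growth patterns, not the actual compositor code):

```python
def scales_accumulated(step, n):
    # Old GPU behavior (schematic): the scale compounds each iteration.
    scale, out = 1.0, []
    for _ in range(n):
        scale *= step
        out.append(scale)
    return out

def scales_incremented(step, n):
    # CPU behavior, now matched on the GPU: the scale grows by a fixed step.
    return [step * (i + 1) for i in range(n)]
```

The two sequences diverge after the first iteration, which is why the GPU and CPU results differed.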
Adds a `--weeks-ago` option to control which week the report
is generated for. In practice, people sometimes need to create reports
for a few weeks ago or for the current week.
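A minimal sketch of how such an option could look with `argparse` (the default value and date handling are assumptions, not the actual report script):

```python
import argparse
import datetime

parser = argparse.ArgumentParser(description="Weekly report generator")
parser.add_argument(
    "--weeks-ago",
    type=int,
    default=1,
    help="Generate the report for this many weeks ago (0 = current week).",
)
args = parser.parse_args(["--weeks-ago", "2"])

# Find the Monday that starts the requested week.
# A fixed date is used here so the example is deterministic.
today = datetime.date(2024, 1, 15)
monday = today - datetime.timedelta(days=today.weekday())
week_start = monday - datetime.timedelta(weeks=args.weeks_ago)
```

With `--weeks-ago 0` the same logic yields the current week, covering both use cases mentioned above.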