This is still far from perfect, but much better than what we had so far
(more consistent with the precision inherently available in floats).
Note that this fixes some (currently commented out) units unittests, and
requires adjusting some others; that will be done in the next commit.
* Numbers with units (especially angles) were not handled correctly regarding
the number of significant digits (spotted by @brecht in a T52222 comment,
thanks).
* A zero value has no valid log, we need to take that into account! See the
sketch below.
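A minimal sketch of the zero guard, assuming a significant-digits rounding
helper (the function name and shape are hypothetical, not Blender's actual
unit code):

```c
#include <math.h>

/* Hypothetical helper (not the actual Blender function): round `value`
 * to `digits` significant digits. log10() is undefined for zero, so that
 * case has to be handled explicitly before taking the log. */
static double round_to_significant_digits(double value, int digits)
{
  if (value == 0.0) {
    return 0.0; /* Zero has no valid log, early out. */
  }
  const double exponent = floor(log10(fabs(value)));
  const double factor = pow(10.0, exponent - (double)(digits - 1));
  return round(value / factor) * factor;
}
```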
Even strands that were excluded by the density texture were being added
to the DM passed to cloth, but these ended up having some invalid data,
because they were not fully constructed.
This simply excludes `UNEXISTED` particles from the DM generation, as
would be expected.
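A minimal sketch of the skip, with stand-in types and flag name (the real
code works on Blender's particle system data):

```c
enum { PARTICLE_UNEXIST = (1 << 0) }; /* stand-in for the real flag */

typedef struct Particle {
  int flag;
  /* ... hair keys, positions, etc. ... */
} Particle;

/* Particles flagged as unexisting, e.g. those excluded by the density
 * texture, are skipped entirely instead of being added to the derived
 * mesh half-constructed. */
static int count_particles_for_dm(const Particle *particles, int totpart)
{
  int num = 0;
  for (int i = 0; i < totpart; i++) {
    if (particles[i].flag & PARTICLE_UNEXIST) {
      continue; /* Never emit geometry for unexisting particles. */
    }
    num++;
  }
  return num;
}
```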
`defvert_array_find_weight_safe()` was confusing 'invalid vgroup' and
'valid but totally empty vgroup' cases.
Note that this also affected at least ShrinkWrap and SimpleDeform
modifiers.
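A hedged sketch of the disambiguation; the return values are illustrative of
the distinction rather than guaranteed to match what Blender settled on:

```c
#include <stddef.h>

typedef struct MDeformWeight { int def_nr; float weight; } MDeformWeight;
typedef struct MDeformVert { MDeformWeight *dw; int totweight; } MDeformVert;

/* An invalid vgroup index means "no vgroup at all", which must not be
 * treated the same as a valid group that simply stores no weight on this
 * vertex. */
static float find_weight_safe(const MDeformVert *dvert, int index, int defgroup)
{
  if (defgroup == -1) {
    return 1.0f; /* Invalid vgroup: behave as if no group was used. */
  }
  if (dvert == NULL) {
    return 0.0f; /* Valid but totally empty vgroup: weight is zero. */
  }
  const MDeformVert *dv = &dvert[index];
  for (int i = 0; i < dv->totweight; i++) {
    if (dv->dw[i].def_nr == defgroup) {
      return dv->dw[i].weight;
    }
  }
  return 0.0f;
}
```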
GCC now seems to detect uninitialized variables passed into function calls,
but then isn't always smart enough to see that they actually are initialized.
Disabling this warning entirely seems a bit too much, so initialize a bit
more for now.
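An illustrative (non-Blender) example of the pattern that trips
-Wmaybe-uninitialized:

```c
#include <stdbool.h>

static bool get_value(int *r_value)
{
  *r_value = 42; /* Always writes before returning true. */
  return true;
}

/* `value` is always written by get_value() before it is read, but GCC
 * cannot always prove that across the call and warns. Initializing at
 * declaration silences the false positive without disabling the warning
 * globally. */
int use_value(void)
{
  int value = 0; /* Redundant in practice, but keeps GCC quiet. */
  if (get_value(&value)) {
    return value;
  }
  return -1;
}
```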
This makes sense when we want to avoid float precision errors for
near-co-linear edges. OTOH, this is an arbitrary decision, so keep the
functions separate.
The issue was caused by a combination of the following factors:
- The clipboard cleanup function will pass the node tree as NULL to the node
  free function.
  This is fine on its own, we don't have a tree in the clipboard.
- The node free function will call the node storage cleanup only when there
  is a non-NULL node tree.
  This is somewhat weird, because the storage cleanup does not take the node
  tree as an argument.
So the solution here: move the node storage cleanup outside of the check that
the node tree is not NULL.
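A hedged sketch of the resulting shape (type and callback names are stand-ins
for the real node code):

```c
#include <stddef.h>

typedef struct bNodeTree bNodeTree;

typedef struct bNode {
  void *storage;
  void (*storagefree)(struct bNode *node);
} bNode;

static void node_free(bNodeTree *ntree, bNode *node)
{
  if (ntree != NULL) {
    /* Tree-level bookkeeping only (unlinking, update tags, ...). */
  }

  /* The clipboard legitimately frees nodes with ntree == NULL, so run the
   * storage cleanup unconditionally; it never needed the tree anyway. */
  if (node->storagefree) {
    node->storagefree(node);
  }
}
```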
imapaint's clone and canvas are refcounting Image usages.
And particle editsettings' object and scene seem to be pure runtime data
(they are reset to NULL in the read code), so reset them to NULL here as
well.
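A hedged sketch of the copy fix-ups; the stand-in structs and field names
below are trimmed-down assumptions recalled from DNA, not the real headers:

```c
#include <stddef.h>

typedef struct ID { int us; } ID;
typedef struct Image { ID id; } Image;
typedef struct ImagePaintSettings { Image *clone, *canvas; } ImagePaintSettings;
typedef struct ParticleEditSettings { void *scene, *object; } ParticleEditSettings;
typedef struct ToolSettings {
  ImagePaintSettings imapaint;
  ParticleEditSettings particle;
} ToolSettings;

static void id_us_plus(ID *id) { id->us++; }

/* After duplicating ToolSettings: count the new Image users, and NULL the
 * pure runtime pointers, just like the file read code does. */
static void toolsettings_copy_fixups(ToolSettings *ts)
{
  if (ts->imapaint.clone) {
    id_us_plus(&ts->imapaint.clone->id);
  }
  if (ts->imapaint.canvas) {
    id_us_plus(&ts->imapaint.canvas->id);
  }
  ts->particle.object = NULL;
  ts->particle.scene = NULL;
}
```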
Really, really, really need to get rid of this usercount handling everywhere;
hopefully the incoming ID copying rewrite will help sanitize that mess. But
the fix was needed for the 2.79 release!
The issue was caused by a combination of the following factors:
- The Blender Internal viewport render cannot distinguish which parts of the
  main database changed, so it does a full database re-sync when anything is
  tagged for an update.
  This way, if any NodeTree (including the compositor one) is changed, the
  Blender Internal viewport is tagged for a full render database update.
- With the old dependency graph, scene-level drivers are evaluated on every
  iteration of scene_update_tagged, even if nothing is tagged for an update.
  This causes compositor drivers to be evaluated quite often.
- Driver evaluation checks whether the value has changed, and if so it tags
  the corresponding ID type as updated (this is what was telling the viewport
  to do a render database update).
  This check was quite stupid: the current property value was checked against
  the one coming from the driver expression. This means that if the driver
  value is outside of the hard limit range of the property, the property
  would always be considered updated.
The fix is to compare the current property value against the clamped value
from the driver.
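A minimal sketch of that comparison, with illustrative names; only the
clamp-before-compare idea is taken from the fix description:

```c
#include <math.h>
#include <stdbool.h>

/* Clamp the driver result to the property's hard limits before comparing.
 * An out-of-range driver value can never match the already-clamped stored
 * value, so the old raw comparison tagged the ID as updated on every
 * single evaluation. */
static bool driver_value_changed(float current, float driver_value,
                                 float hardmin, float hardmax)
{
  const float clamped = fmaxf(hardmin, fminf(hardmax, driver_value));
  /* Compare against what will actually be written, not the raw result. */
  return current != clamped;
}
```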
Using an arbitrary face as the source of the UV data is mostly fine, as
vertices on seams will generally map to different parts of the texture
that have the same color.
This is regarding commit fed853ea78.
Last fix only accounted for direct changes to the RB settings, but
failed for, say, object transformations. This fix accounts for any
change that might invalidate the RB cache.
Fix 9cd6b03187 introduced a bug that
prevented simulation after a cache invalidation (for instance when
changing a setting after simulating). This fixes that.
This makes the last time (`ltime`) stored in the rigid body world (`rbw`)
only be updated once a simulation step actually occurs. This prevents
another simulation step from being solved unless the current time is
exactly one frame after the last cached frame, and thus prevents the
formation of gaps in the cache, such as seen in T50230.
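A hedged sketch of the guard; the struct and function are trimmed stand-ins
for the real rigid body code:

```c
typedef struct RigidBodyWorld { float ltime; } RigidBodyWorld;

/* A step is only solved, and `ltime` only advanced, when the requested
 * time is exactly one frame past the last cached one, so no gaps form. */
static void rigidbody_step(RigidBodyWorld *rbw, float ctime)
{
  if (ctime != rbw->ltime + 1.0f) {
    return; /* Not contiguous with the cache: do not solve or advance. */
  }
  /* ... advance the Bullet simulation one step here ... */
  rbw->ltime = ctime; /* Only updated once a step actually occurred. */
}
```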
Reviewers: mont29, sergey, angavrilov
Tags: #physics
Maniphest Tasks: T50230
Differential Revision: https://developer.blender.org/D2458
Now that some node types may have custom context, we need to handle that
in the (convoluted :| ) UI code of nodes as well.
Reported in T43295 by Gabriel Gazzán (@gab3d), thanks.
The fix is a bit ugly, but I cannot think of another solution for now; at
least this **should** not break anything else.
And now I'll go find myself a very remote, high and lonely mountain, climb
to its top, roar "I hate proxies!" a few times, and relax hearing the echoes...
This is especially annoying for exporters that use the mesh name, since it
broke any relation to the actual Mesh naming in the original Blend file.
Unfortunately, we cannot avoid the extra .xxx digits. ;)
Children were always getting at least one segment of fixed length...
Now fully hidden ones (zero length) get no segment at all.
Note that even very short ones keep getting one 'unit' length segment; would
rather avoid changing that at this point, given how complex child particles'
'length' can get with all kinds of modifiers... Think we can live with that
for now anyway.
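A hypothetical sketch (names invented) of the segment-count behavior
described above:

```c
static int child_segment_count(float length, int max_segments)
{
  if (length <= 0.0f) {
    return 0; /* Fully hidden child: no segment at all. */
  }
  const int num = (int)(length * (float)max_segments);
  return (num < 1) ? 1 : num; /* Very short children keep one segment. */
}
```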
The 'Convert To...' Object operation has the very weird effect of actually
working at the obdata level, not the object level, which means *all* objects
(even unselected/hidden/in other scenes/...) using the same obdata will be
converted to the newly selected type.
IMHO this is very bad behavior, but... not really a bug, so do not change
this for now.
But at least, do not do that when working on some linked data, else it leaves
the Blend file in an invalid (incoherent) state until the next reload.
So the workaround for now is to enforce the 'Keep Original' option when some
linked object/obdata is affected by the operation, as in the sketch below.
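A hedged sketch of the enforcement; the linked-data test and fields are
stand-ins for Blender's real ones:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct ID { void *lib; } ID; /* non-NULL `lib` means linked data */
typedef struct Object { ID id; ID *data; } Object;

/* Force 'Keep Original' whenever the object or its obdata is linked from
 * a library, so library data is never freed or replaced in place. */
static bool convert_keep_original(const Object *ob, bool keep_original_opt)
{
  if (ob->id.lib || (ob->data && ob->data->lib)) {
    return true; /* Linked data involved: enforce Keep Original. */
  }
  return keep_original_opt;
}
```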
Also fixed somewhat broken usercount handling in the Curve->Mesh part.
Eeeeeek!^2 Unconditionally calling the ID freeing `BKE_libblock_free()` on a
datablock (ob->data, i.e. Curve) that may be used elsewhere...
Veryveryvery bad!
Noisy change, but safe, and better to do it sooner rather than later if we
are to rework the copying code. Also, the previous commit shows this *is*
useful for catching some mistakes.
Was internally a no-op operation, which only caused extra work to be done
during depsgraph traversal and evaluation, without making any measurable
improvement.