Blender V4.3
blender::fn::lazy_function Namespace Reference

Namespaces

namespace  detail
 
namespace  tests
 

Classes

class  BasicParams
 
struct  Context
 
struct  CurrentTask
 
class  Executor
 
class  FunctionNode
 
class  Graph
 
class  GraphExecutor
 
class  GraphExecutorLFParams
 
class  GraphExecutorLogger
 
class  GraphExecutorNodeExecuteWrapper
 
class  GraphExecutorSideEffectProvider
 
struct  Input
 
class  InputSocket
 
struct  InputState
 
class  InterfaceNode
 
class  LazyFunction
 
class  LocalUserData
 
struct  LockedNode
 
class  Node
 
struct  NodeState
 
struct  Output
 
class  OutputSocket
 
struct  OutputState
 
class  Params
 
class  RemappedParams
 
struct  ScheduledNodes
 
class  Socket
 
class  UserData
 

Typedefs

using GraphInputSocket = OutputSocket
 
using GraphOutputSocket = InputSocket
 

Enumerations

enum class  ValueUsage : uint8_t { Used , Maybe , Unused }
 
enum class  NodeScheduleState : uint8_t { NotScheduled , Scheduled , Running , RunningAndRescheduled }
 

Functions

template<typename... Inputs, typename... Outputs>
void execute_lazy_function_eagerly (const LazyFunction &fn, UserData *user_data, LocalUserData *local_user_data, std::tuple< Inputs... > inputs, std::tuple< Outputs *... > outputs)
 

Detailed Description

This file implements the evaluation of a lazy-function graph. Its main objectives are:

  • Only compute values that are actually used.
  • Stay single threaded when nodes are executed quickly.
  • Allow spreading the work over an arbitrary number of threads efficiently.

This executor makes use of FN_lazy_threading.hh to enable multi-threading only when it seems beneficial. It operates in two modes: single- and multi-threaded. The use of a task pool and locks is avoided in single-threaded mode. Once multi-threading is enabled the executor starts using both. It is not possible to switch back from multi-threaded to single-threaded mode.

The multi-threading design implemented in this executor requires no main thread that coordinates everything. Instead, one thread triggers some initial work and then many threads coordinate themselves in a distributed fashion. In the ideal situation, every thread ends up processing a separate part of the graph, which results in less communication overhead. The way TBB schedules tasks helps with that: a thread will next process the task it most recently added to a task pool.

Communication between threads is synchronized by using a mutex in every node. When a thread wants to access the state of a node, its mutex has to be locked first (with some documented exceptions). The assumption is that most nodes are only ever touched by a single thread, so lock contention decreases as the number of nodes grows.
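A minimal sketch of this per-node locking idea (names and fields here are illustrative, not the executor's actual types): each node owns its own mutex, and a thread takes that node's lock before reading or mutating the node's state.

```cpp
#include <cassert>
#include <mutex>

// Illustrative simplification of "one mutex per node": threads touching
// different nodes lock different mutexes and therefore never contend.
struct NodeStateSketch {
  std::mutex mutex;        // guards everything below
  int missing_inputs = 0;  // evaluation state protected by the mutex
};

// Any access to a node's state first takes that node's lock.
inline void touch_node(NodeStateSketch &node)
{
  std::lock_guard<std::mutex> lock{node.mutex};
  node.missing_inputs -= 1;  // mutate state only while holding the lock
}
```

Because the lock is per node rather than global, adding more nodes spreads accesses over more mutexes instead of funneling them through one.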

Similar to how a #LazyFunction can be thought of as a state machine (see FN_lazy_function.hh), each node can also be thought of as a state machine. The state of a node contains the evaluation state of its inputs and outputs. Every time a node is executed, it has to advance its state in some way (e.g. it requests a new input or computes a new output).

When a node is executed, it may send notifications to other nodes, which may in turn schedule those nodes. For example, when the current node has computed one of its outputs, the computed value is forwarded to all linked inputs, changing their node states in the process. If this input was the last missing required input, the node is scheduled so that it is executed next.
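The "forward a value, maybe schedule the target" step can be sketched as follows. This is a hypothetical simplification; `SketchNode`, `forward_value` and the plain queue stand in for the executor's real node state and task pool.

```cpp
#include <cassert>
#include <queue>

// Illustrative node state: how many required inputs are still missing,
// and whether the node has already been scheduled.
struct SketchNode {
  int missing_required_inputs;
  bool scheduled = false;
};

// Forwarding a computed value to a linked input changes the target's
// state. If that was the last missing required input, the target node
// is scheduled so that it is executed next.
inline void forward_value(SketchNode &target, std::queue<SketchNode *> &task_queue)
{
  target.missing_required_inputs -= 1;
  if (target.missing_required_inputs == 0 && !target.scheduled) {
    target.scheduled = true;
    task_queue.push(&target);
  }
}
```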

When all tasks are completed, the executor gives back control to the caller which may later provide new inputs to the graph which in turn leads to new nodes being scheduled and the process starts again.

Typedef Documentation

◆ GraphInputSocket

Interface input sockets are actually output sockets on the input node. This renaming makes the code less confusing.

Definition at line 199 of file FN_lazy_function_graph.hh.

◆ GraphOutputSocket

Interface output sockets are actually input sockets on the output node. This renaming makes the code less confusing.

Enumeration Type Documentation

◆ NodeScheduleState

Enumerator
NotScheduled 

Default state of every node.

Scheduled 

The node has been added to the task pool or is otherwise scheduled to be executed in the future.

Running 

The node is currently running.

RunningAndRescheduled 

The node is running and has been rescheduled while running. In this case the node has to run again. This state exists because we don't want to add the node to the task pool twice; otherwise the node might run twice at the same time, which is not allowed. Instead, once the node is done running, it reschedules itself.

Definition at line 58 of file lazy_function_graph_executor.cc.
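The reschedule-while-running rule above can be sketched as a small state transition. This is an illustrative analogue, not the executor's actual scheduling code; `try_schedule` is a hypothetical helper.

```cpp
#include <cstdint>

// Mirrors the enumerator set documented above.
enum class NodeScheduleStateSketch : uint8_t {
  NotScheduled,
  Scheduled,
  Running,
  RunningAndRescheduled,
};

// Returns true if the caller should add the node to the task pool.
// A node that is already running is never added a second time; it is
// only marked so that it reschedules itself once it finishes.
inline bool try_schedule(NodeScheduleStateSketch &state)
{
  switch (state) {
    case NodeScheduleStateSketch::NotScheduled:
      state = NodeScheduleStateSketch::Scheduled;
      return true;  // caller adds the node to the task pool
    case NodeScheduleStateSketch::Running:
      state = NodeScheduleStateSketch::RunningAndRescheduled;
      return false;  // node reschedules itself when done
    default:
      return false;  // already scheduled or rescheduled: nothing to do
  }
}
```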

◆ ValueUsage

Enumerator
Used 

The value is definitely used and therefore has to be computed.

Maybe 

It's unknown whether this value will be used or not. Computing it is ok but the result may be discarded.

Unused 

The value will definitely not be used. It can still be computed but the result will be discarded in all cases.

Definition at line 58 of file FN_lazy_function.hh.
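As a small illustration of the three usage levels (a sketch with invented helper names, not part of the API): only `Used` values must be computed, while `Maybe` and `Unused` values may be computed speculatively and then discarded.

```cpp
#include <cstdint>

// Mirrors the enumerator set documented above.
enum class ValueUsageSketch : uint8_t { Used, Maybe, Unused };

// Only definitely-used values have to be computed.
inline bool must_compute(ValueUsageSketch usage)
{
  return usage == ValueUsageSketch::Used;
}

// Maybe and Unused values can be computed, but the result may be
// (Maybe) or will be (Unused) discarded.
inline bool may_discard(ValueUsageSketch usage)
{
  return usage != ValueUsageSketch::Used;
}
```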

Function Documentation

◆ execute_lazy_function_eagerly()

template<typename... Inputs, typename... Outputs>
void blender::fn::lazy_function::execute_lazy_function_eagerly (const LazyFunction &fn, UserData *user_data, LocalUserData *local_user_data, std::tuple< Inputs... > inputs, std::tuple< Outputs *... > outputs)
inline

In some cases (mainly for tests), the set of inputs and outputs for a lazy-function is known at compile time and one just wants to compute the outputs based on the inputs, without any laziness.

This function does exactly that. It takes all inputs in a tuple and writes the outputs to the pointers provided in a second tuple. Since all inputs have to be provided, the lazy-function has to compute all outputs.
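The calling convention can be illustrated with a self-contained analogue (this does not use the Blender API; the function name and logic are invented for illustration): inputs arrive by value in one tuple, and outputs are written through the pointers collected in a second tuple.

```cpp
#include <tuple>

// Illustrative analogue of the eager calling convention: two inputs in,
// two outputs written through the provided pointers.
inline void add_and_multiply_eagerly(std::tuple<int, int> inputs,
                                     std::tuple<int *, int *> outputs)
{
  const auto [a, b] = inputs;
  *std::get<0>(outputs) = a + b;  // first output
  *std::get<1>(outputs) = a * b;  // second output
}

// Usage:
//   int sum = 0, product = 0;
//   add_and_multiply_eagerly({3, 4}, {&sum, &product});
//   // sum == 7, product == 12
```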

Definition at line 141 of file FN_lazy_function_execute.hh.

References BLI_assert, blender::fn::lazy_function::detail::execute_lazy_function_eagerly_impl(), blender::fn::lazy_function::LazyFunction::inputs(), and blender::fn::lazy_function::LazyFunction::outputs().

Referenced by blender::fn::lazy_function::tests::TEST(), blender::fn::lazy_function::tests::TEST(), and blender::fn::lazy_function::tests::TEST().