# IOptimizer

The `IOptimizer` interface is responsible for optimization tasks in the MAC-VO system. It supports both sequential and parallel execution modes, allowing optimization to run in the background when configured.

## Interface
```python
class IOptimizer(ABC, Generic[T_GraphInput, T_Context, T_GraphOutput], SubclassRegistry):
    def __init__(self, config: SimpleNamespace) -> None: ...

    @abstractmethod
    def _get_graph_data(self, global_map: VisualMap, frame_idx: torch.Tensor) -> T_GraphInput: ...

    @staticmethod
    @abstractmethod
    def init_context(config) -> T_Context: ...

    @staticmethod
    @abstractmethod
    def _optimize(context: T_Context, graph_data: T_GraphInput) -> tuple[T_Context, T_GraphOutput]: ...

    @staticmethod
    @abstractmethod
    def _write_map(result: T_GraphOutput | None, global_map: VisualMap) -> None: ...
```
## Methods to Implement

- `_get_graph_data(global_map, frame_idx) -> T_GraphInput`
  - Extracts optimization data from the global map and the selected frames
  - Parameters:
    - `global_map`: Current visual map state
    - `frame_idx`: Tensor of frame indices to optimize
  - Returns a data structure for optimization
- `init_context(config) -> T_Context`
  - Static method to initialize the mutable context
  - Preserves state between optimization runs
  - Can cache frequently used objects (e.g., optimizers, kernels)
- `_optimize(context, graph_data) -> tuple[T_Context, T_GraphOutput]`
  - Static method containing the core optimization logic
  - Parameters:
    - `context`: Mutable optimization context
    - `graph_data`: Input data for optimization
  - Returns the updated context and the optimization results
- `_write_map(result, global_map) -> None`
  - Static method to update the global map with results
  - Parameters:
    - `result`: Optimization results (or `None`)
    - `global_map`: Map to update
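A minimal sketch of a concrete subclass is shown below. It only illustrates the expected structure of these four methods; `MyGraphData`, `MyContext`, and their fields are hypothetical, and a real implementation would read from and write to the `VisualMap` API instead of using dummy tensors.

```python
# Minimal illustrative subclass, not an actual MAC-VO optimizer.
from dataclasses import dataclass

import torch


@dataclass
class MyGraphData:            # plays the role of T_GraphInput / T_GraphOutput
    poses: torch.Tensor       # (N, 7) pose parameters
    points: torch.Tensor      # (M, 3) 3D points


@dataclass
class MyContext:              # plays the role of T_Context
    run_count: int = 0


class MyOptimizer(IOptimizer[MyGraphData, MyContext, MyGraphData]):
    def _get_graph_data(self, global_map, frame_idx):
        # A real implementation would slice the poses/points of frame_idx
        # out of global_map; dummy tensors keep this sketch self-contained.
        return MyGraphData(poses=torch.zeros(len(frame_idx), 7),
                           points=torch.zeros(0, 3))

    @staticmethod
    def init_context(config) -> MyContext:
        # Objects cached here (solvers, kernels, ...) persist across runs.
        return MyContext()

    @staticmethod
    def _optimize(context, graph_data):
        # Core optimization; kept a pure function of (context, graph_data)
        # so it can also run on the child process in parallel mode.
        context.run_count += 1
        return context, graph_data

    @staticmethod
    def _write_map(result, global_map):
        if result is None:
            return
        # Write optimized poses / points back into global_map here.
```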
## Public Interface

- `optimize(global_map: VisualMap, frame_idx: torch.Tensor) -> None`
  - Main entry point for optimization
  - Non-blocking in parallel mode
  - Blocking in sequential mode
- `write_map(global_map: VisualMap) -> None`
  - Updates the global map with optimization results
  - Blocks until optimization completes in parallel mode
  - Immediate in sequential mode
- `terminate() -> None`
  - Forces termination of the child process in parallel mode
  - No-op in sequential mode
- `is_running -> bool`
  - Property indicating optimization status
  - Returns `True` if a job is running on the child process
  - Returns `False` if:
    - Not in parallel mode
    - The job is finished
    - No job has been sent to the child process
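As a quick illustration of how these calls interact in parallel mode, a frontend might poll `is_running` to overlap its own work with the background job; `process_next_frame` below is a hypothetical placeholder, not part of the interface.

```python
# Hedged usage sketch, assuming an optimizer configured with parallel=True.
optimizer.optimize(global_map, frame_idx)    # returns immediately (non-blocking)

while optimizer.is_running:
    process_next_frame()                     # hypothetical frontend work

optimizer.write_map(global_map)              # job finished, so this does not block
```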
## Parallel Processing

The interface supports parallel execution through multiprocessing:

- When `config.parallel` is `True`:
  - Spawns a child process for optimization
  - Uses a pipe for parent-child communication
  - The parent process handles data transfer
  - The child process runs the optimization loop
- Data Transfer:
  - Uses `move_dataclass_to_local` for tensor copying
  - Handles both regular tensors and TensorBundles
  - Ensures thread-safe data sharing
- Process Management:
  - The child process ignores keyboard interrupts
  - The parent can force-terminate the child process
  - Manages job status to prevent deadlocks
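The snippet below sketches the general pattern described above: tensor fields are detached and copied to CPU before being sent over a `multiprocessing` pipe, and the child loop ignores `SIGINT`. It is an illustrative approximation under simplified assumptions, not MAC-VO's `move_dataclass_to_local` or its worker loop; all names here are hypothetical.

```python
# Illustrative sketch of the parallel pattern, not the actual MAC-VO code.
import dataclasses
import multiprocessing as mp
import signal

import torch


def tensors_to_cpu(obj):
    """Detach and copy every tensor field of a dataclass to CPU so the
    object can be pickled and sent safely through a multiprocessing pipe."""
    updates = {
        f.name: getattr(obj, f.name).detach().cpu().clone()
        for f in dataclasses.fields(obj)
        if isinstance(getattr(obj, f.name), torch.Tensor)
    }
    return dataclasses.replace(obj, **updates)


def child_loop(conn):
    """Optimization loop running on the child process."""
    signal.signal(signal.SIGINT, signal.SIG_IGN)   # ignore keyboard interrupts
    context = None                                  # would come from init_context(config)
    while True:
        graph_data = conn.recv()
        if graph_data is None:                      # sentinel: shut down cleanly
            break
        # context, result = MyOptimizer._optimize(context, graph_data)
        result = graph_data                         # placeholder for the real result
        conn.send(result)


if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    worker = mp.Process(target=child_loop, args=(child_conn,), daemon=True)
    worker.start()

    # A real parent would send CPU-local graph data, e.g.:
    # parent_conn.send(tensors_to_cpu(graph_data))
    parent_conn.send(None)      # sentinel so this sketch exits immediately
    worker.join()
```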
## Implementations

### Base Implementation

- `EmptyOptimizer`
  - Minimal implementation that does nothing
  - Used for debugging and mapping-only mode
  - Implements all required methods as no-ops
  - Uses `EmptyMessageType` for data transfer
### Two-Frame Optimizers

- `TwoFramePoseOptimizer`
  - Basic two-frame pose graph optimization
  - Uses an ICP-like residual formulation
  - Optimizes the camera pose only
  - Configuration:
    - `vectorize`: Enable vectorized optimization
    - `device`: Target device ("cuda" or "cpu")
- `ReprojectOptimizer`
  - Reprojection error-based optimization (see the residual sketch after this list)
  - Supports multiple constraint types:
    - `none`: Basic reprojection error
    - `network`: Uses network-predicted depth uncertainty
    - `match_aware`: Uses match-aware depth uncertainty
    - `disparity`: Uses disparity-based constraints
  - Configuration:
    - `vectorize`: Enable vectorized optimization
    - `device`: Target device
    - `use_fancy_cov`: Use sophisticated covariance computation
    - `constraint`: Constraint type
- `ReprojectBAOptimizer`
  - Bundle adjustment with reprojection optimization
  - Jointly optimizes poses and 3D points
  - Supports the same constraints as `ReprojectOptimizer`
  - Main optimizer used in MAC-VO
  - Configuration:
    - `vectorize`: Enable vectorized optimization
    - `device`: Target device
    - `constraint`: Constraint type
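For intuition, the snippet below computes the basic reprojection residual (the `none` constraint case) that these optimizers minimize; the uncertainty-aware constraints additionally weight this residual by a covariance term. The function and variable names are illustrative only, not MAC-VO's actual code.

```python
# Illustrative reprojection residual (basic `none` constraint case).
import torch


def reprojection_residual(R_cw: torch.Tensor, t_cw: torch.Tensor,
                          points_w: torch.Tensor, obs_uv: torch.Tensor,
                          K: torch.Tensor) -> torch.Tensor:
    """Pixel-space reprojection error of N world points in one camera.

    R_cw, t_cw : world-to-camera rotation (3, 3) and translation (3,)
    points_w   : (N, 3) landmark positions in the world frame
    obs_uv     : (N, 2) observed pixel coordinates of those landmarks
    K          : (3, 3) camera intrinsics
    """
    points_c = points_w @ R_cw.T + t_cw        # world -> camera frame
    proj = points_c @ K.T                      # apply intrinsics
    uv = proj[..., :2] / proj[..., 2:3]        # perspective division
    return uv - obs_uv                         # (N, 2) residual


# Toy usage: identity camera pose, random landmarks in front of the camera.
K = torch.tensor([[320.0, 0.0, 320.0],
                  [0.0, 320.0, 240.0],
                  [0.0, 0.0, 1.0]])
points_w = torch.rand(8, 3) + torch.tensor([0.0, 0.0, 2.0])   # ensure z > 0
obs_uv = torch.rand(8, 2) * 100.0
res = reprojection_residual(torch.eye(3), torch.zeros(3), points_w, obs_uv, K)
```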
## Common Features

All implementations use PyPose optimization tools:

- Levenberg-Marquardt optimizer
- Trust Region strategy
- Huber robust kernel
- FastTriggs corrector
- PINV solver
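As a rough sketch of how these pieces typically fit together in PyPose, the example below assembles a Levenberg-Marquardt optimizer with a TrustRegion strategy, Huber kernel, FastTriggs corrector, and PINV solver around a toy ICP-like point-alignment residual. It illustrates the listed tools under simplified assumptions and is not MAC-VO's actual optimization module.

```python
# Hedged sketch of a PyPose LM setup with the components listed above.
import torch
import pypose as pp


class PointAlignment(torch.nn.Module):
    """Residual model: find the SE3 pose aligning source points to targets."""
    def __init__(self, tgt: torch.Tensor):
        super().__init__()
        self.pose = pp.Parameter(pp.identity_SE3(1))   # pose to be optimized
        self.register_buffer("tgt", tgt)

    def forward(self, src: torch.Tensor) -> torch.Tensor:
        return self.pose.Act(src) - self.tgt           # (N, 3) residual


# Toy data: targets are the source points moved by a small rigid offset.
src = torch.randn(100, 3)
tgt = src + torch.tensor([0.2, -0.1, 0.05])

model = PointAlignment(tgt)
kernel = pp.optim.kernel.Huber(delta=0.5)              # robust kernel
optimizer = pp.optim.LM(
    model,
    solver=pp.optim.solver.PINV(),                     # pseudo-inverse solver
    strategy=pp.optim.strategy.TrustRegion(),          # trust-region damping
    kernel=kernel,
    corrector=pp.optim.corrector.FastTriggs(kernel),   # Triggs correction
    vectorize=True,                                    # cf. the `vectorize` flag
)

for _ in range(10):                                    # fixed iteration budget
    loss = optimizer.step(input=src)
```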
## Usage Example

```python
# Initialize optimizer
optimizer = IOptimizer.instantiate(config.optimizer.type, config.optimizer.args)

# Run optimization (non-blocking in parallel mode)
optimizer.optimize(global_map, frame_indices)

# Do other work while optimization runs
process_other_tasks()

# Write results back to map (blocks if optimization is not done)
optimizer.write_map(global_map)

# Clean up when done
optimizer.terminate()
```
## Configuration

- `parallel`: Boolean flag for parallel execution
- `vectorize`: Enable vectorized optimization
- `device`: Target compute device
- `constraint`: Type of constraint to use

Example configuration:

```python
from types import SimpleNamespace

config = SimpleNamespace(
    parallel=True,       # Enable parallel processing
    vectorize=True,      # Enable vectorized optimization
    device="cuda",       # Use GPU acceleration
    constraint="none",   # Basic reprojection error
    # Implementation-specific settings...
)
```