
Improved Bin-Packing Algorithm for Blender's UV Editor

These are the original notes for my work on improving Blender's UV tools, with a special focus on the improved UV island packing tool. The original version of this article is still available in Blender's wiki archive.

Reference Papers/Articles analysed for Bin Packing

We’re looking for an algorithm for 2D nested bin packing of irregular shapes. Approaches to investigate include simulated annealing, genetic algorithms and lots of heuristics.

  • It's also worth taking a look at 1-bit bitmap-based packing algorithms, since with UVs you're dealing with pixel-based data most of the time anyway.
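
As a taste of the bitmap idea, here is a minimal sketch of an occupancy test. It uses one byte per pixel for clarity, whereas a real implementation would pack 1-bit masks into machine words; all names are illustrative, not existing Blender code:

#include <stdint.h>

/* A placement is valid if no island pixel lands on an occupied canvas
 * pixel. Masks are row-major, nonzero = occupied. */
int mask_fits(const uint8_t *canvas, int cw, int ch,
              const uint8_t *island, int iw, int ih,
              int ox, int oy)
{
  if (ox < 0 || oy < 0 || ox + iw > cw || oy + ih > ch) {
    return 0;  /* island would leave the canvas */
  }
  for (int y = 0; y < ih; y++) {
    for (int x = 0; x < iw; x++) {
      if (island[y * iw + x] && canvas[(oy + y) * cw + (ox + x)]) {
        return 0;  /* collision with an already placed island */
      }
    }
  }
  return 1;
}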

Main Paper

It really is hard to find papers presenting solutions/algorithms for bin packing that fit our requirements. To be exact, we are looking for Nested Bin Packing Of Irregular 2D Shapes With Fixed Boundaries. The paper that looks most promising is the following:

http://ref.scielo.org/xxn79v

We chose to follow this one with our implementation since it coincidentally features Simulated Annealing, which is an approach we were looking to implement regardless of the actual packing algorithm.

Analysis

This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional small items inside a two-dimensional large object. Since this problem is NP-hard (meaning, in practice, that computing the optimal solution can take prohibitively long), heuristics are needed to get useful results in reasonable time. The heuristic we're going to use is based on Simulated Annealing (see below).

To find good placement solutions, usually "external penalization" is used (penalizing overlap between parts, for example). Since this is surprisingly difficult in our case, no-fit polygons are used instead. No-fit polygons are useful for determining the collision-free region for placing items. In the proposed solution, for each non-placed small item a limited-depth binary search is performed to find a scale factor (between 0 and 1) that, when applied to the small item, allows it to be fitted into the large object. Rotation and placement of items are controlled by the Simulated Annealing.
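
The limited-depth binary search on the scale factor could look like the following sketch; Item, Layout and can_place() are hypothetical stand-ins for the real chart data and the collision-free-region test:

typedef struct Item Item;      /* a small item (UV chart), hypothetical */
typedef struct Layout Layout;  /* the large object plus placed items, hypothetical */

/* Hypothetical predicate: does the item, scaled by 'scale', still have a
 * non-empty collision-free region in the layout? */
extern int can_place(const Item *item, const Layout *layout, float scale);

/* Limited-depth binary search for the largest scale factor in [0, 1]
 * that lets the item fit. */
float find_scale_factor(const Item *item, const Layout *layout, int max_depth)
{
  float lo = 0.0f, hi = 1.0f, best = 0.0f;
  for (int depth = 0; depth < max_depth; depth++) {
    float mid = 0.5f * (lo + hi);
    if (can_place(item, layout, mid)) {
      best = mid;  /* fits at this scale: try larger */
      lo = mid;
    }
    else {
      hi = mid;    /* too large: try smaller */
    }
  }
  return best;
}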

For the non-placed small items, there are three possible ways to define a placement sequence:

  • Larger-First
  • Random Permutation
  • Weight Sorted

We'll go with Larger-First, since it is a method proven to produce good results and we already have utilities in place to rank charts by size/area. It works best when combined with the bottom-left placement heuristic.

This algorithm works with non-convex small items and large objects.

Simulated Annealing

Simulated Annealing is a probabilistic metaheuristic for finding a global optimum of a given function in a large solution space. In contrast to greedy algorithms it doesn't get stuck in local optima, and it is used when it is more important to have a "pretty good" solution in a short timeframe than the perfect solution, which takes a lot of computation time (if it is feasible at all). Its inspiration comes from physics: a system cooling down, where the internal energy of that system is analogous to the function to be minimized. The goal is to bring the system from an arbitrary initial state to a state with the minimum possible energy. As the system cools, so does the probability of accepting worse solutions (compared to the current solution).

In our case the energy of the system is the unused/wasted space (in %) of the UV map. The parameters controlled and altered by the simulated annealing algorithm are (θ, r, f):

  • θ: The rotation of the small item is derived from this parameter. In other implementations the user usually has a choice of how big the rotation steps are.
  • f: Defines in which of the possibly many disconnected parts of the collision-free region to place the small item. For that, the m available collision-free regions are sorted according to some criterion (in this work they are sorted by the x coordinate of their leftmost vertex). The index j of the particular region where the small item will be placed is derived from parameter f as follows: j = floor(f * m)
  • r: Determines where in the chosen collision-free region to place the item: the points on the perimeter of the region are mapped to the [0, 1] interval, starting with 0 at the leftmost point of the perimeter and going counterclockwise. (Both mappings are sketched in code below.)
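
Here is a sketch of both mappings in C; the types are illustrative, and the region is assumed to be stored with verts[0] being the leftmost perimeter vertex:

#include <math.h>

typedef struct { float x, y; } Point;
typedef struct { Point *verts; int num_verts; float perimeter; } Region;

/* f in [0, 1) picks one of the m collision-free regions (sorted by the
 * x coordinate of their leftmost vertex). */
int region_index_from_f(float f, int m)
{
  int j = (int)floorf(f * (float)m);
  return (j >= m) ? m - 1 : j;  /* guard against f == 1.0 */
}

/* r in [0, 1] maps to a point on the region's perimeter, walking
 * counterclockwise from verts[0]. */
Point point_from_r(const Region *region, float r)
{
  float target = r * region->perimeter;
  float walked = 0.0f;
  for (int i = 0; i < region->num_verts; i++) {
    Point a = region->verts[i];
    Point b = region->verts[(i + 1) % region->num_verts];
    float len = hypotf(b.x - a.x, b.y - a.y);
    if (len > 0.0f && walked + len >= target) {
      float t = (target - walked) / len;
      Point p = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
      return p;
    }
    walked += len;
  }
  return region->verts[0];  /* r == 1.0 wraps back to the start */
}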

In pseudocode, this is what the simulated annealing approach looks like:

x <- <initial random solution>
i <- 0
WHILE <global stop condition not satisfied> DO
  t <- CoolingSchedule(i)
  i <- i + 1
  WHILE <local stop condition not satisfied> DO
    x* <- PickRandomNeighbor(x)
    dE <- F(x*) - F(x)
    IF dE < 0 THEN
      x <- x*
    ELSE
      r <- <random uniform number between 0 and 1>
      IF r < exp(-dE/(k*t)) THEN
        x <- x*
      ENDIF
    ENDIF
  ENDWHILE
ENDWHILE
<Display solution x>

First a random initial packing solution x is computed. Then, while the stop conditions aren't satisfied, a neighboring solution is computed: to get a neighbor solution x* of the current solution x, the packing algorithm is called with slightly adjusted/randomized parameters (θ, r, f); this happens in PickRandomNeighbor(x).

Once a neighboring solution x* is found, the energy difference dE between them is computed. In our case that is the difference in wasted/used UV space. In case the new solution x* is better, it is kept ("locally better solution preferred"). In case the new solution x* is worse than the current solution, there is still a chance it is kept instead of the current one. This chance is evaluated as follows:

r < exp(-dE/(k*t)), where r is a uniform random number between 0 and 1, k is a constant of the cooling process and t is the current temperature (computed in CoolingSchedule(i))

This probability of accepting worse solutions makes it possible to avoid getting stuck in local optima and to reach the global optimum. The probability of accepting worse solutions gets lower as the cooling advances.
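
Translated into C, the annealing loop could look like the following sketch; Solution, F() (the wasted space in %), pick_random_neighbor() and cooling_schedule() are placeholders for the packing-specific parts described above, and memory management of rejected neighbors is omitted:

#include <math.h>
#include <stdlib.h>

typedef struct Solution Solution;  /* a full set of (theta, r, f) per island */

extern double F(const Solution *s);                /* energy: wasted space in % */
extern Solution *pick_random_neighbor(const Solution *s);
extern double cooling_schedule(int i);             /* temperature t for step i */

Solution *simulated_annealing(Solution *x, int max_outer, int max_inner, double k)
{
  for (int i = 0; i < max_outer; i++) {    /* global stop condition */
    double t = cooling_schedule(i);
    for (int n = 0; n < max_inner; n++) {  /* local stop condition */
      Solution *x_new = pick_random_neighbor(x);
      double dE = F(x_new) - F(x);
      if (dE < 0.0) {
        x = x_new;  /* locally better solution: always accept */
      }
      else {
        double r = (double)rand() / (double)RAND_MAX;
        if (r < exp(-dE / (k * t))) {
          x = x_new;  /* accept a worse solution with decreasing probability */
        }
      }
    }
  }
  return x;
}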

Placement Heuristics: No Fit Polygons and Collision-Free Region

No-fit polygons are also known as Minkowski sums or configuration space obstacles. Minkowski sums can be calculated very efficiently for convex small items: the result of a Minkowski sum of two convex small items is a convex polygon built from the edges of the original small items (note that this construction is unique). Non-convex small items can be decomposed into convex polygons in a pre-processing step, as the transformations applied (rotations and translations) do not affect such a decomposition.

In our case the inner-fit polygon is also used; it is derived from the no-fit polygon and represents the feasible set of points for the placement of the small item inside the large object. The inner-fit polygon can be computed by sliding the small item along the internal contour of the large object (Dowsland et al., 2002).
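
For our UV packing the large object is simply the rectangular UV canvas, so the inner-fit polygon collapses to an axis-aligned rectangle of feasible reference-point positions; a minimal sketch with illustrative types (the item is placed by translating its local origin to a position inside this rectangle):

typedef struct { float x, y; } Vec2f;
typedef struct { float xmin, xmax, ymin, ymax; } IFPRect;

/* Inner-fit rectangle of an item inside the unit UV square [0,1]^2: the
 * set of positions p such that every vertex p + v stays inside. An empty
 * result (xmin > xmax or ymin > ymax) means the item does not fit. */
IFPRect inner_fit_unit_square(const Vec2f *verts, int num_verts)
{
  float minx = verts[0].x, maxx = verts[0].x;
  float miny = verts[0].y, maxy = verts[0].y;
  for (int i = 1; i < num_verts; i++) {
    if (verts[i].x < minx) minx = verts[i].x;
    if (verts[i].x > maxx) maxx = verts[i].x;
    if (verts[i].y < miny) miny = verts[i].y;
    if (verts[i].y > maxy) maxy = verts[i].y;
  }
  IFPRect r = { -minx, 1.0f - maxx, -miny, 1.0f - maxy };
  return r;
}

For a non-rectangular large object the sliding construction from the paper is needed instead.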

The no-fit polygon induced by a fixed small item i on the moveable small item j is denoted NFP_ij. The inner-fit polygon induced by the large object on the moveable small item j is denoted IFP_j. The collision-free region can be a set of multiple disconnected polygons. The collision-free region for a given small item j is obtained by the following equation:

CFR_j = IFP_j − ∪ NFP_ij, with i ranging over the set of fixed small items

There are mainly three different general approaches to calculating the no-fit polygon (source):

  • Minkowski: The Minkowski-based approaches use the property that, given two polygons A and B, the Minkowski sum of A and −B at least contains the NFP. These approaches generally consist of first generating the possibly complex set of edges that at least contains the NFP and then extracting the NFP from it.
  • Orbiting/sliding: Orbiting- or sliding-based approaches try to solve the problem by actually placing the polygons in a touching position and then attempting to slide one of the polygons around the other using trigonometry, generating the NFP.
  • Decomposition: The third option utilizes the fact that it is simple to compute the NFP of two convex polygons. The solution for when any of the input polygons contains concavities is to decompose it into smaller convex pieces, calculate the sub-NFPs and then unify them into the actual NFP. (The convex building block is sketched below.)
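
For the convex building block, the Minkowski route is straightforward: NFP(A, B) is the Minkowski sum of A and −B, computed by merging the two angularly sorted edge lists. A sketch for convex counterclockwise polygons (illustrative code, not taken from the paper; 'tmp' must hold max(na, nb) points):

#include <string.h>

typedef struct { double x, y; } Vec2d;

/* Rotate the vertex array so the bottom-most (then left-most) vertex is
 * first; the edge list is then sorted by angle, which the merge needs. */
static void reorder_polygon(Vec2d *p, int n, Vec2d *tmp)
{
  int start = 0;
  for (int i = 1; i < n; i++) {
    if (p[i].y < p[start].y || (p[i].y == p[start].y && p[i].x < p[start].x)) {
      start = i;
    }
  }
  memcpy(tmp, p, sizeof(Vec2d) * (size_t)n);
  for (int i = 0; i < n; i++) {
    p[i] = tmp[(start + i) % n];
  }
}

/* Minkowski sum of two convex CCW polygons by merging their edges in
 * angular order. 'out' must hold na + nb points; returns the count. */
int minkowski_sum_convex(Vec2d *a, int na, Vec2d *b, int nb, Vec2d *out, Vec2d *tmp)
{
  reorder_polygon(a, na, tmp);
  reorder_polygon(b, nb, tmp);
  int i = 0, j = 0, n = 0;
  while (i < na || j < nb) {
    out[n].x = a[i % na].x + b[j % nb].x;
    out[n].y = a[i % na].y + b[j % nb].y;
    n++;
    /* the cross product of the current edges decides which list advances */
    double ax = a[(i + 1) % na].x - a[i % na].x;
    double ay = a[(i + 1) % na].y - a[i % na].y;
    double bx = b[(j + 1) % nb].x - b[j % nb].x;
    double by = b[(j + 1) % nb].y - b[j % nb].y;
    double c = ax * by - ay * bx;
    if (c >= 0.0 && i < na) i++;
    if (c <= 0.0 && j < nb) j++;
  }
  return n;
}

/* NFP of a fixed convex item A and a moving convex item B, referenced at
 * B's local origin: negate B in place, then take the Minkowski sum. */
int nfp_convex(Vec2d *a, int na, Vec2d *b, int nb, Vec2d *out, Vec2d *tmp)
{
  for (int i = 0; i < nb; i++) {
    b[i].x = -b[i].x;
    b[i].y = -b[i].y;
  }
  return minkowski_sum_convex(a, na, b, nb, out, tmp);
}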

Aside from No-Fit Polygons there are other placement heuristics available:

  • Bottom-Left (BL): This is the best-known heuristic of its type. The piece starts at the top-right corner of the object and slides down and left with a sequence of movements until no other movement is possible. If the final position does not overlap the object boundaries, the piece is placed in that position. The heuristic does not allow a piece to skip around another placed piece. The performance of this heuristic greatly depends on the initial ordering (e.g. largest to smallest) of the pieces (Dowsland et al. 1998, 2002). Its advantage lies in its speed and simplicity. (A toy version on bounding boxes is sketched after this list.)
  • Constructive Approach (CA): The heuristic starts by placing the first piece at the bottom-left of the object. Then, for each placed piece, five alternative positions are computed and stored in a list: (x*, 0), (0, y*), (x’, y*), (x*, y*) and (x*, y’), where x*, x’, y*, and y’ are the maximum and minimum coordinates in x and y. The piece then slides down and left with a sequence of movements until no other movement is possible. Given that some positions might coincide, each position appears only once in the list. Among the final positions, the most bottom-left position is chosen.
  • Constructive Approach (Minimum Area) (CAA): This is a modification of the previous heuristic. The variation consists of selecting the best position from the list based on which one yields the bounding rectangle with minimum area, containing all pieces, that fits in the bottom-left corner of the object.
  • Constructive Approach (Maximum Adjacency) (CAD): This heuristic is partially based on the approach suggested by Uday et al. (2001). However, when the first piece is to be placed, the paper's implementation considers only the four corners of the large object. For the subsequent pieces, the possible points are the same as those in the constructive approach (CA, the second heuristic in this list), described above. Each position in the list is evaluated twice: the piece starts in the initial position and its adjacency (i.e., the common boundary between its perimeter and the placed pieces and the object edges) is computed. Then the piece is slid down and left and the adjacency is computed again. The position with the largest adjacency is selected as the position of the new piece.
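
To make the Bottom-Left idea concrete, here is a toy version that works on axis-aligned bounding boxes only (the real heuristic slides actual polygons); the piece is alternately dropped and shifted left until stuck:

typedef struct { float x, y, w, h; } Rect;  /* (x, y) = bottom-left corner */

/* Largest downward move before 'r' rests on a placed box or the floor. */
static float max_drop(Rect r, const Rect *placed, int n)
{
  float step = r.y;  /* the floor is at y = 0 */
  for (int i = 0; i < n; i++) {
    int x_overlap = placed[i].x < r.x + r.w && r.x < placed[i].x + placed[i].w;
    if (x_overlap && placed[i].y + placed[i].h <= r.y) {
      float d = r.y - (placed[i].y + placed[i].h);
      if (d < step) step = d;
    }
  }
  return step;
}

/* Largest leftward move before 'r' touches a placed box or the wall. */
static float max_shift(Rect r, const Rect *placed, int n)
{
  float step = r.x;  /* the wall is at x = 0 */
  for (int i = 0; i < n; i++) {
    int y_overlap = placed[i].y < r.y + r.h && r.y < placed[i].y + placed[i].h;
    if (y_overlap && placed[i].x + placed[i].w <= r.x) {
      float d = r.x - (placed[i].x + placed[i].w);
      if (d < step) step = d;
    }
  }
  return step;
}

/* Bottom-Left placement: drop, then shift left, until neither works. */
Rect bottom_left_place(Rect r, const Rect *placed, int n)
{
  for (;;) {
    float dy = max_drop(r, placed, n);
    if (dy > 0.0f) { r.y -= dy; continue; }
    float dx = max_shift(r, placed, n);
    if (dx > 0.0f) { r.x -= dx; continue; }
    break;  /* no movement possible: final position */
  }
  return r;
}

The piece would be seeded at the top-right, e.g. with Rect r = {1.0f - w, 1.0f - h, w, h} in normalized UV space; note that this toy version, like the real heuristic, cannot skip around an already placed piece.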

Implementation

For the time being I decided to work on an implementation as a new operator, alongside the current Pack Islands operator. This makes sense since the new operator is designed to incrementally compute better solutions until a certain threshold is reached, a certain number of iterations is reached or the user aborts the operator. The tool requires quite a bit of old code to be shuffled around and new code added, so it's better not to modify the codebase of the current one, but to start from scratch and reuse certain functions that are already in place. Once the new operator works it should be possible to merge them using a "Use legacy packing" option, or even keep them as separate tools. (User feedback needed.)

Since a lot of tools internally call the packing operator (the Unwrap operator, for example), it would make sense to keep the current one as-is, since an incremental operator is not an acceptable solution for this purpose. It would be possible to call the incremental/new operator with max_iterations set to 1, but testing would be needed to see if the initial solution surpasses the one from the current rectangle-based bin-packing algorithm.

Current Bin Packing

The current bin-packing algorithm in Blender (Pack Islands) is based on axis-aligned rectangle bin packing, meaning that only the bounding rectangles of the UV islands are taken as a basis for the packing computation. There are utilities in place to compute the minimal-area bounding rectangle per island/bin. While it is fast to compute, a big drawback of this approach is that nesting of parts (e.g. interlocking parts) is not possible.

The user has the option to allow rotation of pieces; islands are then rotated in accordance with the minimum-area rectangle computation. Additionally, the option to add a margin is given to the user. If margin != 0, the bounding rectangles are scaled up by the amount specified (which seems to be some arbitrary number instead of pixels?).

The important parts of the implementation can be found in param_pack() in uvedit_parametrizer.c.

Operator

The operator for this should be modal, running until the user ends it (either confirming/keeping the current result with Enter/LMB, or canceling the whole computation and reverting back to the state before the operator was called).

A good candidate to take inspiration from is the modal “Minimize Stretch” operator for UVs, since it operates in exactly the way we’re looking for.

It should be possible to call the function which places a small item among already-placed items from a separate operator; this was also requested by users. I'll have to look into this once the implementation details are fixed.

Possible options the operator should give to the user are listed below (a registration sketch follows the list):

  • Margin: A margin between UV islands.
  • Rotation steps: Step size (in degrees?) for rotating UV islands
  • Max. Iterations: Stop the computation after this many iterations
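
A sketch of how the operator and these options could be registered, following the pattern of the existing UV operators in uvedit_ops.c; the idname, callbacks and defaults are placeholders, not final:

/* Hypothetical operator definition in the style of uvedit_ops.c. */
static void UV_OT_pack_islands_irregular(wmOperatorType *ot)
{
  /* identifiers */
  ot->name = "Pack Islands (Irregular)";
  ot->idname = "UV_OT_pack_islands_irregular";
  ot->description = "Incrementally pack UV islands, allowing irregular nesting";

  /* api callbacks: modal, so the operator runs until confirmed or cancelled */
  ot->invoke = pack_islands_irregular_invoke;  /* placeholder */
  ot->modal = pack_islands_irregular_modal;    /* placeholder */
  ot->cancel = pack_islands_irregular_cancel;  /* placeholder */
  ot->poll = ED_operator_uvedit;

  ot->flag = OPTYPE_REGISTER | OPTYPE_UNDO;

  /* properties, mirroring the option list above */
  RNA_def_float(ot->srna, "margin", 0.0f, 0.0f, 1.0f,
                "Margin", "Space between packed islands", 0.0f, 1.0f);
  RNA_def_float(ot->srna, "rotation_step", 90.0f, 0.0f, 180.0f,
                "Rotation Steps", "Step size in degrees for island rotation",
                0.0f, 180.0f);
  RNA_def_int(ot->srna, "max_iterations", 100, 1, 10000,
              "Max. Iterations", "Stop the computation after this many iterations",
              1, 1000);
}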

Reusable functions

Some functions that are (probably) needed for the implementation are already present in Blender's UV code; all of these can be found in uvedit_parametrizer.c:

  • p_chart_uv_translate(), p_chart_uv_scale_xy(), p_chart_uv_transform(), etc.: Functions to transform UV charts (=islands).
  • qsort(): Sorts UV islands (currently actually by bounding-box size) from largest to smallest. Called with box_areasort as the comparison function.
  • p_face_area(), p_face_area_signed(), p_area_signed(): Area computation functions.
  • p_chart_uv_to_array(): Creates an array filled with the UV coordinates of all vertices of a chart.
  • p_intersect_line_2d_dir(): Intersection test for two lines.
  • p_intersect_line_2d(): Intersection test for two edges, currently disabled with “#if 0”.
  • p_face_backup_uvs(): Creates a backup of the current UVs before starting a new computation.
  • p_face_restore_uvs(): Restores UVs from the backed-up version.

Needed helper functions

Some functions that are needed besides the main algorithm:

  • Compute wasted area: A function to compute the wasted area of the UV map in percent: image/canvas area − (sum of all UV chart areas). (All charts need to be clipped inside the UV area for correct results.) (Bonus: always calculate this and display it in the UI; it might be interesting/useful for users.) A sketch follows this list.
  • Compute no-fit polygon: Computes the no-fit polygon of a small item with respect to another small item.
  • Compute inner-fit polygon: Computes the inner-fit polygon of a small item inside a large item (in our case the UV space).
  • Compute collision-free region: CFR_j = IFP_j − ∪ NFP_ij (referring to the equation above)
  • Compute concave bounds: Computes the concave (non-convex) boundaries of a chart
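
The wasted-area helper is mostly a reduction over the charts; a minimal sketch, assuming charts are already clipped to the unit UV square and a hypothetical p_chart_uv_area() sums the unsigned face areas of one chart (e.g. built on the existing p_face_area()):

typedef struct PChart PChart;  /* chart type from uvedit_parametrizer.c */

/* Hypothetical helper: total (unsigned) UV area of one chart's faces. */
extern float p_chart_uv_area(const PChart *chart);

/* Wasted space of the UV map in percent: canvas area minus the sum of
 * all chart areas. The canvas is the unit square, so its area is 1.0. */
float p_wasted_area_percent(PChart **charts, int num_charts)
{
  float used = 0.0f;
  for (int i = 0; i < num_charts; i++) {
    used += p_chart_uv_area(charts[i]);
  }
  return (1.0f - used) * 100.0f;
}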
