The library has a new search technique that consistently finds feasible to near-optimal solutions in a reasonable amount of time.
Some characteristics of this technology are:
The most important feature is the mechanism that allows the library to focus the search on finding good solutions. With the deepest search settings, solutions to the capacitated vehicle routing problem averaged 0.83% over optimal. When this feature was turned off, solutions averaged 4.26% over optimal (which is still rather impressive).
The search is non-monotonic: assumptions that are no longer viable are removed directly, rather than the search backtracking to a prior point where the assumption had not yet been made. The library prevents the search from becoming circular without eliminating the ability to examine any part of the search space.
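The idea of removing an assumption without chronological backtracking can be sketched with a toy dependency structure. All names here are illustrative, not the library's API: each conclusion records the assumptions that justify it, so retracting one assumption invalidates only the conclusions that depend on it.

```python
# A minimal sketch of dependency-directed retraction, the idea behind
# non-monotonic search. Class and variable names are illustrative only.

class Assumption:
    def __init__(self, name):
        self.name = name
        self.valid = True

class Conclusion:
    def __init__(self, name, supports):
        self.name = name
        self.supports = supports  # the assumptions this conclusion depends on

    def holds(self):
        # a conclusion survives only while every supporting assumption is valid
        return all(a.valid for a in self.supports)

a1 = Assumption("truck 1 visits city A first")
a2 = Assumption("truck 2 visits city B first")
c1 = Conclusion("route for truck 1", [a1])
c2 = Conclusion("route for truck 2", [a2])

# Retract only the assumption that is no longer viable; the rest of the
# search state is untouched -- no backtracking to an earlier point.
a1.valid = False
print(c1.holds(), c2.holds())  # False True
```

Unrelated conclusions (here, truck 2's route) survive the retraction, which is what lets the search discard a bad assumption without losing the rest of its work.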
Partial solutions, such as the plan for an activity, can be accepted, and the search tree used to derive them is then discarded. A partial solution accepted at this point may still be changed later in the search. This keeps the required computer memory relatively small, and also allows the search to be interrupted and restarted later.
The above feature is the basis for incorporating a prior solution into the search, and for incremental refinement of a solution.
Once a problem has a solution, the search tree can be discarded. If a change is needed, the change can be solved without re-solving the entire problem, although all previously solved elements are subject to change. In some cases this can be done in near real time. This also allows an existing solution to be improved. For the benchmarks in this document, a preliminary solution was computed, and the routings with the highest mileage were solved again with an extended search.
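The change-handling workflow described above can be sketched as follows. The `solve_one` stand-in represents the library's search for a single activity; it is an assumption for illustration, not the real API. The point is that only the changed element is re-solved, while the rest of the solution is left alone.

```python
# Sketch of incremental change handling: solve everything once (discarding
# the search tree between activities), then re-solve only a changed element.
# solve_one is a toy stand-in for the library's search, not its real API.

def solve_one(city, solution):
    # toy "search": produce a plan for the city given the current solution
    return {"city": city, "cost": 1.0}

def solve_all(cities):
    solution = {}
    for c in cities:
        solution[c] = solve_one(c, solution)  # tree discarded between cities
    return solution

solution = solve_all(["A", "B", "C"])

# a change arrives for city "B": retract and re-solve just that plan
del solution["B"]
solution["B"] = solve_one("B", solution)
print(sorted(solution))  # ['A', 'B', 'C'] -- the others were never re-solved
```

The same retract-and-re-solve step is what the benchmarks use to improve the highest-mileage routings with an extended search.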
Since the search tree can be discarded as each activity is solved, the computer memory required is relatively small. In the benchmark tests, the 666-city problem ran in a total process size of less than 360 megabytes.
The extent of the search is dynamically controlled by the application.
There are single-threaded and multithreaded versions of the library. The multithreaded library allows computers with multiple CPUs to be used effectively, although writing an application for it is more complex.
The system has been carefully crafted for maximum speed, as that is critical for these problems. The truth maintenance system is extraordinarily efficient. Although there is wide variation, the benchmarks in this document generated about 5,000 hypotheses per second.
Sometimes multiple steps may be required to solve an activity. For example, in the job shop scheduler a machining operation plan might require planning the machine, tooling, and a machine operator. If a machine setup is needed, even more steps are required.
To do this, the library asks the application if a hypothesis represents a solution for the activity. If not, the library asks the application to generate additional hypotheses in new contexts until the application signals that the activity is fully solved.
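The ask-back protocol for a multi-step activity can be sketched as a simple loop. The function names and the machining steps below are illustrative, not the library's API; the shape to notice is that the library keeps asking until the application reports the activity is fully solved.

```python
# Sketch of solving a multi-step activity (names are illustrative): the
# library repeatedly asks the application whether the plan is complete,
# and if not, asks for a further hypothesis in a new context.

MACHINING_STEPS = ["machine", "tooling", "operator"]

def is_fully_solved(plan):
    # the application decides when the activity is complete
    return len(plan) == len(MACHINING_STEPS)

def next_hypothesis(plan):
    # the application generates the next hypothesis for the new context
    return MACHINING_STEPS[len(plan)]

plan = []
while not is_fully_solved(plan):   # the library's ask-back loop
    plan.append(next_hypothesis(plan))
print(plan)  # ['machine', 'tooling', 'operator']
```

A setup requirement would simply add more steps: the application keeps answering "not yet solved" until every step has a hypothesis.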
Zero Cost Solution
When an application is defined as cost-based (the best solution is the lowest value), the library has an optional feature to reduce the size of the search. This also essentially requires activities to be solved one at a time.
When a solution is found that adds no cost to the previous solution, it is immediately accepted.
In the benchmarks, the value for the context is computed as the length of the routing minus the shortest distance from that city to any other city. Travel to the nearest city is often the first hypothesis offered by the benchmark program. If this hypothesis does not invalidate part of the existing solution, the search generates only a single context and exits.
It may seem counterintuitive to stop the search at that point, since continued search might lead to a better overall solution. But any additional search would be essentially blind and random. If a better solution exists, it should be found elsewhere.
There is a recursive assumption that the previous solution is as good as possible given the extent of the search. So, if the new solution adds no cost, it is optimal, at least relative to the prior solution.
The system also has an optional definition of "good enough" – alternatives with a small relative cost can be immediately accepted. This prevents the system from wasting time trying to improve on an already good alternative, and allows extended examination when the best alternative has a relatively high cost.
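The zero-cost and "good enough" tests can be illustrated together using the benchmark's value function (routing length minus the shortest distance to any other city). The `dist` helper and the threshold value are assumptions for this sketch, not part of the library.

```python
# Sketch of the zero-cost / good-enough acceptance test. The geometry and
# the threshold are illustrative; only the value function comes from the text.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def added_cost(route_length, city, others):
    # travel to the nearest city is unavoidable, so only the excess over
    # that distance counts as cost added to the previous solution
    return route_length - min(dist(city, o) for o in others)

GOOD_ENOUGH = 0.05  # hypothetical threshold for immediate acceptance

cost = added_cost(5.0, (0, 0), [(3, 4), (6, 8)])
accept = cost <= GOOD_ENOUGH
print(cost, accept)  # 0.0 True -- accepted without further search
```

Here the hypothesized route exactly matches the unavoidable travel to the nearest city, so it adds no cost and is accepted immediately; a small positive excess under the threshold would also be accepted as "good enough".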
Incremental Improvement of Solution
When I am working on a plan, I do a rough cut of the complete plan and then make successive refinements. The library can be used in a similar way.
This is perhaps best illustrated by describing the benchmark application.
The cities for a problem are initially solved one at a time. The program selects the city that has the highest average distance from other unsolved cities. It then calls the library to solve that city. The search space is then discarded, and the next city is selected. This is repeated until all cities have been solved. (The solution for any one city will cause substantial changes to previously solved cities.)
When this is complete, a full solution to the problem has been found. A loop is then repeated some number of times. The single-city solution that adds the most unavoidable distance (planned distance minus the distance to the nearest city) to the complete solution is selected. The plan for that city is retracted from the solution. Then it is solved again with deeper search parameters. In the benchmarks, this improved most solutions by 5% to 20%.
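The improvement loop above can be sketched with toy data. Everything here is illustrative: `excess` stands for the avoidable distance (planned distance minus the distance to the nearest city), and `resolve_deeper` stands for re-solving with deeper search parameters.

```python
# Sketch of the benchmark's improvement loop: retract the worst single-city
# plan and re-solve it with a deeper search. Helpers are stand-ins, not API.

def improve(solution, rounds, excess, resolve_deeper):
    for _ in range(rounds):
        worst = max(solution, key=excess)   # most avoidable distance
        del solution[worst]                 # retract that city's plan
        solution[worst] = resolve_deeper(worst)  # deeper search parameters
    return solution

nearest = {"A": 2.0, "B": 3.0, "C": 1.0}   # distance to each city's nearest city
plans = {"A": 2.5, "B": 7.0, "C": 1.0}     # planned distance per city

improved = improve(plans, 1,
                   excess=lambda c: plans[c] - nearest[c],
                   resolve_deeper=lambda c: nearest[c] + 0.5)
print(improved["B"])  # 3.5 -- B had the most avoidable distance, so it was redone
```

Repeating this for the worst few cities is what produced the 5% to 20% improvements reported for the benchmarks.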
An existing solution may be recreated very simply. The activities and the planned actions would need to be saved to a database, along with the value of the solution.
Only the application data and solution value need to be saved – the library requires only the solution value.
To recreate the solution, only the activities and plans (and supporting data) need to be recreated, and the solution value set in the root context. At that point, new data can be added and solved.
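Save and restore can be sketched as follows. The JSON schema and function names are assumptions for illustration, not the library's format; the only requirement from the text is that the application data and the solution value are stored, with no search tree.

```python
# Sketch of persisting and recreating a solution. The schema is hypothetical;
# only the application data and the solution value are saved -- no search tree.
import json
import os
import tempfile

def save_solution(path, activities, plans, value):
    with open(path, "w") as f:
        json.dump({"activities": activities, "plans": plans, "value": value}, f)

def restore_solution(path):
    with open(path) as f:
        data = json.load(f)
    # the caller recreates the activities and plans, then sets the saved
    # value in the root context before adding and solving new data
    return data["activities"], data["plans"], data["value"]

path = os.path.join(tempfile.gettempdir(), "solution.json")
save_solution(path, ["city-1", "city-2"], {"city-1": "plan"}, 42.0)
acts, plans, value = restore_solution(path)
print(value)  # 42.0
```

Once restored, new activities can be added and solved incrementally against the recreated solution.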
Some applications could be written to process real-time changes. Once an initial solution has been computed, changes can be handled incrementally. The actual speed – near real time or not – would be dependent on the application.