Can a Known Lower Bound Improve UWrMaxSat Performance? #12
Replies: 15 comments · 6 replies
-
Hi, I have created a new branch called user-lower-bound and pushed my modifications to the code there in order to allow a new option: -lower-bound=. The value given in this option is used in two situations. Unfortunately, as far as I know, it is not possible to use the given lower bound in the default search strategy, because the value of the lower bound there must start from 0 and is increased after each UNSAT core returned by the SAT solver. So, you can check out the new branch and test whether the option is usable for your problems. Best regards,
-
Hi, Thank you for the proposed implementation. I tried it on some wcnf files, and while it seems to help for some, it also seems a little counterproductive for others. For example, take the attached wcnf file f.txt (I changed the extension to .txt to be able to attach it). The optimal solution has a cost of 66. If I try to indicate that the lower bound is 66 with this command:

./build/release/bin/uwrmaxsat -no-scip f.wcnf -bin -v3 -lower-bound=66

the result gives something like this:

Shouldn't the starting lower bound be 66 here instead of 1? (In addition, in this case the lower-bound option actually slows down the resolution a little compared to the same binary search without the option, even though both seem to iterate over all possible lower bounds.) And if I try to set the lower bound directly in the code like this:

if (lb < opt_lower_bound) {
    lb = opt_lower_bound;
    LB_goalvalue = opt_lower_bound;
}

the starting lower bound correctly starts at 66, but no solution is found with a cost of 66. I guess this is because the assumption size is 1032 when looking for the lower bound 66 here, when it should be equal to 715, as seen above. Is this the intended way, and am I just misinterpreting the logs? Or maybe I am just misunderstanding the process, and it is not possible to directly set the lower bound like this, even with the binary search?
-
Hi, In fact, there are two versions of a binary search algorithm in UWrMaxSat: the first one is an implementation of BIN-C (Algorithm 15) from the paper [A. Morgado, F. Heras, M. H. Liffiton, J. Planes, and J. Marques-Silva, "Iterative and core-guided MaxSAT solving: A survey and assessment," Constraints, vol. 18, no. 4, pp. 478–534, 2013], and the second one is a version of BIN (Algorithm 5) from the same paper. The first one is run with the -bin option; the second one is started automatically after the default UNSAT-based strategy. In BIN-C the goal pseudo-Boolean constraint is built iteratively from the UNSAT cores returned by the SAT solver. The value of the variable LB_goalvalue is strongly connected with this process and cannot be modified by setting it to opt_lower_bound. But opt_lower_bound can be used to select a better middle value in the binary search, and this is what I have done.

For many "industrial" instances the default strategy is more effective than the others. Whether and how a user-provided lower bound can be used in that algorithm is an interesting research question. The other problem with its implementation is: what should a MaxSAT solver do when a given lower bound is wrong, that is, greater than the optimum? In particular, if the solver finds a solution with weight equal to the lower bound, should it claim that the optimum is found without verifying the statement? I am not sure about that, so I have not implemented this test. If you are interested in MaxSAT algorithms, read, for example, the paper cited above. Best regards,
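For illustration only, here is a minimal sketch of how a user-supplied lower bound can bias only the middle value of such a binary search; the helper computeMid and its arguments are hypothetical and are not part of UWrMaxSat:

#include <algorithm>
#include <cstdint>

// Hypothetical sketch, not UWrMaxSat code: a BIN-style search keeps a proven
// window [lb, ub) (ub = cost of the best model found so far) and asks the SAT
// solver "is there a model of cost <= mid?".  A user-supplied lower bound can
// only bias the choice of mid; lb and ub are still updated solely from the
// SAT/UNSAT answers, so a wrong user_lb costs extra iterations at worst.
static int64_t computeMid(int64_t lb, int64_t ub, int64_t user_lb)
{
    int64_t biased_lb = std::max(lb, std::min(user_lb, ub - 1)); // clamp into [lb, ub-1]
    return biased_lb + (ub - 1 - biased_lb) / 2;                 // value tested next
}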
-
Hi, Thank you for the detailed explanation as well as the source for the algorithms implemented. It seems that I should read the paper to get a better understanding of the inner workings of partial weighted MaxSAT. Best regards,
-
Hello, After reading some papers on MaxSAT, I now have a better understanding of its inner workings. Initially, I opened this discussion because I felt that the current IPAMIR interface implementation wasn't fully utilizing the incremental aspect. Specifically, if I called the solver twice with the same formula, the second call didn't seem to be resolved any faster. I wanted to explore whether information from a previous call could be used to help subsequent calls. With my improved understanding of the algorithms used in UWrMaxSat, I now see why my initial proposition was incorrect.

However, after reading about core-guided MaxSAT (specifically OLL, which I believe is the algorithm you use exclusively with the IPAMIR interface), I have a new idea. To simplify, let's assume we don't have assumption variables. Could we maintain a map from soft unsatisfiability cores (the soft literals and soft cardinalities in the core) to the constraints created for them? Then, when recalling the MaxSAT solver in a subsequent call, we could collect all the soft literals that are identical to those of the last MaxSAT solver call (same variables and weights). With these, we could determine which of the previous unsatisfiability cores remain unsatisfiable in the new formula (since the hard clauses of the last formula are still present in the new one). Once we identify these cores, we could directly add their corresponding constraints (in the appropriate order, if some cores contain soft cardinalities created from previous cores) instead of rediscovering them through SAT calls, and update the relevant variables.

Before attempting to implement this, I wanted to check whether you think this approach is theoretically sound, and if so, whether it could help improve the performance of the MaxSAT solver with the IPAMIR interface? Best regards,
-
Hi Gaspard, it is true that between consecutive calls to maxsat_solve mainly the state of the SAT solver is preserved. It is also true that calls to the SAT solver consume most of the time. So it is probably a good idea to log the mapping (set of assumptions) --> (UNSAT core C, a sequence of conditional variables x_1, x_2, ...), where x_k implies cardinality_constraint(C) <= k. Then we can avoid a call to the SAT solver when the same set of assumptions is repeated. Moreover, an encoding of the corresponding cardinality constraint into CNF is already in the SAT solver, so we do not have to repeat the encoding. Thus, if you would like to implement an extension of UWrMaxSat on your own, this is one idea of what you can do, and it would be quite a local change to the solver. Best regards,
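As a rough sketch of this bookkeeping (assuming the Minisat types already used in MsSolver.h; the struct name SavedCore and its fields are hypothetical):

// Hypothetical log entry for the mapping described above:
// (set of assumptions) --> (UNSAT core C, conditional variables x_1, x_2, ...),
// where activating x_k corresponds to the already-encoded constraint
// "at most k literals of C are true".  If the same assumptions reappear, the
// SAT call that would rediscover C, and the re-encoding of the cardinality
// constraint, can both be skipped.
struct SavedCore {
    Minisat::vec<Minisat::Lit> assumptions;   // assumptions under which C was found
    Minisat::vec<Minisat::Lit> core;          // the UNSAT core C
    Minisat::vec<Minisat::Lit> conditionals;  // x_1, x_2, ... guarding card(C) <= k
};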
-
Hi Marek, I appreciate the guidance. I will try to implement the suggested changes and get back to you once they are complete. Best regards,
-
Hi Gaspard,
Thank you for your cooperation. I will look at your code and merge it into
the master branch.
Best regards,
Marek
Thu, 7 Nov 2024, 14:58, user Quenard Gaspard ***@***.***>
wrote:
… Hi Marek,
I wanted to let you know that I implemented this idea. After a MaxSAT
call, I save all the unsat cores found, as well as the constraints they
created:
// MsSolver.h
// Each idx corresponds to a core and the constraints it creates
Minisat::vec<Minisat::vec<Lit>> saved_cores;
Minisat::vec<Minisat::vec<Lit>> core_to_constrains_lits;
Then, in the maxsat_solve function, within the loop, before calling the
SAT solver, I check if I can reuse a previous core with the current
assumptions (i.e., if the literals of the unsat core are a subset of the
current assumptions). If that’s the case, I go directly to your section of
the code where constraints are created from the unsat core, and I reuse
this core.
Of course, there are some additional subtleties, and in this version, I do
not reuse the previous constraints created by an unsat core; instead, I
allow the code to recreate them. This approach seems to work for all my
benchmarks (around 250 problems) and provides a significant improvement
overall, especially for difficult problems.
// MsSolver.cc
do { // a loop to process GBMO splitting points
    while (1) {
        // ...
        if (reuse_previous_cores) {
            reusable_core_idx = findReusableCore(saved_cores_used, assump_ps);
            if (reusable_core_idx >= 0) {
                saved_cores_used[reusable_core_idx] = true;
                goto CreateConstrainsUnsatCore;
            }
        }
        status =
            base_assump.size() == 1 && base_assump[0] == assump_lit ? l_True :
            base_assump.size() == 1 && base_assump[0] == ~assump_lit ? l_False :
            satSolveLimited(assump_ps);
        // ...
    CreateConstrainsUnsatCore:
        Minisat::vec<Lit> core_mus;
        if (reuse_previous_cores && reusable_core_idx >= 0) {
            // Reuse the core
            saved_cores[reusable_core_idx].copyTo(core_mus);
        } else {
            if (opt_core_minimization) {
                if (weighted_instance) {
                    vec<Pair<Pair<Int, int>, Lit> > Cs_mus;
                    // ...
I didn’t submit this as a pull request because, even though it seems to
work for my benchmarks, I’m not certain it works in all possible cases (I
use this exclusively with IPAMIR), and I’m not confident that my
modifications are in the correct place. However, if you’re interested, or
if there are other users who might benefit from incremental MaxSAT, here’s
my modification of the MsSolver.cc and MsSolver.h files. You can open the
files with a diff tool to see the changes compared to the original code.
If you have any questions or would like to test with additional
benchmarks, I’d be happy to provide them.
MsSolver.zip
<https://github.com/user-attachments/files/17662494/MsSolver.zip>
Best Regards,
Gaspard
-
Hi Marek, I'm happy to contribute! After reviewing my proposed code, I realized that I do indeed rely on certain assumptions that do not hold in the general case (although for the ones I've identified, it shouldn't be too hard to generalize them). For example, while I keep a list of reusable cores after each MaxSAT call, at the beginning of the maxsat_solve function I remove all reusable cores that are no longer relevant (with a dedicated function). However, I make two main assumptions in this function: first, that soft literals can only be removed between two MaxSAT calls by setting their weight to 0 (we cannot change their value, only drop them by assigning a weight of 0); second, that each soft clause contains only one soft literal. While these assumptions hold true in my benchmarks, they clearly do not apply in the general case. As a result, the code as it currently stands cannot be integrated directly. There are probably other problems or missed optimizations (for some of my benchmarks I feel that I reuse too few saved cores, while for others I reuse almost all of them; maybe this is due to the delayed assumptions that I do not take into account?). If, after reading the code, you have remarks or questions, I can propose another version based on them. Best Regards,
-
Hi Gaspard, I have reviewed your code and it looks OK. It is also quite easy to follow thanks to your comments. But I have the following suggestions:
(1) The technique with saved cores is only relevant if the IPAMIR interface is used, so the variable reuse_previous_cores should be set to true only if the variable ipamir_used is true.
(2) The function findReusableCore runs in time proportional to the sum of the sizes of all saved cores (in the worst case) times the logarithm of the size of the set of assumptions. By changing the data structure used for saved cores, it can be made proportional to the size of the set of assumptions, plus the sum of the sizes of the saved cores in which those assumptions appear, plus the number of saved cores. Assuming that the saved cores can contain many more literals than the set of assumptions, this should improve the running time. The new structure consists of (a) a Map<Minisat::Lit, vec<int>>, where for each literal occurring in a saved core we store the list of indices of the saved cores in which it appears, and (b) a vec L with the sizes of the saved cores. To check whether a set A of assumptions contains one or more saved cores, we copy L to TL and, for each appearance of a literal x from A in a saved core with index y, decrease TL[y] by 1. When TL[y] reaches zero, the core is found. We can even find all cores with this property and select the one with the smallest size.
(3) With the data structure described above, it is not necessary to reduce the set of saved cores in each call to the function maxsat_solve. It can be done from time to time, when the set of saved cores becomes too big. Moreover, I can imagine other rules for selecting non-reusable cores: a small number of reuses, or a long time since the last reuse.
(4) Finally, a small technical tip: all calls to printf should be replaced with reportf and made conditional on (opt_verbosity > 1).
Your second main assumption is generally true in IPAMIR: a soft clause contains only one literal. The first one is not: the interface allows changing the weight of a soft literal to any nonnegative value. But the technique with saved cores should also work in this general case. Best Regards,
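As a rough sketch of (2), using standard containers for brevity (the real code would use Minisat's Map and vec, with literals encoded as ints, e.g. via Minisat::toInt; apart from the existing name findReusableCore, all names below are made up):

#include <unordered_map>
#include <vector>

// Occurrence lists over the saved cores.
struct CoreIndex {
    std::unordered_map<int, std::vector<int>> occurs;  // literal -> indices of cores containing it
    std::vector<int> core_size;                        // core_size[y] = |saved core y| (the vec L)

    // Returns the index of the smallest saved core whose literals all occur in
    // the assumption set A (literals given as int keys), or -1 if none does.
    int findReusableCore(const std::vector<int>& A) const {
        std::vector<int> remaining = core_size;        // the TL counters
        int best = -1;
        for (int lit : A) {
            auto it = occurs.find(lit);
            if (it == occurs.end()) continue;
            for (int y : it->second)
                if (--remaining[y] == 0 &&             // every literal of core y was seen in A
                    (best < 0 || core_size[y] < core_size[best]))
                    best = y;
        }
        return best;
    }
};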
-
Hi Gaspard, you can create a class named, for example, IpamirCores, and put its definition in the files IpamirCores.h and IpamirCores.cc. To explain (3): since the set of saved cores can grow, we should keep its size under control and reduce it when it becomes too big. It is safe to remove any core in the store - it can be recreated by the SAT solver. About your issue: try to compare the logs of UWrMaxSat obtained with the option -v2 when reusable cores are switched on/off. Are you using SCIP? Can you see "Switching to binary search ..." in the logs? If yes, try to switch it off with the option -no-bin. Best, Marek
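If it helps, one possible shape for that "reduce from time to time" policy (purely a sketch; CoreStats, reuse_count, last_reused and pruneSavedCores are hypothetical names, and the caller would rebuild saved_cores from the returned indices):

#include <algorithm>
#include <vector>

// Per-core statistics tracked alongside saved_cores.
struct CoreStats { int reuse_count; int last_reused; };

// Keep only the max_cores most useful cores: reused most often, ties broken by
// most recent reuse.  Dropping a core is safe - the SAT solver can rediscover it.
void pruneSavedCores(const std::vector<CoreStats>& stats, size_t max_cores,
                     std::vector<int>& kept_indices)
{
    kept_indices.clear();
    for (int i = 0; i < (int)stats.size(); ++i) kept_indices.push_back(i);
    if (stats.size() <= max_cores) return;
    std::sort(kept_indices.begin(), kept_indices.end(), [&](int a, int b) {
        if (stats[a].reuse_count != stats[b].reuse_count)
            return stats[a].reuse_count > stats[b].reuse_count;
        return stats[a].last_reused > stats[b].last_reused;
    });
    kept_indices.resize(max_cores);   // indices of the cores to keep
}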
-
Hi Gaspard, you are right - elements of global_assumptions (literals from ipamir_assume) are removed from core_mus, so they are not present in saved_cores. A possible solution is to copy the vector sat_solver.conflict into saved_cores instead of core_mus and, in order to reuse it, to copy from saved_cores to sat_conflicts before the removal. The procedure for finding a reusable core should then also check the vectors global_assumptions and harden_assump, besides assump_ps. Best,
-
Hi again, did you manage to fix saved_cores? Best Regards,
-
Hi Marek, Sorry for the delay. I wanted to test another, slightly better approach where I simply prevent the MaxSAT solver from saving cores if they contain global assumptions (using the boolean variable global_assumptions_in_core):

} else { // UNSAT returned
    global_assumptions_in_core = false; // Boolean variable to check if the sat_solver.conflict contains global assumptions
    if (sat_solver.conflict.size() == 0) break; // unconditional UNSAT
    sat_solver.conflict.copyTo(sat_conflicts);
    if (global_assumptions.size() > 0 || harden_assump.size() > 0) {
        int j = 0;
        Sort::sort(harden_assump);
        for (int i = 0; i < sat_conflicts.size(); i++) { // remove global assumptions from sat_conflicts (core)
            if (global_assump_vars.at(var(sat_conflicts[i]))) {
                global_assumptions_in_core = true;
                continue;
            }
            if (Sort::bin_search(harden_assump,sat_conflicts[i]) >= 0) {
                global_assumptions_in_core = true;
                continue;
            }
            if (j < i) sat_conflicts[j] = sat_conflicts[i];
            j++;
        }
        if (j == 0) break; // unconditional UNSAT
        if (j < sat_conflicts.size()) sat_conflicts.shrink(sat_conflicts.size() - j);
    }
    if (assump_ps.size() == 0 && assump_lit == lit_Undef ||
        opt_minimization == 0 && sat_conflicts.size() == 1 && sat_conflicts[0] == ~assump_lit) break;
    {
    CreateConstrainsUnsatCore:
        // ...
        if (reuse_previous_cores && reusable_core_idx == -1 && !global_assumptions_in_core) {
            // Save the new core, as well as the constraints linked to it
        }

I assume your proposed technique (saving sat_solver.conflict instead of core_mus into saved_cores and checking whether a core is reusable against assump_ps, harden_assump and global_assumptions) wouldn't work either, due to the same issue mentioned above?

} else { // UNSAT returned
CreateConstrainsUnsatCore:
    if (reuse_previous_cores && reusable_core_idx == -1) {
        new_saved_cores.push();
        new (&new_saved_cores[new_saved_cores.size() - 1]) Minisat::vec<Lit>();
        sat_solver.conflict.copyTo(new_saved_cores[new_saved_cores.size() - 1]);
        new_core_to_constrains_lits.push();
        new (&new_core_to_constrains_lits[new_core_to_constrains_lits.size() - 1]) Minisat::vec<Lit>();
    }
    if (reuse_previous_cores && reusable_core_idx >= 0) {
        // Reuse the core
        saved_cores[reusable_core_idx].copyTo(sat_solver.conflict);
    } else {
        if (sat_solver.conflict.size() == 0) break; // unconditional UNSAT
        sat_solver.conflict.copyTo(sat_conflicts);
    }

So for now, I'm sticking with the fix where I don't save cores found during MaxSAT calls that have global assumptions. But I would be very interested in your thoughts on why the boolean variable
Best Regards,
-
Hi Gaspard, to avoid calls to core minimization, a core from sat_solver.conflict should be saved in two parts: core_mus and, say, core_ext, where both parts are checked to qualify the core as reusable, and if it is, then only the first part is copied to core_mus. Concerning the incorrect results, I suspect the code where old literals are replaced by new ones in saved_cores: a mapping old-to-new is created based only on identical positions in the corresponding vectors. But I am not sure. Regards,
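A sketch of the two-part save suggested above (a hypothetical struct, assuming the Minisat types used elsewhere in the solver):

// core_mus holds the minimized core that feeds the constraint construction;
// core_ext holds the remaining literals of the raw sat_solver.conflict
// (e.g. the removed global/hardened assumptions).  A saved core qualifies as
// reusable only if the literals of both parts are found among assump_ps,
// global_assumptions and harden_assump; when it is reused, only core_mus is
// copied back, so the expensive core minimization does not have to be repeated.
struct TwoPartCore {
    Minisat::vec<Minisat::Lit> core_mus;  // minimized part, reused directly
    Minisat::vec<Minisat::Lit> core_ext;  // extra conflict literals, checked only
};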
-
Hello,
Thanks again for your solver; I use it for my PhD thesis (on the subject of automated planning using SAT solvers) and it helps me achieve amazing results! :)
I have a question regarding the use of a known lower bound before looking for a solution.
If I have prior knowledge of a lower bound for the cost of the best possible solution, is it possible to incorporate this information into the solver? From what I've understood (I may be wrong), after finding an initial solution, the solver determines a lower bound and attempts to find a solution with this lower bound. If unsuccessful, it increments the bound until a solution is found or returns to the cost of the initial solution.
For my problem, I have a reliable lower bound, which consistently reflects the cost of the optimal solution from the previous call (I use the incremental mode with the IPAMIR interface). I've attempted to modify the maxsat_solve function by setting LB_goalvalue to this lower bound, but I haven't been able to achieve optimal solutions using this approach. It seems the issue might lie with assump_ps, which I suspect needs to be initialized differently when incorporating a lower bound, though I'm unsure of the correct method to do so.
If possible, could you please advise me on how to modify the maxsat_solve function to effectively incorporate a lower bound? Thank you for your assistance!
Best regards,
Gaspard