:- module(sudoku_strong, []).

% (setq module-query "qcompile(util('sudoku-strong')), module(sudoku_strong).")
% ?- qcompile(util('sudoku-strong')), module(sudoku_strong).

% ?- module(sudoku_strong).
% Hi all,

% In the most recent git version of SWI 5.7, the all_distinct/1 constraint
% has become significantly stronger. all_distinct/1 has always been stronger
% than all_different/1; for example:

% ?- Vs = [X,Y,Z], Vs ins 1..2, all_different(Vs).
%   %@ Vs = [X, Y, Z],
%   %@ X in 1..2,
%   %@ all_different([X, Y, Z]),
%   %@ Y in 1..2,
%   %@ Z in 1..2.

% whereas all_distinct/1 detects the inconsistency:

%   ?- Vs = [X,Y,Z], Vs ins 1..2, all_distinct(Vs).
%   %@ false.

% Now, all_distinct/1 is even stronger (arc-consistent). As an example,
% consider the recently posted Sudoku program:

:- use_module(library(clpfd)).
:- use_module(util(matrix)).

sudoku(Rows) :-
        length(Rows, 9), maplist(length_(9), Rows),
        append(Rows, Vs), Vs ins 1..9,
        % maplist(all_different, Rows),
        maplist(all_distinct, Rows),
        matrix:transpose_matrix(Rows, Columns),
        % maplist(all_different, Columns),
        maplist(all_distinct, Columns),
        Rows = [A,B,C,D,E,F,G,H,I],
        blocks(A, B, C),
        blocks(D, E, F),
        blocks(G, H, I).

% length_(L, Ls): Ls is a list of length L (argument order suits maplist/2).
length_(L, Ls) :- length(Ls, L).

% blocks/3 walks three consecutive rows three columns at a time and
% constrains each resulting 3x3 block to contain distinct values.
blocks([], [], []).
blocks([A,B,C|Bs1], [D,E,F|Bs2], [G,H,I|Bs3]) :-
        all_distinct([A,B,C,D,E,F,G,H,I]),
        blocks(Bs1, Bs2, Bs3).

problem(3, P) :-
        P = [[1,_,_,_,_,_,_,_,_],
             [_,_,2,7,4,_,_,_,_],
             [_,_,_,5,_,_,_,_,4],
             [_,3,_,_,_,_,_,_,_],
             [7,5,_,_,_,_,_,_,_],
             [_,_,_,_,_,9,6,_,_],
             [_,4,_,_,_,6,_,_,_],
             [_,_,_,_,_,_,_,7,1],
             [_,_,_,_,_,1,_,3,_]].

problem(4, P) :- % from iPhone App.
        P = [[9,_,2,4,_,_,_,_,7],
             [_,_,4,_,_,_,_,1,_],
             [_,_,_,7,_,_,_,9,2],
             [7,5,_,_,_,8,_,_,_],
             [2,_,_,_,_,_,7,_,_],
             [_,_,_,1,7,_,_,2,_],
             [5,_,_,_,2,_,_,_,_],
             [6,_,_,_,_,_,_,7,_],
             [_,_,_,_,_,4,6,3,1]].

problem(5, P) :- % from iPhone App.
        P = [[_,_,2,4,_,_,_,_,7],
             [_,_,4,_,_,_,_,1,_],
             [_,_,_,7,_,_,_,9,2],
             [7,5,_,_,_,8,_,_,_],
             [2,_,_,_,_,_,7,_,_],
             [_,_,_,1,7,_,_,2,_],
             [5,_,_,_,2,_,_,_,_],
             [6,_,_,_,_,_,_,7,_],
             [_,_,_,_,_,4,6,3,1]].

% ?- qcompile(util('sudoku-strong')), module(sudoku_strong).
% ?- help(labeling).

% ?- problem(4, S), sudoku(S), maplist(writeln, S).

% ?- time((problem(5, S), sudoku(S))), append(S, U),
%    labeling([], U), maplist(writeln, S).

% ?- time((problem(4, S), sudoku(S))), append(S, U),
%    labeling([], U), maplist(writeln, S).

% ?- findall(S, (problem(5, S), sudoku(S), append(S, U),
%    labeling([], U)), V), length(V, L).

% ?- findall(S, (problem(4, S), sudoku(S), append(S, U),
%    labeling([], U)), V), length(V, L).
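
% The queries above use labeling/2 with the default options; other strategies
% can be passed as well (see help(labeling) above), for instance first-fail
% variable ordering. This extra example query is not part of the original post:
%
% ?- problem(4, S), sudoku(S), append(S, U),
%    labeling([ff], U), maplist(writeln, S).
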
% Document by Markus.
%
% This version is now more than 6 times faster than before; in fact,
% all_distinct/1 is now so strong that the unique solution is found
% without labeling:

% ?- time((problem(3, S), sudoku(S))), maplist(writeln, S).

% The algorithm now used in all_distinct/1 is due to J-C. Régin. The price
% to pay for this much smaller search space (compared to all_different/1)
% is slower propagation, often significantly slower, especially for larger
% sets of variables. I'm currently trying to speed up propagation, which
% mainly consists of finding a maximum matching and the strongly connected
% components of a bipartite graph. Please take a look at library(clpfd)'s
% source if you are interested in helping with this.
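
% As a small illustration of the stronger (arc-consistent) filtering, consider
% a query like the following (my own example, not from the original post):
% X and Y together must take the values 1 and 2, so the new all_distinct/1
% narrows Z to 3 without any labeling.
%
% ?- X in 1..2, Y in 1..2, Z in 1..3, all_distinct([X,Y,Z]).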

% Also, please let me know if you are currently using all_distinct/1 and the
% new filtering algorithm is now too slow for your use case. If you consider
% the current speed a significant regression, I will restore the previous
% all_distinct/1 and make the new filtering algorithm available as an option
% of a new all_different/2, or under a different predicate.

% Independently, I welcome suggestions on how to make options available for
% a new all_different/2. For SICStus compatibility, we could consider
% consistency(domain/bound/value) and on(dom/min/max/minmax/val) to select
% the consistency level and the triggering events. Please let me know what
% you think.
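
% Purely to illustrate what such an option list might look like (all_different/2
% does not exist yet; the options below are only the proposal above, not an API):
%
% ?- Vs = [X,Y,Z], Vs ins 1..3,
%    all_different(Vs, [consistency(domain), on(dom)]).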

% Thank you and all the best,
% Markus

% P.S.: Without querying Prolog, how far can you reduce the domains in:

% ?- test(X, Y).
test(Vs, Ds) :- length(Vs, 6),
        Ds = [1