Commit cb73e1c5 authored by ZIMMERMANN Paul's avatar ZIMMERMANN Paul

reviewed and completed filtering part

parent cd8ec12c
to perform again exactly the same computation, we provide in the
produced for each 1G special-q sub-range.
We can then have a visual plot of this data, as shown in
[`rel_count.pdf`](rel_count.pdf), where the x-coordinate denotes the
special-q (in multiples of 1G).
The plot is very regular except for special-q's
around 150G and 225G. The irregularities in these areas correspond to
the beginning of the computation when we were still adjusting our
scripts. We had two independent servers in charge of distributing sieving
tasks, each one dealing with one half of the special-q range.

In order to validate our computation, it is possible to re-compute only
one of the sub-ranges (not one in the irregular areas) and check that the
extrapolate.

## Reproducing the filtering results
Several filtering experiments were done during the sieving phase.
The final one can be reproduced as follows, with revision 492b804fc:

```shell
purge -out purged7.gz -nrels 2380725637 -outdel relsdel7.gz -keep 3 -col-min-index 0 -col-max-index 2960421140 -t 56 -required_excess 0.0 files
```

where `files` is the list of files with unique relations (output of `dup2`).
This took about 7.5 hours on the machine wurst, with 575GB of peak memory.
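For illustration, here is one way the trailing `files` argument can be assembled; this is only a sketch, where the directory layout and file names are hypothetical (the real unique-relation files are whatever `dup2` produced):

```shell
# Collect the unique-relation files (output of dup2) into the "files"
# argument expected by purge. Paths below are made up for this sketch.
workdir=$(mktemp -d)
touch "$workdir/rels.0.gz" "$workdir/rels.1.gz" "$workdir/rels.2.gz"
files=$(ls "$workdir"/rels.*.gz)
# purge ... $files   # <- the purge command above would take them here
echo "$files" | wc -l
rm -r "$workdir"
```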
The merge step can be reproduced as follows:

```shell
merge-dl -mat purged7.gz -out history250_7 -target_density 250 -skip 0 -t 28
```

and took about 20 minutes on the machine wurst, with a peak memory of 118GB.
Finally, the replay step can be reproduced as follows:

```shell
replay-dl -purged purged7.gz -his history250_7.gz -out p240.matrix7.250.bin -index p240.index7.gz -ideals p240.ideals7.gz
```

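Putting the three steps together, the following dry-run sketch prints the commands in order without executing anything; the `run` wrapper and the bare `files` placeholder are illustrative, not part of the original write-up:

```shell
# Dry-run of the filtering pipeline: purge -> merge-dl -> replay-dl.
# Flags are copied verbatim from the text; drop the "echo" in run()
# (and replace "files" with the real file list) to actually execute.
run() { echo "+ $*"; }
run purge -out purged7.gz -nrels 2380725637 -outdel relsdel7.gz -keep 3 \
    -col-min-index 0 -col-max-index 2960421140 -t 56 -required_excess 0.0 files
run merge-dl -mat purged7.gz -out history250_7 -target_density 250 -skip 0 -t 28
run replay-dl -purged purged7.gz -his history250_7.gz -out p240.matrix7.250.bin \
    -index p240.index7.gz -ideals p240.ideals7.gz
```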
## Estimating linear algebra time more precisely, and choosing parameters
## Reproducing the linear algebra results
optimized for large computations, and in particular it starts by reading
the whole database of known discrete logarithms in central memory which
was slow and required a machine with a huge amount of memory.
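The expensive part is building an in-memory image of the whole table; streaming through the file instead avoids that. A minimal sketch with `grep` on a toy two-column file (the "key value" line format is an assumption for this sketch, not the actual `dlp240.reconstructlog.dlog` format):

```shell
# Find one record in a large text table by streaming it with grep -m1,
# instead of loading the whole database into memory.
# The two-column "key value" format is illustrative only.
db=$(mktemp)
printf '101 11111\n202 22222\n303 33333\n' > "$db"
grep -m1 '^202 ' "$db"   # prints "202 22222"
rm -f "$db"
```

Since `grep -m1` stops at the first match, the lookup cost is bounded by the position of the record in the file, not by the total table size.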
In the file [`howto-descent.txt`](howto-descent.txt), we explain what we did to make our lives
simpler with this step. We do not claim full reproducibility here, since
this is admittedly hackish (a small C program is also given, that searches
in the database file without having an in-memory image). In any case,
this step cannot be done without the database file
`dlp240.reconstructlog.dlog`, which is too large (465 GB) to put in this
repository.