Commit f550f12f authored by Emmanuel Thomé

minor typos

parent 00295141
@@ -165,7 +165,7 @@ $CADO_BUILD/sieve/ecm/precompbatch -poly dlp240.poly -lim1 0 -lim0 536870912 -ba
Then a typical benchmark is as follows:
```shell
time $CADO_BUILD/sieve/las -v -poly dlp240.poly -t auto -fb0 $DATA/dlp240.fb0.gz -allow-compsq -qfac-min 8192 -qfac-max 100000000 -allow-largesq -A 31 -lim1 0 -lim0 536870912 -lpb0 35 -lpb1 35 -mfb1 250 -mfb0 70 -batchlpb0 29 -batchlpb1 28 -batchmfb0 70 -batchmfb1 70 -lambda1 5.2 -lambda0 2.2 -batch -batch1 $DATA/dlp240.batch1 -sqside 0 -bkmult 1.10 -q0 150e9 -q1 300e9 -fbc /tmp/dlp240.fbc -random-sample 2048
```
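The `-random-sample 2048` option makes `las` sieve only a random sample of special-q in the `[q0,q1]` range, so the total sieving cost must be extrapolated from the sample. A minimal sketch of that extrapolation, with entirely hypothetical numbers (the sample timing and the special-q count below are made up for illustration, not measured values from this document):

```python
# Hypothetical figures for illustration only: a 2048 special-q sample
# that took 6 core-hours in total, extrapolated to a q-range assumed to
# contain about 3e9 special-q.
sample_size = 2048
sample_core_hours = 6.0        # assumed total core-time for the sample
specialq_in_range = 3e9        # assumed number of special-q in [q0, q1]

core_hours_per_sq = sample_core_hours / sample_size
total_core_years = core_hours_per_sq * specialq_in_range / (24 * 365)
print(f"estimated sieving cost: {total_core_years:.0f} core-years")
```

The quality of the estimate depends on the sample being uniform over the q-range, which is what `-random-sample` provides.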
@@ -236,7 +236,7 @@ DATA=$DATA CADO_BUILD=$CADO_BUILD MPI=$MPI nrows=37000000 density=250 nthreads=3
This second method reports about 3.1 seconds per iteration. Allowing for
some inaccuracy, these experiments are sufficient to build confidence
that the time per iteration in the krylov (a.k.a. "sequence") step of
block Wiedemann is close to 3 seconds per iteration, perhaps slightly less.
The time per iteration in the mksol (a.k.a. "evaluation") step is in the
same ballpark. The time for krylov+mksol can then be estimated as the
product of this timing with `(1+n/m+1/n)*N`, with `N` the number of rows,
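This estimate can be sketched numerically. In the snippet below, `N = 37_000_000` matches the `nrows` value used in the benchmark above; the blocking factors `m` and `n` are illustrative placeholders, not values taken from this document:

```python
# Sketch of the krylov+mksol cost estimate from the text:
#   total ≈ t_iter * (1 + n/m + 1/n) * N
# t_iter is the per-iteration timing measured above; m and n are
# hypothetical block Wiedemann blocking factors chosen for illustration.
t_iter = 3.0        # seconds per iteration (from the benchmark)
N = 37_000_000      # number of rows of the matrix
m, n = 48, 16       # illustrative blocking factors (assumption)

total_seconds = t_iter * (1 + n / m + 1 / n) * N
print(f"krylov+mksol estimate: {total_seconds:.0f} s "
      f"(about {total_seconds / 86400:.0f} days of iteration time)")
```

Note that the `n` krylov sequences are independent, so this aggregate iteration time can be spread over several concurrent jobs.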
@@ -265,8 +265,8 @@ global q-range.
Since we do not expect anyone to spend the same amount of computing
resources to perform exactly the same computation again, we provide in the
[`dlp240-rel_count`](dlp240-rel_count) file the count of how many
(non-unique) relations were produced for each 1G special-q sub-range.
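Before plotting, such a per-range count file can be parsed and summarized with a few lines of code. A hedged sketch: the two-column `q_start count` layout used below is an assumption made for illustration, not the documented format of `dlp240-rel_count`:

```python
# Hedged sketch: parse a per-sub-range relation count listing and total
# it up.  The inline sample data and its two-column layout are invented
# for illustration; the real dlp240-rel_count format may differ.
sample = """\
150000000000 10500000
151000000000 10300000
152000000000 10100000
"""

counts = []
for line in sample.splitlines():
    q_start, count = line.split()
    counts.append((int(q_start), int(count)))

total = sum(c for _, c in counts)
print(f"{len(counts)} sub-ranges, {total} relations in total")
```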
We can then have a visual plot of this data, as shown in
[`dlp240-plot_rel_count.pdf`](dlp240-plot_rel_count.pdf), where the
@@ -292,7 +292,7 @@ undergo filtering in order to produce a linear system. The process is as
follows.
The filtering follows roughly the same general workflow as in the
[RSA-240 case](../rsa240/filtering.md), with some notable changes:
- not one, but two programs must be used to generate important companion
files beforehand:
```
@@ -322,9 +322,10 @@ $CADO_BUILD/filter/dup1 -prefix dedup -out $DATA/dedup/ -basepath $DATA -filelis
grep '^# slice.*received' $DATA/dup1.$EXP.stderr > $DATA/dup1.$EXP.per_slice.txt
```
This first pass takes about 3 hours (if done on the full data set).
Numbers of relations per slice are printed by the program and must be
saved for later use (hence the `$DATA/dup1.$EXP.per_slice.txt` file).
The second pass of duplicate removal works independently on each of the
non-overlapping slices. The number of slices can thus be used as a sort
of time-memory tradeoff (here, `-n 2` tells the program to do `2^2=4`
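The slicing idea behind this two-pass scheme can be sketched as follows. Relations are routed to one of `2^n` buckets by a hash, so the second pass can deduplicate each bucket independently of the others. This is a hedged illustration only: it uses Python's built-in `hash` and toy `(a, b)` pairs, whereas `dup1` uses its own hash function and relation format:

```python
# Illustration of hash-based slice splitting for duplicate removal.
# Python's hash() stands in for the hash actually used by dup1.
n = 2
num_slices = 2 ** n            # "-n 2" => 2^2 = 4 slices

relations = [(3, 7), (5, 11), (3, 7), (8, 1)]   # toy (a, b) pairs, one duplicate
slices = [set() for _ in range(num_slices)]
for rel in relations:
    slices[hash(rel) % num_slices].add(rel)     # pass 1: route to a slice

unique = sum(len(s) for s in slices)            # pass 2: dedup per slice
print(f"{unique} unique relations across {num_slices} slices")
```

Because a duplicate always hashes to the same slice, no cross-slice comparison is ever needed, which is what makes the slice count a time-memory knob.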
@@ -387,14 +388,19 @@ We did several filtering experiments based on the DLP-240 data set, as
relations kept coming in. For each of these experiments, we give the
number of raw relations, the number of relations after the initial
"purge" step, as well as the number of rows of the final matrix after
"merge", for target densities d=100, d=150, and d=200.
"merge", for target densities d=150, d=200, and d=250.
| | rels | purged | d=150 | d=200 | d=250
| -------------|-------|--------|-------|-------|------
| experiment 4 | 2.07G | 1.87G | 51.6M | 46.1M | (skip)
| experiment 5 | 2.30G | 1.59G | 45.2M | 40.4M | (skip)
| experiment 6 | 2.38G | 1.52G | 43.0M | 39.0M | (skip)
| experiment 7 | 2.38G | 1.50G | 42.9M | 38.9M | 36.2M
(experiment 7 is basically experiment 6 with a minor tweak: in
`singletons_and_clique_removal()` in `purge.c`, the value of `nsteps` was
doubled, to allow slightly better accuracy with that step.)
Each of these experiments produced a matrix, and it was possible to run a
few iterations of each, in order to guide the final choice. For this,
a single command line is sufficient. For consistency with the other
......