# las memory use goes out of control with adjust-strategy 2

Report by @x-EHall on the cado-nfs mailing list.

https://sympa.inria.fr/sympa/arc/cado-nfs/2021-04/msg00000.html

Current master (1d08d432, hence including the fix for #30012 (closed)), with adjust-strategy 0:

```
$ ./build/localhost/sieve/las -poly /tmp/ed.poly -q0 540314000 -I 15 -q1 540316000 -lim0 536000000 -lim1 536000000 -lpb0 32 -lpb1 32 -mfb0 64 -mfb1 94 -ncurves0 20 -ncurves1 20 -fb1 /tmp/ed.roots1.gz -fbc /tmp/ed.fbc -t auto -sqside 1 -adjust-strategy 0 | tee /tmp/rels.txt | egrep '(multiplier|Expected.*footprint)'
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 9.20 GB
# Updating 1s bucket multiplier to 1.000*33054/32944*1.1=1.054
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 9.63 GB
# Global 1s bucket multiplier has already grown to 1.054. Not updating, since this will cover 1.000*33002/32976*1.1=1.051
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 9.63 GB
# Updating 1s bucket multiplier to 1.000*33070/32944*1.1=1.054
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 10.08 GB
# Global 1s bucket multiplier has already grown to 1.110. Not updating, since this will cover 1.000*32981/32944*1.1=1.051
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 10.08 GB
[...]
```
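As a side note, the three "Expected memory use" figures in this strategy-0 log are consistent with a simple linear model: expected memory = fixed overhead + multiplier × bucket storage. The short script below is my own reading of the log, not anything las computes or prints; the names `bucket_gb` and `overhead_gb` are made up for the illustration. It recovers roughly 8 GB of bucket storage, which is why any growth of the multiplier translates directly into gigabytes:

```python
# (multiplier, expected GB) pairs read off the strategy-0 log above
pairs = [(1.000, 9.20), (1.054, 9.63), (1.110, 10.08)]
(m0, g0), (m1, g1), (m2, g2) = pairs

# Fit: expected = overhead + multiplier * bucket, from the first two points
bucket_gb = (g1 - g0) / (m1 - m0)    # ~8.0 GB of bucket storage
overhead_gb = g0 - m0 * bucket_gb    # ~1.2 GB of everything else

# Cross-check against the third log line (10.08 GB)
predicted = overhead_gb + m2 * bucket_gb
print(f"bucket storage ~ {bucket_gb:.2f} GB, overhead ~ {overhead_gb:.2f} GB")
print(f"predicted at multiplier {m2}: {predicted:.2f} GB")
```

The cross-check lands on 10.08 GB, matching the third log line, so under this reading almost the entire footprint scales linearly with the multiplier.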

Same commit, with adjust-strategy 2. Thanks to the fix for #30012 (closed) we avoid potential composites in relations, but we are killed by the large variance in the number of updates per bucket caused by bucket-sieving projective primes.

```
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 11.89 GB
# Updating 1s bucket multiplier to 1.000*33038/32976*1.1=1.052
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 12.45 GB
# Updating 1s bucket multiplier to 1.000*33111/32976*1.1=1.054
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 13.07 GB
# Updating 1s bucket multiplier to 1.109*47763/44464*1.1=1.251
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 14.59 GB
# Updating 1s bucket multiplier to 1.109*52052/40336*1.1=1.503
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 19.36 GB
```

Now with !29:

```
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 11.89 GB
# Updating 1s bucket multiplier to 1.000*33083/32976*1.1=1.053
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 12.47 GB
# Updating 1s bucket multiplier to 1.000*33179/32976*1.1=1.056
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 13.11 GB
# Global 1s bucket multiplier has already grown to 1.113. Not updating, since this will cover 1.000*36443/36416*1.1=1.051
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 13.11 GB
# Global 1s bucket multiplier has already grown to 1.113. Not updating, since this will cover 1.000*40247/40128*1.1=1.053
# Expected memory use for 1 binding zones and 4 1-threaded jobs per zone, counting 41 MB of base footprint: 13.11 GB
```

(Yes, adjust-strategy 2 inherently adds some memory cost.)

The point here is the growth of the multiplier: !29 keeps it under control, whereas the fix for #30012 (closed) alone does not. So !29 clearly goes in the right direction.
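To make the contrast explicit, here are the successive multiplier values read off the three runs above (this is just a restatement of the logs; the labels are mine):

```python
# Successive "1s bucket multiplier" values, read off each run's log
runs = {
    "strategy 0, master": [1.054, 1.054, 1.110],
    "strategy 2, master": [1.052, 1.054, 1.251, 1.503],
    "strategy 2, !29":    [1.053, 1.056, 1.113, 1.113],
}
for name, mults in runs.items():
    # growth relative to the first observed value
    growth = mults[-1] / mults[0]
    print(f"{name:20s} final {mults[-1]:.3f}  (x{growth:.2f} growth)")
```

Both the strategy-0 run and the !29 strategy-2 run stabilize around 1.11, while master with strategy 2 keeps compounding past 1.5, dragging the expected footprint from 11.89 GB up to 19.36 GB.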