diff --git a/docs/release-notes/SUMMARY.md b/docs/release-notes/SUMMARY.md
index ff3d13e338b50e5bf84dc090f2b6f52f79280c7f..e134a525eed22649cec92b813ac904fb666e8216 100644
--- a/docs/release-notes/SUMMARY.md
+++ b/docs/release-notes/SUMMARY.md
@@ -1,3 +1,4 @@
+- [v2.6.0](v2.6.0.md)
 - [v2.5.1](v2.5.1.md)
 - [v2.5.0](v2.5.0.md)
 - [v2.4.1](v2.4.1.md)
diff --git a/docs/release-notes/v2.6.0.md b/docs/release-notes/v2.6.0.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c1da1aaa278fd3c727d4a1ea1a12335ac603e34
--- /dev/null
+++ b/docs/release-notes/v2.6.0.md
@@ -0,0 +1,150 @@
+# declearn v2.6.0
+
+Released: 26/07/2024
+
+## Release Highlights
+
+### Group-Fairness capabilities
+
+This new version of DecLearn brings a whole new type of federated optimization
+algorithms to the party, introducing an API and various algorithms to measure
+and optimize the group fairness of the trained model over the union of clients'
+training datasets.
+
+This is the result of a year-long collaboration with Michaël Perrot and Brahim
+Erraji to design and evaluate algorithms to learn models under group-fairness
+constraints in a federated learning setting, using either newly-introduced
+algorithms or existing ones from the literature.
+
+A dedicated [guide on fairness features](../user-guide/fairness.md) was added
+to the documentation; it is the advised entry point for anyone interested in
+getting acquainted with these new features. The guide explains what
+(group-)fairness in machine learning is, what the design choices (and limits)
+of our new API are, how the API works, which algorithms are available, and how
+to write custom fairness definitions or fairness-enforcing algorithms.
+
+As noted in the guide, end-users with an interest in fairness-aware federated
+learning are very welcome to get in touch if they have feedback, questions or
+requests about the current capabilities and possible future ones.
+
+To sum it up shortly:
+
+- The newly-introduced `declearn.fairness` submodule provides an API and
+  concrete algorithms to enforce fairness constraints in a federated learning
+  process.
+- When such an algorithm is to be used, the only required modifications to an
+  existing user-defined process (sketched after this list) are to:
+    - Plug a `declearn.fairness.api.FairnessControllerServer` subclass instance
+      (or its configuration) into the `declearn.main.config.FLOptimConfig` that
+      is defined by the server.
+    - Wrap each and every client's training dataset as a
+      `declearn.fairness.api.FairnessDataset`; for instance using
+      `declearn.fairness.core.FairnessInMemoryDataset`, which is an extension
+      of the base `declearn.dataset.InMemoryDataset`.
+- There are currently three available algorithms to enforce fairness:
+    - Fed-FairGrad, defined under `declearn.fairness.fairgrad`
+    - Fed-FairBatch/FedFB, defined under `declearn.fairness.fairbatch`
+    - FairFed, defined under `declearn.fairness.fairfed`
+- In addition, `declearn.fairness.monitor` provides an algorithm to
+  merely measure fairness throughout training, typically to evaluate baselines
+  when conducting experiments on fairness-enforcing algorithms.
+- There are currently four available group-fairness criteria that can be used
+  with the previous algorithms:
+    - Accuracy Parity
+    - Demographic Parity
+    - Equalized Odds
+    - Equality of Opportunity
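+
+As a minimal sketch, the server- and client-side setup could look as follows.
+The fairness specs and argument names below (the `fairness` field, the
+`"fairgrad"` algorithm name, `s_attr`) are illustrative assumptions, not
+verified signatures; please refer to the fairness guide and API docs.
+
+```python
+from declearn.fairness.core import FairnessInMemoryDataset
+from declearn.main.config import FLOptimConfig
+
+# Server side: plug fairness specs into the federated optimization config.
+# The 'fairness' field values here are illustrative.
+optim = FLOptimConfig.from_params(
+    aggregator="averaging",
+    client_opt={"lrate": 0.01},
+    server_opt={"lrate": 1.0},
+    fairness={"algorithm": "fairgrad", "f_type": "demographic_parity"},
+)
+
+# Client side: wrap the training data as a FairnessDataset, exposing the
+# sensitive attribute(s) that define groups (argument names are assumed).
+dataset = FairnessInMemoryDataset(
+    data="data/train.csv", target="label", s_attr=["gender"]
+)
+```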
+
+### Scheduler API for learning rates
+
+DecLearn 2.6.0 also introduces a long-awaited feature: scheduling rules for the
+learning rate (and/or weight decay factor), which adjust the scheduled value
+throughout training based on the number of training steps and/or rounds
+already taken.
+
+This takes the form of a new (and extensible) `Scheduler` API, implemented
+under the new `declearn.optimizer.schedulers` submodule. Instances of
+`Scheduler` subclasses (or their JSON-serializable specs) may be passed to
+`Optimizer.__init__` instead of float values to specify the `lrate` and/or
+`w_decay` parameters, resulting in time-varying values being computed and used
+rather than constant ones.
+
+`Scheduler` is easily extensible, enabling end-users to write their own rules.
+At the moment, DecLearn natively provides:
+
+- Various kinds of decay (step-, multi-step- or round-based; linear,
+  exponential, polynomial...);
+- Cyclic learning rates (based on [this](https://arxiv.org/pdf/1506.01186)
+  and [that](https://arxiv.org/abs/1608.03983) paper);
+- Linear warmup (step- or round-based; combinable with another scheduler
+  to use after the warmup period).
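+
+As an illustration, here is a minimal sketch of plugging a scheduler into an
+`Optimizer`. The `ExponentialDecay` class name and its arguments are
+assumptions made for the example; check the API docs for actual names and
+signatures.
+
+```python
+from declearn.optimizer import Optimizer
+from declearn.optimizer.schedulers import ExponentialDecay
+
+# Use a learning rate that decays after each training step, rather than
+# a constant one. Argument names below are illustrative.
+lrate = ExponentialDecay(base=0.1, rate=0.99, step_level=True)
+optim = Optimizer(lrate=lrate, w_decay=0.001)
+```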
+
+The [user-guide on the Optimizer API](../user-guide/optimizer.md) was updated
+to cover this new feature, and remains the preferred entry point for new users
+who want to get a grasp of the overall design and specific features offered by
+this API. Users already familiar with `Optimizer` may simply check out the API
+docs for the new [`Scheduler`][declearn.optimizer.schedulers.Scheduler] API.
+
+### `declearn.training` submodule reorganization
+
+DecLearn 2.6.0 introduces the `declearn.training` submodule, which merely
+relocates some unchanged classes previously made available under
+`declearn.main.utils` and `declearn.main.privacy`. The mapping of changes
+is as follows:
+
+- `declearn.main.TrainingManager` -> `declearn.training.TrainingManager`
+- `declearn.main.privacy` -> `declearn.training.dp` (which remains a
+   manual-import submodule relying on the availability of the optional
+   `opacus` third-party dependency)
+
+The former `declearn.main.privacy` is deprecated and will be removed in
+DecLearn 2.8 and/or 3.0. It is kept for now as an alias re-export of
+`declearn.training.dp`, which raises a `DeprecationWarning` upon manual
+import.
+
+The `declearn.main.utils` submodule is kept, but importing `TrainingManager`
+from it is deprecated and will also be removed in version 2.8 and/or 3.0.
+For now, the class is merely re-exported from it.
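+
+In practice, imports should be updated along these lines:
+
+```python
+# As of DecLearn 2.6:
+from declearn.training import TrainingManager
+from declearn.training import dp  # requires the optional 'opacus' dependency
+
+# Deprecated equivalents, slated for removal in DecLearn 2.8 and/or 3.0:
+# from declearn.main.utils import TrainingManager
+# from declearn.main import privacy  # warns upon manual import
+```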
+
+### Evaluation rounds can now be skipped
+
+Prior to this release, `FederatedServer` always deterministically ran training
+and evaluation rounds in alternation as part of a Federated Learning process.
+This can now be adjusted, using the new `frequency` parameter of
+`declearn.main.config.EvaluateConfig` (_i.e._ the "evaluate" field of the
+`declearn.main.config.FLRunConfig` instance, dict or TOML file provided as
+input to `FederatedServer.run`).
+
+By default, `frequency=1`, meaning an evaluation round is run after each and
+every training round. Set `frequency=N` instead, and evaluation will only
+occur after the N-th, 2*N-th, ... training rounds. Note that if the server
+is checkpointing results, an evaluation round will always be run after the
+last training round.
+
+Note that a similar parameter is available for `FairnessConfig`, albeit working
+slightly differently, because fairness evaluation rounds occur _before_
+training. Hence, with `frequency=N`, fairness evaluation and constraint
+updates will occur before the 1st, N+1-th, 2*N+1-th, ... training rounds. Note
+that if the server is checkpointing results, a fairness round will always be
+run after the last training round, for the sake of measuring the fairness
+levels of the final model.
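+
+As an illustration, here is a minimal sketch of configuring both frequencies
+programmatically. The surrounding field values are illustrative, and the
+`fairness` field is assumed to map to the matching `FairnessConfig` entry;
+`frequency` is the parameter introduced by this release.
+
+```python
+from declearn.main.config import FLRunConfig
+
+run_cfg = FLRunConfig.from_params(
+    rounds=12,
+    register={"min_clients": 2},
+    training={"n_epoch": 1, "batch_size": 32},
+    # Run an evaluation round after the 3rd, 6th, ... training rounds.
+    evaluate={"batch_size": 128, "frequency": 3},
+    # Run fairness rounds before the 1st, 4th, ... training rounds.
+    fairness={"batch_size": 128, "frequency": 3},
+)
+```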
+
+## Other changes
+
+A few minor changes ship with this new release that are mostly of interest
+to developers, including end-users writing custom algorithms or bridging
+DecLearn APIs within their own orchestration code.
+
+- The `declearn.secagg.messaging.aggregate_secagg_messages` function was
+  introduced as a refactoring of previous backend code to combine and
+  decrypt an ensemble of client-emitted `SecaggMessage` instances into a
+  single aggregated cleartext `Message` (see the sketch after this list).
+- The `declearn.utils.TomlConfig` class, from which all TOML-parsing config
+  dataclasses of DecLearn inherit, now has a new `autofill_fields` class
+  attribute to indicate fields that may be left empty by users and will then
+  be dynamically filled when parsing all fields. For instance, this makes it
+  possible to omit the `evaluate` section from a TOML file that is being
+  parsed into an `FLRunConfig` instance.
+- New unit tests were added, most notably for `FederatedServer`, which now
+  benefits from proper unit-test coverage verifying its high-level logic
+  and the coherence of its actions with inputs and documentation, while
+  its overall behavior keeps being assessed using functional tests.
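+
+As an illustration, the new SecAgg helper may be used along these lines. This
+is a sketch assuming a dict mapping client names to received messages and a
+matching server-side decrypter; check the function's documentation for its
+actual signature.
+
+```python
+from declearn.secagg.messaging import aggregate_secagg_messages
+
+# 'replies' maps client names to the SecaggMessage instances they emitted;
+# 'decrypter' is the server-side declearn.secagg decrypter in use.
+message = aggregate_secagg_messages(replies, decrypter)
+```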