Training global codebook from multiple timepoints
Currently, the global codebook is created from all planes in the dataset, but only from a single timepoint. This was fine because our test datasets had only one timepoint. With datasets that have multiple timepoints, this could lead to worse compression results, mainly a higher compression error.
The first idea is to load the data from all planes and all timepoints, but that would most likely exhaust the system memory. I think the better solution is to sample the dataset and choose training planes across all timepoints. We would probably add a parameter limiting the maximum memory used during compression, which would control the sample size.
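A minimal sketch of the sampling idea above. All names here are hypothetical (not from the actual codebase): the function takes the dataset dimensions, the size of one plane in bytes, and the proposed memory-limit parameter, and picks training planes evenly spread across all timepoints so the loaded data stays under the limit.

```python
import itertools

def select_training_planes(num_timepoints, num_planes,
                           plane_bytes, memory_limit_bytes):
    """Pick (timepoint, plane) pairs for codebook training, evenly
    spread across all timepoints, so that loading them stays under
    the memory limit. Hypothetical helper, not the real implementation."""
    all_pairs = list(itertools.product(range(num_timepoints),
                                       range(num_planes)))
    # How many planes fit into the memory budget (at least one).
    max_planes = max(1, memory_limit_bytes // plane_bytes)
    if len(all_pairs) <= max_planes:
        return all_pairs  # everything fits; no sampling needed
    # Take every k-th pair so the sample covers all timepoints uniformly.
    step = len(all_pairs) / max_planes
    return [all_pairs[int(i * step)] for i in range(max_planes)]
```

For example, with 4 timepoints, 10 planes each, 1 kB per plane and a 5 kB budget, this selects 5 planes spread over all 4 timepoints instead of loading all 40.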
In order to implement this, #3 (closed) must be finished first.
Edited by Vojtech Moravec