Commit dd8f404b authored by Vojtech Moravec

Added support for cached codebook into benchmark.

parent 25b6f9bf
# DataCompressor usage
Help output:
```
usage: azgracompress.DataCompressor [options] input
-b,--bits <arg> Bit count per pixel [Default 8]
-bench,--benchmark Benchmark
-c,--compress Compress 16 bit raw image
-cbc,--codebook-cache <arg> Folder of codebook caches
-d,--decompress Decompress 16 bit raw image
-h,--help Print help
-i,--inspect Inspect the compressed file
-o,--output <arg> Custom output file
-rp,--reference-plane <arg> Reference plane index
-sq,--scalar-quantization Use scalar quantization.
-tcb,--train-codebook Train codebook and save learned
codebook to cache file.
-v,--verbose Make program verbose
-vq,--vector-quantization <arg> Use vector quantization. Need to pass
vector size eg. 9,9x1,3x3
-wc,--worker-count <arg> Number of worker threads
```
### Quantization types (QT):
- This program supports two (*three*) different quantization types:
- Scalar quantization, selected by `-sq` or `--scalar-quantization`
- Vector quantization, selected by `-vq` or `--vector-quantization`
- Vector quantization requires you to input the vector dimension after the flag
- For one-dimensional row vectors you can set the length as `9` or `9x1`
- For two-dimensional matrix vectors the dimensions are set in the `DxD` format, e.g. `3x3`, `5x3` (see the example below this list)
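
The following invocation sketch is illustrative only: the file name and image dimensions are placeholders, and it assumes the `azgracompress.DataCompressor` main class from the help output above is available on the classpath.
```
# Scalar quantization, 8 bits per pixel
java azgracompress.DataCompressor -c -sq -b 8 input.raw 1920x1080x1

# Vector quantization with row vectors of length 9
java azgracompress.DataCompressor -c -vq 9 input.raw 1920x1080x1

# Vector quantization with 3x3 matrix vectors
java azgracompress.DataCompressor -c -vq 3x3 input.raw 1920x1080x1
```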
## Main program methods
### Compress
- Use with `-c` or `--compress`
- Compress the selected image planes (*Currently supporting only loading from RAW image files*).
- QT is required
- Set the bits per pixel using `-b` or `--bits` and an integer value from 1 to 8. The codebook size is equal to (*2^bits*).
- Normally the codebook is created for each image plane separately. If one wants to use a general codebook for all planes, these are the options:
- Set the reference plane index using `-rp` or `--reference-plane`. The reference plane is used to create the codebook for all planes.
- Set the cache folder with `-cbc` or `--codebook-cache`; the quantizer will look for a cached codebook matching the given input file and codebook size.
- For input file details see the Input file section; a hedged example invocation follows below.
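
A hedged example of a compress invocation with per-plane codebooks versus one general codebook built from a reference plane; the file names, dimensions, and output name are placeholders.
```
# Default: a separate codebook is trained for every plane
java azgracompress.DataCompressor -c -vq 3x3 -b 8 -o compressed.out input.raw 1041x996x946

# One general codebook, created from reference plane 10 and used for all planes
java azgracompress.DataCompressor -c -vq 3x3 -b 8 -rp 10 -o compressed.out input.raw 1041x996x946
```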
### Decompress
- Use with `-d` or `--decompress`
- Decompress the file compressed by this application.
- This method doesn't require any additional options.
### Inspect
- Use with `-i` or `--inspect`
- Inspect the compressed file: read the compressed file header and write out the information from that header.
### Train codebook
- Use with `-tcb` or `--train-codebook`
- QT is required
- This method loads all the selected input planes and creates one codebook.
- The codebook is saved to the cache folder configured by the `-o` option.
- The codebook is trained from the planes selected by the input file, see the Input file section. A hedged example of the workflow follows below.
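
A hedged sketch of the train-then-reuse workflow; the cache folder, plane range, and file names are placeholders.
```
# Train a 3x3 vector codebook from planes 0-100 and save it to the cache folder
java azgracompress.DataCompressor -tcb -vq 3x3 -b 8 -o ./codebook_cache input.raw 1041x996x946 0-100

# Compress later runs with the cached codebook instead of training per plane
java azgracompress.DataCompressor -c -vq 3x3 -b 8 -cbc ./codebook_cache -o compressed.out input.raw 1041x996x946
```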
### Benchmark
- Use with `-bench` or `--benchmark`
- QT is required.
- Runs the benchmark code on the input planes with the selected quantization type. An illustrative invocation follows below.
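
An illustrative benchmark invocation (the cache folder, output folder, and plane range are placeholders); when `-cbc` is given, the benchmark reuses the cached codebook instead of training one for every plane.
```
java azgracompress.DataCompressor -bench -sq -b 8 -cbc ./codebook_cache -o ./bench_results input.raw 1041x996x946 10-20
```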
### Input file
- The input file is required for all methods.
- Decompress and inspect require only the input file path, while the other methods also require its dimensions.
- Input file dimensions are given in the format DxDxD [D] [D-D] (see the examples after this list).
- DxDxD is the image dimension, e.g. 1920x1080x1 or 1041x996x946 (946 planes of 1041x996 images).
- [D] is an optional plane index. Only this plane will be compressed.
- [D-D] is an optional plane range. Only planes in this range will be compressed.
- D stands for integer values.
- Planes selected by the index or plane range are used for:
- Compression
- Training of codebook
- Running benchmark
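
Some illustrative input specifications; the file name is a placeholder.
```
input.raw 1920x1080x1            # a single 1920x1080 plane
input.raw 1041x996x946           # all 946 planes
input.raw 1041x996x946 5         # only plane 5
input.raw 1041x996x946 10-20     # only planes 10 to 20
```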
### Additional options:
- `-v`, `--verbose` - Make program output verbose.
- `-o`, `--output` - Set the output of compression, decompression, codebook training, and benchmark.
- `-wc`, `--worker-count` - Set the number of worker threads.
@@ -26,6 +26,9 @@ abstract class BenchmarkBase {
protected final boolean hasReferencePlane;
protected final int referencePlaneIndex;
protected final int codebookSize;
protected final boolean hasCacheFolder;
protected final String cacheFolder;
protected final boolean hasGeneralQuantizer;
protected BenchmarkBase(final String inputFile,
final String outputDirectory,
@@ -39,6 +42,10 @@ abstract class BenchmarkBase {
hasReferencePlane = false;
referencePlaneIndex = -1;
codebookSize = 256;
hasCacheFolder = false;
cacheFolder = null;
hasGeneralQuantizer = false;
}
protected BenchmarkBase(final ParsedCliOptions options) {
@@ -56,18 +63,21 @@ abstract class BenchmarkBase {
final int to = options.getToPlaneIndex();
final int count = to - from;
this.planes = new int[count + 1];
for (int i = 0; i <= count; i++) {
this.planes[i] = from + i;
}
} else {
final int planeCount = options.getImageDimension().getZ();
this.planes = new int[planeCount];
for (int i = 0; i < planeCount; i++) {
this.planes[i] = i;
}
}
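// Cached-codebook support: remember the codebook cache folder (if any);
// a single general quantizer is used when either a reference plane or a
// codebook cache folder is configured.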
hasCacheFolder = options.hasCodebookCacheFolder();
cacheFolder = options.getCodebookCacheFolder();
hasGeneralQuantizer = hasReferencePlane || hasCacheFolder;
}
@@ -6,6 +6,7 @@ import azgracompress.data.V3i;
import azgracompress.de.DeException;
import azgracompress.de.shade.ILShadeSolver;
import azgracompress.quantization.QTrainIteration;
import azgracompress.quantization.QuantizationValueCache;
import azgracompress.quantization.scalar.LloydMaxU16ScalarQuantization;
import azgracompress.quantization.scalar.ScalarQuantizer;
@@ -37,7 +38,17 @@ public class ScalarQuantizationBenchmark extends BenchmarkBase {
boolean dirCreated = new File(this.outputDirectory).mkdirs();
System.out.println(String.format("|CODEBOOK| = %d", codebookSize));
ScalarQuantizer quantizer = null;
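// Prefer a codebook loaded from the cache folder; otherwise build a general
// quantizer from the reference plane; if neither is set, a quantizer is
// trained for each plane further below.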
if (hasCacheFolder) {
QuantizationValueCache cache = new QuantizationValueCache(cacheFolder);
try {
final int[] quantizationValues = cache.readCachedValues(inputFile, codebookSize);
quantizer = new ScalarQuantizer(U16.Min, U16.Max, quantizationValues);
} catch (IOException e) {
System.err.println("Failed to read quantization values from cache file.");
e.printStackTrace();
return;
}
} else if (hasReferencePlane) {
final int[] refPlaneData = loadPlaneData(referencePlaneIndex);
if (refPlaneData.length == 0) {
System.err.println("Failed to load reference plane data.");
@@ -61,7 +72,7 @@ public class ScalarQuantizationBenchmark extends BenchmarkBase {
}
if (!hasGeneralQuantizer) {
if (useDiffEvolution) {
quantizer = trainDifferentialEvolution(planeData, codebookSize);
} else {
@@ -2,6 +2,7 @@ package azgracompress.benchmark;
import azgracompress.cli.ParsedCliOptions;
import azgracompress.data.*;
import azgracompress.quantization.QuantizationValueCache;
import azgracompress.quantization.vector.CodebookEntry;
import azgracompress.quantization.vector.LBGResult;
import azgracompress.quantization.vector.LBGVectorQuantizer;
@@ -59,7 +60,21 @@ public class VectorQuantizationBenchmark extends BenchmarkBase {
System.out.println(String.format("|CODEBOOK| = %d", codebookSize));
VectorQuantizer quantizer = null;
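// Prefer a cached codebook (looked up by input file, codebook size and
// quantization vector dimensions); otherwise fall back to training from the
// reference plane or per plane.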
if (hasCacheFolder) {
QuantizationValueCache cache = new QuantizationValueCache(cacheFolder);
try {
final CodebookEntry[] codebook = cache.readCachedValues(inputFile,
codebookSize,
qVector.getX(),
qVector.getY());
quantizer = new VectorQuantizer(codebook);
} catch (IOException e) {
e.printStackTrace();
System.err.println("Failed to read quantization vectors from cache.");
return;
}
} else if (hasReferencePlane) {
final ImageU16 plane = loadPlane(referencePlaneIndex);
if (plane == null) {
@@ -88,7 +103,7 @@ public class VectorQuantizationBenchmark extends BenchmarkBase {
final int[][] planeData = getPlaneVectors(plane, qVector);
if (!hasGeneralQuantizer) {
LBGVectorQuantizer vqInitializer = new LBGVectorQuantizer(planeData, codebookSize);
LBGResult vqResult = vqInitializer.findOptimalCodebook();
quantizer = new VectorQuantizer(vqResult.getCodebook());
@@ -101,7 +101,7 @@ public class ImageCompressor extends CompressorDecompressorBase {
header.setQuantizationType(options.getQuantizationType());
header.setBitsPerPixel((byte) options.getBitsPerPixel());
// Per-plane codebooks are used only when neither a reference plane nor a cache folder is set.
final boolean oneCodebook = options.hasReferencePlaneIndex() || options.hasCodebookCacheFolder();
header.setCodebookPerPlane(!oneCodebook);