BioinformaticDataCompression / BigDataViewer_Core_Extension / Commits
Commit eeca5028
authored 8 years ago by Tobias Pietzsch
javadoc

parent 78f40195
Showing 2 changed files with 49 additions and 14 deletions:

src/main/java/bdv/cache/LoadingVolatileCache.java (+38 −13)
src/main/java/bdv/cache/util/FetcherThreads.java (+11 −1)
src/main/java/bdv/cache/LoadingVolatileCache.java (+38 −13)
@@ -37,9 +37,30 @@ import bdv.img.cache.VolatileGlobalCellCache;
/**
* TODO rename
* TODO revise javadoc
* A loading cache mapping keys to {@link VolatileCacheValue}s. The cache spawns
* a set of {@link FetcherThreads} that asynchronously load data for cached
* values.
* <p>
* Using {@link #createGlobal(Object, CacheHints, VolatileCacheValueLoader)}, a
* key is added to the cache, specifying a {@link VolatileCacheValueLoader} to
* provide the value for the key. After adding the key to the cache, it is
* immediately associated with a value. However, that value may be initially
* {@link VolatileCacheValue#isValid() invalid}. When the value is made valid
* (loaded) depends on the provided {@link CacheHints}, specifically the
* {@link CacheHints#getLoadingStrategy() loading strategy}. The strategy may be
* to load the value immediately, to load it immediately if there is enough IO
* budget left, to enqueue it for asynchronous loading, or to not load it at
* all.
* <p>
* Using {@link #getGlobalIfCached(Object, CacheHints)} a value for the
* specified key is returned if the key is in the cache (otherwise {@code null}
* is returned). Again, the returned value may be invalid, and when the value is
* loaded depends on the provided {@link CacheHints}.
*
* @param <K>
* the key type.
* @param <V>
* the value type.
*
* @author Tobias Pietzsch <tobias.pietzsch@gmail.com>
*/
@@ -60,6 +81,10 @@ public final class LoadingVolatileCache< K, V extends VolatileCacheValue > imple

    private volatile long currentQueueFrame = 0;

    /**
     * Create a new {@link LoadingVolatileCache} with the specified number of
     * priority levels and number of {@link FetcherThreads} for asynchronous
     * loading of cache entries.
     *
     * @param maxNumLevels
     *            the number of priority levels.
     * @param numFetcherThreads
    ...
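Read together, the class javadoc and the constructor javadoc above suggest the following call pattern. This is a minimal sketch, not code from this commit: the value type ExampleValue, the loader variable, and the CacheHints constructor arguments (strategy, queue priority, enqueue-to-front flag) are assumptions; only the constructor parameters, createGlobal, getGlobalIfCached, and isValid() are taken from the javadoc.

// Minimal usage sketch (assumptions marked). ExampleValue is an assumed
// implementation of VolatileCacheValue, and "loader" is an assumed
// VolatileCacheValueLoader for that value type.
final LoadingVolatileCache< Long, ExampleValue > cache =
        new LoadingVolatileCache<>( 4, 2 ); // 4 priority levels, 2 fetcher threads

// Assumed CacheHints constructor: (loading strategy, queue priority, enqueue-to-front).
final CacheHints hints = new CacheHints( LoadingStrategy.BUDGETED, 0, false );

// Add a key to the cache; it is immediately associated with a (possibly still
// invalid) value, which is loaded according to the loading strategy.
cache.createGlobal( 42L, hints, loader );

// Later: query without adding; null means the key was never added.
final ExampleValue value = cache.getGlobalIfCached( 42L, hints );
if ( value != null && value.isValid() )
{
    // the data has been loaded and can be used
}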
@@ -74,30 +99,30 @@ public final class LoadingVolatileCache< K, V extends VolatileCacheValue > imple
     }

     /**
-     * Get a value if it is in the cache or {@code null}. Note, that a value
-     * being in the cache only means that there is data, but not necessarily
-     * that the data is {@link VolatileCacheValue#isValid() valid}.
+     * Get the value for the specified key if the key is in the cache (otherwise
+     * return {@code null}). Note, that a value being in the cache only means
+     * that there is data, but not necessarily that the data is
+     * {@link VolatileCacheValue#isValid() valid}.
      * <p>
-     * If the value is not valid, do the following, depending on the
+     * If the value is present but not valid, do the following, depending on the
      * {@link LoadingStrategy}:
      * <ul>
-     * <li>{@link LoadingStrategy#VOLATILE}: Enqueue the entry for asynchronous
+     * <li>{@link LoadingStrategy#VOLATILE}: Enqueue the key for asynchronous
      * loading by a fetcher thread, if it has not been enqueued in the current
      * frame already.
-     * <li>{@link LoadingStrategy#BLOCKING}: Load the cell data immediately.
-     * <li>{@link LoadingStrategy#BUDGETED}: Load the cell data immediately if
-     * there is enough {@link IoTimeBudget} left for the current thread group.
+     * <li>{@link LoadingStrategy#BLOCKING}: Load the data immediately.
+     * <li>{@link LoadingStrategy#BUDGETED}: Load the data immediately if there
+     * is enough {@link IoTimeBudget} left for the current thread group.
      * Otherwise enqueue for asynchronous loading, if it has not been enqueued
      * in the current frame already.
      * <li>{@link LoadingStrategy#DONTLOAD}: Do nothing.
      * </ul>
      *
      * @param key
-     *            coordinate of the cell (comprising timepoint, setup, level,
-     *            and flattened index).
+     *            the key to query.
      * @param cacheHints
      *            {@link LoadingStrategy}, queue priority, and queue order.
-     * @return a cell with the specified coordinates or null.
+     * @return the value with the specified key in the cache or {@code null}.
      */
     public V getGlobalIfCached( final K key, final CacheHints cacheHints )
     {
     ...
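The bullet list in the rewritten javadoc describes a per-strategy dispatch. As a reading aid, it corresponds roughly to the following control flow; the helper methods named here are hypothetical and are not the cache's actual internals.

// Rough paraphrase of the javadoc list above; enqueueIfNotInCurrentFrame,
// loadNow, and enoughIoTimeBudgetLeft are hypothetical helper names.
switch ( cacheHints.getLoadingStrategy() )
{
case VOLATILE:
    // enqueue for asynchronous loading by a fetcher thread,
    // unless already enqueued in the current frame
    enqueueIfNotInCurrentFrame( key, cacheHints );
    break;
case BLOCKING:
    loadNow( key ); // load the data immediately
    break;
case BUDGETED:
    if ( enoughIoTimeBudgetLeft() ) // IoTimeBudget of the current thread group
        loadNow( key );
    else
        enqueueIfNotInCurrentFrame( key, cacheHints );
    break;
case DONTLOAD:
default:
    break; // do nothing
}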
src/main/java/bdv/cache/util/FetcherThreads.java (+11 −1)
@@ -52,6 +52,15 @@ public class FetcherThreads< K >

    private final ArrayList< Fetcher< K > > fetchers;

    /**
     * Create (and start) a set of fetcher threads.
     * <p>
     * Fetcher threads are named {@code Fetcher-0} ... {@code Fetcher-n}.
     *
     * @param queue the queue from which request keys are taken.
     * @param loader loads data associated with keys.
     * @param numFetcherThreads how many parallel fetcher threads to start.
     */
    public FetcherThreads( final BlockingFetchQueues< K > queue, final Loader< K > loader, ...
@@ -61,9 +70,10 @@ public class FetcherThreads< K >
    }

    /**
     * Create (and start) a set of fetcher threads.
     *
     * @param cache the cache that contains entries to load.
     * @param queue the queue from which request keys are taken.
     * @param loader loads data associated with keys.
     * @param numFetcherThreads how many parallel fetcher threads to start.
     * @param threadIndexToName a function for naming fetcher threads (takes an index and returns a name).
     */
    ...
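Both constructors documented above start the fetcher threads immediately and differ only in how the threads are named. A sketch of how they might be called, assuming the parameter order given by the @param tags and a lambda-compatible naming parameter (the full signatures are collapsed in this diff, so both are assumptions); queue and loader stand for an existing BlockingFetchQueues< K > and Loader< K >.

// Sketch only; parameter order and the lambda-friendly naming parameter are assumed.
// Default naming: threads are called Fetcher-0 ... Fetcher-n.
final FetcherThreads< Long > fetchers = new FetcherThreads<>( queue, loader, 4 );

// Custom naming function, as described by @param threadIndexToName
// (takes an index and returns a name):
final FetcherThreads< Long > namedFetchers =
        new FetcherThreads<>( queue, loader, 4, i -> "Fetcher-" + i );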