    Segment IPP

    This repository contains the code for segmenting a pellet object from image data.

    The code is built on Meta's recently released neural network model, segment-anything, which can segment different objects in image data with high precision. The segmented objects are then analysed with other computer vision algorithms, and only the pellet is kept as the output for further processing.
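
    As an illustration of this flow, the sketch below uses the public segment-anything API (sam_model_registry, SamAutomaticMaskGenerator) to segment one image and then picks a single mask as the pellet. The checkpoint path, image path, and the pellet-selection criterion (largest mask) are only illustrative assumptions; the repository applies its own computer vision analysis to identify the pellet.

    import cv2
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    # Hypothetical paths; the model type ("vit_h") must match the downloaded checkpoint
    sam = sam_model_registry["vit_h"](checkpoint="Path/to/model/model.pth")
    sam.to(device="cpu")  # use "cuda" when the GPU environment is installed

    mask_generator = SamAutomaticMaskGenerator(sam)

    image = cv2.imread("Path/to/image_data/folder/frame_0001.png")
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # SAM expects an RGB uint8 array

    masks = mask_generator.generate(image)  # one dict per segmented object

    # Illustrative selection only: assume the pellet is the largest segmented object;
    # the actual code uses additional computer vision criteria instead.
    pellet = max(masks, key=lambda m: m["area"])
    pellet_mask = pellet["segmentation"]  # boolean H x W array
    pellet_area_px = pellet["area"]       # pellet size in pixels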


    Installation


    Clone the repository to your disk. Then use the Conda package and environment management system for Python: the provided *.yaml files let you create a Python environment on your system with the required modules installed. There are two versions: a regular CPU version (environment.yaml) and a GPU version (environment_GPU.yaml). If you have a CUDA-compatible GPU, use the GPU yaml file to install GPU support; it provides faster segmentation of the image data.

    To create a CPU environment, use:

    conda env create -f environment.yaml

    To create a GPU environment, use:

    conda env create -f environment_GPU.yaml
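
    Conda names the environment after the name: field at the top of the chosen yaml file; check the file for the actual name. Assuming, for illustration, that it is called segment_ipp, activate it before running the scripts:

    conda activate segment_ipp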

    Measure the pellet


    To measure the size of the pellet in the images and produce a plot of its size evolution over time, run either eval_images.py or eval_images_GPU.py as a Python main script. The script expects four input arguments: the path to the model, the path to the image data, the time stamp of a measured sample, and the sample size. Paths should be absolute. The time stamp should be in the format "Y:m:d H:M:S" and must match the capture time of one of the images submitted as data. The sample size should include units so that the correct units appear in the final plot. Each argument should be passed as a string, i.e. in quotes. All results are stored in a newly created output sub-folder inside the image data path.

    Example:

    python eval_images.py "Path\\to\\model\\model.pth" "Path\\to\\image_data\\folder" "2023:10:03 15:35:40" "4.57mm"
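
    The sketch below is not the repository's code; it only illustrates how the time stamp argument can be matched against the EXIF capture times of the images to build a time axis. It assumes JPEG images whose EXIF DateTime tag uses the same "Y:m:d H:M:S" format as the argument above:

    from datetime import datetime
    from pathlib import Path
    from PIL import Image

    TIME_FORMAT = "%Y:%m:%d %H:%M:%S"  # the "Y:m:d H:M:S" format used above

    def capture_time(image_path):
        """Return the EXIF DateTime (tag 0x0132) of an image, or None if missing."""
        value = Image.open(image_path).getexif().get(0x0132)
        return datetime.strptime(value, TIME_FORMAT) if value else None

    # Values mirroring the command-line example above
    reference = datetime.strptime("2023:10:03 15:35:40", TIME_FORMAT)
    folder = Path(r"Path\to\image_data\folder")

    # Elapsed time in seconds of every image relative to the measured sample
    elapsed = {}
    for img in sorted(folder.glob("*.jpg")):  # illustrative file pattern
        t = capture_time(img)
        if t is not None:
            elapsed[img.name] = (t - reference).total_seconds()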

    License


    The model is licensed under the Apache 2.0 license.