# Workbench

## Workbench Batch Mode
    
    
It is possible to run Workbench scripts in batch mode.
First, you need to configure the solvers of the individual components to run in parallel.
Open your project in Workbench.
Then, for example, in *Mechanical*, go to *Tools - Solve Process Settings...*.
    
![](../../../img/AMsetPar1.png)
    
    
Enable the *Distribute Solution* checkbox and enter the number of cores (e.g. 72 to run on two Barbora nodes).
If you want the job to run on more than one node, you must also provide a so-called MPI appfile.
In the *Additional Command Line Arguments* input field, enter:
    
```console
-mpifile /path/to/my/job/mpifile.txt
```
    
    
Here, `/path/to/my/job` is the directory where your project is saved.
We will create the `mpifile.txt` file programmatically later in the batch script.
For more information, refer to the *ANSYS Mechanical APDL Parallel Processing Guide*.
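
For illustration, a generated appfile for two nodes with 36 processes each could look like the following (the hostnames and the solver path are placeholders; the batch script below fills in the actual values):

```console
-h cn101 -np 36 /path/to/ansys/bin/ansysdis232 -dis
-h cn102 -np 36 /path/to/ansys/bin/ansysdis232 -dis
```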
    
Now, save the project and close Workbench.
We will use the following script to launch the job:
    
```bash
#!/bin/bash

#SBATCH --nodes=2
#SBATCH --ntasks-per-node=36
#SBATCH --job-name=test9_mpi_2
#SBATCH --partition=qcpu
#SBATCH --account=ACCOUNT_ID

# change the working directory
DIR=/scratch/project/PROJECT_ID/$SLURM_JOB_ID
mkdir -p "$DIR"
cd "$DIR" || exit

echo "Running on host $(hostname)"
echo "Time is $(date)"
echo "Directory is $(pwd)"
echo "This job runs on the following nodes:"
echo "$SLURM_NODELIST"

ml ANSYS/2023R2-intel-2022.12

#### Set the number of processes per host (36 cores per Barbora node)
procs_per_host=36
#### Create the MPI appfile, one line per node
echo -n "" > mpifile.txt
# expand the compressed node list (e.g. cn[101-102]) into individual hostnames
for host in $(scontrol show hostnames "$SLURM_NODELIST")
do
  echo "-h $host -np $procs_per_host $ANSYS232_DIR/bin/ansysdis232 -dis" >> mpifile.txt
done

#-i input.dat includes the input of analysis in APDL format
#-o file.out is output file from ansys where all text outputs will be redirected
#-p the name of license feature (aa_r=ANSYS Academic Research, ane3fl=Multiphysics(commercial), aa_r_dy=Academic AUTODYN)

# prevent using the scsif0 interface on accelerated nodes
export MPI_IC_ORDER="UDAPL"
# spawn remote processes using SSH (default is RSH)
export MPI_REMSH="/usr/bin/ssh"

runwb2 -R jou6.wbjn -B -F test9.wbpj
```
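
The `-R` argument points to a Workbench journal file (`jou6.wbjn` above) that tells Workbench what to do with the project in batch mode. A minimal sketch of such a journal, assuming you simply want to update all components and save the results, could be:

```python
# jou6.wbjn -- a minimal Workbench journal (sketch)
# Update() solves all out-of-date components of the project;
# Save() writes the results back into the .wbpj file.
Update()
Save(Overwrite=True)
```

With the batch script saved as, for example, `workbench_job.sh` (the file name is up to you), the job is submitted as usual with `sbatch workbench_job.sh`.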
    
    
The solver settings are saved in the `solvehandlers.xml` file,
which is not located in the project directory.
Verify your solver settings when uploading a project from your local computer.
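
If you want to inspect the file directly, note that it is stored with the per-user Workbench settings rather than with the project; on Linux it is typically found somewhere under `~/.mw/Application Data/Ansys/` (the exact location varies by ANSYS version), for example:

```console
find ~/.mw -name solvehandlers.xml
```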