ANSYS HPC Packs are a flexible way to license parallel-processing capability. For a single user running simulations on a workstation, one HPC Pack increases throughput by up to eight times. Users with access to larger HPC resources can combine multiple HPC Packs to enable parallel processing on hundreds or even thousands of cores, supporting very large jobs while leaving everyday parallel capacity free for multiple smaller jobs.
/BATCH
resume, model.db
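The two lines above are the start of an ANSYS batch input file. A minimal sketch of writing such a file from the shell, assuming model.db is your saved model database (input_sun_3.txt matches the filename the script below passes to ansys130):

```shell
# Sketch: write a minimal ANSYS batch input file (filenames are examples).
# /BATCH puts ANSYS in batch mode; resume loads the saved model database.
cat > input_sun_3.txt <<'EOF'
/BATCH
resume, model.db
EOF
```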
your script begins here
#!/bin/sh
#name the job; the default output and error files will be
#Test_ansys.oJobID and Test_ansys.eJobID
#PBS -N Test_ansys
#the following 2 lines ensure that you'll be notified by email when your job is done
#PBS -M user_id@tigermail.auburn.edu
#PBS -m e
#request 1 node with 16 processors per node (16 processors total) for 50 hours;
#after 50 hours your job will be killed
#PBS -l nodes=1:ppn=16,walltime=50:00:00
#your working directory; get the full path with pwd
#PBS -d /home/your_directory_path/
#add the ANSYS installation to PATH; do not change this
export PATH=/export/apps/ansys_inc/v130/ansys/bin:$PATH
#write mpd_nodes (one line per unique node, used to boot mpd on those nodes)
sort -u $PBS_NODEFILE > mpd_nodes
#assign the number of unique nodes to nhosts
nhosts=`cat mpd_nodes | wc -l`
#write conf_file (one line per allocated processor)
sort $PBS_NODEFILE > conf_file
#assign the number of allocated processors to proc
proc=`cat conf_file | wc -l`
#printing initial timestamp
date > out
#printing which host performed computation
/bin/hostname >> out
#run ANSYS in batch mode on $proc processors; read your input file (which resumes the model database) and append output to out
ansys130 -b -np $proc < input_sun_3.txt >> out
#end time stamp
date >> out
end of your script
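The node-file bookkeeping in the script can be exercised locally, without the scheduler. A minimal sketch, assuming a fake $PBS_NODEFILE that lists one 16-core node (node001 is a placeholder hostname):

```shell
# Simulate the file PBS writes to $PBS_NODEFILE: one line per allocated core.
PBS_NODEFILE=$(mktemp)
for i in $(seq 1 16); do echo node001 >> "$PBS_NODEFILE"; done

# The same commands used in the script above:
sort -u "$PBS_NODEFILE" > mpd_nodes   # unique hosts, one line each
sort "$PBS_NODEFILE" > conf_file      # one line per allocated processor
nhosts=`cat mpd_nodes | wc -l`
proc=`cat conf_file | wc -l`
echo "nhosts=$nhosts proc=$proc"      # prints nhosts=1 proc=16
```

On the cluster, PBS creates $PBS_NODEFILE automatically when the job starts, so these counts reflect whatever you request with -l nodes=...:ppn=... in the script.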