Warning! Some of this page is specific to the clusters at Oxford University.
Nice description of the problems and links to scripts: http://lukas.ahrenberg.se/archives/731
An open source project to run on clusters: https://code.google.com/p/clusterlogo/
Detailed message about how to use BehaviorSpace on a cluster: http://groups.yahoo.com/group/netlogo-users/message/11210
Documentation of advanced command line uses of BehaviorSpace: http://ccl.northwestern.edu/netlogo/docs/behaviorspace.html#advanced
New project that might be useful: http://www.openmole.org/getting-started/
All researchers at the University of Oxford can register for a free account.
If you are on Microsoft Windows, install PuTTY and an X11 server. I use XLaunch to connect to one of the Oxford clusters. Then I run:
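A sketch of the connection command, assuming a standard SSH login node (the hostname and username below are placeholders, not the actual Oxford addresses):

```shell
# Connect with X11 forwarding (-X) so NetLogo's GUI can display locally.
# Replace the hostname and username with your cluster's actual details.
ssh -X username@cluster-login.example.ac.uk
```

X11 forwarding is only needed for the graphical steps (e.g. setting up experiments in BehaviorSpace); headless runs submitted to the scheduler do not require it.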
To install NetLogo (version 5.0.4) I used
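Something along these lines, assuming the tarball URL pattern NetLogo used at the time (verify the exact URL on the NetLogo site before relying on it):

```shell
# Download and unpack NetLogo 5.0.4 into the home directory.
cd ~
wget http://ccl.northwestern.edu/netlogo/5.0.4/netlogo-5.0.4.tar.gz
tar -xzf netlogo-5.0.4.tar.gz
```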
I created experiments using BehaviorSpace accessible from NetLogo's tool menu.
Then I read the documentation of how to construct job scripts and submit them.
I then wrote a script to run the same experiment on different nodes of the cluster. The script is for the Sal cluster in Oxford.
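A minimal sketch of such a PBS job script, using NetLogo 5's documented headless entry point (`org.nlogo.headless.Main`); the resource requests, model name, and paths are illustrative, and `$PBS_ARRAYID` assumes a Torque-style scheduler:

```shell
#!/bin/bash
# Illustrative PBS job script for one BehaviorSpace run.
#PBS -N netlogo-experiment
#PBS -l nodes=1:ppn=1
#PBS -l walltime=12:00:00

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR

# Run the experiment headlessly; when submitted as a job array,
# $PBS_ARRAYID gives each copy a distinct output file.
java -Xmx1024m -cp ~/netlogo-5.0.4/NetLogo.jar org.nlogo.headless.Main \
  --model my_model.nlogo \
  --experiment my_experiment \
  --table results-$PBS_ARRAYID.csv
```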
Since the first post on how to run NetLogo on clusters, there have been a few interesting changes on the Oxford clusters. Most notably, the new system (arcus-b) uses the SLURM scheduler. This is a very common scheduler and is used also by the NOTUR system (among others). So included in this update is the script to run scripts with a SLURM scheduler.
In order to run an experiment, the setup is still the same as in the previous post. However, SLURM uses a different format for the submission script. Notice a few differences: #PBS is now #SBATCH, and many of the arguments differ.
Also, we have to define new MPI host details. These can be seen below.
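A sketch of the SLURM equivalent of the PBS script, with the same caveats (resource requests and paths are illustrative; the site-specific MPI/module lines vary by cluster):

```shell
#!/bin/bash
# Illustrative SLURM job script: #PBS directives become #SBATCH.
#SBATCH --job-name=netlogo-experiment
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=12:00:00

# Site-specific environment setup (module loads, MPI host details)
# would go here; consult your cluster's documentation.

cd $SLURM_SUBMIT_DIR

# $SLURM_ARRAY_TASK_ID plays the role $PBS_ARRAYID played under PBS.
java -Xmx1024m -cp ~/netlogo-5.0.4/NetLogo.jar org.nlogo.headless.Main \
  --model my_model.nlogo \
  --experiment my_experiment \
  --table results-$SLURM_ARRAY_TASK_ID.csv
```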
In a command prompt I used the following to copy the file
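For example (hostname, username, and paths are placeholders):

```shell
# Copy the job script to the cluster's login node.
scp spanish-flu-test1.sh username@cluster-login.example.ac.uk:~/

# From Windows with PuTTY installed, pscp works the same way:
# pscp spanish-flu-test1.sh username@cluster-login.example.ac.uk:/home/username/
```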
*** If you are a Windows user, you can also use WinSCP to copy files.
Later, to copy all the files in the directory, I used:
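A sketch with placeholder names, assuming the results sit in a directory on the cluster:

```shell
# Copy an entire directory of results back from the cluster;
# -r recurses into the directory.
scp -r username@cluster-login.example.ac.uk:~/results ./results
```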
To submit jobs to the cluster
To run 4 copies of the same experiment I use (where spanish-flu-test1.sh is the name of the file containing the script):
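Under PBS/Torque this would be a four-task job array, for example:

```shell
# Submit spanish-flu-test1.sh as a job array of 4 tasks; each task
# receives a distinct $PBS_ARRAYID, so the runs write separate CSVs.
qsub -t 1-4 spanish-flu-test1.sh
```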
Update: When using the slurm scheduler this command is
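The SLURM equivalent uses --array:

```shell
# Submit a four-task job array; each task sees $SLURM_ARRAY_TASK_ID.
sbatch --array=1-4 spanish-flu-test1.sh
```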
To check it is in the queue and see what else is queued use:
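The standard queue-listing commands for the two schedulers:

```shell
# On PBS/Torque, list your own jobs:
qstat -u $USER

# On SLURM:
squeue -u $USER
```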
You can use this Java application to combine the results from multiple runs of the same experiment. It takes two arguments -- a folder that contains only CSV files created by NetLogo's BehaviorSpace and the file name of the desired combined file. E.g.
You'll need to quote the file paths if they contain spaces or other special characters.
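An invocation might look like this (the jar name here is a placeholder, since the application's actual file name isn't given above):

```shell
# Combine every BehaviorSpace CSV in results/ into one file.
# "CombineCsv.jar" is a placeholder name for the Java application.
java -jar CombineCsv.jar "results/" "combined.csv"
```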
To split a large NetLogo BehaviorSpace experiment into pieces
Sometimes one has a very large experiment that doesn't involve many copies of the same run (perhaps because the model is not very stochastic). For this one can use the split_nlogo_experiments.py script (see the link at the top of this page).
Update concerning the use of split_nlogo_experiments.py...
First, if you are using a Windows machine to write the template file, you have to fix the line endings. It isn't readily apparent, but running dos2unix <scripttemplate>.sh will convert the DOS line breaks to Unix ones. This is a subtle issue, and the simplest way to avoid it entirely is to write your template in a Linux environment.
Second, don’t split the files on your local machine. It is too complicated and can be time consuming. Split the files on the cluster, it makes it easier to manage later on.
Lastly, you may not have the ability to install the split_nlogo_experiments script on the cluster with your current privileges (this is the case for research students, for example). In that case, you can just run the script directly with the necessary arguments. For example, I ran the script below to split an experiment into separate files and save the output XML files (used to set up the experiments) and the script files (used to submit the jobs).
Here is an example...
This will split the experiment my_experiment from the NetLogo file my_model.nlogo into a set of smaller experiment files.
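Based on the description above, the invocation is along these lines (the positional arguments are inferred from the text; check the script's own help output for the real interface and any options controlling where the XML and script files are written):

```shell
# Split my_experiment from my_model.nlogo into separate pieces.
# Argument order inferred from the surrounding description.
python split_nlogo_experiments.py my_model.nlogo my_experiment
```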