Upgrade instructions to XRD 3.2+
Follow the same instructions as for 3.1.0 below; currently the default (PRO) version is 3.2.4.
After installing 3.2.4 or later, please patch xrootd.xrootd.cf.tmp by adding this line:
xrd.report localhost:1234 every 2m -client
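This directive makes xrootd send summary monitoring reports via UDP to the given host:port every two minutes. As a quick sanity check you can listen on that port and watch the reports arrive (a minimal sketch; depending on your netcat flavour, one of the two invocations applies):
nc -u -l 1234
nc -u -l -p 1234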
Known issues and workarounds (v1.8 to v3.2.2, by Jiri Horgy)
- If you decide to use the new file format using extended attributes on your old servers, and you need to convert your existing files, please be aware of two bugs (both described in one ticket) -> https://savannah.cern.ch/bugs/?96364
The first one is critical, as it prevents you from converting files of 2-4 GiB in size; the second is merely irritating, as it limits the number of files that can be converted in one run of the 'frm_admin convert' command. It can be worked around by re-running the conversion command until there is nothing left to convert:
while ! frm_admin -c /usr/local/xrootd3/etc/xrootd/server/xrootd.cf convert -a old2new spaces; do :; done
The patch for the first bug is included in the ticket's comments; a patch for the other issue does not exist yet (check the ticket for updates).
Apart from these problems, following this guide (http://xrootd.slac.stanford.edu/doc/prod/frm_migr.htm#_Toc284350517) went fine. The conversion procedure took ~240 minutes per server containing 100 TiB of data.
- The ALICE xrootd installation script (xrd3-installer) downloads xrdbase.tar.gz (xrootd-3.2.2) from http://alitorrent.cern.ch/src/xrd3/ . This version of xrootd contains code not included in the official xrootd release, which adds the ability to monitor per-client activity. Unfortunately, the added code contains two serious bugs causing xrootd server segfaults, as we found out in this ticket -> https://savannah.cern.ch/support/?130984
One option is to use vanilla xrootd instead; alternatively, the following patch should fix both problems (only one of the two fixes is included in the original ticket; note that the angle-bracket contents of the 'map' and '#include' lines below were stripped by the wiki rendering, so refer to the ticket for the exact text):
diff -u -r ./src/XrdXrootd/XrdXrootdResponse.cc ../../xrootd-3.2.2/src/XrdXrootd/XrdXrootdResponse.cc
--- ./src/XrdXrootd/XrdXrootdResponse.cc	2012-07-17 14:50:02.000000000 +0200
+++ ../../xrootd-3.2.2/src/XrdXrootd/XrdXrootdResponse.cc	2012-08-10 13:08:28.000000000 +0200
@@ -37,7 +37,7 @@
 using namespace std;
 extern XrdClientLock client_mutex;
-map clients;
+map clients;
 /******************************************************************************/
 /*                       L o c a l   D e f i n e s                            */
diff -u -r ./src/XrdXrootd/XrdXrootdStats.cc ../../xrootd-3.2.2/src/XrdXrootd/XrdXrootdStats.cc
--- ./src/XrdXrootd/XrdXrootdStats.cc	2012-07-25 14:58:14.000000000 +0200
+++ ../../xrootd-3.2.2/src/XrdXrootd/XrdXrootdStats.cc	2012-08-10 16:27:08.000000000 +0200
@@ -9,6 +9,7 @@
 /******************************************************************************/
 #include
+#include
 #include
Shortcut: Update to XRD 3.1.0 (by J.M. Barbet)
The instructions below describe how to update xrootd by building an RPM that can be installed via Quattor. You may want to install xrootd in a directory separate from the previous version (xrootd-vmss_1.8b-1), so that both versions are kept in case you need to backtrack.
- Important Note - For sites that use the directive 'oss.cache'
By default, this new version of xrootd attempts to use a different method to write data to the partitions defined by oss.cache. This method uses user extended attributes on the underlying filesystems and requires a migration of the data already on the server. For now, the strategy of the ALICE VO is to postpone this migration, using the compatibility mode that is activated with the following directive in the file system.cnf:
oss.runmodeold
If you currently use 'oss.cache', you must add this directive: on storage servers that aggregate several partitions, the following line should be changed from
export OSSCACHE="oss.cache public /(partition 1)\noss.cache public /(partition 2)....."
to
export OSSCACHE="oss.runmodeold\noss.cache public /(partiton 1)\noss.cache public /(partition 2)...."
- Important Note 2
If writing to the storage does not work, add this to etc/xrootd/server/xrootd.xrootd.cf.tmp:
xrootd.async off nosf
- Preparation
On the machine where you compile xrootd, the cmake utility must be available; it can be installed from the DAG repository:
yum install cmake --enablerepo=dag
- Compilation
wget http://alitorrent.cern.ch/src/xrd3/xrd3-installer
chmod +x xrd3-installer
./xrd3-installer --noclean --prefix /usr/local/xrootd3 --install
- Creation of RPM
We must create a .spec file to build a binary RPM from the installed files:
xrootd3.spec:
Summary: xrootd - compiled on site
Name: xrootd3
Version: 3.1.0
Release: 1
License: open
Group: High performance data access
Source: none
Url: http://savannah.cern.ch/projects/xrootd
Packager: Firstname Lastname
#
%description
Xrootd RPM created from the result of the compilation on SL5/x86_64 of the distribution available at http://alitorrent.cern.ch/src/xrd3/
#
%files
#
/usr/local/xrootd3/
#
%config(noreplace) /usr/local/xrootd3/etc/xrootd/system.cnf
%config(noreplace) /usr/local/xrootd3/etc/xrootd/authz.cnf
The file /usr/local/xrootd3/etc/init.d/xrdservices must be modified to be used as the target of a link /etc/init.d/xrdservices (personal choice):
vi /usr/local/xrootd3/etc/init.d/xrdservices
[...]
XRDLOCATION="/usr/local/xrootd3"
Building the RPM:
rpmbuild -bb -v xrootd3.spec
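Before deploying, you can inspect the contents of the freshly built RPM; the exact path and file name below are an assumption based on the defaults of an SL5 build host:
# rpm -qpl /usr/src/redhat/RPMS/x86_64/xrootd3-3.1.0-1.x86_64.rpm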
The RPM is ready to be installed with Quattor. There are two configuration files to manage with the FileCopy component: system.cnf and authz.cnf. The installed authz.cnf is normally correct.
The script /etc/init.d/xrdservices must be altered to start the new version; one possibility is to create a link via Quattor:
#JMB 13/03/2012
include {'components/symlink/config'};
"/software/components/symlink/links" = push(nlist(
    "name", "/etc/init.d/xrdservices",
    "target", "/usr/local/xrootd3/etc/init.d/xrdservices",
    "replace", nlist("all", "yes", "link", "yes")
));
Xrootd and how it works
Xrootd is the data server component of the so-called "Scalla suite for data access"; its purpose is to provide tools and methods for building high-performance, scalable data repositories. Despite the historical name of the data-serving daemon (xrootd), its field of application is completely general and environment-agnostic. It was originally developed for BaBar by a collaboration between SLAC and INFN Padova, but it is now maintained as a generic open source project with many contributors and a core development team. For general and specific information about xrootd, visit http://savannah.cern.ch/projects/xrootd. Sources and binaries are available at xrootd.org.
This short description is intended as a HowTo covering the typical needs of ALICE sites that decided to build their storage clusters using the 'pure xrootd' approach.
Preconditions
- You need one or several disk servers on which to install xrootd. This HowTo does not cover the dimensioning issues that have to be faced when designing a site. The purpose of the system is to allow an efficient utilization of the available resources (storage space AND total storage throughput AND performance AND robustness in general), but any decision about the quantity of resources and their balancing with other parts of the site must be made by the site itself.
- All servers need an internet connection to CERN (the 'wget' command should be available)
- You must choose a non-root account to run xrootd. It could be alicesgm; for clarity you can use 'xrootd' as the account running xrootd. It should be a regular unix account, with its password known to the people who will manage xrootd.
- You don't need to be root to set it up. If you decide to set it up as root, it will work (internally doing a "su" to the xrootd user), but you are doing something unnecessary that will make your life harder.
- You should install xrootd in the home directory of the 'xrootd' account e.g. /home/xrootd/xrootdinstall
- You need a working (GCC) compiler (and its environment) to install xrootd with the xrd-installer.
- You need the following system libraries and headers installed:
- libxml2 (bin/lib + dev headers)
- libssl (bin/lib + dev headers)
- autotools (just the binaries; automake, autoconf, libtool)
- libuuid-devel (SLC6 doesn't install the headers by default any more)
- swig
- zlib (bin/lib + dev headers)
- New from v3.1.0: CMake; the installation will warn you if it is not available (the classic/automake build has been dropped)
- Firewall configuration. If your site has a firewall, you need the following on all the machines of the storage cluster (redirector and servers); a sample iptables sketch follows this list:
- 2 open TCP ports for incoming worldwide WAN access (default 1094 + 1095)
- 2 open TCP ports for internal LAN access (3122 + 3123)
- The machines must be able to contact the VOBOX through port 8884/UDP for the monitoring to work
- Ability to initiate TCP connections towards the outside world. In particular, all the ALICE sites.
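As a sketch, on an iptables-based system the rules on each machine could look like the following (the LAN subnet 192.168.0.0/24 is a placeholder for your cluster's network; adapt the commands to your site's firewall tooling):
# WAN access to the xrootd ports (redirector and servers)
iptables -A INPUT -p tcp --dport 1094 -j ACCEPT
iptables -A INPUT -p tcp --dport 1095 -j ACCEPT
# LAN-only access to the cmsd ports; restrict the source to your cluster subnet
iptables -A INPUT -p tcp --dport 3122 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3123 -s 192.168.0.0/24 -j ACCEPT
Outgoing TCP connections and the UDP traffic to port 8884 on the VOBOX must not be blocked either.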
- Increase the number of file descriptors - to be applied to the redirector and data servers
- The default number of file descriptors (ulimit -n, 1024) is insufficient to ensure access by many simultaneous clients
- The limit should be increased to 65500 descriptors for root and for the user under which xrootd is running
- The procedure below is valid for SL(C)4/5 installations; for other flavours, e.g. Ubuntu, consult the OS HowTos:
Configure the system to accept the desired maximum number of open files. Check the value in /proc/sys/fs/file-max to see if it is larger than the value needed:
# cat /proc/sys/fs/file-max
Echo the appropriate number into the file and add the change to /etc/sysctl.conf to make it persistent across reboots.
# echo 65500 > /proc/sys/fs/file-max
and edit /etc/sysctl.conf to include the line:
fs.file-max = 65500
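To apply the sysctl.conf change immediately, without waiting for a reboot:
# sysctl -p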
Set the value for the maximum number of open files in the file /etc/security/limits.conf: below the commented line that reads
# domain type item value
add this line:
* - nofile 65500
This line sets the default number of open file descriptors for every user on the system to 65500. Note that the "nofile" item has two possible limit types: hard and soft. Both must be set before the change in the maximum number of open files takes effect. By using the "-" character, both hard and soft limits are set simultaneously. The hard limit represents the maximum value a soft limit may have; the soft limit is the limit actively enforced on the system at that time. Hard limits can be lowered by normal users but not raised, and soft limits cannot be set higher than hard limits; only root may raise hard limits.
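Once the new limit is in place, you can verify it for the account that runs xrootd (assuming it is called 'xrootd'); the command should print 65500:
# su - xrootd -c 'ulimit -n'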
Modify the SSH daemon to remove privilege separation. Edit /etc/ssh/sshd_config and find the line
# UsePrivilegeSeparation yes
Change it to read:
UsePrivilegeSeparation no
Similarly
# PAMAuthenticationViaKbdInt no
should be changed to read
PAMAuthenticationViaKbdInt yes
For the change to take effect, you'll need to restart the SSH service:
# service sshd restart
After making this change, when users log in via SSH they will automatically have the maximum number of open files that was set in /etc/security/limits.conf. No additional work is necessary.
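To confirm that the limits are indeed applied through sshd (again assuming the 'xrootd' account), check both the hard and the soft limit over an SSH session; both should report 65500:
ssh xrootd@localhost 'ulimit -Hn; ulimit -Sn'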
Note that some of the libraries in version 3.1.0 have been reshuffled/renamed.
Single server configuration
In a single server setup, xrootd serves data using the 'xrootd' protocol from a single disk server. xrootd runs as a multithreaded process (name 'xrootd'). The idea behind the Scalla/xrootd architecture is that:
- a server can aggregate many mounted storage partitions
- many servers are aggregated into a cluster
- so, the total available throughput depends mainly on the total number of mounted disks.
Note that the setup procedure described here nevertheless always configures a multiple-server setup.
Multiple server configuration
The real advantage of xrootd comes when there are many data servers. In this case xrootd provides a single name space, load balancing, low file location/open overhead and an efficient data access protocol, which allows clients to access data wherever it is. On each data server one additional 'cmsd' process is started; it is used to locate files in the xrootd cluster. The 'xrootd' and 'cmsd' processes each use a single port to accept connections. Connections to the 'cmsd' port come only from machines within the cluster, while connections to the 'xrootd' port can come from anywhere.
By default, all disk servers are automatically configured by the contained setup scripts to be a redirector (manager) and a disk server at the same time. The xrootd redirector is a "head node" where a client makes the initial connection to open a file. The redirector then locates the file using its cmsd and the cmsds on the other data servers, and redirects the client to a server that has the file. If multiple replicas are available, the load is balanced among the machines hosting them.
Warning: if you start xrootd with the default parameters and without the ALICE authorization library, it exports all files from the root directory '/'. In this case you should at least set LOCALROOT to a meaningful value.
The ALICE global name space and the global redirector
Note: ALICE Global Redirector is now disabled from 3.1.0.
Additionally, to set up an xrootd-based storage element for ALICE, the clusters must subscribe themselves to the so-called 'ALICE Global redirector', which provides a degree of cooperation between remote clusters, enhancing the general robustness of the whole ALICE xrootd-based storage and giving a unified view of all its content. To learn more about these features, the reader is encouraged to read the recent presentations about the Virtual Mass Storage features of the Scalla suite at http://savannah.cern.ch/projects/xrootd. The automated setup described here is a generic one which installs and correctly configures these 'global' features too. These instructions refer to a more specific ALICE set of requirements, but in any case the differences from a plain agnostic installation are minimal.
Installation using 'xrd-installer'
Download the xrd-installer program from http://alitorrent.cern.ch/src/xrd3/xrd3-installer (formerly from http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/tarballs/installbox/xrd-installer or AFS /afs/cern.ch/sw/arda/www/public/xrootd/tarballs/installbox/xrd-installer). Syntax of the installer program:
xrd-installer [--install] [-h] [-l] [-n] [-p packagename] [--prefix install-prefix] [--version version] [--compiledir compile-directory] [--noclean]
-l : list packages
-p name : select package
-n : don't install autotools
--noclean : don't cleanup the compilation directory
--install : install all packages or the selected packages with -p option
--prefix : set the installation prefix (default /home/alientest/xrdserver)
--version : select the version to install (default is PRO)
--compiledir : set the compilation directory (default is /tmp/xrd-installer-alientest)
To install a full xrootd package, run the installer with the option '--install'. If you don't specify any other arguments, all packages of the PRO version are installed in $HOME/xrdinstall and compiled in the local directory /tmp/xrd-installer-$USER. The installer will also download the 'right' autotools packages unless you switch this feature off with the '-n' switch.
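For example, a typical installation into the home directory of the 'xrootd' account (the prefix is a site choice) would be:
./xrd-installer --install --prefix /home/xrootd/xrootdinstall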
xrd-installer tries to compile & install the following packages (attention, some of them have been reshuffled/renamed):
- Package libtokenauthz : libTokenAuthz
- Package xrdbase : xrootd-base
- Package xrdapmon : xrootd-apmon package (removed, not needed)
- Package alicetokenacc : xrootd-alicetokenacc package
- Package xrdstartscript : xrootd-startscript package
- Package xrdshell : xrootd-shell shell binaries
- Package ApMon2 : libapmoncpp
- Package xrdcpapmonplugin : ApMon-enabling plugin for xrdcp
- Package xrdaggregatingN2N : inter-site namespace prefix aggregating library
Attention, the binary installation is only supported for 64bit platforms:
- SLC 5 - 64 bit => name sl5_64
- SLC 6 - 64 bit => name sl6_64
- Ubuntu - 64 bits => name ubuntu_64
If anybody builds the sources on a 32-bit platform, the binary tarball could be added to the repository.
Example Installation with 'xrd-installer' from AFS
Building RPMs using 'xrd-rpmer' (obsolete)
For the new version, it is also possible to use RPMs from xrootd.org and compile and install only the plugins. SL5 and SL6 RPMs for the plugins can be made available on demand. The xrootd RPMs also provide standard service scripts.
Previously, the RPM build script 'xrd-rpmer' could be downloaded from http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/tarballs/installbox/xrd-rpmer. Alternatively you can run it directly from AFS /afs/cern.ch/sw/arda/www/public/xrootd/tarballs/installbox/xrd-rpmer.
Syntax of the RPM script:
xrd-rpmer [--interactive | -i] [--help | -h] [--version <version>] [--workdir <workdir>]
--help | -h : print help
--version : specify the version to build (default PRO)
--workdir : specify a working directory where to place downloaded packages & logfiles (default /tmp/xrootd-rpm)
To build the RPM do the following :
- Go into a work subdirectory:
- mkdir work
- cd work
- Get the script:
- wget http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/tarballs/installbox/xrd-rpmer
- chmod 755 ./xrd-rpmer
- Run it as root user (and wait):
- sudo ./xrd-rpmer
And check that there were no compilation failures.
The RPM will be placed in the usual directory /usr/src/redhat/RPMS/ (depending on the Linux distribution, the location could differ). This procedure is supposed to work also on non-RPM-based distributions, e.g. Debian, Ubuntu and others. In some cases (e.g. Ubuntu), in order to install the RPM you will have to use the --nodeps switch.
NOTE: the RPM will install a 'standard' init script in /etc/init.d/xrdservices
You can use that script to start/stop the service. Please remember that your machine will now start the xrootd services at each reboot.
Configuration
Basic xrootd Configuration
There are two configuration files under the installation directory which you need to modify for a standard setup:
- etc/xrootd/system.cnf
- etc/xrootd/authz.cnf
Both configuration files are heavily self-documenting.
Technically, 'system.cnf' is a bash script which sets system variables constituting the meta-configuration of your xrootd server or redirector (or both on the same machine). Internally, the startup scripts generate the actual full configuration file by processing a config template file which, in general, does not have to be modified for normal or advanced usage. Let's have a look at the variables one has to configure and their practical meaning (a sample snippet combining the most important ones follows the list).
- MANAGERHOST is the name of the machine where the xrootd redirector (manager) is running. It can be the name of the local machine if the local machine is the redirector. If you have only a single disk server you can leave the default (which takes the name of the local machine).
- SERVERONREDIRECTOR: For very small setups (1-2 servers) one can decide to use the same machine for both tasks, i.e. cluster manager + data server. Hence, if SERVERONREDIRECTOR=1, in that machine the two instances will be run automatically. For bigger clusters, this works as well, but is not a very good idea, since the machine will have a higher load to sustain, in terms of CPU and file descriptors usage.
- MONALISA_HOST is the name of the machine which receives monitoring information. Normally this is the site VO box.
- SE_NAME should be the AliEn name of the storage element to which your xrootd setup is assigned. Ask the ALICE Grid admin.
- XRDDEBUG is switched on by default (set to '-d'). This traces everything into the xrootd log files. For better performance you can switch off debug output by setting it to "" (empty string). The automated setup also correctly compresses and rotates old logs.
- XRDUSER should be set to the user id running the xrootd & cmsd executables
- XRDMAXFD is the limit of file descriptors to be used. The default is 65000, and there is no need to change it.
- The SYSTEM variable defines if this installation is a plain xrootd setup or an xrootd on top of a DPM/CASTOR installation. However, this howto is only related to pure xrootd sites, i.e. not integrated with DPM or CASTOR.
- XRDSERVERPORT, XRDMANAGERPORT, OLBSERVERPORT, OLBMANAGERPORT are the ports to be used on the disk server and the redirector machines for xrootd & cmsd. The defaults are just fine and are the preferred values.
- OFSLIB is the plugin library to be used in xrootd. For an ALICE setup, set it to 'TokenAuthzOfs' to enable token authorization. Otherwise, for a free server, set it to 'Ofs'. In that case other security protocols can be used, referring to the XrdSec configuration, but these are not the topic of this description. Alert: this option is now discontinued! Please look at the ACCLIB item instead.
- ACCLIB is the new (since 10 June 2009) plugin library implementing the Alice strong security. For an ALICE site set it to 'libXrdAliceTokenAcc.so' to enable token authorization. Otherwise, for a free server, set it to '' (empty string)
- METAMGRHOST, METAMGRPORT The default values for this are 'alice-gr01.cern.ch' and 1213, respectively. Since you are setting up an ALICE site, there is no need to change them.
- VMSS_SOURCE: this is where this cluster tries to fetch files from, in the case they are absent. For an ALICE SE, the value should be 'root://${METAMGRHOST}/' (note the single slash at the end).
- LOCALPATHPFX is the prefix of the namespace which has to be made optional, i.e. a path for a file may include it or not, but it refers to the same file either way. This parameter is site dependent and very important. Please read the more detailed description in the next paragraphs.
- LOCALROOT is the local path (relative to the mounted disks) where all the data is put/kept by the xrootd server. In other words, this is the root of the data exported by this server. Setting it is mandatory good practice when setting up SEs: omitting it, or setting it to '/' or an empty string, is a serious security hole in your system. An example is '/data/disk1/namespaceroot' (compatible with the following example for OSSCACHE).
- OSSCACHE: probably your server has more than one disk to use to store data. If this is the case, you must make xrootd aware of this, in order to aggregate them into a single name space. Set this variable to the configuration snippet containing all the 'oss.cache X' statements, separated by \n. An example is 'oss.cache public /data/disk1\noss.cache public /data/disk2'.
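Putting the pieces together, a hypothetical system.cnf for a dedicated ALICE data server with two partitions could contain the following (all host names, the SE name, the prefix and the paths are placeholders for illustration; check the comments in the shipped file for the authoritative list):
export MANAGERHOST="redirector.example.org"
export SERVERONREDIRECTOR=0
export MONALISA_HOST="vobox.example.org"
export SE_NAME="ALICE::EXAMPLE::SE"
export XRDUSER="xrootd"
export ACCLIB="libXrdAliceTokenAcc.so"
export VMSS_SOURCE="root://${METAMGRHOST}/"
export LOCALPATHPFX="/old/se/prefix"
export LOCALROOT="/data/disk1/namespaceroot"
export OSSCACHE="oss.cache public /data/disk1\noss.cache public /data/disk2"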
Regarding the default 'authz.cnf':
- This file is only used if you have specified OFSLIB="TokenAuthzOfs" in 'system.cnf'. If you don't use the ALICE OFSLIB, the complete exported namespace is readable/writeable via xrootd by default. To change this, look in the next sections.
- Add an EXPORT statement for every directory you want to make readable via this xrootd e.g. add all partitions where you want to write to
- NOTE: since authz.cnf is used only for the ALICE security, the default one shipped with the distribution is the one to use in the ALICE computing model. It does not need modifications. If you think that you need to modify it, please get in touch with the maintainers/developers/operations manager: you may be wrong, or you might have requirements worth considering.
The global ALICE namespace
One of the more recent requirements and directions is that the ALICE namespace has to be considered a coherent name space. This means that a file has to be named in the same way no matter where it is stored. If there are multiple copies of the same file in different SEs, then they are the same file and must have the same name (which is an AliEn PFN). Several of the old SE installations historically prepended an SE-dependent prefix to an (otherwise coherent) AliEn PFN; hence, different sites were calling the same things by different names. For this reason, xrootd-based SEs are now required to load a particular library which takes care of the translation. This is already covered by the setup process, so the only thing to do is to write this historical prefix into the system.cnf file, as the content of the LOCALPATHPFX variable.
The result of this is that the correctly configured SE will give access to files with and without the prefix. The usual Alien tools will access the file by specifying the prefix (just because the file catalog has it), while the 'agnostic' tools (like a remote xrootd cluster willing to pick a file up) will just use the global namespace.
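As an illustration, with a hypothetical LOCALPATHPFX of '/old/se/prefix', both of the following names would resolve to the same file on a correctly configured SE:
root://redirector.example.org:1094//old/se/prefix/path/to/file
root://redirector.example.org:1094//path/to/file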
Expert xrootd Configuration
If you need to edit the xrootd configuration file, you have to edit the following template files under the installation directory:
- etc/xrootd/server/xrootd.xrootd.cf.tmp
- etc/xrootd/server/xrootd.dpm.cf.tmp
- etc/xrootd/manager/xrootd.xrootd.cf.tmp
- etc/xrootd/manager/xrootd.dpm.cf.tmp
There is one config file for a DPM or native XROOTD setup and one config file each for the redirector (manager) and disk server (server) hosts.
For more sophisticated setups like staging from MSS etc. please refer to the Scalla/xrootd documentation.
Start/Stop Scripts to run xrootd & cmsd service & MonaLisa monitoring daemon
The installer installs the start/stop script to control the xrootd & cmsd daemons. The script is located under the installation directory in scripts/xrootd/xrd.sh and accepts several parameters.
- check the status of the xrootd/cmsd & monalisa (apmon) daemons
xrd.sh
- check the status of the xrootd/cmsd & monalisa daemons and start the ones which are currently not running
xrd.sh -c
- force a restart of all daemons
xrd.sh -f
- stop all daemons
xrd.sh -k
Once the daemons are started, a crontab entry is created to keep them alive. If you execute 'xrd.sh -k', the crontab entry is automatically removed. If you are not allowed to use crontab, you will see a warning while executing the script.
The script can also be used from the 'root' account. The daemons then run under the account specified by the XRDUSER variable in etc/xrootd/system.cnf.
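You can check that the keep-alive crontab entry is in place for the account running the daemons (assuming it is 'xrootd'):
# crontab -u xrootd -l | grep xrd.sh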
Logfiles
xrootd & cmsd
All logfiles are located under logs/ in the installation directory. The xrootd & cmsd log files are found in a subdirectory depending on whether they run in server or manager mode.
- logs/server/xrdlog
- logs/server/cmslog
- logs/manager/xrdlog
- logs/manager/cmslog
xrootd automatically rotates these files on a daily basis. The scripts automatically make .tar.gz archives of these logs, on a daily basis. The compressed logs are deleted automatically when they are more than 15 days old. This ensures that your disk will never be filled up by logs.
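To follow a data server's log live while debugging, from within the installation directory:
tail -f logs/server/xrdlog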
apmon
The apmon logfile is found under logs/apmon.log .
OLB plugin configuration
There are two olb plugins available. One is for DPM and one for Castor2.
DPM/xrootd setup
Note: this part has to be fixed, since the DPM integration with the latest Scalla/xrootd versions is not yet available.
To run xrootd on a DPM setup install xrootd with 'xrd-installer' on all DPM machines under the 'dpmmgr' account. The installer will automatically compile the DPM plugin if the DPM library is installed.
You have to set SYSTEM to DPM in etc/xrootd/system.cnf, export the DPM namespace you want ('/dpm/...') with an EXPORT statement in etc/xrootd/authz.cnf, and run the daemons under the 'dpmmgr' account (XRDUSER="dpmmgr").
The 'dpmmgr' account should have the following environment variables set:
DPNS_HOST
DPM_HOST
CSEC_MECH=ID
The configuration of the DPM server (/etc/shift.conf) should contain a PROTOCOLS directive that contains 'xroot'.
For example, /etc/shift.conf should contain a line like:
DPM PROTOCOLS rfio gsiftp xroot
Test of an xrootd Installation
Add '<installation directory>/bin' to your PATH environment variable to get the 'xrdcp' command in the PATH.
Upload a local file to an xrootd cluster (the cluster can even consist of a single machine)
xrdcp /etc/passwd root://<servername>:<portnumb>//tmp/testfile
Download a file from an xrootd cluster
xrdcp root://<servername>:<portnumb>//tmp/testfile /tmp/testfile
... where <portnumb> can be omitted if the storage cluster is accessible on the standard port 1094.
In case you see problems with the upload or download of files, run the commands with the debug option enabled, e.g.
xrdcp -d 3 root://<servername>:<portnumb>//tmp/testfile /tmp/testfile-diskserver
To see the whole list of possible command-line parameters for xrdcp, run it without parameters:
xrdcp
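As a minimal end-to-end check, the following sketch uploads a file, downloads it back and compares checksums (replace <servername> with your redirector's host name; it assumes the standard port 1094 and that writing to /tmp is allowed by your authorization setup):
xrdcp /etc/passwd root://<servername>//tmp/xrdtest
xrdcp root://<servername>//tmp/xrdtest /tmp/xrdtest.copy
md5sum /etc/passwd /tmp/xrdtest.copy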