-
Information about the Experimental Environment (Duration: 2 mins)
We are going to install Spectrum Scale 5.1.1 on three RHEL 8.3 x86 servers.
The three servers have been pre-configured as follows:
(1) A local YUM repository has been configured;
(2) Mutual trust has been established between the servers, and passwordless SSH login is enabled;
(3) The host names and IP addresses of all nodes have been added to /etc/hosts;
(4) The firewall has been disabled;
(5) The following RPM packages have been pre-installed:
yum install kernel-devel cpp gcc gcc-c++ glibc sssd ypbind openldap-clients krb5-workstation elfutils elfutils-devel make
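If you want to verify these preconditions before starting, a minimal sanity check from gpfs101 might look like this (the node names gpfs101/gpfs102/gpfs103 follow the rest of this lab):
# Passwordless SSH between nodes: should print the remote date without a password prompt
ssh gpfs102 date
ssh gpfs103 date
# Host name resolution via /etc/hosts
getent hosts gpfs101 gpfs102 gpfs103
# Firewall state: should report "inactive"
systemctl is-active firewalld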
-
Extract the Basic Package Required by GPFS (Duration: 1 min)
Note: Unless otherwise specified, commands are executed on node gpfs101 only.
The "--silent" flag performs a silent extraction. Without it, you must enter the number "1" to accept the license agreement:
/root/Spectrum_Scale_Advanced-5.1.1.0-x86_64-Linux-install --silent
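The installer extracts the packages and the toolkit under /usr/lpp/mmfs/5.1.1.0/; you can optionally confirm the target directory exists:
ls /usr/lpp/mmfs/5.1.1.0/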
-
Configure the Installer Node (Duration: 1 min)
where, -s specifies the IP address of the installer node (gpfs101 here):
cd /usr/lpp/mmfs/5.1.1.0/ansible-toolkit/
./spectrumscale setup -s 192.168.1.101
-
Add Nodes and Check Configuration (Duration: 2 mins)
where, -a means an admin node; -g means a GUI node; -n means an NSD node; -m means a manager node, and -q means a quorum node:
./spectrumscale node add gpfs101 -a -g -n -m -q
./spectrumscale node add gpfs102 -a -g -n -q
./spectrumscale node add gpfs103 -n -q
./spectrumscale node list
-
Add and Check NSD Disks (Duration: 1 min)
where, -p represents the primary NSD server; -fs represents the file system to create; -fg represents the failure group the NSD belongs to; -po represents the storage pool; -u represents the type of data stored, and "/dev/sdx" represents the disk device. In this experiment, each of the three NSD nodes has three local disks (sdb/sdc/sdd) for creating NSDs. The following commands create one file system (gpfs), 3 pools (system/pool01/pool02) and 3 failure groups (101/102/103):
./spectrumscale nsd add -p gpfs101 -fs gpfs -fg 101 -po system -u dataAndMetadata "/dev/sdb"
./spectrumscale nsd add -p gpfs101 -fs gpfs -fg 101 -po pool01 -u dataOnly "/dev/sdc"
./spectrumscale nsd add -p gpfs101 -fs gpfs -fg 101 -po pool02 -u dataOnly "/dev/sdd"
./spectrumscale nsd add -p gpfs102 -fs gpfs -fg 102 -po system -u dataAndMetadata "/dev/sdb"
./spectrumscale nsd add -p gpfs102 -fs gpfs -fg 102 -po pool01 -u dataOnly "/dev/sdc"
./spectrumscale nsd add -p gpfs102 -fs gpfs -fg 102 -po pool02 -u dataOnly "/dev/sdd"
./spectrumscale nsd add -p gpfs103 -fs gpfs -fg 103 -po system -u dataAndMetadata "/dev/sdb"
./spectrumscale nsd add -p gpfs103 -fs gpfs -fg 103 -po pool01 -u dataOnly "/dev/sdc"
./spectrumscale nsd add -p gpfs103 -fs gpfs -fg 103 -po pool02 -u dataOnly "/dev/sdd"
./spectrumscale nsd list
-
Check the File System, Adjust the Number of Copies and the Mount Point (Duration: 1 min)
where, -mr represents the default number of metadata copies; -MR represents the maximum number of metadata copies; -r represents the default number of data copies; -R represents the maximum number of data copies, and -m represents the mount point. You can compare the number of copies and the mount point before and after the change with the "list" command:
./spectrumscale filesystem list
./spectrumscale filesystem modify gpfs -mr 2 -MR 3 -r 2 -R 3 -m /gpfs
./spectrumscale filesystem list
-
Configure the Performance Monitoring Function (Duration: 2 mins)
Enable performance monitoring (it is already enabled by default):
./spectrumscale config perfmon -r on
-
Configure and Check the GPFS Cluster Name and Communication Port (Duration: 1 min)
where, -c represents the cluster name, and -e represents the GPFS Daemon communication port range:
./spectrumscale config gpfs -c gpfsdemo -e 60000-61000
./spectrumscale config gpfs --list
-
Configure the Callhome Function (Duration: 2 mins)
We turn off the callhome function:
./spectrumscale callhome disable
-
View and Check the GPFS Cluster Configuration Information (Duration: 2 mins)
./spectrumscale install --precheck
-
Start Installing the GPFS Cluster (Duration: 16 mins)
Note: The steps so far were configuration only; this step performs the actual installation based on that configuration, including the NSDs, performance monitoring, the GUI and the file system:
./spectrumscale install
This command takes a long time to run; please allow about 16 minutes.
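Once the installation finishes, you can optionally verify the cluster with the native GPFS commands (not part of the toolkit flow):
/usr/lpp/mmfs/bin/mmlscluster               # cluster name and node roles
/usr/lpp/mmfs/bin/mmgetstate -a             # daemon state on all nodes; should be "active"
/usr/lpp/mmfs/bin/mmlsnsd                   # NSDs and their file system assignment
/usr/lpp/mmfs/bin/mmlsfs gpfs -m -M -r -R   # replica settings of the gpfs file system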
-
Configure the GUI Account (Duration: 4 mins)
Create an admin account and add it to the Administrator and SecurityAdmin groups:
/usr/lpp/mmfs/gui/cli/mkuser admin -g Administrator,SecurityAdmin
Then enter the password (for example, admin001) twice; after that, you can access the GUI through http://192.168.1.101 in a browser.
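To confirm the account was created, you can optionally list the GUI users (assuming the lsuser companion command of this GUI CLI):
/usr/lpp/mmfs/gui/cli/lsuser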
Note: The use of the GUI interface is not included in this experiment, so please return to the command line interface to continue the experiment.
-
Modify the Hosts File for the CES Service (Duration: 2 mins)
Note: This step needs to be performed on all nodes.
Add the CES IP addresses to the hosts file on every node.
Tip: Cluster Export Services (CES) provides highly available file and object services, including NFS, SMB and Object:
echo "192.168.1.104 ces104.cscdemo.cn ces104">>/etc/hosts
echo "192.168.1.105 ces105.cscdemo.cn ces105">>/etc/hosts
echo "192.168.1.106 ces106.cscdemo.cn ces106">>/etc/hosts
-
Add and Check Protocol Service Nodes (Duration: 3 mins)
Configure gpfs101/gpfs102/gpfs103 as protocol service nodes (-p):
./spectrumscale node add gpfs101 -p
./spectrumscale node add gpfs102 -p
./spectrumscale node add gpfs103 -p
./spectrumscale node list
-
Assign IP Addresses to the CES Protocol Service (Duration: 2 mins)
where, -e specifies the CES IP addresses that the protocol services will use:
./spectrumscale config protocols -e 192.168.1.104,192.168.1.105,192.168.1.106
-
Configure cesSharedRoot (Duration: 5 mins)
where, -f represents the file system in which cesSharedRoot is placed, and -m represents its mount path:
./spectrumscale config protocols -f gpfs -m /gpfs
-
Enable the NFS and SMB Service Protocols (Duration: 1 min)
./spectrumscale enable nfs
./spectrumscale enable smb
-
Check the Protocol Configuration Information (Duration: 2 mins)
./spectrumscale deploy --precheck
-
Deploy the Protocol Services (Duration: 10 mins)
Deploy CES, NFS and SMB:
./spectrumscale deploy
This command takes a long time to run; please allow about 10 minutes.
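After deployment, you can optionally check the CES state with the native commands:
/usr/lpp/mmfs/bin/mmces address list      # CES IPs and the nodes they are assigned to
/usr/lpp/mmfs/bin/mmces service list -a   # NFS/SMB service state on all protocol nodes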
-
Configure the Protocol Service Authentication Mode (Duration: 2 mins)
A user-defined (local) authentication method is used here:
/usr/lpp/mmfs/bin/mmuserauth service create --data-access-method file --type userdefined
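You can verify the resulting authentication configuration with:
/usr/lpp/mmfs/bin/mmuserauth service list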
-
Add an Authenticated User (Duration: 3 mins)
Note: This step needs to be performed on all nodes.
Create a local user. Below, a cscdemo user is created and "password" is entered as its SMB password:
useradd cscdemo
/usr/lpp/mmfs/bin/smbpasswd -a cscdemo
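To repeat this on the remaining nodes from gpfs101, here is a sketch assuming the bundled smbpasswd supports Samba's -s (read password from stdin) flag:
for n in gpfs102 gpfs103; do
  # The user should end up with the same UID on every node; check with "id cscdemo"
  ssh $n 'useradd cscdemo'
  ssh $n 'printf "password\npassword\n" | /usr/lpp/mmfs/bin/smbpasswd -s -a cscdemo'
done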
-
Export the SMB Shared Directory (Duration: 3 mins)
Create the smbshare1 directory, grant the cscdemo user access to it, and export it as a share:
mkdir /gpfs/smbshare1
chown cscdemo /gpfs/smbshare1
/usr/lpp/mmfs/bin/mmsmb export add smbshare1 /gpfs/smbshare1
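You can confirm the export with:
/usr/lpp/mmfs/bin/mmsmb export list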
-
Access the SMB Directory (Duration: 2 mins)
In the upper right corner of the desktop, select Applications -> File Manager and enter smb://192.168.1.104/smbshare1. Press Enter once and wait a few seconds for the dialog box to pop up. Select "Connect as user", enter the Username cscdemo and the Password password, and click Connect.
Try to create a file or directory in the opened share, then return to /gpfs/smbshare1 on gpfs101 to check that it is there.
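If no desktop is available, the share can also be checked from any Linux client with smbclient (assuming the samba-client package is installed there):
smbclient //192.168.1.104/smbshare1 -U cscdemo
# at the smb: \> prompt, try e.g. "mkdir test" and "ls"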