IBM Storage Scale Setup
A high-performance solution for managing large-scale data

Building and Simple Use of a Parallel File System


Organizations are building, analyzing, and storing more data than ever before, and those that can deliver insights faster while managing rapidly growing infrastructure lead their industries. To deliver those insights, an enterprise's foundational storage must support both new and traditional applications while providing strong security, reliability, and high performance. As a high-performance solution for managing large-scale data, IBM Storage Scale (formerly IBM Spectrum Scale) provides unique archiving and analytics capabilities to help you address these challenges.

Experiment: Building and Simple Use of a Parallel File System


Experiment Content:

This experiment walks you through the installation and configuration of the IBM Storage Scale (formerly GPFS) parallel file system, introducing its basic concepts, architecture, and everyday use, and giving you hands-on familiarity with Storage Scale, a distributed software-defined storage solution.

Experiment Resources:

IBM Storage Scale 5.1.1 software
Red Hat Enterprise Linux 8.3 (VM)



Experiment Manual

The manual below is displayed on the same screen as your experiment environment so that you can refer to it at any point during the experiment. Start your experiment now!

  1. Information about the Experimental Environment (Duration: 2 mins)

    We are going to install Spectrum Scale 5.1.1 on three RHEL 8.3 x86 servers.
    The three servers have been pre-configured as follows:
    (1) A local YUM repository has been set up;
    (2) Mutual trust has been established between the servers, so password-free login is supported;
    (3) The host names and IP addresses have been added to /etc/hosts;
    (4) The firewall has been disabled;
    (5) The following RPM packages have been pre-installed:
    yum install kernel-devel cpp gcc gcc-c++ glibc sssd ypbind openldap-clients krb5-workstation elfutils elfutils-devel make
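
    If you would like to verify these preconditions yourself, here is a minimal sketch (these checks are not part of the lab; the node name follows the setup described above):
    ssh gpfs102 hostname            # password-free login should print the host name without prompting
    grep gpfs /etc/hosts            # the host names and addresses should be listed
    systemctl is-active firewalld   # should report the firewall service as inactive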

  2. Extract the Basic Packages Required by GPFS (Duration: 1 min)

    Note: Unless otherwise specified, commands are executed on node gpfs101 only.
    The "--silent" flag performs a silent extraction; without it, you must enter the number "1" and accept the license agreement:

    /root/Spectrum_Scale_Advanced-5.1.1.0-x86_64-Linux-install --silent
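
    The installer extracts the toolkit under /usr/lpp/mmfs/5.1.1.0/ (the path used in the next step); if you want to confirm the extraction succeeded, a quick check:
    ls /usr/lpp/mmfs/5.1.1.0/   # the ansible-toolkit directory should be present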

  3. Configure the Installer Node (Duration: 1 min)

    cd /usr/lpp/mmfs/5.1.1.0/ansible-toolkit/
    ./spectrumscale setup -s 192.168.1.101

  4. Add Nodes and Check the Configuration (Duration: 2 mins)

    Here -a designates an admin node, -g a GUI node, -n an NSD node, -m a manager node, and -q a quorum node:
    ./spectrumscale node add gpfs101 -a -g -n -m -q
    ./spectrumscale node add gpfs102 -a -g -n -q
    ./spectrumscale node add gpfs103 -n -q
    ./spectrumscale node list

  5. Add and Check NSD Disks (Duration: 1 min)

    Here -p specifies the primary NSD server, -fs the file system to create, -fg the failure group the disk belongs to, -po the storage pool, -u the type of data stored, and "/dev/sdx" the disk device. In this experiment, each of the three NSD nodes has three local disks (sdb/sdc/sdd) for creating NSDs. The following commands create one file system (gpfs), three pools (system/pool01/pool02), and three failure groups (101/102/103):
    ./spectrumscale nsd add -p gpfs101 -fs gpfs -fg 101 -po system -u dataAndMetadata "/dev/sdb"
    ./spectrumscale nsd add -p gpfs101 -fs gpfs -fg 101 -po pool01 -u dataOnly "/dev/sdc"
    ./spectrumscale nsd add -p gpfs101 -fs gpfs -fg 101 -po pool02 -u dataOnly "/dev/sdd"
    ./spectrumscale nsd add -p gpfs102 -fs gpfs -fg 102 -po system -u dataAndMetadata "/dev/sdb"
    ./spectrumscale nsd add -p gpfs102 -fs gpfs -fg 102 -po pool01 -u dataOnly "/dev/sdc"
    ./spectrumscale nsd add -p gpfs102 -fs gpfs -fg 102 -po pool02 -u dataOnly "/dev/sdd"
    ./spectrumscale nsd add -p gpfs103 -fs gpfs -fg 103 -po system -u dataAndMetadata "/dev/sdb"
    ./spectrumscale nsd add -p gpfs103 -fs gpfs -fg 103 -po pool01 -u dataOnly "/dev/sdc"
    ./spectrumscale nsd add -p gpfs103 -fs gpfs -fg 103 -po pool02 -u dataOnly "/dev/sdd"
    ./spectrumscale nsd list
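
    If you want to confirm that the disks exist and are not in use, a minimal check to run on each NSD node (device names as set up for this lab):
    lsblk /dev/sdb /dev/sdc /dev/sdd   # each disk should appear with no partitions or mount points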

  6. Check the File System, Adjust the Number of Copies and the Mount Point (Duration: 1 min)

    Here -mr sets the current number of metadata copies, -MR the maximum number of metadata copies, -r the current number of data copies, -R the maximum number of data copies, and -m the mount point. Run the "list" command before and after the change to compare the number of copies and the mount point:
    ./spectrumscale filesystem list
    ./spectrumscale filesystem modify gpfs -mr 2 -MR 3 -r 2 -R 3 -m /gpfs
    ./spectrumscale filesystem list
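
    As a worked example of what these numbers mean: with -r 2, every data block is written to two different failure groups, so the usable data capacity is roughly half of the raw capacity. Once the cluster is installed in step 11, you can confirm the settings with the standard mmlsfs command (a sketch; run it from gpfs101):
    /usr/lpp/mmfs/bin/mmlsfs gpfs -m -M -r -R   # default and maximum metadata/data replica counts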

  7. Configure the Performance Monitoring Function (Duration: 2 mins)

    Enable performance monitoring (it is enabled by default):
    ./spectrumscale config perfmon -r on

  8. Configure and Check the GPFS Cluster Name and Communication Port (Duration: 1 min)

    Here -c sets the cluster name, and -e the port range used by the GPFS daemon for communication:
    ./spectrumscale config gpfs -c gpfsdemo -e 60000-61000
    ./spectrumscale config gpfs --list

  9. Configure the Call Home Function (Duration: 2 mins)

    Disable the call home function:
    ./spectrumscale callhome disable

  10. View and Check the GPFS Cluster Configuration Information (Duration: 2 mins)

    ./spectrumscale install --precheck

  11. Start Installing the GPFS Cluster (Duration: 16 mins)

    This includes the installation of the NSDs, performance monitoring, the GUI, the file system, etc.
    Note: The steps above only defined the configuration; this step performs the actual installation based on it:
    ./spectrumscale install
    This command takes a long time to run, so please wait patiently for about 16 minutes.
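
    Once the installer finishes, you can quickly confirm the cluster is healthy with the standard Storage Scale commands (run from gpfs101):
    /usr/lpp/mmfs/bin/mmgetstate -a   # every node should report "active"
    /usr/lpp/mmfs/bin/mmlscluster     # shows the cluster name and the role of each node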

  12. Configure the GUI Account (Duration: 4 mins)

    Create an admin account and add it to the Administrator and SecurityAdmin groups:
    /usr/lpp/mmfs/gui/cli/mkuser admin -g Administrator,SecurityAdmin
    Enter the password (for example, admin001) twice; you can then access the GUI at http://192.168.1.101 in a browser.
    Note: Using the GUI is not part of this experiment, so please return to the command-line interface to continue.
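
    To confirm the account exists, the GUI CLI also provides a list command (assumed to be available alongside mkuser in the same directory):
    /usr/lpp/mmfs/gui/cli/lsuser   # the admin account should be listed with its groups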

  13. Modify the Hosts File for the CES Service (Duration: 2 mins)

    Note: This step needs to be performed on all nodes.
    Write the CES IP addresses into the hosts file on every node.
    Tip: Cluster Export Services (CES) provides highly available file and object services, including NFS, SMB, and Object:
    echo "192.168.1.104 ces104.cscdemo.cn ces104">>/etc/hosts
    echo "192.168.1.105 ces105.cscdemo.cn ces105">>/etc/hosts
    echo "192.168.1.106 ces106.cscdemo.cn ces106">>/etc/hosts

  14. Add and Check Protocol Service Nodes (Duration: 3 mins)

    Configure gpfs101/gpfs102/gpfs103 as protocol service nodes:
    ./spectrumscale node add gpfs101 -p
    ./spectrumscale node add gpfs102 -p
    ./spectrumscale node add gpfs103 -p
    ./spectrumscale node list

  15. Assign IP Addresses to the CES Protocol Service (Duration: 2 mins)

    ./spectrumscale config protocols -e 192.168.1.104,192.168.1.105,192.168.1.106

  16. Configure cesSharedRoot (Duration: 5 mins)

    Here -f specifies the file system that hosts it, and -m the mount path:
    ./spectrumscale config protocols -f gpfs -m /gpfs

  17. Enable the NFS and SMB Service Protocols (Duration: 1 min)

    ./spectrumscale enable nfs
    ./spectrumscale enable smb

  18. Check Protocol Configuration Information (Duration: 2 mins)

    ./spectrumscale deploy --precheck

  19. Deploy the Protocol Services (Duration: 10 mins)

    Deploy CES, NFS and SMB:
    ./spectrumscale deploy
    This command takes a long time to run, so please wait patiently for about 10 minutes.
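
    After deployment, you can check the state of the protocol services and the placement of the CES addresses with the standard mmces command (run from gpfs101):
    /usr/lpp/mmfs/bin/mmces service list -a   # NFS and SMB should be listed as running on the protocol nodes
    /usr/lpp/mmfs/bin/mmces address list      # shows which node currently hosts each CES IP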

  20. Configure the Protocol Service Authentication Mode (Duration: 2 mins)

    The local, user-defined authentication method is used here:
    /usr/lpp/mmfs/bin/mmuserauth service create --data-access-method file --type userdefined
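
    You can confirm the configured mode afterwards (a standard command; the exact output wording may vary by release):
    /usr/lpp/mmfs/bin/mmuserauth service list   # should report file access with user-defined authentication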

  21. Add an Authenticated User (Duration: 3 mins)

    Note: This step needs to be performed on all nodes.
    Create a local user. Below, a cscdemo user is created and "password" is entered as the password:
    useradd cscdemo
    /usr/lpp/mmfs/bin/smbpasswd -a cscdemo

  22. Export an SMB Shared Directory (Duration: 3 mins)

    Create the smbshare1 directory, make the cscdemo user its owner, and export it as a share:
    mkdir /gpfs/smbshare1
    chown cscdemo /gpfs/smbshare1
    /usr/lpp/mmfs/bin/mmsmb export add smbshare1 /gpfs/smbshare1
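
    To confirm the share was exported, list the SMB exports:
    /usr/lpp/mmfs/bin/mmsmb export list   # smbshare1 should appear with path /gpfs/smbshare1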

  23. Access the SMB Directory (Duration: 2 mins)

    In the upper right corner of the desktop, select Applications -> File Manager, and enter smb://192.168.1.104/smbshare1. Press Enter once and wait a few seconds; a dialog box will pop up.
    Select "Connect as user", enter the Username cscdemo and Password password, and click Connect.
    Try creating a file or directory in the opened share, then return to /gpfs/smbshare1 on gpfs101 to verify that it appears there.
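
    If the desktop file manager is unavailable, the same share can be mounted from any Linux client with cifs-utils installed (a sketch; the mount point /mnt/smb is an arbitrary choice, not part of the lab):
    mkdir -p /mnt/smb
    mount -t cifs //192.168.1.104/smbshare1 /mnt/smb -o username=cscdemo   # prompts for the password ("password")
    touch /mnt/smb/hello && ls /gpfs/smbshare1                             # the new file should appear on gpfs101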
