days left for your 2X growth

2X Growth

2X the points gained during the growth doubling period

Points

Get points: earn points by successfully completing experiments, challenges, and learning path courses. Points can unlock advanced features such as Discovery mode, scheduled experiments, extended experiments, and VIP experiments.

Activity count

Each time you complete an experiment, challenge, or course, your activity count increases, earning you a higher level

The system has detected that you have not completed the task
Quick path - Complete any of the following experiments
You can
Unlock more experiments and learning paths
Schedule experiments and access Discovery mode
Get your own experiment environment with VIP access
2X Growth Points
News about the latest experiments, learning paths, and other events.
You have Bonus Task(s) awaiting completion.
Bonus task list
Preference Setting

Save

IBM Storage Scale Lifecycle Management
High-performance solution for managing large-scale data

Migration Policy Demonstration

Only    0 seat(s) available

    0 already completed

The volume of data that organizations are building, analyzing, and storing is larger than ever before. Only organizations that can deliver insights faster and manage fast-growing infrastructure can lead their industries. To deliver these insights, an enterprise's foundational storage must support both new-era big data and traditional applications while providing excellent security, reliability, and high performance. As a high-performance solution for managing large-scale data, IBM Spectrum Scale provides unique archiving and analytics capabilities to help you address these challenges.

Experiment: Migration Policy Demonstration

353 already completed

Experiment Content:

This experiment is intended to help you understand the basic operations and concepts of migration policies in the IBM Storage Scale (GPFS) parallel file system.

Experiment Resources:

IBM Storage Scale 5.0.1 software
Red Hat Enterprise Linux 7.4 (VM)

Demo Video

Medal Status


Light Up Medals

Latest Activities

zhoufc has completed the experiment and received a blue medal
chenjian has completed the experiment and received a blue medal
739163912 has completed the experiment and received a blue medal
739163912 has completed the experiment and received a blue medal
739163912 has completed the experiment and received a blue medal
739163912 has completed the experiment and received a blue medal
syyjcyao has completed the experiment and received a blue medal
syyjcyao has completed the experiment and received a blue medal
yanqingc has completed the experiment and received a blue medal
yanqingc has completed the experiment and received a blue medal

Challenge: Intelligent Allocation of Storage Space

170 already completed

Challenge Background:

At a small Internet company, high-performance storage is precious. When this resource pool reaches a certain threshold, data must be migrated manually. Help the company automate the migration and allocation of the storage resource pool using Storage Scale migration rules.

Challenge Goal:

Log on to the graphical management interface of Storage Scale as admin/admin001 and configure a migration rule so that when space usage of the resource pool "ssdpool" exceeds 20%, files in json and xml formats are migrated to the resource pool "nlsaspool", releasing ssdpool space until it is 99% free. Then use the command "dd if=/dev/zero of=test.json bs=1M count=1000" to simulate writing files into the directory /gpfs and trigger the migration.

Challenge Rules:

1. Once the challenge starts, the system starts a 30-minute timer.
2. Click the "Submit Results" button in the upper left corner after completing the challenge task.
3. The system evaluates your performance automatically and gives your challenge result and score.
4. Your score is ranked by the time you take: the shorter your time, the higher your rank.

Medal Status


Light Up Medals

Challenge Ranking List

Rank Nickname Time
1    mengweihangzhou 5s
2    80468165        2:01s
3    caizc           2:47s
4    柠檬糖ㄨ梦       2:50s
5    lg_13606        3:54s

Discovery: Migration Policy Demonstration

13 already completed

Experiment Content:

This experiment is intended to help you understand the basic operations and concepts of migration policies in the IBM Storage Scale (GPFS) parallel file system.

Experiment Resources:

  • IBM Storage Scale 5.0.1 software
  • Red Hat Enterprise Linux 7.4 (VM)

Tips

1. Discovery gives you a longer time to explore freely
2. Data will be cleared when the discovery session ends
3. You must finish the experiment and the challenge before starting your discovery

Please start your challenge after you finish the experiment.

Please start your discovery after you finish the challenge.

Please start your discovery after you finish the experiment.

Experiment Manual

The following content is displayed on the same screen as your experiment so that you can refer to it whenever necessary. Start your experiment now!

  1. Log on to the graphical management interface (GUI) of IBM Storage Scale (Duration: 3 mins)

    Click  Advanced...  -> Accept the Risk and Continue
    Enter "admin" as the username and "admin001" as the password, then click the "Sign In" button

    Log onto the Spectrum Scale management platform

  2. View resource pools(Duration: 4 mins)
    Navigate to the menu "Storage -> Pools" at the left.

    View all the resource pools currently managed by the system:

    - ssdpool: a resource pool of high-performance disks, mainly used for hot data or data with relatively high storage performance requirements
    - saspool: a resource pool of medium-performance disks, mainly used for data with medium storage performance requirements
    - nlsaspool: a resource pool of low-performance disks, mainly used for warm data and data that needs long-term retention
    Note: For data stored on disk, Spectrum Scale supports data migration based on automatic migration policies. For example, once many json or xml files are written into ssdpool, the upper limit of its disk capacity would be reached quickly. In this case, Spectrum Scale can automatically migrate inactive data to other pools such as nlsaspool.
    Next, we will quickly configure automatic migration policies in the GUI of Spectrum Scale:
  3. Enter the Information Lifecycle Management page(Duration: 4 mins)
    Navigate to the Menu "Files -> Information Lifecycle" at the left

    View the list of policies at the left:

    Active Policy: the policy rules currently in effect
    Policy Repository: stored policies that are not yet active
  4. Create a policy(Duration: 5 mins)
    - Click into the Policy Repository tab page

    - Click the button "+" to create a new policy and name it "mypolicy2"
  5. Configure the default placement rules(Duration: 5 mins)
    Note: Our purpose here is to have files not matched by any special rule written into the resource pool "saspool" by default
    - Click to select the default rule "Placement default (*)" under mypolicy2
    - Edit the rule at the right as "pool = saspool" (meaning that all files are placed in saspool by default)
    - Click the button "Apply Changes" to save your settings
  6. Create and configure the placement rules for files with high storage performance requirements(Duration: 5 mins)

    Note: Our purpose here is to have files in json and xml formats written into ssdpool by default
    - Click the button "Add Rule" to create a new placement rule (Rule name: highperf; Rule type: Placement)

    - Edit the rule at the right as "pool = ssdpool"

    - Scroll down and edit the rule (Placement Criteria: Extension IN *.json, *.xml) as shown in the figure

    - Click the button "Apply Changes" at the lower left corner to save your settings

  7. Create and configure the migration rules(Duration: 5 mins)

    Note: Our purpose here is that, when space usage of the resource pool "ssdpool" exceeds 20%, files in json and xml formats are migrated into the resource pool "nlsaspool" to release ssdpool space until it is 99% free
    - Click the button "Add Rule" to create a new migration rule (Rule name: freeup; Rule type: Migration)

    - Configure relevant parameters at the right
    - Source=ssdpool, target=nlsaspool,
    - Migration Threshold (start=20%, stop=1%),
    - Migration Criteria (Extension IN *.json, *.xml), as shown in the figure below

    - Click the button "Apply Changes" at the left to save your settings
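    The GUI stores these rules in the Spectrum Scale policy language. For reference, a minimal sketch of what the rules from steps 5-7 could look like as an mmapplypolicy-style rule file (the exact text the GUI generates may differ; the wording of the WHERE clauses here is an assumption):

```
/* Rule "highperf": place new json/xml files in ssdpool */
RULE 'highperf' SET POOL 'ssdpool'
  WHERE LOWER(NAME) LIKE '%.json' OR LOWER(NAME) LIKE '%.xml'

/* Rule "freeup": when ssdpool usage exceeds 20%, migrate json/xml
   files to nlsaspool until usage falls to 1% (i.e. 99% free) */
RULE 'freeup' MIGRATE FROM POOL 'ssdpool'
  THRESHOLD(20,1) TO POOL 'nlsaspool'
  WHERE LOWER(NAME) LIKE '%.json' OR LOWER(NAME) LIKE '%.xml'

/* Default placement: all other files go to saspool */
RULE 'default' SET POOL 'saspool'
```

    Rule order matters for placement: more specific SET POOL rules must come before the default rule, which is what the drag-to-bottom operation in the next step achieves.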

  8. Adjust the sequence of placement rules(Duration: 5 mins)
    - Drag the "Placement default" rule to the bottom

    - Click the button "Apply Changes" to save your settings
  9. Activate the policies(Duration: 5 mins)
    Note: The newly created "mypolicy2" policy containing the migration rules does not yet take effect; it is only registered in the Policy Repository. Next, we should activate all these rules.
    At the left, scroll up to the top, right-click mypolicy2 and select "Apply as Active Policy". Then click into the "Active Policy" tab page and view the list of active policies
  10. Simulate writing files to trigger migration conditions and verify the migration policies(Duration: 8 mins)

    Note: The command line operation instructions are given as follows. In the directory "/gpfs/migration" on the GPFS server side, we can see that the files test1.json and test2.json are stored in ssdpool by default. We then simulate writing a 1 GB test file "test.json", which pushes ssdpool usage past 20% and triggers migration of the json files to nlsaspool. After waiting several minutes, we can see that test1.json and test2.json have been migrated into nlsaspool, confirming that the migration policy is set up successfully.
    - Find the PuTTY client in the taskbar at the bottom of desktop, which has logged into the GPFS server by default
    - Enter the directory /gpfs/migrationtest

    # cd /ibm/gpfs/migrationtest
    - Use the commands of Spectrum Scale to verify the storage resource pools where the current test files are located

    # mmlsattr -L test1.json

    # mmlsattr -L test2.json

    View the "storage pool name" values in the output results, which should normally be displayed as follows:

    test1.json -> ssdpool

    test2.json -> ssdpool
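    If you want to script this check, the pool can be parsed from the mmlsattr -L output. A small sketch (the sample text below is abbreviated and its exact spacing is an assumption; only the "storage pool name" field name is taken from this manual):

```python
def storage_pool(mmlsattr_output: str) -> str:
    """Extract the 'storage pool name' field from `mmlsattr -L` output."""
    for line in mmlsattr_output.splitlines():
        if line.lower().startswith("storage pool name"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'storage pool name' field found")

# Abbreviated sample output (layout assumed for illustration)
sample = """\
file name:            test1.json
storage pool name:    ssdpool
fileset name:         root
"""
print(storage_pool(sample))  # ssdpool
```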
    - Use the command "mmdf gpfs" to view the usage of ssdpool resource pool

    # mmdf gpfs -P ssdpool --block-size auto

    You will see that the remaining space in ssdpool (free in full blocks) is about 93%
    - Create a test file to trigger the migration conditions (20%)

    Note: We create a 1 GB file named test.json; based on the previously configured placement rule for json files, it will be written into ssdpool automatically, pushing usage past the 20% threshold.

    # dd if=/dev/zero of=test.json bs=1M count=1000
    - Use the command "mmdf gpfs" to view the usage of ssdpool resource pool again

    # mmdf gpfs -P ssdpool --block-size auto

    You will see that the remaining space in ssdpool (free in full blocks) is about 77%, meaning usage now exceeds 20% and the migration condition is triggered
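    The two mmdf readings above are consistent: the 1000 MB file written by dd moved free space from about 93% to about 77%, implying a pool of roughly 6.25 GB, and leaving usage at about 23%, above the 20% start threshold. A quick back-of-envelope check (the 93%/77% figures are the approximate values quoted above):

```python
free_before = 0.93  # approximate free fraction before writing test.json
free_after = 0.77   # approximate free fraction after writing 1000 MB
file_mb = 1000      # size of test.json created with dd

# Implied total pool size: the file consumed ~16% of the pool
pool_mb = file_mb / (free_before - free_after)
print(round(pool_mb))     # 6250 (MB), i.e. about 6.25 GB

# Usage after the write vs. the migration start threshold (20%)
used_after = 1 - free_after
print(used_after > 0.20)  # True: the migration rule fires
```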
    - Wait for about 5-10 min and then view results

    # mmlsattr -L test1.json

    # mmlsattr -L test2.json
    - View the "storage pool name" values in the output results, which should normally be displayed as follows:

    test1.json -> nlsaspool

    test2.json -> nlsaspool
    Through the simple tests described above, we can see that Spectrum Scale allows you to migrate data online with quick configuration. These tests only demonstrate conditions based on file extension; you may also test other parameters, such as user or user group.

Scan here to share it


Reserve Experiment Summary

Experiment Name:

Experiment Content: Placement Policy Demonstration
Migration Policy Demonstration
QoS Current-Limiting Demonstration

:

Hour(s)

Points: This reservation will use 50 points

You have successfully reserved this experiment. You can view it later at Personal Center > My Reservations.

You are not authorized to reserve the experiment!

It’s only for Premium Members.

VIP Project Application

What is a VIP project:
You can use your own dedicated experiment resources for a period of time to conduct in-depth testing. During this period, you can manually initialize and reclaim the environment as needed.

Experiment Name:
IBM Storage Scale Multi-Function Demonstration
Migration Policy Demonstration

Please login before sharing



    Copy succeeded


Please fill in the email address

    Send succeeded

Poster sharing

Scan to share poster

You will use 100 consumption points to start a free-form experiment

You do not have enough consumption points

You will use 200 consumption points to unlock a VIP experiment

You do not have enough consumption points

You will use 50 consumption points to reserve an experiment

You do not have enough consumption points

Reservations for that day are full; please choose another time

Non-Premium Members have only 5 experiment opportunities per month. You still have 0 opportunities. Do you want to start the experiment now?

P.S. Premium Members enjoy unlimited access to routine experiments.
沪ICP备18004249号-1