Release Notes
MindSpore 2.3.1 Release Notes
Major Features and Improvements
[STABLE] Removed the restriction that the device_matrix value corresponding to interleaved_parallel must be 2 when using Layout to construct the parallel strategy.
[STABLE] Added support for the MS_CUSTOM_DEPEND_CONFIG_PATH environment variable, which lets users define custom control edges to achieve better overlapping of communication and computation.
API Change
New API
[STABLE] Add new API mindspore.mint.repeat_interleave.
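The semantics of repeat_interleave (which mirrors the mainstream industry API) can be sketched in plain Python; this is an illustration of the behavior only, not the MindSpore implementation:

```python
# Plain-Python sketch of repeat_interleave semantics: each element is
# repeated in place, either a fixed number of times or per-element.
def repeat_interleave(values, repeats):
    """Repeat each element of `values`; `repeats` is an int or a
    per-element list of ints."""
    if isinstance(repeats, int):
        repeats = [repeats] * len(values)
    if len(repeats) != len(values):
        raise ValueError("repeats must match the number of elements")
    out = []
    for v, r in zip(values, repeats):
        out.extend([v] * r)
    return out

print(repeat_interleave([1, 2, 3], 2))    # [1, 1, 2, 2, 3, 3]
print(repeat_interleave([1, 2], [1, 3]))  # [1, 2, 2, 2]
```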
Contributors
ccsszz;dairenjie;DeshiChen;fuhouyu;gaoshuanglong;gaoyong10;GuoZhibin;halo;huoxinyou;jiangchao_j;jiaorui;jiaxueyu;jijiarong;JuiceZ;lichen;liujunzhu;liuluobin;LLLRT;looop5;luoyang ;Margaret_wangrui;mengyuanli;panzhihui;pengqi;PingqiLi;Renyuan Zhang;tanghuikang;tianxiaodong;TuDouNi;wudawei;XianglongZeng;xiaosh;xiaoxin_zhang;XinDu;yanghaoran;yanglong;yangruoqi713;Yanzhi_YI;yao_yf;YijieChen;yuchaojie;YuJianfeng;zangqx;zhengzuohe;zhouyaqiang0;ZPaC;zyli2020;胡彬;宦晓玲;康伟;李林杰;刘崇鸣;王禹程;俞涵;周莉莉;邹文祥
Contributions of any kind are welcome!
MindSpore 2.3.0 Release Notes
Major Features and Improvements
AutoParallel
[STABLE] Extended functional parallelism: mindspore.shard now supports Graph mode. In Graph mode, the parallel sharding strategy of inputs and weights can be set for nn.Cell/function; for other operators, the parallel strategy can be configured automatically through "sharding_propagation". Added the mindspore.reshard interface, which supports manual resharding and setting a precise sharding strategy (mindspore.Layout) for tensors.
[STABLE] Added the Callback interface mindspore.train.FlopsUtilizationCollector, which collects model FLOPs utilization (MFU) and hardware FLOPs utilization (HFU) statistics.
[STABLE] Add functional communication API mindspore.communication.comm_func.
[BETA] Optimize the memory usage of interleaved pipeline in O0 and O1 mode.
[BETA] AutoParallel supports automatic pipeline strategy generation in multi-node scenarios (not supported in the single-node scenario). parallel_mode needs to be set to auto_parallel and search_mode to recursive_programming.
PyNative
[STABLE] Optimize the basic data structure of PyNative and improve operator API performance.
[STABLE] Tensor supports register_hook so that users can print or modify the gradient with respect to the tensor.
[STABLE] The PyNative mode supports the recompute function. You can use the recompute interface to reduce the peak device memory of the network.
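The idea behind recompute, trading compute for memory by discarding intermediate activations after the forward pass and rebuilding them during the backward pass, can be illustrated with a toy example (a conceptual sketch, not MindSpore's implementation):

```python
import math

# Toy illustration of recomputation: instead of caching the activation
# produced in the forward pass, store only the input and re-run the
# forward function when the gradient is needed.
def forward(x):
    return math.tanh(x)

def backward_with_cache(x, activation, grad_out):
    # Standard approach: the activation was kept in device memory.
    return grad_out * (1.0 - activation ** 2)

def backward_with_recompute(x, grad_out):
    # Recompute approach: the activation is rebuilt from the saved input,
    # so it never occupies memory between forward and backward.
    activation = forward(x)
    return grad_out * (1.0 - activation ** 2)

a = forward(0.5)
assert backward_with_cache(0.5, a, 1.0) == backward_with_recompute(0.5, 1.0)
```

Both paths produce the same gradient; the recompute path simply pays an extra forward evaluation to lower the peak memory, which is the trade-off the recompute interface exposes.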
FrontEnd
[STABLE] Optimize Checkpoint saving and loading basic processes to improve performance by 20%.
[STABLE] Support CRC verification of Checkpoint files during saving and loading processes to enhance security.
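The mechanism of CRC verification can be illustrated with Python's standard zlib.crc32 (a generic sketch of checksum-then-verify; the actual checkpoint file layout is MindSpore-internal):

```python
import zlib

# Append a CRC32 checksum when "saving" a payload, then verify it on load.
def save_with_crc(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "little")

def load_with_crc(blob: bytes) -> bytes:
    payload, stored = blob[:-4], int.from_bytes(blob[-4:], "little")
    if zlib.crc32(payload) != stored:
        raise ValueError("CRC mismatch: file corrupted")
    return payload

blob = save_with_crc(b"checkpoint tensors")
assert load_with_crc(blob) == b"checkpoint tensors"
```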
Dataset
[STABLE] Support Ascend processing backend for the following transforms: Equalize, Rotate, AutoContrast, Posterize, AdjustSharpness, Invert, Solarize, ConvertColor, Erase.
[STABLE] Support video files reading and parsing function. For more detailed information, see APIs: mindspore.dataset.vision.DecodeVideo, mindspore.dataset.vision.read_video, and mindspore.dataset.vision.read_video_timestamps.
[STABLE] Support specifying the max_rowsize parameter as -1 in the mindspore.dataset.GeneratorDataset, mindspore.dataset.Dataset.map and mindspore.dataset.Dataset.batch interfaces. The size of the shared memory used by dataset multiprocessing will then be allocated dynamically according to the size of the data, so the max_rowsize parameter does not need to be tuned manually.
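The size-driven shared-memory allocation described above can be illustrated with Python's own multiprocessing.shared_memory module (a conceptual sketch of the mechanism, not MindSpore dataset internals):

```python
from multiprocessing import shared_memory

# Conceptual sketch: size the shared-memory block to the data itself
# instead of to a fixed, manually tuned upper bound (max_rowsize).
def put_in_shared_memory(data: bytes) -> shared_memory.SharedMemory:
    shm = shared_memory.SharedMemory(create=True, size=len(data))
    shm.buf[: len(data)] = data
    return shm

data = b"a sample row of arbitrary size"
shm = put_in_shared_memory(data)
assert bytes(shm.buf[: len(data)]) == data
shm.close()
shm.unlink()
```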
Inference
[STABLE] Added 14 large models, such as LLaMa2, LLaMa3, and Qwen1.5, to the integrated training-and-inference architecture, which unifies scripts, distributed strategies, and runtime. The period from training to inference deployment of typical large models is reduced to days. Large fused operators reduce inference latency and effectively improve network throughput.
PIJIT
[BETA] Support bytecode parsing for Python 3.8 and Python 3.10 to expand the supported Python versions.
[BETA] Support dynamic shape and symbolic shape as input to enable the dynamic input scenarios.
[BETA] Enable single-step composition capability to optimize compile time.
[BETA] Support bytecode capture with side effects (STORE_ATTR, STORE_GLOBAL, LIST_APPEND, dict.pop) via bytecode tuning, enabling auto-mixed precision, reducing graph breaks, and improving performance.
Profiler
[STABLE] Provides a hierarchical Profiler function, controlling the level of performance data collection through the profiler_level parameter.
[STABLE] The Profiler analyse method adds a new mode parameter for configuring asynchronous parsing, allowing performance data parsing to run in parallel with training.
[STABLE] The Profiler adds a new data_simplification parameter, which allows users to control whether to delete redundant data after parsing the performance data to save hard disk space.
[STABLE] The Profiler enhances the memory analysis function. Users can collect the memory allocation and release information of the framework, CANN and hardware through the profile_memory parameter, and visualize and analyze the information through the MindStudio tool.
[BETA] In PyNative mode, the Timeline integrates host profiling information, including task time and user-side stack information.
Dump
[STABLE] Enhanced synchronous and asynchronous dump functionality: added L2Norm information to statistics dumps, along with a statistic_category field that allows users to customize which statistics to save, improving dump usability. For details about the support for synchronous/asynchronous dump, see the Dump Introduction.
[STABLE] Improved synchronous dump functionality: Enables overflow and exception dumps through the op_debug_mode field.
[STABLE] Enhanced synchronous dump functionality: The stat_calc_mode field enables device-side computation of statistics (default is host-side), and the sample_mode field is configured to perform sample-based dumps, improving dump performance.
[STABLE] Enhanced asynchronous dump functionality: Now supports saving in complex64 and complex128 formats.
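Taken together, the new fields slot into the dump configuration file roughly as follows. This is a hedged sketch that shows only the fields named above with illustrative values; a real configuration requires additional mandatory fields (path, net_name, iteration, etc.), as described in the Dump Introduction:

```json
{
  "common_dump_settings": {
    "op_debug_mode": 0,
    "saved_data": "statistic",
    "statistic_category": ["max", "min", "l2norm"]
  }
}
```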
Runtime
[STABLE] Supports multi-level compilation of static graphs by setting mindspore.set_context(jit_config={"jit_level": "O0/O1/O2"}). The default value is empty: the framework automatically selects the optimization level according to the product category, O2 for Atlas training products and O0 for all other products.
[STABLE] Static graphs support multi-stream concurrent execution of communication and computation under O0/O1.
[STABLE] Add memory management API mindspore.hal.memory.
[BETA] The memory pool supports virtual memory defragmentation; virtual memory is enabled by default under graph O0/O1.
Ascend
[STABLE] Provide an operator memory out-of-bounds access detection switch on the Ascend platform. Users can detect out-of-bounds memory issues inside operators on the Ascend platform by setting mindspore.set_context(ascend_config={"op_debug_option": "oom"}).
[BETA] The environment variable MS_SIMULATION_LEVEL supports the graph compilation O0 execution mode on the Ascend platform, enabling compilation performance and runtime memory analysis.
[BETA] Ascend platform supports AscendC custom operators through AOT.
API Change
New APIs
[STABLE] Added the mindspore.mint API, which provides many functional, nn, and optimizer interfaces. The API usage and functionality are consistent with mainstream industry usage, making them convenient to reference and use. The mint interfaces are currently experimental and perform better than ops under jit_level="O0" and in PyNative mode. The graph sinking mode and the CPU/GPU backends are not yet supported; support will be improved gradually.
mindspore.mint
mindspore.mint.eye
mindspore.mint.rand_like
mindspore.mint.isfinite
mindspore.mint.any
mindspore.mint.ones
mindspore.mint.rand
mindspore.mint.log
mindspore.mint.greater_equal
mindspore.mint.ones_like
mindspore.mint.gather
mindspore.mint.logical_and
mindspore.mint.all
mindspore.mint.zeros
mindspore.mint.permute
mindspore.mint.logical_not
mindspore.mint.mean
mindspore.mint.zeros_like
mindspore.mint.repeat_interleave
mindspore.mint.logical_or
mindspore.mint.prod
mindspore.mint.arange
mindspore.mint.abs
mindspore.mint.mul
mindspore.mint.sum
mindspore.mint.broadcast_to
mindspore.mint.add
mindspore.mint.neg
mindspore.mint.eq
mindspore.mint.cat
mindspore.mint.clamp
mindspore.mint.negative
mindspore.mint.ne
mindspore.mint.index_select
mindspore.mint.cumsum
mindspore.mint.pow
mindspore.mint.greater
mindspore.mint.max
mindspore.mint.atan2
mindspore.mint.reciprocal
mindspore.mint.gt
mindspore.mint.min
mindspore.mint.arctan2
mindspore.mint.rsqrt
mindspore.mint.isclose
mindspore.mint.scatter_add
mindspore.mint.ceil
mindspore.mint.sigmoid
mindspore.mint.le
mindspore.mint.narrow
mindspore.mint.unique
mindspore.mint.sin
mindspore.mint.less_equal
mindspore.mint.nonzero
mindspore.mint.div
mindspore.mint.sqrt
mindspore.mint.lt
mindspore.mint.normal
mindspore.mint.divide
mindspore.mint.square
mindspore.mint.maximum
mindspore.mint.tile
mindspore.mint.erf
mindspore.mint.sub
mindspore.mint.minimum
mindspore.mint.topk
mindspore.mint.erfinv
mindspore.mint.tanh
mindspore.mint.inverse
mindspore.mint.sort
mindspore.mint.exp
mindspore.mint.bmm
mindspore.mint.searchsorted
mindspore.mint.stack
mindspore.mint.floor
mindspore.mint.matmul
mindspore.mint.argmax
mindspore.mint.where
mindspore.mint.flip
mindspore.mint.split
mindspore.mint.cos
mindspore.mint.less
mindspore.mint.nn
mindspore.mint.nn.Dropout
mindspore.mint.nn.Unfold
mindspore.mint.nn.Fold
mindspore.mint.nn.Linear
mindspore.mint.nn.BCEWithLogitsLoss
mindspore.mint.nn.functional
mindspore.mint.nn.functional.batch_norm
mindspore.mint.nn.functional.group_norm
mindspore.mint.nn.functional.fold
mindspore.mint.nn.functional.layer_norm
mindspore.mint.nn.functional.max_pool2d
mindspore.mint.nn.functional.linear
mindspore.mint.nn.functional.binary_cross_entropy
mindspore.mint.nn.functional.unfold
mindspore.mint.nn.functional.sigmoid
mindspore.mint.nn.functional.one_hot
mindspore.mint.nn.functional.tanh
mindspore.mint.nn.functional.elu
mindspore.mint.nn.functional.binary_cross_entropy_with_logits
mindspore.mint.nn.functional.gelu
mindspore.mint.nn.functional.dropout
mindspore.mint.nn.functional.leaky_relu
mindspore.mint.nn.functional.embedding
mindspore.mint.nn.functional.silu
mindspore.mint.nn.functional.grid_sample
mindspore.mint.nn.functional.softplus
mindspore.mint.nn.functional.relu
mindspore.mint.nn.functional.softmax
mindspore.mint.nn.functional.pad
mindspore.mint.optim
mindspore.mint.optim.AdamW
mindspore.mint.linalg
mindspore.mint.linalg.inv
Non-compatible Interface Changes
Interface name: Profiler
Changes: The performance data files generated by parsing are streamlined to save space. After the performance data is exported, the FRAMEWORK directory data and other redundant data are deleted; only the profiler deliverables and the original performance data in the PROF_XXX directory are retained. Data simplification mode can be turned off by setting the data_simplification parameter to False, which keeps the generated performance data files consistent with those of historical versions.
Interface name: The saved_data field in the configuration file of the dump function is "tensor".
Changes: The name of the file dumped to disk is changed: "/" is replaced with "_", and the operator name is changed to the operator's global name.
Original interface (v2.1):
File name format: {op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{input_output_index}.{slot}.{format}.npy
Example: Conv2D.Conv2D-op12.0.0.1623124369613540.output.0.DefaultFormat.npy
New interface:
File name format: {op_type}.{op_name}.{task_id}.{stream_id}.{timestamp}.{input_output_index}.{slot}.{format}.npy
Example: Conv2D.Default_network-WithLossCell__backbone-AlexNet_conv3-Conv2d_Conv2D-op12.0.0.1623124369613540.output.0.DefaultFormat.npy
Interface name: The saved_data field in the Dump function configuration file is "statistic".
Changes: By default, the 'max', 'min', 'avg', 'count', 'negative zero count', 'positive zero count', 'nan count', 'negative inf count', 'positive inf count', 'zero count' and 'md5' statistics were saved. In version 2.3, only the 'max', 'min', and 'l2norm' statistical items are saved by default. Statistical items can be customized by configuring 'statistic_category'.
Contributors
caifubi;candanzg;ccsszz;chaiyouheng;changzherui;chenfei_mindspore;chengbin;chengfeng27;Chong;dairenjie;DavidFFFan;DeshiChen;dingjinshan;douzhixing;emmmmtang;Erpim;fary86;fengyixing;fuhouyu;gaoyong10;GuoZhibin;guozhijian;halo;haozhang;hejianheng;Henry Shi;horcham;huandong1;huangbingjian;Jackson_Wong;jiangchenglin3;jiangshanfeng;jiangzhenguang;jiaorui;bantao;jiaxueyu;jijiarong;JuiceZ;jxl;kairui_kou;lanzhineng;LiangZhibo;lichen;limingqi107;linqingke;liubuyu;liujunzhu;liuluobin;liyan2022;liyejun;LLLRT;looop5;lujiale;luochao60;luoyang;lvxudong;machenggui;maning202007;Margaret_wangrui;master_2;mengyuanli;moran;Mrtutu;NaCN;nomindcarry;panzhihui;pengqi;qiuyufeng;qiuzhongya;Renyuan Zhang;shaoshengqi;Shawny;shen_haochen;shenhaojing;shenwei41;shij1anhan;shilishan;shiziyang;shunyuanhan;shuqian0;TAJh;tanghuikang;tan-wei-cheng;Thibaut;tianxiaodong;TronZhang;TuDouNi;VectorSL;wang_ziqi;wanghenchang;wangjie;weiyang;wudawei;wujiangming;wujueying;XianglongZeng;xiaotianci;xiaoxin_zhang;xiaoxiongzhu;xiaoyao;XinDu;xuxinglei;yangchen;yanghaoran;yanglong;yangruoqi713;yangzhenzhang;yangzishuo;Yanzhi_YI;yao_yf;yefeng;yide12;YijieChen;YingLai Lin;yuchaojie;YuJianfeng;zangqx;zhaiyukun;zhangminli;zhangqinghua;ZhangZGC;zhengxinQian;zhengzuohe;zhouyaqiang0;zhuguodong;zhupuxu;zichun_ye;zjun;zlq2020;ZPaC;zuochuanyong;zyli2020;阿琛;狄新凯;范吉斌;冯一航;胡彬;宦晓玲;黄勇;康伟;雷仪婧;李良灿;李林杰;刘崇鸣;刘力力;刘勇琪;刘子涵;吕浩宇;王禹程;熊攀;徐安越;徐永飞;俞涵;张王泽;张栩浩;郑裔;周莉莉;周先琪;朱家兴;邹文祥
Contributions of any kind are welcome!
MindSpore 2.3.0-rc2 Release Notes
Major Features and Improvements
AutoParallel
[STABLE] Transpose/Sub/Add/Mul/Div/ReLU/Softmax/Sigmoid supports layout configuration.
[STABLE] Collective communication precision affects network convergence, so the configuration item force_fp32_communication is provided in the mindspore.set_auto_parallel_context interface. When it is set to True, the communication type of reduce communication operators is forced to float32.
[BETA] Pipeline parallelism supports interleaving, optimizing performance when the number of micro batches is limited.
[BETA] Optimized checkpoint transformation speed when using pipeline parallelism; supports single-stage transformation.
PyNative
[BETA] Support recompute in PyNative mode.
[STABLE] Support register_hook in PyNative mode.
API Change
Add timeout environment variables in dynamic networking scenarios:
MS_TOPO_TIMEOUT: Cluster networking phase timeout, in seconds.
MS_NODE_TIMEOUT: Node heartbeat timeout, in seconds.
MS_RECEIVE_MSG_TIMEOUT: Node timeout for receiving messages, in seconds.
Added the new environment variable MS_ENABLE_LCCL to support the use of the LCCL communication library.
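For example, the variables above can be exported before launching a distributed job (the values shown are illustrative assumptions, not recommended defaults):

```shell
# Illustrative values, in seconds; tune for the cluster.
export MS_TOPO_TIMEOUT=600         # cluster networking phase timeout
export MS_NODE_TIMEOUT=300         # node heartbeat timeout
export MS_RECEIVE_MSG_TIMEOUT=300  # timeout for receiving messages
export MS_ENABLE_LCCL=on           # use the LCCL communication library
```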
Bug Fixes
Contributors
Thanks goes to these wonderful people:
bantao,caifubi,changzherui,chenfei_mindspore,chenweifeng,dairenjie,dingjinshan,fangzehua,fanyi20,fary86,GuoZhibin,hanhuifeng,haozhang,hedongdong,Henry Shi,huandong1,huangbingjian,huoxinyou,jiangchenglin3,jiangshanfeng,jiaorui,jiaxueyu,jxl,kairui_kou,lichen,limingqi107,liuluobin,LLLRT,looop5,luochao60,luojianing,maning202007,NaCN,niyuxin94520,nomindcarry,shiziyang,tanghuikang,TronZhang,TuDouNi,VectorSL,wang_ziqi,wanghenchang,wudawei,XianglongZeng,xiaoxiongzhu,xiaoyao,yanghaoran,Yanzhi_YI,yao_yf,yide12,YijieChen,YingLai Lin,yuchaojie,YuJianfeng,zangqx,zhanghanLeo,ZhangZGC,zhengzuohe,zhouyaqiang0,zichun_ye,zjun,ZPaC,zyli2020,冯一航,李林杰,刘力力,王禹程,俞涵,张栩浩,朱家兴,邹文祥
Contributions of any kind are welcome!
MindSpore 2.3.0-rc1 Release Notes
Major Features and Improvements
DataSet
[STABLE] Support integrity check, encryption and decryption check for MindRecord to protect the integrity and security of user data.
[STABLE] MindRecord API changes: FileWriter.open_and_set_header is deprecated since it has been integrated into FileWriter; if old code reports an error, delete this call. Added type checking for data in FileWriter to ensure that the data type defined by the schema matches the real data type. The return values of all methods under MindRecord are removed and replaced by an exception raised when a processing error occurs.
[STABLE] Support Ascend processing backend for the following transforms: ResizedCrop, HorizontalFlip, VerticalFlip, Perspective, Crop, Pad, GaussianBlur, Affine.
[STABLE] Optimized the content of data processing part in model migration guide, providing more examples to compare with third-party frameworks.
[STABLE] Optimized the parsing efficiency of TFRecordDataset in multiple data columns scenario, improving the parsing performance by 20%.
PIJIT
[BETA] PIJit analyzes and adjusts the Python bytecode and performs graph capture and graph optimization on the execution flow. Supported Python code is executed in static graph mode, and unsupported code is divided into subgraphs and executed in dynamic graph mode, automatically achieving dynamic-static unification. Users can enable PIJit by decorating a function with @jit(mode="PIJit", jit_config={options:value}).
Inference
[DEMO] The integrated large-model training-and-inference architecture unifies scripts, distributed policies, and runtime. The period from training to inference deployment of typical large models is reduced to days. Large fused operators reduce inference latency and effectively improve network throughput.
AutoParallel
[STABLE] Add msrun startup method to launch distributed job with single instruction.
[STABLE] Added a deprecation hint for the RankTable startup method.
[STABLE] Eliminate redundant constants in graph mode to improve compilation performance and memory overhead.
[STABLE] The subgraph scenario optimizer parallelizes the first subgraph inline, allowing some computation and communication masking under pipeline parallelism to be performed.
[STABLE] Communication information export: export model communication information (communication domain, communication volume) during compilation, and input it to the cluster as the basis for communication scheduling.
[STABLE] Pipeline parallel inference is optimized, eliminating shared-weight forwarding between stages and improving execution performance. Supports automatic broadcast of pipeline inference results, improving the usability of autoregressive inference.
[STABLE] Operator-level parallel sharding supports the configuration of the mapping between the device layout and tensor layout during MatMul/Add/LayerNorm/GeLU/BiasAdd operator sharding.
[STABLE] Supports gradient communication and backward calculation overlapping in the data parallel dimension.
[STABLE] Single device simulation compilation, used to simulate the compilation process of a certain device in multi device distributed training, assisting in analyzing the compilation processes and memory usage on the front and back ends.
[STABLE] Implement ops.Tril sharding to reduce the memory and performance requirements on a single device.
[BETA] Supports the fusion between communication operators and computing operators, in order to overlap communication overheads with computation and improve network performance.
[BETA] Load checkpoints and compile graphs in parallel to accelerate fault recovery.
Runtime
[BETA] Support O0/O1/O2 multi-level compilation to improve static graph debugging and tuning capabilities.
FrontEnd
[STABLE] The framework supports the bfloat16 data type. dtype=mindspore.bfloat16 can be specified when a tensor is created.
[STABLE] The syntax support capability of the rewrite component is optimized; syntax such as class variables, functions, and control flow can now be parsed.
[STABLE] New context setting: debug_level. User can use mindspore.set_context(debug_level=mindspore.DEBUG) to get more debug information.
Profiler
[BETA] Dynamically start and stop profiling. Users can collect profiling data in real time according to the training situation, reducing the amount of data collected.
[BETA] Profiling the communication operator time-consuming matrix. Users can find cluster communication performance bottlenecks by analyzing the communication operator time-consuming matrix.
[BETA] Improve the performance of Ascend environment in parsing profiling data.
[BETA] Supports offline analysis of data generated by Profiling. Users can collect data first and then parse the data as needed.
[BETA] Supports collecting performance data of On-Chip Memory, PCIe, and l2_cache to enrich performance analysis indicators.
Dump
[BETA] The statistical information saved by Dump records MD5 values, and users can determine small differences in tensor values through MD5 values.
[BETA] Dump supports the float16 data type and supports users to locate float16 type operator accuracy issues.
PyNative
[STABLE] Reconstruct the single operator calling process for dynamic graphs to improve the performance of dynamic graphs.
Ascend
[BETA] Support setting configuration options of CANN, which are divided into two categories: global and session. Users can configure them through mindspore.set_context(ascend_config={"ge_options": {"global": {"global_option": "option_value"}, "session": {"session_option": "option_value"}}}).
API Change
Add mindspore.hal API to support stream, event, and device management capabilities.
Add mindspore.multiprocessing API to provide the capability of creating multiple processes.
Operators
[BETA] mindspore.ops.TopK now supports the second input k as an int32 type tensor.
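The effect of a top-k selection whose k arrives at runtime (as the int32 tensor input does here) can be sketched in plain Python; this illustrates the semantics only, not the operator implementation:

```python
import heapq

# Sketch of TopK semantics with a runtime k: return the k largest
# values along with their original indices, largest first.
def topk(values, k):
    pairs = heapq.nlargest(k, enumerate(values), key=lambda p: p[1])
    indices = [i for i, _ in pairs]
    top = [v for _, v in pairs]
    return top, indices

vals, idx = topk([3.0, 1.0, 4.0, 1.5], 2)
assert vals == [4.0, 3.0] and idx == [2, 0]
```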
Bug Fixes
[#I92H93] Fixed the issue of 'Launch kernel failed' when using the Print operator to print string objects on the Ascend platform.
[#I8S6LY] Fixed the RuntimeError: Attribute dyn_input_sizes of Default/AddN-op1 is [const vector]{}, of which size is less than 0 error of variable-length input operators, such as AddN or Concat, in the dynamic shape process in graph mode on the Ascend platform.
[#I9ADZS] Fixed the data timeout issue in network training due to inefficient dataset recovery in the fault recovery scenario.
Contributors
Thanks goes to these wonderful people:
AlanCheng511,AlanCheng712,bantao,Bingliang,BJ-WANG,Bokai Li,Brian-K,caifubi,cao1zhg,CaoWenbin,ccsszz,chaiyouheng,changzherui,chenfei_mindspore,chengbin,chengfeng27,chengxb7532,chenjianping,chenkang,chenweifeng,Chong,chuht,chujinjin,Cynthia叶,dairenjie,DavidFFFan,DeshiChen,douzhixing,emmmmtang,Erpim,fangzhou0329,fary86,fengxun,fengyixing,fuhouyu,gaoshuanglong,gaoyong10,GaoZhenlong,gengdongjie,gent1e,Greatpan,GTT,guoqi,guoxiaokang1,GuoZhibin,guozhijian,hangq,hanhuifeng,haozhang,hedongdong,hejianheng,Henry Shi,heyingjiao,HighCloud,Hongxing,huandong1,huangbingjian,HuangLe02,huangxinjing,huangziling,hujiahui8,huoxinyou,jiangchenglin3,jianghui58,jiangshanfeng,jiaorui,jiaxueyu,JichenZhao,jijiarong,jjfeing,JoeyLin,JuiceZ,jxl,kairui_kou,kate,KevinYi,kisnwang,lanzhineng,liangchenghui,LiangZhibo,lianliguang,lichen,ligan,lihao,limingqi107,ling,linqingke,liruyu,liubuyu,liuchao,liuchengji,liujunzhu,liuluobin,liutongtong9,liuzhuoran2333,liyan2022,liyejun,LLLRT,looop5,luochao60,luojianing,luoyang,LV,machenggui,maning202007,Margaret_wangrui,MaZhiming,mengyuanli,MooYeh,moran,Mrtutu,NaCN,nomindcarry,panshaowu,panzhihui,PingqiLi,qinzheng,qiuzhongya,Rice,shaojunsong,Shawny,shenwei41,shenyaxin,shunyuanhan,silver,Songyuanwei,tangdezhi_123,tanghuikang,tan-wei-cheng,TingWang,TronZhang,TuDouNi,VectorSL,WANG Cong,wang_ziqi,wanghenchang,wangpingan,wangshaocong,wangtongyu6,weiyang,WinXPQAQ,wtcheng,wudawei,wujiangming,wujueying,wuweikang,wwwbby,XianglongZeng,xiaosh,xiaotianci,xiaoxin_zhang,xiaoxiongzhu,xiaoyao,XinDu,xingzhongfan,yanghaoran,yangluhang,yangruoqi713,yangzhenzhang,yangzishuo,yanjiaming,Yanzhi_YI,yao_yf,yefeng,yeyunpeng2020,yide12,YijieChen,YingLai 
Lin,YingtongHu,youshu,yuchaojie,YuJianfeng,zangqx,zby,zhaiyukun,zhangdanyang,zhanghaibo,zhanghanLeo,zhangminli,zhangqinghua,zhangyanhui,zhangyifan,zhangyinxia,zhangyongxian,ZhangZGC,zhanzhan,zhaoting,zhengyafei,zhengzuohe,ZhihaoLi,zhouyaqiang0,zhuguodong,zhumingming,zhupuxu,zichun_ye,zjun,zlq2020,ZPaC,zuochuanyong,zyli2020,陈宇,代宇鑫,狄新凯,范吉斌,冯一航,胡彬,宦晓玲,黄勇,康伟,李良灿,李林杰,刘崇鸣,刘力力,刘勇琪,吕浩宇,没有窗户的小巷,王禹程,吴蕴溥,熊攀,徐安越,徐永飞,许哲纶,俞涵,张峻源,张树仁,张王泽,张栩浩,郑裔,周莉莉,周先琪,朱家兴,邹文祥
Contributions of any kind are welcome!