eISSN: 2093-8462 http://jesk.or.kr
Open Access, Peer-reviewed
Jinki Jung, Hongtae Kim
DOI: 10.5143/JESK.2017.36.5.385, Epub 2017 October 31
Abstract
Objective: The aim of this study is to investigate how to design effective virtual reality-based training (i.e., virtual training) in maritime safety and to present methods for enhancing interface fidelity by employing immersive interaction and 3D user interface (UI) design.
Background: Emerging virtual reality technologies and hardware make it possible to provide immersive experiences to individuals. Prior work also suggests that improving fidelity can improve training efficiency. Such a sense of immersion can be exploited as an element for realizing effective training in virtual space.
Method: For immersive interaction, we implemented gesture-based interaction using Leap Motion and Myo armband sensors. Hand gestures captured by both sensors are used to interact with virtual appliances in the scenario. The proposed 3D UI design visualizes information appropriate to the tasks in training.
Results: A usability study was carried out to evaluate the effectiveness of the proposed method. In measures of satisfaction, UI intuitiveness, ease of procedure learning, and equipment understanding, virtual training outperformed the existing training. These improvements were also independent of the type of input device used for virtual training.
Conclusion: Experiments show that the proposed interaction design yields more efficient interaction than the existing training method. Improving interface fidelity through intuitive input devices and immediate feedback on training information increases both user satisfaction with the system and training efficiency.
Application: The proposed design methods for an effective virtual training system can be applied to other areas in which trainees must perform sophisticated tasks with their hands.
Keywords
Virtual reality, Virtual training, Training effectiveness, Maritime safety
The scope of virtual reality (VR) users has increased enormously with the popularization of head-mounted displays (HMDs) and input devices and the expansion of the VR content market. Several platforms have shown potential as next-generation mobile platforms by implementing VR through the combination of smartphones and external devices. Owing to the development of diverse platforms and related technologies, VR has expanded beyond games into travel, medical services, social networking, and broadcasting.
VR-based training (i.e., virtual training) has been used in the military, in medical services, and in disaster/accident response (Chellali et al., 2016; Sharma et al., 2014; Chittaro and Buttussi, 2015). Virtual training provides a way to improve skills through human-computer interaction (HCI) and through visual and physical simulations that go beyond existing one-way simulation technology. Xu et al. (2014) reported the clear strengths of virtual training compared with existing training modes: safety, scenario controllability, ease of repeated training, reduced space and time costs, and reusability. These strengths become more pronounced as the performance of input/output devices and gesture recognition technology improves, and training efficiency can be maximized under safer conditions.
This study proposes a method for more efficient training in the implementation of virtual training. The property most closely tied to immersion and training efficiency in virtual training is fidelity. Fidelity, as defined by Waller et al. (1998), is the degree of similarity between what is sensed in the virtual environment and what is sensed in the real environment. This study aims to improve fidelity through interaction and interface design and to comparatively analyze the effects. The existing training used as the comparison baseline is lecture- and teaching-material-based training, which most educational institutions adopt because it ensures safety at low implementation cost. The comparison thus reveals the strengths and weaknesses of virtual training relative to existing training.
The theoretical background for training efficiency in this study is the fidelity theory of Waller et al. (1998). Waller classified fidelity into two aspects: environment fidelity, the similarity between the real environment and the virtual (training) environment, and interface fidelity, which shapes a user's mental model through interactions in the virtual environment. With respect to interface fidelity, the objective of this study is to investigate training efficiency as a function of the virtual training interface.
As the training scenario for validating training efficiency, we selected lifeboat launching operation from the maritime safety training field. Lifeboat launching involves far more equipment to remember than other trainings (46 items), and a 10-step launching sequence must be understood. Because of this complexity, it was chosen as the training task for measuring training efficiency.
1.1 Fidelity and training effectiveness
Because many factors go into implementing virtual reality and each is associated with user experience, fidelity has been analyzed and studied from various angles. Hough et al. (2015) studied how realistic bimanual interactions appear to an observer. They concluded that placing a virtual object between both hands improves an observer's perceived realism far more sharply than improving hand gesture recognition does.
Ragan et al. (2015) experimented on training efficiency as a function of field of view (FOV) and visual complexity, two visual factors of VR. They related FOV and rendering complexity to human visual factors and constructed artificial visual fidelity conditions. For the visual scanning task set as the target training, they concluded that training efficiency is higher when FOV is larger and visual complexity is lower. The latter finding suggests that judgement during training improves as the visual information a human receives becomes simpler and more obvious, even if that information departs considerably from reality. In their experiment, both training and evaluation were carried out in virtual space. Ragan noted that training efficiency is highest when the fidelity of training and of evaluation match, regardless of whether fidelity is high or low, and therefore cautioned against using virtual training alone as an evaluation of real training efficiency.
Carlson et al. (2015) experimented on memory retention in virtual training of factory assembly work, comparing physically based training with virtual training using a haptic device. Immediately after training, the physical training group assembled faster, but two weeks later the virtual training group's assembly time had decreased more than the control group's. They also concluded that colors have a positive effect on memory retention.
McMahan et al. (2012) examined the effects of display and interaction fidelity on an FPS game in a CAVE system. The results showed that display and interaction fidelity have significant effects on a game's usability, engagement, presence, and performance; improving display fidelity produced positive user evaluations.
We pursued four systematic objectives in implementing efficient virtual training: maximizing environment fidelity, maximizing scenario fidelity, immersive interaction, and intuitive feedback (GUI). This chapter explains each implementation factor, with more detail on immersive interaction, which is directly connected with interface fidelity, and on intuitive feedback.
For environment fidelity, we modeled the details of real training spaces in 3D at a photorealistic level and deployed the required interaction elements, thereby maximizing environment fidelity. Scenario fidelity denotes the agreement between the training scenario and the real program. We implemented a one-to-one mapping between each sub-objective of training and a task class in the program. A task is a component composed internally of primitive elements, and a scenario is implemented as a collection of tasks. In other words, the component design maps one-to-one onto the training scenario. This method not only maximizes scenario fidelity but also allows the existing training to be varied through the reuse of components, as sketched below.
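The following is a minimal, hypothetical sketch (in Python; not the authors' actual implementation) of this composition: a Scenario is an ordered collection of Task components, each built from Primitive elements, so tasks can be reused across scenarios. All class, method, and task names are illustrative.

```python
# Sketch of the scenario = tasks = primitives composition described above.
# Names are hypothetical; the paper does not specify an implementation language.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Primitive:
    """Smallest unit of work, e.g., 'grab hook' or 'pull lever'."""
    name: str

    def execute(self) -> None:
        print(f"  primitive: {self.name}")


@dataclass
class Task:
    """One sub-objective of the training, composed of primitives (1:1 mapping)."""
    name: str
    primitives: List[Primitive] = field(default_factory=list)

    def run(self) -> None:
        print(f"task: {self.name}")
        for p in self.primitives:
            p.execute()


@dataclass
class Scenario:
    """A training scenario is an ordered collection of reusable tasks."""
    name: str
    tasks: List[Task] = field(default_factory=list)

    def run(self) -> None:
        for t in self.tasks:
            t.run()


# Tasks are reusable components: the same Task can appear in several scenarios.
secure_hook = Task("Secure the lifting hook",
                   [Primitive("grab hook"), Primitive("lock latch")])
launch = Scenario("Lifeboat launching (excerpt)", [secure_hook])
launch.run()
```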
2.1 Immersive interaction design
Immersive interaction in virtual training is defined as using the gestures that occur in real training as user input. Scoville and Milner (1957) identified that training based on specific motions affects long-term memory of spaces and activities. In an emergency, where sound judgment must be exercised within a short time, it is essential to strengthen long-term memory of motions. Lim et al. (2012) likewise noted the importance of reflecting motions in training. What matters in reflecting such motions is the type and accuracy of the input sensor and, for systematic use, scalability across input sensors.
This study proposes an interaction design that increases interface fidelity. To this end, the most basic unit of gesture is defined as a class; an input mapper class suited to each sensor is then implemented by inheriting from it. We defined four primitive gestures: next, pause, cancel, and hold. "Next" advances to the next task; "pause" stops a task so the current status can be viewed; "cancel" aborts all motions and returns to the previous status; "hold" is used for interaction with an object.
We implemented keyboard/mouse, joypad, and wearable hand sensor classes according to the four functions above, so that the selection of input occurs at the application level rather than the implementation level. An example of how the interaction changes with the input device: in the keyboard/mouse mode, the hold function is performed with a mouse drag-and-drop gesture, whereas in the wearable hand sensor mode it is performed through electromyographic recognition of a clenched fist (Figure 1). A sketch of this mapper design follows.
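Below is a minimal sketch, with hypothetical class and event names, of how such an inheritance-based mapping could look: each device class translates its own raw events into the same four primitive gestures, so swapping devices becomes an application-level choice.

```python
# Sketch of the inheritance-based input mapping described above.
# Event names ("mouse_drag", "emg_pose", etc.) are hypothetical placeholders.
from abc import ABC, abstractmethod
from enum import Enum, auto
from typing import Optional


class Gesture(Enum):
    NEXT = auto()
    PAUSE = auto()
    CANCEL = auto()
    HOLD = auto()


class InputMapper(ABC):
    """Base class: maps raw device events to the four primitive gestures."""

    @abstractmethod
    def poll(self, raw_event: dict) -> Optional[Gesture]:
        ...


class KeyboardMouseMapper(InputMapper):
    def poll(self, raw_event: dict) -> Optional[Gesture]:
        # Hold is a mouse drag-and-drop; the rest are key presses.
        if raw_event.get("mouse_drag"):
            return Gesture.HOLD
        return {"space": Gesture.NEXT, "p": Gesture.PAUSE,
                "esc": Gesture.CANCEL}.get(raw_event.get("key"))


class WearableHandMapper(InputMapper):
    def poll(self, raw_event: dict) -> Optional[Gesture]:
        # Hold is EMG recognition of a clenched fist (e.g., from a Myo armband).
        if raw_event.get("emg_pose") == "fist":
            return Gesture.HOLD
        return {"swipe_right": Gesture.NEXT, "swipe_left": Gesture.CANCEL,
                "open_palm": Gesture.PAUSE}.get(raw_event.get("hand_pose"))


# The application selects a mapper at runtime; training logic sees only Gesture.
mapper: InputMapper = WearableHandMapper()
print(mapper.poll({"emg_pose": "fist"}))  # Gesture.HOLD
```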
2.2 3D user interface design
The most important parts of the training are guidance on the behavior the trainee should take and an immediate description of what interaction to perform with an object. If such guidance is not properly provided, an unskilled trainee is likely to be left unaware of the task at hand. This visual guidance is purely virtual, and the question is how to realize elements of the actual training with a virtual interface. Rose et al. (2000) noted the importance of visual feedback in virtual training. This chapter describes a 3D UI design offering visual feedback in 3D space.
If a graphical user interface (GUI) in VR is implemented in a 2D coordinate system that ignores binocular disparity, it can reduce immersion and cause motion sickness. For this reason, the GUI in VR is implemented as an element in 3D space. This study defines the GUI in terms of three elements: highlight, 3D directional vector, and billboard. A highlight is a visual overlay on the object the user should interact with in a task. Its immediate feedback has the merit of setting a clear goal for the user, but because it is an effect that cannot be realized in real training, it entails some reduction of fidelity. A 3D directional vector, like a highlight, reminds the user of a clear goal by visualizing the direction in which the user needs to move at the current moment. A billboard is a basic element visualizing text and figure information, defined as a 3D object; it plays the role of the explanations and guides for training items in the existing training. Each element is illustrated in Figure 2.
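As one concrete illustration of the billboard element, the sketch below (engine-agnostic Python with NumPy; the function and positions are hypothetical, not the authors' code) computes a rotation that keeps a billboard facing the camera each frame so its text remains readable from the trainee's viewpoint.

```python
# Sketch: orient a 3D billboard so its forward axis points at the camera.
import numpy as np


def billboard_rotation(billboard_pos: np.ndarray, camera_pos: np.ndarray,
                       world_up: np.ndarray = np.array([0.0, 1.0, 0.0])) -> np.ndarray:
    """Return a 3x3 rotation matrix whose columns are the billboard's
    right, up, and forward axes, with forward facing the camera.
    (Degenerate if the view direction is parallel to world_up.)"""
    forward = camera_pos - billboard_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(world_up, forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)  # re-orthogonalized up vector
    return np.column_stack((right, up, forward))


# Example: a billboard near the lifeboat winch, camera at the trainee's head.
R = billboard_rotation(np.array([2.0, 1.5, 0.0]), np.array([0.0, 1.7, -3.0]))
print(R)  # apply R as the billboard's world rotation each frame
```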
The purpose of the experiment is to validate the training efficiency of virtual training with the proposed design applied. Training efficiency as felt by the trainee was measured in terms of ease of procedure learning and usefulness for equipment understanding. The virtual training interaction conditions were monitor with keyboard/mouse, HMD with joypad, and HMD with wearable hand sensor. The hardware was a desktop with a 3.40GHz Intel dual-core CPU and 8GB of memory; an Oculus DK2 served as the HMD, and a Leap Motion and a Myo armband served as the wearable hand sensors. The Leap Motion was used to visualize the user's hands, and the Myo armband for hand motion recognition. Haptic feedback was not considered in this experiment.
The experiment had 64 participants. The gender balance was uneven: 63 males and 1 female. All participants received one session of lifeboat launching instruction. The participants' mean age was 28.4 years (SD = 3.039). Participants received the same compensation for taking part. According to a pre-questionnaire survey, three participants had prior experience with VR devices.
The virtual training experiment on lifeboat launching proceeded as follows. The purpose and procedures of the experiment were explained to the participants, and their consent was obtained. Each participant was randomly assigned by draw to one of four experimental groups, and a participation schedule was chosen accordingly. A pre-questionnaire on the experimental factors was administered before full participation. Participants who completed the questionnaire then trained for 15 minutes according to their assigned group and filled out a post-questionnaire after the training.
Figures 3-6 show the experiment results. All 64 subjects completed the questionnaire, with the same number of trainees in each of the four groups, i.e., 16 per group. The first result was satisfaction with training, measured on a 5-point Likert scale (1: Fully unsatisfactory; 5: Fully satisfactory). Mean satisfaction for the virtual training conditions, monitor/keyboard (m=3.93, σ=1.06), HMD/joypad (m=4.00, σ=0.81), and HMD/hand (m=4.00, σ=1.09), was higher than for the existing training (m=3.56, σ=0.51). The ANOVA result was F(3, 60) = 0.671, p=.573, so the null hypothesis was retained.
On the 5-point Likert scale (1: Strong objection; 5: Strong consent) for the intuitiveness of the input mode, the virtual training conditions, monitor/keyboard (4.06, σ=1.18), HMD/joypad (4.00, σ=1.03), and HMD/hand (4.12, σ=1.08), scored higher than the existing training (3.12, σ=0.71). The ANOVA result was F(3, 60) = 3.418, p=.023, so the null hypothesis was rejected. Table 1 shows the pairwise post-hoc t-test results with Bonferroni correction for the intuitiveness scores. At the criterion α=.15, LEC/MAT vs. MON/KEY, LEC/MAT vs. HMD/PAD, and LEC/MAT vs. HMD/WEA fell below the criterion (p = .070, .110, and .044, respectively).
On the 5-point Likert scale (1: Strong objection; 5: Strong consent) for ease of procedure learning, the existing training scored 3.12 (σ=0.80), monitor/keyboard 4.00 (σ=0.96), HMD/joypad 3.87 (σ=1.13), and HMD/hand 3.93 (σ=0.77). The ANOVA result was F(3, 60) = 2.926, p=.041, so the null hypothesis was rejected. Table 2 shows the pairwise post-hoc t-test results with Bonferroni correction; LEC/MAT vs. MON/KEY and LEC/MAT vs. HMD/WEA fell below the criterion α=.15 (p = .061 and .099, respectively).
On the 5-point Likert scale (1: Strong objection; 5: Strong consent) for usefulness in equipment understanding, the existing training scored 2.50 (σ=0.61), monitor/keyboard 3.43 (σ=0.83), HMD/joypad 3.43 (σ=1.12), and HMD/hand 3.56 (σ=0.91). The ANOVA result was F(3, 60) = 4.686, p=.005, so the null hypothesis was rejected. Table 3 shows the pairwise post-hoc t-test results with Bonferroni correction for the equipment understanding scores; LEC/MAT vs. MON/KEY, LEC/MAT vs. HMD/PAD, and LEC/MAT vs. HMD/WEA fell below the α=.15 criterion (p = .030, .030, and .009, respectively).
Table 1. Bonferroni-corrected pairwise p-values for intuitiveness of the input mode

|         | LEC/MAT | MON/KEY | HMD/PAD |
|---------|---------|---------|---------|
| MON/KEY | 0.070   | -       | -       |
| HMD/PAD | 0.110   | 1.000   | -       |
| HMD/WEA | 0.044   | 1.000   | 1.000   |

Table 2. Bonferroni-corrected pairwise p-values for ease of procedure learning

|         | LEC/MAT | MON/KEY | HMD/PAD |
|---------|---------|---------|---------|
| MON/KEY | 0.061   | -       | -       |
| HMD/PAD | 0.558   | 1.000   | -       |
| HMD/WEA | 0.099   | 1.000   | 1.000   |

Table 3. Bonferroni-corrected pairwise p-values for usefulness in equipment understanding

|         | LEC/MAT | MON/KEY | HMD/PAD |
|---------|---------|---------|---------|
| MON/KEY | 0.030   | -       | -       |
| HMD/PAD | 0.030   | 1.000   | -       |
| HMD/WEA | 0.009   | 1.000   | 1.000   |
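For reference, the analysis pipeline reported above (one-way ANOVA followed by Bonferroni-corrected pairwise t-tests) can be sketched as follows with SciPy. The group scores below are hypothetical placeholders, since the study's raw questionnaire data are not given.

```python
# Sketch of the reported statistical procedure; data are illustrative only.
from itertools import combinations
from scipy import stats

groups = {
    "LEC/MAT": [3, 3, 2, 4, 3, 3, 4, 3, 3, 2, 4, 3, 3, 3, 4, 3],
    "MON/KEY": [4, 5, 3, 4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 4, 5, 4],
    "HMD/PAD": [4, 4, 3, 5, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 4],
    "HMD/WEA": [5, 4, 4, 5, 4, 3, 5, 4, 4, 5, 4, 4, 5, 3, 4, 4],
}

# Omnibus test across the four groups (reject the null hypothesis if p < .05).
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F(3, 60) = {f_stat:.3f}, p = {p_value:.3f}")

# Post-hoc: pairwise t-tests, multiplying each p-value by the number of
# comparisons (Bonferroni) and capping at 1.000, as in Tables 1-3.
n_comparisons = len(list(combinations(groups, 2)))
for a, b in combinations(groups, 2):
    t_stat, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: corrected p = {min(p * n_comparisons, 1.0):.3f}")
```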
Based on these results, virtual training with the proposed interaction mode and UI did not differ significantly from the existing training in satisfaction, but it did differ significantly in the usability measures of intuitiveness, ease of procedure learning, and equipment understanding. In the post-hoc tests on these significant differences, no significant differences appeared between the virtual training groups, whereas significant differences appeared between the existing training and virtual training (except for the LEC/MAT vs. HMD/PAD comparison on ease of procedure learning).
This study proposed an efficient virtual training design and implemented it in a lifeboat launching operation system. In usability tests on that system, the virtual training applying the proposed design differed significantly from the existing lecture-based training in the usability of the input mode's intuitiveness, ease of procedure learning, and equipment understanding, and it also obtained higher mean scores than the existing training method.
We therefore conclude that the fidelity improved through the proposed virtual training design (environment, scenario, and interface fidelity) yields better usability for trainees than the existing training.
In future work, we plan to implement more accurate and detailed motion/gesture recognition through the fusion of hand input sensors and to run tests that identify which existing training items virtual training can make more efficient. We will also develop a general-purpose platform, combined with a mobile platform, through which virtual training can be offered anywhere.
References
1. Carlson, P., Peters, A., Gilbert, S.B., Vance, J.M. and Luse, A., Virtual Training: Learning Transfer of Assembly Tasks. IEEE Transactions on Visualization and Computer Graphics, 21(6), 770-782, 2015.
2. Chellali, A., Mentis, H., Miller, A., Ahn, W., Arikatla, V.S., Sankaranarayanan, G., De, S., Schwaitzberg, S.D. and Cao, C.G.L., Achieving interface and environment fidelity in the Virtual Basic Laparoscopic Surgical Trainer. International Journal of Human Computer Studies, 96, 22-37, 2016.
3. Chittaro, L. and Buttussi, F., Assessing knowledge retention of an immersive serious game vs. A traditional education method in aviation safety. IEEE Transactions on Visualization and Computer Graphics, 21(4), 529-538, 2015.
4. Hough, G., Williams, I. and Athwal, C., Fidelity and plausibility of bimanual interaction in mixed reality. IEEE Transactions on Visualization and Computer Graphics, 21(12), 1377-1389, 2015.
5. Lim, C.J., Lee, N., Jeong, Y. and Heo, S., Gesture based Natural User Interface for e-Training. Journal of the Ergonomics Society of Korea, 31(4), 577-583, 2012.
6. McMahan, R.P., Bowman, D.A., Zielinski, D.J. and Brady, R.B., Evaluating display fidelity and interaction fidelity in a virtual reality game. IEEE Transactions on Visualization and Computer Graphics, 18(4), 626-633, 2012.
7. Ragan, E.D., Bowman, D.A., Kopper, R., Stinson, C., Scerbo, S. and Mcmahan, R.P., Effects of Field of View and Visual Complexity on Virtual Reality Training Effectiveness for a Visual Scanning Task. IEEE Transactions on Visualization and Computer Graphics, 21(7), 794-807, 2015.
8. Rose, F.D., Attree, E.A., Brooks, B.M., Parslow, D.M. and Penn, P.R., Training in virtual environments: transfer to real world tasks and equivalence to real task training. Ergonomics, 43(4), 494-511, 2000.
9. Scoville, W.B. and Milner, B., Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery, and Psychiatry. 20(1), 11-21, 1957.
10. Sharma, S., Jerripothula, S., Mackey, S. and Soumare, O., Immersive virtual reality environment of a subway evacuation on a cloud for disaster preparedness and response training. 2014 IEEE Symposium on Computational Intelligence for Human-like Intelligence (CIHLI), 1-6, 2014.
11. Waller, D., Hunt, E. and Knapp, D., The Transfer of Spatial Knowledge in Virtual Environment Training. Presence: Teleoperators and Virtual Environments, 7(2), 129-143, 1998.
12. Xu, Z., Lu, X.Z., Guan, H., Chen, C. and Ren, A.Z., A virtual reality based fire training simulator with smoke hazard assessment capacity. Advances in Engineering Software, 68, 1-8, 2014.