aka: Online Role Play Simulation Design - 4
Previous posts in this series: 1, 2 and 3.
Working almost against the main trend, Fablusi, the role play simulation platform I develop, is NOT "pedagogically neutral". In fact, it embraces one pedagogical approach, namely problem-based, experiential, online role play. Roni and I called it "dynamic goal based" - meaning the game goal within the role play is changing all the time. We spend a lot of time thinking about how our platform can deliver the kind of learning we designed into it, and how effective that may be. One issue which has come up since the LOW-1 meeting in Finland last year is how a "virtual reality" rendered world can be integrated into Fablusi. Before answering that question, I ask: "Does a rendered world enhance the kind of learning that we build into Fablusi?" Here is my answer at this point in time.
Here are several assumptions/beliefs/facts I hold about human nature:
1. The human sensory system is highly selective. Depending on our focus at the time the information arrives, only the information that has our attention is captured. Most, if not all, other information is simply filtered out. (This has been verified by a famous HCI experiment - but I cannot remember the source.)
2. Our internal world model is built bit by bit, accumulated throughout our lives via a lossy memory system. Such accumulated experience has already been filtered by our sensory system. (This is very much the basis of constructivism.)
3. When a model lacks detail, our brain fills in the details as necessary. This is what I call "imagined reality". I can recall how disappointed I was when a character from a novel was played in a television show by one of the most beautiful actresses. Although the actress is beautiful, she cannot compare with the imagined beauty I had created while reading the novel. One interesting thing about the "details" we fill in: many may be based on experiences which escaped our initial "attention" but were somehow retained and resurface. Others are simply made up in the "blink".
How the learning context is created by a Fablusi online role play:
Our online role plays mainly deal with human situations which involve different roles playing different stakeholders under different social relationships. The context of the "game" is abstracted into a number of roles representing the interests of the different stakeholders. Each player gets brief role information describing the situation of the game from the role's point of view. The game starts with the release of the initial kick-start episode. The roles interact in iSpaces (interaction spaces). By giving roles different levels of information, different rights in various iSpaces, and different wealth, we have created a framework that can model most social situations.
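The framework just described - roles carrying a brief, differential information, wealth, and per-iSpace rights - can be pictured as a small data model. The sketch below is purely illustrative: the class and function names are my own, not Fablusi's actual internals.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the role/iSpace model described above.
# All names here are illustrative, not Fablusi's real API.

@dataclass
class ISpace:
    """An asynchronous interaction space, e.g. a corporate meeting room."""
    name: str
    messages: list = field(default_factory=list)  # (role name, text) pairs

@dataclass
class Role:
    """A stakeholder role with differential information, wealth and rights."""
    name: str
    briefing: str                                  # deliberately brief role information
    wealth: int = 0
    rights: dict = field(default_factory=dict)     # iSpace name -> set of rights

    def can(self, right: str, space: ISpace) -> bool:
        return right in self.rights.get(space.name, set())

def post(role: Role, space: ISpace, text: str) -> bool:
    """A move is accepted only if the role holds the 'post' right in that iSpace."""
    if role.can("post", space):
        space.messages.append((role.name, text))
        return True
    return False

# Example: a union rep may post in the meeting room but not in the news room.
meeting = ISpace("Meeting Room")
newsroom = ISpace("News Room")
rep = Role("Union Rep", briefing="You represent the workers.",
           wealth=100, rights={"Meeting Room": {"post", "read"}})

assert post(rep, meeting, "We demand negotiation.") is True
assert post(rep, newsroom, "Press release") is False
```

The point of the differential-rights design is that the social structure of the scenario is encoded in who may say what, where - not in any visual rendering of the spaces.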
The role information is deliberately brief. One reason is to enable the players to embellish the role, filling in some details about it themselves. Obviously, no one can describe a full persona in the limited number of words allowed (by time and by design). However, this creates a springboard for further imagination: players fill in details from their past experience, and hence link the present simulated experience to their existing world model.
For political science simulations, where the stakeholders are real political figures, we do not need to create images of the personas in the game. The players can map their roles onto real-world figures.
For non-political simulations, e.g. commercial or human resource simulations, the players usually create a persona typical of their past experience. The imagined realities among the players are obviously different, but, just as in any other human interaction, we never have a perfect model of our communication partner anyway. In this way, we have created a very realistic simulation, linking players' past experience right into the game - AND that's exactly what we want to achieve.
iSpaces, e.g. corporate meeting rooms, the news room, the UN council, are abstracted into asynchronous interaction spaces. We do not provide extensive graphics to represent the iSpaces either. Players "teleport" from one iSpace to another simply by clicking the appropriate buttons; there is no "walking" between iSpaces.
One obvious suggestion was to use a 3D-rendered world to represent the iSpaces. I reject this for three reasons:
1. Typical interaction in a 3D world is synchronous. This would break some of the pedagogical values we have implemented using the asynchronous mode, not to mention the loss of flexibility for players to play the simulation at any convenient time.
2. The extra rendering detail may shift the players' attention from thinking strategically and tactically about how to deal with the simulated situation to the graphical details, which add no value towards the learning objectives.
3. 3D-world rendering is an additional cost with no obvious benefit.
Another suggestion was to use a 3D world to represent the players' "desktop". Reasons 2 and 3 above apply.
Avatars may be used to represent the other roles in the game. However, we are leveraging the asynchronous nature of our model to give players "breathing space" to do background research, discuss with team members and formulate moves; taking that away would undermine the pedagogical power of the design. Again, when you already hold an imagined model, any rendering may destroy that model - which would not be in line with our design either.
Please note that my exposition here relates only to the value of virtual worlds in online role play. Again, my view of life is "fit for the purpose". Virtually rendered worlds must be good for something; here I am just focusing on online role play simulation.