21st Century Education System

Preparing for the 21st century education system.

Sunday, January 31, 2010

The Interviewer as a Superhero

The interviewer must be very well trained in order to note the truth, the whole truth and nothing but the truth.

In order to note the whole truth - everything about the participant's attitudes, feelings, expectations etc - the interviewer should notice everything the participant says. Also important is how they say it - voice inflections and pauses, and what they avoid saying. Beyond these voice and sound effects, the interviewer should notice all body language cues the interviewee manifests. Moreover, it is not enough for the interviewer to notice all this information, but they also need to take accurate notes of it. The interviewer should then accurately interpret all the information that needs interpretation, such as non-verbal communication: Wincing, folding arms, palms open/closed, leaning forward/backward, looking left/right and up/down, closing the eyes, making/avoiding eye contact, feet on ground stable/on-edge, fidgeting, playing with rings/hair, touching their own face - nose, eyes, mouth, ear, scratching here and there, grooming themselves, etc.

In order to note nothing but the truth, the interviewer must somehow see everything with objective eyes, not putting in any judgments, expectations, preferences or even benign thinking habits. The interviewer mustn't paraphrase, omit or add anything to what the participant says. This means that the field notes the interviewer takes must be complete. Otherwise the interviewer would have to rely on memory, which almost certainly will introduce unconscious selectivity and self-editing, compromising both "the whole truth" and "nothing but the truth". Even with my limited understanding of human beings, I can safely say that this is already beyond human capabilities.

The perfect interviewer should not only observe and note every aspect of the reactions of the participant, the interviewer should sometimes respond to the participant's behavior, without reacting to it. This means that the interviewer must consciously notice the participant’s feelings and attitudes as they are expressed by the participant’s verbal and non-verbal behavior. The interviewer then must mindfully decide how to behave towards the participant in such a way that will benefit the interview, and then actually act that way. This is in contrast to “reaction” which would be unconscious and unmindful, like being defensive if the participant is critical of something in the interview.

But that is not all. The well trained interviewer must avoid influencing the interviewee: neither by choice of words in the questions or between the questions, nor by voice inflections and pauses, nor by body language. So the interviewer should be constantly aware and in control of what cues they are transmitting towards the participant. These cues may affect the interviewee, and we don't want that. The interviewer must appear impartial: Not to look as if they approve or disapprove of the interviewee's responses. The interviewer must create and maintain the right amount of rapport: Enough to ensure the participant participates willingly, but not enough to make the participant change their responses in order to indulge the interviewer.

Of course the interviewer must not allow his or her emotions and interests to influence the interview. In the few times I was interviewed - usually on the telephone - I wasn't terribly impressed with the interviewers' ability to lend themselves completely to the interview. Most often, their agenda of completing as many interviews as possible in a short time affected their voice (impatience) and their effort to get me to find an answer quickly, regardless of whether the answer reflected the opinion or attitude that was supposedly being studied.

An excellent interviewer must be no less than a very good actor. Many of the demands listed here are the same as what is required of a good psychoanalyst. Yet, the interviewers are normally much less trained than actors and psychoanalysts. So, to be really good at it, the interviewer should be quite a bit beyond what's humanly possible. To mitigate our human failings, we can use technology. It is often recommended that the interview be recorded and then reviewed, to make sure the interviewer wrote down everything. It is often also recommended that the recordings be transcribed. There are different levels of using recordings:

If no recording is done and reviewed, the results reported by the interviewer depend on: Their notes, which are almost-by-definition paraphrased and not accurate; their memory, which is incomplete and self-edited according to the interviewer's views. The result is ultimately very inaccurate.

If an audio recording is done and transcribed: The transcript may be rephrased or incomplete in terms of the actual text; All voice inflections, pauses, mumbling etc. are lost, and they may have contained interesting information. The interviewer may complete these from memory, but then we are back to incomplete and edited memory.

If an audio recording is done and reviewed: All body-language information is lost. The interviewer may complete these from memory, but then we are back to incomplete and edited memory.

If a video recording is done at standard frame-rate, and reviewed: As long as the video angles are comprehensive enough to capture all the participant's movements, many body language cues can be analyzed, which is great. Micro expressions, which are very fleeting, will still be lost.

If a high-speed video recording is done from many angles, in both visible and infra-red wavelength, and then reviewed repeatedly by a team of experts: This would be really nice, but it is beyond reasonable expectations for most studies in the next few decades.

In the context of FIRE: What can the Facilitation Institute for Research in Education do to make researchers' lives easier when it comes to finding and managing superheroes? FIRE can consider providing, for example:

  • Excellent interviewers for research that genuinely requires interviews
  • Training for interviewers
  • Assessment of the quality of interviewers
  • Audio and video recording and transcription equipment
  • Access and training in the use of tagging and analysis software for video and audio recordings

Saturday, January 30, 2010

Data Collection - Interview

To do research about any aspect of the world, we need data about the world. There are many different ways to collect data for education research. One of them has to do with interviewing research participants, and this data collection method is the focus here. More specifically, I will look at some scientific weaknesses inherent to interviews, and how they can be minimized.

The weak link in the rigor of any scientific research is the human factor. An interview explicitly relies on that weak link - the interviewer. On the other hand, education research has to do with humans, so one cannot escape being heavily involved with that weak link. Even if we eliminate the weak-link effects of the researchers, we are left with the weak-link effects of the participants. For example, if we use a perfect questionnaire to collect data, the participants who fill out the questionnaire are still fallible humans.

It may be that an excellent interviewer can produce more accurate and deeper results than a self-report such as a questionnaire. But as in all professions, excellent practitioners are not common. And the slightly-less-than-excellent interviewer, even if they are pretty good, is likely to miss some cues from the participant, to behave in a way that affects the participant, to occasionally misinterpret what the participant says, and in general to become part of the entity creating the information rather than remaining an objective tool for collecting information. The theoretical excellent interviewer is transparent, while the real-life almost-excellent interviewer is opaque. The qualities of an excellent interviewer are discussed separately. Here I concentrate on what's inherent to the interview, and lies outside the interviewer's control.

The more qualitative the interview is - open questions, free-form follow up questions - the more the interviewer is active in the interaction, and the more the interviewer affects the results. However, if there is a need for open questions - for example when trying to check what impressions the participants have of a certain learning situation - a questionnaire may miss the point and there is no way to avoid using a human interviewer. Serious effort should be spent to design research in such a way that it requires a minimal number of open questions. This way, the level of involvement of the interviewer can be minimized and the penalties in terms of objectivity can be minimized. A second line of defense is having some open questions administered by an interviewer, but only in a preliminary part of the research, aimed at generating more accurate questions to be pursued in later parts of the research, using more objective methods. A third line of defense, when complicated open questions can't be avoided, is to include - preferably close to the end of the interview - a few questions designed to check the level and direction of dependence created between the interviewer and participant. These extra questions should not be apparent to the participant. It would be great if they are also not apparent to the interviewer, but if the interviewer is an excellent one - they will know. Some examples of such questions are "Did you enjoy the interview?" or "Was the interview difficult?" or "Were there accurate enough options for the closed questions?" or "Do you feel the questions and answers in the interview capture what you wanted to say?". If the average response for a certain interviewer differs from the average for the one excellent interviewer in the research - maybe there was too much of that interviewer in the interview.
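That last check can be sketched in code. This is a minimal illustration, not a method from the post: the interviewer names, the 1-5 response scale and the flagging threshold are all invented assumptions, chosen just to show the comparison of each interviewer's average rapport-check score against the one benchmark (excellent) interviewer.

```python
# Hypothetical sketch: flag interviewers whose end-of-interview rapport-check
# averages deviate too much from the benchmark (excellent) interviewer.
from statistics import mean

# Responses to the rapport-check questions on an assumed 1-5 scale,
# grouped by interviewer. All names and numbers are invented.
rapport_scores = {
    "benchmark": [3, 4, 3, 4, 3],    # the one excellent interviewer
    "assistant_a": [5, 5, 4, 5, 5],
    "assistant_b": [3, 3, 4, 3, 4],
}

FLAG_THRESHOLD = 1.0  # assumed: a full scale point of deviation is suspicious

benchmark_avg = mean(rapport_scores["benchmark"])
for interviewer, scores in rapport_scores.items():
    if interviewer == "benchmark":
        continue
    deviation = mean(scores) - benchmark_avg
    flagged = abs(deviation) >= FLAG_THRESHOLD
    print(f"{interviewer}: deviation {deviation:+.2f}, flagged={flagged}")
```

With the invented numbers above, assistant_a drifts well above the benchmark (perhaps too much rapport) while assistant_b tracks it closely; in a real study the threshold would need justification, and a proper statistical test would replace the simple cutoff.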

A general tool that can help with pushing interviews towards the objective end of the scale is recording the interview and reviewing the recording after the interview. Since recordings can improve the results an interviewer gets from an interview, some detailed considerations about recordings are discussed in the context of the interviewer as a superhero.

Beyond the factors the excellent interviewer can control in an interview situation, the participant is also affected by the interviewer in ways that are outside the control of the interviewer. The interviewee may have judgments regarding how the interviewer looks, sounds, and smells - whether positive, negative or otherwise. The interviewer might remind the interviewee of someone or something that the interviewee liked or disliked: Maybe someone the interviewee is inclined to appease or to confront. A phone interview removes the interviewer a bit from the participant, therefore preventing some of the ways the interviewer may affect the participant: body-language, look, etc. It still leaves many ways the interviewer affects the participant: Voice inflections, pauses, choice of words in between questions (those words not prescribed by the interview protocol), etc. On the other hand, in a phone interview the interviewer misses the interviewee's body language information.

A hypothesis: The more open and in-depth the interviews required for a study, the more the main researcher tends to conduct the interviews personally, rather than using research assistants as interviewers. If this is true, it may mean the researchers themselves don't believe in the ability of another interviewer to reach the same raw data as the researcher can. This would mean that the researchers themselves believe that the answers in the interview have a lot to do with the interviewer rather than relating purely to the participants. This in turn would indicate that such an interview's meaning depends on the specific researcher conducting it. If you don't happen to be that researcher, the interview is of very limited meaning for you.

In case this blog entry appears very negative: The fact that such weaknesses exist doesn't mean interviews shouldn't be used. It does mean they should be used with care, and whenever possible, safer methods should be used.

Friday, January 29, 2010

It's Complicated

The field of education is a complicated field for research. There are many variables that affect the success of a student. For example, thinking about a particular class the student takes, such variables may include: The student's ability in the particular subject being studied, the student's prior knowledge, the student's expectation about their ability, the student's attitude towards the subject, the number of hours the student slept the night before, the student's health that day, whether the student ate that morning or before class, the teacher's ability in the particular subject, the teacher's expectations for the particular student's ability to learn in general or the subject in particular, the teacher's expectations for their own ability to teach the subject, the teacher's expectations of the class in general, the amounts of light, noise, oxygen, CO2 and scents in the class, the class's attitude towards the subject, the teacher and the student, the time of day, the class before the current class, the class scheduled after the current class, the time of year, the quality of the class materials, whether the student brought the class materials to class, etc. etc. etc.
Many of these variables interact with each other. Many are not well understood. Many cannot be easily determined. Many more variables are not known at all.

All this may be further complicated by adding the possible presence of a researcher or research aids such as a video camera. These are likely to affect the behavior of the teacher, the class and the particular student. Also, there are the additional limitations of privacy concerns, ethics, legal issues, lack of funds for research, academic pressures (just publish), researchers own pressures (just complete the degree), and probably more. And then there is the small issue of actually conducting a well planned, validated, executed, analyzed and reviewed research.

These, and similar issues, are often alluded to as reasons why education research is by its nature "softer" than some other sciences. (Notice how "soft" the statements here are?) Since it is a soft science - or knowledge domain - it doesn't need to answer to the demands of the exact sciences. Most specifically, conducting a study that is not repeatable and coming up with explanations that are not falsifiable still counts as research.

But,

How special are these characteristics of education research?

Let's consider the number of variables: In biochemistry, just looking at the issue of metabolism as it is described in Wikipedia illustrates the complexity. More specifically, looking at the metabolic network of a bacterium may give us a hint about the complexity of higher organisms:

... and yet, we study human metabolism, conduct repeatable experiments, come up with refutable theories - and often do indeed refute them - and keep building our knowledge. So the complexity does not preclude rigorous scientific handling. Even in exact sciences such as physics we have annoying interplays between variables that are hard to pin down, such as the Three Body Problem, where we don't have a way to determine the behavior of three masses (e.g., stars) moving and attracting each other. And yet, we expect physics research to be very exact, and for the most part physics research lives up to the expectations. The complexity and incomplete data do make a researcher's life more difficult and interesting, but that's not such a bad thing.

Then there is the problem that the subjects of education research - teachers, students, parents, administrators, etc - are often conscious of the research, and that fact affects their behavior. A researcher in a classroom, a questionnaire, a video camera and other players and tools all end up affecting the very behavior being researched. Thinking about the aspect of the Observer Effect concerned with the observed changing behavior as a result of being observed, one can see this is a significant issue in education research. But remembering again that other sciences have to deal with similar issues, such as the Observer Effect in physics, and seeing physicists courageously dealing with it without giving up the rigor of their research, can inspire us to aim that high in education research, too.

Another aspect of the observer/observee interaction is that of Observer-expectancy effect, which focuses on how the researcher's behavior affects the result of the research. This is a real issue that needs to be addressed. Part of the solution is minimizing the direct interaction between researcher and participants, when possible. Another part of the solution is to use blind and double blind experiment methods. Yet another part is to train researchers and research assistants very well indeed. There are probably more ways I am not aware of right now. The remainder of the problem is too small to justify much tolerance for weak research methods.
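The blinding idea mentioned above can be sketched concretely. This is an illustrative assumption, not a protocol from the post: participant IDs, condition names and the session-code scheme are invented. The point is that the interviewer receives only opaque session codes, while the condition assignment stays sealed with a third party until analysis, so the interviewer's expectations cannot leak into the interaction.

```python
# Hypothetical sketch of a blinded assignment: the interviewer never sees
# which condition a participant is in, only an opaque session code.
import random

participants = ["p01", "p02", "p03", "p04", "p05", "p06"]
conditions = ["new_method", "control"]

rng = random.Random(42)  # fixed seed so the assignment is reproducible

# The sealed mapping, held by a third party until analysis time.
sealed_assignment = {p: rng.choice(conditions) for p in participants}

# What the interviewer receives: session codes with no condition information.
interviewer_schedule = [f"session-{i:02d}" for i in range(1, len(participants) + 1)]

print(interviewer_schedule)
```

In a double-blind design the participant would also not know their condition; in practice a simple coin flip per participant can leave the groups unbalanced, so real studies typically use block randomization instead.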

Creating a working model of the activity of teaching and learning is a problematic endeavor. Again there are the variables described above, and to make things worse, we are dealing with humans. They are fickle. It's hard to generalize about them, and it is not politically correct to try (could that be part of the issue?). One might say that every model we create would be wrong. I believe that's true, but "Every Model Is Wrong" is a statement we see mentioned in the context of many sciences, including the exact sciences. This statement is usually immediately followed by a statement of how models are still useful, even though in the purest sense they are wrong. The weather models are useful in predicting short term weather patterns. It's ok that these models can't predict longer term weather, as long as we are aware of their limitations. Bohr's model of the atom as a nucleus surrounded by electrons zooming around in clear paths is quite wrong, but it is so useful that we still teach it to kids, and it works nicely to explain a lot of chemical and physical behaviors. Philosophically speaking, humans may not have even the potential to conceive of a perfect model to describe anything significant. But even if a model is ultimately wrong, we mustn't give up trying to improve it. If we have an imperfect model that is useful for describing a very limited subfield of education, this is very good. For example, if we have a model that works only for "teaching and learning English as a Second Language to a group of 10-15 adolescents whose mother tongue is a derivative of Latin", and if that model allows us to correctly predict what techniques would work to improve the teaching and learning, then we have an immensely useful tool in our hands.

Bottom line: Yes, it is complicated. So what?

Thursday, January 28, 2010

FIRE Mission Statement

Stating the mission is useful, in order to focus action, especially when many individuals are involved. And in the creation of a Facilitation Institute for Research in Education (FIRE), many individuals must be involved.

What we need is not to conduct research, but to enable many others to do research. And we have certain expectations from that research. Here is some step-by-step thinking.

  1. We can start with the mission that is implied by the name:
    "To facilitate research in education"

  2. Research in education already takes place in great numbers, but it seems like much of it doesn't lend itself to action in the field. Granted, basic research doesn't always lead to immediate implementation. If we limit ourselves only to applied research - research where we know in advance what we want to do with the results - we will miss unexpected parts of the picture. Parts that have the potential of becoming useful. This was illustrated many times in physics research, where phenomena like electromagnetism, radio waves and lasers were discovered and researched long before it became clear how they could be used. Still, it is possible to do research with an eye on the possible applications, with some attention to specific problems that can be solved. This is close to what FIRE is about: We are looking for useful, applicable research, that can lead to action. Again, this is not meant as a claim that there is no value in pure research in education - just that FIRE is concerned with relatively applicable research.
    So, maybe "To facilitate applicable research in education"

  3. Being aware that my very study of education research so far is short and limited, I still get a clear impression that many education research projects end up floating alone in science-space: They describe a specific situation with a limited group of participants as seen through the eyes of a limited group of researchers. Such studies are often not repeatable. And the suggested theories and explanations being derived from many such studies are not falsifiable or refutable - there is no potential way to show that they are wrong. Karl Popper wouldn't have considered such studies and theories as scientific. He rather liked falsifiability.
    So, possibly "To facilitate applicable scientific research in education"

  4. Another limitation of a study that is limited to a specific group of participants and researchers is that it doesn't lend itself to being a building block that can be accumulated and built upon. Humanity's ability to educate the masses needs to catch up with a lot of changes that already occurred in the past few centuries, and then it needs to stay abreast of the ongoing fast changes in the 21st century and beyond. The knowledge needs to be built up.
    So: "To facilitate scientific research in education, aimed at building up a body of applicable knowledge"

  5. This mission statement does not identify a static goal to be achieved. It is more a statement of duty than a statement of mission. This makes some sense because FIRE is expected to be active for at least a few decades. One way to turn this into an accomplishable mission statement is to ask what it would take to make FIRE unnecessary.
    "To create a self sustaining scientific discipline of education research, continuously building up applicable knowledge"

FIRE will work along these lines - whether as a duty or as a mission, whether with the wording above or with better wording. To do that, FIRE should acquire the capability to provide services, to make resources (including financial) available to researchers, and to affect the culture of education research.

Friday, January 1, 2010

Less Than Two Years To Go

Having committed to a specific plan, action becomes urgent. It is no longer a matter of studying the issue, which can be endless. By December 31st, 2011, an organization codenamed FIRE (Facilitation Institute for Research in Education) should be up and running. For this large milestone to be met, many smaller milestones need to be passed along the way. If the first few minor milestones are not met on time, this will create a cascade of unmet milestones, probably leading to an unmet final target. So, action is urgent.

Below are a few tentative milestones, with lots of details marked as TBD (To Be Determined). The dates are aligned to quarters, along the dates of winter and summer solstice and equinox. No mystical intention is to be read into this fact - it just amuses me to align earthly plans with astronomical events. The tentative dates are listed in descending order, looking for what needs to be done to achieve each milestone.

21/12/2011
FIRE is up and running. FIRE is fully funded (amounts TBD) for the long term (timeframe TBD). FIRE is recognized by the academia (measurement TBD), by school (measurement TBD) and by the public in general (measurement TBD). FIRE is active in all its public relations domains (details TBD). All FIRE data pools, services and resource pools are active, available and being used by their respective intended users (details TBD).

21/9/2011
Databases further populated (amounts TBD). Automated services have been used by at least 400 end-users (user type distribution TBD). Raw data collection (details TBD) started, with the legal issues resolved. At least one significant (criteria TBD) research reviewed by FIRE. At least one significant (criteria TBD) research designed with the support of FIRE. At least 2 commercial companies are involved in educational research with FIRE.

21/6/2011
Databases further populated (amounts TBD). Automated services have been used by at least 100 end-users (user type distribution TBD). Funding secured until mid 2012. Each research support item owned by FIRE has been used in research. At least 3 commercial companies are actively (criteria TBD) considering conducting educational research with the support of FIRE.

21/3/2011
Automated services fully functional. Databases further populated (amounts TBD). Automated services have been used by at least 10 end-users. At least one questionnaire validated by FIRE was used in research. At least one senior researcher joins, giving FIRE research design and review capabilities. At least 5 commercial companies show interest in conducting educational research with the support of FIRE.

21/12/2010
2011 milestones finalized. Automated services fully functional in beta mode (details TBD). All databases populated with real data (amounts and details TBD). Questionnaire validation services started being used. Access to research-support resources secured: Video recording and transcription equipment, lab (more details TBD), together with maintenance capability. Approached 60% of the relevant commercial companies.

21/9/2010
100% of the TBD items resolved to specific targets. 90% of the listed professors approached and consulted. Automated services - web-site and underlying databases and search engines - fully functional in alpha mode (details TBD). Professional legal activity started, addressing privacy issues of raw data collection. Questionnaire validation capability in place. Compiled a list of relevant commercial companies that may have an interest in educational research. Started approaching commercial companies.

21/6/2010
At least one co-founder with long-term commitment joins the project. Initial funding secured (amounts TBD). Professional PR activity started. The Board of Advisors is specified and 70% populated. 100% of a list of all relevant professors of education and education policy - compiled. More professors from the list are approached and consulted (percentage TBD). Initial web-site active (details TBD).

21/3/2010
70% of the "TBD" items resolved to specific targets. The legal entity of FIRE is in place. A Board of Advisors is started (types and numbers of advisors TBD). A list of professors and researchers of education and education policy is started. Some professors from the list (number TBD) approached, informed of the services to be provided by FIRE, feedback received and possibly changes made to the FIRE specification.

1/1/2010
Initial work-plan in place and committed to.