What Science is Actually Used by Nurses?
Glen S. Aikenhead
College of Education
University of Saskatchewan
28 Campus Drive
Saskatoon, SK, S7N 0X1
November 17, 2003
A paper presented to the Curriculum Studies Mini-Conference, University of Saskatchewan, November 22, 2003
A primary goal espoused for the science curriculum is to prepare students for science-related careers, for example in industry, government, and the health professions. Science educators expect that students will integrate the curriculum’s scientific content and processes into their own thinking (i.e. students will succeed at meaningful learning), so that this understanding is accessible later when students are engaged in a science-rich workplace. How well are science educators’ expectations fulfilled?
Several studies have shown a poor match between the scientific content generally taught in high school science courses and the type of scientific understanding required for success in science-based occupations in which knowledge of the practice of science and technology is either critical to the job or enhances occupational competence (Chin, Munby, Hutchinson, Taylor & Clark, in press; Coles, 1997; Lottero-Perdue & Brickhouse, 2002). Duggan and Gott (2002) investigated in some detail the role of science for employees in five science-based industries: a chemical plant specializing in colourants for foods, cosmetics and pharmaceuticals; a biotechnology firm specializing in medical diagnostic kits; an environmental analysis lab; an engineering company manufacturing pumps for the petrochemical industry; and an arable farm. Duggan and Gott discovered, along with the studies cited above, that most of the scientific conceptual understanding used by employees was learned on the job, not in high school or university courses.
But Duggan and Gott went further in their analysis of what knowledge employees used on the job and concluded, “a secure knowledge of procedural understanding appeared to be critical” (p. 674). Procedural understanding, the thinking behind the doing of science, draws upon a wealth of ideas about evidence itself, for example, concepts of validity and reliability of evidence. Duggan and Gott called this smaller cluster of ideas “concepts of evidence.” Thus, procedural understanding is informed, in part, by concepts of evidence.
The present study extended Duggan and Gott’s (2002) research program to the health professions, specifically acute-care nurses working in one hospital unit. Nursing represents a large sector of science-based jobs in Canada. Nurses must draw upon a constellation of declarative and procedural knowledge to interpret evidence as they carry out orders from a doctor, follow an appropriate protocol, gather observational data on a patient, and respond to the patient’s physical and emotional needs. Nurses’ constellation of knowledge-in-use was of interest to this research project. By learning more about nurses’ knowledge-in-use on hospital wards, science educators may develop more effective science curricula for science-based occupations, such as health professions. For example, it would be helpful to discover what conceptual content in physics has a role in nursing, given the abundance of instruments utilized by nurses. It would also be helpful to know if there is a common core of concepts of evidence used by nurses as they engage in critical thinking, problem solving, and decision making. An effective school science curriculum appropriate to science-related careers such as nursing might also help improve the general public’s scientific literacy so the public can better understand and communicate with health professionals (Eijkelhof, 1990, 1994; Layton, 1991).
Although the topics of critical thinking and problem solving (e.g. novice versus expert) and of decision making (e.g. taking professional action) are beyond the scope of this research, these processes form the context in which evidence is acquired and used by nurses; and therefore, these processes form an important context for the research.
The study investigated the following question: While taking note of the specific declarative knowledge used by six acute-care nurses in a hospital (knowledge-in-use associated with the technical field of nursing and the abstract field of science), is there a core set of concepts of evidence that can be identified?
Science in the Everyday World
Interestingly, Duggan and Gott (2002) also discovered that the concepts of evidence needed by employees in science-related careers were critical to a non-science public who were involved with a science-related social issue, for instance, parents deciding whether or not to have their infant child immunized. This finding complements extensive research into the use of scientific knowledge in everyday science-related problem solving and decision making (Davidson & Schibeci, 2000; Dori & Tal, 2000; Goshorn, 1996; Lambert & Rose, 1990; Macgill, 1987; Michael, 1992; Tytler, Duggan & Gott, 2001; Wynne, 1991). Thirty-one different case studies of this type of research were reviewed by Ryder (2001), who firmly concluded: When people need to communicate with experts and/or take action, they usually learn the scientific knowledge as required. The qualification “as required” needs clarification.
Even though people seem to learn science in their everyday world as required, this learning is not often the “pure science” (canonical content) transmitted by school and university science courses. Research into the application of scientific knowledge to everyday events has produced one clear and consistent finding: most often, canonical scientific knowledge is not directly useable in science-related everyday situations, for various reasons (Cajas, 1998; Furnham, 1992; Jenkins, 1992; Layton, 1991; Layton, Jenkins, Macgill & Davey, 1993; Ryder, 2001; Solomon, 1984; Wynne, 1991). For instance, when investigating an everyday event for which canonical science content was directly relevant, Lawrenz and Gray (1995) found that science teachers with science degrees did not use scientific knowledge to make meaning out of the event, but instead used other content knowledge such as values. Equivalent research with nurses has not been conducted. What type of knowledge-in-use do they tend to rely on in their workplace? In other words, what science is actually used by nurses?
The pervasive failure of scientific knowledge to be directly applied to everyday science-related problem solving can be explained, in part, by the discovery that canonical science must be transformed (i.e. deconstructed and then reconstructed according to the idiosyncratic demands of the context) into knowledge very different in character from the “pure science” knowledge of university science courses (Jenkins, 1992, 2002; Layton, 1991), as one moves from “pure science” for explaining or describing, to “practical science” for action (e.g. professional knowledge of nursing).
Two general conclusions can be drawn from the literature reviewed here. First, empirical evidence consistently contradicts scientists’ and science teachers’ hypothetical claims that science is directly applicable to one’s everyday life and science-related jobs. What scientists and science teachers probably mean is that scientific concepts can be used to abstract meaning from an everyday or job-related event. The fact that this type of intellectual abstraction is only relevant to those who enjoy explaining everyday experiences this way (i.e. those who have a worldview that harmonizes with a worldview endemic to science; Cobern, 1991; Cobern & Aikenhead, 1998) suggests that scientific explanations likely appear irrelevant to those who do not embrace a scientific worldview.
A second general conclusion points to the existence of a type of knowledge, concepts of evidence, not normally emphasized in the school science curriculum to any extent, but nevertheless used extensively by people in science-related careers or in everyday circumstances requiring a decision on a science-related matter.
Three ideas (worldviews, concepts of evidence, and the nature of evidence) need to be clarified in the context of nursing in order to establish a theoretical framework in which to interpret the results presented in this paper.
Nurses’ Worldviews Related to Science
Worldview refers to our fundamental unconscious presuppositions with which we give meaning to our experiences in the world around us. Drawing upon cultural anthropologists Geertz (1973) and Kearney (1984), Cobern (1991) outlined for science educators seven categories that define worldview: Self, Non-self, Classification, Relationships, Causality, Time, and Space. Although different disciplines within science enjoy their own orientation to describing/explaining nature, the worldview generally associated with those disciplines is characterized by assumptions that the world is: mechanistic (i.e. knowledge expressed in terms of inorganic machine metaphors), reductionist (i.e. the whole is a simple sum of its parts), and knowable through causal relationships linearly conceived and context independent.
To what extent do nurses’ worldviews reflect features of a scientific worldview? Cobern (1993) conducted in-depth research with 15 nursing students taking advanced university science courses. The nursing students talked about nature and what it meant to them. Cobern’s data showed that most of the students did not share a materialistic and reductionistic worldview towards nature, as did their science instructor, but instead held an aesthetic, religious, or emotional worldview towards nature. Several students did not even connect science with knowledge of the natural world. Only six of the 15 nursing students spoke in ways that suggested scientific ideas had become integrated into their thinking, but five of those six used that scientific thinking in heterodox fashions, for example, mixing scientific knowledge (neurological synapses) and religious knowledge (a divinity’s created world). Only one nurse (Carla) out of the 15 in Cobern’s study expressed views similar to an orthodox scientific point of view. Cobern concluded that the nursing students appeared more interested in relating to, rather than scientifically knowing about, nature. How well did these nurses integrate the science curriculum’s content and processes into their own thinking? “Student views in this study suggest that one can pass the exams and still not have had one’s basic views of the world changed. Most of these students said little or nothing about science. When they did, the science was usually cast in an unorthodox context” (Cobern, 1993, p. 948).
Cobern’s study adds richness to our understanding that, in general, a small proportion of people make sense of their world in a way that harmonizes with the worldview held by most scientists, and everyone else feels more comfortable with other worldview orientations. Cobern, however, did not conduct his research in the context of professional practice, where nurses gather and evaluate scientific-like data as they attend to specific patients, solve problems, and make decisions. What science is actually used under these circumstances?
Concepts of Evidence
In the process of gathering and evaluating data to determine if the data warrant the status of evidence, and in the process of evaluating evidence to decide what to do next, people use conceptions (or misconceptions) concerning data and evidence (Duggan & Gott, 2002). Gott et al. (2003) provide an encyclopaedia of “concepts of evidence” derived from research into events experienced by people in science-related careers and by people with no particular science background. Concepts of evidence are usually applied unconsciously as tacit knowledge (Higgs & Jones, 2002) to determine how credible the data are, and then in turn how credible and important the evidence is, given the social context in which action may occur on the basis of that evidence (Duggan & Gott, 2002).
Reliability is a general concept of evidence. According to Gott et al. (2003), the scientific meaning of reliability usually refers to the consistency of readings when multiple readings are gathered. Reliability generally is enhanced by: (1) repetitive readings from the same instrument (e.g. measurement of blood alcohol concentration can be assessed with a breathalyser, but at least three independent readings are made before the measure is considered legally reliable evidence in Canada); (2) multiple instrument readings using similar types of instruments, a procedure often called “measurement triangulation” (e.g. measuring blood alcohol concentration with two different models of breathalysers); and (3) multiple observers (e.g. spot checks of measurement techniques by co-workers are sometimes built into routine procedures) to minimize human error in the use of an instrument.
A fundamental concept of evidence that underscores reliability is the concept of “non-repeatability”: repeated measurements of the same quantity with the same instrument seldom give exactly the same value. The sensitivity of an instrument is a measure of the amount of error inherent in the instrument itself (i.e. measurement error). Sensitive instruments produce less fluctuation in their readings (i.e. they have low measurement error). One way to express sensitivity or measurement error is with a ± value.
Reliability decreases as an instrument’s measurement error (its ± value) increases. Thus, a datum is weighed as evidence by considering the instrument’s measurement error and by considering how the measurement procedures were carried out. For example, the reliability of a measurement of a blood alcohol concentration should be assessed in terms of the measurement error associated with the breathalyser (e.g. ± 0.01) and in terms of how the measurement was taken (e.g. superficial breathing versus deep breathing by a subject). To investigate a patient’s source of pain, for instance, reliability of the investigation’s design would include an assessment of each measurement and every datum. Factors associated with the choice of measuring instruments must also be considered, for instance, the measurement error associated with each instrument. These concepts of evidence are generally associated with reliability in science-rich workplaces.
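As a concrete illustration, the interplay between repeated readings and an instrument’s ± value can be sketched in a few lines of Python. This is a minimal sketch only: the readings, the ± 0.01 error, and the function name are hypothetical, chosen to mirror the breathalyser example above rather than any procedure described by Gott et al.

```python
from statistics import mean, stdev

def weigh_reading(readings, instrument_error):
    """Summarize repeated readings from one instrument.

    Reliability is reflected in the spread of the readings;
    the instrument's stated measurement error (its +/- value)
    bounds how finely the averaged value can be trusted.
    """
    avg = mean(readings)
    spread = stdev(readings) if len(readings) > 1 else 0.0
    # Overall uncertainty: the larger of the observed spread
    # and the instrument's stated measurement error.
    uncertainty = max(spread, instrument_error)
    return avg, uncertainty

# Three independent breathalyser readings (hypothetical values),
# with an instrument error of +/- 0.01 as in the example above.
avg, err = weigh_reading([0.09, 0.10, 0.10], 0.01)
print(f"blood alcohol: {avg:.3f} +/- {err:.3f}")
```

Note that even with consistent readings, the reported uncertainty never shrinks below the instrument’s own ± value, which is the point of the “non-repeatability” discussion above.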
The quality of a scientific measurement is determined not only by its reliability but also by its validity. Validity is concerned with: “Does the reading actually measure what is claimed to be measured?” (Gott et al., 2003, 9.2). For instance, a particular monitor on a surgical ward is connected to a finger probe that produces a reading claimed to measure a patient’s blood oxygen saturation (the patient’s “sats”). But according to Gia, one of the nurses in this study, the finger probe may also inadvertently measure a patient’s smoking behaviour (yellow fingernails), a patient’s hand temperature, or a patient’s haemoglobin count. Depending on the patient, the finger probe may not yield a valid measure of blood oxygen saturation.
Police measure the blood alcohol concentration of a person by using a breathalyser and by crosschecking that measurement with a blood test. Crosschecking with a different process to measure the same variable is the concept of evidence “validity triangulation.”
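Validity triangulation can likewise be sketched as a simple agreement check between two independent processes. The tolerance and the numeric values here are hypothetical, assumed only for illustration:

```python
def triangulate(reading_a, reading_b, tolerance):
    """Validity triangulation: two different measurement processes
    target the same variable; agreement within a tolerance supports
    the claim that each actually measures what it purports to."""
    return abs(reading_a - reading_b) <= tolerance

# Hypothetical crosscheck: a breathalyser reading against a
# blood-test result for the same blood alcohol concentration.
agrees = triangulate(0.09, 0.10, tolerance=0.02)
print(agrees)  # True: the two processes corroborate each other
```

The design choice worth noting is that triangulation compares different *processes*, not repeated readings from the same one; the latter bears on reliability, the former on validity.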
Validity is a broad concept of evidence often discussed in science-based industries in terms of how close a piece of evidence comes to the “true” value; in other words, a measurement’s accuracy (Gott et al., 2003, 6.1). The two concepts, validity and accuracy, are very closely related. The word “accuracy” refers to a less abstract concept and it appeared in all the nurses’ transcripts, while the more abstract word “validity” never did.
Although the concepts of reliability and accuracy differ considerably, they are related when one judges whether or not some data should be considered as evidence. One’s confidence in a measure’s accuracy will be influenced by the measure’s reliability; unreliable readings, for instance, do not engender the belief that an averaged datum is particularly accurate.
The Nature of Evidence
Evidence is normally thought of as data that have been scrutinized by various methods or validation criteria, such as comparisons with other data, or consistency with accepted knowledge (Gott, Duggan & Roberts, 1999). Scrutiny affords a degree of credibility in the data.
Different science-related workplaces have varying degrees of data richness. Cases of high complexity in some industries led Gott and colleagues (1999) to stipulate the following definitions: several readings produce a measurement; several measurements establish a datum; and a datum repeated over time accumulates into data. For simple situations, however, one reading or measurement could establish a datum, defined by Gott and colleagues (1999, p. 1) as “the measurement of a parameter (e.g. the volume of a gas),” and when repeated in concert with a variable, more than one datum becomes data (e.g. the volume of gas measured at various temperatures). A datum can be either quantitative or qualitative. An example of a qualitative datum on a surgical ward is “type of oxygen equipment” (e.g. prong or mask, along with several mask sizes).
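Gott and colleagues’ vocabulary (several readings produce a measurement; a measurement of a parameter establishes a datum; a datum repeated in concert with a variable becomes data) can be sketched as a small data structure. The class and field names below are my own, invented purely for illustration:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Datum:
    """One datum: a parameter and its value, quantitative or qualitative."""
    parameter: str   # e.g. "volume of gas (mL)" or "type of oxygen equipment"
    value: object    # a number, or a label such as "mask (medium)"

def measurement(readings):
    """Several readings produce a measurement (here, simply their mean)."""
    return mean(readings)

# A quantitative datum built from repeated readings (hypothetical values).
volume = Datum("volume of gas (mL)", measurement([49.8, 50.1, 50.0]))

# A qualitative datum recorded directly, as on a surgical ward.
oxygen = Datum("type of oxygen equipment", "mask (medium)")

# Data: the same parameter repeated in concert with a variable,
# e.g. gas volume measured at several temperatures (degrees C -> mL).
data = {20: 49.9, 30: 51.6, 40: 53.3}
```

The sketch makes the hierarchy explicit: a reading is raw, a measurement aggregates readings, a datum attaches a measurement (or a qualitative observation) to a named parameter, and data arise only once the parameter is tracked against a variable.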
Gott and colleagues (1999) devised a model for how a measurement develops into evidence during a science-related event, evidence which in turn is evaluated with respect to a possible outcome, such as making a decision based on the evidence. This outcome is always embedded in a social context of the science-related event (e.g. Does the product meet quality-assurance standards?). The evaluation of evidence is influenced by features of the social context (e.g. cost, practicality, and time). Figure 1 depicts this model (Gott et al., 1999). The model also frames concepts of evidence, that is, concepts about data and the credibility of those data (e.g. repeatability, calibration, instrument error, sampling, reliability, validity, and accuracy).
Figure 1 fits here.
In summary, concepts about data and about the evaluation of data together comprise concepts of evidence; and concepts of evidence plus the evaluation of that evidence in a particular setting, are all embraced by the model (Figure 1) proposed by Gott and colleagues (1999).
The research reported here was a preliminary study carried out on a modest scale. No generalizations were sought, only: (1) a description of the concepts of evidence apparent in nurses’ knowledge-in-use, and (2) comparisons to other science-based occupations. In no sense of the term were nurses evaluated in this study. Although critical thinking or problem solving usually leads to decision making and then to action taken by a nurse, this study was limited to the science-related knowledge-in-use in the context of critical thinking, problem solving, and decision making.
With the approval of several layers of ethics committees, I contacted a Unit Manager (the administrative head of a surgical ward) chosen by the Saskatoon Health Region’s research office, and I personally met with her to request the involvement of her unit. The Manager agreed and contacted nurses she thought might be interested, and when they expressed tentative interest she forwarded their names and home telephone numbers to me so I could contact them personally. This process occurred over a four-week period.
I met with each potential participant individually to give a short oral description of the study, to answer any questions, and to provide a written summary of the study and the ethics contract to be used. Several days later, I telephoned each potential participant at home to ask if they had further questions and whether they wished to volunteer to participate. All potential participants accepted; six nurses in total, four women and two men. (The surgical unit comprised 43 women and 7 men nurses.) For each nurse, a five-minute meeting was held at the hospital to sign the ethics contract, to give the nurse a miniature tape recorder (see below for an explanation), and to set a time for their first interview.
By the time six nurses had volunteered, it had become readily apparent from the first set of interviews that each interview would produce extensive and rich data to answer the research question effectively. Thus, a sample size of six participants was pragmatically arrived at. The nurses chose the following pseudonyms for themselves: Chloe, Gia, Jamie, Joan, Sarah, and Terry.
The Unit Manager of the surgical ward was involved in the study to help direct the research to ensure minimal disruption and optimum data collection, and to interact with a draft version of the Research Report.
The task of the researcher was to interpret the words of the participants, in order to identify their science-related knowledge-in-use, expressed during conversations about a personal science-related problem-solving or decision-making event on the ward. Because expert performers are seldom explicitly aware of the knowledge they use at any one moment, the usual type of semi-structured interviewing is rarely successful (Duggan & Gott, 2002). Therefore, unstructured interviews were conducted and they focused on the participants’ cognitive engagement in practice.
By talking into a personal miniature tape recorder during a shift, nurses identified on-going events (both normal and discrepant events) related to their evidence-based practice, and then later they were interviewed about some of these events, usually one per interview. Some nurses chose to use written notes instead of a miniature tape recorder. To ensure professional confidentiality between a patient and nurse, the interviewer did not observe or have any contact with patients.
Each nurse was interviewed four times during a four- to six-week period. The interviews took place in a private seminar room near the surgical unit, at a time convenient to the nurse (usually around noon for day shifts, and 10 pm for night shifts). Each interview took between 10 and 20 minutes, with most lasting 20 minutes. All interviews were audio taped. The project accumulated over 7 hours of focused discussions. Relevant portions of each tape were transcribed, but before any portion of a transcription became public data, it was cleared by the participant in terms of its accuracy in portraying the participant’s meaning and in terms of how well it safeguarded the participant’s anonymity. Each nurse scrutinized a draft of a transcript, made appropriate changes if they wished, and then signed a release statement.
The data (approximately 88 pages of transcriptions) were analyzed to tease out concepts of evidence specifically, and declarative and procedural knowledge in general, that contributed to the critical thinking, problem solving, or decision making in which a nurse had been engaged. In this paper, quotations from participants are referenced citing their pseudonym, the interview date, and lines in the interview transcript from which the quotation was taken.
A draft version of the Research Report was written, and then read by the Unit Manager who checked it for accuracy and for anonymity of the nurses and the hospital. She was interviewed to discuss her reaction to the research results, and this information was included in the final draft of the Research Report. For this purpose, this interview was audio taped, relevant portions transcribed, and the final transcription signed off by the Unit Manager.
It is important to note potential limitations to the interview data. The interviews took the form of a conversation between a nurse and myself, an outsider to nursing. Being an outsider gave me an advantage because I could “make the familiar strange” in order to discover implicit concepts of evidence used by nurses. Making the familiar strange is a conventional process in qualitative research. However, being an outsider might have had disadvantages as well. Even though the nurses were aware of my science background as a science educator, they may have simplified their descriptions by using a non-scientific genre of communication in much the same way as they would with a patient or a patient’s relatives. Because I did not observe nurses speaking among themselves or to other hospital professionals, I have no data with which to compare those conversations with my interview conversations with the nurses.
This issue of simplified descriptions never arose during the interviews with nurses, but it was discussed in the interview with the Unit Manager. For instance, the Unit Manager pointed out that the six nurses had consistently referred to a blood pressure instrument on a portable trolley as a “Dynamap,” even though there were two types of machines that measure blood pressure: one produced by the Critikon company and one by the Welch-Allyn company. Only the former has the brand name “Dynamap,” which happens to have a poor track record for accuracy (the Unit Manager, October 14, 66-71). The nurses referred to both machines as “Dynamaps.” It is not possible to conclude whether this simplification was part of their normal professional discourse (much like using “Kleenex” to represent different brands of tissues even though “Kleenex” refers to only one brand name), or whether the simplification was for my benefit as an outsider.
My sense of my conversations with each nurse, however, was that the nurses spoke to me much as they spoke to each other professionally, because I continually had to ask them to translate abbreviations they automatically used (e.g. “BP,” “sats,” and “DC”), and because their description of a sequence of events relied on tacit knowledge of nursing and did not follow the actual sequence of events, a situation that required my constant probing to sort out the actual sequence in my mind.
The first potential limitation in the data is, therefore, that one cannot be certain to what degree the nurses spoke in a lay genre to me as an outsider, or spoke in a professional genre to me as a science person. (Terry no doubt spoke to me in a science genre, as discussed later in this paper.)
A second potential limitation in the data concerns the scope of the study. During the 24 interviews conducted, there were about 30 events discussed, some of which overlapped between nurses. This represents a limited number of events. Hence, some key events on the surgical ward are most likely missing from this preliminary research project.
The results of this study are presented in a sequence that first apprises the reader of various surgical ward contexts in which specific concepts of evidence and scientific knowledge were found to be enacted by nurses. The results are organized around the following topics: research context, evaluating data, concepts of evidence, and scientific knowledge-in-use.
The knowledge one uses and the way one uses it depend on the function of the setting in which the knowledge is used. The way scientific knowledge is used in any particular setting often depends on the setting itself (Chin, Munby, Hutchinson, Taylor & Clark, in press; Layton, 1991; Ryder, 2001). Accordingly, Chin and colleagues (in press) proposed three features of any setting that involves science-related knowledge: purpose, accountability, and the substance (knowledge-in-use) found in that context. These three contextual features organize the description of the context of this research study, a hospital’s surgical unit.
Purpose of the Setting
In a number of science-rich workplaces studied by Duggan and Gott (2002), the purpose of the workplace was quality control of a product or process, a purpose that affords the luxury of repeated measurements and the creation of new methods to defend claims made in the workplace. However, the main purpose of nursing on a surgical ward is to ameliorate the health of patients and to reduce their pain (“to improve the condition and comfort of the patient;” Chloe, May 26, 31). Given the constraints of time, resources, and the immediate consequences to a patient, empirical evidence serves a much different purpose for acute-care nurses than for workers in most other science-related occupations. One indication of time constraints was the fact that nurses were unable to participate in a research interview for about 20% of the prearranged visits to the unit, due to unpredictable workload duties.
The nurses in the research study perceived their primary role as advocates for their patients’ physical and emotional healing, in a milieu of resources (e.g. medication, tests, and procedures) and of people (e.g. doctors, fellow professionals, technicians, and visiting family and friends of the patient).
Terry: It’s more my responsibility to advocate for that patient, to make the surgeon aware of what my findings are, and you say, “Well, you know, these [chest tubes] have been in for so long, and this is what’s draining and there’s this bubbling or tidalling” [fluctuating]. (June 15, 8-11)
Thus, the purpose of knowledge-in-use for acute-care nurses encompassed three domains: healing of patients, proper use of resources, and effective interaction with people. The last two domains always depended on the healing of patients – the primary purpose of a surgical ward.
Accountability in the Setting
For most science-related careers in business, industry, and government laboratories, for example, accountability is assessed with respect to the quality and efficiency of the product or with respect to the correctness and appropriate use of a procedure.
The nurses in this study talked about events related to gathering and evaluating evidence in the context of clinical reasoning. Based on these focused discussions, I inferred an outsider’s perspective on accountability in the surgical unit: the nurses were held accountable for the patient’s physical and emotional well-being, for the appropriate use of resources (e.g. calling doctors/residents to perform a function), and for maintaining the hospital’s cultural standards of physical and emotional safety and comfort (e.g. managing a patient’s family and friends). Because the formal administrative hierarchy of accountability was never discussed in the interviews, an administrative perspective on accountability cannot be extrapolated from the data.
Knowledge-in-Use Enacted in the Setting
Although nurses do not stop to reflect on the various types of knowledge they happen to use during a problem-solving or decision-making event, it is convenient for a researcher to describe these types of knowledge in terms of categories found in the research literature. Categories help to articulate nurses’ knowledge-in-use that comprises an important aspect to their clinical reasoning (Higgs & Jones, 2002). The categories used in this research paper are summarized in Figure 2 and are described here.
Figure 2 fits here.
The first distinction to be made within the category of a nurse’s knowledge-in-use is between declarative knowledge (propositional knowledge, “knowing that”) and procedural knowledge (non-propositional knowledge, “knowing how”) (Chin et al., in press; Higgs & Jones, 2002). Declarative knowledge possessed by a nurse on a surgical ward, “declarative understanding,” can be divided into two further categories: scientific knowledge, abstract canonical content found in high school and university science courses – facts, concepts, and values; and professional knowledge of nursing, abstract and technical content found in nursing courses and apprenticeships – facts, concepts, and values.
Scientific knowledge is comprised of mechanistic explanations and classification schemes universally applicable (i.e. context independent). Its cognitive purpose is to explore the applicability of currently held paradigms and to create new knowledge by either resolving discrepancies that arise or by exploring new phenomena made accessible by advances in technology (Kuhn, 1970). Validation through consensus making is usually manifested by published articles accepted by recognized paradigm practitioners. Of potential interest to a surgical ward is “core science” (Cole, 1992), the kind of scientific knowledge (e.g. air pressure) that has been validated by such a strong consensus of scientists that it is not considered open to change. Not of interest to a surgical ward is “frontier science,” which is highly tentative or speculative scientific knowledge (e.g. the link between high-voltage power lines and childhood leukemia) that lacks a strong validating consensus at the time.
Higgs and Jones (2002) described declarative professional knowledge of nursing as multi-paradigmatic facts, concepts, and values that give emphasis to research-based empirical information directly related to nurses’ problem solving and decision making, contextualized in clinical reasoning.
For effective clinical reasoning, we consider that health professionals rely upon the scientific knowledge of human behaviour and body responses in health and illness, the aesthetic perception of significant human experiences, a personal understanding of the uniqueness of the self and others, and the ability to make decisions within concrete situations involving particular moral judgements. (p. 27)
According to Higgs and Jones (2002, p. 28), declarative knowledge has a clear purpose: to inform wise intuitive clinical reasoning. (Figure 2 does not represent clinical reasoning, but only the knowledge-in-use involved in clinical reasoning.) Accountability in a surgical unit is tied to this purpose in the context of a unique individual patient’s well-being. Professional knowledge of nursing encompasses mechanistic explanations for a particular event (context dependent), and empirical relationships and correlations (also context dependent). The following excerpt expresses a mechanistic explanation contextualized in a nursing event:
Terry: What happened was this: he was accumulating a lot of fluid in his lungs, so the membrane was getting thicker. So when you have a larger barrier between the respiratory and circulatory systems, you’re going to get poorer oxygen exchange. (June 25, 62-64)
Empirical relationships within professional knowledge of nursing are exemplified by the following excerpts:
Sarah: Then I remembered from the day before, he had a lower potassium level, it was 3.1. So they were infusing him with some boluses to get it up. The normal is 3.5 to 5.5. Sometimes when it’s low it can cause confusion. (June 23, 8-10, emphasis added)
Sarah: Males and females are different. Males have more [haemoglobin]. (June 16, 48-49)
Chloe: One of the comments the CCA [Critical Care Associate] made when he arrived was that if the heart rate is greater than a rate of 140 minus the patient’s age, it’s not sustainable. This is a “ventricular rate,” and he [the patient] certainly fell into that category. (June 7, 134-136, emphasis added)
Further examples are cited below in the section “Evaluating Data.”
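The CCA’s rule of thumb quoted above – a ventricular rate greater than 140 minus the patient’s age is not sustainable – is simple enough to express as arithmetic. A minimal sketch follows; the function name is illustrative, and the rule itself is an informal heuristic reported in the interview, not hospital protocol:

```python
def unsustainable_ventricular_rate(heart_rate: int, age: int) -> bool:
    """Rule of thumb quoted by the CCA: a ventricular rate greater
    than (140 - patient's age) is not sustainable."""
    return heart_rate > 140 - age

# A 70-year-old with a ventricular rate of 140 far exceeds the
# threshold of 140 - 70 = 70 beats per minute.
```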
The distinction between scientific knowledge and professional knowledge of nursing can sometimes be vague, but the distinction has pragmatic value. A critical feature of the context of nursing is the uniqueness of each patient. Knowledge-in-use is relevant only to the extent that it acknowledges this unique individuality. Thus, decontextualized ideas (i.e. scientific knowledge) by their nature may be out of harmony with the contingencies of a unique patient. In comparison, chemical industries do not treat molecules as unique entities; quite the contrary, all molecules of carbon dioxide, for instance, are assumed to be identical (except for their statistically inscribed thermodynamic properties). Thus, the individual uniqueness of a patient would usually demand contextualized professional knowledge of nursing rather than decontextualized scientific knowledge. The patient’s uniqueness constitutes a particular context for knowledge.
The category “procedural knowledge” (Figure 2) comprises a host of facts, concepts, skills, and values, functioning at various levels of concreteness and abstraction. Procedural knowledge informs clinical reasoning (e.g. problem solving and decision making). Because problem solving and decision making served as a context for the study and not its focus, these processes are not represented in Figure 2. Problem solving and decision making involve an interaction between declarative understanding and procedural understanding, according to Duggan and Gott (2002), an interaction acknowledged in Figure 2 by a simple two-way arrow.
Procedural understanding (Figure 2) is underpinned by (1) the thinking associated with the collection of data and the judgment of the data’s significance as evidence (using concepts of evidence to do so); and (2) the action of nursing (“procedural capability”), that is, knowing what to do, how to do it, and how to communicate this with fellow nurses and doctors. Action produces data that are processed using concepts of evidence to help judge the data’s credibility. Credible data inform a nurse’s problem solving or decision making. One example would be nurses using a finger probe to determine a patient’s blood oxygenation saturation (the patient’s sats); another example would be nurses providing a patient with a greater flow of oxygen and observing a change in the patient’s lip colour. This evidence is then used in problem solving, the result of which is often a decision and action. The relationship is cyclical: action → data → thinking → decision making → thinking → action → data → thinking → decision making, etc.
The nature of evidence in clinical reasoning is the general focus of this research study; but in particular, the study investigated the implicit or explicit concepts of evidence (Figure 2) used by acute-care nurses during specific daily events (problem solving or decision making) on the hospital ward.
Gott and colleagues’ (1999) model (Figure 1) is applicable to the science-related work of nurses on a surgical ward. A typical datum, for example, is a patient’s blood oxygenation saturation (the sats). It can be a quantitative measure (e.g. taken by a finger probe; 82%) or a qualitative measure (e.g. taken by observing the degree of purple-bluishness in a patient’s lips). In terms of the measurement complexity found in other science-rich workplaces (e.g. chemical plants and environmental analysis labs), nursing appears to be at the non-complex end of the spectrum. Thus, the following terms were generally used interchangeably by the nurses in the study: measurement, reading, symptom, and observation. In the context of the surgical unit, these terms were synonymous with the model’s term datum and will be applied interchangeably in this paper.
Surgical nurses appeared to assess data in three different ways. Data (readings, symptoms, measurements, or observations) became evidence when: (1) a datum was corroborated by other data, (2) trends in the data were perceived, and (3) there was a consistency or inconsistency between a datum and its context. In some instances, these three different ways worked in various combinations to produce evidence.
The first category (corroborated by other data) can be illustrated by the quantitative and qualitative blood oxygenation examples just above. The two examples are directly related to each other because each datum tends to corroborate the other: patients with very low blood oxygen saturation (i.e. 82%) tend to have purple-bluish lips – a condition called “cyanosis.” (This relationship is an instance of “professional knowledge of nursing – empirical relationship,” Figure 2.) The two measures taken together (each a datum) produce credible data; hence, together they likely constitute evidence upon which to make a nursing decision. In other words, several measurements can become evidence if they corroborate an inference about what is happening with a patient; in this case, oxygen deprivation. This type of data corroboration draws upon different ways of measuring the same variable and is known as “validity triangulation” (a concept of evidence).
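Validity triangulation, as just described, amounts to a check that two independent measures of the same variable agree before the data are treated as evidence. A minimal sketch of that logic follows; the function name and the 85% cut-off are illustrative assumptions for the example, not clinical values reported in the study:

```python
def oxygen_deprivation_evidence(sats_percent: float, cyanotic_lips: bool) -> bool:
    """Validity triangulation sketch: a quantitative finger-probe sats
    reading and the qualitative observation of purple-bluish lips
    (cyanosis) corroborate one another. The 85% cut-off is an
    illustrative assumption, not a value from the study."""
    return sats_percent < 85 and cyanotic_lips

# A sats reading of 82% together with cyanotic lips would constitute
# corroborated evidence of oxygen deprivation; either datum alone would not.
```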
A second category for describing how observations become evidence is a trend in data, which deals with repeated measurements over time (e.g. sats of 94%, 92%, and 90%). Because this occurrence produces data that form a trend or pattern recognizable to a nurse, the data become evidence. In the present example, 90% is not usually considered dangerously low, but the trend itself indicates an oxygen deprivation problem. Terry talked about a trend in a post-operative patient’s haemoglobin count:
Terry: All of a sudden you’re watching the red cell count go 120, 109, 98, over a period of time, and you stop to ask, “It [the haemoglobin] is going somewhere, where is it going?” (June 25, 32-34)
Chloe described the protocol for measuring blood pressure when attending a patient whose heart beat was dangerously increasing.
Chloe: …So every three minutes it [a blood pressure instrument on a portable trolley] would pump up the cuff and then give you a reading.
Chloe: So then we had a sheet of graph paper and as soon as the three minutes passed and the data came on the screen, we would chart them on the graph paper and we could see a trend. (June 7, 147-151)
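The trend logic the nurses describe – repeated readings forming a recognizable pattern even when no single reading is alarming – can be sketched as follows. The function name and the minimum drop per reading are illustrative assumptions, not clinical parameters from the study:

```python
def falling_trend(readings, min_drop=2):
    """Return True when successive readings consistently fall, each by
    at least min_drop units -- e.g. sats of 94, 92, 90. No single
    reading need be alarming for the trend itself to become evidence."""
    return all(earlier - later >= min_drop
               for earlier, later in zip(readings, readings[1:]))

# Sats of 94, 92, 90: 90% alone is not usually considered dangerously
# low, but the downward trend indicates an oxygen deprivation problem.
```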
A third way evidence emerges from measurements is when a datum, in concert with its context, becomes evidence. Jamie happened to notice a discrepant event concerning a patient in the Special Observation section of the surgical unit who had recently come from the ICU (the hospital’s intensive care unit) after his operation. The patient was sitting comfortably upright happily eating a meal. These two observations (data) – upright and eating – taken out of context would have had no particular meaning. However, given the context that Jamie knew – recent surgery with a spinal anaesthetic – the data had a highly significant, potentially harmful meaning, and therefore they became evidence:
Jamie: Usually when they come back after a spinal anaesthetic, the protocol is to keep the patient relatively flat for 6 hours post-operatively, because they [the anaesthetists] drain off some of the spinal fluid. So if one sits erect, there is not so much spinal fluid surrounding the brain. Patients can get what we call a “spinal headache.” It’s hard to treat. (May 25, 6-9)
A ward aide had helped the patient into an upright position to make him more comfortable to eat. A nurse would have inclined the patient a maximum of 60 degrees, in spite of it being less comfortable for the patient (an instance of “procedural capability,” Figure 2). Jamie’s reason for this protocol, captured by the quotation above, is an instance of “professional knowledge of nursing – mechanistic explanations.”
Another case in which context affected a nurse’s thinking was one of Terry’s patients who was recovering from surgery that produced a colostomy. The patient had experienced a sudden stoppage in his colon output. When the vitals were taken, the data showed higher than normal blood pressure and heart rate (100 to 110 beats per minute rather than 60 to 80). Were these data credible enough to be evidence in formulating an inference? No, the idea of “normal” needed to be contextualized due to the patient’s pain, as Terry explained (drawing upon his “professional knowledge of nursing – empirical relationships”):
Terry: If someone is in pain, you expect a slight rise in blood pressure. You expect a rise in the heart rate. If somebody is having signs of infection then you are expecting those and an increase in temperature. (June 20, 95-97)
Thus, a heart rate of 110 could be normal if someone were in pain due to a blockage of the colon. But temperature needs to be contextualized in another way as well, because:
Terry: On this unit in the afternoon in the summer time, you come in at three o’clock in the afternoon and you can see everyone is running a low-grade temperature because it’s hot outside. Your environment is hot, therefore you’re going to be warm. (June 20, 136-138)
Terry looked for a trend in the temperature change in his patient (category 2, trends in data) in light of the context (time of day and season) before reaching a tentative conclusion. In Terry’s words:
Terry: So it is not only looking at the blood pressure and thinking, “Well, the blood pressure is up.” You have to take it in concert with all of the other things [triangulation and context]. It’s only one little test and you have to take it and you have to synthesize all of the information together before you can actually even form a hypothesis. (June 20, 100-103)
Thus, the context to be considered in the evaluation of a datum can be, for instance: type of surgery, a patient’s immediate circumstances (e.g. pain), a patient’s past history, and time of day/season.
In summary, applying the model (Figure 1) devised by Gott and colleagues (1999) to clinical reasoning on a surgical ward, one detects three pathways for moving from a measurement (reading, symptom, observation, or datum) to evidence. The pathways, either singular or in combination, lead to the next stage in Gott and colleagues’ model: How credible and important is the evidence? (i.e. evidence evaluation). Two main functional purposes for evaluating evidence became apparent in the nurses’ interviews: (1) to move “to the next level” in attending to a patient’s well-being, and (2) to initiate a procedure or intervention. Each is examined separately, although in reality they naturally occur simultaneously, as indicated by events mentioned in the following two subsections.
Taking It to the Next Level
Measurements seemed to form a hierarchical pattern on the surgical ward: (1) symptoms (detected by a nurse’s senses), (2) vitals (blood pressure, heart rate, temperature, respirations, and blood oxygenation saturation), and then (3) targeted tests to gather further data. Each represents a different level of data gathering, and therefore a different function for evidence at each level. To move from one level to another, nurses made a clinical decision that required credible evidence. (These transitions from one level to another are illustrated by events on the surgical ward reported just below.) Once a nurse reached level 3 (targeted tests), other levels and decisions became apparent: (3.a) tests that a nurse can carry out, (3.b) tests that require hospital specialists, and (3.c) tests that require decisions by residents or doctors. See Figure 3. To carry out these tests, instruments are used, of course, but some are simple/inexpensive while others are complex/expensive. The decision on what instrument to employ lies in predetermined protocols (“procedural capabilities,” Figure 2) or in the hands of hospital specialists and doctors (e.g. whether to obtain data with an x-ray or NMI). Several events described by surgical unit nurses clarify the phrase “taking it to the next level.”
Figure 3 fits here.
Often patients will report a discomfort or pain to a nurse, or alternatively, a nurse will spontaneously notice something about a patient upon approaching a bed, especially if the observation is unexpected. The following example illustrates moving from the symptom level directly to the targeted tests level.
Joan: A patient rang for me the other day and they were exhibiting symptoms; they said they felt “low”. It’s what the patient said specifically.
Glen: Now, when they said “low”, that’s a verbal message. What was the body language? By just the way they said it? What did you perceive?
Joan: They looked tired. They were sweating, a little shaky, felt sick to the stomach; all those kinds of things. (June 11, 4-10)
The expression “all those kinds of things” suggests Joan has tentatively recognized a pattern she associates with, in this case, diabetes (“professional knowledge of nursing – correlations,” Figure 2). She continued:
Joan: So those are the things that I see. … Then we automatically go and do a blood sugar testing. This patient was low. They read 2.3, which is low. A normal Glucometer reading would be 4 to 6, somewhere in there. (June 11, 10-13)
In this illustration, the decision to move to a higher level of data collection was a straightforward protocol (“procedural capability,” Figure 2). The datum “2.3” had sufficient credibility in the context of the surgical unit’s familiarity with diabetes to warrant a different type of clinical decision by Joan, a decision to initiate a procedure or intervention – a second function of evidence (discussed below). In this case, she gave the patient sugared apple juice, one intervention among several, each justified by evidence:
Joan: We gave apple juice with sugar in it. And, if their sugars drop too low – it’s a very individual thing. For some people, 2.3 would be low enough that they would be so tired that they couldn’t talk to you. And in that case, we would need to give them either glucose (a syrup under the tongue that would absorb) or we would have to give them IV with dextrose medication to bring the sugar up quickly. So it would just depend on what we saw that patient going through in that state. That would determine what action we would take to solve it. (June 11, 39-44)
For renal patients with high blood sugar counts, for instance 10 or higher, a different intervention would be required (Unit Manager, October 14, 111-115).
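Joan’s account suggests a tiered response keyed to the Glucometer reading and the patient’s state. The sketch below condenses that account into code purely for illustration; the decision points beyond the normal range of 4 to 6 are assumptions, since Joan stresses that the appropriate intervention is “a very individual thing” determined by what the nurse sees the patient going through:

```python
def hypoglycemia_intervention(reading: float, can_drink: bool) -> str:
    """Illustrative tiers based on Joan's account: normal Glucometer
    readings run roughly 4 to 6; a low reading calls for sugar by
    mouth, while a patient too tired to talk or drink needs glucose
    syrup under the tongue or IV dextrose to bring the sugar up quickly."""
    if reading >= 4:
        return "no intervention"
    if can_drink:
        return "sugared apple juice"
    return "glucose syrup or IV dextrose"

# Joan's patient read 2.3 but could still talk and drink,
# so she gave sugared apple juice.
```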
Sometimes moving to a higher level of data collection does not bring the evidence that a nurse seeks, and one needs to move on to the next level, as this next example demonstrates.
Gia: I walked into a room (that was about half an hour to 45 minutes after giving an Indocid suppository to a woman patient) and she had reacted to the Indocid suppository.
Glen: What did you notice?
Gia: Something was wrong because she was very confused, her eyes were really twitching, and she said she felt very heavy and her whole body felt heavy. … She said she felt kind of paralysed, she couldn’t move. So, I took a set of vitals, but her vitals were fine. (May 22, 4-17)
The baseline data (vitals) caused a discrepancy for Gia. Her professional knowledge of nursing did not help her in this event other than to tell her something was wrong. But she did not know what targeted measurement she should take next. She went to a higher level of personnel by consulting a doctor (level 3.c in Figure 3).
Gia: Indocid had affected her central nervous system. I had talked to Dr. [X] about it because I knew that something was off. He happened to be up here, anyway. He said there was a higher incidence of reaction in women than in men. And that it was not uncommon, and so he discontinued the Indocid and put her on a different pain reliever called “Naprosyn.” (May 22, 22-26)
In the future, Gia will remember this event when she gives a patient Indocid: “I had no idea that it could do that. I will put that in the vault for future reference” (May 22, 44-45). “Vault” seems to be her expression for “professional knowledge of nursing,” and in this case, “empirical relationships.”
Clinical decisions are made on the basis of evidence, decisions concerning: going to the next level of data gathering (symptoms, vitals, targeted tests); getting other people involved (e.g. a respiratory therapist); and choosing what instruments to use next (e.g. from the stethoscope to x-rays, a decision dependent upon one’s authority within a hospital). Given the constraints of time and resources of a hospital, these decisions are based on the evaluation of the evidence that might warrant the decision to go to the next level. Joan succinctly summarized this conclusion herself: “To make the choice, evidence is necessary to go to the next point” (May 29, 131).
Not all choices are so straightforward, however. Sarah found herself in an awkward yet not unusual position of deciding whether or not to carry out a doctor-ordered intervention. This decision was directly connected to another decision: whether or not to risk going to the next level of involving a doctor. Both decisions focus on the evaluation of evidence (including the lack of evidence in this case – Had the doctor recently seen the patient?), evidence relating to a patient’s well-being, and to the social context of the surgical unit and hospital.
Sarah: The other day I saw an order to discontinue a Foley catheter from one of our patients. I saw that the order was written around 10:30 [a.m.] and so I got to it around 11. When I went to the patient, I was going to DC [discontinue] the Foley. But he had this Foley catheter in for quite a while [6 days] and he was quite edematous with his penis and his scrotum. (June 21, 4-7)
Other data included: edematous in the legs, pitting edema of the feet, looked bigger than usual for a small man, and the fact he had no past history of problems voiding.
Sarah: I thought to myself, “Well, should I take the Foley out? Has the doctor seen this or does he presume since it’s been in for so many days, you might as well discontinue it?” (June 21, 11-12)
He was an older man, probably around 72. So, I was concerned about discontinuing the Foley because we usually wait about six hours and if he doesn’t void, then we will call the resident or call the doctor. And it was around 11 o’clock. (14-16)
Time became an issue, along with the possibility of a doctor making a special trip to the hospital just for a relatively minor procedure.
Sarah: So at that time, around 5 or 6 p.m., usually the doctors are not around so then we would have to call them in and they would have to put another Foley in if he was unable to void. Considering: the time frame, not having communicated with the doctor about seeing it recently, and seeing what’s been going on, I didn’t know if I should discontinue it or not. So I decided I would leave it in until the doctor either came up or was notified. (June 21, 18-22)
Complications could arise if the patient was unable to void during the six-hour waiting period: “His bladder could have become full and it could have blocked into his kidneys” (June 21, line 34). Possible harm to the patient became part of the context in which the data (edematous in the lower body and no prior history of problems) reached the status of credible evidence to warrant not discontinuing the Foley catheter.
The importance of context to nurses on the surgical ward was clarified by Terry when he discussed haemoglobin data:
Terry: Now, depending on the hospital unit, we [in the surgical unit] don’t get concerned until it gets to be about 80, when we think seriously about transfusing someone. If it’s at 85, 89, then it’s something to mention, and once again, it’s trending [upward or downward trends]. If someone is chronically anaemic, then a low count is going to be normal for them, and their body has adapted to it; which is very different from someone who has a gastrointestinal bleed, someone who is bleeding heavily from an ulcer or something. (June 25, 27-32)
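Terry’s thresholds combine an absolute cut-off with trending and patient context. A rough sketch of his rules of thumb follows; since the chronic-anaemia adjustment is only gestured at in the interview, it is represented here by a simple flag, and the exact boundaries are illustrative assumptions:

```python
def haemoglobin_concern(count: float, chronically_anaemic: bool = False) -> str:
    """Surgical-unit rules of thumb from Terry's interview: around 80
    the unit thinks seriously about transfusing; 85 or 89 is 'something
    to mention'; a chronically anaemic patient's low count may be
    normal for them. Boundaries are illustrative assumptions."""
    if chronically_anaemic:
        return "may be normal for this patient"
    if count <= 80:
        return "consider transfusion"
    if count < 90:
        return "something to mention"
    return "no concern"
```

As Terry notes, such a number is only credible alongside its trend – a count falling 120, 109, 98 over time is a different problem than a stable 98.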
Initiating an Intervention
We have seen that Joan gave sugared apple juice to a patient who had a low Glucometer reading, and Sarah did not discontinue a Foley catheter from an edematous patient. Both events show how evidence is evaluated in procedural understanding to warrant initiating (or not initiating) an intervention. A more extensive example will further clarify the evaluation of evidence when it is used to act on a patient.
Under Chloe’s care, a patient recovering from vascular leg surgery showed the following symptoms (data): increased pain in the calf when the patient flexed his foot, redness of skin, hot to the touch, and the patient was reluctant to get up. The patient’s pain in this context had special meaning to a nurse (“Professional Knowledge of Nursing – Empirical Relationships”):
Chloe: The other significant thing was with the pain; it wasn’t the fact that he had pain in the calf, but the fact that when he flexed his foot the pain in the back of his calf got worse. It is a positive Homan’s sign, so it’s a specific pain that worsens with a specific movement. (May 26, 66-69)
Another concept in the professional knowledge of nursing was the correlation between pain and swelling in this context (May 26, 71). These data became evidence to warrant going to the next level of a targeted test, in this case measuring the degree of swelling over time (data that showed a 2 cm increase in the leg’s circumference over 3 hours). Now, the data reached the status of evidence to support a concern that the patient may have a deep vein thrombosis (DVT, also known as a blood clot). Was the circumference increase of 2 cm the evidence by itself? No.
Chloe: Not in isolation. But if there was significant pain when he flexed his foot, and redness that was hot to touch, all of those things together. So, it is not necessarily any one factor in isolation, but all of them together, you’d want to be sure there was no clot, and that the symptoms were caused by something else. (May 26, 88-91)
Chloe did have evidence for an immediate intervention (i.e. applying anti-emboli – anti-clotting – stockings to the patient’s leg) and for going to a higher level of targeted tests (level 3.c) by talking with a resident who authorized a Doppler ultrasound, the result of which ruled out DVT. The resident was then able to tentatively account for the patient’s pain by focusing on the muscle damage caused by the surgery. The anti-emboli stockings (support hose) resolved the patient’s swelling and pain within a day.
In Chloe’s scenario, a cluster of symptoms became evidence for moving to the next level (level 3.a), which was a targeted test (leg circumference measurement) and which in turn yielded a validity triangulation datum (2 cm increase over 3 hours). Both the cluster of symptoms and the triangulation datum suggested the possibility of DVT (i.e. a blood clot) and led to the decision to initiate an intervention (i.e. applying anti-emboli stockings). Validity triangulation is a concept of evidence explored in the subsection “Accuracy,” below.
The research results reported in this section, “Evaluating Data,” illustrate various circumstances by which measurements (observations) became evidence on a surgical ward: data corroboration (recognizable patterns or triangulation), data trends, and consistency/inconsistency with the context. The results also indicate two functional purposes for which nurses evaluated evidence: moving to the next level, and initiating an intervention. These results form the context for examining the central issue of this project: concepts of evidence used implicitly or explicitly by acute-care nurses.
Concepts of Evidence
In Duggan and Gott’s (2002) study, different groups of UK employees working in science-rich workplaces shared some common concepts of evidence, though each set of concepts of evidence differed somewhat, due to differences in workplaces or decision-making situations. Thus, we should expect the set of concepts of evidence employed by acute-care nurses to differ somewhat from chemists, pressure physicists, biotechnologists, and lay people.
The present study did indeed find that the science-rich surgical unit differed noticeably from the workplaces studied by Gott and colleagues (2003). The comparison between the nurses’ concepts of evidence and the compendium published by Gott and colleagues (summarized above) is not in any way an evaluation of nurses. Comparing and contrasting is a reporting strategy, nothing more.
As anticipated, some concepts of evidence related to reliability apply to a surgical nurse’s knowledge-in-use, but some do not. One key concept of evidence used by industrial employees but not by nurses was repetitive readings from the same instrument (after which an average datum is calculated), that is, “repeated measures.” When Chloe discussed her measurement of the circumference of a patient’s leg, the following exchange occurred:
Glen: When you explain carefully how you want things to be measured, there is an old problem of, “How do you know that if you measured it directly afterwards, you’d get a slightly different measure only because of the tightness that you held it [the measuring tape]?” Do you re-measure or do you just take one reading?
Chloe: Usually just one. (May 26, 63-65)
Nurses seldom had time or the need to take several measures and calculate an average value because the purpose and accountability in the surgical unit militated against it. Precious time could better be spent acquiring validity triangulation data that produce more credible evidence for a nurse to decide what to do next. My questions about taking repeated measurements were often met with either polite incredulity or a diversion of the conversation to a topic that made sense. Several excerpts from interviews (below) illustrate the low status afforded the “repeated measures” concept of evidence.
When asked about taking an immediate second reading from a Dynamap machine, Terry described how he would compare the original datum to its context rather than take a second reading, which is one of the ways a datum acquires the status of evidence. If Terry detected a discrepancy in a Dynamap reading of a patient’s blood pressure, he would use a different instrument to measure the blood pressure (e.g. a manual reading), thus demonstrating the use of the concept of evidence “validity triangulation.” He would not double check the Dynamap reading. Terry also used another concept of evidence about how the measurement was taken (i.e. instrument use: in this case using a proper cuff size).
Glen: So what I was focussing on was when you take a reading, how do you know if you need to take another reading for just –
Terry: It gets to be intuition.
Glen: You told me you take it and look at the chart and if it’s that much different than the chart –
Terry: Then I immediately go to the manual reading, because I want to know exactly what I am dealing with.
Glen: So it’s more consistent with –
Terry: If I get a big change, first thing I do is check to see if I have the right cuff size. If I have a larger cuff on the machine, I’m going to get a lower reading. If I took your blood pressure with a paediatric cuff right now on your arm, you would have an outrageously high blood pressure. (June 8, 161-170)
With time constraints and pressure to go to the next level (if necessary), the first and only measurement (datum) is often assessed in terms of its consistency with the context (using one’s “intuition,” as Terry stated above). Gia expressed the idea slightly differently:
Gia: I think that around here our gut judgement is everything. And just because it’s a machine, doesn’t mean that it’s always right. (June 13, 136-137)
Chloe described a typical context for a patient’s symptoms (data) when she talked about a patient whose heart rate had climbed to 140.
Glen: Were there some visual signs you were automatically looking for? The colour and things like that?
Chloe: Yes, he was pale. He didn’t start to go blue at all. He was grimacing in a way that is quite typical of someone having a heart attack, in that he was clutching his fists in front of his sternum and frowning. So, he was clearly having that sort of expression of cardiac pain. (June 7, 124-128)
If a measurement (in Chloe’s case, a heart rate of 140) is not consistent with a context of symptoms or with a nurse’s practical knowledge (i.e. intuition or gut judgement), then a nurse will usually go to the next level (i.e. they apply the concept of evidence “validity triangulation”), as Gia did:
Gia: I think you always have to go with your gut feeling and if it’s not what you expect, then find something more accurate.
Glen: Right, instead of just measuring it again.
Gia: Yes. (June 13, 139-140)
Nurses did not tend to think of an instrument as having an inherent measurement error. For instance, when explaining the fluctuations in oxygen saturation measurements produced by a finger probe instrument, Joan talked about a patient’s condition changing, and about the need for validity triangulation with more accurate data:
Joan: It [the sats] can change often, all the time, a very little bit. But if all of a sudden the person were to get extremely short of breath, it can drop to a significant number, very quickly. And that will be alarming.
Glen: That’s good information, because now I can ask, “When you take the reading, how do you know it is the right reading rather than one of the fluctuations?”
Joan: This oxygen finger probe can be backed up by a blood test of the oxygen. And that one will be more accurate. (May 29, 17-23)
Gia (June 13, 16-20) mentioned the temperature of a patient’s hands as a factor that might cause fluctuations in a finger probe reading. Similarly, according to Jamie, the margin of error (the ± value) in a haemoglobin measurement was not caused by the instrument itself but was caused by other factors that could affect the measurement using a sensitive instrument:
Jamie: Depending on, again, what’s happening, what kind of surgery they’ve had. In some kind of surgeries we expect them to bleed a moderate amount. Other surgeries, you don’t. (June 18, 94-95)
When discussing the possible fluctuations in a blood sugar count produced by a Glucometer, Joan believed the measurement did not fluctuate. She explained this by noting that the Glucometer was a very accurate instrument (accuracy is a different major concept of evidence, taken up below).
Glen: I’m just wondering. Let’s say my job was operating this machine, and I did a test and it read 2.3. In your experience, how much would that 2.3 fluctuate if I did it a half minute later?
Joan: On the same person you mean?
Joan: Oh, very little. It would be very accurate. It’s a very accurate reading.
Glen: So it doesn’t fluctuate much?
Joan: From minute to minute? No. From hour to hour? Of course. Because depending on what they [patients] have eaten. But it’s a very accurate test. (June 11, 64-72)
Here again we see an instrument’s measurement error being ignored while a nurse focuses on factors related to a patient’s unique individuality and well being. On the other hand, Joan remembered once taking a second measurement in the case of a manual blood pressure reading:
Glen: Is there a situation that you recall that you would have taken it and said, “Well, I’m not sure,” and actually took it again. Or is this so accurate you just need to take one –
Joan: The manual, you mean?
Joan: Oh no, I’ve rechecked myself in some instances. Also, depending on the patient, you might check both arms and compare. (June 18, 55-62)
Joan: … If I got an 85 over 45 blood pressure, I would re-check it.
Glen: Repeat that reading, either on the same arm or different arm?
Joan: The other arm. You know, you could do it a couple of times, and even come back and do it again in a few minutes. (76-84)
When the Unit Manager distinguished between the Dynamap (the Critikon portable blood pressure machine) and the Welch-Allyn portable blood pressure machine, she described occasions when nurses actually measured one instrument against another (measurement triangulation).
UM: I’ve seen people take a blood pressure with a Critikon, check it with a Welch-Allyn, and then if they are still in doubt they’ll do a manual. But there is enough doubt about the Critikon’s accuracy that the nurses are not really that confident in its measurement. (October 14, 60-71)
This type of measurement triangulation did not arise in the 24 nurse interviews conducted during this research project (a circumstance mentioned in the subsection “Limitations”). Indeed, except for a few isolated instances, the concept of evidence “repeated measures” did not seem to guide a nurse’s actions.
In science, as mentioned above, the degree of reliability (or an instrument’s sensitivity) is conventionally expressed in terms of an instrument’s error of measurement, the ± value associated with a measurement. This simple value often masks the statistical assumptions and reasoning that underlie the concepts “measurement error” and “confidence limits.” As already indicated, the nurses did not seem to have had a need to consider ± values during any of the events they discussed. However, when I specifically asked them to consider the ± value for a measurement they had taken, it turned out (as the excerpts below indicate) that the ± value scientists associate with measurement error was marginalized or ignored in favour of two other issues more important to a nurse’s clinical reasoning: (1) the variation in a reading is accounted for by a patient’s unique differences, differences that supersede any ± value inherent in an instrument reading; and (2) the variation in a reading is accounted for by changes in a patient’s environment or body system, changes that likewise supersede any ± value. In Gia’s discussion (above), she accounted for fluctuations in heart rate measurements by mentioning a patient’s arrhythmias and the context of what is normal for the unique individuality of a patient. In Jamie’s discussion (above), he rationalized a haemoglobin measurement fluctuation by describing contextual factors (e.g. the type of operation). Later in Jamie’s interview, I shifted my approach from talking about measurement fluctuations to talking about changes between consecutive readings and how big a change would cause concern on his part. We discussed a patient whose haemoglobin count had dropped 5 points, from 90 to 85.
Glen: … to you that’s within a range where you wouldn’t be concerned enough to take action, but you would point it out to people.
Jamie: Yes, I’d point it out and I’d just be having a look at the patient to see if they didn’t look anaemic and pale and wiped out. (June 18, 121-124)
Glen: … If it was 90 to 88 [a drop of 2 points], or something, is that worth bringing to someone’s attention?
Jamie: I don’t think I would. The doctors would probably check it every day and if they didn’t, well, I don’t think I’d be too worried about it, because it fluctuates a bit, day to day.
Glen: Okay, so –
Glen: Okay, that is what I was wondering, too. Normally, to what degree does it fluctuate, without surgery and things like that? So plus or minus two is sort of a very safe range –
Jamie: Yes –
Glen: that you wouldn’t even bring it to someone’s attention.
Jamie: No. Probably even three or four.
Glen: Okay, so my example with five just moved beyond the “normal” range for you.
Jamie: Yes. A range of five is what I’m kind of thinking. (133-145)
In other words, a change of about ± 4 was a minor change in a haemoglobin count, and would not cause Jamie to go to the next level of appropriate action; but this change was thought to be due to a patient’s condition changing, not to an uncertainty in the measurement. A change of 5 units, however, caused Jamie to go to the next level of appropriate action because it signalled a concern for the well being of the patient. Similarly, Sarah (June 16, 59-70) considered a change of ± 1 in a haemoglobin count to be an insignificant change in the patient’s condition, rather than a reading within the error of measurement of the instrument.
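The contrast between these two interpretations of a fluctuating reading can be sketched as a minimal, hypothetical decision rule. The function names and threshold values below are illustrative assumptions only, drawn loosely from Jamie’s remarks; they are not clinical guidance.

```python
def instrument_interpretation(change, instrument_error=2.0):
    """Scientific framing: attribute small changes to the instrument's
    inherent +/- measurement error (the error value here is assumed)."""
    if abs(change) <= instrument_error:
        return "within measurement error"
    return "real change"


def nursing_interpretation(change, normal_fluctuation=4.0):
    """Nursing framing: attribute the change to the patient's condition and
    judge it against what fluctuation is normal for that patient."""
    if abs(change) <= normal_fluctuation:
        return "minor change in patient's condition"
    return "concern for the patient: go to the next level"


# A haemoglobin drop from 90 to 85 (a change of 5 units):
print(instrument_interpretation(85 - 90))  # real change
print(nursing_interpretation(85 - 90))     # go to the next level
```

The point of the sketch is that the same numerical change is routed through different reasoning: the nurse’s threshold is set by the patient’s normalcy, not by the instrument’s ± value.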
Another complicating factor related to fluctuating measurements on the surgical ward was whether the measurement fell within the normal range of measurements for the particular circumstances of the patient’s unique individuality, or whether the measurement fell outside the normal range. For instance, when talking to a patient about their blood pressure, Jamie rounded off the reading to the nearest 5 when the reading was within normalcy for a patient, but Jamie did not round off the reading when it was outside the normal range (June 18, 68-78). Although Joan never rounded off measurements, she too considered a variation in the systolic pressure of ± 5 to be insignificant when it lay within the patient’s normal range, but was sensitive to an even smaller change (e.g. ± 2) when it lay outside the normal range (June 18, 88-99). In the following exchange, Terry underscores the practical reasons for nurses to ignore the concepts “measurement error” and “repeated measures.”
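Jamie’s context-dependent rounding can likewise be sketched as a small, hypothetical rule. The function name and the normal range below are assumptions for illustration; only the practice itself (rounding to the nearest 5 inside normalcy, reporting exactly outside it) comes from the interview.

```python
def report_systolic(reading, normal_range=(110, 140)):
    """Sketch of Jamie's reporting practice: round a reading to the nearest 5
    when it lies within the patient's normal range, but report it exactly
    when it falls outside that range (range values are assumed)."""
    low, high = normal_range
    if low <= reading <= high:
        # Inside normalcy: the exact digit carries no clinical significance.
        return round(reading / 5) * 5
    # Outside normalcy: every unit may now matter.
    return reading


print(report_systolic(123))  # within the normal range, rounded
print(report_systolic(163))  # outside the normal range, exact
```

The design choice mirrors the nurses’ reasoning: precision is a function of clinical salience, not of the instrument.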
Glen: So if this person who in this circumstance had a blood pressure of 160 over something and you came back half hour later and it was a 170 over whatever, you would think, “Well, maybe that’s in the reading,” rather than –
Terry: Well maybe it’s in the reading but, once again, you are going to ask, “And what else?”
Glen: “And what else.” Okay.
Terry: What else is causing that? How is he lying? Was he sitting up the last time the blood pressure was taken? Because if your body is sick and weak, it doesn’t compensate for lying down and standing up. (June 20, 117-124)
Of prime importance to a nurse is “what else?” because the central purpose of a nursing unit is “to improve the condition and comfort of the patient” (Chloe, May 26, 31), not to justify a measurement on the basis of the instrument’s reliability. In other words, the surgical ward was patient-oriented more than it was measurement-oriented. From the surgical Unit Manager’s way of thinking, “this is what encompasses the art of nursing” (UM, October 14, 47, emphasis in the original). In contrast to a surgical ward, science-rich workplaces that are product-oriented must rely heavily on an instrument’s reliability to claim a certain product quality, thus making these workplaces more measurement-oriented than is the case for a surgical ward.
Gott et al. (2003) wrote about establishing the reliability of a measurement by having multiple observers take identical readings with the same instrument. On the surgical ward, this procedure did not appear to occur, likely because there was neither the time nor the need. Nurses did, however, routinely repeat measurements (e.g. a patient’s symptoms or vitals) when they first took on responsibility for a patient (e.g. after a shift change). They did so not to double-check the previous nurse’s measurement, but to continue a collection of data on a patient, looking for a pattern or trend that might be important if a nurse had to decide whether or not to go to the next level.
In summary, the six surgical ward nurses who participated in this study were guided by two key concepts of evidence associated with reliability: measurement triangulation, and how a measurement was taken (i.e. instrument use). Key concepts of evidence that were seldom relevant to the nurses were: measurement error, repeated readings with the same instrument, and repeated readings by different observers. These results must be qualified by other concepts of evidence that the nurses used but that are not listed in Gott et al.’s (2003) compilation: (1) a normalcy range, that is, a reading lies either within or outside what is normal for a person (although for some measures, such as a potassium level or a heart rate, several graded categories outside of normalcy were considered, e.g. low and very low, high and very high, or high and dangerously high); and (2) a patient’s unique individuality, a concept that helped define normalcy and that overshadowed the concept of non-repeatability. As a consequence, when the nurses worked with evidence on a surgical ward there was little generalizability; instead, there was always transferability to the specific context at hand.
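The graded normalcy categories described above can be sketched as a simple classification. The band edges below are illustrative assumptions, not clinical reference values; a reading of 3.1, like the one Sarah’s patient had (discussed later in this paper), would fall in the “low” band.

```python
def classify_potassium(k):
    """Graded normalcy categories of the kind the nurses described for a
    potassium level. The band edges here are assumed for illustration."""
    if k < 2.8:
        return "very low"
    if k < 3.5:
        return "low"
    if k <= 5.0:
        return "normal"
    if k <= 5.8:
        return "high"
    return "very high"


print(classify_potassium(3.1))  # low
print(classify_potassium(4.2))  # normal
```

Note that each band, not just the inside/outside-normalcy distinction, maps to a different level of nursing action.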
In the context of nursing, measuring is more than simply representing a physical condition of a unique individual patient (e.g. blood pressure). A measurement often represents a changeable (i.e. dynamic) condition of that patient. This unpredictable variability within a patient’s complex body can affect an instrument reading, which may or may not have critical implications for the patient. This was Terry’s concern when he asked, “And what else?” (June 20, 120). By contrast, in many industries what is being measured (e.g. gas pressure) is generally assumed to remain static during the measurement process, and consequently, the fluctuations in measurements are attributable to the measurement process itself, that is, the error of measurement. But for the nurses in this study, fluctuations in measurements were either attributed to the changing condition of the patient being measured, or to the inaccuracy of an instrument, in which case a nurse engaged in validity triangulation with a more accurate instrument that used a different process to arrive at a measurement (a topic to which we now turn).
During my early interviews with the nurses, they talked about a machine in terms of its accuracy. In my later interviews, I focused the discussion on the notion of accuracy itself. I asked, “Of all the machines that are used on this ward, which one do you think is the most accurate? And which one do you think is the least accurate?” As one should anticipate, the word “accuracy” had nuances of meaning depending on the context.
The nurses’ interviews strongly reflected the belief that nurses should not trust a machine, especially when a nurse can take a manual reading (e.g. for heart rate, blood pressure, etc.). When talking about a heart rate reading of 140, displayed on a Dynamap digital screen, the following exchange occurred.
Chloe: At that point, it was a case of implementing some other means of gathering evidence. I’ve always been taught never assume your machine is right. Get your hands on, take a radial pulse, get your stethoscope out and listen yourself directly to the heart.
Glen: That’s why you used the stethoscope.
Chloe: Yes. By then, the rate I heard myself was 170. So then when we attached the cardiac monitor, we saw it [the patient’s heart rate] was all over the map and still going up. (June 7, 115-120)
A similar view was expressed by Joan, Jamie, and Gia. However, when Gia talked about a patient with a problem heart who was hooked up to a 3-lead cardiac monitor (not a Dynamap), and about a medical paper trail, she seemed to put more trust in this particular machine:
Gia: … But in the heat of the moment, when everything’s happening, I tend to trust the monitor unless something would spark me otherwise. If I took a radial pulse and it didn’t match the monitor, I would tend to think that the radial pulse I was getting is wrong because when it’s that quick it’s difficult to count every beat; whereas, the monitor would actually pick up every beat.
Glen: But, when you have a moment, then you would go ahead and print it out just to have a –
Gia: A copy of it to put in the chart for proof, or evidence, … [we’d have proof] that that was really on the monitor.
Glen: In business, they call it a paper trail.
Gia: Right. It’s our medical paper trail. (June 4, 71-79)
Here Gia has provided an additional social context for dealing with data: permanent records create evidence to be used in the distant future (e.g. perhaps if a review took place), in addition to being used immediately to decide on an intervention. In the specific context described by Gia, a computer-generated printout had greater value than a nurse’s manual reading. Except for this one occasion, in which Gia questioned the accuracy of a manual heart rate measurement taken “in the heat of the moment,” the nurses never once mentioned the human error inherent in a manual reading when they discussed accuracy. The Unit Manager, on the other hand, spoke specifically about human error in a manual reading:
UM: … There is so much subjectivity in a manual measurement of blood pressure. Yes, you can say, “I’m confident that his blood pressure was 100 over 70, because that’s what I heard.” But maybe now that I’m older my hearing isn’t as good as it used to be, and some young 22 year-old might take it and all of a sudden it’s 120. Well that’s a significant difference that a machine would probably have picked up. I think that, although we’ve been traditionally taught that manual and tactile measurements are the most accurate, in some ways we don’t realize how subjective some of them are. (October 14, 55-61)
Once again we see that context is everything in nursing. The crucial role of context in the evaluation of evidence is represented by the outer circle of Gott et al.’s (1999) model for measurement, data, and evidence (Figure 1).
Just above, Gia stated that a 3-lead cardiac monitor was more accurate than a manual reading. Her concept of accuracy seemed to be related to the detail provided by the machine.
Gia: It [the 3-lead cardiac monitor] takes a reading of your heart and translates it to the monitor. And it makes waveforms on the screen. To us, every wave means something different that the heart is doing. And depending on how many of those waveforms you get in a certain amount of squares (which is time) that tells us what the heart rate is. (June 4, 52-55)
Joan (June 18, 109-114) and Jamie (June 24, 87-101) agreed.
A topic directly related to accuracy is how well an instrument is functioning. Normally this quality is assured by a routine calibration of the instrument, a process that entails concepts of evidence (Gott et al., 2003) such as end points, intervening points, zero point, and scales. However, instrument calibration is not within the jurisdiction of nursing.
Glen: Again, I want to learn a little bit more about the monitor as an evidence-gathering device. Whose responsibility is it to make sure that the monitor is calibrated so when it says 180, it’s really 180. Because, that’s not your job, is it?
Gia: No. And you know, I don’t know what the routine is; if they have to be calibrated every year or every two years. But we have a clinical engineering faculty on staff in every hospital and they take care of repairs and all that kind of maintenance. (June 4, 80-85)
As a consequence, the concept of evidence called “validity” is primarily an engineering responsibility in a hospital, except in the cases where nurses are cognizant of variables (i.e. specific contexts) that would jeopardize an instrument’s accuracy (e.g. yellow fingers caused by smoking interfere with a patient’s sats reading).
Other Concepts of Evidence
The compendium of concepts of evidence proposed by Gott et al. (2003) includes a substantial number of entries that clearly lie outside the purview of nursing. For instance, nurses do not concern themselves with instrument calibration, the scales that underlie instruments, sampling, statistical treatment of data, or many topics related to the design of experimental investigations (for ethical reasons). On the other hand, concepts of evidence applicable to nursing included reliability, validity, data presentation, and the evaluation of evidence in terms of relevant societal aspects (e.g. credibility of evidence, practicality of consequences, power structures, and acceptability of consequences), as illustrated in earlier sections of this paper.
One question concerning Gott et al.’s compendium remains: Do nurses use concepts of evidence not found in it? Three were noted earlier: a normalcy range for a patient, the unique individuality of the patient (the object of measurement), and the variability within a patient’s complex body (i.e. Terry’s “And what else?”). However, another, very different, type of concept of evidence emerged from the nurses’ transcripts.
In the science-rich workplaces studied by Duggan and Gott (2002), people measured and assessed physical attributes of various entities. In the present study, however, people measured and assessed both physical and emotional attributes of patients. The nurses’ transcripts clearly indicated that human emotions defined an important subset of concepts of evidence not found in Gott et al.’s compendium, a subset related to such fields as psychology, sociology, and anthropology. This new subset is acknowledged here because of its role in the science-rich workplace of the surgical ward, but a detailed explication of specific, emotion-related, concepts of evidence requires further investigation beyond the parameters of the present study.
In several events recounted by nurses, emotion-related observations were assessed as evidence from which to make clinical decisions on how to improve the condition and comfort of a patient.
Chloe: Evidence [concerning the emotional state of patients] is often based on your own observations of people and what you’re told in the reports. (June 1, 7-8)
A patient’s improvement may be influenced by the interaction between the patient and their visiting relatives or friends. A nurse must therefore attend to these visitors to benefit the recovery of a patient. In one encounter, Chloe found herself dealing with extremely stressed relatives. The patient had undergone an amputation the night before.
Chloe: Over the course of the day the family became increasingly anxious to the point of being abusive and obstreperous, according to the night nurse’s report. I guess they had been up for three nights in a row by then (an elderly wife and three daughters from …). So there were some notes (made and recorded) of some conversations that had taken place between the emergency room staff and the family, and then between the RNs up here and the family. (June 1, 12-17)
The night before they had been very aggressive and abusive to the point that the surgeon had actually said to us that if these things happened again, call security and have them removed from the hospital. (50-52)
She described her first encounter as follows:
Chloe: When I entered his room in the morning, the patient was very comfortable and there was his elderly wife who just looked absolutely shattered. So I started to do my assessments and introduce myself to her. It was a situation in which I tried to make assessments of the patient and ask him about his level of pain or level of well being, but she would start to talk and answer for him. He was a perfectly lucid man. (June 1, 22-26)
The capability to notice symptoms for someone being emotionally shattered, as the wife was, required “watching for body language a lot” (Chloe, June 1, 81).
In order to collect more emotion-related data, Chloe needed to interact with the wife in a way that would help Chloe’s patient heal. Some of Chloe’s procedural and declarative understanding involved in this encounter seems best described as “intuition” or “intuitive knowledge,” a component of clinical reasoning (Higgs & Jones, 2002, p. 7). The context for her interaction with the wife was the clinical decision to “establish a relationship with his wife by asking her some questions” (June 1, 28-29). Chloe’s procedural capabilities were guided by a subset of concepts of evidence related to the domains of psychology, sociology, and/or anthropology. As a result, the patient’s wife was successfully cared for by Chloe over the next several hours and the wife did not impede the patient’s recovery.
Chloe: They [family members] haven’t been admitted and they don’t have their name above a bed. But, they’re just as important for the sake of the well being and recovery of the one in the bed. (June 1, 91-93)
A similar scenario unfolded when the daughters turned up later that day and became verbally upset over their father’s Foley catheter (June 1, 50ff). The scenario eventually ended with all the relatives expressing their heartfelt appreciation for, and confidence in, the hospital (June 1, 73-78). In summary, Chloe collected data (body language, mostly), she processed those data, she collected new data by taking the nurse-visitor interaction to the next level, she enacted several interventions, and she monitored the results by collecting ongoing data related to the emotional well being of the patient’s relatives.
Although a patient’s emotional well being is constantly on the mind of a nurse who focuses on the patient’s physical attributes, sometimes the focus shifts to the patient’s emotional attributes, as it did for Sarah (June 23). Her patient was found wandering around the ward unsafely at 7:30 a.m., a time when most patients are still resting. Sarah apprised herself of his physical attributes (e.g. his old age, his low potassium level of 3.1 measured the day before, for which he was being bolused, his meds, and the bags under his eyes). But she also attended to his psychological attributes, for instance, his tone of voice and his body language (e.g. no eye contact, and pacing around), and to his social behaviour (e.g. irrational conversations), all of which served as evidence for the conclusion that he was confused. The intervention Sarah initiated was not so much a physical one as a purely emotional one. She engaged him in an authentic human-to-human conversation, rather than in a professional nurse-to-patient conversation.
Sarah: I think that maybe he needed someone to talk to because his family hadn’t been in for a couple of days. We’re often so busy, we just run in and out [of a patient’s room], not having time to just talk. (June 23, 96-98)
When Sarah focused on the patient’s emotional attributes, she collected data (feedback) and assessed those data with concepts of evidence strictly related to the patient’s emotional well being. Within a short time, the patient calmed down and began to rest comfortably. It was later during the same shift that his confusion was attributed to a physical set of circumstances unknown to the nurses that morning.
A different type of emotion-related event was called a “PR situation” (public relations) by Chloe (June 4) when she spoke about two similar incidents that happened simultaneously on her night shift after visiting hours. In each case, a life partner (spouse) wanted to stay the night and comfort the patient by holding them in their arms, which can only be done when both are in the same bed together. The benefit to the patient had to be weighed against the possible disruption to the other patients sharing the 4-bed wardroom. The appropriate initial intervention (i.e. to request the visiting partner to leave the ward) needed to be achieved in a way that made the visitor feel supported, so there would be no detrimental effect on the emotional well being of the patient. This night shift event became much more socially charged because one of the visitor-patient couples was a same-sex pair who initially challenged Chloe’s request by calling it a case of discrimination against same-sex partners, being unaware of the identical intervention with an opposite-sex pair on the same ward. Chloe eventually resolved both situations by recognizing each visitor’s basic concern for the patient and by providing a credible alternative to their staying the night. Success can be credited to her procedural understanding, her emotion-related concepts of evidence, and her intuition.
In these situations, sensitivity is a necessary quality in a nurse, but the word “sensitivity” has a much different meaning in this context than it has in Gott and colleagues’ (2003) catalogue of concepts of evidence: “The sensitivity of an instrument is a measure of the amount of error inherent in the instrument itself” (section 4.5). Emotional sensitivity and instrument sensitivity are two very different concepts of evidence, reflecting the difference between the emotional attributes and the physical attributes considered by nurses in their evidence-based clinical reasoning.
In addition to exploring emotional sensitivity, future research into emotion-related concepts of evidence in nursing may want to investigate the roles played by such concepts as empathy, equity, and respect, and may want to explore nurses’ “aesthetic perception of significant human experiences” (Higgs & Jones, 2002, p. 27).
Earlier in this paper the following points were made: (1) research strongly suggests that most scientific understanding required in a science-rich workplace is learned on the job; (2) a pragmatic distinction can be made between scientific ideas and professional knowledge of nursing, on the basis of generalizable decontextualized knowledge versus transferable contextualized knowledge, respectively, although the distinction may be vague in some specific instances; and (3) the context of nursing predictably predisposes a nurse to draw upon professional knowledge rather than scientific knowledge.
This prediction for nurses was supported by the literature cited at the beginning of this paper, which concluded that transforming canonical science knowledge requires deconstructing it from its universal context and then reconstructing it according to the idiosyncratic demands of an everyday context. Most nurses would face a formidable task if they were required, in addition to all their other demands, to deconstruct abstract scientific concepts and reconstruct them to fit the demands of an idiosyncratic event on a surgical ward.
In the present study, the transcripts of five of the six nurses were almost devoid of references to scientific knowledge (except for the use of anatomical terms, an issue discussed below). As noted in the “Limitations” subsection in this paper, one cannot be certain whether the nurses spoke in a lay genre to me as an outsider, or spoke in a professional genre to me as a science person. My interpretation of the interviews favoured the latter state of affairs.
However, the transcripts of one nurse, Terry, were replete with descriptions and explanations from a scientific worldview perspective. In the following exchange, Terry made his viewpoint very clear when he stated, “You really have to understand the physics of what’s going on with those chest tubes.”
Terry: … And that’s also monitoring what’s happening with those chest tubes.
Glen: That’s when the evidence comes in.
Terry: Oh, absolutely. And for that you really have to understand the physics of what’s going on with those chest tubes. You have to understand why those chest tubes are there in the first place. Chest tubes are put in for two major reasons: either a haemothorax (“haemo” meaning blood, “thorax” meaning thoracic cavity) or pneumothorax (air in thoracic cavity). Then you have an open or closed haemothorax or pneumothorax. And “open” means it is open to the external environment through a hole through the rib cage through the intercostal spaces between the ribs. … (June 15, 37-44, emphasis added)
Gia (June 20), on the other hand, described how she successfully solved a chest tube problem, but her account was grounded in commonsense professional knowledge of nursing (i.e. what patients do when they pull their chest tube equipment along to the bathroom) rather than in a scientific explanation of differential gas pressures in closed or open systems. This is not to say that Gia could not describe how a scientist would explain her patient’s situation (she was not asked for that information), but rather that a scientific explanation was not relevant to the problem-solving task at hand. Gia represents the large majority of student nurses in Cobern’s (1993) study, while Terry is similar to Carla in that study.
Terry’s scientific worldview descriptions and explanations included: (1) conceiving of blood pressure in terms of a hydraulic closed system in which the heart was the pump, leading deductively to systolic and diastolic blood pressures (June 8, 16-26); (2) conceiving of BP cuff size in terms of surface area (June 8, 174); (3) conceiving of the act of breathing, in part, as differential air pressure in an open system (June 15, 39-58); (4) conceiving of pain in terms of mechanistic features of the sympathetic and parasympathetic nervous systems (June 20, 47ff); (5) conceiving of an edematous patient in terms of a series of closed systems within the body (June 25, 89-105); and (6) conceiving of the alveoli as a place for “the oxygen and carbon dioxide to exchange through osmosis” (June 25, 60). Although almost every event discussed by Terry was communicated within a scientific genre, he also drew upon the professional knowledge of nursing, as did his peers; for example, citing the empirical relationships between pain and blood pressure (June 20, 46), and between breathing/coughing and bringing a patient’s temperature down (June 20, 169).
My interpretation of Terry’s claim that a nurse needs to understand science so “you know if something is going wrong” (June 15, 70) is that Terry himself needs to understand the science because, like Carla in Cobern’s (1993) study, he likely explains nature from a scientific worldview perspective. Because I can share his perspective with him, communication between us was effective. The use of scientific knowledge in nursing may very well be to facilitate communication among professionals who happen to share a scientific worldview. This use represents a very limited view of the application of scientific knowledge to nursing, restricting it to a small minority of nurses.
The other five nurses in the study appeared to engage in clinical reasoning without expressing a need to draw upon scientific descriptions and explanations. Only three short exceptions occurred during their 20 interviews: Sarah’s (June 16) mechanistic description of the role of haemoglobin in the body, Gia’s (June 4) explanation of the beta-blocking effect of Metoprolol, and Joan’s explanation of how she solved a discrepancy (a technical problem associated with a patient’s medication):
Joan: It [the patient’s reaction] was not working as well as I thought it should, whereas it normally had worked in other situations. So when I looked at other things [information about the patient], I noticed that in his other medications he was taking a beta-blocker. Combivent works on a beta-receptor in the system. He was on a medication for his heart as a beta-blocker. So you’re blocking the beta-receptor when this medication works on the beta-receptor and so it was possibly not working for that reason. This is what I figured out. (May 15, 24-29)
On the other hand, when Gia described the event in which a patient’s nervous system reacted negatively to the medication Indocid, Gia mentioned that she would store her newly discovered empirical relationship “in the vault for future reference” (May 22, 45), which I interpreted as a reference to her professional knowledge of nursing. At that moment in the interview, I steered the conversation towards the topic of scientific knowledge.
Glen: Were you at all curious about the actual mechanism that explains how that medication works in the body? Or why the central nervous system seems to close down to some extent?
Gia: I never had time to look it up, but that would be interesting.
Glen: Does that affect how you would observe things?
Gia: Probably, I think you will have a more in-depth understanding of it, so you could probably recognize other signs and symptoms of an Indocid reaction. So, probably that will be helpful if I got into it deeper and actually knew the mechanism. But I don’t at this time.
It seemed as if a scientific explanation would have had potential value, but at that moment it was not salient to the clinical reasoning in which Gia was engaged. She was perfectly capable of learning the scientific mechanistic explanation, but it did not seem particularly relevant in this context. One can only speculate that her worldview perspective was not a scientific one, as Terry’s seemed to be. However, it was beyond the scope of this study to inquire into the worldviews or self-identities of the nurses.
Although all six nurses made use of anatomical terms (appropriate noun labels) as they talked about events, this ability to apply scientific/nursing vocabulary is not considered in this study to be a demonstration of understanding scientific knowledge. Instead it is taken as evidence for the procedural capability to communicate unambiguously with other health professionals (i.e. to participate in the culture of nursing). However, perhaps the use of anatomical terms by nurses is one of those vague areas between two categories in Figure 2: “scientific knowledge” and “procedural capability.”
Another pattern emerged from the interview data. Successful clinical reasoning did not necessarily draw directly upon measurement units grounded in scientific knowledge. Whenever a nurse mentioned a numerical observation (e.g. a blood pressure of 140 over 80), the numerals never had units attached to the measurement. Units of measurement were apparently not relevant to the nurses’ clinical reasoning. (It is important to note in passing that in the 24 interviews conducted, no nurse happened to describe an event focused on measuring out a medication for a patient, an event that would certainly have involved the proper units along with the quantitative amount of medication. Measuring medications was simply not a topic that arose.) All nurses could identify some of the measurement units when I specifically asked what they were; only Terry remembered all but one of them, and no nurse could identify the units for a haemoglobin count. On a surgical ward, measurement units likely get in the way of efficient data management and communication; perhaps protocol omits them in this context because the units do not change. As discussed earlier in this paper, more important than the units themselves are a measurement’s relation to what is normal or critically abnormal, its relation to the context, and its relation to a pattern of data. In contrast to the “unitless” measurements used in clinical reasoning by nurses, units of measurement are central to scientific thinking. Thus, clinical reasoning in nursing and scientific reasoning differ in this regard.
The results from this study support earlier research that questioned the direct applicability of scientific knowledge to a nurse’s knowledge-in-use, except for the rare nurse who happens to have a worldview in harmony with the worldview underlying scientific knowledge. However, the preliminary nature of this small study points to the need to investigate this issue further with a greater number of nurses in more diverse roles (e.g. in other hospital units, in community clinics, and in homecare units). The evidence to date certainly supports the claim that the context of nursing predisposes most nurses to draw upon the professional knowledge of nursing rather than upon scientific knowledge, when engaged in clinical reasoning.
An alternative interpretation arises from a perspective on professional knowledge of nursing that does not partition it from scientific knowledge because both are evidence-based practices. The Unit Manager explained:
UM: I look at evidence-based practice as something that becomes part of you. So maybe I can’t remember all the scientific ideas (e.g. the loop of Henle), but I still know that a water pill does its job. After thinking about it from that point of view, I think the nurses have assimilated the scientific principles that they learned, and they’ve taken the common denominator of: “This is how I understand that this is your water pill.” (October 14, 8-12)
If I’m talking to a patient I might not say, “This is your Furosemide pill.” I’m more apt to say, “This is your water pill.” I’m not going to go into all the intricacies of how that works (the sodium/potassium pump, etc.), I’m probably just going to say, “It takes the water off, so it alleviates fluid on your heart and helps take fluid off your feet.” I think sometimes when we’re responding to the public, we don’t come across as being scientific experts. But I think to really fully understand what we do, there has to be some grounding there somewhere. (14-20)
If one clearly conceives professional knowledge of nursing and scientific knowledge as two different systems of thought, a problem arises:
UM: But it’s like it almost changes the discourse. It no longer becomes a discussion about scientific principles as much as it actually becomes a system all by itself; nursing, if you will; whereby it uses all the principles from other disciplines but has developed most [principles] around science. (October 14, 32-35)
The issue raised here (i.e. whether or not to partition professional knowledge of nursing from scientific knowledge) is reminiscent of the “science versus technology (engineering)” issue debated in science education during the 1970s and 1980s (Gardner, 1994; Layton, 1991). Today in academia, science and technology are generally conceptualized as two distinct ways of knowing, even though they interact and borrow from each other extensively (Collingridge, 1989) and can be indistinguishable in certain R&D projects (Jenkins, 2002).
The Canadian public, hospital patients included, do not generally distinguish between science and technology (Ryan & Aikenhead, 1992), and the public tends to confer prestige and expertise on scientific discourse and methods. Therefore, the science of nursing has a crucial role to play alongside the art of nursing in the public forum. In the science education research community, however, professional knowledge of nursing is distinguished from scientific knowledge just as engineering is distinguished from science. This distinction does not in the least denigrate the intellectual expertise required of nurses; it only acknowledges key differences, a perspective that has implications for curriculum development, not for public confidence in nursing.
Summary and Conclusions
What science is actually used by nurses? The research identified a core set of concepts of evidence that appeared to be shared by all six nurses on the surgical ward. Concepts of evidence related to reliability were: measurement triangulation, normalcy range, uniqueness of the patient measured, and variability within the patient’s array of physical attributes. On the other hand, the nurses seldom had reason to draw upon the following key concepts of evidence: repeated readings with the same instrument, measurement error, and multiple observers.
The nurses’ concepts of evidence related to validity centred on: accuracy; validity triangulation; and a general predilection for direct, sensory, personal access to a phenomenon over indirect, machine-managed access.
The concept of evidence called “data presentation” (Gott et al., 2003, 16.0) surfaced during the interviews as well, when nurses spoke about graphing data to detect trends.
The above concepts of evidence have a common characteristic: they all deal with physical attributes of patients. Missing from Gott et al.’s (2003) compendium of concepts of evidence, but apparent in the surgical unit, is a set of emotion-related concepts of evidence associated with psychology, sociology, and anthropology (e.g. cultural sensitivity). This is an area for future research.
The nurses’ concepts of evidence functioned within two interrelated contexts: (1) taking it to the next level, and (2) initiating a procedure or intervention. Both contexts exist for the prime purpose of healing patients (bounded by the realities of available time, resources, and interactions with other professionals in a hospital). Before engaging in either of these two types of processes (i.e. taking it to the next level and initiating a procedure or intervention), nurses considered the credibility of their observations, which they tended to evaluate as being either sufficient or insufficient. The parallel distinction is made in Gott et al.’s (1999) model (Figure 1) between data and evidence. The surgical nurses in this study appeared to evaluate their data in three different ways. Data became evidence when: a datum was corroborated by other data, trends in the data were perceived, and there was a consistency or inconsistency between a datum and its context. In some instances, these three ways worked in various combinations to confer the status of evidence on data. Figure 1 captures the dynamic nature of nurses’ measurements, data, and evidence, contextualized by the social functions and moral consequences that subsequent clinical action might bring (the outer ring of the model).
What conceptual content in physics has a direct role in nursing, given the abundance of instruments and physical procedures utilized by nurses? The answers “Some” and “None” are both correct, depending on the worldview of an individual nurse. The perspectives embraced by Terry and perhaps the Unit Manager, for instance, indicate some role for physics content, even if that role is limited to communication invoking universal abstractions from physics, or to serving as a source for the assimilation of physics principles. On the other hand, for the large majority of the nurses in this study (a proportion consistent with the research literature reviewed above), “None” seems to be the evidence-based answer. In other words, a knowledge of physics may enhance communication among a small number of nurses, but clinical reasoning appears to draw heavily or exclusively upon the professional knowledge of nursing, not upon physics. Some current nursing content may very well be earlier deconstructions and reconstructions of physics content in a context of specific interest to most nurses, but unrecognizable in its present form as physics content to a purist in physics. The technical professional deconstruction/reconstruction of that physics knowledge may be relevant to nurses’ clinical reasoning, but the original physics knowledge itself is not. Conceptual content found in chemistry and biology courses is likely vulnerable to the same irrelevance, for most nurses.
These results harbour important implications for the science curriculum. Nurses and others working in science-rich workplaces, as well as the lay public involved in a science-related issue as consumers or decision makers, extensively use concepts of evidence not normally emphasized in a science curriculum. Therefore, I concur with Duggan and Gott’s (2002) recommendation that a more relevant approach to high school science teaching would improve the occupational preparation of most science-career-bound students, while at the same time improving the scientific functional literacy of the general public: “There should be a greater emphasis on the explicit teaching of procedural understanding and a reduced emphasis on the teaching of conceptual content” (p. 675).
The concepts of evidence found in science curricula, such as the Pan-Canadian Science Framework (CMEC, 1997), are normally treated as skills, not conceptual knowledge (e.g. “evaluate the relevance, reliability, and adequacy of data and data collection methods;” CMEC skill 214-8). The status of concepts of evidence as skills diminishes their appearance on typical achievement tests, particularly in high-stakes testing.
The results of this study reinforce the conclusions posited in numerous studies in non-health sectors: salient scientific concepts are learned on the job, rather than recalled or directly applied from high school or university science courses. In the context of nursing, this finding rests on the evidence that when engaged in clinical reasoning, nurses relied on their professional knowledge of nursing rather than on scientific knowledge, unless their personal worldview happened to coincide with the worldview inherent to science.
What canonical science knowledge seems relevant to the public’s role in a science-related career or as a consumer of science-related information? The research unequivocally points to the public’s need to learn scientific knowledge as required. Thus, one central role of the science curriculum is to teach students how to learn scientific content as required by the context students find themselves in. It would not seem to matter what scientific content is placed in the curriculum, as long as it enhances students’ ability to learn how to learn scientific content.
Curriculum developers require criteria to decide what content should appear in a curriculum. The selection criterion “student interest” can achieve the goal “to learn how to learn scientific content” at least as well as the criterion “prerequisite coherence with first-year university courses.” Moreover, given that student interest has far greater motivational currency for most students than prerequisite saliency (though not necessarily for students whose personal worldviews are similar to a scientific worldview), the selection criterion “student interest” promises greater success for achieving the learning-how-to-learn goal. Curriculum policy based on learning how to learn will produce a much different science curriculum document than a policy based on screening students with pre-university course content.
In short, the school science that holds the greatest potential for enhancing medical careers and the public’s capacity to communicate with health professionals is a curriculum that teaches procedural knowledge of science, particularly concepts of evidence, in a context that authentically engages student interest in any science content. This curriculum policy is clearly reflected in the STSE approach to school science found in the Saskatchewan science curriculum, but is largely absent from the assessment of students by provincial and national testing.
I gratefully acknowledge the cooperation and support of the Research Services Unit of the Saskatoon Health Region, and especially the surgical ward Unit Manager and the six nurses who generously gave their time and expertise to make the project a success. This research project was funded by the University of Saskatchewan President’s SSHRC Research Fund.
Cajas, F. (1998). Using out-of-school experience in science lessons: An impossible task? International Journal of Science Education, 20, 623-625.
Chin, P., Munby, H., Hutchinson, N.L., Taylor, J., & Clark, F. (in press). Where’s the science?: Understanding the form and function of workplace science. In E. Scanlon, P. Murphy, J. Thomas, & E. Whitelegg (Eds.), Reconsidering science learning. London: Routledge.
CMEC. (1997). Common framework of science learning outcomes. Ottawa, Canada: Council of Ministers of Education of Canada.
Cobern, W.W. (1991). World view theory and science education research (NARST Monograph No. 3). Cincinnati, Ohio: National Association for Research in Science Teaching.
Cobern, W.W. (1993). College students’ conceptualizations of nature: An interpretive world view analysis. Journal of Research in Science Teaching, 30, 935-951.
Cobern, W.W., & Aikenhead, G.S. (1998). Cultural aspects of learning science. In B.J. Fraser & K.G. Tobin (Eds.), International handbook of science education. Dordrecht, The Netherlands: Kluwer Academic Publishers, pp. 39-52.
Cole, S. (1992). Making science: Between nature and society. Cambridge, MA: Harvard University Press.
Coles, M. (1997). What does industry want from science education? In K. Calhoun, R. Panwar & S. Shrum (Eds.), Proceedings of the 8th symposium of IOSTE. Vol. 1. Edmonton, Canada: Faculty of Education, University of Alberta, pp. 292-300.
Collingridge, D. (1989). Incremental decision making in technological innovations: What role for science? Science, Technology, & Human Values, 14, 141-162.
Davidson, A., & Schibeci, R. (2000). The consensus conference as a mechanism for community responsive technology policy. In R.T. Cross & P.J. Fensham (Eds.), Science and the citizen for educators and the public. Melbourne: Arena Publications, pp. 47-59.
Dori, Y.J., & Tal, R.T. (2000). Formal and informal collaborative projects: Engaging in industry with environmental awareness. Science Education, 84, 95-113.
Duggan, S., & Gott, R. (2002). What sort of science education do we really need? International Journal of Science Education, 24, 661-679.
Eijkelhof, H.M.C. (1990). Radiation and risk in physics education. Utrecht, The Netherlands: University of Utrecht CDβ Press.
Eijkelhof, H.M.C. (1994). Toward a research base for teaching ionizing radiation in a risk perspective. In J. Solomon & G. Aikenhead (Eds.), STS education: International perspectives on reform. New York: Teachers College Press, pp. 205-215.
Furnham, A. (1992). Lay understanding of science: Young people and adults’ ideas of scientific concepts. Studies in Science Education, 20, 29-64.
Gardner, P. (1994). Representations of the relationship between science and technology in the curriculum. Studies in Science Education, 24, 1-28.
Geertz, C. (1973). The interpretation of culture. New York: Basic Books.
Goshorn, K. (1996). Social rationality, risk, and the right to know: Information leveraging with the toxic release inventory. Public Understanding of Science, 5, 297-320.
Gott, R., Duggan, S., & Roberts, R. (1999). Understanding scientific evidence. http://www.dur.ac.uk/~ded0www/evidence_main1.htm.
Gott, R., Duggan, S., & Roberts, R. (2003). Understanding scientific evidence. http://www.dur.ac.uk/~ded0rg/Evidence/cofev.htm.
Higgs, J., & Jones, M. (2002). Clinical reasoning in the health professions (2nd ed.). Boston: Butterworth Heinemann.
Jenkins, E. (1992). School science education: Towards a reconstruction. Journal of Curriculum Studies, 24, 229-246.
Jenkins, E. (2002). Linking school science education with action. In W-M. Roth & J. Désautels (Eds.), Science education as/for sociopolitical action. New York: Peter Lang, pp. 17-34.
Kearney, M. (1984). World view. Novato, CA: Chandler & Sharp Publishers.
Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). Chicago: University of Chicago Press.
Lambert, H., & Rose, H. (1990, April). Disembodied knowledge? Making sense of medical knowledge. A paper presented at the Public Understanding of Science conference, London Science Museum.
Lawrenz, F., & Gray, B. (1995). Investigation of worldview theory in a South African context. Journal of Research in Science Teaching, 32, 555-568.
Layton, D. (1991). Science education and praxis: The relationship of school science to practical action. Studies in Science Education, 19, 43-79.
Layton, D., Jenkins, E., Macgill, S., & Davey, A. (1993). Inarticulate science? Perspectives on the public understanding of science and some implications for science education. Driffield, East Yorkshire, UK: Studies in Education.
Lottero-Perdue, P.S., & Brickhouse, N.W. (2002). Learning on the job: The acquisition of scientific competence. Science Education, 86, 756-782.
Macgill, S. (1987). The politics of anxiety. London: Pion.
Michael, M. (1992). Lay discourses of science, science-in-general, science-in-particular and self. Science Technology & Human Values, 17, 313-333.
Ryan, A.G., & Aikenhead, G.S. (1992). Students’ preconceptions about the epistemology of science. Science Education, 76, 559-580.
Ryder, J. (2001). Identifying science understanding for functional scientific literacy. Studies in Science Education, 36, 1-42.
Solomon, J. (1984). Prompts, cues and discrimination: The utilization of two separate knowledge systems. European Journal of Science Education, 6, 277-284.
Tytler, R., Duggan, S., & Gott, R. (2001b). Public participation in an environmental dispute: Implications for science education. Public Understanding of Science, 10, 343-364.
Wynne, B. (1991). Knowledge in context. Science, Technology & Human Values, 16, 111-121.
Figure 1. A Model for Measurement, Data, and Evidence
From Gott, Duggan & Roberts (1999).
Figure 2. Knowledge-in-Use Held by Acute-Care Nurses for Use in Clinical Reasoning
Figure 3. A Scheme Depicting Different Types of Levels in “Taking It to the Next Level”
3.b hospital specialists
3.c residents or doctors