I am reflecting with relief that the semester is winding down.
I enjoyed this class a great deal and learned much. However, I'm glad to have an upcoming break!
I must admit, looking at the title of the Aviv article, I was expecting something very different. It was an interesting article, and I'm interested in asynchronous learning network models. However, the analysis methods were mainly new to me, and I have to assume that they are meaningful in the context of the authors' research questions. As to the conclusions, I can't help but think of Dede's comment in the introduction to the design-based research (DBR) articles that the findings were "common sense" and that a practitioner rather than a researcher could have written them without actually conducting the research!
Ferdig's principles of good asynchronous discussion design all seemed reasonable to me, and it is nice to have something that succinctly sums up some good ideas and best practices.
Reeves's concluding article in the DBR set left me wondering whether one can get published outside of core educational tech journals doing design-based research. Will a DBR-based dissertation be taken seriously? More importantly, can I secure a faculty position if I do a DBR-based dissertation, or secure tenure if I publish DBR-based papers? Echoing another of Reeves's points, I do think the DBR papers suffer from too much wandering narrative and not enough tight, succinct wording. I think this kind of research really needs to be published in rich media. The research text piece needs to be made more succinct, and pointers to online/downloadable components of the "article" should be embedded. Again, I wonder if one could get this published in a mainstream educational print journal. I also think that we cannot ignore or completely abandon thought around scientific and experimental research designs. Unfortunately, government funders and even the general public have come to expect this paradigm for research. Perhaps greater quantitative rigor in the parts of the research that can support it will help, and perhaps borrowing from research and reporting techniques in engineering disciplines can be of some assistance.
Barry's overview of how he manages his information base on reading and research was very useful. I'm still not hugely fond of OneNote, but I do appreciate the fact that I need to come up with a strategy. I plan to mark up and annotate my research articles using Acrobat's tools and link those articles to EndNote. What I need to think about is a tool (whether Acrobat's weak built-in one or another) to search across PDFs. I'm hoping a combination of wise keyword development in EndNote with good markups in the actual PDFs will work. I'll still have to scan all those articles for which I had to make paper copies. Not fun!
I liked Joel's articles for this week. Although I'm not in science ed, I have a chemistry degree and the topics interest me. I was surprised to read how the inclusion of scientific technology (pH meters, CBL thermometers) in science ed is resisted by some teachers. Actually, I'm shocked. It appears the integration or inclusion of any kind of technology, scientific or educational (like computer simulations or models), is an issue in curriculum development. Once again, it seems that teacher preference and comfort are overriding factors. What surprises me is that teachers in science ed may not be reading the literature on science technologies and may not be open to educational technologies. I thought being educated in science had a way of making one open to the process of scientific inquiry and to trying new approaches. I think our institutions of higher education may be failing would-be teachers, and that is trickling down to the way they approach science education. It'll be interesting to hear what Joel has to say about this.
I found both of the articles to be interesting, but in different ways. In the article on asynchronous discourse, I was a bit puzzled about the need for anonymous postings, especially given the age of the participants. This reminds me of another case brought up in the Interviewing class by a member of my group. She is interested in studying a class for superintendents which is mainly online and purposefully masks the identity (including gender, I believe) of the participants. In both these cases, I wonder about issues of trust and social presence in these online classes. Also, through anonymity, are you encouraging behavior or actions that might not actually be part of the norm for an identified online participant? People often have differences in their online and offline personalities, but through anonymity and masking, are you allowing for greater skewing? I understand the reasons for doing it in both instances. Allowing anonymous posting lets people feel more comfortable asking "silly" questions. In the class, it was to try to help people understand racial and ableness biases.
The GIS/GPS article was interesting because it raises the possibility of using these technologies in social studies, geography, science, and even business. I really think people have barely scratched the surface in looking at how these technologies, with their various map and census overlays, can be used to teach people about economics and business patterns, settlement patterns, urban design, and issues about the environment.
The Web 2.0 seminar I attended was useful in that it provided some actual examples of blogs, social bookmarking, and podcasting used in class settings. The reflection blog is an example of what the speakers were presenting. It would have been nice if the seminar had touched a bit on wikis and collaborative work/project spaces, as that is the area of collaboration technologies that interests me the most.
Since I instigated the reading of this week's three articles, I guess I should have something to say about them. I liked the Spector article for the first two-thirds. I think it provides a useful overview of the information technology (IT) thinking that surrounds knowledge management. However, his example was fairly weak and the description was perfunctory. He could have left that piece out of the article without detracting from what he had to say about information technology, knowledge management, and systems design. The Elmholdt article was an interesting case study of a really poor implementation of a knowledge management system. Between this article and the Rowland article, I'm beginning to think that most knowledge management systems are not designed to work with or take advantage of tacit knowledge. Many people seem to think it is not fundamentally possible to make tacit knowledge explicit. If that is true, and I am beginning to believe it is, then you have to rethink your approach to working with tacit knowledge and its transfer to, or incorporation in, others. A knowledge management systems approach should not be focused on documents, text, and other artifacts, but rather on collaboration models, simulations in context, mentorships, and other people/process/context approaches. The Rowland article argues that the generative dance between knowledge and knowing, between explicit knowledge, implicit knowledge, and context, is actually a design epistemology, and that design solutions are what learning technologies has to contribute to learning, knowledge management, and transfer. I think these ideas are much harder in practice and implementation, but I've been mulling over their implications for several weeks now. Especially within an engineering framework in an R&D organization, what kind of approach, curriculum, and systems are needed for learning and sharing among people and across the organization?
I enjoyed reading the articles in Educational Technology on Learning Sciences vs. Instructional/Educational (fill in the blank here). As I mentioned in an earlier reflection, this topic was discussed in a full session at AECT two years ago, and I remember some of the panelists saying that LS tried to work directly with theory and cognitive aspects (variables, I guess), and that instructional design often did not concern itself with these two things. I do think, however, that Instructional X does concern itself with learning theories. I think these fields probably represent a continuum from theoretical concepts in cognition and learning to developed models in instruction. I do think one difference is that Learning Sciences seems to be a bit less concerned with testing and applying technology-based innovations. Instructional X seems less concerned with developing new models of cognition and learning. I'll be very interested in hearing what others think. I personally feel that I fit somewhere in between or among several fields: learning sciences, Instructional X, management info systems, information science, and performance improvement.
Cassie's chosen articles on the Digital Divide were very interesting. I had not realized that, strictly in terms of access, the gap was shrinking and is now much smaller. However, in the broader sense of technical literacy, the ability to function with the technology and its information sources, and the ability to integrate conceptual work with this technology, there are still some fundamental issues and many "littler divides." I believe these issues to be much harder to resolve because just throwing money at the problem doesn't help as much.
This was my second time around reading the Reiser articles, having read them in my very first class in the program (Intro to IST). It was beneficial to read them again, however, because I have learned much about learning theory, instructional design, and the history of learning in the last few years. I am still struck by how Skinner's behaviorist theories and the military mindset molded the early field and still permeate some of the thought, designs, and models today. Although Bloom's and Gagné's work is useful, I think some people still try to cling to it rigidly without acknowledging that information processing theory, cognitive theory, and constructivist ideas have advanced some of the ideas in design.
People still confuse instructional media with educational technology, instructional design, instructional technology, or even learning technologies. I constantly have to define and explain the concepts to people when I tell them what I am studying and what I plan to research. The tools, of course, are important (and fun!), but they are just tools. It is what we do with the tools that is interesting.
In choosing the Rovai article for Thursday's class, I attempted to find something that spoke broadly about evaluating distance learning. There was almost nothing written that provided a nice overview. The Rovai article's framework was the closest that I found. In general, articles tended to focus on student perceptions of and satisfaction with the class, usability analysis of the course and its features and interactions, and outcomes assessments of the learning occurring in the classes. Those three areas appear to be the main foci, and the Rovai framework adds an upfront analysis piece, a process analysis, and a transfer (long-term retention) evaluation.
Kelly chose the Frizell and Hubscher article because we liked the way that design patterns can be thought of broadly as guidelines. We were trying to avoid the "20 best tips" approach and instead find a useful framework to think about designing distance courses.
My overall impression of the lit review in the web/e-mail survey article is that the research is completely inconclusive. That is, some research suggests that you might get better, more, or more cost-effective data with a web, e-mail, or web-plus-e-mail survey compared to a mail or phone survey; or, then again, you might not. My conclusion? Use a web or e-mail survey when it makes sense, and use a paper or phone survey when it makes sense, based on your sample and research question.
The two articles on computer-mediated discourse were thought provoking. I really don't believe that the ILF example constitutes a virtual community. The participants were basically assigned to participate while in classes at IU. Those IU participants appear to be the bulk of the community, along with some faculty and grad students doing research and assessment. Since it is not self-sustaining, I would not consider it an actual community. I was struck that both articles seemed to choose methods that stressed quantitative counting. After coding, they counted. Why didn't they choose methods where, after coding, they could conduct thematic analysis? In the Thurlow article, I'm puzzled about why there seems to be a concern about the abbreviations and truncated language used in text messaging. I've been using online communication for a very long time (remember dial-up bulletin board systems, popular from the end of the 1980s through the early 1990s?), and typing out "talking" is a hassle, so of course you are going to take shortcuts. Using a phone keypad is even more of a hassle, so people will truncate even more. I still write in formal English and speak with regular English expressions. People jump from communication style to communication style depending on the social context.
On a topic unrelated to the readings, today's NY Times had an article about students at online colleges and universities now qualifying for federal student aid. The for-profit education lobby has been trying to get the 50% rule (50% of classes had to be face to face to get aid) overturned. This law was put in place to try to protect students and taxpayers from diploma mills that were milking the federal government. Although this could still be a problem, I think that the law was not the best way to protect students or taxpayers. One spokesman for colleges and universities was complaining that there were no studies proving that online-only degrees were as "good" as traditional classes. I think this argument is not relevant. There are plenty of students coming out of traditional colleges and universities who haven't reached their potential. Students need to have information about what is a "good" college or university, including online ones, but what is "good" isn't going to come from research; you would have to run comparison studies with all of the traditional colleges and universities as well. Accreditation procedures are one indication of "quality," along with the various "rankings." Yes, I know they all have flawed methodologies and need to be taken with a grain of salt, but at least they say "something" if you can decipher their methodology. It is a much better strategy for the traditional nonprofit educational institutions to educate their customers (the students) on why they are the superior product. To be honest, a little competition will keep us "honest."
The ideas and concepts around design-based research are interesting. Of the two articles, I found the one by the Design-Based Research Collective (DBRC) a bit easier to read. I think they did a better job of tying some of the theory to an example, which helped clear up the questions I had as I was reading. I read the DBRC article second, and after reading it, I reread the Cobb article. The Cobb article made more sense the second time around.
One really large weakness of this method is that it cannot answer questions of causality (p. 7 of DBRC). I thought both articles were trying to hint that it can, and I was very skeptical. I actually was glad to see the admission in the one article that it cannot say anything about causality.
It seems to me that this method actually sits very much in the qualitative methods camp, with its emphasis on thick descriptions and the researcher embedded in the process. At first, I thought, "How is this different from a case study?" However, upon further reflection, I understand that case studies usually do not deal with an iterative process that has adjustments along the life of the studied process. In fact, it seems that this proposed methodology shares roots with grounded theory in sociology and with case studies, with the added dimension of a process or product life cycle.
Two weaknesses that I see with this method are its inability to answer causal questions and its inability to be broadly generalizable. However, like grounded theory, it is useful for theory development, and like a case study, it can be useful for homing in on and describing desired effects or best practices within a bounded context.
I had "issues" with the two articles that Kelly had us read (sorry, Kelly!). In the Krentler article, I'm not sure they were asking the right question or were interpreting their data and results accurately. The question, in my opinion, is not whether technology contributed to students' grades, but whether participating in class discussion (which just happened to be online) contributed to their grades. Apparently their results say yes to that. They also found that students who used the Internet more but didn't participate in the discussions also earned better grades. I think it is quite a stretch to say that technology use "caused" better scores among more frequent Internet users. First, the sample was not randomly selected. Second, the sample was not randomly assigned to groups. They can claim that there appears to be a correlation, but not a causal relationship. One last nitpick: their two plots (Figures 2 and 3) are interaction plots that did not have the main effects subtracted out of the data. Thus the authors were not describing the interactions with those plots, but rather the large main effects and the smaller interaction effects together.
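My point about the plots can be sketched with a toy calculation (the cell means below are hypothetical, not taken from the Krentler data): in a two-factor design, the interaction effect in each cell is what remains after the grand mean and both main effects are subtracted from the cell mean, and those residuals are what a proper interaction plot should display.

```python
# Toy 2x2 design with hypothetical cell means: isolating the interaction
# effects by subtracting the grand mean and both main effects.

cell_means = {  # (internet_use, discussion) -> mean score (made-up numbers)
    ("low_use", "no_discussion"): 70.0,
    ("low_use", "discussion"): 78.0,
    ("high_use", "no_discussion"): 74.0,
    ("high_use", "discussion"): 84.0,
}

rows = ["low_use", "high_use"]
cols = ["no_discussion", "discussion"]

# Grand mean across all four cells.
grand = sum(cell_means.values()) / len(cell_means)

# Main effect of each factor level: its marginal mean minus the grand mean.
row_eff = {r: sum(cell_means[(r, c)] for c in cols) / len(cols) - grand for r in rows}
col_eff = {c: sum(cell_means[(r, c)] for r in rows) / len(rows) - grand for c in cols}

# Interaction effect = cell mean - grand mean - row effect - column effect.
interaction = {
    (r, c): cell_means[(r, c)] - grand - row_eff[r] - col_eff[c]
    for r in rows
    for c in cols
}

for key in sorted(interaction):
    print(key, round(interaction[key], 2))
```

With these made-up numbers the main effects are a few points each, while the interaction residuals are only ±0.5; plotting raw cell means would bundle both together, which is exactly the conflation I am complaining about.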
In the van der Spa article, I kept thinking, "Why did the author choose a general community discussion board to try to examine her theory?" Common sense could tell you that a general community board would be for social interaction and entertainment. You would have to look at specialized groups and communities to answer some of her questions. Also, her questionnaire drew from a convenience sample, and it had a low response rate. I also thought her qualitative questions were pretty weak; they didn't probe. So how did her convenience-sample respondents differ from the general population? We can never know.
The Kirby et al. article was very interesting. I remember attending a session at the AECT Conference in Chicago in 2004 where the topic was ISD as compared to learning sciences (cognitive science and educational technology). The session grew out of the discussion of the 2004 Educational Technology issue that the authors mention at the beginning of the article. The session at AECT was packed. Unfortunately, I cannot remember who was on the panel. George attended this session also, and I wonder if he remembers. I'll have to ask him. I remember that the conversation grew spirited at times, but I know that I left the session thinking that the distinctions between the two groups were fairly blurry. I have come to view ISD/Learning Technologies/IST really as an "engineering" discipline, drawing from educational psychology and cognitive science for its "scientific" background and theory. Since I have a background in the natural sciences (chemistry, to be precise), my view is that LT is applied LS in much the same way as chemical engineering is applied chemistry and civil engineering is applied physics. Of course, this is a very broad definitional brush.
The results of the citation analysis in the article were very interesting. I'm actually pretty surprised at the low amount of cross-citation. I was expecting it to be low, but not under 1%. Looking at the 66 authors who cross-published, I recognize about a dozen of the names, all from AECT or an ISD publication. I went to the isls.org website and took a look through their current issue. The articles that were published would have bearing on only specific areas of interest within the ISD research space, and thus I better understand some of the reasons for the low cross-citation. Prior issues that contained articles on experimental design methodology would hold greater interest for a broader audience. The current issue on complex systems, and on using those theories as a lens in education, is an interesting educational philosophy, but it is too undeveloped to be of much current practical use to the "learning engineers." Over time, this lens may prove useful, and citations to these articles may start to appear in ISD-focused journals.
I liked the Mind Genius program (I was using the business trial), but it would be better if it were much, much cheaper!
The two articles that Darrel asked us to read were interesting. Technology for special education is outside of my interest and research areas, so I had not given it a great deal of thought except when a classmate has brought it up as their focus. From these two articles, and from what other students have said and demonstrated before in other classes, it seems like there is a fair amount of technology available to help students with autism. In fact, autism almost always seems to be what is discussed, so I'm wondering if supporting autistic children is a significant issue in schools.
I must admit, I disliked the Zhao and Frank article. I just did not believe the metaphor of computer uses being like zebra mussels. Since I didn't buy into their central theme, I nitpicked my way through the entire article. In my opinion, if Zhao and Frank wanted to use an ecological perspective, they should have built their case around the ecology of ideas in the schools. The ecology of ideas could then be compared to the biological species. However, the authors grouped non-ideas like projection systems and desks along with the computer uses. These groupings made no sense to me. It seemed like they were grouping automobiles, tropical fruit, and interpretive dance and suggesting that they were all in the same category. Instead, if they had treated specific technology ideas as the invading species, they could have focused on the teachers' and students' already existing ideas in their ecosystem, and thus would have had a cleaner model. The authors discussed memes, and I was surprised that they did not carry their arguments forward using more of the work surrounding memes.
I also did not like how they did their data collection and interpretation. They used a survey with Likert scales, which tends to be a fairly weak starting point for statistical analysis showing causal relationships. Collecting data via interviews usually lends itself to qualitative analysis rather than quantitative analysis. They did observations in only one school per district, and they didn't describe how they chose that one school, so I am a bit skeptical about generalizability. Also, observations tend to lend themselves better to qualitative interpretation than to quantitative interpretation with a causal claim. When the authors used direct quotes on p. 825 to back up the quantitative claim, I didn't find that believable at all. First, the effect size for their analysis of the teacher-ecosystem interaction was only 11-14%. Though that is likely large enough to be noted, it isn't so large as to be accepted as groundbreaking. Using direct quotes from only two teachers does not confirm anything; it only lends support to a qualitative claim for those teachers, not to a generalization. I found their findings and arguments around opportunities for mutual adaptation completely unbelievable. With an effect size of 1%-3%, it isn't even worth noting, and their argument tying it to the ecological metaphor was again a massive stretch.
I am also going to nitpick their model on p. 829. First, it is really unclear, even after reading the preceding page, what all the lines and symbols mean. Second, if district in-service is so insignificant, why does it have a huge bold arrow? Also, it penetrates much too far into the model. If it is as insignificant as the authors claim, it should not penetrate at all and should be a slight arrow, or not in the model at all.
The Niederhauser article was much better. Similar to the Klingner article, it was useful in describing the publication process. The appendices were particularly useful in helping me understand the process. The Booth chapters were again useful, though I wonder how well people really follow the claim-reason-evidence chain. Zhao and Frank did follow the chain, I realize upon reflection; however, I didn't believe many of their reasons or evidence. In the Williams book, I am a bit unsure as to what is a "good" nominalization. Though he gave examples, he spent so much effort on getting rid of them that the exceptions didn't seem all that logical.
The Voithofer article was quite dense but also quite interesting. I was very interested in the ideas surrounding visual information, visual ideas, and visual culture, and how the educational researcher needs to be aware of and versed in them going forward. I also liked the distinction he made between database and retrieval interfaces and the algorithmic interfaces of games. In my experience and casual observation, uncovering learning through an algorithmic datastore offers stronger context and feedback than data retrieved through imposed hierarchical structures. There's more to this, and I need to think more about it. I wonder if a learning technologies researcher/grad student will ever be allowed to submit a visual project dissertation. It's interesting to contemplate that, rather than a book or journal article, some researchers might create a multimedia visual project as the end product of their research. I know that in the arts and film, this kind of project or series of projects is needed. In education, however, it would be a completely different way of representing work. I'm not convinced yet that the visual "materiality" really represents a "different" way of knowing. I just think we may now have tools to better abstract and transcode the visual (to use Jenks's words from Voithofer's article).
As usual, the Booth et al. book had very useful advice. I like the way they really distilled how to frame a problem, and how they distinguished practical and pure research. The Williams book also had its usual nuggets, but I am still unsure what to do about the gender bias in the third person singular. My strategy had been to pluralize (researchers rather than researcher, or groups rather than group) so I could then get away with using the third person plural rather than the singular. However, Williams describes why that is an unclear no-no. I don't like the solution of using a plural pronoun with a singular noun, for example, nor do I want to use the third person singular and say "he," or switch between "he" and "she." Both sound rather forced. So what's a writer to do?
I had a few initial impressions of the class as we progressed through the first session. First, I was happy that I knew many of the class members. In general, I enjoy classes where I already know at least several of the other students. I enjoyed learning a bit about each of the others in the last segment of the class. As we covered the syllabus, I had two conflicting thoughts: 1. Sounds fun; 2. Whoa, a lot of work here!
The overview on wikis was interesting. Coincidentally, I had a guest lecturer cover wikis (and blogs) in the class I teach at St. Kate's on 1/7. Wikis have actually been around awhile, but they are only now becoming more popular (getting more press, for example). I wonder what is causing the increase in popularity. Are more people finally starting to adopt them for collaboration? Or is it simply that easy-to-install, feature-rich open source products are now readily available? I think it would be interesting to look at how prevalent wikis now are for collaboration among various groups/settings.
I enjoyed the readings for the week. All contained very practical advice. The Klingner, Scanlon, and Pressley article had useful advice on choosing a journal and preparing a manuscript for publication. I found the discussion of the revise-and-resubmit cycle and the long timelines to be especially helpful. This was the first time I have read a concrete description of the process.
The Booth, Colomb, and Williams chapters were filled with useful advice on organizing a research paper and organizing your arguments. As I was reading, though, I wondered how so many poorly written papers make it into the publication process. Some of their advice I have seen numerous times: know your reader/audience, be careful with paraphrasing, create a plan for writing your paper. Other concepts were new to me. I had not thought about warrants before, and now I understand why I have occasionally struggled to explain something in writing. If I am remembering correctly, I probably was having a problem with using a proper warrant to link the evidence to the claim.