budgets v. grassroots technology adoption

The EDUCAUSE Learning Initiative (ELI) is starting a conversation around technology adoption in an era of tight budgets, called “Seeking Evidence of Impact”: http://www.educause.edu/ELI/EDUCAUSELearningInitiative/SeekingEvidenceofImpact/206622

As they describe it:

As the pace of technology change continues unabated, institutions are faced with numerous decisions and choices with respect to support for teaching and learning. With many options and constrained budgets, faculty and administrators must make careful decisions about what practices to adopt and about where to invest their time, effort, and fiscal resources. As critical as these decisions are, the information available about the impact of these innovations is often scarce, uneven, or both. What evidence do we have that these changes and innovation are having the impact we hope for?

I’m all for evidence-gathering (although I have no particular expertise in it), but I’m also wondering if this group will be collecting evidence of the impact of lecturing, with or without PowerPoint. I guess I have the feeling that we are always asked to justify the use of technology in teaching and learning, while still primarily teaching as our ancestors did–and I do mean ancestors in the ancient sense.

The other concern I have in this discussion is that it reinforces the top-down technology adoption approach. A recent article in Faculty Focus, “Unleashing Innovation: The Structured Network Approach,” by John Orlando, critiques the top-down model, in which IT decides on one version of a common technology, approves and adopts it, and then wonders why faculty aren’t using it. As a counter to what he calls the “GM model” of the old industrial era, he offers the “Google model”:

Colleges do it backwards. They start by choosing one system and one way to use it and impose that on users. The smart institution will encourage instructors to try five different systems in five different ways and provide a forum for them to share their experiences. Faculty will see what their colleagues are doing and ask questions, thus generating interest in the technology. Best practices and systems will emerge from the discussion, at which point IT enters the picture to implement those systems that have been proven in the court of public opinion.

I think we do a little of both at Tri-C in our technology pilots, although there can be complaints about how long it takes to get from pilot to adoption, and we generally pilot one technology at a time, for example, one classroom capture technology rather than several.

So, I’m back at the beginning, wondering where the ELI conversation is headed and whether it can avoid reinforcing the old GM model.



Categories: education, innovation, learning, teaching, technology


2 replies

  1. I like the bottom-up idea. But would you require the IT staff to support all of the different technologies? How long would the trial be? I think the trouble with the idea is in the details.


  2. Well, I do think that it’s the job of IT to support the educational process, and the focus should be on teaching and learning. I don’t think IT should be in the business of telling faculty what tools to use in teaching or research, especially today, when there are tools to fit so many different needs. If it’s a big job to support multiple pilots, so be it. Even with faculty trying out multiple technologies, project management templates can be used to keep the timelines and assessments aligned.

    With big purchases, like LMSs, one system eventually has to be settled on, but I have heard of a few schools that run both a proprietary Blackboard setup and an open-source Moodle installation, allowing faculty to choose.

