Tuesday, February 24, 2015

Has Technology Changed the Way Students Think?

     I have just finished reading a book called "Digital Leadership: Changing Paradigms for Changing Times" by Eric Sheninger. He is one of the "hot" ed tech school gurus right now, and he spoke at our school this past fall during a professional development day. Sheninger is the former principal of New Milford High School in New Jersey and is known as "The Twitter Principal". The back cover of his book states that his philosophy "...takes into account recent changes such as ubiquitous connectivity, open source technology, mobile devices, and personalization to dramatically shift how schools have been run and structured for over a century". While at New Milford, Sheninger pushed for a complete embrace of all things digital. Instead of banning smartphones, for example, he encouraged teachers and students to use them as learning tools in class, implementing a policy known as BYOD (Bring Your Own Device). One example is the app "Poll Everywhere", which lets a teacher pose a question and have students text in their answers, taking the place of the commercial "clickers" that teachers have used in the past to get responses from all students.
     Sheninger's BYOD idea was ingenious in the sense that, while he wanted all students in his school to use digital learning tools, he realized that the funds didn't exist to outfit each student with a laptop, Chromebook, or iPad. Instead, he leveraged the fact that most students already carried powerful digital tools in their phones; any students who didn't own smartphones could be provided some type of device by the school. As principal, Sheninger also embraced technologies such as Twitter and Facebook to communicate with parents, teachers, and students. So, how did this all work out? In terms of student learning, it is tough to tell. When Sheninger spoke at our school, he claimed that student achievement at New Milford had improved dramatically, but he didn't provide many details to back up his claims.
      This somewhat mirrors the general ed tech trend: there is not much hard data showing that digital technologies dramatically improve educational outcomes. One problem I see is that some of the ideas embraced by the leaders of the ed tech movement are questionable. Here are two from Sheninger's book:

" Digital learners prefer instant gratification and immediate rewards, but many educators prefer deferred gratification and delayed rewards"
"Digital learners prefer parallel processing and multitasking, but many educators prefer linear processing and single tasks or limited multitasking".

Let's look at the first of Sheninger's tenets. Just because digital learners prefer instant gratification doesn't mean that this is a desirable trait; in fact, if it is true, it is likely a mark against digital learning. For one, it ignores well-established findings in psychology showing that the ability to delay gratification is an important predictor of academic success. Stanford University's "Marshmallow Test" was the first and best-known experiment illustrating this phenomenon. Children were given a marshmallow and told that they could eat it right away, but that if they waited a while, they would get a second marshmallow. It was thus a test of a young child's ability to delay gratification in order to obtain a greater reward in the future. Follow-up research showed a strong correlation between a child's ability to exhibit self-control (by not eating the marshmallow right away) and his or her later success in school. Although some have questioned these findings, they match my own anecdotal experience. Here is a fun link which recreates the original experiment.

The second of Sheninger's tenets refers back to the title of this post. According to cognitive psychologist Daniel Willingham, the answer is "no". In the article referenced by the preceding link, Willingham states, "Is multitasking a good idea? Most of the time, no. One of the most stubborn, persistent phenomena of the mind is that when you do two things at once, you don't do either one as well as when you do them one at a time". He adds that research shows "...even simple tasks show a cost in the speed and accuracy with which we perform them when doubled up with another, equally simple task". Willingham has a lot more to say in the article, and it is well worth reading.

So, what to make of all this? As with all things "new", it is tempting to believe that some of the perennial problems in teaching (students preferring instant gratification and not wanting to concentrate, for example) are artifacts of the mode of instruction rather than inherent challenges in teaching children. I've seen this happen throughout my teaching career. If technology is going to make a meaningful improvement in education, its proponents will have to maintain a consistently sober perspective on what it can and cannot do. Technology will most likely have negative as well as positive effects in the classroom, and discerning between the two is imperative.

4 comments:

  1. Kev,

    I remember Tricia some time back pointing me to research that showed that "multi-tasking" isn't really multi-tasking but is really "time-slicing." In other words, you work on one task for some period of time, switch to another, then back to the first, and keep cycling. This is how a computer OS works. But it's actually an inefficient way to do things because you have to reestablish context every time you swap tasks.

    Replies
    1. Yes, Willingham talks about that in the next paragraph after my excerpt. People don't realize that they are just quickly switching between tasks, not performing them simultaneously.
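      Here's a rough, made-up sketch in Python of that idea (the amounts of "work" and "context" cost below are arbitrary, so this is only an analogy, not a model of how a real scheduler behaves): two equal jobs are run back to back, and then "time-sliced," paying a little extra setup on every switch to stand in for re-establishing context.

      import time

      def chunk_of_work(n=20_000):
          """A small, self-contained unit of work."""
          return sum(i * i for i in range(n))

      def reestablish_context():
          """Stand-in for the overhead of reloading state after a task switch."""
          return sum(range(5_000))

      def run_sequentially(chunks_per_task=50):
          # Finish all of task A, then all of task B.
          start = time.perf_counter()
          for _task in ("A", "B"):
              for _ in range(chunks_per_task):
                  chunk_of_work()
          return time.perf_counter() - start

      def run_time_sliced(chunks_per_task=50):
          # Alternate A, B, A, B, ..., paying the switching cost each time.
          start = time.perf_counter()
          for _ in range(chunks_per_task):
              for _task in ("A", "B"):
                  reestablish_context()
                  chunk_of_work()
          return time.perf_counter() - start

      print(f"sequential:  {run_sequentially():.3f} s")
      print(f"time-sliced: {run_time_sliced():.3f} s")

      The same total amount of "real" work gets done either way; the time-sliced run is slower only by the accumulated switching overhead, which is exactly the point about having to re-establish context.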

  2. One thing about digital tasks - be they labs in virtual high school or problems/obstacles in video games - is that the user knows they have been designed with a certain level of difficulty and should be solvable with a predictable level of effort. What can happen is that an expectation about the level of effort gets built into the user, and when that level of effort has been reached the user gets frustrated or gives up. They have been conditioned to expect things to resolve themselves within a certain time or with a certain input of effort. But actual physical reality offers no such guarantees, and therefore the level of effort required cannot always be predicted. Sometimes things are easier than we expect and sometimes they are harder, even much harder, than we expect. Someone conditioned to digital reality may have difficulty with this... an actual physical lab has an infinite number of things that can go wrong, while a digital lab, in its constrained and artificial environment, only has a limited, finite set of ways to go wrong.
