
Defining the problem

I’ve written previously about my many questions about what can be done here to improve education. One of the first questions I asked was, “When is it too late?” More specifically, when does a child effectively lose the possibility of improving relative to his peers? When is he locked forever into being either the “struggling kid” or the “bright kid”?

Third grade.

Curiously enough, I’m teaching art in grades two and three, so it will be interesting to see how my computer-lab time with those students goes after school, and if my experience here lines up with the research.

Another statistic I stumbled upon had nothing to do with Kenya. It was this: the average 17- to 18-year-old deaf student in America reads at which grade level?

Fourth grade.

And the definition of fourth grade is this: middle-of-the-pack performance expected from a third grader entering fourth grade. Wow. I was surprised by this. And disappointed. Is this the ceiling? If America, with all its money and teachers and computers, can only pull off a fourth-grade average for Deaf high school seniors, what can I possibly do here with limited resources?

Well, first of all, if I’m looking for a breakthrough, I shouldn’t simply apply American methods here, because even in a best-case scenario they produce a known result that’s apparently still lacking.

Here is one opinion on why grades 3/4 seem to be where kids’ reading levels get stuck: it’s the “learn to read/read to learn” transition that we push on the students, even when they’re not ready for it.

So what does America plan on doing about its own problem?  I see two related issues and movements.

Think about this: there are 44 phonemes in the English language, which have a more or less predictable relationship with the 26 letters of the English alphabet. If you already know how to talk, learning to read is simply a mapping between the phonemes and the letters. Learn that mapping, learn the patterns, and you’ve basically taken two languages (speaking and writing) and turned them into one language (English). It’s easy to take for granted that when you see a new word, you can usually figure out how to pronounce it, and vice versa: if you hear a word, you can reasonably guess its spelling. A Deaf individual who reads a new word can’t extrapolate it into sign language, and vice versa: they remain two different languages, regardless of whether you use ASL, KSL, or Signed English. They all have this problem.
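To make that mapping idea concrete, here’s a toy Python sketch of my own (the grapheme table is a tiny made-up sample, not a real curriculum resource) that “decodes” a written word into rough phoneme guesses the way a hearing reader might:

    # Toy illustration: reading-by-phonics as a greedy longest-match lookup
    # from letter patterns (graphemes) to sounds (phonemes). The table is a
    # tiny, invented sample of English's ~44 phonemes.
    GRAPHEME_TO_PHONEME = {
        "ph": "f",    # "phone" -> /f/
        "sh": "ʃ",
        "ee": "iː",
        "o":  "oʊ",
        "n":  "n",
        "e":  "ɛ",
        "t":  "t",
        "s":  "s",
    }

    def decode(word: str) -> list[str]:
        """Greedy match: try two-letter graphemes before single letters."""
        phonemes, i = [], 0
        while i < len(word):
            two, one = word[i:i + 2], word[i]
            if two in GRAPHEME_TO_PHONEME:
                phonemes.append(GRAPHEME_TO_PHONEME[two])
                i += 2
            elif one in GRAPHEME_TO_PHONEME:
                phonemes.append(GRAPHEME_TO_PHONEME[one])
                i += 1
            else:
                phonemes.append("?")  # letter not in our tiny table
                i += 1
        return phonemes

    print(decode("phone"))  # ['f', 'oʊ', 'n', 'ɛ']

Notice it gets the /f/ in “phone” right but mangles the silent final “e” – exactly the “more or less predictable” caveat above, and the kind of exception phonics instruction has to drill.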

A recent study done in the US has pushed phonics to the educational forefront for all students, and much thought has been put into how to teach the concepts to deaf students, even if phonics remains purely conceptual for them: even if a student still can’t transfer a written word into a sign, at least he can use phonics to better understand word variations (phone, phonics, phonetic, etc.), and can therefore do more independent learning. This can be done in a totally abstract way, or in conjunction with lip reading, or even with microphones that show waveforms to demonstrate which sounds make which kinds of visuals on the screen. Anyone who has done computer recording can vouch for the fact that plosives like “b” and “p” look different from other sounds on the screen.
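The waveform idea is easy to prototype. Here’s a minimal sketch, assuming Python with numpy and matplotlib on the lab machine, and two hypothetical WAV files you’d record yourself (“pa.wav”, “sa.wav”): the plosive shows up as a sharp burst, the fricative as sustained noise.

    # Plot two recordings side by side so a student can *see* the difference
    # between a plosive like /p/ and a sustained sound like /s/.
    import wave
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_wav(path, ax, title):
        with wave.open(path, "rb") as wav:
            frames = wav.readframes(wav.getnframes())
            samples = np.frombuffer(frames, dtype=np.int16)  # assumes 16-bit mono
            t = np.arange(len(samples)) / wav.getframerate()
        ax.plot(t, samples, linewidth=0.5)
        ax.set_title(title)
        ax.set_xlabel("seconds")

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 5))
    plot_wav("pa.wav", ax1, "plosive: pa")    # hypothetical recording
    plot_wav("sa.wav", ax2, "fricative: sa")  # hypothetical recording
    fig.tight_layout()
    plt.show()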

A more dramatic, and, it seems, proven technique is to use Cued Speech in schools. Cued Speech is the name for a kind of alternate sign language. It works in conjunction with lip reading, and the goal is to clarify ambiguous phonemes. “P” and “b” look the same on the lips, but if you make a small hand gesture near your mouth, you can convey to the “listener” which one you intended. The research seems pretty conclusive: children who learn this technique better understand phonics, and end up being better readers and writers. I haven’t yet seen these results debated. It’s still not widespread, possibly because ASL is finally winning the battle against “Signed English,” which was designed to improve grammar but didn’t really work, and the people who fought for ASL are probably not happy to see yet another contender. At least Cued Speech is sufficiently different (technically it’s not a sign language at all) that there is less room for confusion.

So those are some options on the table. There’s a computer program that shows real examples of a lot of these ideas (but not the waveforms), but it has possibly the worst user interface I’ve seen in a long time.

So I definitely need to implement a phonics solution of some kind. Introducing Cued Speech would probably be too political, but there are still the other options. My gut still tells me that chat programs may be an unexplored frontier worth trying. Imagine class communication handled entirely via text chat. The point of the class is to teach the students to be better readers and writers. There is no interaction via sign… just writing, and vocabulary is introduced slowly via pictures and videos to enable the students to chat with each other and the teacher. Phonics or no phonics, I would imagine this doing a lot of good. This is an idea I’m still mulling; a rough sketch of what I mean is below.
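Here’s a toy sketch, assuming Python on the lab’s LAN (the port and the “desk” names are placeholders, and a real version would need a student client, logins, and the picture/video vocab): a single shared channel where every line anyone types is broadcast to the whole room.

    # Minimal classroom chat server: all interaction is written English.
    import socket
    import threading

    HOST, PORT = "0.0.0.0", 5000   # placeholder address/port
    clients = []                   # sockets of everyone in the room
    lock = threading.Lock()

    def broadcast(message: bytes):
        """Send one chat line to every connected student and the teacher."""
        with lock:
            for c in list(clients):      # copy: we may remove while looping
                try:
                    c.sendall(message)
                except OSError:
                    clients.remove(c)    # drop anyone who disconnected

    def handle(conn, name):
        """Read newline-delimited messages from one client and rebroadcast."""
        for line in conn.makefile("rb"):
            broadcast(name.encode() + b": " + line)
        with lock:
            if conn in clients:
                clients.remove(conn)

    server = socket.create_server((HOST, PORT))  # Python 3.8+
    print(f"classroom chat listening on port {PORT}")
    while True:
        conn, addr = server.accept()
        with lock:
            clients.append(conn)
        # "desk" names are placeholders; a real version would ask for a name
        threading.Thread(target=handle, args=(conn, f"desk-{addr[1]}"),
                         daemon=True).start()

Any plain telnet-style client would do for testing, which is part of the appeal: the whole interaction stays in written English.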

Well, congrats, you made it through another rambling post. Here’s your reward: a photo of a student who visited the library yesterday… one of the first using my new DOS Educational Boot Disk!

[Photo: P1020731]

4 replies on “Defining the problem”

Cued Speech is not a language – it is just a visual system for describing the way words sound out. I don’t think it’s really about ASL supporters not being happy with a new system on the scene – CS has been around for a good long time, and it doesn’t really work as a language or as a mode of communication outside the speech therapist’s office. It’s more a question of what you want to spend your time on – teaching an actual language you can build on, or teaching phonetics to deaf kids.

As for that fourth grade reading level – much has been made of it, but if you really look at / think about it, the national reading level of everyone in the United States is only one grade better: fifth grade. It’s only because the Deaf community is a smaller sample that you notice it more, compared to the bubbles in the US where literacy is really high.

Cued Speech is definitely not its own language.

I’ve read that the idea is that you build on cued speech to create higher literacy – apparently cuers have better written English skills than their Hearing peers. The speech therapy element was more of an afterthought.

From what I understand, Signed English is on the way out, oralism is dead, and ASL is finally the undisputed champ in the US. I get the impression (reading between the lines) that some professors have fought so hard for ASL that they react negatively to the idea of losing any ASL momentum. This is the most critical I’ve seen anyone get about it, and even then it is less an attack on Cued Speech and more a defense of ASL. Interesting, because CS is not designed to replace ASL, but rather to be learned in addition to it. Shades of Signed English? An interesting read.

Of course, I don’t know Cued Speech, so this is all theory, but it’s an interesting piece of the puzzle anyway.

I started to download that PDF, but ack, it takes a while, so I’ll wait ’til I’m on the school computer to look it up.

I’m not saying that CS is a bad idea – not at all, I think it actually works for some students and should be kept as a system / source of additional education on an individual basis – so I’m just playing the devil’s advocate here.

One thing that I thought about just now, before I read your comment, is that I wonder (maybe that pdf has the answer) if the results are skewed by a couple of things – first, the students who use cued speech generally (I’m saying generally, meaning probably the majority) usually have more hearing, and are able to map out the sounds somewhat already, so the cued speech supplements that as a spoken language – was there a control group for that?

The second factor I could see – were the other students given the same amount of time working on a language as the cued speech students? I mean, if the cued speech students get 30 minutes each day using CS, and the other students get free time or whatever, then that would definitely make a difference – again, I’ll need to pull up that pdf, so if it has already addressed these points, feel free to ignore this!

As for the ASL / oralism / Signed English debate – it’s interesting because even in that NYT article, the headline uses “sign language” to describe cued speech – while that might make sense to the general audience, it can be a little misleading. A few of these professors have either taught me or are parents of my good friends, so I can see what they’re thinking, and it’s interesting for me. Signed English, you’re right, is pretty much dead, but oralism continues to have a strong push with all the financial power of the Alexander Graham Bell Association, so I think people get a little defensive about how ASL is perceived.

Well, I definitely wrote much more than I intended to – hope it all makes sense to you …

These are all the right questions about the CS research, but I don’t think I’ve seen them addressed that well anywhere. I keep hoping to find a site that summarizes all the arguments against it and points out any testing flaws, but I just haven’t seen one yet. CS is either that good, or people are just not interested enough to pick it apart.
