Defining the problem

I’ve written previously about the many questions I have about what can be done here to improve education.  One of the first questions I asked was, “When is it too late?”  More specifically, when does a child effectively lose the possibility of improvement relative to his peers?  When is he locked forever into being either the “struggling kid” or the “bright kid?”

Third grade.

Curiously enough, I’m teaching art in grades two and three, so it will be interesting to see how my computer-lab time with those students goes after school, and if my experience here lines up with the research.

Another statistic I stumbled upon had nothing to do with Kenya.  It was this: the average 17- to 18-year-old deaf student in America reads at which grade level?

Fourth grade.

And the definition of fourth grade is this: the middle-of-the-pack performance expected from a third grader entering fourth grade.  Wow.  I was surprised by this.  And disappointed.  Is this the ceiling?  If America, with all its money and teachers and computers, can only pull off a fourth-grade average for Deaf high school seniors, what can I possibly do here with limited resources?

Well, first of all, if I’m looking for a breakthrough, I shouldn’t simply apply American methods here, because even in the best-case scenario, those methods produce a known result that still falls short.

Here is one opinion on why grades 3/4 seem to be where kids’ reading levels get stuck: it’s the “learn to read / read to learn” transition that we push on students, even when they’re not ready for it.

So what does America plan on doing about its own problem?  I see two related issues and movements.

Think about this: there are 44 phonemes in the English language, which have a more-or-less predictable relationship with the 26 letters of the English alphabet.  If you know how to talk, learning to read is simply a matter of mapping the phonemes to the letters.  Learn that mapping, learn the patterns, and you’ve basically taken two languages (speaking and writing) and turned them into one language (English).  It’s easy to take for granted that when you see a new word, you can usually figure out how to pronounce it, and vice versa: if you hear a word, you can reasonably guess its spelling.  A Deaf individual who reads a new word can’t extrapolate it into sign language, and vice versa: they remain two different languages, regardless of whether you use ASL, KSL, or Signed English.  They all have this problem.
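To make that mapping concrete, here’s a toy sketch in Python of the letter-to-phoneme lookup a hearing reader does automatically when sounding out a new word.  The pattern table is a deliberately tiny, hypothetical sample; real English has far more patterns and plenty of exceptions this greedy matcher ignores.

```python
# A toy illustration (not a real phonics engine) of the grapheme-to-phoneme
# mapping a hearing reader internalizes. The table is a small, made-up
# sample of English's ~44 phonemes.
GRAPHEME_TO_PHONEME = {
    "sh": "ʃ", "ch": "tʃ", "th": "θ", "ph": "f", "ee": "iː",
    "a": "æ", "b": "b", "c": "k", "e": "ɛ", "i": "ɪ",
    "n": "n", "o": "ɒ", "p": "p", "s": "s", "t": "t",
}

def sound_out(word):
    """Greedily match the longest known letter pattern, left to right."""
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):  # try two-letter patterns before single letters
            chunk = word[i:i + size]
            if chunk in GRAPHEME_TO_PHONEME:
                phonemes.append(GRAPHEME_TO_PHONEME[chunk])
                i += size
                break
        else:
            i += 1  # unknown letter: skip it, as a beginner might
    return phonemes

print(sound_out("sheep"))    # ['ʃ', 'iː', 'p'] -- a new word, sounded out
print(sound_out("phonics"))  # ['f', 'ɒ', 'n', 'ɪ', 'k', 's']
```

The point is that a hearing reader gets this decoder for free once the patterns are learned; a Deaf reader has no equivalent decoder from spelling to sign.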

A recent study in the US has pushed phonics to the educational forefront for all students, and much thought has been put into how to teach the concepts to the deaf, even if they remain purely conceptual.  Even if a student still can’t transfer a written word into a sign, he can at least use phonics to better understand word variations (phone, phonics, phonetic, etc.), and can therefore do more independent learning.  This can be done in a totally abstract way, in conjunction with lip reading, or even with microphones that show waveforms on a screen to demonstrate which sounds produce which kinds of visuals.  Anyone who has done computer recording can vouch for the fact that plosives like “b” and “p” look different from other sounds on the screen.
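As a sketch of what that might look like in a computer lab, here’s a minimal Python script that plots a recording so a student can see the sharp burst of a plosive against the smoother shape of a vowel.  It assumes numpy and matplotlib are installed, and the file name recording.wav is just a placeholder for a mono 16-bit recording.

```python
# Minimal waveform viewer: say "pa" vs "ma" into the microphone, record to
# recording.wav, and compare the onset shapes on screen.
import wave
import numpy as np
import matplotlib.pyplot as plt

with wave.open("recording.wav", "rb") as wav:
    rate = wav.getframerate()
    frames = wav.readframes(wav.getnframes())

samples = np.frombuffer(frames, dtype=np.int16)
time = np.arange(len(samples)) / rate

plt.plot(time, samples, linewidth=0.5)
plt.xlabel("seconds")
plt.ylabel("amplitude")
plt.title("Plosives show as sharp bursts; vowels as sustained waves")
plt.show()
```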

A more dramatic, and, it seems, proven technique is to use Cued Speech in schools.  Cued Speech is the name for a kind of alternate sign language.  It works in conjunction with lip reading, and the goal is to clarify ambiguous phonemes.  “P” and “b” look the same on the lips, but if you make a small hand gesture near your mouth, you can convey to the “listener” which one you intended.  Research seems pretty conclusive: children who learn this technique better understand phonics, and end up being better readers and writers.  I haven’t yet seen these results debated.  It’s still not widespread, possibly because ASL is finally winning the battle against “Signed English,” which was designed to improve grammar but didn’t really work, and the people who fought for ASL are probably not happy to see yet another contender.  At least Cued Speech is sufficiently different (technically it’s not a sign language at all) that there is less room for confusion.
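To illustrate just the disambiguation idea (and only the idea: the phoneme groupings and cue numbers below are made up, not the official Cued Speech handshapes, which use eight handshapes and four positions), here’s a toy model where phonemes that share a lip shape get distinct hand cues:

```python
# Toy model of the Cued Speech principle: phonemes that look identical on
# the lips (a shared "viseme") are assigned distinct hand cues, so the
# cue + lips pair is unambiguous. Groupings and cue numbers are invented
# for illustration only.
VISEME_GROUPS = {
    "bilabial": ["p", "b", "m"],  # all look the same on the lips
    "alveolar": ["t", "d", "n"],
}

# Assign cue 1, 2, 3... within each group of look-alike phonemes.
CUE_FOR = {
    phoneme: cue
    for group in VISEME_GROUPS.values()
    for cue, phoneme in enumerate(group, start=1)
}

def cue_word(phonemes):
    """Pair each phoneme with the hand cue that disambiguates it."""
    return [(p, CUE_FOR.get(p, "?")) for p in phonemes]

# "pat" and "bat" look the same on the lips, but carry different cues:
print(cue_word(["p", "æ", "t"]))  # [('p', 1), ('æ', '?'), ('t', 1)]
print(cue_word(["b", "æ", "t"]))  # [('b', 2), ('æ', '?'), ('t', 1)]
```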

So those are some options on the table.  There is a computer program that shows real examples of a lot of these ideas (but not the waveforms), but it has possibly the worst user interface I’ve seen in a long time.

So I definitely need to implement a phonics solution of some kind.  Introducing Cued Speech would probably be too political, but there are still the other options.  My gut still tells me that chat programs may be an unexplored frontier worth trying.  Imagine class communication handled entirely via text chat.  The point of the class is to teach the students to be better readers and writers.  There is no interaction via sign… just writing, and vocabulary is introduced slowly via pictures and videos to enable the students to chat with each other and the teacher.  Phonics or no phonics, I would imagine this doing a lot of good.  This is an idea I’m still mulling.
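The plumbing for this is almost trivially simple, which is part of the appeal.  Here’s a bare-bones sketch of a classroom chat relay (the port number is an arbitrary placeholder): every line a student types is broadcast to the rest of the class, and the lab machines could connect with any telnet-style client.

```python
# Bare-bones classroom chat relay: every line from one student is
# rebroadcast to everyone else. All interaction is text, so reading and
# writing *are* the activity.
import socket
import threading

HOST, PORT = "0.0.0.0", 5555
clients = []
lock = threading.Lock()

def handle(conn):
    with conn:
        for line in conn.makefile("r", encoding="utf-8"):
            with lock:
                for other in clients:
                    if other is not conn:
                        try:
                            other.sendall(line.encode("utf-8"))
                        except OSError:
                            pass  # dropped client; its own thread cleans up
    with lock:
        clients.remove(conn)

server = socket.socket()
server.bind((HOST, PORT))
server.listen()
print(f"classroom chat relay listening on port {PORT}")
while True:
    conn, _ = server.accept()
    with lock:
        clients.append(conn)
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

The hard part, of course, wouldn’t be the plumbing; it would be pacing the vocabulary and keeping the kids engaged.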

Well, congrats, you made it through another rambling post.  Here’s your reward: a photo of a student who visited the library yesterday… one of the first using my new DOS Educational Boot Disk!

[Photo: P1020731]