Kwabena Boahen: Curiosity is the way forward to new knowledge

Tue, 01/25/2022

Kwabena Boahen | Photo by Rod Searcey

Kwabena Boahen is a professor of bioengineering, of electrical engineering and, by courtesy, of computer science at Stanford Engineering, where he runs the Brains in Silicon Lab, which develops silicon integrated circuits that emulate the way neurons compute and computational models that link neuronal biophysics to cognitive behavior.

In this interview, he credits his father, a professor of African history at the University of Ghana, for his interest in academia and for his intuition that there must be a more elegant way to bridge neurobiology, medicine, electronics and computer science when designing circuits. Here are excerpts of a conversation with the Stanford School of Engineering:

A childhood in Ghana

I grew up on the University of Ghana’s sprawling Legon campus, where my father was a professor of African history and my mother ran the cafeteria. It was quite the idyllic childhood – we had the run of the campus, which was on a hill, built go-karts and made a treehouse. We even raced from one tree to another, high above the ground, because the branches were so closely intertwined.

My parents were quite different. My mom came from British Accra, the coast, which at the time was considered more cosmopolitan and sophisticated. Meanwhile, my father came from inland Ghana, home of the Ashanti, the warriors, and he carried that pride in him. He remembers walking six miles to school every day and he didn’t want us to take our privilege for granted. If he gave us a ride to school, it was accompanied by, “You are so lucky. Look at that kid walking – you think you’re better than he is? No!”

If you’d asked me back then if my parents influenced me, I would have said no. I always thought of myself as going my own way. But here I am, a professor like my dad, so his tremendous influence is clear. That said, history – his subject – was not my passion because it always felt like memorization to me as a student. My whole approach, even now, is to derive everything I know by learning the core theory and then applying it. That started early. I liked to take things apart on my dad’s old desk and put them back together again; the campus bookstore offered a steady supply of books with projects to build so I was always in the middle of an experiment.

When I was 12, I was sent 100 miles away to an all-boys boarding school – this was part of the British effort to further colonize and indoctrinate African children. I was good at math and science, and particularly enjoyed metal work using lathes. In fact, with a corn-planting machine I built with my classmate Michael Banson, I won a science fair and was flown by the government to Nigeria as part of the prize. I ended up being valedictorian of my class and then began to consider next steps. I did a year of national service and applied several times over the next few years to MIT and other excellent schools around the world. But what really changed the game for me was when my dad took his sabbatical at Johns Hopkins, where his colleagues encouraged me to apply and helped nominate me for a scholarship, making it possible to go.

I was probably the only 21-year-old freshman, and as a result I found the other freshmen difficult to relate to; Baltimore, which was highly segregated, was also a huge culture shock. I ended up becoming good friends with other international and American students, mostly upperclassmen.

A growing interest in computers and the brain

My interest in computers awoke because my dad brought back an early personal computer – a BBC Micro – from England during a previous sabbatical. At the time, I was too intimidated to take it apart, but I read everything I could find in our library back home, from how to execute a branch instruction to programming in BASIC.

To be quite honest, I was disgusted. It seemed incredibly simplistic to me. So much work had to be done just to get the computer to multiply two numbers together. In general, I feel creativity comes from the experiences you’ve had that make you different from everyone else – like my time growing up on a university campus, relentless tinkering as a child, and going to boarding school and then university in a different country. My intuition was telling me this could be more elegant, efficient and better all around. I began to wonder how else we might build a computer. And although biology on its own wasn’t for me, I was interested in Terry Sejnowski’s work on using a neural network to translate text into speech after attending a seminar he gave at Hopkins, where he was a faculty member. That’s really how my interest in the intersection of computational work, bioengineering and neuroscience began.
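The frustration about multiplication was well founded: the BBC Micro’s 6502 processor had no hardware multiply instruction, so multiplying two numbers meant running a loop of shifts and adds in software. A minimal Python sketch of that kind of routine (an illustration of the idea, not period code):

```python
def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only shifts and adds,
    mimicking the software routine an early 8-bit CPU without a
    multiply instruction (like the 6502) had to execute."""
    result = 0
    while b:
        if b & 1:          # low bit of the multiplier set:
            result += a    # add the shifted multiplicand
        a <<= 1            # shift multiplicand left (double it)
        b >>= 1            # shift multiplier right (next bit)
    return result

print(shift_add_multiply(13, 11))  # → 143
```

Each pass through the loop handles one bit of the multiplier, so even a single multiplication costs many instructions – exactly the kind of laboriousness that made the machine feel inelegant.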

Sejnowski described an interesting computational approach: teaching a neural network to translate text to speech, which sounded much like teaching a human baby to talk. At first the network only babbled, but eventually it learned to speak intelligibly, just like that baby. Maybe there was an elegant way to use the computer after all! It made me think that with a reverse-engineering approach, we could design computer chips to work like our brains, which led me to work in an integrated electronics lab. My teaching assistant in the lab at the time, Andreas Andreou, asked my lab partner, Philippe Pouliquen, and me to help replicate parts of Carver Mead’s seminal work on brain-like chips. But we were also thinking about how to model different parts of the brain and put them on chips; Philippe developed the software while I built the hardware.

It was a great starting place – we built everything from scratch, literally from the point where the electron hit the wire.

I ended up getting both my undergrad and master’s degrees from Johns Hopkins – and then I went to Caltech to study with Carver Mead, widely recognized for his pioneering work on very large-scale integration (VLSI) computer chips. He’s a super gregarious, friendly, fun guy who said, “I’ve heard so much about you!” when we met for the first time for lunch after a conference in 1989. He knew how to put you at ease, which was good because I was nervous as hell. I went on to work with him to build neuromorphic chips. In other words, we asked ourselves, “How can we take what we know about the brain and build chips that work like it?” It became one of the guiding questions for all the work that’s followed.

Coming to Stanford

Once I finished my PhD in Computation and Neural Systems, it was time to find a job. I picked Penn: Peter Sterling, a mentor of mine, was on the faculty there and on my thesis committee. I wanted to collaborate closely with neuroscientists like him on a concentrated campus, and Penn offered both a multidisciplinary research environment and a strong reputation for collaboration. So I went there, staying from 1997 until 2005.

Penn had been building a five-story center to bring together medicine, engineering and neuroscience. It was tremendously exciting to think about having experts from these different schools and departments essentially as your neighbors. But with the Clark Center opening in 2003, Stanford took that spirit of interdisciplinary scientific and technical research to a whole new level. They hired Steve Quake, and when I reached out to him, he talked about building a 21st-century bioengineering department and invited me to apply.

I couldn’t resist.

Today my lab focuses on how cognition arises from neuronal properties. We’re using silicon integrated circuits to emulate the way neurons compute, linking the seemingly disparate fields of electronics and computer science with neurobiology and medicine. We’re profoundly shifting computing away from a traditional, sequential, step-by-step paradigm toward a parallel, interconnected architecture that works much more like that of the human brain.
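The interview doesn’t detail the lab’s circuits, but the contrast between sequential and parallel paradigms can be illustrated with a toy software model. Below is a minimal sketch (a standard leaky integrate-and-fire update, not the lab’s actual model) in which every neuron in a population updates simultaneously rather than one instruction at a time:

```python
import numpy as np

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """Advance all membrane potentials one time step in parallel.
    A toy leaky integrate-and-fire update, for illustration only."""
    v = leak * v + input_current   # leak toward rest, then add input
    spikes = v >= threshold        # neurons crossing threshold fire
    v = np.where(spikes, 0.0, v)   # firing neurons reset to rest
    return v, spikes

# One update touches every neuron at once – the parallel,
# interconnected style of computation, in contrast to a
# step-by-step sequential program.
rng = np.random.default_rng(0)
v = np.zeros(1_000_000)            # a million toy neurons
for _ in range(10):
    v, spikes = lif_step(v, rng.random(v.size) * 0.2)
```

In silicon the parallelism is physical – each neuron is its own circuit – whereas this sketch merely simulates it with vectorized arithmetic.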

My group’s contributions to the field of neuromorphic engineering include a silicon retina that could be used to give the blind sight, a self-organizing chip that emulates the way the developing brain wires itself up, and a mixed analog-digital hardware platform, Neurogrid, that simulates a million cortical neurons in real time – rivaling a supercomputer while consuming only a few watts.

Now we’re working to understand neural supremacy – how our brains manage to use so much less energy than computers do. What gives our brains that efficiency, and how can we translate it into computer chips? If we think about how we’re building chips now, it’s like the urban sprawl of Los Angeles: much like a lengthy commute from one part of the city to another, information on a chip must travel significant distances. We need a model that’s more like Manhattan, where we build electronic circuitry up in a denser fashion – up like high-rises, not out like sprawl. Our brains do this really well, but we’re not yet able to fully replicate it with silicon chips.

The work stays interesting because there’s a lot more to uncover.

Students should reclaim their curiosity

Students often ask how they should get started – how to identify an area of potentially fruitful inquiry or build an enduring research interest. I always tell them to go back to their 2-year-old self, the notion of the scientist in a crib. A 2-year-old will always get in trouble because they like to figure things out for themselves. Tell them to do it one way and they’ll look at you and go back to doing what they were originally doing: testing out a hypothesis for themselves!

Our innate curiosity gets drummed out of us by our parents and our teachers – always telling us how to do something, what to focus on, how fast to be done with it. But if you can go back to that time when you were curious and confident in your own ability to figure things out, that’s the way forward to new knowledge.



Kwabena Boahen: Curiosity is the way forward to new knowledge - by Leslie Hobbs - Stanford Engineering - January 25, 2022