I’ve been reading Ray Kurzweil’s book, “The Singularity Is Near.” For those of you who aren’t familiar with the singularity, it’s the point at which technology will supersede biology. In other words, according to Kurzweil, it’s the point at which computers will be able to think better and faster than human beings. In fact, Kurzweil asserts that at a certain point, a single computer will be able to think faster than every brain on the planet combined. Once computing power reaches that point, computers will be able to manipulate their own source code, essentially programming themselves, much as humans are learning to do with our own DNA.
Kurzweil also points to a time, in the not-so-distant future, when humans and technology will become irreversibly integrated. He predicts a time when we’ll have sentient machines living in our bodies, augmenting us, and a time when people will be able to transfer their thoughts into machines. It’s this idea that gives me pause.
As human beings, we’re always growing and evolving, always striving to better ourselves. And I can see how, logically, such technological extension or integration could be the next step in human evolution. But as people become more and more machine, at what point do they stop being human? Or at what point do we consider machines to have become human? Should those lines ever blur? If machines can one day do everything that humans do, including feel emotion and create art, have we lost whatever it is that makes us human? I’ll admit that the advantages of being so augmented are tempting. But what would I be sacrificing? Is what makes us human our ability to appreciate beauty, to create, to constantly push our limits? I can’t help feeling that such cybernetic enhancements would eliminate that “pushing our limits” part. The endeavor to become more than what we are, to grow and change for the better, seems meaningless if it can be accomplished with the flick of a switch.