
on purpose

August 09, 2016

In Ovid’s Metamorphoses, Pygmalion was a sculptor who carved an ivory statue so lifelike that he fell in love with it. He began secretly wishing that his creation would come to life. In time, the goddess Venus took pity on Pygmalion and granted his wish.

Introduction

Like Pygmalion, intelligence researchers around the world have endeavored for a great many years to bring forth life from the sculpture. While the phrase “artificial intelligence” did not enter the English language until John McCarthy introduced it in 1956, the concept of man-made intelligence has clearly been a dream shared by even the most ancient of humans. The advent of the Church-Turing thesis in the 1930s formalized the notion of a computer being able to perform the same logical operations as a human*. Of course, this says nothing about whether that capability could lead to a computer being conscious. All that has been said is that the computer can read the notes; it remains to be seen whether we can make it hear the music.

Pygmalion and Galatea having a chat about consciousness.

A Brief and Disproportionate History of AI [1, 2]

Garry Kasparov, World Chess Champion for 12 years running, got up at 8:45 AM and leisurely sat down to a breakfast of bacon and eggs. It was the day he was to play against Deep Blue, IBM’s latest chess-playing computer. Having beaten IBM’s best efforts in the past, he wasn’t that worried. After breakfast, he discussed the Labour Party’s victory in the British elections with his technical advisor. Kasparov won only one game of the six played in the match that followed (Deep Blue won two; the rest were draws). With that victory for Deep Blue, machines were officially better than humans at chess.

Of course, such initial forays into AI are now considered “mere” computation. Deep Blue employed a brute-force approach: calculate as many moves ahead as possible and choose the best move found. Deep Blue could evaluate hundreds of millions of positions per second (and remember them), while Kasparov could consider perhaps three per second (and probably not remember as many). Even so, Deep Blue was not so much making decisions as making calculations. Researchers soon moved on to knowledge-based approaches (also known as expert systems): using predicate calculus (logic) and an existing base set of truths (the knowledge base), the AI could deduce new true statements and add them to its knowledge base. Such an AI might also have some sort of sensor with which to gather more truths about the world. This can be considered “classical” AI, pioneered by people like John McCarthy; a toy sketch of the idea follows below. However, the limitations of this approach quickly became evident. An expert system could not learn about the world on its own; it could merely reason about the small world of rules it had been given. Thus, this too felt like mere computation, and as excitement about AI waned, the field faced a winter – a lack of research funding and visibility.
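
To make the expert-system loop concrete, here is a minimal forward-chaining sketch, my own toy illustration in Python rather than any particular historical system; the facts and rules are invented. Rules map premises to a conclusion, and the system keeps deriving new facts until nothing new can be added. Note how confidently it concludes nonsense the moment the world steps outside its rules.

```python
# Toy knowledge base: Horn-clause-style rules (premises -> conclusion).
rules = [
    ({"bird", "healthy"}, "can_fly"),
    ({"penguin"}, "bird"),
    ({"can_fly"}, "can_travel"),
]

def forward_chain(facts, rules):
    """Repeatedly apply every rule whose premises are all known,
    adding conclusions as new facts, until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"penguin", "healthy"}, rules))
# -> {'penguin', 'healthy', 'bird', 'can_fly', 'can_travel'}
# A healthy penguin "can fly" here: within its little world of rules
# the deduction is perfectly valid, which is exactly the problem.
```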

Deep learning (as well as other forms of machine learning) did not come under the spotlight until relatively recently. While I will keep the machine learning history worms canned for now, suffice it to say that deep learning via artificial neural networks is the closest computer scientists have come to creating an artificial brain. Deep learning creates functions and extracts features that we do not yet have a way to comprehend. Just as you can’t tell what another person is thinking, we can’t tell what a deep learning network is computing. There are those who believe that with a deep learning network that is complex enough, we can replicate the brain well enough to create a human-level intelligence. However, there are also those (myself included) who believe that we have yet to understand something crucial about consciousness [3].
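
As a tiny illustration of that opacity, here is a two-layer network sketched in Python with numpy; the sizes are arbitrary and the weights are random stand-ins for trained ones. The point is only that the “function” a network computes lives entirely in matrices of numbers, and inspecting those numbers explains nothing about why a given output comes out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights standing in for trained ones: the computation is
# just matrix multiplies and nonlinearities, but the numbers in
# W1 and W2 carry no human-readable rationale.
W1 = rng.normal(size=(4, 16))   # input -> hidden features
W2 = rng.normal(size=(16, 1))   # hidden features -> output

def forward(x):
    hidden = np.tanh(x @ W1)    # the network's "features"
    return hidden @ W2          # the network's answer

x = rng.normal(size=(1, 4))     # some arbitrary input
print(forward(x))               # an answer, with no rationale attached
```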

Problems [4]

Inevitably, the field of artificial intelligence reminds me of the fate of unrequited love: problems.

Here are thousands of AI/AGI researchers† in love with the idea of a machine that can think. But of course, this idea does not reciprocate – AGI (artificial general intelligence) has not been an easy field to make progress in. You probably noticed a trend in the history above: excitement about a technique leads to the discovery of its limits, which leads to a decrease in excitement, which leads to that technique feeling less like “intelligence” and more like computation. Talk to any researcher in AI, or even in computer science in general – nobody likes the label AI. This is because we have yet to truly construct an “intelligence” in the sense people mean in everyday conversation. I used AI interchangeably with AGI earlier in this post (as those unfamiliar with the fields might), but having made the distinction, I will refer to the fields by their proper names henceforth.

Of course, we have yet to truly define intelligence. Often when people in the field talk about AI, they talk about decision-making techniques or machine learning methods – more practical and feasible problems, like doing really well on this handwriting recognition test. When people normally think of “intelligence,” they think of a conscious machine. They think of HAL 9000, Skynet, the Matrix, and C-3PO. Without a theory of consciousness, how can we truly understand how to construct a conscious machine? This is the “easy” problem of consciousness: constructing a formal description of what constitutes consciousness. The “hard” problem is what brings the theory of consciousness field to the border between science and philosophy: why does consciousness seem to be accompanied by subjective experience? Why can two conscious beings, like you and me, hear and see and smell the same things but come away with totally independent and unique experiences? Unfortunately, even the “easy” part of constructing a theory of consciousness requires combining (or perhaps reshaping) the cutting edge of neuroscience, physics, and computer science. And even once we can define consciousness, what purpose would creating one serve?

Thus we come to a discussion of problems. When theory of consciousness researchers talk about problems, they generally mean the “hard” and “easy” problems of consciousness. But there are more problems still. People have a notion of AGI being developed and leading us into a world of robots that talk to us while acting as servants, ushering in either a utopian future or some dystopian hellscape. In my opinion, it seems not only pointless but also cruel to create a conscious machine just so that we can do less work. I truly believe that most “small” problems (i.e., jobs that are repetitive or dangerous) can be solved by robots via deep learning. As many people know already, you don’t really need to be conscious in order to bag groceries or fill out paperwork§.

Why, then, am I – and so many others – so infatuated with the idea of a conscious machine? Why has this idea pervaded the popular consciousness for so many centuries? I’m sure there are those who would immediately answer that it is man’s desire to encroach on the territory of gods. Perhaps people have some innate drive to end humanity’s solitude – to knock away our status as seemingly the only highly intelligent conscious beings in the world. But I have no desire for deification, nor do I believe that we really need to look outward for companionship. So why do I want to build a conscious machine? I am still unsure myself, but I believe that upon the construction of a conscious machine, we will find answers not only to those questions but also to some big questions about human nature. Will we treat the machines with respect? Will we try to put them to work? What will the machines think of us – will humanity be good or bad in the eyes of its child? Could we acknowledge or create a conscious being without giving it human values and traits?

Purpose [5, 6, 7]

I hope to explore these questions with this blog. This post became semi-notorious amongst my friends because I have essentially been working on it for months. Perhaps I am, as ever, bad at introductions, but I also believe it was because I started trying to answer questions that are too heavy to answer in a lifetime, let alone in an introductory post. If nothing else, it shows I am excited about working in this field (even if I couldn’t tell you why). I am starting this record at nearly the beginning of my journey into research in this field – my second year of undergraduate education – but hopefully this blog will grow with me in capability and in qualification to discuss, or even answer, these questions.

The future is an exciting place. I don’t know if I will live to see a theory of consciousness fully constructed and a conscious machine built, especially because the theory of consciousness is such a nebulous field right now. Various theories are out there vying for acceptance (much as in physics). One is Integrated Information Theory (IIT), which states that a conscious experience can be represented as information (in the information theory sense) and that this information is integrated (i.e., cannot be reduced to its component parts). The extent of irreducibility (equivalently, how integrated the information is) is denoted by ϕ. If a system has ϕ equal to 0, then the information in that system is completely reducible to its parts. If ϕ is large, the system is greater than the sum of its parts (i.e., conscious). As a side effect of complex consciousness requiring integration, no computer simulation of a conscious system can ever be conscious according to IIT. To be precise, a corollary of IIT is that whether a system is conscious cannot be determined from its input and output (its behavior) alone. This leads to even more interesting implications, such as the possibility of two functionally identical systems of which only one is conscious, or the possibility of a true “zombie” system: one behaviorally nearly identical to a human while still lacking consciousness.
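
The actual ϕ of IIT 3.0 is defined over a system’s cause-effect structure and is expensive to compute, but a toy proxy can convey the flavor of “irreducibility to parts.” The sketch below is my own illustration, not the IIT algorithm: it measures, in bits, how far the joint distribution of two units departs from the product of its marginals, where zero means the whole reduces exactly to its independent parts.

```python
import numpy as np

def irreducibility(joint):
    """Bits by which a two-unit joint distribution exceeds the
    product of its marginals (mutual information). Zero exactly
    when the whole is fully reducible to its independent parts."""
    px = joint.sum(axis=1, keepdims=True)  # marginal of unit X
    py = joint.sum(axis=0, keepdims=True)  # marginal of unit Y
    product = px * py                      # "parts only" model
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / product[mask])))

# Two binary units that always agree: their joint state carries
# 1 bit that neither part accounts for on its own.
coupled = np.array([[0.5, 0.0],
                    [0.0, 0.5]])

# Two independent fair coins: fully reducible, 0 bits.
independent = np.full((2, 2), 0.25)

print(irreducibility(coupled))      # -> 1.0
print(irreducibility(independent))  # -> 0.0
```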

Another theory, Orch OR, also suggests that a conscious computer cannot be constructed (at least not a classical, non-quantum one). Orch OR proposes that consciousness depends on quantum processes occurring within the microtubules of neurons in the brain. These quantum processes are somehow biologically “orchestrated,” but the dependence of consciousness on quantum processes means that a classical computer simulation of the brain cannot truly be conscious. This theory has been met with a lot of criticism. One of the main issues is that decoherence (the process by which interaction with the environment drives a quantum system toward classically predictable behavior) is expected to occur rapidly in a biological setting: compared to the idealized systems often studied in physics, biological systems are too “warm, wet, and noisy” to support such “delicate” quantum processes. While Orch OR may not be correct, it appeals to me because it attempts to describe what a possible cause of consciousness might be. In this regard, it follows a trend among theories of consciousness, including IIT: many of them suggest that something is missing – that consciousness is more than just some form of classical computation. The living mind does indeed seem to be too warm, wet, and noisy to be captured by mere computation. Otherwise, we may as well be rocks moving around in a desert. It is my dream to one day find the mess of life within the machine. This is a step on that journey.

Footnotes

*: The original thesis actually states that a human can compute a function (on ℕ) only if a theoretical computer (a Turing machine) can compute that function.

†: To clarify, I count myself among them (or at least I will after a few more years of undergrad and a good many years of graduate school).

§: Not a knock against anybody working these jobs, but clearly people would prefer not to do them.

References

1: “Kasparov vs. Deep Blue: The Rematch.” IBM Research: https://www.research.ibm.com/deepblue/

2: “The History of Artificial Intelligence.” CSEP 590A, University of Washington: http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

3: Aaronson, Scott. “‘Can computers become conscious?’: My reply to Roger Penrose.” Shtetl-Optimized: http://www.scottaaronson.com/blog/?p=2756

4: Chalmers, David J. “Facing Up to the Problem of Consciousness.”: http://consc.net/papers/facing.html

5: Tononi, Giulio et al. “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0.” PLOS Computational Biology: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588#s3

6: Penrose, Roger et al. “Consciousness in the universe: A review of the ‘Orch OR’ theory.” Physics of Life Reviews: http://www.sciencedirect.com/science/article/pii/S1571064513001188

7: Scholes, Gregory et al. “Life – warm, wet and noisy?: Comment on ‘Consciousness in the universe: a review of the ‘Orch OR’ theory’ by Hameroff and Penrose.”: http://www.ncbi.nlm.nih.gov/pubmed/24183930