March 30, 2017
About the author:
Joni holds a PhD in marketing. He is currently working as a postdoctoral researcher at Qatar Computing Research Institute and Turku School of Economics. Contact: joolsa (at) utu.fi
The ambiguity problem illustrated:
User: “Siri, call me an ambulance!”
Siri: “Okay, I will call you ‘an ambulance’.”
You’ll never reach the hospital, and end up bleeding to death.
Two potential solutions:
A. machine builds general knowledge (“common sense”)
B. machine identifies ambiguity & asks for clarification from humans
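Option B can be sketched in a few lines: if an utterance has more than one known reading, the machine asks instead of guessing. The utterance table and its readings below are invented for illustration; a real system would generate candidate interpretations from a parser.

```python
# Hypothetical table of known readings per utterance (illustrative only).
INTERPRETATIONS = {
    "call me an ambulance": [
        "dial emergency services and request an ambulance",
        "refer to the user as 'an ambulance'",
    ],
    "open the window": [
        "open the physical window",
    ],
}

def respond(utterance: str) -> str:
    readings = INTERPRETATIONS.get(utterance, [])
    if len(readings) > 1:
        # Ambiguity detected: ask the human to disambiguate.
        options = "; or ".join(readings)
        return f"Did you mean: {options}?"
    if readings:
        return f"Okay: {readings[0]}."
    return "Sorry, I don't understand."
```

The point is not the lookup table but the branch: a single unambiguous reading is acted on, while multiple readings trigger a clarification question back to the human.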
The whole “common sense” problem can be solved by introducing human feedback into the system. We need to tell the machine what is what, just as we would teach a child: iterative learning through trial and error.
In fact, by doing so, solutions A and B converge, which is fine and ultimately needed.
To determine which resolution of an ambiguous situation is proper, the machine needs contextual awareness; this can be achieved by storing contextual information from each ambiguous situation and being told why a particular piece of information resolves the ambiguity. It’s not enough to say “you’re wrong”; there needs to be an explicit association with a reason (a concept, a variable). Equally, it’s not enough to say “you’re right”; again, the same association is needed.
1) try something
2) get told it’s not right, and why (linked to contextual information)
3) try something else, guided by that “why”
4) get rewarded if it’s right.
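The four steps above can be sketched as a toy loop: the machine tries an interpretation, receives feedback that carries a reason linked to the context, and stores the association so the next try uses it. All names here (the context, the reason, the candidate interpretations) are illustrative assumptions, not a real system.

```python
class Learner:
    """Toy learner that stores reason-linked associations, not just labels."""

    def __init__(self):
        # context -> (interpretation confirmed correct, the reason why)
        self.associations = {}

    def try_interpretation(self, context, candidates):
        # Step 1: try something; reuse a learned association if one exists.
        if context in self.associations:
            return self.associations[context][0]
        return candidates[0]

    def feedback(self, context, interpretation, correct, reason):
        # Steps 2 and 4: feedback must carry a reason, not just right/wrong.
        if correct:
            self.associations[context] = (interpretation, reason)

learner = Learner()
candidates = ["refer to the user as 'an ambulance'",
              "dial emergency services"]
context = "user is injured"

first = learner.try_interpretation(context, candidates)        # step 1
learner.feedback(context, first, correct=False,
                 reason="injury implies a request for help")   # step 2
second = candidates[1]                                         # step 3
learner.feedback(context, second, correct=True,
                 reason="injury implies a request for help")   # step 4

# The next time the same context appears, the learned association is used.
assert learner.try_interpretation(context, candidates) == second
```

The design choice worth noticing is that the stored value is a pair, interpretation plus reason, so the machine can later be asked not only what it decided but why.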
The problem is that machines are currently trained on data, not by human feedback.
So we would need to build machine-training systems that enable training by direct human feedback, i.e. a new way to teach and communicate with the machine. This is not trivial, since the whole machine-learning paradigm is based on data. From data and probabilities, we would need to move to associations and concepts; a new methodology is needed. Potentially, individuals could train their own AIs like pets (think Tamagotchi), or we could use large numbers of crowd workers to explain to the machine why things are the way they are (i.e., to create associations). A specific type of markup (= a communication format) would probably also be needed.
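One guess at what such a feedback markup could look like: a small structured record a crowd worker sends to the machine, linking a verdict to an explicit reason and the surrounding context. The field names below are invented for illustration; nothing here is a real standard.

```python
import json

# Hypothetical feedback record a crowd worker might submit.
feedback_record = {
    "utterance": "call me an ambulance",
    "machine_interpretation": "refer to the user as 'an ambulance'",
    "verdict": "wrong",
    "reason": "the speaker is injured; 'call me X' here means 'summon X'",
    "context": {"speaker_state": "injured", "setting": "emergency"},
}

# Serialized form of what would actually be sent to the machine.
message = json.dumps(feedback_record)
```

The essential property is that the record never carries a bare verdict: “wrong” is always accompanied by a reason and the context it applies to, matching the argument above that associations, not labels, are what teach common sense.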
By mimicking human learning, we can teach the machine common sense. This is probably the only way: since common sense does not exist beyond human cognition, it can only be learnt from humans. One could argue that this is like going back in time, to the era when machines followed rule-based programming (as opposed to being data-driven). However, I would argue that rule-based learning is much closer to human learning than the current probability-based approach, and if we want to teach common sense, we therefore need to adopt the human way.
Machine learning may be up to the task, but machine training certainly is not. The current machine-learning paradigm is data-driven; we should look into concept-driven training approaches instead.