Plenary Speakers

Dietrich Dörner (Otto-Friedrich-Universität Bamberg)

Does Artificial Intelligence Exist? No, it doesn't! (We, 8:30-9:30)

Introduced by: Gerhard Strube

Does artificial intelligence exist? In my view, there is not a single example of artificial intelligence. What is intelligence? Unfortunately, psychology does not provide us with a sound definition of this term, although psychologists have been trying to develop systems for measuring intelligence for more than a hundred years, and the history of intelligence testing is regarded as a success story. In intelligence testing, subjects are normally given a number of small tasks to solve, and intelligence is accordingly treated as a collection of abilities to solve such tasks.
Systems of artificial intelligence can solve such small tasks too, but in addition they can solve "big" tasks: they can play chess at grandmaster level, tackle computationally intractable problems such as the travelling salesman problem, and so on. Unfortunately, however, artificial intelligence systems have no insight into why they solve such problems in the way they do. Therefore they cannot understand why it is better to solve one problem in this way and another one in a different way. Understanding why I use a certain strategy opens up the possibility of elaborating strategies, changing them, or abandoning them. Without this insight, a system can change strategies only by the criterion of success or failure. Certainly this criterion works, but it is by no means the best one. I shall discuss what insight means in this context and why insightful thinking is possible only through natural language, with its polysemy, its metaphors, and its unlimited levels of reasoning about the characteristics of a lower-order language.
Please note: The talk will be given in German.


Rainer Goebel (Universiteit Maastricht)

Cracking the columnar-level code in the visual hierarchy with sub-millimeter fMRI and neural network modeling (Tu, 8:30-9:30)

Introduced by: Claus-Christian Carbon

The brain is the most complex organ we know, but we still do not understand how cognitive phenomena such as object recognition, speaking and consciousness are created by billions of interacting neurons. With standard functional brain imaging (i.e. fMRI at 3 Tesla), we can routinely see specialised areas in the human brain, including “experts” for colour, visual motion, faces, words, language, planning, memory and emotions. This level of resolution reveals an amazing organisation of the brain that is similar, but not identical, across individuals. We still, however, know little about the representations coded inside specialised brain areas and about how complex features emerge from combinations of simpler features as we move from one area to the next. With high-field MRI scanners (7 Tesla and beyond), the achievable functional resolution reaches the sub-millimetre level (500–1000 microns). This is important since neurons with similar response properties cluster spatially into functional units, or cortical columns, with a lateral extent of hundreds of microns. The cortical columnar level therefore seems to be the right level at which to reveal the principles that the brain uses to code information. I propose that this columnar-level code can be "cracked" by adequately combining clever experimental designs (psychology), sub-millimetre fMRI (neuroimaging), sophisticated data analysis tools (signal analysis) and large-scale neuronal network modelling (computational neuroscience). It is my belief that, with a massive attempt to crack the columnar-level code in as many areas as possible, we will ultimately reach a deeper understanding of how mind emerges from simpler units in the brain. I will present first empirical evidence that cracking the columnar-level code is indeed possible. I will also show exciting applications of real-time brain scanning at the columnar level that will make it possible to create sophisticated brain-computer interfaces (BCIs).
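
A minimal sketch of the decoding logic behind "cracking" a neural code from response patterns is given below; the synthetic data, the trial and voxel counts, and the use of a linear classifier from scikit-learn are illustrative assumptions of mine, not the specific methods of the work described above.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Simulate trial-by-voxel response patterns for two stimulus conditions
# (hypothetical numbers; purely for illustration).
rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, size=n_trials)        # two conditions, e.g. two motion directions
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :10] += 0.8                 # condition 1 weakly biases a subset of voxels

# If a classifier decodes the condition from held-out patterns above chance,
# the spatial response pattern carries information about the stimulus.
X_train, X_test, y_train, y_test = train_test_split(
    patterns, labels, test_size=0.3, random_state=0)
clf = LinearSVC(C=1.0, max_iter=5000).fit(X_train, y_train)
print("decoding accuracy:", clf.score(X_test, y_test))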

Short Biography

After studying psychology and computer science in Marburg, Germany (1983-1988), Rainer Goebel worked on artificial neural networks for visual processing. He did his PhD at the Technical University of Braunschweig, Germany (1990-1994), under Prof. Dr. Dirk Vorberg, developing a large-scale oscillatory neural network model of scene segmentation, selective attention, and shape recognition. In 1993 he received the Heinz Maier-Leibnitz advancement award in cognitive science, sponsored by the German Minister of Science and Education, for a publication on the binding problem, and in 1994 he received the Heinz Billing award from the Max Planck Society for developing a software package for the creation and simulation of neural network models. From 1995 to 1999 he was a postdoctoral fellow at the Max Planck Institute for Brain Research in Frankfurt/Main in the Department of Neurophysiology under Prof. Dr. Wolf Singer, where he founded the functional neuroimaging group. In 1997/1998 he was a fellow at the Berlin Institute for Advanced Studies (“Wissenschaftskolleg zu Berlin”). Since January 2000 he has been a full professor of Cognitive Neuroscience in the Faculty of Psychology and Neuroscience at Maastricht University. Since 2005 he has been director of the Maastricht Brain Imaging Centre (M-BIC). From 2006 to 2009 he was chair of the Organization for Human Brain Mapping. Since 2008 he has been team leader of the “Neuromodeling and Neuroimaging” group at the Netherlands Institute for Neuroscience (NIN) of the Royal Netherlands Academy of Arts and Sciences (KNAW). His research includes high-resolution (columnar-level) functional imaging of the visual system, artificial large-scale columnar-level neural network models of visual cortex, fMRI neurofeedback, fMRI/fNIRS brain-computer interfaces, and the development of new analysis methods for neuroimaging data. The research conducted by Rainer Goebel and his team is documented in an extensive publication record. He has successfully supervised more than 30 PhD students and more than 10 post-docs, and many former students have now assumed leadership positions in academia. He has also received several prestigious grant awards to support his research, including an Advanced Investigators Grant from the European Research Council.

Mike Oaksford (Birkbeck College, University of London)

A Probabilistic Approach to the Psychology of Conditional Inference (Mo, 17:00-18:00)

Introduced by: Christoph Schlieder

A new paradigm in the psychology of reasoning has recently emerged (Over, 2009) that proposes a probabilistic approach to reasoning where previously logical approaches, e.g., mental logic and mental models, had prevailed. In this talk, I discuss one particular probabilistic approach to conditional reasoning (reasoning involving if...then statements) developed by Nick Chater and me over the last couple of decades (Oaksford & Chater, 2009). At the computational level, this theory views conditional inference as dynamic belief update by Bayesian conditionalization (Oaksford et al., 2000; Oaksford & Chater, 2007). I review the theory and some of the data it can explain before addressing recent critiques (Oberauer et al., 2006). I show that a more recent model incorporating rigidity violations explains the data better (Oaksford & Chater, 2008) and that experiments on enthymematic and explicit conditional inference reveal results consistent only with the probabilistic approach. I then discuss possible algorithmic-level implementations of the probabilistic approach in a constraint satisfaction neural network (Oaksford & Chater, 2010) and using Causal Bayesian Networks (Ali, Chater, & Oaksford, 2011). I conclude with a proposal for the cognitive architecture of reasoning based on these empirical and theoretical results and compare it to recent proposals in dual- and tri-process theory (Evans, 2010; Stanovich, 2011).
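
As a minimal sketch of the computational-level idea, written in my own shorthand rather than taken from the talk, the probabilistic approach equates the probability of a conditional with the corresponding conditional probability and treats inference from "if p then q" and "p" to "q" as Bayesian conditionalization:

\[ P_0(\text{if } p \text{ then } q) = P_0(q \mid p), \qquad P_1(q) = P_0(q \mid p) \ \text{after learning } p, \]

where the update is licensed by the rigidity condition $P_1(q \mid p) = P_0(q \mid p)$; the rigidity violations mentioned above are cases in which learning $p$ also changes this conditional probability.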

Short Biography

Mike Oaksford is Professor of Psychology and Head of Department at Birkbeck College, University of London. After receiving his PhD from the University of Edinburgh in 1989, he was a research fellow at the Centre for Cognitive Science, University of Edinburgh, before moving to the University of Wales, Bangor, as a lecturer. He was then a senior lecturer at the University of Warwick before moving to Cardiff University in 1996 as Professor of Experimental Psychology, a post he held until 2005. His research interests lie in the areas of human reasoning, argumentation, and decision making. With his colleague Nick Chater, he has developed Bayesian probabilistic models of data selection (Wason's selection task), conditional inference, and quantified syllogistic reasoning tasks. With his colleague Ulrike Hahn, he has extended this approach to the more general area of argumentation, in particular developing rational Bayesian models of classic argumentative fallacies, e.g., the arguments ad hominem and ad ignorantiam. According to this approach, reasoning "biases" and argumentative fallacies arise from applying the wrong normative model and from failing to take account of the statistics of people's normal environment. He also studies the way emotions affect and interact with reasoning and decision-making processes.

Elsbeth Stern (ETH Zürich)

Understanding what students know: How Cognitive Science can inform Educational Decisions (Mo, 9:30-10:30)

It has been repeatedly shown that teachers’ pedagogical content knowledge (PCK) has a noticeable influence on students’ learning gains. PCK is understood as the integration of content knowledge with knowledge about human learning and cognition, and it provides the basis for understanding learners’ difficulties with particular content areas and reacting to them appropriately. PCK can be characterized as seeing the content through the eyes of the learners. Teachers therefore need knowledge about human learning and cognition as it has been developed in Cognitive Science. Concepts like working memory, reasoning, procedural and declarative knowledge, or problem solving have to be made usable for teachers, whose university training predominantly focused on subject knowledge. At present, however, there is still a wide gap between what Cognitive Science and Psychology have ascertained about powerful learning environments and how these findings inform daily classroom practice. On the one hand, schools are still governed by traditions that are in conflict with well-accepted principles of human learning and cognitive functioning. On the other hand, findings from research on learning and cognition hardly ever come with clear practical implications, and may therefore be more confusing than helpful for teachers. One approach in teacher education, which has not yet met with much success, has been to teach the concepts developed in Cognitive Science in isolation. Instead of being applied to school subjects, these concepts have often been trivialized by deriving oversimplified recipes from them. In more promising approaches to teacher education, the concepts developed in Cognitive Science are translated from the very beginning into a language of didactics and merged into Pedagogical Content Knowledge. In my talk I will evaluate these attempts.


Presidential Lecture

Michael Pauen (Humboldt-Universität zu Berlin)

Cognitive Science as an Interdisciplinary Project: Limits and Perspectives (Tu, 16:45-17:30)

Cognitive science is interdisciplinary by its very design. Anyone who wants to pursue it seriously depends on the cooperation of a whole range of disciplines, including psychology and the neurosciences, computer science, linguistics, and philosophy. This cooperation, originally an experiment with an uncertain outcome, now works surprisingly well, and it receives support from many sides. My thesis is that the future prospects of interdisciplinary research are all the better the more clearly its limits are kept in view. Knowing these limits not only protects against dilettantism; it also helps us concentrate on the truly promising projects, precisely those with the best prospects.
