The Final Day of ICA4 began with two thought-provoking lectures from **Shimon Ullman** and Zaven Paré, discussing how humans extract information from complex scenes and the peculiar nature of the relationship between humans and robots. Paré then presented a provocative example of an existing robot that addresses isolation in a contemporary urban environment.
The Fellows then took a break to explore the 3D virtual campus of the Paris IAS, called Teemew Campus Alpha, in which lounges, public seminar rooms, private meeting rooms and workshops are accessible to ICA4 Fellows and Mentors. Each Academia member possesses an avatar, created from his/her own image, which is then used to navigate through the IAS cyber campus.
The Fellows then discussed their follow-up tasks and next steps. The first session of ICA4 in Paris was then wrapped up with a presentation given, this time, by the Fellows! They highlighted questions to explore further, ideas for research papers and an interdisciplinary manifesto for Artificial Intelligence, and finally reflected on the key takeaways from the entire event.
The intellectually intense series of events concluded with cocktails at the Paris IAS, marking the very beginning of the scientific adventures yet to come, as the ICA4 Fellows continue to collectively explore seemingly never-ending questions by combining various perspectives on Intelligence and Artificial Intelligence, ultimately discovering and shaping how such complex matters should, and will, be embedded within societies...
The top-down and bottom-up in visual processing
Presented by Shimon Ullman
A major question in visual processing is how humans extract information from complex scenes. Images often tell us a story! Extracting such a narrative from a complex scene is a sophisticated task. There is a great deal of cultural and situational knowledge that serves as the background for directing attentional and visual processing resources.
Deep neural networks have made substantial progress in certain types of visual processing tasks. But there are massive data requirements. Merely identifying objects and characteristics is not enough to account for relationships between objects. Humans are very fast at identifying the important features in an image that relate to structure and narrative. There is an interesting set of psychophysics studies that examine how people extract information based on the time of exposure to an image. What we learn from these studies is that there is a substantial amount of cross-talk between visual and cognitive systems. It’s not the case that the visual system just analyses a scene and then sends the processed information upstream. Rather, some visual information goes up to the cognitive centres, that information is used to direct queries carried out by the visual system, and the results are sent back up to the cognitive centres in an iterative process.
Can we model this process partially in an artificial system as an “unfolded RNN” that involves both bottom-up and top-down layers? In such a system, there are both symbolic and embedded representations. The inputs for the models are the images themselves and a set of instructions. The instructions are queries structured as vectors coded over objects and properties. The algorithm returns a correct answer when it pulls the queried information from the image. We can interpret what’s happening here symbolically as a program or sequential set of instructions for extracting information from an image.
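To make the loop concrete, here is a minimal sketch of an unfolded bottom-up/top-down pass over an image, conditioned on a query vector. The module names, sizes and query encoding are illustrative assumptions, not the presented model:

```python
# A toy unfolded bottom-up/top-down loop for query answering (illustrative
# sketch only; architecture details are assumptions, not Ullman's model).
import torch
import torch.nn as nn

class BottomUpTopDown(nn.Module):
    def __init__(self, feat_dim=64, query_dim=32, steps=3, n_answers=10):
        super().__init__()
        self.bottom_up = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        self.top_down = nn.GRUCell(feat_dim, query_dim)  # "cognitive" state update
        self.modulate = nn.Linear(query_dim, feat_dim)   # query -> feature gain
        self.readout = nn.Linear(query_dim, n_answers)   # answer to the query
        self.steps = steps

    def forward(self, image, query):
        state = query                              # the instruction vector
        feats = torch.relu(self.bottom_up(image))  # bottom-up visual features
        for _ in range(self.steps):                # unfolded iterative loop
            gain = torch.sigmoid(self.modulate(state))   # top-down signal
            gated = feats * gain[:, :, None, None]       # re-read the image
            summary = gated.mean(dim=(2, 3))             # pooled visual evidence
            state = self.top_down(summary, state)        # update cognitive state
        return self.readout(state)

model = BottomUpTopDown()
logits = model(torch.randn(1, 3, 32, 32), torch.randn(1, 32))
```

Each pass of the loop lets the cognitive state issue a new top-down query and re-read the visual features under it, mirroring the iterative cross-talk described above.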
A major challenge is combinatorial generalisation. The same structure can be instantiated in many ways (compare “A brown dog chasing a terrified kitten” with “A large cat chasing a furry squirrel”). One way to test this is to leave out object/property pairs from the training set and test on them. The bottom-up/top-down system does well on the left-out data, but the bottom-up-only system performs poorly.
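A toy illustration of the held-out-pair test (the pairs and split below are invented for illustration; the actual protocol may differ):

```python
# Hold out specific object/property pairs from training, then test whether the
# system generalises to combinations it has never seen together.
from itertools import product

objects = ["dog", "cat", "squirrel"]
properties = ["brown", "terrified", "furry"]

all_pairs = set(product(objects, properties))
held_out = {("cat", "terrified"), ("squirrel", "furry")}  # absent from training
train_pairs = all_pairs - held_out

# A compositional (bottom-up/top-down) system should still answer queries about
# ("cat", "terrified") correctly, even though that combination was never trained.
print(sorted(train_pairs))
print(sorted(held_out))
```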
There is a broader set of questions on modelling and understanding. Humans have a very high-level understanding of a concept like “drinking” that is very hard to imagine arising from a purely bottom-up model. More data, even massive amounts of data, might not be sufficient without some higher-order structure.
Upon completion of the presentation, ICA4 Fellows then asked questions related to how images are embedded in actions and within cultural contexts, the relationship between ontologies and bottom-up networks, and visual processing in non-humans and how that informs our thinking on the role of abstract reasoning in visual processing.
The role of the robot in society
Presented by Zaven Paré
Art can tell us something about society and technology. Once robots are embedded in contexts, they take on new characteristics – even if they are manufactured to be the “same,” they are changed by their environments. Different robots serve different functions in different societies. An important potential role for robots is to respond to isolation. Robots in the deep sea and in space are examples of how these systems can generate human-artificial interaction in conditions of isolation. The Gatebox product is a provocative example of an existing robot that addresses isolation in a contemporary urban environment.
Scribe: Mike Livermore
Chair: Alex Cayco Gajic
The fellows then discussed questions related to the importance of isolation, what it means for a robot to offer companionship and the social meaning of different anthropic forms being projected onto robots...
The Fellows and Mentors of ICA4 embarked on their last scientific trip to the École Normale Supérieure of Paris. Two back-to-back scientific sessions were held in the morning, both of which described current issues related to Artificial Intelligence and offered future perspectives. The first lecture was given by the Director of the École Normale Supérieure (ENS), Marc Mézard, who stressed that we need a better understanding of AI. This was followed by a talk from the famous economist Philippe Aghion, who discussed the impact of automation on the labour force and how AI can be used to boost innovation and growth by facilitating the process of creative destruction. Both mentors emphasised the need for a global set of ethical rules and regulations. The day continued with a presentation of ENS and concluded with a discussion session among the Fellows, as they began preparing their presentations for the plenary seminar on the Final Day of ICA4.
Recent Progress and Future Challenges in AI
Presented by Marc Mézard
Mézard is a theoretical physicist with a personal interest in the development of a theoretical framework to explain how AI works, and more specifically how Deep Neural Networks operate. Huge innovations have been made in the predictive power of neural networks, yet many of the conceptual foundations have been around since the 1980s. By describing the lineage of the technology, Mézard convincingly argued that these networks lack a theoretical grounding. How do they work? "We know everything about these networks, but we understand nothing", the speaker provocatively posed.
Notwithstanding the impressive technological innovations in Deep Neural Networks, Mézard raised three main issues:
- The training of the networks still requires vast amounts of data, which is impractical and a sign that the networks do not mimic the human brain. Humans can learn and generalize after being exposed to only a small set of training material.
- There is still a clear lack of understanding of what is going on in neural networks. The learning mechanism in networks is poorly understood. In other words, there is no way to explain how the machine makes decisions.
- Neural networks are extremely adept at making predictions, but they are not able to generate representations of the world. All in all, we are still very far from reaching General AI.
Mézard stressed that we need a better understanding of architecture, algorithms, and data structure, which can improve the explainability of AI. In addition, there is a need for a global set of ethical rules and regulations. We need control mechanisms and a global vision of the possible impacts of AI on our societies.
The Impact of AI on the Economy
Presented by Philippe Aghion
The speaker expanded on ways to stimulate research into AI and the development of AI in industry, while also emphasising the need for regulation. Aghion drew from his expertise on economic growth and drew parallels to the role of AI in society and global economies.
AI has already had a considerable impact on society, for example, through the impact of automation on the labour force. While AI has been instrumental in increasing productivity, economic growth has declined since the mid-2000s. Aghion's major concern is the formation of large companies, which boosted growth but also inhibited innovation. Much of the innovation related to AI is currently concentrated in such companies. To give AI more potential, we need to rethink ways to stimulate innovation while also making it sustainable. One crucial step is to re-calibrate the relationship between companies, institutions, and civic society. Rethinking funding strategies, while also considering governmental regulatory measures and increasing civic engagement with these companies, is vital to achieving sustainable growth for companies developing AI.
Both talks showed the complicated interplay between research objectives related to AI and the societal and economic embedding of the technology. Current research is devoting more attention to the challenges that Mézard raised, for example, work on Explainable AI. However, public awareness of the limitations of neural networks and the research challenges is still lagging. Increasing this awareness might help to balance polarized views on AI that often oscillate between dystopian and utopian perspectives. An interdisciplinary project, offered through the 4th Intercontinental Academia, is of utmost importance in shaping how scholars should communicate the current state of AI and its challenges to a broader audience.
Scribe: Melvin Wevers
Chair: Jakub Growiec
While at SCAI, the ICA4 Fellows plunged into the philosophy of computation with two ICA4 mentors who are philosophers of cognitive science, _Jack Copeland_ and Oron Shagrir. The day continued with a presentation of SCAI from its Director, followed by a tour of the robotics labs. Finally, Xiao-Jing Wang explained why Artificial Intelligence needs the prefrontal cortex.
Computational indeterminacy: what is your computer doing?
Presented by _Jack Copeland_ and Oron Shagrir
Both sessions focused on the notion of indeterminacy in computation. Let’s first describe what computational indeterminacy is, and then look at its applications and the philosophical questions it raises.
So, what is computational indeterminacy anyway?
Suppose you have a computer, carrying out all sorts of tasks. Electrical signals are pulsing through it, flowing across its circuits, inputs flowing in, outputs flowing out. Now, we can ask the question: what is the computer doing? Or, more precisely: which computations is it actually carrying out?
This may seem easy to answer: you simply look at the machine and find out which computations it’s carrying out, surely? Well, no. Here is where the issue of computational indeterminacy comes in, and to understand it we’ll need a tiny bit of logic.
Computers make use of what are called logic gates, or simply: gates. Each gate takes a certain set of inputs, then (based on the rules that govern the gate), it gives a certain output. So let’s take a really simple gate, with two inputs, and one output. Inputs and outputs are in the form of electrical pulses (so it can take in up to two electrical pulses as inputs, and give up to one electrical pulse as an output). Furthermore, the absence of a pulse can also be an output or an input as well.
Let’s say the gate follows this rule:
GATE-RULE:
IF I RECEIVE TWO ELECTRICAL PULSES, THEN I WILL OUTPUT ONE PULSE. IF I DO NOT RECEIVE TWO ELECTRICAL PULSES, I WILL NOT OUTPUT A PULSE.
More simply, the gate’s job is to only output a pulse if it receives two pulses as input. Otherwise, it stays totally silent. So far so good...right?
Now the core question: what is the gate computing? Someone with minimal undergraduate logic may think the answer is obvious: it’s computing AND, one of the most basic logical functions, which is characterised by the following rule:
AND-RULE:
IF I RECEIVE TWO INPUTS OF ‘TRUE’ (OR ‘1’) THEN I WILL OUTPUT ‘TRUE’ (OR ‘1’). OTHERWISE, I WILL OUTPUT ‘FALSE’ (OR ‘0’).
It initially looks pretty clear that the gate is computing AND. If you compare the AND-RULE to the GATE-RULE, they look pretty much the same. After all, to get the AND-RULE from the GATE-RULE, you just need to substitute an electrical pulse for ‘TRUE’ or ‘1’, and the absence of a pulse for ‘FALSE’ or ‘0’.
So what’s the problem? It stems from the way we map pulses to ‘TRUE’ (or ‘1’) and non-pulses to ‘FALSE’ (or ‘0’). If we interpret a pulse as TRUE, and a non-pulse as FALSE, then the gate will indeed compute the function AND (characterised by the AND-RULE, above). But that’s just a choice we made, to map pulses to TRUE and non-pulses to FALSE in these ways.
What if we did it the other way round? What if we switched the mappings round? What if we interpret a pulse as FALSE, and non-pulse as TRUE? What then? The gate will still output a pulse when (and only when) it receives two pulses. But now (since we’ve switched the mappings around), what it’s doing is outputting a ‘FALSE’ only when it receives two ‘FALSE’ inputs. In fact, what it’s now doing (as perhaps any junior logic student will describe) is computing OR. That is, it is computing this:
OR-RULE
IF I RECEIVE TWO INPUTS OF ‘FALSE’ (OR ‘0’) THEN I WILL OUTPUT ‘FALSE’ (OR ‘0’). OTHERWISE, I WILL OUTPUT ‘TRUE’ (OR ‘1’).
This is rather strange! Whether the gate is computing AND or OR is a matter of how we interpret it. It’s a matter of how we map electrical pulses to TRUE/1, or FALSE/0. There’s no correct answer to which computation the gate itself is doing, that’s a matter of how we interpret it. That’s computational indeterminacy.
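A minimal sketch makes this concrete; the pulse/no-pulse convention becomes an explicit, swappable mapping (plain illustrative code, not any particular formalism):

```python
# One physical gate, two interpretations.
def gate(in1, in2):
    """Outputs a pulse (1) only when it receives two pulses."""
    return 1 if (in1, in2) == (1, 1) else 0

to_pulse_AND = {True: 1, False: 0}  # pulse = TRUE  -> the gate computes AND
to_pulse_OR = {True: 0, False: 1}   # pulse = FALSE -> the same gate computes OR

def interpret(mapping, p, q):
    out_pulse = gate(mapping[p], mapping[q])
    from_pulse = {v: k for k, v in mapping.items()}  # invert the mapping
    return from_pulse[out_pulse]

for p in (True, False):
    for q in (True, False):
        print(p, q, "->", interpret(to_pulse_AND, p, q),  # truth table of AND
              "|", interpret(to_pulse_OR, p, q))          # truth table of OR
```

The physical behaviour of `gate` never changes; only the mapping between pulses and truth values does, and with it the function computed.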
Obviously, a computer is more than just one gate. It’s many many gates. As you add more and more gates into the system, the indeterminacy is going to multiply and multiply. Just one gate can give you OR or AND, but when you put enough gates together (like there are in a modern computer) the indeterminacy is only going to get more complex.
What is it good for?
This phenomenon was discovered by Ralph Slutz, a 20th-century computer engineer. However, its importance for philosophy and cognitive science is only now becoming apparent. Why is it so important? This is something Jack Copeland focussed on in his talk. Return to our example of a gate that could be OR or AND. Imagine you have a computer, and it has two different tasks to perform. Suppose that, in order to perform task 1, it will need an OR gate. In order to perform task 2, it needs an AND gate. Here’s the crucial question: will this computer need two different gates to do this?
No! As we saw above, whether a gate computes AND or OR is a matter of interpretation, so in a sense, the gate performs both of them. You only need one gate, and it can compute both. All you need is two different systems that interpret the gate (these are what Jack Copeland calls ‘probes’). One probe interprets the gate as an AND gate, the other interprets it as an OR gate. So you only need one gate, and it can do the job of an AND gate, and an OR gate. Your computer can perform both tasks with only one gate. You get two gates for the price of one. Gates, it turns out, are great at multi-tasking.
This could be especially useful in reducing redundancy in machine design. Rather than using huge numbers of gates, each doing one job, we might hope to get each one to multi-task. Ultimately, we might hope to end up with one large central system, with many gates all doing their thing. We might then hope that there could be multiple probes, each of which interprets this system in a different way. Maybe one of them interprets the system in a way that is relevant for one task, another in a way that is relevant for another, and so on. All of the tasks are performed by just one system, but it gets put to many different uses, by being interpreted in different ways by different probes. It’s like getting many different computations, all for the price of one!
Philosophical questions:
Let’s leave machine design to one side, and concentrate on the philosophical implications of all of this. Oron Shagrir takes the indeterminacy one step further. Above, we used the example of just two possible states (pulse and non-pulse) and two interpretations (TRUE/1 and FALSE/0). But of course, in reality, there are many states. Imagine you have three possible states. For example, suppose that instead of taking just a pulse or a non-pulse as inputs, the gate can take 3 volts, 6 volts, and 10 volts as inputs. Now we have three possible input states. And you can map these to TRUE/1 and FALSE/0 in different ways too. You could interpret 3 volts as TRUE/1, and then interpret 6 and 10 volts each as FALSE/0. Or you could go the other way, and interpret 3 or 6 volts as TRUE/1, and 10 volts as FALSE/0 (obviously there will be many more possible interpretations).
Now, by making things just a tiny bit more complicated in this way (by scrapping pulse and non-pulse and replacing them with 3, 6 and 10 volts, mapped in different ways to TRUE/1 and FALSE/0), the number of functions that a gate can compute goes up. One gate could compute not just OR or AND, but other functions as well, such as XOR (XOR is ‘exclusive or’: it outputs ‘TRUE’ only in the case that exactly one of the inputs is ‘TRUE’).
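Here is one constructed three-voltage gate (a toy of our own, not Shagrir's exact example) that computes AND under one mapping and OR under another:

```python
# A gate over three voltage levels: 3 V iff both inputs are 3 V,
# 10 V iff both inputs are 10 V, and 6 V otherwise.
def gate(v1, v2):
    if (v1, v2) == (3, 3):
        return 3
    if (v1, v2) == (10, 10):
        return 10
    return 6

interp_A = {3: True, 6: False, 10: False}  # only 3 V counts as TRUE -> AND
interp_B = {3: True, 6: True, 10: False}   # only 10 V counts as FALSE -> OR

for v1 in (3, 6, 10):
    for v2 in (3, 6, 10):
        out = gate(v1, v2)
        print(f"{v1:>2}V, {v2:>2}V -> {out:>2}V | "
              f"A: {interp_A[v1]}, {interp_A[v2]} = {interp_A[out]} | "
              f"B: {interp_B[v1]}, {interp_B[v2]} = {interp_B[out]}")
```

Under interpretation A the output column is the AND of the inputs; under B it is the OR. Richer gates and mappings can likewise be read as XOR and other functions.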
Here we encounter a problem. As we have seen, one physical system can perform many different computations at once (in our first example, the really simple gate can compute OR and AND at the same time). But which physical systems can be correctly interpreted in these ways? Where does this end? Let’s take the example of a rock. The rock is made up of atoms and molecules, each of which is engaged in complex physical activities. Could we interpret these complex physical interactions as performing computations in the same way that we interpreted our really simple gate above as performing AND and OR? Could we end up attributing computations to a rock? Surely that feels wrong. Rocks are not computers!
These are called ‘triviality results’, first introduced by philosophers such as Hilary Putnam. These are cases where you basically end up saying that everything is a computer. But we don’t want that. We want to say that laptops, smartphones (maybe brains) compute things. Rocks, sticks, stones, and sealing wax do not. We need to draw the line in the right place.
Oron’s own proposal is that we use semantics to solve this problem. The really computational states are the ones that have semantic content: the ones that carry meanings, that represent things in the world. Contrary to what a lot of philosophers have thought, computation may be a deeply semantic phenomenon...
Artificial intelligence needs the prefrontal cortex
Presented by Xiao-Jing Wang
Today’s remarkably successful AI systems roughly correspond to the biological systems of perception. By contrast, mental life and behavioural flexibility depend on other parts of the brain, especially the prefrontal cortex (PFC, often called the “CEO of the brain”). Wang discussed experimental and computational work designed to elucidate “cognitive-type” neural circuits exemplified by the PFC. In particular, he presented a recurrent network model that learns to carry out many cognitive tasks involving working memory, decision-making, categorization and the control of motor responses. This line of research motivated Wang and colleagues to investigate how the brain utilises previously acquired knowledge to accelerate learning on a new problem (learning-to-learn). Both rule-based multi-tasking and learning-to-learn are frontier topics in machine learning, thereby bridging the brain and AI.
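As a rough sketch of the rule-based multi-tasking idea, one can feed a recurrent network a task-identity cue alongside the stimulus, so a single set of weights serves many tasks. Sizes, task encoding and training details here are assumptions, not Wang's exact model:

```python
# A toy "rule-input" recurrent network for multiple cognitive tasks.
import torch
import torch.nn as nn

n_stim, n_tasks, n_hidden, n_out = 4, 3, 64, 4

class MultitaskRNN(nn.Module):
    def __init__(self):
        super().__init__()
        # input = stimulus channels + a one-hot task ("rule") cue
        self.rnn = nn.RNN(n_stim + n_tasks, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, stim, task_id):
        rule = torch.zeros(stim.size(0), stim.size(1), n_tasks)
        rule[:, :, task_id] = 1.0          # rule cue held on for the whole trial
        h, _ = self.rnn(torch.cat([stim, rule], dim=-1))
        return self.out(h)                 # response at every time step

net = MultitaskRNN()
stim = torch.randn(8, 20, n_stim)          # 8 trials, 20 time steps
logits = net(stim, task_id=1)              # same weights, different rule cue
```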
Scribe: Henry Taylor
Chair: Ithai Rabinowitch
While impossible to predict precisely, the convoluted future of AI may be presented in terms of some major challenges it will inevitably face within the upcoming decades. The ICA4 Mentors shared their views and thoughts on this fascinating, and intellectually challenging, subject over a roundtable discussion in the halls of the Paris IAS!
Professor Saadi Lahlou described the progress of sensors and actuators as a massive transformation. He further explained that there are already billions of them connected to the network, but not yet interconnected. The “Internet of Things” may operate such an interconnection by connecting perception and action systems (precisely the aforementioned sensors and actuators) to information processing systems. Consequently, this can create, on a colossal scale, the perception-reasoning-action loop that characterises an autonomous and intelligent actor. By enabling these systems to perceive, measure and evaluate the consequences of their actions and the operational quality of their judgement, these perception-action-evaluation loops will transform them into responsible, learning agents. To design, direct and control such newly established entities is a gigantic scientific problem. It is no longer theoretical: the movement is underway, on a planetary scale of almost unimaginable complexity! It is first and foremost a challenge to the sciences of matter and engineering.
But how will these new entities emerge? We are faced with an Argus with billions of eyes and ears, in addition to the capacity for action, and with infinite memory. What will be their characteristics? How shall we integrate them into human society? How will humans adapt to this new situation? This is the second challenge: a challenge to our societies created by the technology we have brought into the world. What rights, what duties, what values, what police for these entities? These questions are all the more difficult because such entities will remain in continuity with legal entities and natural persons, in technical, functional, economic and legal hybrids. In fact, they already are!
This is a challenge facing humanities and social sciences. In this spirit, it is worth mentioning the notion of the semantic Rubicon, created by Kindberg and Fox, which defines the limit between what is left to the appreciation (or decision) of the human, and what is left to the appreciation of the machine. This notion, which is little known today, will become an essential issue in many fields. In short, our civilisation is going to become hybrid, and not just for meetings. The social, economic, political and anthropological management of this evolution is a huge challenge.
The last challenge is that of global transitions: climatic, ecological, economic, and so on. Their urgency is obvious. The complexity of the problems, and the need for solutions finely adapted locally, taking into account a multitude of parameters, are challenges on the scale of these digital Argus that we are creating. How can we make problems and solutions meet? Can we find more efficient methods of governance of research and technology than the famous garbage can theory of Cohen, March and Olsen, where the problem bearers meet the solution bearers somewhat by chance? Or, to quote them: “one can view a choice opportunity as a garbage can into which various kinds of problems and solutions are dumped by participants as they are generated”. Can we steer research in directions that will facilitate productive encounters? We can hope so…
Later on, Professor Shimon Ullman argued that with the amazing rate of advances in AI, anticipating correctly what will happen on a time scale of decades is an impossible task, and potentially embarrassing in retrospect.
As he sees it, a major general challenge and an open question would be: will current AI methods reach or approach some form of "true", human-like understanding?
This problem is common to different areas of AI. There has been impressive progress in a range of visual tasks (including object recognition, segmentation, image captioning and others), but we still struggle with the question: do AI vision systems really understand the scene they are looking at?
A preliminary challenge in tackling the problem of achieving human-like understanding is developing methods for evaluating the degree of understanding obtained by AI systems. ‘Understanding’ is not all-or-none, but a matter of degree, and perhaps of different types of understanding. In any domain, understanding can range from a lack of understanding, to understanding some of the principles, to a deep and detailed understanding. In the case of vision, directions I consider relevant for evaluating understanding include the ability to obtain meaningful scene structures, to reach broad semantic generalization, and to justify conclusions. However, creating useful systematic methods for evaluating understanding along such directions is a complex open challenge. As evaluation methods develop, experience will show that current AI methods do not by themselves give rise to the emergence of human-like understanding. We will then face a fundamental challenge: identifying the missing ingredients, and finding methods that lead to deeper, more human-like understanding. The process will be gradual. Indeed, reaching deep and detailed human-like understanding will prove to be a challenge for decades...
Professor Karen Yeung identified two main challenges: achieving the legal, cultural and organisational frameworks that will ensure appropriate (1) governance of data and (2) governance of AI, in ways that will be widely accepted as protecting individuals and organisations, and that broadly align with the core values upon which liberal democratic societies are grounded. In such ways, individuals, groups, and the community at large could then benefit from the value of data-driven technologies (including AI). She then further explained:
The data governance challenge:
Achieving this will be one of the greatest challenges of the new digitised and "datafied" era into which we are transitioning, given the unique properties of digital data: it is permanent and instantaneously reproducible at scale without loss of quality. Hence the marks that we now leave as digital traces have vastly different implications from the paper trails of an analogue world. Compare, for example, the criminal record stored on a little card kept in a local police office that testifies to the exuberance of youth, or the single naked Kodak photo sent by a person to her lover. At the same time, we have no reliable and trustworthy institutional mechanisms for ensuring and guaranteeing the provenance, accuracy and conditions under which data-sets have been collected, nor that they are being utilised to create algorithmic models which are 'fit for purpose', so that the resulting predictions appropriately and meaningfully represent the phenomena their creators claim they represent.
The governance of AI challenge:
The highly sophisticated, opaque yet powerful capacities of these techniques, and their embedding into myriad social applications, have already demonstrated adverse consequences - for individuals, groups and communities. No doubt there will be many that we have not yet properly identified or understood, particularly as new applications emerge and new vulnerabilities and problems are created. These techniques are increasingly embedded into complex socio-technical systems that display emergent properties which are unstable and therefore uncertain. While these technologies have already delivered valuable benefits at the individual and collective level, many promises are made about their capabilities: yet whether those promises will translate into real-world benefits of the kind promised remains a very open question. History has shown as much on many occasions. This is particularly problematic concerning AI applications that are dependent upon data collected through the continuous and pervasive 'uberveillance' of individual behaviour and activity, already used in ways that are contrary to the interests, welfare and autonomy of persons.
Professor Ada Yonath had a rather different perspective. She believes that recent years have witnessed a major, highly astonishing, and rather unexpected revolution in understanding and conceptualizing life, owing to the introduction of AI.
Proteins are essential to life, supporting practically all biological functions. They are large, complex molecules, made up of chains built from 21 amino acids that fold based on their sequence and local conditions. A protein’s shape and its potential alterations enable its function, which depends largely on its unique 3D structure and mobility. Thus, the characteristics of specific proteins enable not only basic functions, such as digestion, breathing and growth, but also molecular docking, neural networks, intellectual interactions, etc. Figuring out proteins' shapes has been done experimentally since the mid-20th century, by nuclear magnetic resonance, X-ray crystallography and the newer cryo-electron microscopy - all time-consuming, extensive undertakings heavily dependent on laboratory techniques.
In 1994, when fewer than 1,000 protein structures were known, Professors John Moult and Krzysztof Fidelis, recognising this experimental shortcoming, founded CASP, a biennial blind Critical Assessment of protein Structure Prediction, to catalyze research into the structure-function aspects of biology and medicine, based on learning how to discover the delicate balance of many weak forces that dictates the global minimum of protein structures, their dynamics and function. In this effort, protein structures were predicted and compared to experimental structure determinations performed elsewhere in parallel; the average success rate, for over two decades until 2016, was around 40%. However, the numerous functionally meaningful conformations of proteins with known structures (~180,000), alongside the enormously large number of proteins with still unknown structures (>2×10⁶), still pose a grand challenge in biology.
Around 5 years ago, an extraordinary academic achievement based on AI, one that influences the entire life-science field, was released. It is called AlphaFold: a system that generates 3D models of proteins, distributed as an open-source package that provides an implementation of intra-protein contact prediction. Its future is extremely exciting, with a success rate of almost 100%.
Nevertheless, this impressive achievement is not supposed to stand alone. Among the main challenges ahead for Artificial Intelligence and Machine Learning techniques are executing molecular docking, designing therapeutics and drug discovery. It also has huge implications in advanced areas, such as modelling complex diseases, e.g. diabetes and Alzheimer’s. I think that in less than ten years this will be the way that complex biology is understood and even applied. And, of course, by aggregation from the vast available literature, many other structural-biology applications will follow, from expression-data variation and the creation of complexes to AI-based information on molecular-dynamics simulations. In fact, this breakthrough demonstrates the impact that AI can have on scientific discovery and its potential to dramatically accelerate progress in some of the most fundamental fields. Thus, in addition to developing treatments for diseases, it will provide scientific tools for many of the world’s greatest challenges, like local and global environmental issues, including finding enzymes that break down industrial waste.
A rather interesting view came from the Nobel prize winner, Professor Robert Aumann, who stated that the first challenge, which we already see with information technology, is the continuous and excessive rate of change and updates driven by technology. We are continuously forced to update and upgrade our devices, not because of our needs, but because the versions have changed. Usually we have no choice: we must take it all without discussion or explanation! A challenge for the development of AI is to avoid this excessive, technology-driven pace of top-down change. The changes in the applications of AI in the public domain should be driven by actual needs, not by technology.
The second challenge is to enable a discussion between users and AI systems. At the moment, it is not possible to discuss with an AI why and how its decisions are taken. The decision process is opaque, as is the data it is based on. This is forced on us. We must develop a language to discuss, argue and negotiate with AIs. This will give users a say in the interventions of AI in their lives.
Professor Robert Zatorre identified two complementary mechanisms for animals in a broad sense. One is related to perception and action – the goal is to create internal models of the world in order to operate on it. The second mechanism, which receives its inputs from the first, is related to reward – the assignment of value to items in the environment and to actions performed on those items, to enhance survival or fitness. Value can be based on direct homeostatic needs (such as eating to satisfy hunger) or on more abstract needs (such as acquiring information to satisfy curiosity). In humans, this mechanism also leads to what I might call "aesthetic intelligence" (appreciation of beauty, creation of symbolic landscapes in art, and in music the manipulation of abstract sound properties to create tension, resolution, and pleasure).
AI systems seem to focus a great deal on the first of these mechanisms. What kind of work has been done or could be done to implement reward-like processes in AI agents? What problems might it solve? What problems might it create?
Scribe: Oksana Stalnov
Chair: Evandro Cunha
Although the vast field of AI faces prominent challenges, some of the best minds in the world have come together at the Paris IAS to, in a collective effort established through ICA4, try to identify the important questions that must be asked, and perhaps also point to ways of answering them...
By the end of the 5th day of ICA4, the weekend had arrived, providing a perfect opportunity to rest and explore Paris. Mentors and Fellows were treated by the Paris IAS to a Seine river cruise at sunset: a Parisian conclusion to the week!
The Fellows embarked on their first scientific trip for ICA4 and were hosted by **École Normale Supérieure Paris-Saclay** throughout Day 4. The sessions at Saclay included two thought-provoking talks by **Xiao-Jing Wang** and Jay McClelland, both of which touched upon the principles underlying cognitive behaviours, as well as the difference between human and machine intelligence. These were followed by a symposium on AI at University Paris-Saclay, which was in turn followed by a half-day event with multiple workshops in which ICA4 mentors discussed major advances and issues surrounding AI with other world-class researchers such as Stanislas Dehaene. Finally, the intellectually intense day came to an end with a talk in which Zaven Paré raised important questions regarding how we will interact with AI algorithms and intelligent robotics in the decades to come...
Efforts to understand the computational principles underlying cognition
Presented by Xiao-Jing Wang
Deep neural networks, despite their recent success, differ from human cognition because they have no internal mental life - instead, they act as complex, nonlinear input-output functions. In humans, the prefrontal cortex (PFC) is known to be crucial for cognitive functions such as working memory, decision making, and executive function. An early avenue of this research involved understanding how persistent neural activity may underlie working memory by sustaining stimulus information in the brain after the sensory cue has disappeared. Such persistence is linked to recurrent connectivity, which is lacking in most deep networks. Wang described his previous research using spiking networks and tools from dynamical systems to understand the attractor dynamics behind this form of memory. In the second half of the talk, he showcased his more recent work, which uses recurrent neural networks (RNNs) as a form of model organism to probe how the PFC may perform multiple cognitive tasks simultaneously. These RNNs can then be used to address questions such as whether the PFC encodes cognitive building blocks in a compositional manner, similar to the psychological concept of schema.
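A toy rate model illustrates the attractor idea behind persistent activity: with strong enough self-excitation, a transient cue switches a population into a high-activity state that outlasts the stimulus. This is an illustrative sketch, not the spiking-network model from the talk; the parameters are arbitrary:

```python
# Bistable population-rate dynamics as a minimal working-memory attractor.
import numpy as np

dt, tau = 1.0, 10.0                   # time step and time constant (ms)
w, theta = 2.0, 0.2                   # recurrent weight and activation threshold
f = lambda x: np.tanh(np.maximum(x - theta, 0.0))  # thresholded rate function

r, trace = 0.0, []
for t in range(300):
    cue = 0.8 if 50 <= t < 100 else 0.0        # transient sensory input
    r += dt / tau * (-r + f(w * r + cue))      # rate dynamics
    trace.append(r)

print(f"rate before cue:       {trace[40]:.3f}")   # baseline (~0)
print(f"rate after cue offset: {trace[250]:.3f}")  # stays high: persistent activity
```

Because the recurrent drive `w * r` can sustain the rate once it is high, the memory of the cue persists after the input is removed, which is the signature of an attractor state.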
A different distinction between human intelligence and AI...
Presented by James McClelland
While the latter (in particular machine learning algorithms) learns from statistics over large-scale input data, humans learn to learn from explanations structured by culturally invented systems. Indeed, humans often fail to perform in the systematic ways we would expect if structure were built into our cognitive functionality. But, McClelland points out, simply building in structure, as proposed by the pioneers of GOFAI, limits flexibility. This structure, McClelland argued, is instead built by culture. For example, he described a classic study by Scribner and Cole in 1973 which showed that non-Western cultures often lack a concept of absolute number and tend to classify objects based on concrete situations rather than abstract category membership. These authors proposed that Western education creates a context in which certain abstract relational concepts are learned, consistent with McClelland’s later work correlating sudoku puzzle performance with mathematical education level. McClelland closed by reiterating that AI learns from examples while humans learn from explanations, and that this explanation-based learning (rather than built-in structure) may underlie our propensity for one-shot learning.
Upon completion of the talks by the ICA4 Mentors, Paris-Saclay hosted a half-day event with multiple workshops in which ICA4 mentors and Paris-Saclay researchers discussed major advances and issues surrounding AI. Stanislas Dehaene presented a series of fMRI, MEG, and behavioural evidence that humans use symbolic and recursive strategies on prediction tasks with complex sequences, as compared with monkeys, which seem to use a picture-based strategy. In a session focusing on AI and ethics, Paola Tubaro revealed the hidden human workers who provide the hand-labelled training data for products such as Siri: companies need a cheap workforce speaking the same language, ultimately reproducing historic colonial patterns.
Finally, the intellectually intense day came to an end with a talk in which Zaven Paré discussed his artistic works based on electronic marionettes and his collaborations with robotics specialists in Japan. Paré’s conception of automaton-centred theatre enchants audiences while challenging our tendency towards anthropomorphisation. This raises important questions regarding how we will interact with AI algorithms and intelligent robotics in the decades to come...
Scribe: Alex Cayco Gajic
Chair: Diego Frassinelli