[Neural Networks and Evolution of Cooperation]

Date: Jan 2011
Tags: Java :: AI :: A-Life :: neural networks

The paper investigates the artificial evolution of cooperation in the Iterated Prisoner's Dilemma using a number of player implementations. Existing strategy-encoding and neural network models are compared with an action-discriminating neural network developed for this paper.
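As a rough illustration of what a neural-network IPD player looks like, the sketch below shows a single-layer perceptron that maps the opponent's recent moves to a cooperate/defect decision. The class name, input encoding, and weight values are illustrative assumptions, not the paper's actual model; in an evolutionary setting the weights would be evolved rather than hand-set.

```java
// Hypothetical sketch of a neural-network IPD player: a single-layer
// perceptron mapping the opponent's last moves (1 = cooperate, 0 = defect)
// to a cooperate/defect decision. Weights here are arbitrary placeholders;
// in the paper's setting they would be subject to artificial evolution.
public class NeuralPlayer {
    private final double[] weights;
    private final double bias;

    public NeuralPlayer(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    // Returns true (cooperate) when the weighted sum of inputs exceeds 0.
    public boolean decide(double[] opponentHistory) {
        double sum = bias;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * opponentHistory[i];
        }
        return sum > 0;
    }
}
```

With weights {1.0, 1.0} and bias -0.5, this player cooperates only if the opponent cooperated in at least one of the last two rounds, i.e. it behaves like a forgiving reciprocator.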

Evaluation is performed in terms of the number of generations needed to reach a desired cooperation level, as well as the nature of the evolved strategies. Examples are given in which the action-discriminating model evolved the most beneficial strategies.
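To make the game being evolved concrete, here is a minimal sketch of the Iterated Prisoner's Dilemma with the standard payoff values (T=5, R=3, P=1, S=0), playing tit-for-tat against an unconditional defector. All names are illustrative; this is not code from the paper.

```java
// Minimal IPD sketch: standard payoffs and a fixed-strategy match.
// Class and method names are illustrative, not taken from the paper.
public class IpdSketch {
    // Standard payoff values: T=5 (temptation), R=3 (reward),
    // P=1 (punishment), S=0 (sucker's payoff).
    static int payoff(boolean myCooperate, boolean otherCooperate) {
        if (myCooperate && otherCooperate) return 3;  // R: mutual cooperation
        if (myCooperate)                   return 0;  // S: exploited
        if (otherCooperate)                return 5;  // T: defect vs cooperator
        return 1;                                     // P: mutual defection
    }

    // Play n iterated rounds: tit-for-tat (opens cooperatively, then mirrors
    // the opponent's last move) against an unconditional defector.
    static int[] titForTatVsDefector(int n) {
        int tftScore = 0, defScore = 0;
        boolean tftMove = true;            // tit-for-tat opens cooperatively
        for (int i = 0; i < n; i++) {
            boolean defMove = false;       // always defect
            tftScore += payoff(tftMove, defMove);
            defScore += payoff(defMove, tftMove);
            tftMove = defMove;             // mirror opponent's last move
        }
        return new int[] { tftScore, defScore };
    }

    public static void main(String[] args) {
        int[] scores = titForTatVsDefector(10);
        System.out.println("TFT: " + scores[0] + ", Defector: " + scores[1]);
        // Over 10 rounds: TFT is exploited once (0 vs 5), then both defect
        // for 9 rounds (1 each), giving TFT: 9, Defector: 14.
    }
}
```

An evolutionary run replaces the fixed strategies with a population of players whose parameters are selected on accumulated payoff, and the cooperation level is the fraction of cooperative moves per generation.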

Keywords: Iterated Prisoner's Dilemma, evolution of cooperation, strategies, interaction, neural networks

