M. Xie and K. Okazaki (Japan)
Multi-agent system, Q-learning, disaster relief, action acquisition, view information
In the event of a disaster, rescuers cannot always reach injured individuals because of the risk of secondary disasters. For this reason, disaster relief robots have been investigated extensively in recent years. One system for operating a number of autonomous robots is the multi-agent system. However, this system has the disadvantage that interactions between agents are complex, which makes it difficult to assign action rules to the agents beforehand. This problem can be solved by using autonomous agents that learn their own actions. In the present study, we constructed a simplified disaster relief multi-agent system and acquired action rules by Q-learning, a typical reinforcement learning method. We then observed how the autonomous agents obtain their action rules and examined the influence of the learning situation on the system. As the number of learning iterations increased, the number of steps required by the agents to rescue an injured individual decreased, which confirmed the effectiveness of the learning. Moreover, we examined how the system was influenced by the learning situation and by the view information of the agents.
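The sketch below illustrates the kind of tabular Q-learning the abstract refers to, where the number of steps needed to reach an injured individual falls as learning proceeds. It is a minimal illustration only, not the authors' simulator: the grid size, reward values, and hyperparameters are assumptions made for the example.

```python
# Minimal tabular Q-learning sketch on a toy grid "rescue" task.
# Assumed setup (not from the paper): a single agent on a 5x5 grid must
# reach the injured person in the far corner; steps per episode shrink
# as the Q-table converges.
import random

GRID = 5
GOAL = (GRID - 1, GRID - 1)                    # location of the injured person
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # assumed hyperparameters
Q = {}                                         # Q[(state, action_index)] -> value


def q(state, a):
    return Q.get((state, a), 0.0)


def step(state, a):
    dx, dy = ACTIONS[a]
    nx = min(max(state[0] + dx, 0), GRID - 1)  # clip moves to the grid
    ny = min(max(state[1] + dy, 0), GRID - 1)
    nxt = (nx, ny)
    reward = 1.0 if nxt == GOAL else -0.01     # small step cost, reward on rescue
    return nxt, reward, nxt == GOAL


def choose(state):
    if random.random() < EPSILON:              # epsilon-greedy exploration
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q(state, a))


for episode in range(500):
    state, steps, done = (0, 0), 0, False
    while not done and steps < 200:
        a = choose(state)
        nxt, r, done = step(state, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(q(nxt, b) for b in range(len(ACTIONS)))
        Q[(state, a)] = q(state, a) + ALPHA * (r + GAMMA * best_next - q(state, a))
        state, steps = nxt, steps + 1
    if (episode + 1) % 100 == 0:
        print(f"episode {episode + 1}: rescue reached in {steps} steps")
```

In the paper's multi-agent setting, each agent would additionally condition its state on local view information of its surroundings; the update rule itself is the same.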