Abstract: We propose a method for the autonomous learning of target decision strategies for coordination in the continuous cleaning domain. With ongoing advances in computer and sensor technologies, we can expect robot applications that cover large areas, which often require coordinated/cooperative activities by multiple robots. In this paper, we focus on cleaning tasks performed by multiple robots, or by agents, i.e., the programs that control the robots. We assumed situations where agents did not directly exchange deep and complicated internal information and reasoning results, such as plans, strategies, and long-term targets, for sophisticated coordinated activities, but instead exchanged superficial information, such as the locations of other agents (using the equipment deployed), for shallow coordination, and individually learned appropriate strategies by observing how much dirt/dust they had vacuumed up in multi-agent system environments. We first discuss a preliminary method of improving coordinated activities by autonomously learning to select the cleaning strategies that determine which targets to move to and clean. Although this method improved cleaning efficiency, we observed a phenomenon in which performance degraded when agents continued to learn strategies, because too many agents selected the same strategy (over-selection) through autonomous learning. In addition, the preliminary method assumed that information about which regions of the environment easily became dirty was given in advance. We therefore propose a method that extends the preliminary method with (1) environmental learning to identify which places are likely to become dirty and (2) autonomous relearning, based on self-monitoring of the amount of vacuumed dirt, to prevent strategies from being over-selected. We experimentally evaluated the proposed method by comparing its performance with those obtained by regimes of agents with a single strategy and by the preliminary method. The experimental results revealed that the proposed method enabled agents to select target decision strategies and, if necessary, to abandon their current strategies from their own perspectives, resulting in appropriate combinations of multiple strategies. We also found that the agents effectively learned where dirt tended to accumulate in the environment.
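The abstract describes the mechanism only at a high level; the following minimal Python sketch illustrates one plausible reading of it: each agent maintains value estimates over candidate target decision strategies updated from the amount of vacuumed dirt, learns a per-cell estimate of dirt accumulation (environmental learning), and discards its estimates when self-monitoring detects a sustained performance drop, as happens when many agents over-select the same strategy. The strategy names, the epsilon-greedy selection rule, the exponential-moving-average updates, and all parameter values are illustrative assumptions, not the paper's actual rules.

```python
import random

# Hypothetical strategy names; the abstract does not list the actual
# target decision strategies used in the paper.
STRATEGIES = ["nearest_dirty", "dirt_probability", "random_walk"]


class CleaningAgent:
    """Sketch of one agent's strategy selection, environmental learning,
    and self-monitored relearning loop (all update rules assumed)."""

    def __init__(self, epsilon=0.1, alpha=0.2, relearn_ratio=0.7):
        self.epsilon = epsilon              # exploration rate (assumed)
        self.alpha = alpha                  # learning rate (assumed)
        self.relearn_ratio = relearn_ratio  # self-monitoring threshold (assumed)
        self.values = {s: 0.0 for s in STRATEGIES}  # estimated dirt per step
        self.recent = 0.0                   # smoothed amount of vacuumed dirt
        self.best_recent = 0.0              # best smoothed performance so far
        self.dirt_estimate = {}             # learned per-cell dirt accumulation

    def select_strategy(self):
        # Epsilon-greedy choice over the learned strategy values.
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)
        return max(self.values, key=self.values.get)

    def update(self, strategy, vacuumed, cell):
        # Strategy value update from the observed amount of vacuumed dirt.
        self.values[strategy] += self.alpha * (vacuumed - self.values[strategy])
        # Environmental learning: running estimate of how dirty each cell gets.
        old = self.dirt_estimate.get(cell, 0.0)
        self.dirt_estimate[cell] = old + self.alpha * (vacuumed - old)
        # Self-monitoring: smoothed performance vs. the best seen so far.
        self.recent += self.alpha * (vacuumed - self.recent)
        self.best_recent = max(self.best_recent, self.recent)

    def should_relearn(self):
        # Trigger relearning when current performance drops well below its
        # past best, e.g. because many agents converged on the same strategy.
        return self.best_recent > 0 and self.recent < self.relearn_ratio * self.best_recent

    def relearn(self):
        # Abandon the current value estimates to escape over-selection.
        self.values = {s: 0.0 for s in STRATEGIES}
        self.best_recent = self.recent


# Toy usage: the vacuumed amount here is random noise standing in for the
# dirt actually sensed at the visited cell.
agent = CleaningAgent()
for step in range(1000):
    strategy = agent.select_strategy()
    vacuumed = random.random()
    agent.update(strategy, vacuumed, cell=(step % 5, step % 7))
    if agent.should_relearn():
        agent.relearn()
```

In this sketch, relearning simply resets the value estimates; the paper's actual relearning mechanism may differ, but the trigger, i.e., comparing currently monitored performance against the best performance seen so far, follows the abstract's description of self-monitoring the amount of vacuumed dirt.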