In this paper, we study the compression of a target two-layer neural network with N nodes into a compressed network with M
The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions. In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are ``semantically'' equivalent to the original state by adding meaningless moves to the game, and an adversarial state is a perturbed state leading to an undoubtedly inferior action, one that is obvious even to Go beginners. However, searching for adversarial states is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. This method can also be extended to other board games such as NoGo. Experimentally, we show that the actions taken by both the Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, on 58\% of the AlphaGo Zero self-play games, our method can make the widely used KataGo agent, using 50 MCTS simulations, play a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90\% of the examples indeed led the Go agent to play an obviously inferior action. Our code is available at \url{this https URL}.
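To make the attack idea concrete, the following is a minimal, hypothetical sketch (not the paper's released code) of the search it describes: enumerate one- or two-stone "meaningless" perturbations from a pre-pruned candidate set, query the attacked agent on each perturbed state, and keep perturbations whose resulting move is clearly losing according to a stronger reference evaluator. The function names, the board encoding, and the losing threshold are all assumptions for illustration.

```python
# Hypothetical sketch of the adversarial-state search described in the abstract.
from itertools import combinations
from typing import Callable, Iterable, List, Tuple

State = Tuple[Tuple[int, ...], ...]  # board grid, 0 = empty, 1/2 = players (assumed encoding)
Move = Tuple[int, int]

def find_adversarial_states(
    state: State,
    candidate_stones: Iterable[Move],                 # legal, strategically irrelevant placements (pre-pruned)
    perturb: Callable[[State, Tuple[Move, ...]], State],
    agent_move: Callable[[State], Move],              # PV-NN or MCTS agent under attack
    reference_value: Callable[[State, Move], float],  # stronger evaluator: win prob. after playing the move
    losing_threshold: float = 0.1,                    # assumed cutoff for "clearly losing"
    max_stones: int = 2,
) -> List[Tuple[Tuple[Move, ...], Move]]:
    """Return (perturbation, inferior_move) pairs that mislead the agent."""
    adversarial = []
    candidates = list(candidate_stones)
    for k in range(1, max_stones + 1):
        for stones in combinations(candidates, k):    # search space already reduced by the caller
            perturbed = perturb(state, stones)
            move = agent_move(perturbed)
            if reference_value(perturbed, move) < losing_threshold:
                adversarial.append((stones, move))
    return adversarial
```

The key design point the abstract emphasizes is the reduction of the candidate set: without pruning, the number of one- or two-stone perturbations on a 19x19 board is far too large to query an MCTS agent on each one.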
Utilizing Deep Reinforcement Learning (DRL) for Reconfigurable Intelligent Surface (RIS) assisted wireless communication has been extensively researched. However, existing DRL methods either act as simple optimizers or only solve problems whose concurrent Channel State Information (CSI) is represented in the training data set. Consequently, solutions for RIS-assisted wireless communication systems in time-varying environments remain relatively unexplored. Yet communication problems should be considered under realistic assumptions; for instance, when the channel is time-varying, the policy obtained by reinforcement learning should remain applicable in situations where the CSI is not well represented in the training data set. In this paper, we apply Meta-Reinforcement Learning (MRL) to the joint optimization of active beamforming at the Base Station (BS) and phase shifts at the RIS, motivated by MRL's ability to extend DRL from solving a single Markov Decision Process (MDP) to solving multiple MDPs. We provide simulation results comparing the average sum rate of the proposed approach with those of selected prior approaches in the literature. Our approach improves the sum rate by more than 60% under the time-varying CSI assumption while maintaining the advantages of typical DRL-based solutions. Our results highlight the potential of MRL-based designs for RIS-assisted wireless communication systems under realistic environment assumptions.
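As a rough illustration of how an MRL formulation differs from plain DRL here, below is a hypothetical, first-order MAML/Reptile-style meta-training sketch (not the paper's implementation): each "task" is an MDP induced by a different channel realization, and the meta-parameters are trained so that a few inner gradient steps adapt the joint beamforming/phase-shift policy to unseen CSI. The function names, the policy parameterization, and the learning rates are assumptions for illustration.

```python
# Hypothetical first-order meta-RL loop over CSI "tasks" for RIS-assisted beamforming.
import numpy as np
from typing import Callable, List

def meta_train(
    init_params: np.ndarray,                                       # joint BS-beamforming / RIS phase-shift policy parameters
    sample_csi_tasks: Callable[[int], List[object]],               # draws tasks, each with a distinct channel realization
    policy_gradient: Callable[[np.ndarray, object], np.ndarray],   # gradient of expected sum rate for one task
    inner_lr: float = 1e-2,
    outer_lr: float = 1e-3,
    tasks_per_batch: int = 8,
    meta_iters: int = 1000,
) -> np.ndarray:
    """Return meta-parameters that adapt quickly to unseen CSI realizations."""
    theta = init_params.copy()
    for _ in range(meta_iters):
        meta_grad = np.zeros_like(theta)
        for task in sample_csi_tasks(tasks_per_batch):
            # Inner loop: one task-specific adaptation step on this channel realization.
            adapted = theta + inner_lr * policy_gradient(theta, task)
            # Outer loop: accumulate a first-order meta-gradient evaluated at the adapted parameters.
            meta_grad += policy_gradient(adapted, task)
        theta += outer_lr * meta_grad / tasks_per_batch
    return theta
```

At deployment, the same inner-loop adaptation is run on the currently observed CSI, which is what lets the policy cope with channel realizations not represented in the training data set.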