Many machine intelligence algorithms are inspired by natural systems. Genetic algorithms, neural networks, reinforcement agents, and ant colonies are, to mention some examples, well-established methodologies motivated by evolution, the human nervous system, behavioral psychology, and animal intelligence, respectively. Learning in such natural contexts is generally slow. Genetic changes, for instance, take generations to introduce a new direction into biological development. Behavior adjustment based on evaluative feedback, such as reward and punishment, likewise requires prolonged learning time.
Compared to the progress rate of natural systems, social revolutions are extremely fast changes in human society. Simply expressed, they occur to establish the opposite circumstances. A revolution is defined as “… a sudden, radical, or complete change … a fundamental change in political organization; … a fundamental change in the way of thinking about or visualizing something” [Webster's Dictionary]. Whether in a scientific, economic, cultural, or political sense, revolutions are based on sudden and radical change.
In many cases, learning begins at a random point. We begin, so to speak, from scratch and move, hopefully, toward an existing solution. The weights of a neural network are initialized randomly, the parameter population in genetic algorithms is configured randomly, and the action policy of reinforcement agents is initially based on randomness, to mention some examples. If the random guess is not far from the optimal solution, it can result in fast convergence.
However, it is natural to expect that if we begin with a random guess that is very far from the existing solution, say, in the worst case, in the opposite location, then the approximation, search, or optimization will take considerably more time or, in the worst case, become intractable. Of course, in the absence of any a priori knowledge, we cannot make the best initial guess. Logically, then, we should look in all directions simultaneously or, more concretely, also in the opposite direction.
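To make this concrete: for a point x in an interval [a, b], its opposite is defined as a + b - x [1]. The following minimal Python sketch (our illustration, not code from the cited papers; the objective function, interval, and function names are hypothetical) evaluates a random initial guess together with its opposite and starts the search from whichever scores better:

    import random

    def opposite(x, a, b):
        # Opposite point of x in the interval [a, b]: a + b - x, following [1].
        return a + b - x

    def opposition_based_init(f, a, b):
        # Draw a random guess, compute its opposite, and keep whichever
        # has the lower objective value as the starting point.
        x = random.uniform(a, b)
        x_opp = opposite(x, a, b)
        return x if f(x) <= f(x_opp) else x_opp

    # Hypothetical objective: squared distance to an unknown optimum at 9.0.
    f = lambda x: (x - 9.0) ** 2
    x0 = opposition_based_init(f, a=0.0, b=10.0)

If the random guess happens to fall far from the optimum, its opposite may land correspondingly close to it, so at the cost of a single extra function evaluation the risk of a very poor starting point is reduced.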
Learning based on opposition was introduced in [1], where opposition-based extensions of genetic algorithms, neural networks, and reinforcement learning were also proposed. Opposition-based reinforcement learning has been investigated further in [2] and [3], and differential evolution has been extended with anti-chromosomes in [4] and [5].
References
[1] H.R. Tizhoosh, Opposition-Based Learning: A New Scheme for Machine Intelligence. Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA 2005), Vienna, Austria, vol. I, pp. 695-701, 2005.
[2] H.R. Tizhoosh, Reinforcement Learning Based on Actions and Opposite Actions. ICGST International Conference on Artificial Intelligence and Machine Learning (AIML-05), Cairo, Egypt, December 19-21, 2005.
[3] H.R. Tizhoosh, Opposition-Based Reinforcement Learning. To appear in the Journal of Advanced Computational Intelligence and Intelligent Informatics.
[4] S. Rahnamayan, H.R. Tizhoosh, M.M. Salama, Opposition-Based Differential Evolution Algorithms. 2006 IEEE Congress on Evolutionary Computation, to be held as part of the IEEE World Congress on Computational Intelligence, Vancouver, Canada, July 16-21, 2006.
[5] S. Rahnamayan, H.R. Tizhoosh, M.M. Salama, Opposition-Based Differential Evolution for Optimization of Noisy Problems. 2006 IEEE Congress on Evolutionary Computation, to be held as part of the IEEE World Congress on Computational Intelligence, Vancouver, Canada, July 16-21, 2006.