Adaptive Search and the Management of Logistics Systems: Base Models for Learning Agents
2001 ◽ Vol 52 (5) ◽ pp. 601-602
Author(s): S Salhi
1998 ◽ Vol 49 (9) ◽ pp. 966-975
Author(s): L P Bertrand ◽ J H Bookbinder

2004 ◽ Vol 23 (1) ◽ pp. 15-27
Author(s): Jason C.H. Chen ◽ Binshan Lin ◽ Lingli Li ◽ Patty S. Chen

Chinese businesses began with a weak foundation in the intense world trade environment, like many other companies that grew out of developing countries. How were these Chinese businesses able to compete with foreign competitors armed with strong capital structures and efficient communication networks? Haier is an excellent example of how Chinese companies have successfully adapted to and prospered in the global economy, using information technology as a strategic weapon to improve its competitive advantage and, further, to create collaborative advantage. Haier's growth is miraculous: in less than two decades, it grew from a state-owned refrigerator factory into an innovative international giant. The company has become China's first global brand and the fifth-largest appliance seller in the world. What are the secrets of Haier's success? Many researchers have conducted extensive studies of Haier's management and found that the key is its Management Information Systems, such as the e-Commerce and logistics systems that improve business operations among its suppliers, customers, and business partners. This article recounts Haier's journey to excellence through its MIS and analyses the company's business model, the market chain management model.


Biomimetics ◽ 2021 ◽ Vol 6 (1) ◽ pp. 13
Author(s): Adam Bignold ◽ Francisco Cruz ◽ Richard Dazeley ◽ Peter Vamplew ◽ Cameron Foale

Interactive reinforcement learning methods utilise an external information source to evaluate decisions and accelerate learning. Previous work has shown that human advice can significantly improve learning agents' performance. When evaluating reinforcement learning algorithms, it is common to repeat experiments as parameters are altered or to gain a sufficient sample size. In this regard, requiring human interaction every time an experiment is restarted is undesirable, particularly when the expense of doing so can be considerable. Additionally, reusing the same people for the experiment introduces bias, as they will learn the behaviour of the agent and the dynamics of the environment. This paper presents a methodology for evaluating interactive reinforcement learning agents by employing simulated users. Simulated users allow human knowledge, bias, and interaction to be simulated. The use of simulated users allows the development and testing of reinforcement learning agents, and can provide indicative results of agent performance under defined human constraints. While simulated users are no replacement for actual humans, they do offer an affordable and fast alternative for evaluating assisted agents. We introduce a method for performing a preliminary evaluation utilising simulated users to show how performance changes depending on the type of user assisting the agent. Moreover, we describe how human interaction may be simulated, and present an experiment illustrating the applicability of simulated users in evaluating agent performance when assisted by different types of trainers. Experimental results show that the use of this methodology allows for greater insight into the performance of interactive reinforcement learning agents when advised by different users. The use of simulated users with varying characteristics allows for evaluation of the impact of those characteristics on the behaviour of the learning agent.
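To make the idea of a parameterised simulated user concrete, the following is a minimal Python sketch. The abstract does not include an implementation, so the names here (SimulatedTrainer, InteractiveQLearner, availability, accuracy) are illustrative assumptions rather than the authors' code: a simulated trainer offers advice with a fixed availability and accuracy, and a tabular Q-learning agent follows that advice when it is given.

import random
from collections import defaultdict


class SimulatedTrainer:
    """Stands in for a human advisor (illustrative, not from the paper).

    `availability` is the probability the trainer offers advice on a given
    step; `accuracy` is the probability that offered advice matches an
    oracle policy rather than being a random (wrong) action.
    """

    def __init__(self, oracle_policy, n_actions, availability=0.5, accuracy=0.8):
        self.oracle_policy = oracle_policy   # callable: state -> "correct" action
        self.n_actions = n_actions
        self.availability = availability
        self.accuracy = accuracy

    def advise(self, state):
        if random.random() > self.availability:
            return None                                  # trainer stays silent
        if random.random() < self.accuracy:
            return self.oracle_policy(state)             # correct advice
        return random.randrange(self.n_actions)          # erroneous advice


class InteractiveQLearner:
    """Tabular Q-learning agent that follows advice when it is offered and
    otherwise acts epsilon-greedily on its own value estimates."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state, advice=None):
        if advice is not None:
            return advice                                # defer to the trainer
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)      # explore
        return max(range(self.n_actions), key=lambda a: self.q[state][a])

    def update(self, state, action, reward, next_state):
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])

Sweeping availability and accuracy (for example, a frequent but error-prone trainer versus an infrequent but reliable one) reproduces the kind of comparison between trainer types the abstract describes, and the sweep can be re-run as often as needed without recruiting new participants.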

