TY - GEN
T1 - Collective problem-solving in evolving networks
T2 - 2018 Winter Simulation Conference, WSC 2018
AU - Songhori, Mohsen Jafari
AU - García-Díaz, César
N1 - Publisher Copyright:
© 2018 IEEE
PY - 2018/7/2
Y1 - 2018/7/2
N2 - Research on collective problem-solving usually assumes fixed communication structures and explores their effects. In contrast, in real settings, individuals may modify their set of connections in search of information and feasible solutions. This paper illustrates how groups collectively search for solutions in a space in the presence of dynamic structures and individual-level learning. To that end, we built an agent-based computational model. In our model, individuals (i) simultaneously search for solutions over a complex space (i.e., an NK landscape), (ii) are initially connected to each other according to a given network configuration, (iii) are endowed with learning capabilities (through a reinforcement learning algorithm), and (iv) update (i.e., create or sever) their links to other agents according to such learning features. Results reveal the conditions under which performance differences arise, considering variations in the number of agents, space complexity, agents' screening capabilities, and reinforcement learning.
AB - Research on collective problem-solving usually assumes fixed communication structures and explores their effects. In contrast, in real settings, individuals may modify their set of connections in search of information and feasible solutions. This paper illustrates how groups collectively search for solutions in a space in the presence of dynamic structures and individual-level learning. To that end, we built an agent-based computational model. In our model, individuals (i) simultaneously search for solutions over a complex space (i.e., an NK landscape), (ii) are initially connected to each other according to a given network configuration, (iii) are endowed with learning capabilities (through a reinforcement learning algorithm), and (iv) update (i.e., create or sever) their links to other agents according to such learning features. Results reveal the conditions under which performance differences arise, considering variations in the number of agents, space complexity, agents' screening capabilities, and reinforcement learning.
UR - http://www.scopus.com/inward/record.url?scp=85062620969&partnerID=8YFLogxK
U2 - 10.1109/WSC.2018.8632328
DO - 10.1109/WSC.2018.8632328
M3 - Conference contribution
AN - SCOPUS:85062620969
T3 - Proceedings - Winter Simulation Conference
SP - 965
EP - 976
BT - WSC 2018 - 2018 Winter Simulation Conference
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 9 December 2018 through 12 December 2018
ER -