multi-agent-coordination 0.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2022 ankur-tutlani

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,171 @@
Metadata-Version: 2.1
Name: multi-agent-coordination
Version: 0.1
Summary: Identification of strategic choices under multi-agent systems, coordination game and social networks
Home-page: https://github.com/ankur-tutlani/multi-agent-coordination
Download-URL: https://github.com/ankur-tutlani/multi_agent_coordination/archive/refs/tags/v_01.tar.gz
Author: ankurtutlani
Author-email: ankur.tutlani@gmail.com
License: MIT
Keywords: game theory,evolutionary game,social norms,multi-agent systems,evolution,social network,computational economics,simulation,agent-based modeling,computation
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Build Tools
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.7
Description-Content-Type: text/markdown
License-File: LICENSE.txt

# Evolution of Strategic Choices under Coordination and Social Networks

This library helps in understanding how specific actions or choices can evolve into the dominant choices when agents do not have any specific preferences to begin with and instead form their opinions through repeated interactions with other agents. Agents have an incentive to coordinate with other agents and are connected to each other through a specific social network. The strategies or actions which satisfy the norm criteria are potential candidates for becoming the norm. In simple terms, a norm is something which is played by a larger number of agents and for longer periods of time.

## How to use it?

```bash
# Install
pip install multi-agent-coordination
```

```python
# Import
from multi_agent_coordination import simulation

# Execute
simulation.network_simulations(
    iteration_name="test",
    path_to_save_output="C:\\Users\\Downloads\\",
    num_neighbors=2,
    num_agents=20,
    prob_edge_rewire=0.5,
    grid_network_m=5,
    grid_network_n=4,
    name_len=4,
    num_of_trials=20,
    perturb_ratio=0.05,
    fixed_agents=[],
    prob_new_name=0,
    network_name="complete",
    random_seed=96852,
    function_to_use="perturbed_response1",
    norms_agents_frequency=0.7,
    norms_time_frequency=0.5,
)
```

## Function parameters
The following parameters are required to be specified. The parenthesis at the end of each item shows the required data type or the possible values. In what follows, we use agents' strategies, actions, choices, responses and names proposed interchangeably; they all carry the same meaning in this context.

1. iteration_name: Iteration name. (String)
2. path_to_save_output: Path to save output files. (String)
3. num_neighbors: Number of neighbours. (Integer)
4. num_agents: Number of agents. (Integer)
5. prob_edge_rewire: Small-world network parameter. Probability of rewiring existing edges or adding new edges. (Float)
6. grid_network_m: 2-dimensional grid network parameter. Number of nodes. (Integer)
7. grid_network_n: 2-dimensional grid network parameter. Number of nodes. (Integer)
8. name_len: Length of the random names created by the game. (Integer)
9. num_of_trials: Number of trials, i.e. how long the game should run. (Integer)
10. perturb_ratio: Probability of agents taking a random action. (Float)
11. fixed_agents: Agents assumed to be fixed, e.g. [2,3] means we want agents 2 and 3 to be fixed. (List)
12. prob_new_name: Probability of an agent suggesting a new name at any time during the game. (Float)
13. network_name: Specify one of ["small_world1","small_world2","small_world3","complete","random","grid2d"]. A detailed explanation is provided below. (String)
14. random_seed: Random seed value to reproduce results. (Integer)
15. function_to_use: Specify one of ["perturbed_response1","perturbed_response2","perturbed_response3","perturbed_response4"]. A detailed explanation is provided below. (String)
16. norms_agents_frequency: Norm condition. Minimum percentage of agents required to propose the same name at any given time. Specify a number from 0 (0% of agents) to 1 (100% of agents). (Float)
17. norms_time_frequency: Norm condition. Minimum percentage of time periods in which agents are required to propose the same name. Specify a number from 0 (no time period) to 1 (all time periods). (Float)

<br />

For the "function_to_use" parameter, the values listed above mean the following.

1. perturbed_response1: Agents select the best response (1-"perturb_ratio")*100% of the time from among the most frequently used strategies. If there is more than one such strategy, agents pick one of them at random. Agents select a random strategy ("perturb_ratio")*100% of the time from among the strategies which are not the most frequently used; here too, if there is more than one such strategy, agents pick one of them at random.
2. perturbed_response2: An agent selects a strategy randomly according to the relative weights of the different strategies, where the weights are the percentage of times the respective strategy has been used by opponents in the past.
3. perturbed_response3: Same as "perturbed_response1", except that the agent selects a random strategy ("perturb_ratio")*100% of the time from all possible strategies, not only from those which were not used most frequently by its opponents.
4. perturbed_response4: An agent selects the best response 100% of the time from among the most frequently used strategies. If there is more than one such strategy, the agent picks one of them at random. There is no perturbation element ("perturb_ratio" is treated as zero).

In all four response functions, agents propose a new name at any point during the game with probability "prob_new_name".

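The selection logic of "perturbed_response1" can be sketched as below. This is a minimal illustration of the rule described above, not the library's actual implementation; `choose_name` and its inputs are hypothetical names.

```python
import random
from collections import Counter

def choose_name(history, perturb_ratio, rng=random):
    """Pick a name from opponents' past proposals, perturbed_response1-style.

    history: list of names opponents proposed in earlier rounds.
    With probability 1 - perturb_ratio, play one of the most frequent names;
    with probability perturb_ratio, play one of the less frequent names.
    """
    counts = Counter(history)
    top = max(counts.values())
    best = [n for n, c in counts.items() if c == top]   # modal names
    rest = [n for n, c in counts.items() if c < top]    # everything else
    if rest and rng.random() < perturb_ratio:
        return rng.choice(rest)   # perturbation: a non-modal name
    return rng.choice(best)       # best response: one of the modal names

history = ["1E1C", "1E1C", "1E1C", "AB12", "ZZ99"]
name = choose_name(history, perturb_ratio=0.05, rng=random.Random(0))
```

Setting `perturb_ratio` to 0 reproduces the behaviour of "perturbed_response4", and sampling the random branch from all strategies instead of `rest` gives "perturbed_response3".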
<br/>

For the "network_name" parameter, the values listed above mean the following.
1. small_world1: Returns a Watts-Strogatz small-world graph. The number of edges remains constant as the "prob_edge_rewire" value increases; shortcut edges, if added, replace existing ones, so the total count of edges stays constant.
2. small_world2: Returns a Newman-Watts-Strogatz small-world graph. The number of edges increases as the "prob_edge_rewire" value increases; shortcut edges are added in addition to those which already exist.
3. small_world3: Returns a connected Watts-Strogatz small-world graph. The rest of the explanation is as for small_world1.
4. complete: Returns the complete graph.
5. random: Computes a random graph by swapping edges of a given graph. The given graph used is the Watts-Strogatz small-world graph (the one produced by "small_world1").
6. grid2d: Returns the 2d grid graph of m x n nodes, each connected to its nearest neighbours.

We have used the networkx Python library to populate these graphs. For more information on these graphs and how they are produced, please refer to the link in the reference section.

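As an illustration, the six network options could be produced with networkx generators along these lines. This is a sketch under the assumption that the library maps each option to the corresponding standard networkx generator; the exact calls inside the package may differ.

```python
import networkx as nx

num_agents, num_neighbors, prob_edge_rewire = 20, 2, 0.5
m, n = 5, 4
seed = 96852

graphs = {
    # rewires existing edges; total edge count stays constant
    "small_world1": nx.watts_strogatz_graph(num_agents, num_neighbors, prob_edge_rewire, seed=seed),
    # adds shortcut edges on top of the ring; edge count grows with p
    "small_world2": nx.newman_watts_strogatz_graph(num_agents, num_neighbors, prob_edge_rewire, seed=seed),
    # retries the rewiring until the resulting graph is connected
    "small_world3": nx.connected_watts_strogatz_graph(num_agents, num_neighbors, prob_edge_rewire, seed=seed),
    "complete": nx.complete_graph(num_agents),
    "grid2d": nx.grid_2d_graph(m, n),
}

# "random": start from the small_world1 graph and swap edges to randomize it
G_random = nx.watts_strogatz_graph(num_agents, num_neighbors, prob_edge_rewire, seed=seed)
nx.double_edge_swap(G_random, nswap=10, max_tries=1000, seed=seed)
```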
## Function explanation
Here we explain the underlying working of this library with the help of an example. The logic can be replicated to any coordination game where there is a positive reward if agents follow the same strategy as their neighbour/opponent, and a zero or negative reward if they follow different strategies.

We assume agents play the naming game as defined in Young (2015). The naming game is a coordination game wherein two agents are shown a picture of a face and simultaneously and independently suggest names for it. If the agents provide the same name, they earn a positive reward; if they provide different names, they pay a small penalty (negative reward). There is no restriction on the names that agents can provide; this is left to their imagination. Agents do not know with whom they are paired or their identities. Agents can recollect the names provided by their partners in previous rounds.

We assume there are 20 ("num_agents") agents, each connected with 2 ("num_neighbors") other agents in their neighbourhood. We assume that the positive and negative rewards are constant values. If we assume agents are connected via a ring network structure, the network looks like the figure below. For instance, agent 0 is connected to 2 agents, agents 1 and 19. Similarly, agent 5 is connected with agents 4 and 6. We assume the network to be undirected.

![](https://github.com/ankur-tutlani/multi-agent-coordination/raw/main/input_network.png)

The game starts with one edge of the network selected at any given point. An edge of the network is represented as (0,1), (5,4), etc., implying that agents 0 and 1 are connected with each other, and agents 5 and 4 are connected with each other. During each time period, all edges are selected sequentially and the agents associated with those edges play the game.

To begin with, agents have no history to look back into, hence they propose names randomly. Agents do not know the identity of the other agents with whom they are paired. At the end of a given time period, agents do know the names proposed by other agents. Once agents have a history of names proposed by other agents, they take the names proposed by their opponents into consideration in successive plays. This way agents get to know which names are popular among the agents.

We assume a name to be any string of length "name_len", a combination of alphabetic and numeric characters generated randomly. Agents keep track of names proposed by other agents and update their actions accordingly in the successive rounds of play. We assume agents have bounded rationality and engage in limited calculations to decide what action to take.

We have considered different combinations of approaches or methods that agents can adopt while taking actions. We assume that agents use a perturbed best response, implying agents can take a random action with a certain probability (Young, 2015). Each time agents are required to take an action, they consider the names proposed in the past by their opponents and decide what name to propose in the current period. We have tested four different ways in which agents can decide what action to take. The "function_to_use" parameter provides details about these and how they differ from each other.

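The random name generation described above can be sketched as follows. This is an illustrative helper, not the package's actual code; `make_name` is a hypothetical name.

```python
import random
import string

def make_name(name_len, rng=random):
    """Generate a random alphanumeric name of length name_len."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(name_len))

# With name_len=4 this yields 4-character names in the style of "1E1C"
name = make_name(4, random.Random(96852))
```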
When the simulation is run for "num_of_trials" time periods, we get the percentage distribution of the different names (strategies) which agents proposed. The names which satisfy the two norm conditions specified by "norms_agents_frequency" and "norms_time_frequency" are considered norms. We have looked at two dimensions of a norm: the number of agents following it and how long it has been followed. The output looks like the graph below, where the X-axis shows the time period and the Y-axis shows the share of agents who proposed the respective name. Y-axis values are ratios (ranging from 0 to 1), so multiply by 100 to convert to percentages.

![](https://github.com/ankur-tutlani/multi-agent-coordination/raw/main/top_names.png)

In the figure above, the name "1E1C" satisfies the norm criteria when we assume "norms_agents_frequency" of 0.7 and "norms_time_frequency" of 0.5. This implies at least 70% of agents following "1E1C" at least 50% of the time. The network structure looks like the figure below by the end of the "num_of_trials" time periods. Agents who proposed the same name the majority of the time during the run are coloured in the same colour. Colour itself has no significance here; it is just used to denote agents proposing the same name.

![](https://github.com/ankur-tutlani/multi-agent-coordination/raw/main/network_after_50_timeperiods.png)

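The two norm conditions can be expressed as a simple check over the simulation history. This is a sketch assuming proposals are stored per period and per agent; `satisfies_norm` is a hypothetical helper, not the library's API.

```python
def satisfies_norm(proposals, name, norms_agents_frequency, norms_time_frequency):
    """proposals: list of time periods, each a list of names proposed by all agents.

    The name is a norm if, in at least norms_time_frequency of the periods,
    at least norms_agents_frequency of the agents proposed it.
    """
    hits = 0
    for period in proposals:
        share = period.count(name) / len(period)   # fraction of agents proposing it
        if share >= norms_agents_frequency:
            hits += 1
    return hits / len(proposals) >= norms_time_frequency

# 4 periods, 5 agents: "1E1C" reaches at least 70% of agents in 3 of 4 periods
history = [
    ["1E1C", "1E1C", "1E1C", "1E1C", "AB12"],
    ["1E1C", "1E1C", "1E1C", "1E1C", "1E1C"],
    ["AB12", "ZZ99", "1E1C", "AB12", "AB12"],
    ["1E1C", "1E1C", "1E1C", "1E1C", "ZZ99"],
]
print(satisfies_norm(history, "1E1C", 0.7, 0.5))   # → True
```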
The detailed explanation of the output generated is provided in the next section.

## How to interpret output?
A total of 9 files are generated in the output when there is at least 1 name which satisfies the norm criteria.

input_network_complete_2023-02-01-15-24-22.png
network_after_50_timeperiods_complete_2023-02-01-15-24-22.png
top_names_complete_2023-02-01-15-24-22.png
parameters_complete_2023-02-01-15-24-22.xlsx
aggregate_data_detailed_agent_complete_2023-02-01-15-24-22.xlsx
normcandidates_complete_2023-02-01-15-24-22.xlsx
time_when_reached_norm_complete_2023-02-01-15-24-22.xlsx
first_agent_proposed_norm_complete_2023-02-01-15-24-22.xlsx
fixed_agent_name_proposed_smallworld_10_2023-02-01-17-16-19.xlsx

The image files (.png) show the network graphs and the strategy trend graph for the simulation. The "input_network_.." file is the input network with which the game is started. The "network_after_..." file is the network state at the end of the game; agents following the same strategy most frequently are coloured in the same colour in this file. The "top_names_.." png file shows the percentage of times a specific name is proposed over a period of time.

The "parameters_.." file lists all the parameters which were specified in the function call. The "aggregate_data_detailed_.." file has the information on names proposed by each agent at all time periods.

The "normcandidates_..." file shows the names that satisfy the norm criteria laid out. The "time_when_reached_norm_..." file shows the time period number when the name met the norm criteria. The "first_agent_proposed_norm_" file shows information on the agent who proposed that name first during the simulation run. These 3 files are generated only when at least one name satisfies the norm criteria.

When at least one fixed agent is specified, a "fixed_agent_name_proposed_.." file also gets generated. This file shows the names proposed by the fixed agents.

All file names end with the date and time stamp of when the function was executed. They also contain information on the network structure used, like "complete" or "smallworld".

## References
1. Young, H. P. "The Evolution of Social Norms." Annual Review of Economics, 2015 (7), pp. 359-387.
2. https://pypi.org/project/networkx/

@@ -0,0 +1 @@
from multi_agent_coordination.simulation import perturbed_response1, perturbed_response2, perturbed_response3, perturbed_response4, network_simulations
@@ -0,0 +1,480 @@
1
+
2
+ import numpy as np
3
+ import pandas as pd
4
+ import networkx as nx
5
+ import matplotlib.pyplot as plt
6
+ import datetime
7
+ import random
8
+ import string
9
+
10
+
11
+
12
+ # 1.iteration_name: iteration name. String
13
+ # 2.path_to_save_output: path to save output files. String
14
+ # 3.num_neighbors: number of neighbours. Integer
15
+ # 4.num_agents: number of agents. Integer
16
+ # 5.prob_edge_rewire: small world network parameter. Probability of rewiring each edge. Float
17
+ # 6.grid_network_m: 2-dimensional grid network parameter. Number of nodes. Integer
18
+ # 7.grid_network_n: 2-dimensional grid network parameter. Number of nodes. Integer
19
+ # 8.name_len: length of random keywords created by the game. Integer
20
+ # 9.num_of_trials: number of trials. Integer
21
+ # 10.perturb_ratio: probability of agents taking action randomly. Float
22
+ # 11.fixed_agents: agents assumed as fixed. List
23
+ # 12.prob_new_name: probability of agent suggesting new name. Float
24
+ # 13.network_name: specify one of these values [small_world1,small_world2,small_world3,complete,random,grid2d]. String
25
+ # 14.random_seed: random seed value. Integer
26
+ # 15.function_to_use: specify one of these values [perturbed_response1,perturbed_response2,perturbed_response3,perturbed_response4]. String
27
+ # 16.norm_agents_frequency: norm condition. Minimum percentage of agents require to propose same name. Specify number from 0 to 1. Float
28
+ # 17.norm_time_frequency: norm condition. Minimum percentage of times agents require to propose same name. Specify number from 0 to 1. Float
29
+
30
+ # function_to_use
31
+ # perturbed_response1: Agent selects the best response (1-perturb_ratio)*100% of times from among the most frequently used strategies. Agent selects a random strategy (perturb_ratio)*100% of times from among the strategies that are not the most frequently used. Agents propose a new name at any point during the game with probability prob_new_name.
32
+ # perturbed_response2: Agent selects a strategy according to the % share with which it has been used by opponents in the past. Agents propose a new name at any point during the game with probability prob_new_name.
33
+ # perturbed_response3: Same as the perturbed_response1 function, except that the agent selects a random strategy (perturb_ratio)*100% of times from all the strategies. Agents propose a new name at any point during the game with probability prob_new_name.
34
+ # perturbed_response4: Agent selects the best response 100% of times from among the most frequently used strategies. There is no perturbation element. Agents propose a new name at any point during the game with probability prob_new_name.
35
+
36
+
37
+ # Note: there may be instances where more than one strategy ties for the most frequently used.
38
+ # E.g., if an agent finds that strategies s1 and s2 have both been used most frequently by its
39
+ # opponents, and equally often, the agent deciding which action to take selects randomly between s1 and s2.
40
+
41
+
42
+ # network_name
43
+ # small_world1: Returns a Watts–Strogatz small-world graph. Here the number of edges remains constant as the prob_edge_rewire value increases. Shortcut edges, if added, replace existing ones, so the total count of edges stays constant.
44
+ # small_world2: Returns a Newman–Watts–Strogatz small-world graph. Here the number of edges increases as the prob_edge_rewire value increases, since shortcut edges are added on top of those that already exist.
45
+ # small_world3: Returns a connected Watts–Strogatz small-world graph.
46
+ # complete: Returns the complete graph.
47
+ # random: Computes a random graph by swapping edges of a given graph.
48
+ # grid2d: Returns the 2d grid graph of m x n nodes, each connected to its nearest neighbors.
49
+
50
+
51
+
52
+
53
+ def perturbed_response1(AGENT_NO,data_to_look,perturb_ratio,name_len,prob_new_name):
54
+
55
+ try:
56
+ count_pd = data_to_look.loc[data_to_look["agent"]==AGENT_NO]["name_offered_by_opponent"].value_counts().reset_index()
57
+ count_pd["tot_sum"] = count_pd["name_offered_by_opponent"] / sum(count_pd["name_offered_by_opponent"])
58
+ best_response = count_pd.loc[count_pd["tot_sum"] == max(count_pd["tot_sum"])]['index'].tolist()
59
+ draw1= random.choices(population=best_response,k=1)
60
+
61
+ Nonbest_response = count_pd.loc[count_pd["tot_sum"] != max(count_pd["tot_sum"])]['index'].tolist()
62
+ if len(Nonbest_response) > 0:
63
+ draw2= random.choices(population=Nonbest_response,k=1)
64
+ best_response = random.choices(population=[draw1[0],draw2[0]],weights=[1-perturb_ratio,perturb_ratio],k=1)
65
+ best_response = best_response[0]
66
+ else:
67
+ best_response = draw1[0]
68
+
69
+
70
+ except:
71
+ best_response = [''.join(random.choices(string.ascii_uppercase+string.digits,k=name_len))]
72
+ best_response = best_response[0]
73
+
74
+
75
+ best_response2 = [''.join(random.choices(string.ascii_uppercase+string.digits,k=name_len))]
76
+ best_response2 = best_response2[0]
77
+
78
+ best_response3 = random.choices(population=[best_response,best_response2],weights=[1-prob_new_name,prob_new_name],k=1)[0]
79
+
80
+ return best_response3
81
+
82
+
83
+
84
+
85
+ def perturbed_response2(AGENT_NO,data_to_look,name_len,prob_new_name):
86
+ try:
87
+ count_pd = data_to_look.loc[data_to_look["agent"]==AGENT_NO]["name_offered_by_opponent"].value_counts().reset_index()
88
+ count_pd["tot_sum"] = count_pd["name_offered_by_opponent"] / sum(count_pd["name_offered_by_opponent"])
89
+ names_list = count_pd['index'].tolist()
90
+ share_list = count_pd['tot_sum'].tolist()
91
+ draw1= random.choices(population=names_list,weights=share_list,k=1)
92
+ best_response = draw1[0]
93
+
94
+
95
+ except:
96
+ best_response = [''.join(random.choices(string.ascii_uppercase+string.digits,k=name_len))]
97
+ best_response = best_response[0]
98
+
99
+ best_response2 = [''.join(random.choices(string.ascii_uppercase+string.digits,k=name_len))]
100
+ best_response2 = best_response2[0]
101
+
102
+ best_response3 = random.choices(population=[best_response,best_response2],weights=[1-prob_new_name,prob_new_name],k=1)[0]
103
+
104
+
105
+ return best_response3
106
+
107
+
108
+
109
+
110
+ def perturbed_response3(AGENT_NO,data_to_look,perturb_ratio,name_len,prob_new_name):
111
+ try:
112
+ count_pd = data_to_look.loc[data_to_look["agent"]==AGENT_NO]["name_offered_by_opponent"].value_counts().reset_index()
113
+ count_pd["tot_sum"] = count_pd["name_offered_by_opponent"] / sum(count_pd["name_offered_by_opponent"])
114
+ best_response = count_pd.loc[count_pd["tot_sum"] == max(count_pd["tot_sum"])]['index'].tolist()
115
+ draw1= random.choices(population=best_response,k=1)
116
+
117
+ Nonbest_response = count_pd['index'].tolist()
118
+ draw2= random.choices(population=Nonbest_response,k=1)
119
+ best_response = random.choices(population=[draw1[0],draw2[0]],weights=[1-perturb_ratio,perturb_ratio],k=1)
120
+ best_response = best_response[0]
121
+
122
+ except:
123
+ best_response = [''.join(random.choices(string.ascii_uppercase+string.digits,k=name_len))]
124
+ best_response = best_response[0]
125
+
126
+ best_response2 = [''.join(random.choices(string.ascii_uppercase+string.digits,k=name_len))]
127
+ best_response2 = best_response2[0]
128
+
129
+ best_response3 = random.choices(population=[best_response,best_response2],weights=[1-prob_new_name,prob_new_name],k=1)[0]
130
+
131
+ return best_response3
132
+
133
+
134
+
135
+
136
+ def perturbed_response4(AGENT_NO,data_to_look,name_len,prob_new_name):
137
+ try:
138
+ count_pd = data_to_look.loc[data_to_look["agent"]==AGENT_NO]["name_offered_by_opponent"].value_counts().reset_index()
139
+ count_pd["tot_sum"] = count_pd["name_offered_by_opponent"] / sum(count_pd["name_offered_by_opponent"])
140
+ best_response = count_pd.loc[count_pd["tot_sum"] == max(count_pd["tot_sum"])]['index'].tolist()
141
+ draw1= random.choices(population=best_response,k=1)
142
+ best_response = draw1[0]
143
+
144
+ except:
145
+ best_response = [''.join(random.choices(string.ascii_uppercase+string.digits,k=name_len))]
146
+ best_response = best_response[0]
147
+
148
+ best_response2 = [''.join(random.choices(string.ascii_uppercase+string.digits,k=name_len))]
149
+ best_response2 = best_response2[0]
150
+
151
+ best_response3 = random.choices(population=[best_response,best_response2],weights=[1-prob_new_name,prob_new_name],k=1)[0]
152
+
153
+ return best_response3
154
+
155
+
156
+
157
+
158
+ def network_simulations(iteration_name,
159
+ path_to_save_output,
160
+ num_neighbors,
161
+ num_agents,
162
+ prob_edge_rewire,
163
+ grid_network_m,
164
+ grid_network_n,
165
+ name_len,
166
+ num_of_trials,
167
+ perturb_ratio,
168
+ fixed_agents,
169
+ prob_new_name,
170
+ network_name,
171
+ random_seed,
172
+ function_to_use,
173
+ norms_agents_frequency,
174
+ norms_time_frequency
175
+
176
+ ):
177
+
178
+
179
+ iteration_name = iteration_name
180
+ today = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
181
+ path_to_save_output = path_to_save_output
182
+ num_neighbors = num_neighbors
183
+ num_agents= num_agents
184
+ prob_edge_rewire = prob_edge_rewire
185
+ grid_network_m=grid_network_m
186
+ grid_network_n=grid_network_n
187
+ name_len = name_len
188
+ num_of_trials = num_of_trials
189
+ perturb_ratio = perturb_ratio
190
+ fixed_agents = fixed_agents
191
+ prob_new_name=prob_new_name
192
+ network_name = network_name
193
+ random_seed = random_seed
194
+ function_to_use = function_to_use
195
+ norms_agents_frequency = norms_agents_frequency
196
+ norms_time_frequency = norms_time_frequency
197
+
198
+ random.seed(random_seed)
199
+
200
+ if network_name == 'small_world1':
201
+ G = nx.watts_strogatz_graph(num_agents,num_neighbors,prob_edge_rewire)
202
+ if network_name == 'small_world2':
203
+ G = nx.newman_watts_strogatz_graph(n=num_agents,k=num_neighbors,p=prob_edge_rewire,seed=random_seed)
204
+ if network_name == 'small_world3':
205
+ G = nx.connected_watts_strogatz_graph(n=num_agents,k=num_neighbors,p=prob_edge_rewire,seed=random_seed)
206
+ if network_name == 'complete':
207
+ G = nx.complete_graph(num_agents)
208
+ if network_name == 'random':
209
+ G = nx.watts_strogatz_graph(num_agents,num_neighbors,prob_edge_rewire)
210
+ G = nx.random_reference(G, niter=5, connectivity=True, seed=random_seed)
211
+ if network_name == 'grid2d':
212
+ G = nx.grid_2d_graph(m=grid_network_m,n=grid_network_n)
213
+ mapping = dict(zip(G, range(len(G))))
214
+ G = nx.relabel_nodes(G, mapping)
215
+
216
+
217
+ nx.draw(G,with_labels=True)
218
+ plt.savefig(path_to_save_output+"input_network_"+iteration_name+"_"+today+".png")
219
+ plt.clf()
220
+
221
+ potential_edges = list(G.edges)
222
+
223
+ empty_df_to_fill_trial = pd.DataFrame()
224
+ empty_df_to_fill_trial["agent"] = -1
225
+ empty_df_to_fill_trial["name_offered"] = ''
226
+ empty_df_to_fill_trial["opponentagent"] = -1
227
+ empty_df_to_fill_trial["name_offered_by_opponent"] = ''
228
+ empty_df_to_fill_trial["pair"] = ""
229
+ empty_df_to_fill_trial["timeperiod"] = -1
230
+
231
+
232
+ fixed_values_to_use = dict()
233
+ num_of_rounds = len(potential_edges)
234
+ norms_db_to_fill = pd.DataFrame()
235
+ norms_db_to_fill['name'] = ''
236
+ norms_db_to_fill['percent_count'] = ''
237
+ norms_db_to_fill['timeperiod'] = ''
238
+
239
+
240
+ for timeperiod in range(1,num_of_trials+1):
241
+ empty_df_to_fill_temp = empty_df_to_fill_trial[0:0]
242
+ for v in range(num_of_rounds):
243
+ vcheck = potential_edges[v]
244
+ agent1 = vcheck[0]
245
+ agent2 = vcheck[1]
246
+
247
+ if function_to_use == 'perturbed_response1':
248
+ if len(empty_df_to_fill_temp) > len(empty_df_to_fill_trial):
249
+
250
+ name_to_fill1 = perturbed_response1(AGENT_NO = agent1,data_to_look = empty_df_to_fill_temp,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
251
+ name_to_fill2 = perturbed_response1(AGENT_NO = agent2,data_to_look = empty_df_to_fill_temp,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
252
+ else:
253
+ name_to_fill1 = perturbed_response1(AGENT_NO = agent1,data_to_look = empty_df_to_fill_trial,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
254
+ name_to_fill2 = perturbed_response1(AGENT_NO = agent2,data_to_look = empty_df_to_fill_trial,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
255
+
256
+ elif function_to_use == 'perturbed_response2':
257
+ if len(empty_df_to_fill_temp) > len(empty_df_to_fill_trial):
258
+
259
+ name_to_fill1 = perturbed_response1(AGENT_NO = agent1,data_to_look = empty_df_to_fill_temp,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
260
+ name_to_fill2 = perturbed_response1(AGENT_NO = agent2,data_to_look = empty_df_to_fill_temp,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
261
+ else:
262
+ name_to_fill1 = perturbed_response1(AGENT_NO = agent1,data_to_look = empty_df_to_fill_trial,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
263
+ name_to_fill2 = perturbed_response1(AGENT_NO = agent2,data_to_look = empty_df_to_fill_trial,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
264
+
265
+
266
+
267
+ elif function_to_use == 'perturbed_response3':
268
+ if len(empty_df_to_fill_temp) > len(empty_df_to_fill_trial):
269
+
270
+ name_to_fill1 = perturbed_response1(AGENT_NO = agent1,data_to_look = empty_df_to_fill_temp,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
271
+ name_to_fill2 = perturbed_response1(AGENT_NO = agent2,data_to_look = empty_df_to_fill_temp,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
272
+ else:
273
+ name_to_fill1 = perturbed_response1(AGENT_NO = agent1,data_to_look = empty_df_to_fill_trial,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
274
+ name_to_fill2 = perturbed_response1(AGENT_NO = agent2,data_to_look = empty_df_to_fill_trial,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
275
+
276
+
277
+
278
+ elif function_to_use == 'perturbed_response4':
279
+ if len(empty_df_to_fill_temp) > len(empty_df_to_fill_trial):
280
+
281
+ name_to_fill1 = perturbed_response1(AGENT_NO = agent1,data_to_look = empty_df_to_fill_temp,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
282
+ name_to_fill2 = perturbed_response1(AGENT_NO = agent2,data_to_look = empty_df_to_fill_temp,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
283
+ else:
284
+ name_to_fill1 = perturbed_response1(AGENT_NO = agent1,data_to_look = empty_df_to_fill_trial,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
285
+ name_to_fill2 = perturbed_response1(AGENT_NO = agent2,data_to_look = empty_df_to_fill_trial,perturb_ratio=perturb_ratio,name_len=name_len,prob_new_name=prob_new_name)
286
+
287
+
288
+
289
+ if (agent1 in fixed_agents or agent2 in fixed_agents) and len(fixed_values_to_use) < len(fixed_agents):
290
+ for jj in fixed_agents:
291
+ if jj not in list(fixed_values_to_use.keys()):
292
+ xx = empty_df_to_fill_temp.loc[empty_df_to_fill_temp["agent"]==jj].sort_values(["timeperiod"],ascending=True).head(1)
293
+ if len(xx) > 0:
294
+ fixed_values_to_use[xx["agent"].values[0]]=xx["name_offered"].values[0]
295
+
296
+ if agent1 in fixed_agents:
297
+ try:
298
+ name_offered = fixed_values_to_use[agent1]
299
+ except:
300
+ name_offered = name_to_fill1
301
+ else:
302
+ name_offered = name_to_fill1
303
+
304
+
305
+ if agent2 in fixed_agents:
306
+ try:
307
+ name_offered_by_opponent = fixed_values_to_use[agent2]
308
+ except:
309
+ name_offered_by_opponent = name_to_fill2
310
+ else:
311
+ name_offered_by_opponent = name_to_fill2
312
+
313
+
314
+ data_to_append = pd.DataFrame({'agent':agent1,
315
+ 'name_offered':name_offered,
316
+ 'name_offered_by_opponent':name_offered_by_opponent,
317
+ 'opponentagent':agent2,
318
+ 'pair':str(vcheck),
319
+ 'timeperiod':timeperiod
320
+ },index=[0])
321
+
322
+ data_to_append2 = pd.DataFrame({'agent':agent2,
323
+ 'name_offered':name_offered_by_opponent,
324
+ 'name_offered_by_opponent':name_offered,
325
+ 'opponentagent':agent1,
326
+ 'pair':str(vcheck),
327
+ 'timeperiod':timeperiod
328
+ },index=[0])
329
+
330
+
331
+ empty_df_to_fill_temp = pd.concat([empty_df_to_fill_temp,data_to_append,data_to_append2],ignore_index=True,sort=False)
332
+
333
+
334
+ empty_df_to_fill_trial = pd.concat([empty_df_to_fill_trial,empty_df_to_fill_temp],ignore_index=True,sort=False)
335
+ footemp = empty_df_to_fill_temp['name_offered'].value_counts(normalize=True).to_frame()
336
+ footemp.columns = ['percent_count']
337
+ footemp['name'] = footemp.index.values
338
+ footemp=footemp[['name','percent_count']].reset_index(drop=True)
339
+ footemp['timeperiod'] = timeperiod
340
+
341
+ norms_db_to_fill = pd.concat([norms_db_to_fill,footemp],ignore_index=True,sort=False)
342
+
343
+
344
+ norms_candidates = norms_db_to_fill.loc[norms_db_to_fill['percent_count']>=norms_agents_frequency]
345
+ norms_to_store = []
346
+ percent_time_frequency = []
347
+ if len(norms_candidates) > 0:
348
+ potential_norms_candidate = np.unique(norms_candidates['name']).tolist()
349
+ for k in potential_norms_candidate:
350
+ norms_candidates2 = norms_candidates.loc[norms_candidates['name']==k]
351
+ distinct_timeperiod = len(np.unique(norms_candidates2['timeperiod']))
352
+ norms_candidates2 = round(distinct_timeperiod/len(np.unique(norms_db_to_fill['timeperiod'])),2)
353
+ if norms_candidates2 >= norms_time_frequency:
354
+ norms_to_store.append(k)
355
+ percent_time_frequency.append(norms_candidates2)
356
+
357
+
358
+ try:
359
+ norms_candidates2 = pd.DataFrame()
360
+ norms_candidates2["percent_count"] = percent_time_frequency
361
+ norms_candidates2["name"] = norms_to_store
362
+ if len(norms_candidates2) > 0:
363
+ norms_candidates2.to_excel(path_to_save_output+"normcandidates_"+iteration_name+"_"+today+".xlsx",index=None)
364
+ except:
365
+ pass
366
+
367
+
368
+
369
+
370
+ db_to_fill2 = pd.DataFrame()
371
+ db_to_fill2["timeperiod"] = -1
372
+ db_to_fill2["name_offered"] = -1
373
+
374
+ if len(norms_to_store) > 0:
375
+ for j in norms_to_store:
376
+ foocheck = norms_candidates.loc[norms_candidates["name"]==j]
377
+ foocheck = foocheck.sort_values(["timeperiod"])
378
+ foocheck["count_names_offered"] = (foocheck["name"]==j).cumsum()
379
+ foocheck["cum_perc"] = foocheck["count_names_offered"]/len(np.unique(norms_db_to_fill['timeperiod'])) ## divide by total timeperiod.
380
+ xxxx= foocheck.loc[foocheck["cum_perc"]>=norms_time_frequency][["timeperiod"]].head(1)
381
+ if xxxx.shape[0] > 0:
382
+ timev = foocheck.loc[foocheck["cum_perc"]>=norms_time_frequency][["timeperiod"]].head(1)["timeperiod"].values[0]
383
+ foodb = pd.DataFrame({"timeperiod":[timev],"name_offered":[j]})
384
+ db_to_fill2 = pd.concat([db_to_fill2,foodb],ignore_index=True,sort=False)
385
+
386
+
387
+ try:
388
+ if len(db_to_fill2) > 0:
389
+ db_to_fill2.to_excel(path_to_save_output+"time_when_reached_norm_"+iteration_name+"_"+today+".xlsx",index=None)
390
+ except:
391
+ pass
392
+
393
+
394
+
395
+ empty_df_to_fill_trial = empty_df_to_fill_trial.sort_values(["timeperiod"])
396
+ agent_no_to_fill = []
397
+ if len(norms_to_store) > 0:
398
+ for j in norms_to_store:
399
+ xx = empty_df_to_fill_trial.loc[empty_df_to_fill_trial["name_offered"]==j].head(1)["agent"].values[0]
400
+ agent_no_to_fill.append(xx)
401
+
402
+
403
+ data1_to_save = pd.DataFrame({'first_agent_propose_the_name':agent_no_to_fill,'name_proposed':norms_to_store})
404
+ try:
405
+ if len(data1_to_save) > 0:
406
+ data1_to_save.to_excel(path_to_save_output+"first_agent_proposed_norm_"+iteration_name+"_"+today+".xlsx",index=None)
407
+ except:
408
+ pass
409
+
410
+
411
+ if len(fixed_agents) > 0:
412
+ fixed_agents_data =empty_df_to_fill_trial.loc[empty_df_to_fill_trial["agent"].isin(fixed_agents)][["agent","name_offered","timeperiod"]]
413
+ fixed_agents_data=fixed_agents_data.sort_values(["agent","timeperiod"])
414
+ fixed_agents_data = fixed_agents_data.groupby("agent").first().reset_index()
415
+ fixed_agents_data.to_excel(path_to_save_output+"fixed_agent_name_proposed_"+iteration_name+"_"+today+".xlsx",index=None)
416
+
417
+
418
+ list_to_fill_for_labels_2 = []
419
+ for i in range(len(G)):
420
+ perct_share = empty_df_to_fill_trial.loc[empty_df_to_fill_trial["agent"]==i]["name_offered"].value_counts(normalize=True).to_frame()
421
+ perct_share["name_index"] = perct_share.index.tolist()
422
+ xx = perct_share.head(1)["name_index"][0]
423
+ list_to_fill_for_labels_2.append(xx)
424
+
425
+
426
+ selected_norms = list(set(list_to_fill_for_labels_2))
427
+
428
+ fig,ax = plt.subplots()
429
+ data_for_trend_plot = norms_db_to_fill.loc[norms_db_to_fill['name'].isin(selected_norms)]
430
+ data_for_trend_plot = data_for_trend_plot.reset_index(drop=True)
431
+ for label,grp in data_for_trend_plot.groupby('name'):
432
+ grp.plot(x='timeperiod',y='percent_count',ax=ax,label=label)
433
+ ax.set_xlabel('Timeperiod')
434
+ ax.set_ylabel('Count %')
435
+ # plt.show()
436
+ plt.savefig(path_to_save_output+"top_names_"+iteration_name+"_"+today+".png")
437
+ plt.clf()
438
+
439
+ names_to_check = list(np.unique(empty_df_to_fill_trial['name_offered']))
440
+ get_colors = lambda n: list(map(lambda i: "#" + "%06x" % random.randint(0, 0xFFFFFF),range(n)))
441
+ list_of_colors = get_colors(len(names_to_check))
442
+
443
+ perct_share_temp=empty_df_to_fill_trial.copy()
444
+ list_to_fill_for_labels = []
445
+ for i in range(len(G)):
446
+ perct_share = perct_share_temp.loc[perct_share_temp["agent"]==i]["name_offered"].value_counts(normalize=True).to_frame()
447
+ perct_share["name_index"] = perct_share.index.tolist()
448
+ xx = perct_share.head(1)["name_index"][0]
449
+ list_to_fill_for_labels.append(xx)
450
+
451
+
452
+ color_map = []
453
+ for j in range(len(list_to_fill_for_labels)):
454
+ for i in range(len(names_to_check)):
455
+ if list_to_fill_for_labels[j] == names_to_check[i]:
456
+ color_map.append(list_of_colors[i])
457
+
458
+
459
+ nx.draw(G,with_labels=True,node_color=color_map)
460
+ l,r = plt.xlim()
461
+ plt.xlim(l-0.05,r+0.05)
462
+ plt.savefig(path_to_save_output+"network_after_"+str(num_of_trials)+'_timeperiods_'+iteration_name+"_"+today+".png")
463
+ plt.clf()
464
+
465
+ empty_df_to_fill_trial.to_excel(path_to_save_output+"aggregate_data_detailed_agent_"+iteration_name+"_"+today+".xlsx",index=None)
466
+
467
+ parameters_pd = pd.DataFrame([{'iteration_name':iteration_name,'path_to_save_output':path_to_save_output,
468
+ 'datetime':today,'num_neighbors':num_neighbors,'num_agents':num_agents,
469
+ 'prob_edge_rewire':prob_edge_rewire,'grid_network_m':grid_network_m,
470
+ 'grid_network_n':grid_network_n,'name_len':name_len,
471
+ 'num_of_trials':num_of_trials,'fixed_agents':str(fixed_agents),'prob_new_name':prob_new_name,
472
+ 'perturb_ratio':perturb_ratio,'network_name':network_name,'random_seed':random_seed,
473
+ 'function_to_use':function_to_use,'norms_agents_frequency':norms_agents_frequency,
474
+ 'norms_time_frequency':norms_time_frequency}]).T
475
+ parameters_pd.columns=["parameter_values"]
476
+ parameters_pd["parameter"]=parameters_pd.index
477
+ parameters_pd[["parameter","parameter_values"]].to_excel(path_to_save_output+"parameters_"+iteration_name+"_"+today+".xlsx",index=None)
478
+
479
+
480
+ print("done")
@@ -0,0 +1,171 @@
1
+ Metadata-Version: 2.1
2
+ Name: multi-agent-coordination
3
+ Version: 0.1
4
+ Summary: Identification of strategic choices under multi-agent systems, coordination game and social networks
5
+ Home-page: https://github.com/ankur-tutlani/multi-agent-coordination
6
+ Download-URL: https://github.com/ankur-tutlani/multi_agent_coordination/archive/refs/tags/v_01.tar.gz
7
+ Author: ankurtutlani
8
+ Author-email: ankur.tutlani@gmail.com
9
+ License: MIT
10
+ Keywords: game theory,evolutionary game,social norms,multi-agent systems,evolution,social network,computational economics,simulation,agent-based modeling,computation
11
+ Classifier: Development Status :: 3 - Alpha
12
+ Classifier: Intended Audience :: Developers
13
+ Classifier: Topic :: Software Development :: Build Tools
14
+ Classifier: License :: OSI Approved :: MIT License
15
+ Classifier: Programming Language :: Python :: 3.7
16
+ Description-Content-Type: text/markdown
17
+ License-File: LICENSE.txt
18
+
19
+
20
+ # Evolution of Strategic Choices under Coordination and Social Networks
21
+
22
+ This library is used to understand how specific actions or choices can evolve into the dominant choices when agents do not have any specific preferences to begin with and instead form their opinions through repeated interactions with other agents. Agents have incentives to coordinate with other agents and are connected with each other through a specific social network. The strategies or actions which satisfy the norm criteria are potential candidates for becoming the norm. In simple terms, a norm is something which is played by a larger number of agents and for a longer period of time.
23
+
24
+ ## How to use it?
25
+
26
+
27
+ ```bash
28
+ # Install
29
+ pip install multi-agent-coordination
30
+
31
+ # Import
32
+ from multi_agent_coordination import simulation
33
+
34
+ # Execute
35
+ simulation.network_simulations(iteration_name = "test",
36
+ path_to_save_output = "C:\\Users\\Downloads\\",
37
+ num_neighbors = 2,
38
+ num_agents = 20,
39
+ prob_edge_rewire = 0.5,
40
+ grid_network_m = 5,
41
+ grid_network_n = 4,
42
+ name_len = 4,
43
+ num_of_trials = 20,
44
+ perturb_ratio = 0.05,
45
+ fixed_agents=[],
46
+ prob_new_name = 0,
47
+ network_name = "complete",
48
+ random_seed = 96852,
49
+ function_to_use = "perturbed_response1",
50
+ norms_agents_frequency = 0.7,
51
+ norms_time_frequency = 0.5
52
+
53
+ )
54
+
55
+
56
+ ```
57
+
58
+ ## Function parameters
59
+ The following parameters are required to be specified. The parenthesis at the end of each description shows the required data type or the possible values to use. In what follows, we use agents' strategies, actions, choices, responses, and proposed names interchangeably; these all represent the same intent and meaning in our context.
60
+
61
+ 1. iteration_name: Iteration name. (String)
62
+ 2. path_to_save_output: Path to save output files. (String)
63
+ 3. num_neighbors: Number of neighbours. (Integer)
64
+ 4. num_agents: Number of agents. (Integer)
65
+ 5. prob_edge_rewire: Small world network parameter. Probability of rewiring existing edges or adding new edges. (Float)
66
+ 6. grid_network_m: 2-dimensional grid network parameter. Number of nodes. (Integer)
67
+ 7. grid_network_n: 2-dimensional grid network parameter. Number of nodes. (Integer)
68
+ 8. name_len: Length of random keywords created by the game. (Integer)
69
+ 9. num_of_trials: Number of trials, how long the game should run. (Integer)
70
+ 10. perturb_ratio: Probability of agents taking action randomly. (Float)
71
+ 11. fixed_agents: Agents assumed as fixed. e.g. [2,3] shows we want agents 2 and 3 to be fixed. (List)
72
+ 12. prob_new_name: Probability of agent suggesting new name at any time during the game. (Float)
73
+ 13. network_name: Specify one of these values. A detailed explanation is provided below. ["small_world1","small_world2","small_world3","complete","random","grid2d"]. (String)
74
+ 14. random_seed: Random seed value to reproduce results. (Integer)
75
+ 15. function_to_use: Specify one of these values. A detailed explanation is provided below. ["perturbed_response1","perturbed_response2","perturbed_response3","perturbed_response4"]. (String)
76
+ 16. norms_agents_frequency: Norm condition. Minimum percentage of agents required to propose the same name at any given time. Specify a number from 0 (0% of agents) to 1 (100% of agents). (Float)
77
+ 17. norms_time_frequency: Norm condition. Minimum percentage of time periods in which agents are required to propose the same name. Specify a number from 0 (no time period) to 1 (all time periods). (Float)
78
+
79
+ <br />
80
+
81
+ The possible values of the "function_to_use" parameter are explained below.
82
+
83
+ 1. perturbed_response1: Agents select the best response (1-"perturb_ratio")*100% of times from among the most frequently used strategies. If there is more than one such strategy, agents select one of them randomly. Agents select a random strategy ("perturb_ratio")*100% of times from among the strategies that are not the most frequently used; here too, if there is more than one such strategy, agents select one of them randomly.
84
+ 2. perturbed_response2: Agents select a strategy randomly according to the relative weights of the different strategies, where the weights are the % of times each strategy has been used by opponents in the past.
85
+ 3. perturbed_response3: Same as "perturbed_response1", except that agents select a random strategy ("perturb_ratio")*100% of times from all the possible strategies, not only from those that were not used most frequently by their opponents.
86
+ 4. perturbed_response4: Agents select the best response 100% of the time from among the most frequently used strategies. If there is more than one such strategy, agents select one of them randomly. There is no perturbation element ("perturb_ratio" is treated as zero).
87
+
88
+ In all four response functions, agents propose a new name at any point during the game with probability "prob_new_name".
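A minimal sketch of the perturbed best-response idea behind "perturbed_response1" follows. The opponent history here is a hypothetical list of names rather than the DataFrame the package uses internally:

```python
import random
import string
from collections import Counter

def sketch_perturbed_response(history, perturb_ratio, prob_new_name, name_len=4):
    """Pick the most frequent opponent name with probability 1-perturb_ratio,
    a non-best name with probability perturb_ratio, and with probability
    prob_new_name replace the pick with a freshly generated random name."""
    counts = Counter(history)
    top = max(counts.values())
    best = [n for n, c in counts.items() if c == top]
    non_best = [n for n, c in counts.items() if c < top]
    pick = random.choice(best)          # ties among best are broken randomly
    if non_best and random.random() < perturb_ratio:
        pick = random.choice(non_best)  # occasional deviation from the best response
    if random.random() < prob_new_name:
        pick = ''.join(random.choices(string.ascii_uppercase + string.digits, k=name_len))
    return pick

random.seed(0)
print(sketch_perturbed_response(["A1B2", "A1B2", "C3D4"], perturb_ratio=0.05, prob_new_name=0.0))
```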
89
+
90
+
91
+ <br/>
92
+
93
+ The possible values of the "network_name" parameter are explained below.
94
+ 1. small_world1: Returns a Watts-Strogatz small-world graph. Here the number of edges remains constant as the "prob_edge_rewire" value increases. Shortcut edges, if added, replace existing ones, so the total count of edges stays constant.
95
+ 2. small_world2: Returns a Newman-Watts-Strogatz small-world graph. Here the number of edges increases as the "prob_edge_rewire" value increases, since shortcut edges are added on top of those that already exist.
96
+ 3. small_world3: Returns a connected Watts-Strogatz small-world graph. The rest of the explanation is the same as for small_world1.
97
+ 4. complete: Returns the complete graph.
98
+ 5. random: Computes a random graph by swapping edges of a given graph. The given graph is a Watts-Strogatz small-world graph (the one produced by "small_world1").
99
+ 6. grid2d: Returns the 2d grid graph of m x n nodes, each connected to its nearest neighbors.
100
+
101
+ We use the networkx Python library to generate these graphs. For more information on these graphs and how they are produced, please refer to the link in the References section.
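For illustration, the named networks can be built with the networkx generators the library relies on; the parameter values here mirror the usage example above and are otherwise arbitrary:

```python
import networkx as nx

num_agents, num_neighbors, p = 20, 2, 0.5
graphs = {
    # Edge count stays at n*k/2 = 20: rewiring only replaces edges.
    "small_world1": nx.watts_strogatz_graph(num_agents, num_neighbors, p, seed=1),
    # Shortcut edges are added on top of the ring, so edge count can grow.
    "small_world2": nx.newman_watts_strogatz_graph(num_agents, num_neighbors, p, seed=1),
    "complete": nx.complete_graph(num_agents),          # n*(n-1)/2 = 190 edges
    "grid2d": nx.grid_2d_graph(5, 4),                   # 5x4 grid, 20 nodes
}
for name, G in graphs.items():
    print(name, G.number_of_nodes(), G.number_of_edges())
```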
102
+
103
+
104
+
105
+ ## Function explanation
106
+ Here we explain the underlying functioning of this library with the help of an example. The logic can, however, be applied to any coordination game where there is a positive reward if agents follow the same strategy as their neighbour/opponent, and a zero/negative reward if agents follow different strategies.
107
+
108
+ We assume agents play the naming game as defined in Young (2015). The naming game is a coordination game wherein two agents are shown a picture of a face and simultaneously and independently suggest names for it. If the agents provide the same name, they earn a positive reward; if they provide different names, they pay a small penalty (negative reward). There is no restriction on the names that agents can provide; this is left to their imagination. Agents do not know with whom they are paired or their identities, but they can recollect the names provided by their partners in previous rounds.
109
+
110
+ We assume there are 20 ("num_agents") agents, each connected with 2 ("num_neighbors") other agents in their neighbourhood. We assume that the positive reward and negative reward are constant values. If agents are connected via a ring network structure, the network looks like the figure below. For instance, agent 0 is connected to 2 agents, agents 1 and 19. Similarly, agent 5 is connected with agents 4 and 6. We assume the network is undirected.
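The ring neighbourhoods in the figure can be checked with a small sketch (pure Python, no plotting; the helper function is ours, not part of the package):

```python
def ring_neighbors(i, num_agents=20, num_neighbors=2):
    """Neighbors of agent i on a ring where each agent touches
    num_neighbors others (one on each side for num_neighbors=2)."""
    half = num_neighbors // 2
    return sorted((i + d) % num_agents for d in range(-half, half + 1) if d != 0)

print(ring_neighbors(0))   # [1, 19]: agent 0 is connected to agents 1 and 19
print(ring_neighbors(5))   # [4, 6]: agent 5 is connected to agents 4 and 6
```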
111
+
112
+
113
+ ![](https://github.com/ankur-tutlani/multi-agent-coordination/raw/main/input_network.png)
114
+
115
+
116
+
117
+ The game proceeds edge by edge. An edge of the network is represented as (0,1), (5,4), etc., implying that agents 0 and 1 are connected with each other, as are agents 5 and 4. During each time period, all edges are selected sequentially and the agents associated with those edges play the game.
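The scheduling loop can be sketched as follows; the 4-agent ring and the play record are hypothetical simplifications of what `network_simulations` does with the graph's edge list:

```python
# Hypothetical 4-agent ring; each tuple is an edge (a pair of agents).
potential_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
num_of_trials = 2

plays = []
for timeperiod in range(1, num_of_trials + 1):
    # Every edge is selected sequentially within a time period,
    # and the two agents on that edge play the game once.
    for agent1, agent2 in potential_edges:
        plays.append((timeperiod, agent1, agent2))

print(len(plays))  # 8 plays: 4 edges x 2 time periods
```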
118
+
119
+ To begin with, agents have no history to look back on, hence they propose names randomly. Agents do not know the identity of the other agents with whom they are paired. At the end of a given time period, however, agents do know the names proposed by other agents. Once agents have a history of names proposed by others, they take those names into consideration in successive plays. This way agents get to know which names are popular among the agents.
120
+
121
+ We assume a name to be any string of length "name_len", a randomly generated combination of alphabetic and numeric characters. Agents keep track of names proposed by other agents and update their actions accordingly in successive rounds of play. We assume agents have bounded rationality and engage in limited calculations when deciding what action to take.
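Name generation follows the pattern used in `simulation.py` (uppercase letters plus digits):

```python
import random
import string

def random_name(name_len):
    # A name is a random string of uppercase letters and digits,
    # mirroring the generation used inside the response functions.
    return ''.join(random.choices(string.ascii_uppercase + string.digits, k=name_len))

random.seed(42)
print(random_name(4))
```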
122
+
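A name of this kind can be sketched as below (an illustrative generator, assuming uppercase letters and digits as in the "1E1C" example; the package's alphabet may differ):

```python
import random
import string

def random_name(name_len, rng=random):
    """Generate a random alphanumeric name of length name_len (uppercase + digits)."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(name_len))

print(random_name(4))  # e.g. "1E1C"
```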
123
+ We have considered different approaches or methods that agents can adopt when taking actions. We assume agents use a perturbed best response, meaning they can act randomly with a certain probability (Young, 2015). Each time an agent must act, it considers the names its opponents proposed in the past and decides which name to propose in the current period. We have tested four different decision rules that agents can use; the "function_to_use" parameter provides details about these rules and how they differ from one another.
124
+
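A minimal sketch of one perturbed best response: with probability epsilon the agent experiments with a random name, and otherwise proposes the most frequent name in its observed history. This is only one of several possible decision rules; the package's "function_to_use" variants may differ:

```python
import random
from collections import Counter

def perturbed_best_response(history, all_names, epsilon=0.1, rng=random):
    """history: names observed from opponents so far; all_names: candidate names."""
    if rng.random() < epsilon or not history:
        return rng.choice(all_names)  # random experimentation (the perturbation)
    # Best response: propose the most popular name seen so far.
    return Counter(history).most_common(1)[0][0]

print(perturbed_best_response(["A", "A", "B"], ["A", "B"], epsilon=0.0))
```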
125
+ When the simulation is run for "num_of_trials" time periods, we obtain the percentage distribution of the different names (strategies) that agents proposed. A name that satisfies the two conditions specified by "norms_agents_frequency" and "norms_time_frequency" is considered a norm. We look at two dimensions of a norm: how many agents follow it and for how long it has been followed. A sample output is shown below. In the graph, the X-axis shows the time period and the Y-axis shows the percentage of agents who proposed the respective name. Y-axis values are in ratio format (ranging from 0 to 1), so multiply by 100 to convert them to percentages.
126
+
127
+
128
+
129
+ ![](https://github.com/ankur-tutlani/multi-agent-coordination/raw/main/top_names.png)
130
+
131
+
132
+
133
+ In the figure above, the name "1E1C" satisfies the norm criteria when we set "norms_agents_frequency" to 0.7 and "norms_time_frequency" to 0.5, implying that at least 70% of agents followed "1E1C" for at least 50% of the time. By the end of the "num_of_trials" time periods, the network structure looks like the figure below. Agents who proposed the same name the majority of the time are shown in the same colour. The colour itself has no significance; it merely marks agents who took the same action most of the time.
134
+
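The two norm conditions can be sketched as a simple check over per-period proposal fractions (a hypothetical data layout: for each period, the fraction of agents proposing the name):

```python
def is_norm(fractions_per_period, norms_agents_frequency=0.7, norms_time_frequency=0.5):
    """fractions_per_period: fraction of agents proposing the name in each period.

    The name is a norm if at least `norms_agents_frequency` of agents proposed it
    in at least `norms_time_frequency` of the time periods."""
    hits = sum(f >= norms_agents_frequency for f in fractions_per_period)
    return hits / len(fractions_per_period) >= norms_time_frequency

# Reaches 70% of agents in 3 of 4 periods -> a norm under thresholds (0.7, 0.5).
print(is_norm([0.2, 0.75, 0.8, 0.9]))  # True
```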
135
+
136
+ ![](https://github.com/ankur-tutlani/multi-agent-coordination/raw/main/network_after_50_timeperiods.png)
137
+
138
+
139
+ A detailed explanation of the generated output is provided in the next section.
140
+
141
+
142
+ ## How to interpret the output?
143
+ A total of 9 files are generated in the output when there is at least 1 name that satisfies the norm criteria.
144
+
145
+ input_network_complete_2023-02-01-15-24-22.png
146
+ network_after_50_timeperiods_complete_2023-02-01-15-24-22.png
147
+ top_names_complete_2023-02-01-15-24-22.png
148
+ parameters_complete_2023-02-01-15-24-22.xlsx
149
+ aggregate_data_detailed_agent_complete_2023-02-01-15-24-22.xlsx
150
+ normcandidates_complete_2023-02-01-15-24-22.xlsx
151
+ time_when_reached_norm_complete_2023-02-01-15-24-22.xlsx
152
+ first_agent_proposed_norm_complete_2023-02-01-15-24-22.xlsx
153
+ fixed_agent_name_proposed_smallworld_10_2023-02-01-17-16-19.xlsx
154
+
155
+
156
+
157
+ The image files (.png) show the network graphs and the strategy trend graph for the simulation. The "input_network_.." file shows the input network with which the game starts. The "network_after_..." file shows the network state at the end of the game; agents who most frequently followed the same strategy are coloured the same in this file. The "top_names_.." png file shows the percentage of times a specific name was proposed over time.
158
+
159
+ The parameters file "parameters_.." lists all the parameters specified in the function call. The "aggregate_data_detailed_.." file contains the names proposed by each agent in every time period.
160
+
161
+ The norm file "normcandidates_..." shows the names that satisfy the norm criteria laid out. The "time_when_reached_norm_..." file shows the time period in which a name met the norm criteria. The "first_agent_proposed_norm_" file identifies the agent who first proposed that name during the simulation run. These 3 files are generated only when at least one name satisfies the norm criteria.
162
+
163
+ When at least one fixed agent is specified, a "fixed_agent_name_proposed_.." file is also generated, showing the names proposed by the fixed agents.
164
+
165
+ All file names end with the date and time stamp of when the function was executed, and they also include the network structure used, such as "complete" or "smallworld".
166
+
167
+
168
+ ## References
169
+ 1. Young, H.P. "The Evolution of Social Norms." Annual Review of Economics, 2015 (7), pp. 359-387.
170
+ 2. https://pypi.org/project/networkx/
171
+
@@ -0,0 +1,11 @@
1
+ LICENSE.txt
2
+ README.md
3
+ setup.cfg
4
+ setup.py
5
+ multi_agent_coordination/__init__.py
6
+ multi_agent_coordination/simulation.py
7
+ multi_agent_coordination.egg-info/PKG-INFO
8
+ multi_agent_coordination.egg-info/SOURCES.txt
9
+ multi_agent_coordination.egg-info/dependency_links.txt
10
+ multi_agent_coordination.egg-info/requires.txt
11
+ multi_agent_coordination.egg-info/top_level.txt
@@ -0,0 +1,5 @@
1
+ numpy
2
+ pandas
3
+ networkx
4
+ matplotlib
5
+ setuptools
@@ -0,0 +1 @@
1
+ multi_agent_coordination
@@ -0,0 +1,8 @@
1
+ [metadata]
2
+ description-file = README.md
3
+ license_files = LICENSE.txt
4
+
5
+ [egg_info]
6
+ tag_build =
7
+ tag_date = 0
8
+
@@ -0,0 +1,38 @@
1
+ from setuptools import setup
2
+ import os
3
+
4
+ def read_file(filename):
5
+ with open(os.path.join(os.path.dirname(__file__), filename)) as file:
6
+ return file.read()
7
+
8
+ setup(
9
+ name = 'multi-agent-coordination',
10
+ packages = ['multi_agent_coordination'],
11
+ version = '0.1',
12
+ license='MIT',
13
+ description = 'Identification of strategic choices under multi-agent systems, coordination game and social networks',
14
+ long_description=read_file('README.md'),
15
+ long_description_content_type='text/markdown',
16
+ author = 'ankurtutlani',
17
+ author_email = 'ankur.tutlani@gmail.com',
18
+ url = 'https://github.com/ankur-tutlani/multi-agent-coordination',
19
+ download_url = 'https://github.com/ankur-tutlani/multi_agent_coordination/archive/refs/tags/v_01.tar.gz',
20
+ keywords = ['game theory', 'evolutionary game', 'social norms','multi-agent systems','evolution','social network','computational economics','simulation','agent-based modeling','computation'],
21
+ install_requires=[
22
+ 'numpy',
23
+ 'pandas',
24
+ 'networkx',
25
+ 'matplotlib',
26
+ 'setuptools'
27
+
28
+
29
+ ],
30
+ classifiers=[
31
+ 'Development Status :: 3 - Alpha',
32
+ 'Intended Audience :: Developers',
33
+ 'Topic :: Software Development :: Build Tools',
34
+ 'License :: OSI Approved :: MIT License',
35
+
36
+ 'Programming Language :: Python :: 3.7',
37
+ ],
38
+ )