Specification¶
| Import | `from free_range_zoo.envs import wildfire_v0` |
|---|---|
| Actions | Discrete & Stochastic |
| Observations | Discrete and fully observed with private observations |
| Parallel API | Yes |
| Manual Control | No |
| Agent Names | [firefighter_0, …, firefighter_n] |
| # Agents | [0, n_firefighters] |
| Action Shape | (envs, 2) |
| Action Values | [fight_0, …, fight_tasks, noop (-1)] |
| Observation Shape | TensorDict: { … } |
| Observation Values | self: … |
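As a hedged illustration of the shapes above, the snippet below builds a batched noop action; the reading of the two columns as (task index, action) is an assumption, not a documented contract.

```python
import torch

# One row per parallel environment; -1 encodes noop (assumed reading of the
# (envs, 2) action shape and the noop (-1) action value in the table above)
num_envs = 2
noop_action = torch.full((num_envs, 2), -1, dtype=torch.int32)
```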
Usage¶
Parallel API¶
```python
import logging

import torch

from free_range_zoo.envs import wildfire_v0

main_logger = logging.getLogger(__name__)

# Initialize and reset environment to initial state
env = wildfire_v0.parallel_env(render_mode="human")
observations, infos = env.reset()

# Initialize agents and give initial observations
agents = {}  # Mapping from agent name to policy; populate with your own agents
cumulative_rewards = {agent: 0 for agent in env.agents}

current_step = 0
while not torch.all(env.finished):
    # Policy action determination here
    agent_actions = {
        agent_name: torch.stack([agents[agent_name].act()])
        for agent_name in env.agents
    }

    observations, rewards, terminations, truncations, infos = env.step(agent_actions)
    rewards = {agent_name: rewards[agent_name].item() for agent_name in env.agents}

    for agent_name, agent in agents.items():
        agent.observe(observations[agent_name][0])  # Policy observation processing here
        cumulative_rewards[agent_name] += rewards[agent_name]

    main_logger.info(f"Step {current_step}: {rewards}")
    current_step += 1

env.close()
```
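The loop above assumes each entry of `agents` exposes `act()` and `observe()`. A minimal random-policy stand-in could look like the following sketch; the `RandomAgent` class is hypothetical and not part of the library:

```python
import torch


class RandomAgent:
    """Hypothetical placeholder policy that samples uniformly from the action space."""

    def __init__(self, action_space):
        self.action_space = action_space

    def act(self):
        # Sample a random action; a learned policy would compute one instead
        return torch.as_tensor(self.action_space.sample())

    def observe(self, observation):
        # No-op here; a learned policy would consume the observation
        pass


agents = {agent_name: RandomAgent(env.action_space(agent_name)) for agent_name in env.agents}
```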
AEC API¶
```python
import logging

import torch

from free_range_zoo.envs import wildfire_v0

main_logger = logging.getLogger(__name__)

# Initialize and reset environment to initial state (env is the AEC wrapped version)
env = wildfire_v0.env(render_mode="human")
observations, infos = env.reset()

# Track cumulative rewards per agent
cumulative_rewards = {agent: 0 for agent in env.agents}

current_step = 0
while not torch.all(env.finished):
    for agent in env.agent_iter():
        observations, rewards, terminations, truncations, infos = env.last()

        # Policy action determination here
        action = env.action_space(agent).sample()

        env.step(action)

    rewards = {agent: rewards[agent].item() for agent in env.agents}
    for agent in env.agents:
        cumulative_rewards[agent] += rewards[agent]

    current_step += 1
    main_logger.info(f"Step {current_step}: {rewards}")

env.close()
```
Configuration¶
- class free_range_zoo.envs.wildfire.env.structures.configuration.AgentConfiguration(agents: torch.IntTensor, fire_reduction_power: torch.FloatTensor, attack_range: torch.Tensor, suppressant_states: int, initial_suppressant: int, suppressant_decrease_probability: float, suppressant_refill_probability: float, initial_equipment_state: int, equipment_states: torch.FloatTensor, repair_probability: float, degrade_probability: float, critical_error_probability: float, initial_capacity: int, tank_switch_probability: float, possible_capacities: torch.Tensor, capacity_probabilities: torch.Tensor)[source]¶
Setting for configuring agent properties in the environment.
- Variables:
  agents (torch.IntTensor) – Tensor representing the location of each agent
  fire_reduction_power (torch.FloatTensor) – Power of each agent to reduce the fire intensity
  attack_range (torch.Tensor) – Range of attack for each agent
  suppressant_states (int) – Number of suppressant states
  initial_suppressant (int) – Initial suppressant value for each agent
  suppressant_decrease_probability (float) – Probability of suppressant decrease
  suppressant_refill_probability (float) – Probability of suppressant refill
  initial_equipment_state (int) – Initial equipment state for each agent
  equipment_states (torch.FloatTensor) – Definition of equipment state modifiers in the form (capacity, power, range)
  repair_probability (float) – Probability that an agent's equipment is repaired once fully damaged
  degrade_probability (float) – Probability that an agent's equipment will degrade
  critical_error_probability (float) – Probability that an agent's equipment, when fully functional, will suffer a critical error
  initial_capacity (int) – Initial suppressant tank capacity for each agent
  tank_switch_probability (float) – Probability that an agent will be supplied with a different tank on refill
  possible_capacities (torch.Tensor) – Possible maximum suppressant values
  capacity_probabilities (torch.Tensor) – Probability that each suppressant maximum is chosen
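A hedged construction sketch for two agents follows; all values are illustrative rather than canonical defaults, and the (row, column) reading of agent positions is an assumption.

```python
import torch

from free_range_zoo.envs.wildfire.env.structures.configuration import AgentConfiguration

# Illustrative values only; field names follow the signature documented above
agent_config = AgentConfiguration(
    agents=torch.tensor([[0, 0], [1, 1]], dtype=torch.int32),  # per-agent grid position (assumed (row, col))
    fire_reduction_power=torch.tensor([1.0, 1.0]),
    attack_range=torch.tensor([1, 1]),
    suppressant_states=3,
    initial_suppressant=2,
    suppressant_decrease_probability=1.0,
    suppressant_refill_probability=1.0,
    initial_equipment_state=2,
    equipment_states=torch.tensor([
        [0.0, 0.0, 0.0],  # per-state (capacity, power, range) modifiers
        [1.0, 1.0, 1.0],
        [2.0, 2.0, 2.0],
    ]),
    repair_probability=1.0,
    degrade_probability=0.0,
    critical_error_probability=0.0,
    initial_capacity=2,
    tank_switch_probability=1.0,
    possible_capacities=torch.tensor([1, 2, 3]),
    capacity_probabilities=torch.tensor([0.25, 0.5, 0.25]),
)
```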
- class free_range_zoo.envs.wildfire.env.structures.configuration.FireConfiguration(fire_types: IntTensor, num_fire_states: int, lit: Tensor, intensity_increase_probability: float, intensity_decrease_probability: float, extra_power_decrease_bonus: float, burnout_probability: float, base_spread_rate: float, max_spread_rate: float, random_ignition_probability: float, cell_size: float, wind_direction: float, ignition_temp: IntTensor, initial_fuel: int)[source]¶
Setting for configuring fire properties in the environment.
- Variables:
  fire_types (torch.IntTensor) – Required attack power in order to extinguish the fire
  num_fire_states (int) – Number of fire states
  lit (torch.IntTensor) – Tensor representing the initially lit tiles
  intensity_increase_probability (float) – Probability of fire intensity increase
  intensity_decrease_probability (float) – Probability of fire intensity decrease
  extra_power_decrease_bonus (float) – Additional decrease bonus per extra power
  burnout_probability (float) – Probability of fire burnout
  base_spread_rate (float) – Base spread rate of the fire
  max_spread_rate (float) – Maximum spread rate of the fire
  random_ignition_probability (float) – Probability of random ignition
  cell_size (float) – Size of each cell
  wind_direction (float) – Direction of the wind (radians)
  ignition_temp (torch.IntTensor) – Initial intensity of each fire once ignited
  initial_fuel (int) – Initial fuel value of each cell in the grid; controls the number of re-ignitions
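A hedged sketch for a 3x3 grid follows, with illustrative values only.

```python
import torch

from free_range_zoo.envs.wildfire.env.structures.configuration import FireConfiguration

# Illustrative values for a 3x3 grid; not canonical defaults
fire_config = FireConfiguration(
    fire_types=torch.ones((3, 3), dtype=torch.int32),  # attack power required per tile
    num_fire_states=5,
    lit=torch.tensor([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=torch.int32),  # center tile starts lit
    intensity_increase_probability=1.0,
    intensity_decrease_probability=0.8,
    extra_power_decrease_bonus=0.12,
    burnout_probability=0.25,
    base_spread_rate=3.0,
    max_spread_rate=67.0,
    random_ignition_probability=0.0,
    cell_size=200.0,
    wind_direction=0.0,  # radians
    ignition_temp=torch.full((3, 3), 2, dtype=torch.int32),
    initial_fuel=2,
)
```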
- class free_range_zoo.envs.wildfire.env.structures.configuration.RewardConfiguration(fire_rewards: FloatTensor, bad_attack_penalty: float, burnout_penalty: float, termination_reward: float)[source]¶
Settings for configuring the reward function.
- Variables:
  fire_rewards (torch.FloatTensor) – Reward for extinguishing a fire
  bad_attack_penalty (float) – Penalty for attacking a tile that is not on fire
  burnout_penalty (float) – Penalty for attacking a burned out fire
  termination_reward (float) – Reward for terminating the environment
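A hedged sketch with illustrative values; the sign convention (penalties passed as negative numbers) is an assumption.

```python
import torch

from free_range_zoo.envs.wildfire.env.structures.configuration import RewardConfiguration

# Illustrative values; penalties expressed as negative rewards (assumed convention)
reward_config = RewardConfiguration(
    fire_rewards=torch.full((3, 3), 20.0),  # per-tile reward for extinguishing a fire
    bad_attack_penalty=-100.0,
    burnout_penalty=-1.0,
    termination_reward=0.0,
)
```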
- class free_range_zoo.envs.wildfire.env.structures.configuration.StochasticConfiguration(special_burnout_probability: bool, suppressant_refill: bool, suppressant_decrease: bool, tank_switch: bool, critical_error: bool, degrade: bool, repair: bool, fire_increase: bool, fire_decrease: bool, fire_spread: bool, realistic_fire_spread: bool, random_fire_ignition: bool, fire_fuel: bool)[source]¶
Configuration for the stochastic elements of the environment.
- Variables:
  special_burnout_probability (bool) – Whether to use special burnout probabilities
  suppressant_refill (bool) – Whether suppressants refill stochastically
  suppressant_decrease (bool) – Whether suppressants decrease stochastically
  tank_switch (bool) – Whether to use stochastic tank switching
  critical_error (bool) – Whether equipment state can have a critical error
  degrade (bool) – Whether equipment state stochastically degrades
  repair (bool) – Whether equipment state stochastically repairs
  fire_decrease (bool) – Whether fires decrease stochastically
  fire_increase (bool) – Whether fires increase stochastically
  fire_spread (bool) – Whether fires spread
  realistic_fire_spread (bool) – Whether fires spread realistically
  random_fire_ignition (bool) – Whether fires can ignite randomly
  fire_fuel (bool) – Whether fires consume fuel and have limited ignitions
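A hedged sketch that enables the core fire and suppressant dynamics while disabling the equipment stochasticity; the chosen flags are illustrative.

```python
from free_range_zoo.envs.wildfire.env.structures.configuration import StochasticConfiguration

# Illustrative: core fire and suppressant dynamics on, equipment stochasticity off
stochastic_config = StochasticConfiguration(
    special_burnout_probability=True,
    suppressant_refill=True,
    suppressant_decrease=True,
    tank_switch=False,
    critical_error=False,
    degrade=False,
    repair=False,
    fire_increase=True,
    fire_decrease=True,
    fire_spread=True,
    realistic_fire_spread=True,
    random_fire_ignition=False,
    fire_fuel=False,
)
```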
- class free_range_zoo.envs.wildfire.env.structures.configuration.WildfireConfiguration(grid_width: int, grid_height: int, fire_config: FireConfiguration, agent_config: AgentConfiguration, reward_config: RewardConfiguration, stochastic_config: StochasticConfiguration)[source]¶
Configuration for the wildfire environment.
- Variables:
  grid_width (int) – Width of the grid
  grid_height (int) – Height of the grid
  fire_config (FireConfiguration) – Configuration for the fire properties
  agent_config (AgentConfiguration) – Configuration for the agent properties
  reward_config (RewardConfiguration) – Configuration for the environment rewards
  stochastic_config (StochasticConfiguration) – Configuration for the stochastic elements
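Putting the pieces together, assuming the configuration objects sketched above are in scope:

```python
from free_range_zoo.envs.wildfire.env.structures.configuration import WildfireConfiguration

# Assumes fire_config, agent_config, reward_config, and stochastic_config
# were built as in the sketches above
wildfire_config = WildfireConfiguration(
    grid_width=3,
    grid_height=3,
    fire_config=fire_config,
    agent_config=agent_config,
    reward_config=reward_config,
    stochastic_config=stochastic_config,
)
```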
API¶
- class free_range_zoo.envs.wildfire.env.wildfire.env(wrappers: List[Callable], **kwargs)[source]¶
AEC wrapped version of the wildfire environment.
- Parameters:
  wrappers (List[Callable[[BatchedAECEnv], BatchedAECEnv]]) – The wrappers to apply to the environment
- Returns:
  BatchedAECEnv – The AEC wrapped wildfire environment
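A hedged construction sketch, assuming `wildfire_v0` re-exports this factory and forwards extra keyword arguments:

```python
from free_range_zoo.envs import wildfire_v0

# Each wrapper would take and return a BatchedAECEnv; none are applied here
aec_env = wildfire_v0.env(wrappers=[], render_mode="human")
```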
- class free_range_zoo.envs.wildfire.env.wildfire.raw_env(*args, observe_other_suppressant: bool = False, observe_other_power: bool = False, show_bad_actions: bool = False, **kwargs)[source]¶
Environment definition for the wildfire environment.
Initialize the Wildfire environment.
- Parameters:
  observe_other_suppressant (bool) – Whether to observe the suppressant levels of other agents
  observe_other_power (bool) – Whether to observe the fire reduction power of other agents
  show_bad_actions (bool) – Whether to show bad actions
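A hedged sketch of these flags in use, assuming `parallel_env` forwards keyword arguments to `raw_env`:

```python
from free_range_zoo.envs import wildfire_v0

# Assumption: parallel_env forwards these keyword arguments to raw_env
env = wildfire_v0.parallel_env(
    observe_other_suppressant=True,  # expose other agents' suppressant levels
    observe_other_power=True,        # expose other agents' fire reduction power
    show_bad_actions=False,
)
```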