Audio
Softfire’s audio system is built on MonoGame’s SoundEffect API extended with 3D spatial positioning, a voice budget manager, categorised mixing, and a procedural music director. Audio is driven by the ECS like everything else — AudioSourceComponent and AudioListenerComponent are regular components.
AudioSource and AudioListener
AudioSourceComponent marks an entity as a sound emitter. AudioListenerComponent marks an entity (typically the camera or player) as the point from which the world is heard.
```csharp
// Emitter — a gunshot sound attached to a weapon entity
world.Add(weaponEntity, new AudioSourceComponent
{
    SoundKey = "sfx/gunshot",
    Category = AudioCategory.SFX,
    Volume = 0.9f,
    Pitch = 0f,
    MinDistance = 2f,
    MaxDistance = 40f,
    Spatial = true,
});
```
```csharp
// Listener — attached to the camera
world.Add(cameraEntity, new AudioListenerComponent());
```

To trigger playback, post an AudioPlayEvent:
```csharp
world.Post(new AudioPlayEvent
{
    SourceEntity = weaponEntity,
    OneShot = true, // Play once, no loop
});
```

The AudioSystem processes events each frame, applies 3D attenuation and HRTF panning, checks the voice budget, and delegates to MonoGame’s audio backend.
3D spatial audio with HRTF
When Spatial = true, the audio system calculates left/right pan and volume attenuation based on the relative positions of the source and listener. HRTF (Head-Related Transfer Function) processing is applied to approximate elevation cues. Enable HRTF in your audio settings:
```json
{
  "audio": {
    "hrtfEnabled": true,
    "hrtfProfile": "generic"
  }
}
```

The built-in HRTF profiles are generic and neutral; you can also supply a custom .hrtf asset for higher fidelity. On platforms where HRTF is not supported, the system falls back to simple stereo pan.
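As a rough illustration of the stereo-pan fallback, the sketch below computes linear distance attenuation between MinDistance and MaxDistance and a constant-power left/right pan. The class and method names are invented for this example and are not engine API.

```csharp
using System;

static class SpatialFallbackSketch
{
    // Linear attenuation: full volume inside minDistance, silent beyond
    // maxDistance, linear falloff in between (mirrors the
    // AudioSourceComponent.MinDistance/MaxDistance fields).
    public static float Attenuation(float distance, float minDistance, float maxDistance)
    {
        if (distance <= minDistance) return 1f;
        if (distance >= maxDistance) return 0f;
        return 1f - (distance - minDistance) / (maxDistance - minDistance);
    }

    // Constant-power pan from an offset x in [-1, 1] (hard left .. hard right):
    // equal loudness at centre, smooth rolloff toward each ear.
    public static (float Left, float Right) Pan(float x)
    {
        double angle = (x + 1) * Math.PI / 4; // 0 (hard left) .. pi/2 (hard right)
        return ((float)Math.Cos(angle), (float)Math.Sin(angle));
    }
}
```

With the gunshot example's values (MinDistance = 2, MaxDistance = 40), a source 21 units away would play at half volume.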
Reverb zones
Reverb zones are entities with a ReverbZoneComponent. Any audio source that enters the zone’s bounds will have reverb applied based on the zone’s preset.
```csharp
world.Add(caveZoneEntity, new ReverbZoneComponent
{
    Bounds = new Rectangle(0, 0, 800, 600),
    Preset = ReverbPreset.Cave,
    BlendRadius = 50f, // Crossfade distance at zone edges
});
```

Available presets: Dry, Room, Hall, Cave, Sewer, Outdoors, Cathedral. Custom impulse responses can be loaded as .ir assets and referenced via CustomImpulseKey.
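One plausible reading of BlendRadius is a linear wet-mix ramp at the zone edge, sketched below. The function name and ramp shape are assumptions for illustration, not the engine's actual blend curve.

```csharp
static class ReverbBlendSketch
{
    // Wet (reverb) mix for a source near a zone edge: 0 outside the bounds,
    // ramping linearly up to 1 once the source is at least blendRadius
    // inside them.
    public static float Wet(float distanceInsideEdge, float blendRadius)
    {
        if (distanceInsideEdge <= 0f) return 0f;
        if (distanceInsideEdge >= blendRadius) return 1f;
        return distanceInsideEdge / blendRadius;
    }
}
```

With the cave zone's BlendRadius of 50, a source 25 units inside the bounds would get a 50% wet mix.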
Audio categories and voice budget
All audio sources belong to a category. Categories map to mixer channels with independent volume, pitch, and effects settings. Default categories are Music, SFX, Voice, Ambience, and UI. Define your own in AudioConfig.json.
The voice budget caps how many concurrent sounds can play. When the budget is exceeded, lower-priority voices are stolen. Priority is calculated from a combination of category base priority, distance from the listener, and a per-source Priority override.
```json
{
  "audio": {
    "voiceBudget": 32,
    "categories": {
      "SFX":      { "volume": 0.8, "priority": 5 },
      "Music":    { "volume": 1.0, "priority": 10 },
      "Voice":    { "volume": 1.0, "priority": 15 },
      "Ambience": { "volume": 0.6, "priority": 3 },
      "UI":       { "volume": 0.9, "priority": 20 }
    }
  }
}
```

Higher priority values are less likely to be stolen.
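The voice-stealing decision could be sketched as below: combine the category's base priority, closeness to the listener, and the per-source override into one score, then steal the lowest-scoring voice. The Voice record, the scoring weights, and the distance term are all assumptions for illustration, not the engine's actual formula.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical snapshot of a playing voice (not engine API).
record Voice(string Key, int CategoryPriority, float Distance, int PriorityOverride);

static class VoiceStealSketch
{
    // Higher score = more important = less likely to be stolen.
    // Closer voices get a small distance bonus.
    public static float Score(Voice v) =>
        v.CategoryPriority + v.PriorityOverride + 10f / (1f + v.Distance);

    // When the budget is full, the lowest-scoring voice is the victim.
    public static Voice PickVictim(IEnumerable<Voice> playing) =>
        playing.OrderBy(Score).First();
}
```

Under the config above, a distant Ambience voice (base priority 3) would be stolen long before a nearby Voice line (base priority 15).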
Streaming audio
Large audio files (music, long ambience) should be loaded as StreamingAudioAsset rather than SoundEffect. Streaming assets are decoded in a background thread and feed a small ring buffer to the output mixer.
```csharp
world.Add(musicEntity, new AudioSourceComponent
{
    SoundKey = "music/exploration_theme",
    Category = AudioCategory.Music,
    Streaming = true,
    Loop = true,
});
```

Non-streaming sounds are decoded fully into memory at content load time. Use streaming for anything over ~5 seconds.
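The decoder-to-mixer handoff described above can be sketched as a small ring buffer of PCM samples: the decoder writes as much as fits, and the mixer reads what is available, zero-padding on underrun. This is a single-threaded illustration with invented names; the engine's real buffer would need synchronisation between the decoder thread and the mixer.

```csharp
using System;

// Illustrative PCM sample ring buffer (not engine API).
class SampleRingSketch
{
    private readonly float[] _buf;
    private int _read, _write, _count;

    public SampleRingSketch(int capacity) => _buf = new float[capacity];

    public int FreeSpace => _buf.Length - _count;

    // Decoder side: append as many samples as fit, return how many were taken.
    public int Write(ReadOnlySpan<float> samples)
    {
        int n = Math.Min(samples.Length, FreeSpace);
        for (int i = 0; i < n; i++)
        {
            _buf[_write] = samples[i];
            _write = (_write + 1) % _buf.Length;
        }
        _count += n;
        return n;
    }

    // Mixer side: fill the output span, zero-padding on underrun.
    public int Read(Span<float> output)
    {
        int n = Math.Min(output.Length, _count);
        for (int i = 0; i < n; i++)
        {
            output[i] = _buf[_read];
            _read = (_read + 1) % _buf.Length;
        }
        _count -= n;
        output[n..].Clear();
        return n;
    }
}
```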
Procedural SFX variation
AudioVariantSet is an asset that groups multiple sound files with randomisation rules. Reference it instead of a single SoundKey to get automatic pitch and volume variation on playback:
```json
{
  "variants": [
    "sfx/footstep_grass_01",
    "sfx/footstep_grass_02",
    "sfx/footstep_grass_03"
  ],
  "pitchVariance": 0.08,
  "volumeVariance": 0.05,
  "playMode": "ShuffleNoRepeat"
}
```

PlayMode options: Random, ShuffleNoRepeat, RoundRobin. Reference the variant set asset key in AudioSourceComponent.SoundKey — the audio system detects that the key points to a variant set and applies the selection logic.
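ShuffleNoRepeat typically means: play every variant once in a random order, reshuffle when the bag empties, and never play the same variant twice in a row across the reshuffle boundary. A minimal sketch of that logic, with invented class and method names:

```csharp
using System;
using System.Collections.Generic;

// Illustrative ShuffleNoRepeat picker (not engine API).
class ShuffleNoRepeatSketch
{
    private readonly List<string> _variants;
    private readonly Random _rng;
    private readonly Queue<string> _bag = new();
    private string _last = "";

    public ShuffleNoRepeatSketch(IEnumerable<string> variants, int seed = 0)
    {
        _variants = new List<string>(variants);
        _rng = new Random(seed);
    }

    public string Next()
    {
        if (_bag.Count == 0)
        {
            var order = new List<string>(_variants);
            // Fisher-Yates shuffle
            for (int i = order.Count - 1; i > 0; i--)
            {
                int j = _rng.Next(i + 1);
                (order[i], order[j]) = (order[j], order[i]);
            }
            // Avoid an immediate repeat across the reshuffle boundary.
            if (order.Count > 1 && order[0] == _last)
                (order[0], order[1]) = (order[1], order[0]);
            foreach (var v in order) _bag.Enqueue(v);
        }
        _last = _bag.Dequeue();
        return _last;
    }
}
```

Pitch and volume variance would then be applied on top of the picked variant, sampling a random offset within ±pitchVariance and ±volumeVariance.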
Adaptive music
The MusicDirector service drives layered adaptive music tracks. An AdaptiveMusicTrack asset defines a set of stems (individual audio layers) and the conditions under which each stem fades in or out.
```csharp
// Load and start an adaptive track
var director = Services.Get<MusicDirector>();
director.Play("music/battle_adaptive");

// Raise the intensity (e.g., player enters combat)
director.SetParameter("Intensity", 0.8f);

// Return to exploration
director.SetParameter("Intensity", 0.2f);
```

The track asset maps parameter ranges to stem fade-in rules. Transitions are crossfaded over a configurable duration, so the switch is never audibly abrupt. Multiple parameters can drive different stems simultaneously: for example, Intensity drives percussion stems while Danger drives dissonant string layers.
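One way the parameter-range-to-stem mapping could work is a gain window per stem: the stem is audible while its parameter sits inside a range, ramping linearly near each edge. The record fields below are assumptions for illustration, not the actual AdaptiveMusicTrack schema.

```csharp
using System;

// Hypothetical stem rule: audible while Parameter is in [FadeInAt, FadeOutAt],
// with a linear ramp of width Ramp at each edge (not the real asset schema).
record StemRule(string Parameter, float FadeInAt, float FadeOutAt, float Ramp);

static class StemMixSketch
{
    public static float Gain(StemRule rule, float value)
    {
        float up   = Clamp01((value - rule.FadeInAt) / rule.Ramp);
        float down = Clamp01((rule.FadeOutAt - value) / rule.Ramp);
        return Math.Min(up, down);
    }

    private static float Clamp01(float x) => Math.Max(0f, Math.Min(1f, x));
}
```

With a percussion stem windowed at Intensity 0.5 to 1.0, SetParameter("Intensity", 0.8f) would bring it to full gain, and dropping back to 0.2 would fade it out entirely.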