MingShi is a real-time audio synthesizer that runs entirely in your browser. Instead of offering traditional oscillators or sample playback, it lets you write WGSL compute shaders that run directly on the GPU via WebGPU, giving you full programmatic control over every output sample. No install, no sign-up, no server. Powered by Rust, WebAssembly, and WebGPU.
Write a @compute WGSL shader in the editor. Your shader receives a uniform buffer
with timing and sample-rate info and writes samples directly into a storage buffer.
MingShi compiles the shader, dispatches it on your GPU, reads back the results, encodes them
as a stereo WAV file, and plays it back instantly, all inside the browser tab.
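To make the shape of such a shader concrete, here is a minimal sketch of a @compute shader that fills the storage buffer with a 440 Hz sine tone. The struct fields, binding indices, and workgroup size are illustrative assumptions, not MingShi's actual interface:

```wgsl
// Hypothetical uniform layout -- MingShi's real fields may differ.
struct Params {
    sample_rate: f32,   // samples per second, e.g. 44100.0
    n_samples: u32,     // total number of samples to generate
};

@group(0) @binding(0) var<uniform> params: Params;
@group(0) @binding(1) var<storage, read_write> samples: array<f32>;

// One invocation per output sample: each thread computes f(t) independently.
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let i = gid.x;
    if (i >= params.n_samples) { return; }
    let t = f32(i) / params.sample_rate;           // time in seconds
    samples[i] = 0.5 * sin(6.2831853 * 440.0 * t); // 2*pi*f*t
}
```

Because every sample is an independent function of its index, the whole buffer can be generated in parallel with no sequential dependency between threads.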
Anything that can be expressed as a mathematical function of time: synthesized tones, additive and FM synthesis, physical modelling, procedural soundscapes, algorithmic music, and more. The default shader demonstrates multi-voice synthesis with envelopes, detuning, tremolo, and a simple delay-line reverb, all in plain WGSL.
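As one illustration of "a function of time", a single FM voice with a decaying envelope can be written as a pure WGSL function; the carrier, modulator, index, and envelope constants below are arbitrary example values, not taken from the default shader:

```wgsl
// Sketch of one FM synthesis voice, evaluated at time t (seconds).
fn fm_voice(t: f32) -> f32 {
    let fc = 220.0;            // carrier frequency (Hz)
    let fm = 110.0;            // modulator frequency (Hz)
    let index = 2.0;           // modulation index (depth)
    let env = exp(-1.5 * t);   // simple exponential decay envelope
    return env * sin(6.2831853 * fc * t + index * sin(6.2831853 * fm * t));
}
```

Summing several such voices at detuned frequencies, or feeding the result through a delay line, builds up the kind of multi-voice patch the default shader demonstrates.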
Musicians and sound designers curious about GPU-driven synthesis, researchers exploring parallel audio generation, and developers who want a hands-on playground for WebGPU compute shaders. If you have ever wanted to treat audio generation as a massively parallel numerical problem, this is that tool.
MingShi is built with Rust compiled to WebAssembly, using Dioxus for the UI and wgpu for GPU access via the WebGPU API. The entire synthesis pipeline, including shader compilation, GPU dispatch, and WAV encoding, happens client-side. Your audio never touches a server.