I wrote an Android app to visualize spectrograms.
It's been a little frustrating, at least with my Google Pixel 7 Pro. While the system happily records at 192 kHz, both with its internal microphone(s) and with the external AudioMoth, in both cases the input is filtered to below 25 kHz. That is an understandable measure for a "phone" that is supposed to work best for human speech. Unfortunately, the Pixel does not support the "UNPROCESSED" audio source as some other Android devices seem to do.
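For reference, this is roughly how the check looks in Kotlin: query the AudioManager property for UNPROCESSED support and fall back to another source if it is absent. The fallback to VOICE_RECOGNITION is my own assumption of a reasonable second choice, not something the platform promises:

```kotlin
import android.content.Context
import android.media.AudioFormat
import android.media.AudioManager
import android.media.AudioRecord
import android.media.MediaRecorder

// Pick UNPROCESSED if the device claims to support it; otherwise fall
// back to VOICE_RECOGNITION, which usually gets the least processing.
fun pickAudioSource(context: Context): Int {
    val am = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    val unprocessed = am.getProperty(
        AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED
    ) == "true"
    return if (unprocessed) MediaRecorder.AudioSource.UNPROCESSED
           else MediaRecorder.AudioSource.VOICE_RECOGNITION
}

// Requires the RECORD_AUDIO permission to be granted already.
fun createRecorder(context: Context): AudioRecord {
    val sampleRate = 192_000
    val minBuf = AudioRecord.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT
    )
    return AudioRecord(
        pickAudioSource(context), sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf * 4  // extra headroom so reads don't drop samples at 192 kHz
    )
}
```

On the Pixel 7 Pro, that property check comes back negative.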
I also verified that the Android app really uses the AudioMoth rather than the internal mic, and that the AudioMoth picks up frequencies above 20 kHz correctly when connected to a PC.
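The routing check can be done from the AudioRecord itself; this is a minimal sketch of how I'd verify it (note that getRoutedDevice only reports a device once recording has started):

```kotlin
import android.media.AudioDeviceInfo
import android.media.AudioRecord

// Returns true if the active input route is a USB device such as the
// AudioMoth. Call this after startRecording(); before that, the
// routed device may be null.
fun isUsbMicActive(record: AudioRecord): Boolean {
    val device: AudioDeviceInfo? = record.routedDevice
    return device != null &&
        (device.type == AudioDeviceInfo.TYPE_USB_DEVICE ||
         device.type == AudioDeviceInfo.TYPE_USB_HEADSET)
}
```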
At this point there doesn't seem to be a workaround, even though the filtering must happen in software, given that it is applied even to the USB microphone.
There is another post that mentions that the AUDIO.
Does anybody else have suggestions or information on this?
My progress so far is that I got a real-time spectrogram viewer working. Now I'm trying to combine everything: recording at 192 kHz, writing to a FLAC stream, viewing the ongoing spectrogram, and then going back to view older data. GitHub Copilot was also able to hack together a heterodyne conversion.
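For anyone wondering what the heterodyne step amounts to: the signal is mixed with a local oscillator so that ultrasonic content near the oscillator frequency lands in the audible range, then low-pass filtered to suppress the sum-frequency image. This is a minimal sketch of the idea, not the code Copilot produced; loFreq and the filter constant are values you'd tune by ear:

```kotlin
import kotlin.math.PI
import kotlin.math.cos

// Heterodyne a 16-bit mono buffer: multiply by cos(2*pi*loFreq*t) to
// shift the spectrum down by loFreq, then apply a crude one-pole
// low-pass (cutoff around 4-5 kHz at fs = 192 kHz) to keep only the
// difference frequencies.
fun heterodyne(
    input: ShortArray,
    sampleRate: Int = 192_000,
    loFreq: Double = 40_000.0  // tune to the calls you want to hear
): ShortArray {
    val out = ShortArray(input.size)
    val step = 2.0 * PI * loFreq / sampleRate
    var phase = 0.0
    var lp = 0.0
    val alpha = 0.15  // one-pole smoothing factor
    for (i in input.indices) {
        val mixed = input[i] * cos(phase)  // frequency shift by loFreq
        lp += alpha * (mixed - lp)         // low-pass the mixed signal
        out[i] = lp.coerceIn(-32768.0, 32767.0).toInt().toShort()
        phase += step
        if (phase > 2.0 * PI) phase -= 2.0 * PI  // keep the phase bounded
    }
    return out
}
```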
It turns out that recording at 192 kHz makes things tough all around, especially on a tight memory budget: 16-bit mono PCM at that rate is 384 kB per second, or roughly 23 MB per minute, so a couple of minutes of audio data is all I can keep in memory easily and reliably.
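That constraint pretty much dictates a ring buffer that overwrites the oldest audio. A hypothetical sketch of what I mean (the class name and sizes are mine, not existing code):

```kotlin
// Fixed-capacity ring buffer for 16-bit samples. Two minutes of mono
// audio at 192 kHz is about 46 MB, so the capacity is allocated once
// and old samples are overwritten instead of growing the heap.
class SampleRing(seconds: Int, sampleRate: Int = 192_000) {
    private val buf = ShortArray(seconds * sampleRate)
    private var writePos = 0
    var filled = 0
        private set

    fun write(chunk: ShortArray, length: Int = chunk.size) {
        for (i in 0 until length) {
            buf[writePos] = chunk[i]
            writePos = (writePos + 1) % buf.size
        }
        filled = minOf(filled + length, buf.size)
    }

    // Copy out the newest `count` samples, oldest first.
    fun latest(count: Int): ShortArray {
        val n = minOf(count, filled)
        val out = ShortArray(n)
        var idx = (writePos - n + buf.size) % buf.size
        for (i in 0 until n) {
            out[i] = buf[idx]
            idx = (idx + 1) % buf.size
        }
        return out
    }
}
```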
I hope to be able to extend this to a fully-fledged player/viewer, capable of both recording and replaying. If that works, maybe I can also add a video exporter that combines the heterodyne-converted audio with the spectrogram, maybe even with a camera recording.
I've also looked into the possibility of configuring the AudioMoth firmware over USB from the Android app, and that should be possible. But don't hold your breath.
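The first steps on the Android side would be plain UsbManager plumbing: find the device and ask the user for permission to talk to it. The vendor ID below is a placeholder (I haven't pinned down the AudioMoth's actual IDs), and the broadcast action string is just an example name:

```kotlin
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.hardware.usb.UsbDevice
import android.hardware.usb.UsbManager

const val AUDIOMOTH_VENDOR_ID = 0x0000  // placeholder, not the real VID
const val ACTION_USB_PERMISSION = "com.example.USB_PERMISSION"  // example name

// Look for the AudioMoth among the currently attached USB devices.
fun findAudioMoth(context: Context): UsbDevice? {
    val usb = context.getSystemService(Context.USB_SERVICE) as UsbManager
    return usb.deviceList.values.firstOrNull { it.vendorId == AUDIOMOTH_VENDOR_ID }
}

// Trigger the system permission dialog; the result arrives as a broadcast.
// FLAG_MUTABLE is required on API 31+ so the system can attach the
// device extra to the intent.
fun requestAccess(context: Context, device: UsbDevice) {
    val usb = context.getSystemService(Context.USB_SERVICE) as UsbManager
    val pi = PendingIntent.getBroadcast(
        context, 0, Intent(ACTION_USB_PERMISSION), PendingIntent.FLAG_MUTABLE
    )
    usb.requestPermission(device, pi)
}
```

The actual configuration protocol (as far as I can tell, the AudioMoth's desktop app talks to it over USB HID) would then go on top of this.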
Having gone the Kotlin/Android SDK route, I can see why it may be advantageous to use the NDK instead, for example through C++ or Rust, or to reach for a cross-platform stack like React Native, Python, or MAUI.