I love using monologue.to. It's great to know that when I use a transcription model, it runs directly on my Mac without AI ever analyzing my data remotely, and I really appreciate that. However, does this local-only detection and intelligence apply to screen-sharing mode as well? The context transcription could gain from seeing my screen seems useful, but it's unclear to me whether it uses a local model the way the voice-based transcription does. Could this be made clearer? And does screen-sharing mode use local processing at all?
Thanks for making this app!
Completed
Feature Request
6 months ago

BluCreator