Does screen-sharing mode still use a local model for transcription? Can you make it clearer whether sharing my screen will ever cause data to leave my machine?

I really enjoy using monologue.to. It's great knowing that the transcription model can run directly on my Mac, without my data ever being sent off for remote AI analysis, and I appreciate that feature a lot. Does this local-only detection and intelligence apply to screen-sharing mode as well? The context the transcription could gain from seeing my screen seems useful, but it's unclear to me whether screen sharing uses a local model the way voice transcription does, or whether it uses local transcription at all. Could you make this clearer?

Thanks for making this app!


Status: Completed
Board: 💡 Feature Request
Date: 6 months ago
Author: BluCreator
