I wrote a dumb little script called clipboard-speaker published on https://codeberg.org/yuvallangerontheroad/clipboard-speaker/.
Written in Python and using xsel and espeak-ng. Released under AGPLv3+.
You mark a bit of text, run the program using a custom key binding, and it will say it out loud. While it is speaking, mark another bit and run it again, and it will add that text to the queue.
I use it all the time and I'm looking for users who would have some further input.
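At its core the script is just a pipe from xsel into espeak-ng. A minimal sketch of that one-shot pipeline, with echo and cat standing in for xsel and espeak-ng so it runs anywhere (the stand-in commands are the only liberty taken; the real script does more):

```python
import subprocess

# Stand-ins so the sketch runs without X or audio: in the real script
# these would be ["xsel", "--output"] to read the primary selection
# and ["espeak-ng"] to speak it.
READ_SELECTION = ["echo", "some selected text"]
SPEAK = ["cat"]

# Read the "selection", then feed it to the "speech engine" on stdin.
selection = subprocess.run(READ_SELECTION, capture_output=True, text=True).stdout
result = subprocess.run(SPEAK, input=selection, capture_output=True, text=True)
print(result.stdout, end="")
```

Swapping the engine is just a matter of changing the second command, as long as the engine reads text from stdin.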
Thanks, I will have a look. Would it be too much work to make it use Mimic 1 and/or Mimic 3 as the speech engine? Here is a post I wrote on Mimic 3: https://zenquest.co/team/eben-farnworth/rabbitwhole/text-to-speech-on-linux-ai-rising (not on Substack, as it seems to stop Google from listing posts).
Maybe, if you can pipe text into its stdin.
clipboard-speaker opens a FIFO file and pipes xsel's stdout into it. It then starts espeak-ng with the FIFO's output as its stdin. While espeak-ng is running inside the first clipboard-speaker instance, you can run more clipboard-speaker instances, each piping another xsel's output into the FIFO; that is how the queue works.
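The FIFO mechanism can be sketched like this (Linux-only; cat stands in for espeak-ng so it runs without audio, and the O_RDWR keepalive trick is an addition of mine to make the demo deterministic — the real script may handle writer lifetimes differently):

```python
import os
import subprocess
import tempfile

fifo_path = os.path.join(tempfile.mkdtemp(), "speech.fifo")
os.mkfifo(fifo_path)

# Keep one write end open so the engine does not hit EOF in the gap
# between two writers (opening a FIFO O_RDWR is a Linux convenience).
keepalive = os.open(fifo_path, os.O_RDWR)

# "First instance": start the engine reading the FIFO as its stdin.
# `cat` plays the role of espeak-ng here.
read_end = os.open(fifo_path, os.O_RDONLY)
engine = subprocess.Popen(["cat"], stdin=read_end, stdout=subprocess.PIPE)
os.close(read_end)

def enqueue(text):
    # What each clipboard-speaker invocation does: open the FIFO for
    # writing, append one selection (from xsel in the real script), close.
    with open(fifo_path, "w") as f:
        f.write(text)

enqueue("first selection\n")
enqueue("second selection\n")

os.close(keepalive)              # no writers left: the engine sees EOF
out, _ = engine.communicate()
print(out.decode(), end="")
```

Each writer just appends to the stream the engine is reading, which is why later selections queue up behind whatever is currently being spoken.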
I am thinking of switching from this FIFO mechanism to a TCP server for greater capabilities, but as it is now, it is almost perfect for my use case.
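One possible shape for that TCP variant (purely hypothetical — nothing like this exists in the released tool, and the "<stop>" sentinel is invented for the demo): each invocation connects and sends one selection, and a server thread appends it to a queue that the engine side would drain:

```python
import queue
import socket
import threading

speech_queue = queue.Queue()

server = socket.socket()
server.bind(("127.0.0.1", 0))        # ephemeral port for the demo
server.listen()
port = server.getsockname()[1]

def accept_loop():
    # Server side: one connection per queued selection, read until the
    # client closes, then enqueue the text for the speech engine.
    while True:
        conn, _ = server.accept()
        with conn:
            chunks = []
            while (data := conn.recv(4096)):
                chunks.append(data)
            text = b"".join(chunks).decode()
            if text == "<stop>":     # demo-only shutdown sentinel
                break
            speech_queue.put(text)

threading.Thread(target=accept_loop, daemon=True).start()

def enqueue(text):
    # What each clipboard-speaker invocation would do instead of
    # writing to the FIFO: connect, send the selection, close.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(text.encode())

enqueue("first selection")
enqueue("second selection")
enqueue("<stop>")

# The engine side would pop items off speech_queue and pipe each one
# into espeak-ng's stdin; here we just show the queued order.
first = speech_queue.get()
second = speech_queue.get()
print(first, second, sep="|")
```

Compared with the FIFO, a server like this could also support commands (pause, clear the queue, change voice) rather than only raw text.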
Do you use Okular at all? I have been trying to figure out a way to import engines/voices into it, but I can only get basic flite voices working; I can't select any other voices. I've been looking for a tutorial but nothing seems to be available. Thanks for your contribution, btw. This is super helpful!
Hi, thanks for writing. I have not used Okular, as it seems to offer the wrong feature for my use case (reading the whole screen instead of reading the selected text).
I also stopped writing on Substack, as they seem to block my posts from search engines. I wrote another post, following this one, on using open-source AI voices, which are higher quality but use more resources. I posted it here: https://zenquest.co/team/eben-farnworth/rabbitwhole/text-to-speech-on-linux-ai-rising