

In my last article we set up an Icecast server. It was a pretty "meh" exercise in isolation, but it laid the groundwork for this article. In order to live stream audio to multiple listeners we need two things: an encoding workflow and a delivery/distribution workflow. The Icecast server is in effect our distribution workflow. Without a source-encoded stream to distribute, it is pretty boring. It just sits there confirming it is ready. In this article we are going to wake it up and send it an audio stream.

I will then make a few comments that contrast this "proper" streaming workflow with the earlier rudimentary audio streaming article I wrote, which simply used a TCP connection to send audio data across your LAN. So let's assume you have the Icecast server up and running, waiting patiently in your cloud platform for a source from your laptop. Why the laptop? Well, we need to present some audio, and while you could use a file on a disc on another cloud machine, there is nothing very interesting about delivering an audio file from one location to another. What is really interesting is hearing your own voice streaming out, not least because things like latency become much more apparent when you say "hi" and you hear it a few seconds later. You can't really get a feel for that when you are pressing play and simply hear a recording streaming through the workflow. Since there is no way to get your microphone plugged into a machine in the cloud, we will use the microphone on your laptop. Our microphone will be connected to the audio capture interface ("line/mic in"). FFmpeg will listen to this input for uncompressed/PCM audio, and then use an audio encoding codec (MP3 in this example) to compress the audio. FFmpeg will then encapsulate this audio in an Icecast ICY/HTTP/TCP container format (a process called "packaging") and establish a connection to the Icecast server up in the cloud. Once established, we will check for the stream on the Icecast server and finally we will play that stream in a web browser.
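To make that pipeline concrete, the encode-and-contribute command will end up looking something like the sketch below. Treat it as a sketch only: the avfoundation device index (":0"), the host name "your-icecast-host", the mount point "/live" and the source password "hackme" are placeholders you will need to replace with your own values, and we work out the real device index later in this article.

    ffmpeg -f avfoundation -i ":0" -c:a libmp3lame -b:a 128k -content_type audio/mpeg -f mp3 icecast://source:hackme@your-icecast-host:8000/live

Here -f avfoundation -i ":0" captures PCM audio from audio device 0 on the Mac, -c:a libmp3lame -b:a 128k performs the MP3 encoding (assuming your FFmpeg build includes libmp3lame), and the icecast:// output URL uses FFmpeg's built-in Icecast protocol, which handles the ICY/HTTP packaging and pushes the stream to the mount point on the server.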

For good order, we then recommend that you check the stream can be accessed in multiple browsers across multiple machines.
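Beyond the browser, a quick command-line check can also help, again assuming the hypothetical host and mount point from the sketch above. ffplay, which ships alongside FFmpeg, will pull and play the stream directly, and the Icecast web interface on port 8000 lists the active mount points and their listener counts.

    ffplay -nodisp http://your-icecast-host:8000/live

If that plays your voice back, a few seconds late, the whole encode, contribute and distribute chain is working.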

Note that all these demos can be reworked on Windows, but I don't use Windows, so you will have to do a bit of Googling to work out the nuances. The first challenge is to get the audio input from your mic source working and usable by FFmpeg. On Mac OSX:

    ffmpeg -f avfoundation -list_devices true -i ""

On my machine this produces a lot of output, but the very last section looks like this:
