Logitech Media Server (LMS) is a piece of software, well described here: https://en.wikipedia.org/wiki/Logitech_Media_Server
- It gets music files or streams from a plethora of diverse origins (files on local storage, files on private or public cloud storage, streams from other private streaming platforms, e.g. another LMS, or from public services, e.g. Spotify, Tidal, Qobuz, etc.), transcodes formats if need be, and streams/sends the songs to compatible “renderers”, i.e. music players which in their turn feed the actual audio hw (DAC -> AMP -> Transducer)
- It’s available for Windows, Mac and of course Linux including a few specialised Linux distributions
- It therefore runs on “usual” x86/x64 hw, Apple hw, and – what matters most – on a huge array of low cost and especially low power consuming SBCs (Single Board Computers)
- Considering today’s available hw performance level, its system (CPU/RAM etc) requirements for even a fancy home setup are unbelievably low
- It’s free software (GPL)
LMS does not “play music” itself: it just collects music and manages its storage, access and distribution to the actual players (“renderers”).
As a renderer you can use either a preconfigured hardware device (e.g. a Chromecast, a Squeezebox, etc.), or install compatible receiver software on a general purpose system (e.g. your PC, your Mac, your Xbox, etc.), or finally build a “hardware rendering device” from scratch, which is indeed my case – and the good news is that it’s way less complicated than it seems.
The system acting as LMS server may also host a Renderer itself, while still being able to stream audio to other external Renderers.
While streaming audio to multiple renderers, LMS can also keep them in sync, giving simultaneous music distribution in several rooms, for example.
- LMS is the “server”, the manager of the whole system
- Renderers are the outputs, i.e. the points where digital data is finally sent to a DAC>AMP>Speaker/HP/IEM
So how can I use it?
No, I won’t write a full book on the infinite ways to deploy an LMS infrastructure. I’ll just describe how my own infrastructure is organised now, for you to take inspiration 🙂
My LMS is running on an SBC-class computer.
In my specific case we’re talking about a NanoPi NEO2 but it could easily be “any” Raspberry Pi 3 or above, or dozens of similar alternatives.
I’ve chosen an ARM-based SBC over a more “common” x86/x64 NUC due to its dramatically lower power requirements.
My NanoPi draws about 2W while working and less than 0.5W while idle (easily 90% of its time), which means roughly 5 kWh in a year, i.e. a whopping 1.5€ of total annual cost on the electricity bill.
By comparison, an entry-level x86/x64 NUC consumes at least 20 times more.
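Those figures are easy to sanity-check. A minimal sketch, assuming my duty cycle above and a ~0.30€/kWh tariff (your bill will differ):

```python
# Back-of-envelope check of the SBC's annual energy cost.
# Assumed figures: 0.5 W idle 90% of the time, 2 W active 10% of the
# time, and a 0.30 EUR/kWh electricity price.
HOURS_PER_YEAR = 365 * 24  # 8760

def annual_kwh(idle_w=0.5, active_w=2.0, idle_share=0.9):
    """Average power over the year, converted to kWh."""
    avg_w = idle_w * idle_share + active_w * (1 - idle_share)
    return avg_w * HOURS_PER_YEAR / 1000  # Wh -> kWh

def annual_cost_eur(price_per_kwh=0.30):
    return annual_kwh() * price_per_kwh

print(round(annual_kwh(), 1))       # 5.7 kWh per year
print(round(annual_cost_eur(), 2))  # 1.71 EUR per year
```

So even with a pessimistic tariff, the yearly cost stays in the one-to-two-euro range.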
My Neo-LMS server is wired-connected to my main home network switch.
As I already have another (again SBC-class) server acting as a general file server for my home needs, that’s where my digital music files are stored, and my Neo-LMS accesses them via NFS. In a simpler setup, I could of course plug a USB drive right into the Neo-LMS.
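For reference, the NFS mount on the LMS side boils down to one line in /etc/fstab — the host and export names below are hypothetical placeholders, not my actual ones:

```
# /etc/fstab on the LMS box (hypothetical host and paths)
fileserver:/export/music  /mnt/music  nfs  ro,soft,_netdev  0  0
```

Read-only (`ro`) is enough here, since LMS only needs to index and stream the files, never to modify them.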
Once installed, the LMS server publishes an HTML interface, which means that from any of my PCs or wifi-enabled devices (phones, tablets, DAPs…) I can access it, as long as I can browse to its address.
LMS creates an index of all music files on the storage, much like any “media manager” application does (including those inside DAPs).
Let’s now suspend the LMS description for a sec, and move on to the Renderers.
My first Renderer is – guess what – a RaspberryPi ZeroW.
On it I loaded PiCorePlayer, which I like as it offers two great features at the same time: it’s super-easy to install, and it sounds wonderfully good.
It’s good to note that PiCorePlayer also optionally carries LMS built in. That means that in an even simpler situation I could have avoided keeping a standalone Neo-LMS device acting as a mere server, and could have elected one of my Renderers to the role of Renderer and Server for itself and all the others.
Now that at least one PiCorePlayer is installed and running, I can go back onto LMS’s webpage – called from a phone, while sitting on the sofa – and I’ll see a Renderer available in my network. At that point I can choose a song from the index, a Renderer to send it to, and click PLAY.
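The same “choose a renderer, press PLAY” action can also be scripted: LMS exposes a JSON-RPC endpoint at /jsonrpc.js on its web port. A minimal sketch of the request it accepts — the host name and player MAC address below are placeholders, not my real devices:

```python
import json

# Sketch of driving LMS over its JSON-RPC interface: an HTTP POST to
# http://<lms-host>:9000/jsonrpc.js carrying a "slim.request" body.

def slim_request(player_id, command):
    """Build the JSON body LMS expects for a single command."""
    return {
        "id": 1,
        "method": "slim.request",
        "params": [player_id, command],
    }

# Start/resume playback on a given renderer (placeholder MAC):
payload = slim_request("aa:bb:cc:dd:ee:ff", ["play"])
print(json.dumps(payload))

# Actually sending it needs a live server, so it is left commented out:
# import urllib.request
# req = urllib.request.Request(
#     "http://neo-lms:9000/jsonrpc.js",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```

The same envelope carries every command the web UI issues — e.g. `["players", 0, 100]` (with `"-"` as the player id) lists the renderers LMS currently sees.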
My 3 Renderers
My first Renderer – the aforementioned RaspberryPi ZeroW – is called Allo, as it hosts an Allo MiniBoss I2S DAC card.
I bought the MiniBoss some time ago to start getting my hands dirty with DACs etc. Not a DAC to write home about in terms of reconstruction fidelity, but it does have one VERY interesting feature, and that’s its built-in reclocker, which allows it to avoid the heavy jitter that low-end RaspberryPi models are plagued with on their onboard USB. As I said, not a TOTL device, but not shit either… at all 🙂
This mini network-DAC box is then connected to an Allo Volt amp box, giving juice to a pair of Roth Audio OLIRA1 bookshelf speakers. This covers a sitting corner in my livingroom with some non-pretentious-quality audio output.
My second Renderer is – guess again – another Raspberry, this time a model 3B, which sits on my nightstand. A Groove is plugged in, and that’s where I plug my IEMs before sleeping, juggling my songs from my phone by remote-controlling the back-end LMS box.
Why a 3B? Not so much for its higher performance vs a Zero – that wouldn’t really be so vital – but rather because the 3B is the first Raspberry model on which the jitter issues on the internal USB bus have been fixed.
A RaspberryPi 3B is OOTB way less noisy than a PC, but this doesn’t mean it wouldn’t welcome at least an entry-level audio power supply (which I am using), and some (further) USB filtering… which is my next planned upgrade there.
A RaspberryPi requires a 5V power supply and draws less than 0.5W on its own, which becomes 1W or a little more depending on “what you make it do”, or on which devices you plug into its USB port. Read: a RaspberryPi renderer can easily be made “DIY-style mobile” by adding an external battery and finding a decent way to latch it to the main unit – for which countless DIY examples and parts are available online.
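To get an idea of how long such a DIY-mobile renderer would last, here is a back-of-envelope estimate — the power-bank capacity, conversion efficiency and average draw are all assumed figures, not measurements:

```python
# Rough runtime for a "DIY-mobile" Pi renderer on a common USB power
# bank. All figures below (capacity, efficiency, draw) are assumptions.

def runtime_hours(bank_mah=10000, cell_v=3.7, draw_w=1.5, efficiency=0.85):
    """Hours of playback from a power bank of the given capacity."""
    energy_wh = bank_mah / 1000 * cell_v    # cell energy in Wh (~37 Wh)
    return energy_wh * efficiency / draw_w  # usable energy / average draw

print(round(runtime_hours()))  # ~21 hours
```

In other words, a run-of-the-mill 10,000 mAh power bank should comfortably cover a full day of listening.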
Finally my third renderer… is not a renderer, yet 🙂
I could in fact install a Squeezelite renderer on my Windows laptop to use it with LMS while I’m seated at my desk at home, but for whatever reason it didn’t work “well” for me OOTB, and I haven’t found the time to fix it for good yet.
So my laptop is not integrated into my home LMS infrastructure just yet, and I still use it the “old way”: MusicBee – my reference Windows music player, the best compromise for me between sound quality and usage pleasure – independently accesses my centralised digital music file volumes (the same ones seen and managed by LMS), creates its own separate index, and plays out on (another) Groove via an iFi nano iUSB3 PS device to cleanse the shit out of the USB & power line coming out of my Lenovo.
Back to the LMS now
So far we’ve seen LMS as a way to play digital files from a single storage location onto a number of players scattered around the network.
LMS comes with a host of additional features in the form of preinstalled plugins, which I can simply activate or deactivate.
Some key examples:
Tidal, Spotify, Qobuz integration. Put my accounts in there, use my phone to remote-control my LMS server, open the Tidal section, select (e.g.) the nightstand Renderer, and get Tidal out of a desktop-class DAC-AMP. Not my cup of tea actually, but I tried it and it works.
UPnP / DLNA integration. Quite a few “music apps” on various mobile devices (including non-android wifi-enabled DAPs) have DLNA rendering capabilities.
An R3Pro (or equivalent) will “see” LMS as a DLNA source, and play files from it.
My Android phone will “see” my Renderers as UPnP eligible targets (requires apps like BubbleUPnP or similar).
And so on.
Airplay integration, Webradio integration, etc etc etc
Last but not least: after putting adequate security measures in place, an LMS server is accessible from “everywhere”, read “outside home”…