Dear forum members,
I have encountered several problems with Volumio 2. I had an older version running fine and was so impressed that I ordered a HiFiBerry DAC for the
Raspberry Pi B it was running on. When I wanted to test the DAC, I got a message that a newer version was available, so I upgraded.
It seems I upgraded into a lot of problems.
After a while I decided to start over, so I downloaded and installed the latest image for the Raspberry Pi. Sometimes DHCP handed out
an IP address, sometimes not. When I probed the IP address with a browser, I got an "Unable to connect" message.
The Volumio wireless network sometimes existed, sometimes not. The 192.168.211.1 address did not work, and
there was no response from volumio.local. Several times I had to pull the power plug just to give it a new try.
In short, I am wondering what the heck is going on.
I took the SDHC card out again and burned the image from November 23, 2016 with Etcher. Nothing changed.
To check whether the Raspberry Pi B itself was OK, I booted another image that I use a lot: no glitches, errors, or malfunctions; everything
worked as expected. Even the Wi-Fi came up nicely and connected to my home network.
This morning I started the November image again and logged in via SSH. It had received a valid IP address, so I browsed to it:
"Problem loading page. Unable to connect."
I tried to have a look at the log files:
volumio@volumio:/static/var/log$ sudo tail -n50 lastlog
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for volumio:
sudo: unable to mkdir /var/lib/sudo/lectured: No space left on device
sudo: unable to mkdir /var/lib/sudo/ts: No space left on device
volumio@volumio:/static/var/log$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mmcblk0p2 2.0G 497M 1.4G 26% /imgpart
/dev/loop0 247M 247M 0 100% /static
overlay 354M 346M 0 100% /
devtmpfs 233M 0 233M 0% /dev
tmpfs 242M 0 242M 0% /dev/shm
tmpfs 242M 8.5M 233M 4% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 242M 0 242M 0% /sys/fs/cgroup
tmpfs 242M 4.0K 242M 1% /tmp
tmpfs 242M 0 242M 0% /var/spool/cups
tmpfs 242M 4.0K 242M 1% /var/log
tmpfs 242M 0 242M 0% /var/spool/cups/tmp
/dev/mmcblk0p1 61M 29M 33M 47% /boot
tmpfs 49M 0 49M 0% /run/user/1000
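The overlay root is at 100%, which matches the "No space left on device" errors from sudo. To find out what is actually eating the space, I plan to try something like the following (a rough sketch, assuming `du` and `sort` behave as on stock Raspbian; the exact paths on a Volumio image may differ):

```shell
# List the largest directory trees on the root overlay.
# -x stays on one filesystem, so /static, /boot etc. are not counted.
# sort -h -r puts the biggest human-readable sizes first.
sudo du -x -h -d 2 / 2>/dev/null | sort -h -r | head -n 15
```

That should at least point at whichever directory has ballooned on the overlay.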
With top I noticed that the node process sits above ninety percent CPU a lot of the time.
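To pin down which node process that is (and with what command line), I could take a one-shot snapshot instead of watching top (assuming the procps `ps` that ships with Raspbian):

```shell
# One-shot process list, sorted by CPU usage, highest first.
# The full command line shows which node script is the culprit.
ps aux --sort=-%cpu | head -n 10
```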
Has anyone got some insight into what is causing these troubles?