dapit
But it's OK with UFS now; it feels much, much faster than before with the spinning disk
dapit
death_star% cat /etc/fstab
# Device       Mountpoint   FStype   Options   Dump   Pass#
/dev/ada1a     /            ufs      rw        1      1
/dev/ada0a     /var         ufs      rw        2      2
/dev/ada0b     none         swap     sw        0      0
/dev/ada0d     /home        ufs      rw        2      2
proc           /proc        procfs   rw        0      0
./pascal.sh
mh okay, it seems that my bareos server is running. However, on my linux client the bareos client now fails to start: Can't open PID file /var/lib/bareos/bareos-fd.9102.pid (yet?) after start: No such file or directory. Does someone know how I can resolve this issue?
Anonymous
Check the path where the PID file is created
./pascal.sh
this pid file is actually present and it's owned by user root and group bareos:
4 -rw-r----- 1 root bareos 5 Jun 30 12:50 /var/lib/bareos/bareos-fd.9102.pid
Anonymous
Check the chroot
./pascal.sh
what chroot?
./pascal.sh
still can't open PID file. Never had such a problem, I don't understand it
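The "Can't open PID file … (yet?) after start" wording typically comes from systemd on the Linux client, which reads the path given by the unit's PIDFile= directive. A minimal sketch for checking and, if needed, overriding that path; the unit name bareos-fd and the override are assumptions, only the PID file path itself is taken from the chat above:

```shell
# Show the unit file systemd actually uses for the file daemon:
systemctl cat bareos-fd

# If PIDFile= in the unit differs from the file the daemon really writes
# (/var/lib/bareos/bareos-fd.9102.pid, per the ls output above), override it:
sudo systemctl edit bareos-fd
# In the editor that opens, add:
#   [Service]
#   PIDFile=/var/lib/bareos/bareos-fd.9102.pid

sudo systemctl daemon-reload
sudo systemctl restart bareos-fd
```

If the paths already match, the message can also just mean the daemon exited before writing the file; `journalctl -u bareos-fd` would then show the real error.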
Anonymous
Computing is frustrating. I cannot help, I am just a freebsd user
dapit
Me too man, the problem is too advanced for me
Anonymous
This won't work. I think zfs will not work with two different drives
Take any boot CD and wipe your disks (= removing all partition tables). Start from scratch and re-create your disk layout as you have shown above with the freebsd installer. Install. You should be able to boot then. Everything you can configure within the installer is valid for zfs. Indeed: UFS is faster than zfs, but not as robust.
Another important hint: during my zfs installations I NEVER created an fstab file - neither automatically, nor manually. Once a zfs pool is created and you have installed into it, the system automatically knows what to do with it. You can export/import data pools. Root-on-zfs pools "are just there to be used" during boot time - no manual adjustments whatsoever, no fstab editing needed!
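The "wipe your disks" step can be sketched on FreeBSD with gpart; the device names ada0/ada1 are taken from the fstab above, and these commands are destructive:

```shell
# DESTRUCTIVE: removes all partition tables on both disks
gpart destroy -F ada0
gpart destroy -F ada1

# Optionally clear leftover metadata (old zfs labels, boot code)
# at the start of each disk:
dd if=/dev/zero of=/dev/ada0 bs=1m count=10
dd if=/dev/zero of=/dev/ada1 bs=1m count=10
```

After this the FreeBSD installer sees two blank disks and can lay out the zfs pool itself.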
./pascal.sh
I am my own help desk man 😭😭😭😭😭😭 but this is really frustrating
dapit
I am my own help desk man 😭😭😭😭😭😭 but this is really frustrating
That's our life yes? We don't choose the easy way 🤪
Anonymous
Thank you, I will do it again once my bigger SSD comes. Even though it'll be able to use the whole drive, I will try to put /home on a different drive. I have to read some more about ZFS and its pools; I understand the concepts and the features, but don't know how to manage them
The effort is definitely worth it. Remember: what you can configure within the installer will be usable. But start with clean disks (no partitions at all, or a simple partition table with clean ntfs, fat32 or ext4 partitions). You have to make sure that there are no zfs remnants or UEFI boot partitions on your HDDs.
Creating 'stripe' sets on any disk size, type and number should be no problem at all for zfs. You only need to learn more about this fs if you plan to create redundant (fail-safe) raidz1/2/3 disk pools OR mirrors with many drives (7+). Then you need to follow special recommendations in order to avoid future trouble.
Before your next attempt, read the freebsd-handbook chapters dealing with zfs. Basically you deal with your zfs machine via 2 command families: "zfs" and "zpool", like "zfs list" or "zpool status".
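The two command families mentioned above can be sketched like this; the pool name "tank" and the dataset layout are placeholders, not from the chat:

```shell
zpool status                 # health and layout of every pool
zpool list                   # capacity overview per pool
zfs list                     # all datasets and their mountpoints

# Create a dataset for /home inside a pool called 'tank' (placeholder name)
zfs create tank/home
zfs set mountpoint=/home tank/home
```

Datasets replace the role of partitions plus fstab entries: once created, they mount themselves at boot.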
dapit
I will try again next time
Anonymous
Take the easy approach then. Earlier we already discussed that zfs loves whole drives. Experiment with such an easy setup and grow complexity later.
Krond
Also ZFS feels pretty fine with swap-on-block-volume.
dapit
Damn...I feel like doing it again now 😂
./pascal.sh
Maybe I should ditch FreeBSD in favor of GNU/Linux ... just kidding
./pascal.sh
mh, or I just do occasional backups to my NFS over rsync
./pascal.sh
but a centralised automated solution like bareos would be much nicer
Anonymous
One may become a world traveller while finding / connecting to 'local' freebsd-helpdesks / user groups these days... 😉 ... that's just a wild guess and aims for surroundings in the backcountry
Anonymous
On the other hand, it may get annoying for a real pro to explain the basics 1000x to 10.000 people...
Anonymous
So far I used manual backups with shell, mc and rsync
Anonymous
Currently, I'm discovering 'Syncthing' via WAN and LAN
./pascal.sh
oh yes i wanted to use syncthing for my phone
Anonymous
It's a really cool app
./pascal.sh
but i read that bareos / bacula is better for incremental backups on my PCs
Anonymous
I'm just running an instance on the freebsd-workstation I am sitting at right now
Anonymous
it's a charm!
./pascal.sh
ok and you are also using it to backup your computers?
./pascal.sh
is it automated and incremental and can i restore it with one click?
Anonymous
not yet - but their handbook says it should definitely be usable for home directories (you can even exclude files/folders, AFAICR)
./pascal.sh
ok thx i will check it out
Anonymous
well - it synchronizes folders over several machines
Anonymous
so I regard it as a 'hot spare' solution
Anonymous
A small example: I synchronize 1300+ epubs in one folder across 2 Raspberry Pis, 1 Linux and 1 FreeBSD workstation. With rsync this is already a very annoying manual effort. With Syncthing it is almost a point-and-click solution
Krond
Yes, sync solutions are nice. There are others in ports, though.
Anonymous
Otherwise, I'm still also looking for the best way to go regarding backups. So far, I used whole machines as cold spare backups. That sounds expensive, but I think it still was/is cheaper than a professional tape library solution.
Anonymous
Meanwhile, I'm convinced it is a very, very hard thing to do, when you gradually want to leave a Windows-world behind and transfer your 'historical' data into a Linux/Unix world. There are no books around dealing with this. At least according to my limited knowledge.
Anonymous
And industry's standard answer is: cloud + big data (AI)
Anonymous
That's not the pathway for the advanced computer user
Krond
My solution for backups was replicating ZFS FS changes via snapshots to other hosts.
Anonymous
that is a valid solution, I believe
Anonymous
zfs send/receive with another BSD-server or FreeNAS
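The snapshot-replication approach described above might look like this; the pool, dataset and host names are placeholders:

```shell
# Initial full replication of a dataset to another machine
zfs snapshot tank/home@monday
zfs send tank/home@monday | ssh backuphost zfs receive backup/home

# Later: send only what changed since the last snapshot (incremental)
zfs snapshot tank/home@tuesday
zfs send -i tank/home@monday tank/home@tuesday | ssh backuphost zfs receive backup/home
```

Because snapshots are atomic and incremental sends only ship changed blocks, this scales well for regular automated backups (e.g. from cron).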
Anonymous
Beyond this, my problem is to synchronize all data from quite a number of external HDDs, old system drives or whole computers into one big pool. My data growth rate always exceeded my purchase capacity for storage monsters (professional gear) 😉 - discipline is hard to keep, especially in the long run...
Anonymous
One would need those AI capabilities at home
Anonymous
not in the cloud
Krond
Nextcloud is still too overengineered for me. I use Resilio.
Anonymous
I used nextcloud for a couple of months in a test-environment
Anonymous
heard about resilio, but not tested it yet
Anonymous
As I never lost any valuable data my backup strategies were not so bad. On the other hand my sad record for data redundancy a couple of years ago led to a file, which I had in 48 copies across countless archives ... 😞
Anonymous
How could you possibly sort out 2-3 million files by hand? Very hard to do...
./pascal.sh
OK, I now tried Syncthing and I am currently synchronizing my phone. However, it's very slow. Is there any way I could speed it up?
Anonymous
yes - under 'settings' you can put an internal LAN IP address
Anonymous
then it doesn't synchronize via relay servers on the internet, but straight from computer to computer at your home
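In Syncthing's configuration this corresponds roughly to pinning a static address for the peer device and disabling relays; a sketch of the relevant parts of config.xml, where the device ID and LAN IP are placeholders:

```xml
<device id="DEVICE-ID-PLACEHOLDER" name="phone">
    <!-- static LAN address instead of the default "dynamic" -->
    <address>tcp://192.168.1.50:22000</address>
</device>
<options>
    <!-- skip public relay servers entirely -->
    <relaysEnabled>false</relaysEnabled>
    <localAnnounceEnabled>true</localAnnounceEnabled>
</options>
```

The same settings are reachable in the web GUI under the device's "Advanced" tab and in Settings → Connections.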
Anonymous
nevertheless: the methodology is very similar to torrents (jigsawing your files into thousands of pieces and gluing them together at the destination) - therefore it's a lazy, safe way, but surely not the quickest
Mr.
I want dhcp to handle my network interface but I want to set my own nameservers
Mr.
How????
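One common way, assuming the interface is managed by dhclient: the supersede/prepend directives in dhclient.conf override what the DHCP server hands out for specific options while DHCP still manages everything else. The nameserver IPs here are just examples:

```
# /etc/dhclient.conf on FreeBSD (/etc/dhcp/dhclient.conf on many Linux distros)

# Ignore the DHCP-supplied nameservers and use your own:
supersede domain-name-servers 9.9.9.9, 1.1.1.1;

# Or keep the offered ones, but try yours first:
# prepend domain-name-servers 192.168.1.1;
```

After editing, renew the lease (e.g. restart dhclient on the interface) so /etc/resolv.conf gets rewritten.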
Anonymous
Wtf It does WHAT?
It's a file-chopper, which torrents the pieces over LAN or WAN and re-syncs that mess at the target. A wonderful piece of software. 😉
./pascal.sh
It's a file-chopper, which torrents the pieces over LAN or WAN and re-syncs that mess at the target. A wonderful piece of software. 😉
ok, I managed to deactivate that; now it's only running locally. However, it's still really slow. I'm currently synchronizing with 2.06 KiB/s (2.03 MiB) down, 81 B/s (220 MiB) up, and that's through my local network, I suppose - at least I disabled NAT in Syncthing
./pascal.sh
Download rate 54 B/s (38.7 GiB), upload rate 2.25 KiB/s (51.6 MiB) - still slow
Anonymous
I saw 60+ MB/s on my LAN, depending on file size (ebooks - remember? Only a couple of them are 50+ MB in size, where read/write speed can build up - in contrast to 200 kB files)
Anonymous
I also saw MB/s via WAN
./pascal.sh
oh wait, for a long time it said "preparing for synchronization" even though it had already synced stuff; now it says "synchronizing"
Anonymous
now we're talking!