neb
(Which means it's minimal and well-written)
neb
Thank you for joining, Mr. 61
neb
ZFS, zones, DTrace: more integrated into the kernel than anywhere else (at least in the open-source world).
neb
Not just slapped on like with FreeBSD, and even more so with Linux.
neb
illumos is the new *BSD 😄
In terms of being the next hipster OS
neb
@illumosDistroes
neb
I'll walk myself out now…
aldebaran 🇮🇹
I had read something. Hipster is OpenIndiana, right? It didn't have UEFI support, or am I wrong?
ɴꙩᴍᴀᴅ
Dog
Phoronix
OpenBSD Marks 25th Anniversary By Releasing OpenBSD 6.8 With POWER 64-Bit Support
It was in October 1995 that Theo de Raadt began the OpenBSD project as a fork of NetBSD 1.0, following his resignation from the NetBSD core development team. Now, twenty-five years later, OpenBSD 6.8 has been released, marking the 25th anniversary of this popular BSD distribution...
aldebaran 🇮🇹
I had read it here
http://docs.openindiana.org/handbook/getting-started/#installing-openindiana
neb
@Aldeb2
neb
If somebody wants to enhance it, we have a project for it on GitHub: https://github.com/OpenIndiana/oi-docs
aldebaran 🇮🇹
@neb_1984 thank you. I know nothing about illumos
neb
So does OI support UEFI boot at all? There seems to be conflicting info in the docs.
neb
illumos does, at least; I'm not sure if the OI installer creates the ESP
hereforyou
does anyone know how to clone a ZFS (live) system onto a new hard disk? I need to replace my HDD with an SSD... most examples I see online are kinda confusing
Krond
Make a recursive snapshot and send/receive it.
Krond
Rsync also works as always.
hereforyou
the ssd is blank as of now
Krond
1. Create a new zpool on the SSD.
2. zfs snapshot -r old-pool@move
3. zfs send -R old-pool@move | zfs receive new-pool
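A slightly fuller sketch of the same procedure (hypothetical device name ada1 for the SSD; -u keeps the received filesystems unmounted so they don't shadow the running system):
zpool create new-pool ada1        # 1. new pool on the SSD
zfs snapshot -r old-pool@move     # 2. recursive snapshot of everything
zfs send -R old-pool@move | zfs receive -u -F new-pool   # 3. replicate the full tree
Whether you need -F on the receive side depends on what already exists under new-pool; on a freshly created pool it mainly guards against the auto-created root dataset getting in the way.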
Krond
I guess you also need to understand how ZFS mounts filesystems. First mount the new pool somewhere else in the tree, so the copied filesystems don't overlap the existing ones.
hereforyou
you mean I'll need another disk besides the HDD/SSD?
hereforyou
I also have GELI encryption on the HDD, FWIW
Krond
No. By default the root ZFS fs is set to a legacy mount, with all other filesystems set to mount under /. You can check how that works for you with:
zfs list -o name,mountpoint
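On a stock FreeBSD root-on-ZFS install the output looks roughly like this (hypothetical pool name zroot):
NAME                MOUNTPOINT
zroot               /zroot
zroot/ROOT          none
zroot/ROOT/default  /
zroot/usr           /usr
zroot/var           /var
The boot environment dataset (zroot/ROOT/default here) is the one that ends up mounted as /, and the rest mount at their listed paths.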
hereforyou
interesting - will try to do these soon
Krond
In this case, when you replicate your full fs tree, all filesystems on the new zpool will inherit mountpoints from the old zpool and get mounted to the same locations.
hereforyou
thanks!
Krond
If you want to import some zpool without overlapping your running system, use zpool import -R /other-root new-pool. That way, any fs that would be mounted over /var is actually mounted on /other-root/var.
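For example (assuming new-pool is currently exported):
zpool import -R /other-root new-pool
zfs list -o name,mountpoint   # mountpoints still read /var etc. ...
df | grep other-root          # ...but everything actually sits under /other-root
The altroot set by -R isn't persistent, so a later normal import (or booting from that disk) mounts everything at the original paths again.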
hereforyou
I'm afraid I'll have to understand the details better before I try this. Thanks for the guidance, though.
Krond
Yep, glad that helped.
Anonymous
Make a recursive snapshot and send/receive it.
I have the same task ahead. Is this going to work?
Strategy for creating a replicated server with zfs send/receive
Environment:
2 machines: system_A & system_B
2 IP addresses: 192.168.1.1 & 192.168.1.2
2 root-on-ZFS pools: zpool_A & zpool_B (zpool_B is newer & larger & is to become a ZFS clone of zpool_A)
Task description:
Replicate the full fs tree, with all filesystems from the old zpool_A on system_A, onto the new zpool_B, with all mountpoints inherited and mounted to the same locations.
The pre-/post-task command
zfs list -o name,mountpoint
should deliver the same output.
1. install a fresh copy of FreeBSD on system_B (root-on-ZFS on zpool_B, create <mainuser>)
2. open a shell on system_A as 'root', then: <ssh mainuser@192.168.1.2>
3. zfs snapshot -r zpool_A@system_A_recursive_snapshot
4. zfs send -R zpool_A@system_A_recursive_snapshot | zfs receive zpool_B
5. get back to system_B: log in as 'root'
6. reboot system_B: log in as 'mainuser'
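A minimal sketch of how steps 3-4 would actually have to run, assuming you are root on system_A and that mainuser can run zfs receive on system_B (root or delegated permissions; a hypothetical setup):
zfs snapshot -r zpool_A@system_A_recursive_snapshot
zfs send -R zpool_A@system_A_recursive_snapshot | ssh mainuser@192.168.1.2 zfs receive -u -F zpool_B
The pipe has to cross the network: zpool_B exists only on system_B, so a plain local zfs receive zpool_B on system_A has nothing to receive into.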
Krond
If both disks are connected to the same host, you can also just zpool attach to create a zpool mirror consisting of the two devices. If you only need to switch to the other disk, remove the old disk from the mirror afterwards; or if you need the copy on another host, you can disconnect the new disk online and remove it. This way you get a clone of your zpool, but I guess you wouldn't be able to attach them both again.
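A sketch of that mirror route, with hypothetical device names:
zpool attach old-pool ada0 ada1   # turn the single vdev into a two-way mirror
zpool status old-pool             # wait until resilvering completes
zpool detach old-pool ada0        # drop the old disk from the mirror
Newer ZFS also offers zpool split, which turns one side of a mirror into a separate, importable pool. Either way, the new disk still needs boot blocks installed separately if you plan to boot from it.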
Krond
As for receiving the snapshots on the running host: I guess that's not possible. It would need to empty all filesystems on that host, and that would require unmounting the existing mountpoints.
Anonymous
Other sources say: do NOT change drives within a running system unless the disks died abruptly due to old-age hardware failures. I know the procedures for replacing HDDs within ZFS mirrors, but that is exactly what I want to avoid, since I already have a second server.
Anonymous
The baseline recommendation is: change HDDs in a running ZFS system only when absolutely unavoidable.
Krond
Well, I guess it's not wise to connect/disconnect them online, as there can be issues if your power supply can't cope with another drive spinning up, or hot-plugging isn't natively supported by the motherboard. Otherwise, if you can power off, add the disk, and power on, there are no limitations.
Krond
My home server currently lives on a 3-disk raidz; I buy cheap refurbished drives for it and replace them when they start failing.
Krond
If you definitely need to transfer the system over the network, take a look at rsync; it makes it easy to replicate all files over the network.
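A common invocation for a whole-system copy, as a sketch (destination host and path are hypothetical):
rsync -avxHS --numeric-ids / mainuser@192.168.1.2:/mnt/
# -a archive mode, -v verbose, -x don't cross filesystem boundaries,
# -H preserve hard links, -S handle sparse files efficiently
Because of -x you'd repeat the run for each mounted filesystem, and unlike send/receive, rsync carries over neither ZFS properties nor snapshots.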
Krond
zfs list -Ho name | xargs -n1 zfs get -H all | awk -F'\t' '
# columns from "zfs get -H": name, property, value, source
# when the dataset name changes, emit one "zfs create" with its locally set props
shard != $1 && shard != "" {
    output = "zfs create"
    for (param in params) output = output " -o " param "=" params[param]
    print output " " shard
    delete params
    shard = ""
}
$4 ~ /local/ { params[$2] = $3; shard = $1; next }  # collect local properties
$2 ~ /type/  { shard = $1 }   # remember datasets with no local properties
END {
    if (shard != "") {
        output = "zfs create"
        for (param in params) output = output " -o " param "=" params[param]
        print output " " shard
    }
}'
if you need to extract your current fs structure as a shell script.
Anonymous
The question remains how to deal with send/receive toward an 'empty' ZFS fs...
Krond
Well, you can boot from a CD or a flash drive, for example...
Krond
That way the whole new zpool is completely overwritable.
Krond
It's been years since I last used the FreeBSD installer. I just copy the files over, or mount and run make DESTDIR=/there installkernel installworld.
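Roughly, as a sketch (assuming sources in /usr/src and the new pool imported under /mnt):
cd /usr/src
make installkernel DESTDIR=/mnt
make installworld DESTDIR=/mnt
make distribution DESTDIR=/mnt   # populates /etc on a fresh target
This presumes a completed buildworld/buildkernel for the release you want.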
Anonymous
Too advanced for me, I'm afraid...
Anonymous
YT: v=COOAH_-CLws is quite helpful on zfs send/receive for data replication (backup purposes). My understanding of 'replicating' a server is a bit different, though: it means cloning a root pool onto a new machine. I couldn't yet figure out the most straightforward way to do that.
Anonymous
Isn't that the idea of google-cache?? 😉
Anonymous
Crazy: the release notes list 89 bibliographic refs, the majority of them citing "Tanenbaum,A.S."!
Anonymous
https://www.cs.vu.nl/~ast/home/cv.pdf I had the *wrong* impression that he had been living and teaching in Canada for decades...
Anonymous
This article explains the differences between zfs send/receive and rsync. As I understand it, whether ZFS's onboard tools beat rsync depends entirely on data volume and snapshot frequency: https://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/
Anonymous
Let me deliberately misinterpret your answer: 'the place' I had in mind was to put the result into the FreeBSD Handbook (chapter 'ZFS administration').
Eliab/Andi
What's your impression of FreeBSD 12.2 RC3?
mrphyber
What's your impression of FreeBSD 12.2 RC3?
I'm running it on my laptop and everything is fine. I've had only one problem: drm-fbsd12.0-kmod compiled for 12.1 won't run on 12.2, so I had to recompile it after installing the base 12.2 system.
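For anyone hitting the same thing, one way to rebuild it (a sketch, assuming a checked-out ports tree):
cd /usr/ports/graphics/drm-fbsd12.0-kmod
make clean reinstall
A binary pkg only helps once the official package has been rebuilt against 12.2.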
Andriy
In RC3, efifb seems broken.
Anonymous
Hello guys.
Doesn't FreeBSD support the GeForce 210 graphics card?
Eliab/Andi