Use changed nested VMX attribute as trigger to refresh libvirt capability cache

Bug #1830268 reported by Dan Streetman on 2019-05-23
Affects           Importance  Assigned to
libvirt (Ubuntu)  Medium      Unassigned
Xenial            Undecided   Unassigned
Bionic            Undecided   Unassigned
Cosmic            Undecided   Unassigned

Bug Description

[impact]

libvirt caches the 'nested vmx' capability of the host and does not update it even if the host's ability to handle nested vmx changes. With this domcapability missing, no guest is able to start any nested, kvm-accelerated guests. Additionally, since openstack live migration requires matching cpu features, this makes it impossible to migrate guests that do have vmx enabled to hosts where libvirt thinks nested vmx is disabled.

Once the kernel module (kvm_intel) is reloaded with 'nested' enabled, libvirt does not update its domcapabilities cache, even over a libvirtd restart, or even over an entire system reboot. Only certain conditions cause libvirt to update its capabilities cache (possibly libvirt upgrade, or qemu upgrade, or kernel upgrade...I haven't verified any of those yet)

libvirt creates caches for its domcapabilities at /var/cache/libvirt/qemu/capabilities/.
Removing the cache XML files there and restarting libvirtd causes the caches to be recreated with the correct current values.
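That manual workaround can be sketched as a small helper; the directory is the default libvirt path mentioned above, while the function name is mine, not anything libvirt ships:

```shell
#!/bin/sh
# Sketch of the workaround: wipe the cached capability XMLs so libvirtd
# re-probes all qemu binaries on its next start. flush_caps_cache is a
# hypothetical helper name; the argument defaults to the Ubuntu path.
flush_caps_cache() {
    dir="${1:-/var/cache/libvirt/qemu/capabilities}"
    rm -f "$dir"/*.xml
}

# On a real host you would then restart the daemon to rebuild the cache:
# flush_caps_cache && systemctl restart libvirtd
```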

The fix backports the upstream fix:
https://libvirt.org/git/?p=libvirt.git;a=commit;h=b183a753
which makes libvirt always compare the current value against the last stored attribute.

[test case]

check the kvm_intel module nested parameter:
$ cat /sys/module/kvm_intel/parameters/nested
Y

It can be Y or N. Make sure libvirt agrees with the current setting:
$ virsh domcapabilities | grep vmx
      <feature policy='require' name='vmx'/>

if 'nested' is Y, domcapabilities should include a vmx feature line; if 'nested' is N, it should have no output (i.e. vmx not supported in guests).

Then change the kernel nested setting and re-check domcapabilities. Neither restarting libvirtd nor rebooting the entire system updates the cache.

$ virsh domcapabilities | grep vmx
$ cat /sys/module/kvm_intel/parameters/nested
N
$ sudo rmmod kvm_intel
$ sudo modprobe kvm_intel nested=1
$ cat /sys/module/kvm_intel/parameters/nested
Y
$ virsh domcapabilities | grep vmx
$ sudo systemctl restart libvirtd
$ virsh domcapabilities | grep vmx
$

Not only should it work; with libvirt debug logging configured [1], the fix should leave a message like this when it triggers:
  VIR_DEBUG("Outdated capabilities for '%s': kvm kernel nested "
            "value changed from %d",)
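The "does libvirt agree with the kernel?" check from the test case can be scripted. This is a sketch with a hypothetical helper name; it compares captured values rather than querying the host directly, so it also works without a live hypervisor:

```shell
#!/bin/sh
# Sketch: does libvirt's view of vmx match the kernel's nested setting?
# nested_matches_caps is a made-up helper; arg1 is the content of
# /sys/module/kvm_intel/parameters/nested (Y or N), arg2 is the output of
# `virsh domcapabilities | grep vmx` (empty when vmx is unsupported).
nested_matches_caps() {
    nested="$1"
    vmx_line="$2"
    case "$nested" in
        Y) [ -n "$vmx_line" ] ;;   # nested on  -> vmx feature line expected
        N) [ -z "$vmx_line" ] ;;   # nested off -> no vmx line expected
        *) return 2 ;;             # unexpected parameter value
    esac
}

# Typical use on a real host (not exercised here):
# nested_matches_caps "$(cat /sys/module/kvm_intel/parameters/nested)" \
#     "$(virsh domcapabilities | grep vmx)" || echo "capability cache is stale"
```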

Test #2:
- restart libvirtd
- call `virsh domcapabilities`
- repeat the above; later calls should use the cache (faster)
- if it always regenerates the cache (watch for spawned qemu processes and
  new file dates), the detection is wrong
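Test #2 can be made mechanical by snapshotting the cache file timestamps around the repeated call; `caps_snapshot` is a name I made up for this sketch:

```shell
#!/bin/sh
# Sketch for Test #2: capture mtimes of the cached capability XMLs. If the
# snapshot changes across a second `virsh domcapabilities` call, the cache
# was regenerated and the detection is wrong.
caps_snapshot() {
    # %Y = mtime in seconds since epoch, %n = file name (GNU stat)
    stat -c '%Y %n' "$1"/*.xml 2>/dev/null
}

# On a real host (not exercised here):
# before=$(caps_snapshot /var/cache/libvirt/qemu/capabilities)
# virsh domcapabilities >/dev/null
# after=$(caps_snapshot /var/cache/libvirt/qemu/capabilities)
# [ "$before" = "$after" ] && echo "cache reused (good)"
```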

Test #3:
- some arches (e.g. s390x) don't have this attribute, check on one of those how their behavior changes.

[regression potential]

This will make libvirt refresh the capability cache more often. That is a quite expensive task, depending on the number of qemu binaries installed (anywhere from none to all architecture emulators plus the KVM-based ones, ~10); those will all be forked and probed again. The new code adds a rather safe detection, as the nested attribute usually only changes on a reboot or a module reload. So it should be rather safe. The one real regression risk would be a wrong detection that always triggers.
I added Test #2 above to check for that.

[other info]

related RH bugs, though no changes appear to have resulted from either:
https://bugzilla.redhat.com/show_bug.cgi?id=1474874
https://bugzilla.redhat.com/show_bug.cgi?id=1650950

Dan Streetman (ddstreet) on 2019-05-23
description: updated

Hi,
I couldn't let go of the fact that we had qemu QMP reporting VMX as off.
I mean, I have heard of issues (there never was an Ubuntu bug, but people on IRC have run into it) where people needed to regenerate capabilities.
After checking the logs of yesterday I found the trap that you laid for me :-)
When checking QMP we were on a system that really had
  $ cat /sys/module/kvm_intel/parameters/nested
  N
So the bit in the cpuid was off.
But when we made the cross check with the cpuid tool we were on a different system, probably with
  $ cat /sys/module/kvm_intel/parameters/nested
  Y
But the caps cache was not regenerated, hence the tool there reported the bit to be set.

With that red herring put aside, I can let go of my confusion and focus on what is ahead. As discussed on IRC, we might want to look into:
a) verifying the triggers for a reload today properly working (e.g. new qemu binary)
b) consider adding more triggers (maybe: libvirtd restart, module load times, surely: reboot)

Not sure if that will be today or next week.
Until then, manually cleaning /var/cache/libvirt/qemu/capabilities/ gives you a workaround.

Changed in libvirt (Ubuntu):
status: New → Triaged
importance: Undecided → Medium

Currently libvirt tracks these elements to consider refreshing (they are stored in the capability XMLs themselves).

  <qemuctime>1558846766</qemuctime>
  <selfctime>1557980947</selfctime>
  <selfvers>4000000</selfvers>
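These values can be pulled out of a cache XML with a simple extractor; a sketch (the helper name is mine):

```shell
#!/bin/sh
# Sketch: extract one of the tracked numeric fields (qemuctime, selfctime,
# selfvers) from a capability cache XML file.
caps_field() {
    # $1 = element name, $2 = cache XML file
    sed -n "s,.*<$1>\([0-9]*\)</$1>.*,\1,p" "$2"
}

# e.g. on a real host:
# caps_field qemuctime /var/cache/libvirt/qemu/capabilities/<hash>.xml
```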

There are other ctime based caches as well like virQEMUCapsKVMUsable for /dev/kvm.

The checker for the main caps is virQEMUCapsIsValid and so far checks:
- libvirt ctime (binary)
- libvirt version (internal build time value)
- qemu bin ctime (binary)
- do not go further down if on an emulated arch (won't change)
- /dev/kvm got accessible since last caching (DAC)
- /dev/kvm got unavailable since last caching (DAC)
- microcode changed (cpuinfo)
- kernel version changed
- Nesting is now supported

The latter sounds familiar, right?
=> https://libvirt.org/git/?p=libvirt.git;a=commit;h=b183a75319b90d0af5512be513743e1eab950612

That is in 5.0, which means it is already in >=Disco.
Let's check how backportable that is ...
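As an aside, several of the inputs from the check list above can be read from userspace; a sketch collecting the host-side signals (paths are the standard Linux ones, the function name is mine):

```shell
#!/bin/sh
# Sketch: gather the host-side signals that virQEMUCapsIsValid compares
# against the cached values (kernel version, microcode, nested parameter).
host_caps_signals() {
    echo "kernel: $(uname -r)"
    # the microcode line exists on x86 hosts only
    awk -F': ' '/^microcode/ {print "microcode: " $2; exit}' /proc/cpuinfo 2>/dev/null
    # nested parameter: kvm_intel on x86, plain kvm on s390x; may be absent
    for f in /sys/module/kvm_intel/parameters/nested \
             /sys/module/kvm/parameters/nested; do
        [ -r "$f" ] && { echo "nested: $(cat "$f")"; break; }
    done
    return 0
}
```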

Changed in libvirt (Ubuntu):
status: Triaged → Fix Released

There also was the discussion (on this bug) to re-probe on reboots in general.
That was already discussed upstream multiple times, but didn't happen to be implemented so far.
There is some pro (more reliable data) and con (this is really a heavy task) to this.

See discussions on:
=> https://www.redhat.com/archives/libvir-list/2017-October/msg00916.html
=> https://www.redhat.com/archives/libvir-list/2018-January/msg00657.html

So far the outcome always was to add just another minor check for whatever new case was identified instead of really probing all again on each reboot.

For some stuff that was mentioned in IRC discussions:
"Refresh on libvirtd restart" does not seem to be an option at all.
=> https://www.redhat.com/archives/libvir-list/2018-December/msg00722.html
=> https://www.redhat.com/archives/libvir-list/2019-January/msg00061.html
=> https://www.redhat.com/archives/libvir-list/2018-September/msg00662.html

Nor "provide something nicer than rm xml files"
=> https://www.redhat.com/archives/libvir-list/2018-November/msg00614.html

Given all the MDS issues and their predecessors, I wonder if "/proc/cpuinfo flags" would be another good candidate to polish the existing caching mechanism even further going forward. I might send an RFC about that, but will look at the back-portability of the existing checks first.

Bionic and Disco will be easy to backport, Xenial is more interesting ...

The patch noise isn't too bad on Xenial, but there the only thing currently considered is the ctime of qemu.
I wonder if we should backport more of the existing code now that we are touching this topic anyway.

The current status is:

Xenial: qemu-ctime
Bionic: qemu-ctime, libvirt-ctime, /dev/kvm accessible, microcode
Cosmic: qemu-ctime, libvirt-ctime, /dev/kvm accessible, microcode, kernel version
Disco: qemu-ctime, libvirt-ctime, /dev/kvm accessible, microcode, kernel version, native Arch, Nesting

Correction to the above, it turns out Xenial has the "if libvirt is changed" code just at a different place.
=> 0b4211f9 is in >=v1.2.16

The /dev/kvm accessibility check was later (>=Disco) improved to be cached.
But there was no request to backport that to e.g. Bionic, which would have happened if it were important (the change came from IBM). So that caching is probably not too important.

Looking at the (not necessarily complete) list of related changes:

91684829 /dev/kvm check caching
b183a753 KVM Nesting
55e5eb94 Non Native
  88983855 Fix KVM enable check (has some wider code implications)
-- above is disco --
52b7d910 kernel version
-- above is cosmic --
b527589d microcode
-- above is bionic --
  d03de54e bigger rework (using virFileCache)
  731cfd5f minor rework for _virQEMUCapsCachePriv
  56a047a6 minor rework (arg qemuctime)
  7fcf66cf minor rework (location libvirt check)
  a63ef877 bigger rework (args libvirt ctime)
d87df9bd KVM permission change
  c29e6d48 bigger rework (Caps caching in general)
  f2dd7259 move and static virQEMUCapsIsValid
  729aa67d qemu ctime in XML
  68c70118 host model in caps
-- above is xenial --

We see a huge block of reworks which makes a backport to Xenial harder,
and thereby a bigger risk that we should not take for an SRU.
After all, there is UCA Xenial-Queens, which is an official source for newer code matching Bionic.

Given the severity/importance/commonality of those issues/use-cases I'd try to:
- get Nesting to Xenial/Bionic/Cosmic
- get NonNative to Bionic/Cosmic
- get kernelversion to Bionic

While Xenial lacks many of the further checks, as outlined the risk/work seems too high given that no issues in that regard have been reported so far, and there is an, albeit ugly, userspace workaround in place (deleting the cache files, e.g. on boot, since most of the triggering actions require a reboot anyway).

I'll check if that would work as intended after lunch.

NonNative only matters when 88983855 is present; we don't have that in B/C, so we can ignore it as well.

That makes:
- get b183a753 Nesting to Xenial/Bionic/Cosmic
- get 52b7d910 kernelversion to Bionic

summary: - libvirt caches nested vmx capability (in domcapabilities)
+ Use more triggers to refresh libvirt capability cache

Since 18.04 is an LTS, I had already added quite a few patches on top of 4.0 before it was released, to stabilize it.
Due to that the kernelversion detection already is in:
debian/patches/stable/0004-qemu-Refresh-caps-cache-after-booting-a-different-ke.patch

That leaves just nesting to be backported, but with a much better understanding of the overall situation \o/

summary: - Use more triggers to refresh libvirt capability cache
+ Use changed nested VMX attribute as trigger to refresh libvirt
+ capability cache
Changed in libvirt (Ubuntu Cosmic):
status: New → Triaged
Changed in libvirt (Ubuntu Bionic):
status: New → Triaged

Updated the SRU Template with extended Test and regression potential.

description: updated
Changed in libvirt (Ubuntu Xenial):
status: New → Triaged
description: updated

Tested against the PPA, I see the change in VMX picked up correctly:

root@b-wily:~# virsh domcapabilities | grep vmx; ll /var/cache/libvirt/qemu/capabilities/
      <feature policy='require' name='vmx'/>
total 38
drwxr-xr-x 2 root root 5 Jun 3 10:16 ./
drwxr-x--- 3 libvirt-qemu kvm 3 May 22 12:36 ../
-rw------- 1 root root 56642 Jun 3 10:16 926803a9278e445ec919c2b6cbd8c1c449c75b26dcb1686b774314180376c725.xml
-rw------- 1 root root 56642 Jun 3 10:16 ad11a0ad669fe8eb7496efebe5a20cc8014d83e6553a9bb0294e73973977ad05.xml
-rw------- 1 root root 58463 Jun 3 10:16 f11008721aacc79c97e592178e61264d75be551864cd79cc41fe820e31262f27.xml

root@b-wily:~# virsh domcapabilities | grep vmx; ll /var/cache/libvirt/qemu/capabilities/
total 30
drwxr-xr-x 2 root root 5 Jun 3 11:44 ./
drwxr-x--- 3 libvirt-qemu kvm 3 May 22 12:36 ../
-rw------- 1 root root 56628 Jun 3 11:44 926803a9278e445ec919c2b6cbd8c1c449c75b26dcb1686b774314180376c725.xml
-rw------- 1 root root 56642 Jun 3 10:16 ad11a0ad669fe8eb7496efebe5a20cc8014d83e6553a9bb0294e73973977ad05.xml
-rw------- 1 root root 58463 Jun 3 10:16 f11008721aacc79c97e592178e61264d75be551864cd79cc41fe820e31262f27.xml

Also tested the above on cosmic where it is good (uploaded).
But for Xenial we need another round; there it doesn't refresh. We knew there was more churn for Xenial, so maybe we need to work on the code.

I've uploaded Xenial with the others, but will ask to cancel it until this is resolved.

Changed in libvirt (Ubuntu Bionic):
status: Triaged → In Progress
Changed in libvirt (Ubuntu Cosmic):
status: Triaged → In Progress

Hmm, we just don't need the Xenial portion.
As already mentioned during the backport, the code was very different back then.

I can start a VMX defined guest on Xenial:

  <cpu match='exact'>
    <model fallback='allow'>core2duo</model>
    <vendor>Intel</vendor>
    <feature policy='require' name='vmx'/>
  </cpu>

And it becomes -cpu core2duo,+vmx for qemu commandline.
So Xenial is actually "invalid" as the issue described here doesn't apply there.

Changed in libvirt (Ubuntu Xenial):
status: Triaged → Invalid

Hello Dan, or anyone else affected,

Accepted libvirt into cosmic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/libvirt/4.6.0-2ubuntu3.6 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-cosmic to verification-done-cosmic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-cosmic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in libvirt (Ubuntu Cosmic):
status: In Progress → Fix Committed
tags: added: verification-needed verification-needed-cosmic

I wanted to wait until the Bionic upload would show up as well.
But it didn't and I don't want to stall this more than needed.
So verifying cosmic:

root@c:~# virsh domcapabilities | grep vmx; ll /var/cache/libvirt/qemu/capabilities/
      <feature policy='require' name='vmx'/>
total 27
drwxr-xr-x 2 root root 5 Jun 13 10:31 ./
drwxr-x--- 3 libvirt-qemu kvm 3 Jun 13 10:30 ../
-rw------- 1 root root 35567 Jun 13 10:30 926803a9278e445ec919c2b6cbd8c1c449c75b26dcb1686b774314180376c725.xml
-rw------- 1 root root 56905 Jun 13 10:31 ad11a0ad669fe8eb7496efebe5a20cc8014d83e6553a9bb0294e73973977ad05.xml
-rw------- 1 root root 37388 Jun 13 10:30 f11008721aacc79c97e592178e61264d75be551864cd79cc41fe820e31262f27.xml
root@c:~# cat /proc/cpuinfo ^C
root@c:~# cat /sys/module/kvm_intel/parameters/nested
cat: /sys/module/kvm_intel/parameters/nested: No such file or directory
root@c:~# cat /sys/module/kvm_intel/parameters/nested; virsh domcapabilities | grep vmx; ll /var/cache/libvirt/qemu/capabilities/
N
total 27
drwxr-xr-x 2 root root 5 Jun 13 10:32 ./
drwxr-x--- 3 libvirt-qemu kvm 3 Jun 13 10:30 ../
-rw------- 1 root root 35567 Jun 13 10:30 926803a9278e445ec919c2b6cbd8c1c449c75b26dcb1686b774314180376c725.xml
-rw------- 1 root root 56891 Jun 13 10:32 ad11a0ad669fe8eb7496efebe5a20cc8014d83e6553a9bb0294e73973977ad05.xml
-rw------- 1 root root 37388 Jun 13 10:30 f11008721aacc79c97e592178e61264d75be551864cd79cc41fe820e31262f27.xml
root@c:~# cat /sys/module/kvm_intel/parameters/nested; virsh domcapabilities | grep vmx; ll /var/cache/libvirt/qemu/capabilities/
Y
      <feature policy='require' name='vmx'/>
total 27
drwxr-xr-x 2 root root 5 Jun 13 10:32 ./
drwxr-x--- 3 libvirt-qemu kvm 3 Jun 13 10:30 ../
-rw------- 1 root root 35567 Jun 13 10:30 926803a9278e445ec919c2b6cbd8c1c449c75b26dcb1686b774314180376c725.xml
-rw------- 1 root root 56905 Jun 13 10:32 ad11a0ad669fe8eb7496efebe5a20cc8014d83e6553a9bb0294e73973977ad05.xml
-rw------- 1 root root 37388 Jun 13 10:30 f11008721aacc79c97e592178e61264d75be551864cd79cc41fe820e31262f27.xml

The file refreshes on change of the nested attribute.

tags: added: verification-done verification-done-cosmic
removed: verification-needed verification-needed-cosmic
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package libvirt - 4.6.0-2ubuntu3.6

---------------
libvirt (4.6.0-2ubuntu3.6) cosmic; urgency=medium

  * d/p/ubuntu/lp-1830268-refresh-capabilities-on-KVM-nesting.patch: fix
    consideration of VMX flag (LP: #1830268)

 -- Christian Ehrhardt <email address hidden> Tue, 28 May 2019 07:59:48 +0200

Changed in libvirt (Ubuntu Cosmic):
status: Fix Committed → Fix Released

The verification of the Stable Release Update for libvirt has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Hello Dan, or anyone else affected,

Accepted libvirt into bionic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/libvirt/4.0.0-1ubuntu8.11 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested and change the tag from verification-needed-bionic to verification-done-bionic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-bionic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in libvirt (Ubuntu Bionic):
status: In Progress → Fix Committed
tags: added: verification-needed verification-needed-bionic
removed: verification-done

Before the fix, not responding to lost flag

root@b:~# cat /sys/module/kvm_intel/parameters/nested; virsh domcapabilities | grep vmx; ls -ltr /var/cache/libvirt/qemu/capabilities/
N
      <feature policy='require' name='vmx'/>

Installing from proposed:

root@b:~# apt list --upgradable
Listing... Done
gstreamer1.0-plugins-base/bionic-proposed 1.14.4-1ubuntu1.1~ubuntu18.04.1 amd64 [upgradable from: 1.14.1-1ubuntu1~ubuntu18.04.2]
gstreamer1.0-plugins-good/bionic-proposed 1.14.4-1ubuntu1~ubuntu18.04.1 amd64 [upgradable from: 1.14.1-1ubuntu1~ubuntu18.04.1]
gstreamer1.0-x/bionic-proposed 1.14.4-1ubuntu1.1~ubuntu18.04.1 amd64 [upgradable from: 1.14.1-1ubuntu1~ubuntu18.04.2]
libgstreamer-plugins-base1.0-0/bionic-proposed 1.14.4-1ubuntu1.1~ubuntu18.04.1 amd64 [upgradable from: 1.14.1-1ubuntu1~ubuntu18.04.2]
libgstreamer-plugins-good1.0-0/bionic-proposed 1.14.4-1ubuntu1~ubuntu18.04.1 amd64 [upgradable from: 1.14.1-1ubuntu1~ubuntu18.04.1]
libnss-systemd/bionic-proposed 237-3ubuntu10.23 amd64 [upgradable from: 237-3ubuntu10.22]
libpam-systemd/bionic-proposed 237-3ubuntu10.23 amd64 [upgradable from: 237-3ubuntu10.22]
libssl1.1/bionic-proposed 1.1.1-1ubuntu2.1~18.04.3 amd64 [upgradable from: 1.1.1-1ubuntu2.1~18.04.1]
libsystemd0/bionic-proposed 237-3ubuntu10.23 amd64 [upgradable from: 237-3ubuntu10.22]
libudev1/bionic-proposed 237-3ubuntu10.23 amd64 [upgradable from: 237-3ubuntu10.22]
libvirt-clients/bionic-proposed 4.0.0-1ubuntu8.11 amd64 [upgradable from: 4.0.0-1ubuntu8.10]
libvirt-daemon/bionic-proposed 4.0.0-1ubuntu8.11 amd64 [upgradable from: 4.0.0-1ubuntu8.10]
libvirt-daemon-driver-storage-rbd/bionic-proposed 4.0.0-1ubuntu8.11 amd64 [upgradable from: 4.0.0-1ubuntu8.10]
libvirt-daemon-system/bionic-proposed 4.0.0-1ubuntu8.11 amd64 [upgradable from: 4.0.0-1ubuntu8.10]
libvirt0/bionic-proposed 4.0.0-1ubuntu8.11 amd64 [upgradable from: 4.0.0-1ubuntu8.10]
openssl/bionic-proposed 1.1.1-1ubuntu2.1~18.04.3 amd64 [upgradable from: 1.1.1-1ubuntu2.1~18.04.1]
snapd/bionic-proposed 2.39.2+18.04 amd64 [upgradable from: 2.38+18.04]
systemd/bionic-proposed 237-3ubuntu10.23 amd64 [upgradable from: 237-3ubuntu10.22]
systemd-sysv/bionic-proposed 237-3ubuntu10.23 amd64 [upgradable from: 237-3ubuntu10.22]
udev/bionic-proposed 237-3ubuntu10.23 amd64 [upgradable from: 237-3ubuntu10.22]
root@b:~# apt install libvirt-daemon-system
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libvirt-clients libvirt-daemon libvirt-daemon-driver-storage-rbd libvirt0
Suggested packages:
  libvirt-daemon-driver-storage-gluster libvirt-daemon-driver-storage-sheepdog libvirt-daemon-driver-storage-zfs numad radvd auditd systemtap nfs-common zfsutils pm-utils
The following packages will be upgraded:
  libvirt-clients libvirt-daemon libvirt-daemon-driver-storage-rbd libvirt-daemon-system libvirt0
5 upgraded, 0 newly installed, 0 to remove and 15 not upgraded.
Need to get 4114 kB of archives.
After this operation, 12.3 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 libvirt-daemon-system amd64 4.0.0-1ubu...


tags: added: verification-done verification-done-bionic
removed: verification-needed verification-needed-bionic
Łukasz Zemczak (sil2100) wrote :

Thank you for the verification Christian! This looks good. In the SRU description though I also see Test #3 for testing the changes on a s390x machine that didn't seem to be executed (or maybe I missed it somehow?). Seems like something we might want to check. Could you follow up on that?

I beg your pardon for not explicitly mentioning #3.
In fact s390x has it, but on another path.
  /sys/module/kvm/parameters/nested

It works there as well after upgrading to the version from proposed.
If none of the paths is found, it just does nothing (which is fine).

More interesting is that s390x uses 0/1 instead of Y/N, but the code accommodates that as well.
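A sketch of that accommodation in shell terms, probing both paths and accepting both spellings (the helper names are mine, not libvirt's code):

```shell
#!/bin/sh
# Sketch: treat Y/y/1 as "nested enabled", anything else as disabled,
# matching the observation that x86 uses Y/N while s390x uses 1/0.
nested_on() {
    case "$1" in
        Y|y|1) return 0 ;;
        *)     return 1 ;;
    esac
}

# Probe the known parameter locations; x86 uses kvm_intel, s390x plain kvm.
nested_param_path() {
    for f in /sys/module/kvm_intel/parameters/nested \
             /sys/module/kvm/parameters/nested; do
        if [ -r "$f" ]; then echo "$f"; return 0; fi
    done
    return 1   # no nested parameter on this host (fine, nothing to do)
}
```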

So #3 is tested as well (and now documented here).
Thanks for catching that detail Lukasz!

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package libvirt - 4.0.0-1ubuntu8.11

---------------
libvirt (4.0.0-1ubuntu8.11) bionic; urgency=medium

  * d/p/ubuntu/lp-1830268-refresh-capabilities-on-KVM-nesting.patch: fix
    consideration of VMX flag (LP: #1830268)

 -- Christian Ehrhardt <email address hidden> Mon, 27 May 2019 11:52:07 +0200

Changed in libvirt (Ubuntu Bionic):
status: Fix Committed → Fix Released