open-iscsi is slowing down the boot process

Bug #1882986 reported by Zakhar
Affects: open-iscsi (Ubuntu)
Status: Opinion
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

This is not a bug, but rather an "optimisation" request.

(Probably set this as "enhancement request" + Low)

Apparently, the package assumes the user will need some iSCSI mounts for their session, and adds dependencies to the systemd services/targets whose effect is to delay graphical.target until after the network is online.

Ubuntu has done a great job making the OS feel "snappy" from boot; with auto-login it really makes a great difference and gives the feeling of a very quick system.

This assumption of open-iscsi sort of ruins that effort.

As an example, on my PC the graphical target is delayed by 10 more seconds (it was 22 seconds and is now 32). The impression is not as good and the system feels "slow again" (although it is just a feeling!)

Steps to reproduce (you don't even need iSCSI LUNs to do so, just install the package!):
- Start from a clean 20.04, boot up and issue: systemd-analyze
- Now install open-iscsi, reboot and issue again: systemd-analyze

The result will probably show a big impact on graphical.target, although total time does not change much.
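
For a visual comparison of the two boots, the same data can also be rendered as a timeline (systemd-analyze writes the SVG to stdout):

$ systemd-analyze plot > boot.svg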

My usage does not need iSCSI targets for my session.
I have a NAS with iSCSI LUNs, and when I need those mounts, I just start them with a command:

sudo iscsiadm -m node -l

Then GNOME recognises that a new disk has been inserted and auto-mounts it.
This command works whether the service was started or not.

This wrong assumption is easily fixed in my case with this command:

sudo systemctl disable iscsid.socket iscsid.service open-iscsi.service

Then, at the next reboot, the graphical target is snappy again and does not have to wait for the network-online and remote-fs targets.

I don't know what can be done to cope with both situations: those who need an iSCSI target mounted for their session, and those who don't... but I guess the philosophy should now be to assume the user does not need such targets, and not to add dependencies that delay the snappy boot process.

For those who do need those remote filesystems mounted for their session, detailed help on how to enable the iSCSI services at startup should be provided.
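
For reference, undoing the workaround above should simply be the reverse command, followed by a reboot:

$ sudo systemctl enable iscsid.socket iscsid.service open-iscsi.service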

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Zakhar,

Thanks for taking the time to report this bug and help make Ubuntu better. Perhaps this could be caused by other iscsid/open-iscsi service dependencies. Could you provide an example of your systemd-analyze output with and without the services in question enabled?

Thank you!

-rafaeldtinoco

Changed in open-iscsi (Ubuntu):
status: New → Triaged
assignee: nobody → Rafael David Tinoco (rafaeldtinoco)
importance: Undecided → Wishlist
tags: added: server-next
Revision history for this message
Zakhar (alainb06) wrote :

You're welcome.

Indeed, at first I thought it was VirtualBox, which has no .service (just a classic SysV init), but changing that had no effect, and disabling iscsi gave me back my 10 seconds!

systemd-analyze:

summary: without iscsi

Startup finished in 17.404s (firmware) + 3.367s (loader) + 2.386s (kernel) + 23.669s (userspace) = 46.828s
graphical.target reached after 23.656s in userspace

summary: with iscsi

Startup finished in 13.841s (firmware) + 3.360s (loader) + 2.272s (kernel) + 31.785s (userspace) = 51.260s
graphical.target reached after 31.775s in userspace

(And note the "firmware" stage was quicker since this one was a reboot; the first one was a cold boot and had to wait for the spinning disks to spin up!)

The full dumps are attached.
I'm not yet at ease enough with systemd's algorithm to spot nasty dependencies at first sight!

Revision history for this message
Zakhar (alainb06) wrote :

And the other dump (iscsi ENABLED)

Revision history for this message
Christian Ehrhardt (paelzer) wrote :

For clarification, there are two related services:

#1 iscsid - provides the framework but usually would do nothing without being told to do so
   - until needed this is not running (socket activation)
#2 open-iscsi (alias = iscsi) - logs in to default targets
   - would not run unless config has been defined

This is the state since we ensured the slow bits only run as needed in bug 1755858.
That bug might be a good read for some background.
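
A rough sketch of the socket-activation mechanism (illustrative; not necessarily byte-for-byte the unit the package ships):

[Unit]
Description=Open-iSCSI iscsid Socket

[Socket]
# Abstract-namespace socket; this address matches the Listen= line
# visible in `systemctl status iscsid.socket` further down this report
ListenStream=@ISCSIADM_ABSTRACT_NAMESPACE

[Install]
WantedBy=sockets.target

systemd holds the socket open itself and only starts iscsid.service the first time a client (e.g. iscsiadm) connects to it.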

I'd assume you have put such configs onto your system for the NAS devices you would sometimes mount later.
Your log confirms:
  ConditionDirectoryNotEmpty: |/etc/iscsi/nodes succeeded
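
(That log line suggests the unit carries a condition along the lines of "ConditionDirectoryNotEmpty=|/etc/iscsi/nodes"; the leading | marks it as a "triggering" condition, i.e. the unit runs if any one of the |-prefixed conditions holds.)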

That will most likely make #1 take a look at the configs and run; its queries will then wake #2 as well.

In case iSCSI disks are configured, it is correct to wait for them, as they "might" be important for the system, and so far the systemd dependencies can't know what is configured for iscsi.

The question IMHO now is what it needs 10 seconds for in your case.
I'd expect it to load, check the config, find nothing it needs to do at boot time, and be done in 0-2 seconds.

But of the services involved in the logs I can see:
- iscsid.socket
  starts 17:37:06 (socket ready)
  used at 17:37:15
- iscsid consists of two commands:
  startup-checks.sh 17:37:14 - 17:37:14
  /sbin/iscsid 17:37:14 - 17:37:14
- open-iscsi consists of two commands:
   /sbin/iscsiadm -m node --loginall=automatic 17:37:15 - 17:37:15
   activate-storage.sh 17:37:15 - 17:37:15

None of these take a long time, though. Maybe by being present/active they slow something else down in regard to mounting or checking devices.

The `systemd-analyze dump` output you have was already interesting, but it is hard to check for dependencies. Could you add logs of the two cases which include all of the following:
$ systemd-analyze dump
$ systemd-analyze blame
$ systemd-analyze critical-chain
$ systemctl status iscsid.socket iscsid.service open-iscsi.service

Maybe we can spot something else slowing down - e.g. another service that increases in execution time.
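
(Note: critical-chain also accepts a unit argument if a specific target needs inspecting, e.g.:

$ systemd-analyze critical-chain network-online.target

which prints the time-critical chain leading to that unit instead of to the default target.)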

Revision history for this message
Zakhar (alainb06) wrote :

Many thanks for the explanations Christian!

Here are some more clarification on my setup.

Indeed, I have done a "discovery" for my NAS, and I have a node configured.
The other element is that the NAS is generally off, which means that if open-iscsi tried to communicate with the NAS at startup, it could time out.

What I don't yet fully understand with iscsi is that, when I need my NAS's LUN, since I have "discovered" it once and for all, I just run:

$ sudo iscsiadm -m node -l

First, this works with or without the 3 services at startup... I'm not sure it should, and then what is the use of those services (at least in my case!)

But also, unlike in 16.04, the command line does NOT return... which does not prevent the remote mount from working perfectly fine.

When I'm done with the mount, I just unmount it, press CTRL-C on the command line (which was not necessary in 16.04), and do:

$ sudo iscsiadm -m node -u

What I also don't get from reading the man page is the difference with what you explain:

$ sudo iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2000-01.com.synology:diskstation.blocks, portal: 192.168.0.100,3260] (multiple)

And the command you quote from open-iscsi says:

$ sudo iscsiadm -m node --loginall=automatic
iscsiadm: No records found

Is it because my nodes configuration has a "manual" somewhere:

$ sudo cat /etc/iscsi/nodes/iqn.2000-01.com.synology:diskstation.blocks/192.168.0.100,3260,0/default
# BEGIN RECORD 2.0-874
node.name = iqn.2000-01.com.synology:diskstation.blocks
node.tpgt = 0
node.startup = manual
....

I didn't knowingly put it there; it is apparently the default value when issuing the "discovery" command:

$ sudo iscsiadm --mode discovery --op update --type sendtargets --portal 192.168.0.100
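
(If I actually wanted boot-time login, I suppose the record could be flipped with the same update op, along the lines of:

$ sudo iscsiadm -m node -T iqn.2000-01.com.synology:diskstation.blocks -p 192.168.0.100 --op=update -n node.startup -v automatic

but "manual" is in fact what I want here.)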

Now, from your explanations, I tried with only the two iscsid parts (the socket and the service, without open-iscsi) and I have the same behaviour.

I guess my thinking was right. The logic you explain is that iscsi believes nodes are needed for the startup of the machine (when it finds some) and then waits for the network to come online (at least), and possibly more, to ping the nodes (?)

I don't think iscsi by itself takes a lot of time, or even that there is a timeout with my NAS being powered off; it is just that, because you need to wait for the network, the whole "graphical" process is delayed.

Two proofs of that.
I have a FUSE mount of my own (1fichierfs: https://gitlab.com/BylonAkila/astreamfs) that runs at session start, since you want to run FUSE mounts as the user, not as root.
For 20.04, and also for Raspberry Pi OS which does the same "trick", I have now introduced an optional "wait for network" feature in the mount itself.

Here is what I get on the log
[1fichierfs 0.000] NOTICE: started: Monday 15 June 2020 at 22:22:36
[1fichierfs 0.000] INFO: successfuly parsed arguments.
[1fichierfs 0.000] INFO: log level is 7.
[1fichierfs 0.000] INFO: user_agent=1fichierfs/1.7.1.1
[1fichierfs 0.008] INFO: <<< API(in) (iReq:1) folder/ls.cgi POST={"folder_id":0,"files":1} name=/
[1fichierfs 8.071] NOTICE: Waited 8 seconds for network at startup.

You see, it said it waited 8 seconds, after when "programs a...

Read more...

Revision history for this message
Zakhar (alainb06) wrote :

And the graph WITH iscsi.

(EDIT of the previous post, where some words were missing:)

In 16.04 the command

$ sudo iscsiadm -m node -l

was returning to the shell prompt.

In 20.04, the same command does NOT return to the shell prompt, but the mount seems to work anyway.

Revision history for this message
Zakhar (alainb06) wrote :

And the 4 files as expected with self-explanatory names.

Revision history for this message
Zakhar (alainb06) wrote :
Revision history for this message
Zakhar (alainb06) wrote :
Revision history for this message
Zakhar (alainb06) wrote :
Revision history for this message
Zakhar (alainb06) wrote :

I am not sure this helps, because apparently "critical-chain" starts only AFTER graphical.target... and we wanted to know why graphical.target itself is delayed.

Also you asked a very relevant question:

"How could iscsi services know whether the user actually needs the mounts or not?"

My answer would be: it can't know! (or it would be overkill, like inspecting /etc/fstab and many more places where mounts are needed).

But as my first post said, the "philosophy" seems now rather "not to wait", and give instructions for those who need to wait.

That being said, maybe most users need iscsi mounts to start their machines, and on the contrary it would hurt most users NOT to do as it is done here, which is to wait for network-online and delay everything by 10 seconds.

A question of implementation choice, probably, and of coherence with choices made in other parts of the distribution!

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Zakhar,

For the automatic logins, I suggest you read the following documentation:

https://ubuntu.com/server/docs/service-iscsi

I created that page to explain the need to register iscsi "interfaces" in the iscsi daemon, and all the commands needed for that.

$ sudo iscsiadm -m node --loginall=automatic
iscsiadm: No records found

This will only work if you have the interfaces ready and configured in the iscsi daemon (follow the documentation example).

The way I see it, if you follow the documentation I have provided, the only difference will be that you don't want auto-login to be set. You can either change iscsid.conf, setting automatic login to manual, BEFORE the discovery, or you can update already discovered nodes/targets with:

$ sudo iscsiadm -m node --op=update -n node.conn[0].startup -v manual
$ sudo iscsiadm -m node --op=update -n node.startup -v manual

This will allow you to log in and log out manually and yet let the daemons start with no waiting time.
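
(You can then print the record again to confirm both startup settings now say "manual", e.g.:

$ sudo iscsiadm -m node -T <target-iqn> -p <portal>

substituting your own target name and portal.)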

Since it seems likely to me that this is a local configuration
problem, rather than a bug in Ubuntu, I am marking this bug as
'Incomplete'.

However, if you believe that this is really a bug in Ubuntu, then we
would be grateful if you would provide a more complete description of
the problem with steps to reproduce, explain why you believe this is a
bug in Ubuntu rather than a problem specific to your system, and then
change the bug status back to "New".

For local configuration issues, you can find assistance here:
http://www.ubuntu.com/support/community

Changed in open-iscsi (Ubuntu):
status: Triaged → Invalid
importance: Wishlist → Undecided
assignee: Rafael David Tinoco (rafaeldtinoco) → nobody
Revision history for this message
Zakhar (alainb06) wrote :

Dear Rafael,

I would leave it as it is.

It is not yet clear to me why this is delaying the whole startup, but I agree with you: it is probably not worth investing more time since the "workaround" is fine and simple.

What is clear is that iscsid needs "network-online", as per the directive in /lib/systemd/system/iscsid.service, which says:

After=network.target network-online.target

It also says:

Before=remote-fs-pre.target

because indeed it must complete before the LUNs' remote filesystems are mounted.

But you also have the same kind of directive in Openvpn-client: /lib/systemd/system/openvpn-client@.service

After=network-online.target

... and openvpn by itself does NOT delay the whole startup. But it also, obviously, does not need ordering against remote-fs.

So the link I am missing is what declares, somewhere, that something needs iscsid or its successors like remote-fs, with the result that gdm + plymouth-quit-wait are delayed until AFTER iscsid, and subsequently the whole user session is delayed.

As for my configuration, it is much better with my "workaround" anyway.

Indeed, I have fixed the "hang up" issue. It was PEBCAK... when I "discovered" the LUN, it got 2 addresses, IPv4 and IPv6.

$ sudo iscsiadm -m node -l

hit both targets, so the IPv6 login succeeded and the second target (IPv4) couldn't work because the LUN was already in use. So iscsi was "working as designed". I removed one of the targets and now all is fine.

I have noticed that even though the service is not started at machine boot, issuing the command "manually" starts the service, as shown by the systemd status of the services.

So that's a much better configuration for me. Even if it were not delaying the whole process by 10 seconds, it is better NOT to load services you don't systematically need, and to load them only "on demand".

This interesting discussion made me realise that those 3 bits at startup are needed ONLY if iscsi mounts are necessary at some point in the boot process. Otherwise they just consume time at startup (a lot in my case) and memory, and sit idle.

So maybe an idea (not sure it can be done) would be to inspect whether there are "automatic" nodes (mine is "manual") and start the service only then. But I am not sure systemd supports "variable" After/Before/Want clauses, because that could be determined only once the inspection of the nodes is done. What I mean is that if determining there is no "automatic" node were enough, you wouldn't need the clause After=network-online.target. It might also not be worth the time and investment, since my "workaround" is really as simple as one command!
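
To sketch that idea (purely illustrative, and assuming a systemd new enough for ExecCondition=; this is not something the package ships), a drop-in could make the login service a no-op when no node is automatic:

# /etc/systemd/system/open-iscsi.service.d/only-automatic.conf (hypothetical)
[Service]
# Skip startup unless at least one node record requests automatic login;
# a non-zero exit from ExecCondition marks the unit as skipped, not failed.
ExecCondition=/bin/sh -c 'grep -rqs "^node.startup = automatic" /etc/iscsi/nodes'

The caveat is the one above: the After=network-online.target ordering would still sit in the dependency graph, so this alone might not remove the wait.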

As a summary, and to be logical:

- if people need iscsi at startup, what is done here is what must be done anyway. You have no other solution but to wait for network-online and then for the iscsi mounts (clause Before=remote-fs).

- if other people like me only need it "on demand", they have a "workaround" with my report... and they are welcome to spend more time investigating why this delays the startup so badly: I don't have the "systemd skills" myself to spot it!

As for my own use, the "workaround" is enough and better, and I will reapply it if iscsi is updated and re-enables ...

Read more...

Revision history for this message
Zakhar (alainb06) wrote :

I did some more investigations anyway, and discovered:

"It is not a bug, it is a feature"!

__________________
Steps to reproduce:
==================
- In VirtualBox, start a new test machine
- Install a fresh 20.04 (a minimal install is enough; I did it in my 20 GB RAM disk, it's faster!)
- sudo apt update && sudo apt upgrade
- Stop the VM

Now, in the network configuration of your VM, which defaults to NAT, go to Advanced and "unplug" the cable. That will create (don't ask me why!) a 7.5-second delay on network start, simulating almost what I have on my real network.

- Start the VM
- Look at the systemd-analyze plot (file attached)

- "Replug" the cable (you can do it without restarting)
- Install open-iscsi
- Create a node (discover, or what I did is copy my nodes configuration to the VM)
- Stop the VM
- "Unplug" the cable
- Start again
- Look again at systemd-analyze plot (file attached)

What you see is that in the first case "NetworkManager-wait-online" and "plymouth-quit-wait" clearly overlap, allowing a lot of the session services to start without waiting for the network.

On the second graph you see that they do NOT overlap: plymouth-quit-wait waits until the end of the 7.5-second timeout to kick in, along with the rest of the session services.

The total difference is not 7.5 seconds in this case because, due to VirtualBox, the boot benefits from the 7.5-second wait to launch some VM-related services. So the end difference is rather about 3 or 4 seconds for no iscsi mounts at all (since none are automatic).

So I guess there is indeed a relation, which I don't see, whereby having iscsi installed WITH some nodes (with no nodes at all, systemd does not even try to start iscsi) instructs Plymouth to run only after some successor of iscsid.

I guess this is "working as designed", because the expectation is that since you have some nodes (although none are marked "automatic"!) they are supposed to be needed by the user session, so there must be an instruction somewhere to "wait"...

This assumption is wrong in my case, so I don't need the "working as designed" behaviour. Since there is no "fancy configuration" to tell iscsi that it made the wrong assumption, I guess my workaround is also the simplest way to tell it!

QED

Revision history for this message
Zakhar (alainb06) wrote :

(The previous attachment, as its name said, was iscsi ENABLED; sorry for the confusion, but you would have caught that from the name!)

Revision history for this message
Zakhar (alainb06) wrote :

As explained by Christian in #4, I did further tests:

- Disabled only open-iscsi
- To be sure, I moved out of the way what was in /etc/iscsi/nodes, which is now empty
- To be sure, did the same with /etc/iscsi/send_targets
- To be extra sure, even removed those directories

And when I boot my machine, iscsid.service is still starting although it is supposed to be "socket activated".

I also commented out these 2 lines in iscsid.service:

#[Install]
# WantedBy=sysinit.target

because, according to http://0pointer.de/blog/projects/socket-activation.html, that clause also unconditionally starts iscsid.service.
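
(Side note: edits under /lib/systemd/system are overwritten by package upgrades; the usual way to persist such a change is an override in /etc, e.g.:

$ sudo systemctl edit --full iscsid.service

which creates a copy under /etc/systemd/system that shadows the packaged unit.)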

Here is the result after all that:
$ systemctl status open-iscsi.service iscsid.service iscsid.socket
● open-iscsi.service - Login to default iSCSI targets
     Loaded: loaded (/lib/systemd/system/open-iscsi.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:iscsiadm(8)
             man:iscsid(8)

● iscsid.service - iSCSI initiator daemon (iscsid)
     Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2020-06-20 12:13:55 CEST; 16s ago
TriggeredBy: ● iscsid.socket
       Docs: man:iscsid(8)
    Process: 1430 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
    Process: 1437 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
   Main PID: 1440 (iscsid)
      Tasks: 2 (limit: 38305)
     Memory: 3.6M
     CGroup: /system.slice/iscsid.service
             ├─1439 /sbin/iscsid
             └─1440 /sbin/iscsid

juin 20 12:13:55 alain-HTPC systemd[1]: Starting iSCSI initiator daemon (iscsid)...
juin 20 12:13:55 alain-HTPC iscsid[1437]: iSCSI logger with pid=1439 started!
juin 20 12:13:55 alain-HTPC systemd[1]: iscsid.service: Failed to parse PID from file /run/iscsid.pid: Invalid argument
juin 20 12:13:55 alain-HTPC systemd[1]: Started iSCSI initiator daemon (iscsid).
juin 20 12:13:56 alain-HTPC iscsid[1439]: iSCSI daemon with pid=1440 started!

● iscsid.socket - Open-iSCSI iscsid Socket
     Loaded: loaded (/lib/systemd/system/iscsid.socket; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2020-06-20 12:13:46 CEST; 25s ago
   Triggers: ● iscsid.service
       Docs: man:iscsid(8)
             man:iscsiadm(8)
     Listen: @ISCSIADM_ABSTRACT_NAMESPACE (Stream)
     CGroup: /system.slice/iscsid.socket

juin 20 12:13:46 alain-HTPC systemd[1]: Listening on Open-iSCSI iscsid Socket.

And (as root)
# tree /etc/iscsi/
/etc/iscsi/
├── initiatorname.iscsi
└── iscsid.conf

0 directories, 2 files

According to Christian's explanation in #4:

- open-iscsi now cannot trigger the socket, since it has been disabled (see above), and there are also no nodes or send_targets any more

- iscsid is still started, although the "database" (in fact the directory tree under /etc/iscsi) is completely empty and the [Install] clause has been removed from the service.

Where does this come from?

Is it a bug, or some other "traces" I might have in my configuration that make iscsid start?

Changed in open-iscsi (Ubuntu):
status: Invalid → New
Revision history for this message
Zakhar (alainb06) wrote :
Revision history for this message
Zakhar (alainb06) wrote :

(Please disregard the previous post; the "targets database" was corrupted)

-------------------
Steps to reproduce:
-------------------

- With VirtualBox running on top of Ubuntu Desktop 20.04, create a new virtual machine
- Install Ubuntu Desktop 20.04 on the new guest (minimal install is OK)
- sudo apt update
- sudo apt upgrade
- sudo apt install open-iscsi
- shutdown the guest
- Unplug the network cable on the network interface of your guest (this simulates a network-ready delay)
- Start the guest
- systemd-analyze

I did that 3 times to have an average:

Graphical / Total
31.528 / 33.155
31.471 / 33.151
31.490 / 33.150

- Now disable the 3 iscsi services/sockets:
- sudo systemctl disable iscsid.socket iscsid.service open-iscsi.service
- Shutdown again and repeat the 3 measures

Graphical / Total
25.516 / 27.153
25.512 / 27.139
25.526 / 27.157

So I guess there is a regression of the fix from bug #1755858, because here we run with an empty database, where the "optimisation" should have kicked in and prevented the "slow bits" from loading (as described in bug #1755858).

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Hello again @Zakhar,

It could be, I'll verify.

There is also something else:

https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1877617

We can now configure auto-scan not to scan LUNs when the daemon starts (per the OpenStack team's request).

So with that, and your feedback (pointing to LP: #1755858), I'm taking this under my assignment as part of this week's work (since I'm syncing the new upstream release in Debian and merging it into Ubuntu 20.10 soon).

Will get back to you soon..

Changed in open-iscsi (Ubuntu):
status: New → Confirmed
importance: Undecided → Medium
assignee: nobody → Rafael David Tinoco (rafaeldtinoco)
Revision history for this message
Zakhar (alainb06) wrote :

Thanks a lot Rafael.

Unlike bug #1877617, I don't have a fix to propose for the regression on LP: #1755858, sorry!

Disabling services is a workaround for me... but definitely not a fix!

My initial "enhancement" report was a little bit more specific, and I'm not sure it will be fixed.

iscsi services work normally in 2 situations:
- you have some LUNs you need mounted at startup (in this case you HAVE to wait anyway)
- you have no nodes at all... that's where LP: #1755858 should have fixed things and not triggered the "slow bits".

I am in a third situation:
- I have some nodes, but none automatic (i.e. not needed at startup). Would the new auto-scan help (I'm not sure I completely understand what it does!)?

I am in the exact same situation with my NFS mounts (also on my NAS): they are defined in /etc/fstab, but with the options 'noauto' and 'user', meaning they are NOT needed at startup and the user can mount them later.

With that config in /etc/fstab, NFS runs its service (rpcbind) without hurting boot performance.

As soon as I declare one of the NFS mounts as 'auto' instead, I observe the same delay at boot, but that's normal since we now expect some NFS mounts to be there at startup.
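
For illustration, such an on-demand entry looks like this (the share path is hypothetical):

# NFS share mounted on demand by the user, never at boot
192.168.0.100:/volume1/share  /mnt/nas  nfs  noauto,user  0  0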

Ideally, the same would suit me for iscsi!

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

For anyone interested,

I'm reworking the open-iscsi package after my upstream merge to Debian:

https://salsa.debian.org/linux-blocks-team/open-iscsi/-/merge_requests/4

And will address this in this new version.

Revision history for this message
Zakhar (alainb06) wrote :

Thanks for the heads up Rafael.

Does this mean the fix is going to be in a new version, and hence we won't have it for 20.04 due to Ubuntu's "freeze" policy, but it could be in 20.10 or any subsequent version?

[I have my "workaround" anyway, and it works fine for my own use cases]

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Hello Zakhar,

I have the merge under review in

https://code.launchpad.net/~rafaeldtinoco/ubuntu/+source/open-iscsi/+git/open-iscsi/+merge/389234

but I'm afraid it contains the same patches as we had for LP: #1755858:

commit ca21418
Author: Rafael David Tinoco <email address hidden>
Date: Wed Aug 12 21:19:36 2020

    * make iscsid socket-activated to only activate it as needed
      - debian/iscsid.socket: systemd socket file for iscsid
      - debian/open-iscsi.service: do not start or check iscsid.service
      - debian/rules: install and enable iscsid.socket
      - debian/patches/iscid-conf-use-systemd.socket-patch: default to the socket
      - debian/open-iscsi.postinst:
        - run restart logic only if service is running on upgrade
        - drop no longer reachable upgrade path that affects iscsid
        - disable iscsid.service on upgrade
        - handle iscsid.socket to be started if the service is not running yet
      - d/iscsi-disk.rules: Add a udev rule so that iscsid.service will be
        run when udev disks are attached.
      - d/iscsid.service: Remove ExecStop= directive.
      - debian/tests/install: fix tests to work with socket activation

      Dropped:
      * make iscsid socket-activated to only activate it as needed
        - debian/patches/iscid-conf-use-systemd.socket-patch: default to the socket
        - debian/open-iscsi.postinst:
          - drop no longer reachable upgrade path that affects iscsid
        - d/iscsi-disk.rules: Add a udev rule so that iscsid.service will be
          run when udev disks are attached.

The dropped part is because Debian already has that now. The patches that make "iscsid socket-activated" are still the same. The idea is that iscsid is only activated when needed. In my case, using iscsi disks, I have:

(k)rafaeldtinoco@iscsiubu:~$ systemctl is-enabled open-iscsi.service
enabled
(k)rafaeldtinoco@iscsiubu:~$ systemctl is-enabled iscsid.service
disabled
(k)rafaeldtinoco@iscsiubu:~$ systemctl is-enabled iscsid.socket
enabled

so the open-iscsi.service will inevitably enable iscsid.service (through its socket). If I do:

(k)rafaeldtinoco@iscsiubu:~$ systemctl disable --now open-iscsi.service
Synchronizing state of open-iscsi.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable open-iscsi
Removed /etc/systemd/system/iscsi.service.
Removed /etc/systemd/system/sysinit.target.wants/open-iscsi.service.

(k)rafaeldtinoco@iscsiubu:~$ systemctl disable --now iscsid.service
Synchronizing state of iscsid.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable iscsid
Warning: Stopping iscsid.service, but it can still be activated by:
  iscsid.socket

and reboot...

(k)rafaeldtinoco@iscsiubu:~$ systemctl status iscsid.service
● iscsid.service - iSCSI initiator daemon (iscsid)
     Loaded: loaded (/lib/systemd/system/iscsid.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
TriggeredBy: ● iscsid.socket
       Docs: man:iscsid(8)

(k)rafaeldtinoco@iscsiubu:~$ systemctl status open-iscsi.service
● open-iscsi.service - L...

Read more...

Changed in open-iscsi (Ubuntu):
status: Confirmed → Opinion
tags: removed: server-next
Changed in open-iscsi (Ubuntu):
assignee: Rafael David Tinoco (rafaeldtinoco) → nobody
importance: Medium → Undecided