[ale] Lab Workstation Mystery

Todor Fassl fassl.tod at gmail.com
Tue Apr 26 13:09:32 EDT 2016


I did a fresh install of Ubuntu 15.10. It starts a user systemd and the 
ibus processes, but it kills them off when you log out. I then 
configured LDAP lookups for user names and NFS-mounted home directories. 
It still kills off all the processes when a user logs out. So the 
leftover processes are not a bug in stock Ubuntu, and the LDAP/NFS setup 
alone doesn't trigger them either.
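
For anyone who wants to compare notes, this is roughly how I've been 
checking a box; someuser is a placeholder, and KillUserProcesses is the 
logind setting that governs whether user processes survive logout:

$ grep -i KillUserProcesses /etc/systemd/logind.conf
$ loginctl show-user --property=Linger someuser
$ ps -u someuser -o pid,ppid,comm,args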

I do my installs via FAI (Fully Automatic Installation). Figuring out 
what I'm doing during an install that causes this behaviour is going to 
be a huge pain.
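
My plan for narrowing it down is to diff a vanilla install against an 
FAI-built box, starting with the package lists (file names are just what 
I'd call them):

$ dpkg -l | awk '/^ii/ {print $2}' | sort > pkgs-$(hostname).txt
$ diff pkgs-vanilla.txt pkgs-fai.txt

and the same comparison for /etc/systemd/logind.conf from each box.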



On 04/25/2016 07:42 AM, DJ-Pfulio wrote:
> Just to clarify, not all Ubuntu systems work that way.  On a 14.04 box:
>
> $ psg systemd
> root     31358     1  0 Apr16 ?   00:00:01 /lib/systemd/systemd-udevd --daemon
> root     31427     1  0 Apr16 ?   00:00:00 /lib/systemd/systemd-logind
>
> Zero ibus stuff. Way too many dbus things.
> message+   810     1  0 Apr10 ?   00:00:01 dbus-daemon --system --fork
>
> That's all.
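>
> (psg is just a ps-plus-grep alias, something like:
>
> $ alias psg='ps -ef | grep -i'
>
> in case anyone wants to reproduce the listing above.)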
> I only run LTS, so I don't have any 15.xx to check. I need to install a 16.04
> box to play with this week to see what they've done this time. I heard they
> finally removed the external Amazon data transfers by default.
>
> On 04/25/2016 07:25 AM, Jim Kinney wrote:
>> I checked my CentOS 7 and Fedora 23 systems. They don't spawn off user-owned
>> systemd processes. In fact, they don't fork at all; the root-owned systemd
>> processes handle the work directly.
>>
>> I find it very odd that Ubuntu and CentOS have such different systemd
>> behaviour.
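>>
>> A quick way to check on any box (the bracketed pattern is just to keep grep
>> from matching itself):
>>
>> $ ps -eo user,pid,args | grep '[s]ystemd'
>>
>> On the CentOS boxes, everything that comes back is root-owned.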
>>
>> On Apr 24, 2016 11:48 AM, "Todor Fassl" <fassl.tod at gmail.com> wrote:
>>
>>      I'm at home this morning on an Ubuntu 15.10 system with a local home
>>      directory and no autofs. ps shows that all 4 of those processes are
>>      running for me -- systemd, (sd-pam), ibus-daemon, and ibus-dconf. BTW, I
>>      googled for sd-pam and it looks like it is a fork/rename of the systemd
>>      process intended to destroy the session, i.e., sd-pam means "session
>>      destroy pam". Apparently, when you fork and rename a process, you can
>>      give it any name you like, including one with parens around it, but the
>>      parens don't mean anything special.
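>>
>>      You can see the renamed process directly; something like this, where
>>      1234 stands in for whatever pid ps reports on your box:
>>
>>      $ ps -u $USER -o pid,comm | grep sd-pam
>>       1234 (sd-pam)
>>      $ cat /proc/1234/comm
>>      (sd-pam)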
>>
>>      I suspect Ubuntu starts a per-user systemd process. The one thing that
>>      might be non-standard for my users is that the default desktop is GNOME,
>>      not Unity. I'll have to experiment with that. Another thing I'll have to
>>      try is uninstalling the screen reader. I've mentioned before that I am
>>      blind. You can press Alt+Super+s to start the screen reader at the
>>      lightdm login screen. I don't *think* that requires a special process to
>>      run unless you actually use the hotkey. Most of my end-users would not
>>      be firing up the screen reader, so I doubt it has anything to do with
>>      that.
>>
>>      But I'll have to do a straight-up install of Ubuntu, log in to Unity as
>>      a local user, and then log out. The log-out part might be tricky; I
>>      might have to get sighted assistance for that. Oh, as I write this, it
>>      occurs to me that there is probably a hotkey in Unity to log out. So:
>>      set up a plain vanilla system, log in, log out, and see if those
>>      processes hang around.
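>>
>>      The post-logout check itself can be done from a text console or over
>>      ssh, so that part at least doesn't need sighted help (testuser is a
>>      placeholder):
>>
>>      $ loginctl list-sessions
>>      $ ps -u testuser -o pid,ppid,comm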
>>
>>      On 04/23/2016 10:43 AM, Jim Kinney wrote:
>>>
>>>      That is odd. I have systemd machines with automount and I don't see an
>>>      individual systemd process per user. On my CentOS 7 workstations, I
>>>      have only the single systemd process itself plus others named like
>>>      systemd-udevd, etc., and all are root-owned.
>>>
>>>      On Apr 23, 2016 11:36 AM, "Todor Fassl" <fassl.tod at gmail.com> wrote:
>>>
>>>
>>>          The first thing I did was add that option to the NFS mount in the
>>>          autofs config. I thought it didn't work, but back then I didn't
>>>          have as good a handle on the problem as I do now. I found bug
>>>          reports about the default timeout for autofs not working. The bug
>>>          reports said the timeout worked if you set it explicitly for each
>>>          share in the autofs config files. But that turned out to be another
>>>          red herring.
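>>>
>>>          For reference, the explicit timeout is set per map in auto.master;
>>>          something like this, with the server name and paths made up:
>>>
>>>          # /etc/auto.master
>>>          /home  /etc/auto.home  --timeout=60
>>>
>>>          # /etc/auto.home
>>>          *  -fstype=nfs,rw  fileserver:/export/home/&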
>>>
>>>          I am pretty sure this is a systemd problem. I just did an
>>>          experiment -- I killed off the processes left over after a user
>>>          logged out on 3 different workstations, but I did not unmount their
>>>          home directories. Each of the 3 had the same 4 processes running:
>>>          systemd, (sd-pam), ibus-daemon, and ibus-dconf. All I did was kill
>>>          off those 4 processes, and after the usual timeout the automounter
>>>          unmounted their home directory. So I am about as sure as I can be
>>>          that this is not an automounter or NFS problem. It's systemd not
>>>          killing off those processes when a user logs out.
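>>>
>>>          Roughly what I ran on each box, with someuser standing in for the
>>>          real login:
>>>
>>>          $ sudo pkill -u someuser     # kill the leftover processes
>>>          $ mount | grep someuser      # empty once the autofs timeout expires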
>>>
>>>          Three days -- so far so good.
>>>
>>>          On 04/22/2016 11:27 AM, Scott Plante wrote:
>>>>          This isn't a solution to the underlying problem, but you might want
>>>>          to consider the "soft" option for the NFS mount. By default, NFS is
>>>>          designed to act like a physical disk in the sense that once the user
>>>>          initiates a write, it will block at that spot until the write
>>>>          completes. This is great if you have a NAS unit in the next rack slot
>>>>          from your database server. However, if you don't need quite that
>>>>          level of write assurance, the "soft" option acts more like a typical
>>>>          remote network share. If a problem occurs, the writer will just get
>>>>          an I/O error and be forced to deal with it. You won't get the kind of
>>>>          system hanging you experience with hard mounts. If you're just saving
>>>>          documents and doing that kind of basic file I/O this is perfect.
>>>>          You're mounting home directories, so you're somewhere in between, but
>>>>          depending on what your users are actually doing, soft mounts may be
>>>>          for you. Again, this doesn't explain the whole re-mounting read-only
>>>>          behavior but it may still be helpful for you to look into.
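>>>>
>>>>          A soft mount in fstab would look something like this (server and
>>>>          paths invented; timeo is in tenths of a second):
>>>>
>>>>          fileserver:/export/home  /home  nfs  soft,timeo=100,retrans=3  0  0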
>>>>
>>>>          Scott
>>>>

-- 
Todd

