I had the same issue and found some information that might be of interest. If you have ``ionice`` on your system, ``updatedb.mlocate`` will use it. But as a lot of people noticed, this doesn't change anything, which means that ``ionice`` itself isn't working — a second issue. Digging into this last issue, it turns out that ``ionice -c3`` (idle class) only has an effect if the I/O scheduler of that device is set to ``cfq``. You can check the current value with:

    $ cat /sys/block/sda/queue/scheduler   ## of course replace "sda" with your device's id

(More info: http://serverfault.com/questions/485549/ionice-idle-is-ignored )

And you can change it at runtime with:

    $ echo "cfq" > /sys/block/sda/queue/scheduler   ## you need to be root for that

(More info: http://stackoverflow.com/questions/1009577/selecting-a-linux-i-o-scheduler )

My system actually remains laggy even with this, but it seems better. Any feedback is welcome.

Another important factor seems to be swap usage, if the swap is on the same device. For some reason, even though I have plenty of free RAM, my swap usage rockets up when launching ``updatedb.mlocate`` (it goes from 0 to 2.5G of swap even though my RSS is only 4.9G of the 8G available). To check whether it's your swap competing with ``updatedb.mlocate``, make sure you have enough free RAM to hold all your current swap, then turn off swap:

    $ swapoff -a

With this last option, my system is almost totally lag-free when running ``updatedb.mlocate``. To be noted: my RSS does not go up, so the swapping that usually occurs is a decision of the kernel that here happens to be a bad one. Even though this solves my issue, it is not a viable solution, as turning off swap is not benign. You might want to fiddle with the kernel's ``swappiness`` to hint it not to swap as much (More info: http://askubuntu.com/questions/103915/how-do-i-configure-swappiness ), but even at swappiness=0, this doesn't change the kernel's behavior with regard to ``updatedb.mlocate``.
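As a side note, the scheduler file lists every available scheduler and marks the active one with brackets (e.g. ``noop deadline [cfq]``). A small sketch to extract just the active one — the ``sed`` expression is mine, not part of any tool:

```shell
# /sys/block/<dev>/queue/scheduler prints all schedulers, the active one
# in brackets, e.g. "noop deadline [cfq]". Extract the bracketed word.
line="noop deadline [cfq]"   # stands in for: cat /sys/block/sda/queue/scheduler
active=$(printf '%s\n' "$line" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$active"               # prints: cfq
```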
    $ swapoff -a ; sysctl vm.swappiness=0 ; swapon -a   ## the command I used to test

Looking at memory consumption, ``updatedb.mlocate`` uses very little of it and remains stable. Could it be related to the filesystem? Mine is ext4 for the system part, but the biggest part is on Btrfs. I would safely bet this is what is happening: the filesystem driver is looking everywhere and asks for more buffers to store all these paths, but this is not the right decision here. (This is what makes, for instance, a first ``find /`` noticeably faster the second time you launch it, but here we surely won't benefit from this because ``updatedb.mlocate`` runs only once a day.)

Finally, a good way to help updatedb do its job quickly is probably to limit the folders it will scan. Look at the output of:

    $ updatedb.mlocate -v

or consult your current database:

    $ locate /

You can even get some more advanced reports, as with:

    $ locate / | cut -f 1-4 -d / | uniq -c | sort -n

and:

    $ locate / | wc -l   ## gives you the number of files stored in the updatedb database

These commands will give you some idea of where updatedb spends its time, and you might spot a lot of places that are huge and can't really be useful (my maildir for instance, or the contents of my Btrfs filesystem being listed twice because I used bind mounting).

For information, I went from 3 million files indexed in a little less than 4 minutes (with a laggy system) to only 1 million files updated in less than 1 second (!). There seems to be a clear bottleneck effect here, probably involving whatever makes the kernel stutter on I/O requests, and probably related to the swapping and the filesystem driver asking to cache too much. These numbers are on an SSD with the cfq scheduler and swappiness of 0, with swap on. Please keep in mind that the timings are meaningful only when the command is run several times: updatedb tries to be clever, and the kernel is also being clever.
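On Debian/Ubuntu, the usual place to limit what updatedb scans is ``/etc/updatedb.conf``. A sketch of what such a configuration could look like — the paths below are examples from my setup, adapt them to yours:

```shell
# /etc/updatedb.conf -- example values, adapt to your system
PRUNE_BIND_MOUNTS="yes"                         # avoids indexing bind mounts twice
PRUNEFS="NFS nfs nfs4 proc smbfs sysfs tmpfs"   # filesystem types to skip entirely
PRUNEPATHS="/tmp /var/spool /media /home/user/Maildir"   # directories to skip
```

``PRUNE_BIND_MOUNTS="yes"`` in particular addresses the bind-mount duplication I mentioned above.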
These two effects mean that if you benchmark, you should take only the second run's time, as it will be more stable. If you want to test any of these things, you can simply re-launch ``updatedb.mlocate`` yourself with:

    $ updatedb.mlocate

or:

    $ ionice -c3 updatedb.mlocate

You might also want to time all of these by prepending ``time`` to the command. My hard drive is an SSD, and I'm working on a Surface Pro 3 running Linux kernel 4.4 on Ubuntu 15.10.

As a conclusion, there seems to be a very particular issue here that triggers a bottleneck, making the system take way more time, processor time, and I/O requests than is obviously needed. More investigation would be welcome. And if you want to escape this, my guess would be to reduce the span of the updatedb lookup so as to stay under this bottleneck and reclaim nearly all of the resources used.
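The "keep only the second timing" procedure can be sketched as a small helper (``second_run`` is a hypothetical function of mine, not part of mlocate):

```shell
# Time a command twice and report only the second run, since the first run
# warms the caches and its timing is not representative.
second_run() {
    "$@" > /dev/null 2>&1          # warm-up run, timing discarded
    time "$@" > /dev/null 2>&1     # the run whose timing you keep
}

second_run sleep 1   # harmless demo; replace with: second_run ionice -c3 updatedb.mlocate
```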