Deprecation warning
This page was last updated in 2015 and is probably outdated for newer versions of Tracker. See the FAQ instead for up-to-date information.
1. Debugging
This page has some tips to help you debug Tracker when it is not performing the way you expect.
1.1. Logging
1.1.1. Client Side
- If you're running a Tracker daemon or Tracker client on the command line, you can always use:
$ TRACKER_VERBOSITY=3 ./my-binary
The same works for tracker binaries which are clients of Tracker, e.g. tracker info or tracker search.
$ TRACKER_VERBOSITY=3 tracker search -f foo
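If you want that verbosity for every Tracker client started from the same shell, you can export the variable once instead of prefixing each command (the file path passed to tracker info below is just an example):
$ export TRACKER_VERBOSITY=3
$ tracker info ~/Documents/example.pdf
$ tracker search -f foo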
1.1.2. Server Side
- For tracker processes, set the log verbosity using:
With releases >= 1.4.x (including git master), you can use the following:
$ tracker daemon --get-log-verbosity
Components:
  Store    : minimal
  Extract  : minimal
  Writeback: minimal
Miners (Only those with config listed):
  Files    : minimal

$ tracker daemon --set-log-verbosity debug
Setting log verbosity for all components to 'debug'…
Components:
  Store    : debug
  Extract  : debug
  Writeback: debug
Miners (Only those with config listed):
  Files    : debug
With releases >= 0.12.x, you can use the following:
$ tracker-control --get-log-verbosity
Components:
  Store    : minimal
  Extract  : minimal
  Writeback: minimal
Miners (Only those with config listed):
  Files    : minimal

$ tracker-control --set-log-verbosity debug
Setting log verbosity for all components to 'debug'…
Components:
  Store    : debug
  Extract  : debug
  Writeback: debug
Miners (Only those with config listed):
  Files    : debug
With releases >= 0.11.x, you can use the following:
$ gsettings get org.freedesktop.Tracker.Miner.Files verbosity
'errors'
$ gsettings get org.freedesktop.Tracker.Store verbosity
'errors'
...
$ gsettings set org.freedesktop.Tracker.Store verbosity "'debug'"
$ gsettings set org.freedesktop.Tracker.Miner.Files verbosity "'debug'"
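If you are unsure which Tracker schemas and keys exist on your system (they vary between versions), plain gsettings can list them before you change anything:
$ gsettings list-schemas | grep -i tracker
$ gsettings list-keys org.freedesktop.Tracker.Store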
With releases <= 0.10.x, you can use the following and set the verbosity to 3:
(This method is the same if you're using the TRACKER_USE_CONFIG_FILES environment variable)
$ gedit ~/.config/tracker/tracker-store.cfg
$ gedit ~/.config/tracker/tracker-miner-fs.cfg
If the TRACKER_USE_LOG_FILES environment variable is set, the logs are stored in a standard directory:
$ ls ~/.local/share/tracker
Otherwise logging messages will be sent to one of:
- ~/.xsession-errors
- ~/.cache/gdm/session.log
- ~/.cache/upstart/gnome-session.log
- the systemd journal
- You can then follow the logs for those processes in real time using the following (you may have to restart the processes if the config was changed):
$ tail -f ~/.local/share/tracker/tracker-miner-fs.log
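If the messages end up in the systemd journal rather than in log files, you can follow them there instead (a rough sketch; the best filter depends on how your session launches the Tracker processes):
$ journalctl -f | grep -i tracker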
1.2. Resetting All Databases
If you think your database is corrupt you might want to reset all the databases. This is especially important if you change the schemas, which are installed the first time tracker-store runs.
With releases >= 1.4.x (including master), you can use the following:
$ tracker reset --hard
With releases >= 0.12.x, you can use the following:
$ tracker-control -r
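After the reset you can confirm the on-disk databases are gone before re-indexing starts; the cache directory below is the usual default location, though it may differ on your system:
$ ls ~/.cache/tracker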
1.3. Starting Tracker
You can start the Tracker miners using the following (this automatically starts tracker-store via D-Bus):
With releases >= 1.4.x (including master), you can use the following:
$ tracker daemon -s
With releases >= 0.12.x, you can use the following:
$ tracker-control -s
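A typical debugging loop combines the commands above: wipe the databases, start the miners again and watch them re-index (shown here with the >= 1.4.x commands; -f follows the status, as described in the next section):
$ tracker reset --hard
$ tracker daemon -s
$ tracker daemon -f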
1.4. Control & Status
Other commands you can use to help understand what is happening include:
Using tracker daemon or tracker-control, you can check whether the processes you expect to be running actually are (run the command with no arguments):
With releases >= 1.4.x (including master), you can use the following:
$ tracker daemon
Store:
27 Jan 2015, 19:25:09:  ✓      Store                 - Idle

Miners:
27 Jan 2015, 19:25:09:  0%     Extractor             - Extracting metadata
27 Jan 2015, 19:25:09:  ✓      Userguides            - Idle
27 Jan 2015, 19:25:09:  ✓      Applications          - Idle
27 Jan 2015, 19:25:09:  ✓      File System           - Idle
With releases >= 0.12.x, you can use the following:
$ tracker-control -p
Found 199 PIDs…
Found process ID 7642 for 'tracker-store'
Found process ID 7694 for 'tracker-search-bar'
Using tracker daemon or tracker-control, you can also follow the status of each component as it changes:
With releases >= 1.4.x (including master), you can use the following (-f is used to follow state):
$ tracker daemon -f
Store:
27 Jan 2015, 19:21:03:  ✓      Store                 - Idle

Miners:
27 Jan 2015, 19:21:03:  0%     Extractor             - Extracting metadata
27 Jan 2015, 19:21:03:  ✓      Userguides            - Idle
27 Jan 2015, 19:21:03:  ✓      Applications          - Idle
27 Jan 2015, 19:21:03:  0%     File System (PAUSED)  - Initialising
Press Ctrl+C to stop
With releases >= 0.12.x, you can use the following (-F is used to follow state and -D gives detailed output; this was formerly tracker-status -fd):
$ tracker-control -FD
Using tracker stats or tracker-stats, you can get an idea of how much data has been stored for each class:
With releases >= 1.4.x (including master), you can use the following:
$ tracker stats
With releases >= 0.12.x, you can use the following:
$ tracker-stats
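The stats output lists every class with its count, so it can be long; piping it through grep is enough to watch a single class (nfo:Document below is just an example class name):
$ tracker stats | grep nfo:Document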
1.5. Running Manually
Commands like tracker-extract can be run with special command line switches to make debugging much easier.
With tracker-extract, you can use:
$ /usr/libexec/tracker-extract -d
The -d switch means it does not terminate itself after 30 seconds of inactivity. It is also recommended to use -v 3 (log verbosity) so you can see everything that is going on.
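Combining both switches gives an extractor session that stays alive and logs everything it does:
$ /usr/libexec/tracker-extract -d -v 3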
2. High memory use?
Tracker has stopped using memory checking APIs for tracker-extract because they were causing too many false positives (e.g. extracting data from a huge PDF can legitimately use a lot of memory without anything being wrong). Some data extraction genuinely requires a lot of memory, and guessing what the right limit should be is practically impossible.
High memory use usually comes down to the libraries used to extract that data, which may or may not be broken. In most cases high memory use is due to either:
- (a) the library we're using to index the file is broken or has a bug
- (b) non-standardised importing or encoding of the file being indexed (e.g. embedding huge amounts of data in an MP3)
The most common case tends to be (a). Ultimately, we need to know which file(s) being indexed cause that memory use. There are a number of ways to find this out; the easiest is to:
Stop all daemons:
$ tracker daemon -t
Run tracker-extract separately:
$ /usr/libexec/tracker-extract -v 3
Start the daemons again (any that are not already running will be started):
$ tracker daemon -s
This should at least show you the output of tracker-extract as it runs, and it should drop back to the terminal when systemd or the kernel kills it. You can then send us the log in a bug report. There is also a command to collect further debug information:
$ tracker status --collect-debug-info
That should give us some further information about your system in the bug report.
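To capture the tracker-extract output from the steps above so it can be attached to the bug report, plain shell redirection is enough (the log file name is just an example):
$ /usr/libexec/tracker-extract -v 3 2>&1 | tee tracker-extract-debug.log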
3. Using Valgrind
3.1. Setup
It is recommended you set your environment correctly before starting Valgrind. Quite often we see memory backtraces which blame GSlice. To avoid this (GSlice might otherwise hide the real issue), you can use:
export G_SLICE="always-malloc"
This can also be useful, but must not be combined with "always-malloc":
export G_SLICE="debug-blocks"
For more information about GSlice environment variables see: http://library.gnome.org/devel/glib/unstable/glib-running.html
This is also useful and helps Valgrind produce more accurate results; the link above explains what it does:
export G_DEBUG="gc-friendly"
3.2. Running
For running Valgrind, you mostly want to use something like this:
$ valgrind --leak-check=full --show-reachable=yes --leak-resolution=high --track-origins=yes --log-file=foo.log /path/to/tracker/command
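For example, to run the extractor under Valgrind with the environment set up as above (the libexec path is the common install location and the log file name is just an example; adjust both for your system):
$ export G_SLICE="always-malloc"
$ export G_DEBUG="gc-friendly"
$ valgrind --leak-check=full --show-reachable=yes --leak-resolution=high --track-origins=yes --log-file=tracker-extract-valgrind.log /usr/libexec/tracker-extract -d -v 3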
3.3. FAQ
3.3.1. Why do I get no information in the Valgrind log?
If you see this:
Problem: Valgrind shows everything as being 100% perfect and bug free

==20949== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 15 from 1)
==20949== malloc/free: in use at exit: 0 bytes in 0 blocks.
==20949== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
==20949== For counts of detected errors, rerun with: -v
==20949== All heap blocks were freed -- no leaks are possible.
This usually happens because you are running Valgrind on the libtool wrapper script rather than the actual binary, in your checked-out or tarball build.
To fix this, simply run .libs/<binary> instead of the shell script.
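For example, when running from a build tree (the directory layout below assumes a typical autotools build of Tracker; yours may differ):
$ cd src/tracker-extract
$ valgrind --leak-check=full .libs/tracker-extract -d -v 3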
4. Using gdb
4.1. Setup
If you are seeing GLib errors in the logs, you can run the process under gdb and set environment variables to break on those issues. In gdb, use:
$ set env G_DEBUG fatal_warnings
$ run
This means that any warning becomes fatal and as such gdb will see an abort() in those cases. You can do the same for critical messages using:
$ set env G_DEBUG fatal_criticals
$ run
You can also break on warnings and criticals without making them abort() as above, using a conditional breakpoint:
$ break g_log if ((log_level & (G_LOG_LEVEL_WARNING | G_LOG_LEVEL_CRITICAL)) != 0)
$ run
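Putting it together, a session against the extractor might look like this (the binary path is the usual install location; bt is the standard gdb backtrace command):
$ gdb /usr/libexec/tracker-extract
(gdb) set env G_DEBUG fatal_warnings
(gdb) run -d -v 3
... the process runs until the first warning triggers the abort() ...
(gdb) bt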