06/03 Update

Okay, so I achieved everything I set out in the TODO of the last post and pushed it up to the threading branch in git.

It’s still missing commands, but it no longer requires poking to see output, and scrolling, using the browser, quitting, etc. all work. Now, actually using the command line isn’t so hot because stuff like aliases still doesn’t work, so DON’T USE THE CODE unless you’re curious and probably already have a working config. There’s a reason it’s tucked into a separate branch.

Threading still isn’t 100% done, in that it’s not tested enough and I’m sure I’ll still find some issues in both the client and the daemon, but it’s close enough that I’m willing to move on to completion. That’s where I’ll restore all of the commands I broke and probably redo most of the command infrastructure anyway. There are also some things I’ve left in there for the sake of testing, like hardcoding the sync rate to every five seconds, but that should stay in place until release because it should help provoke any remaining threading problems.
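
For the curious, here’s a minimal sketch of what that kind of hardcoded sync interval amounts to. The names (SYNC_INTERVAL, fetch_all_feeds, sync_loop) are hypothetical and not canto’s actual internals; the point is just that a short, fixed interval runs the fetch threads often enough to shake out any remaining races.

    import threading
    import time

    SYNC_INTERVAL = 5  # hardcoded to 5 seconds for testing; normally configurable


    def fetch_all_feeds():
        # Placeholder for the real feed-fetching work the daemon does.
        print("fetching feeds at", time.strftime("%H:%M:%S"))


    def sync_loop(stop_event):
        # Kick off a fetch every SYNC_INTERVAL seconds until told to stop.
        while not stop_event.is_set():
            fetch_all_feeds()
            stop_event.wait(SYNC_INTERVAL)


    stop = threading.Event()
    threading.Thread(target=sync_loop, args=(stop,), daemon=True).start()
    time.sleep(12)  # let a couple of cycles run, then shut down
    stop.set()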

The Memory Leak that Wasn’t

I spent a day or two tracking down an ethereal memory leak. On one of the three machines I have lying around (all Intel/x86_64/Arch with the same base software), I discovered a massive jump in top’s RES stat every time an update cycle occurs. This happens with the old code (July of last year) as well as the latest git, but only on one machine: my work laptop.

I’ve taken a look with gc, objgraph, tracemalloc, and Pympler, and nothing registers any changes in the internal Python environment on the broken machine; meanwhile, the RES size leaps up by 4-5M every time the fetching threads begin. Before and after the updates, the number of objects of each type and their respective sizes are identical (barring a few legitimate additional stories every once in a while).
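
For reference, the before/after comparison I mean looks roughly like this; it’s a generic sketch (run_update_cycle is just a placeholder for whatever kicks off the fetch threads), not the actual instrumentation in the daemon.

    import gc
    import tracemalloc

    import objgraph  # third-party: pip install objgraph

    tracemalloc.start()


    def run_update_cycle():
        # Placeholder: in the real daemon this is where the fetch threads run.
        pass


    gc.collect()
    before = tracemalloc.take_snapshot()
    objgraph.show_growth(limit=10)  # prime objgraph's per-type counts

    run_update_cycle()

    gc.collect()
    after = tracemalloc.take_snapshot()
    objgraph.show_growth(limit=10)  # prints only types whose counts grew

    for stat in after.compare_to(before, "lineno")[:10]:
        print(stat)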

top is obviously a bit of a blunt instrument here, but even taking into consideration that Python maintains its own heap and allocations (which means RES might be closer to a high-water mark than a current-usage stat), I can’t see a reason for RES to jump so drastically every single time, and I definitely can’t think of a reason this one particular machine would be a problem.
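
To make that gap concrete, here’s a small Linux-only sketch that prints the kernel’s resident-set number (roughly what top reports as RES) next to what Python’s allocator accounts for via tracemalloc. It’s an illustration, not code from the daemon.

    import tracemalloc

    tracemalloc.start()


    def report(label):
        # VmRSS from /proc/self/status is roughly what top shows as RES,
        # in kilobytes; Linux-only.
        with open("/proc/self/status") as f:
            rss_kb = next(int(line.split()[1]) for line in f
                          if line.startswith("VmRSS:"))
        traced, peak = tracemalloc.get_traced_memory()
        print(f"{label}: RES={rss_kb} kB, "
              f"tracemalloc={traced // 1024} kB (peak {peak // 1024} kB)")


    report("before update")
    # ...trigger an update cycle here...
    report("after update")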

Anyway, I’m considering this a bit of a non-issue when it comes to the daemon code itself, but I am curious about the cause and plan on doing some digging to at least isolate the important factor (e.g. Python versions, etc.).

tl;dr If your canto-daemon memory is spiralling out of control, let me know (comment/IRC/email).
