Category: Tech talks

Tech talk: Pattern matching

Today’s tech talk was about functional pattern matching. This was a really fun one since I’ve been sort of “evangelizing” functional programming at work, and it was a blast seeing everyone ask pointed and interesting questions about pattern matching.

What spurred the conversation today was a question my boss asked me: “how is pattern matching actually compiled?” That led me to find this blog post describing the different ways the F# compiler compiles pattern matching. The short of it is that the compiler generates a souped-up switch statement where it checks each pattern in order. Sometimes it does a good job, sometimes it doesn’t, but that’s OK.
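
To make that concrete, here’s a rough, hand-waved C# approximation of how a simple F# union match might be lowered. The Shape type and member names here are illustrative, not what the compiler literally emits, but the shape of the generated code really is a switch over an integer tag:

using System;

// F# unions compile to a class hierarchy where each case carries an integer
// Tag; a match like
//     match shape with
//     | Circle r -> ...
//     | Square s -> ...
// becomes, roughly, a switch over that tag followed by a cast.
abstract class Shape
{
    public abstract int Tag { get; }
    public sealed class Circle : Shape { public double Radius; public override int Tag => 0; }
    public sealed class Square : Shape { public double Side; public override int Tag => 1; }
}

static class Demo
{
    static void Describe(Shape shape)
    {
        switch (shape.Tag)
        {
            case 0: Console.WriteLine("circle, radius " + ((Shape.Circle)shape).Radius); break;
            case 1: Console.WriteLine("square, side " + ((Shape.Square)shape).Side); break;
        }
    }
}

Active patterns (which come up below) don’t get a tag: they compile to ordinary functions returning options, which the match then checks in order.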

In the process of researching for the tech talk I came across a great paper entitled Pattern Matching in Scala which discussed, obviously, pattern matching in Scala, but also covered pattern matching in F#, Haskell, and Erlang. The interesting thing to me here is how Scala handles matching on classes instead of just algebraic data types. Scala has you declare classes as case classes when you want to be able to match on them, which relies on apply and unapply methods that effectively “box” and “unbox” your pattern.

I don’t have much experience with Scala (I skimmed a Scala book and wrote a hello world, but that’s it), but I am familiar with how F# handles this scenario, which is via active patterns. I like this since you can mix and match active patterns to provide your own custom way to “compare” items.

An example I used in our talk today was

let (|Pattern1|_|) i =
    // matches when the input is 0
    if i = 0 then Some () else None

let (|Pattern2|_|) (i : int) =
    // arbitrary custom logic can live inside a pattern
    if i.ToString() = "yo mamma!" then Some () else None

let activePatternTest () =
    let x = 0
    match x with
    | Pattern1 -> printf "pattern1"
    | Pattern2 -> printf "pattern2"
    | _ -> printf "something else"

This really drives the point home that you can do custom work in your pattern match and hide it away from the user. Another, more real-world, example is how I matched on regular expressions in my parsec clone project:

// requires: open System and open System.Text.RegularExpressions
let (|RegexStr|_|) (pattern : string) (input : IStreamP<string, string>) =
    if String.IsNullOrEmpty input.state then None
    else
        // anchor the pattern to the start of the remaining input
        let m = Regex.Match(input.state, "^(" + pattern + ")", RegexOptions.Singleline)
        if m.Success then
            // return the first non-empty capture group
            Some ([ for g in m.Groups -> g.Value ]
                  |> List.filter (String.IsNullOrEmpty >> not)
                  |> List.head)
        else
            None

This hides away the regular expression matching from the caller. The usage now becomes:

member x.regexMatch (input:IStreamP<string, string>) target = 
        if String.IsNullOrEmpty input.state then None
        else 
            match input with 
                | RegexStr target result -> Some(result.Length)
                | _ -> None

Nice and clean, just the way I like it.

Anyways, pattern matching is a really powerful construct and it’s a shame that it’s not available in many OO languages.

Tech talk: CLR Memory Diagnostics

In today’s tech talk we discussed Microsoft’s recent release of ClrMD, which lets you attach to and debug processes using an exposed API. You used to be able to do this in WinDbg using the SOS plugin, but now they’ve wrapped SOS in a managed dll that you can use to inspect CLR process information. The nice thing about this is that you can now automate debugging inspections. It’s now as easy as:

// find the target process and attach with a 5 second timeout
int pid = Process.GetProcessesByName("TestApplication")[0].Id;

using (DataTarget dataTarget = DataTarget.AttachToProcess(pid, 5000))
{
    // locate the DAC (the data access component) for the target's CLR version
    string dacLocation = dataTarget.ClrVersions[0].TryGetDacLocation();
    ClrRuntime runtime = dataTarget.CreateRuntime(dacLocation);

    ClrHeap heap = runtime.GetHeap();

    // walk the managed heap, printing each object's address, size, and type
    foreach (ulong obj in heap.EnumerateObjects())
    {
        ClrType type = heap.GetObjectType(obj);
        ulong size = type.GetSize(obj);
        Console.WriteLine("{0,12:X} {1,8:n0} {2}", obj, size, type.Name);
    }
}

ClrMD lets you take stack snapshots of running threads, iterate through all objects in the heap and pull their values out, show all loaded modules, and more. If you combine it with ScriptCS you’ve got a really powerful debugging tool. What I liked about ClrMD is that you can use the same API to attach to running processes (in modes that either pause the app or leave it running) as you can with process dumps.
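
For example, here’s a hedged sketch of dumping every thread’s managed stack, assuming the same early ClrMD API and the runtime object from the snippet above:

// enumerate every managed thread and print its stack frames
foreach (ClrThread thread in runtime.Threads)
{
    Console.WriteLine("Thread {0:X}:", thread.OSThreadId);

    foreach (ClrStackFrame frame in thread.StackTrace)
        Console.WriteLine("  {0,12:X} {1}", frame.InstructionPointer, frame.DisplayString);
}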

While it is nice to be able to inspect the heap and stacks, I found that it’s not totally trivial to get object information. Since all your queries return pointers to values on the heap (unless it’s a primitive), you need to recursively walk the entire object graph to find all the details. If you have the source, symbols, and a crash dump, I think it’s easier to just toss it into Visual Studio to get this information. Still, if you don’t have access to those and need to investigate or automate error detection in a low-level fashion, this is an amazing tool.

For more reading check out

https://nuget.org/packages/Microsoft.Diagnostics.Runtime

http://www.infoq.com/news/2013/06/microsoft-diagnostics-runtime

http://www.piotrwalat.net/clr-diagnostics-with-clrmd-and-scriptcs-repl-scriptcs-clrdiagnostics/

http://www.programmingtidbits.com/post/2013/05/09/CLR-Memory-Diagnostics-Released.aspx

Tech talk: B-Trees

Yesterday’s tech talk was on b-trees. B-trees are an interesting tree data structure used to minimize disk reads. Since they are self-balancing and optimized for sequential reads and inserts, they’re really good for file systems and databases. CouchDB, MongoDB, SQLite, SQL Server, and other databases all use either a b-tree or a b+ tree as their data index, so it was interesting to discuss b-tree properties.

Disk IO optimizations

A big part of the need for b-trees is disk IO optimization. The need to optimize disk reads comes from the fact that disk IO is slow. Imagine you have to make thousands of reads off disk to find a certain piece of data. The rotational delay in a platter drive to read a certain disk block can be up to 20 milliseconds. If you have to read hundreds, or even thousands, of blocks to find something, your search time can stretch from hundreds of milliseconds into seconds, which is certainly noticeable to a user.

A good way of alleviating some of this disk read time is to have an optimized index that tells you where to look for your data. Instead of reading blocks off disk to search for the data itself, you read a much smaller index that tells you which disk block holds the data you want. Then all you have to do is fetch that block and do one final search inside it.
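
To make that concrete, here’s a minimal C# sketch (my own simplification, not from the talk) of a lookup descending an index node by node; in a real index each child hop is the single extra disk block you’d read:

using System;
using System.Collections.Generic;

// A b-tree node: sorted keys, and for internal nodes one more child than keys.
class BTreeNode
{
    public List<int> Keys = new List<int>();
    public List<BTreeNode> Children = new List<BTreeNode>();  // empty for leaves
    public bool IsLeaf { get { return Children.Count == 0; } }
}

static class BTree
{
    public static bool Search(BTreeNode node, int key)
    {
        // binary search within the node is cheap: the node is already in memory
        int i = node.Keys.BinarySearch(key);
        if (i >= 0) return true;          // found in this node
        if (node.IsLeaf) return false;    // nowhere left to descend

        // ~i is the insertion point, i.e. the child whose range brackets the
        // key; following it is the one extra "disk block" read
        return Search(node.Children[~i], key);
    }
}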

Insertion and deletion

Deletion in a b-tree is cheap and easy. Once you find the data in the index that should be deleted, you mark that index location as deleted; periodically the data can be purged. This is why in many databases a delete doesn’t actually make the database file any smaller. With SQLite, for example, you have to issue a vacuum command, which rebuilds the index. Only at that point does the database shrink.

Inserts, however, are more complicated. Since the tree needs to be optimized for sequential linear reads, you don’t want dense storage for your data; that would mean every time you add something you have to shift all your data around. Instead, it’s cheaper and faster to be sparse: leave empty space for yourself to add things to the index.

Rebalancing

What happens when a block is full? At this point b-trees recursively split themselves. Conceptually, they take the median of the values and create a new split node from it. Half the data goes into the left branch of that node and half into the right. Look at this insertion trace from Wikipedia:

Here you can see the sparseness of the arrays as well as the final split (the last trace), when 6 (the median) is chosen as the split point and 5 and 7 are moved to separate sparse leaves off the 6 branch.
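
Here’s a hedged sketch of that split step, reusing the BTreeNode shape from the lookup sketch above (real implementations also track minimum fill factors, parents, and the disk blocks themselves):

static class BTreeSplit
{
    // Split the full child at childIndex: the median key moves up into the
    // parent, and the left/right halves become two sparse siblings.
    public static void SplitChild(BTreeNode parent, int childIndex)
    {
        BTreeNode full = parent.Children[childIndex];
        int median = full.Keys.Count / 2;

        // the right sibling takes everything above the median
        var right = new BTreeNode();
        right.Keys.AddRange(full.Keys.GetRange(median + 1, full.Keys.Count - median - 1));
        if (!full.IsLeaf)
            right.Children.AddRange(full.Children.GetRange(median + 1, full.Children.Count - median - 1));

        // the median itself moves up into the parent
        parent.Keys.Insert(childIndex, full.Keys[median]);
        parent.Children.Insert(childIndex + 1, right);

        // the original node keeps only the left half (leaving sparse room to grow)
        full.Keys.RemoveRange(median, full.Keys.Count - median);
        if (!full.IsLeaf)
            full.Children.RemoveRange(median + 1, full.Children.Count - median - 1);
    }
}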

B+ Trees

B+ trees, in comparison to b-trees, keep key/value pair data only in the leaf nodes; b-trees keep key/value data at every node in the index. You can see that visualized here:

source: http://www.mec.ac.in/resources/notes/notes/ds/bplus.htm

When would you make a B-Tree?

After the discussion of the theory behind b-trees, a great question came up: when would you ever use one directly? The conclusion we reached was that you probably wouldn’t. If you had so much data that you needed to leverage the b-tree’s disk access properties, you’d probably dump your data into a database that already implements the tree. You wouldn’t really want to build a b-tree from scratch, since there are a lot of optimizations that can be done. For example, Oracle apparently doesn’t do a 50-50 data split when rebalancing the tree; it instead does a 90-10 split when it detects sequential inserts.

More reading

Animated trace of the b-tree algorithm: http://ats.oka.nu/b-tree/b-tree.html

CouchDB explanation of their b-tree implementation: http://guide.couchdb.org/editions/1/fr/btree.html

Tech Talk: Sorting of ratings

Today’s tech talk discussed different ways to sort rating systems. The topic revolved around a blog post we discovered a while ago breaking down different problems with star-based sorts.

The article describes a few problems:

Rating type: Good – Bad

The issue here is that when you use only the difference between positive and negative ratings, you get results skewed toward highly popular (but possibly also highly disliked) items. For example, an item with 200 upvotes and 50 downvotes would have a score of 150, yet an item with 125 upvotes and no downvotes would technically score lower. The team and I agreed that this isn’t a good way of sorting ratings, since the absence of negatives is a stronger indication of a positive review. I think most people actually do this kind of analysis in their heads: if something is rated highly in both positive and negative directions, then even though it may be popular, it could also be problematic.

Rating type: Good / (Good + Bad)

In this scenario, you’re taking the average positive review rate of an item. The article breaks this down pretty clearly when it shows how Amazon can rank an item with only 1 review higher than an item with 500 reviews. As with the previous example, people intuitively know to discount an item with only a few votes: it’s not statistically significant.

Rating type: Lower bound of Wilson score confidence interval for a Bernoulli parameter

I won’t begin to explain what this is, since I don’t fully understand all the math behind it, but from what we could discern this type of rating system assumes a Bernoulli distribution and evens out the weights for both large and small samples. It would have been nice to see a comparison of how this would work with the Amazon or Urban Dictionary examples, since without that it’s hard to visualize how it affects the sort. Also, I’m not sure how well (or even whether) this works on a rating scale (1-5, for example), since the article assumes a binomial distribution (either yes or no).
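
Concretely, here’s a minimal C# translation of the lower-bound formula (z = 1.96 gives a 95% confidence level); sorting items by this value is what the article recommends:

using System;

static class Ratings
{
    // Lower bound of the Wilson score confidence interval for a Bernoulli
    // parameter: "given the votes I have, what positive rate can I be 95%
    // sure the item at least has?"
    public static double WilsonLowerBound(int positive, int total, double z = 1.96)
    {
        if (total == 0) return 0;
        double phat = (double)positive / total;
        double z2 = z * z;
        return (phat + z2 / (2.0 * total)
                - z * Math.Sqrt((phat * (1 - phat) + z2 / (4.0 * total)) / total))
               / (1 + z2 / total);
    }
}

Under this scoring, 125 upvotes with no downvotes (lower bound ≈ 0.97) beats 200 upvotes with 50 downvotes (≈ 0.75), and a single positive review (≈ 0.21) no longer beats 500 mostly positive ones.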

Rating type: Bayesian Estimation

In our research for this discussion we also stumbled on how IMDB does its ratings. In this kind of estimation, IMDB weights a movie’s actual rating score by its number of ratings. Instead of normalizing or removing outliers, it adjusts the score to lean toward the site-wide average rating. This way it takes an exceptional number of votes to get something rated 10 out of 10, since the score is pulled toward the average (by both the movie’s own average AND its number of votes).
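
IMDB has published the formula it uses for its Top 250: WR = (v/(v+m))·R + (m/(v+m))·C, where R is the movie’s own average, v its vote count, m a minimum-votes threshold, and C the mean rating across all movies. A quick C# sketch (the m and C values below are made up for illustration):

static class ImdbRating
{
    // Bayesian-style weighted rating: pull the movie's average R toward the
    // site-wide average C; the pull weakens as the vote count v grows past
    // the threshold m.
    public static double WeightedRating(double r, int v, int m, double c)
    {
        return ((double)v / (v + m)) * r + ((double)m / (v + m)) * c;
    }
}

With a hypothetical site average C = 6.9 and threshold m = 1000, a perfect 10.0 from 5 votes scores about 6.9, while a 10.0 from 50,000 votes scores about 9.9: it takes an exceptional number of votes to stay near the top.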

Conclusion

Our conclusion was that the best way to give meaningful data to a user is to allow for many different kinds of ranking systems. It seems obvious to me that there isn’t just one sort that gives a user meaningful data. Different people interpret data differently and look for different meanings in it, so a truly robust system would provide a couple of different rating sorts. In the end, here’s a relevant xkcd:

From xkcd http://xkcd.com/937/

Tech Talk: Path finding algorithms

Today’s tech talk was about path finding algorithms. The topic was picked because of a recent link shared on reddit that visualized different algorithms. The neat thing about the link is that you can really see how different algorithms and heuristics modify the route.

In general, path finding algorithms are based on breadth-first search. At each iteration while walking through the graph, you check the nearest neighbors and update the calculated weight of the path to reach each one. If it was cheaper to get to a neighbor via your own node (than via whoever visited it previously), you update that neighbor’s weight to reflect it. This is pretty much Dijkstra’s algorithm. Dijkstra gives you the shortest path cost, but not necessarily the shortest path. To find the shortest path, you mark each node with its cheapest parent (i.e. the node you came through to reach it). When you get to the final destination, all you have to do is backtrace through the parent references until you get back to the source.

You can also use a priority queue (implemented as a min heap) to store which node is best to check next, since each time you check a neighbor you add it to a list of discovered (but unvisited) nodes called the “open list”. By keeping the open list ordered by cheapest cost, you can efficiently pick who to check next. A sketch combining both ideas is below.
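
Here’s a minimal C# sketch of the pieces together: the cost updates, the parent references, and a priority-queue-backed open list. It assumes .NET 6’s PriorityQueue, a reachable target, and a graph shape of my own choosing:

using System;
using System.Collections.Generic;

static class PathFinding
{
    // Dijkstra with parent backtracking. The graph is an adjacency list of
    // (neighbor, edge weight); the PriorityQueue is the "open list".
    public static List<string> ShortestPath(
        Dictionary<string, List<(string To, int Weight)>> graph,
        string source, string target)
    {
        var dist = new Dictionary<string, int> { [source] = 0 };
        var parent = new Dictionary<string, string>();
        var open = new PriorityQueue<string, int>();
        open.Enqueue(source, 0);

        while (open.TryDequeue(out var node, out var cost))
        {
            if (node == target) break;
            if (cost > dist[node]) continue;  // stale open-list entry, skip it

            foreach (var (to, weight) in graph.GetValueOrDefault(node, new List<(string, int)>()))
            {
                int candidate = cost + weight;
                // cheaper to reach the neighbor through us? take over as its parent
                if (candidate < dist.GetValueOrDefault(to, int.MaxValue))
                {
                    dist[to] = candidate;
                    parent[to] = node;
                    open.Enqueue(to, candidate);
                }
            }
        }

        // backtrace the parent references from the target to the source
        var path = new List<string>();
        for (var n = target; n != null; n = parent.GetValueOrDefault(n))
            path.Insert(0, n);
        return path;
    }
}

A* would be the same loop with the heuristic distance from each neighbor to the target added to its priority.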

A couple of neat things we discussed were the differences between the algorithms. Almost all of them are based on Dijkstra, except that instead of using only the edge weight to calculate the distance to a node, they add some sort of heuristic to guide the direction of neighbor checking. For example, with A* you add the distance from the neighbor to the target to the node’s total weight. With best-first, you also add the distance from the neighbor to the target, but amplified by a large factor (making it like a focused A*).

There are different kinds of distance calculations you can use as the heuristic weight too. Manhattan distance counts only vertical and horizontal movements (like a taxi in Manhattan), so a diagonal move costs two units: one horizontal move and one vertical move. Euclidean distance is the straight-line distance between the two x,y coordinates. And Chebyshev distance is the maximum of the differences in either direction (x or y). On a grid where all units are one, a diagonal move costs sqrt(2) under Euclidean distance, 1 under Chebyshev, and 2 under Manhattan.
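
As a quick sketch, here are the three distances from the discussion (for the diagonal move from (0,0) to (1,1) they return 2, sqrt(2), and 1 respectively):

using System;

static class Heuristics
{
    // Manhattan: only horizontal + vertical moves count (a diagonal costs 2)
    public static int Manhattan(int x1, int y1, int x2, int y2)
    {
        return Math.Abs(x1 - x2) + Math.Abs(y1 - y2);
    }

    // Euclidean: straight-line distance (a diagonal costs sqrt(2))
    public static double Euclidean(int x1, int y1, int x2, int y2)
    {
        int dx = x1 - x2, dy = y1 - y2;
        return Math.Sqrt(dx * dx + dy * dy);
    }

    // Chebyshev: the larger of the two axis differences (a diagonal costs 1)
    public static int Chebyshev(int x1, int y1, int x2, int y2)
    {
        return Math.Max(Math.Abs(x1 - x2), Math.Abs(y1 - y2));
    }
}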

We also talked about jump point search, which is completely different. The idea is to use a set of rules to eliminate nodes you don’t need to check, which makes the search much more efficient by removing huge swaths of the search space. The downside is that it only works if you can inspect cells beyond where you currently are. If you need to actually traverse a cell to find its weight, you can’t really use jump point.

After that we got into a discussion of how 3-dimensional path finding algorithms work. With real-world robotics you can’t do a direct BFS of every single point in space around you; it would take forever. Instead, you probabilistically select N points in the real-world space. This gives you a random sampling of what is out there in the world view, and you create a graph using that. At that point you can use BFS to find the nearest path and take a step in that direction. At each step, you build a new graph and try again. This gives you a discrete set of sample points to work with and generally moves you in the right direction.

A coworker also mentioned a paper discussing what happens when the target moves. In that case you can treat your graph as a changing vector field. Again, at each time interval you re-evaluate the best path to the target and make a move in that direction. See figure 4 (PDF).

In the end path finding is an extremely interesting subject with lots of ways of doing it.

Tech Talk: AngularJS

Today’s tech talk was a continuation of the front-end discussions we’ve been having. Last week we talked about TypeScript (I forgot to write it up) and this week we discussed the basics of Angular. Angular is a front-end MVC framework written by Google that, at first glance, looks completely different from previous javascript/html development. The basic gist is to strongly decouple logic into encapsulated modules. But that’s not all there is; there’s a lot to it. Angular has a templating engine, dependency injection, two-way binding between views and controllers, event dispatching, etc.

Since it’s hopeless to cover all of angular in one blog post, I’ll just mention a few of the good questions that came up during our discussion:

  • Can a child component dispatch to a parent component? Let’s say something happened in an inner component and you need a parent component to also register that?
    Sure, angular provides a function called $emit that lets a child component redispatch an event to parent components, and a $broadcast method for a parent component to dispatch to all child components.
  • Why use client side rendering vs server side rendering?
    This was an interesting discussion we had. While we didn’t come up with a definite answer, we figure it’s partly because browsers can handle more work now, and partly because the trend is to decouple the server from view logic. Servers are more often designed as a collection of REST APIs that serve static documents, so view compilation can be offloaded onto the client. So why does server side rendering still exist? Well, it’s the way it’s always been done and it’s easy to do. On top of that, it does offload client side work. I guess it’s two schools of thought on when to do what.
  • How do you profile angular?
    This was another interesting discussion. We didn’t know of any specific ways to profile angular other than the built-in profiling tools in browsers and other external html/js profilers, but angular has done a really nice job of exposing almost the entire framework in a unit-testable way. This means you can step through angular and see where bottlenecks are within the framework.
  • What if angular stops being developed? Can you use a later version of jQuery if their bundled version of jQuery lite stops being updated?
    Another good question. You can alias a new version of jQuery (apparently jQuery already has this feature), but you can also provide a new jQuery object as a provider to any directives if you want to. This way you can have the original jQuery that angular comes with alongside the injected secondary jQuery that you actually use.
  • Is it possible to mix server side rendering (with Razor or Jade) with angular client side templating?
    Sure, since whatever the server doesn’t recognize (in the {{ … }} syntax) will be rendered on the client side.
  • Does angular come with prepackaged widgets?
    Not really, but it’s easy to wrap any other frameworks/libraries widgets within directives.

More Info

Here are some links that talk more about angular

Video Tutorial: AngularJS Fundamentals in 60-ish Minutes: http://weblogs.asp.net/dwahlin/archive/2013/04/12/video-tutorial-angularjs-fundamentals-in-60-ish-minutes.aspx

Angular Providers: http://slides.wesalvaro.com/20121113/#/

AngularJS MTV Meetup: Best Practices (2012/12/11): http://www.youtube.com/watch?v=ZhfUv0spHCY

Excellent set of short video tutorials on AngularJS (recommended!): http://www.egghead.io/

http://www.egghead.io/video/HvTZbQ_hUZY

Code Organization in Large AngularJS and JavaScript Applications: http://cliffmeyers.com/blog/2013/4/21/code-organization-angularjs-javascript

Building up AngularJS (Greg Weber yap.TV): http://www.slideshare.net/antonkropp/angular-js-meetup-20416779

http://stephanebegaudeau.tumblr.com/post/48776908163/everything-you-need-to-understand-to-start-with

Tech Talk: Text Editors

Today’s tech talk was a little less tech but no less important. We got together and talked about the different text editors that we use and why we like them.

JuJu Edit

  • Pros: We use JuJu at work a lot because it handles enormous files really well, and when I say enormous I mean upwards of 50+ GB log files. Other editors choke on files this big, but JuJu handles them fine. On top of that, JuJu lets you do custom syntax highlighting, which is great for log files since you can highlight Debug, Warn, Info, Error, etc. It’s pretty bare bones, but it’s very lightweight and useful for quick note taking and basic text editing. Also, the ability to auto-reload the file on changes makes it a great tail replacement.
  • Cons: JuJu isn’t in active development anymore and the documentation is non-existent (setting up the regex to do log line highlighting was a lot of trial and error). If you set up your highlighting regex wrong it can really choke JuJu and make it run super slow. Also, with large files it can sometimes grind to a halt if you need to scroll to an arbitrary position.
  • http://jujusoft.com/jujuedit/

EditPad Lite

  • Pros: The team commented that EditPad Lite handles large files pretty well, has undo/redo support, auto-save, and a multi tab interface to handle lots of open files.
  • Cons: No syntax highlighting, and the multi-tab interface.
  • http://www.editpadlite.com/

EditPad Pro

  • Pros: On top of what you get with EditPad Lite you also get syntax highlighting, which is cool.
  • Cons: Not much, other than it’s not free.
  • http://www.editpadpro.com/

Notepad++

  • Pros: Notepad++ is a pretty big winner in the text editor shootout. It has auto-reload on change, plugin support, great find-in-files support, and it’s actively maintained. It also lets you do cool block/column selection (great for selecting code without the leading indentation).
  • Cons: A common complaint was that it’s a heavier application than something like JuJu Edit. Even with medium-sized files (a few hundred MB) the memory usage can soar. So while it makes a great actual text editor, it makes for a poor log viewer.
  • http://notepad-plus-plus.org/

Intype

  • Pros: Intype is a new player and it’s being actively developed. It has syntax highlighting and commands tailored to specific file types. Also, it handles language fallback fonts, which is kind of cool if you are dealing with different languages: if the default font doesn’t have the character set a language needs, it can fall back to another font, and then another, and another. Great if you are working with localization.
  • Cons: None yet!
  • http://inotai.com/intype/

Markdown Pad

  • Pros: I mentioned this at our talk because I like to take notes in markdown. Markdown Pad also lets you use GitHub flavored markdown for code highlighting, and you can render the final markdown as html (with inline CSS) or as a PDF. You write markdown in a left panel and it renders in a right panel, so you get a preview of what you are writing as you write it. It has a bunch of other really nice features, and for note taking I think it’s better than a plain text editor.
  • Cons: To get GitHub flavored markdown you need to pay, and a bunch of other neat features (like PDF support) also require the pro version. Thankfully it’s reasonably cheap and I’ve gone ahead and bought it. One thing I’m not too fond of is the delay in rendering markdown. I think this is because they are doing some optimization to handle very large files, but it would be nice if they only did it once a file reached a certain size and not all the time.
  • http://markdownpad.com/

For those who are wondering, we did discuss vim, emacs, TextMate, and Scintilla, but didn’t spend much time talking about them, hence why I didn’t include them. I’ve personally used vim a lot from my Linux days but it’s not my personal choice.

Text editors are a highly personal choice, so it’s great that there are so many tools out there. The paid versions of these apps are usually relatively cheap and if you like a piece of software definitely buy it!

Tech talk: Hacking droid

Today’s tech talk was based on a blog entry posted by Facebook recently where they described the things they needed to do to get their mobile app running on Android OS Froyo (v2.2).

The gist of the post was that Facebook migrated a lot of javascript code to Java and then found themselves in an interesting situation: they were unable to load the app because the number of declared methods was larger than the method metadata buffer could hold. The buffer, called “LinearAlloc”, was 5MB on this particular Android version. Later versions increased it to 8MB, which meant the bug was no longer exhibited.

Facebook’s engineers tried to break their application up into multiple Dalvik executable files (dex), tried automatic source transformations to minimize their method count, and tried refactoring some of their code (but were unable to make significant headway due to the strong coupling they had). They ended up having to do some memory hacking insanity: find the buffer in memory and replace it with a pointer to another, larger buffer. This way they had enough space to store the methods they needed. They also put in a failsafe where they would scan the entire process memory heap looking for the correct buffer location.

You can see the buffer structure they linked to here:

/*
 * Linear allocation state.  We could tuck this into the start of the
 * allocated region, but that would prevent us from sharing the rest of
 * that first page.
 */
typedef struct LinearAllocHdr {
    int     curOffset;          /* offset where next data goes */
    pthread_mutex_t lock;       /* controls updates to this struct */

    char*   mapAddr;            /* start of mmap()ed region */
    int     mapLength;          /* length of region */
    int     firstOffset;        /* for chasing through */

    short*  writeRefCount;      /* for ENFORCE_READ_ONLY */
} LinearAllocHdr;

The team and I had a good time talking about their thought process and whether what they did was a good idea or not as well as critiquing decisions by the dalvik designers. Some ideas and discussions that came up were:

  1. If the size of the application is so big the operating system can’t load it, should the application be that big? Why not modularize the work? If they couldn’t modularize their application because of how reliant they are on some framework, they should have put proxies in to decouple their logic. Tightly coupled code suffers from this problem.
  2. Maybe the internal buffer size was chosen at 5MB to maximize storage space vs estimated application size. The number could have been arbitrary, OR it could have been an extremely specific value based on internal factors not known to us. Either way, messing with that buffer seems like a bad idea.
  3. Why not deprecate the application for phones running that operating system? (One of our new engineers brought up that Froyo isn’t that old, and Facebook’s capital is based on running their application everywhere, unlike Apple, who can dictate when to buy new hardware to run whatever software.)
  4. Why not make the internal LinearAlloc buffer dynamic? This got us talking about the memory overhead of a dynamically resized array (vector); maybe they chose not to make it dynamic due to the performance overhead on an embedded device.

We all related to them, since I’m sure every engineer has gotten the main business logic working only to have a show-stopping, undocumented, strange heisenbug show up, forcing you to stop everything and fix it for weeks.

After that, the team’s discussion diverged a little and we got to talking about basic droid development. One of the QA engineers here had done some droid work before and walked us through a basic application design, describing what an intent and an activity are, and how droid views are created.

All in all, a fun tech talk.

Tech talk: Javascript Memory Leaks and JSWhiz

Today’s tech talk revolved around the recently published JSWhiz whitepaper from Google. The paper discusses common javascript memory leak patterns, how those leaks can be created, and how Google automated detection of them using Closure type annotations.

JSWhiz

First, Google identified common javascript memory leak patterns, some of which are described in terms of Closure syntax:

  • Create without dispose EventHandler – “Closure objects of type EventHandler provide a simple mechanism to group all events and listeners associated with an object together. This allows for easy removal of listeners when the events being listened to can no longer fire. Unfortunately this is also one of the main sources of leaks.”
  • Setting member to null without removing event listeners – “In JavaScript setting a variable to null is the idiomatic way of providing a hint to the garbage collector that the memory allocated by an object can be reclaimed. But the EventHandler and event listeners attached to the object prevents the garbage collector from disposing of the object and reclaiming memory.”
  • Undisposed member object has EventHandler as member
  • Object graveyard (holding onto objects in a persistent structure like a list/dictionary so they can’t ever get collected)
  • Overwriting EventHandler field in derived class – “Closure implements inheritance using a prototype-based approach [8], which can break the expectations of programmers coming from a more standard OOP language background (such as C++ or Java). For EventHandlers this can cause memory leaks as the overwritten EventHandler cannot be freed [8].”
  • Unmatched listen/unlisten calls – “The semantics of the listen and unlisten calls require all parameters to be the same. When the parameters do not match, the event listener is not removed and a reference to objects being listened remains, inhibiting the garbage disposal.”
  • Local EventHandler – “A local EventHandler instance that does not escape scope, e.g., a locally defined variable that is not assigned to a field member, added to an array, or captured in a closure, can not be disposed of later.”

How

Google’s engineers leveraged the fact that they had an annotated AST of javascript (via the Closure compiler) to look for these particular patterns and help identify potential leaks. It’s not always as easy as it sounds though, as the paper mentions in one section:

Doing the analysis in a flow-insensitive manner is equal to assuming that an eventful object is disposed of if there exists a disposal instruction in the AST tree. This assumption is further necessitated by 1) the dynamic nature of JavaScript, 2) most disposals resulting from runtime events (user actions, that may or may not occur) and 3) the computational demands associated with full program/inter-procedural analysis.

Other pitfalls of their current analysis include

  • Not tracking objects with fully qualified names (such as application.window.toolbar)
  • Objects that are never returned
  • Objects not captured in closures
  • Not checking for double disposal

Still, it’s an impressive feat.

Discussion

After running through the paper we started talking about different garbage collection issues with javascript. One of the big problems people have encountered is that since the DOM and javascript (in older browsers) use reference counting, you can easily get yourself into circular references which prevent collection. We talked a bit about generational collection, since that’s what the V8 engine used by Chrome and node.js does, and how it can solve circular references by doing path-from-root detection.

A couple of other neat points came up, such as the fact that the delete keyword doesn’t actually delete memory; it only deletes properties of an object. Just to double-check, we found ourselves at the Mozilla dev site, which had this hilariously appropriate quote on the page for the delete keyword:

The delete operator removes a property from an object. Unlike what common belief suggests, the delete operator has nothing to do with directly freeing memory (it only does so indirectly via breaking references).

Conclusion

In the end we decided that these memory leak patterns aren’t strictly related to javascript. A lot of the same issues occur in ActionScript and other languages (even .NET), where strong references in the form of event handlers or closures keep things from ever being garbage collected.
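
For example, here’s a minimal C# sketch of that same pattern in .NET (names are illustrative): the publisher’s event holds a strong reference to every subscriber, so a subscriber that never unsubscribes can’t be collected while the publisher is alive.

using System;

class Publisher
{
    public event EventHandler SomethingHappened;
    public void Raise() { SomethingHappened?.Invoke(this, EventArgs.Empty); }
}

class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        // this subscription is a strong reference from publisher to subscriber
        _publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { /* react */ }

    // without this unsubscribe, a long-lived publisher keeps every subscriber
    // alive forever: the same failure mode as an undisposed EventHandler
    public void Dispose()
    {
        _publisher.SomethingHappened -= OnSomethingHappened;
    }
}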

The ultimate way to avoid memory leaks is to be cognizant of who uses what, who holds on to what, and what needs to be removed when.