Category Archives: Javascript

Mutation Testing with Stryker in an AngularJS 1.6 App

In my current project I’m working on a large and messy Angular front-end. We are frequently making small changes and bugs keep creeping into the site. This has had me wondering about some questions lately:

How do we make sure that our unit tests are testing the right things? And how do we find out whether there is untestable code in the application, or whether there are tests that always pass no matter what?

If you think the answer is measuring code coverage then I have to disappoint you: code coverage only counts which lines of code are being executed while running the unit tests. It does not tell you which lines are actually tested, let alone how thorough the tests are.
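
To make this concrete, here is a small made-up example (the add function and the Jasmine spec are invented for this post): the test below executes add, so a coverage tool counts its lines as covered, but it doesn’t actually assert anything about the result.

// Hypothetical example: this spec gives add() 100% line coverage,
// but it would not notice if the implementation were broken (or mutated).
function add(a, b) {
  return a + b;
}

describe('add', function () {
  it('is covered but not really tested', function () {
    add(1, 2);                  // executed, so the line counts as "covered"
    expect(true).toEqual(true); // always passes, asserts nothing about add()
  });
});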

So how do we solve this problem? One potential answer is called mutation testing. What’s behind the interesting name? A mutation testing library takes the code that you are trying to test… and literally screws it up. It will, for example, remove function bodies or swap logical and binary operators for their counterparts. This is called a mutation.

It will then run your tests against those destroyed versions of the code. Why does it do that? And what should happen? Each of the mutated pieces of code should cause at least one test to fail. If all the tests still pass although a significant part of the code was altered, then you know something is wrong with your tests. It’s actually kind of like a test suite for your test suite.

To illustrate the concept I’ve created a diagram (stryker_mutation_testing_principle). To summarise: a tree of different so-called mutations of our code is generated. Borrowing terms from genetics, if at least one test fails for a particular mutant, we say the mutant is “killed” by the test suite; conversely, if all tests pass, we say the mutant survived. Our task then is to “kill” it by writing a better or an additional test.

The concept of mutation testing itself has been around for decades, but it used to be too computationally expensive to be practical. Now, with fast and parallel computing power at our hands, we can give it another try… In JavaScript there are several mutation testing libraries, and one of them is quite mature: Stryker.

Stryker actually incorporates the technique of measuring code coverage to determine which unit tests should be executed against which mutants.

Examples please!

Of course, coming right up… I wanted to give mutation testing a try, but I couldn’t find any running example Angular applications with Stryker online, so I created one myself.

With a couple of hiccups due to the still thin documentation on the web, Stryker was easy to install: it was a matter of adding a couple of dependencies to package.json and adding a stryker.conf.js to the project. In stryker.conf.js we define that we want to use Jasmine as the test framework and Karma as the test runner.

Stryker will then import karma.conf.js to determine the test settings. An important thing to highlight is that the list of files to mutate is defined in stryker.conf.js, while the list of files needed to execute the tests is imported from karma.conf.js.
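
To give you a rough idea, a minimal stryker.conf.js for this setup could look something like the sketch below. Treat the option names as an approximation: they vary between Stryker versions, so check them against the documentation of the version you install.

// Sketch of a stryker.conf.js (option names may differ between Stryker versions).
module.exports = function (config) {
  config.set({
    testFramework: 'jasmine',          // the specs are written with Jasmine
    testRunner: 'karma',               // Karma executes the tests
    karmaConfigFile: 'karma.conf.js',  // the files needed to run the tests come from here
    mutate: ['app.js'],                // only these files get mutated
    reporter: ['clear-text', 'html']   // console output plus an html report
  });
};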

So what does it look like when Stryker is run and how can we make its magic visible? Let’s run

node_modules/.bin/stryker run --logLevel debug stryker.conf.js

which will give us an actual output of what is mutated and how:

[DEBUG] ClearTextReporter - Mutant killed!
[DEBUG] ClearTextReporter - /Users/daten/Sites/AngularStrykerDemo/app.js: line 9:40
[DEBUG] ClearTextReporter - Mutator: BlockStatement
[DEBUG] ClearTextReporter - -               $ctrl.add = function (a, b) {
[DEBUG] ClearTextReporter - -                 var result = a + b;
[DEBUG] ClearTextReporter - -                 return result;
[DEBUG] ClearTextReporter - -               };
[DEBUG] ClearTextReporter - +               $ctrl.add = function (a, b) {
[DEBUG] ClearTextReporter - +   };
[DEBUG] ClearTextReporter - Ran all tests for this mutant.
[DEBUG] ClearTextReporter - Mutant killed!
[DEBUG] ClearTextReporter - /Users/daten/Sites/AngularStrykerDemo/app.js: line 10:27
[DEBUG] ClearTextReporter - Mutator: BinaryOperator
[DEBUG] ClearTextReporter - -                 var result = a + b;
[DEBUG] ClearTextReporter - +                 var result = a - b;
[DEBUG] ClearTextReporter -
[DEBUG] ClearTextReporter - Ran all tests for this mutant.
[DEBUG] ClearTextReporter - Mutant survived!
[DEBUG] ClearTextReporter -   /Users/daten/Sites/AngularStrykerDemo/app.js: line 15:27
[DEBUG] ClearTextReporter - Mutator: LogicalOperator
[DEBUG] ClearTextReporter - -                 var result = a && b;
[DEBUG] ClearTextReporter - +                 var result = a || b;

The first mutation that was created deleted the whole body of our add method. The second mutation exchanges a plus operator for a minus in that same method. Luckily, for both of these mutants at least one test failed, hence the Mutant killed! output.

The third mutation – switching a logical and for an or – was unfortunately not caught by our unit tests, which is indicated by the Mutant survived! output. We can fix this by adding the following test to app.spec.js (which is currently commented out):

it('should return false', function () {
  expect($ctrl.and(false, true)).toEqual(false);
});

The next time we run the mutation testing, Stryker will recognise that this additional test fails for this particular modification, and all will be good. No survivors.

To help you analyse the results of the mutation testing run, Stryker also gives you a handy summary in the form of an HTML page via the html reporter. By the way, a full list of the mutators that Stryker has built in can be found here.

Summary

I hope that I have convinced you that mutation testing is a great idea. But how practical is it to use mutation testing in a production app? Unfortunately I don’t have the answer to that. I myself have only tried it in a sample app so far. To integrate it in a productive way into a live app would be a much bigger effort and personally I am skeptical about the usefulness of doing so.

One problem that I see with using mutation testing in a large live application is the increased time needed to execute all the tests. From the diagram above we can see that the execution time of the test suite is multiplied by the number of mutations made to the code. Every unit test is now not only run once but run once against every mutated version of the code.

Another problem that I see is the amount of time you’d have to spend fixing all your surviving mutants. Ideally this would result in great test quality, and that’s what we are after. But in my own experience it is hard enough to get a team of developers to write tests for every feature anyway. Getting them to also run mutations and fix another layer of breaking tests might meet a lot of resistance. But of course every project and team is different, so I can’t speak for everyone.

However I think the concept is super interesting and can teach us a lot about how good tests are written. I really enjoyed my excursion into mutation testing and hope to someday use it in a way that goes beyond simple experiments. I’d be interested to hear any experiences using mutation testing in production if there are people out there!

How does code coverage report generation with Istanbul work?

Recently I started working on a project that uses Istanbul to measure and report on code coverage of the unit tests. I thought the tool was genius but I asked myself: how does Istanbul actually track code coverage? It seemed like magic. Today I found the time to take a high-level look at the source code of the project and I was able to shed some light on how it actually works.

Please note that this blog post is not meant to give you instructions on how to use Istanbul; I think there are plenty of those out there. Please consider this blog post a work in progress: the information in here might be up to 50 percent wrong (my own estimate). So far my inspection of the source code has been quite high level, and here I document how I THINK it works. I will try to update this blog post once I spend more time looking at some of the details of the code. I also hope that some people will find this (googling “How do Javascript code coverage reports with Istanbul work” didn’t turn up any relevant results so far) and let me know if there is something wrong with my understanding.

Let’s start with a diagram of some of the core modules within Istanbul:

Diagram of Istanbul modules.

Now this diagram is certainly an outrageous oversimplification, but it will serve us as a base for further discussion. Let’s take a closer look at those modules:

  • instrumenter: enriches the code that is to be tested with extra statements that log the coverage. For example, if you had a function and you wanted to track how many times it is called, you would insert a counter that gets incremented in the first line of the function (see the sketch after this list).
  • walker: ‘walks’ through the code in the form of a tree.
  • esprima parser: an external library that is used to parse the code which is to be tested. Esprima spits out a syntax tree that can be walked by the walker.
  • code: well this is the code that is to be covered.
  • coverage object: at some point the code is executed and through the instrumentation the execution of the code is logged. The output of this process is the coverage object. It is simply a json file with information about how many times the statements, functions and branches in the code have been executed.
  • collector: there might be several coverage objects generated for different parts or modules of the code. The collector takes these coverage objects and merges them into one coverage object that contains all the information.
  • reporter: takes the final coverage object and turns it into a human readable and condensed coverage report.
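
To make the instrumenter’s role more concrete, here is a hand-written, heavily simplified sketch of the idea. This is not Istanbul’s actual generated code (the real instrumenter derives its counters from the esprima syntax tree and does its own bookkeeping), but it shows the principle of inserted counters feeding a coverage object:

// Original code:
//   function add(a, b) {
//     return a + b;
//   }

// A conceptual, hand-simplified "instrumented" version of the same function.
var __coverage__ = {
  functions: { 'add': 0 },        // how often each function was entered
  statements: { 'add:return': 0 } // how often each statement was executed
};

function add(a, b) {
  __coverage__.functions['add']++;
  __coverage__.statements['add:return']++;
  return a + b;
}

// After the unit tests have executed this instrumented code, __coverage__
// plays the role of the coverage object that the collector merges and the
// reporter turns into a report.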

In my opinion the instrumenter is at the heart of the whole plugin. Understanding the concept of instrumentation was the entry point for me to understand how code coverage can be measured. This contains the ‘magic’ that I could not get my head around before.

It is important to note that the coverage reporting is virtually independent of the actual unit tests. The only important thing is that the code execution is monitored while the unit tests are executed, which means the unit tests need to execute the instrumented code in order for the coverage to be recorded. Before diving into the Istanbul source code I thought that the unit tests needed to integrate deeply with the coverage plugin in order for everything to work. Luckily this is not the case, which makes the whole thing less error-prone, because it’s less interdependent.

Tadaaa. This was my short overview of how code coverage with Istanbul works. As mentioned before, this is a work in progress and I hope to update it as soon as I have more time to dive deeper into the internals of Istanbul. In the meantime feel free to let me know if you find anything wrong with my explanation or if you know more about a certain area and can help me understand.

EmberCamp 2015 notes

I was lucky enough to attend the EmberCamp conference in London on the 29th of October 2015. It was the first Ember conference I had ever been to, and it featured talks by core contributors like Yehuda Katz and Matthew Beale. Here are some notes I took about the conference in the form of bullet points for easy consumption (and because I wrote them while listening to the talks…). I tried to highlight the key messages in bold.

Keynote / Yehuda Katz

  • video link
  • Don’t use Ember if you’re building an app that makes the server render new HTML when you click on a link
  • the previously mentioned type of app is probably going to be dominant on the web forever, but that’s not what Ember is built for
  • Ember release cycle works, incremental updates work
  • coming in 2016: Glimmer 2, FastBoot updates, Engines

Composable Components / Miguel Camba

  • video link
  • no component works for all use cases
  • Coefficient of reusability: cost of construction > cost of reuse
  • how to build components that are reusable?
  • think about the community first
    • value existing solutions
    • share your pain with others
    • look for help, “single heroes” die soon
  • think about the API second
    • throw away existing biases about how the API should look
    • start with the minimum
    • DDAU: prefer actions over bindings
  • reduce the amount of options (you can’t cover all use cases; but the end user can if you let him reuse your component)
  • favor composition over inheritance
  • create simple components with one focus that can be composed into more complex ones

ghost & Ember / Hannah Wolfe

  • video link
  • ghost is an open source blogging platform, non-profit
  • built with node.js & Ember = fullstack Javascript application
  • what drove the decision to rewrite the admin UI in Ember? (it was a messy Backbone.js application before)
    • the need to move faster & break less things
    • the lack of direction on how to solve problems
    • the Ember community (is very nice and helpful)
  • just migrating to Ember did not help to achieve all goals (yet), there are still a lot of challenges…
    • there is a tendency in Javascript client side applications to build overly complex things due to the use of global scope
    • the further you go into the development of an app, the more tightly coupled it gets and the harder it is to add things, the complexity curve gets steeper and steeper
    • Ember helps building a loosely coupled app; the complexity lies more in the beginning due to the steep learning curve of Ember, but then the complexity curve might actually become more shallow
  • problems with testing…
    • the things that kept breaking were not covered by the testing suite, they were due to state which got lost
    • unit testing helped, but still didn’t do the job
    • end to end testing suite was causing more and more problems and the testing suite wasn’t testing the right things
    • the majority of these tests were failing (after running for an hour and taking way too long), so eventually they were all removed
    • high quality testing suite needed
  • Ember moves really fast
    • it’s great that the framework evolves so fast, but it also leaves behind a lot of deprecated legacy code
    • it raises the question: what could have been achieved if the code didn’t have to be updated so much?
  • if you build your project with the power of open source, you’re standing “on the shoulders of giants”

The hidden Power of HTMLbars (on Scope in Ember templates) / Matthew Beale

  • video link
  • Javascript uses static/lexical scoping
  • Ember templates were moved from using dynamic scope in version 1.x to use static scope in 2.x
    • it’s easier to read and reason about and
    • makes it easier to implement with good performance (Glimmer has this incorporated)
  • as a user of templates it is hard to determine what changed the scope of a variable
  • yet it is crucial to understand how Ember templates use scope
  • some tricks are possible:
  • partial application of arguments to actions and components
  • recursively rendering components
  • Ember templates try to include the tools for scoping that you are using anyway, so you don’t have to apply hacks

Observations on Ember’s vibrant add-on community / Katie Gengler

  • video link
  • an add-on is something that extends ember-cli
  • there are currently 1920 add-ons, 1325 of them have documentation and are meant to be used by others (not only for a specific app)
  • motivation to use an add-on: “Someone must have done this before and I bet they did it better than I would”
  • the EmberObserver.com score is made up of: sustainability (number of contributors), popularity, interest, maintained (have had a release in recent time)
  • maintainers are the lifeblood of the community
  • if you are an add-on maintainer you can test your add-on against various Ember versions with ember-try

Routing / Alex Speller

  • video link
  • the router is complex and often misunderstood
  • simple things are simple, hard things are possible
  • index routes are just an initial state of a certain route, they are not just for lists! (as e.g. in rails)
  • using renderTemplate is usually a mistake, except when it’s not, e.g. when rendering a toolbar into the application template or when rendering a modal
  • authentication can be done with an extra sign-in/logged-in route which doesn’t display anything and has the path ‘/’
  • route nesting == template nesting, route nesting === UI nesting (your routes should match your UI, otherwise you will probably end up with bugs)
  • for visualizing routes you can use ember-diagonal

Ember at Intercom / Gavin Joyce

  • video link
  • fast growing company building a customer communication platform
  • huge Ember app with a high number of Controllers, Templates etc.; page load takes several seconds
  • did a three week experiment to relaunch the app in a faster and smaller version (success: 8 times less code!)
  • did a lot of small performance improvements over time
  • still on Ember version 1.11 and Ember CLI 0.1.2
  • recommended watches:

Ember CLI deploy / Aaron Chambers & Luke Melia

  • video link
  • Version 0.5 just released!
  • its architecture follows the pipeline metaphor
  • the pipeline consists of hooks which get implemented by plugins
  • major hooks are configure, build, prepare, upload
  • e.g. ember-cli-deploy-s3 plugin can be used to upload to Amazon S3 and will be called by each of the hooks at several points during the build
  • there is a huge plugin ecosystem already for ember-cli-deploy 0.5
  • there are also plugin packs, which package up a number of plugins that belong to a build process for ease of installation

Closing Keynote: JavaScript Infrastructure @ Facebook / Sebastian McKenzie

  • video link
  • Ember was one of the first communities to adopt Babel, a JavaScript compiler that transpiles JavaScript code into code that works on all browsers and minifies it
  • Babel is now a part of Ember CLI
  • Babel is implementing new ECMAScript features and participating in the discussions about new features
  • version 6.0 is more modular and lets the user make more decisions about what kind of code transformations should be applied (not opinionated)
  • version 6.0 was deployed live during the talk!

How to compile less on the fly inside a Single Page Ember app

My current Ember project is part of a so-called white-labelled product. We had to find a way to dynamically apply a different design to our app for different customers. The differences in design consist of different colors and a different logo for each customer.

Since our frontend was completely decoupled from the backend, we had some discussions about whether or not to change that. We could have had the Ember app’s index.html and app.css rendered by the backend, but we needed quick results and decided against that for now. Instead we created an endpoint in the backend where the frontend could request the properties of the custom design.

For our styles we use the preprocessor less. That way we can use variables for colors, and many css classes can incorporate the color values given by only a few top-level color variables. Since we now deliver an uncompiled less file to the client, we need to compile the less code on the fly with the custom values we get from the backend before starting the app.

How to programmatically compile less code on the Client

The less documentation on this matter is very sparse and I had to experiment a lot and tweak the code until it finally worked. That’s why I want to share my solution here.

After some experimenting I ended up with the following chain of promises:

var customDesignResourceURI = "api/customdesign",
  lessFileURI = "assets/app.less",
  defaultCssFileURI = "assets/app.css";

application.deferReadiness();

// fire requests for the less file and the design data simultaneously
Ember.RSVP.hash({
  lessFile: ajax(lessFileURI),
  designData: ajax(customDesignResourceURI)
}).then(function (results) {
  return window.less.render(results.lessFile, {
    env: "production", // in this mode, less seems to automatically cache the output css in localStorage (or the input? this is unclear)
    errorReporting: "console",
    useFileCache: true,
    sourceMap: false,
    modifyVars: { // pass the custom design vars into the less compiler
      '@color1': results.designData.color1 ? results.designData.color1 : "",
      '@color2': results.designData.color2 ? results.designData.color2 : "",
      '@logo-uri': results.designData.logo_uri ? "'" + results.designData.logo_uri + "'" : ""
    }
  });
}).then(function (output) {
  Ember.Logger.debug("Less source code successfully compiled into css.");
  return output.css;
}).catch(function (error) {
  Ember.Logger.error("There was an error receiving or compiling the source less file or receiving the company's custom design data.");
  Ember.Logger.debug("Loading default css.");
  return ajax(defaultCssFileURI);
}).then(function (css) {
  Ember.Logger.debug("Appending css to DOM.");
  appendCssToDom(css);
  application.advanceReadiness();
}).catch(function (error) {
  Ember.Logger.error("Could not load default css. This means loading any styles failed.");
  throw error;
});

What happens here is that we first request the uncustomized less file and the custom design data from the api endpoint. Both requests are done with ic-ajax and return promises, which we combine with the RSVP.hash method. On success we pass the custom design variables into the less compiler and return the result of less.render, which is again a promise. Should the compilation or the requests before it fail, we catch the error and instead request a default version of the css, which is compiled in the build step of the frontend app. The last then takes the result, be it the compiled custom css or the default css, and passes it to a method which appends it to the DOM.

Appending the compiled CSS to the DOM

The last missing piece to make it all work is the code to append the ready css to the DOM. I made this a util on its own:


import Ember from 'ember';

/**
 * This method takes a string of CSS and appends it to the DOM.
 *
 * It does this by creating a new <style> element and appending it to
 * the <head> of the page.
 *
 * The code was pretty much copied one to one from:
 * http://stackoverflow.com/questions/10274260/programmatically-editing-less-css-code-with-jquery-like-selector-syntax/31836785#31836785
 */

export default function appendCssToDom(cssString) {
  var css = document.createElement('style');
  css.type = 'text/css';
  document.getElementsByTagName('head')[0].appendChild(css);

  if (css.styleSheet) { // IE
    try {
      css.styleSheet.cssText = cssString;
    } catch (e) {
      Ember.Logger.error("Couldn't reassign styleSheet.cssText.");
    }
  } else {
    (function (node) {
      if (css.childNodes.length > 0) {
        if (css.firstChild.nodeValue !== node.nodeValue) {
          css.replaceChild(node, css.firstChild);
        }
      } else {
        css.appendChild(node);
      }
    })(document.createTextNode(cssString));
  }
}

Where to execute the code

All of the above code is currently executed within an initializer in our app. This makes sure that the css is available before anything in the app is displayed. I’m planning to move it into the model hook of the application route soon though, so that we can benefit from a loading screen being displayed while the less is still being requested and compiled. In that case we don’t need to defer the readiness of the application; instead we just return the promise chain from the hook, roughly as in the sketch below.
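
As a rough sketch of that planned refactoring (loadAndCompileLess is a hypothetical util that would wrap the promise chain from above, and the import paths are just assumptions):

// app/routes/application.js: hypothetical sketch of the model hook variant.
import Ember from 'ember';
import appendCssToDom from '../utils/append-css-to-dom';         // the util shown above
import loadAndCompileLess from '../utils/load-and-compile-less'; // hypothetical wrapper around the promise chain

export default Ember.Route.extend({
  model: function () {
    // Returning the promise keeps the application route in its loading
    // state (and shows the loading template) until the css is in the DOM.
    return loadAndCompileLess().then(function (css) {
      appendCssToDom(css);
    });
  }
});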

This approach requires injecting some additional css though, probably directly into index.html, to keep the loading screen in place while the final css is not available yet. So both approaches have their advantages.

Another thing to pay attention to here is what happens when we are testing the app. I decided to skip the whole less compilation process when a TESTING environment variable is set to true; in that case I instead append the default version of the css to index.html.

Conclusion

The reason why examples of compiling less code on the client are so sparse is probably that doing so is not actually recommended, at least not in production, mostly for the obvious reason that it is not very efficient and increases startup time a lot (bear in mind that all of this might have to be executed on a mobile device). I want to make clear that for us, too, this is only a short-term solution that will eventually be replaced by a properly designed one in which the backend will probably deliver the compiled and customized css code to the client. Until then I’m pretty happy with my solution though. It was an interesting thing to work on.

Backburner.js – A Dive into the Implementation of the Ember Run Loop

Recently in my Ember project I was stuck on the same bug for quite a while. The stack trace told me that the error was occurring somewhere in a module called Backburner. I had seen this namespace in stack traces before and always wondered what it actually was. After doing my research it turned out that this is the namespace which contains the implementation of the Ember Run Loop. The backburner.js module is the Ember Run Loop in the form of a generic library that can also be used as a plugin in non-Ember projects.

I decided to take a closer look at the implementation of Backburner, so I could better understand my error stack. The code is not very thoroughly commented, but on the other hand very readable. Documenting my findings here will help me gain a deeper understanding of the Run Loop and I hope it’ll help somebody else, too.

Overview

So let’s dive into the Backburner code, starting with a diagram to get an overview of the most important namespaces and functions.

(Diagram: backburner_internals)

Here I have tried to display the methods calling each other as a flow diagram, which does not exactly match reality but makes it easier to understand. You’ll also notice that, for simplicity, I left out most of the methods that are used to add actions to the queue(s). Instead I’m focusing here on the execution of the queued actions.

The Backburner namespace

When you call Ember.run, the first thing called inside Backburner is Backburner.run. This method fires a new Run Loop. It then calls the begin method, which instantiates a new set of queues in the form of a new DeferredActionQueues object. After that setup work is done, control is given back to the run method, which, after checking for errors, promptly calls the end method, which in turn triggers the flush method of the DeferredActionQueues object.

The DeferredActionQueues namespace

As said before, a DeferredActionQueues object holds one set of queues. In Ember there are six queues: sync, actions, routerTransitions, render, afterRender and destroy, which are executed one after another. When Backburner is used as a plugin in other applications, it can of course be instantiated with a different set of queues.

The DeferredActionQueues.flush method, which is called by the Backburner namespace when the set of queues is meant to be executed, iterates over all queues and calls again the flush method of each queue.

The Queue namespace

A Queue object represents a single queue. This queue contains an array with the actions to be executed. When DeferredActionQueues iterates over the queues in its flush method, it calls the respective flush methods of each queue. This method then iterates over all the actions stored in the queue and invokes them one after another using the invoke method of the namespace.
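
To tie the three namespaces together, here is a heavily simplified, conceptual sketch of the flow described above. It is not the real Backburner source, just my mental model of run, begin and flush expressed in code:

// Conceptual sketch only, not the actual Backburner implementation.
function Queue(name) {
  this.name = name;
  this.actions = []; // the callbacks scheduled on this queue
}
Queue.prototype.flush = function () {
  // execute all queued actions one after another ("invoke")
  this.actions.forEach(function (action) { action(); });
  this.actions = [];
};

function DeferredActionQueues(queueNames) {
  // one set of queues, e.g. sync, actions, routerTransitions, render, afterRender, destroy
  this.queues = queueNames.map(function (name) { return new Queue(name); });
}
DeferredActionQueues.prototype.flush = function () {
  // flush each queue in order
  this.queues.forEach(function (queue) { queue.flush(); });
};

var Backburner = {
  run: function (queueNames, fn) {
    var currentInstance = new DeferredActionQueues(queueNames); // "begin"
    try {
      fn(); // the user code runs and schedules actions onto the queues
    } finally {
      currentInstance.flush(); // "end" triggers the flush of all queues
    }
  }
};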

Conclusion

One thing to add is that Backburner does not only ever hold one set of queues; it can also have several instances of DeferredActionQueues on the stack. In that case the sets of queues will be executed one after another.

As we see the whole architecture is not very complicated to understand: We have a stack of sets of queues at the top level. When execution is triggered it goes one level down and the set of queues flushes its own queues one after another. Then at the level of a single queue all actions are executed one after another.

That’s it for the most important parts. Let me know if you think I forgot something important. For a full overview of the Run Loop, also check out my other blog post on this topic.

Appendix: Logging RSVP errors

This is kind of unrelated to the general topic of this blog post, but I later found out that my bug was triggered by a failing ajax request which was wrapped in an RSVP promise via ic-ajax and executed in a Run Loop. So the whole thing was less mysterious than expected. The first step towards finding this out was to add the following to my application code:

Ember.RSVP.on('error', function(reason) {
  console.assert(false, reason);
});

Only if you add this to your app.js or to an initializer will errors occurring in RSVP promises be logged to the console so that you can inspect the full stack. That was my first gotcha, and if you have errors involving promises and the Run Loop, maybe this will make life easier for you, too.

A Brief History of Time: The Ember Run Loop in short

What is the Ember run loop and how can I use it? It took me a while to understand what the Ember run loop does and how to work with it, partly because its name is very confusing… There is still a lot of room for me to dig deeper, but I felt like writing a short article about “the run loop” and giving some usage examples for those who are completely new to the subject.

What does the run loop do?

The run loop is used to batch and order the execution of tasks. Through ordering and reordering, the tasks can be executed in a way that is most efficient. This is essential for Ember to run effectively, for example to batch DOM updates, which are always costly.
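
As a small made-up illustration (component stands for any Ember object with bound properties), both changes below happen inside the same run loop, so the resulting binding and DOM work is batched instead of being applied change by change:

// Hypothetical example: two property changes in one run loop.
// The bindings sync and the DOM updates are flushed together when
// this run loop's queues are processed.
Ember.run(function () {
  component.set('firstName', 'Ada');
  component.set('lastName', 'Lovelace');
});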

The batching is done in six different queues, with each one having its own responsibility:

  • actions: the general work queue, contains scheduled tasks like promises
  • render: rendering, DOM updates
  • afterRender: tasks that are executed after all primary render tasks; good for all 3rd-party-library DOM manipulation work
  • routerTransitions: the router’s transition jobs
  • sync: binding & synchronization jobs
  • destroy: jobs for finishing teardown of objects

Contrary to what its name suggests, the run loop is actually not running at all times. Nor does one run loop exist as a singleton. Run loops are started, or fired, each time in response to a user’s action or a timer event. When all tasks in one run loop are done, the current run loop finishes and control is returned to the system.

When a user or a timer event triggers the execution of some code, Ember works its way through the code and collects all tasks in the above-mentioned set of queues. All tasks which are part of the code are registered as jobs on the queues. Jobs are basically just javascript (callback) functions. When Ember gets to the end of a piece of code that is executed in response to an event, the run loop for this event is closed and Ember starts executing those jobs one after another. Only the jobs themselves can then add more jobs to the queues; the loop is closed to any other code. If a new event triggers the execution of the next piece of code, a new run loop collects those jobs and is executed after the current run loop has finished.

How can I make use of this?

We can actively manipulate the run loop using the methods provided by the Ember.run namespace (Ember.run itself can also be called as a function in addition to its purpose as a namespace, which I find rather confusing). Here are some use cases I have collected over the last months where manipulating and/or controlling the run loop was useful or necessary.

Ember.run.later

If your app is written in Ember and you want to schedule an event to run after a certain time interval in the future, you should not use the window.setTimeout function. Use Ember.run.later instead; that way the execution lies in the hands of Ember, and events scheduled for the same time can actually be executed in the same run loop.

Ember.run.later(context, function() {
  // code here will execute within a run loop in about 500ms
}, 500);

Ember.run.scheduleOnce

As mentioned above, if you do DOM manipulation via a 3rd party library like jQuery, you should make use of the afterRender queue.

Ember.run.scheduleOnce('afterRender', this, function(){
  // jQuery or other 3rd-party-library DOM manipulation logic goes here
});

For details on this, also check out this blog post.

Ember.run.bind

Used to correctly integrate 3rd-party-library callbacks into Ember. That way event handlers can be wrapped and batched in run loops, which makes their execution more efficient:

jQuery(window).on('resize', Ember.run.bind(this, this.handleResize));

... is a short version of:

var that = this;
jQuery(window).on('resize', function(){
  Ember.run(function(){
    that.handleResize();
  });
});

Ember.run.once

When observing the @each property of a collection I noticed that, on loading the app, the observer function was executed each time a single item of the collection was loaded. This seems like too much extra work, since one execution once all the items are loaded should be sufficient when the app is started. I also noticed that the execution had a weird "off by one" problem: the observer function would be executed for the first time when the first item of the collection was not actually "filled with data" yet. This results in the problem that the last execution of the observer function, triggered when the last collection item is supposed to have loaded, also fires while the data of that item is not available yet... So, for example, when you want to update a view with the data of a collection and need to do some calculations for that, the last item is left out, because its data was not available at the moment the observer function fired. Admittedly the problem may lie deeper within my Ember app and I don't know if this is a general problem, but...

The pattern below batches the execution of the observer function so that it only gets executed once within one run loop. This reduces the number of times the function is executed when first loading the data of a collection, although it does not guarantee that it is only executed once. But the biggest win is that it also resolves my "off by one" problem:

_updateMyView: function () {
  // perform updates here...
},

updateMyView: function () {
  Ember.run.once(this, '_updateMyView');
}.observes('model.myCollection.@each'),

To get a quick overview of what all the methods offered by the Ember.run namespace do, I recommend taking a look at this high-level overview.

Ember run loop and tests

This is an important subject. Technically you should not have to call any Ember.run methods directly from your tests. In practice, this may be different. You should be well aware of the side effects you can cause when doing so, though (for example, calling Ember.run.later will break the andThen method, and using run.schedule, run.scheduleOnce or run.once at a time when no run loop is currently running will throw an error...).

On asynchronous side-effects in testing also check out this guide.

The End

That's it already. I recommend taking a look at the Ember run loop FAQs. If you want to do some further reading, check out the Ember runloop handbook and my blog post about the implementation of the Ember Run Loop.