Friday 30 September 2016

Build your “earn” strategy when developing your app with Firebase

Parul Soi
Ameeti Mishra
Scalable Acquisitions App Partnerships Specialist

Successful apps turn into successful revenue generating businesses when the right business model is built into the core app development strategy from the very beginning. Since Firebase is designed to help app developers at every part of their lifecycle, from creating high-quality apps to growing and monetizing their app traffic, let’s take a peek at what monetization concepts you could be thinking about now.

  • Figure out your ideal app monetization strategy. Monetization comes in many flavors, including in-app purchases, subscriptions, ads, and more. When thinking about in-app ads, they may or may not make sense for your app depending on what value your app provides and what motivates your audience. Make sure you experiment with different monetization options to decide which one works best for you. You may consider A/B testing different options and assessing click-through rates. Read about all your monetization options to help you make smart decisions early on when building your app.

  • Here’s how you can use AdMob to increase your revenue. AdMob is the key ingredient within Firebase that can help app developers make money. It offers insights on ad revenue performance and user engagement, while providing access to multiple ad networks to make sure the ads you plan to show your users are valuable to them.

Ready to start exploring AdMob?

Sign up for an AdMob account and link it to your Firebase project.

Share:

Thursday 29 September 2016

Become a Firebase Taskmaster! (Part 3: Wiring up your Tasks)

Doug Stevenson
Doug Stevenson
Developer Advocate

Alrighty! Thanks for joining us for part three of this blog series about the Play services Task API for Android. By now, you've seen the essentials of the API in part one, and how to select the best style of listener in part two. So, at this point, you probably have everything you need to know to make effective use of the Tasks generated by Firebase APIs. But, if you want to press into some advanced usage of Tasks, keep reading!

Put Yourself to Task

We know that some of the Firebase features for Android will do work for you and notify a Task upon completion. But, what if you want to create your own Tasks to perform threaded work? The Task API gives you the tools for this. If you want to work with the Task API without having to integrate Firebase into your app, you can get the library with a dependency in your build.gradle:

    compile 'com.google.android.gms:play-services-tasks:9.6.1'

But, if you are integrating Firebase, you'll get this library included for free, so no need to call it out specifically in that case.

There is just one method (with two variants) you can use to kick off a new Task. You can use the static method named "call" on the Tasks utility class for this. The variants are as follows:

    Task<TResult> call(Callable<TResult> callable)
    Task<TResult> call(Executor executor, Callable<TResult> callable)

Just like addOnSuccessListener(), you have a version of call() that executes the work on the main thread and another that submits the work to an Executor. You specify the work to perform inside the passed Callable. A Java Callable is similar to a Runnable, except it's parameterized by some result type, and that type becomes the return type of its call() method. This result type then becomes the type of the Task returned by Tasks.call(). Here's a really simple Callable that just returns a String:

    public class CarlyCallable implements Callable<String> {
        @Override
        public String call() throws Exception {
            return "Call me maybe";
        }
    }

Notice that CarlyCallable is parameterized by String, which means its call() method must return a String. Now, you can create a Task out of it with a single line:

    Task<String> task = Tasks.call(new CarlyCallable());

After this line executes, you can be certain that the call() method on the CarlyCallable will be invoked on the main thread, and you can add a listener to the Task to find the result (even though that result is totally predictable). More interesting Callables might actually load some data from a database or a network endpoint, and you'd want to have those blocking Callables run on an Executor using the second form of call() that accepts the Executor as the first argument.
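For instance, here's a rough sketch of what such a blocking Callable might look like (the FetchUrlCallable class and the URL it reads are purely illustrative, not part of any Firebase API), handed to the Executor-accepting form of call():

    // Hypothetical blocking Callable that reads the body of a URL as a String.
    public class FetchUrlCallable implements Callable<String> {
        private final URL url;

        public FetchUrlCallable(URL url) {
            this.url = url;
        }

        @Override
        public String call() throws Exception {
            // Blocking network I/O -- this must never run on the main thread.
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line).append('\n');
                }
                return body.toString();
            } finally {
                connection.disconnect();
            }
        }
    }

    // Submit the blocking work to a background Executor instead of the main thread.
    Executor executor = Executors.newSingleThreadExecutor();
    Task<String> fetchTask =
            Tasks.call(executor, new FetchUrlCallable(new URL("https://example.com")));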

Working on the Chain (Gang)

Let's say, for the sake of example, you want to process the String result of the CarlyCallable Task after it's been generated. Imagine that we're not so much interested in the text of the resulting String itself as in a List of the individual words in the String. But we don't necessarily want to modify CarlyCallable, because it's doing exactly what it's supposed to, and it could be used in other places as it’s written now. Instead, we'd rather encapsulate the logic that splits words into its own class, and use that after the CarlyCallable returns its String. We can do this with a Continuation. An implementation of the Continuation interface takes the output of one Task, does some processing on it, and returns a result object, not necessarily of the same type. Here's a Continuation that splits a string into a List of Strings, one per word:

    public class SeparateWays implements Continuation<String, List<String>> {
        @Override
        public List<String> then(Task<String> task) throws Exception {
            return Arrays.asList(task.getResult().split(" +"));
        }
    }

Notice that the Continuation interface being implemented here is parameterized by two types, an input type (String) and an output type (List<String>). The input and output types are used in the signature of the lone method then() to define what it's supposed to do. Of particular note is the parameter passed to then(). It's a Task, and its String type parameter must match the input type of the Continuation interface. This is how the Continuation gets its input - it pulls the finished result out of the completed Task.

Here's another Continuation that randomizes a List of Strings:

    public class AllShookUp implements Continuation<List<String>, List<String>> {
        @Override
        public List<String> then(@NonNull Task<List<String>> task) throws Exception {
            // Randomize a copy of the List, not the input List itself, since it could be immutable
            final ArrayList<String> shookUp = new ArrayList<>(task.getResult());
            Collections.shuffle(shookUp);
            return shookUp;
        }
    }

And another one that joins a List of Strings into a single space-separated String:

    private static class ComeTogether implements Continuation<List<String>, String> {
        @Override
        public String then(@NonNull Task<List<String>> task) throws Exception {
            StringBuilder sb = new StringBuilder();
            for (String word : task.getResult()) {
                if (sb.length() > 0) {
                    sb.append(' ');
                }
                sb.append(word);
            }
            return sb.toString();
        }
    }

Maybe you can see where I'm going with this! Let's tie them all together into a chain of operations that randomizes the word order of a String from a starting Task, and generates a new String with that result:

    Task<String> playlist = Tasks.call(new CarlyCallable())
            .continueWith(new SeparateWays())
            .continueWith(new AllShookUp())
            .continueWith(new ComeTogether());
    playlist.addOnSuccessListener(new OnSuccessListener<String>() {
        @Override
        public void onSuccess(String message) {
            // The final String with all the words randomized is here
        }
    });

The continueWith() method on Task returns a new Task that represents the result of the prior Task after it’s been processed by the given Continuation. So, what we’re doing here is chaining calls to continueWith() to form a pipeline of operations that culminates in a final Task that waits for each stage to complete before completing.

This chain of operations could be problematic if the stages have to deal with large Strings, so let's modify it to do all the processing on other threads so we don't block the main thread:

    Executor executor = ... // you decide!

    Task<String> playlist = Tasks.call(executor, new CarlyCallable())
            .continueWith(executor, new SeparateWays())
            .continueWith(executor, new AllShookUp())
            .continueWith(executor, new ComeTogether());
    playlist.addOnSuccessListener(executor, new OnSuccessListener<String>() {
        @Override
        public void onSuccess(String message) {
            // Do something with the output of this playlist!
        }
    });

Now, the Callable, all of the Continuations, and the final Task listener will each run on some thread determined by the Executor, freeing up the main thread to deal with UI stuff while this happens. It should be totally jank-free.

At first blush, it could seem a bit foolish to separate all these operations into different classes. You could just as easily write this as a few lines in a single method that do only what's required. So, keep in mind that this is a simplified example intended to highlight how Tasks can work for you. The benefit of chaining Tasks and Continuations (even for relatively simple functions) becomes more evident when you consider the following:

  • How might you plug in new sources of input Strings? So, what if we also had a BlondieCallable? And a PaulSimonCallable?
  • What about different kinds of processing for the input Strings, such as a YouSpinMeRound continuation that rotated the order of the Strings in a List one position to the right (like a record)?
  • What if you wanted different components of the processing pipeline to be executed on different threads?

Practically speaking, you're more likely to use Task continuations to perform a modular chain of filter, map, and reduce operations on a set of data, and to keep those units of work off the main thread if the collections can be large. But I had fun with the music theme here!

What if the Playlist Breaks?

One last thing to know about Continuations. If a runtime exception is thrown during processing at any stage along the way, that exception will normally propagate all the way down to the failure listeners on the final Task in the chain. You can check for this yourself in any Continuation by asking the input Task if it completed successfully with the isSuccessful() method. Or, you can just blindly call getResult() (as the above samples do), and if there was previously a failure, the exception will be re-thrown and automatically passed along to the next Continuation. The listeners on the final Task in the chain should always check for failure, though, if failure is an option.

So, for example, if the CarlyCallable in the above chain returned null, that would cause the SeparateWays continuation to throw a NullPointerException, which would propagate to the final Task in the chain. And if we had an OnFailureListener registered, it would get invoked with that same exception instance.
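To make that concrete, here's a minimal sketch of how you might attach such a failure listener to the end of the chain (the log tag is arbitrary):

    playlist.addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception e) {
            // Any exception thrown by the Callable or by a Continuation earlier in
            // the chain (such as the NullPointerException described above) lands here.
            Log.e("Playlist", "Task chain failed", e);
        }
    });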

Pop Quiz!

What's the most efficient way, with the above chain, of finding out the number of words in the original string, without modifying any of the processing components? Take a moment to think about it before reading on!

The answer is probably simpler than you'd imagine. The most obvious solution is to count the number of words in the final output string, since their order only got randomized. But there is one more trick. Each call to continueWith() returns a new Task instance, but those are all invisible here because we used a chaining syntax to assemble them into the final Task. So you can intercept any of those Tasks and add another listener to it, in addition to the next Continuation:

    Task<List<String>> split_task = Tasks.call(new CarlyCallable())
            .continueWith(executor, new SeparateWays());
    Task<String> playlist = split_task
            .continueWith(executor, new AllShookUp())
            .continueWith(executor, new ComeTogether());
    split_task.addOnCompleteListener(executor, new OnCompleteListener<List<String>>() {
        @Override
        public void onComplete(@NonNull Task<List<String>> task) {
            // Find the number of words just by checking the size of the List
            int size = task.getResult().size();
        }
    });
    playlist.addOnCompleteListener( /* as before... */ );

When a Task finishes, it will trigger any Continuations attached to it, as well as all of the added listeners. All we've done here is intercept the Task that captures the output of the SeparateWays continuation, and listen to the output of that directly, without affecting the chain of continuations. With this intercepted Task, we only have to call size() on the List to get the word count.

Wrapping Up (part 3 of this series)

All joking aside, the Task API makes it relatively easy for you to express and execute a sequential pipeline of processing in a modular fashion, while giving you the ability to specify which Executor is used at each stage in the process. You can do this with or without Firebase integrated into your app, using your own Tasks or those that come from Firebase APIs. For the next and final part to this series, we'll look at how Tasks can be used in parallel to kick off multiple units of work simultaneously.

As usual, if you have any questions, consider using Twitter with the #AskFirebase hashtag or the firebase-talk Google Group. We also have a dedicated Firebase Slack channel. And you can follow me @CodingDoug on Twitter to get notified of the next post in this series.

Lastly, if you're wondering about all the songs I referenced in this post, you can find them here:

Continue reading with part 4.
Share:

Tuesday 27 September 2016

Pirate Metrics: Activate Your Users With Firebase

Parul Soi
Parul Soi
Developer Relations Program Manager

This is our third post in the Pirate Metrics with Firebase series. In the first post, we gave an overview of what Pirate Metrics are and why they’re important. In the second, we showed how you can use Firebase to improve your acquisition strategy.

Once you acquire a user, your main goal is to get them to use your product. Users often install an app but never get hooked. They might keep the app around for a day or two, if you're lucky, before either forgetting about it or, worse, uninstalling it. All that effort you put into acquisition goes down the drain.

The first few days are, hence, crucial. Through your data, you want to find a pattern that determines at what point a user becomes activated, and look at ways to get more users past that point. Examples could be the number of friends added on a social networking application or the number of levels completed in a video game. Devising the right “activation strategy” always involves a lot of experimentation.

To carry out these experiments, we have just the right tool for you: Firebase Remote Config. Remote Config allows you to set key/value pairs on the server and use them to vary the experience inside your application. When you update these values in the Firebase console, the changes are reflected inside your application, allowing you to change the experience for users without releasing an update.
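Here is a minimal sketch of what fetching and reading such a value might look like on Android (the parameter name signup_flow_variant and the showSignupFlow() helper are hypothetical, purely to illustrate the flow):

    final FirebaseRemoteConfig config = FirebaseRemoteConfig.getInstance();

    config.fetch().addOnCompleteListener(new OnCompleteListener<Void>() {
        @Override
        public void onComplete(@NonNull Task<Void> task) {
            if (task.isSuccessful()) {
                // Make the freshly fetched values visible to the app.
                config.activateFetched();
            }
            // Read the server-controlled value and vary the experience accordingly.
            String variant = config.getString("signup_flow_variant");
            showSignupFlow(variant);  // hypothetical helper in your app
        }
    });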

If you use this capability of Remote Config and set values using “random percentile” targeting, you essentially have an A/B test set up. You can then see the impact in your analytics and change these values dynamically on the server, increasing the rollout for experiments that have proven to work. It makes for a great A/B testing solution.

To optimize your testing, we recommend first defining the data points you want to improve (such as an increase in users signing up on the first app open). Then, ideate on the experiments you want to run to improve these data points. These might be experiments that track the impact of different tutorials or signup methods in apps, or of different difficulty settings for initial levels in a game, that can ultimately improve your activation percentage.
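As a sketch of how you might measure one of those data points, the snippet below logs a custom Firebase Analytics event when signup completes (the event and parameter names are hypothetical):

    // Log the data point you want the experiment to move, tagged with the
    // Remote Config variant the user saw, so you can compare groups later.
    FirebaseAnalytics analytics = FirebaseAnalytics.getInstance(this);
    Bundle params = new Bundle();
    params.putString("variant", variant);  // the value read from Remote Config earlier
    analytics.logEvent("signup_completed", params);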

Share:

Wednesday 21 September 2016

HTTP/2 comes to Firebase Hosting

Michael Bleigh
Michael Bleigh
Engineer

Today we're excited to announce the availability of HTTP/2 on Firebase Hosting. HTTP/2 is a new version of the HTTP protocol that is already supported by 77% of browsers worldwide. It offers some exciting advantages for web developers:

  • Multiple requests can be sent over a single connection. With HTTP/2, it's less necessary to concatenate resources together.
  • It's a binary protocol, which means headers can be compressed and data can generally be sent more efficiently.
  • Servers can proactively "push" content to clients.

Taken together, these add up to significant performance advantages and lots of opportunity to make your web applications load faster on mobile devices with slow connections.

HTTP/2 is currently enabled for all *.firebaseapp.com traffic as well as newly-provisioned custom domains. If you already have a custom domain on Firebase, see Custom Domain Migration below.

Leveraging HTTP/2 on Firebase Hosting

To utilize HTTP/2 on Firebase Hosting, you don't have to do anything! It will automatically be served if the user's browser supports it. However, there are some best practices you should keep in mind if you want to optimize for HTTP/2:

  1. Because a single connection can be used to multiplex simultaneous requests, there is no longer an advantage to concatenate lots of resources together. Since browsers do a good job of caching resources, it's actually better to serve more files that change less often. Be aware though that by more files, we mean in the tens, as hundreds can still carry significant overhead.
  2. It is no longer necessary (or desirable) to "split" assets up between many domains. Firebase Hosting is already served over a fast CDN, and HTTP/2 makes it advantageous to serve all of your files from the same domain.
  3. Only load the assets you need! With fewer round trips, you should optimize your site to load only the files you need to bootstrap your application shell. Other resources should be loaded on-demand based on user interaction.

The above guidelines aren't hard and fast rules -- as with any performance optimization, you should experiment with different combinations of optimizations to see which ones deliver the best result for your app's specific needs.

Experimenting with Server Push

Firebase Hosting has experimental support for HTTP/2 server push using Link headers. Server push allows a server to automatically send the contents for additional resources when an initial request is made. The most common use for server push is to send down associated assets (like JavaScript or CSS files) when a page is loaded.

To configure server push on Firebase Hosting, you need to add the Link header to your firebase.json configuration like so:


{
  "hosting": {
    "headers": [
      {
        "source": "/",
        "headers": [{"key": "Link", "value": "</js/app.js>;rel=preload;as=script,</css/app.css>;rel=preload;as=style"}]
      },
      {
        "source": "/users/*",
        "headers": [{"key": "Link", "value": "</js/app.js>;rel=preload;as=script,</css/app.css>;rel=preload;as=style;nopush,</css/users.css>;rel=preload;as=style"}]
      }
    ]
  }
}

Here we are using server push to preload /js/app.js and /css/app.css on the root path, and additionally /css/users.css on any path matching /users/*. You can use the nopush directive (like on app.css in the second example) to preload the asset without HTTP/2 push for files that are likely to already be in the browser cache.

It's still early days for server push, and there are a few things to keep in mind:

  1. Be careful with wildcards in setting Link headers. Resources should never be set to preload themselves.
  2. Server push is a tradeoff between performance and bandwidth usage -- if you push assets that are already cached by the browser you'll be sending unnecessary data. Try to keep pushed assets to small, critical-to-performance assets and be aware that your users may have to pay for that extra data on their mobile devices!
  3. Preloading is great for performance even without push! If you add ;nopush to your preload Link header, it will tell the browser to preload it without server push. This is great for assets you think may already be cached in the browser.

We're excited about HTTP/2's potential to improve that first-load experience, and we're still exploring additional ways to make server push simple, reliable, and effective for your site.

Custom Domain Migration

With our migration to HTTP/2 we're also moving to Server Name Indication (SNI) for our SSL certificates. SNI enables us to continue to scale our infrastructure more reliably and is supported by nearly 98% of browsers worldwide. Because this change has the possibility of affecting user traffic, we are not automatically switching over existing custom domains until December 2016.

If you have a custom domain on Firebase Hosting from before August 11, 2016, you will need to update your DNS records to take advantage of HTTP/2. You can check if you're already on SNI by running dig <your-site>.firebaseapp.com. If you see s-sni.firebaseapp.com in the result, your site is already migrated.

To migrate if you're using a CNAME, update your DNS to point to s-sni.firebaseapp.com. If you're using A records, update them to:


151.101.1.195
151.101.65.195

Once you've changed over your DNS and it's had the chance to propagate, your site will be live with HTTP/2! We will be transitioning all Firebase Hosting traffic to HTTP/2 and SNI by the end of the year, so please reach out to support if you're worried about how SNI might affect your users.

Our goal with Firebase Hosting is to bring the best practices of Progressive Web App development within reach of everyone. HTTP/2 is another step along that path, and we're excited to see what you build with it!

Share:

Tuesday 20 September 2016

Become a Firebase Taskmaster! (Part 2: Choosing the Best Options)

Doug Stevenson
Doug Stevenson
Developer Advocate

Ohai! You've just joined us for the second part of a blog series about the Play Services Task API, which is used by some Firebase features to respond to work that its APIs perform asynchronously. Last time, we got acquainted with a Task used by the Firebase Storage API, and learned a little bit about how Tasks work in general. So, if you haven't seen that post, now's a good time to circle back to it before continuing here. In this post, we'll take a look at some of the nuances in behavior between the different variations for adding a listener to a Task to capture its result.

Last time, we saw a listener get added to a Task like this, using the Firebase Storage API:

    Task<StorageMetadata> task = forestRef.getMetadata();
    task.addOnSuccessListener(new OnSuccessListener<StorageMetadata>() {
        @Override
        public void onSuccess(StorageMetadata storageMetadata) {
            // Metadata now contains the metadata for 'images/forest.jpg'
        }
    });

In this code, addOnSuccessListener() is called with a single argument, which is an anonymous listener to invoke upon completion. With this form, the listener is invoked on the main thread, which means we can do things that can only be done on the main thread, such as update a View. It's great that the Task helps put the work back on the main thread, except there is one caveat here. If a listener is registered like this in an Activity, and it's not removed before the Activity is destroyed, there is a possibility of an Activity leak.

But I Don't Want Leaky Activities!

Right, nobody wants leaky Activities! So, what's an Activity leak, anyway? Put briefly, an Activity leak occurs when an object holds onto an Activity object reference beyond its onDestroy() lifecycle method, retaining the Activity beyond its useful lifetime. When onDestroy() is called on an Activity, you can be certain that instance is never going to be used by Android again. After onDestroy(), we want the Android runtime garbage collector to clean up that Activity, all of its Views, and other dead objects. But the garbage collector won't clean up the Activity and all of its Views if some other object is holding a strong reference to it!

Activity leaks can be a problem with Tasks, unless you take care to avoid them. In the above code (if it was inside an Activity), the anonymous listener object actually holds a strong, implicit reference to the containing Activity. This is how code inside the listener is able to make changes to the Activity and its members - the compiler silently works out the details of that. An Activity leak occurs when an in-progress Task holds on to the listener past the Activity's onDestroy(). We really don't have any guarantees at all about how long that Task will take, so the listener can be held indefinitely. And since the listener implicitly holds a reference to the Activity, the Activity can be leaked if the Task doesn't complete before onDestroy(). If lots of Tasks holding references to Activities back up over time (for example, due to a hung network), that can cause your app to run out of memory and crash. Yow. You can learn more in this video.

Back to the Task at Hand

If you’re concerned about leaking Activities (and I hope you are!), you should know that the single argument version of addOnSuccessListener() has the caveat of possibly leaking the Activity if you aren't careful to remove the listener at the right time.

It turns out there's a convenient way to do this automatically with the Task API. Let's take the above code in an Activity, and modify its call to addOnSuccessListener() slightly:

    Task<StorageMetadata> task = forestRef.getMetadata();
    task.addOnSuccessListener(this, new OnSuccessListener<StorageMetadata>() {
        @Override
        public void onSuccess(StorageMetadata storageMetadata) {
            // Metadata now contains the metadata for 'images/forest.jpg'
        }
    });

This is exactly like the previous version, except there are now two arguments to addOnSuccessListener(). The first argument is `this`, so when this code is in an Activity, that would make `this` refer to that enclosing Activity instance. When the first parameter is an Activity reference, that tells the Task API that this listener should be "scoped" to the lifecycle of the Activity. This means that the listener will be automatically removed from the task when the Activity goes through its onStop() lifecycle method. This is pretty handy because you don't have to remember to do it yourself for all the Tasks you may create while an Activity is active. However, you need to be confident that onStop() is the right place for you to stop listening. onStop() is triggered when an Activity is no longer visible, which is often OK. However, if you intend to keep tracking the Task in the next Activity (such as when an orientation change replaces the current Activity with a new one), you'll need to come up with a way to retain that knowledge in the next Activity. For some information on that, read up on saving Activity state.

Skipping the Traffic on Main St.

There are cases where you simply don't want to react to the completion of a Task on the main thread. Maybe you want to do blocking work in your listener, or you want to be able to handle different Task results concurrently (instead of sequentially). So, you'd like to avoid the main thread altogether and instead process the result on another thread you control. There's one more form of addOnSuccessListener() that can help your app with your threading. It looks like this (with abbreviated listener):

    Executor executor = ...;  // obtain some Executor instance
    Task<Void> task = FirebaseRemoteConfig.getInstance().fetch();
    task.addOnSuccessListener(executor, new OnSuccessListener<Void>() { ... });

Here, we're making a call to the Firebase Remote Config API to fetch new configuration values. Then, the returned Task from fetch() gets a call to addOnSuccessListener() and receives an Executor as the first argument. This Executor determines the thread that will be used to invoke the listener. For those of you unfamiliar with Executor, it's a core Java utility that accepts units of work and routes them to be executed on threads under its control. That could be a single thread, or a pool of threads, all waiting to do work. It's not very common for apps to use an Executor directly, and it can be seen as an advanced technique for managing the threading behavior of your app. What you should take away from this is the fact that you don't have to receive your listeners on the main thread if that doesn't suit your situation. If you do choose to use an Executor, be sure to manage it as a shared singleton, or make sure its lifecycle is managed well so you don’t leak its threads.
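If you do go this route, one common approach (just a sketch, not the only option) is to keep a single shared Executor as an application-wide singleton and hand it to your Task listeners:

    // A single shared background thread for Task listeners. Keeping it as a
    // singleton avoids repeatedly creating (and leaking) executor threads.
    public class AppExecutors {
        public static final Executor BACKGROUND = Executors.newSingleThreadExecutor();
    }

    // Hand it to the Task so the listener runs off the main thread:
    Task<Void> task = FirebaseRemoteConfig.getInstance().fetch();
    task.addOnSuccessListener(AppExecutors.BACKGROUND, new OnSuccessListener<Void>() {
        @Override
        public void onSuccess(Void result) {
            // This runs on a background thread, not the main thread.
        }
    });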

One other interesting thing to note about this code is the fact that the Task returned by Remote Config is parameterized by Void. This is the way a Task can say that it doesn't generate any object directly - Void is the Java type that indicates the absence of a value. The Remote Config API is simply using the Task as an indicator of task completion, and the caller is expected to use other Remote Config APIs to discover any new values that were fetched.

Choose Wisely, Indy!

All told, there are three varieties of addOnSuccessListener():

    Task addOnSuccessListener(OnSuccessListener listener)
    Task addOnSuccessListener(Activity activity, OnSuccessListener listener)
    Task addOnSuccessListener(Executor executor, OnSuccessListener listener)

On top of that, we have the same varieties for failure and completion listeners:

    Task addOnFailureListener(OnFailureListener listener)
    Task addOnFailureListener(Activity activity, OnFailureListener listener)
    Task addOnFailureListener(Executor executor, OnFailureListener listener)

    Task addOnCompleteListener(OnCompleteListener listener)
    Task addOnCompleteListener(Activity activity, OnCompleteListener listener)
    Task addOnCompleteListener(Executor executor, OnCompleteListener listener)

Hold on, what's an OnCompleteListener?

There's nothing too special going on with OnCompleteListener. It's just a single listener that's capable of receiving both success and failure, and you have to check for that status inside the callback. The file metadata callback from the last post could be rewritten like this, instead of giving the task separate success and failure listeners:

    Task<StorageMetadata> task = forestRef.getMetadata();
    task.addOnCompleteListener(new OnCompleteListener<StorageMetadata>() {
        @Override
        public void onComplete(Task<StorageMetadata> task) {
            if (task.isSuccessful()) {
                StorageMetadata meta = task.getResult();
                // Do something with metadata...
            } else {
                Exception e = task.getException();
                // Handle the failure...
            }
        }
    });

So, with OnCompleteListener, you can have a single listener that handles both success and failure, and you find out which one by calling isSuccessful() on the Task object passed to the callback. Practically speaking, this is functionally equivalent to registering both an OnSuccessListener and an OnFailureListener. The style you choose is mostly a matter of preference.

Wrapping Up (part 2 of this series)

Now you've seen that Tasks can receive three different kinds of listeners: success, failure, and overall completion. And, for each of those kinds of listeners, there are three ways to receive that callback: on the main thread, on the main thread scoped to an Activity, and on a thread determined by an Executor. You have some choices here, and it's up to you to choose which one suits your situation the best. However, these aren't the only ways to handle the results of your Tasks. You can create pipelines of Task results for more complex processing. Please join me for those details next time, where you can continue the journey to become a Firebase Taskmaster!

If you have any questions, consider using Twitter with the #AskFirebase hashtag or the firebase-talk Google Group. We also have a dedicated Firebase Slack channel. And you can follow me @CodingDoug on Twitter.

Continue reading with part 3.
Share:

Monday 19 September 2016

Firebase Dev Summit comes to Berlin!

Magnus Hyttsten
Magnus Hyttsten
Developer Advocate

We’re excited to announce that the registration for the Firebase Dev Summit is opening today!

Six months ago, thousands of developers joined us at Google I/O in Mountain View, CA to hear about the expansion of Firebase to become a unified app platform that helps developers build better apps and grow successful businesses. We want to share these updates with you (and maybe even a few new ones!) at the Firebase Dev Summit in Berlin, Germany. Registration is now open, but keep in mind that space will be filled on a first-come, first-served basis, so make sure to register today.

Our product managers and engineering team (including me!) will be there, and we’re excited to meet you in person and learn how we can make Firebase easier for you to develop extraordinary experiences for your users on iOS, Android, and the Web.

What is the Firebase Dev Summit?

The Firebase Dev Summit is a full-day event for app developers that will focus on how to use Firebase with your apps. The day will have a packed agenda with valuable sessions from Firebase and our partners, and is a great chance to meet developers from your local community. But, the day isn’t just about us talking to you -- we also want to see you get your hands dirty with Firebase. You’ll get a chance to put your new knowledge into practice with a hands-on workshop and codelabs that walk you through all the different features of Firebase. Firebase engineers will be on hand to help you get up and running, and answer any questions you may have.

We’re looking forward to meeting you in person. Danke!

Share:

Thursday 15 September 2016

Announcing Firebase 3.6 for iOS

Todd Kerpelman
Todd Kerpelman
Developer Advocate

Hey there, iOS Developers!

We wanted to let you know that Firebase version 3.6 is now available for iOS. This contains a number of important bug fixes and features needed for iOS 10 support, and we encourage you to run a pod update (or manually update your frameworks) and recompile your projects at your earliest convenience.

If you want to see a full list of fixes and improvements, you can review the release notes, but here's a quick summary of what's new.

New Notification Support

Firebase Cloud Messaging now has support for the new iOS 10 user notifications. If your app is running on iOS 10, you can handle incoming notifications using the userNotificationCenter:willPresentNotification:withCompletionHandler: method. And don't worry -- if your app only supports the older application:didReceiveRemoteNotification:completionHandler: methods, APNs will call those instead if it can't find the newer ones. Need more info? Refer to the updated FCM documentation.

Some Notes Around App Review Guidelines

With the iOS 10 update, Apple made a number of changes to their App Store review guidelines. The latest version of Firebase has made several changes in response to these new guidelines. Most importantly, you should no longer encounter iTunes Connect errors asking you to provide text for things like NSCalendarsUsageDescription and NSBluetoothPeripheralUsageDescription.

One consequence of following these guidelines is that we have removed the technology which up until recently gave you the ability to measure iOS Search app install ads from Safari.

For those of you who are using Firebase Invites, you will need to supply some content for NSContactsUsageDescription in your plist file. Firebase Invites uses this contact information to populate the list of friends that your user might want to send an invitation to.

Of course, this is an ongoing process. We will monitor the impact of these changes closely, and publish further updates if it ever becomes necessary.

Sign-in Workarounds

You may recall in a recent blog post that Firebase Auth was encountering errors in Xcode 8 due to it not being able to write values to the keychain in the simulator. While that issue still exists, we have developed a workaround where we use NSUserDefaults in the simulator, and continue to use the keychain on the device. This means you can now develop and test out Firebase Auth in the simulator and everything should be working again.

Bug Fixes

You found bugs; we fixed 'em! Please continue to report any issues or feature requests you might have to our online form, and we'll make sure they get handled appropriately.

And if you have any questions, you can always ask them on Stack Overflow with the Firebase tag, or send them to our Google group.

Thanks again for being a Firebase developer! Now go forth and update your apps!

Share:

Wednesday 14 September 2016

Angular, version 2: proprioception-reinforcement

Today, at a special meetup at Google HQ, we announced the final release version of Angular 2, the full-platform successor to Angular 1.

What does "final" mean? Stability that's been validated across a wide range of use cases, and a framework that's been optimized for developer productivity, small payload size, and performance. With ahead-of-time compilation and built-in lazy-loading, we’ve made sure that you can deploy the fastest, smallest applications across the browser, desktop, and mobile environments. This release also represents huge improvements to developer productivity with the Angular CLI and styleguide.

Angular 1 first solved the problem of how to develop for an emerging web. Six years later, the challenges faced by today’s application developers, and the sophistication of the devices that applications must support, have both changed immensely. With this release, and its more capable versions of the Router, Forms, and other core APIs, today you can build amazing apps for any platform. If you prefer your own approach, Angular is also modular and flexible, so you can use your favorite third-party library or write your own.

From the beginning, we built Angular in collaboration with the open source development community. We are grateful to the large number of contributors who dedicated time to submitting pull requests, issues, and repro cases, who discussed and debated design decisions, and validated (and pushed back on) our RCs. We wish we could have brought every one of you in person to our meetup so you could celebrate this milestone with us tonight!

What’s next?

Angular is now ready for the world, and we’re excited for you to join the thousands of developers already building with Angular 2.  But what’s coming next for Angular?


A few of the things you can expect in the near future from the Angular team:


  • Bug fixes and non-breaking features for APIs marked as stable
  • More guides and live examples specific to your use cases
  • More work on animations
  • Angular Material 2
  • Moving WebWorkers out of experimental
  • More features and more languages for Angular Universal
  • Even more speed and payload size improvements


Semantic Versioning

We heard loud and clear that our RC labeling was confusing. To make it easy to manage dependencies on stable Angular releases, starting today with Angular 2.0.0, we will move to semantic versioning.  Angular versioning will then follow the MAJOR.MINOR.PATCH scheme as described by semver:


  1. the MAJOR version gets incremented when incompatible API changes are made to stable APIs,
  2. the MINOR version gets incremented when backwards-compatible functionality is added,
  3. the PATCH version gets incremented when backwards-compatible bugs are fixed.


Moving Angular to semantic versioning ensures rapid access to the newest features for our component and tooling ecosystem, while preserving a consistent and reliable development environment for production applications that depend on stability between major releases, but still benefit from bug fixes and new APIs.

Contributors

Aaron Frost, Aaron (Ron) Tsui, Adam Bradley, Adil Mourahi, agpreynolds, Ajay Ambre, Alberto Santini, Alec Wiseman, Alejandro Caravaca Puchades, Alex Castillo, Alex Eagle, Alex Rickabaugh, Alex Wolfe, Alexander Bachmann, Alfonso Presa, Ali Johnson, Aliaksei Palkanau, Almero Steyn, Alyssa Nicoll, Alxandr, André Gil, Andreas Argelius, Andreas Wissel, Andrei Alecu, Andrei Tserakhau, Andrew, Andrii Nechytailov, Ansel Rosenberg, Anthony Zotti, Anton Moiseev, Artur Meyster, asukaleido, Aysegul Yonet, Aziz Abbas, Basarat Ali Syed, BeastCode, Ben Nadel, Bertrand Laporte, Blake La Pierre, Bo Guo, Bob Nystrom, Borys Semerenko, Bradley Heinz, Brandon Roberts, Brendan Wyse, Brian Clark, Brian Ford, Brian Hsu, dozingcat, Brian Yarger, Bryce Johnson, CJ Avilla, cjc343, Caitlin Potter, Cédric Exbrayat, Chirayu Krishnappa, Christian Weyer, Christoph Burgdorf, Christoph Guttandin, Christoph Hoeller, Christoffer Noring, Chuck Jazdzewski, Cindy, Ciro Nunes, Codebacca, Cody Lundquist, Cody-Nicholson, Cole R Lawrence, Constantin Gavrilete, Cory Bateman, Craig Doremus, crisbeto, Cuel, Cyril Balit, Cyrille Tuzi, Damien Cassan, Dan Grove, Dan Wahlin, Daniel Leib, Daniel Rasmuson, dapperAuteur, Daria Jung, David East, David Fuka, David Reher, David-Emmanuel Divernois, Davy Engone, Deborah Kurata, Derek Van Dyke, DevVersion, Dima Kuzmich, Dimitrios Loukadakis, Dmitriy Shekhovtsov, Dmitry Patsura, Dmitry Zamula, Dmytro Kulyk, Donald Spencer, Douglas Duteil, dozingcat, Drew Moore, Dylan Johnson, Edd Hannay, Edouard Coissy, eggers, elimach, Elliott Davis, Eric Jimenez, Eric Lee Carraway, Eric Martinez, Eric Mendes Dantas, Eric Tsang, Essam Al Joubori, Evan Martin, Fabian Raetz, Fahimnur Alam, Fatima Remtullah, Federico Caselli, Felipe Batista, Felix Itzenplitz, Felix Yan, Filip Bruun, Filipe Silva, Flavio Corpa, Florian Knop, Foxandxss, Gabe Johnson, Gabe Scholz, GabrielBico, Gautam krishna.R, Georgii Dolzhykov, Georgios Kalpakas, Gerd Jungbluth, Gerard Sans, Gion Kunz, Gonzalo Ruiz de Villa, Grégory Bataille, Günter Zöchbauer, Hank Duan, Hannah Howard, Hans Larsen, Harry Terkelsen, Harry Wolff, Henrique Limas, Henry Wong, Hiroto Fukui, Hongbo Miao, Huston Hedinger, Ian Riley, Idir Ouhab Meskine, Igor Minar, Ioannis Pinakoulakis, The Ionic Team, Isaac Park, Istvan Novak, Itay Radotzki, Ivan Gabriele, Ivey Padgett, Ivo Gabe de Wolff, J. 
Andrew Brassington, Jack Franklin, Jacob Eggers, Jacob MacDonald, Jacob Richman, Jake Garelick, James Blacklock, James Ward, Jason Choi, Jason Kurian, Jason Teplitz, Javier Ros, Jay Kan, Jay Phelps, Jay Traband, Jeff Cross, Jeff Whelpley, Jennifer Bland, jennyraj, Jeremy Attali, Jeremy Elbourn, Jeremy Wilken, Jerome Velociter, Jesper Rønn-Jensen, Jesse Palmer, Jesús Rodríguez, Jesús Rodríguez, Jimmy Gong, Joe Eames, Joel Brewer, John Arstingstall, John Jelinek IV, John Lindquist, John Papa, John-David Dalton, Jonathan Miles, Joost de Vries, Jorge Cruz, Josef Meier, Josh Brown, Josh Gerdes, Josh Kurz, Josh Olson, Josh Thomas, Joseph Perrott, Joshua Otis, Josu Guiterrez, Julian Motz, Julie Ralph, Jules Kremer, Justin DuJardin, Kai Ruhnau, Kapunahele Wong, Kara Erickson, Kathy Walrath, Keerti Parthasarathy, Kenneth Hahn, Kevin Huang, Kevin Kirsche, Kevin Merckx, Kevin Moore, Kevin Western, Konstantin Shcheglov, Kurt Hong, Levente Morva, laiso, Lina Lu, LongYinan, Lucas Mirelmann, Luka Pejovic, Lukas Ruebbelke, Marc Fisher, Marc Laval, Marcel Good, Marcy Sutton, Marcus Krahl, Marek Buko, Mark Ethan Trostler, Martin Gontovnikas, Martin Probst, Martin Staffa, Matan Lurey, Mathias Raacke, Matias Niemelä, Matt Follett, Matt Greenland, Matt Wheatley, Matteo Suppo, Matthew Hill, Matthew Schranz, Matthew Windwer, Max Sills, Maxim Salnikov, Melinda Sarnicki Bernardo, Michael Giambalvo, Michael Goderbauer, Michael Mrowetz, Michael-Rainabba Richardson, Michał Gołębiowski, Mikael Morlund, Mike Ryan, Minko Gechev, Miško Hevery, Mohamed Hegazy, Nan Schweiger, Naomi Black, Nathan Walker, The NativeScript Team, Nicholas Hydock, Nick Mann, Nick Raphael, Nick Van Dyck, Ning Xia, Olivier Chafik, Olivier Combe, Oto Dočkal, Pablo Villoslada Puigcerber, Pascal Precht, Patrice Chalin, Patrick Stapleton, Paul Gschwendtner, Pawel Kozlowski, Pengfei Yang, Pete Bacon Darwin, Pete Boere, Pete Mertz, Philip Harrison, Phillip Alexander, Phong Huynh, Polvista, Pouja, Pouria Alimirzaei, Prakal, Prayag Verma, Rado Kirov, Raul Jimenez, Razvan Moraru, Rene Weber, Rex Ye, Richard Harrington, Richard Kho, Richard Sentino, Rob Eisenberg, Rob Richardson, Rob Wormald, Robert Ferentz, Robert Messerle, Roberto Simonetti, Rodolfo Yabut, Sam Herrmann, Sam Julien, Sam Lin, Sam Rawlins, Sammy Jelin, Sander Elias, Scott Hatcher, Scott Hyndman, Scott Little, ScottSWu, Sebastian Hillig, Sebastian Müller, Sebastián Duque, Sekib Omazic, Shahar Talmi, Shai Reznik, Sharon DiOrio, Shannon Ayres, Shefali Sinha, Shlomi Assaf, Shuhei Kagawa, Sigmund Cherem, Simon Hürlimann (CyT), Simon Ramsay, Stacy Gay, Stephen Adams, Stephen Fluin, Steve Mao, Steve Schmitt, Suguru Inatomi, Tamas Csaba, Ted Sander, Tero Parviainen, Thierry Chatel, Thierry Templier, Thomas Burleson, Thomas Henley, Tim Blasi, Tim Ruffles, Timur Meyster, Tobias Bosch, Tony Childs, Tom Ingebretsen, Tom Schoener, Tommy Odom, Torgeir Helgevold, Travis Kaufman, Trotyl Yu, Tycho Grouwstra, The Typescript Team, Uli Köhler, Uri Shaked, Utsav Shah, Valter Júnior, Vamsi V, Vamsi Varikuti, Vanga Sasidhar, Veikko Karsikko, Victor Berchet, Victor Mejia, Victor Savkin, Vinci Rufus, Vijay Menon, Vikram Subramanian, Vivek Ghaisas, Vladislav Zarakovsky, Vojta Jina, Ward Bell, Wassim Chegham, Wenqian Guo, Wesley Cho, Will Ngo, William Johnson, William Welling, Wilson Mendes Neto, Wojciech Kwiatek, Yang Lin, Yegor Jbanov, Zach Bjornson, Zhicheng Wang, and many more...


With gratitude and appreciation, and anticipation to see what you'll build next, welcome to the next stage of Angular.
Share:

Tuesday 13 September 2016

Become a Firebase Taskmaster! (Part 1: The Essentials)

Doug Stevenson
Doug Stevenson
Developer Advocate

Sometimes, when using the Firebase client APIs for Android, it's required that Firebase perform some work at the request of the developer in an asynchronous fashion. Perhaps some requested data is not immediately available, or work needs to be queued for eventual execution. When we say some work must be done asynchronously in an app, that means the work needs to happen at the same time as the app performs its primary job of rendering the app’s views, but not get in the way of that work. To perform this asynchronous work correctly in Android apps, the work can't occupy time on the Android main thread, otherwise the app may delay rendering of some frames, causing "jank" in the user experience, or worse, the dreaded ANR! Typical examples of work that can cause delays are network requests, reading and writing files, and lengthy computations. In general, we call this blocking work, and we never want to block the main thread!

When a developer uses a Firebase API to request work that would normally block the main thread, the API needs to arrange that work to run on a different thread, in order to avoid jank and ANRs. Upon completion, the results of that work sometimes have to make it back to the main thread in order to safely update views.

That's what the Play services Task API is for. The goal of the Task API is to provide an easy, lightweight, and Android-aware framework for Firebase (and Play services) client APIs to perform work asynchronously. It was introduced in Play services version 9.0.0 along with Firebase. If you've been using Firebase features in your app, it's possible that you may have been using the Task API without even realizing it! So, what I'd like to do in this blog series is unpack some of the ways the Firebase APIs make use of Tasks, and discuss some patterns for advanced use.

Before we begin, it's important to know that the Task API isn't a full replacement for other threading techniques on Android. The Android team has put together some great content that describes other tools for threading, such as Services, Loaders, and Handlers. There's also a whole season of Application Performance Patterns on YouTube that discusses your options. Some developers even opt for third-party libraries that help with threading in Android apps. So, it's up to you to learn about those and determine which solution is the best for your particular threading needs. Firebase APIs uniformly use Tasks to manage threaded work, and you can use those in conjunction with other strategies as you see fit.

A Simple Task Example

If you're using Firebase Storage, you'll definitely encounter Tasks at some point. Here's a straightforward example of fetching metadata about a file that's already uploaded to Storage, taken directly from the documentation for file metadata:

    // Create a storage reference from our app
    StorageReference storageRef = storage.getReferenceFromUrl("gs://");

    // Get reference to the file
    StorageReference forestRef = storageRef.child("images/forest.jpg");

    forestRef.getMetadata().addOnSuccessListener(new OnSuccessListener<StorageMetadata>() {
        @Override
        public void onSuccess(StorageMetadata storageMetadata) {
            // Metadata now contains the metadata for 'images/forest.jpg'
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception exception) {
            // Uh-oh, an error occurred!
        }
    });

Even though we never see a "Task" anywhere in this code, there is actually a Task in play here. The last part of the above code could be rewritten equivalently like this:

    Task<StorageMetadata> task = forestRef.getMetadata();
    task.addOnSuccessListener(new OnSuccessListener<StorageMetadata>() {
        @Override
        public void onSuccess(StorageMetadata storageMetadata) {
            // Metadata now contains the metadata for 'images/forest.jpg'
        }
    });
    task.addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception exception) {
            // Uh-oh, an error occurred!
        }
    });

Ah, it looks like there was a Task hidden in that code after all!

I Promise I'll Do This!

With the sample code rewritten above, it's now more clear how a Task is being used to obtain file metadata. The getMetadata() method on the StorageReference has to assume that the file metadata is not immediately available, so it will make a network request to get a hold of it. So, in order to avoid blocking the calling thread on that network access, getMetadata() returns a Task that can be listened to for eventual success or failure. The API then arranges to perform the request on a thread it controls. The details of this threading are hidden by the API, but the returned Task is used to indicate when the results become available. The returned Task then guarantees that any added listeners will be invoked upon completion. This form of API to manage the results of asynchronous work is sometimes called a Promise in other programming environments.

Notice here that the returned Task is parameterized by the type StorageMetadata, and that's also the type of object that gets passed to onSuccess() in the OnSuccessListener. In fact, all Tasks must declare a generic type in this way to indicate the type of data they generate, and the OnSuccessListener must share that generic type. Also, when an error occurs, an Exception is passed to onFailure() in the OnFailureListener, which will probably be the specific exception that caused the failure. If you want to know more about that Exception, you may have to check its type in order to safely cast it to the expected type.

The last thing to know about this code is that the listeners will be called on the main thread. The Task API arranges for this to happen automatically. So, if you want to do something in response to the StorageMetadata becoming available that must happen on the main thread, you can do that right there in the listener method. (But remember that you still shouldn’t be doing any blocking work in that listener on the main thread!) You have some options about how these listeners work, and I'll say more in a future post about your alternatives.
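For example (a sketch; nameView is a hypothetical TextView in the enclosing Activity), it's safe to update the UI directly from the success listener because it runs on the main thread:

    forestRef.getMetadata().addOnSuccessListener(new OnSuccessListener<StorageMetadata>() {
        @Override
        public void onSuccess(StorageMetadata storageMetadata) {
            // Safe to touch views here: this callback runs on the main thread.
            nameView.setText(storageMetadata.getName());
        }
    });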

You Only Get One Shot

Some Firebase features provide other APIs that accept listeners that are not associated with Tasks. For example, if you're using Firebase Authentication, you've almost certainly registered a listener to find out when the user successfully logs in or out of your app:

    private FirebaseAuth auth = FirebaseAuth.getInstance();
    private FirebaseAuth.AuthStateListener authStateListener = new FirebaseAuth.AuthStateListener() {
        @Override
        public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
            // Welcome! Or goodbye?
        }
    };

    @Override
    protected void onStart() {
        super.onStart();
        auth.addAuthStateListener(authStateListener);
    }

    @Override
    protected void onStop() {
        super.onStop();
        auth.removeAuthStateListener(authStateListener);
    }

The FirebaseAuth client API makes two main guarantees for you here when you add a listener with addAuthStateListener(). First, it will call your listener immediately with the currently known login state for the user. Then, it will call the listener again with all subsequent changes to the user's login state, for as long as the listener is added to the FirebaseAuth object. This behavior is very different than the way Tasks work!

Tasks only call any added listener at most once, and only after the result is available. Also, the Task will invoke a listener immediately if the result was already available before that listener was added. The Task object effectively remembers the final result object and continues to deal it out to any future listeners, until it has no more listeners and is eventually garbage collected. So if you're using a Firebase API that works with listeners on something other than a Task object, be sure to understand its own behaviors and guarantees. Don't assume that all Firebase listeners behave like Task listeners!
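You can see this "at most once, possibly immediately" behavior with a tiny sketch using Tasks.forResult() (part of the same Task API), which creates a Task that is already complete:

    // This Task is complete before any listener is added.
    Task<String> done = Tasks.forResult("already finished");

    // The listener still fires -- exactly once, on the main thread -- because the
    // Task remembers its result and delivers it to listeners added later.
    done.addOnSuccessListener(new OnSuccessListener<String>() {
        @Override
        public void onSuccess(String value) {
            Log.d("Tasks", "Got: " + value);
        }
    });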

And Don’t Forget this One Important Step

Consider the active lifetime of your added Task listeners. There are two things that can go wrong if you don’t do this. First, you can cause an Activity leak if the Task continues beyond the lifetime of an Activity and its Views that are being referenced by an added listener. Second, the listener might execute when it’s no longer needed, causing wasteful work to be done, and possibly doing things that access Activity state when it’s no longer valid. The next part of this blog series will go into these issues in more detail, and how to avoid them.

Wrapping Up (part 1 of this series)

We've taken a brief look at the Play Services Task API and uncovered its (sometimes hidden!) use in some Firebase sample code. Tasks are the way that Firebase lets you respond to work that has an unknown duration and must be executed off the main thread. Tasks can also arrange for listeners to be executed back on the main thread to deal with the result of the work. However, we've only just scratched the surface of what Tasks can do for you. Next time, we'll look at the variations on Task listeners so you can decide which one best suits your use cases.

If you have any questions, consider using Twitter with the #AskFirebase hashtag or the firebase-talk Google Group. We also have a dedicated Firebase Slack channel. And you can follow me @CodingDoug on Twitter.

Continue reading with part 2.
Share:

RC7 Now Available

Today we’re happy to announce that we are shipping Angular 2.0.0-rc.7. This small release is focused on bugfixes.

What's fixed?

  • Lazy loading with webpack bundled projects
  • RxJS issues for developers using ES5
  • IDE Docs Integration - IDEs such as VS Code should now pull in the latest Angular Decorator documentation as reference
Read the full release notes
Share:

Thursday 8 September 2016

Pirate Metrics: Better Acquisition With Firebase

Parul Soi
Parul Soi
Developer Relations Program Manager

Acquisition - How do we get users?

Rarely does the “build it and they will come” motto work in today's world. Acquisition is a vast field that includes several different initiatives, such as advertising, public relations, marketing, and more.

In my last post I covered the five components of Pirate Metrics - Acquisition, Activation, Retention, Referral and Revenue - and their importance to the success of a product. In this post, I will focus on the first metric, Acquisition, and demonstrate how you can use the Firebase suite not only to track it, but also to improve it.

At Google, our key offering for acquisition over the years has been AdWords. Through an AdWords campaign, you can reach users not only in search results, but also in places like YouTube and Google Play. With the new Firebase integration for AdWords, you can turbocharge your acquisition workflow even further.

First, you can verify whether your campaigns are bringing in the right users by tracking the app-open events they fire.

Say you have created a game, and have multiple campaigns running. Through this integration, you not only know which campaigns are bringing you more users at better rates, but also which ones are providing more engaged users.

You can also attribute acquisitions from more than 30 other networks, and track the campaign performances in Firebase Analytics directly. And, as you’d expect, you can segment users acquired from these different sources into dedicated Audiences.

Additionally, you can specify which of your in-app events are important, and AdWords will automatically target the users likely to perform them. Continuing with the example of the game from before, let's assume your game has both single and multiplayer modes. Simply by letting AdWords know of the event for starting a multiplayer game, you could increase the likelihood of acquiring users who want to play multiplayer.

And, lastly, you can target the Audiences you have created in Firebase Analytics. This can be tremendously powerful for retargeting, such as offering a special power-up deal to bring back users who quit after struggling at a certain level.

The Firebase integration with AdWords helps you get the best bang for your buck. Do check out the official documentation for complete details.

Besides AdWords, another nifty tool we provide as part of Firebase is Dynamic Links. Dynamic Links lets you create a single URL to share with potential users, who are redirected to the appropriate store to download your app on either Android or iOS. You can also attach custom data to a link, and that data survives the app installation process. You could use this to considerably improve your acquisition from channels such as social media.

For example, say you want to highlight a product that is on sale in your e-commerce application. Simply create a Dynamic Link and add some information, such as a product ID, that your app can consume to deep-link straight to the product. Users who already have the app are taken directly to the product page. Those who don't are first taken to the Play Store or App Store, and can then be taken straight to the product when they open your application for the first time.
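As a rough sketch of the receiving side (the "id" parameter and the showProduct() helper are hypothetical): for users who already have the app, the deep link arrives as the data of the Intent that launched your Activity, so you can read it like this. Retrieving a link that survived a fresh install goes through the Dynamic Links APIs covered in the documentation.

    // In the Activity that handles your deep links: read the link for users
    // who already had the app installed. "id" and showProduct() are hypothetical.
    Uri deepLink = getIntent().getData();
    if (deepLink != null) {
        String productId = deepLink.getQueryParameter("id");
        showProduct(productId);  // navigate straight to the product page
    }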

We'll cover Dynamic Links in more detail in a future post, but do go ahead and check out the documentation for yourself as well.

In our next post, we will be covering Activation.

Share:

Wednesday 7 September 2016

Angular 1.6 - Expression Sandbox Removal

Important Announcement

The Angular expression sandbox will be removed from Angular as of version 1.6, making the code faster, smaller and easier to maintain.

The removal highlights a best practice for security in Angular applications: Angular templates and expressions should be treated like code, and user-provided input should never be used to generate templates or expressions.
Removing the expression sandbox does not change the security surface of Angular 1 applications. In all versions of Angular 1, your application is at risk of malicious attack if you generate Angular templates using untrusted user-provided content (even if the content is sanitized to contain no HTML). This is the case with or without the sandbox; its existence only led some developers to incorrectly believe that it protected them against such attacks.

What is the expression sandbox?

Angular puts an emphasis on the idea of separating business logic from user interface rendering. Application business logic should always reside in the code of controllers and components, where it can be unit tested effectively and more easily maintained.
Angular expressions were designed to allow a limited subset of JavaScript inside templates, supporting just the basic logic necessary to render the user interface. Angular templates encourage a clear separation of concerns between the simple logic used in rendering and the more complex logic of the application business domain.

The expression sandbox is a mechanism that checks Angular expressions to attempt to prevent accidental access to arbitrary JavaScript code and to discourage business logic from appearing in templates.

Why did we add the sandbox?

We added the sandbox to check that applications were not naively running JavaScript in their expressions since this implies that the template could be doing too much work. The aim was to provide feedback to the developer to prevent them from inadvertently designing applications that would be difficult to test and maintain.
Some time after the sandbox appeared in the codebase, a developer noticed that they could mount an XSS attack against an Angular application if they had enough access to the template to insert an arbitrary Angular expression. A number of these attacks were due to vulnerabilities in the sandbox combined with access to the template. Examples of these have been published on the web:
  • This blog post describes an attack that can be made if the template contains user-provided content.
Initially we thought that we could tighten up the sandbox to prevent these attacks. But it became apparent that this was not an adequate defense: control of the Angular templates makes applications vulnerable even when the sandbox is completely secure:
  • This blog post shows a (now closed) vulnerability in the Plunker application due to server-side rendering inside an Angular template.
  • This blog post describes an attack, which does not rely upon an expression sandbox bypass, that can be made because the sample application renders a template on the server that contains user-entered content.
The only effective security strategy is to ensure that users are never able to provide content that will be used in an Angular template or expression.

As long as Angular templates and expressions are not constructed from user-provided content, there is no possibility of attack via these methods.

Why are we removing the sandbox?

While the sandbox was not a security defense mechanism, developers kept relying upon it as a security feature even though it was always possible to access arbitrary JavaScript code if a malicious user could control the content of Angular templates in applications.

Unfortunately we continued to patch up Angular 1 whenever people found new ways to undermine the sandbox despite the fact that it was not a defense mechanism.

These patches sent the wrong message to developers that they could continue to rely upon the sandbox for security, when they could not.

Removing the sandbox has the following benefits:

  • It clarifies to developers that the Angular expression sandbox is not a security feature.
  • It simplifies the parser codebase, making it easier to maintain and extend.
  • It reduces the size of the parser codebase and therefore the size of the core Angular library distribution file (~1.3% smaller for angular.min.js.gzip).
  • It removes extra checks thus improving the speed and performance of Angular at runtime (~14% faster for complex expression parsing).
  • It provides opportunities to add further performance improvements to the parser.

You can view and comment on the Pull Request that removes the sandbox here.

What are the security implications?

If an attacker has access to control Angular templates or expressions, they can exploit an Angular application regardless of the version. There are a number of ways that templates or expressions can be controlled:

  • Generating Angular templates on the server containing user-provided content
  • Passing user-provided content in calls to these methods on a scope:
    • $watch(userContent, ...)
    • $watchGroup(userContent, ...)
    • $watchCollection(userContent, ...)
    • $eval(userContent)
    • $evalAsync(userContent)
    • $apply(userContent)
    • $applyAsync(userContent)
  • Passing user-provided content in calls to services that parse expressions:
    • $compile(userContent)
    • $parse(userContent)
    • $interpolate(userContent)
  • Passing user-provided content as the predicate parameter to the orderBy filter:
    • {{ value | orderBy : userContent }}

Each version of Angular 1 up to, but not including, 1.6 reduced the surface area of the vulnerability but never removed it.

If you dynamically generate Angular templates or expressions from user-provided content then you are at risk of XSS whatever version of Angular you are using.

If you do not generate your Angular templates or expressions from user-provided content then you are not at risk of this attack whatever version of Angular you are using.

See the Angular 1 security guide (once the PR has landed) for more information about the sandbox and possible vulnerabilities if you allow access to Angular templates.

What should I do?

If all your templates are static HTML files, or files that are generated without any user-provided input (e.g. using jade), and you are not generating any Angular expressions from user-provided content, then no action is needed and all the security measures provided by Angular are in effect.

If you dynamically generate Angular templates or expressions from user-provided content, then you should conduct a security review of your deployment and assess the risk that the user-provided content poses.

If you have to keep using the user-provided content, then the safest option is to ensure that it is only present in a part of the template made inert via the ngNonBindable directive. Be aware that you must sanitize all HTML from the user content to prevent a malicious user from closing the tag that carries the ngNonBindable directive and gaining access to the rest of the template.

Size and performance comparison

Initial benchmarks of the removal show a small file size decrease and an increase in performance of the parser:

  • Gzipped size (angular.min.js.gzip) is 734 bytes smaller (1.3% smaller)
  • The complex expression parsing benchmark completes 76ms sooner (14% faster)


Share: