2015-06-16

09:50:46, RichApps
Taking the web offline with Service Worker (Death of the dinosaur)

If you are not interested in the full story, watch the result here.

If I look at my work for parleys.com, it seems like the circle is finally closing. When I started working for parleys.com back in 2007, one of my first tasks was to build a desktop application to download presentations, so that our users were able to watch presentations on a plane, a train or wherever connectivity was an issue. Even though this application was built from the same code base as the website (Flex/AIR), they were still two separate applications. It required us to provide downloads and the user to perform an installation. We needed to care about updates and about platform compatibility issues.

Still, at that time it was a really nice use case and felt like "Yeah, that's it!".

8 years later (8 years!!!) things have changed. Who is downloading applications from the internet anymore? Yes, it still exists, but it feels somehow clumsy. The good news: today, in 2015, we have all the tools to make these kinds of desktop apps obsolete.

Let's briefly look at what we need to make a web application run offline and what options we have today.

I would divide the requirements into two groups. First, we need a way to make our application shell available offline, meaning the code that we need to actually run our application: static assets, your index.html, JS files, images. And second, we need the possibility to store the content data the user needs inside the application (in our case video, slides and course assets).

Let's first check the options for storing the data:

Local Storage

Local storage is really only meant as a cookie replacement. It is easy to use, but the options are limited, and the biggest problem, which ruled it out for our needs, is the maximum storage capacity of only around 3-10 MB. Not enough to store videos which, combined, might be several gigabytes.

IndexedDB

IndexedDB fits our needs quite well as it allows you to store large amounts of data (using quota management and user permissions) and can also store blobs. It is also quite well supported among the evergreen browsers. While in theory you can store large data, in practice there are known issues when you want to store large blobs in your DB. Basically it's not really possible to append data to an existing blob without reading the existing chunk into memory. So your only option is to have the whole file in memory and save it as one piece. Something you do not want to do with files several hundred megabytes or even gigabytes large.
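
Just for reference, storing a blob in IndexedDB looks roughly like this (a minimal sketch; the database and store names are illustrative and videoBlob stands for the complete file already held in memory):

// open (or create) the database and an object store for the videos
var request = indexedDB.open('offline-media', 1);

request.onupgradeneeded = function (e) {
  e.target.result.createObjectStore('videos');
};

request.onsuccess = function (e) {
  var db = e.target.result;
  var tx = db.transaction('videos', 'readwrite');
  // videoBlob has to be the complete file in memory; appending to an already
  // stored blob is not possible without reading it back first
  tx.objectStore('videos').put(videoBlob, 'talk-42');
  tx.oncomplete = function () { console.log('blob stored'); };
};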

Filesystem/File API

So here comes the Filesystem API. The Filesystem API is not a W3C standard as of now, but just a proposal from Google. I have not followed it closely, but I have the feeling that it might never become a standard: https://hacks.mozilla.org/2012/07/why-no-filesystem-api-in-firefox/comment-page-1/

That's really unfortunate, because it basically solves all our problems with the other APIs. As the name implies, you have read, write and create access to a sandboxed part of the local filesystem, which makes it a perfect fit if we want to store files. Used with the FileReader API you get random access to files, so you can easily manipulate them and append data efficiently. So even with it not being a standard yet, I have decided to go with this option for now. Yes, it is Chrome-only right now, and yes, it is not a standard, but you can see it as a progressive enhancement technique. Users on Chrome will benefit from the extra functionality and the others can still enjoy the web of yesterday. Also, the way I implemented it makes it easy to swap the Filesystem API for IndexedDB later (once it deals better with large files), so eventually it will work in more and more browsers.
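
To illustrate why it fits so well, here is a minimal sketch of appending a downloaded chunk to a file with the prefixed, Chrome-only Filesystem API (the file name, quota size and chunkBlob variable are assumptions, not the actual Parleys code):

// note: for PERSISTENT storage, quota first has to be granted via
// navigator.webkitPersistentStorage.requestQuota(...)
window.webkitRequestFileSystem(window.PERSISTENT, 1024 * 1024 * 1024, function (fs) {
  // open (or create) the target file in the sandboxed filesystem
  fs.root.getFile('talk-42.mp4', { create: true }, function (fileEntry) {
    fileEntry.createWriter(function (writer) {
      // jump to the end of the file and append the new chunk
      writer.seek(writer.length);
      writer.write(chunkBlob); // chunkBlob: a Blob holding the freshly downloaded chunk
    });
  });
}, function (err) {
  console.log('Filesystem error', err);
});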

Now let's examine the options we have for storing our application shell.

Application Cache

On paper, Application Cache gives you exactly what we need, and it has been around for quite a while. You can define assets in a manifest and those will be cached by the browser, but there are many challenges with App Cache. It is difficult to manage which assets to cache and when the cache should update. Check out this article for more info.
We actually had a version of Parleys for offline use in beta stage but never released it. I just never felt really confident with the App Cache solution we had. You can check out the Devoxx 2012 keynote where we demoed this app if you're interested in history.
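
For reference, an App Cache manifest is just a plain-text file referenced from the html element and listing the assets to cache (file names here are illustrative):

<html manifest="app.appcache">

CACHE MANIFEST
# v1 2015-06-16
index.html
js/main.js
css/main.css

NETWORK:
*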

Service Worker + Cache API

What sparked my interest in this web app/offline topic was the introduction of the Service Worker API last year. Service Worker gives you really fine control over network requests. You are able to intercept and transform requests, all with a pretty simple API. Together with the Cache API you have full, fine-grained control over which assets you want to store or what alternative content you want to serve in case, for example, the network is down. There are many approaches to using Service Workers, and Jake Archibald (the man behind the API himself) has a nice collection of usage patterns. The way I am using the SW API for parleys.com really does not do it justice, as I am only using it to cache the app shell, but you can go further: improve overall caching for all kinds of assets, including content assets, provide fallbacks and improve the overall responsiveness of your application.
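
To make the app shell idea concrete, here is a minimal sketch of the pattern, assuming a worker script at /sw.js and illustrative asset paths (not the actual parleys.com code):

// main.js: register the worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js: cache the app shell during install
var SHELL_CACHE = 'shell-v1';
var SHELL_ASSETS = ['/', '/index.html', '/js/main.js', '/css/main.css'];

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(SHELL_CACHE).then(function (cache) {
      return cache.addAll(SHELL_ASSETS);
    })
  );
});

// sw.js: answer from the cache first, fall back to the network
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});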

Usage on Parleys.com

For parleys.com I have used the Filesystem API to build a full-fledged download manager. Files are downloaded in chunks, and when a download is interrupted you can always pick it up again once you have connectivity, without restarting the download from scratch.
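
The resume logic boils down to asking the server for the missing byte range only. A rough sketch, assuming the server supports range requests and with illustrative parameter handling:

function downloadChunk(url, offset, chunkSize, onChunk) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  // request only the next slice of the file
  xhr.setRequestHeader('Range', 'bytes=' + offset + '-' + (offset + chunkSize - 1));
  xhr.responseType = 'blob';
  xhr.onload = function () {
    // 206 Partial Content: hand the chunk over to be appended to the local file
    if (xhr.status === 206) onChunk(xhr.response);
  };
  xhr.send();
}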

The Service Worker API is used to serve the application shell, so once the app is cached you can shut down wifi, go to parleys.com and it still works – it is really a groundbreaking change in my opinion. Well, so groundbreaking that it is also kind of an issue. Who knows about this? The user is not used to this functionality, and if he is sitting in a plane without connectivity he might not even try to load a web page, as he does not expect it to work. This is really something new where new patterns need to be established. I am not yet sure how this can be approached, but in my opinion some thought needs to be put into this. Any suggestions?

After this long introduction, let's have a look at the result:

Issues

Because we were a bit afraid that user behavior and expectations (or lack of expectations) might actually be a problem, and because of the Chrome-only issue, we also investigated other solutions and built a version which can be downloaded as a desktop client. It uses the awesome Electron project which is, like node-webkit, based on Chromium.

The nice thing is that, with the download manager in place, without changing one line of code (well, OK, it was one or two) and with the help of this great build packager, we have alternative desktop apps (Mac/Win).

Still, I am happy that we will not use them and will go with the web version instead. I think this new technology can only be pushed if we use and provide it. Then in two years offline web apps will be the norm, maybe like responsive design is today.

Related links:

Parleys Desktop Prototype 2013: https://www.youtube.com/watch?v=RtypJAykq74 (Node-Webkit based)


2015-02-27

00:48:07, ByteArray.org
This blog is no longer updated | Check out typedarray.org

This blog will remain up as an archive for all Flash/AS3 related things. I will be posting from now on on typedarray.org.

See you there!


2015-01-21

22:39:58, ByteArray.org
From microphone to .WAV to server

I blogged recently about capturing the audio from the microphone with Web Audio and saving it as a .wav file. Some people asked me about saving the file on the server. I have to admit I did not have the time to look into this at that time, but it turns out I had to write a little app this morning that does just that.

So I reused the code from the Web Audio article and just added the 2 additional pieces I needed.

  • The XHR code for the upload
  • 3 lines of PHP needed on the server.

So here we go. In the encoder, I had the following code to package our WAV file:

// we flat the left and right channels down
var leftBuffer = mergeBuffers ( leftchannel, recordingLength );
var rightBuffer = mergeBuffers ( rightchannel, recordingLength );
// we interleave both channels together
var interleaved = interleave ( leftBuffer, rightBuffer );

// we create our wav file
var buffer = new ArrayBuffer(44 + interleaved.length * 2);
var view = new DataView(buffer);

// RIFF chunk descriptor
writeUTFBytes(view, 0, 'RIFF');
view.setUint32(4, 44 + interleaved.length * 2, true);
writeUTFBytes(view, 8, 'WAVE');
// FMT sub-chunk
writeUTFBytes(view, 12, 'fmt ');
// sub-chunk size (16 for PCM)
view.setUint32(16, 16, true);
// audio format (1 = PCM)
view.setUint16(20, 1, true);
// stereo (2 channels)
view.setUint16(22, 2, true);
// sample rate
view.setUint32(24, sampleRate, true);
// byte rate (sample rate * block align)
view.setUint32(28, sampleRate * 4, true);
// block align (channel count * bytes per sample)
view.setUint16(32, 4, true);
// bits per sample
view.setUint16(34, 16, true);
// data sub-chunk
writeUTFBytes(view, 36, 'data');
view.setUint32(40, interleaved.length * 2, true);

// write the PCM samples
var lng = interleaved.length;
var index = 44;
var volume = 1;
for (var i = 0; i < lng; i++){
    view.setInt16(index, interleaved[i] * (0x7FFF * volume), true);
    index += 2;
}

// our final binary blob
var blob = new Blob ( [ view ], { type : 'audio/wav' } );

The key piece is the last line. This is our blob that we will send to the server. To do this, we use our good friend XHR:

function upload(blobOrFile) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', './upload.php', true);
  xhr.onload = function(e) {};
  // Listen to the upload progress.
  var progressBar = document.querySelector('progress');
  xhr.upload.onprogress = function(e) {
    if (e.lengthComputable) {
      progressBar.value = (e.loaded / e.total) * 100;
      progressBar.textContent = progressBar.value; // Fallback for unsupported browsers.
    }
  };

  xhr.send(blobOrFile);
}

It takes our blob as a parameter and sends it to our PHP file. Thank you Eric Bidelman for the great article about tricks with XHR on HTML5Rocks.com, the function is literally a copy/paste from there.

And then all you need are these 3 lines of PHP code. Simple, easy.

$fp = fopen( 'savedfile.wav', 'wb' );
fwrite( $fp, $GLOBALS[ 'HTTP_RAW_POST_DATA' ] );
fclose( $fp );

And that's it. Voilà!


2014-09-08

18:41:02, ByteArray.org
Intel 8080 CPU emulation in JavaScript

This weekend, I wanted to try a real-world project to play more with TypeScript. Why TypeScript? Because I wanted to leverage a few ES6 features but also type checking. Note that I did not use strong typing, but just relied on the inference of types provided by the TypeScript compiler.

A few years ago, I wrote an Intel 8080 CPU emulator in ActionScript 3 and thought this would be a great fit for a TypeScript exercise. For context, the Intel 8080 is a 2 MHz 8-bit CPU. Through a Uint8Array (typed array), we can read each instruction (byte per byte) coming from a ROM and fully emulate the CPU.

So which game could we run to test the CPU? The Intel 8080 CPU was used inside the famous Space Invaders arcade machine, so using the original Space Invaders ROM, we can emulate the whole arcade system entirely in JavaScript (CPU/RAM/Input/Screen). Check the different files on the GitHub repo. The CPU is the most important part, but I recommend you guys check the other pieces; it is really fun to see how things work and how hardware is emulated.

The tricky thing is that, because of the lack of a byte type (ActionScript 3 has the same limitation), CPU registers (which are originally 8-bit) use the Number type, which is 64-bit, so each register needs to be masked constantly (register & 0xFF) to avoid overflow.
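
A simplified sketch of the idea (the names and the single opcode shown are illustrative, not the actual emulator source):

var rom = new Uint8Array(romBuffer); // romBuffer: an ArrayBuffer holding the ROM
var pc = 0; // program counter
var a = 0;  // 8-bit accumulator, stored in a 64-bit Number

function step() {
  var opcode = rom[pc++]; // fetch one instruction byte
  switch (opcode) {
    case 0x3C: // INR A: increment the accumulator
      a = (a + 1) & 0xFF; // mask back to 8 bits to emulate overflow
      break;
    // ... the remaining opcodes
  }
}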

The game is playable here. It runs nicely on most browsers on desktop; it even runs nicely on mobile, except on UIWebView-based browsers, where the lack of JIT seriously impacts performance.

(Space Invaders art by Alfimov)


2014-09-07

18:03:18, ByteArray.org
A JavaScript refresh

We will cover here some of the key concepts of JavaScript to get us started. If you have not checked JavaScript for the past few years or if you are new to JavaScript, I hope you find this useful.

We will start by covering the language basics like variables, functions, scope, and the different types, but we will not spend much time on the absolute basics like operators, or what is a function or a variable, you probably already know all that as a developer. We will discover JavaScript by going through simple examples and for each of these, highlight specific behaviors and approach the language from an interactive developer standpoint, coming from other technologies like Flash (ActionScript 3), Java, C# or simply native (C++).

Like other managed languages, JavaScript runs inside a JavaScript VM (Virtual Machine). One key difference to note is that, unlike VMs executing bytecode, JavaScript VMs are source based, translating JavaScript source code directly to native code by using what is called a JIT (Just in Time) compiler when available. The JIT performs optimization at runtime (just in time) to leverage platform-specific optimizations depending on the architecture the code is being run on. Of course, most browsers available today run JavaScript; the most popular JavaScript VMs in the industry today are V8 (Chrome), SpiderMonkey (Firefox), JavaScriptCore (Safari) and Chakra (Internet Explorer).

JavaScript provides some serious advantages over low-level languages, like automatic memory allocation and collection through a garbage collector. This, however, comes at the cost of speed. But managed languages provide so much value in terms of productivity and platform reach that developers today tend to favor them over low-level languages, despite the loss of performance, given the higher cost of low-level languages when it comes to targeting multiple platforms.

Before we get started, it is important to differentiate how browsers work. On one side we have the core language, JavaScript, and on the other side we have the browser APIs. Historically, JavaScript and the DOM used to be tightly coupled and most tutorials would cover both at the same time, but they evolved into strong separate entities. So we will cover JavaScript first, then deep dive into the DOM and browser APIs later on. Everything in this first article could run in a shell without any interaction with the browser APIs, just pure core JavaScript. The list below represents the general-purpose core objects defined in JavaScript:

  • Array
  • Boolean
  • Date
  • Function
  • Number
  • Object
  • RegExp
  • String
  • Error

For a complete list of the global objects, check the Global Objects page from Mozilla. Other objects you might have seen in JavaScript, like the Window, CanvasRenderingContext2D or XMLHttpRequest objects, have nothing to do with the JavaScript language itself and are just objects to leverage specific browser capabilities, like network access, audio, rendering and more. We will cover all of these APIs in future articles.

Versions

JavaScript today is implemented across browsers following the ECMAScript specification. As of today, five editions of ECMA-262 have been published. Harmony, the latest revision, is a work in progress:

Edition 1: June 1997
Edition 2: June 1998
Edition 3: December 1999
Edition 4: Abandoned
Edition 5: December 2009
Edition 5.1: June 2011
Edition 6 (Harmony): In progress

Most browsers today support ECMAScript 5.1, the table below illustrates the conformance test suite used:

Chrome 24.0.1312.57 m: ES5.1 (2012-12-17)
Firefox 19: ES5.1 (2013-02-07)
Internet Explorer 10.0 (10.0.9200.16384): ES5.1 (2012-12-17)
Maxthon 3.4.2.3000: ES5.1 (2012-08-26)
Opera 12.14 (build 1738): ES5.1 (2013-02-07)
Safari 6.0.2 (8536.26.17): ES5.1 (2012-12-17)

There is also separate versioning for JavaScript with some additional features not necessarily implemented in ECMAScript. The latest JavaScript version is 1.8. The table below illustrates the correspondence between JavaScript and ECMAScript versions:

JavaScript 1.1: ECMA-262 Edition 1 is based on JavaScript 1.1.
JavaScript 1.2: ECMA-262 was not complete when JavaScript 1.2 was released. JavaScript 1.2 is not fully compatible with ECMA-262 Edition 1.
JavaScript 1.3: JavaScript 1.3 is fully compatible with ECMA-262 Edition 1. It resolved the inconsistencies that JavaScript 1.2 had with ECMA-262, while keeping all the additional features of JavaScript 1.2 except == and !=, which were changed to conform with ECMA-262.
JavaScript 1.4: JavaScript 1.4 is fully compatible with ECMA-262 Edition 1. The third version of the ECMAScript specification was not finalized when JavaScript 1.4 was released.
JavaScript 1.5: JavaScript 1.5 is fully compatible with ECMA-262 Edition 3.

Source (Mozilla)

Throughout this article we will be sticking to the ECMAScript 5.1 feature set, which is the latest revision implemented by all major browsers. From time to time we will have a quick look at some specific features outside the ECMAScript scope just for general information. When such features are covered it will be explicitly mentioned.

Assembly language of the web

In the past few years, more and more languages have been targeting JavaScript. Recently, projects like Emscripten have proved that it is even possible to take native C/C++ code and compile it to JavaScript. Some people have mentioned the idea of JavaScript being the assembly language of the web. It is indeed an interesting analogy. Languages like TypeScript or Dart have demonstrated this too by cross-compiling into JavaScript.

TypeScript is an implementation of ECMAScript 6 with optional strong typing, whereas Dart is a more aggressive approach: a different language that would ideally run in a Dart VM in Chrome. Recently, efforts like asm.js from Mozilla have pushed the idea even further by proposing a low-level subset of JavaScript that compilers can target more efficiently. Some other initiatives like CoffeeScript (often called transpilers) help developers by exposing syntactic sugar not necessarily available in JavaScript today.

No Compiling

As we just mentioned, one of the beauties of JavaScript is that you do not have to pre-compile your code to get it to run. Your source code is loaded directly, then compiled to native code at runtime by the VM using a JIT. As a developer writing JavaScript code, probably coming from a compiled language like C#, Java or ActionScript, you have to remember at all times that there will not be any optimizations done ahead of time by a static compiler. Everything will be figured out at runtime when you hit refresh.

VMs like V8 introduced optimizing compilers like Crankshaft which perform key optimizations at runtime, like constant folding, inlining, or loop-invariant code motion, that will help with performance. However, always keep in mind what you can do in your code to help with performance too. Throughout this article, we will cover key optimizations you can rely on to help your code perform better.
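
As a simple illustration of the kind of optimization you can also do by hand, here is a loop invariant hoisted out of the loop body (a sketch of mine, not tied to any particular codebase):

function scale(values, factor) {
  var length = values.length; // hoisted: read once instead of on every iteration
  var result = new Array(length);
  for (var i = 0; i < length; i++) {
    result[i] = values[i] * factor;
  }
  return result;
}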

Memo

  • JavaScript is based on the ECMA-262 standard.
  • The latest revision implemented by most browsers is ECMAScript 5.1.
  • The latest revision of the ECMAScript specification is revision 6 called Harmony.
  • JavaScript does not rely on a static compiler.
  • JavaScript source code is directly compiled to native code by the JIT (a component of the virtual machine).
  • Other languages like TypeScript, Dart or CoffeeScript and more are targeting JavaScript too.
Tools

Feel free to use any text editor you want to write your JavaScript code. Examples in this article will use the console from Chrome for small code samples to test things quickly. For bigger projects, WebStorm will be used. You can follow along using these tools or use your own; any popular JavaScript editor will do.

Let's get started and write some code now.

REPL

As a developer reading this, you will probably want to test small things quickly and iteratively. Traditionally, when using bytecode-based languages, you would type your code, hit compile, bytecode would be generated and then executed. For every modification, you would change the source code, hit compile again and observe the changes. With JavaScript, you can use a REPL and have a way more natural and flexible way to test things, compiled on the fly. So what does REPL stand for? It stands for read-eval-print loop. Some bytecode-based languages like C# and F# offer a similar functionality to easily test pieces of your code, where you can type in a command line and evaluate pieces of your code quickly and naturally.

With Chrome Developer Tools or any other console available like Firebug with Firefox, you can use the console and just start typing some code. The figure below shows the Chrome console:

[Figure: the Chrome console]

When pressing enter, your code is injected and executed. In our first example, when declaring a variable, the console will just return undefined as variable declarations return no value. Just referencing the variable we declared will automatically return its value:

[Figure: declaring a variable and referencing its value in the console]

If we want to retrieve the string length, we also get auto-completion directly from the console:

[Figure: auto-completion in the console]

In the figure below, we retrieve the string length:

[Figure: retrieving the string length]

This provides a very nice way to quickly test pieces of your code. Note that in the Chrome Developer Tools console, for multiline entries you will need to use Shift+Enter, since Enter triggers code execution. In the next figure, we define a foo function then execute it:

[Figure: defining and executing a foo function in the console]
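
Since the original screenshots are not reproduced here, a console session along those lines would look roughly like this:

> var name = 'Bob';
undefined
> name
"Bob"
> name.length
3
> function foo() { return 'Hello'; }
undefined
> foo();
"Hello"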

Note that with Firebug in Firefox, the console supports multiline code in an expanded editor view, where Enter creates a new line and Shift+Enter executes it. Firefox also provides Scratchpad as part of its developer tools. In Scratchpad, multi-line JavaScript code can be entered and tested interactively by just selecting the lines needed and pressing Shift+F4 (Mac OS) or Ctrl+R (Windows). The figure below shows the Scratchpad window and the console displaying the result:

[Figure: the Scratchpad window and the console displaying the result]

Memo

  • The JavaScript console allows you to test code interactively inside the browser.
  • Anything defined on the page can be overridden or added through the console.
  • This interactive mode is called a read-eval-print loop, known as REPL.
  • Scratchpad in Firefox offers a great REPL workflow.
Getting started

If you come from other languages like C#, C++ or ActionScript, you may be used to installing lots of tools, compilers and debuggers. One of the very cool things with JavaScript is that all you really need is a text editor and a browser. When embedded in HTML, JavaScript code can either be inlined inside a script tag:

<script>

// some javascript code

</script>

Or referenced and placed in external .js files, which is a better practice:

<script src="js/main.js"></script>

By default, our JavaScript code will be executed synchronously in the order the browser parses the page, from top to bottom. As a result, placing your JavaScript code at the beginning of the page is discouraged; this would cause the browser to wait until the script is executed to start displaying anything on the page. For now, we will stick to a general good practice and place our code just before the end of the body tag:

...

<script src="js/main.js"></script>

</body>

That way, content on the page will be displayed first, and once the display list (DOM) is loaded, our code will be executed. This will also ensure that our scripts have access to the DOM and that all objects can be scripted. We will be spending some time on the order of execution of JavaScript in a future article about the DOM. HTML5 introduced some interesting new capabilities in terms of sequencing and order of JavaScript execution that we will cover too.

A dynamically typed language

JavaScript is a dynamically typed language, meaning that any variable can hold any data type. If you are coming from a statically typed language, this may sound scary to you. In JavaScript there is no static typing and there probably never will be. Sadly, it is often believed that explicit typing of variables is required to get type checking. It is actually not necessarily required: types can be inferred (type inference), propagated everywhere, and provide solid code completion and type checking even with languages like JavaScript. TypeScript from Microsoft is a good example of this.

Given the absence of a typing system, JavaScript will never enforce the type of a variable, and you cannot rely on declared types to perform conversions implicitly. Remember, JavaScript does not rely on a static compiler; the VM directly takes the source code and compiles it to native code at runtime using the JIT. To create an array in JavaScript, you could use the new keyword with the Array function constructor:

var scores = new Array();

Or simply (using the literal syntax):

var scores = [];

Note that no types are specified. At runtime the scores variable will be evaluated as an array:

var scores = [];

// outputs: true
console.log ( scores instanceof Array );

Because types cannot be enforced, variables can hold any type at anytime:

// store an array
var scores = [];

// store a number
scores = 5;

// store an object
scores = {};

// store a string
scores = "Hello";

If we try to call an undefined API, no errors will be captured at compile time given that there is no compilation happening statically:

var scores = [];

scores.foo();

Because the foo method is not available on Array, we will get a runtime exception:

Uncaught TypeError: Object has no method 'foo'

Same thing for even the simplest object:

var myObject = {};

myObject.foo();

Which would trigger the following runtime exception:

Uncaught TypeError: Object has no method 'foo'

Browsers report errors like this one through the JavaScript console, and usually include the line that triggered the exception. The exception was uncaught in the example, but we can update the code to catch it using a try catch clause:

var scores = [];

try {
 scores.foo();

} catch (e) {
 console.log ('API not available!');
}

We will get back to error handling soon, but for now let's move on to some more essential concepts like variable declaration.

Memo

  • JavaScript is a dynamically typed language.
  • No typing is required, types are evaluated at runtime.
  • Types cannot be enforced.
  • Therefore, a variable can hold any type at anytime.
  • If an object does not have an API available, the error will be triggered at runtime.
Variables and scope

Variables are declared using the var keyword:

var score = 12;

But omitting the var keyword will also work and declare the variable as global:

// declare a global variable
score = 12;

As you can imagine, this is not recommended and you should always use var when declaring variables. So why is that? The var keyword actually dictates the scope. In the code below we use a local variable inside the foo function, making it inaccessible from outside:

function foo() {
    // declare the variable locally
 var score = 12;

 console.log ( score );
}

function foo2() {
 console.log ( score );
}

// triggers: Uncaught ReferenceError: score is not defined
foo2();

Notice that when running the code above, the error is caught at runtime; remember that there is no static compiler involved which would catch this error ahead of time. Omitting the var keyword would make the variable global to all functions:

function foo() {
 // define the variable globally
 score = 12;

 console.log ( score );
}

function foo2() {
 console.log ( score );
}

// outputs: 12
foo();

// outputs: 12
foo2();

Another important behavior when working with variables is hoisting. This behavior allows you to reference a variable before it is defined. Trying to reference a nonexistent variable will trigger an exception:

// triggers: Uncaught ReferenceError: a is not defined
console.log ( a );

But, referencing a variable declared later with var works, returning its default, unset value undefined:

// outputs: undefined
console.log ( a );

// variable a declared later
var a = 'Hello';

What happens behind the scenes is that all the variable declarations are moved to the top of the enclosing context and declared first, but initialization happens where the variables are defined by our code. We will see in an upcoming section that the same behavior applies to functions too. JavaScript 1.5 introduced the concept of constants that we also have in some other languages. Constants are very important and should actually be the default in most of your programs.

Mutability is a common source of bugs. Some languages like functional programming languages rely on immutability by default. Using the keyword const will guarantee your value cannot be changed after initialization. In the code below, we define a constant named LIMIT:

// define a constant
const LIMIT = 512;

// outputs: 512
console.log ( LIMIT );

Note that our constant is uppercase, which is a best practice to easily spot immutability. If you try to change the value at runtime, the original value is preserved:

// define a constant
const LIMIT = 512;

// outputs: 512
console.log ( LIMIT );

// try to overwrite
LIMIT = 45;

// outputs: 512
console.log ( LIMIT );

You may be surprised that no runtime exception is triggered. Actually, some browsers do trigger one, like Firefox since version 13. Unfortunately, as of today, the const keyword is supported in Firefox and Chrome but not in Safari, or IE 9 and 10, which dramatically reduces the reach of this feature. As a result, the const keyword should not be used if you intend to reach a broad audience and a wide variety of browsers. ECMAScript 6 defines const, but with different semantics, similar to variables declared with the let statement: constants declared with const will be block scoped (a concept we will cover in the Functions section).

Memo

  • The var keyword defines the scope.
  • Omitting the var keyword will make the variable global.
  • It is always recommended to use local variables inside functions to prevent conflicts and the introduction of state.
  • The const keyword introduced in JavaScript 1.5 provides immutability but is not widely supported yet.
  • Constants are part of the Harmony proposal (ECMAScript 6).
Type conversions

As we saw earlier, because of JavaScript's dynamic nature, types cannot be enforced. This can be a limitation for debugging: because any variable can basically hold any type, you may be taken by surprise when, for instance, no runtime exception is triggered and an implicit runtime conversion happens instead. JavaScript actually performs implicit type conversions at runtime on different occasions. First, when using numeric and string values with the + operator, the String type has precedence and concatenation is always performed:

// gives: "3hello";
var a = 3 + "hello";

// gives: "hellotrue"
var b = "hello" + true;

When using other arithmetic operators, the Number type has precedence:

// outputs: 9
var a = 10 - "1";

// outputs: 20
var b = 10 * "2";

// outputs: 5
var c = 10 / "2";

Implicit conversions to Number will also happen when using the == or != operators with the Number, String and Boolean types:

// outputs: true
console.log ("1" == 1); // equals to 1 == 1

// outputs: true
console.log ("1" == true); // equals to 1 == 1

// outputs: false
console.log ("1" != 1); // equals to 1 != 1

// outputs: false
console.log ("1" != true); // equals to 1 != 1

// outputs: false
console.log ("true" == true); // equals to NaN == 1

To avoid implicit conversions and verify that both types and values are equal, you can rely on the strict equality (===) or strict inequality (!==) operators, which will never perform automatic conversion implicitly:

// outputs: false
console.log ("1" === 1);

// outputs: false
console.log ("1" === true);

// outputs: true
console.log ("1" !== 1);

// outputs: true
console.log ("1" !== true);

It is therefore a good practice to use the strict operators to reduce the risk of ambiguity. If we need to convert data explicitly, we can use the appropriate type conversion functions:

// convert a string to a number
var a = Number ("3");

// convert a boolean to a number
var b = Number (true);

// tries to convert a non numeric string to a number
var c = Number ("Hello");

// outputs: 3 1 "NaN"
console.log ( a, b, c );

In the same way, converting a string to a number can be done using the parseInt and parseFloat functions:

// convert a string to an integer
var a = parseInt ( "4 chicken" );

// convert a string to a floating-point number
var b = parseFloat ( "1.5 pint" );

// outputs: 4 1.5
console.log ( a, b );

We saw earlier that JavaScript can throw runtime exceptions, so let's spend a few minutes on this now.

Memo

  • Implicit conversion to String is performed when using the + operator.
  • Implicit conversion to Number is performed when using other arithmetic operators.
  • Implicit conversion to Number is performed when using the equality and inequality operators with the Number, Boolean and String types.
  • To avoid implicit conversions, it is recommended to use the strict equality and inequality operators.
  • Explicit conversion can be done using the proper conversion functions.
Runtime exceptions

In all projects we have to deal with runtime exceptions. In JavaScript, as we saw earlier, these can be triggered by the runtime itself or explicitly by our code. For example, if you try to call a method not available on an object, this will trigger a runtime exception:

var scores = [];

// triggers: Uncaught TypeError: Object has no method 'foo'
scores.foo();

At any time, if we need to throw an exception ourselves, we can use the throw keyword with an Error object:

throw new Error ('Oops, there is a problem');

As expected, a runtime exception needs to be handled otherwise the console will output the following message:

Uncaught Error: Oops, there is a problem

To handle errors, we can use the try catch statement. The message property of a thrown Error contains the error message:

try {
 throw new Error ('Oops, there is a problem');

} catch ( e ) {
 // outputs: Oops, there is a problem caught!
 console.log ( e.message + ' caught!');
}

If we need some logic to be executed whether or not an error is thrown after code has executed in the try block, we use the finally statement:

try {
 throw new Error ('Oops, there is a problem');

} catch ( e ) {
 // outputs: Oops, there is a problem caught!
 console.log ( e.message + ' caught!');

} finally {
 // outputs: Code triggered at all times
 console.log ( 'Code triggered at all times' );
}

Note that conditional catch cannot be done the same way as in languages like ActionScript or C#, by placing the appropriate type in the catch block to redirect the exception automatically. In JavaScript we use a single catch block and have the appropriate type test inside that same block:

try {
 throw new Error ('Oops, there is a problem');

} catch ( e ) {
 if ( e instanceof BufferError ) {
 // handle buffer error

 } else if ( e instanceof ParseError ) {
 // handle parse error
 }
} finally {
 // outputs: Code triggered at all times
 console.log ( 'Code triggered at all times' );
}

Note that Mozilla's JavaScript implementation also details the ability to inline the if condition directly inside the catch block:

try {
 throw new Error ('Oops, there is a problem');

} catch ( e if e instanceof BufferError ) {
 // handle buffer error

} catch ( e if e instanceof ParseError ) {
 // handle parsing error

} finally {
 // outputs: Code triggered at all times
 console.log ( 'Code triggered at all times' );
}

Unfortunately, this feature is not part of the ECMAScript specification and will not work in most browsers except Firefox, which again has excellent support for the latest JavaScript features. You can rely on Firefox for testing this feature, but don't rely on it for a real project. What about performance? In JavaScript, exception handling does not have much impact on performance either, except if you use try catch inside a function. So make sure you don't do this:

function test() {
 try {
  var s = 0;
  for (var i = 0; i < 10000; i++) s = i;
  return s;
 } catch ( e ) {};
}

But instead, move the try catch outside of the function:

function test() {
 var s = 0;
 for (var i = 0; i < 10000; i++) s = i;
 return s;
}

try {
 test();
} catch ( e ) {};

Next, let's have a look at the different types of data we will be working with in JavaScript: composite and primitive data types.

Memo

  • Runtime exceptions can be triggered from user code or by the runtime.
  • Conditional catch cannot be done implicitly.
  • Conditional catch has to be done explicitly using an if statement.
Primitive and composite data types

JavaScript defines six data types, and just like in most languages, you can divide these in two categories:

  • Primitive
    • Number
    • String
    • Boolean
    • Null
    • Undefined
  • Composite
    • Object

As expected, primitives are copied by value:

var a = "Sebastian";

var b = "Tinic";

var c = a;

a = "Chris";

// outputs: Sebastian
console.log ( c );

Composite types (objects) are everything else, like a Window object, a RegExp, a function, etc., and are passed by reference. The example below illustrates the idea:

// create an Array
var a = ['Sebastian', 'Alex', 'Jason'];

// create an Array
var b = ['Sebastian', 'Alex', 'Jason'];

// outputs: false
console.log ( a == b );

Even though the two arrays contain the exact same values, we are actually comparing two different pointers here, not two similar values. In the code below, we illustrate this differently:

// create an Array
var a = ['Sebastian', 'Alex', 'Jason'];

// pass by reference (nothing is copied here)
// b points now to a
var b = a;

// modifying b modifies a
b[1] = 'Scott';

// outputs: ["Sebastian", "Scott", "Jason"]
console.log ( a );

Before jumping into the specific behaviors of JavaScript, let's have a look at the Boolean type now.

Memo

  • There are six data types in total in JavaScript.
  • Primitives are number, string, boolean, null and undefined.
  • Composite (objects) are everything else.
  • Primitives are passed by value, whereas composite types are passed by reference.
Boolean

The concept of Booleans is probably the easiest part of any language. However, it is worth noting a few things when it comes to JavaScript. The code below highlights how primitives behave when tested as Booleans:

var a = true;

var b = "true";

var c = 1;

var d = false;

// outputs: true
console.log ( a == true );

// outputs: false
console.log ( b == true );

// outputs: true
console.log ( c == true );

// outputs: false
console.log ( d == true );

Did you notice the implicit conversion performed here? Like in most languages, you can convert anything to a Boolean by using the Boolean conversion function or the Boolean function constructor:

var a = new Boolean ( true );

var b = new Boolean ( "true" );

var c = new Boolean ( "false" );

var d = new Boolean ( 1 );

var e = new Boolean ( false );

var f = new Boolean ( undefined );

var g = new Boolean ( null );

// outputs: true
console.log ( a == true );

// outputs: true
console.log ( b == true );

// outputs: true 
console.log ( c == true );

// outputs: true
console.log ( d == true );

// outputs: false
console.log ( e == true );

// outputs: false
console.log ( f == true );

// outputs: false
console.log ( g == true );

Note that we are using the Boolean constructor here, producing Boolean objects, not the Boolean conversion function, which produces primitives. Finally, any type placed inside a conditional expression will be converted to Boolean. In the code below, our test succeeds and displays the message in the console:

// the non-empty string "false" evaluates to true in a conditional
if ( "false" ) {
 console.log ("This will be triggered!")
}

Memo

  • Always remember about implicit conversions.
  • Any type placed inside a conditional expression will be converted to Boolean.
Number

In JavaScript, there is no such concept as int, uint or float: everything is a number. The number type is used for all numeric values and is represented in memory using the 64-bit floating-point format:

var a = 4;

var b = 6.9;

var c = -5;

// outputs: number number number
console.log ( typeof a, typeof b, typeof c);

Division by zero, overflow and underflow do not trigger exceptions; they just happen silently:

// division by zero
var a = 0 / 0;

// number too big
var b = Number.POSITIVE_INFINITY;

// number too small (negative overflow)
var c = Number.NEGATIVE_INFINITY;

// outputs: NaN Infinity -Infinity
console.log ( a, b, c );

It is worth noting that NaN cannot be compared to NaN:

// outputs: false
console.log (Number.NaN == Number.NaN);

But the isNaN function can be used:

// outputs: true
console.log ( isNaN (Number.NaN) );

It is now time to talk about one of the most important core objects in JavaScript: the Object type.

Memo

  • At the core, we need to distinguish between primitive and composite data types.
  • Number is used for all numeric values.
  • Every number is represented in memory using the 64-bit floating-point format.
Object and properties

To create a simple object, we rely on the Object type. Object is in fact pretty much the core of everything in JavaScript; we will come back to this in a few minutes. It is sometimes useful to quickly define an object that holds a few properties. Note that both syntaxes are possible, literal and non-literal (function constructor):

// custom object with literal syntax
var person = { name: 'Bob', lastName: 'Groove' };

// custom object with new (object constructor)
var person = new Object();

// create some properties
person.name = 'Bob';
person.lastName = 'Groove';

Obviously, the first syntax is shorter and is usually preferred.

Note that we can also use the Object.create() API to create an object, which allows us to specify which prototype to use for this object. As a dynamic language, it is possible to access properties using multiple syntaxes, the most common one being the dot operator:
// custom object
var person = {};

// create a new property name
person.name = 'Bob';

Using the bracket notation also works, in case you need to evaluate the property name dynamically. Keep in mind that this syntax is slightly slower and should not be used by default; it also makes refactoring harder.

// custom object
var person = {};

var prop = "name";
person [prop] = 'Bob';

Which is equivalent to the dot operator syntax:

// custom object
var person = {};

person.name = 'Bob';

// outputs: true
console.log ( person.name == person['name'] );

Note that property access in general tends to be slow and should be minimized. V8 is known to provide fast property access based on a different way of storing object properties. Other VMs use a hashmap to look up properties; V8 uses an array approach based on hidden classes. A hidden class is shared by objects of the same shape and stores the offsets used to access property values, instead of searching through a hash table.
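
As an illustration of that V8 detail (the example is mine and purely illustrative), objects created with their properties in the same order can share a hidden class:

function Person(name, lastName) {
  // properties are always added in the same order,
  // so every Person instance can share the same hidden class
  this.name = name;
  this.lastName = lastName;
}

var a = new Person('Bob', 'Groove');
var b = new Person('Alice', 'Smith'); // same shape as a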

To test if a property is defined on an object, we can use the in operator:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// outputs: true
console.log ( "name" in person );

Note that the in keyword will also search for inherited properties through the prototype chain:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// outputs: true
console.log ( "toString" in person );

The hasOwnProperty() API will not check through the prototype chain and only for direct custom properties:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// outputs: false
console.log ( person.hasOwnProperty ("toString") );

// outputs: true
console.log ( person.hasOwnProperty ("name") );

By default, all instance properties are dynamic and can be deleted using the delete keyword:

// custom object
var person = {};

// add a new property
person.age = 40;

// delete it
delete person.age;

// try to retrieve it
// outputs: undefined
console.log(person.age);

Because of the underlying mechanics of some VMs like V8 (Chrome), it is not recommended to delete an object property. Doing so will alter the structure of the hidden class and cause a performance hit. If you don't need a property anymore, set it to null but don't delete it. ECMAScript 5 introduced a set of APIs to allow finer control over object extensions and property attributes. If we want to make sure no properties get added to an object, we can use the Object.preventExtensions() API:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// prevents any extensibility
Object.preventExtensions(person);

// set the age
person.age = 25;

// outputs: undefined
console.log ( person.age );

// delete the name property
delete person.name;

// outputs: undefined
console.log ( person.name );

Given that methods on objects are in fact properties referencing functions, our object cannot be augmented with any methods either; however, our object can still get its properties deleted. At any time we can test whether the object is extensible or not by using the Object.isExtensible() API:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// prevents any extensibility
Object.preventExtensions(person);

// outputs: false
console.log ( Object.isExtensible ( person ) );

If we want to prevent any extensibility or deletion, we can also seal the object through the Object.seal() API. Once an object is sealed, existing properties can still be changed and retrieved, but new ones cannot be added and deletion is also forbidden:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// we seal our object
Object.seal ( person );

// outputs: Bob
console.log ( person.name );

// attempt to null the property
person.name = null;

// outputs: null
console.log ( person.name );

// change the name
person.name = "David";

// outputs: David
console.log ( person.name );

// attempt to create a new property
person.age = 30;

// outputs: undefined
console.log ( person.age );

// attempt to delete the property
delete person.name;

// outputs: David
console.log ( person.name );

Finally, we can resort to the Object.freeze() API if we truly want to ensure immutability at all levels:

// custom object
var person= { name: 'Bob', lastName: 'Groove' };

// we freeze our object
Object.freeze ( person );

// attempt to change the name value
person.name = "David";

// outputs: Bob
console.log ( person.name );

// attempt to delete the property
delete person.name;

// outputs: Bob
console.log ( person.name );

// attempt to null the property
person.name = null;

// outputs: Bob
console.log ( person.name );

To test if any object is either sealed or frozen we can rely on the Object.isSealed() and Object.isFrozen() APIs:

// outputs: true
console.log ( Object.isSealed (person) );

// outputs: true
console.log ( Object.isFrozen (person) );

Note that an object can be sealed and frozen at the same time, and that once sealed or frozen, you cannot undo it. To summarize:

  • Object.preventExtensions() prevents any properties or new capabilities (methods) from being added to the object.
  • Object.seal() prevents any properties or new capabilities (methods) from being added to the object, and also forbids deletion. However, existing properties can still be changed.
  • Object.freeze() prevents any properties or new capabilities (methods) from being added to the object, forbids deletion, and existing properties cannot be changed. The object is completely immutable.

If we need to go to a more granular level, we can actually learn more about each property through the Object.getOwnPropertyDescriptor() API, which returns a property descriptor. In the code below, we retrieve the descriptor for the name property of our person object:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// retrieve the property descriptor for name
var desc = Object.getOwnPropertyDescriptor(person, 'name');

// outputs: true
console.log(desc.writable);

// outputs: true
console.log(desc.configurable);

// outputs: true
console.log ( desc.enumerable );

// outputs: "Bob"
console.log(desc.value);

A property descriptor has the following attributes:

  • writable: Indicates if the property value can be changed.
  • configurable: Indicates if the property can be deleted or its attributes changed.
  • enumerable: Indicates if the property is enumerable or not.
  • value: The value of the property.
  • get: A function acting as a getter for the property.
  • set: A function acting as a setter for the property.

By default, all properties are configurable, enumerable and writable, but configurability is modified if we seal our object:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// we seal the object
Object.seal ( person );

// retrieve the property descriptor for name
var desc = Object.getOwnPropertyDescriptor(person, 'name');

// outputs: true
console.log(desc.writable);

// outputs: false
console.log(desc.configurable);

// outputs: true
console.log ( desc.enumerable );

// outputs: "Bob"
console.log(desc.value);

If we freeze it, then everything is locked except enumeration:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// we freeze the object
Object.freeze ( person );

var desc = Object.getOwnPropertyDescriptor(person, 'name');

// outputs: false
console.log(desc.writable);

// outputs: false
console.log(desc.configurable);

// outputs: true
console.log ( desc.enumerable );

// outputs: "Bob"
console.log(desc.value);

Sealing or freezing an object can be useful in scenarios where you want to ensure some level of immutability in parts of your program. As we just saw, the Object.getOwnPropertyDescriptor() API returns the attributes of any property; you may wonder if it is possible to define a new property and its attributes at the same time. Yes, that is possible too. Up until now, we defined new properties using the dot operator:

myObject.foo = myValue;

Using this syntax is actually a shortcut and makes properties enumerable, configurable and writable by default. ECMAScript 5 defines a more granular API called Object.defineProperty(), which allows you to define the property attributes using a property descriptor. The API has the following signature:

Object.defineProperty(obj, prop, descriptor)

In the code below, we create a name property and make it writable, enumerable and configurable:

// custom object
var myObject = {};

Object.defineProperty(myObject, "name", {value : 'Bob',
 writable : true,
 enumerable : true,
 configurable : true});

// outputs: Bob
console.log ( myObject.name );

Given that we set all attributes to true, we can enumerate the property, modify it and even delete it:

// custom object
var myObject = {};

// we create the property name with specific attributes
Object.defineProperty(myObject, "name", {value : 'Bob',
 writable : true,
 enumerable : true,
 configurable : true});

// outputs: Bob
console.log ( myObject.name );

// outputs: name
for ( var p in myObject ) {
 console.log ( p );
}

// we update the name
myObject.name = 'Stevie';

// outputs: Stevie
console.log ( myObject.name );

// we delete the property
delete myObject.name;

// outputs: undefined
console.log ( myObject.name );

If we change the attributes, we can be very granular and prevent any configuration or update but still allow enumeration:

// custom object
var myObject = {};

// we create the property name with specific attributes
Object.defineProperty(myObject, "name", {value : 'Bob',
 writable : false,
 enumerable : true,
 configurable : false});

// outputs: Bob
console.log ( myObject.name );

// outputs: name
for ( var p in myObject ) {
 console.log ( p );
}

// write access fails silently
myObject.name = 'Stevie';

// deletion fails silently
delete myObject.name;

// outputs: Bob
console.log ( myObject.name );

Even more powerful, if we use the get and set attributes as part of the descriptor object, we can define the implementation of getters and setters using the same API:

// custom object
var myObject = {};

// we define the getter
function getter() {
 return this.nameValue;
}

// we define the setter
function setter(newValue) {
 this.nameValue = newValue;
}

// we create the property name with specific attributes
Object.defineProperty(myObject, "name", {
  get: getter,
  set: setter});

// we change the value
myObject.name = 'Stevie';

// outputs: Stevie
console.log ( myObject.name );

We define the getter and setter for our name property. Note that we use the nameValue alias to reference the backing property, but we could actually use any name; using foo would work just fine.

// we define the getter
function getter() {
 return this.foo;
}

// we define the setter
function setter(newValue) {
 this.foo = newValue;
}

We now have control over how our value is read or written. In the code below, we make sure that any string read from the name property will always be cased correctly:

// we define the getter
function getter() {
 return this.nameValue.charAt(0).toUpperCase()+this.nameValue.substr(1).toLowerCase();
}

In the code below, we set an incorrectly cased string; when we retrieve the value, the string is correctly formatted:

// custom object
var myObject = {};

// we define the getter
function getter() {
 return this.nameValue.charAt(0).toUpperCase()+this.nameValue.substr(1).toLowerCase();
}

// we define the setter
function setter(newValue) {
 this.nameValue = newValue;
}

// an object property added with defineProperty and an accessor property descriptor (getter/setter)
Object.defineProperty(myObject, "name", {
  get: getter,
  set: setter});

// we change the value
myObject.name = 'stevie';

// outputs: Stevie
console.log ( myObject.name );

Pretty powerful, right? Note that the Object class also defines a defineProperties() API, allowing you to define multiple properties all at once (a short sketch of it follows the next example). Finally, to enumerate the properties of an object, we can rely on the Object.keys() API:

// custom object
var person = { name: 'Bob', lastName: 'Groove' };

// outputs: ["name", "lastName"]
console.log ( Object.keys ( person ) );
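
And here is the promised sketch of Object.defineProperties(); the object and property names are illustrative:

// custom object
var car = {};

// define several properties and their attributes in one call
Object.defineProperties(car, {
  brand: { value: 'Ferrari', writable: false, enumerable: true, configurable: false },
  speed: { value: 0, writable: true, enumerable: true, configurable: true }
});

// outputs: Ferrari 0
console.log ( car.brand, car.speed );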

You may wonder if immutability makes property access faster. Unfortunately, no. It is important to note that these APIs to seal, freeze or prevent extensions of objects have been slow in the past and have made property access slower. Recent benchmarks have shown that performance got much better lately, but keep an eye on them in terms of performance impact.

Memo

  • All objects are mutable but can be sealed or frozen using the appropriate APIs from the Object class.
  • Object.preventExtensions() prevents any properties or new capabilities (methods) from being added to the object.
  • Object.seal() prevents any properties or new capabilities (methods) from being added to the object, and also forbids deletion. However, existing properties can still be changed.
  • Object.freeze() prevents any properties or new capabilities (methods) from being added to the object, forbids deletion, and existing properties cannot be changed. The object is completely immutable.
  • The dot operator and brackets notation can be used to read and write properties.
  • The brackets notation can be useful but performs slower than the dot operator.
Almost everything is an Object

In JavaScript, almost everything is an Object. Let's try the code below to illustrate this:

function foo(){};

// outputs: true
console.log ( foo instanceof Object );

var countries = ['USA', 'FRANCE'];

// outputs: true
console.log ( countries instanceof Object );

var person = { name: "Bob", lastName: "Groove" };

// outputs: true
console.log ( person instanceof Object );

As expected, these composite types are of type Object. Even functions; we will get back to that later in this article. But what about primitives, like a string, a number or a Boolean?

var result = true;

// outputs: false
console.log ( result instanceof Object );

var name = 'Bob';

// outputs: false
console.log ( name instanceof Object );

var score = 190;

// outputs: false
console.log ( score instanceof Object );

So you may wonder how come these types have properties and methods defined on them, like length on a String:

var name = 'Bob';

// outputs: 3
console.log ( name.length );

Behind the scenes, a wrapping object is created when a property or method is accessed on a primitive. This concept is known as boxing. At runtime, the code above will actually generate the following internally:

// outputs: 3
console.log ( (new String (name)).length );

That wrapper object (acting as a box here) is then discarded and garbage collected once the length getter is called. That is why trying to store data on a primitive fails: the temporary object we are writing into is immediately discarded. Trying to retrieve our property later on will create another box and, as a result, return undefined:

var name = 'Bob';

// write some data
name.foo = 'Some Data'; // equals to (new String (name)).foo = 'Some Data';

// outputs: undefined
console.log ( name.foo ); // equals to console.log ( (new String (name)).foo );

But storing data on a String created with its constructor function is possible, because we then have a real object and no implicit boxing/unboxing occurs:

// create the string with new (function constructor)
var name = new String("Bob");

// write some data
name.foo = "Some Data";

// outputs: Some Data
console.log(name.foo);

Today's JavaScript VMs are pretty fast at boxing/unboxing. It is not worth worrying too much about it performance wise.

Two other types are not of type Object: null and undefined. The code below demonstrates this:

// outputs: false
console.log ( null instanceof Object );

// outputs: false
console.log ( undefined instanceof Object );

Memo

  • Calling a property or method on a primitive causes boxing and unboxing.
  • Therefore, it is not possible to store data or augment capabilities of a primitive.
Null and undefined

As we just saw, null and undefined are special types. In JavaScript, any variable that is not initialized is undefined:

var myObject;
var i;

// outputs: undefined undefined
console.log ( myObject, i );

Same for undefined properties:

var myObject = { name: "Bob" };

// outputs: undefined
console.log ( myObject.firstName );

Some typed languages initialize primitive values or objects to null. This allows developers to use null for initialization testing. In JavaScript, the runtime never automatically initializes variables to null; keep this in mind at all times. If you forget, you may be tempted to write lots of null checks like in the example below:

var myArray;

// if the Array is not initialized, then initialize it
if ( myArray == null ) {
 myArray = new Array();
 console.log ('Array initialized');
} else console.log ('already created');

The issue is that our test for null here is not truly reliable. Remember, our Array is not initialized, so the variable is undefined. On top of that, null and undefined are two different types but compare as equal when we are not using the strict equality operator:

// outputs: object undefined
console.log ( typeof null, typeof undefined );

// outputs: true
console.log ( undefined == null );

Remember implicit conversions? We are using the == operator here. If we were to use the strict equality operator (===), which tests for both type and value, our test would fail:

// outputs: false
console.log ( undefined === null );

Remember we used the strict operators earlier to resolve ambiguity. Here again, it proves to be useful. In our previous code we would now be entering the else block:

var myArray;

// if the Array is not initialized, then initialize it
if ( myArray === null ) {
 myArray = new Array();
 console.log ('Array initialized');
} else console.log ('already created');

You could also just rely on a Boolean evaluation and do a simple if not:

var myArray;

// if the Array is not initialized, then initialize it
if ( !myArray ) {
 myArray = new Array();
 console.log ('Array initialized');
} else console.log ('already created');

18:38:40, ByteArray.org

From microphone to .WAV with: getUserMedia and Web Audio

Update: The new MediaStream recording specification is aiming at solving this use case through a much simpler API. Follow the conversations on the mailing list.

A few years ago, I wrote a little ActionScript 3 library called MicRecorder, which allowed you to record the microphone input and export it to a .WAV file. Very simple, but pretty handy. The other day I thought it would be cool to port it to JavaScript. I quickly realized that it is not as easy. In Flash, the SampleDataEvent directly provides the byte stream (PCM samples) from the microphone. With getUserMedia, the Web Audio APIs are required to extract the samples. Note that getUserMedia and Web Audio are not broadly supported yet, but it is coming. Firefox has also landed Web Audio recently, which is great news.

Because I did not find an article that went through the steps involved, here is a short article on how it works, from getting access to the microphone to the final .WAV file; it may be useful to you in the future. The most helpful resource I came across was this nice HTML5 Rocks article which pointed to Matt Diamond's example, which contains the key piece I was looking for to get the Web Audio APIs hooked up. Thanks so much Matt! Credits also go to Matt for the merging and interleaving code of the buffers, which works very nicely.

First, we need to get access to the microphone, and we use the getUserMedia API for that.

if (!navigator.getUserMedia)
        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
                      navigator.mozGetUserMedia || navigator.msGetUserMedia;

if (navigator.getUserMedia){
    navigator.getUserMedia({audio:true}, success, function(e) {
    alert('Error capturing audio.');
    });
} else alert('getUserMedia not supported in this browser.');

The first argument of the getUserMedia API describes what we want to get access to (here the microphone); if we wanted to get access to the camera, we would have passed an object with the video flag on:

navigator.getUserMedia({video:true}, success, function(e) {
    alert('Error capturing video.');
});

The two other arguments are callbacks to handle successful access to the hardware or failure. At this point, the success callback will be triggered if the user clicks "Allow" through this panel:

getusermedia-access

Once the user has allowed access to the microphone, we need to start querying the PCM samples; this is where it becomes tricky and the Web Audio APIs come into play. If you have not checked the Web Audio spec, the API surface is very large and quite scary at first, and that's because the Web Audio APIs can do a lot, like audio filters, synthesized music, 3D audio engines and more. But all we need here are the PCM samples, which we will store and pack inside a WAV container using a simple ArrayBuffer.

So our user has clicked "Allow", we now go on and create an audio context and start capturing the audio data:

function success(e){
    // creates the audio context
    audioContext = window.AudioContext || window.webkitAudioContext;
    context = new audioContext();

    // retrieve the current sample rate to be used for WAV packaging
    sampleRate = context.sampleRate;
    
    // creates a gain node
    volume = context.createGain();

    // creates an audio node from the microphone incoming stream
    audioInput = context.createMediaStreamSource(e);

    // connect the stream to the gain node
    audioInput.connect(volume);

    /* From the spec: This value controls how frequently the audioprocess event is 
    dispatched and how many sample-frames need to be processed each call. 
    Lower values for buffer size will result in a lower (better) latency. 
    Higher values will be necessary to avoid audio breakup and glitches */
    var bufferSize = 2048;
    recorder = context.createScriptProcessor(bufferSize, 2, 2);

    recorder.onaudioprocess = function(e){
        console.log ('recording');
        var left = e.inputBuffer.getChannelData (0);
        var right = e.inputBuffer.getChannelData (1);
        // we clone the samples
        leftchannel.push (new Float32Array (left));
        rightchannel.push (new Float32Array (right));
        recordingLength += bufferSize;
    }

    // we connect the recorder
    volume.connect (recorder);
    recorder.connect (context.destination); 
}

The createScriptProcessor API takes as a first argument the buffer size you want to retrieve; as I added in the comments, this value dictates how frequently the audioprocess event is dispatched. For best latency, choose a low value, like 2048 (remember, it needs to be a power of two). Every time the event is dispatched, we call the getChannelData API for each channel (left and right) and get a new Float32Array buffer for each channel, which we clone (sorry GC) and store into two separate Arrays. This code would be much simpler and more GC friendly if it were possible to write each channel into a Float32Array directly, but given that these cannot have an undefined length, we need to fall back to plain Arrays.

So why do we have to clone the channels? It actually drove me nuts for many hours. The returned channel buffers are pointers to the current samples coming in, so you need to snapshot (clone) them, otherwise you will end up with samples reflecting only the sound coming from the microphone at the instant you stopped recording.

Once we have our arrays of buffers, we need to flatten each channel:

function mergeBuffers(channelBuffer, recordingLength){
  var result = new Float32Array(recordingLength);
  var offset = 0;
  var lng = channelBuffer.length;
  for (var i = 0; i < lng; i++){
    var buffer = channelBuffer[i];
    result.set(buffer, offset);
    offset += buffer.length;
  }
  return result;
}

Once flattened, we can interleave both channels together:

function interleave(leftChannel, rightChannel){
  var length = leftChannel.length + rightChannel.length;
  var result = new Float32Array(length);

  var inputIndex = 0;

  for (var index = 0; index < length; ){
    result[index++] = leftChannel[inputIndex];
    result[index++] = rightChannel[inputIndex];
    inputIndex++;
  }
  return result;
}

We then add the little writeUTFBytes utility function:

function writeUTFBytes(view, offset, string){ 
  var lng = string.length;
  for (var i = 0; i < lng; i++){
    view.setUint8(offset + i, string.charCodeAt(i));
  }
}

We are now ready for WAV packaging, you can change the volume variable if needed (from 0 to 1):

// we flatten the left and right channels
var leftBuffer = mergeBuffers ( leftchannel, recordingLength );
var rightBuffer = mergeBuffers ( rightchannel, recordingLength );
// we interleave both channels together
var interleaved = interleave ( leftBuffer, rightBuffer );

// create the buffer and view to create the .WAV file
var buffer = new ArrayBuffer(44 + interleaved.length * 2);
var view = new DataView(buffer);

// write the WAV container, check spec at: https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
// RIFF chunk descriptor
writeUTFBytes(view, 0, 'RIFF');
view.setUint32(4, 44 + interleaved.length * 2, true);
writeUTFBytes(view, 8, 'WAVE');
// FMT sub-chunk
writeUTFBytes(view, 12, 'fmt ');
view.setUint32(16, 16, true);
view.setUint16(20, 1, true);
// stereo (2 channels)
view.setUint16(22, 2, true);
view.setUint32(24, sampleRate, true);
view.setUint32(28, sampleRate * 4, true);
view.setUint16(32, 4, true);
view.setUint16(34, 16, true);
// data sub-chunk
writeUTFBytes(view, 36, 'data');
view.setUint32(40, interleaved.length * 2, true);

// write the PCM samples
var lng = interleaved.length;
var index = 44;
var volume = 1;
for (var i = 0; i < lng; i++){
    view.setInt16(index, interleaved[i] * (0x7FFF * volume), true);
    index += 2;
}

// our final binary blob that we can hand off
var blob = new Blob ( [ view ], { type : 'audio/wav' } );

Obviously, if WAV packaging becomes too expensive, it is an ideal task to offload to a background worker ;)
Once done, we can save our blob to a file or do whatever we want with it: save it locally or remotely, or even post-process it. You can also check the live demo here for more fun.
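
If all you need is to hand the recording to the user, a minimal sketch (the link element id is an assumption, adapt it to your markup) is to expose the blob through an object URL:

// we create an object URL pointing to our WAV blob
var url = ( window.URL || window.webkitURL ).createObjectURL ( blob );

// hypothetical <a id="download"> element used to offer the file
var link = document.getElementById ( 'download' );
link.href = url;
link.download = 'recording.wav';
link.textContent = 'Download recording';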


2014-09-05

18:51:28, ByteArray.org
Concurrency in JavaScript

Just like with Flash, JavaScript code runs by default on the UI thread, and any expensive computation will usually affect the UI responsiveness. As you may know, at 60 fps, you have around 16ms (1000ms/60) per frame to do what you have to do (computations, rendering and other misc logic). If you exceed that budget, you will alter the frame rate and potentially make your content feel sluggish or worse, unresponsive.

Budget per frame

Web Workers are now broadly available in most browsers, even on mobile (caniuse.com stats for Web Workers), and give you the power of concurrency from within JavaScript. They allow you to move expensive computations to other threads, enabling responsive programming, and ideally open the door in the future to true parallelization in JavaScript. Let's have a look at the reasons why you may be interested in leveraging Web Workers.

  • Responsive programming: When you have an expensive computation and don't want to block the UI.
  • Parallel programming: When you want to leverage multiple CPU cores by having computations running concurrently to solve a specific task.

We will see later on that parallel programming with Web Workers can be challenging, and that they are unfortunately not really designed for that today. But before we dive into the details of responsive programming and parallel programming, let's start with some more details about Web Workers. Note that we will cover dedicated Web Workers here, not shared Web Workers.

Workers and threads

A very common question around Web Workers is how they compare to low-level threads available in other languages like C++, Java or C#.

First off, Web Workers are way more high-level and heavier than threads: when newing a Web Worker, you are actually instantiating a new JS VM. Instantiating a Web Worker does create another low-level thread behind the scenes to power the VM that runs your code, but you never work with it directly. Therefore, it is important to remember that the cost of a single Worker in memory is pretty high, and the same goes for its instantiation time. Based on that, you can imagine that instantiating many workers on a mobile browser is usually not an option; in fact, most browsers limit the number of workers that can be allocated for that reason.

Another major difference with threads is how data passing works: nothing is shared. As we will see later with Web Workers, data is passed by default through message cloning; in other words, any data passed is cloned. Given the overhead of cloning large objects, you can also transfer ownership when needed. That allows you to pass the reference of your object to another Web Worker without cloning it; you just lose the reference on the side that sent it. It is a pretty elegant approach, very simple to use, and it works well for the responsive use cases.

Workers and cores

On a multi-core environment, multiple threads can truly run concurrently if distributed over multiple cores. On a single core system, the CPU performs what is called context switching between the threads. It happens so fast that you cannot actually tell the threads are not running concurrently, but they truly aren't. This leads to another common question: can I leverage multi-core architectures with Web Workers? In other words, can I choose specifically which core to leverage?

That is not in your control; the OS decides whether the thread utilized by the instantiated VM is spawned on another core or not. It is also not possible to query the number of cores available (even if libraries have emerged to guess it), but you ideally want to stay out of this and let the OS balance things for you.

Life as a worker

You can use all of the JavaScript features inside a Worker, but the amount of browser APIs available is limited. The main restriction is that there is no way to display anything from within a Worker, simply because there is no DOM available; it lives on the UI thread. Imagine making the DOM thread safe, that would be pretty tricky across workers. That is why, if you try to access the document object from within a worker, you will get the following runtime exception:

Uncaught ReferenceError: document is not defined

At this point, an error event will be dispatched by the worker, which can be listened to. As a result, it is impossible to render anything on screen from a Web Worker. To achieve this, you have to send the object back to the main (UI) thread for display. A very classic scenario is anything related to image processing. Given how expensive image processing can be, it is very tempting to offload the expensive code to a Web Worker. In the initial Web Workers specification, it was not possible to pass an ImageData object between Web Workers; you had to pass an array of pixel colors. Recently, the specification got updated to allow message passing of ImageData objects, which is quite nice, given that many use cases rely on image/bitmap manipulation.
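
As a minimal sketch (the canvas element, the filter.js file name and the invert filter are assumptions), passing an ImageData object to a worker and back could look like this:

// main thread: grab the pixels from a canvas and send them to the worker
var canvas = document.querySelector ( 'canvas' );
var ctx = canvas.getContext ( '2d' );
var pixels = ctx.getImageData ( 0, 0, canvas.width, canvas.height );

var filterWorker = new Worker ( 'filter.js' );
filterWorker.addEventListener ( 'message', function ( e ) {
    // back on the UI thread, we can draw the processed pixels on screen
    ctx.putImageData ( e.data, 0, 0 );
});
filterWorker.postMessage ( pixels );

// filter.js (worker side): invert each pixel and send the ImageData back
self.addEventListener ( 'message', function ( e ) {
    var data = e.data.data;
    for ( var i = 0; i < data.length; i += 4 ) {
        data[i] = 255 - data[i];         // red
        data[i + 1] = 255 - data[i + 1]; // green
        data[i + 2] = 255 - data[i + 2]; // blue
    }
    self.postMessage ( e.data );
});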

Another important use case is resource loading. You may want to load a remote resource from a worker, and that is possible through the XMLHttpRequest object. That is very useful if you want to do some expensive parsing of a resource that you are loading within a game, like parsing an object model, or more simply to log to a server some analytics you have been cooking within a background worker. You can find here more details about the functions available from within Web Workers.
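
A minimal sketch of worker-side loading (the resource URL and the JSON shape are assumptions) could be:

// worker side: load and parse a resource, then hand the result to the UI thread
self.addEventListener ( 'message', function ( e ) {
    var xhr = new XMLHttpRequest ();
    xhr.open ( 'GET', e.data ); // the URL is passed in from the main thread
    xhr.onload = function () {
        // the expensive parsing stays off the UI thread
        var model = JSON.parse ( xhr.responseText );
        self.postMessage ( model );
    };
    xhr.send ();
});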

Another thing you may try is to use the console to log things from within a worker; this is also not supported, but you can build a wrapper on top of the postMessage API that we will cover in a few minutes. David Flanagan has posted one there which works well. In addition, you can also use the Chrome Dev Tools, which offer you a way to debug Web Workers.
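
The idea behind such a wrapper is simple; here is a minimal sketch (the 'log' marker is an arbitrary convention, and it assumes a worker instance has already been created as shown in the next section):

// worker side: forward anything we want to log to the main thread
function log () {
    self.postMessage ( { type: 'log', args: Array.prototype.slice.call ( arguments ) } );
}

// main thread: route log messages to the real console
worker.addEventListener ( 'message', function ( e ) {
    if ( e.data && e.data.type === 'log' ) console.log.apply ( console, e.data.args );
});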

Instantiating a Worker

To instantiate a worker, two approaches are available. The first one requires you to point to the JavaScript code needed to run in the background:

var worker = new Worker("background.js");
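
The content of background.js is not shown here; a minimal sketch of what it could contain (the echo logic is just an example) is:

// background.js (hypothetical content): listen for messages and answer back
self.addEventListener ( 'message', function ( e ) {
    self.postMessage ( 'Worker received: ' + e.data );
});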

When the script logic is passed to the Worker object, the code runs immediately. Simple and easy. The limitation here is that it relies on a separate file; you may be in a situation where you need to reduce the number of file dependencies as much as possible. Another technique allows you to remove such a dependency and create the worker logic through the createObjectURL API and a Blob object:

// worker logic
var blob = new Blob(["self.addEventListener('message', messageReceived);function messageReceived(e) {self.postMessage('Doing pretty good here!');}"], {type: 'text/javascript'});

// webkit handling
var URL = window.URL || window.webkitURL;

// create the virtual file
var code = URL.createObjectURL(blob);

// create the worker
var worker = new Worker(code);

Note that the way you retrieve the string passed to the Blob object can vary. Obviously, passing the string like above is not very optimal. As Mozilla points out in their documentation, you can also store the JavaScript code inside a custom script tag and extract its content dynamically and pass it to the Blob object.

In the code below, the JavaScript code is located inside a custom script tag:

<script type="mce-text/js-worker">
self.addEventListener('message', messageReceived); 

function messageReceived(e) {
    self.postMessage('Doing pretty good here!');
}
</script>

Did you notice the use of the keyword self? This is how we refer (from the background code) to the worker itself.

We retrieve the script using the querySelector API and pass it to the Blob object:

// extract the string
var logic = document.querySelectorAll("script[type=\"text\/js-worker\"]")[0].textContent;

// worker logic
var blob = new Blob([logic], {type: 'text/javascript'});

// webkit handling
var URL = window.URL || window.webkitURL;

// create the virtual file
var code = URL.createObjectURL(blob);

// create the worker
var worker = new Worker(code);

Relying on a separate file for the worker logic is the most common way to work with Workers, but you have now seen different options in case you need to get rid of a dependent file.

Terminating a Web Worker

To terminate a Worker, just call the terminate API:

worker.terminate()

In the same way, a Worker can close itself through the close API:

self.close()

Note that once terminated or closed, a Web Worker cannot be restored, it would have to be recreated. Now that our Worker object is created, let's send some data back and forth to see how things work.

Sub Web Workers

If needed, a Worker can create a Worker within itself:

self.addEventListener('message', messageReceived);

// a sub worker
var subworker = new Worker ('Task.js');

subworker.addEventListener('message', subTaskReceived);

function messageReceived(e) {
    // some logic
}

function subTaskReceived(e) {
    // some logic
}
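
Task.js is not shown here; a minimal sketch of what the sub worker could contain (the doubling logic is just a placeholder) is:

// Task.js (hypothetical content): do some work and report back to the parent worker
self.addEventListener ( 'message', function ( e ) {
    var result = e.data * 2;
    self.postMessage ( result );
});
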
Passing data

To communicate, we use the postMessage API (not to be confused with window.postMessage). Any message sent with postMessage triggers the "message" event that is dispatched by the Worker object:

// extract the string
var logic = document.querySelectorAll("script[type=\"text\/js-worker\"]")[0].textContent;

// worker logic
var blob = new Blob([logic], {type: 'text/javascript'});

// webkit handling
var URL = window.URL || window.webkitURL;

// create the virtual file
var code = URL.createObjectURL(blob);

// create the worker
var worker = new Worker(code);

// listen to the response from the Worker
worker.addEventListener('message', receiveMessage);

// pass some data
worker.postMessage('How is it going over there?');

// callback handling the response, data is available in the event object
function receiveMessage (e)
{
    console.log (e.data);
}

Our code sends a message to the Worker, which returns a message back to the main thread. The code above outputs the following message:

Doing pretty good here!

In that case, we sent the string 'How is it going over there?', and the Worker returned 'Doing pretty good here!'. We sent different data just to illustrate how message passing works. But in most cases you want to send similar data that you can retrieve on both sides: some coordinates, a path that has been computed in a game, or more simply a binary blob that you want to process, like a PDF or an image that has been generated. To retrieve the data that is being passed, we use the data property of the event object.

In the code below, we update our worker logic to return the data back to the main thread:

self.addEventListener('message', messageReceived);

function messageReceived(e) {
    self.postMessage(e.data);
}

We still pass a simple string, but it is now sent back to the main thread:

// extract the string
var logic = document.querySelectorAll("script[type=\"text\/js-worker\"]")[0].textContent;

// worker logic
var blob = new Blob([logic], {type: 'text/javascript'});

// webkit handling
var URL = window.URL || window.webkitURL;

// create the virtual file
var code = URL.createObjectURL(blob);

// create the worker
var worker = new Worker(code);

// listen to the response from the Worker
worker.addEventListener('message', receiveMessage);

// we send some data
worker.postMessage("Some data passed back and forth!")

// callback handling the response, data is available in the event object
function receiveMessage (e)
{
    console.log (e.data);
}

If we run the code above, the console outputs:

Some data passed back and forth!

Remember that we are not sharing anything here; our string got cloned implicitly. Now you may wonder, can I pass any kind of data? If you stick to the primitive types available in the JavaScript language, like Number, String and Boolean, you will be fine. For composite data types, plain JSON objects are supported, in addition to the ImageData and TypedArray types.

In the code below, we pass a simple JSON object:

// a user
var user = { name : "Bob", age: 30, city: "San Francisco" };

// we pass our object
worker.postMessage ( user );

Here again, the object is cloned and accessible from our Worker code:

self.addEventListener('message', messageReceived);

function messageReceived (e)
{
    var user = e.data;
    var name = user.name;
    var age = user.age;
    var city = user.city;
}

If you try to pass an unsupported type, you will get a DataCloneError exception. In the code below, we try to pass a reference to the window object:

// attempt to pass the window object
worker.postMessage(this.window);

Which triggers the following exception:

Uncaught Error: DataCloneError: DOM Exception 25

Same thing with an XHR object:

// attempt to pass an XHR object
worker.postMessage(new XMLHttpRequest());

Which also results in a runtime exception:

Uncaught Error: DataCloneError: DOM Exception 25

In the same way, passing a custom type is not supported. In the code below, we try to pass an Array of Ball objects:

var radius = 10;
var Ball = (function () {
    function Ball(radius, x, y, destX, destY, context, balls) {
        this.friction = .1;
        this.color = 'green';
        this.radius = radius;
        this.x = x;
        this.y = y;
        this.destX = destX;
        this.destY = destY;
        this.context = context;
        this.balls = balls;
    }
    return Ball;
})();

var BALLS_NUM = 100;
var balls = new Array();

for(var i = 0; i < BALLS_NUM; i++) {
    var obj = new Ball(radius, Math.round(Math.random() * 1024), Math.round(Math.random() * 768), Math.random() * 1024, Math.random() * 768, context, balls);
    balls.push(obj);
}

// attempt to pass an array of custom type (Ball) objects
worker.postMessage (balls);

This code will also trigger the same DataCloneError exception. As you can imagine, passing data through cloning has overhead. If you try to pass a big amount of data at a high frequency, cloning will not be the most efficient path. However, this approach has one merit: it is very safe. Given that everything is cloned, it is impossible to end up with corrupted data or unsynchronized shared state.

Sending larger objects

In the following example, we create a 16MB byte array that we send to the worker, and we capture the time it takes to allocate and transfer back and forth:

// extract the string
var logic = document.querySelectorAll("script[type=\"text\/js-worker\"]")[0].textContent;

// worker logic
var blob = new Blob([logic], {type: 'text/javascript'});

// webkit handling
var URL = window.URL || window.webkitURL;

// create the virtual file
var code = URL.createObjectURL(blob);

// create the worker
var worker = new Worker(code);

// listen to the response from the Worker
worker.addEventListener('message', receiveMessage);

// capture current time
var started = Date.now();

// Create a 16MB "file" and fill it.
var uInt8View = new Uint8Array(1024*1024*16); // 16MB
for (var i = 0; i < uInt8View.length; ++i) {
    uInt8View[i] = i;
}

console.log ( "Memory allocation : " + (Date.now() - started) + " ms" );

// capture current time
started = Date.now();

// pass the bytearray (cloned)
worker.postMessage(uInt8View.buffer);

// callback handling the response, data is available in the event object
function receiveMessage (e)
{
    console.log ("Back/forth transfer : " + (Date.now() - started) + " ms");
    console.log ("Byte array size : " + e.data.byteLength);
}

If we run this code on different devices and environments, we get the following results:

  • iOS7 (Safari/iPhone5): 214 ms
  • iOS6 (Safari/iPhone4S): 524 ms
  • MacBook Pro (Chrome/10.8.4): 75 ms

Note how expensive data cloning is, even on a recent device and OS. Let's see if transfer of ownership can help performance.

Transferable objects

In a scenario where you need to pass a large amount of data and cloning could be too costly, you can rely on transferable objects. To transfer an object, the postMessage API accepts an optional second argument, which is an array containing the objects to transfer ownership of:

// transfer ownership to the worker
worker.postMessage(uInt8View.buffer, [uInt8View.buffer]);

With that change, we see that passing data becomes much more efficient, around a 3x performance boost on mobile:

  • iOS7 (Safari/iPhone5): 80 ms
  • iOS6 (Safari/iPhone4S): 162 ms
  • MacBook Pro (Chrome/10.8.4): 37 ms

Note that just before calling postMessage, our byte array is available; right after the transfer, the reference is no longer accessible and trying to access it throws a runtime exception:

// outputs: 16777216
// access to the bytearray possible
console.log ( uInt8View.buffer.byteLength );

// transfer ownership to the worker
worker.postMessage(uInt8View.buffer, [uInt8View.buffer]);

// triggers runtime exception: Uncaught TypeError: Cannot read property 'byteLength' of null
console.log ( uInt8View.buffer.byteLength );

This technique can be useful in a scenario where you need to pass a big amount of data, like a big binary blob. Note that so far we have passed data from the main thread to the Worker, but data can also be passed the other way around. This time, we will create the memory buffer from the worker and send it to the main thread.

To achieve this, we modify our worker logic:

// Create a 16MB "file" and fill it.
var uInt8View = new Uint8Array(1024*1024*16); // 16MB

for (var i = 0; i < uInt8View.length; ++i) {
    uInt8View[i] = i;
} 

self.postMessage(uInt8View.buffer, [uInt8View.buffer]);

Then, we listen to the message coming from the worker:

// extract the string 
var logic = document.querySelectorAll("script[type=\"text\/js-worker\"]")[0].textContent; 

// worker logic 
var blob = new Blob([logic], {type: 'text/javascript'}); 

// webkit handling 
var URL = window.URL || window.webkitURL; 

// create the virtual file 
var code = URL.createObjectURL(blob); 

// create the Worker 
var worker = new Worker(code); 

// listen to the incoming message from the Worker 
worker.addEventListener('message', receiveMessage); 

function receiveMessage (e) { 
    console.log (e.data.byteLength); 
}

If we run the code above, the console outputs:

16777216

We are now transferring our data from the background Web Worker to the main UI thread.

Responsive use cases

As we said earlier, Web Workers are commonly used today to perform expensive/synchronous tasks in the background. This allows you as a developer to perform expensive computations in the background without locking the UI, so that your application always stays responsive.

Here are a few use cases:

  • Generating or parsing/rendering a PDF.
  • Compressing an image to a custom file format (JPEG, etc.).
  • Computing an A* path inside a game.
  • Looking up a word in a database for a spell checker in a text editor.
  • Loading and parsing resources inside a game.
  • Performing general complex computations in the background and logging the results remotely.
  • And many others...

For these, message cloning works just fine, because you don't need low latency. All you need is to pass your data back to the UI thread when the work is done; you don't need to pass messages at a high frequency.
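
As a minimal sketch (the prime counting task is just a stand-in for any expensive computation), offloading work to keep the UI responsive looks like this:

// worker logic, created inline through a Blob as shown earlier
var source = "self.addEventListener('message', function (e) {" +
             "  var limit = e.data, count = 0;" +
             "  for (var n = 2; n <= limit; n++) {" +
             "    var prime = true;" +
             "    for (var d = 2; d * d <= n; d++) if (n % d === 0) { prime = false; break; }" +
             "    if (prime) count++;" +
             "  }" +
             "  self.postMessage(count);" +
             "});";

var blob = new Blob ( [ source ], { type: 'text/javascript' } );
var worker = new Worker ( ( window.URL || window.webkitURL ).createObjectURL ( blob ) );

worker.addEventListener ( 'message', function ( e ) {
    // back on the UI thread, the page stayed responsive the whole time
    console.log ( 'Primes found: ' + e.data );
});

// kick off the expensive computation in the background
worker.postMessage ( 2000000 );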

Concurrency and the future of the web

As you can see, Web Workers are very easy to use and pretty powerful. The message passing model is simple, and Workers can be leveraged today on desktop and mobile to make your application more responsive and snappier. So, are we done with them, or could they be improved? We will see in the following section that there is room for a few improvements that could really move the web forward.

Parallelization

So far, we have covered techniques to offload expensive computations to a background worker, using message cloning and transfer of ownership, but we have not discussed the parallelization use cases. Instead of relying on a single Web Worker to perform a computation in the background, the idea of parallelization is to have multiple Web Workers run in parallel, working on the same task to solve it faster, given that we distribute the work to multiple threads. Microsoft has posted a good example about this for image manipulation.

As we have seen, two communication models currently exist for Web Workers, cloning and ownership transfer; unfortunately, neither works well today for parallelization. Ownership transfer does not, because it gives data ownership to only one Web Worker at a time. Cloning lets you parallelize tasks by duplicating data when needed, but cloning has real overhead, and that can become limiting in some scenarios. In my tests, passing an array of 100 JSON objects over a few frames brought the performance to a crawl. Being able to truly share some memory (like a simple TypedArray) between workers would be fast, but it would expose some additional complexity to web developers, like having to deal with synchronization of data.
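
To make the idea concrete, here is a minimal sketch (the worker count, the chunk size and the sum task are arbitrary) of fanning a computation out to several workers and combining the partial results; note that each chunk is cloned, which is exactly the overhead discussed above:

// worker logic: sum the numbers of the chunk it receives
var source = "self.addEventListener('message', function (e) {" +
             "  var chunk = e.data, sum = 0;" +
             "  for (var i = 0; i < chunk.length; i++) sum += chunk[i];" +
             "  self.postMessage(sum);" +
             "});";

var url = ( window.URL || window.webkitURL ).createObjectURL (
    new Blob ( [ source ], { type: 'text/javascript' } ) );

// some data to crunch
var data = new Array ( 1000000 );
for ( var i = 0; i < data.length; i++ ) data[i] = i;

var WORKERS = 4, pending = WORKERS, total = 0;
var chunkSize = Math.ceil ( data.length / WORKERS );

for ( var w = 0; w < WORKERS; w++ ) {
    var worker = new Worker ( url );
    worker.addEventListener ( 'message', function ( e ) {
        total += e.data;
        // once every worker has reported, we have the final result
        if ( --pending === 0 ) console.log ( 'Total: ' + total );
    });
    // each chunk is cloned here (message cloning)
    worker.postMessage ( data.slice ( w * chunkSize, ( w + 1 ) * chunkSize ) );
}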

Most of the time, the idea of shared memory makes people worried about nasty things we usually don't want to deal with, like data races and locks leading to memory corruption. I actually agree that within the context of low-level languages, this is pretty scary, but with a managed language like JavaScript, I would argue that it is not as bad. In addition, the main UI thread and Web Workers have been designed so that blocking cannot happen when one is waiting for the other and blocking the UI can already happen today with a simple while (true) sitting on the main thread.

So, really, I am not sure a shared memory model with Web Workers would necessarily hurt the web, but I agree that there is probably a better solution than exposing plain shared memory, which would expose web developers to additional complexity, like dealing with condition variables and mutexes; there has got to be something better for web developers. Anyway, I am following the discussions about bringing a better message passing model to Web Workers with great passion.

Shared resources (WebGL)

Web Workers also offer some great potential when it comes to GPU programming. The most exciting one to me is the idea of shared resources for WebGL relying on Web Workers. Today, when you are programming with WebGL, all the initialization work, like shader program initialization and texture uploads, happens on the UI thread. In other words, to see something on screen, WebGL requires you to upload your textures and programs (vertex/fragment shaders), and all of this still happens on the main UI thread. In addition, two things usually required when working with GPU APIs have historically been slow and, most importantly, blocking operations: texture upload and texture readback.

This is a big limitation: you obviously need to upload textures, and this will once again lock the UI if expensive, making your game/application feel janky. The same goes for texture readback (copying a texture uploaded on the GPU back to RAM) to perform other computations on the CPU. With shared resources support for WebGL, multiple contexts could be created across Web Workers to perform some of the tasks mentioned earlier in other threads, allowing developers to produce lock-free, snappy, super responsive GPU experiences.

Additional Web Workers resources

I hope you enjoyed reading these additional details on Web Workers!


2014-07-29

17:42:12, ByteArray.org
Writing cross-platform apps with Node-WebKit

Node-WebKit

Last week, the daycare where my son stays told me they were desperately looking for an application to save them a lot of precious time. Every month, they have someone from the school spend 3 hours dealing with invoices. The process is crazy manual, no automation at all:

  1. An Excel (xls) spreadsheet contains all the tuitions due by parents and is updated every month.
  2. A new PDF is created manually for each parent, with a tuition amount that varies every month for some parents.
  3. An email is sent to each parent (across 3 different schools, so that's a lot of parents) with the PDF invoice as an attachment.
  4. Sometimes, a message is added to the PDF invoice to inform parents about days off (holidays) during the month.

I told them I could develop the app for them for fun. I first started thinking, oh cool, I will be developing this in Swift, a cool project. I started looking around for libraries to parse Excel files and found one that could work (libxls). After an hour of pain trying to make it compile in my project, I realized that for such an app, do I really need super tight integration with the OS or the performance that native would provide? Then I realized the app might be used across different platforms, and I would probably need to fix bugs remotely and push updates, so I needed something super flexible. I decided to have a look at Node.js, and I quickly realized the power of not only the Node stack but, more than anything, the ecosystem.

Here is the recipe I ended up with:

  1. XLSJS for the parsing of the Excel documents.
// we parse our file coming from a drag and drop event
var workbook = XLS.read(e.target.result, {type: 'binary'});
// let's grab the sheet names
var sheets = workbook.SheetNames;
// grab the first sheet
var sheet1 = workbook.Sheets[sheets[0]];
// grab the range of cells to work with
var start = sheet1['!range'].s.r + 1;
var end = sheet1['!range'].e.r;
  2. PDFKit for the PDF generation

Having created AlivePDF back in the day, I was curious to see what the ecosystem looked like with Node, and PDFKit works beautifully: simple APIs that get the job done. Below is the core function that generates the PDF and returns its path for the email attachment:

// core PDF generation function
// takes a name and amount and writes a PDF to the disk
// that path is reused in the email function to send the attachment
function savePDFDocument(name, amount){
    var doc = new PDFDocument()
    var date = returnDate();
    var filePath = "invoices/"+name+" - "+date+".pdf";
    try {
        fs.openSync(filePath, 'r')
        fs.unlinkSync(filePath);
    } catch (e) {
    } finally {
        var stream = fs.createWriteStream(filePath, {flags : 'w'});
        doc.pipe(stream);

        doc.image('images/logo.jpg', 0, 15)

        doc.font('fonts/PalatinoBold.ttf')
            .fontSize(25)
            .text('Invoice for '+name , 100, 100)
            .text('Amount: $'+amount , 100, 200)

        doc.addPage()
           .fontSize(25)
           .text('Here is some vector graphics...', 100, 100)

        doc.save()
           .moveTo(100, 150)
           .lineTo(100, 250)
           .lineTo(200, 250)
           .fill("#FF3300")

        doc.scale(0.6)
           .translate(470, -380)
           .path('M 250,75 L 323,301 131,161 369,161 177,301 z')
           .fill('red', 'even-odd')
           .restore()

        doc.addPage()
           .fillColor("blue")
           .text('Here is a link!', 100, 100)
           //.underline(100, 100, 160, 27, color: "#0000FF")
           .link(100, 100, 160, 27, 'http://google.com/')

        doc.end()
    }
    return [name, filePath];
}

Yeah, returning an Array here, hmm, I miss tuples.

  3. NodeMailer and the SMTP Pool extension for sending large batches of emails.

Here again, I remember writing an SMTP library in AS3 on top of flash.net.Socket for this. NodeMailer works seamlessly; here is the core function that takes the path of the PDF and the name of the person to send it to, and does the job:

// core email sending function
function sendEmail (to, from, names){
    var mailOptions = {
        from: from, // sender address
        to: to, // list of receivers
        subject: 'Invoice July 2014', // Subject line
        html: 'Hi '+names[0]+", please find attached your invoice for the July 2014 tuition.", // html body
   // html: '<b>Hello world 

2014-07-13

22:05:50, ByteArray.org
Swift: Preloading an image and displaying it

Note: Source code (RemoteImage) available on Github

Yesterday, I explained how to use the NSURLConnection API to load a remote JSON file. Today I will show you how to use the same API to load a remote image. In fact, NSURLConnection, similar to URLStream in AS3, can load pretty much anything; it basically downloads raw bytes, and once loaded, you are free to do whatever you want with them: serialize them to JSON, or pass them to a UIImage object, and there you go, you have your image ready to be displayed.

I commented each line which should be pretty straightforward:

import SpriteKit

class GameScene: SKScene {
    
    // our properties
    var bytes: NSMutableData?
    var totalBytes: Float64?
    let label: SKLabelNode = SKLabelNode(fontNamed: "Verdana")
    
    override func didMoveToView(view: SKView) {
        // we create our text label for the preloading infos
        label.position = CGPoint (x: 520, y: 380)
        addChild (label)
        
        // this is our remote end point (similar to URLRequest in AS3)
        let request = NSURLRequest(URL: NSURL(string: "https://s3.amazonaws.com/ooomf-com-files/yvDPJ8ZSmSVob7pRxIvU_IMG_40322.jpg"))
        
        // this is what creates the connection and dispatches the various events to track progression, etc.
        let loader = NSURLConnection(request: request, delegate: self, startImmediately: true)
    }
    
    func connection(connection: NSURLConnection!, didReceiveResponse response: NSURLResponse!) {
        // we initialize our buffer
        self.bytes = NSMutableData()
        
        // we grab the total bytes expected to load
        totalBytes = Float64(response.expectedContentLength)
    }
    
    func connection(connection: NSURLConnection!, didReceiveData conData: NSData!) {
        // we append the bytes as they come in
        self.bytes!.appendData(conData)

        // we calculate our ratio
        // we divide the loaded bytes by the total bytes to get the ratio, and we multiply by 100
        // note that we floor the value
        var ratio = floor((Float64(self.bytes!.length) / totalBytes!) * 100)
        
        // we cast to Int to remove the decimal and concatenate with %
        self.label.text = String (Int(ratio)) + " %"
    }
    
    func connectionDidFinishLoading(connection: NSURLConnection!) {
        // we create a UIImage out of the completed bytes we loaded
        let imageBytes = UIImage(data: self.bytes)
        
        // we create a texture
        let texture = SKTexture(image: imageBytes)
        
        // then a sprite
        let sprite = SKSpriteNode(texture: texture)
        
        // we calculate the ratio so that our image can fit in the canvas size and be scaled appropriately
        var scalingRatio = min (self.view.bounds.width/sprite.size.width, self.view.bounds.height/sprite.size.height)
        
        // we apply the scaling
        sprite.xScale = scalingRatio
        sprite.yScale = scalingRatio
        
        // we position our image
        sprite.position = CGPoint (x: 510, y: 380)
        
        // we remove the percentage label
        label.removeFromParent()
        
        // we add our final image to the display list
        addChild(sprite)
    }
    
    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}

During preload, this gives us the following:

Preloading

Once loading is complete, our image is displayed:

Loaded image

I hope this code might be helpful to you guys!


2014-07-12

19:24:41, ByteArray.org
Swift: Loading and parsing a remote JSON file

Note: Source code (RemoteJSON) available on Github

Loading something remotely is a pretty simple task in Flash. We can use the URLStream or URLLoader object, a URL is wrapped inside a URLRequest object, we call the load() API and we are done.

So I was wondering what it looks like in Swift, specifically loading a JSON file, which is a common thing to do today, so here is a little snippet for that.

First, we use the NSURLRequest object (not that different, right?) to specify our remote URL:

// this is our remote end point (similar to URLRequest in AS3)
let request = NSURLRequest(URL: NSURL(string: "http://bytearray.org/wp-content/projects/json/colors.json"))

Now, we need to use that object to connect over there, this is where the NSURLConnection comes into play, think of it as the equivalent of the URLStream object:

// this is what creates the connection and dispatches the various events to track progression, etc.
let loader = NSURLConnection(request: request, delegate: self, startImmediately: true)

As a first parameter, we pass our NSURLRequest object, then we pass our current class as a delegate handler, and we ask that the request starts immediately. We could have skipped that and called the NSURLConnection.start() API later.

Our data is now coming in, and we need to save it. With URLStream, the data would be gathered and stored internally, and calls on the URLStream object directly would let you read the data, just like you would do with a Socket connection. In Cocoa with Swift, it is a little bit different: we need to create our own buffer to store the data, and then, through the progress callback, append the bytes to it:

func connection(connection: NSURLConnection!, didReceiveData conData: NSData!) {
     self.bytes?.appendData(conData)
}

We are using the optional operator because we initialize our property later on, when we receive the first response from the server:

func connection(didReceiveResponse: NSURLConnection!, didReceiveResponse response: NSURLResponse!) {
    self.bytes = NSMutableData()
}

When data is done loading, we can parse it:

func connectionDidFinishLoading(connection: NSURLConnection!) {
        
      // we serialize our bytes back to the original JSON structure
      let jsonResult: Dictionary = NSJSONSerialization.JSONObjectWithData(self.bytes, options: NSJSONReadingOptions.MutableContainers, error: nil) as Dictionary<String, AnyObject>
        
      // we grab the colorsArray element
      let results: NSArray = jsonResult["colorsArray"] as NSArray

      // we iterate over each element of the colorsArray array
      for item in results {
          // we convert each key to a String
          var name: String = item["colorName"] as String
          var color: String = item["hexValue"] as String
          println("\(name): \(color)")
      }
}

Note that we are casting our result to a Dictionary that we can query later on to extract our colorsArray key. The code above gives us the following output:

red: #f00
green: #0f0
blue: #00f
cyan: #0ff
magenta: #f0f
yellow: #ff0
black: #000

Note that we did not explicitly register listeners for the various callbacks that are called during loading and when loading is complete. This is done implicitly when we passed our current instance (self) as the delegate (when we created the NSURLConnection object). The delegate functions are called because they follow a specific naming convention. Have a look at the final complete code:

import SpriteKit

class GameScene: SKScene {
   
    var bytes: NSMutableData?
    
    override func didMoveToView(view: SKView) {
        
        // this is our remote end point (similar to URLRequest in AS3)
        let request = NSURLRequest(URL: NSURL(string: "http://bytearray.org/wp-content/projects/json/colors.json"))
        
        // this is what creates the connection and dispatches the various events to track progression, etc.
        let loader = NSURLConnection(request: request, delegate: self, startImmediately: true)
    }
    
    func connection(didReceiveResponse: NSURLConnection!, didReceiveResponse response: NSURLResponse!) {
       self.bytes = NSMutableData()
    }
    
    func connection(connection: NSURLConnection!, didReceiveData conData: NSData!) {
        self.bytes?.appendData(conData)
    }
    
    func connectionDidFinishLoading(connection: NSURLConnection!) {
        
        // we serialize our bytes back to the original JSON structure
        let jsonResult: Dictionary = NSJSONSerialization.JSONObjectWithData(self.bytes, options: NSJSONReadingOptions.MutableContainers, error: nil) as Dictionary<String, AnyObject>
        
        // we grab the colorsArray element
        let results: NSArray = jsonResult["colorsArray"] as NSArray

        // we iterate over each element of the colorsArray array
        for item in results {
            // we convert each key to a String
            var name: String = item["colorName"] as String
            var color: String = item["hexValue"] as String
            println("\(name): \(color)")
        }
    }

    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}

The didReceiveResponse and didReceiveData parameters allow us to specify, through the method signature, which method is assigned to each event as a handler. Pretty succinct and efficient. The only exception is the connectionDidFinishLoading handler, which needs to use that exact name, and that's it, we are done!


2014-07-08

13:17:37, ByteArray.org
Swift: ARC vs Flash GC

In ActionScript 3, memory is managed with the help of a garbage collector that allocates and deallocates objects through the application lifecycle. The garbage collector (GC) allocates memory when a new object is created, scans the object graph periodically, detects unreferenced objects and deallocates them, which is pretty useful. AS3 is not the only language that relies on garbage collection; C# with Mono, JavaScript and Java all rely on garbage collection. On paper it sounds great, but any developer who has built more complex content on a GC based platform will tell you that it's not all rosy. Even though GC makes the developer's life easy at first, ActionScript 3 developers' worst enemy today is actually the GC. So why is that?

Unpredictability

First, garbage collection is completely unpredictable. I remember, when teaching ActionScript 3 to students, that the idea that objects would be deallocated "at some point" in time, but nobody knows when, was something pretty hard to grasp. Actually, it was even possible that objects would never get deallocated if the garbage collector never decided to kick in. So how do you test this? In ActionScript 3, to test/profile an application, it is possible to trigger the GC from the Flash Builder profiler, but also from AS3 with the System.gc() API, or even better with Adobe Scout, which also provides information on which objects are eligible or just got deallocated.

In the code below, we set the sprite reference to null, note that this does not trigger anything, our sprite is still alive:

import flash.display.Sprite;

var mySprite:Sprite = new Sprite();

mySprite.addEventListener ( Event.ENTER_FRAME, onFrame );

function onFrame ( e: Event ):void
{
	trace ('I am alive!');
}

// we dereference the object
// collection is not triggered, sprite is still alive and talking
mySprite = null;

At this point, our sprite is eligible for garbage collection, but it still remains in memory and still dispatches the Event.ENTER_FRAME event. To test whether our sprite will eventually be garbage collected, we can trigger the GC using the System.gc() API:

import flash.display.Sprite;

var mySprite:Sprite = new Sprite();

mySprite.addEventListener ( Event.ENTER_FRAME, onFrame );

function onFrame ( e: Event ):void
{
	trace ('I am alive!');
}

// we dereference the object
// collection is not triggered, sprite is still alive and talking
mySprite = null;

// collection is triggered, object is killed
System.gc()

Remember that the System.gc() API is a debug-only feature, so you cannot rely on it in production. This GC unpredictability can be pretty sneaky. Typically, you don't want the garbage collection to happen in the middle of something. In a game, where performance is crucial, you don't want garbage collection to happen right in the middle of the gameplay, but rather before the new level gets loaded, in other words at a time where the experience/performance is not impacted.

In Flash Player 11, we introduced a new API, System.pauseForGCIfCollectionImminent(), which helped developers influence when the garbage collection would kick in. You still could not control the GC directly, but it was an improvement.

Synchronous (UI lock)

The reason why you don't want garbage collection to happen at moments you don't control is that GC in Flash happens on the UI thread, and therefore locks the UI when collection happens. The more complex your scene becomes, and the bigger the object graphs are, the longer the pause will be. In a game, this is a showstopper, because UI lock ruins the experience and frustrates users.

That's why AS3 developers have developed workarounds over the years to prevent the GC from being triggered, object pools being one of the strategies. The idea behind object pooling is that instead of constantly allocating new objects and pressuring the GC, a set of objects is allocated during app initialization, and once objects are done doing their tasks, they are placed back inside a pool for later reuse. Keep in mind that this will do the job, but will consume more memory, as objects never get deallocated; you win on the performance side, but lose on memory footprint. You can find more details about object pooling here.

With Swift?

In Swift, thanks to ARC (Automatic Reference Counting), you also don't need to manage memory manually by allocating memory and releasing objects, like with Objective-C before ARC was introduced. ARC counts the number of references pointing to objects, and when the number of references reaches zero, it deallocates them. Pretty similar to ActionScript 3 you may say, but with a few notable differences.

Compile time vs runtime

In AS3, all this GC work happens at runtime; the bytecode generated by the ActionScript 3 compiler does not emit any specific calls to allocate or release memory. If you were to disassemble the code generated by the Swift compiler, you would see the calls to allocate and release objects, as if you had written them manually: in Swift, the compiler does all that work for you.

Synchronous deallocation

In AS3, as we have seen before, setting the last remaining reference to an object to null won't kill the object; it will make it eligible for garbage collection. In Swift, it will immediately deallocate the object synchronously, and that is a big difference. You can track initialization and deinitialization through the use of the init and deinit methods:

class Hero {
    
    let name: String
    
    init ( name: String ) {
        self.name = name
        println ("\(self.name) got initialized")
    }
    
    deinit {
        println ("\(self.name) got deinitialized")
    }
}

// we create our hero
// note the use of the optional (?) operator
// using this operator allows the var bob to be set to nil
var bob: Hero? = Hero(name: "Bob")
        
// we set the only reference to nil (equivalent of null)
// the object is immediately destroyed and the deinit method is called
bob = nil

If we run our application, we see in the output window:

Bob got initialized
Bob got deinitialized

Because the developer has full control over when objects are deallocated, objects are killed sooner, optimizing memory consumption (there is no pool of eligible objects in memory waiting to be disposed). It is also a more incremental approach that prevents the UI from locking. Most GCs, on most platforms, as beautiful and complex as they are, will always impact the UI thread.

Circular references (retain cycles)

In AS3, if two objects were unreachable from the roots of the application (Stage, Loader), but were still referencing each other, they would still be garbage collected.

In Swift, if two objects are unreachable from the roots of your application, but are still referencing each other, you have what is called a circular reference or retain cycle, and these two objects will never get deallocated, probably causing a memory leak. The GC in Flash Player/AIR solved this through the combination of deferred reference counting and conservative mark-and-sweep. The mark-and-sweep piece is what handles circular references (retain cycles), and that is a big advantage of garbage collection in Flash Player/AIR.

To deal with this in Swift, you would use weak references to express that if the last reference to an object is weak, the object should still be deallocated. Swift also introduces the concept of unowned references, which are non-optional. This brings more granularity to object dependencies; you can read more about it here.

I hope you enjoyed this quick overview of differences between the Flash memory model (GC) and Swift (ARC).


2014-06-12

21:21:03, zero point nine
Drive Wiki Maker

I’ve been keeping a publicly viewable wiki of developer notes for a while now, but when I switched to taking notes on Google Drive, I still wanted a way to make them publicly accessible, which is what led to Drive Wiki Maker.

Drive Wiki Maker is a node.js service which periodically scrapes documents from a Google Drive directory to make them browsable through a public front-end.

And here’s my revamped dev wiki which uses it.


2014-06-10

17:17:36, ByteArray.org
Swift: Bitmap data and filters

Note: Source code (Filter Example) available on Github

I was wondering how bitmap programming works on iOS. Just like with BitmapData in Flash, I wanted to perform simple painting operations, but I was also curious about how filters work. In AS3, you would use the applyFilter API on BitmapData, so here is how things work on iOS/MacOS with Swift and the Quartz/Core Image APIs.

Graphics Context and the Quartz drawing engine

In Flash, to perform drawing operations, you create a BitmapData object and use the pixel APIs defined on it. On iOS/MacOS, things are different: you work with a graphics context (which is offscreen) that you manipulate through high-level functions like CGContextSetRGBFillColor. The CG at the beginning stands for Core Graphics, which leverages the powerful Quartz drawing engine behind the scenes.

To initiate the drawing, we create a context by specifying its size, whether it is opaque, and its scaling:

UIGraphicsBeginImageContextWithOptions(CGSize(width: 200, height: 200), true, 1)

Note that our bitmap will be 200 by 200 px and opaque. We pass 1 for the scaling because, to get the size of the bitmap in pixels, you multiply the width and height values by the scale parameter; a 200 by 200 point context at a scale of 2 would, for example, be backed by a 400 by 400 pixel bitmap. If we had passed 0, it would have used the scaling of the device's screen.

We now have our context created offscreen, ready for drawing commands to be passed, but we still don't have a reference to it. The previous high-level function UIGraphicsBeginImageContextWithOptions creates the context but does not give us a reference to it; for this, we call the UIGraphicsGetCurrentContext API:

let context = UIGraphicsGetCurrentContext()

OK, now we are ready to draw, so we use the high-level APIs for that; the names are pretty explicit about their purpose:

CGContextSetRGBFillColor (context, 1, 1, 0, 1)
CGContextFillRect (context, CGRectMake (0, 0, 200, 200))
CGContextSetRGBFillColor (context, 1, 0, 0, 1)
CGContextFillRect (context, CGRectMake (0, 0, 100, 100))
CGContextSetRGBFillColor (context, 1, 1, 0, 1)
CGContextFillRect (context, CGRectMake (0, 0, 50, 50))
CGContextSetRGBFillColor (context, 0, 0, 1, 0.5);
CGContextFillRect (context, CGRectMake (0, 0, 50, 100))

We are now drawing offscreen. Note that at this point this is not really a bitmap yet that can be displayed; these are really just raw painted pixels. To display them on screen, we need a high-level wrapper, just like the relationship between Bitmap and BitmapData in Flash. So we will use the UIGraphicsGetImageFromCurrentImageContext API, which will basically take a snapshot/raster of our drawing:

var image = UIGraphicsGetImageFromCurrentImageContext()

At this point, we could just display the UIImage object returned here. Because I am using SpriteKit for my experiments, we need to wrap the UIImage object into an SKSpriteNode object that holds an SKTexture object, so that gives us:

// we create a texture, pass the UIImage
var texture = SKTexture(image: image)
// wrap it inside a sprite node
var sprite = SKSpriteNode(texture:texture)
// we scale it a bit
sprite.setScale(0.5);
// we position it
sprite.position = CGPoint (x: 510, y: 280)
// let's display it
self.addChild(sprite)

This is what you get:

Simple bitmap data

Here is the full code:

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        /* Setup your scene here */
        
        // we create the graphics context
        UIGraphicsBeginImageContextWithOptions(CGSize(width: 200, height: 200), true, 1)
        
        // we retrieve it
        var context = UIGraphicsGetCurrentContext()
        
        // we issue drawing commands
        CGContextSetRGBFillColor (context, 1, 1, 0, 1)
        CGContextFillRect (context, CGRectMake (0, 0, 200, 200))
        CGContextSetRGBFillColor (context, 1, 0, 0, 1)
        CGContextFillRect (context, CGRectMake (0, 0, 100, 100))
        CGContextSetRGBFillColor (context, 1, 1, 0, 1)
        CGContextFillRect (context, CGRectMake (0, 0, 50, 50))
        CGContextSetRGBFillColor (context, 0, 0, 1, 0.5)
        CGContextFillRect (context, CGRectMake (0, 0, 50, 100))
        
        // we query an image from it
        var image = UIGraphicsGetImageFromCurrentImageContext()
        
        // we create a texture, pass the UIImage
        var texture = SKTexture(image: image)
        // wrap it inside a sprite node
        var sprite = SKSpriteNode(texture:texture)
        // we scale it a bit
        sprite.setScale(0.5);
        // we position it
        sprite.position = CGPoint (x: 510, y: 380)
        // let's display it
        self.addChild(sprite)
    }
    
    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}

Pretty simple, right? Now you can move your sprite, animate it, scale it, etc. But what if we have an existing image and we want to apply filters to it? In Flash, a loaded bitmap resource would give us a Bitmap object with a bitmapData property pointing to the bitmap data that we could work with. How does that work here? This is where Core Image comes into play.

Core Image

This is where it gets really cool. If you need to apply filters or perform any video or image processing in real time, you use the powerful Core Image APIs. So let's take the image below, unprocessed:

Ayden no filter

Now, let's apply a filter with the code below. In this example we use CIPhotoEffectTransfer, which applies a nice Instagram-like effect. Take a look at all the filters available; the capabilities are pretty much endless:

// we create Core Image context
var ciContext = CIContext(options: nil)
// we create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
var coreImage = CIImage(image: image)
// we pick the filter we want
var filter = CIFilter(name: "CIPhotoEffectTransfer")
// we pass our image as input
filter.setValue(coreImage, forKey: kCIInputImageKey)
// we retrieve the processed image
var filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
// returns a Quartz image from the Core Image context
var filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
// this is our final UIImage ready to be displayed
var filteredImage = UIImage(CGImage: filteredImageRef);

This gives us the following result:

Ayden filtered

And here is the full code:

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        /* Setup your scene here */
        
        // we reference our image (path)
        var data = NSData (contentsOfFile: "/Users/timbert/Documents/Ayden.jpg")
        // we create a UIImage out of it
        var image = UIImage(data: data)
        
        // we create Core Image context
        var ciContext = CIContext(options: nil)
        // we create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
        var coreImage = CIImage(image: image)
        // we pick the filter we want
        var filter = CIFilter(name: "CIPhotoEffectTransfer")
        // we pass our image as input
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        // we retrieve the processed image
        var filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
        // returns a Quartz image from the Core Image context
        var filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
        // this is our final UIImage ready to be displayed
        var filteredImage = UIImage(CGImage: filteredImageRef);
        
        // we create a texture, pass the UIImage
        var texture = SKTexture(image: filteredImage)
        // wrap it inside a sprite node
        var sprite = SKSpriteNode(texture:texture)
        // we scale it a bit
        sprite.setScale(0.5);
        // we position it
        sprite.position = CGPoint (x: 510, y: 380)
        // let's display it
        self.addChild(sprite)
    }
    
    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}
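
By the way, if you are wondering which names you can pass to CIFilter, Core Image can give you the list at runtime. Here is a minimal sketch (assuming the standard filterNamesInCategory API and the built-in category constant):

import CoreImage

// prints the names of every built-in Core Image filter, CIPhotoEffectTransfer included
let filterNames = CIFilter.filterNamesInCategory(kCICategoryBuiltIn)
for name in filterNames {
    println(name)
}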

We can also apply filters and play with their parameters to customize them; we could even use shaders for more flexibility, but more on that later :) In the code below, we apply a pinch distortion effect to our initial image, which gives us the following:

Simple distortion

And here is the full code:

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        /* Setup your scene here */
        
        // we reference our image (path)
        var data = NSData (contentsOfFile: "/Users/timbert/Documents/Ayden.jpg")
        // we create a UIImage out of it
        var image = UIImage(data: data)
        
        // we create Core Image context
        var ciContext = CIContext(options: nil)
        // we create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
        var coreImage = CIImage(image: image)
        // we pick the filter we want
        var filter = CIFilter(name: "CIPinchDistortion")
        // we pass our image as input
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        // we pass a custom value for the inputCenter parameter, note the use of the CIVector type here
        filter.setValue(CIVector(x: 300, y: 200), forKey: kCIInputCenterKey)
        // we retrieve the processed image
        var filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
        // returns a Quartz image from the Core Image context
        var filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
        // this is our final UIImage ready to be displayed
        var filteredImage = UIImage(CGImage: filteredImageRef);
        
        // we create a texture, pass the UIImage
        var texture = SKTexture(image: filteredImage)
        // wrap it inside a sprite node
        var sprite = SKSpriteNode(texture:texture)
        // we scale it a bit
        sprite.setScale(0.5);
        // we position it
        sprite.position = CGPoint (x: 510, y: 380)
        // let's display it
        self.addChild(sprite)
    }
    
    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}

Now, can we apply a filter to the first bitmap we created through drawing commands? Sure. Here is the code for a blur effect:

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        /* Setup your scene here */
        
        // we create the graphics context
        UIGraphicsBeginImageContextWithOptions(CGSize(width: 200, height: 200), true, 1)
        
        // we retrieve it
        var context = UIGraphicsGetCurrentContext()
        
        // we issue drawing commands
        CGContextSetRGBFillColor (context, 1, 1, 0, 1)
        CGContextFillRect (context, CGRectMake (0, 0, 200, 200))
        CGContextSetRGBFillColor (context, 1, 0, 0, 1)
        CGContextFillRect (context, CGRectMake (0, 0, 100, 100))
        CGContextSetRGBFillColor (context, 1, 1, 0, 1)
        CGContextFillRect (context, CGRectMake (0, 0, 50, 50))
        CGContextSetRGBFillColor (context, 0, 0, 1, 0.5)
        CGContextFillRect (context, CGRectMake (0, 0, 50, 100))
        
        // we query an image from it
        var image = UIGraphicsGetImageFromCurrentImageContext()
        
        // we create Core Image context
        var ciContext = CIContext(options: nil)
        // we create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
        var coreImage = CIImage(image: image)
        // we pick the filter we want
        var filter = CIFilter(name: "CIGaussianBlur")
        // we pass our image as input
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        // we retrieve the processed image
        var filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
        // returns a Quartz image from the Core Image context
        var filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
        // this is our final UIImage ready to be displayed
        var filteredImage = UIImage(CGImage: filteredImageRef);
        
        // we create a texture, pass the UIImage
        var texture = SKTexture(image: filteredImage)
        // wrap it inside a sprite node
        var sprite = SKSpriteNode(texture:texture)
        // we scale it a bit
        sprite.setScale(0.5);
        // we position it
        sprite.position = CGPoint (x: 510, y: 380)
        // let's display it
        self.addChild(sprite)
    }
    
    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}

And here is the result:

Simple Bitmap data filtered

I hope you guys enjoyed it! Lots of possibilities, lots of fun with these APIs.


2014-06-08

14:30:07, ByteArray.org
Swift: Touch event and physics field

Note: Source code (Physics Field Example) available on Github

iOS 8 introduces a set of new features for SpriteKit, and one of them is physics fields (SKFieldNode). They simulate physical forces and, as a result, automatically affect all the physics bodies living in the same node tree; very useful for games or any other kind of experiment.

Multiple kinds of fields are introduced: vortex, magnetic, spring, and more. An SKFieldNode is invisible but still needs to be in the same node tree (display list) to affect physics bodies.
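
Before the full example below, here is a minimal sketch of how the different field types are created (assuming the iOS 8 SKFieldNode factory methods and property names; swapping one field type for another is a one-line change):

import SpriteKit

// a few of the field types exposed as factory methods on SKFieldNode in iOS 8
let magnetic = SKFieldNode.magneticField()
let vortex = SKFieldNode.vortexField()
let spring = SKFieldNode.springField()
let radialGravity = SKFieldNode.radialGravityField()

// every field exposes the same basic knobs
vortex.strength = 2.8
vortex.falloff = 1.0
vortex.enabled = false // fields can also be switched on and off at runtime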

The API is very simple. In the full example below, I have the field follow the touch position to simulate a force field:

// Think of the class below as your Main class, basically the Stage
// Note: The code below is for iOS, you can run it with the iOS simulator

// this imports higher level APIs like Starling
import SpriteKit

// canvas size for the positioning
let canvasWidth: UInt32 = 800
let canvasHeight: UInt32 = 800

// From the docs:
// When a physics body is inside the region of an SKFieldNode object, that field node's categoryBitMask property is
// compared to this physics body's fieldBitMask property by performing a logical AND operation.
// If the result is a non-zero value, then the field node's effect is applied to the physics body.
let fieldMask : UInt32 = 1
let categoryMask: UInt32 = 1

// our main logic inside this class
// we subclass the SKScene class by using the :TheType syntax below
class GameScene: SKScene {
    
    // our field node member
    let fieldNode: SKFieldNode
    
    // the NSCoder abstract class declares the interface used by concrete subclasses (thanks 3r1d!)
    // see: http://stackoverflow.com/users/2664437/3r1d
    init(coder decoder: NSCoder!){
        // we create a magnetic field
        fieldNode = SKFieldNode.magneticField()
        // we define its body
        fieldNode.physicsBody = SKPhysicsBody(circleOfRadius: 80)
        // we set its category mask so that bodies with a matching fieldBitMask are affected
        fieldNode.categoryBitMask = categoryMask
        // strength of the field
        fieldNode.strength = 2.8
        // we initialize the superclass
        super.init(coder: decoder)
    }
    
    // this gets triggered automatically when presented to the view, put initialization logic here
    override func didMoveToView(view: SKView) {
        
        // we set the background color to black, self is the equivalent of this in Flash
        self.scene.backgroundColor = UIColor(red: 0, green: 0, blue: 0, alpha: 1)
        // we live in a world with gravity
        self.physicsWorld.gravity = CGVectorMake(0, -1)
        // we put constraints on the top, left, right, and bottom so that our shapes can bounce off them
        let physicsBody = SKPhysicsBody (edgeLoopFromRect: self.frame)
        // we set the body defining the physics to our scene
        self.physicsBody = physicsBody
        // we add it to the display list
        self.addChild(fieldNode)
        
        // let's create 300 bouncing cubes
        for i in 1..300 {
            
            // SKShapeNode is a primitive for drawing like with the AS3 Drawing API
            // it has built-in support for primitives like a circle or a rectangle, here we pass a rectangle
            let shape = SKShapeNode(rect: CGRectMake(-10, -10, 20, 20))
            // we set the color and line style
            shape.strokeColor = UIColor(red: 255, green: 0, blue: 0, alpha: 1)
            // we set the stroke width
            shape.lineWidth = 4
            // we set initial random positions
            shape.position = CGPoint (x: CGFloat(arc4random()%(canvasWidth)), y: CGFloat(arc4random()%(canvasHeight)))
            // we add each circle to the display list
            self.addChild(shape)
            // we define the physics body
            shape.physicsBody = SKPhysicsBody(circleOfRadius: shape.frame.size.width/2)
            // from the docs:
            /* The force generated by this field is directed along a line that is determined by calculating the cross-product
            between the direction of the physics body's velocity property and a line traced between the field node and the
            physics body. The force has a magnitude proportional to the field's strength property and the physics body's
            charge and velocity properties. */
            // we define a mass for the gravity
            shape.physicsBody.mass = 0.9
            // the charge and field strength are two properties that are fun to tweak
            shape.physicsBody.charge = 0.6
            // we set the field mask
            shape.physicsBody.fieldBitMask = fieldMask
            // this will allow the balls to rotate when bouncing off each other
            shape.physicsBody.allowsRotation = true
        }
    }
    
    // we capture the touch move events by overriding touchesMoved method
    override func touchesMoved(touches: NSSet!, withEvent event: UIEvent!) {
        // we grab the touch location in the current scene's (self) coordinate space
        let touch = event.allTouches().anyObject().locationInNode(self)
        // we apply the position of the touch to the physics field node
        self.fieldNode.position = touch
    }
    
    // magic of the physics engine, we don't have to do anything here
    override func update(currentTime: CFTimeInterval) {
    }
}

Here is what this produces:


2014-06-06

05:54:18, ByteArray.org
Swift: Bouncing balls with built-in physics engine

Note: Source code (Bouncing Balls Example) available on Github

I have been playing tonight with the built-in physics engine in SpriteKit. Like most Flash developers, I am sure you have played with or used the Box2D engine in some projects. I have to admit, its APIs have always been pretty brutal to use.

The engine that comes out of the box with SpriteKit has a much more approachable API, very intuitive. I was able to discover a large portion of it by just playing, tweaking and guessing some APIs.

If you want to test this code, paste it inside GameScene.swift of your SpriteKit project:

// Consider this your Main class, basically the Stage
// Note: The code below is for iOS, you can run it with the iOS simulator

// this imports higher level APIs like Starling
import SpriteKit

// canvas size for the positioning
let canvasWidth: UInt32 = 800
let canvasHeight: UInt32 = 800

// our main logic inside this class
// we subclass the SKScene class by using the :TheType syntax below
class GameScene: SKScene {

    // this gets triggered automatically when presented to the view, put initialization logic here
    override func didMoveToView(view: SKView) {

        // we set the background color to black, self is the equivalent of this in Flash
        self.scene.backgroundColor = UIColor(red: 0, green: 0, blue: 0, alpha: 1)

        // we live in a world with gravity on the y axis
        self.physicsWorld.gravity = CGVectorMake(0, -6)
        // we put constraints on the top, left, right, and bottom so that our balls can bounce off them
        let physicsBody = SKPhysicsBody (edgeLoopFromRect: self.frame)
        // we set the body defining the physics to our scene
        self.physicsBody = physicsBody
        
        // let's create the bouncing balls
        for i in 1..30 {
            
            // SKShapeNode is a primitive for drawing like with the AS3 Drawing API
            // it has built-in support for primitives like a circle, so we pass a radius
            let shape = SKShapeNode(circleOfRadius: 20)
            // we set the color and line style
            shape.strokeColor = UIColor(red: 255, green: 0, blue: 0, alpha: 0.5)
            shape.lineWidth = 4
            // we create a text node to embed text in our ball
            let text = SKLabelNode(text: String(i))
            // we set the font
            text.fontSize = 9.0
            // we nest the text label in our ball
            shape.addChild(text)
            
            // we set initial random positions
            shape.position = CGPoint (x: CGFloat(arc4random()%(canvasWidth)), y: CGFloat(arc4random()%(canvasHeight)))
            // we add each circle to the display list
            self.addChild(shape)
            
            // this is the most important line, we define the body
            shape.physicsBody = SKPhysicsBody(circleOfRadius: shape.frame.size.width/2)
            // this defines the mass, roughness and bounciness
            shape.physicsBody.friction = 0.3
            shape.physicsBody.restitution = 0.8
            shape.physicsBody.mass = 0.5
            // this will allow the balls to rotate when bouncing off each other
            shape.physicsBody.allowsRotation = true
        }
    }

    // magic of the physics engine, we don't have to do anything here
    override func update(currentTime: CFTimeInterval) {
    }
}

And here is the video preview of what this should give you:


2014-06-04

13:50:18, ByteArray.org
Swift: Types conversion, subclassing, casting and drawing

Here is another Swift example, showing how to convert types, subclass a native one, perform casting and draw multiple shapes (like with the Drawing API) and make them move:

// Playground - noun: a place where people can play
// Consider this your Main class, basically the Stage
// Note: The code below is for OSX Playground, not iOS

// this imports higher level APIs like Starling
import SpriteKit
import XCPlayground

// we create our custom MyCircle class extending SKShapeNode
// we define two properties for the destination properties
// we use Float for a 32 bit floating point, we would use Double if we needed 64 bit precision
class MyCircle : SKShapeNode {
    var destX: Float = 0.0
    var destY: Float = 0.0
}

// canvas size for the positioning
let canvasWidth: UInt32 = 500
let canvasHeight: UInt32 = 500

// our main logic inside this class
// we subclass the SKScene class by using the :TheType syntax below
class GameScene: SKScene {
    
    // this gets triggered automatically when presented to the view, put initialization logic here
    override func didMoveToView(view: SKView) {
        
        // let's iterate 20 times
        for i in 1..20 {
            // we create new instances of our MyCircle class
            // note that we don't use new, this is done implicitly
            // SKShapeNode is a primitive for drawing like with the AS3 Drawing API
            // it has built-in support for primitives like a circle, so we pass a radius
            let shape = MyCircle(circleOfRadius: 10)
            // we set initial position
            shape.position = CGPoint (x: CGFloat(arc4random()%(canvasWidth)), y: CGFloat(arc4random()%(canvasHeight)))
            // we set random destination values
            // we convert the random values returned as Int to Float
            // Note the use of arc4random() as an equivalent to Math.random()
            shape.destX = Float(arc4random()%(canvasWidth))
            shape.destY = Float(arc4random()%(canvasHeight))
            // we add each circle to the display list
            self.addChild(shape)
        }
    }
    
    // we override update, which is like an Event.ENTER_FRAME or advanceTime in Starling
    override func update(currentTime: CFTimeInterval) {
        // to remove ambiguity we annotate ball
        for ball: AnyObject in self.children{
            // we downcast with the as keyword before using the MyCircle custom properties
            let currentBall = ball as MyCircle
            // we apply easing motion to the balls
            currentBall.position.x += (CGFloat(currentBall.destX)-currentBall.position.x)*0.1
            currentBall.position.y += (CGFloat(currentBall.destY)-currentBall.position.y)*0.1
            // we calculate the difference between positions (distance)
            // by default x and y positions use the CGFloat type that can accept Float and Double values, we cast it to Float to be consistent with our destX and destY properties
            let diffX = Float(currentBall.position.x) - currentBall.destX
            let diffY = Float(currentBall.position.y) - currentBall.destY
            // if both distances are under 1 pixel, we are done, let's set a new destination
            if ( abs(diffX) <= 1.0 && abs(diffY) <= 1.0 ){
                currentBall.destX = Float(arc4random()%(canvasWidth))
                currentBall.destY = Float(arc4random()%(canvasHeight))
            }
        }
    }
}

// we create our scene (from our GameScene above), like a main canvas
let scene = GameScene(size: CGSize(width:CGFloat(canvasWidth), height: CGFloat(canvasHeight)))

// we need a view
let view = SKView(frame: NSRect(x: 0, y: 0, width: CGFloat(canvasWidth), height: CGFloat(canvasHeight)))

// we link both
view.presentScene(scene)

// display it, XCPShowView is a global function that paints the final scene
XCPShowView("result", view)

Which should give you this:

Types conversion, subclassing, casting and drawing


2014-06-03

20:14:34, ByteArray.org
Hello Swift/Playgrounds

Wow, it's been a while. I have to say, the combination of Swift and Playgrounds (in Xcode 6) brings a lot of fun. I spent the night playing with it, and I wanted to share a piece of code you can try to get started quickly. I could talk about the details of the language and how nice it is; as an F# lover, I love that Swift got some inspiration from it (and from other languages too).

Combined with SpriteKit, which is similar to Starling, you guys should feel right at home.

So there we go. I will post more articles in the coming weeks (if you guys like the idea) about my Swift experiments and how the language works, but let's start with a classic Flash dev Hello World: some simple API calls to make something move over time with an Event.ENTER_FRAME-style update:

// Playground - noun: a place where people can play
// Consider this your Main class, basically the Stage
// Note: The code below is for OSX Playground, not iOS

// this imports higher level APIs like Starling
import SpriteKit
import XCPlayground

// our main logic inside this class
class GameScene: SKScene {
    
    // properties initialization
    // note that the spriteNode property below is not initialized
    // we initialize it through the init initializer below
    var spriteNode: SKSpriteNode
    var i = 0.0
    
    // this is our initializer, called once when the scene is created
    // we do our initialization/setup here
    init(size: CGSize){
        
        // let's grab an image, like [Embed] in AS3, results in image data like BitmapData
        // let is to declare a constant, var a variable
        // note that we don't explicitly type things, although you sometimes can to resolve ambiguity
        // types are inferred by default, and not statically typing does not cause performance issues
        let sprite = NSImage(contentsOfFile:"/Users/timbert/Documents/Adium.png")
        
        // let's create a bitmap, like Bitmap in AS3
        let myTexture = SKTexture(image: sprite)
    
        // let's wrap it inside a node
        spriteNode = SKSpriteNode(texture: myTexture)
        
        // we position it, we could scale it, etc.
        spriteNode.position = CGPoint (x: 250, y: 250)
        
        // we complete the initialization by initializing the superclass
        super.init(size: size)
    }
    
    // this gets triggered automatically when the scene is presented by the view
    // similar to Event.ADDED_TO_STAGE
    override func didMoveToView(view: SKView) {
        
        // let's add it to the display list
        self.addChild(spriteNode)
    }
    
    // we override update, which is like an Event.ENTER_FRAME or advanceTime in Starling
    override func update(currentTime: CFTimeInterval) {
        i += 0.1
        // oscillation with sin, like Math.sin
        var osc = 1.5 + sin(CDouble(i))
        // let's scale it
        spriteNode.setScale(CGFloat(osc))
        // we could have retrieved spriteNode also with the code below, similar to getChildAt(0)
        //let node = self.children[0] as SKSpriteNode
    }
}

// we create our scene (from our GameScene above), like a main canvas
let scene = GameScene(size: CGSize(width: 500, height: 500))

// we need a view
let view = SKView(frame: NSRect(x: 0, y: 0, width: 500, height: 500))

// we link both
view.presentScene(scene)

// display it, XCPShowView is a global function that paints the final scene
XCPShowView("result", view)

This should give you the following animation (scaled bitmap) in Playgrounds:

Swift Playgrounds Hello World


2014-03-31

20:51:55, zero point nine
Virtual Trackpad for Android

What started out as a quick test with sockets inevitably grew into a fully baked app. I published this over the winter break, but have just pushed an update, making it free with all features unlocked.

This is a quick demo:

I chose to create the desktop client in Java with the hope of being able to share the socket communications code between the desktop and Android clients. This actually worked out, without even a need for any platform-specific conditionals. A pleasant surprise to say the least.

Additionally, since both projects are Java based, I was able to stay in the same IDE (IntelliJ), which was nice. The desktop project uses SWT (a long-time Java UI library that uses OS-specific dependencies to allow for native OS-based UI widgets) and builds four different JARs: Windows 32-bit, Windows 64-bit, OS X 32-bit, and OS X 64-bit (yikes). In the last step of the build process, a script packages the OS X versions into app bundles.

Cursor movement is done using the Java Robot class. You’ll see that the mouse cursor can have “momentum” on release, which in practice can be super useful. And mouse acceleration is configurable using this one-liner, which I need to remember to find uses for more often:

double logBase(double value, double base) { return Math.log(value) / Math.log(base); }

The network logic is hand-rolled, using UDP sockets. The size of the encoded messages is kept as small as humanly possible. For example, a mouse-move command uses only 20 bytes: 4 bytes for a start-of-message token, 4 bytes for the command type (i.e., move the mouse), two floats for the cursor position, and then 4 bytes for an end-of-message token. As a result, at least in my testing, under decent network conditions mouse updating stays pegged at a steady 60 Hz.
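
For illustration only, here is a rough sketch of that 20-byte layout. The app's actual client code is Java; this is written in Swift for consistency with the other code on this page, and the token and command values are made up:

import Foundation

// hypothetical values, the real tokens and command ids belong to the app
let startToken: UInt32 = 0x000000AA
let moveCommand: UInt32 = 0x00000001
let endToken: UInt32 = 0x000000FF

// packs a mouse-move message: 4 + 4 + 4 + 4 + 4 = 20 bytes
// (endianness is glossed over here)
func encodeMouseMove(x: Float32, y: Float32) -> NSData {
    let data = NSMutableData()
    var start = startToken
    var command = moveCommand
    var px = x
    var py = y
    var end = endToken
    data.appendBytes(&start, length: 4)   // start-of-message token
    data.appendBytes(&command, length: 4) // command type (move the mouse)
    data.appendBytes(&px, length: 4)      // cursor x as a 4-byte float
    data.appendBytes(&py, length: 4)      // cursor y as a 4-byte float
    data.appendBytes(&end, length: 4)     // end-of-message token
    return data
}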

> Google Play


2014-03-27

18:57:29, zero point nine
Anime Color Palette Experiment
 

This was a novel attempt at ‘extracting’ color palettes from illustration-styled artwork (and specifically, from anime).

Starting with the assumption/requirement that shapes in the source image should have fairly well-defined boundaries, the image is partitioned into a collection of two-dimensional polygons (OpenCV’s findContours plus approxPolyDP).

Each chunk is then sampled for its surface area and its average color (based on a second assumption that each chunk will have a more or less uniform color). The resulting chunks’ colors are then grouped together based on “proximity” by treating HSV values as three-dimensional positions, and each color group is ranked by surface area, resulting in a final color palette.
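
The original implementation is C++ with cinder and OpenCV; here is a rough sketch of just the grouping step, written in Swift for consistency with the other code on this page, with hypothetical names and a naive greedy strategy:

import Foundation

struct Chunk {
    let h: Float, s: Float, v: Float   // average color of the polygon, as HSV
    let area: Float                    // surface area of the polygon
}

struct ColorGroup {
    var h: Float, s: Float, v: Float   // representative color of the group
    var area: Float                    // accumulated surface area
}

func groupByProximity(chunks: [Chunk], threshold: Float) -> [ColorGroup] {
    var groups = [ColorGroup]()
    for chunk in chunks {
        // merge into the first group that is "close enough" in HSV space
        var merged = false
        var i = 0
        while i < groups.count {
            let dh = groups[i].h - chunk.h
            let ds = groups[i].s - chunk.s
            let dv = groups[i].v - chunk.v
            if sqrt(dh * dh + ds * ds + dv * dv) < threshold {
                groups[i].area += chunk.area
                merged = true
                break
            }
            i += 1
        }
        // otherwise this chunk starts a new color group
        if !merged {
            groups.append(ColorGroup(h: chunk.h, s: chunk.s, v: chunk.v, area: chunk.area))
        }
    }
    // ranking the groups by accumulated area (biggest first) then gives the palette order
    return groups
}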

In the end, the results were mixed, but predictably better with artwork with well-defined outlines and flat colors.

As a bonus, though, since each chunk is represented internally as a polygon, they can be treated as independent objects, and can therefore be subjected to potentially interesting visual treatments.

This is demonstrated in the video by (predictably enough…) doing some tween-ey things on the z-axis. As well as a quick “pencil-drawn” effect.

FWIW, the still images are from Kill La Kill, and the video is the OP of UN-GO.

From 3/2012. C++, cinder, OpenCV.


2014-03-21

02:45:24, zero point nine
Spherical Racing Game Prototype Level Editor

Here’s a video walkthrough of the level editor. The UI needs work, but most of the major features are in place…

There are five main sections:

- Track: The racetrack is drawn directly on the sphere by pressing and dragging. A maximum turning angle can be used to ensure that turns in the track appear smooth.

- Shape: The vertices of the planet’s mesh can be extruded away from or towards the sphere center to create variations in its topology.

- Texture: Images can be burned into the planet’s texture, essentially by mapping image pixels to texels using raycasting. The algorithm runs on multiple threads. The texture can also be color-filled using three independent colors to create a continuous gradient around the sphere.

- Structures: 3D model obstacles can be placed on or around the planet. They’re treated as oriented bounding boxes for the collision detection.

- Preview: The user can jump in and out of preview mode to test out the level at any point.

Note on the video capture: To work around the choppiness that can result from doing video capture via AirPlay, I actually halved the framerate of the application and doubled the playback rate in the video encode. :)


2014-02-27

20:30:40, zero point nine
Spherical Racing Game Prototype Gameplay

I’ve been working on this for quite a while. The main conceit is, as you can see, a game whose “game world” is literally a “world”, ie, a globe-shaped object, with a racing track that wraps around it. It’s 90% C++, but 10% Objective C, which means it lives on iOS, and the iPad specifically, using openFrameworks.

A user-facing level editor is integrated into the game. Though it’s probably more accurate to say that the application _is_ the level editor, which has a playable game component as a byproduct of its execution and design (haha). I’ll demo the level editor’s features next, and then post about certain aspects of the game from a high level, maybe including spherical versus Euclidean geometry, AI logic, and ray-casting on a polygon to create textures.

It’s been a (seriously) long time since I’ve posted here so I’m hoping to show some video + code from other (mini-er) experiments soon as well.


2013-04-26

14:06:52, RichApps
Protected: HTML5 Parleys Share Test

This post is password protected. To view it please enter your password below:


12:41:03, RichApps
Protected:

This post is password protected. To view it please enter your password below:


2013-04-05

20:02:27, ByteArray.org
PlayScript, AS3 on steroids, powered by Mono

Zynga released this week an open-source project called PlayScript, allowing ActionScript 3 developers to target mobile platforms leveraging the Mono runtime. For context, Mono is an open-source implementation of the .NET runtime, with support for C#, F# and other languages. You want to develop or reuse a library developed with these languages? Like an AI library using beautiful F#? No problem. (F# anyone?)

Mono allows you to target mobile platforms through static compilation of your code to native; for info, Mono is the runtime powering Unity and the Xamarin offering for application development.

Zynga created the appropriate bindings so that Stage3D can be used, which allows you to use libraries like Starling and Feathers with it and run your Stage3D code untouched. Don't expect the whole Flash runtime library to be there, but if you are a game developer, you should be all set. Also, by using PlayScript (AS3 on steroids) you can leverage additional types that the Mono runtime gives you and that you would also find in C#, like float, short, byte and more.

Cherry on top, Mono gives you access to multithreading, fast packaging times for mobile platforms, and native code integration. An experimental backend is also available to target JavaScript. But given that Mono relies on CIL behind the scenes, you could also probably hook that up with JSIL, another exciting project.

Anyway, go check it out. This is great stuff.


2013-03-04

16:53:33, ByteArray.org
Speaking at Max 2013

Adobe Max is coming May 4-8 in Los Angeles and I will be giving two sessions:

Adobe Scout: Profiling Taken to the Next Level - Monday 3:30 PM - 515B (link: http://tinyurl.com/c8xs9vf)

Description: Discover how you can perform advanced profiling of your Adobe Flash Player and Adobe AIR content across mobile and desktop. Profiling in Flash Player has been revolutionized, and Gavin Peacock and Thibault Imbert from the Adobe Scout team are your guides to the latest innovations. Join us for a deep dive into next-generation profiling.

This session will include:

  • Introduction to Adobe Scout and its potential uses
  • Technical demonstration on how to use the tool to profile both desktop and mobile content
  • Best practices and tips

We will also unveil some of the new features coming in Scout in the future. You cannot miss this session if you care about performance analysis!

HTML5 for ActionScript 3.0 developers - Monday 5:00 PM - 510 (link: http://tinyurl.com/cx4mkjq)

Description: Have you been developing content with ActionScript 3.0 for years and want to take a look at JavaScript? Join Thibault Imbert for a deep dive into JavaScript and HTML5 and discover how they are different from ActionScript 3.0. This session will review the major capabilities available in Adobe Flash Player and AIR and demonstrate the equivalents using web standards.

I will cover:

  • Differences and similarities between JavaScript and ActionScript 3.0
  • An overview of browser APIs and capabilities to power expressive content
  • Profiling and performance optimizations

I will be sharing in this talk how to transition to HTML/JS from an interactive developer standpoint moving from AS3/C#/C++. I started working on a free ebook about this topic called "JavaScript for interactive developers", this session will be a sneak peek of it.

I hope to see you there!


2013-01-29

21:12:59, ByteArray.org
SWF and AMF3 specifications update

SWF Format

We are really happy to announce that we just updated the SWF (SWF19) and AMF3 specs with the latest information. I promised this to you guys a long time ago; this will give you the latest details if you are working with either format. Some things were either missing or inaccurate in both specs, so we fixed that. The AMF3 specification had some types missing, like Vector and Dictionary, which we introduced in Flash Player 10.

The SWF specification had some undocumented things like a new tag for Telemetry (Adobe Scout) and some other miscellaneous attributes that we now fully document.

You can find both specs at the following url:

http://www.adobe.com/devnet/swf.html


2013-01-22

17:25:40, ByteArray.org
Understanding Flash Player with Adobe Scout

Michael and Mark from the Scout team just published an awesome article on the internals of Flash Player and how it relates to Scout. I am sure you guys will love it. The article is available here on Devnet.

The article starts with an overview, then dives into details like how Flash Player instances work, the core components, ActionScript 3 event handling, and how the "core loop", the beating heart of Flash Player, works.

It does not stop there: it also covers topics like the rendering side and how the display list works, as well as the network stack and more. The article ends with general GPU-related performance issues. A must-read for every ActionScript 3 developer.


2013-01-19

05:21:42, ByteArray.org
Just received the Starling books! Grab one!

What a cool surprise today, I received the print copies of the Starling book. As a reminder, the book has been available for a year now at O'Reilly's website as a downloadable ebook.

I read lots of digital books, but I still enjoy reading stuff the old school way; I am sure you do too. Adobe ordered some hard copies that we will be able to distribute at some events. It feels great to physically hold it! For info, this printed version has the latest updates I added last month that I mentioned here.

I thought it would be cool to give away a few free copies, if you want one or have colleagues who would be interested. The first twenty comments will receive a free copy. Just put your real email when commenting so that I can reach out to you ;) (Update: they are all gone!)

Have a good reading!


2012-02-06

10:49:34, w3blog
Hacking SWF – PlaceObject and the Ratio Field

According to the SWF10 specification, PlaceObject2

.. can both add a character to the display list, and modify the attributes of a character that is already on the display list.

The placed character is usually defined earlier in the SWF, and can be anything supported by SWF, e.g., Shape, MorphShape, Sprite, Text, EditText etc. It stays on the display list until it is explicitly removed by the RemoveObject tag. PlaceObject2 might also tell the Flash Player that the placed object is to be treated as a mask (and what depth range will be masked), and might give the character an instance name (if it is a Sprite).

Additionally, PlaceObject2 might carry an optional ratio parameter. According to the SWF10 specification, it

.. specifies a morph ratio for the character being added or modified. This field applies only to characters defined with DefineMorphShape, and controls how far the morph has progressed. A ratio of zero displays the character at the start of the morph. A ratio of 65535 displays the character at the end of the morph. For values between zero and 65535, Flash Player interpolates between the start and end shapes, and displays an in-between shape.

For Flash users, this is better known as a “Shape Tween”.

However, the statement that “this field applies only to characters defined with DefineMorphShape” is incorrect. It also applies to Sprite characters (“MovieClips”).

The Flash Player uses the value of the Ratio field to determine whether or not to reset the playhead in the placed Sprite to frame 1, when jumping to arbitrary frames in the parent timeline.

To illustrate that, let's create a simple Flash movie using the Flash IDE. We create a one-frame MovieClip (called “square”) containing a simple shape. We create another MovieClip (called “animatedSquare”), which contains “square”, animated by a motion tween over 20 frames. We place “animatedSquare” on the main timeline (one frame only). When the Flash Player executes the resulting SWF, we see “animatedSquare” looping over all its 20 frames, as we would expect.

Here are the guts of the resulting SWF (simplified). Nothing surprising in there:

Now, let's make the main timeline 10 frames long (each frame containing “animatedSquare”). The behavior doesn't change: “animatedSquare” still loops over all of its 20 frames as expected. Also, the SWF tags still don't reveal anything surprising; 9 more ShowFrame tags were added:

Finally, let's remove “animatedSquare” from frame 5, leaving it only on frames 1-4 and 6-10:

This is where things get interesting. The character at depth 1 (our “animatedSquare” Sprite) is removed after frame 4, frame 5 is displayed without any content, and then “animatedSquare” is placed back on depth 1. Only now, the PlaceObject2 tag carries a value (5) in the Ratio field. Now why is that?

If we let Flash Player execute this SWF, you will notice the following behavior:

  1. In frames 1-4, the first 4 frames of “animatedSquare” are displayed
  2. In frame 5, a blank frame is displayed
  3. In frames 6-10, the first 5 frames of “animatedSquare” are displayed
  4. The main timeline then loops and jumps back to frame 1, displaying the first frame of “animatedSquare” again. Back to 1. Etc.

If we put a gotoAndPlay(6) action on frame 10 of the main timeline, “animatedSquare” would reset to frame 1 once, and then loop over all 20 frames infinitely.

So what happens here is that once the Flash Player encounters a PlaceObject tag that attempts to place a Sprite on a depth that was previously occupied by the same Sprite, it looks at the ratio fields of the current and previous PlaceObject tags. If both carry the same value, the child Sprite keeps on playing normally. If not, the child Sprite’s playhead is reset to frame 1.
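
To make the rule explicit, here is a small sketch of that decision, written as Swift-style pseudocode for consistency with the other code on this page (the type and field names are hypothetical, not actual Flash Player internals):

struct Placement {
    let depth: Int
    let characterId: Int
    let ratio: Int?   // the optional Ratio field of the PlaceObject2 tag
}

func shouldResetPlayhead(current: Placement, previous: Placement) -> Bool {
    // only relevant when the same Sprite is placed again on the same depth
    if current.depth != previous.depth || current.characterId != previous.characterId {
        return false
    }
    // same ratio value: the child Sprite keeps on playing normally
    // different ratio value: the child Sprite's playhead is reset to frame 1
    return current.ratio != previous.ratio
}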

There is a little more to it yet, so if you are interested in digging deeper you should take a look at the Gnash Wiki, which lists many cases.


2011-11-03

14:07:31, zero point nine
FPO Image Generator

Is it not the norm for the agency developer, of whatever flavor, to be asked to make more progress than what’s reasonable based on the current status of a given project at any given point in time?

Thus, we continuously look for ways to make more progress with less information, even while increasing the risk of wasted work and unseemly displays of developer angst. At the very least, such experiences make a person receptive to finding a few tricks that might shave off a few extra minutes in one’s pursuit to meet the latest unreasonable deadline.

That having been said, the potential utility of this little tool should require no further explanation.

It has its origins in this capricious tweet from some time last winter.

- Download (uses Adobe AIR)

- Source (Flex project)

