Friday, November 3, 2017

Release 0.2.2: gpf.require

This new release includes a modularization helper as well as improvements driven by feedback from a side project. It also includes fixes for some annoying bugs and quality improvements.

New version

Here comes the new version:

Why did it take so long to get a new version?

First, no timeline is clearly defined for the project. A release usually spans about one month, which creates an implicit commitment. However, whatever time is necessary is spent to make sure that a release contains everything needed.

In this case, three main reasons explain the delay, the lack of a defined timeline being the first one.

  • The second reason is that a side project took up a lot of my time. I will write an article about it later, as it was my first HTML mobile app experience with a tough customer... my wife.
  • Finally, this release addresses an issue that appeared from time to time and had never been taken seriously. Some tests were failing on a regular basis, but not frequently enough to represent a real threat. This problem was prioritized and tackled once and for all, but it brought many time-consuming challenges.

Oh, and I also designed a logo

Release content

gpf.require

This is clearly the main new feature of this version. It allows developers to modularize their projects by creating separate source files that depend on each other. A separate article will detail the concept (and implementation) but, in the meantime, you may read the tutorial on how to use it.

To put it in a nutshell:

gpf.require.define({
    hello: "hello.js"
}, function (require) {
    "use strict";
    console.log(require.hello()); // Output "World!"
});

Provided the hello.js file is in the same folder and contains:

gpf.require.define({}, function () {
    "use strict";
    return function () {
        return "World!";
    };
});

Or (using CommonJS syntax):

"use strict";
module.exports = function () {
    return "World!";
};

This mechanism was envisioned for a while, but it was hard to figure out how (and where) to start. After reading some documentation about NodeJS, and now that the library is mature enough, it turned out to be surprisingly easy to implement.

I did it

Some of you may object that this already exists in NodeJS and browsers (natively or through libraries such as RequireJS or browserify). But this new mechanism brings one uniform feature to all supported platforms.

And it comes with a way to alter the require cache to simplify testing by injecting mocked dependencies.
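For illustration, here is a minimal sketch of how a mocked dependency could be injected. The cache option of gpf.require.configure, the dependency path and the mocked export below are assumptions made for this example; refer to the library documentation for the exact option name and semantics.

gpf.require.configure({
    cache: {
        // Hypothetical path: any module requesting db.js gets the mock below
        "/src/db.js": {
            findUser: function () {
                return {name: "mocked user"};
            }
        }
    }
});

gpf.require.define({db: "/src/db.js"}, function (require) {
    "use strict";
    // The mocked resource is resolved from the cache, hence no file is loaded
    console.log(require.db.findUser().name); // Output "mocked user"
});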

Improved gpf.web.createTagFunction

To develop the mobile app, Scalable Vector Graphics (SVG) were extensively used. Creating the appropriate markup requires specific DOM APIs (such as createElementNS). That's why support for the predefined svg namespace was added to gpf.web.createTagFunction.

... But the use of namespaces is not yet documented: an issue has been assigned to the next release.

Similarly, this method returns a function that generates tags exposing two methods (toString and appendTo). Their documentation was missing; this is now resolved.
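For illustration, here is a minimal sketch of the resulting API. The svg: prefix used to select the predefined namespace is an assumption for this example; the exact syntax will be covered by the upcoming documentation.

var div = gpf.web.createTagFunction("div"),
    rect = gpf.web.createTagFunction("svg:rect"); // assumed syntax for the predefined svg namespace

var tag = div("Hello World!"); // build a tag with a text child
console.log(tag.toString()); // serialize the tag to markup
tag.appendTo(document.body); // or create the corresponding DOM node under a parent
rect().appendTo(document.body); // namespaced tags rely on createElementNS internally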

Improved gpf.http layer

The HEAD verb was not supported by the HTTP layer. It is now.
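For instance, assuming the HEAD verb is exposed through a shortcut that mirrors gpf.http.get (an assumption for this example; the generic request form with gpf.http.methods.head should work as well):

gpf.http.head("/echo").then(function (response) {
    // A HEAD response carries the status and headers but no body
    console.log(response.status);
    console.log(response.headers["content-length"]);
});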

Another improvement is the possibility to mock any HTTP request by adding a custom handler. Using this feature, any code that involves HTTP communication can be tested independently from the web server.

For instance:

gpf.http.mock({
    method: gpf.http.methods.get,
    url: new RegExp("echo\\?status=([0-9]+)"),
    response: function (request, status) {
        if (status === "400") {
            return; // Don't mock
        }
        return {
            status: parseInt(status, 10) + 1,
            headers: {
                "x-mock": true
            },
            responseText: "It works"
        };
    }
});

// No HTTP request will be sent to any server
gpf.http.get("/echo?status=200").then(function (response) {
    assert(response.status === 201);
    assert(response.headers["x-mock"] === true);
    assert(response.responseText === "It works");
    /*...*/
});

// But this one will...
gpf.http.get("/echo?status=400");

Improved Timeout & Promise tests

From time to time, roughly once every 1000 executions, the WScript tests were failing. It took ages to narrow the problem down to the place where the assertion failed:

  • An 'infinite performance' mode was added to the testing command line: it loops forever and shows the mean as well as the deviation of the execution time
  • Errors occurring in the testing command line are now documented to help locate the issue

Failure example 1

Failure example 2

This highlights the fact that tests should be small and clear enough to immediately spot a problem when it happens.

Upon investigation, it appeared that two test suites were badly designed:

  • Promises testing was using timeouts for no good reason. Removing them from the test suite was easy: diff
  • Timeout testing relied on global variables to assess several timeout executions, and the data got corrupted depending on the execution sequence. Changing the test suite to make it safe was more challenging: before after

Once the whole thing was figured out and fixed, the problem of legacy tests remained. Indeed, the test suites are saved after each release and they should remain untouched to ensure backward compatibility.

But then, how do you handle tests that were badly designed? You can't drop the legacy test suites (more than 600 tests) just because some of them are invalid.

Legacy code

That's why the idea of legacy test management came up. With the help of the legacy.json file, the library offers a way to disable some tests based on their name and version.
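To give an idea, the sketch below shows what such a configuration might look like: test names mapped to the last version in which they are allowed to fail. This is a purely hypothetical structure, not the actual content of legacy.json.

{
    "Timeout sequence": "0.2.1",
    "Promises chaining": "0.2.1"
}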

Improved quality and tooling

The minimum maintainability ratio has been increased to 70, raising the overall quality requirement for source files.

The grunt connect middleware was modified to automatically scroll the page content. This allowed the recording of this joyful video.

Lessons learned

  • Almost everything can be implemented even in the oldest JavaScript hosts. This demonstrates the power and flexibility of the language.
  • The way tests are written today significantly impacts tomorrow's versions. This is something to keep in mind.
  • Documentation must be reviewed before releasing. However, it becomes more and more complex, and I am not necessarily the best person to review my own writing.

Next release

The next release content will mostly focus on:

  • Improving the streams formalism and implementing a CSV from/to records engine
  • Improving the release process
  • Doing some cleaning

Wednesday, November 1, 2017

Released 0.2.2

Just a quick note to announce that version 0.2.2 is released.
I am now working on release notes and next release preparation.

I captured the operation to illustrate how easy (and safe) the release process is.