Friday, April 6, 2018

Release 0.2.5: Flavors

This release finalizes the WScript simulation required for Travis, improves the development environment and introduces a mechanism to deliver smaller, feature-centric versions of the library.

New version

Here comes the new version:

Release content

Finalized WScript simulation

In the previous release, the Travis continuous integration platform was integrated. To assess that all hosts are working fine, WScript had to be simulated because it is not supported on Linux environments.

At that time, the result was not perfect and the coverage reached only 99% (including the documented exceptions).

To achieve 100%, the environment is now tweaked to disable standard objects and let the compatibility layer replace them with polyfills.
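For instance, the simulator can disable a native object before loading the sources. A minimal sketch, assuming the polyfill is triggered by the absence of the API (the actual list of disabled objects may differ):

// Disable the native JSON object: the compatibility layer
// detects it is missing and installs the polyfill instead
global.JSON = undefined;
// ... then load the library sources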

An unexpected test case

The simulation is based on the sync-request external library, which implements synchronous HTTP requests in NodeJS. After enabling the JSON polyfills, the tests were failing... because of this library.

It revealed two problems:

  • The JSON emulation was incomplete
  • The compatibility tests were not covering all features of the JSON object

Universal Module Definition

After fixing the polyfills problems (see below), the final validation consisted in running the debug and release versions of the library inside the WScript simulator.

However, those versions are built by concatenating all the sources and wrapping them with the Universal Module Definition (UMD) loader.

This mechanism detects the current host and defines the gpf symbol accordingly.

When NodeJS is detected, which is the case here, the loader assigns the gpf namespace to module.exports so that it becomes the result of a require call.

But in WScript, the library is supposed to define the gpf symbol globally.

A workaround was implemented to simulate this behavior: it leverages the AMD syntax.
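A minimal sketch of this workaround, assuming the simulator runs in NodeJS (the actual code may differ): exposing an AMD-like define function makes the UMD loader pick the AMD branch instead of module.exports, and the factory result can then be published globally.

// UMD loaders test define.amd before module.exports
global.define = function (factory) {
    global.gpf = factory();
};
global.define.amd = true;
// Loading the library now sets the global gpf symbol,
// exactly as WScript would expose it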

Improved JSON polyfill

The JSON.parse method accepts a reviver parameter that is used to transform parsed values before the final value is returned.
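For example, a reviver can turn ISO date strings back into Date objects:

// The reviver is called on each key/value pair,
// starting from the most nested values
var parsed = JSON.parse("{\"when\":\"2018-04-06T00:00:00.000Z\"}",
    function (key, value) {
        if (key === "when") {
            return new Date(value); // transform the parsed string
        }
        return value; // keep other values as-is
    });
// parsed.when is now a Date object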

New tests were added and the missing part was implemented.

Similarly, the JSON.stringify method accepts two additional parameters:

  • One, documented as replacer, that can be either a function to transform values before the stringification process or an array of strings that serves as a whitelist of properties to include in the final string.
  • A formatting parameter, documented as space, that is used to beautify the output of the method.

New tests were added and the missing parts were implemented.
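Both parameters can be illustrated with the standard API:

var record = {id: 1, name: "sample", password: "secret"};
// Array replacer: only the listed properties are kept
JSON.stringify(record, ["id", "name"]); // {"id":1,"name":"sample"}
// space: beautifies the output, here with two spaces of indentation
JSON.stringify(record, null, 2);
// {
//   "id": 1,
//   "name": "sample",
//   "password": "secret"
// }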

Improved development environment

In this release, a significant effort was put into the development environment.

sources.json

The sources.json file is the backbone of the library. It defines which sources are loaded, and additional properties can be associated with them.

For instance, the "test" attribute defines when the associated test file (same name but under test instead of src) should be loaded. The default value being true, only the sources with no test file are flagged with false (for example).

The sources list was recently cleaned up for DeepScan. As a result, the remaining sources all contained documentation to extract. Consequently, the "doc" attribute has been removed.

Also, a space-separated list of tags was added (see the example after this list), where:

  • core means the source is a core feature and it must always be loaded
  • host:hostname means the source is specific to the host hostname
  • any other tag is interpreted as a feature name (for instance: define, require, fs...)
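For instance, entries might look like this (the names and tags below are illustrative, not the library's actual content):

[
    { "name": "boot",        "tags": "core" },
    { "name": "host/nodejs", "tags": "core host:nodejs" },
    { "name": "require",     "tags": "require", "test": false }
]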

This is used for the flavors development, which is detailed below.

GitHub tile

A new tile was added to the development dashboard.

The GitHub tile

It connects to the GitHub API to fetch the current release progress and links directly to it.

This was also the opportunity to redesign the whole dashboard. All the HTML is now generated and modules are organized thanks to the gpf.require.define feature.
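As an illustration, a dashboard module might be structured as follows (a minimal sketch; the module names are hypothetical):

gpf.require.define({
    dom: "dom.js",  // hypothetical helper module
    Tile: "tile.js" // hypothetical base class
}, function (require) {
    "use strict";
    // require.dom and require.Tile expose the resolved dependencies
    return {
        render: function () {
            // generate the tile's HTML with require.dom
        }
    };
});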

Documentation validation

The only way to make sure that all links contained in the documentation actually point to something is to try them. A new step was added to the build process to validate the documentation by checking all the links.

Here again, it was a nice opportunity to test the gpf.http feature.
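A link check can then be as simple as this sketch (checkLink is a hypothetical helper, and the response is assumed to expose a status property):

function checkLink (url) {
    return gpf.http.get(url).then(function (response) {
        if (response.status !== 200) {
            throw new Error("Broken link: " + url);
        }
    });
}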

Dependency viewer

The sources page was redesigned to visually show dependencies. It was done thanks to the awesome dependency wheel from Francois Zaninotto.

Simplified release

Last but not least, the library is now completely released with a single command.

Flavors

The library is starting to generate interest, but people often complain that it handles too many hosts and is, consequently, too big.

Furthermore, some internal mechanics cause trouble:

  • The use of ActiveXObject may generate security issues on Internet Explorer
  • require("js") produces extra work with webpack
  • ...

Hence, some time was invested to study the ability to build smaller, more dedicated versions by providing a way to specify which parts to consolidate in the library.

The idea is to rely on feature tags.

For instance, if one wants to use require on NodeJS only, the flavor would be "require host:nodejs". From there, an algorithm can list all the sources that must be included (see the sketch after the list below) by:

  • filtering sources from tags
  • adding selected sources' dependencies
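A minimal sketch of such an algorithm, assuming each source is described by a name, a tags string and a list of dependency names (this structure is illustrative):

function selectSources (sources, flavor) {
    var requested = flavor.split(" "),
        byName = {},
        selected = {};
    sources.forEach(function (source) {
        byName[source.name] = source;
    });
    function select (source) {
        if (!selected[source.name]) {
            selected[source.name] = true;
            // Add the selected source's dependencies (transitively)
            (source.dependencies || []).forEach(function (name) {
                select(byName[name]);
            });
        }
    }
    // Filter sources from tags: core sources are always kept
    sources.forEach(function (source) {
        var tags = source.tags.split(" ");
        if (tags.indexOf("core") > -1 || tags.some(function (tag) {
            return requested.indexOf(tag) > -1;
        })) {
            select(source);
        }
    });
    // Return the selected sources, keeping the original order
    return sources.filter(function (source) {
        return selected[source.name];
    });
}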

Mapping stream

This release delivers a mapping stream that completes the filtering stream introduced in the last version.

Here is a modified version of the previous release's sample that includes mapping:

// Reading a CSV file and keep only some records
/*global Record*/
var csvFile = gpf.fs.getFileStorage()
        .openTextStream("file.csv", gpf.fs.openFor.reading),
    lineAdapter = new gpf.stream.LineAdapter(),
    csvParser = new gpf.stream.csv.Parser(),
    filter = new gpf.stream.Filter(function (record) {
        return record.FIELD === "expected value";
    }),
    map = new gpf.stream.Map(function (record) {
        return new Record(record);
    }),
    output = new gpf.stream.WritableArray();
// csvFile -> lineAdapter -> csvParser -> filter -> map -> output
gpf.stream.pipe(csvFile, lineAdapter, csvParser, filter, map, output)
    .then(function () {
        return output.toArray();
    })
    .then(function (records) {
        // process records
    });

See how easily you can swap the streams to refactor the code: let's say the Record class has a method named isObsolete that provides the filtering condition. You no longer need to rely on the CSV literal object properties to reproduce the logic:

// Reading a CSV file and keep only some records
/*global Record*/
var csvFile = gpf.fs.getFileStorage()
        .openTextStream("file.csv", gpf.fs.openFor.reading),
    lineAdapter = new gpf.stream.LineAdapter(),
    csvParser = new gpf.stream.csv.Parser(),
    filter = new gpf.stream.Filter(function (record) {
        return !record.isObsolete();
    }),
    map = new gpf.stream.Map(function (record) {
        return new Record(record);
    }),
    output = new gpf.stream.WritableArray();
// csvFile -> lineAdapter -> csvParser -> map -> filter -> output
gpf.stream.pipe(csvFile, lineAdapter, csvParser, map, filter, output)
    .then(function () {
        return output.toArray();
    })
    .then(function (records) {
        // process records
    });

Lessons learned

This release enabled a 'real' productive use of the library. And, naturally, several weaknesses were identified.

For instance, requests to HTTPS websites were not working with NodeJS.

Similarly, the usability of the gpf.require.define feature has to be improved whenever something goes wrong.

If loading fails because the resource file does not exist or its evaluation generates an error, the resulting exception must help the developer to quickly find and fix the problem:

  • Which resource is concerned?
  • Through which intermediate resources was it loaded?
  • What is the problem?
  • Where is the problem (line number)?

Also, debugging a loaded module might become a challenge since the evaluation model prevents the browser from mapping the file in the debugger.

But, in the end, this exercise validated the concepts: the tiles were quickly redesigned and common code was put in modules shared by all of them.

Next release

The next release content will mostly focus on:

  • Taking care of improving gpf.require.define
  • Releasing a standalone version of the gpf.require.define feature
  • Offering Object Oriented concepts (abstract classes, singletons, final classes)
  • Documenting JSON compatibility