- Dynamic tracking. You can drag the sphere plot to any orientation and magnification, as with Google Earth.
- Good and convenient shading for continuous fields
Script tags can link to an external source. JS can modify the original page; after the page has appeared, the script goes into listening mode, driven by events like mouse clicks.
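As a sketch of that event-driven pattern (the element id "plot" and the handler name are my own illustrative choices, not from the post):

```javascript
// Sketch of event-driven JS: once the page has loaded, the script does
// nothing until an event arrives. The handler is kept as a pure function
// of the event so it is easy to test.
function onPlotClick(ev) {
  // ev.offsetX / ev.offsetY give the click position within the element
  return { x: ev.offsetX, y: ev.offsetY };
}

// In a browser you would wire it up like this:
// document.getElementById("plot").addEventListener("click", onPlotClick);
```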
When developing a JS program, I usually place it in a file with minimal HTML, and view it in Firefox with the free Firebug debugger add-on engaged. Then I can step through the code with all variables displayed in an object-hierarchy view.
The JS program you write is downloaded and run on other people's computers, so there are security limitations. All code and data have to be readable by the user. You can't access file spaces, or even other windows - this can be a nuisance when embedding frames.
HTML 5 and WebGL

HTML 5 introduced new drawing facilities, particularly SVG and the canvas. A canvas is an HTML element specifying an area of the screen, with a context offering JS functions that allow you to control drawing in that space. The common context is "2d", which I used extensively in the climate plotter. "webgl" is an alternative context, now offered by most browsers.
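Getting a context from a canvas can be sketched like this (older browsers exposed WebGL under the name "experimental-webgl", so a fallback is common; the helper name is my own):

```javascript
// Sketch: obtain a WebGL drawing context from a canvas element.
// Tries the standard "webgl" name first, then the older fallback.
function getGL(canvas) {
  return canvas.getContext("webgl") ||
         canvas.getContext("experimental-webgl");
}

// In a page: var gl = getGL(document.getElementById("myCanvas"));
```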
As said above, WebGL is based on OpenGL, which is in turn derived from the Silicon Graphics Iris GL of the 1990s. That was considered 3D graphics, but as Greggman maintains, WebGL is also a 2D context. It is capable of very rapidly implementing the 3D geometry transforms, but you have to write them. However, it supports the OpenGL ES shading language, which makes this easy.
Structure of a WebGL App

You need to create a master object called a program. There can be several. To each you need to attach a vertex shader and a fragment shader. These are the little programs, written in OpenGL ES, that will be passed to the GPU. I usually write them (with JS help) as strings, which are compiled and attached.
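The compile-and-attach step can be sketched as a helper (the WebGL calls such as createShader and linkProgram are the real API; the wrapper function itself is my own):

```javascript
// Sketch: compile two GLSL ES source strings and link them into a program.
// gl is a WebGL rendering context; vsSource/fsSource are shader strings.
function makeProgram(gl, vsSource, fsSource) {
  function compile(type, source) {
    var sh = gl.createShader(type);
    gl.shaderSource(sh, source);
    gl.compileShader(sh);
    if (!gl.getShaderParameter(sh, gl.COMPILE_STATUS))
      throw new Error(gl.getShaderInfoLog(sh));   // report compile errors
    return sh;
  }
  var prog = gl.createProgram();
  gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
  gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
  gl.linkProgram(prog);
  return prog;
}
```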
The vertex shader typically specifies a 3D transform to the original vertex data, expressing user-induced motions. You may also link color information to the vertices.
The fragment shader tells what to do between vertices; it may reduce to just a color specification.
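A minimal shader pair, written as JS strings in the way described above, might look like this (a sketch: the names uMatrix, aPosition, aColor are my own, not from the post):

```javascript
// Sketch of a minimal vertex/fragment shader pair held as JS strings.
// uMatrix carries the 3D transform (e.g. a user-induced rotation);
// aPosition and aColor are the per-vertex data.
var vsSource = [
  "uniform mat4 uMatrix;",
  "attribute vec3 aPosition;",
  "attribute vec4 aColor;",
  "varying vec4 vColor;",
  "void main() {",
  "  gl_Position = uMatrix * vec4(aPosition, 1.0);",  // apply the transform
  "  vColor = aColor;",   // passed on, interpolated between vertices
  "}"
].join("\n");

var fsSource = [
  "precision mediump float;",
  "varying vec4 vColor;",
  "void main() { gl_FragColor = vColor; }"  // reduces to a color specification
].join("\n");
```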
Much of the rest of the program is concerned with putting the geometry data in buffers for GPU access, and connecting those buffers to the variable names in the shaders. When that is all done, you give a draw command - I use drawArrays. At this stage you specify a shape (for which the buffers were designed). The options are POINTS, LINES, TRIANGLES, LINE_STRIP, TRIANGLE_STRIP, LINE_LOOP and TRIANGLE_FAN. In the first three, you specify each item with all coordinates in full. That's bulky, so the remaining modes are more compact ways of writing larger structures, avoiding repeating the same coordinates several times. The shapes then appear (you can flush).
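The buffer-then-draw sequence can be sketched as follows (the attribute name "aPosition" and the helper names are assumptions; the gl calls are the real API). The second function just does the arithmetic behind the bulkiness remark: n triangles need 3n vertices in TRIANGLES mode but only n + 2 in a TRIANGLE_STRIP.

```javascript
// Sketch: load a flat coordinate array into a buffer and draw triangles.
// gl is a WebGL context, prog a linked program.
function drawTriangles(gl, prog, coords) {   // coords: [x,y,z, x,y,z, ...]
  var buf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(coords), gl.STATIC_DRAW);
  var loc = gl.getAttribLocation(prog, "aPosition");  // assumed name
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 3, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, coords.length / 3);
}

// Vertices saved by writing n triangles as a TRIANGLE_STRIP
// instead of plain TRIANGLES: 3n versus n + 2.
function stripSaving(n) {
  return 3 * n - (n + 2);
}
```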
WebGL also has provision for perspective, lighting, texture etc, which go into the shaders. For presenting numerical plots, as I do, these aren't needed.
I linked a minimal program above. It doesn't do much, but it shows the basic requirements.