Optical Analog Data Storage System

The Optical Analog Data Storage System (OADS) is a data storage system devised and implemented by the Themiclesian computer developer and manufacturer United Digital Systems Co. (UDS) between 1955 and 1965. It used full-spectrum light to inscribe information on a rotating disc coated with several layers of material, each reacting to light of a specific wavelength. Several streams of data, each carried by a discrete wavelength, could thus be blended into a single light beam writing to the same location on the disc, with each wavelength writing to a different layer. The stored information was later recovered as a single beam of light, which was separated by a prism into the original data streams in parallel. The principal purpose of the system was to provide a reliable and dense medium capable of rapid reading and writing, one that was also compact and easy to transport.
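
The multiplexing principle can be sketched in miniature: independent bit streams are assigned to wavelength channels, blended into a single "beam", and separated again for parallel recovery. The sketch below is illustrative only; the channel names follow the five-layer design described under Development, and the data structures are assumptions.

    # Sketch of the OADS multiplexing principle: one data stream per discrete
    # wavelength is blended into a single beam and later separated again, as
    # a prism would. The dict-of-lists representation is purely illustrative.

    CHANNELS = ["infrared", "red", "green", "blue", "ultraviolet"]

    def combine(streams):
        """Blend one data stream per wavelength channel into a single beam."""
        return dict(zip(CHANNELS, streams))

    def separate(beam):
        """Split the beam back into per-wavelength streams, prism-fashion."""
        return [beam[c] for c in CHANNELS]

    streams = [[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 1, 0], [1, 1, 1]]
    assert separate(combine(streams)) == streams  # all five streams recovered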

Background

Columbian computer giant Digital Data Computers (DDC) developed the concept of storing information on a disc sensitive to magnetism, which rotated to allow a mobile head to read and write data by changing the states of magnetic regions on the disc. The direct product of this work was the Random Access Machine for Arithmetic Computation, announced in 1953. This technology offered far faster access times than sequential media such as magnetic tape and punched cards. Yet the density of information stored on the first disk drives was unsatisfactory, making them tremendously expensive byte-for-byte, with complete systems often costing millions of dollars. The solution DDC pursued was the miniaturization of the magnetic regions, so that more information could be stored on a disk of fixed size. UDS, taking inspiration from this concept, theorized a different approach to increasing data density, one based on superimposed layers rather than on addressing ever finer cells with electromagnets.

Development

Special Engineer Hjun assembled a team in 1955 to explore the possibility of using full-spectrum light to represent parallel streams of information through its different wavelengths, observing that sunlight could be split by a prism to yield light of different colours. Under Hjun's initial conjecture, each wavelength could in theory represent analog information by its intensity (amplitude). His team's task was to find a way to preserve this constant flow of information on a physical medium, from which the original data (represented by wavelengths and amplitudes) could be recovered efficiently and accurately. In 1957, Hjun presented his concept to the Board of Governors of UDS, which allegedly dismissed the proposal as insanity. Nevertheless, he argued in favour of his system for the rest of the year, finally wearing down corporate leadership into granting him the funding necessary to begin developing a working model of his "future data storage". The working model took two years to surface, by which point the company had spent over $5 million on his activities.

The design process was beset by two major sources of frustration. The first was the search for a chemically stable, relatively safe, and versatile material with which to capture data. Initially, Hjun's team developed two models in parallel: one with separate photosensitive layers divided by blocking materials, and another in which all the photosensitive chemicals were blended together and applied as a single coating. The former was hampered by the search for a blocking material that would naturally stop light meant to sensitize the layer above from reaching the layer below. The initial plan called for five separate layers, sensitized to infrared, red, green, blue, and ultraviolet light; this meant, for example, that the photosensitive layer for infrared data had to rest on a blocking material (carbon paint in this case) that allowed only light of higher energies (carrying the other data streams) to pass through and sensitize the layers beneath. Finding the correct consistency of such a material proved quite challenging, as most candidates blocked either too much or too little light. The latter model called for a single, thicker layer of blended photosensitive coating, which removed the problem of blocking materials but made still greater demands on consistency and homogeneity, both difficult to achieve given the simultaneous requirement of chemical stability.

Digital development

Ultimately, Hjun decided that the first approach was the easier and should be completed first, believing that experience gained there would assist in the development of the second. In 1959, he presented to the Board a disc and a mechanism capable of reading from and writing to it. Initial results were mixed, showing the ability to store large amounts of data but inconsistent recovery and rather rapid decay in accuracy. Because the system worked on an analog principle, with each wavelength representing a continuum of values through variations in amplitude, recovered data could fluctuate from the input by as much as 25%, which made it inconvenient for computer use. The Board suggested a digital approach instead, whereby only a stream of binary information was preserved in each wavelength; under this strategy, Hjun discovered that if the logic low state was recorded as a negative value and logic high as a positive value (relative to a non-zero ground state), data recovery became much more reliable. While the strength of the signal could vary by a considerable margin in storage, its decay was geometric, meaning that a positive value should theoretically never decay into a negative one, and vice versa. Hence, if data was stored as binary information, only the sign of the recorded value relative to the ground state needed to be interpreted, rather than its magnitude. Hjun did not oppose this change, as the switch from decimal to binary computing was complete by this time, even in the commercial world. In the same year, his work was patented in Themiclesia and announced as the Photo-electric Data Storage System by UDS, which saw only modest sales.
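
The sign-based readout principle can be sketched as follows; the signal levels and the decay rate are assumptions for illustration, as the account above gives no such figures.

    # Sketch of sign-based binary readout: logic low is stored as a negative
    # amplitude and logic high as a positive one, relative to a ground state.
    # Geometric decay shrinks the magnitude but never flips the sign, so only
    # the sign needs to be read back. The retention factor is an assumption.

    def store(bits, level=1.0):
        """Record each bit as a signed amplitude around the ground state."""
        return [level if b else -level for b in bits]

    def decay(samples, retention=0.6):
        """Apply geometric decay: magnitudes shrink, signs are preserved."""
        return [s * retention for s in samples]

    def read(samples):
        """Recover bits from the sign alone, ignoring magnitude."""
        return [1 if s > 0 else 0 for s in samples]

    bits = [1, 0, 1, 1, 0]
    recorded = store(bits)
    for _ in range(10):  # ten decay periods in storage
        recorded = decay(recorded)
    assert read(recorded) == bits  # signs survive even heavy decay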

Analog storage

Hjun retired in 1960, and one member of his team, Gho, saw potential in the abandoned original analog concept. Computers, of course, required unambiguous accuracy for each parcel of information, but there were still applications where the accuracy of each individual parcel was less important, so long as the average remained consistent with the original over time. Gho had observed, while working with Hjun, that although each individual reading might vary by as much as 25% from the original, the device deviated from true values by a much smaller margin over a large number of readings; indeed, in 1960, he asserted to the General Manager that "net deviation tends towards zero as stream approaches infinity", meaning that these errors favoured neither direction and would eventually "average out" to yield a stream of information representing the original with good accuracy. When asked where such a feature would be useful, he replied that motion pictures were based entirely on the human inability to sense variations on minute time scales: only the average output mattered, perceived through persistence of vision. Gho quickly refined the original analog design and had it patented as the Optical Analog Data Storage System. In his scheme, the three channels in visible light recorded the intensities of the three primary colours, while the infrared and ultraviolet channels represented sound. When tested, Gho was surprised to find that, though variation in each stream of data was considerable, the data recovered overall was nevertheless within a tolerable range for human perception.
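
Gho's averaging argument can be illustrated with a small simulation. Only the 25% figure comes from the account above; the uniform noise model and the sample count are assumptions.

    # Sketch of Gho's observation: individual readings may be off by up to
    # ~25%, but unbiased errors average out over many readings. The uniform
    # noise model is an assumption for illustration.
    import random

    true_value = 100.0
    readings = [true_value * (1 + random.uniform(-0.25, 0.25))
                for _ in range(10_000)]

    mean = sum(readings) / len(readings)
    worst = max(abs(r - true_value) for r in readings)
    print(f"worst single reading off by {worst:.1f}")                   # ~25
    print(f"mean of all readings off by {abs(mean - true_value):.2f}")  # ~0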

Technologies

CIPS

The CIPS, or Constant-Interval Pulse Signal, regulates the density of "datalets" inscribed onto the medium in the digital version of OADS. It consists of a small cyclotron (usually within 1 m in diameter) that accelerates a beam of electrons, and a powerful rectangular electromagnet that deflects the beam as it spins. The cyclotron produces a continuous beam of electrons, which is fed through the spinning magnet; the polarity of the rotating magnet deflects parts of the beam to produce discrete packets of electrons, and this initial quantization splits the beam into 60,000 pulses per second. The quantized beam is then re-quantized, that is, fed through a second magnet operating on similar principles but slightly staggered from the first, which deflects the middle section of each packet. The beam is directed through a series of reflectors so that it circulates around the magnet until a final pulse frequency of 3.6 MHz is reached. The beam is then rectified by electronic circuitry, which drives a set of alternating dipole and quadrupole electromagnets that compact and intensify the signal. The quality of this beam, measured by the difference in strength between signal and no signal, is referred to as luminosity (to the chagrin of many particle physicists). This constant and powerful electron signal is crucial to the operation of the OADS, as the light source relies on these pulses to generate electromagnetic radiation in precise synchrony. Failing this, the various frequency signals would gradually drift out of time, resulting in garbled information.
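
The multiplication from the initial 60,000 pulses per second to the final 3.6 MHz amounts to an overall factor of 60. How that factor is staged across circulations is not specified, so the split factors in the sketch below are assumptions chosen only so that their product is 60.

    # Sketch of CIPS frequency multiplication: each circulation past the
    # staggered magnet splits the packets and multiplies the pulse frequency.
    # Only the start (60,000 Hz) and end (3.6 MHz) figures are given in the
    # text; the per-pass split factors here are hypothetical.

    INITIAL_HZ = 60_000
    TARGET_HZ = 3_600_000
    ASSUMED_SPLITS = [2, 2, 3, 5]  # hypothetical staging; product must be 60

    freq = INITIAL_HZ
    for factor in ASSUMED_SPLITS:
        freq *= factor
    assert freq == TARGET_HZ
    print(f"final pulse frequency: {freq / 1e6:.1f} MHz")  # 3.6 MHz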

Beam modulation

The process of beam modulation, that is, the encoding of information into optical signals, occurs by varying the beam intensity (separately in each component beam) according to an electrical signal. If the original signal is not electrical, it must first be converted into that medium. The signal source is first amplified to a voltage much higher than actually required and then connected to a neodymium magnet, which produces a very strong magnetic field and magnetizes and de-magnetizes very responsively according to the strength of the input. The magnet induces a current in a pair of electrodes that are normally electrically neutral. As the magnet's strength changes, a positive charge concentrates at the anode and a negative charge at the cathode; given sufficient induced voltage, an electric arc is generated, temporarily ionizing the intervening air and making it opaque to light. These interruptions in the continuous light beam (in the analog version) modulate data into the beam itself. The process occurs in a supercooled environment, which accentuates the strength of the magnet. Before the beam hits the recording medium, it is passed through a semi-opaque filter, which permits only light above a certain intensity to pass; this eliminates any residual light that crosses the ionization boundary.
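
The modulation chain (input signal, arc interruption, threshold filter) can be sketched as follows; all numeric levels are assumptions for illustration.

    # Sketch of beam modulation: the input signal strikes an arc that
    # interrupts the beam, and a semi-opaque filter then passes only light
    # above a threshold, removing residual leakage through the arc.
    # All numeric levels are assumed for illustration.

    ARC_THRESHOLD = 0.7   # input level at which the arc strikes
    FILTER_CUTOFF = 0.3   # minimum intensity the filter passes
    RESIDUAL_LEAK = 0.1   # light leaking through the ionized (opaque) arc

    def modulate(signal, beam_intensity=1.0):
        """Interrupt the beam wherever the signal strikes an arc."""
        return [RESIDUAL_LEAK if s >= ARC_THRESHOLD else beam_intensity
                for s in signal]

    def threshold_filter(beam):
        """Semi-opaque filter: block anything below the cutoff intensity."""
        return [i if i >= FILTER_CUTOFF else 0.0 for i in beam]

    signal = [0.0, 0.9, 0.2, 0.8, 0.1]
    print(threshold_filter(modulate(signal)))  # [1.0, 0.0, 1.0, 0.0, 1.0]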

Because the frequency of the CIPS is 3.6 MHz, incoming data can be recorded in full at a maximum rate of 3.6 MHz (under hypothetical conditions). In practice, any data clocked above 1.8 MHz, half the design frequency, was often lost to various mechanical problems and contamination of the lens.
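
As a rough worked example of throughput, assume one bit per pulse per wavelength channel and the five-layer design described above (both assumptions, since the text does not state the per-pulse encoding):

    # Rough throughput estimate for the digital OADS, assuming one bit per
    # CIPS pulse on each of the five wavelength channels (infrared, red,
    # green, blue, ultraviolet). Both assumptions are illustrative only.

    CHANNELS = 5
    DESIGN_RATE_HZ = 3_600_000               # CIPS design frequency
    PRACTICAL_RATE_HZ = DESIGN_RATE_HZ // 2  # half rate survives in practice

    print(f"theoretical: {CHANNELS * DESIGN_RATE_HZ / 1e6:.0f} Mbit/s")    # 18
    print(f"practical:   {CHANNELS * PRACTICAL_RATE_HZ / 1e6:.0f} Mbit/s") # 9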

Rectification and recombination

After the light from each source has been modulated with data, it is rectified through another series of lenses and filters to produce as sharp a signal as possible.

The process of recombination refers to the merger of the several beams of light of different wavelengths into a single beam. This is done by shining the rectified light through alternating polarizing and focusing lenses, each of which slightly alters the angle of the light until the beams are nearly parallel. A gently curving convex lens facilitates the final merger, leaving a single beam of multi-wavelength light travelling perpendicular to the recording surface, which makes the most efficient use of recording space.

See also