Sunday, March 21, 2010
I think it is time to acknowledge the obvious. The age of ubiquitous wireless digital communications has arrived, and it has largely rendered all previous methods obsolete. It is silly not to take advantage of technologies that have come of age. While these technologies are both powerful and inexpensive, they are not yet designed for safety-critical uses. Is there a way to harness them for PRT that is perfectly safe, yet extends the capabilities of a PRT system?
First let’s look at safety, and its cousin, reliability. For a system to be truly safe it has to embrace the concept of a “fail-safe.” Alert reader and frequent contributor cmfseattle recently submitted this link, which demonstrates the fundamental principle of a fail-safe mechanism: being “always on.” This is the opposite of a system that comes on in an emergency and is expected to do something, a far riskier proposition. A second concept that must be understood is “Mean Time Between Failures” (MTBF), along with redundancy. Consider the tires on your car. It is rare to have a tire failure due to a manufacturing defect, but it occasionally happens. What are the odds of it happening to all four tires at once? From factory defects alone, essentially zero.
The MTBF for this scenario is probably in the tens of thousands of years, as well it should be. The MTBF must be correspondingly astronomical for critical PRT systems to pass regulatory scrutiny.
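The arithmetic behind the tire example is worth making explicit: if failures are independent, the probability that every redundant unit fails in the same window is the per-unit probability raised to the number of units, which is why the combined MTBF becomes astronomical. A minimal sketch, with the failure rates being made-up numbers purely for illustration:

```python
# Sketch: probability that N independent redundant components all fail
# during the same exposure window. All numbers here are hypothetical.

def p_all_fail(per_unit_failure_prob: float, n_units: int) -> float:
    """Probability that all n independent units fail in the same window."""
    return per_unit_failure_prob ** n_units

# Suppose a single tire has a 1-in-10,000 chance of a defect-caused
# failure in a given year (an invented figure for illustration).
p_one = 1e-4
p_four = p_all_fail(p_one, 4)   # probability all four fail together
mtbf_years = 1 / p_four         # effective MTBF of the redundant set

print(f"P(all four fail together per year): {p_four:.1e}")
print(f"Effective MTBF: {mtbf_years:.1e} years")
```

The independence assumption is the crucial caveat: a common cause (road debris, a bad batch) correlates the failures and destroys this multiplication, which is exactly why fail-safe design cares about independent failure modes.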
Bringing all of this back to the subject of new technologies, it seems that a dividing line must be drawn between what can and cannot be controlled by such means. An argument can be made that these new technologies are not really essential. Indeed, the fundamental principles for PRT have been around for decades, and safe (and even approved) designs were attainable back then. Why change? The main reason is performance.
One factor limiting the performance of previous systems is the concept of line speed. Consider the case of two turns with a straightaway between them. If you are driving your car in a hurry, you know just what to do: round the first curve, step on the gas, then decelerate for the second turn. With a fixed line speed, you can only go as fast as the sharpest (slowest) turn allows. Multiplied across many vehicles and track segments, this limits the throughput of the system as a whole. There is also the matter of passenger comfort. Previously the whole system would have to run at a speed dictated by the small minority of passengers prone to motion sickness, or alarmed by fast, automated travel. Custom per-passenger speed preferences can greatly benefit passenger throughput as well. But it is more than many vehicles going a bit faster; it is also being able to route and merge more intelligently on a much larger scale.
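The curve-speed tradeoff above comes straight from the physics of circular motion: lateral acceleration is v²/r, so the comfortable speed through a turn is the square root of the comfort limit times the curve radius. A short sketch, with the radius and comfort limits being hypothetical numbers chosen only to illustrate how per-passenger preferences change the answer:

```python
import math

def max_curve_speed(radius_m: float, lateral_accel_mss: float) -> float:
    """Highest speed (m/s) through a curve of the given radius without
    exceeding the chosen lateral-acceleration limit (from a = v^2 / r)."""
    return math.sqrt(lateral_accel_mss * radius_m)

# Hypothetical 30 m curve: a cautious comfort limit of 1.5 m/s^2
# versus a tolerant rider's limit of 2.5 m/s^2.
for a_lat in (1.5, 2.5):
    v = max_curve_speed(30.0, a_lat)
    print(f"a_lat={a_lat} m/s^2 -> {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

A fixed line speed must use the cautious number on every segment; a variable-speed system can let each vehicle accelerate on the straightaway and brake only for its own passengers' limit.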
These and other factors call for a system capable of managing extreme complexity, and so the fear is that this complexity would compromise safety and reliability. This is why I have called for a system architecture that is flexible enough to advance with the times but has underpinnings that are fundamentally simple, perhaps even simpler than previous systems. Such an architecture, I would argue, should start with vehicles that can self-navigate and avoid collisions – the Autonomy I have spoken of in recent posts. Layered on that would be increasing capability and its associated complexity. I envision a hierarchical system wherein a consensus of subsystems is needed to achieve peak performance, both for individual vehicles and for traffic management. Track segments or vehicles with a degraded subsystem would have their speed downgraded accordingly. This enables an optimal arrangement where subsystems with an MTBF of only a dozen years, instead of millions, can still contribute. This strategy is tailored to the benefits and shortcomings of wireless and networking technologies, which blow previous control methods out of the water by most metrics. These can be coupled with commodity computer parts and sensors to create a layer with stellar performance that is not the primary system, but an overlay upon it. It is better to design a safe, slow system that is endlessly improvable than one with reasonable base performance but no easy upgrade path. I would guess that this is also the easiest path toward high performance from a regulatory point of view.
The first step is to define the simplest system that can work safely and reliably. The second step is to identify sensors and other systems that can bump this base speed up to a reasonable level. The third step is to overlay modern networking technology and sensor systems that support a third, high-performance tier. The idea is that the “fail-safe” for the third tier is the second one, and the fail-safe for the second is the first. If that one has a problem, it is time to limp to the nearest station on battery power.
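The tier logic above can be sketched as a simple rule: a vehicle runs at the speed cap of the highest tier that reports healthy, and a broken tier invalidates everything layered on top of it, since the lower tiers are the fail-safes for the higher ones. The tier names and speed caps below are invented for illustration, not part of any real design:

```python
# Sketch of the tiered fail-safe idea. Speed caps (m/s) are hypothetical:
# tier 1 = simple base system, tier 2 = sensor-assisted, tier 3 = networked.
TIER_SPEED_CAPS_MS = [5.0, 15.0, 35.0]

def allowed_speed(tier_health: list) -> float:
    """Speed cap for the highest healthy tier, where each tier also
    requires every tier below it (its fail-safe chain) to be healthy."""
    speed = 0.0  # no healthy tier: limp to the nearest station on battery
    for healthy, cap in zip(tier_health, TIER_SPEED_CAPS_MS):
        if not healthy:
            break  # a broken tier invalidates everything layered above it
        speed = cap
    return speed

print(allowed_speed([True, True, True]))   # full performance: 35.0
print(allowed_speed([True, True, False]))  # networking tier down: 15.0
print(allowed_speed([True, False, True]))  # tier 2 down, tier 3 unusable: 5.0
print(allowed_speed([False, True, True]))  # base unsafe: 0.0, limp mode
```

Note how a flaky third tier (MTBF of a dozen years rather than millions) only ever costs performance, never safety, because the vehicle simply drops to the slower tier beneath it.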
What would this third tier look like? This is the sort of networking challenge that Cisco Systems and Google could really sink their teeth into. I have been trying to research networking models that would work, but I must confess my relative ignorance in the field, although search terms like “campus-wide LAN” and “mobile Wi-Fi” are turning up some interesting technologies, including VLANs, Layer 2 and 3 switching, LECs, IEEE 802.16, mesh networking, ATM backbones, and a host of other concepts that I hope will make perfect sense to me someday.