Sunday, March 21, 2010

78> The Times, They Are a-Changin'...


I think it is time to acknowledge the obvious. The age of ubiquitous wireless digital communications has arrived, and it has largely rendered all previous methods obsolete. It is silly not to take advantage of technologies that have come of age. While these technologies are both powerful and inexpensive, they are not yet designed for safety-critical uses. Is there a way that these technologies can be harnessed for PRT in a way that is perfectly safe, yet extends the capabilities of a PRT system?

First let’s look at safety and its cousin, reliability. For a system to really be perfectly safe it has to embrace the concept of a “fail-safe.” Alert reader and frequent contributor cmfseattle recently submitted this link, which demonstrates the fundamental principle of a fail-safe mechanism: that of being “always on.” This is opposed to a system that comes on in an emergency and is expected to do something, a far riskier proposition. Two further concepts that must be understood are “Mean Time Between Failures” (MTBF) and redundancy. Consider the tires on your car. It is rare to have a tire failure due to a manufacturing defect, but it occasionally does happen. But what are the odds of it happening to all four tires at once? From factory defects alone, essentially zero.
The MTBF for this scenario is astronomical, as it well should be. The MTBF must be correspondingly astronomical for critical PRT systems to pass regulatory scrutiny.
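The tire arithmetic can be sketched in a few lines. The per-tire failure rate below is an invented illustrative number; the point is not the exact figure but how fast independent redundancy multiplies the odds against simultaneous failure.

```python
# Back-of-envelope redundancy math (illustrative numbers only).
# Suppose a single tire has a 1-in-10,000 chance of a defect-related
# failure in a given year (an assumed figure, not real tire data).
p_single = 1e-4

# If failures are independent, all four failing in the same year
# requires four coincidences:
p_all_four = p_single ** 4

# The effective mean time between "all four at once" events:
mtbf_years = 1 / p_all_four

print(f"P(all four in one year) = {p_all_four:.0e}")
print(f"Effective MTBF = {mtbf_years:.0e} years")
```

Even with a deliberately pessimistic single-component rate, the combined event is so rare that the MTBF dwarfs any regulatory threshold, which is the whole argument for redundancy.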

Bringing all of this back to the subject of new technologies, it seems that a dividing line must be drawn between what can and cannot be controlled by such means. An argument can be made that these new technologies are not really essential. Indeed, the fundamental principles for PRT have been around for decades, and safe (and even approved) designs were attainable back then. Why change? The main reason is performance.

One factor limiting the performance of previous systems is the concept of line speed. Consider the case of two turns with a straightaway between them. If you are driving your car in a hurry, you know just what to do: round the first curve, step on the gas, and then decelerate for the second turn. With a fixed line speed you can only go as fast as the sharpest (slowest) turn allows. Many vehicles and track segments, of course, multiply this factor, so that it affects the throughput of the system as a whole. There is also the matter of passenger comfort. Previously the whole system would need to travel at a speed dictated by the tendency of a small minority of passengers to get motion sick or alarmed by fast, automated travel. Honoring individual passengers' speed preferences can greatly benefit throughput as well. But it is more than many vehicles going a bit faster; it is also being able to route and merge more intelligently, on a much larger scale.
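The two-turns-and-a-straightaway example is easy to put numbers on. All the figures below (curve speed, top speed, acceleration, segment length) are invented for illustration:

```python
# Assumed geometry: a 400 m straightaway between two curves whose
# geometry limits speed to 10 m/s. A fixed-line-speed system runs the
# whole segment at the curve speed; a variable-speed vehicle
# accelerates to 20 m/s and brakes back down at 2.5 m/s^2.
CURVE_V, TOP_V, ACCEL, LENGTH = 10.0, 20.0, 2.5, 400.0

# Fixed line speed: the slowest turn dictates the whole segment.
fixed_time = LENGTH / CURVE_V

# Variable speed: accelerate, cruise, decelerate (trapezoidal profile).
ramp_time = (TOP_V - CURVE_V) / ACCEL          # time for each speed ramp
ramp_dist = (CURVE_V + TOP_V) / 2 * ramp_time  # distance covered per ramp
cruise_dist = LENGTH - 2 * ramp_dist           # remaining distance at top speed
variable_time = 2 * ramp_time + cruise_dist / TOP_V

print(f"fixed line speed: {fixed_time:.0f} s")   # 40 s
print(f"variable speed:   {variable_time:.0f} s") # 22 s
```

With these made-up numbers the variable-speed vehicle covers the segment nearly twice as fast, and every vehicle behind it inherits some of that gain.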

These and other factors call for a system capable of managing extreme complexity, and so the fear is that this complexity would compromise safety and reliability. This is why I have called for a system architecture that is flexible enough to advance with the times but has underpinnings that are fundamentally simple, perhaps even simpler than previous systems. Such an architecture, I would argue, should start with vehicles that can self-navigate and avoid collisions – the autonomy I have spoken of in recent posts. Layered on that would be increasing capability and associated complexity.

I envision a hierarchical system wherein a consensus of subsystems is needed to achieve peak performance, both in terms of individual vehicles and traffic management. Track segments or vehicles with any sort of problem would have their speed downgraded accordingly. This would enable an optimal arrangement where subsystems with an MTBF of only a dozen years instead of millions can still contribute. This strategy is tailored to the benefits and shortcomings of wireless and networking technologies, which absolutely blow previous control methods out of the water by most metrics. These can be coupled with commodity computer parts and sensors to create a system that has stellar performance, yet is not the primary system but an overlay upon it. It is better to design a safe, slow system that is endlessly improvable than one with reasonable base performance but no easy upgrade path. I would guess that this is the easiest path toward high performance from a regulatory point of view as well.

The first step is to define the simplest system that can work safely and reliably. The second step would be to identify sensors and other systems that can bump this base speed up to a reasonable level. The third step would be to overlay modern networking technology and sensor systems as a third tier. The idea is that the “fail-safe” for the third tier is the second one, and the fail-safe for the second one is the first. If that one has a problem, it is time to limp to the nearest station on battery power.
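This tiered fallback can be sketched in code. The tier names and speed caps below are invented; the logic is just the rule described above, where each healthy layer authorizes a higher cruise speed and a failed layer drops the vehicle back to the tier beneath it:

```python
# A minimal sketch of the three-tier fallback idea. Each tier, when
# healthy, authorizes a higher cruise speed; the vehicle runs at the
# speed of the highest tier whose whole chain of underlying tiers is
# also healthy. All names and speeds are invented for illustration.
TIERS = [
    ("tier1_base_failsafe", 8.0),     # simplest always-on layer
    ("tier2_track_sensors", 15.0),    # wired sensors bump the base speed
    ("tier3_network_overlay", 25.0),  # networked traffic optimization
]

def cruise_speed(healthy):
    """healthy: set of tier names currently passing self-checks."""
    speed = 0.0  # no healthy tier at all -> limp to a station on battery
    for name, cap in TIERS:
        if name in healthy:
            speed = cap
        else:
            break  # a tier only counts if every tier below it is up
    return speed

# With the wireless overlay down, the vehicle degrades to tier 2 speed:
print(cruise_speed({"tier1_base_failsafe", "tier2_track_sensors"}))
```

Note that losing the top tier costs performance, not safety: the vehicle never goes faster than the best tier it can currently trust.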

What would this third tier look like? This would be one of those networking challenges that Cisco Systems and Google could really sink their teeth into. I have been trying to research networking models that would work, but I must confess my relative ignorance in the field, although searching terms like “campus-wide LAN” or “mobile Wi-Fi” are turning up some interesting technologies, including VLANs, Layer 2 and 3 switching, LECs, IEEE 802.16, mesh networking, ATM backbones, and a host of other concepts that I hope will make perfect sense to me someday.



4 comments:

akauppi said...

Modern RFID technologies are starting to provide adequate speeds to match PRT control needs (here I'm thinking of track-to-vehicle communications). Under the title RFID it's not only dumb tags but also symmetrically encrypted communications between two smart systems (called "active," in comparison to passive tags).

There are certain reasons - mainly the encryption and the design for fault tolerance - that I would recommend RFID over generic WLAN technologies. Both use the same 2.4 GHz (or 5 GHz) spectrum, I believe.

PRT technology may well be one of the application areas that brings the price of such active transponders down. Currently they are still ridiculously expensive ($100 or so). They could be $5-10.

Dan said...

Hi Akauppi - I guess no discussion of PRT and RFID tags is really complete without mentioning FROG (Free Ranging On a Grid), the system for automated guided vehicles that is being used in 2getthere’s PRT system in Dubai. As the name suggests, the system allows automated vehicles to move about any paved area like chess pieces on a board, with free x-y axis navigation. The RFID tags on the ground tell the vehicle where on the grid it is.

I question, though, how extensively they would be used in a track-based system. The most obvious use is (similarly) for position identification. This would enable the vehicle, by counting subsequent wheel revolutions, to know exactly where it is. The estimate could then be recalibrated with other tags downstream. This might be useful in navigation because it could tell a vehicle when it is approaching a key part of a journey, like turning off of the current track. With RFID the vehicle itself would have to pass any info along, unless there were a track-based way to rebroadcast the RFID’s signal.
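The tag-plus-odometry scheme described here is simple dead reckoning with periodic recalibration. A minimal sketch, with an assumed wheel circumference and invented tag positions:

```python
# Dead reckoning by wheel revolutions, recalibrated by fixed RFID tags.
# Wheel circumference and tag positions are invented for illustration.
WHEEL_CIRCUMFERENCE = 1.5  # meters of travel per wheel revolution (assumed)
TAG_POSITIONS = {"tag_A": 0.0, "tag_B": 500.0}  # tag id -> surveyed track position (m)

class PositionEstimator:
    def __init__(self):
        self.position = 0.0

    def on_wheel_revolution(self):
        # Dead reckoning accumulates small errors (slip, tire wear).
        self.position += WHEEL_CIRCUMFERENCE

    def on_tag_read(self, tag_id):
        # Passing a fixed tag snaps the estimate to a known position.
        self.position = TAG_POSITIONS[tag_id]

est = PositionEstimator()
est.on_tag_read("tag_A")
for _ in range(340):           # counted revolutions suggest ~510 m...
    est.on_wheel_revolution()
est.on_tag_read("tag_B")       # ...but the tag recalibrates to the surveyed 500 m
print(est.position)
```

The vehicle's estimate drifts between tags, and each tag read wipes out the accumulated error, which is exactly why tag spacing would be a design parameter.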

I read about programmable, “active” tags with interest, but I haven’t figured out how they would be used, since anything they do is only conveyed to upstream vehicles as they reach the tag. What information would they communicate? This capability seems quite limited compared to just getting a message out to the whole system via some track-based communications link. A leaky cable can pass messages upstream as well, as long as the next vehicle is not too far back. I don’t rule anything out, though, because all forms of backup are on the table.

As I crystallize my thoughts on the multi-level approach, I am thinking that on the most fundamental and autonomous level RFID tags, bar codes, magnets or other unpowered means would help vehicles navigate. The next layer might be the wired or otherwise active technologies that can be run from communications and power lines in the track, and the last layer would be something like WiFi. The idea is that on that third level absolute reliability is not required. I’m thinking of optimal traffic management mostly.

cmfseattle said...

well, GM says it's possible (sans-guideway, no less)...

http://www.popularmechanics.com/automotive/new_cars/4350065.html

Ryan Baker said...

It's not true that wireless blows other methods out of the water by most metrics. In the fundamental areas of speed, reliability, and cost per gigabyte, wireless still trails. Wireless excels through versatility. In some cases this versatility allows it to achieve some metrics more economically, but its most common advantage is simple accessibility.

But when discussing an elevated, track-based system, there is little chance of wireless beating wires in those metrics anytime soon. You could make the argument for it being a contender on reliability, since wires have a single point of failure, or at most a linear number of backup wires. But then again, wireless will always be more subject to bursts of interference.

The best quality of service you could achieve would be through a combination of wires and wireless. Essentially using a wireless mesh as a backup.
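The wires-primary, mesh-backup arrangement amounts to a simple failover rule. A toy sketch (link names and the final fallback behavior are invented for illustration):

```python
# Toy link selection for "wires primary, wireless mesh as backup."
# Link names and the no-comms fallback are invented for illustration.
def choose_link(wired_ok, mesh_ok):
    """Pick the communications path given current health checks."""
    if wired_ok:
        return "wired_trackside"   # preferred: fast, cheap, reliable
    if mesh_ok:
        return "wireless_mesh"     # backup: survives a cut cable
    return "local_failsafe"        # no comms: fall back to vehicle autonomy

print(choose_link(True, True))     # wire wins even when both are up
print(choose_link(False, True))    # mesh carries traffic past a break
```

The mesh only has to be good enough to bridge the (hopefully short) windows when the wire is down, which is a much easier requirement than carrying all traffic all the time.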

Likewise, with the concept of autonomy, complete removal of central systems forgoes opportunities worth taking advantage of. I would agree that less emphasis on a fully central model is an improvement, but recognize that decentralization has existed in the form of intersection controllers for a long time. Is another step in this direction possible?

Yes, but I don't believe that is what is holding back performance. A somewhat central system can manage speed differentials just fine, though it does add quite a bit of complexity. That complexity will not be eliminated by further decentralization. I believe the reason that complexity is being avoided in practical applications is that it makes a system that is already "blowing the minds" of regular people even harder to understand and, with a lack of sufficient understanding, even more frightening than it already is.

Also, recall that current systems of control allow maximum throughput at 40 mph. But is maximum throughput necessary, and when it is, is it best achieved through an optimization of speed and throughput or through additional bypasses or alternate routes? I think some of this is simply trying to compete with heavy rail on a single track, which is generally insane, since you could build four tracks with the same money and space.

So in summary: in the short term, mesh communications are best used as a backup to wires, and the complexity left to autonomous-car designers and second-generation PRT researchers. The capability to upgrade the software is unlikely to be excluded by any choices made today, and thus your general axiom of concern over excluding future choices does not apply to this question.