Bloor Research Group:
Parallel Database Technology
An Evaluation and Comparison of Scalable Systems
Much Ado about Shared Nothing
It has become increasingly obvious that the future of commercial databases is bound up with their ability to exploit hardware platforms that provide multiple CPUs.
In the light of this, we will soon be in a world where all server hardware is multiple-CPU hardware. It naturally follows that the major database vendors all need to provide parallel features in their database products, in order to exploit the server hardware that is being delivered. We have reached a situation where top-end server hardware is now based on parallelism, and the top-end databases are becoming parallel.
This report explores the wealth of opportunities offered by the new breed of multi-processor computers and the databases that run on them. It explains the issues and dispels many of the myths that have already sprung up around the subject of parallel computing. It is aimed at those who wish to evaluate parallel database technologies, and as a knowledge resource for those who will be involved in building parallel database applications.
There is a tremendous amount of confusion in the market over parallel database technology, among both customers and vendors. The root of this is a general lack of understanding of the technical issues on both sides. Although the workings of a parallel database are more complex than those of an ordinary database, understanding them requires a change in mindset rather than an astronomically high IQ. Unfortunately, few decision makers have much understanding of parallel databases, and consequently it is open season for database and hardware marketing people to confuse the market with technical mumbo-jumbo that they do not fully understand themselves.
In researching this report, we were both surprised and disappointed to find that much of the vendors' marketing literature fails to cover all the important features of their products, particularly the important features related to implementing a parallel database system. This is a shame, because it has led to misunderstanding on the part of customers, and subsequent disappointment when systems have not measured up to user expectations. Unfortunately, there has been no authoritative source of information which potential customers could use as a meaningful yardstick to measure vendor claims about their products.
In compiling our research, we have identified a number of industry myths which have sprung up and which, if they remain unchallenged, could obstruct the smooth uptake of the technology into the mainstream of the market. Belief in these myths means that product marketing literature often reflects and reinforces a false picture. Products are sold on disinformation rather than on the benefits that their technical features can provide.
We also caution readers against putting too much faith in the numerous benchmark figures that are generated by the vendors. Benchmarks can be regarded as a virility contest between vendors with large marketing budgets, rather than a valid measure of how well any application will perform in a particular situation. This is particularly so with parallel databases. Most suspicious are the figures which sometimes appear in marketing literature that suggest straight-line scalability. In practice, such scalability is never achieved.
There are many reasons why benchmarks should be distrusted. Chief amongst them is the fact that, by their very nature, benchmarks do not resemble real-world situations. In the area of very large databases, whether a product is used for OLTP or Data Warehousing, extensive design work needs to be done to cater for the likely workload. On top of this, there will be extensive tuning work that is done as the system beds in and the profile of the workload becomes clear. Benchmarks never mimic this situation, but this situation is the reality.
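One reason straight-line scalability is unattainable in practice is captured by Amdahl's law: any fraction of a workload that cannot be parallelised (coordination, locking, serial query steps) caps the achievable speedup, however many CPUs are added. The following sketch is our own illustration of that arithmetic, not an example taken from the report:

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Speedup predicted by Amdahl's law for a workload in which
    serial_fraction of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even a modest 5% serial fraction keeps a 32-CPU system far below 32x,
# and the curve flattens rather than continuing in a straight line.
for n in (1, 8, 32, 128):
    print(f"{n:4d} CPUs -> speedup {amdahl_speedup(0.05, n):.2f}")
```

With a 5% serial fraction, 32 CPUs deliver roughly a 12.5x speedup and 128 CPUs under 17.5x, which is why vendor graphs showing perfectly linear scaling deserve scepticism.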
Evaluating Parallel Database
Unlike the benchmark brigade, we do not give you a magic number purporting to say how good a DBMS product is. Rather, we draw your attention to those aspects of the hardware and software architectures of parallel database engines that are relevant to particular categories of database performance problems.
The performance that you actually get out of a database system depends on four factors: the hardware platform, the DBMS, the database itself (in terms of the volume and structure of the data) and, last but not least, the pattern of usage of the database, i.e. the workload. We deal with each of these factors individually, but disappointing database performance and scalability is typically caused by something bad about the 'chemistry' when you mix these four ingredients together, or when you try to change one of them. In this report we explain, in product-independent terms, the causes of this bad chemistry, and provide a comparison and rating of each product's capabilities in the light of this analysis.
The vendors profiled in this report are very disparate. Some are huge multi-national companies, others are small start-ups with a hundred or so employees. Some sell parallel hardware, some sell parallel DBMS software, and a few sell both. For the sake of comparison, we have broadly divided the market into two categories: DBMS options and hardware options, rating each in different ways.
The DBMS Products
- ADABAS D (Software AG)
- DB2/6000 Parallel Edition (IBM)
- Informix DSA (Informix)
- Navigation Server (Sybase)
- NonStop SQL/MP (Tandem)
- OpenINGRES (CA)
- Oracle 7 (Oracle)
- OracleRdb (Oracle)
- Red Brick Warehouse (Red Brick)
- Teradata (AT&T GIS)
- WX9000 RDS (White Cross)
The Parallel Computers
- 3600 (AT&T GIS)
- 8400 (Digital)
- CS6400 (Cray Research)
- CS2 (Meiko)
- Exemplar (Convex)
- Goldrush MegaSERVER (ICL)
- Himalaya (Tandem)
- nCUBE2/nCUBE3 (nCUBE)
- OPUS (Unisys)
- POWER CHALLENGE (SGI)
- Reliant RM1000 (Pyramid)
- RS6000/SP (V.2) (IBM)
- Symmetry (Sequent)
- WX9000 (White Cross)
Parallel Database Technology: An Evaluation and Comparison of Scalable Systems
Authors: Dr Michael G. Norman and Dr Peter Thanisch
Edited by: Robin Bloor and Tom Jowitt
Length: 546 pages
Published by: Bloor Research Group