TTable vs TQuery

  One of the most frequently asked questions from Delphi programmers entering the world of client/server is "Should I use TTable or TQuery to access client/server tables?". It may be more productive to analyze each component's strengths and weaknesses so we may better understand when it is appropriate to use one or the other. In this issue we'll discuss the relative merits of these two components when they are used with client/server databases. Unless otherwise indicated, all discussion of "tables" refers to a table within a client/server database.
 

TTable

The single biggest advantage of TTable is that it provides the most portable data access if you want the same front-end client to operate with multiple back-end servers. However, TTable really seems to have been designed with desktop databases like Paradox and dBase in mind. While it functions with SQL databases, it is not the all-purpose table tool you might think it would be. Relational databases (i.e., SQL databases) are designed to be manipulated in sets. That is, operations are performed on one or more related records in one batch through a query, unlike more traditional databases, which permit full freedom to navigate forwards and backwards, and to move to specific records within the table. SQL tables are designed to be queried through the SQL language, which is what TQuery is good at. Unfortunately, the SQL language is not well suited for table navigation, which is what TTable is good at. In straight SQL syntax, there is no mechanism for a "previous" record, a "last" record, or other navigation concepts you take for granted in traditional databases. Some SQL servers provide "scrollable cursors", which are navigable pointers within an SQL result set, but support is not widespread and not all vendors' cursors are fully navigable both backwards and forwards. Furthermore, these cursors typically operate upon a result set obtained through an SQL query, rather than being inherent to the table itself.

This paradigm shift in database design is a very important aspect to keep in mind when designing your front-end. Many database applications employ file browsers and let the user scroll freely through a particular file, but large client/server tables are not well suited to the "browser" concept. Some alternative approaches include populating a combo box with key values from the records, or providing a search dialog that lets the user narrow down the choices into a more manageable list. In either case, the user selects a particular record from the list and then the entire record can be retrieved and displayed.

The significant issue with TTable and client/server is that all TTable actions are ultimately translated into SQL queries to be processed by the server, and you have lost control over how and when those SQL queries are made. For example, a frequent complaint among Delphi client/server users is that just opening a TTable can take upwards of 15-20 seconds. The truth is that the time it takes TTable to open an SQL table is directly proportional to the size of the table being opened. TTable.Open actually issues a series of SQL queries to obtain information about all the columns, indexes and defaults used in the table. This information is gathered only the first time the table is opened. After that, it issues a SELECT query that retrieves all of the columns and all of the rows from the table ordered by the index field (or the primary key if no index is specified). This is done each time the table is opened. Only the first row of this result set is returned to the client (or as many as are needed to populate data-aware controls), but the server is still going to build a result set in response to the query. So you can see that if you try to use TTable to read one value from one row of a very large table, just opening the TTable is going to take some time. This demonstrates the inherent weakness of TTable in client/server: you've lost control over the SQL being sent to the server to perform your tasks; you are at the mercy of the BDE.
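To make the contrast concrete, here is a minimal sketch of reading that same single value through a TQuery instead (the customer table, its cust_no and name columns, and the Query1 component are hypothetical); the server builds a one-row result set rather than one covering the entire table:

        Query1.SQL.Clear;
        Query1.SQL.Add('SELECT name FROM customer WHERE cust_no = 1234');
        Query1.Open;  { server returns only the one matching row }
        try
          ShowMessage(Query1.FieldByName('name').AsString);
        finally
          Query1.Close;
        end;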

The upside is that TTable is generally fine for accessing small client/server tables of a few hundred or a few thousand records. You'll have to do some testing to determine where the point of diminishing returns is for your system. In some cases, such as the TDBLookupCombo component, you have no choice but to use a TTable. There are some third-party components similar to TDBLookupCombo that claim to accept TQuery datasets, but you have to look under the hood. Some of them simply copy the results of the query into a temporary table and use TTable to access it. In these cases you have to consider the overhead of creating, populating, and disposing of the temporary table.
 

Record Retrieval

Is there any advantage to using TTable to look up a record using FindKey rather than an SQL query? Not necessarily. First, FindKey is restricted to searching only indexed fields, and the only way to search on more than one field is if the fields you are interested in are covered by one or more indexes. An SQL query, on the other hand, can search based on any number of fields in the table, whether they are indexed or not. Granted, unindexed fields are going to be slower to search on, but an SQL query performed at the server will still be faster than a sequential search done through TTable in the client application.

Second, FindKey will ultimately send the server a SELECT statement with a WHERE clause to return the desired record; the same thing you would do with an SQL query. However, FindKey will fetch all of the fields of the record back from the server (regardless of which fields you selected in the TTable's Fields Editor). With your own SQL query, you could request only the fields you were interested in. With large records where only a few fields are of interest, you could conceivably improve performance considerably.
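As a sketch of the difference (the phone column and the component names are hypothetical), compare:

        { FindKey fetches every column of the matching row: }
        Table1.FindKey(['Troxell, Steve']);

        { An equivalent query can fetch just the column you need: }
        Query1.SQL.Clear;
        Query1.SQL.Add('SELECT phone FROM customer');
        Query1.SQL.Add('WHERE name = "Troxell, Steve"');
        Query1.Open;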

FindNearest, however, is not easily emulated with SQL queries. Given a nonexistent value to search for, FindNearest returns the record just after the point in the sequence where the requested value would fall. Assuming we are searching the Name field of the Customer table for the nearest match to "Troxell, Steve", we can almost emulate this functionality with the query:

        SELECT * FROM customer
          WHERE name >= "Troxell, Steve"
          ORDER BY name

However, this won't give you just the one row after the point where "Troxell, Steve" ought to be in the sequence; it gives you all the rows after "Troxell, Steve". It turns out that this is exactly the query used by TTable.FindNearest on SQL tables (the entire query is processed, but only the first row of the result set is sent back to the client). A much more efficient way to emulate FindNearest would be to use a nested query to find the value and then retrieve the matching record:

        SELECT * FROM customer
          WHERE name = (SELECT MIN(name) FROM customer
                        WHERE name >= "Troxell, Steve")
 
 

In a nested query (sometimes referred to as a "subquery"), the inner SELECT statement is processed first and its results are fed into the outer statement. Here, the inner statement finds the lowest value of the Name field that satisfies the condition that it is greater than or equal to "Troxell, Steve". This value is then used in the WHERE clause of the outer statement to retrieve that particular record. The server does much less processing to evaluate the aggregate function MIN in the inner statement than it does for the brute-force query used by TTable.FindNearest. Again, you don't have this control with TTable.

Other problems with FindKey and FindNearest involve the actions of Prior and Next after establishing a position with one of the Find methods. For example, FindKey is going to issue a query that returns a result set of at most one record (assuming no duplicate values): the one that matches the key value. What happens if you use the Next method to scroll to the next record? There is no next record in the result set returned by FindKey, but there should be a next record in the actual table. In this case, TTable.Next issues a query very much like that shown above for FindNearest, with the key value you used in FindKey as the point of comparison. For example, if you used FindKey(['Troxell, Steve']) followed by Next, the query sent by Next is identical to that shown above.
 

TQuery

TQuery encapsulates one or more SQL statements, executes them, and provides methods by which you can manipulate the results. As you have seen from the past two issues, SQL provides some powerful capabilities in its own right. Queries can be divided into two categories: those that produce result sets (such as a SELECT statement), and those that don't (such as an UPDATE or INSERT statement). Use TQuery.Open to execute a query that produces a result set; use TQuery.ExecSQL to execute queries that do not produce result sets.
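For example (a minimal sketch; the table and column names are hypothetical):

        { SELECT returns a result set, so use Open: }
        Query1.SQL.Clear;
        Query1.SQL.Add('SELECT * FROM customer');
        Query1.Open;

        { UPDATE returns no result set, so use ExecSQL: }
        Query2.SQL.Clear;
        Query2.SQL.Add('UPDATE customer SET credit_ok = 1 WHERE cust_no = 1234');
        Query2.ExecSQL;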

Once you call Open, the query is sent to the server to be processed and the first row of the result set is returned to the client application. Subsequent rows generally are not passed back to the client until you specifically request them by navigating through the result set. With a result set, you can do most of the same tasks you can with a TTable: you can navigate using the same First, Next, Prior, etc. methods found in TTable or with the TDBNavigator component; you can link to data-aware controls via a TDataSource; and you can edit the result set provided the query conforms to certain restrictions (more on that in a minute).
 

Unidirectional Movement

An important little property to keep in mind when working with TQueries is the Unidirectional property. If your SQL server does not support bi-directional cursors (that is, you can only move forwards through the result set, never backwards), then you must leave this property set to False (the default) if you want your application to be able to navigate both forward and backward through the result set; you will need this, for example, if you have a TDBGrid bound to the dataset. With Unidirectional set to False, Delphi emulates bi-directional movement by buffering the records internally in the client application as they are returned from the server.

You may be concerned by this approach if you anticipate a large result set because resources could be consumed rapidly. If you only need to traverse forward through the result set, then you can set Unidirectional to true, and Delphi will not buffer the records. If you only need to walk through a result set once, then doing so with Unidirectional set to true can be a bit faster since Delphi doesn't have to go through the overhead of buffering the records. However, if you are going to repetitively traverse the result set many times, like with a TDBGrid component, Unidirectional set to False may be a bit faster since the rows will be read from the buffer instead of the server.
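A minimal sketch of the single-pass case (the component and the processing step are hypothetical; note that the property is spelled UniDirectional in code):

        Query1.UniDirectional := True;  { no client-side buffering }
        Query1.Open;
        try
          while not Query1.EOF do
          begin
            { process the current row here }
            Query1.Next;
          end;
        finally
          Query1.Close;
        end;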

Some of the navigation methods are misleading when used on "forwards only" datasets. For example, TQuery.Last actually moves beyond the end of the dataset and "backs up" one record. Since this backwards movement is illegal, you will get a "Capability not supported" exception.
 

Modifiable Datasets

By default, TQuery returns a read-only result set; a read-only result set is also a bit faster than a modifiable one (which Delphi refers to as a "live" result set). With a live result set you can use the same editing techniques (methods or data-aware controls) as with a TTable. To get a live result set, you must first set the RequestLive property to True. Second, you must use a SELECT statement that meets the following requirements (Delphi 1):

     - conforms to Local SQL syntax (see the Delphi on-line help),
     - involves only one table or editable view,
     - does not use aggregate functions,
     - does not use an ORDER BY clause, and
     - if the table is on a Sybase or Microsoft SQL server, the table must have a unique index.

For live result sets, Delphi converts your SQL query into Query-By-Example syntax. The restrictions listed above ensure that your query can be converted into the QBE syntax. You can examine the CanModify property to determine if your TQuery has met all of the requirements to be editable.
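Here is a minimal sketch of requesting and editing a live result set (the customer table, its name column, and Query1 are hypothetical):

        Query1.RequestLive := True;
        Query1.SQL.Clear;
        Query1.SQL.Add('SELECT * FROM customer');  { one table, no ORDER BY }
        Query1.Open;
        if Query1.CanModify then
        begin
          Query1.Edit;
          Query1.FieldByName('name').AsString := 'Troxell, Steven';
          Query1.Post;
        end;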
 

Dynamic SQL Statements

The SQL query does not have to be "hard-coded" at design-time through the SQL property editor. The SQL property can be set through code at runtime to produce "dynamic SQL". This can be very powerful, since you can construct statements in response to user actions.

Take a look at the example below. This demonstrates a common technique used in client/server applications to allow users to pick from a list of choices after the list has been narrowed by supplying one or more search criteria. This is an inventory search dialog for a tire store and the user is expected to supply one or more of the following: 1) a part manufacturer, 2) a part size, or 3) a part description. The program constructs an SQL SELECT statement with a WHERE clause that reflects the values present in the edit boxes. If the user does not supply a value for one of the search fields, it isn't included in the WHERE clause.
 

When the user presses the Find button, the program interprets the search values given and constructs an appropriate SQL query to find the matching records. The part manufacturer entry field will match against any record with a manufacturer that starts with the text the user entered, so we use the LIKE operator and append the wildcard "%" to the text. The part size and part description fields can match text anywhere in the field, so again we use LIKE and appropriate wildcard characters.
 
 
 
 

procedure TForm1.FindBtnClick(Sender: TObject);
var
  WhereClause: string;
begin
  WhereClause := '';
  if Mfg.Text <> '' then
    WhereClause := 'Manufacturer LIKE "' + Mfg.Text + '%"';
  if Size.Text <> '' then
  begin
    if WhereClause <> '' then WhereClause := WhereClause + ' and ';
    WhereClause := WhereClause + 'Size LIKE "%' + Size.Text + '%"';
  end;

  if Desc.Text <> '' then
  begin
    if WhereClause <> '' then WhereClause := WhereClause + ' and ';
    WhereClause := WhereClause + 'Description LIKE "%' + Desc.Text + '%"';
  end;

  if WhereClause = '' then
    raise Exception.Create('At least one search field must be entered');
  Query1.Close;
  Query1.SQL.Clear;
  Query1.SQL.Add('SELECT * FROM Inventory WHERE ' + WhereClause + ' ORDER BY PartNo');
  Query1.Open;
end;
 
 

For example, if the user enters GOOD for the manufacturer and R15 for the size, the code above generates:

        SELECT * FROM Inventory
          WHERE Manufacturer LIKE "GOOD%" AND Size LIKE "%R15%"
          ORDER BY PartNo
 
 
 
 

This is just a small example of the flexibility you have with dynamically created SQL statements. You can see how several different search fields can be added, and the user can fill out as many or as few as they desire to narrow their search. Another possibility would be adding radio buttons to control whether the search will be the intersection of matching records (by using a logical AND between operators) or the union of matching records (by using a logical OR between operators). It would be very difficult to construct a search engine with this much flexibility using just TTable methods.
 

Parameterized Queries

If you wanted a query to return a row that matches a specific key value, you could assemble the SQL statement at run-time as shown above, or you could hard-code the SQL statement through the property editor and use a substitution parameter for the key value:
 

        SELECT * FROM atable
          WHERE cust_id = :customer_no
 

Here cust_id is the name of an integer field in the table atable, and customer_no is an arbitrary name for the parameter. It is called a substitution parameter because you will assign a value to the parameter and that value will be substituted in place of the parameter when the query is executed. For example, if you wanted to find the record with cust_id equal to 1234, you would assign the value to the parameter before executing the query:
 
 

        Query1.ParamByName('customer_no').AsInteger := 1234;
        Query1.Open;

Notice that the parameter name is preceded by a colon in the SQL statement, but you do not use the colon when referencing the parameter with ParamByName.

You should be wary of using a lot of parameters on separate lines. For example, you may have an INSERT statement with a large number of fields, and you may decide to list each field value on a separate line in the SQL property (as in the statement shown below).

When a Delphi form containing a TQuery component is created at runtime, the SQL property is built one line at a time from the component stream. As the SQL is built, Delphi keeps an internal list of the parameters found. Unfortunately, as each new line of SQL code is added, the internal parameter list is destroyed and rebuilt from the first line of SQL down to the current line. To put this in perspective, if you have a TQuery component with an SQL statement having 25 parameters all on separate lines, when that TQuery is created at runtime, the internal list of parameters is destroyed and rebuilt 25 times, with 325 total inserts into the list (only 25 will remain in the list after the last destroy/build iteration). This may not seem like much, but it does add a few seconds to the form creation. In one of our projects, we had a small form created on demand at runtime. Adding three TQueries with a total of 75 parameters between them (one on each line) increased the form creation time from a fraction of a second to over 3 seconds. Not devastating, but quite annoying to the user. When we crammed all the parameters onto just a few lines, the time dropped back down to under a second.
 

        INSERT INTO ATable (Name, Address, City,
          ...a whole bunch of fields...
          DateOfBirth)
        VALUES (:Name, :Address, :City,
          ...a whole bunch of parameters...
          :DateOfBirth)
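Packing the same fields and parameters onto as few lines as possible avoids the repeated rebuilds; the statement itself is unchanged:

        INSERT INTO ATable (Name, Address, City, ... DateOfBirth)
        VALUES (:Name, :Address, :City, ... :DateOfBirth)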
 
 

"Preparing" Parameterized Queries

If you are going to reuse the same parameterized query repeatedly (as in a loop), you can improve performance by explicitly "preparing" the query once before the first use. All SQL queries ultimately get sent to the server where they are parsed, validated, compiled and executed. The TQuery.Prepare method sends the SQL statement to the server to complete the preliminary steps of parsing, validation, and compiling. In this way, these preliminary steps are performed only once instead of each time you execute the query from your Delphi application. If you use Prepare, you must also use Unprepare when you are through with the query. Prepare consumes resources in the database (and apparently in the application as well) and you must pair up each Prepare with an Unprepare to release those resources. The code fragment below illustrates this technique.

        { For a query of:
          SELECT * FROM ATable WHERE Cust_No = :customer_no }
        Query1.Prepare;
        try
          for I := 1 to 1000 do
          begin
            Query1.ParamByName('customer_no').AsInteger := I;
            Query1.Open;
            try
              { do stuff with the returned record }
            finally
              Query1.Close;
            end;
          end;
        finally
          Query1.Unprepare;
        end;
 

With a parameterized query, you can still change the values of the parameters in the compiled form of the query. However, if you change the SQL statement through code, you will have to "prepare" it again. So if you choose to construct the SQL statement from code with the appropriate customer number already in place, you will defeat the advantages of preparing the query in advance. The Prepare method, if used, must be called after the SQL statement has been defined. If you define your SQL statements at design time with the property editor, you may want to put your Prepare call in the form's OnCreate event handler and the Unprepare call in the form's OnDestroy event handler.
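A minimal sketch of that arrangement (assuming Query1's SQL statement was set at design time):

procedure TForm1.FormCreate(Sender: TObject);
begin
  Query1.Prepare;    { parse/validate/compile once, up front }
end;

procedure TForm1.FormDestroy(Sender: TObject);
begin
  Query1.Unprepare;  { release server and client resources }
end;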
 

Conclusion

The prevailing wisdom for conventional databases (e.g., Paradox and dBase) is to use TTable wherever possible and only resort to TQuery when a task is beyond the capabilities of TTable. In the client/server world, however, most of the time the opposite is true. Remember that paradigm shift in database methodology: client/server tables are meant to be manipulated in record sets. If you're building a conventional database application in anticipation of eventually upsizing to client/server, be very careful about your choice of data access. What may work just fine in a small local system may limp along like a three-legged dog in a high-volume client/server environment. Contrary to marketing hype, scaling up to client/server is not necessarily just a matter of changing the alias definition.



Understanding the VCL


All VCL objects are referenced as pointers. There is no such thing as a static or local instance of a VCL object. You are always working with a pointer to a VCL object. This is the result of the VCL's origin in the world of Object Pascal.

Delphi sought to simplify its syntax to the greatest degree possible. However, there were several hurdles that had to be crossed before the language could be made accessible to a wide variety of programmers. In particular, something had to be done about the complexity of dealing with pointers.

For various reasons related to performance and wise use of memory, it is usually best to declare an object on the heap, rather than on the stack, particularly if you are still inhabiting the segmented world of 16-bit programming. As a result, the designers of Object Pascal wanted to make it as simple as possible for their users to work with objects that live on the heap. Rather than inflict pointer syntax on unwary Object Pascal programmers, the creators of the VCL decided that all objects would necessarily be created on the heap, but would support the syntax associated with local objects. In short, they created a world that eliminated pointer syntax for objects altogether. They could afford to do this because they made it impossible to create an object that was not a pointer. All other language types, such as strings, integers, arrays, structures, and floating-point numbers, could be treated either as pointers or as static objects. This rule applied only to objects.

It might help to illustrate this point with examples. Here is hypothetical code for how an Object Pascal programmer might have treated objects according to the standard rules of Pascal:

    var
      S: ^TObject;
    begin
      S := New(TObject, Create);
      S^.DoSomething;
      S^.Free;
    end;

The preceding code will not compile in Delphi, but it is an example of how Delphi code might have looked had the developers not done something to simplify matters. In particular, Delphi eliminated some syntactical clutter by enabling you to write the following:

    var
      S: TObject;
    begin
      S := TObject.Create;
      S.DoSomething;
      S.Free;
    end;

Clearly this is an improvement over the first example. However, both samples produce essentially the same underlying machine code. In other words, both examples allocate memory for the object, call a constructor called Create, call a method called DoSomething, implicitly call a destructor called Destroy, and then free the memory associated with the object. In the second example, all of this can be done without any need to dereference a pointer with the "^" symbol and without making explicit reference to the act of allocating memory. The point is that the compiler knows that the variable S has to be a pointer to an object, because Object Pascal forbids the creation of objects that are not pointers.

Clearly, in the Object Pascal world, it made sense to decide that all objects had to be pointers. There is no significant overhead involved with using pointers; indeed, they are often the fastest way to manipulate memory. So why not make this one hard-and-fast rule in order to make everyone's life simpler?

Translate this same concept into C++, and suddenly the rule doesn't make quite as much sense. In particular, you can only dare go so far when changing the C++ language to accommodate a new paradigm, and you therefore can't reap any of the benefits that adhered to the Object Pascal code shown in the second of the two code samples listed above. In other words, C++ reaps no benefits from this rule, while it does tend to limit your choices: you are no longer free to create VCL objects locally, but must perforce create them on the heap, whether you want to or not.

I will therefore take it as a given that the need to create VCL objects on the heap is not particularly beneficial to C++ programmers. On the other hand, there is no particular hardship inherent in doing so, nor does it force you to endure any hit on the performance of your application. It is therefore merely a fact of life, neither inherently good nor inherently bad. If you find this unduly irksome, you might consider the many other benefits that BCB brings you, and consider this one limitation as a small price to pay for getting the benefit of this product's strengths.

One final, and not at all unimportant, point: only VCL objects have to be created on the heap. All standard C++ objects can be handled as you see fit. In other words, you can have either static or dynamic instances of all standard C++ objects. It's only VCL objects that must be addressed with pointers and must be allocated on the heap!
