The sample data creation script below requires a table of numbers. If you do not already have one, the following script can be used to create one efficiently.
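The numbers-table script itself is not reproduced in this extract. As a rough sketch of the idea (using SQLite via Python as a stand-in for SQL Server, and a recursive CTE rather than whatever technique the original script uses), a table of integers from one to one million can be generated like this:

```python
import sqlite3

# Hypothetical stand-in for the article's numbers-table script: the goal is
# simply a single-column table holding the integers 1 to 1,000,000.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Numbers (n INTEGER PRIMARY KEY)")
conn.execute("""
    INSERT INTO Numbers (n)
    WITH RECURSIVE seq(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 1000000
    )
    SELECT n FROM seq
""")
lo, hi, cnt = conn.execute(
    "SELECT MIN(n), MAX(n), COUNT(*) FROM Numbers"
).fetchone()
print(lo, hi, cnt)  # 1 1000000 1000000
```

SQL Server implementations typically favour stacked cross joins or `GENERATE_SERIES` over a recursive CTE for performance, but the resulting table is the same.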
For example, the first few rows of the sample data might look like this before the update (all end dates set to 9999-12-31):

Then like this after the update:

One reasonably natural way to express the required update in T-SQL is as follows:

The post-execution (actual) execution plan is:

The most notable feature is the use of an Eager Table Spool to provide Halloween Protection.
This is required for correct operation here due to the self-join of the update target table.
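The effect of that protection can be sketched in miniature. The following is an analogy in Python, not SQL Server internals: the classic example is giving a 10% raise to every salary below a cutoff while scanning a salary-ordered index. Without a spool, an updated row can move ahead of the scan position and be updated again; eagerly spooling the qualifying rows first prevents the update from seeing its own output.

```python
import bisect

def raise_naive(salaries, cutoff=25_000):
    # Scan the "index" (a sorted list) in ascending order while updating
    # in place: a raised row is re-inserted ahead of the scan position
    # and can be processed repeatedly -- the Halloween Problem.
    s = sorted(salaries)
    i = 0
    while i < len(s):
        if s[i] < cutoff:
            v = s.pop(i)
            bisect.insort(s, v * 11 // 10)  # 10% raise; row moves forward
        else:
            i += 1
    return s

def raise_spooled(salaries, cutoff=25_000):
    # "Eager spool": read every qualifying row before any update happens,
    # so each row is updated exactly once.
    s = sorted(salaries)
    spool = [v for v in s if v < cutoff]
    rest = [v for v in s if v >= cutoff]
    return sorted(rest + [v * 11 // 10 for v in spool])

naive_result = raise_naive([20_000, 30_000])
spooled_result = raise_spooled([20_000, 30_000])
print(naive_result)    # [26620, 30000] - the 20,000 row was raised three times
print(spooled_result)  # [22000, 30000] - raised exactly once
```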
The vast majority of the logical reads are caused by the Clustered Index Update navigating down the index b-tree to find the update position for each row it processes.
You will have to take my word for it for the moment; more explanation will be forthcoming shortly.
That is pretty much the end of the good news for this form of the query.
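Since the T-SQL listing itself is not reproduced in this extract, here is a minimal sketch of that style of self-join update, again using SQLite via Python as a stand-in; the table name `Example` and its column names are assumptions, not the article's actual schema. Each row's end date is set to the day before the next start date for the same ID, with rows that have no following start date left at 9999-12-31:

```python
import sqlite3

# Assumed schema for illustration only: one row per (some_id, start_date),
# with end_date initially set to the sentinel 9999-12-31.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Example (
        some_id    INTEGER NOT NULL,
        start_date TEXT NOT NULL,
        end_date   TEXT NOT NULL DEFAULT '9999-12-31'
    )
""")
conn.executemany(
    "INSERT INTO Example (some_id, start_date) VALUES (?, ?)",
    [(1, '2000-01-01'), (1, '2000-02-01'), (1, '2000-03-01'),
     (2, '2000-01-15'), (2, '2000-02-15')],
)

# The self-join on the update target is expressed as a correlated
# subquery: find the earliest later start date for the same ID and
# subtract one day. The EXISTS clause leaves "last" rows untouched.
conn.execute("""
    UPDATE Example
    SET end_date = (
        SELECT DATE(MIN(nxt.start_date), '-1 day')
        FROM Example AS nxt
        WHERE nxt.some_id = Example.some_id
          AND nxt.start_date > Example.start_date
    )
    WHERE EXISTS (
        SELECT 1 FROM Example AS nxt
        WHERE nxt.some_id = Example.some_id
          AND nxt.start_date > Example.start_date
    )
""")

rows = conn.execute(
    "SELECT some_id, start_date, end_date FROM Example "
    "ORDER BY some_id, start_date"
).fetchall()
for r in rows:
    print(r)
```

Note that this is exactly the shape of query that needs Halloween Protection in SQL Server: the update target is also read by the subquery.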
The logical reads are again an aggregate over all iterators that access this table in the query plan.

The task at hand is to update the example data such that the end dates are set to the day before the following start date (per Some ID).

The resulting numbers table will contain a single integer column with numbers from one to one million:

While the points made in this article apply pretty generally to all current versions of SQL Server, the configuration information below can be used to ensure you see similar execution plans and performance effects:

If you run the data creation script above with actual execution plans enabled, the hash aggregate may spill to tempdb, generating a warning icon:

When executed on SQL Server 2012 Service Pack 3, additional information about the spill is shown in the tooltip:

This spill might be surprising, given that the input row estimates for the Hash Match are exactly correct:

We are used to comparing estimated and actual row counts, but here it is the estimated number of distinct values output that is important: Hash Aggregate spills depend on the number of unique values output, not on the input size. The cardinality estimator in SQL Server 2012 makes a rather poor guess at the number of distinct values expected (1,000 versus 999,034 actual); as a consequence, the hash aggregate spills recursively to level 4 at runtime.

The 'new' cardinality estimator available in SQL Server 2014 onward happens to produce a more accurate estimate for the hash output in this query, so you will not see a hash spill in that case:

The number of Actual Rows may be slightly different for you, given the use of a pseudo-random number generator in the script.
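That dependence on distinct values rather than input size is easy to see in miniature. The sketch below is an illustration in Python, not SQL Server's implementation: a hash aggregate keeps one entry per distinct group, so the memory it needs (and hence whether it spills within a fixed grant) tracks the number of unique values output, even when the input row count is identical.

```python
import random

# Two inputs of identical size (100,000 rows each): one draws from about
# 1,000 distinct values, the other from a billion possible values, so
# nearly every row forms its own group.
random.seed(42)
n_rows = 100_000
few_groups = [random.randrange(1_000) for _ in range(n_rows)]
many_groups = [random.randrange(10**9) for _ in range(n_rows)]

def hash_aggregate(keys):
    # One hash-table entry per distinct group value.
    counts = {}
    for k in keys:
        counts[k] = counts.get(k, 0) + 1
    return counts

small = hash_aggregate(few_groups)
large = hash_aggregate(many_groups)
# Same input size, wildly different hash-table sizes:
print(len(small), len(large))
```

A memory grant sized for roughly 1,000 groups is comfortable for the first input and hopeless for the second, which is exactly the situation the SQL Server 2012 estimate (1,000 versus 999,034 actual) creates.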