Shrink Database
Causes blocking issues
DBCC SHRINKDATABASE (Transact-SQL)
Understanding concurrency issues with DBCC SHRINKDATABASE
The shrink database and shrink file commands can lead to concurrency issues, especially with active maintenance such as rebuilding indexes, or on busy OLTP environments. When your application executes queries against database tables, these queries will acquire and maintain a schema stability lock (Sch-S) until the queries complete their operations. When attempting to reclaim space during regular usage, shrink database and shrink file operations currently require a schema modify lock (Sch-M) when moving or deleting Index Allocation Map (IAM) pages, blocking the Sch-S locks needed by user queries. As a result, long-running queries will block a shrink operation until the queries complete. This means that any new queries requiring Sch-S locks are also queued behind the waiting shrink operation and will also be blocked, further exacerbating this concurrency issue. This can significantly impact application query performance and will also cause difficulties completing the necessary maintenance to shrink database files. Introduced in SQL Server 2022 (16.x) Preview, the shrink wait at low priority (WLP) feature addresses this problem by taking a schema modify lock in WAIT_AT_LOW_PRIORITY mode. For more information, see WAIT_AT_LOW_PRIORITY with shrink operations.
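On SQL Server 2022 (16.x) or later, you can ask the shrink to take its schema modify lock in low-priority mode. A minimal sketch, assuming a database named MyDB and a 10 percent free-space target (both placeholders):

```sql
-- Shrink MyDB to leave 10 percent free space, waiting at low priority
-- for the schema modify (Sch-M) lock instead of queuing new Sch-S
-- requests behind the shrink. If the lock can't be acquired, the shrink
-- aborts itself (ABORT_AFTER_WAIT = SELF) rather than blocking users.
DBCC SHRINKDATABASE (N'MyDB', 10) WITH WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF);
```

If the shrink self-aborts, you simply retry it later during a quieter window; user queries are never left waiting behind it.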
Best Practices
Consider the following information when you plan to shrink a database:
- A shrink operation is most effective after an operation that creates unused space, such as a truncate table or a drop table operation.
- Most databases require some free space to be available for regular day-to-day operations. If you shrink a database file repeatedly and notice that the database size grows again, this indicates that the free space is required for regular operations. In these cases, repeatedly shrinking the database file is a wasted operation. The autogrow events necessary to grow the database file again will hinder performance.
- A shrink operation doesn't preserve the fragmentation state of indexes in the database, and generally increases fragmentation to a degree. This result is another reason not to repeatedly shrink the database.
- Unless you have a specific requirement, don't set the AUTO_SHRINK database option to ON.
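For example, after dropping a large table, reclaiming only the freed space at the end of the file with TRUNCATEONLY is the least disruptive form of shrink, because no pages are moved. A sketch with hypothetical object and file names:

```sql
-- A large archive table has just been dropped, leaving unused space.
DROP TABLE dbo.SalesArchive;

-- Release only the unused space at the end of the data file back to
-- the OS; no pages are moved, so index fragmentation is unaffected.
DBCC SHRINKFILE (N'MyDB_Data', TRUNCATEONLY);
```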
What If You Really DO Need to Shrink a Database?
Situations where you have no choice but to shrink the database:
1. Your database is 1 TB or larger and you've deleted 50% of the data, leaving 500 GB or more of empty space that won't be reused by day-to-day operations (such as periodic deletes or data archiving);
or
2. You want to reduce the database size to make it easier to restore to a development or QA environment.
What’s So Bad About Shrinking Databases with DBCC SHRINKDATABASE?
Recommendation:
Don’t shrink. Do this instead:
- Add a new filegroup, add new empty files in it, and set it to be the new default filegroup
- Move objects to the new filegroup using ALTER INDEX…REBUILD commands
- When you’ve moved all the objects over, shrink the old filegroup, and it’ll shrink super-quick
- Just leave it there – this is your new database layout
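The steps above can be sketched as follows, assuming a database MyDB, a table dbo.Orders with a clustered primary key, and hypothetical file paths. Note that to land the rebuilt index on the new filegroup, the CREATE INDEX ... WITH (DROP_EXISTING = ON) form of rebuild is used, since it accepts an ON <filegroup> clause:

```sql
-- 1. Add a new filegroup with an empty file and make it the default.
ALTER DATABASE MyDB ADD FILEGROUP FG_New;
ALTER DATABASE MyDB ADD FILE
    (NAME = N'MyDB_New1', FILENAME = N'D:\Data\MyDB_New1.ndf', SIZE = 1GB)
    TO FILEGROUP FG_New;
ALTER DATABASE MyDB MODIFY FILEGROUP FG_New DEFAULT;

-- 2. Move each object by rebuilding its clustered index onto FG_New.
CREATE UNIQUE CLUSTERED INDEX PK_Orders
    ON dbo.Orders (OrderID)
    WITH (DROP_EXISTING = ON, ONLINE = ON)
    ON FG_New;

-- 3. Once everything has moved, the old file is nearly empty and
--    shrinks quickly.
DBCC SHRINKFILE (N'MyDB_Old', TRUNCATEONLY);
```

Repeat step 2 for every table and nonclustered index; moving the clustered index moves the table's data with it.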
The ALTER INDEX…REBUILD approach has several key advantages over shrinking:
- It can use parallelism: you can specify how many cores to use with MAXDOP hints. This is really helpful on servers with MAXDOP = 1 at the server level because your query-level hint will override it, even overriding it upwards.
- It can move a lot more than one 8KB page at a time. Heck, you can even do several index rebuilds simultaneously on different threads if you want to, really churning through the work quickly.
- You pick the days/times for each object: maybe some of your objects have very heavy concurrency during some time windows, and you want to avoid touching those until a maintenance window.
- You pick the settings for each object: rebuilding an index with ONLINE = ON is slower. Some of your objects might be archive tables or unused during certain days/times, so you can use ONLINE = OFF on those to get faster performance.
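The per-object knobs described above look like this in practice (table and index names are hypothetical):

```sql
-- Busy table during business hours: rebuild online, with 8 cores,
-- overriding a server-level MAXDOP = 1 setting.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (ONLINE = ON, MAXDOP = 8);

-- Archive table nobody touches at night: an offline rebuild is faster.
ALTER INDEX ALL ON dbo.SalesArchive
    REBUILD WITH (ONLINE = OFF, MAXDOP = 8);
```

Keep in mind that ONLINE = ON requires Enterprise edition (or Azure SQL).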
But it does have a few drawbacks, too:
- It’s faster, but it’s also higher load: shrinking a database is a low overhead process: it’s hard for anybody to notice you moving just one 8KB page at a time, with just one CPU core. Index rebuilds, buckle up: people are gonna notice when you throw a ton of CPU cores at the problem and really start hammering your storage. This is just the flip side of the coin to finishing faster: if you wanna finish faster, you’re gonna do more work in less time.
- This also means it generates more logs, faster: because we’re moving so much data and it’s a fully logged process, this can present problems for transaction log backups, log file sizes, database mirroring, Always On Availability Groups, and storage/VM replication. Ease into this gradually, starting with your smaller tables first, so you can see the impact it’s having on your transaction log sizes.
- ONLINE = ON isn’t fully online: even online index rebuilds need a momentary schema mod lock in order to finish their work. Thankfully, since SQL Server 2014, we’ve had the WAIT_AT_LOW_PRIORITY option to help mitigate that problem.
- You have to do some prep work: it’s easy to run DBCC SHRINKDATABASE, but ALTER INDEX…REBUILD will take much more work if you want to leverage all the cool advantages I discussed above.
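The WAIT_AT_LOW_PRIORITY mitigation mentioned above is spelled out as part of the ONLINE option (available since SQL Server 2014; the table name here is hypothetical):

```sql
-- Wait up to 5 minutes at low priority for the final schema modify
-- lock; if it still can't be acquired, abort the rebuild itself
-- (ABORT_AFTER_WAIT = SELF) rather than blocking the queries queued
-- behind it.
ALTER INDEX ALL ON dbo.Orders
    REBUILD WITH (ONLINE = ON (
        WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)
    ));
```

ABORT_AFTER_WAIT can instead be set to BLOCKERS to kill the blocking sessions, or NONE to fall back to a normal-priority wait.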