Database file shrink issue.

  • Hi experts,

    I have a 3+ TB database on a SQL Server 2019 instance that has more than 50% free space. I know shrinking a database or data file is not a good practice, so please don't go there. I tried shrinking in 100 MB increments in a loop, but it's taking far too long, so that's not feasible.

    Any suggestion BESIDES creating a new .ndf file, emptying the .mdf file into the new file, and then removing it, etc.?

    Thanks in advance.
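
    For anyone checking the numbers, a minimal sketch of a per-file free-space query (run in the database in question; sizes in sys.database_files are in 8 KB pages, so dividing by 128 gives MB):

        -- Report size, used, and free space for each data file
        -- in the current database.
        SELECT name,
               size / 128                                     AS size_mb,
               FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
               (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
        FROM sys.database_files
        WHERE type_desc = 'ROWS';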

  • Are you using DBCC SHRINKFILE?

    Shrinking just data files?

    Is this one .ndf file that's grown, or was it sized too large?

    Is there a reason you can't let this run from a machine and let it take time to finish?

  • Let it run - but don't do 100 MB chunks. Do 50 GB or 100 GB each time (see the sketch below).

    But it will take a long time to run, for sure.
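
    A minimal sketch of that kind of chunked loop, assuming the .mdf is file_id 1 and using a hypothetical target size - both are placeholders to adjust:

        -- Shrink file_id 1 toward @TargetMB in @ChunkMB steps (sizes in MB).
        -- file_id 1 and the sizes below are assumptions; check sys.database_files.
        DECLARE @TargetMB  int = 2000000;  -- hypothetical final size, ~2 TB
        DECLARE @ChunkMB   int = 51200;    -- 50 GB per pass
        DECLARE @CurrentMB int = (SELECT size / 128 FROM sys.database_files
                                  WHERE file_id = 1);

        WHILE @CurrentMB > @TargetMB
        BEGIN
            SET @CurrentMB = CASE WHEN @CurrentMB - @ChunkMB < @TargetMB
                                  THEN @TargetMB
                                  ELSE @CurrentMB - @ChunkMB END;
            DBCC SHRINKFILE (1, @CurrentMB);  -- target size argument is in MB
        END;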

  • Hi Steve, just the .mdf file, which has more than 50% free space (1+ TB). The database is in the simple recovery model.

  • Hi frederico_fonseca, shrinking just 100 MB takes more than 5 minutes. If I shrink 50 GB at a time, you can imagine how long that takes, and it puts the database into a locking mode.

  • Tac11,

    On the occasions when I've had to do this, it has been shrinking databases in the 100s of GB range, not the TB+ range, and even then it takes a LONG time unless the problem can be solved with a DBCC SHRINKFILE (1, TRUNCATEONLY). Doing a full index rebuild first seems to help and can sometimes let you grab more with the TRUNCATEONLY; see the sketch below.

    Once those are done, I'd recommend trying a 50 GB+ shrink against a restored backup copy to get a timing. The relationship between the size of the shrink chunk and the time it takes is not linear (much of the time seems to depend on the size of the data file, independent of the shrink chunk).

    Good luck. I don't think there is any way to make this a fast process.
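
    For what it's worth, a minimal sketch of that sequence (the table name is a placeholder; TRUNCATEONLY only releases free space from the end of the file, so it moves no pages and runs fast):

        -- Rebuild indexes first; per the suggestion above, this can
        -- consolidate pages and let TRUNCATEONLY reclaim more space.
        ALTER INDEX ALL ON dbo.YourBigTable REBUILD;  -- repeat for each large table

        -- Trim only the unused space at the tail of the file.
        DBCC SHRINKFILE (1, TRUNCATEONLY);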

     

  • Re-read the OP - I see you already tried a loop to incrementally reduce it... not sure how to delete this post.

  • One thing you might consider, if this is really needed, is to create a new database of the right size and move the data across. You'll need to do this in a way that lets you catch updates, similar to log shipping: move most of the data, then move the updates, then a smaller set of updates, and so on, until you get close enough to turn the old database off and move to the new one. A rough sketch of the initial bulk copy is below.

    Renaming the databases might work in this case as a way to move clients easily and quickly, but test, test, test, and have lots of data quality checks.
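
    A rough sketch of that initial batched copy for one table (the database, table, and column names are placeholders, and the catch-up passes for later updates would need change tracking on top of this):

        -- Bulk-load one table into the right-sized database in batches,
        -- keyed on an increasing clustered key to keep transactions small.
        -- (If Id is an identity column, SET IDENTITY_INSERT is also needed.)
        DECLARE @LastId bigint = 0, @Rows int = 1;

        WHILE @Rows > 0
        BEGIN
            INSERT INTO NewDB.dbo.BigTable (Id, Col1, Col2)
            SELECT TOP (100000) Id, Col1, Col2
            FROM   OldDB.dbo.BigTable
            WHERE  Id > @LastId
            ORDER  BY Id;

            SET @Rows = @@ROWCOUNT;
            IF @Rows > 0
                SELECT @LastId = MAX(Id) FROM NewDB.dbo.BigTable;
        END;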

     
