SQL Server swap file usage
Generally you should keep an eye on this on a production system to make sure it hasn't outgrown its RAM allocation. We were recently having some performance issues with one of our SQL Servers that we weren't able to completely narrow down, and we actually used one of our Microsoft support tickets to have them help troubleshoot. If you're looking for high performance, you want to avoid paging completely, so the page file size becomes less significant.
Simply put, the page file is a backing store for memory pages, managed by the OS, while SQL Server has its own internal memory management system. The MS article referenced does not qualify that its advice is for an OS running out-of-the-box services such as file sharing.
In this case, the usual rule-of-thumb recommendation for pagefile sizing does not apply. That very general recommendation is made under the assumption that all memory is being used by "normal" processes, which can generally have their least-used pages moved to disk without causing massive performance issues for the process the memory belongs to. On servers running SQL Server, generally with very large amounts of RAM, the majority of the physical RAM is committed to the SQL Server process and, if configured correctly, is locked in physical memory, preventing it from being paged out to the pagefile.
Since SQL Server already manages its own memory space, that memory should not be considered "pageable" and should not be included in a calculation of pagefile size.
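To make the point concrete, here is a minimal sketch with entirely hypothetical numbers (a 64 GB server, 56 GB of locked SQL Server memory, and a generic 1.5x rule-of-thumb multiplier chosen purely for illustration) of why locked memory should be excluded from pagefile sizing:

```python
# Hypothetical sizing sketch: memory locked by SQL Server should not be
# counted when choosing a pagefile size, because it can never be paged out.
total_ram_gb = 64            # assumed total server RAM
sql_locked_gb = 56           # assumed 'max server memory', locked via Lock Pages in Memory
pageable_gb = total_ram_gb - sql_locked_gb   # memory the OS can actually page

rule_of_thumb = 1.5          # a generic RAM multiplier (assumption, not a recommendation)
naive_pagefile_gb = rule_of_thumb * total_ram_gb     # sized against all RAM
adjusted_pagefile_gb = rule_of_thumb * pageable_gb   # sized against pageable RAM only

print(naive_pagefile_gb, adjusted_pagefile_gb)  # 96.0 12.0
```

Sizing against only the pageable portion yields a pagefile an order of magnitude smaller, which is consistent with the modest fixed sizes discussed below.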
When a process commits memory, Windows does not allocate a matching page in the pagefile at that time. Only when a committed memory page is actually written to does the virtual memory system allocate a physical memory page, possibly bumping another memory page from physical RAM to the pagefile.
The Windows OS keeps track of memory pressure between application processes and its own disk cache mechanism, and decides when it should bump non-locked memory pages from physical RAM to the pagefile.
My understanding is that having a pagefile far larger than the actual non-locked memory space can result in Windows overzealously paging application memory out to the pagefile, leaving those applications suffering the consequences of page misses and slow performance.
As long as the server is not running other memory-hungry processes, a pagefile size of 4 GB should be plenty. Increasing the pagefile size only helps in this situation insofar as Windows is able to page out memory from non-SQL Server processes. Allowing SQL Server memory to grow into the pagefile might get rid of the error messages, but it is counterproductive, given the earlier point about why the data cache exists in the first place.
Is there any guideline on how to interpret pagefile-related performance counters on a SQL Server? Thank you. Monday, February 4. I worked on this with the application developers. Within a few seconds all the jobs waiting in line at the application server were executed; the developers watched them being removed from the queue one by one, very quickly.
The server hosting SQL Server was not, at the time, configured to allow Lock Pages in Memory for the service account. My thinking was that the data needed to handle those requests had been placed in the page file, and when the extra RAM was allocated, that data was moved from the page file back into RAM; maybe that could explain why all the jobs finished so fast immediately after the extra RAM was allocated.
Firstly, the most important thing to know is that it is critical to configure memory usage by SQL Server and other applications on the system so that it never starts paging to disk. RAM access is measured in nanoseconds, while disk access is measured in milliseconds, so whatever gets paged out takes hundreds, if not thousands, of times longer to access. Even if you have SSDs, it will still hurt if the system starts swapping to disk.
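A rough back-of-the-envelope comparison makes the gap concrete. The latency figures below are order-of-magnitude assumptions, not measurements:

```python
# Order-of-magnitude latency assumptions: RAM ~100 ns, SSD ~0.1 ms, HDD ~10 ms.
ram_ns = 100
ssd_ns = 100_000
hdd_ns = 10_000_000

ssd_penalty = ssd_ns // ram_ns   # how many times slower than RAM
hdd_penalty = hdd_ns // ram_ns

print(f"SSD: ~{ssd_penalty}x slower than RAM, HDD: ~{hdd_penalty}x")
```

Even under these generous SSD assumptions, a page fault serviced from disk costs roughly a thousand RAM accesses, which is why avoiding paging matters more than tuning the pagefile.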
When the max memory setting is changed, SQL Server discards everything in the buffer pool and starts over. So if it was swapping the buffer to disk, this may indeed have caused the disk swapping to stop and performance to be somewhat restored. However, if it was swapping to disk with a 40 GB setting, it will be even more likely to swap in the future now that it is allowed to use more memory. The change you made could also have just been a coincidence: do you know for sure that there were no long-running queries blocking those jobs?
There are dozens of possible causes, and unless you have data from monitoring, there is no way to be sure at this point. Memory management to avoid swapping to disk is one of the dangers of running multiple instances of SQL Server on one system: whatever the total RAM, the max memory settings of all instances combined should leave several GB free for the OS, a figure you can refine after monitoring available memory for a while.
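The multi-instance arithmetic above can be sketched as a quick sanity check; the instance names, RAM size, and OS headroom below are all hypothetical:

```python
# Sanity-check sketch: combined 'max server memory' across instances must
# leave headroom for the OS, or the box will be pushed into paging.
total_ram_gb = 128
os_headroom_gb = 8                                   # assumed reserve for Windows itself
instance_max_memory_gb = {"INST1": 60, "INST2": 50}  # hypothetical per-instance settings

combined_gb = sum(instance_max_memory_gb.values())
fits = combined_gb <= total_ram_gb - os_headroom_gb

print(f"combined: {combined_gb} GB, fits: {fits}")
```

The headroom figure is a starting assumption to be adjusted after watching the server's available-memory counter over time, as the answer suggests.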
If SQL Server cannot lock pages in memory, it has no control over which of its memory will be swapped to disk. A process has an address space of virtual memory, and when the process reads that memory, the OS retrieves it from its physical location, whether RAM or disk.
The OS keeps track of when memory was last accessed and tries to page out the least active memory to reduce the impact. The process does not know the physical location of each offset of its virtual memory; tracking that would create a lot of pointless overhead. SQL Server does, however, have a way of knowing how much of its memory has been swapped to disk: as described in "How to reduce paging of buffer pool memory in SQL Server", it logs a message when paging reaches a certain threshold.
Many other articles on the Internet incorrectly advise using this counter. A high value for this counter does not necessarily imply that your performance bottleneck is a shortage of RAM, because the operating system uses the paging system for purposes other than swapping pages out under memory over-commitment. Data being written to disk first has to be in memory, and is then "paged" to disk.
The terminology around "paging" is misunderstood by many system administrators: it is often defined as "swapping a process's working set to disk," but "paging" is a much broader term that covers every read and write to disk through the paging system. After reading your question multiple times, I have the impression that you might be mixing up one memory management issue with another. The solution would be to turn on the Lock Pages in Memory option. (The DMV sys.dm_os_buffer_descriptors returns information about all the data pages that are currently in the SQL Server buffer pool.)
Now, the values are returned as text: a number followed by a unit. To use the values, I need to convert them back to numbers; this is in contrast to other system views, which return results as a number of pages. Starting from the inside, the units are mapped to divisors that convert the values to GB.
Then the converted values are used to compute the day-over-day difference, which is the growth factor I need. The space available in the file divided by the daily growth rate tells me how many days until the database next tries to autogrow.
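The parsing-and-division logic described above can be sketched outside T-SQL; the sample sizes below are hypothetical:

```python
# Sketch: convert text values like '512 MB' back into numbers (in GB),
# take the day-over-day growth, and divide the remaining room in the
# data file by that daily growth to estimate days until the next autogrow.
UNIT_DIVISORS = {"KB": 1024 ** 2, "MB": 1024, "GB": 1}  # divisors that yield GB

def to_gb(text):
    """Parse a 'number unit' string, e.g. '102 GB', into a float in GB."""
    value, unit = text.split()
    return float(value) / UNIT_DIVISORS[unit]

yesterday_used_gb = to_gb("100 GB")   # hypothetical daily measurements
today_used_gb = to_gb("102 GB")
file_size_gb = to_gb("110 GB")

daily_growth_gb = today_used_gb - yesterday_used_gb   # day-over-day difference
room_left_gb = file_size_gb - today_used_gb           # space available in the file
days_until_autogrow = room_left_gb / daily_growth_gb

print(days_until_autogrow)  # 4.0
```

A single day-over-day sample is a crude growth estimate; in practice you would average over a longer window before trusting the projection.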
The free space on the file system divided by the autogrow amount tells me how many more times the database can grow automatically, since the database can continue growing until the last expansion is completely filled. Now I need to make sure that that SAN order is moving along to meet that deadline! This article showed some DMVs that can help, along with a little math.
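Continuing the sketch with hypothetical figures, the remaining number of automatic expansions is just an integer division:

```python
# Sketch: free space on the volume divided by the fixed autogrow increment
# gives how many more automatic expansions can succeed (hypothetical values).
volume_free_gb = 50
autogrow_increment_gb = 4

remaining_autogrows = volume_free_gb // autogrow_increment_gb
print(remaining_autogrows)  # 12
```

This assumes a fixed-size autogrow increment; with percentage-based growth each expansion is larger than the last, so the count would have to be computed iteratively.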