Connecting live from Power BI Desktop to SSAS generates this error: The Server you’re trying to connect to doesn’t have any models or you don’t have permissions to access them.

I recently ran into an error while connecting live from Power BI Desktop to an SSAS server. Everything was on-premises; there was no cloud component involved. I had full admin rights to the SSAS server and could see all databases and models from SSMS and other tools, but I was getting this error from Power BI Desktop.

The Server you’re trying to connect to doesn’t have any models or you don’t have permissions to access them.

If I explicitly specified the name of the database I wanted to connect to, it would connect and show me the database contents, but if I did not specify the database name, I would get this error. The question is why? It obviously wasn't a permissions problem: I could connect to that database if I specified its name, I was an admin on the SSAS server, and I could see that database and other databases from other front-end tools such as Excel and SSMS. You can get this exact same error because of permission issues, and in those cases it is a valid error. Here I am only exploring the case where all the correct permissions are in place and you still get this error.

The problem was …

The default database on that server had been created with a script as an empty shell database for someone to use later. It had stayed empty and had never been deployed to. This empty database was causing the problem. Here is an example of a simple JSON (TMSL) script that creates an empty SSAS database:

{
  "create": {
    "database": {
      "name": "test 3",
      "compatibilityLevel": 1200
    }
  }
}

Note: This exact same error can happen with SSAS Multidimensional if the first database created on the server has a cube that is in an unprocessed state.

Default Database on SSAS

The first database created on an SSAS server becomes the default database. There is no server property that you can set to change this later; at least I could not find one through SSMS. I suspect it is stored in one of the metadata files, but I did not find it readily. On the other hand, as I will explain later, there is an SSAS property that can disable automatically setting the first database as the default database, but that does not let you choose the default database explicitly. Changing this setting has its own implications that I will get into later in this post.

How I found out the cause of the error

I started a SQL Server Profiler trace on the SSAS server (you can use Extended Events as well) and attempted the same live connection from Power BI Desktop with different databases on the server. I noticed something very interesting: the query that Power BI Desktop submits to SSAS to discover the list of databases changed depending on the following:

  1. The SSAS server is completely empty. No databases exist on the server.
  2. The state of the default database (empty, or unprocessed for Multidimensional).
  3. The AutoDetectDefaultInitialCatalog SSAS server setting.

Let’s explore these.

1) The server is completely empty.

In this case, there are no databases on the server. Power BI Desktop connected live to this server successfully, and I got an empty list of databases, as expected.

Power BI Desktop can live connect to an empty server

In this case, SQL Server Profiler captured the following query from Power BI Desktop:

Select [CATALOG_NAME], [DESCRIPTION], [DATE_MODIFIED], [COMPATIBILITY_LEVEL]
from $System.DBSCHEMA_CATALOGS 
Query Profiler With Empty Server

Notice how this query ran against the server and that the DatabaseName column is empty in the trace shown above. This is possible because it is a server-level query. It uses one of the SSAS DMVs (Dynamic Management Views) to get metadata and returns a list of all databases on the server. You can run this query in SSMS in a DMX query window. Here is an example of what this query would have returned if there were databases on this server:

DMX Query 1 Results

To summarize, Power BI Desktop worked successfully in this case. There were no databases to show and no errors were generated. The reason I included this case is for you to see how the query changes in the next case.

2) The default database is empty.

Continuing with the same empty server, I scripted an empty database using the script shown previously. The first database created on a server becomes the default database on the server. This is what I could see at this point from SSMS on this server:

Empty Scripted Database From SSMS

I went to Power BI Desktop and attempted to live connect without specifying the database name. I got this error:

Main Error

The question is: What did adding an empty scripted database do that changed Power BI Desktop's behavior?

Here is the DMX query that Power BI Desktop submitted this time, which is different from the one in case 1:

Select [CUBE_NAME] from $System.MDSCHEMA_CUBES where [CUBE_SOURCE] = 1
Query Profiler With One Empty Scripted

Notice how in the Profiler trace, the Database is set to “test 3”.

If I run this query in SSMS, it returns no results because that database is empty. There are no models defined in it yet. If there were models in the database, I would have seen something like the following:

DMX Query 2 Results

Note that this DMV has existed since SSAS Multidimensional, hence the name Cube, but it works against both Tabular and Multidimensional SSAS databases, as long as they have a model/processed cube.

3) An empty default database and a non-empty database

To test this further, I added another database from a backup file of AdventureWorks. This is what SSMS was showing:

Added AdventureWorksFromBackup_SSMS

I went back to Power BI Desktop and attempted to live connect as before, without specifying a database name. Power BI Desktop still generated the same error, even though AdventureWorks was fully browsable from SSMS.

This is the query that Power BI Desktop submitted to SSAS, still just one query and different from the one in case 1:

Select [CUBE_NAME] from $System.MDSCHEMA_CUBES where [CUBE_SOURCE] = 1

Query Profiler With One Empty Scripted database and AdventureWorks added

Notice how this query still ran against the default database, "test 3", the first database created on the server.

This is the exact case I was running into in the beginning. I had several databases with models in them that were fully browsable from SSMS and Excel. There was one empty database that happened to be the default database, and it was causing the error because the query ran against it and returned no results!

When the above query returns no values, Power BI Desktop shows the error message. This seems like a bug that will hopefully get fixed at some point.

Obviously, one way to get around this error is to either specify the database name you are trying to connect to or remove the empty database. In the latter case, another database becomes the default database. You can also delete the empty database and script it again; it will look exactly the same as before, except that it won't be the default database anymore. I tried this to make sure the whole problem really was a default-database issue and not just the existence of an empty database.
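Based on the TMSL create script shown earlier, a matching delete script (assuming the same "test 3" database name) would look something like the following; after deleting the database, rerunning the create script gives you the same empty database, just no longer as the default:

{
  "delete": {
    "object": {
      "database": "test 3"
    }
  }
}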

4) A non-empty default database and an empty database

I deleted “test 3” and created it again with the same script as before. This time it was not the default database anymore. SSMS looked exactly as before:

Added AdventureWorksFromBackup_SSMS

I went to Power BI Desktop and tried to live connect. This time it worked! This proved that the empty database was not causing the issue simply because it was empty; the problem was that it was empty and the default database at the same time. This is what the traces showed this time:

One Non Empty Database As Default And One Empty Database Trace Limited

Power BI Desktop submitted three DMX queries, two of which are identical and ran against AdventureWorks (now the default database):

Select [CUBE_NAME] from $System.MDSCHEMA_CUBES where [CUBE_SOURCE] = 1

This was followed by this query:

Select [CATALOG_NAME], [DESCRIPTION], [DATE_MODIFIED], [COMPATIBILITY_LEVEL]
from $System.DBSCHEMA_CATALOGS 

This is exactly the same query that ran in case 1 (the empty server). In case 2, this query did not run because the first query had returned no values and Power BI Desktop stopped and displayed the error message.

5) AutoDetectDefaultInitialCatalog

How does Power BI Desktop know which database is the default database? As mentioned before, the DMX queries were all submitted in the context of the default database, but how? It is not that Power BI Desktop knows which database is the default; it creates a connection to SSAS without specifying a database name (since I did not explicitly specify one) and lands on the default database.

This behavior of creating a connection and landing on the default database is similar to what happens in SSMS when you connect to an SSAS server: you always get connected to the default database, and you can change it by picking a different database from a drop-down box. This behavior is controlled by an SSAS setting called AutoDetectDefaultInitialCatalog.

AutoDetectDefaultInitialCatalog is one of the Advanced server properties and it is set to True by default.

SSAS Server Properties AutoSetDefaultInitialCatalog

Warning!

Make sure you understand the implications of changing this setting, especially on a production server, before you change it!

When this setting is set to True and you connect to the server using SSMS, you get connected to the default database right away. You can see the default database in the drop-down to the left of the Execute button.

New Query With AutoDetectDefaultDatabase

On the other hand, when AutoDetectDefaultInitialCatalog is set to False and you connect to the server using SSMS, you get a query window that is not connected to any database. You can see this in the drop-down to the left of the Execute button.

SSAS Server Properties AutoSetDefaultInitialCatalog Set To False
New Query With AutoDetectDefaultDatabase Set To False

With this setting set to False, I repeated cases 2 and 3. This time everything worked, even with the empty default database.

AutoDetectDefaultInitialCatalog Set To False One Empty Database PBI Desktop

How did this work? Power BI Desktop only submitted the following query:

Select [CATALOG_NAME], [DESCRIPTION], [DATE_MODIFIED], [COMPATIBILITY_LEVEL]
from $System.DBSCHEMA_CATALOGS 
Query Profiler With One Empty Scripted database and AdventureWorks added and AutoDetectDefaultCatalogSetToFalse

Note how this query ran against the server and not any particular database. In this case, Power BI Desktop was able to show the list of all databases, even though the first database created ("test 3") was still empty.

Please remember that changing this setting can have other implications, so I don't recommend changing it permanently; I only did this for testing. The problem you may run into if you set it to False is that it can break any code that depends on the connection landing on the default database. I have seen this in the past at one client, where custom .NET code was getting metadata from an SSAS database without realizing it was only getting results because that database happened to be the default database. Of course, this was not best practice; however, the point is that you should change a production server's settings very carefully and be ready to work through unforeseen consequences.

Summary

As of September 2019, Power BI Desktop has a small bug where you get an error if you try to live connect to an SSAS server and explore the databases on it without specifying a particular database.

The Server you’re trying to connect to doesn’t have any models or you don’t have permissions to access them.

The error happens when the default database on SSAS Tabular is empty, or when the default database on SSAS Multidimensional is unprocessed. You can get around this by removing and re-adding the empty/unprocessed database so that it is no longer the default database (assuming you have other good databases on that server). The other workaround is to set AutoDetectDefaultInitialCatalog to False, but this can have other unforeseen implications for custom code connecting to the server, as explained previously.

Note: You can get this error under other, valid circumstances as well. For example, if you truly do not have access to a server, the error is valid.

Why attend PASS Summit?

I have attended PASS Summit for the past three years in a row and I am planning to attend this year as well. I often get asked whether attending PASS Summit is worth it. The schedule for this year has been released, the sessions cover a wide range of topics at different levels (see the schedule here), and I have always learned from them in the past. But some may think that the material presented will eventually make it to the internet or to local conferences in some shape or form, so they question the value of attending PASS Summit.

First, the main value of attending PASS Summit for me has been the networking opportunities it brings and the connections made with members of the SQL and data community. Over the years, I have been fortunate to get to know many of the speakers, MVPs, members of Microsoft product teams, and many data professionals from all over the world. If I am really stuck solving a technical problem or just need overall architecture advice, I know who I can reach out to for guidance in each area. This by itself is extremely valuable to me.

Second, there is something to be said for being able to focus on learning for several days in a row without being interrupted by work or life.

Finally, here are some of the things I like about attending PASS Summit:

  • There are so many in-depth technical sessions to choose from. You can learn everything from the fundamentals to in-depth topics on the newest advances in SQL Server, Business Intelligence, Azure cloud services, Power BI, and more. One concern I hear from some people who have not attended in the last couple of years is that they are worried they won't be able to see the sessions they are interested in because rooms might be full. This has not happened to me in the last three years I have attended in a row; I have always been able to attend any session I was interested in. PASS has done a great job of managing room sizes, and they sometimes move rooms during the conference to accommodate more interest based on how many people add sessions to their personal schedules.
  • You will have direct access to speakers and Microsoft MVPs. There is a great community of speakers and field specialists who love to share their experiences and provide guidance.
  • Talk to Microsoft product teams. These are the engineers behind the different products. You can take your technical questions or even a mockup of a problem to them and get answers on the spot. I was able to mock up a DAX problem I was working on for a client using the AdventureWorks sample database and get answers from the Power BI/SSAS team on site. Make sure you leave room between sessions to visit the product teams.
  • You can network and learn from people in your field in the Community Zone, at breakfast and lunch tables, and at after-conference events. Here are some of the great networking events from previous Summits:

1) Birds of a Feather lunch tables: you can join speakers, MVPs, community members, and SQL/data professionals in your area of interest. Often you can hear from other data professionals about some of the real-life challenges they face as they implement new technologies and the solutions they have come up with.

2) Women in Technology Lunch: provides a wealth of information about the challenges that women in technology face and the practices that have made a difference in closing the gender gap in the workplace. This is a very educational presentation, not just for women but for anyone interested in learning about increasing diversity in the workplace and how to support women at work.

3) Community events such as Game Night, usually held at the conference center at the end of one of the conference days. If you are into board games, this is a fun event. There has also been a SQL Run group, a photography group, a hiking group, and many more. These have previously been posted on PASS's website as the event gets closer.

4) Vendor dinners: many vendors host dinner events. These have been listed on PASS's website in previous years. You must register for these to get a ticket; they have been free to attend in the past. Of course, by signing up you are agreeing to share your email with the vendor so they can follow up about their products. Register early, as some do fill up.

5) Welcome reception and vendor reception: held at the conference center, where you can network and have some light food.

  • Volunteer and make a ton of connections while having fun. In your PASS profile, you can indicate that you are interested in volunteering. There are plenty of opportunities available, from holding a welcome sign to hosting one of the Birds of a Feather tables. A fun perk of volunteering is that you get to attend a volunteer-only reception.
  • Speaker Idol: community speakers who have not spoken at PASS Summit before compete for the coveted prize of a guaranteed session at the next year's PASS Summit. It is a very fun event to watch these speakers compete by presenting short sessions. The judges can be "judgy" at times (think Simon from American Idol), but you will learn a lot about making great presentations even if you are not a public speaker and only intend to present at work. See this post about my experience competing in 2016.

I hope that you can attend PASS Summit. You can register from here: https://www.pass.org/summit/2019/RegisterNow.asp

Finally, follow me on Twitter at @shbWatson for more news about PASS Summit 2019.


Analysis Services Telemetry: What information does SSAS submit, and how has it changed in SSAS 2019?

Starting with SSAS 2016, SSAS sends some usage information back to Microsoft by default. This behavior is similar to how Telemetry works in SQL Server and SSMS and can be turned off.

The first time I noticed SSAS Telemetry was while I was checking the SSAS Administrators group membership on a server and saw a member called "NT Service\SSASTELEMETRY". I was curious to find out exactly what information was potentially being submitted, so I started a trace and let it run for 24 hours. I was surprised by the number of metadata queries that ran.

Of course, the trace only shows the queries that ran; it does not show whether the results (or a portion of them) were actually submitted, or whether the results were anonymized before they were submitted. Based on Christian Wade's response to my post on Twitter, any personally identifiable customer data, such as names of databases/measures/etc., would be hashed/anonymized in SSAS 2016/2017 before being submitted. The good news is that in SSAS 2019 (CTP 2.2 at the time of writing this post), Telemetry has been significantly dialed back.

What metadata is being collected?

The trace I ran recorded the following queries. As said previously, collected does not mean the result was necessarily sent over to Microsoft, but one has to assume that submitting some amount of metadata is the purpose of Telemetry in the first place.

Every Hour:

In all versions of SSAS (starting with 2016), Telemetry runs the following DMX query every hour:

select 1 as [ConnectionCount] from [$SYSTEM].[DISCOVER_CONNECTIONS]

This query uses one of SSAS's Dynamic Management Views (DMVs) to return the number of user connections to the server. This does not seem too bad, but think about this: if this were the only piece of information being sent by Telemetry, then hypothetically, at any given point in time, Microsoft could tell how heavily an instance is being used at any customer location that has Telemetry turned on. Aggregate this data (because that's what BI people do!) and Microsoft would know how many users are connected to each instance across a region and across the world, every hour and over time. Wouldn't it be interesting if there were already a cube/model built off of this usage data?

Once a day: All versions

Then come the more interesting queries. Once a day, the following query runs in all versions:

SELECT [Compatibility_Level], [Type], 1 as [CountDBCompatibilityType]
FROM [$system].[DBSCHEMA_CATALOGS]
Order by [Compatibility_Level]

The above query returns one row for each SSAS database on the server with information about its compatibility level. The following screenshot shows the output of this query on a server with two 1400 level databases (SSAS 2019) and one 1200 (SSAS 2016) database.

Once a day: SSAS 2016 and SSAS 2017 only

SSAS 2016 and SSAS 2017 submit more queries once a day. First they get a list of all databases on the server.

--Returns a list of databases on the server.
SELECT [CATALOG_NAME] as [ForEachDBID], 1 as [CountDB]
FROM $System.DBSchema_Catalogs

Then for each database, the following queries run:

--Returns a list of cubes/model names in the database. 
SELECT [CATALOG_NAME], 1 AS [DBCubeCount]
FROM [$System].[MDSCHEMA_CUBES] WHERE [CUBE_SOURCE] = 1
--Returns a list of dimensions/tables in a database.
SELECT [CATALOG_NAME], [CUBE_NAME], 1 as [DimensionCount]
from [$SYSTEM].[MDSCHEMA_DIMENSIONS] ORDER BY [CUBE_NAME]
--Returns a list of measure groups/tables in a database.
SELECT [CATALOG_NAME], [CUBE_NAME], 1 as [MeasureGroupCount]
FROM [$SYSTEM].[MDSCHEMA_MEASUREGROUPS] ORDER BY [CUBE_NAME]
--Returns a list of measures (without name) along with their measure group (table) they belong to.
SELECT [CATALOG_NAME], [CUBE_NAME], [MEASUREGROUP_NAME], [MEASURE_DISPLAY_FOLDER],
1 as [MDDisplayFoldersMeasureCount]
FROM [$SYSTEM].[MDSCHEMA_MEASURES]

This is a lot of metadata. The last couple of queries return more useful information for Multidimensional models than they do for Tabular models.

How to turn off SSAS Telemetry

In order to turn off Telemetry for SSAS, you have to change a registry key. This stops the Telemetry. I also like to remove the account that Telemetry creates from the SSAS Administrators group.

1. Set CPE.CustomerFeedback key to zero

Depending on the SSAS version, the key path is in the following format:

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\YOUR_SSAS_SERVER_INSTANCE\CPE

For example, for a default instance of SSAS 2017, the path is:

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS14.MSSQLSERVER\CPE

For a default instance of SSAS 2019, the path is:

HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS15.MSSQLSERVER\CPE

Once you find the path, set the CustomerFeedback value to 0 instead of 1.
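If you prefer to script this change instead of using the Registry Editor, a small C# sketch like the following can set the value. The instance path below is an assumption for a default SSAS 2017 instance; adjust it to your own version/instance name, and run the code elevated:

using Microsoft.Win32;

class DisableSsasTelemetry
{
    static void Main()
    {
        // Assumed path for a default SSAS 2017 instance; adjust for your version/instance.
        const string cpePath = @"Software\Microsoft\Microsoft SQL Server\MSAS14.MSSQLSERVER\CPE";

        // Open the 64-bit view of HKLM so a 32-bit build is not redirected to WOW6432Node.
        using (var hklm = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64))
        using (var cpe = hklm.OpenSubKey(cpePath, writable: true))
        {
            if (cpe != null)
                cpe.SetValue("CustomerFeedback", 0, RegistryValueKind.DWord);
        }
    }
}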

2. Remove the SSAS Telemetry account from SSAS administrators group

Making the registry key change stops the Telemetry. This step is only for cleanup purposes.

Summary

In SSAS 2016/2017, Telemetry may submit a large amount of metadata, which should be anonymized. In SSAS 2019, the amount of metadata collected has been significantly reduced and seems to only contain the number of connections and the compatibility level of the databases. Either way, Telemetry can easily be turned off by editing a registry setting.

New QueryMemoryLimit setting in SSAS 2019

SSAS 2019 has a very useful new memory setting called QueryMemoryLimit that can be found under the Advanced Properties. While the online documentation says this setting is only available in Azure AS, I can see it in the SSAS 2019 CTP.

The purpose of this setting is to limit the amount of memory any single query can take. It is extremely useful when you want to cap per-query memory consumption across the board. Before this setting, it was possible for an extremely poorly written query to eat up all of a server's memory and bring all other queries to a halt. You can see an example of such a query and the SSAS memory settings in my previous post here.

Testing QueryMemoryLimit

To test how this setting works, I ran the same poorly written query I had used in my previous post against a server that has 16 GB of RAM and collected some stats from the SSAS Perfmon counters in a CSV file.

The default value of QueryMemoryLimit in SSAS 2019 is 50. Since values under 100 are interpreted as a percentage of server memory, this means QueryMemoryLimit is 8 GB on this server.

This is what happened when I ran the query:

In summary, the query's memory consumption reached 8 GB and the query was killed with the following error message:

Executing the query …

Resource Governing: The query exceeded the maximum memory allowed for queries executed in the current workload group (Requested 8316096KB, Limit 8316094KB)

This is a significant improvement over previous versions of SSAS. Without this setting, you would have seen the following graph instead. Here the query pushed SSAS's total memory consumption past Memory Limit Hard KB and caused all connections (all queries, not just itself) to close with an out-of-memory error.

SSAS 2017: For a full explanation of these metrics see this post.

More Details

Unlike the other SSAS memory limits, QueryMemoryLimit is not exposed directly as a Perfmon counter; perhaps a counter will be added in the future. For now, its value can be calculated from the total amount of memory on the server, read from the error message of a query that exceeds the limit, or, if you set it to a value over 100, taken as an exact number of bytes from SSMS.

Looking at the previous example, since this query ran in an isolated environment, I can use the data reported in Perfmon to estimate the query's memory consumption and compare it to the limit set in the properties, to see whether the query got killed at, or close to, the limit.

Here is how the memory counters in Perfmon looked before I ran the query. Note that Memory Usage KB is 898,884 KB. Since no other queries were running, you can assume that all of this is SSAS's base memory consumption (VertiPaq data structures, my session, …).

Here is a snapshot of the same counters a few seconds before the query gets an error. Notice how Memory Usage KB is almost at 8GB. A few seconds later the query is killed.

Finally the query passes 8GB and is killed by SSAS and memory usage is back down.

Using the data recorded by Perfmon, I can estimate the query's memory consumption as follows. Keep in mind that these are estimates, as the base SSAS memory consumption can change slightly over time even without any queries.

A = Estimated base SSAS memory usage (Memory Usage KB before the query) = 898,884 KB

B = Estimated Memory Usage KB when the query errored out = 9,231,448 KB

EM = Estimated max query memory usage = B – A = 8,332,564 KB

L1 = QueryMemoryLimit as set in properties = 8 GB = 8,388,608 KB

L2 = Limit reported in the error message = 8,316,094 KB

AM = Actual max query memory usage (Requested) from the error message = 8,316,096 KB

The limit reported in the message is slightly lower (by 72,514 KB, roughly 71 MB) than the configured value of 8 GB.

Resource Governing: The query exceeded the maximum memory allowed for queries executed in the current workload group (Requested 8316096KB, Limit 8316094KB)

Summary

QueryMemoryLimit is an extremely valuable addition to SSAS 2019 and Azure AS. While its default value is 50, you should consider changing it to a fixed value if the server has a large amount of memory. On a server with over 100 GB of RAM, the default allows a single query to take as much as 50 GB of RAM before it is killed, which is way too much memory for most queries. If you do have a query taking that much memory, chances are it either has a mistake in it or can be rewritten.

Whatever QueryMemoryLimit is set to, it can help you identify the heavy queries that go past the limit, assuming users will complain about their queries getting killed. This is valuable from an admin point of view if you manage a server with a large number of users.

Understanding SSAS memory settings with real examples

If you have worked with SSAS and been responsible for it from the server side, you are most likely familiar with its memory settings. In an ideal world, your server has plenty of physical memory, so you don't have to worry about these settings. In the real world, though, you may have one or more of the following at the same time: not enough memory on your server; bad database (model) designs that do not follow best practices and therefore take more memory than they should (split those datetime columns!); heavy queries (I'll have one example to show you later); and a large concurrent user volume that you have no control over. In these cases, you can run into memory pressure on the server, and it is beneficial to understand how to interpret these memory settings.

In this post, I am going to run several experiments: change the default SSAS memory settings, run queries/process models, use Perfmon to collect statistics from the server, and then graph the results in Excel. For each experiment, I will show you one or two snapshots from Perfmon and the recorded stats in Excel. You can find the corresponding Excel file at the end of each experiment.

SSAS memory settings

You can see the SSAS memory settings by using SQL Server Management Studio and opening the server properties. The following are the default settings.

For all of the limits, a value under 100 is a percentage of the total physical memory (RAM) on the server. When HardMemoryLimit is set to 0, its value is halfway between TotalMemoryLimit and the total physical memory on the server. You can find a full description of these settings here.
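For example, assuming the default TotalMemoryLimit of 80, on a server with 16 GB of RAM the Total limit works out to roughly 12.8 GB, and a HardMemoryLimit of 0 then resolves to about 14.4 GB, the midpoint between 12.8 GB and 16 GB.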

The easy to understand memory settings

The Low, Total, and Hard limits have always been easy to grasp, especially if you are working with SSAS Multidimensional (yes, those servers still exist, even if only in spirit for some 🙂 ).

If you were to graph these limits along with memory consumption, you would see something like this. We will see a real example of this later.

If memory consumption is below the Low limit, everything is fine and memory is free to stay. Once consumption passes the Low limit, a cleaner thread wakes up and tries to clean up memory. At this point the price of memory is no longer zero: it starts at 2 at the Low limit and goes as high as 1000 when memory consumption reaches the Total limit. The higher the memory pressure, the more aggressive the cleaner gets. Once memory consumption reaches the Hard limit, all connections/sessions are closed and queries are cancelled with an out-of-memory error.

The harder to understand memory settings

SSAS Tabular introduced two more settings.

VertiPaqPagingPolicy

When it is set to 0, the VertiPaq data structures must fit in physical memory and cannot be paged. In this mode, VertiPaqMemoryLimit does not play any role.

When it is set to 1 (the default value), VertiPaq data structures can grow past physical memory, as far as the Windows page file allows. In this mode, VertiPaqMemoryLimit comes into play. All the other limits still apply as well.

VertiPaqMemoryLimit

When paging is allowed, this is the limit of VertiPaq memory that the cleaner thread considers when calculating the current price of memory. Any VertiPaq memory beyond this limit is not visible to the cleaner.

Perfmon

You can use Windows Performance Monitor (Perfmon) to get a lot of information about SSAS. Specifically, the limits can be seen with Perfmon. 

This is what they look like on my server with default settings. This server has 16GB of memory.

In addition to the limits, you have a few more useful counters:

Memory Usage KB

It shows the amount of memory visible to the cleaner thread for calculating the price of memory. Note that while this can closely follow the total amount of memory SSAS is using, it is not always the same or even close to it (I will show you an example of this later). This is just what is visible to the cleaner. Also, this is not how much the cleaner can clean if it decides to do so; it is what the cleaner uses to set the memory price.

Process(msmdsrv) – Private Bytes

It shows the amount of virtual memory the OS has given to SSAS. It is up to the OS to decide how much of this is backed by physical memory (RAM) vs. the paging file (disk). This counter matches what you see in Task Manager/Resource Monitor under the Commit (KB) column.

When you have plenty of memory on a server, this counter and Memory Usage KB follow each other so closely that if you graph them they appear to fall on the same line. Most of the following examples show this; there are two examples where it is not the case.

Note: In the following graphs I refer to Process(msmdsrv)\Private KBytes, which is a column I add to the Excel file that Perfmon writes to, so that I can graph it alongside the other counters, which are all in KB.

Test model

I wanted to work with something larger than AdventureWorks, so I downloaded some public data from the City of Chicago service request data sets on Kaggle. I imported the spreadsheets directly into a Tabular model on my SSAS server and intentionally did not do any cleaning/modeling, to make them take as much memory as possible. The resulting model has 5 tables that are not related to each other. Each row in each table represents a service request and has a unique service request number, which is exactly what I was looking for: high cardinality -> large model.

Here are the tables and their row counts:

Table              Row Count
Graffiti           1,027,859
Pot Holes          551,603
Street Lights      470,288
Garbage Carts      392,932
Rodent Baiting     308,632

The model takes about 280 MB of memory. 

Test Query 

Since I don't have a live query volume, I created an intentionally nonsensical, resource-intensive query that requires the formula engine to spool some data in memory. I use this query to push memory consumption past all the limits and see how the different counters behave.

Here is the link to the gist: code.

DO NOT RUN A QUERY LIKE THIS QUERY IN PRODUCTION!

This query tries to cross join the Service Request Number columns from all of the above tables (1 million by 500 thousand by 400 thousand by 300 thousand by 300 thousand), which amounts to 3.233571077757377e+28 rows, then find the distinct combinations of all the results (which we already know is the row count anyway), and finally do a row count on them (redundant, but intentional). I got the idea for this query from chapter 14 of the "Tabular Modeling in Microsoft Analysis Services" book.
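The actual query is in the gist linked above; the DAX sketch below is only my approximation of its shape based on the description in the previous paragraph, and the table and column names (for example 'Graffiti'[Service Request Number]) are assumptions. As the warning above says, do not run something like this against a production server.

EVALUATE
ROW (
    "Distinct Combinations",
    COUNTROWS (
        DISTINCT (
            CROSSJOIN (
                VALUES ( 'Graffiti'[Service Request Number] ),
                VALUES ( 'Pot Holes'[Service Request Number] ),
                VALUES ( 'Street Lights'[Service Request Number] ),
                VALUES ( 'Garbage Carts'[Service Request Number] ),
                VALUES ( 'Rodent Baiting'[Service Request Number] )
            )
        )
    )
)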

Experiments

Experiment 0: Test query with default limits

Memory settings are still set to their default values, as seen previously. The query ran for 14 minutes and 40 seconds before it ate up all the memory SSAS could provide and was killed with the following message:

There’s not enough memory to complete this operation. Please try again later when there may be more memory available.

The following picture shows the details as captured by Perfmon. The primary Y axis shows the memory limits/consumption in KB. The secondary Y axis shows the memory price. The lines for Memory Usage KB and Process(msmdsrv)\Private KBytes fall on top of each other and look like one line. You can see how memory consumption goes up (along with the memory price) until the query is killed by SSAS.

Of course, this is an extreme example that is convenient for me to use in place of 1000 users running small queries at the same time. You can see behavior like this on a server with a large user count and not enough memory.

The following snapshot is from Perfmon a few seconds before the query got the out of memory error. Notice price of memory is at 1000. Memory Usage KB is past the High limit but has not exceeded the Hard limit yet.

Also notice that the VertiPaq memory (VertiPaq Nonpaged KB + VertiPaq Paged KB) did not increase, since I only ran a query (vs. processing the model).

You can see the full details here:

Experiment 1: Test query with lower limits

Similar to experiment 0, but this time the query was killed sooner, in 22 seconds, as expected.

Memory hits the limits sooner than in the previous experiment.

As in the previous experiment, the VertiPaq data does not rise, and Memory Usage KB and Process(msmdsrv)\Private Bytes fall on the same line.

You can see the full details here:

Experiment 2: Processing a model with VertiPaqMemoryLimit = 0

At this point you are probably thinking that you can safely monitor Cleaner Current Price to identify memory pressure. I am going to show you that this does not always hold. This is where Memory Usage KB and Process(msmdsrv)\Private Bytes are going to deviate from each other.

The test database is only 280 MB. I have two copies of it plus AdventureWorks on the server, and the machine has 16 GB of memory. With these settings, a process operation is not going to push VertiPaq memory usage high enough to see anything in the graphs, so in order to push VertiPaq memory usage past the limits, I drop the memory limits. This is just for simulation; you should NOT try this on a production server.

In most cases the default settings are sufficient. In some cases, you should consider changing these settings, for example if you have other memory intensive processes running on the same machine (think SQL Server).

Here are the new reduced settings:

which amount to the following values as seen in Perfmon:

Take a look at the VertiPaq memory usage which is the sum of VertiPaq Nonpaged KB + VertiPaq Paged KB = 289,272 KB. 

I have set Memory Limit VertiPaq KB to zero, which SSAS raises to a minimum of 8,192 KB; that is much smaller than the current VertiPaq memory usage. Obviously this is not a limit on how much the VertiPaq data can grow. By setting the limit to 8,192 KB, I have essentially told the cleaner thread to only include 8,192 KB of VertiPaq data when calculating the price of memory and to ignore the rest.

Now if I issue a process full command on the test model, the amount of VertiPaq data usage will temporarily increase. This is a screenshot of Perfmon during the processing:

Note that Memory Usage KB increased by 207,164 KB while VertiPaq data usage increased to 739,676 KB which is 450,404 KB more than before. In other words, not all of VertiPaq data usage is being reported in Memory Usage KB. 

Memory Usage KB did not go over Memory Limit Low KB, and the price of memory stayed at zero the entire time. Of course, as the following picture shows, Process(msmdsrv)\Private Bytes (SSAS memory) went up and beyond the High limit. If this High limit reflected the actual total physical memory on your server, you would see heavy paging, performance would suffer, and the service/server might crash.


Processing finished in 59 seconds.

In this graph, you can see one separate line for each of Memory Usage KB (blue line) and Process(msmdsrv)\Private KBytes (purple line). The purple line goes well beyond the Total limit, but the cleaner is doing exactly what it was told, which is to ignore the VertiPaq data.

You can see the full details here:

Experiment 3: Processing a model with VertiPaqMemoryLimit = 2

This time I am going to increase the VertiPaqMemoryLimit by a small amount. Here are the limits:

which amount to the following numbers:

VertiPaq memory usage is 289,272 KB same as before.

During processing, the VertiPaq memory usage increases. The following screenshot from Perfmon was taken halfway through the process.

At this point, VertiPaq memory usage is up to 603,656 KB. This time, the price of memory has increased because Memory Usage KB now includes up to 332,643 KB of VertiPaq data, and that is enough to push it past the Low limit and soon the High limit.

Processing finished in 85 seconds.

In this graph, you can see that the price of memory increases during the processing operation. You still see two separate lines for Memory Usage KB and Process(msmdsrv)\Private Bytes. If the Total limit were actually based on 90% of the total physical memory on the server, you would see heavy paging and poor performance.

 You can see the full details here:

Experiment 4: Processing a model with VertiPaqMemoryLimit = 8

This time I increased VertiPaqMemoryLimit to 8.


which amounts to the following values in Perfmon:

VertiPaqMemoryLimit is about 1GB more than previous experiment. The following screenshot was taken during processing of the model.


Processing finished in 61 seconds.

You can see full detail here:

Experiment 5: Processing a model with default settings

This time I am going to set the memory settings back to their default values.

which amount to the following values in Perfmon:

Notice how VertiPaq memory is split between VertiPaq Paged KB and VertiPaq Nonpaged KB. The memory in VertiPaq Paged KB is not necessarily paged to disk, but it can be; the memory in VertiPaq Nonpaged KB is locked in physical memory.

Processing the model with these settings does not put any pressure on the server as expected. The following is a screenshot from Perfmon during processing of the test model.

Notice how both VertiPaq Paged KB and VertiPaq Nonpaged KB increase during processing. This is in contrast with the next experiment, where all of the increase shows up in VertiPaq Nonpaged KB.

You can see full details here:

Experiment 6: Processing a model with VertiPaqPagingPolicy = 0

Finally, I want to test what happens with VertiPaqPagingPolicy set to zero. In particular, I am interested in whether VertiPaq memory can grow past VertiPaqMemoryLimit during processing. Based on the current definition of these settings it should be able to; however, I had initially assumed from the name that when VertiPaqPagingPolicy is set to 0, VertiPaqMemoryLimit becomes the limit of how much VertiPaq memory can grow.

With all the other settings at their default values and VertiPaqPagingPolicy set to 0, this is how the counters look in Perfmon.


It is interesting that all of VertiPaq memory shows under VertiPaq Nonpaged KB this time. 

Here are the results when I process the model:

This time, Memory Usage KB (and Process(msmdsrv)\Private KBytes) increased as expected during the processing operation, and total VertiPaq memory grew past VertiPaqMemoryLimit. This confirms that VertiPaqMemoryLimit is not a limit on how much VertiPaq memory can grow (at least during processing)!

Note: Even though VertiPaqPagingPolicy is not exposed through Perfmon, you can infer its value from the VertiPaq Paged KB and VertiPaq Nonpaged KB values. A VertiPaq Paged KB of zero means all VertiPaq data is locked in memory and cannot be paged, which means VertiPaqPagingPolicy is set to zero.

You can see full details here:

Conclusion

Since Cleaner Current Price has a value between 0 and 1000 regardless of the other memory settings, it is my favorite memory counter to look at on any server. In most cases, if the value stays under 10, memory usage is in a good state. Any value over 10 for a sustained period deserves more investigation using all the other counters I mentioned in this post, and a value well above 10 and approaching 1000 definitely indicates memory pressure.

Having said this, keep in mind that, as I showed in this post, Cleaner Current Price is based on what the cleaner thread sees, which is not always the total amount of memory SSAS is using. The cleaner does not see any VertiPaq memory in excess of VertiPaqMemoryLimit. If Cleaner Current Price stays low and you are still having performance issues, you should get the full picture of memory usage by monitoring at least the following counters:

Category                Counter Name
MSAS14 *                Cleaner Current Price
MSAS14                  Memory Limit Hard KB
MSAS14                  Memory Limit High KB
MSAS14                  Memory Limit Low KB
MSAS14                  Memory Limit VertiPaq KB
MSAS14                  Memory Usage KB
MSAS14                  VertiPaq Nonpaged KB
MSAS14                  VertiPaq Paged KB
Process (msmdsrv) **    Private Bytes ***
Memory                  Available KBytes
Memory                  Page Faults/sec

* The counters for the default instance of SSAS show up under the MSASxx category, where xx is a number corresponding to the version of SSAS you are using. I used SSAS 2017, so the counters were under MSAS14. For SSAS 2019, the counters will be under MSAS15; for SSAS 2016, under MSAS13.

** The Process category can be filtered to a particular process. To filter Process information to SSAS, choose msmdsrv from its subcategory. 

*** In the Excel files I shared, I have added a column that converts Process(msmdsrv)\Private Bytes to Kilo Bytes so that I can graph it along other counters.

Finally

If you want to learn how to use Perfmon to add the SSAS counters I used in this post, watch this recording of my session. Half way through the video, I show where you can find the counters and how to easily save them to Excel/SQL Server.

How to clear SSAS cache using C# for query performance tuning

First, let me give you a little background on why you would want to clear the SSAS cache from C# code when you can do this using an XMLA command from SSMS.

If you have a slow MDX/DAX SSAS query, you have a couple of options for improving its performance (assuming no hardware changes):

  1. Rewrite the query differently if you have control over the query. (You will have two queries that you want to compare against the same database.)
  2. Make changes to the SSAS database to follow a better design. (You will have one query to run against two databases)

Regardless of which route you go, you should compare the performance before and after the changes to see how much you gained from the change.

Measuring performance improvements for one or a couple of queries

If you only have one or a handful of queries, you can compare their run times in SSMS. In order to get a fair comparison, you have to measure the query run time both on a warm cache and on a cleared cache. You can clear the database cache by running an XMLA command between runs (see the XMLA code for clearing the SSAS cache below).

<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyTestDatabase</DatabaseID>
  </Object>
</ClearCache>

The above code clears the SSAS database cache for MyTestDatabase.

Measuring performance improvements for a large set of queries

Now let's suppose you have captured a large number of long-running queries from a production server and you are working on changes to your SSAS database design (by following best practices, …) to improve overall query performance. If you can deploy the two databases side by side to a Dev or QA environment, you can run the set of queries against each, record their run times, and compare the results to see how your changes affect overall performance. This gives you a good indication of how your changes will impact a live query load in production. If you can improve overall query performance in a Dev/QA environment, you will most likely get at least a good percentage of that improvement in your production environment. The reason I say a good percentage rather than all of it is that Dev/QA environments usually have slower hardware, and you will most probably be the only user (or one of a few users) running queries, so the load conditions are different.

In order to get a good measurement, you need to run the set of queries two different ways:

  1. Run the batch without clearing the cache before each query. This is similar to how a live load would run in production (although sequentially, vs. many connections at the same time in production). Some of the queries will benefit from previous queries, since they may ask for calculations/data that are already in the cache.
  2. Run the batch and clear the cache before each query.

All of this can be done using C# or a Script Task in SSIS. You store the queries in a SQL table and then read them back with a program written in C#, from either Visual Studio (a console app) or a Script Task in SSIS. Then you run them one at a time, store their run times in another table, and analyze the results using either SQL or Excel. Optionally, you clear the cache between queries. Clearing the cache between queries allows you to check individual queries and pinpoint the ones that improve the most or do not benefit from your changes.
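As a rough illustration of that batch-runner idea, here is a minimal C# sketch. The connection strings, table names (dbo.TestQueries, dbo.QueryRunTimes), and column names are all assumptions for this example, and the optional cache clearing would use the XmlaClient approach shown further down in this post:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Diagnostics;
using Microsoft.AnalysisServices.AdomdClient;

class QueryBatchRunner
{
    static void Main()
    {
        var queries = new List<(int Id, string Text)>();

        // Read the captured queries from a SQL Server table (hypothetical schema).
        using (var sql = new SqlConnection("Server=localhost;Database=QueryTests;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT QueryId, QueryText FROM dbo.TestQueries", sql))
        {
            sql.Open();
            using (var rdr = cmd.ExecuteReader())
                while (rdr.Read())
                    queries.Add((rdr.GetInt32(0), rdr.GetString(1)));
        }

        using (var ssas = new AdomdConnection("Data Source=localhost;Catalog=MyTestDatabase"))
        using (var sql = new SqlConnection("Server=localhost;Database=QueryTests;Integrated Security=true"))
        {
            ssas.Open();
            sql.Open();

            foreach (var q in queries)
            {
                // Optionally clear the SSAS cache here between queries,
                // for example with the XmlaClient approach shown below.

                var sw = Stopwatch.StartNew();
                using (var cmd = new AdomdCommand(q.Text, ssas))
                using (var rdr = cmd.ExecuteReader())
                    while (rdr.Read()) { /* drain the results so the full query executes */ }
                sw.Stop();

                // Store the run time so it can be analyzed later in SQL or Excel.
                using (var log = new SqlCommand(
                    "INSERT INTO dbo.QueryRunTimes (QueryId, DurationMs) VALUES (@id, @ms)", sql))
                {
                    log.Parameters.AddWithValue("@id", q.Id);
                    log.Parameters.AddWithValue("@ms", sw.ElapsedMilliseconds);
                    log.ExecuteNonQuery();
                }
            }
        }
    }
}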

When I started searching the methods in the Analysis Services DLLs (note: the DLLs have changed since SSAS 2016) for a method I could call to clear the cache, unfortunately I did not find anything. After much searching, I came across this old post by Darren Gosbell where he used the Microsoft.AnalysisServices.Xmla DLL. The code is unbelievably easy, to the point where I had to verify it was actually submitting the command by running an Extended Events trace (Profiler still works pretty well for SSAS too) and observing the difference.

Sample Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AnalysisServices.Xmla;
/* If you see a red squiggly line under Microsoft.AnalysisServices.Xmla above,
 * it means that you need to check your reference to the XmlaClient class.
 * From Solution Explorer, go to References -> Add Reference -> Browse -> navigate to
 * Microsoft.AnalysisServices.Xmla on your machine and add it.
 */

using System.IO; //This is added here so that you can reference the File class without specifying the full name.

namespace Clear_SSAS_Cache
{
    class Program
    {
        static void Main(string[] args)
        {
            //Notify the user and make sure they actually want to run this code!
            char PressedKey = 'N';
            Console.Title = "Clearing SSAS database cache!!!!!";
            Console.WriteLine("Clear SSAS database cache? Press Y for Yes or any other key for No followed by Enter.");
            ConsoleKeyInfo PressedKeyInfo = Console.ReadKey();
            Console.WriteLine();
            PressedKey = Char.ToUpper(PressedKeyInfo.KeyChar);

            if (PressedKey == 'Y')
            {
                Console.WriteLine("Clearing the cache...");
                XmlaClient clnt = new XmlaClient();
                //Update the following to your SSAS server instance.
                clnt.Connect(@"localhost");
                //Update the following path to point to your XMLA file.
                string xmla = File.ReadAllText(@"C:\Clear_Cache.xmla");
                clnt.Send(xmla, null);
                Console.WriteLine("Cleared the cache.");
                clnt.Disconnect();
            }
            else Console.WriteLine("No changes made.");

            Console.WriteLine("Press any key to continue.");
            Console.ReadKey();
        }
    }
}

Obviously, do not run this against a production server, since clearing a production SSAS database's cache impacts performance negatively. Also please note that the above code does not have any exception handling, so if the XMLA passed in has any errors, it will error out.
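If you want a minimal guard around that, a sketch like the following (reusing the XmlaClient class from the listing above; the method and parameter names are just illustrative) wraps the call in a try/catch/finally:

using System;
using System.IO;
using Microsoft.AnalysisServices.Xmla;

static class ClearCacheSafely
{
    public static void Run(string serverName, string xmlaFilePath)
    {
        XmlaClient clnt = new XmlaClient();
        clnt.Connect(serverName);
        try
        {
            // A missing file throws here; problems sending the command may also
            // surface as an exception or come back in the server's XMLA response.
            string xmla = File.ReadAllText(xmlaFilePath);
            clnt.Send(xmla, null);
            Console.WriteLine("Cleared the cache.");
        }
        catch (Exception ex)
        {
            Console.WriteLine("Clearing the cache failed: " + ex.Message);
        }
        finally
        {
            clnt.Disconnect();
        }
    }
}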

I used the above method to clear the cache between queries in the program that runs my query batches. Once I have the results, I can pinpoint the queries that improved the most and the ones that did not. It is important that this part be done on a cleared cache. Once I have this data, I can go back to the design and focus on the parts that did not improve.

Adding a reference to Microsoft.AnalysisServices.Xmla

If you get a red squiggly line under "Microsoft.AnalysisServices.Xmla" in your code, you are missing a reference in your project.

Search for Microsoft.AnalysisServices.Xmla.dll on your machine. Usually it is installed with SSAS at “C:\Windows\Microsoft.NET\assembly\GAC_MSIL” along with all the other SSAS DLLs.

SSAS_DLLs

If you don't have it on your machine, you can copy it from any other machine that has SSAS installed into a folder on your machine.

Once you locate the file, you can add a reference to it in your project by right-clicking on References in Solution Explorer, then clicking Add followed by Browse. Find the file, click Add, and then OK. The red lines should go away now.

Extended Events Trace for capturing Command Begin/End

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ObjectDefinition>
    <Trace>
      <ID>Catch Clear Cache Commands</ID>
      <Name>Catch Clear Cache Commands</Name>
      <XEvent xmlns="http://schemas.microsoft.com/analysisservices/2011/engine/300/300">
        <event_session name="Catch Clear Cache Commands" dispatchLatency="0" maxEventSize="0" maxMemory="4" memoryPartition="none" eventRetentionMode="AllowSingleEventLoss" trackCausality="true" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <event package="AS" name="CommandBegin" />
          <event package="AS" name="CommandEndWithError" />
          <event package="AS" name="CommandEnd" />
          <target package="package0" name="event_stream" />
        </event_session>
      </XEvent>
    </Trace>
  </ObjectDefinition>
</Create>

Cannot update VertiPaq Analyzer’s connection to point to my SSAS Tabular database

While trying to set up VertiPaq Analyzer on a new computer, I ran into a problem where Excel was not letting me change the SSAS connection built into the workbook. It turned out I had missed one of the steps in the instructions in the workbook. As a result, when I got to Connection Properties, everything was grayed out and this message was at the bottom:

Some properties cannot be changed because this connection was modified using PowerPivot Add-in.

The instructions in the workbook said:

Open the “Data” ribbon

Click “Manage Data Model”

Click “Existing Connections”

….

(I was working with VertiPaq Analyzer v1.9, downloaded from https://www.sqlbi.com/articles/data-model-size-with-vertipaq-analyzer/)

The wrong way

Working in Excel with the latest version of Microsoft Office 365 ProPlus, the "Manage Data Model" option was not available under the Data ribbon, so instead I went to Existing Connections in that ribbon. That path led to the grayed-out Connection Properties dialog with the message quoted above at the bottom.

The right way

Interestingly, I already had the Power Pivot ribbon visible, so I knew it was already installed. I clicked on the Power Pivot ribbon and then on Manage. From there I went to "Get External Data" and then "Existing Connections".


This time clicking on Existing Connections allowed me to modify the connection:


After I built the connection I clicked on Refresh and voila! I was connected to my sample AdventureWorks database.

Some notes about VertiPaq Analyzer

What I love about this tool is that it is very easy to set up (minus the minor issue I ran into). I love how quickly I can look at a database and see how much space each table/column, and the database itself, is taking, and since all of this is in Excel, you can easily sort, filter, and find what you are looking for. My other favorite tab is the Measures tab, where it lists all the measures.

It is true that you can get all of this information from the SSAS DMVs (dynamic management views), but this workbook has everything in it without you having to write queries in the DMV/DMX syntax, which is not quite like SQL and is very limited in what you can put in a WHERE clause or how you can write joins.
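For example, if you did want to query the raw numbers yourself, a DMV query along these lines (run from an MDX/DMX query window in SSMS, connected to the Tabular database) returns the storage used by each column segment. This is just a sketch of the kind of information the workbook collects and summarizes for you:

--Storage used per column segment (one row per segment).
SELECT DIMENSION_NAME, COLUMN_ID, USED_SIZE
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS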